Data Logging and Alarm Logging Problems



In selecting a data logging and alarm logging solution it is best to consider what happens to the data when everything is not performing perfectly. Over a system's lifetime, at least one of the following failures, if not all five, will occur multiple times:

1. Database Engine Failure
2. Network Failure
3. Missing High Speed Data
4. Inaccurate Manual Setup
5. Defective Controller Handshaking


When the connection to the database engine fails, whether during a database backup, maintenance, or a network outage to a remote engine, data will be lost if the data logging and alarm logging solution does not provide store and forward functionality.

For systems that do have store and forward capabilities, it is important to select one that can maintain the data for long periods of time. This requires that data waiting to be inserted or updated in the database be stored to disk rather than just buffered in RAM.

Steps to replicate condition:

  1. Stop database engine during data logging and alarm logging.
  2. Verify data buffered to disk.
  3. Start database engine.
  4. Verify all data has been archived before, during, and after database engine shutdown.

Note: An advanced test that accounts for a system restart during data buffering is to shut down and restart the logging server between steps 2 and 3. Most store and forward solutions will lose data if the server PC is shut down.
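The disk-backed buffering described above can be illustrated with a minimal sketch. The class and method names below are hypothetical, and a real solution would also handle file corruption and partial writes; the point is only that each record is flushed to disk before the call returns, so it survives a restart of the logging server.

```python
import json
import os

class DiskBackedQueue:
    """Minimal store-and-forward buffer: records survive a process
    restart because they are appended to a file on disk, not held
    only in RAM."""

    def __init__(self, path):
        self.path = path

    def enqueue(self, record):
        # Append the record and fsync so it is on disk before we return.
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")
            f.flush()
            os.fsync(f.fileno())

    def forward(self, insert_fn):
        # Replay buffered records into the database once it is reachable.
        # The buffer file is removed only after every insert succeeds,
        # giving at-least-once delivery (duplicates possible on a crash
        # mid-replay, but no loss).
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            records = [json.loads(line) for line in f if line.strip()]
        for record in records:
            insert_fn(record)  # raises if the database is still down
        os.remove(self.path)
        return len(records)
```

Because the queue is reconstructed from the file path alone, creating a new instance after a simulated restart (the advanced test above) still forwards everything that was buffered before the shutdown.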


When the network connection between the data source server and data logging server is down, data will be lost if the solution does not implement a distributed network design with store and forward at the data source.

Solutions that rely on the cloud for data and alarm archiving will lose data when the connection to the data source is broken and data cannot be sent to the cloud. Also, many tunneling solutions provide network transport but do not maintain the data during network outages.

Steps to replicate condition:

  1. Disconnect network between data server and cloud system or remote data logging server.
  2. Verify data buffered to disk at the source.
  3. Reconnect network.
  4. Verify all data has been archived before, during, and after network outage.
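A send path with store and forward at the data source can be sketched as follows. The function names and buffer location are assumptions, not a specific product's API; the sketch shows that records are buffered to disk at the source during the outage and drained in order, oldest first, before new records are sent once the network returns.

```python
import json
import os

def publish(record, send_fn, buffer_path):
    """Try to send a record to the remote logger; on a network error,
    buffer it to disk at the source so nothing is lost during the
    outage."""
    try:
        _drain(send_fn, buffer_path)  # preserve archive order: oldest first
        send_fn(record)
    except ConnectionError:
        with open(buffer_path, "a") as f:
            f.write(json.dumps(record) + "\n")

def _drain(send_fn, buffer_path):
    # Replay the disk buffer; delivery is at-least-once, since a failure
    # mid-replay leaves already-sent records in the file for retry.
    if not os.path.exists(buffer_path):
        return
    with open(buffer_path) as f:
        pending = [json.loads(line) for line in f if line.strip()]
    for rec in pending:
        send_fn(rec)  # raises ConnectionError if still offline
    os.remove(buffer_path)
```

Running the replication steps against this sketch, records published while `send_fn` raises `ConnectionError` land in the buffer file; the first successful publish after reconnection drains them before sending the new record.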


It is important for a logging solution to provide data processing right at the source and to process every value received from the data source. Data and alarms can be missed if values transition quickly and the solution processes only the most recently sampled value.

Consider a sensor that registers a sharp spike for a very brief time interval. If sampling occurs only before and after the spike, no alarm would be recorded and the high data value would be missed in the data archive.

Some systems can sample at a much faster rate, but then have problems moving the large amount of data to the database engine efficiently, and data begins to back up in the system. It is important to select a system that can handle high bursts of data during critical events.

Data that is processed directly within the controller can also be queued for handover to a data logging system. It is important that the data logging solution be able to handshake with the controller, signaling when each record has been received and processed and the controller can present the next record.

Steps to replicate condition:

  1. Use a data source, such as a .NET application or a high speed communications interface, that can provide time stamps and data at a high rate, microsecond samples if possible.
  2. Cycle the data below and above the alarm limits a number of times.
  3. Verify that all data samples are recorded to the database, and all alarm events have been captured and recorded.
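The difference between evaluating every received sample and polling only the current value can be shown with a short sketch. The function below is a simplified high-limit alarm check, not any particular product's alarm engine; it walks every (timestamp, value) sample so a one-sample spike still produces an alarm event.

```python
def detect_alarms(samples, high_limit):
    """Evaluate the alarm condition on every sample received, not just
    the latest polled value, so brief spikes are never missed.
    samples: iterable of (timestamp, value) pairs.
    Returns the list of alarm events as (timestamp, value) pairs,
    one per rising edge above the high limit."""
    events = []
    in_alarm = False
    for ts, value in samples:
        if value > high_limit and not in_alarm:
            events.append((ts, value))  # rising edge: new alarm event
            in_alarm = True
        elif value <= high_limit:
            in_alarm = False            # rearm once back below the limit
    return events
```

With samples `[(0.000, 1.0), (0.001, 9.7), (0.002, 1.1)]` and a high limit of 5.0, the spike at t=0.001 is captured; a solution that polled only at t=0.000 and t=0.002 would record nothing.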


Automated or programmatic setup is important so that human error does not affect the accuracy of what data is logged and alarmed upon.

With large amounts of data to be collected and processed, it is very easy to point to the wrong variable address, alarm limit, or database field. Steps to verify setup accuracy:

  1. Ensure there is automated or programmatic setup of the data source, data logging, and alarm logging configuration.
  2. Export the configuration to a CSV spreadsheet and verify row by row that data addresses and field names match.
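The row-by-row check in step 2 lends itself to automation. The sketch below assumes an exported CSV with `tag`, `address`, and `db_field` columns; the column names and address formats are illustrative, not any specific package's export format. It cross-checks the export against the intended tag map and reports mismatched rows.

```python
import csv
import io

def verify_config(csv_text, expected):
    """Cross-check an exported logging configuration against the
    intended tag map.  expected maps tag name -> (address, db_field).
    Returns the tags whose address or database field do not match,
    so a human only has to review the exceptions."""
    mismatches = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        want = expected.get(row["tag"])
        if want is None or (row["address"], row["db_field"]) != want:
            mismatches.append(row["tag"])
    return mismatches
```

Running this against each export makes the verification repeatable, so the check can be part of every configuration change rather than a one-time manual pass.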


Some systems will queue data in the controller for best data accuracy and to assure no data is lost on a communication failure to the controller.

This is typically done with a queue or array within the controller where the data is buffered and then passed to the data logging engine when it is available to record the data.

It is important to enable a handshaking technique that can validate data has been successfully archived. Steps to verify accuracy:

  1. Start the controller cycling through records to be logged.
  2. Stop database engine.
  3. Have controller execute several cycles of records to be logged.
  4. Start database engine.
  5. Verify that no records are missing from the cycles performed before, during, and after the database engine shutdown.
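The controller-side queue and handshake described above can be sketched as follows. This is a simplified model, not controller code: the class and method names are hypothetical, and a real implementation would live in controller logic with handshake bits exchanged over the communications driver. The key behavior is that a record leaves the queue only after the logger acknowledges it was archived.

```python
class ControllerQueue:
    """Sketch of a controller-side record queue with a handshake:
    a record is removed from the queue only after the logging engine
    acknowledges it has been archived, so a database outage never
    loses records."""

    def __init__(self):
        self.queue = []

    def produce(self, record):
        # The controller buffers a record every processing cycle,
        # whether or not the logging engine is reachable.
        self.queue.append(record)

    def service(self, archive_fn):
        # Present the head record; advance only on a successful archive
        # (the handshake).  Returns how many records were archived.
        archived = 0
        while self.queue:
            try:
                archive_fn(self.queue[0])  # raises while the DB is down
            except ConnectionError:
                break                      # keep the record; retry later
            self.queue.pop(0)              # ack received: next record
            archived += 1
        return archived
```

Replaying the verification steps against this sketch, records produced while the database engine is stopped simply accumulate in the queue, and all of them are archived in order once the engine is restarted.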