Real-time DataLogger reference
Available from firmware 2019.6
Sources and data types
The DataLogger can record data from IN and OUT ports and from variables. The following data sources are available:
- Global Data Space IN and OUT ports
- Ports of real-time programs (C++, IEC 61131-3 and MATLAB®/Simulink®) – task-synchronous mode
- Ports of components
- Global IEC 61131-3 variables
The following data types are supported:
- Elementary data types (according to Supported elementary data types); all STRING variables are supported regardless of their length and therefore count as elementary data types.
Schematic overview
Find all details on the attributes in the tables and sections below.
Configuration parameters explained visually
Sampling intervals, publishing intervals, and writing intervals? Why are there so many parameters for transporting the data?
Here's why this is the best way to log data in a real-time automation environment, explained in a short video (05:26, 720p, English) by Martin Boers, Technical Specialist in Product Management for the PLCnext Runtime System at Phoenix Contact Electronics.
XML configuration file
An XML configuration file for the DataLogger is structured as shown in the following example:
<?xml version="1.0" encoding="utf-8"?>
<DataLoggerConfigDocument
    xmlns="http://www.phoenixcontact.com/schema/dataloggerconfig"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.phoenixcontact.com/schema/dataloggerconfig.xsd">
    <General name="data-logger" samplingInterval="100ms" publishInterval="500ms" bufferCapacity="10"/>
    <Datasink type="db" dst="test.db" rollover="true" maxFiles="3" writeInterval="1000"
              maxFileSize="4000000" storeChangesOnly="false" tsfmt="Iso8601"/>
    <Variables>
        <Variable name="Arp.Plc.ComponentName/GlobalUint32Var"/>
        <Variable name="Arp.Plc.ComponentName/PrgName.Uint32Var"/>
        <Variable name="Arp.Plc.ComponentName/PrgName.StructVar.IntElement"/>
        <Variable name="Arp.Plc.ComponentName/PrgName.IntArrayVarElement[1]"/>
    </Variables>
</DataLoggerConfigDocument>
As you can see in the structure, only a few XML elements have to be configured. Within the <DataLoggerConfigDocument> schema there are always:
- <General>
- <Datasink>
- <Variables>, containing one or more <Variable> elements
Attributes for file-based configuration
Of course, a number of attributes add some complexity. The following three tables group the attributes by the XML tags they belong to. Where there is more to explain about an attribute, you will find additional information below the tables.
<General>
Attribute | Description |
name | Unique name of the logging session. Note: Must not begin with "PCWE_", which is reserved for the triggered Logic Analyzer of PLCnext Engineer. |
samplingInterval | Interval at which the data points of a variable are created, e.g. samplingInterval="100ms". |
taskContext | Available from firmware 2021.6 or newer. The name of an ESM task which samples the values of all variables of this session, e.g. taskContext="myTaskName". The attribute is optional. If it is configured, the samplingInterval attribute is ignored. |
publishInterval | Frequency for forwarding the collected data from the ring buffer to the data sink, e.g. publishInterval="1s". The default value is 500ms. The following suffixes can be used: ms, s, m, h. Note: This attribute is ignored if a taskContext is also configured. |
bufferCapacity | Capacity of the internal buffer memory in data sets. The default value is 2. |
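For orientation, a task-synchronous session could look roughly like the following sketch. The session name "my-task-logger" and the ESM task name "CyclicTask10ms" are placeholders; because taskContext is set (firmware 2021.6 or newer), the samplingInterval given here is ignored, and a publishInterval would be ignored as well.

    <!-- Sketch: task-synchronous session; sampling follows the ESM task "CyclicTask10ms" -->
    <General name="my-task-logger" taskContext="CyclicTask10ms" samplingInterval="100ms" bufferCapacity="10"/>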
<Datasink>
In each cycle, the values of all ports of a task are stored in a ring buffer. Therefore, the capacity of the ring buffer determines the maximum number of task cycles that can be recorded before data must be forwarded to the data sink. The data to be archived is written to an SQLite database. For each configured DataLogger instance, a separate SQLite database is created.
Attribute | Description |
type | Type of the data sink. In the example above, type="db" configures an SQLite database file as the data sink. |
dst | With file-based configuration, this is the file name and path under which the data sink is to be stored. If no specific path is given, the file is placed in the working directory of the firmware, which is /opt/plcnext (see Database layout below). If the configuration is made with the PLCnext Engineer interface (available from software 2020.6), this attribute is set automatically. |
rollover | Activates (rollover="true") or deactivates the rollover to a new log file (see maxFileSize and maxFiles). |
writeInterval | Number of data records the DataLogger collects and then writes to the SD card. The default value is 1000. In other words, as soon as 1000 data records have been transferred to the data sink, they are grouped in a block and written to the SD card. When the data sink or the firmware is closed, all the values that have not yet been transferred are written to the SD card. Note: Choose the value of this attribute with care. Otherwise, it is possible that the data cannot be written to the SD card at the required speed. This may result in the loss of data. If there is a loss of data, it is displayed in the database in the ConsistentDataSeries column. |
maxFileSize | Maximum memory size of the log file in bytes. |
maxFiles | Maximum number of rolling files. The value 0 and negative values are handled specially. |
storeChangesOnly | Sets the recording mode (see details below at Recording mode). |
… | (available from firmware 2020.0 LTS) Percentage of the maximum memory size to be deleted for the logging of new data. The default is 30 %. Note: For large data sinks (larger than 10 MB) … |
tsfmt | (available from firmware 2020.0 LTS) Configuration of the timestamp format (see details below at Time stamp). Note: Logging with … |
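Putting the sink attributes together, the data sink from the example at the top of this page can be read roughly as follows. The comment reflects an interpretation of the rollover behavior derived from the attribute names; it is a sketch, not a normative description.

    <!-- Rolling SQLite sink: a new file is started at roughly 4 MB, at most 3 files are kept,
         data is written in blocks of 1000 records, timestamps are stored in ISO 8601 format -->
    <Datasink type="db" dst="test.db" rollover="true" maxFiles="3" writeInterval="1000"
              maxFileSize="4000000" storeChangesOnly="false" tsfmt="Iso8601"/>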
<Variables>
Attribute | Description |
name | Complete name (URI) of a variable or port whose values are to be recorded, e.g. "Arp.Plc.ComponentName/PrgName.Uint32Var" (see the example above). A maximum of 996 variables per session is allowed. |
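As a sketch of the URI scheme, a <Variables> block could mix a program port and a global IEC 61131-3 variable as follows. The names are placeholders; Arp.Plc.Eclr is commonly used as the component prefix for global IEC 61131-3 variables, but verify the actual URIs in your project (for example with PLCnext Engineer).

    <Variables>
        <!-- Port of an IEC 61131-3 or C++ program instance -->
        <Variable name="Arp.Plc.ComponentName/PrgName.Uint32Var"/>
        <!-- Global IEC 61131-3 variable (placeholder name) -->
        <Variable name="Arp.Plc.Eclr/MyGlobalCounter"/>
    </Variables>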
<TriggerCondition>
Available from firmware version 2020.3
From firmware version 2020.3 on, you can define trigger conditions for starting a DataLogger session via the XML configuration file. To do this, use the element <TriggerCondition> and the attributes described below. This element starts a list of <RpnItem> elements, where one item consists of a type attribute and a text that, depending on the type, names either a variable, a constant, or an operation. The optional attributes postCycles and preCycles can be used to specify the number of data sets recorded after and before the trigger condition is fulfilled.
Note: RPN (Reverse Polish Notation) is used for the configuration of the trigger.
<TriggerCondition>
Attribute | Description |
postCycles | Number of data sets that are recorded after the trigger condition is fulfilled. |
preCycles | Number of data sets that are recorded before the trigger condition is fulfilled. |
taskContext | Name of the task in which the trigger condition is evaluated. |
<RpnItem type>
List of trigger items. A trigger item can be a variable, a constant, or an operation.
Type | Description |
Variable | The attribute type Variable is used to define a variable as a trigger condition item. The item must contain the complete name (URI) of the variable or port. |
Constant | The attribute type Constant is used to define a constant as a trigger condition item. The item must contain the value of the constant, e.g. 5. |
Operation | The attribute type Operation is used to define an operation as a trigger condition item. The item must contain the name of the operation; valid values include, for example, Greater and And (see the example below). |
Example:
The trigger condition (Variable a > Variable b) & (Variable c > Variable d) can be configured using the following list (a, b, c, and d are used in this example instead of the complete names (URI) of the variables or ports for better readability):
<TriggerCondition postCycles="200" preCycles="100" taskContext="Cyclic100">
    <RpnItem type="Variable">a</RpnItem>
    <RpnItem type="Variable">b</RpnItem>
    <RpnItem type="Operation">Greater</RpnItem>
    <RpnItem type="Variable">c</RpnItem>
    <RpnItem type="Variable">d</RpnItem>
    <RpnItem type="Operation">Greater</RpnItem>
    <RpnItem type="Operation">And</RpnItem>
</TriggerCondition>
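As a further sketch, a trigger that fires when a single variable exceeds a constant threshold pushes the variable, then the constant, then the comparison onto the RPN list. The variable URI, the threshold value 100, the cycle counts, and the task name are placeholders:

    <TriggerCondition postCycles="50" preCycles="50" taskContext="Cyclic100">
        <!-- Fires when the variable value is greater than 100 -->
        <RpnItem type="Variable">Arp.Plc.ComponentName/PrgName.Uint32Var</RpnItem>
        <RpnItem type="Constant">100</RpnItem>
        <RpnItem type="Operation">Greater</RpnItem>
    </TriggerCondition>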
Database layout
The values of the configured variables are saved in a table inside the SQLite database. The default path for the database files on your controller is /opt/plcnext. The database files are saved as *.db files. The file system of the controller is accessed via the SFTP protocol. Use a suitable SFTP client software for this, e.g., WinSCP.
Copy the *.db files to your PC and use a suitable software tool to open and evaluate the *.db files (e.g. DB Browser for SQLite).
Depending on your configuration, a database table that is created by the DataLogger can consist of the following columns:
- Timestamp: Timestamp for the logged variable value (see details below at Time stamp).
- ConsistentDataSeries: This index shows whether there is an inconsistency in the logged data (see details below at Data consistency check).
- Task/Variable: One column for each variable that is configured for data logging. The column name consists of the task name and the variable name (see details below at Variables).
- Task/Variable_change_count: In the case of storeChangesOnly="true" (see details below at Recording mode), this column serves as a change counter. There is a change counter for every configured variable.
Time stamp
The DataLogger provides a time stamp for each value of a port. Only one time stamp is generated for ports from the same task because this time stamp is identical for all the values of the task. The time stamps have a resolution of 100 ns.
- Firmware 2019.6 to 2019.9: The time stamp is always displayed as a raw 64-bit integer value.
- From firmware 2020.0 LTS: The format of the time stamp inside the database can be configured. It can be displayed in ISO 8601 format or as a raw 64-bit integer value.
Regardless of the format, all time stamps are reported in the UTC time zone. The implementation and internal representation comply with the Microsoft® .NET DateTime class; see the documentation of the DateTime.ToBinary method on docs.microsoft.com.
The time stamp is created in the task cycle from the system time of the controller. It is set at the start of the task (task executing event) and follows the cycle time of the task exactly, so that the values of consecutive task cycles are always one cycle interval apart.
Data consistency check
If recording gaps occur, caused by performance problems or memory overflow, this information is saved in the data sink. A loss of data is displayed in the database in the ConsistentDataSeries column.
This column can contain the values 0 or 1:
- Value 0: A data gap occurred during the recording of the preceding data series. The first data series always has the value 0 because there is no preceding data series for referencing.
- Value 1: The data was recorded without a gap in relation to the preceding data series. A data series tagged with 1 is therefore consistent with the preceding data series.
RowId | ConsistentDataSeries | VarA |
1 | 0 | 6 |
2 | 1 | 7 |
3 | 1 | 8 |
4 | 1 | 9 |
5 | 1 | 10 |
6 | 1 | 11 |
7 | 1 | 12 |
8 | 1 | 13 |
9 | 0 | 16 |
10 | 1 | 17 |
11 | 1 | 18 |
In this recording, the first 8 data rows are consistent and without gaps caused by data loss (ConsistentDataSeries=1). Between rows 8 and 9 a data gap is indicated (ConsistentDataSeries=0). Rows 9 to 11 are consistent again.
Note: Check the ConsistentDataSeries flag to ensure that the data is consistent. If ConsistentDataSeries=0 is stated in rows other than row 1, an inconsistency has occurred during recording.
Recording mode
The recording mode is set by the attribute storeChangesOnly. There are two recording modes available:
- Endless mode: The DataLogger records the data continuously. All the ports and variables configured for recording are recorded without interruption (storeChangesOnly="false").
- Save on change: The DataLogger only records the data when it changes. If a value stays the same, it is displayed in the database as NULL (storeChangesOnly="true").
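In the XML configuration file, the recording mode is selected on the <Datasink> element. A save-on-change sink could, for instance, look like the following sketch; the file name "changes.db" is a placeholder and the remaining attribute values are taken from the example at the top of this page:

    <!-- Save-on-change sink: unchanged values are stored as NULL, change counters are added per variable -->
    <Datasink type="db" dst="changes.db" rollover="true" maxFiles="3" writeInterval="1000"
              maxFileSize="4000000" storeChangesOnly="true" tsfmt="Iso8601"/>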
Examples for storeChangesOnly configuration
Note: In these examples the time stamp is displayed in a readable format. In a *.db file generated by the DataLogger, the time stamp is UTC of type Arp::DateTime and is displayed as a 64-bit value in the database. The implementation and internal representation comply with the .NET DateTime class; refer to the documentation of the DateTime struct at https://docs.microsoft.com to convert the time stamp into a readable format.
Attribute storeChangesOnly="false"
In this example the logged variables are from the same task. Therefore there are values for every timestamp.
Timestamp | ConsistentDataSeries | Task10ms/VarA | Task10ms/VarB |
10 ms | 1 | 0 | 0 |
20 ms | 1 | 1 | 0 |
30 ms | 1 | 2 | 2 |
40 ms | 1 | 3 | 2 |
50 ms | 1 | 4 | 4 |
60 ms | 1 | 5 | 4 |
Attribute storeChangesOnly="true"
In this example the logged variables are from the same task, so there is a row for every timestamp. When the value of a variable has not changed in relation to the preceding timestamp, it is displayed as NULL.
Timestamp | ConsistentDataSeries | Task10ms/VarA | Task10ms/VarA_change_count | Task10ms/VarB | Task10ms/VarB_change_count |
10 ms | 1 | 0 | 0 | 0 | 0 |
20 ms | 1 | 1 | 1 | NULL | 0 |
30 ms | 1 | 2 | 2 | 2 | 1 |
40 ms | 1 | 3 | 3 | NULL | 1 |
50 ms | 1 | 4 | 4 | 4 | 2 |
60 ms | 1 | 5 | 5 | NULL | 2 |
Attribute storeChangesOnly="false" and variables from different tasks
In this example the logged variables are from different tasks (Task10ms and Task20ms). Different tasks usually have different timestamps, which affects the layout of the table. When the variable values of one task are added to the database table, the variable values of the other task are displayed as NULL.
Timestamp | ConsistentDataSeries | Task10ms/VarA | Task20ms/VarB |
10 ms | 1 | 0 | NULL |
20 ms | 1 | 1 | NULL |
21 ms | 1 | NULL | 1 |
30 ms | 1 | 2 | NULL |
40 ms | 1 | 3 | NULL |
41 ms | 1 | NULL | 2 |
50 ms | 1 | 4 | NULL |
60 ms | 1 | 5 | NULL |
61 ms | 1 | NULL | 3 |