Version 1.3.2
NetGuardians SA <info@netguardians.ch>
Disclaimer
All material in these pages, including text, layout, presentation, logos, icons,
photos, and all other artwork is the Intellectual Property of NetGuardians SA,
unless otherwise stated, and subject to NetGuardians SA copyright. No
commercial use of any material is authorised without the express permission of
NetGuardians SA. Information contained in, or derived from these pages must not
be used for development, production, marketing or any other act, which infringes
copyright. This document is for informational purposes only. NetGuardians SA
makes no warranties, expressed or implied, in this document.
NG|Polling System Admin Guide
The different polling mechanisms are explored further in the next chapters:
• Database Polling - Collects any data stored in a database using a generic JDBC connector.
• T24 Polling (simple legacy version) - Collects T24 data stored in a Temenos T24
database. Simple version of the T24 Connector.
• New T24 Polling - Collects T24 data stored in a Temenos T24 database. Improved version
of the T24 Connector.
• Specific Post-Processor for T24 History Table - A specific post-processor to collect
T24 History table information.
2.1. Introduction
The Polling System enables NG|Screener Platform administrators to define target database
tables to be polled on a regular basis (online polling mode) or on demand (batch
execution mode).
• A syslog service and host name to index the data within the /log-collector.
The Polling System is usually started in online service mode, in which a system service
is started and polls each target regularly after its configured delay.
The Polling System can also work in batch mode, in which it takes a polling target name
as an argument, creates a process that fetches that specific target, and then exits.
Polling configuration is done by editing configuration files in the directory dedicated to the
polling method chosen.
/etc/ng-screener/polling-system/targets
+ jdbcTargets
+ t24Targets
+ newT24Targets
+ mswmiTargets
+ sapTargets
+ flexcubeTargets
• Service Name: The name of the service under which audit trails will be classified
(should match the connector).
• Host Name: The name of the host under which audit trails will be classified (visible
in NG|Browser).
• Specific information: Depends on the polling type and mode chosen (SQL query,
status field, username, etc.)
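Putting these elements together, a target configuration is a properties file placed in the directory of the chosen polling method. The sketch below is purely illustrative: the key names are taken from the parameter tables later in this guide, while all values (service, host, credentials, connection string) are placeholders to be replaced for the actual environment.

```properties
# Illustrative minimal JDBC target - all values are placeholders
SyslogServiceName=myService
SyslogHostname=myHost
Delay=300
Username=ng_reader
Password=password
DriverClass=oracle.jdbc.driver.OracleDriver
ConnectionString=jdbc:oracle:thin:@dbhost:1521:ORCL
```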
/etc/ng-screener/polling-system/targets
Polling status (the next poll date and, in fetch-with-status mode, the last polled id) is
stored in
/etc/ng-screener/polling-system/status
Once the new polling configuration is in place, restart the Polling System to apply it.
At start-up, NG|Screener reads all polling configurations and checks whether a next
polling time (service@host.nextpoll in the status directory) already exists. If it does,
that next polling time is preserved; if not, polling starts immediately.
Checking polling logs may help to verify that everything has been configured correctly.
• Polling status log: Provides the status of polled targets at the last ng-screener daemon
startup time.
/var/log/ng-screener/pollingConnectionStatus-all.log
• Polling module log: Provides all polling logs (when polling occurred, how many events
were polled, etc.)
/var/log/ng-screener/polling-system/polling-system.log
Every polling target configuration supports the notion of a working mode, corresponding
to the two ways of invoking the Polling System.
• batch: the polling target can only be called in batch mode. Batch mode is a single
execution of the polling target, scheduled from the outside. The Polling System
instantiates a JVM, executes the polling target and shuts down. Note: a batch execution
will fail immediately if it runs a target at the precise moment an online execution of
the same target is running. Polling targets configured in batch mode are not taken into
consideration by the online targets execution JVM; they are skipped.
• online: the polling target can only be called in online mode. Online is the usual
behaviour: the Polling System starts a JVM and keeps running forever, running targets
again and again after every configured ’Delay’.
• both: the polling target can be executed in both modes. In this case the online targets
execution JVM will consider the target and schedule its execution after every ’Delay’,
but the target can also be executed by an outside scheduler in batch mode. A protection
mechanism ensures that a batch execution cannot run at the same time as an online
execution.
It is important to note that if a polling target is intended to be used only on a
case-by-case basis, it has to be explicitly defined as batch. Otherwise the online
system will attempt to execute it on a regular basis as well.
A script is provided by the Polling System to invoke polling targets in batch mode. This
script is named ngPolling and is available in the system PATH.
Invocation of the script is straightforward: it simply takes the name of the target in
the form service@host (syslog service name and host name) as an argument.
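For example, to run the target defined for service myService on host myHost once in batch mode (both names are placeholders):

```
ngPolling myService@myHost
```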
A protection mechanism is put in place that forbids the execution of a specific target in
batch mode in case the same target is being executed in online mode at the same time.
This protection mechanism is pretty robust; for instance, it checks that the process owning
The NG|Screener polling engine is based on JDBC and allows audit trail collection from
any database supporting JDBC.
JDBC drivers for the following databases are included by default in NG|Analytics Server [1:
Additional drivers may be added by the administrator if necessary. Please contact
NetGuardians SA for the detailed procedure]:
• Oracle
• Microsoft SQL
• MySQL
• AS400
• PostgreSQL
• Fetch with status: Relies on a status field to determine the audit trails to fetch. The
status field is usually an ID or a timestamp (it should be incremental). NG|Screener
remembers the last status and fetches only elements inserted after it. This is the
standard mode, allowing NG|Screener to collect all new elements entered since the last
poll.
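A fetch-with-status configuration can be sketched as below, using the Mode, StatusField, QueryWithoutStatus and QueryWithStatus keys documented in this guide. The table and column names are placeholders, and the exact status placeholder syntax should be checked against the template files shipped with the product (the New T24 sample in this guide uses ? for the status value).

```properties
# Illustrative fetch-with-status target - table/column names are placeholders
Mode=fetchWithStatus
StatusField=AUDIT_ID
QueryWithoutStatus=SELECT audit_id, username, action FROM audit_log
QueryWithStatus=SELECT audit_id, username, action FROM audit_log WHERE audit_id > ?
```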
• Fetch all: Fetches all elements returned by the SQL query at each poll. This mode is
used if the database is emptied between two polls; if that is not the case, the same
events will be polled each time, generating duplicates.
• Fetch and delete: Fetches all elements and deletes them from the database. This mode is
used when audit trails need to be cleaned from the source system after NG|Screener has
collected them.
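A fetch-and-delete configuration can be sketched as follows. The Query and DeleteQuery keys are the ones referenced in the template comments later in this guide; the table, the audit_id column and the parameter syntax are placeholders for illustration only.

```properties
# Illustrative fetch-and-delete target - the row id column is a placeholder
Mode=fetchAndDelete
Query=SELECT audit_id, username, action FROM audit_log
DeleteQuery=DELETE FROM audit_log WHERE audit_id = ?
```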
An XML field parsing mechanism enables the Polling System to extract XML fields in a
specific way.
The values of internal XML elements can be extracted and put in the resulting log as if
they were native database columns.
Please refer to the sample configuration below for more information in this regard.
A duplicate detection mechanism enables the Polling System to detect when a database row
is a duplicate of a previously fetched row. This is especially useful for the fetch-all
mode, but can also be useful for polling with status when the status is not a
sufficiently unique identifier.
The cache file is stored in the same folder as the status file; the user can reset the
cache by deleting this file after having properly stopped the Polling System.
The duplicate detection system requires proper configuration; refer to the sample
configuration provided below for more information on the topic.
3.4. Post-processors
It can happen that the standard mechanism for generating events from the data fetched
from the target database is not sufficient, for instance when a database row needs some
very specific processing to be transformed into one, or sometimes several, events stored
in /log-collector files.
A Post-Processor architecture is available for this purpose.
The following Post-processors are available as of this version of the Polling-System:
3.4.1. SplittingPostProcessor
The SplittingPostProcessor searches a specific splitting field read from the database
for a specific splitting pattern. The target field is then split into multiple partial
fields according to this pattern.
Instead of generating one single event for the database row, the Splitting Post-Processor
generates an event for each and every split. The other fields read from the database, i.e.
additional fields to the split field, are duplicated for each and every split.
postProcessorClass=com.netguardians.pollingSystem.postprocessors.SplittingPostProcessor
postProcessorProperty_1=splittingField=CONTENT
postProcessorProperty_2=splittingPattern=,
The above configuration would, for instance, take a field CONTENT=a1,b2,c3 and
generate a cloned row for each of the values a1, b2 and c3.
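With that configuration, a database row carrying an additional column (here an illustrative ID) would be processed as follows; the other fields are duplicated into every generated event:

```
Input row:      ID=42  CONTENT=a1,b2,c3
Output event 1: ID=42  CONTENT=a1
Output event 2: ID=42  CONTENT=b2
Output event 3: ID=42  CONTENT=c3
```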
3.4.2. CachingSplittingPostProcessor
The CachingSplittingPostProcessor is mostly useful when the same database row containing
the various splits is updated throughout the day, with new splits added to it. Using the
CachingSplittingPostProcessor, the Polling System can use the fetch-all mode to get the
same row over and over again and only process the new splits added to the row.
postProcessorClass=com.netguardians.pollingSystem.postprocessors.CachingSplittingPostProcessor
postProcessorProperty_1=splittingField=CONTENT
postProcessorProperty_2=splittingPattern=,
postProcessorProperty_3=keyingField=ROW_ID
3.4.3. KeyValueParsingPostProcessor
The KeyValueParsingPostProcessor splits a specific parsing field according to a parsing
pattern and maps the identified values onto a configured list of field names. All the
identified values, along with their matching keys, are added as new fields to the
original rows, as if they had been database columns read from the original row.
postProcessorClass=com.netguardians.pollingSystem.postprocessors.KeyValueParsingPostProcessor
postProcessorProperty_1=parsingField=CONTENT
postProcessorProperty_2=parsingSplittingPattern=,
postProcessorProperty_3=parsingFieldsList=MSG,NAME,SURNAME,ADDRESS
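With that configuration, the values found in CONTENT are mapped positionally onto the configured field names, as in this illustrative example (all values invented for illustration):

```
Input row:  CONTENT=login failed,John,Doe,Geneva
Output row: MSG=login failed  NAME=John  SURNAME=Doe  ADDRESS=Geneva
```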
3.4.4. CompositePostProcessor
The CompositePostProcessor chains several post-processors: the classes listed in
processorClasses are applied in the order given, each using its own configuration
properties.
postProcessorClass=com.netguardians.pollingSystem.postprocessors.CompositePostProcessor
postProcessorProperty_1=processorClasses=com.netguardians.pollingSystem.postprocessors.SplittingPostProcessor,com.netguardians.pollingSystem.postprocessors.KeyValueParsingPostProcessor
postProcessorProperty_2=splittingField=CONTENT
postProcessorProperty_3=splittingPattern=\\?
postProcessorProperty_4=parsingField=CONTENT
postProcessorProperty_5=parsingSplittingPattern=/
postProcessorProperty_6=parsingFieldsList=USER,APP,ID,DATE,IP,FULLDATE
1. Consider a row whose CONTENT field holds two entries separated by the splitting
pattern ?:
CONTENT=UG0010022/KCB.ACCT.STMT.ONLINE/2202204024/20150820/172.31.1.184/20150820152828?UG0010022/KCB.ACCT.STMT.ONLINE/2202204024/20150820/172.31.1.184/20150820152833
2. It would be split by the Splitting Post-processor into the following distinct rows:
CONTENT=UG0010022/KCB.ACCT.STMT.ONLINE/2202204024/20150820/172.31.1.184/20150820152828
CONTENT=UG0010022/KCB.ACCT.STMT.ONLINE/2202204024/20150820/172.31.1.184/20150820152833
3. Then the Key Value Post-processor would replace the CONTENT field with a new set of
fields:
DATE=20150820 APP=KCB.ACCT.STMT.ONLINE ID=2202204024 USER=UG0010022 FULLDATE=20150820152828 IP=172.31.1.184
DATE=20150820 APP=KCB.ACCT.STMT.ONLINE ID=2202204024 USER=UG0010022 FULLDATE=20150820152833 IP=172.31.1.184
Template JDBC polling configuration files are available on the NG|Analytics Server for the
Oracle, MS SQL, MySQL, AS400 and PostgreSQL databases. The Oracle configuration file is
listed below as an example.
#---------------------------------------------------------------------
# WORKING MODE
#---------------------------------------------------------------------
# SYSLOG HEADER
#---------------------------------------------------------------------
# DATABASE CONNECTION
# Password to connect to DB
# If the password is encoded, prepend == to the encoded value
Password=password
#---------------------------------------------------------------------
# MODE POLLING
# In FetchWithStatus mode
# The status field is extracted from the Query, thus a statusField
# name is given and a StatusQuery is provided in order to retrieve
# the status during the first start of the module (in order to dump
# the whole DB during the first start)
Mode=fetchWithStatus
# In FetchAll mode
# No status is needed; the whole table data is fetched at each poll.
# Such a mode can be used if the table is purged between polls
# Mode=fetchAll
# In FetchAndDelete mode
# No status is needed (a row is "marked" as fetched when it is no
# longer available in the DB). Thus, a unique row id should be
# provided by the Query and then used by the DeleteQuery in order to
# delete the right row
# Mode=fetchAndDelete
#---------------------------------------------------------------------
# DATA EXTRACTION
#---------------------------------------------------------------------
# XML CONFIGURATION
# The XmlTag contains a map between the XML tags in the XMLField and
# the key word in the log file. An XML tag represents the path of an
# element of the XML document from the root node. All mapping fields
# must be put in the order as they occur in the log file.
#XmlTag_1=/row/c1:date
#XmlTag_2=/row/c3:time
#XmlTag_3=/row/c5:terminal
#XmlTag_4=/row/c7:company
#XmlTag_5=/row/c8:user
#XmlTag_6=/row/c9:application
#XmlTag_7=/row/c10:level
#XmlTag_8=/row/c11:app
#XmlTag_9=/row/c12:remark
#XmlTag_10=/row/c13:method
#---------------------------------------------------------------------
# DUPLICATES DETECTION
#postProcessorClass=com.netguardians.pollingSystem.postprocessors.historytablepostprocessor.HistoryTablePostProcessor
#postProcessorProperty_1=APPLI.trigger=Application
#postProcessorProperty_2=APPLI.xxx.currentTableQuery=SELECT id, xmlrecord FROM USER_T24 WHERE id like '{1}'
#postProcessorProperty_3=APPLI.xxx.historyTableQuery=SELECT id, xmlrecord FROM USER_T24_HISTORY WHERE id like '{1};%'
#postProcessorProperty_4=APPLI.xxx.historyMapping=id
#postProcessorProperty_5=APPLI.xxx.currentMapping=id
#postProcessorProperty_6=APPLI.xxx.strategyType=DIFF
Temenos T24 audit trails may be polled by NG|Screener using database polling. The polling
is based on JDBC, but Temenos T24 uses a special format in its database; NetGuardians has
adapted its JDBC polling to provide a clean and easy way to poll these audit trails.
Parameter Description
SyslogServiceName The service name for audit trails classification
SyslogHostname The host name for audit trails classification
Delay Interval between two polls
Username DB user with Read Only rights.
Password DB password
DriverClass JDBC Driver (see example)
ConnectionString JDBC connection string (see example)
QueryWithoutStatus SQL query run the first time, when no status has been
determined yet. It extracts all the data in the database;
after this first run, a status is defined.
QueryWithStatus SQL query run at every ’Delay’ interval; extracts data
inserted since the previous poll.
StatusField Field to use for the status (ID, timestamp)
StatusFormat The format of the status field
DateField_X Field(s) used to define audit trails timestamp
DateFieldFormat_X Position of the timestamp in the date field
XmlField Specifies the columns in the query that contain
the results in XML format.
XmlTag_X The XmlTag contains a map between the XML tags in the
XMLField and the key word in the audit trails file.
4.2. Post-processors
The same post-processors as for the generic JDBC connector can be applied, as documented
in Post-processors.
# Author : NetGuardians SA
# Date : 24-JUL-2013
# Description : T24 polling configuration sample file
#
# Information : This file presents a sample of the T24 polling
#               configuration file
#
# Copyright (c) 2013 NetGuardians SA, all rights reserved.
#
################################################################
# Unique Identifier
################################################################
# Unique name of the current polling configuration
# with the following form: SyslogServiceName@SyslogHostname
# e.g. testService@testHost
################################################################
# Working Mode
################################################################
################################################################
# Syslog Header
################################################################
################################################################
# Database Connection
################################################################
# Password to connect to DB
Password=password
################################################################
# Data extraction
################################################################
# SQL query to run the first time when status is not determined.
# It is used to extract all the audit trails in the database. The
# query should return String values.
QueryWithoutStatus=SELECT recid as key, T.xmlrecord.getStringVal() as xmlrecord FROM table T WHERE recid LIKE '20%'
################################################################
# XML Configuration
################################################################
# The XML field specifies the columns in the query that contains
# the results in XML format. If this field is not specified, all
# XmlTag fields will be discarded.
#XmlField=xmlrecord
# The XmlTag contains a map between the XML tags in the XMLField
# and the key word in the log file. An XML tag represents the path
# of an element of the XML document from the root node. All mapping
# fields must be put in the order as
# they occur in the log file.
#XmlTag_1=/row/c1:date
#XmlTag_2=/row/c3:time
#XmlTag_3=/row/c5:terminal
#XmlTag_4=/row/c7:company
#XmlTag_5=/row/c8:user
#XmlTag_6=/row/c9:application
#XmlTag_7=/row/c10:level
#XmlTag_8=/row/c11:app
#XmlTag_9=/row/c12:remark
#XmlTag_10=/row/c13:method
################################################################
# Duplicates detection
################################################################
################################################################
# Post-processors configuration
################################################################
#postProcessorClass=com.netguardians.pollingSystem.postprocessors.historytablepostprocessor.HistoryTablePostProcessor
#postProcessorProperty_1=APPLI.trigger=Application
#postProcessorProperty_2=APPLI.xxx.currentTableQuery=SELECT id, xmlrecord FROM USER_T24 WHERE id like '{1}'
#postProcessorProperty_3=APPLI.xxx.historyTableQuery=SELECT id, xmlrecord FROM USER_T24_HISTORY WHERE id like '{1};%'
#postProcessorProperty_4=APPLI.xxx.historyMapping=id
#postProcessorProperty_5=APPLI.xxx.currentMapping=id
#postProcessorProperty_6=APPLI.xxx.strategyType=DIFF
Temenos T24 audit trails may be polled by NG|Screener using database polling. NetGuardians
has implemented an improved polling connector specifically designed to handle T24’s
specific RECID in a way that:
• avoids duplicates that may occur when several rows are inserted for the same RECID in
the target table
• avoids missing entries when rows are added for RECIDs identified by a timestamp in the
past, or after the latest RECID previously found in the target table
Parameter Description
SyslogServiceName The service name for audit trails
classification
SyslogHostname The host name for audit trails classification
Delay Interval between two polls
Username DB user with Read Only rights.
Password DB password
DriverClass JDBC Driver (see example)
ConnectionString JDBC connection string (see example)
QueryTimeout Timeout (in seconds) used for all the queries
QueryWithoutStatus SQL query run the first time, when no status
has been determined yet. It extracts all the
data in the database; after this first run, a
status is defined. This version extracts only
the RECIDs
QueryWithStatus SQL query run at every ’delay’ interval;
extracts data inserted since the previous
poll. The query should return String values.
QueryForDataTemplate This is the second-stage query that is used
to actually fetch the data. It processes a
list of RECIDs given in an SQL IN clause.
StatusField Field to use for the status (ID, timestamp)
StatusFormat The format of the status field
StatusDateFormat In case the date is given as CCCCCCCC, it
needs a date format
As the following section presents, the New T24 Connector handles T24’s specific RECID
format by computing the new last status for the next polling request in a very specific
way. For this reason, the format of the RECID in the target table needs to be precisely
indicated using the following pattern :
A few examples :
CCCCCCCCUUUUUSSSSSS?NNXXXX 20150304abcde121530.011234
DDDDDUUUUUTTTTT?NNXXXX 12345abcde45678.011234
DDDDDTTTTT 1234545698
CCCCCCCCCCUUUUUTTTTT?NNXXXX 2015-03-04abcde00600.011234
The duplicate detection mechanism in the New T24 connector is somewhat similar to the
generic mechanism implemented in the generic JDBC connector, but uses solely the RECID
to identify duplicated rows.
This specific duplicate detection system requires proper configuration; refer to the
sample configuration provided below for more information on the topic.
5.5. Post-processors
The same post-processors as for the generic JDBC connector can be applied, as documented
in Post-processors.
################################################################
# Working Mode
################################################################
################################################################
# Syslog Header
################################################################
################################################################
# Database Connection
################################################################
################################################################
# Data Extraction
################################################################
# SQL query to run the first time when status is not determined. It
# is used to extract all the audit trail keys in the database.
# Unlike the former T24 connector, this version extracts only the
# RECIDs.
# This is used only if the format of the status does not allow the
# use of the TimeInSecondsInitOffset
QueryWithoutStatus=SELECT recid as key FROM table T WHERE recid LIKE '20%'
# SQL query to run every "delay" interval in order to fetch the
# latest audit trails.
# The query needs to filter audit trails in order to fetch the latest
# entries. The query should return String values.
QueryWithStatus=SELECT recid as key FROM table T WHERE recid LIKE '20%' AND SUBSTR(recid,1,8)||SUBSTR(recid,14,5) >= ?
# This is the second-stage query that is used to actually fetch the
# data. It processes a list of RECIDs given in an SQL IN clause.
QueryForDataTemplate=SELECT recid as key, T.xmlrecord.getStringVal() as xmlrecord FROM table T WHERE recid IN
# This is the time in seconds that keys are kept in the cache of
# keys already processed, i.e. already sent to the log collector.
# This value must in any case be greater than the value of the
# TimeInSecondsOffsetLogFetching above, by at least one hour
# The default value is 2 hours = 7200
TimeInSecondsCacheConservation=7200
# This is the time offset that is used when no former status could
# be found (i.e. upon first execution of the polling)
# Logs are fetched from "current time" minus that offset
# Default is 86400 seconds = 24 hours
# IMPORTANT : use value "-1" to disable this feature and force
# fetching everything if no former status was found
TimeInSecondsInitOffset=86400
################################################################
# XML Configuration
################################################################
# The XML field specifies the columns in the query that contains
# the results in XML format. If this field is not specified, all
# XmlTag fields will be discarded.
#XmlField=xmlrecord
# The XmlTag contains a map between the XML tags in the XMLField
# and the key word in the log file. An XML tag represents the path
# of an element of the XML document from the root node. All mapping
# fields must be put in the order as
# they occur in the log file.
#XmlTag_1=/row/c1:date
#XmlTag_2=/row/c3:time
#XmlTag_3=/row/c5:terminal
#XmlTag_4=/row/c7:company
#XmlTag_5=/row/c8:user
#XmlTag_6=/row/c9:application
#XmlTag_7=/row/c10:level
#XmlTag_8=/row/c11:app
#XmlTag_9=/row/c12:remark
#XmlTag_10=/row/c13:method
################################################################
################################################################
# Post-processors configuration
################################################################
#postProcessorClass=com.netguardians.pollingSystem.postprocessors.historytablepostprocessor.HistoryTablePostProcessor
#postProcessorProperty_1=APPLI.trigger=Application
#postProcessorProperty_2=APPLI.xxx.currentTableQuery=SELECT id, xmlrecord FROM USER_T24 WHERE id like '{1}'
#postProcessorProperty_3=APPLI.xxx.historyTableQuery=SELECT id, xmlrecord FROM USER_T24_HISTORY WHERE id like '{1};%'
#postProcessorProperty_4=APPLI.xxx.historyMapping=id
#postProcessorProperty_5=APPLI.xxx.currentMapping=id
#postProcessorProperty_6=APPLI.xxx.strategyType=DIFF
The History Table PostProcessor is provided to handle the specific case of staged polling
for architectures based on a current/history table and a mutation log. It gives the ability to
detect and report changes of business entities stored in those tables.
6.1. Background
The reason this post-processing is required is that there is no direct and efficient
means of detecting whether a new change has been appended to the history table. There is
no common, incremental key for these entries that would allow querying for recent
changes, which means that each business entity (a user, an account, etc.) would need to
be queried for separately. That is not feasible for performance reasons. There is,
however, a mutation log table (also called the protocol log) which contains information
about every change performed in the system. This log contains only the metadata of the
change (such as the time and entity ID), so the current and history tables still need to
be queried for the exact changes performed. However, it allows for selective querying of
the history table, which results in substantially reduced traffic to the database.
6.2. Overview
6.3. Implementation
This post-processor is fed with data retrieved by the main polling mechanism from the
mutation log. Each such record is cached and serves as a suggestion for the post-
processor that changes might have occurred or might occur in the near future. These are
then used to selectively poll the history table for the specified business entities to detect
changes.
To make change detection possible, a cache of the maximum versions of all the records in
the history table is built when initializing the application for the first time (a
process that might take significant time) and maintained throughout its whole lifetime.
This cache is then used as a reference point when polling for the candidates detected
beforehand. This is also the reason why only changes that have occurred after the
polling system has been initialized will be detected.
A poll is performed periodically on the history table, using information from those
cached entries as a filter. The version number of the most recent record in the history
table is then compared to the maximum version that the post-processor keeps in its
cache. If the version present in the history table is higher, a change (or several
changes, since multiple may have occurred within the polling interval) has been
performed on the entity. In that case, all the history table records with a version
number higher than the cached one are fetched, the current record is retrieved from the
current table, and all of them are used to generate and send the respective log(s). The
highest version number encountered is then cached as the most recent one by the
post-processor for future reference, and the mutation log entry that triggered the
process is discarded.
When a user opens an editing window in a T24 application, a record indicating that event
is generated in the protocol table. The user can then apply and save some modifications
to the entity, or simply close the window without performing any changes. No record is
generated when a change is committed, though, which is why there is no direct means of
detecting that a change was performed for an entry in a T24 application. This is the
reason the polling needs to be repeated periodically: to support the case where a user
keeps the editing window open for some time and only then saves the changes. On the
other hand, if such changes are never saved, no new records will appear in the history
table. To prevent the protocol logs from occupying the cache forever, they are evicted
after a configurable time. This time can be understood as the maximum time the user is
expected to spend making changes.
The following clarifies how these tables work and why caching just the maximum version
of a record in the history table is enough.
The current table holds the current (most up-to-date) entries for all business entities
(users, accounts, etc.) that the application deals with. The schema is the following:
+----------+---------+
|RECID |XMLRECORD|
+----------+---------+
|MICHAEL | <DATA> |
|JOHN | <DATA> |
|PETER | <DATA> |
+----------+---------+
The history table holds all the historical values a particular entry ever had. The significant
difference here being that the ID now has the version number appended:
+----------+---------+
|RECID |XMLRECORD|
+----------+---------+
|MICHAEL;1 | <DATA> |
|MICHAEL;2 | <DATA> |
|MICHAEL;3 | <DATA> |
|JOHN;1 | <DATA> |
+----------+---------+
If a record only ever had one (initial) state, it does not appear in the history table
(PETER being the example here). Whenever a change is made to an entity, its current
value from the current table is moved to the history table with an incremental version
appended to the ID, and the new version of the entry replaces it in the current table.
That makes it sufficient to hold only the maximum version of a record (or none if not
existent) to detect any changes: whenever a query is performed on that particular entry,
merely comparing the current version with the cached version reveals whether a change
has been performed and how many changes were performed.
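Continuing the example above, suppose the cached maximum version for MICHAEL is 3 and two further changes are then saved (illustrative values):

```
Cached:        MICHAEL -> version 3
History table: MICHAEL;1 ... MICHAEL;5   (versions 4 and 5 are new)
Result:        MICHAEL;4 and MICHAEL;5 are fetched from the history table,
               the current record is fetched from the current table,
               and the cache is updated to MICHAEL -> version 5.
```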
In contrast to the T24 implementation, Finnova stores its entity IDs and versions in
separate table columns. Moreover, to differentiate unique entities, a composite key
needs to be used: a combination of creation ID and creation date.
The post-processor can be configured with the following parameters. Note: when added to
the polling configuration, all of the parameter keys need to be prefixed with
postProcessorProperty_x=, where x are consecutive numbers starting from 1.
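For example, the generic parameters APPLI.trigger and APPLI.secondaryPollingIntervalSeconds would be added to a polling configuration as follows (the interval value 60 is an illustrative placeholder):

```properties
postProcessorProperty_1=APPLI.trigger=Application
postProcessorProperty_2=APPLI.secondaryPollingIntervalSeconds=60
```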
• APPLI.trigger
The name of the field which specifies the name of the application.
The character # can be added at the end of the field name when only part of the field
content should be matched for the application name: the part before the expression
specified in APPLI.separator. For example, if you audit the table USER and want to
match USER.TEST as well, APPLI.separator should be set to \\.. That way the
configuration specified for USER will match USER.TEST as well.
• APPLI.separator
Used when APPLI.trigger is suffixed with a #. The field can be any valid regular
expression according to
https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html.
• APPLI.actionOnMissing
Defines whether a protocol log received from the main polling should be sent to syslog
when the configuration for the application it comes from is missing.
Possible values : SEND (default) or DROP
• APPLI.secondaryPollingIntervalSeconds
Time to wait (in seconds) between consecutive polling runs
• APPLI.cacheEvictionMinutes
Time (in minutes) to keep the protocol logs as candidates for polling in the cache
• APPLI.applications
A comma-separated list of applications (table names) for which to perform secondary
polling. Each such application must have a complete set of application-specific
parameters defined.
• APPLI.metadataExtractor
An implementation-specific metadata extractor. Implements the logic required to
extract the ID and version of records.
Possible values: T24MetadataExtractor or FinnovaMetadataExtractor
• APPLI.metadataExtractorIdFields
A comma-separated list of field names that uniquely identify the business entity. Can be
a single field or a composition.
• APPLI.metadataExtractorVersionField
The name of the field that contains the record version.
Application-specific parameters:
Replace the xyz with a specific application name.
• APPLI.xyz.strategyType
The comparison strategy:
1. ALL: All values from the older entry are prefixed with OLD_ and all new values are
prefixed with NEW_. No comparison is done and all values are appended to the log.
2. DIFF: Only the differences are appended to the log. For values that exist in both
entries, the old values are prefixed with OLD_ and the new values with NEW_. If a value
exists only in one entry, that value is added as well.
3. WHITE: Performs the DIFF strategy, but only on the fields provided in the
APPLI.xyz.strategyFields parameter.
4. BLACK: Performs the DIFF strategy, but skips the fields provided in the
APPLI.xyz.strategyFields parameter.
• APPLI.xyz.strategyFields
A comma-separated list of fields to use with the BLACK, WHITE or
DIFF_CONSOLIDATED strategy.
• APPLI.xyz.historyTableMetadataQuery
The query used to fetch the metadata (usually ID and version) of all history table
records. Only the minimum required fields should be defined to extract the required
data, and their names should match those specified in the metadata extractor
configuration. This query is used only once, when the application is first started, to
initialize the maximum IDs cache.
• APPLI.xyz.historyTableQuery
The query used to fetch data from the history table. The rows returned should be limited to the record versions relevant to the entry being processed.
• APPLI.xyz.historyMapping
A comma-separated list of fields to inject into APPLI.xyz.historyTableQuery.
• APPLI.xyz.currentTableQuery
The query used to fetch data from the current table. Only one row must be returned. All
the details from APPLI.xyz.historyTableQuery apply here as well.
• APPLI.xyz.currentMapping
A comma-separated list of fields to inject into APPLI.xyz.currentTableQuery.
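As an illustration of the DIFF strategy described above, the following sketch (not the product code) emits only changed fields with OLD_/NEW_ prefixes and keeps fields present in only one of the two entries:

```python
def diff_strategy(old_entry, new_entry):
    """Sketch of the DIFF comparison strategy: append only the fields
    that changed, prefixing old values with OLD_ and new values with
    NEW_. A field present in only one entry is appended as well (the
    prefixing of such one-sided values is an assumption here)."""
    log = {}
    for field in set(old_entry) | set(new_entry):
        if field in old_entry and field in new_entry:
            if old_entry[field] != new_entry[field]:
                log["OLD_" + field] = old_entry[field]
                log["NEW_" + field] = new_entry[field]
        elif field in old_entry:
            log["OLD_" + field] = old_entry[field]
        else:
            log["NEW_" + field] = new_entry[field]
    return log
```

The WHITE and BLACK strategies would simply filter the field set before this comparison.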
DateField_3=NEW_ROW.C85
DateFieldFormat_3=yyMMddHHmm
XmlField=xmlrecord
XmlTag_8=/row/c11:app
postProcessorClass=com.netguardians.pollingSystem.postprocessors.historytablepostprocessor.HistoryTablePostProcessor
postProcessorProperty_1=APPLI.trigger=APPLICATION#
postProcessorProperty_2=APPLI.cacheEvictionMinutes=120
postProcessorProperty_3=APPLI.secondaryPollingIntervalSeconds=20
postProcessorProperty_4=APPLI.USER.currentTableQuery=SELECT recid as t24_id, T.xmlrecord.getClobVal() as xmlrecord FROM T24_F_USER T WHERE recid like '{1}'
postProcessorProperty_5=APPLI.USER.historyTableQuery=SELECT recid as t24_id, T.xmlrecord.getClobVal() as xmlrecord FROM T24_F_USER#HIS T WHERE recid like '{1};%'
postProcessorProperty_6=APPLI.USER.historyMapping=app
postProcessorProperty_7=APPLI.USER.currentMapping=app
postProcessorProperty_8=APPLI.USER.strategyType=DIFF
postProcessorProperty_9=APPLI.separator=\,
postProcessorProperty_10=APPLI.applications=USER
postProcessorProperty_11=APPLI.USER.historyTableMetadataQuery=SELECT recid as t24_id FROM T24_F_USER#HIS
postProcessorProperty_12=APPLI.metadataExtractor=T24MetadataExtractor
postProcessorProperty_13=APPLI.metadataExtractorIdFields=t24_id
#postProcessorProperty_14=APPLI.metadataExtractorVersionField=
Notes:
Only parameters relevant to the post-processor are included in the sample configuration.
The application name must also be available from the protocol logs (the recid/key is
just a number in this case), which is why it is extracted from the XML field in
QueryForDataTemplate and assigned to t24_id.
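For reference, the recid like '{1};%' pattern in the sample above relies on T24 history record IDs carrying a ;version suffix (e.g. ABC123;2). A minimal sketch of splitting such a key:

```python
def split_history_recid(recid):
    """Split a T24 history record ID such as 'ABC123;2' into the base
    record ID and its version number, as suggested by the
    recid like '{1};%' pattern in the sample configuration."""
    base, _, version = recid.rpartition(";")
    return base, int(version)
```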
postProcessorClass=com.netguardians.pollingSystem.postprocessors.historytablepostprocessor.HistoryTablePostProcessor
postProcessorProperty_1=APPLI.trigger=TAB_BEZ
postProcessorProperty_2=APPLI.separator=\,
postProcessorProperty_3=APPLI.secondaryPollingIntervalSeconds=20
postProcessorProperty_4=APPLI.cacheEvictionMinutes=120
postProcessorProperty_5=APPLI.applications=KD_ADR
postProcessorProperty_6=APPLI.KD_ADR.historyTableMetadataQuery=SELECT to_char(CREATE_DT, 'YYYY-MM-DD HH24:MI:SS') as finnova_create_date, create_id as finnova_create_id, vers as finnova_version FROM F2LARCP0_KD_ADR
postProcessorProperty_7=APPLI.KD_ADR.currentTableQuery=select KD_WERB_CD, to_char(CREATE_DT, 'YYYY-MM-DD HH24:MI:SS') as CREATE_DT, to_char(CREATE_DT, 'YYYY-MM-DD HH24:MI:SS') as finnova_create_date, CREATE_ID, CREATE_ID as finnova_create_id, MUT_VON, VERS_VOR, STAT_CD, USERBK_NR, VERS, VERS as finnova_version from KD_ADR where CREATE_ID = '{1}' and CREATE_DT = TO_DATE('{2}', 'YYYY-MM-DD HH24:MI:SS') and VERS = '{3}'
postProcessorProperty_8=APPLI.KD_ADR.currentMapping=finnova_create_id,finnova_create_date,finnova_version
postProcessorProperty_9=APPLI.KD_ADR.historyTableQuery=select KD_WERB_CD, to_char(CREATE_DT, 'YYYY-MM-DD HH24:MI:SS') as CREATE_DT, to_char(CREATE_DT, 'YYYY-MM-DD HH24:MI:SS') as finnova_create_date, CREATE_ID, CREATE_ID as finnova_create_id, MUT_VON, VERS_VOR, STAT_CD, USERBK_NR, VERS, VERS as finnova_version from F2LARCP0_KD_ADR where CREATE_ID = '{1}' and CREATE_DT = TO_DATE('{2}', 'YYYY-MM-DD HH24:MI:SS') and VERS < '{3}'
postProcessorProperty_10=APPLI.KD_ADR.historyMapping=finnova_create_id,finnova_create_date,finnova_version
postProcessorProperty_11=APPLI.KD_ADR.strategyType=DIFF_CONSOLIDATED
postProcessorProperty_12=APPLI.metadataExtractor=FinnovaMetadataExtractor
postProcessorProperty_13=APPLI.metadataExtractorIdFields=finnova_create_date,finnova_create_id
postProcessorProperty_14=APPLI.metadataExtractorVersionField=finnova_version
Notes:
Only parameters relevant to the post-processor are included in the sample configuration.
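The historyMapping and currentMapping lists above drive positional placeholder substitution: the values of the listed fields replace {1}, {2}, {3} in the corresponding query, in order. A sketch of that mechanism (quoting and escaping are deliberately ignored here):

```python
def inject_mapping(query_template, mapping_fields, record):
    """Replace the positional placeholders {1}, {2}, ... in a query
    template with the record values of the mapped fields, in order.
    Sketch only; the real connector must also handle SQL quoting."""
    query = query_template
    for position, field in enumerate(mapping_fields, start=1):
        query = query.replace("{%d}" % position, str(record[field]))
    return query
```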
• postProcessorProperty_x=APPLI.xyz.historyQueryTimestampKey
• postProcessorProperty_x=APPLI.xyz.historyQueryTimestampFormat
• postProcessorProperty_x=APPLI.xyz.currentQueryTimestampKey
• postProcessorProperty_x=APPLI.xyz.currentQueryTimestampFormat
• postProcessorProperty_x=APPLI.xyz.originalRowTimestampKey
• postProcessorProperty_x=APPLI.xyz.originalRowTimestampFormat
• postProcessorProperty_x=APPLI.xyz.maxDelta
• postProcessorProperty_x=APPLI.applicationFieldName
• postProcessorProperty_x=APPLI.applications
• postProcessorProperty_x=APPLI.xyz.historyTableMetadataQuery
• postProcessorProperty_x=APPLI.xyz.metadataExtractor
• postProcessorProperty_x=APPLI.xyz.metadataExtractorIdFields
• postProcessorProperty_x=APPLI.xyz.metadataExtractorVersionField
FlexCube Polling is a specific version of the JDBC Connector that supports processing of
the FlexCube-specific XML.
The FlexCube polling connector serializes the XML information in a KEY=VALUE form to
simplify data interpretation.
For each FV tag, the related FN description is looked up (using TYPE to find the
appropriate description) and the event is formatted accordingly.
Note that post-processors are not yet available for the FlexCube connector.
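The FV/FN pairing might be implemented along the lines of the sketch below; the XML layout (FN description elements matched to FV value elements via a TYPE attribute) is an assumption for illustration only:

```python
import xml.etree.ElementTree as ET


def serialize_flexcube(xml_text):
    """Sketch of KEY=VALUE serialization: for each FV value element,
    look up the FN description sharing the same TYPE attribute and
    emit DESCRIPTION=VALUE. The XML layout used here is assumed,
    not the documented FlexCube schema."""
    root = ET.fromstring(xml_text)
    names = {fn.get("TYPE"): fn.text for fn in root.iter("FN")}
    pairs = []
    for fv in root.iter("FV"):
        key = names.get(fv.get("TYPE"), fv.get("TYPE"))
        pairs.append("%s=%s" % (key, fv.text))
    return " ".join(pairs)
```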
#---------------------------------------------------------------------
# WORKING MODE
#---------------------------------------------------------------------
# SYSLOG HEADER
#---------------------------------------------------------------------
# Password to connect to the DB
# If the password is encoded, prefix the encoded value with ==
Password=password
#---------------------------------------------------------------------
# MODE POLLING
# In FetchWithStatus mode
# The status field is extracted from the Query, thus a statusField
# name is given and a StatusQuery is provided in order to retrieve the
# status during the first start of the module (in order to dump the
# whole DB during the first start)
Mode=fetchWithStatus
# In FetchAll mode
# No status is needed; the whole table data is fetched at each
# poll. This mode can be used if the table is purged between
# polls
# Mode=fetchAll
# In FetchAndDelete mode
# No status is needed (a row is "marked" as fetched when it is no
# longer available in the DB). Thus, a unique row id should be
# provided by the Query and then used by the DeleteQuery in order to
# delete the right row
# Mode=fetchAndDelete
#---------------------------------------------------------------------
# DATA EXTRACTION
# SQL query run during the first polling startup in order to fetch
# the last audit trail unique identifier
StatusQuery=SELECT MAX(id) FROM table
#---------------------------------------------------------------------
# XML PARSING
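The fetchWithStatus mode described in the configuration comments above can be sketched as the following polling loop; the fetch_rows callable is an assumed stand-in for the actual Query execution:

```python
def poll_with_status(fetch_rows, initial_status):
    """Sketch of the fetchWithStatus polling mode: remember the highest
    status value seen (e.g. the result of SELECT MAX(id)) and fetch
    only rows beyond it on each run. fetch_rows(status) is an assumed
    helper returning (status_value, row) pairs newer than status."""
    status = initial_status
    collected = []
    for row_status, row in fetch_rows(status):
        collected.append(row)
        if row_status > status:
            status = row_status
    return status, collected
```

The returned status would be persisted as the PollingStatus so the next run resumes where this one stopped.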
Windows audit trails may be polled by NG|Screener using WMI (Windows Management
Instrumentation). All audit trails (Security, System, Application, etc.) may be polled using
this method.
The following parameters may be configured for WMI polling. Configuration is also needed
on the Windows server to allow WMI polling; refer to the Windows connector documentation.
Parameter            Description
SyslogServiceName    The service name for audit trail classification (never change it!)
Username             User with WMI read-only rights
Password             Password
Hostname             Server hostname or IP address
Domain               Microsoft domain
Delay                Interval between two polls
WmiDefaultNamespace  Default namespace
StatusField          Field used for the status (should be a date!)
Facilities           Windows event logs to poll (Security, Application, System, etc.)
WindowsVersion       Windows server version (Windows 2003 and 2008 are supported)
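For illustration, a status-based event log query could take the WQL form sketched below; Win32_NTLogEvent, Logfile and TimeGenerated are standard WMI names, but the exact query issued by the connector is an assumption:

```python
def build_wql(logfile, since):
    """Build an illustrative WQL query fetching Windows event log
    entries newer than the given status date (WMI DMTF timestamp
    format, e.g. 20110617080000.000000-000). Sketch only; the real
    connector's query may differ."""
    return ("SELECT * FROM Win32_NTLogEvent "
            "WHERE Logfile='%s' AND TimeGenerated > '%s'" % (logfile, since))
```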
# Author : NetGuardians SA
# Date : 21-AUG-2012
# Description: MSWMI polling config sample for Windows system eventlog
#
# Copyright (c) 2007-2012 NetGuardians SA, all rights reserved.
#
################################################################
# Unique Identifier
################################################################
# Unique name of the current polling configuration
# with the following form: SyslogServiceName@SyslogHostname
# e.g. mswmiServiceName@MyServer
################################################################
# Syslog Header
################################################################
################################################################
# MSWMI Connection
################################################################
# user information
Username=Username
Password=password
# connection settings
Hostname=Hostname
################################################################
# MSWMI Options
################################################################
# Default Namespace
WmiDefaultNamespace=ROOT\\CIMV2
################################################################
# MSWMI Expert Mode
################################################################
# The date reference used to fetch new Windows Event Log entries
# (default value: NOW)
# Possible values:
# - ALL
# - NOW
# - Specific date (required format: yyyyMMddHHmmss.SSSSSS-SSS)
# Mechanism:
# - if the PollingStatus exists, it is used to search for the last
#   records changed in the Windows Event Log
# - if the PollingStatus does not exist and the StartDate exists,
#   the StartDate is used to search for the last records changed in
#   the Windows Event Log
# - if neither PollingStatus nor StartDate exists, all records are
#   fetched from the Windows Event Log
#StartDate=20110617080000.000000-000
# Delimiters
# For special characters use the following syntax [character].
# E.g. [\t] [\n]
#Delimiter=[\t]
#KeyValueSeparator=:
#MultipleValueSeparator=,
# Event fields list, the list of fields is only used with the
# "LogFormatOutput=default"
#EventFields=Category,CategoryString,ComputerName,EventCode,EventIdentifier,EventType,InsertionStrings,Logfile,Message,RecordNumber,SourceName,TimeGenerated,TimeWritten,Type,User
# EventCodes to skip
# We skip events 4661 and 4658 because they are generated by the
# WMI polling itself when an object is requested
EventCodesToSkip=4661,4658