Deployment of the TA-metricator-for-nmon


Operating system

The Technical Add-on is compatible with:

  • Linux on x86 in 32/64 bits, PowerPC (PowerLinux), s390x (zLinux), ARM
  • IBM AIX 7.1 and 7.2
  • Oracle Solaris 11

Third party software and libraries

To operate as expected, the Technical Add-on requires a Python or a Perl environment available on the server:

Python environment: used in priority when available


Python 3 support

  • From release 1.1.0 of the Add-on onwards, Python 3.x is required (unless using Perl)
  • The last release supporting Python 2.x is release 1.0.11

Requirement         Version
Python interpreter  3.x

Perl environment: used only as a fallback

Requirement                       Version
Perl interpreter                  5.x
Time::HiRes module                any
Text::CSV or Text::CSV_XS module  any


  • IBM AIX does not generally ship with Python. However, Perl is available as standard, and the Technical Add-on has the Perl “Text::CSV” module built in. Moreover, Time::HiRes is part of the Perl core modules.
  • Modern Linux distributions generally have Python available and do not require any further action.
  • Linux distributions lacking Python will fall back to Perl and must satisfy the Perl module requirements.
  • If running on a full Splunk instance (any Splunk dedicated machine running Splunk Enterprise), the Technical Add-on uses Splunk built-in Python interpreter.
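
As a quick sanity check before deployment, the interpreter fallback described above can be verified manually on the target host. This sketch is purely illustrative and is not part of the add-on:

```shell
# Illustrative check of the interpreter fallback order:
# Python 3 first, then Perl with the Time::HiRes core module.
if command -v python3 >/dev/null 2>&1; then
    echo "python3 found: $(python3 --version 2>&1)"
elif command -v perl >/dev/null 2>&1; then
    # Time::HiRes is a Perl core module; Text::CSV is bundled with the add-on.
    perl -MTime::HiRes -e 'print "perl with Time::HiRes found\n"'
else
    echo "no suitable interpreter found" >&2
fi
```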


The TA-metricator-for-nmon can be deployed to any full Splunk instance or Universal Forwarder instances.

The Technical Add-on should be deployed to the regular Splunk applications directory:

$SPLUNK_HOME/etc/apps/

where $SPLUNK_HOME refers to the root directory of the Splunk installation.

The Technical Add-on uses relative paths referring to $SPLUNK_HOME; as such, it is fully compatible with any deployment where $SPLUNK_HOME refers to a custom installation directory.
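
Because only paths relative to $SPLUNK_HOME are used, the same layout works under any root directory. A minimal illustration, using a temporary directory as a stand-in for a custom $SPLUNK_HOME:

```shell
# A temporary directory stands in for a custom Splunk root; the app layout
# below $SPLUNK_HOME is identical whatever the root actually is.
SPLUNK_HOME=$(mktemp -d)
mkdir -p "$SPLUNK_HOME/etc/apps/TA-metricator-for-nmon"
echo "App path: $SPLUNK_HOME/etc/apps/TA-metricator-for-nmon"
```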

Deployment by Splunk deployment server

The TA-metricator-for-nmon can be deployed by any Splunk deployment server:

Upload the tgz archive to your deployment server in a temporary directory, for example:

cd /tmp/
<upload the archive here>

The Technical Add-on tgz archive must be uncompressed and installed in $SPLUNK_HOME/etc/deployment-apps:

cd /opt/splunk/etc/deployment-apps/
tar -xvzf /tmp/TA-metricator-for-nmon_*.tar.gz

If any customization is required, create a local directory and configure your settings in the local/ configuration files.
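
The override pattern can be sketched as follows; the file name "inputs.conf" is illustrative, and a temporary directory stands in for the real deployment-server application path:

```shell
# Stand-in for the deployed application directory.
APP=$(mktemp -d)/TA-metricator-for-nmon
mkdir -p "$APP/default"
echo "[default]" > "$APP/default/inputs.conf"   # illustrative default file

# Never edit default/ in place: copy the file into local/ and edit the copy,
# so that future upgrades do not overwrite your settings.
mkdir -p "$APP/local"
cp "$APP/default/inputs.conf" "$APP/local/inputs.conf"
ls "$APP/local"
```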

Finally, create a serverclass, or add the TA-metricator-for-nmon application to an existing serverclass. The required parameters are:

  • Enable App
  • Restart Splunkd
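
As a sketch, a minimal serverclass.conf stanza implementing these two parameters could look like the following; the serverclass name and the whitelist are illustrative and must be adapted to your environment:

```ini
# Illustrative serverclass: the name "nmon" and the whitelist are assumptions.
[serverClass:nmon]
whitelist.0 = *

[serverClass:nmon:app:TA-metricator-for-nmon]
# "Enable App" and "Restart Splunkd" as conf settings:
stateOnClient = enabled
restartSplunkd = true
```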

No additional configuration actions are required: the monitoring inputs are activated by default, and the Technical Add-on will start as soon as it has been deployed and splunkd has been restarted.

Deployment by any configuration management solution

The Technical Add-on can be deployed by any configuration management product such as Ansible, Chef or Puppet.

The steps are the same as for a deployment by the Splunk deployment server; the configuration management solution must ensure a proper restart of the Splunk instance after the Technical Add-on deployment.

What happens once the Technical Add-on has been deployed

Once the Technical Add-on has been deployed and the Splunk instance restarted, the following actions are taken automatically:

Fifo reader processes and Nmon processes startup

At startup time, Splunk will automatically trigger the execution of the “bin/” script.

This script does several actions, such as:

  • Identifying the operating system and its sub-version
  • For Linux OS, locally extracting the “bin/linux.tgz” archive, if it exists and this is the first deployment or an upgrade
  • Starting the fifo_reader processes
  • Starting the nmon binary according to the guest operating system and the configuration settings

The script activity is available in:

  • standard output:
eventtype=nmon:collect host=<server hostname>
  • error output:
index=_internal sourcetype=splunkd host=<server hostname> error

Running processes in machine

Several processes can be found on the machine. On an initial run you will find the fifo_reader processes (output may differ, especially for paths):

Using Python interpreter: (Universal Forwarder example)

python /opt/splunkforwarder/etc/apps/TA-metricator-for-nmon/bin/ --fifo fifo1
/bin/sh -c /opt/splunkforwarder/etc/apps/TA-metricator-for-nmon/bin/ /opt/splunkforwarder/var/log/metricator/var/nmon_repository/fifo1/nmon.fifo
/bin/sh /opt/splunkforwarder/etc/apps/TA-metricator-for-nmon/bin/ /opt/splunkforwarder/var/log/metricator/var/nmon_repository/fifo1/nmon.fifo

Using Perl interpreter: (Universal Forwarder example)

/usr/bin/perl /opt/splunkforwarder/etc/apps/TA-metricator-for-nmon/bin/ --fifo fifo1
/bin/sh /opt/splunkforwarder/etc/apps/TA-metricator-for-nmon/bin/ /opt/splunkforwarder/var/log/metricator/var/nmon_repository/fifo1/nmon.fifo
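
To verify that the readers are alive, a simple process filter works. The sample line below is fabricated (paths and PIDs are illustrative) so the pipeline can be tried anywhere; on a real host you would feed it from ps -ef:

```shell
# Fabricated sample of a ps -ef line (path and PID are illustrative):
sample='root 1234 1 0 05:12 ? 00:00:01 /usr/bin/perl .../bin/fifo_reader --fifo fifo1'
echo "$sample" | grep -E 'fifo_reader|nmon'

# On a live host:
# ps -ef | grep -E 'fifo_reader|nmon' | grep -v grep
```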

The startup operation is visible through a logged message:

eventtype=nmon:collect starting fifo_reader


12-02-2018 05:12:14, INFO: starting the fifo_reader fifo1

In addition, you will find an nmon binary instance running, for example (output will differ depending on the operating system and settings):

/opt/splunkforwarder/var/log/metricator/bin/linux/rhel/nmon_power_64_rhel6_be -F /opt/splunkforwarder/var/log/metricator/var/nmon_repository/fifo1/nmon.fifo -T -s 60 -c 1440 -d 1500 -g auto -D -p

The startup operation is visible through a logged message:

eventtype=nmon:collect starting nmon


12-02-2018 05:12:15, INFO: starting nmon : /opt/splunkforwarder/var/log/metricator/bin/linux/sles/nmon_power_64_sles12_le -F /opt/splunkforwarder/var/log/metricator/var/nmon_repository/fifo1/nmon.fifo -T -s 60 -c 1440 -d 1500 -g auto -D -p in /opt/splunkforwarder/var/log/metricator/var/nmon_repository/fifo1
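
In the command line above, “-s 60 -c 1440” are nmon's standard interval and snapshot count options: one snapshot every 60 seconds, 1440 times, which adds up to a 24-hour collection cycle. A one-line check of the arithmetic:

```shell
# -s 60  : one snapshot every 60 seconds
# -c 1440: 1440 snapshots in total
echo "$(( 60 * 1440 / 3600 )) hours per collection cycle"
```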

Nmon data processing

The Nmon data processing is performed every minute by the script “”

Its activity is indexed in Splunk, and available via the following search:

eventtype=nmon:processing host=<server hostname>


12-02-2018 09:50:02 Reading NMON data: 440 lines 26766 bytes
Splunk Root Directory ($SPLUNK_HOME): /opt/splunkforwarder
Add-on type: /opt/splunkforwarder/etc/apps/TA-metricator-for-nmon
Add-on version: 1.0.0
nmonparser version: 2.0.0
Guest Operating System: linux
Python version: 2.7.5
TIME of Nmon Data: 05:11.54
DATE of Nmon data: 12-FEB-2018
logical_cpus: 1
NMON OStype: Linux
virtual_cpus: 1
SerialNumber: PPD-Linux
NMON ID: 12-FEB-2018:05:11.54,,PPD-Linux,26766,1518430314,1518446953
ANALYSIS: Enforcing fifo mode using --mode option
Starting_epochtime: 1518430314
Ending_epochtime: 1518446953
last known epoch time: 0
CONFIG section: will not be extracted (time delta of 66282 seconds is inferior to 86400 seconds)
Output mode is configured to run in minimal mode using the --silent option
Elapsed time was: 0.188985 seconds

Splunk indexing

Once the data processing steps have been completed, several csv flow files are generated and consumed by Splunk in batch mode (index and delete).
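
In Splunk terms, “batch mode (index and delete)” corresponds to a batch input with the sinkhole move policy. As a sketch (the path matches the log excerpts below; the stanza is illustrative rather than the add-on's exact configuration):

```ini
# Illustrative batch input: Splunk indexes each csv file once, then deletes it.
[batch:///opt/splunkforwarder/var/log/metricator/var/csv_repository]
move_policy = sinkhole
```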

The traces of these activities are visible in Splunk using the following search:

index=_internal sourcetype=splunkd host=<server hostname> batch input


02-12-2018 10:01:09.073 -0500 INFO  TailReader - Batch input finished reading file='/opt/splunkforwarder/var/log/metricator/var/csv_repository/sys-91367.dal-ebis.ihost.com_01_DGBACKLOG.metrics.csv'
02-12-2018 10:01:09.073 -0500 INFO  TailReader - Batch input finished reading file='/opt/splunkforwarder/var/log/metricator/var/csv_repository/sys-91367.dal-ebis.ihost.com_01_DGIOTIME.metrics.csv'
02-12-2018 10:01:09.072 -0500 INFO  TailReader - Batch input finished reading file='/opt/splunkforwarder/var/log/metricator/var/csv_repository/sys-91367.dal-ebis.ihost.com_01_DGWRITESERV.metrics.csv'
02-12-2018 10:01:09.072 -0500 INFO  TailReader - Batch input finished reading file='/opt/splunkforwarder/var/log/metricator/var/csv_repository/sys-91367.dal-ebis.ihost.com_01_DGWRITEMERGE.metrics.csv'
02-12-2018 10:01:09.071 -0500 INFO  TailReader - Batch input finished reading file='/opt/splunkforwarder/var/log/metricator/var/csv_repository/sys-91367.dal-ebis.ihost.com_01_DGWRITES.metrics.csv'

Immediately after the files have been consumed, the metrics and events are available in Splunk.

For troubleshooting and any advanced configuration purposes, please consult the other pages of this documentation.