1. Setup a development system
This guide describes the requirements and the steps necessary in order to get started with the development of the OpenNMS project.
1.1. Operating System / Environment
To build OpenNMS you need a *nix system. It does not have to run on physical hardware; a virtual machine is sufficient. The choice is yours, but we recommend one of the following:
-
Linux Mint with Cinnamon Desktop environment
-
Mac OS X
This documentation assumes that you chose a Debian-based desktop environment.
1.2. Installation
The next section describes the full setup of your environment to meet the prerequisites. Follow these instructions; they may vary depending on your operating system.
# add OpenNMS as repository to install icmp and such
echo "deb http://debian.opennms.org stable main" > /etc/apt/sources.list.d/opennms.list
echo "deb-src http://debian.opennms.org stable main" >> /etc/apt/sources.list.d/opennms.list
# Add pgp key
wget -O - https://debian.opennms.org/OPENNMS-GPG-KEY | apt-key add -
# overall update
apt-get update
# install stuff
apt-get install -y software-properties-common
apt-get install -y git-core
apt-get install -y nsis
# install Oracle Java 8 JDK
# this setup is based on: http://www.webupd8.org/2014/03/how-to-install-oracle-java-8-in-debian.html
add-apt-repository -y ppa:webupd8team/java
apt-get update
apt-get install -y oracle-java8-installer
apt-get install -y oracle-java8-set-default
# install and configure PostgreSQL
apt-get install -y postgresql
echo "local all postgres peer" > /etc/postgresql/9.3/main/pg_hba.conf
echo "local all all peer" >> /etc/postgresql/9.3/main/pg_hba.conf
echo "host all all 127.0.0.1/32 trust" >> /etc/postgresql/9.3/main/pg_hba.conf
echo "host all all ::1/128 trust" >> /etc/postgresql/9.3/main/pg_hba.conf
# restart postgres to use new configs
/etc/init.d/postgresql restart
# install OpenNMS basic dependencies
apt-get install -y maven
apt-get install -y jicmp jicmp6
apt-get install -y jrrd
# clone opennms
mkdir -p ~/dev/opennms
git clone https://github.com/OpenNMS/opennms.git ~/dev/opennms
After this you should be able to build OpenNMS:
cd ~/dev/opennms
./clean.pl
./compile.pl -DskipTests
./assemble.pl -p dir
For more information on how to build OpenNMS from source, see the Install from Source wiki page.
After OpenNMS has been built successfully, follow the Running OpenNMS wiki page.
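As a quick check that the assembly works, the build can be initialized and started directly from the assembly output directory. The directory name and the commands below are only a sketch and may vary by version; the Running OpenNMS wiki page remains the authoritative reference.
cd ~/dev/opennms/target/opennms-*   # assembly output directory; the name depends on the version
./bin/runjava -s                    # locate a suitable JVM
./bin/install -dis                  # initialize and update the database
./bin/opennms start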
1.3. Tooling
We recommend the following toolset:
-
DB-Tool: DBeaver or Postgres Admin - pgAdmin
-
Graphing: yEd
-
Other: atom.io
1.4. Useful links
1.4.1. General
-
https://www.github.com/OpenNMS/opennms: The source code hosted on GitHub
-
http://wiki.opennms.org: Our wiki; the start page in particular is of interest and points you in the right direction.
-
http://issues.opennms.org: Our issue/bug tracker.
-
https://github.com/opennms-forge/vagrant-opennms-dev: A Vagrant box to set up a virtual machine for building OpenNMS
-
https://github.com/opennms-forge/vagrant-opennms: A Vagrant box to set up a virtual machine for running OpenNMS
2. Minion development
2.1. Introduction
This guide is intended to help developers get started with writing Minion-related features. It is not intended to be an exhaustive overview of the Minion architecture or feature set.
2.2. Container
This section details the customizations we make to the standard Karaf distribution for the Minion container.
2.2.1. Clean Start
We clear the cache on every start by setting karaf.clean.cache = true in order to ensure that only the features listed in featuresBoot (or installed by the karaf-extender) are installed.
2.2.2. Karaf Extender
The Karaf Extender was developed to make it easier to manage and extend the container using existing packaging tools. It allows packages to register Maven Repositories, Karaf Feature Repositories and Karaf Features to Boot by overlaying additional files, avoiding modifying any of the existing files.
Here’s an overview, for reference, of the relevant directories that are (currently) present on a default install of the opennms-minion package:
├── etc
│ └── featuresBoot.d
│ └── custom.boot
├── repositories
│ ├── .local
│ ├── core
│ │ ├── features.uris
│ │ └── features.boot
│ └── default
│ ├── features.uris
│ └── features.boot
└── system
When the karaf-extender feature is installed, it will:
-
Find all of the folders listed under $karaf.home/repositories that do not start with a '.' and sort these by name.
-
Gather the list of Karaf Feature Repository URIs from the features.uris files in the repositories.
-
Gather the list of Karaf Feature Names from the features.boot files in the repositories.
-
Gather the list of Karaf Feature Names from the files under $karaf.etc/featuresBoot.d that do not start with a '.' and sort these by name.
-
Register the Maven Repositories by updating the org.ops4j.pax.url.mvn.repositories key for the PID org.ops4j.pax.url.mvn.
-
Wait up to 30 seconds until all of the Karaf Feature URIs are resolvable (the Maven Repositories may take a few moments to update after the configuration changes).
-
Install the Karaf Feature Repository URIs.
-
Install the Karaf Features.
Features listed in the features.boot files of the Maven Repositories will take precedence over those listed in featuresBoot.d.
Any existing repository registered in org.ops4j.pax.url.mvn.repositories will be overwritten.
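As an illustration of this overlay mechanism, a package could register its own Maven repository and boot feature by shipping an additional directory next to core and default. The repository name, feature URI and install prefix below are hypothetical:
# /opt/minion is assumed to be the Minion install prefix
mkdir -p /opt/minion/repositories/my-feature
echo "mvn:org.example/my-feature/1.0.0/xml/features" > /opt/minion/repositories/my-feature/features.uris
echo "my-feature" > /opt/minion/repositories/my-feature/features.boot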
2.3. Packaging
This section describes the packages for Minion features and helps developers add new features to these packages.
We currently provide two different feature packages for Minion:
- opennms-minion-features-core
-
Core utilities and services required for connectivity with the OpenNMS controller
- opennms-minion-features-default
-
Minion-specific service extensions
Each package bundles all of the Karaf Feature Files and Maven Dependencies into a Maven Repository with additional metadata used by the KarafExtender.
2.3.1. Adding a new feature to the default feature package
-
Add the feature definition to container/features/src/main/resources/features-minion.xml.
-
Add the feature name to the features list configuration for the features-maven-plugin in features/minion/repository/pom.xml.
-
Optionally add the feature name to features/minion/repository/src/main/resources/features.boot if the feature should be automatically installed when the container is started.
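After completing these steps, one way to verify that the feature is visible is to start the container in place and query it from the Karaf shell; the feature name my-feature below is a placeholder:
cd features/minion && ./runInPlace.sh
# from within the Karaf shell:
# feature:list | grep my-feature
# feature:install my-feature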
2.4. Guidelines
This section describes a series of guidelines and best practices for developing Minion modules:
2.4.1. Security
-
Don’t store any credentials on disk; use the SecureCredentialVault instead.
2.5. Testing
This section describes how developers can test features on the Minion container.
2.5.1. Local Testing
You can compile, assemble, and spawn an interactive shell on the Minion container using:
cd features/minion && ./runInPlace.sh
2.5.2. System Tests
The runtime environment of the Minion container and features differs greatly from those provided by the unit and integration tests. For this reason, it is important to perform automated end-to-end testing of the features.
The system tests provide a framework which allows developers to instantiate a complete Docker-based Minion system using a single JUnit rule.
For further details, see the minion-system-tests project on Github.
3. Topology
3.1. Info Panel Items
This section is under development. The provided examples and code snippets may not fully work. However, they are conceptually correct and should point you in the right direction.
Each element in the Info Panel is defined by an InfoPanelItem object.
All available InfoPanelItem objects are sorted by their order, which allows the items to be arranged in a custom order.
After the elements are ordered, they are placed below the SearchBox and the Vertices in Focus list.
3.1.1. Programmatic
Items can be added to the Info Panel of the Topology UI by implementing the interface InfoPanelItemProvider and exposing the implementation via OSGi.
public class ExampleInfoPanelItemProvider implements InfoPanelItemProvider {
@Override
public Collection<? extends InfoPanelItem> getContributions(GraphContainer container) {
return Collections.singleton(
new DefaultInfoPanelItem() (1)
.withTitle("Static information") (2)
.withOrder(0) (3)
.withComponent(
new com.vaadin.ui.Label("I am a static component") (4)
)
);
}
}
1 | The default implementation of InfoPanelItem. You may implement InfoPanelItem directly if the default implementation is not sufficient. |
2 | The title of the InfoPanelItem. It is shown above the component. |
3 | The order. |
4 | A Vaadin component which actually describes the custom component. |
In order to show information based on a selected vertex or edge, one must extend the classes EdgeInfoPanelItemProvider or VertexInfoPanelItemProvider.
The following example shows a custom EdgeInfoPanelItemProvider.
public class ExampleEdgeInfoPanelItemProvider extends EdgeInfoPanelItemProvider {
@Override
protected boolean contributeTo(EdgeRef ref, GraphContainer graphContainer) { (1)
return "custom-namespace".equals(ref.getNamespace()); // only show if of certain namespace
}
@Override
protected InfoPanelItem createInfoPanelItem(EdgeRef ref, GraphContainer graphContainer) { (2)
return new DefaultInfoPanelItem()
.withTitle(ref.getLabel() + " Info")
.withOrder(0)
.withComponent(
new com.vaadin.ui.Label("Id: " + ref.getId() + ", Namespace: " + ref.getNamespace())
);
}
}
1 | Is invoked if one and only one edge is selected. It determines if the current edge should provide the InfoPanelItem created by createInfoPanelItem. |
2 | Is invoked if one and only one edge is selected. It creates the InfoPanelItem to show for the selected edge. |
Implementing the provided interfaces/classes is not enough to have the item show up.
It must also be exposed to the OSGi service registry via a blueprint.xml.
The following blueprint.xml snippet describes how to expose any custom InfoPanelItemProvider implementation to the OSGi service registry so that the Topology UI picks it up.
<service interface="org.opennms.features.topology.api.info.InfoPanelItemProvider"> (1)
<bean class="ExampleInfoPanelItemProvider" /> (2)
</service>
1 | The service definition must always point to InfoPanelItemProvider. |
2 | The bean implementing the defined interface. |
3.1.2. Scriptable
A more scriptable approach is available by simply dropping JinJava templates (with file extension .html) into $OPENNMS_HOME/etc/infopanel.
For more information on JinJava refer to https://github.com/HubSpot/jinjava.
The following example describes a very simple JinJava template which is always visible.
{% set visible = true %} (1)
{% set title = "Static information" %} (2)
{% set order = -700 %} (3)
This information is always visible (4)
1 | Makes this always visible |
2 | Defines the title |
3 | Info panel items are ordered at the end by default. Setting the order to -700 makes it very likely that this item is pinned to the top of the info panel. |
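To try the template, it can be written into the infopanel directory; $OPENNMS_HOME is assumed to be /opt/opennms here and the file name is arbitrary:
cat > /opt/opennms/etc/infopanel/static-information.html <<'EOF'
{% set visible = true %}
{% set title = "Static information" %}
{% set order = -700 %}
This information is always visible
EOF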
A template showing custom information may look like the following:
{% set visible = vertex != null && vertex.namespace == "custom" && vertex.customProperty is defined %} (1)
{% set title = "Custom Information" %}
<table width="100%" border="0">
<tr>
<td colspan="3">This information is only visible if a vertex with namespace "custom" is selected</td>
</tr>
<tr>
<td align="right" width="80">Custom Property</td>
<td width="14"></td>
<td align="left">{{ vertex.customProperty }}</td>
</tr>
</table>
1 | This template is only shown if a vertex is selected and the selected namespace is "custom". |
It is also possible to show performance data.
One can include resource graphs into the info panel by using the following HTML element:
<div class="graph-container" data-resource-id="RESOURCE_ID" data-graph-name="GRAPH_NAME"></div>
The optional attributes data-graph-start and data-graph-end can be used to specify the displayed time range in seconds since the epoch.
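For example, a graph container limited to a fixed 24-hour window could look as follows; the resource ID, graph name and timestamps are placeholders:
<div class="graph-container" data-resource-id="node[1].nodeSnmp[]" data-graph-name="netsnmp.hrSystemUptime" data-graph-start="1516230000" data-graph-end="1516316400"></div>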
{# Example template for a simple memory statistic provided by the netsnmp agent #}
{% set visible = node != null && node.sysObjectId == ".1.3.6.1.4.1.8072.3.2.10" %}
{% set order = 110 %}
{# Setting the title #}
{% set title = "System Memory" %}
{# Define resource Id to be used #}
{% set resourceId = "node[" + node.id + "].nodeSnmp[]" %}
{# Define attribute Id to be used #}
{% set attributeId = "hrSystemUptime" %}
{% set total = measurements.getLastValue(resourceId, "memTotalReal")/1000/1024 %}
{% set avail = measurements.getLastValue(resourceId, "memAvailReal")/1000/1024 %}
<table border="0" width="100%">
<tr>
<td width="80" align="right" valign="top">Total</td>
<td width="14"></td>
<td align="left" valign="top" colspan="2">
{{ total|round(2) }} GB(s)
</td>
</tr>
<tr>
<td width="80" align="right" valign="top">Used</td>
<td width="14"></td>
<td align="left" valign="top" colspan="2">
{{ (total-avail)|round(2) }} GB(s)
</td>
</tr>
<tr>
<td width="80" align="right" valign="top">Available</td>
<td width="14"></td>
<td align="left" valign="top" colspan="2">
{{ avail|round(2) }} GB(s)
</td>
</tr>
<tr>
<td width="80" align="right" valign="top">Usage</td>
<td width="14"></td>
<td align="left" valign="top">
<meter style="width:100%" min="0" max="{{ total }}" low="{{ 0.5*total }}" high="{{ 0.8*total }}" value="{{ total-avail }}" optimum="0"/>
</td>
<td width="1">
{{ ((total-avail)/total*100)|round(2) }}%
</td>
</tr>
</table>
{# Example template for the system uptime provided by the netsnmp agent #}
{% set visible = node != null && node.sysObjectId == ".1.3.6.1.4.1.8072.3.2.10" %}
{% set order = 100 %}
{# Setting the title #}
{% set title = "System Uptime" %}
{# Define resource Id to be used #}
{% set resourceId = "node[" + node.id + "].nodeSnmp[]" %}
{# Define attribute Id to be used #}
{% set attributeId = "hrSystemUptime" %}
<table border="0" width="100%">
<tr>
<td width="80" align="right" valign="top">getLastValue()</td>
<td width="14"></td>
<td align="left" valign="top">
{# Querying the last value via the getLastValue() method: #}
{% set last = measurements.getLastValue(resourceId, attributeId)/100.0/60.0/60.0/24.0 %}
{{ last|round(2) }} day(s)
</td>
</tr>
<tr>
<td width="80" align="right" valign="top">query()</td>
<td width="14"></td>
<td align="left" valign="top">
{# Querying the last value via the query() method. A custom function 'currentTimeMillis()' in
the namespace 'System' is used to get the timestamps for the query: #}
{% set end = System:currentTimeMillis() %}
{% set start = end - (15 * 60 * 1000) %}
{% set values = measurements.query(resourceId, attributeId, start, end, 300000, "AVERAGE") %}
{# Iterating over the values in reverse order and grab the first value which is not NaN #}
{% set last = "NaN" %}
{% for value in values|reverse %}
{%- if value != "NaN" && last == "NaN" %}
{{ (value/100.0/60.0/60.0/24.0)|round(2) }} day(s)
{% set last = value %}
{% endif %}
{%- endfor %}
</td>
</tr>
<tr>
<td width="80" align="right" valign="top">Graph</td>
<td width="14"></td>
<td align="left" valign="top">
{# We use the start and end variable here to construct the graph's Url: #}
<img src="/opennms/graph/graph.png?resourceId=node[{{ node.id }}].nodeSnmp[]&report=netsnmp.hrSystemUptime&start={{ start }}&end={{ end }}&width=170&height=30"/>
</td>
</tr>
</table>
3.2. GraphML
In OpenNMS Horizon the GraphMLTopologyProvider uses GraphML-formatted files to visualize graphs.
GraphML is a comprehensive and easy-to-use file format for graphs. It consists of a language core to describe the structural properties of a graph and a flexible extension mechanism to add application-specific data. […] Unlike many other file formats for graphs, GraphML does not use a custom syntax. Instead, it is based on XML and hence ideally suited as a common denominator for all kinds of services generating, archiving, or processing graphs.
OpenNMS Horizon does not support the full feature set of GraphML. The following features are not supported: Nested graphs, Hyperedges, Ports and Extensions. For more information about GraphML refer to the Official Documentation.
A basic graph definition using GraphML usually consists of the following GraphML elements:
-
Graph element to describe the graph
-
Key elements to define custom properties, which each element in the GraphML document can then set as data elements
-
Node and Edge elements
-
Data elements to define custom properties, which OpenNMS Horizon will then interpret.
A very minimalistic example is given below:
<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns
http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<!-- key section -->
<key id="label" for="all" attr.name="label" attr.type="string"></key>
<key id="namespace" for="graph" attr.name="namespace" attr.type="string"></key>
<!-- shows up in the menu -->
<data key="label">Minimalistic GraphML Topology Provider</data> (1)
<graph id="minicmalistic"> (2)
<data key="namespace">minimalistic</data> (3)
<node id="node1"/> (4)
<node id="node2"/>
<node id="node3"/>
<node id="node4"/>
</graph>
</graphml>
1 | The optional label of the menu entry. |
2 | The graph definition. |
3 | Each graph must have a namespace, otherwise OpenNMS Horizon refuses to load the graph. |
4 | Node definitions. |
3.2.1. Create/Update/Delete GraphML Topology
In order to create a GraphML Topology, a valid GraphML XML file must exist. Afterwards it is sent to the OpenNMS Horizon REST API to create it:
curl -X POST -H "Content-Type: application/xml" -u admin:admin -d@graph.xml 'http://localhost:8980/opennms/rest/graphml/topology-name'
The topology-name is a unique identifier for the Topology. If a label property is defined for the GraphML element, it is displayed in the Topology UI; otherwise the topology-name defined here is used as a fallback.
To delete an existing Topology, an HTTP DELETE request must be sent:
curl -X DELETE -u admin:admin 'http://localhost:8980/opennms/rest/graphml/topology-name'
There is no PUT method available. In order to update an existing GraphML Topology, one must first delete it and afterwards re-create it.
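An update therefore boils down to the two requests shown above, executed in sequence:
curl -X DELETE -u admin:admin 'http://localhost:8980/opennms/rest/graphml/topology-name'
curl -X POST -H "Content-Type: application/xml" -u admin:admin -d@graph.xml 'http://localhost:8980/opennms/rest/graphml/topology-name'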
Even if the HTTP request was successful, it does not mean that the Topology was actually loaded properly.
A successful HTTP request only states that the graph was received, persisted and is in a valid GraphML format.
However, the underlying GraphMLTopologyProvider may perform additional checks or encounter problems while parsing the file.
If the Topology does not show up, check the karaf.log for clues about what went wrong.
In addition, it may take a while before the Topology is actually selectable from the Topology UI.
3.2.2. Supported Attributes
A variety of GraphML attributes are supported and interpreted by OpenNMS Horizon while reading the GraphML file. The following table explains the supported attributes and the GraphML elements they may be used on.
The type of a GraphML-Attribute can be either boolean, int, long, float, double, or string. These types are defined like the corresponding types in the Java™ programming language.
Property | Required | For element | Type | Default | Description |
---|---|---|---|---|---|
namespace | yes | Graph | string | - | The namespace must be unique across all existing Topologies. |
description | no | Graph | string | - | A description, which is shown in the Info Panel. |
preferred-layout | no | Graph | string | | Defines a preferred layout. |
focus-strategy | no | Graph | string | | Defines a focus strategy. See Focus Strategies for more information. |
focus-ids | no | Graph | string | - | Refers to node ids in the graph. This is required if focus-strategy is SPECIFIC. |
semantic-zoom-level | no | Graph | int | | Defines the default SZL (semantic zoom level). |
vertex-status-provider | no | Graph | string | - | Defines which Vertex Status Provider should be used, e.g. default, script or propagate. |
iconKey | no | Node | string | | Defines the icon. See Icons for more information. |
label | no | Graph, Node | string | - | Defines a custom label. If not defined, the id is used instead. |
nodeID | no | Node | int | - | Allows referencing the Vertex to an OpenNMS node. |
foreignSource | no | Node | string | - | Allows referencing the Vertex to an OpenNMS node identified by foreign source and foreign id. Can only be used in combination with foreignID. |
foreignID | no | Node | string | - | Allows referencing the Vertex to an OpenNMS node identified by foreign source and foreign id. Can only be used in combination with foreignSource. |
tooltipText | no | Node, Edge | string | - | Defines a custom tooltip. If not defined, the label is used instead. |
level | no | Node | int | | Sets the level of the Vertex which is used by certain layout algorithms, e.g. the Hierarchy Layout. |
edge-path-offset | no | Graph, Node | int | | Controls the spacing between the paths drawn for the edges when there are multiple edges connecting two vertices. |
breadcrumb-strategy | no | GraphML | string | | Defines the breadcrumb strategy to use. See Breadcrumbs for more information. |
3.2.3. Focus Strategies
A Focus Strategy defines which Vertices should be added to focus when selecting the Topology.
The following strategies are available:
-
EMPTY No Vertex is added to focus.
-
ALL All Vertices are added to focus.
-
FIRST The first Vertex is added to focus.
-
SPECIFIC Only Vertices whose id matches the graph’s focus-ids property are added to focus.
3.2.4. Icons
With the GraphMLTopologyProvider it is not possible to change the icon from the Topology UI. Instead, if a custom icon should be used, each node must contain an iconKey property referencing an SVG element.
3.2.5. Vertex Status Provider
The Vertex Status Provider calculates the status of the Vertex.
There are multiple implementations available which can be configured for each graph: default, script and propagate.
If none is specified, there is no status provided at all.
Default Vertex Status Provider
The default status provider calculates the status based on the worst unacknowledged alarm associated with the Vertex’s node.
In order to have a status calculated, an OpenNMS Horizon node must be associated with the Vertex.
This can be achieved by setting the GraphML attribute nodeID on the GraphML node accordingly.
Script Vertex Status Provider
The script status provider uses scripts similar to the Edge Status Provider.
Just place Groovy scripts (with file extension .groovy) in the directory $OPENNMS_HOME/etc/graphml-vertex-status.
All of the scripts will be evaluated and the most severe status will be used for the vertex in the topology’s visualization.
If a script shouldn’t contribute any status to a vertex, just return null.
Propagate Vertex Status Provider
The propagate status provider follows all links from a node to its connected nodes.
It uses the status of these nodes to calculate the status by determining the worst one.
3.2.6. Edge Status Provider
It is also possible to compute a status for each edge in a given graph.
Just place Groovy scripts (with file extension .groovy) in the directory $OPENNMS_HOME/etc/graphml-edge-status.
All of the scripts will be evaluated and the most severe status will be used for the edge in the topology’s visualization.
The following simple Groovy script example will apply a different style and severity if the edge’s associated source node is down.
import org.opennms.netmgt.model.OnmsSeverity;
import org.opennms.features.topology.plugins.topo.graphml.GraphMLEdgeStatus;
if ( sourceNode != null && sourceNode.isDown() ) {
return new GraphMLEdgeStatus(OnmsSeverity.WARNING, [ 'stroke-dasharray' : '5,5', 'stroke' : 'yellow', 'stroke-width' : '6' ]);
} else {
return new GraphMLEdgeStatus(OnmsSeverity.NORMAL, []);
}
If a script shouldn’t contribute any status to an edge, just return null.
3.2.7. Layers
The GraphMLTopologyProvider can handle GraphML files with multiple graphs. Each Graph is represented as a Layer in the Topology UI. If a vertex from one graph has an edge pointing to another graph, one can navigate to that layer.
<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns
http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<!-- Key section -->
<key id="label" for="graphml" attr.name="label" attr.type="string"></key>
<key id="label" for="graph" attr.name="label" attr.type="string"></key>
<key id="label" for="node" attr.name="label" attr.type="string"></key>
<key id="description" for="graph" attr.name="description" attr.type="string"></key>
<key id="namespace" for="graph" attr.name="namespace" attr.type="string"></key>
<key id="preferred-layout" for="graph" attr.name="preferred-layout" attr.type="string"></key>
<key id="focus-strategy" for="graph" attr.name="focus-strategy" attr.type="string"></key>
<key id="focus-ids" for="graph" attr.name="focus-ids" attr.type="string"></key>
<key id="semantic-zoom-level" for="graph" attr.name="semantic-zoom-level" attr.type="int"/>
<!-- Label for Topology Selection menu -->
<data key="label">Layer Example</data>
<graph id="regions">
<data key="namespace">acme:regions</data>
<data key="label">Regions</data>
<data key="description">The Regions Layer.</data>
<data key="preferred-layout">Circle Layout</data>
<data key="focus-strategy">ALL</data>
<node id="north">
<data key="label">North</data>
</node>
<node id="west">
<data key="label">West</data>
</node>
<node id="south">
<data key="label">South</data>
</node>
<node id="east">
<data key="label">East</data>
</node>
</graph>
<graph id="markets">
<data key="namespace">acme:markets</data>
<data key="description">The Markets Layer.</data>
<data key="label">Markets</data>
<data key="description">The Markets Layer</data>
<data key="semantic-zoom-level">1</data>
<data key="focus-strategy">SPECIFIC</data>
<data key="focus-ids">north.2</data>
<node id="north.1">
<data key="label">North 1</data>
</node>
<node id="north.2">
<data key="label">North 2</data>
</node>
<node id="north.3">
<data key="label">North 3</data>
</node>
<node id="north.4">
<data key="label">North 4</data>
</node>
<node id="west.1">
<data key="label">West 1</data>
</node>
<node id="west.2">
<data key="label">West 2</data>
</node>
<node id="west.3">
<data key="label">West 3</data>
</node>
<node id="west.4">
<data key="label">West 4</data>
</node>
<node id="south.1">
<data key="label">South 1</data>
</node>
<node id="south.2">
<data key="label">South 2</data>
</node>
<node id="south.3">
<data key="label">South 3</data>
</node>
<node id="south.4">
<data key="label">South 4</data>
</node>
<node id="east.1">
<data key="label">East 1</data>
</node>
<node id="east.2">
<data key="label">East 2</data>
</node>
<node id="east.3">
<data key="label">East 3</data>
</node>
<node id="east.4">
<data key="label">East 4</data>
</node>
<!-- Edges in this layer -->
<edge id="north.1_north.2" source="north.1" target="north.2"/>
<edge id="north.2_north.3" source="north.2" target="north.3"/>
<edge id="north.3_north.4" source="north.3" target="north.4"/>
<edge id="east.1_east.2" source="east.1" target="east.2"/>
<edge id="east.2_east.3" source="east.2" target="east.3"/>
<edge id="east.3_east.4" source="east.3" target="east.4"/>
<edge id="south.1_south.2" source="south.1" target="south.2"/>
<edge id="south.2_south.3" source="south.2" target="south.3"/>
<edge id="south.3_south.4" source="south.3" target="south.4"/>
<edge id="north.1_north.2" source="north.1" target="north.2"/>
<edge id="north.2_north.3" source="north.2" target="north.3"/>
<edge id="north.3_north.4" source="north.3" target="north.4"/>
<!-- Edges to different layers -->
<edge id="west_north.1" source="north" target="north.1"/>
<edge id="north_north.2" source="north" target="north.2"/>
<edge id="north_north.3" source="north" target="north.3"/>
<edge id="north_north.4" source="north" target="north.4"/>
<edge id="south_south.1" source="south" target="south.1"/>
<edge id="south_south.2" source="south" target="south.2"/>
<edge id="south_south.3" source="south" target="south.3"/>
<edge id="south_south.4" source="south" target="south.4"/>
<edge id="east_east.1" source="east" target="east.1"/>
<edge id="east_east.2" source="east" target="east.2"/>
<edge id="east_east.3" source="east" target="east.3"/>
<edge id="east_east.4" source="east" target="east.4"/>
<edge id="west_west.1" source="west" target="west.1"/>
<edge id="west_west.2" source="west" target="west.2"/>
<edge id="west_west.3" source="west" target="west.3"/>
<edge id="west_west.4" source="west" target="west.4"/>
</graph>
</graphml>
3.2.8. Breadcrumbs
When multiple Layers are used it is possible to navigate between them (the navigate to option in the vertex’s context menu).
To give the user some orientation, breadcrumbs can be enabled with the breadcrumb-strategy property.
The following strategies are supported:
-
NONE No breadcrumbs are shown.
-
SHORTEST_PATH_TO_ROOT generates breadcrumbs from all visible vertices to the root layer (TopologyProvider). The algorithm assumes a hierarchical graph. Be aware that all vertices MUST share the same root layer, otherwise the algorithm to determine the path to root does not work.
The following figure visualizes a GraphML document defining multiple layers (see below for the GraphML definition).
From the given example, the user can select the Breadcrumb Example Topology Provider from the menu.
The user can switch between Layer 1, Layer 2 and Layer 3.
In addition, for each vertex which has connections to another layer, the user can select the navigate to option from the context menu of that vertex to navigate to the corresponding layer.
The user can also search for all vertices and add them to focus.
The following behaviour is implemented:
-
If a user navigates from one vertex to a vertex in another layer, the view switches to that layer and all vertices the source vertex pointed to are added to focus. The breadcrumb is <parent layer name> > <source vertex>. For example, if a user navigates from Layer1:A2 to Layer2:B1, the view switches to Layer 2 and adds B1 and B2 to focus. In addition, Layer 1 > A2 is shown as the breadcrumb.
-
If a user directly switches to another layer, the default focus strategy is applied, which may result in multiple vertices with no unique parent. The calculated breadcrumb is <parent layer name> > Multiple <target layer name>. For example, if a user switches to Layer 3, all vertices of that layer are added to focus (focus-strategy=ALL). No unique path to root is found, so the following breadcrumb is shown instead: Layer 1 > Multiple Layer 1 > Multiple Layer 2.
-
If a user adds a vertex to focus which is not in the currently selected layer, the view switches to that layer and only the "new" vertex is added to focus. The generated breadcrumb shows the path to root through all layers. For example, if the user adds C3 to focus and the current layer is Layer 1, then the generated breadcrumb is: Layer 1 > A1 > B3.
-
Only elements between layers are shown in the breadcrumb. Connections within the same layer are ignored. For example, if a user adds C5 to focus, the generated breadcrumb is: Layer 1 > A2 > B2.
The following GraphML file defines the graph shown above. Be aware that the root vertex shown above is generated to help calculate the path to root. It must not be defined in the GraphML document.
<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns
http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd">
<key id="breadcrumb-strategy" for="graphml" attr.name="breadcrumb-strategy" attr.type="string"></key>
<key id="label" for="all" attr.name="label" attr.type="string"></key>
<key id="description" for="graph" attr.name="description" attr.type="string"></key>
<key id="namespace" for="graph" attr.name="namespace" attr.type="string"></key>
<key id="focus-strategy" for="graph" attr.name="focus-strategy" attr.type="string"></key>
<key id="focus-ids" for="graph" attr.name="focus-ids" attr.type="string"></key>
<key id="preferred-layout" for="graph" attr.name="preferred-layout" attr.type="string"></key>
<key id="semantic-zoom-level" for="graph" attr.name="semantic-zoom-level" attr.type="int"/>
<data key="label">Breadcrumb Example</data>
<data key="breadcrumb-strategy">SHORTEST_PATH_TO_ROOT</data>
<graph id="L1">
<data key="label">Layer 1</data>
<data key="namespace">acme:layer1</data>
<data key="focus-strategy">ALL</data>
<data key="preferred-layout">Circle Layout</data>
<node id="a1">
<data key="label">A1</data>
</node>
<node id="a2">
<data key="label">A2</data>
</node>
<edge id="a1_b3" source="a1" target="b3"/>
<edge id="a1_b4" source="a1" target="b4"/>
<edge id="a2_b1" source="a2" target="b1"/>
<edge id="a2_b2" source="a2" target="b2"/>
</graph>
<graph id="L2">
<data key="label">Layer 2</data>
<data key="focus-strategy">ALL</data>
<data key="namespace">acme:layer2</data>
<data key="preferred-layout">Circle Layout</data>
<data key="semantic-zoom-level">0</data>
<node id="b1">
<data key="label">B1</data>
</node>
<node id="b2">
<data key="label">B2</data>
</node>
<node id="b3">
<data key="label">B3</data>
</node>
<node id="b4">
<data key="label">B4</data>
</node>
<edge id="b1_c2" source="b1" target="c2"/>
<edge id="b2_c1" source="b2" target="c1"/>
<edge id="b3_c3" source="b3" target="c3"/>
</graph>
<graph id="Layer 3">
<data key="label">Layer 3</data>
<data key="focus-strategy">ALL</data>
<data key="description">Layer 3</data>
<data key="namespace">acme:layer3</data>
<data key="preferred-layout">Grid Layout</data>
<data key="semantic-zoom-level">1</data>
<node id="c1">
<data key="label">C1</data>
</node>
<node id="c2">
<data key="label">C2</data>
</node>
<node id="c3">
<data key="label">C3</data>
</node>
<node id="c4">
<data key="label">C4</data>
</node>
<node id="c5">
<data key="label">C5</data>
</node>
<node id="c6">
<data key="label">C6</data>
</node>
<edge id="c1_c4" source="c1" target="c4"/>
<edge id="c1_c5" source="c1" target="c5"/>
<edge id="c4_c5" source="c4" target="c5"/>
</graph>
</graphml>
4. CORS Support
4.1. Why do I need CORS support?
By default, many browsers implement a same-origin policy which prevents making requests to a resource on an origin that’s different from the source origin.
For example, a request originating from a page served from http://www.opennms.org to a resource on http://www.adventuresinoss.com would be considered a cross origin request.
CORS (Cross Origin Resource Sharing) is a standard mechanism used to enable cross origin requests.
For further details, see:
4.2. How can I enable CORS support?
CORS support for the REST interface (or any other part of the Web UI) can be enabled as follows:
-
Open '$OPENNMS_HOME/jetty-webapps/opennms/WEB-INF/web.xml' for editing.
-
Apply the CORS filter to the '/rest/' path by removing the comments around the <filter-mapping> definition. The result should look like:
<!-- Uncomment this to enable CORS support -->
<filter-mapping>
    <filter-name>CORS Filter</filter-name>
    <url-pattern>/rest/*</url-pattern>
</filter-mapping>
-
Restart OpenNMS Horizon
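Once CORS support is enabled and OpenNMS Horizon has been restarted, a quick way to check the configuration is to send a request with an Origin header and look for an Access-Control-Allow-Origin response header; the origin value below is arbitrary:
curl -s -D - -o /dev/null -u admin:admin -H "Origin: http://www.opennms.org" http://localhost:8980/opennms/rest/alarms/count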
4.3. How can I configure CORS support?
CORS support is provided by the org.ebaysf.web.cors.CORSFilter servlet filter.
Parameters can be configured by modifying the filter definition in the 'web.xml' file referenced above.
By default, the allowed origins parameter is set to '*'.
The complete list of supported parameters is available from:
5. ReST API
A RESTful interface is a web service conforming to the REST architectural style as described in the book RESTful Web Services. This page describes the RESTful interface of OpenNMS Horizon.
5.1. ReST URL
The base URL for ReST calls is: http://opennmsserver:8980/opennms/rest/
For instance, http://localhost:8980/opennms/rest/alarms/ will give you the current alarms in the system.
5.2. Authentication
Use HTTP Basic authentication to provide a valid username and password. By default you will not receive a challenge, so you must configure your ReST client library to send basic authentication proactively.
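With curl, for example, the credentials are sent preemptively via the -u option:
curl -u admin:admin http://localhost:8980/opennms/rest/alarms/count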
5.3. Data format
Jersey allows ReST calls to be made using either XML or JSON.
By default a request to the API is returned in XML. XML is delivered without namespaces. Please note: if a namespace is added manually in order to use an XML tool (like xmllint) to validate against the XSD, it won’t be preserved when OpenNMS updates that file. The same applies to comments.
To get JSON-encoded responses, send the following header with the request: Accept: application/json.
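For example, the following request returns the alarm list as JSON instead of XML:
curl -u admin:admin -H "Accept: application/json" 'http://localhost:8980/opennms/rest/alarms?limit=5'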
5.4. Standard Parameters
The following are standard parameters which are available on most resources (noted below).
Parameter | Description |
---|---|
limit | Integer, limiting the number of results. This is particularly handy on events and notifications, where an accidental call with no limit could result in many thousands of results being returned, killing either the client or the server. If set to 0, no limit is applied. |
offset | Integer, the numeric offset into the result set from which results should start being returned. E.g., if there are 100 result entries, offset is 15, and limit is 10, then entries 15-24 will be returned. Used for pagination. |
Filtering: All properties of the entity being accessed can be specified as parameters in either the URL (for GET) or the form value (for PUT and POST). If so, the value will be used to add a filter to the result. By default, the operation is equality, unless the comparator parameter is set to one of the following values:
Comparator | Description |
---|---|
eq | Checks for equality |
ne | Checks for non-equality |
ilike | Case-insensitive wildcarding (% is the wildcard) |
like | Case-sensitive wildcarding (% is the wildcard) |
gt | Greater than |
lt | Less than |
ge | Greater than or equal |
le | Less than or equal |
If the value null is passed for a given property, then the obvious operation will occur (the comparator will be ignored for that property). notnull is handled similarly.
-
Ordering: If the parameter orderBy is specified, results will be ordered by the named property. The default is ascending, unless the order parameter is set to desc (any other value will default to ascending).
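As a combined example, the following request pages through events ordered by severity; the property name eventSeverity is an assumption and may differ per entity:
curl -u admin:admin 'http://localhost:8980/opennms/rest/events?limit=20&offset=40&orderBy=eventSeverity&order=desc'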
5.5. Standard filter examples
Take /events as an example.
Resource | Description |
---|---|
|
would return the first 10 events with the rtc subscribe UEI, (10 being the default limit for events) |
|
would return all the rtc subscribe events (potentially quite a few) |
|
would return the first 10 events with an id greater than 100 |
|
would return the first 10 events that have a non-null Ack time (i.e. those that have been acknowledged) |
|
would return the first 20 events that have a non-null Ack time and an id greater than 100. Note that the notnull value causes the comparator to be ignored for eventAckTime |
|
would return the first 20 events that were acknowledged after 28th July 2008 at 4:41am (+12:00), and an id greater than 100. Note that the same comparator applies to both property comparisons. Also note that you must URL encode the plus sign when using GET. |
|
would return the 10 latest events inserted (probably, unless you’ve been messing with the id’s) |
|
would return the first 10 events associated with some node in location 'MINION' |
5.6. HTTP Return Codes
The following apply for OpenNMS Horizon 18 and newer.
-
DELETE requests are going to return a 202 (ACCEPTED) if they are performed asynchronously; otherwise they return a 204 (NO_CONTENT) on success.
-
All the PUT requests are going to return a 204 (NO_CONTENT) on success.
-
All the POST requests that can either add or update an entity are going to return a 204 (NO_CONTENT) on success.
-
All the POST requests associated with resource addition are going to return a 201 (CREATED) on success.
-
All the POST requests where it is required to return an object will return a 200 (OK).
-
All requests except GET for the Requisitions end-point and the Foreign Sources Definitions end-point will return 202 (ACCEPTED). This is because these requests are executed asynchronously and there is no way to know the status of the execution, or wait until the processing is done.
-
If a resource is not modified during a PUT request, a NOT_MODIFIED will be returned. A NO_CONTENT will be returned only on a success operation.
-
All GET requests are going to return 200 (OK) on success.
-
All GET requests are going to return 404 (NOT_FOUND) when a single resource doesn’t exist; but will return 400 (BAD_REQUEST), if an intermediate resource doesn’t exist. For example, if a specific IP doesn’t exist on a valid node, return 404. But, if the IP is valid and the node is not valid, because the node is an intermediate resource, a 400 will be returned.
-
If something unexpected is received from the Service/DAO layer when processing any HTTP request, such as an exception, a 500 (INTERNAL_SERVER_ERROR) will be returned.
-
Any problem related to the incoming parameters, such as validation failures, will generate a 400 (BAD_REQUEST).
5.7. Identifying Resources
Some endpoints deal in resources, which are identified by Resource IDs. Since every resource is ultimately parented under some node, identifying the node which contains a resource is the first step in constructing a resource ID. Two styles are available for identifying the node in a resource ID:
Style | Description | Example |
---|---|---|
node[ID] | Identifies a node by its database ID, which is always an integer | node[42] |
node[FS:FID] | Identifies a node by its foreign-source name and foreign-ID, joined by a single colon | node[Servers:115da833-0957-4471-b496-a731928c27dd] |
The node identifier is followed by a period, then a resource-type name and instance name. The instance name’s characteristics may vary from one resource-type to the next. A few examples:
Value | Description |
---|---|
nodeSnmp[] | Node-level (scalar) performance data for the node in question. This type is the only one where the instance identifier is empty. |
interfaceSnmp[eth0-04013f75f101] | A layer-two interface as represented by a row in the SNMP ifTable. |
dskIndex[_root_fs] | The root filesystem of a node running the Net-SNMP management agent. |
Putting it all together, here are a few well-formed resource IDs:
-
node[1].nodeSnmp[]
-
node[42].interfaceSnmp[eth0-04013f75f101]
-
node[Servers:115da833-0957-4471-b496-a731928c27dd].dskIndex[_root_fs]
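Resource IDs can be used, for example, against the resources endpoint (assumed here to live under /rest/resources); note that the square brackets must be URL-encoded and the node ID is illustrative:
curl -u admin:admin 'http://localhost:8980/opennms/rest/resources/node%5B1%5D.nodeSnmp%5B%5D'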
5.8. Expose ReST services via OSGi
In order to expose a ReST service via OSGi the following steps must be followed:
-
Define an interface, containing java jax-rs annotations
-
Define a class, implementing that interface
-
Create an OSGi bundle which exports a service with the interface from above
5.8.1. Define a ReST interface
First, a public interface must be created which must contain jax-rs annotations.
@Path("/datachoices") (1)
public interface DataChoiceRestService {
@POST (2)
void updateCollectUsageStatisticFlag(@Context HttpServletRequest request, @QueryParam("action") String action);
@GET
@Produces(value={MediaType.APPLICATION_JSON})
UsageStatisticsReportDTO getUsageStatistics();
}
1 | Each ReST interface must either have a @Path or @Provider annotation.
Otherwise it is not considered a ReST service. |
2 | Use jax-rs annotations, such as @POST, @GET, @PUT, @Path, etc. to define the ReST service. |
5.8.2. Implement a ReST interface
A class must implement the ReST interface.
The class may or may not repeat the jax-rs annotations from the interface. This is purely for readability. Changing or adding different jax-rs annotations on the class won’t have any effect.
public class DataChoiceRestServiceImpl implements DataChoiceRestService {
@Override
public void updateCollectUsageStatisticFlag(HttpServletRequest request, String action) {
// do something
}
@Override
public UsageStatisticsReportDTO getUsageStatistics() {
return null;
}
}
5.8.3. Export the ReST service
Finally, the ReST service must be exported via the bundle context. This can be achieved using either an Activator or the blueprint mechanism.
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.osgi.org/xmlns/blueprint/v1.0.0
http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
">
<bean id="dataChoiceRestService" class="org.opennms.features.datachoices.web.internal.DataChoiceRestServiceImpl" /> (1)
<service interface="org.opennms.features.datachoices.web.DataChoiceRestService" ref="dataChoiceRestService" > (2)
<service-properties>
<entry key="application-path" value="/rest" /> (3)
</service-properties>
</service>
</blueprint>
1 | Create the ReST implementation class |
2 | Export the ReST service |
3 | Define where the ReST service will be exported to, e.g. /rest or /api/v2; completely different paths can be used as well. If not defined, /services is used. |
For a full working example refer to the datachoices feature.
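With the blueprint above, the service from the earlier example should be reachable under the configured application path plus the @Path value of the interface, for example:
curl -u admin:admin http://localhost:8980/opennms/rest/datachoices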
5.9. Currently Implemented Interfaces
5.9.1. Acknowledgements
The default offset is 0 and the default limit is 10 results. To get all results, use limit=0 as a parameter on the URL (i.e., GET /acks?limit=0).
GETs (Reading Data)
Resource | Description |
---|---|
|
Get a list of acknowledgements. |
|
Get the number of acknowledgements. (Returns plaintext, rather than XML or JSON.) |
|
Get the acknowledgement specified by the given ID. |
POSTs (Setting Data)
Resource | Description |
---|---|
|
Creates or modifies an acknowledgement for the given alarm ID or notification ID. To affect an alarm, set an |
Usage examples with curl
curl -u 'admin:admin' -X POST -d notifId=3 -d action=ack http://localhost:8980/opennms/rest/acks
curl -u 'admin:admin' -X POST -d alarmId=42 -d action=esc http://localhost:8980/opennms/rest/acks
5.9.2. Alarm Statistics
It is possible to get some basic statistics on alarms, including the number of acknowledged alarms, total alarms, and the newest and oldest of acknowledged and unacknowledged alarms.
GETs (Reading Data)
Resource | Description |
---|---|
|
Returns statistics related to alarms. Accepts the same Hibernate parameters that you can pass to the |
|
Returns the statistics related to alarms, one per severity. You can optionally pass a list of severities to the |
5.9.3. Alarms
The default offset is 0 and the default limit is 10 results. To get all results, use limit=0 as a parameter on the URL (i.e., GET /alarms?limit=0).
GETs (Reading Data)
Resource | Description |
---|---|
|
Get a list of alarms. |
|
Get the number of alarms. (Returns plaintext, rather than XML or JSON.) |
|
Get the alarms specified by the given ID. |
Note that you can also query by severity, like so:
Resource | Description |
---|---|
|
Get the alarms with a severity greater than or equal to MINOR. |
PUTs (Modifying Data)
PUT requires form data using application/x-www-form-urlencoded as a Content-Type.
Resource | Description |
---|---|
|
Acknowledges (or unacknowledges) an alarm. |
|
Acknowledges (or unacknowledges) alarms matching the additional query parameters. eg, |
New in OpenNMS 1.11.0
In OpenNMS 1.11.0, some additional features are supported in the alarm ack API:
Resource | Description |
---|---|
|
Clears an alarm. |
|
Escalates an alarm. eg, NORMAL → MINOR, MAJOR → CRITICAL, etc. |
|
Clears alarms matching the additional query parameters. |
|
Escalates alarms matching the additional query parameters. |
Additionally, when acknowledging alarms (ack=true) you can now specify an ackUser parameter. You will only be allowed to ack as a different user if you are PUTting as an authenticated user who is in the admin role.
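For example, to acknowledge an alarm on behalf of another user (the alarm ID and user name below are placeholders):
curl -u admin:admin -X PUT -d "ack=true&ackUser=user1" http://localhost:8980/opennms/rest/alarms/42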
5.9.4. Events
GETs (Reading Data)
Resource | Description |
---|---|
|
Get a list of events. The default for offset is 0, and the default for limit is 10. To get all results, use limit=0 as a parameter on the URL (ie, |
|
Get the number of events. (Returns plaintext, rather than XML or JSON.) |
|
Get the event specified by the given ID. |
PUTs (Modifying Data)
PUT requires form data using application/x-www-form-urlencoded as a Content-Type.
Resource | Description |
---|---|
|
Acknowledges (or unacknowledges) an event. |
|
Acknowledges (or unacknowledges) the matching events. |
POSTs (Adding Data)
POST requires XML (application/xml) or JSON (application/json) as its Content-Type.
See ${OPENNMS_HOME}/share/xsds/event.xsd for the reference schema.
Resource | Description |
---|---|
|
Publish an event on the event bus. |
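A minimal sketch of publishing an event follows; the UEI is a placeholder and the XML is intentionally stripped down, so consult the event.xsd referenced above for the full schema:
curl -u admin:admin -X POST -H "Content-Type: application/xml" -d '<event xmlns="http://xmlns.opennms.org/xsd/event"><uei>uei.opennms.org/example/testEvent</uei><source>rest-example</source></event>' http://localhost:8980/opennms/rest/events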
5.9.5. Categories
GETs (Reading Data)
Resource | Description |
---|---|
|
Get all configured categories. |
|
Get the category specified by the given name. |
|
Get the category specified by the given name for the given node (similar to |
|
Get the categories for a given node (similar to |
|
Get the categories for a given user group (similar to |
POSTs (Adding Data)
Resource | Description |
---|---|
|
Adds a new category. |
PUTs (Modifying Data)
Resource | Description |
---|---|
|
Update the specified category |
|
Modify the category with the given node ID and name (similar to |
|
Add the given category to the given user group (similar to |
DELETEs (Removing Data)
Resource | Description |
---|---|
|
Delete the specified category |
|
Remove the given category from the given node (similar to |
|
Remove the given category from the given user group (similar to |
5.9.6. Flow API
The Flow API can be used to retrieve summary statistics and time series data derived from persisted flows.
Unless specified otherwise, all units of time are expressed in milliseconds.
GETs (Reading Data)
Resource | Description |
---|---|
|
Retrieve the number of flows available |
|
Retrieve basic information for the exporter nodes that have flows available |
|
Retrieve detailed information about a specific exporter node |
|
Retrieve traffic summary statistics for the top N applications |
|
Retrieve time series metrics for the top N applications |
|
Retrieve traffic summary statistics for the top N conversations |
|
Retrieve time series metrics for the top N conversations |
All of the endpoints support the following query string parameters to help filter the results:
The given filters are combined using a logical AND. There is no support for OR logic, or combinations thereof.
name | default | comment |
---|---|---|
start | -14400000 | Timestamp in milliseconds. If > 0, the timestamp is relative to the UNIX epoch (January 1st 1970 00:00:00 AM). If < 0, the timestamp is relative to the end parameter. |
end | 0 | Timestamp in milliseconds. If <= 0, the effective value will be the current timestamp. |
ifIndex | (none) | Filter for flows that came in through the given SNMP interface. |
exporterNode | (none) | Filter for flows that were exported by the given node. Supports either a node id (integer), e.g. 1, or a foreign source and foreign id lookup, e.g. FS:FID. |
The exporters endpoints do not support any parameters.
The applications endpoints also support:
name | default | comment |
---|---|---|
N |
10 |
Number of top entries (determined by total bytes transferred) to return |
includeOther |
false |
When set to |
The applications and conversations endpoints also support:
name | default | comment |
---|---|---|
N |
10 |
Number of top entries (determined by total bytes transferred) to return |
The series endpoints also support:
name | default | comment |
---|---|---|
step |
300000 |
Requested time interval between rows. |
Examples
curl -u admin:admin http://localhost:8980/opennms/rest/flows/count
915
curl -u admin:admin http://localhost:8980/opennms/rest/flows/applications
{
"start": 1513788044417,
"end": 1513802444417,
"headers": ["Application", "Bytes In", "Bytes Out"],
"rows": [
["https", 48789, 136626],
["http", 12430, 5265]
]
}
curl -u admin:admin http://localhost:8980/opennms/rest/flows/conversations
{
"start": 1513788228224,
"end": 1513802628224,
"headers": ["Location", "Protocol", "Source IP", "Source Port", "Dest. IP", "Dest. Port", "Bytes In", "Bytes Out"],
"rows": [
["Default", 17, "10.0.2.15", 33816, "172.217.0.66", 443, 12166, 117297],
["Default", 17, "10.0.2.15", 32966, "172.217.0.70", 443, 5042, 107542],
["Default", 17, "10.0.2.15", 54087, "172.217.0.67", 443, 55393, 5781],
["Default", 17, "10.0.2.15", 58046, "172.217.0.70", 443, 4284, 46986],
["Default", 6, "10.0.2.15", 39300, "69.172.216.58", 80, 969, 48178],
["Default", 17, "10.0.2.15", 48691, "64.233.176.154", 443, 8187, 39847],
["Default", 17, "10.0.2.15", 39933, "172.217.0.65", 443, 1158, 33913],
["Default", 17, "10.0.2.15", 60751, "216.58.218.4", 443, 5504, 24957],
["Default", 17, "10.0.2.15", 51972, "172.217.0.65", 443, 2666, 22556],
["Default", 6, "10.0.2.15", 46644, "31.13.65.7", 443, 459, 16952]
]
}
curl -u admin:admin 'http://localhost:8980/opennms/rest/flows/applications/series?N=3&includeOther=true&step=3600000'
{
"start": 1516292071742,
"end": 1516306471742,
"columns": [
{
"label": "domain",
"ingress": true
},
{
"label": "https",
"ingress": true
},
{
"label": "http",
"ingress": true
},
{
"label": "Other",
"ingress": true
}
],
"timestamps": [
1516291200000,
1516294800000,
1516298400000
],
"values": [
[9725, 12962, 9725],
[70665, 125044, 70585],
[10937,13141,10929],
[1976,2508,2615]
]
}
curl -u admin:admin 'http://localhost:8980/opennms/rest/flows/conversations/series?N=3&step=3600000'
{
"start": 1516292150407,
"end": 1516306550407,
"columns": [
{
"label": "10.0.2.15:55056 <-> 152.19.134.142:443",
"ingress": false
},
{
"label": "10.0.2.15:55056 <-> 152.19.134.142:443",
"ingress": true
},
{
"label": "10.0.2.15:55058 <-> 152.19.134.142:443",
"ingress": false
},
{
"label": "10.0.2.15:55058 <-> 152.19.134.142:443",
"ingress": true
},
{
"label": "10.0.2.2:61470 <-> 10.0.2.15:8980",
"ingress": false
},
{
"label": "10.0.2.2:61470 <-> 10.0.2.15:8980",
"ingress": true
}
],
"timestamps": [
1516294800000,
1516298400000
],
"values": [
[17116,"NaN"],
[1426,"NaN"],
[20395,"NaN",
[1455,"NaN"],
["NaN",5917],
["NaN",2739]
]
}
5.9.7. Flow Classification API
The Flow Classification API can be used to update, create or delete flow classification rules.
If not otherwise specified, the Content-Type of the response is application/json.
GETs (Reading Data)
Resource | Description |
---|---|
|
Retrieve a list of all enabled rules.
The request is limited to |
|
Retrieve the rule identified by |
|
Retrieve all existing groups.
The request is limited to |
|
Retrieve the group identified by |
|
Retrieve all supported tcp protocols. |
The /classifications endpoint supports the following URL parameters:
The given filters are combined using a logical AND. There is no support for OR logic, or combinations thereof.
name | default | comment |
---|---|---|
groupFilter | (none) | The group to filter the rules by. Should be the |
query | (none) | |
Examples
curl -X GET -u admin:admin http://localhost:8980/opennms/rest/classifications
[
{
"group": {
"description": "Classification rules defined by the user",
"enabled": true,
"id": 2,
"name": "user-defined",
"priority": 10,
"readOnly": false,
"ruleCount": 1
},
"id": 1,
"ipAddress": null,
"name": "http",
"port": "80",
"position": 0,
"protocols": [
"TCP"
]
}
]
curl -X GET -u admin:admin http://localhost:8980/opennms/rest/classifications/groups
[
{
"description": "Classification rules defined by OpenNMS",
"enabled": false,
"id": 1,
"name": "pre-defined",
"priority": 0,
"readOnly": true,
"ruleCount": 6248
},
{
"description": "Classification rules defined by the user",
"enabled": true,
"id": 2,
"name": "user-defined",
"priority": 10,
"readOnly": false,
"ruleCount": 1
}
]
curl -X GET -u admin:admin http://localhost:8980/opennms/rest/classifications/1
{
"group": {
"description": "Classification rules defined by the user",
"enabled": true,
"id": 2,
"name": "user-defined",
"priority": 10,
"readOnly": false,
"ruleCount": 1
},
"id": 1,
"ipAddress": null,
"name": "http",
"port": "80",
"position": 0,
"protocols": [
"TCP"
]
}
curl -X GET -H "Accept: application/json" -u admin:admin http://localhost:8980/opennms/rest/classifications/groups/1
{
"description": "Classification rules defined by OpenNMS",
"enabled": false,
"id": 1,
"name": "pre-defined",
"priority": 0,
"readOnly": true,
"ruleCount": 6248
}
curl -X GET -H "Accept: text/comma-separated-values" -u admin:admin http://localhost:8980/opennms/rest/classifications/groups/2
name;ipAddress;port;protocol
http;;80;TCP
POSTs (Creating Data)
Resource | Description |
---|---|
|
Post a new rule or import rules from CSV. If multiple rules are imported from a CSV file (into the user-defined group), all existing rules are deleted. |
|
Classify the given request based on all enabled rules. |
Examples
curl -X POST -H "Content-Type: application/json" -u admin:admin -d '{"name": "http", "port":"80,8080", "protocols":["tcp", "udp"]}' http://localhost:8980/opennms/rest/classifications
HTTP/1.1 201 Created
Date: Thu, 08 Feb 2018 14:44:27 GMT
Location: http://localhost:8980/opennms/rest/classifications/6616
curl -X POST -H "Content-Type: application/json" -u admin:admin -d '{"protocol": "tcp", "ipAddress": "192.168.0.1", "port" :"80"}' http://localhost:8980/opennms/rest/classifications/classify
{
"classification":"http"
}
curl -X POST -H "Content-Type: application/json" -u admin:admin -d '{"protocol": "tcp", "ipAddress": "192.168.0.1", "port" :"8980"}' http://localhost:8980/opennms/rest/classifications/classify
HTTP/1.1 204 No Content
curl -X POST -H "Content-Type: text/comma-separated-values" -u admin:admin -d $'name;ipAddress;port;protocol\nOpenNMS;;8980;tcp,udp\n' http://localhost:8980/opennms/rest/classifications\?hasHeader\=true
HTTP/1.1 204 No Content
curl -X POST -H "Content-Type: text/comma-separated-values" -u admin:admin -d $'OpenNMS;;INCORRECT;tcp,udp\nhttp;;80,8080;ULF' http://localhost:8980/opennms/rest/classifications\?hasHeader\=false
{
"errors": {
"1": {
"context": "port",
"key": "rule.port.definition.invalid",
"message": "Please provide a valid port definition. Allowed values are numbers between 0 and 65536. A range can be provided, e.g. \"4000-5000\", multiple values are allowed, e.g. \"80,8080\""
},
"2": {
"context": "protocol",
"key": "rule.protocol.doesnotexist",
"message": "The defined protocol 'ULF' does not exist"
}
},
"success": false
}
PUTs (Updating Data)
Resource | Description |
---|---|
|
Update a rule identified by |
|
Retrieve the rule identified by |
|
Update a group. At the moment, only the enabled property can be changed |
DELETEs (Deleting Data)
Resource | Description |
---|---|
|
Deletes all rules of a given group. |
|
Delete the given group and all the rules it contains. |
5.9.8. Foreign Sources
ReSTful service to the OpenNMS Horizon Provisioning Foreign Source definitions. Foreign source definitions are used to control the scanning (service detection) of services for SLA monitoring as well as the data collection settings for physical interfaces (resources).
This API supports CRUD operations for managing the Provisioner’s foreign source definitions. Foreign source definitions are POSTed and will be deployed when the corresponding requisition gets imported/synchronized by Provisiond.
If a request says that it gets the "active" foreign source, that means it returns the pending foreign source (being edited for deployment) if there is one, otherwise it returns the deployed foreign source.
GETs (Reading Data)
Resource | Description |
---|---|
|
Get all active foreign sources. |
|
Get the active default foreign source. |
|
Get the list of all deployed (active) foreign sources. |
|
Get the number of deployed foreign sources. (Returns plaintext, rather than XML or JSON.) |
|
Get the active foreign source named {name}. |
|
Get the configured detectors for the foreign source named {name}. |
|
Get the specified detector for the foreign source named {name}. |
|
Get the configured policies for the foreign source named {name}. |
|
Get the specified policy for the foreign source named {name}. |
POSTs (Adding Data)
POST requires XML using application/xml as its Content-Type.
Resource | Description |
---|---|
|
Add a foreign source. |
|
Add a detector to the named foreign source. |
|
Add a policy to the named foreign source. |
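For example, a detector could be added to a foreign source with a request like the following (a sketch; the foreign source name Servers is hypothetical and the detector class is assumed to be the standard HTTP detector shipped with OpenNMS):
curl -X POST -H "Content-Type: application/xml" -u admin:admin -d '<detector name="HTTP" class="org.opennms.netmgt.provision.detector.simple.HttpDetector"/>' http://localhost:8980/opennms/rest/foreignSources/Servers/detectors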
PUTs (Modifying Data)
PUT requires form data using application/x-www-form-urlencoded as a Content-Type.
Resource | Description |
---|---|
|
Modify a foreign source with the given name. |
DELETEs (Removing Data)
Resource | Description |
---|---|
|
Delete the named foreign source. |
|
Delete the specified detector from the named foreign source. |
|
Delete the specified policy from the named foreign source. |
5.9.9. Groups
Like users, groups have a simplified ReST interface.
GETs (Reading Data)
Resource | Description |
---|---|
|
Get a list of groups. |
|
Get a specific group, given a group name. |
|
Get the users for a group, given a group name. (new in OpenNMS 14) |
|
Get the categories associated with a group, given a group name. (new in OpenNMS 14) |
POSTs (Adding Data)
Resource | Description |
---|---|
|
Add a new group. |
PUTs (Modifying Data)
Resource | Description |
---|---|
|
Update the metadata of a group (eg, change the |
|
Add a user to the group, given a group name and username. (new in OpenNMS 14) |
|
Associate a category with the group, given a group name and category name. (new in OpenNMS 14) |
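For example, a user could be added to a group like this (a sketch; it assumes a group named Admin and a user named jdoe already exist, and that the resource follows the groups/{groupName}/users/{userName} pattern):
curl -u admin:admin -X PUT http://localhost:8980/opennms/rest/groups/Admin/users/jdoe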
DELETEs (Removing Data)
Resource | Description |
---|---|
|
Delete a group. |
|
Remove a user from the group. (new in OpenNMS 14) |
|
Disassociate a category from a group, given a group name and category name. (new in OpenNMS 14) |
5.9.10. Heatmap
GETs (Reading Data)
Resource | Description |
---|---|
|
Sizes and color codes based on outages for nodes grouped by Surveillance Categories |
|
Sizes and color codes based on outages for nodes grouped by Foreign Source |
|
Sizes and color codes based on outages for nodes grouped by monitored services |
|
Sizes and color codes based on outages for nodes associated with a specific Surveillance Category |
|
Sizes and color codes based on outages for nodes associated with a specific Foreign Source |
|
Sizes and color codes based on outages for nodes providing a specific monitored service |
Resource | Description |
---|---|
|
Sizes and color codes based on alarms for nodes grouped by Surveillance Categories |
|
Sizes and color codes based on alarms for nodes grouped by Foreign Source |
|
Sizes and color codes based on alarms for nodes grouped by monitored services |
|
Sizes and color codes based on alarms for nodes associated with a specific Surveillance Category |
|
Sizes and color codes based on alarms for nodes associated with a specific Foreign Source |
|
Sizes and color codes based on alarms for nodes providing a specific monitored service |
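For example, the outage-based heatmap data grouped by surveillance categories could be fetched like this (a sketch; the resource path heatmap/outages/categories is an assumption, so adjust it to your installation):
curl -u admin:admin http://localhost:8980/opennms/rest/heatmap/outages/categories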
5.9.11. IfServices
Obtain or modify the status of a set of monitored services based on given search criteria: nodes, IP interfaces, categories, or the monitored services themselves.
Examples:
-
/ifservices?node.label=onms-prd-01
-
/ifservices?ipInterface.ipAddress=192.168.32.140
-
/ifservices?category.name=Production
-
/ifservices?status=A
GETs (Reading Data)
Resource | Description |
---|---|
|
Get all configured monitored services for the given search criteria. |
Example:
Get the forced unmanaged services for the nodes that belong to the requisition named Servers:
curl -u admin:admin "http://localhost:8980/opennms/rest/ifservices?status=F&node.foreignSource=Servers"
PUTs (Modifying Data)
Resource | Description |
---|---|
|
Update all configured monitored services for the given search criteria. |
Example:
Mark the ICMP and HTTP services to be forced unmanaged for the nodes that belong to the category Production:
curl -u admin:admin -X PUT -d "status=F&services=ICMP,HTTP" "http://localhost:8980/opennms/rest/ifservices?category.name=Production"
5.9.12. KSC Reports
GETs (Reading Data)
Resource | Description |
---|---|
|
Get a list of all KSC reports, including ID and label. |
|
Get a specific KSC report, by ID. |
|
Get a count of all KSC reports. |
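For example, the report list could be fetched like this (a sketch, assuming the endpoint is exposed under rest/ksc):
curl -u admin:admin http://localhost:8980/opennms/rest/ksc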
PUTs (Modifying Data)
Resource | Description |
---|---|
|
Modify a report with the given ID. |
POSTs (Creating Data)
Documentation incomplete; see issue NMS-7162.
DELETEs (Removing Data)
Documentation incomplete; see issue NMS-7162.
5.9.13. Maps
The SVG maps use ReST to populate their data. This is the interface for doing that.
GETs (Reading Data)
Resource | Description |
---|---|
|
Get the list of maps. |
|
Get the map with the given ID. |
|
Get the elements (nodes, links, etc.) for the map with the given ID. |
POSTs (Adding Data)
Resource | Description |
---|---|
|
Add a map. |
PUTs (Modifying Data)
Resource | Description |
---|---|
|
Update the properties of the map with the given ID. |
DELETEs (Removing Data)
Resource | Description |
---|---|
|
Delete the map with the given ID. |
5.9.14. Measurements API
The Measurements API can be used to retrieve collected values stored in RRD (or JRB) files and in Newts.
Unless specified otherwise, all units of time are expressed in milliseconds. |
GETs (Reading Data)
Resource | Description |
---|---|
|
Retrieve the measurements for a single attribute |
The following table shows all supported query string parameters and their default values.
name | default | comment |
---|---|---|
start |
-14400000 |
Timestamp in milliseconds. If > 0, the timestamp is relative to the UNIX epoch (January 1st 1970 00:00:00 AM). If < 0, the timestamp is relative to the |
end |
0 |
Timestamp in milliseconds. If <= 0, the effective value will be the current timestamp. |
step |
300000 |
Requested time interval between rows. Actual step may differ. |
maxrows |
0 |
When using the measurements to render a graph, this should be set to the graph’s pixel width. |
aggregation |
AVERAGE |
Consolidation function used. Can typically be |
fallback-attribute |
Secondary attribute that will be queried in case the primary attribute does not exist. |
Step sizes
The behavior of the step parameter changes based on the time series strategy in use.
When using persistence strategies based on RRD, the available step sizes are limited to those defined by the RRA when the file was created. The effective step size used will be one that covers the requested period, and is closest to the requested step size. For maximum accuracy, use a step size of 1.
When using Newts, the step size can be set arbitrarily since the aggregation is performed at the time of request.
To help prevent overly large requests, the step size is limited to a minimum of 5 minutes, the default collection rate.
This value can be decreased by setting the org.opennms.newts.query.minimum_step
system property.
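For example, to allow 30-second steps (a sketch: it assumes the property is set in $OPENNMS_HOME/etc/opennms.properties and, like the other time units in this API, is expressed in milliseconds):
echo "org.opennms.newts.query.minimum_step=30000" >> $OPENNMS_HOME/etc/opennms.properties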
Usage examples with curl
curl -u admin:admin "http://127.0.0.1:8980/opennms/rest/measurements/node%5B1%5D.nodeSnmp%5B%5D/CpuRawUser?start=-7200000&maxrows=30&aggregation=AVERAGE"
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<query-response end="1425588138256" start="1425580938256" step="300000">
<columns>
<values>159.5957271523179</values>
<values>158.08531037527592</values>
<values>158.45835584842285</values>
...
</columns>
<labels>CpuRawUser</labels>
<timestamps>1425581100000</timestamps>
<timestamps>1425581400000</timestamps>
<timestamps>1425581700000</timestamps>
...
</query-response>
POSTs (Reading Data)
Resource | Description |
---|---|
|
Retrieve the measurements for one or more attributes, possibly spanning multiple resources, with support for JEXL expressions. |
Here we use a POST instead of a GET to retrieve the measurements, which allows us to perform complex queries which are difficult to express in a query string. These requests cannot be used to update or create new metrics.
An example of the POST body is shown below.
Usage examples with curl
curl -X POST -H "Accept: application/json" -H "Content-Type: application/json" -u admin:admin -d @report.json http://127.0.0.1:8980/opennms/rest/measurements
{
"start": 1425563626316,
"end": 1425585226316,
"step": 10000,
"maxrows": 1600,
"source": [
{
"aggregation": "AVERAGE",
"attribute": "ifHCInOctets",
"label": "ifHCInOctets",
"resourceId": "nodeSource[Servers:1424038123222].interfaceSnmp[eth0-04013f75f101]",
"transient": "false"
},
{
"aggregation": "AVERAGE",
"attribute": "ifHCOutOctets",
"label": "ifHCOutOctets",
"resourceId": "nodeSource[Servers:1424038123222].interfaceSnmp[eth0-04013f75f101]",
"transient": "true"
}
],
"expression": [
{
"label": "ifHCOutOctetsNeg",
"value": "-1.0 * ifHCOutOctets",
"transient": "false"
}
]
}
{
"step": 300000,
"start": 1425563626316,
"end": 1425585226316,
"timestamps": [
1425563700000,
1425564000000,
1425564300000,
...
],
"labels": [
"ifHCInOctets",
"ifHCOutOctetsNeg"
],
"columns": [
{
"values": [
139.94817275747508,
199.0062569213732,
162.6264894795127,
...
]
},
{
"values": [
-151.66179401993355,
-214.7415503875969,
-184.9012624584718,
...
]
}
]
}
More Advanced Expressions
The JEXL 2.1.x library is used to parse the expression string; this also allows Java objects and predefined functions to be included in the expression.
JEXL uses a context which is pre-populated by OpenNMS with the results of the query. Several constants and arrays are also predefined as references in the context by OpenNMS.
Constant or prefix | Description |
---|---|
__inf |
Double.POSITIVE_INFINITY |
__neg_inf |
Double.NEGATIVE_INFINITY |
NaN |
Double.NaN |
__E |
java.lang.Math.E |
__PI |
java.lang.Math.PI |
__diff_time |
Time span between start and end of samples |
__i |
Index into the samples array which the present calculation is referencing |
__AttributeName (where AttributeName is the searched for attribute) |
This returns the complete double[] array of samples for AttributeName |
OpenNMS predefines a number of functions for use in expressions which are referenced by namespace:function. All of these functions return a java double value.
Predefined functions
Function | Description | Example |
---|---|---|
jexl:evaluate("_formula"): |
Passes a string to the JEXL engine to be evaluated as if it had been entered as a normal expression. Like normal expressions, expressions evaluated through this function return a Java double value. This makes it possible to reference and evaluate a formula which has been stored in OpenNMS as a string variable, for example a per-node and per-value correction formula which normalises samples from different sample sources. |
|
math: |
References java.lang.Math class |
math:cos(20) |
strictmath: |
References java.lang.StrictMath class |
strictmath:cos(20) |
fn: |
References the class org.opennms.netmgt.measurements.impl.SampleArrayFunctions. This contains several functions which can reference previous samples in the time series. |
|
fn:arrayNaN("sampleName", n) |
References the nth previous sample in the "sampleName" sample series, replacing the n samples before the start of the series with NaN. |
fn:arrayNaN("x", 5) |
fn:arrayZero("sampleName", n) |
References the nth previous sample in the "sampleName" sample series, replacing the n samples before the start of the series with 0 (zero). |
fn:arrayZero("x", 5) |
fn:arrayFirst("sampleName", n) |
References the nth previous sample in the "sampleName" sample series, replacing the n samples before the start of the series with the first sample. |
fn:arrayFirst("x", 5) |
fn:arrayStart("sampleName", n, constant) |
References the nth previous sample in the "sampleName" sample series, replacing the n samples before the start of the series with a supplied constant. |
fn:arrayStart("x", 5, 10) |
With these additional variables and functions it is possible, for example, to create a Finite Impulse Response (FIR) filter function such as
y = a * f(n) + b * f(n-1) + c * f(n-2)
using the following expression, where a, b, and c are constants and x is a time series value:
a * x + b * fn:arrayNaN("x", 1) + c * fn:arrayNaN("x", 2)
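As a sketch, such a filter could be embedded in a measurements POST body as follows; the resource ID node[1].interfaceSnmp[eth0] and the attribute ifHCInOctets are placeholders, the attribute is given the label x, and the constants a, b, c are replaced with literal values:
{
  "start": 1425563626316,
  "end": 1425585226316,
  "step": 300000,
  "source": [
    {
      "aggregation": "AVERAGE",
      "attribute": "ifHCInOctets",
      "label": "x",
      "resourceId": "node[1].interfaceSnmp[eth0]",
      "transient": "true"
    }
  ],
  "expression": [
    {
      "label": "filtered",
      "value": "0.5 * x + 0.3 * fn:arrayNaN(\"x\", 1) + 0.2 * fn:arrayNaN(\"x\", 2)",
      "transient": "false"
    }
  ]
}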
5.9.15. Nodes
Note: the default offset is 0, the default limit is 10 results. To get all results, use limit=0 as a parameter on the URL (ie, GET /nodes?limit=0).
Additionally, anywhere you use "id" in the queries below, you can use the foreign source and foreign ID separated by a colon instead (ie, GET /nodes/fs:fid).
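For example, to list every node, or to address a node by foreign source and foreign ID (Servers:node42 is a hypothetical foreign source/foreign ID pair):
curl -u admin:admin "http://localhost:8980/opennms/rest/nodes?limit=0"
curl -u admin:admin "http://localhost:8980/opennms/rest/nodes/Servers:node42"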
GETs (Reading Data)
Resource | Description |
---|---|
|
Get a list of nodes. This includes the ID and node label. |
|
Get a specific node, by ID. |
|
Get the list of IP interfaces associated with the given node. |
|
Get the IP interface for the given node and IP address. |
|
Get the list of services associated with the given node and IP interface. |
|
Get the requested service associated with the given node, IP interface, and service name. |
|
Get the list of SNMP interfaces associated with the given node. |
|
Get the specific interface associated with the given node and ifIndex. |
|
Get the list of categories associated with the given node. |
|
Get the category associated with the given node and category name. |
|
Get the asset record associated with the given node. |
POSTs (Adding Data)
POST requires XML using application/xml as its Content-Type.
Resource | Description |
---|---|
|
Add a node. |
|
Add an IP interface to the node. |
|
Add a service to the interface for the given node. |
|
Add an SNMP interface to the node. |
|
Add a category association to the node. |
PUTs (Modifying Data)
PUT requires form data using application/x-www-form-urlencoded as a Content-Type.
Resource | Description |
---|---|
|
Modify a node with the given ID. |
|
Modify the IP interface with the given node ID and IP address. |
|
Modify the service with the given node ID, IP address, and service name. |
|
Modify the SNMP interface with the given node ID and ifIndex. |
|
Modify the category with the given node ID and name. |
DELETEs (Removing Data)
Perform a DELETE to the singleton URLs specified in PUT above to delete that object.
Deletion of nodes, IP interfaces, and services is asynchronous, so those calls return 202 (ACCEPTED). Deletion of SNMP interfaces and categories is synchronous, so those calls return 204 (NO_CONTENT) on success. |
5.9.16. Notifications
Note: the default offset is 0, the default limit is 10 results.
To get all results, use limit=0
as a parameter on the URL (ie, GET /notifications?limit=0
).
GETs (Reading Data)
Resource | Description |
---|---|
|
Get a list of notifications. |
|
Get the number of notifications. (Returns plaintext, rather than XML or JSON.) |
|
Get the notification specified by the given ID. |
To acknowledge or unacknowledge a notification, use the acks
endpoint — see Acknowledgements.
5.9.17. Outage Timelines
GETs (Reading Data)
Resource | Description |
---|---|
|
Generate the timeline header |
|
Generate the timeline image |
|
Generate an empty timeline for non-monitored services |
|
Generate the raw HTML for the image |
5.9.18. Outages
GETs (Reading Data)
Resource | Description |
---|---|
|
Get a list of outages. |
|
Get the number of outages. (Returns plaintext, rather than XML or JSON.) |
|
Get the outage specified by the given ID. |
|
Get the outages that match the given node ID. |
5.9.19. Requisitions
RESTful service to the OpenNMS Horizon Provisioning Requisitions. In this API, these "groups" of nodes are aptly named and treated as requisitions.
This implementation supports CRUD operations for managing provisioning requisitions. Requisitions are first POSTed and no provisioning (import/synchronize) operations are taken. This is done so that a) the XML can be verified and b) the operations can happen at a later time. They are moved to the deployed state (put in the active requisition repository) when an import is run.
If a request says that it gets the active requisition, that means it returns the pending requisition (being edited for deployment) if there is one, otherwise it returns the deployed requisition. Note that anything that says it adds/deletes/modifies a node, interface, etc. in these instructions is referring to modifying that element from the requisition not from the database itself. That will happen upon import/synchronization.
You may write requisition data if the authenticated user is in the provision, rest, or admin roles.
GETs (Reading Data)
Resource | Description |
---|---|
|
Get all active requisitions. |
|
Get the number of active requisitions. (Returns plaintext, rather than XML or JSON.) |
|
Get the list of all deployed (active) requisitions. |
|
Get the number of deployed requisitions. (Returns plaintext, rather than XML or JSON.) |
|
Get the active requisition for the given foreign source name. |
|
Get the list of nodes being requisitioned for the given foreign source name. |
|
Get the node with the given foreign ID for the given foreign source name. |
|
Get the interfaces for the node with the given foreign ID and foreign source name. |
|
Get the interface with the given IP for the node with the specified foreign ID and foreign source name. |
|
Get the services for the interface with the specified IP address, foreign ID, and foreign source name. |
|
Get the given service with the specified IP address, foreign ID, and foreign source name. |
|
Get the categories for the node with the given foreign ID and foreign source name. |
|
Get the category with the given name for the node with the specified foreign ID and foreign source name. |
|
Get the assets for the node with the given foreign ID and foreign source name. |
|
Get the value of the asset for the given assetName for the node with the given foreign ID and foreign source name. |
POSTs (Adding Data or Updating existing Data)
Expects JSON/XML |
Resource | Description |
---|---|
|
Adds (or replaces) a requisition. |
|
Adds (or replaces) a node in the specified requisition. This operation can be very helpful when working with [[Large Requisitions]]. |
|
Adds (or replaces) an interface for the given node in the specified requisition. |
|
Adds (or replaces) a service on the given interface in the specified requisition. |
|
Adds (or replaces) a category for the given node in the specified requisition. |
|
Adds (or replaces) an asset for the given node in the specified requisition. |
PUTs (Modifying Data)
Expects form-urlencoded |
Resource | Description |
---|---|
|
Performs an import/synchronize on the specified foreign source. This turns the "active" requisition into the "deployed" requisition. |
|
Performs an import/synchronize on the specified foreign source. This turns the "active" requisition into the "deployed" requisition. Existing nodes will not be scanned until the next rescan interval, only newly-added nodes will be. Useful if you’re planning on making a series of changes. |
|
Update the specified foreign source. |
|
Update the specified node for the given foreign source. |
|
Update the specified IP address for the given node and foreign source. |
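As a sketch of the typical workflow, a node can be added to a requisition and the requisition imported afterwards; the requisition name Servers, the node details, and the XML namespace are assumptions, so adjust them to your environment:
curl -X POST -H "Content-Type: application/xml" -u admin:admin -d '<node xmlns="http://xmlns.opennms.org/xsd/config/model-import" foreign-id="node42" node-label="node42"><interface ip-addr="192.0.2.42" snmp-primary="P"/></node>' http://localhost:8980/opennms/rest/requisitions/Servers/nodes
curl -X PUT -u admin:admin http://localhost:8980/opennms/rest/requisitions/Servers/import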
DELETEs (Removing Data)
Resource | Description |
---|---|
|
Delete the pending requisition for the named foreign source. |
|
Delete the active requisition for the named foreign source. |
|
Delete the node with the given foreign ID from the given requisition. |
|
Delete the IP address from the requisitioned node with the given foreign ID and foreign source. |
|
Delete the service from the requisitioned interface with the given IP address, foreign ID and foreign source. |
|
Delete the category from the node with the given foreign ID and foreign source. |
|
Delete the asset field from the requisitioned node with the given foreign ID and foreign source. |
5.9.20. Resources API
The Resources API can be used to list or delete resources at the node level and below. This service is especially useful in conjunction with the Measurements API.
GETs (Reading Data)
Resource | Description |
---|---|
|
Retrieve the full tree of resources in the system (expensive, use with care) |
|
Retrieve the tree of resources starting with the named resource ID |
|
Retrieve the tree of resources for a node, given its database ID or |
DELETEs (Removing Data)
Resource | Description |
---|---|
|
Delete resource with the named resource ID, and all its child resources, if any |
The following table shows all supported query string parameters and their default values.
name | default | comment |
---|---|---|
depth |
varies |
GET only. Limits the tree depth for retrieved resources. Defaults to 1 when listing all resources, or to -1 (no limit) when listing a single resource. |
Usage examples with curl
Resources for node 1, by resource ID:
curl -u admin:admin "http://127.0.0.1:8980/opennms/rest/resources/node%5B1%5D"
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<resource id="node[1]"
label="anode"
name="1"
link="element/node.jsp?node=1"
typeLabel="Node">
<children count="11" totalCount="11">
<resource id="node[1].nodeSnmp[]"
label="Node-level Performance Data"
name=""
typeLabel="SNMP Node Data"
parentId="node[1]">
<children/>
<stringPropertyAttributes/>
<externalValueAttributes/>
<rrdGraphAttributes>
<entry>
<key>loadavg1</key>
<value name="loadavg1"
relativePath="snmp/1"
rrdFile="loadavg1.jrb"/>
</entry>
<entry>
<key>tcpActiveOpens</key>
<value name="tcpActiveOpens"
relativePath="snmp/1"
rrdFile="tcpActiveOpens.jrb"/>
</entry>
<entry>
<key>memTotalFree</key>
<value name="memTotalFree"
relativePath="snmp/1"
rrdFile="memTotalFree.jrb"/>
</entry>
...
</rrdGraphAttributes>
</resource>
<resource id="node[1].interfaceSnmp[lo]"
label="lo (10 Mbps)"
name="lo"
link="element/snmpinterface.jsp?node=1&ifindex=1"
typeLabel="SNMP Interface Data"
parentId="node[1]">
<children/>
<stringPropertyAttributes>
<entry>
<key>ifName</key>
<value>lo</value>
</entry>
...
</stringPropertyAttributes>
<externalValueAttributes>
<entry>
<key>ifSpeed</key>
<value>10000000</value>
</entry>
<entry>
<key>ifSpeedFriendly</key>
<value>10 Mbps</value>
</entry>
</externalValueAttributes>
<rrdGraphAttributes>
...
<entry>
<key>ifHCInOctets</key>
<value name="ifHCInOctets"
relativePath="snmp/1/lo"
rrdFile="ifHCInOctets.jrb"/>
</entry>
<entry>
<key>ifHCOutOctets</key>
<value name="ifHCOutOctets"
relativePath="snmp/1/lo"
rrdFile="ifHCOutOctets.jrb"/>
</entry>
...
</rrdGraphAttributes>
</resource>
...
</children>
<stringPropertyAttributes/>
<externalValueAttributes/>
<rrdGraphAttributes/>
</resource>
Resources for node 1, without having to construct a resource ID:
curl -u admin:admin "http://127.0.0.1:8980/opennms/rest/resources/fornode/1"
Resources for node node42 in requisition Servers, by resource ID:
curl -u admin:admin "http://127.0.0.1:8980/opennms/rest/resources/nodeSource%5BServers:node42%5D"
Resources for node node42 in requisition Servers, without having to construct a resource ID:
curl -u admin:admin "http://127.0.0.1:8980/opennms/rest/resources/fornode/Servers:node42"
5.9.21. Realtime Console data
The Realtime Console (RTC) calculates the availability for monitored services. Data provided by the RTC is available through the ReST API.
GETs (Reading Data)
Resource | Description |
---|---|
|
Get all nodes and availability data for a given SLA category filter, e.g. Web Servers (Web+Servers) |
|
Get node availability data for each node of a given SLA category filter |
|
Get detailed service availability for a given node in a given SLA category filter |
|
Get detailed availability for all services on a given node |
Example
curl -u demo:demo http://demo.opennms.org/opennms/rest/availability/categories/Web+Servers
curl -u demo:demo http://demo.opennms.org/opennms/rest/availability/categories/nodes
curl -u demo:demo http://demo.opennms.org/opennms/rest/availability/categories/nodes/31
curl -u demo:demo http://demo.opennms.org/opennms/rest/availability/nodes/31
5.9.22. Scheduled Outages
GETs (Reading Data)
Parameter | Description |
---|---|
|
to get a list of configured scheduled outages. |
|
to get the details of a specific outage. |
POSTs (Setting Data)
Parameter | Description |
---|---|
|
to add a new outage (or update an existing one). |
PUTs (Modifying Data)
Parameter | Description |
---|---|
|
to add a specific outage to a collectd’s package. |
|
to add a specific outage to a pollerd’s package. |
|
to add a specific outage to a threshd’s package. |
|
to add a specific outage to the notifications. |
DELETEs (Removing Data)
Parameter | Description |
---|---|
|
to delete a specific outage. |
|
to remove a specific outage from a collectd’s package. |
|
to remove a specific outage from a pollerd’s package. |
|
to remove a specific outage from a threshd’s package. |
|
to remove a specific outage from the notifications. |
5.9.23. SNMP Configuration
You can edit the community string, SNMP version, etc. for an IP address using this interface. If you make a change that overlaps with an existing snmp-config.xml, groups of <definition /> entries are created automatically as necessary. If no <definition /> entry is created, the IP address matches the defaults.
There are different versions of the interface (see below). The following operations are supported:
GETs (Reading Data)
Parameter | Description |
---|---|
|
Get the SNMP configuration for a given IP address. |
|
Get the SNMP configuration for a given IP address at a given location. |
PUTs (Modifying Data)
Parameter | Description |
---|---|
|
Add or update the SNMP configuration for a given IP address. |
Determine API version
To determine the version of the API running in your OpenNMS Horizon, open http://localhost:8980/opennms/rest/snmpConfig/1.1.1.1 in your browser and have a look at the output:
-
Version 1: the output only has the attributes community, port, retries, timeout and version
-
Version 2: the output has additional attributes (e.g. maxRepetitions)
API Version 1
In version 1 only a few attributes defined in snmp-config.xsd
are supported.
These are defined in snmp-info.xsd
:
<xs:schema
xmlns:tns="http://xmlns.opennms.org/xsd/config/snmp-info"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
elementFormDefault="qualified"
version="1.0"
targetNamespace="http://xmlns.opennms.org/xsd/config/snmp-info">
<xs:element name="snmp-info" type="tns:snmpInfo"/>
<xs:complexType name="snmpInfo">
<xs:sequence>
<xs:element name="community" type="xs:string" minOccurs="0"/>
<xs:element name="port" type="xs:int"/>
<xs:element name="retries" type="xs:int"/>
<xs:element name="timeout" type="xs:int"/>
<xs:element name="version" type="xs:string" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:schema>
The following table shows all supported attributes, optional restrictions and the mapping between snmp-info.xsd
and snmp-config.xsd
.
All parameters can be set regardless of the version.
attribute snmp-info.xml | attribute snmp-config.xml | default | restricted to version | restriction |
---|---|---|---|---|
version |
version |
v1 |
- |
"v1", "v2c" or "v3" are valid arguments. If an invalid or empty argument is provided "v1" is used. |
port |
port |
161 |
- |
Integer > 0 |
retries |
retry |
1 |
- |
Integer > 0 |
timeout |
timeout |
3000 |
- |
Integer > 0 |
community |
read-community |
public |
- |
any string with a length >= 1 |
curl -v -X PUT -H "Content-Type: application/xml" \
-H "Accept: application/xml" \
-d "<snmp-info>
<community>yRuSonoZ</community>
<port>161</port>
<retries>1</retries>
<timeout>2000</timeout>
<version>v2c</version>
</snmp-info>" \
-u admin:admin http://localhost:8980/opennms/rest/snmpConfig/10.1.1.1
Creates or updates a <definition/> entry for IP address 10.1.1.1 in snmp-config.xml.
curl -v -X GET -u admin:admin http://localhost:8980/opennms/rest/snmpConfig/10.1.1.1
Returns the SNMP configuration for IP address 10.1.1.1 as defined in example 1.
API Version 2
Since version 2, all attributes of a <definition /> entry defined in snmp-config.xsd (http://xmlns.opennms.org/xsd/config/snmp) can be set or retrieved via the interface, except that the configuration can only be set for a single IP address, not for a range of IP addresses.
This may change in the future.
The interface uses SnmpInfo objects for communication.
It would therefore be possible to set, for example, v1 and v3 parameters in one request (e.g. a readCommunity string and a privProtocol string).
However, OpenNMS Horizon does not allow this.
Only attributes which have no version restriction (e.g. the timeout value) or which match the configured version (e.g. the readCommunity string if the version is v1/v2c) may be set.
The same applies when getting data from the API: even if v1 and v3 parameters are stored manually in one definition block in snmp-config.xml, the ReST API will only return the parameters which match the version.
If no version is defined, the default is assumed (both in PUT and GET requests).
The SnmpInfo schema is defined as follows:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<xs:schema
elementFormDefault="qualified"
version="1.0"
targetNamespace="http://xmlns.opennms.org/xsd/config/snmp-info"
xmlns:tns="http://xmlns.opennms.org/xsd/config/snmp-info"
xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="snmp-info" type="tns:snmpInfo"/>
<xs:complexType name="snmpInfo">
<xs:sequence>
<xs:element name="authPassPhrase" type="xs:string" minOccurs="0"/>
<xs:element name="authProtocol" type="xs:string" minOccurs="0"/>
<xs:element name="community" type="xs:string" minOccurs="0"/>
<xs:element name="contextEngineId" type="xs:string" minOccurs="0"/>
<xs:element name="contextName" type="xs:string" minOccurs="0"/>
<xs:element name="engineId" type="xs:string" minOccurs="0"/>
<xs:element name="enterpriseId" type="xs:string" minOccurs="0"/>
<xs:element name="maxRepetitions" type="xs:int" minOccurs="0"/>
<xs:element name="maxRequestSize" type="xs:int" minOccurs="0"/>
<xs:element name="maxVarsPerPdu" type="xs:int" minOccurs="0"/>
<xs:element name="port" type="xs:int" minOccurs="0"/>
<xs:element name="privPassPhrase" type="xs:string" minOccurs="0"/>
<xs:element name="privProtocol" type="xs:string" minOccurs="0"/>
<xs:element name="proxyHost" type="xs:string" minOccurs="0"/>
<xs:element name="readCommunity" type="xs:string" minOccurs="0"/>
<xs:element name="retries" type="xs:int" minOccurs="0"/>
<xs:element name="securityLevel" type="xs:int" minOccurs="0"/>
<xs:element name="securityName" type="xs:string" minOccurs="0"/>
<xs:element name="timeout" type="xs:int" minOccurs="0"/>
<xs:element name="version" type="xs:string" minOccurs="0"/>
<xs:element name="writeCommunity" type="xs:string" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
</xs:schema>
The following table shows all supported attributes, the mapping between snmp-info.xsd
and snmp-config.xsd
.
It also shows the version limitations, default values and the restrictions - if any.
attribute snmp-info.xml | attribute snmp-config.xml | default | restricted to version | restriction |
---|---|---|---|---|
version |
version |
v1 |
- |
"v1", "v2c" or "v3" are valid arguments. If an invalid or empty argument is provided "v1" is used. |
port |
port |
161 |
- |
Integer > 0 |
retries |
retry |
1 |
- |
Integer > 0 |
timeout |
timeout |
3000 |
- |
Integer > 0 |
maxVarsPerPdu |
max-vars-per-pdu |
10 |
- |
Integer > 0 |
maxRepetitions |
max-repetitions |
2 |
- |
Integer > 0 |
maxRequestSize |
max-request-size |
65535 |
- |
Integer > 0 |
proxyHost |
proxy-host |
- |
|
readCommunity |
|
read-community |
public |
v1, v2c |
|
writeCommunity |
write-community |
private |
v1, v2c |
securityName |
|
security-name |
opennmsUser |
v3 |
|
securityLevel |
security-level |
noAuthNoPriv |
v3 |
Integer value, which can be null, 1, 2, or 3: 1 means noAuthNoPriv, 2 means authNoPriv, 3 means authPriv. If you do not set the security level manually, it is determined automatically: if no authPassPhrase is set, the security level is 1; if an authPassPhrase but no privPassPhrase is set, the security level is 2; if both an authPassPhrase and a privPassPhrase are set, the security level is 3. |
authPassPhrase |
auth-passphrase |
0p3nNMSv3 |
v3 |
|
authProtocol |
auth-protocol |
MD5 |
v3 |
only MD5 or SHA are valid arguments |
privPassPhrase |
privacy-passphrase |
0p3nNMSv3 |
v3 |
|
privProtocol |
privacy-protocol |
DES |
v3 |
only DES, AES, AES192 or AES256 are valid arguments. |
engineId |
engine-id |
|
v3 |
|
contextEngineId |
context-engine-id |
v3 |
|
contextName |
|
context-name |
|
v3 |
|
enterpriseId |
enterprise-id |
v3 |
curl -v -X PUT -H "Content-Type: application/xml" \
-H "Accept: application/xml" \
-d "<snmp-info>
<readCommunity>yRuSonoZ</readCommunity>
<port>161</port>
<retries>1</retries>
<timeout>2000</timeout>
<version>v2c</version>
</snmp-info>" \
-u admin:admin http://localhost:8980/opennms/rest/snmpConfig/10.1.1.1
Creates or updates a <definition/> entry for IP address 10.1.1.1 in snmp-config.xml.
curl -v -X GET -u admin:admin http://localhost:8980/opennms/rest/snmpConfig/10.1.1.1
Returns the SNMP configuration for IP address 10.1.1.1 as defined in example 1.
curl -v -X PUT -H "Content-Type: application/xml" \
-H "Accept: application/xml" \
-d "<snmp-info>
<readCommunity>yRuSonoZ</readCommunity>
<port>161</port>
<retries>1</retries>
<timeout>2000</timeout>
<version>v1</version>
<securityName>secret-stuff</securityName>
<engineId>engineId</engineId>
</snmp-info>" \
-u admin:admin http://localhost:8980/opennms/rest/snmpConfig/10.1.1.1
Creates or updates a <definition/> entry for IP address 10.1.1.1 in snmp-config.xml, ignoring the attributes securityName and engineId because the version is not v3.
curl -v -X PUT -H "Content-Type: application/xml" \
-H "Accept: application/xml" \
-d "<snmp-info>
<readCommunity>yRuSonoZ</readCommunity>
<port>161</port>
<retries>1</retries>
<timeout>2000</timeout>
<version>v3</version>
<securityName>secret-stuff</securityName>
<engineId>engineId</engineId>
</snmp-info>" \
-u admin:admin http://localhost:8980/opennms/rest/snmpConfig/10.1.1.1
Creates or updates a <definition/> entry for IP address 10.1.1.1 in snmp-config.xml, ignoring the attribute readCommunity because the version is v3.
5.9.24. Users
Since users are not currently stored in the database, the ReST interface for them is not as full-fledged as that of nodes, etc.
You cannot use Hibernate criteria for filtering.
You may need to touch the $OPENNMS_HOME/etc/users.xml file on the filesystem for any addition or modification actions to take effect (see NMS-6469 for details).
|
GETs (Reading Data)
Parameter | Description |
---|---|
|
Get a list of users. |
|
Get a specific user, by username. |
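For example, the user list can be fetched like this (assuming the standard rest/users endpoint):
curl -u admin:admin http://localhost:8980/opennms/rest/users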
POSTs (Adding Data)
Parameter | Description |
---|---|
|
Add a user. If supplying a password, it is assumed to be hashed or encrypted already, at least as of 1.12.5.
To indicate that the supplied password uses the salted encryption algorithm rather than the older MD5-based algorithm, you need to pass an element named |
PUTs (Modifying Data)
Parameter | Description |
---|---|
|
Update an existing user’s full-name, user-comments, password, passwordSalt and duty-schedule values. |
|
Add a security role to the user. (new in OpenNMS 19) |
DELETEs (Removing Data)
Resource | Description |
---|---|
|
Delete a user. |
|
Remove a security role from the user. (new in OpenNMS 19) |
5.9.25. SNMP Trap Northbounder Interface Configuration
GETs (Reading Data)
Resource | Description |
---|---|
|
Gets full content of the configuration. |
|
Gets the status of the SNMP Trap NBI (returns either true or false). |
|
Gets the name of all the existing destinations. |
|
Gets the content of the destination named {name} |
PUTs (Update defaults)
On a successful request, the SNMP Trap NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Sets the status of the SNMP Trap NBI. |
POSTs (Adding Data)
POST requires form data using application/x-www-form-urlencoded as a Content-Type.
On a successful request, the SNMP Trap NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Updates the full content of the configuration. |
|
Adds a new or overrides an existing destination. |
PUTs (Modifying Data)
PUT requires form data using application/x-www-form-urlencoded as a Content-Type.
On a successful request, the SNMP Trap NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Updates the content of the destination named {name} |
DELETEs (Remove Data)
On a successful request, the SNMP Trap NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Removes the destination named {name} |
5.9.26. Email Northbounder Interface Configuration
GETs (Reading Data)
Resource | Description |
---|---|
|
Gets full content of the configuration. |
|
Gets the status of the Email NBI (returns either true or false). |
|
Gets the name of all the existing destinations. |
|
Gets the content of the destination named {name} |
PUTs (Update defaults)
On a successful request, the Email NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Sets the status of the Email NBI. |
POSTs (Adding Data)
POST requires form data using application/x-www-form-urlencoded as a Content-Type.
On a successful request, the Email NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Adds a new or overrides an existing destination. |
PUTs (Modifying Data)
PUT requires form data using application/x-www-form-urlencoded as a Content-Type.
On a successful request, the Email NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Updates the content of the destination named {name} |
DELETEs (Remove Data)
On a successful request, the Email NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Removes the destination named {name} |
5.9.27. Javamail Configuration
GETs (Reading Data)
Resource | Description |
---|---|
|
Get the name of the default readmail config. |
|
Get the name of the default sendmail config. |
|
Get the name of all the existing readmail configurations. |
|
Get the name of all the existing sendmail configurations. |
|
Get the name of all the existing end2end mail configurations. |
|
Get the content of the readmail configuration named {name} |
|
Get the content of the sendmail configuration named {name} |
|
Get the content of the end2end mail configuration named {name} |
POSTs (Adding/Updating Data)
POST requires XML or JSON using application/xml or application/json as its Content-Type.
On a successful request, the Email NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Adds a new or overrides an existing readmail configuration. |
|
Adds a new or overrides an existing sendmail configuration. |
|
Adds a new or overrides an existing end2end mail configuration. |
PUTs (Update defaults)
On a successful request, the Email NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Sets the readmail named {name} as the new default. |
|
Sets the sendmail named {name} as the new default. |
PUTs (Modifying Data)
PUT requires form data using application/x-www-form-urlencoded as a Content-Type.
On a successful request, the Email NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Updates the content of the readmail configuration named {name} |
|
Updates the content of the sendmail configuration named {name} |
|
Updates the content of the end2end mail configuration named {name} |
DELETEs (Remove Data)
On a successful request, the Email NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Removes the readmail configuration named {name} |
|
Removes the sendmail configuration named {name} |
|
Removes the end2end mail configuration named {name} |
5.9.28. Syslog Northbounder Interface Configuration
GETs (Reading Data)
Resource | Description |
---|---|
|
Gets full content of the configuration. |
|
Gets the status of the Syslog NBI (returns either true or false). |
|
Gets the name of all the existing destinations. |
|
Gets the content of the destination named {name} |
PUTs (Update defaults)
On a successful request, the Syslog NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Sets the status of the Syslog NBI. |
POSTs (Adding Data)
POST requires form data using application/x-www-form-urlencoded as a Content-Type.
On a successful request, the Syslog NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Updates the full content of the configuration. |
|
Adds a new or overrides an existing destination. |
PUTs (Modifying Data)
PUT requires form data using application/x-www-form-urlencoded as a Content-Type.
On a successful request, the Syslog NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Updates the content of the destination named {name} |
DELETEs (Remove Data)
On a successful request, the Syslog NBI will be notified about the configuration change.
Resource | Description |
---|---|
|
Removes the destination named {name} |
5.9.29. Business Service Monitoring
Every aspect of the Business Service Monitoring feature can be controlled via a ReST API.
The API’s endpoint for managing Business Services is located at /opennms/api/v2/business-services
.
It supports XML content to represent the Business Services.
The schema file describing the API model is located in $OPENNMS_HOME/share/xsds/business-service-dto.xsd
.
The responses generated by the ReST API also include location
elements that contain references to other entities managed by the API.
The Business Service response data model for the ReST API has the following basic structure:
<business-service>
<id>42</id>
<name>Datacenter North</name>
<attributes/>
<ip-service-edges>
<ip-service-edge>
<id>23</id>
<operational-status>WARNING</operational-status>
<map-function>
<type>Identity</type>
</map-function>
<location>/api/v2/business-services/2/edges/23</location>
<reduction-keys>
<reduction-key>uei.opennms.org/nodes/nodeLostService::12:10.10.10.42:ICMP</reduction-key>
<reduction-key>uei.opennms.org/nodes/nodeDown::12</reduction-key>
</reduction-keys>
<weight>1</weight>
</ip-service-edge>
</ip-service-edges>
<reduction-key-edges>
<reduction-key-edge>
<id>111</id>
<operational-status>INDETERMINATE</operational-status>
<map-function>
<type>Identity</type>
</map-function>
<location>/api/v2/business-services/42/edges/111</location>
<reduction-keys>
<reduction-key>my-reduction-key1</reduction-key>
</reduction-keys>
<weight>1</weight>
</reduction-key-edge>
</reduction-key-edges>
<child-edges>
<child-edge>
<id>123</id>
<operational-status>MINOR</operational-status>
<map-function>
<type>Identity</type>
</map-function>
<location>/api/v2/business-services/42/edges/123</location>
<reduction-keys/>
<weight>1</weight>
<child-id>43</child-id>
</child-edge>
</child-edges>
<parent-services><parent-service>144</parent-service></parent-services>
<reduce-function><type>HighestSeverity</type></reduce-function>
<operational-status>INDETERMINATE</operational-status>
<location>/api/v2/business-services/146</location>
</business-service>
<business-service>
<name>Datacenter North</name>
<attributes/>
<ip-service-edges>
<ip-service-edge>
<ip-service-id>99</ip-service-id>
<map-function>
<type>Identity</type>
</map-function>
<weight>1</weight>
</ip-service-edge>
</ip-service-edges>
<reduction-key-edges>
<reduction-key-edge>
<reduction-key>my-reduction-key1</reduction-key>
<map-function>
<type>Identity</type>
</map-function>
<weight>1</weight>
</reduction-key-edge>
</reduction-key-edges>
<child-edges>
<child-edge>
<child-id>43</child-id>
<map-function>
<type>Identity</type>
</map-function>
<weight>1</weight>
</child-edge>
</child-edges>
<reduce-function><type>HighestSeverity</type></reduce-function>
</business-service>
The whole model is defined in jetty-webapps/opennms/WEB-INF/lib/org.opennms.features.bsm.rest.api-*.jar
which can be used as a dependency for a Java program to query the API.
GETs (Reading Data)
Resource | Description |
---|---|
|
Provides a brief list of all defined Business Services |
|
Returns the Business Service identified by |
|
Returns the edge of the Business Service identified by |
|
Provides a list of supported Map Function definitions |
|
Returns the definition of the Map Function identified by |
|
Provides a list of supported Reduce Function definitions |
|
Returns the definition of the Reduce Function identified by |
PUTs (Modifying Data)
Resource | Description |
---|---|
|
Modifies an existing Business Service identified by |
POSTs (Adding Data)
Resource | Description |
---|---|
|
Creates a new Business Service |
|
Adds an edge of type IP Service to the Business Service identified by |
|
Adds an edge of type Reduction Key to the Business Service identified by |
|
Adds an edge of type Business Service to the Business Service identified by |
|
Reload the configuration of the Business Service Daemon |
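For example, the request document shown above could be submitted as follows (a sketch, assuming it has been saved to business-service.xml):
curl -X POST -H "Content-Type: application/xml" -u admin:admin -d @business-service.xml http://localhost:8980/opennms/api/v2/business-services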
DELETEs (Removing Data)
Resource | Description |
---|---|
|
Deletes the Business Service identified by |
|
Removes an edge with the identifier |
5.9.30. Discovery
This endpoint can be used to trigger a one-time discovery scan.
POSTs (Submitting one-time scan configuration)
Resource | Description |
---|---|
|
Submits a one-time scan configuration |
The following XML structure is used to define a scan job.
discovery.xml
<discoveryConfiguration>
<specifics>
<specific>
<location>Default</location>
<retries>3</retries>
<timeout>2000</timeout>
<foreignSource>My-ForeignSource</foreignSource>
<content>192.0.2.1</content>
</specific>
</specifics>
<includeRanges>
<includeRange>
<location>Default</location>
<retries>3</retries>
<timeout>2000</timeout>
<foreignSource>My-ForeignSource</foreignSource>
<begin>192.0.2.10</begin>
<end>192.0.2.254</end>
</includeRange>
</includeRanges>
<excludeRanges>
<excludeRange>
<begin>192.0.2.60</begin>
<end>192.0.2.65</end>
</excludeRange>
</excludeRanges>
<includeUrls>
<includeUrl>
<location>Default</location>
<retries>3</retries>
<timeout>2000</timeout>
<foreignSource>My-ForeignSource</foreignSource>
<content>http://192.0.2.254/addresses.txt</content>
</includeUrl>
</includeUrls>
</discoveryConfiguration>
The scan itself can be triggered by posting the configuration to the ReST endpoint as follows:
curl -H "Content-Type: application/xml" -u admin:admin -X POST -d @discovery.xml http://localhost:8980/opennms/api/v2/discovery
5.10. ReST API Examples
5.10.1. Getting Graph data
While graphs aren’t technically available via ReST, you can parse some ReST responses to get enough data to pull a graph. This isn’t ideal because it requires multiple fetches, but depending on your use case it may be adequate.
I’m inlining some sample PHP code which should do this (not tested at all; cut & pasted from old code that does not use the ReST interface, and/or coded straight into the browser, so YMMV). If you go to your NMS, click the resource graphs, then right-click the graph you want and hit View Image, you will get the full URL that would need to be passed to pull that graph as a standalone image.
From that URL, just plug in the values you pulled from ReST to get a graph for whatever node you want.
function fetchit($thing, $user = "user", $pass = "pass") {
$url = "http://localhost:8980/opennms";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url . $thing);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERAGENT, 'OpenNMS ReST example');
curl_setopt($ch, CURLOPT_USERPWD, $user.':'.$pass);
$data = curl_exec($ch);
curl_close($ch);
return $data;
}
// this assumes you already have found the nodeId via a previous REST call or some other means. Provided more as an example than what you might want.
function getNodeInterfaces($nodeId) {
$data = fetchit("/rest/nodes/$nodeId/snmpinterfaces");
return simplexml_load_string($data);
}
function fetchGraphs($nodeId, $days = 7) {
$ints = getNodeInterfaces($nodeId);
$chars = array('/','.',':','-',' ');
$endtime = time();
$starttime = (string)(time() - ($days * 24 * 60 * 60)) ;
// use bcmath or a better version of PHP if you don't want this hypocrisy here.
$endtime = $endtime . '000';
$starttime = $starttime . '000';
for($i=0; $i<count($ints->snmpInterfaces); $i++) {
$ifname = $ints->snmpInterfaces[$i]->snmpInterface->ifName;
$mac = $ints->snmpInterfaces[$i]->snmpInterface->physAddr;
$if = str_replace($chars, "_", $ifname);
if ( strlen(trim($mac)) < 12 ) { $mac_and_if = $if; } else { $mac_and_if = $if .'-'. $mac; };
$image = fetchit("/graph/graph.png?resource=node[$nodeId].interfaceSnmp[$mac_and_if]&report=mib2.HCbits&start=$starttime&end=$endtime");
// you can poop this to a file now, or set header('Content-type: image/png'); then print "$image";
}
}
5.10.2. provision.pl examples and notes
One way to test out the new ReST interface is to use provision.pl
.
If you run it without arguments you’ll get a usage summary, but it’s not totally obvious how it all works.
Here is an example of adding a new node using the ReST interface:
# add a new foreign source called ubr
/usr/share/opennms/bin/provision.pl requisition add ubr
/usr/share/opennms/bin/provision.pl node add ubr 10341111 clownbox
/usr/share/opennms/bin/provision.pl node set ubr 10341111 city clownville
/usr/share/opennms/bin/provision.pl node set ubr 10341111 building clown-town-hall
/usr/share/opennms/bin/provision.pl node set ubr 10341111 parent-foreign-id 1122114
/usr/share/opennms/bin/provision.pl interface add ubr 10341111 10.1.3.4
# this is like a commit. No changes will take effect until you import a foreign source
/usr/share/opennms/bin/provision.pl requisition import ubr
You will probably need to specify the username/password of an admin. To do this add:
--username=admin --password=clownnms
to the command line.
5.10.3. Debian (Lenny) Notes
For Lenny, you’ll need to pull a package out of backports to make everything work right.
Read http://backports.org/dokuwiki/doku.php?id=instructions for instructions on adding it to sources.list
.
# install liburi-perl from backports
sudo apt-get -t lenny-backports install liburi-perl
5.10.4. Windows Powershell ReST
Example of using Windows Powershell to fill some asset fields with ReST.
# Installdate of Windows
$wmi = Get-WmiObject -Class Win32_OperatingSystem
$dateInstalled = $wmi.ConvertToDateTime($wmi.InstallDate)
# Serialnumber and manufacturer of server
Get-WmiObject win32_bios | select SerialNumber
$wmi = Get-WmiObject -Class win32_bios
$manufacturer = $wmi.Manufacturer
# Text file with a description of the server for the comments field
$comment = Get-Content "C:\Program Files\BGInfo\Info_Description.txt" | Out-String
$user ="admin"
$pass= "admin"
$secpasswd = ConvertTo-SecureString $pass -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($user, $secpasswd)
$nodeid = Invoke-RestMethod -Uri http://opennms.domain.nl:8980/opennms/rest/nodes?label=servername.domain.nl -Credential $cred
$nodeid = $nodeid.nodes.node.id
$uri="http://opennms.domain.nl:8980/opennms/rest/nodes/$nodeid/assetRecord"
Invoke-RestMethod -Uri "http://opennms.massxess.nl:8980/opennms/rest/nodes/$nodeid/assetRecord/?building=133" -Credential $cred -Method PUT
Invoke-RestMethod -Uri "$uri/?manufacturer=$manufacturer" -Credential $cred -Method PUT
Invoke-RestMethod -Uri "$uri/?dateInstalled=$dateInstalled" -Credential $cred -Method PUT
Invoke-RestMethod -Uri "$uri/?comment=$comment" -Credential $cred -Method PUT
6. Develop Documentation
This document is the guideline for people who wish to contribute to writing documentation for the OpenNMS project. The OpenNMS software is free and open source; contributions of any kind are welcome. We ask that you observe the rules and guidelines outlined here to maintain consistency across the project.
Each (sub)project is represented as a section of the documentation.
Each section produces HTML output that is generated in the target/generated sources folder.
The chosen file format for documentation is AsciiDoc (Asciidoc Homepage).
Document files use the .adoc
file extension.
Note that there are different ways to contribute documentation, each suitable for the different use cases:
-
Tutorials and How To’s should be published on the OpenNMS Wiki. For example: you want to describe how to use the Net-SNMP agent and the SNMP monitor from OpenNMS to solve a special use case with OpenNMS.
-
The documentation in the source code should be formal technical documentation. The writing style should be accurate and concise. However, ensure that you explain concepts in detail and do not make omissions.
6.1. File Structure in opennms-doc
Directory | Contents |
---|---|
|
module with the guide for OpenNMS users, e.g. NOC users who don’t change the behavior of OpenNMS. |
|
module with the guide for administrators configuring, optimizing and running OpenNMS |
|
module with the guide for those who want to develop OpenNMS |
|
module with the guide of how to install OpenNMS on different operating systems |
|
module with the changelog and release notes |
6.2. Writing
The following rules will help you to commit correctly formatted and prepared documentation for inclusion in the OpenNMS project. It is important that we maintain a level of consistency across all of our committers and the documentation they produce.
When writing, place a single sentence on each line. This makes it easy to move content around, and also easy to spot long or fragmented sentences. It also allows us to comment on individual sentences in GitHub, which facilitates easier merging.
Other than writing documentation, you can help out by providing comments on documentation, reviewing, suggesting improvements or reporting bugs. To do this head over to: issue tracker for documentation! |
6.2.1. Conventions for text formatting
The following conventions are used:
-
File names and path are written in `poller-configuration.xml` they will be rendered in:
poller-configuration.xml
; -
Names that indicate special attention, e.g. this configuration matches *any* entry: this is rendered as: this configuration matches any entry;
-
_Italics_ is rendered as Italics and used for emphasis and indicate internal names and abbreviations;
-
*Bold* is rendered as Bold and should be used sparingly, for strong emphasis only;
-
+methodName()+ is rendered as methodName() and is also used for literals, (note: the content between the
+
signs will be parsed); -
`command` is rendered as
command
(typically used for command-line or parts used in configuration files), (note: the content between the ` signs will not be parsed); -
`my/path/` is rendered as
my/path/
this is used for file names and paths; -
\``double quote'' (which is two grave accents to the left and two acute accents to the right) renders as ``double quote'';
-
\`single quote' (which is a single grave accent to the left and a single acute accent to the right) renders as `single quote'.
6.2.2. Gotchas
-
Always leave a blank line at the top of the document’s section; otherwise the title might end up in the last paragraph of the previous document;
-
Start in line 2 setting a relative path to the images directory to picture rendering on GitHub:
// Allow image rendering
:imagesdir: relative/path/to/images/dir
-
Always leave a blank line at the end of documents;
-
As {} are used for AsciiDoc attributes, everything inside braces will be treated as an attribute. To avoid this you have to escape the opening brace: \{. If you do not escape the opening brace, the braces and the text inside them will be removed without any warning (see the escaping sketch after this list);
-
Forcing line breaks can be achieved with ` +` at the end of a line, followed by a line break:
This is the first line +
and this a forced 2nd line
This is the first line
and this a forced 2nd line
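A minimal sketch of the brace escaping described in the list above (the attribute-like name is purely illustrative):
The token \{this-is-not-an-attribute} is kept literally in the output, whereas {opennms-version} is substituted with the version number.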
6.3. Headings and document structure
Each document starts again with headings from level zero (the document title). Each document should have an id. In some cases, sections in the document need to have ids as well; this depends on where they fit in the overall structure. If you wish to link to specific content, that content has to have an id. A missing id in a mandatory place will cause the build to fail.
To start a document:
[[unique-id-verbose-is-ok]]
= The Document Title
If you are including the document inside another document and you need to push the headings down to the right level in the output, the leveloffset attribute is used.
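For example, the parent document might pull in a chapter and push its headings down one level, roughly like this sketch (the file name is hypothetical):
include::chapter-contribution-workflow.adoc[leveloffset=+1]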
Subsequent headings in a document should use the following syntax:
== Subheading
... content here ...
=== Subsubheading
content here ...
6.4. Links
When you need to link to other parts of the manual, you use the target id. To use a target id, follow this syntax:
<<doc-guidelines-links>>
This will render as: [doc-guidelines-links]
To use the target id in your document, simply write the target id in your text, for example:
see <<target-id>>
this should suffice for most cases.
If you need to link to another document with your own link text, then follow this procedure:
<<target-id, link text that fits in the context>>
Having lots of linked text may work well in a web context but is distracting in print. The documentation we are creating is intended for both mediums, so be considerate of this in your usage.
If you wish to use an external link, it is added as:
http://www.opennms.org/[Link text here]
This will render in the output as: Link text here
For short links it may be beneficial not to use accompanying link text:
http://www.opennms.org/
Which renders as: http://www.opennms.org/
It is acceptable to have a trailing period after the URL; it will not render as part of the link.
6.5. Admonitions and useful notes
These are useful for defining specific sections, such as Notes, Tips and Important information. We encourage the use of them in the documentation as long as they are used appropriately. Choose from the following:
NOTE: This is my note.
This is how it's rendered:
This is my note.
TIP: This is my tip.
This is how it's rendered:
This is my tip.
IMPORTANT: This is my important hint.
This is how it's rendered:
This is my important hint.
CAUTION: This is my caution.
This is how it's rendered:
This is my caution.
WARNING: This is my warning.
This is how it's rendered:
This is my warning.
A multiline variation:
TIP: Tiptext. +
Line 2.
Which is rendered as:
Tiptext. Line 2.
Remember to write these in full caps. There is no easy way to add new admonitions; do not create your own.
6.6. Attributes
Common attributes you can use in documents:
-
{opennms-version} - rendered as "22.0.2"
These can be used to substitute parts of URLs that point to, for example, API docs or source code. Note that the opennms-git-tag attribute also handles the case of snapshot/master.
Sample AsciiDoc attributes which can be used:
-
{docdir} - root directory of the documents
-
{nbsp} - non-breaking space
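A short sketch of using such attributes inline in a sentence (the sentence itself is illustrative only):
This documentation applies to OpenNMS {opennms-version} and its sources live under {docdir}.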
6.7. Comments
There’s a separate build that includes comments.
When the comments are used they show up with a yellow background.
This build doesn’t run by default, but after a normal build you can use make annotated to create it yourself.
You can use the resulting 'annotated' page to search for content, as the full manual is rendered as a single page.
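A minimal sketch of producing the annotated build from within a documentation module (running it from the module directory is an assumption):
# run the normal documentation build first
mvn clean package
# then build the single annotated page that renders the comments
make annotated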
To write a comment:
// this is a comment
Comments are not visible in the standard build. Comment blocks won’t be included in the output of any build. The syntax for a comment block is:
////
Note that includes in here will still be processed, but will not make it into the output.
That is, missing includes here will still break the build!
////
6.8. Tables
For representing structured information you can use tables. A table is constructed in the following manner:
[options="header, autowidth"]
|===
| Parameter | Description | Required | Default value
| `myFirstParm` | my first long description | required | `myDefault`
| `myScndParm` | my second long description | required | `myDefault`
|===
This is rendered as:
Parameter | Description | Required | Default value
---|---|---|---
myFirstParm | my first long description | required | myDefault
myScndParm | my second long description | required | myDefault
Please align your columns in the AsciiDoc source in order to give better readability when editing in text view. If you have a very long description, break at 120 characters and align the text to improve source readability (see the sketch after the rendered example below).
This is rendered as:
Parameter | Description | Required | Default value
---|---|---|---
 | Authentication credentials to perform basic authentication. Credentials should comply to RFC1945 section 11.1, without the Base64 encoding part. That is: a string made of the concatenation of … | optional |
 | Additional headers to be sent along with the request. Examples of valid parameter names are … | optional |
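As a sketch, a column-aligned source with a wrapped description might look like the following (the parameter name and default value are placeholders, not taken from a real configuration):
[options="header, autowidth"]
|===
| Parameter              | Description                                                  | Required | Default value
| `basic-authentication` | Authentication credentials to perform basic authentication.
                           Credentials should comply to RFC1945 section 11.1,
                           without the Base64 encoding part.                           | optional | `-`
|===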
6.9. Include images
When visualizing complex problems, an image can support the explanation and convey more information. In the OpenNMS documentation modules we use two directories for images.
The image folder structure mirrors the text structure, which makes it a little easier to locate the AsciiDoc text file where an image is included.
.
└── opennms-doc(1)
└── guide-doc(2)
├── README.adoc
├── pom.xml
├── src(3)
| └── asciidoc(4)
| ├── configs
| | └── poller-configuration.xml
| ├── images(5)
| | ├── 01_opennms-logo.png(6)
| | └── 02_pris-overview.png
| ├── images_src(7)
| | └── pris-overview.graphml(8)
| ├── index.adoc(9)
| └── text
| ├── images.adoc(10)
| ├── include-source.adoc
| ├── introduction.adoc
| └── writing.adoc
└── target(11)
1 | This folder contains all documentation modules
2 | The module containing this documentation, targeted at documentation contributors
3 | Indicates a source folder
4 | The documentation root folder
5 | Folder for images. Images should be *.png or *.jpg if included in the documentation
6 | The image used; the format is a leading <number>_ followed by a name using no spaces
7 | Some images are created with tools like yEd; this folder should contain the editable version of the file with the same file name
8 | Editable version of the image source file; note there are no spaces in the name
9 | Main document file, which includes all documentation parts and is rendered as index.html for the web
10 | AsciiDoc source file which can include images
11 | Target folder with the generated HTML output after mvn clean package has been performed
All images in the entire manual share the same namespace; it is therefore best practice to use unique identifiers for images.
To include an image file, make sure that it resides in the 'images/' directory relative to the document you’re including it within. Then use the following syntax for inclusion in the document:
.This is a caption of the image
image::docs/02_opennms-logo.png[]
Which is rendered as:
The image path for the images you include is relative to the *.adoc source file where you use the image.
6.10. Code Snippets
You can include code snippets, configuration files, or source code files in the documentation. Syntax highlighting can be enabled by providing the appropriate language parameter; this works for both source code and configuration.
6.10.1. Explicitly defined in the document
Be careful to use this kind of code snippet as sparingly as possible: code becomes obsolete very quickly, and archaic usage practices are detrimental.
If you do wish to include snippets, use the following method:
<service name="DNS" interval="300000" user-defined="false" status="on">
<parameter key="retry" value="2" />
<parameter key="timeout" value="5000" />
<parameter key="port" value="53" />
<parameter key="lookup" value="localhost" />
<parameter key="fatal-response-codes" value="2,3,5" /><!-- ServFail, NXDomain, Refused -->
<parameter key="rrd-repository" value="/opt/opennms/share/rrd/response" />
<parameter key="rrd-base-name" value="dns" />
<parameter key="ds-name" value="dns" />
</service>
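In the AsciiDoc source, such a snippet is wrapped in a source block with the language parameter, roughly like this sketch (the XML content is abbreviated here):
[source,xml]
----
<service name="DNS" interval="300000" user-defined="false" status="on">
  <parameter key="retry" value="2" />
  ...
</service>
----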
If there’s no suitable syntax highlighter for the code used, just omit the language: [source].
Currently the following syntax highlighters are enabled:
-
Bash
-
Groovy
-
Java
-
JavaScript
-
Python
-
XML
For other highlighters that could be added see https://code.google.com/p/google-code-prettify/.
6.10.2. Included from an example file
You can include source or configuration from an external file. In this way you can provide a working example configuration, maintaining the documentation and the example at the same time. The procedure and rules are the same as with images: the path is relative to the *.adoc file which includes the external file.
[source,xml]
----
include::../configs/wmi-config.xml[]
----
This is how it’s rendered:
<?xml version="1.0"?>
<wmi-config retry="2" timeout="1500"
username="Administrator" domain="WORKGROUP" password="password">
</wmi-config>
6.10.3. Include parts of a file
If you want to include just a specific segment of a large configuration file, you can assign tags that indicate to AsciiDoc the section that is to be included. In this example just the service definition of the ICMP monitor should be included.
In the 'poller-configuration.xml' file, tag the section in the following manner:
...
<rrd step="300">
<rra>RRA:AVERAGE:0.5:1:2016</rra>
<rra>RRA:AVERAGE:0.5:12:1488</rra>
<rra>RRA:AVERAGE:0.5:288:366</rra>
<rra>RRA:MAX:0.5:288:366</rra>
<rra>RRA:MIN:0.5:288:366</rra>
</rrd>
<!-- # tag::IcmpServiceConfig[] -->
<service name="ICMP" interval="300000" user-defined="false" status="on">
<parameter key="retry" value="2" />
<parameter key="timeout" value="3000" />
<parameter key="rrd-repository" value="/opt/opennms/share/rrd/response" />
<parameter key="rrd-base-name" value="icmp" />
<parameter key="ds-name" value="icmp" />
</service>
<!-- # end::IcmpServiceConfig[] -->
<service name="DNS" interval="300000" user-defined="false" status="on">
<parameter key="retry" value="2" />
<parameter key="timeout" value="5000" />
<parameter key="port" value="53" />
...
[source,xml]
----
include::../configs/poller-configuration.xml[tags=IcmpServiceConfig]
----
<service name="ICMP" interval="300000" user-defined="false" status="on">
<parameter key="retry" value="2" />
<parameter key="timeout" value="3000" />
<parameter key="rrd-repository" value="/opt/opennms/share/rrd/response" />
<parameter key="rrd-base-name" value="icmp" />
<parameter key="ds-name" value="icmp" />
</service>
Spaces and tabs are taken from the original file.
6.11. Cheat Sheets and additional hints
The documentation uses the AsciiDoc format. A number of guides and cheat sheets are available online that will help you get started with AsciiDoc and gain further familiarity with the format; see the AsciiDoc homepage referenced at the beginning of this chapter.
6.12. Migrating content from project wiki
The project wiki contains much information that ought to be migrated to the official documentation set. To help with this effort, we have a wiki template which informs readers of articles that are tagged for migration to the official docs, or that have already been migrated. When you identify an article in the OpenNMS wiki whose information should be migrated (either in its entirety, or just individual sections), use the following process.
-
If you do not already have a wiki account, request one before getting started. Your request must be approved by a wiki admin. If you don’t get approved within a day, send a note to the opennms-devel mailing list or on the OpenNMS Development chat channel.
-
Create an issue in the project issue tracker, in project NMS. Note the issue number; you will use it below.
-
After logging in to the wiki, visit the article whose content should be migrated.
-
Click on the Edit Source link at the top of the article view.
-
Add text like the following to the top of the article source editor:
{{OfficialDocs | scope=article | guide=admin | issue=NMS-9926 | date=March 2018 | completed=false}}
-
The value of the scope attribute must be either article, if the entire article should be migrated, or section, if only specific section(s) should be migrated. When using scope=section, it is fine to use this template multiple times in a single article.
-
The value of the guide attribute must be one of admin, development, install, or user. If the information in an article should be migrated to multiple official guides, use multiple instances of the {{OfficialDocs}} template; try to target these by section when possible.
-
The value of the issue parameter must be a valid issue ID in the project issue tracker, and will become a live link.
-
The value of the date parameter should be the month and year when the tag was added, e.g. March 2018.
-
The completed parameter is optional; it is assumed to be false if omitted, or true if its value is either true or yes.
(Figure: OfficialDocs template usage)
-
Enter an edit summary such as Tagged for migration to official docs, NMS-12345 and click Show preview
-
After verifying that your changes render as expected (see image), click Save changes.
(Figure: OfficialDocs wiki template on an article pending migration)
-
Migrate the information, making sure to follow the guidelines laid out earlier in this section; do not just copy and paste, and watch out for obsolete information. If you need help, contact the developers through one of the methods mentioned above.
-
Once the migration is complete and the issue is closed, edit the wiki article again and change completed=false to completed=true.
-
The rendering of the template will change to indicate that the migration has been completed.
(Figure: OfficialDocs wiki template on an article whose migration is completed)
Adding the {{OfficialDocs}} template to an article will implicitly add that article to a pair of wiki categories:
-
Migration to official docs pending or Migration to official docs completed, according to the value of the completed attribute
-
Migrate to X guide, according to the value of the guide attribute
7. AMQP Integration
The AMQP Integration allows external systems to communicate with the event bus of OpenNMS Horizon and receive alarms via the AMQP protocol.
AMQP is a standard messaging protocol supported by a number of brokers, including ActiveMQ and Qpid.
The integration is written using Camel + OSGi and has the following components:
-
Event Forwarder
-
Event Receiver
-
Alarm Northbounder
Custom filtering (i.e. which events to forward) and transformations (i.e. how the events are represented in the messages) can be used in each of the components; generic implementations are provided by default.
Each component can be configured and set up independently, i.e. you can choose to only forward alarms.
7.1. Event Forwarder
The event forwarder listens for all events on the internal event bus of OpenNMS Horizon. Events from the bus are sent to a Camel processor, which can filter or transform these, before being sent to the AMQP endpoint.
The event forwarder exposes the following properties via the org.opennms.features.amqp.eventforwarder
pid:
Property | Default | Description
---|---|---
connectionUrl | amqp://localhost:5672 | Used by the JmsConnectionFactory. See AMQP for details.
username | guest | Username
password | guest | Password
destination | amqp:topic:opennms-events | Target queue or topic. See AMQP for details.
processorName | default-event-forwarder-processor | Named Camel processor to use
The default processor, the default-event-forwarder-processor
, marshalls events to XML and does not perform any filtering.
This means that when enabled, all events will be forwarded to the AMQP destination with XML strings as the message body.
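For illustration, an external system could consume these XML payloads over AMQP with plain JMS. The following is a minimal, standalone sketch, assuming the Qpid JMS client is available and the default connectionUrl, credentials and destination from the table above; it is not part of the OpenNMS code base:
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.qpid.jms.JmsConnectionFactory;

// Minimal consumer sketch for events forwarded by the default processor.
public class ForwardedEventConsumer {

    public static void main(String[] args) throws Exception {
        // Defaults from the property table above: amqp://localhost:5672, guest/guest
        JmsConnectionFactory factory =
                new JmsConnectionFactory("guest", "guest", "amqp://localhost:5672");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // The default destination is the topic 'opennms-events'
            MessageConsumer consumer = session.createConsumer(session.createTopic("opennms-events"));
            while (true) {
                // The default processor marshalls each event to an XML string
                TextMessage message = (TextMessage) consumer.receive();
                System.out.println(message.getText());
            }
        } finally {
            connection.close();
        }
    }
}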
7.1.1. Setup
Start by logging into a Karaf shell.
Update the properties with your deployment specific values:
config:edit org.opennms.features.amqp.eventforwarder
config:property-set connectionUrl amqp://localhost:5672
config:property-set destination amqp:topic:opennms-events
config:property-set processorName default-event-forwarder-processor
config:update
Install the feature:
feature:install opennms-amqp-event-forwarder
7.1.2. Debugging
You can get detailed information on the Camel route using:
camel:route-info forwardEvent
7.2. Event Receiver
The event receiver listens for messages from an AMQP target and forwards them onto the internal event bus of OpenNMS Horizon. Messages are sent to a Camel processor, which can filter or transform these, before being sent onto the event bus.
The event receiver exposes the following properties via the org.opennms.features.amqp.eventreceiver
pid:
Property | Default | Description
---|---|---
connectionUrl | amqp://localhost:5672 | Used by the JmsConnectionFactory. See AMQP for details.
username | guest | Username
password | guest | Password
source | amqp:queue:opennms-events | Source queue or topic. See AMQP for details.
processorName | default-event-receiver-processor | Named Camel processor to use
The default processor, the default-event-receiver-processor, expects the message bodies to contain XML strings, which it unmarshalls to events.
7.2.1. Setup
Start by logging into a Karaf shell.
Update the properties with your deployment specific values:
config:edit org.opennms.features.amqp.eventreceiver
config:property-set connectionUrl amqp://localhost:5672
config:property-set source amqp:queue:opennms-events
config:property-set processorName default-event-receiver-processor
config:update
Install the feature:
feature:install opennms-amqp-event-receiver
7.2.2. Debugging
You can get detailed information on the Camel route using:
camel:route-info receiveEvent
7.3. Alarm Northbounder
The alarm northbounder listens for all northbound alarms. Alarms are sent to a Camel processor, which can filter or transform these, before being sent to the AMQP endpoint.
The alarm northbounder exposes the following properties via the org.opennms.features.amqp.alarmnorthbounder
pid:
Property | Default | Description
---|---|---
connectionUrl | amqp://localhost:5672 | Used by the JmsConnectionFactory. See AMQP for details.
username | guest | Username
password | guest | Password
destination | amqp:topic:opennms-alarms | Target queue or topic. See AMQP for details.
processorName | default-alarm-northbounder-processor | Named Camel processor to use
The default processor, the default-alarm-northbounder-processor
, converts the alarms to a string and does not perform any filtering.
This means that when enabled, all alarms will be forwarded to the AMQP destination with strings as the message body.
7.3.1. Setup
Start by logging into a Karaf shell.
Update the properties with your deployment specific values:
config:edit org.opennms.features.amqp.alarmnorthbounder
config:property-set connectionUrl amqp://localhost:5672
config:property-set destination amqp:topic:opennms-alarms
config:property-set processorName default-alarm-northbounder-processor
config:update
Install the feature:
feature:install opennms-amqp-alarm-northbounder
7.3.2. Debugging
You can get detailed information on the Camel route using:
camel:route-info forwardAlarm
7.4. Custom Processors
If your integration requires specific filtering and/or formatting, you can write your own processor by implementing the org.apache.camel.Processor interface.
For example, we can implement a custom processor used for event forwarding:
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.opennms.netmgt.xml.event.Event; // OpenNMS event model (assumed import)

public class MyEventProcessor implements Processor {

    @Override
    public void process(final Exchange exchange) throws Exception {
        final Event event = exchange.getIn().getBody(Event.class);

        // Filtering: stop the route if this event should not be forwarded
        if (!shouldForward(event)) {
            exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
            return;
        }

        // Transforming: replace the message body with a custom DTO
        // (shouldForward(), toDTO() and MyDTO are user-supplied helpers, not shown here)
        MyDTO eventAsDTO = toDTO(event);
        exchange.getIn().setBody(eventAsDTO, MyDTO.class);
    }
}
In order to use the processor, package it as a bundle, and expose it to the OSGi service registry using:
<bean id="myEventProcessor" class="org.opennms.integrations.evilcorp.MyEventProcessor" />
<service id="myEventProcessorService" ref="myEventProcessor" interface="org.apache.camel.Processor">
<service-properties>
<entry key="name" value="evilcorp-event-forwarder-processor"/>
</service-properties>
</service>
Once your bundle is loaded in the Karaf container, you can update the configuration to refer to your processor with:
config:edit org.opennms.features.amqp.eventforwarder
config:property-set processorName evilcorp-event-forwarder-processor
config:update
If the event forwarder feature was already started, it should automatically restart and start using the new processor. Otherwise, you can start the feature with:
feature:install opennms-amqp-event-forwarder
8. Design and Style Guidelines
8.1. Jasper Report Guideline
Building and contributing JasperReports is one way to contribute to the project. To make reports easier to maintain and style, the following layout guidelines can be used to achieve a similar and more consistent report layout.
The following formatting can be applied:
Type | Convention
---|---
Date | yyyy/MM/dd HH:mm:ss
Report Range | Report Begin: ${startDate} Report End: ${endDate}
Paging | Page ${current} of ${total}
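As an illustration, the date convention above might be applied in a report's JRXML roughly like this sketch (element position, size and the expression are placeholders):
<textField pattern="yyyy/MM/dd HH:mm:ss">
    <reportElement x="0" y="0" width="160" height="20"/>
    <textFieldExpression><![CDATA[new java.util.Date()]]></textFieldExpression>
</textField>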
Based on this template definition, there exists a GitHub repository which contains a JasperReport template.