Red Hat

Hawkular Blog

Hawkular APM: The Future

19 April 2017, by Gary Brown

Since October last year, the Hawkular APM project has been actively working on supporting, and contributing to, the OpenTracing standard.

We began by implementing our own Java OpenTracing provider, followed by a Node.js/Javascript implementation. We have also been active in the OpenTracing community, developing Java framework integrations and more recently a Java Agent that can be used to automatically install those framework integrations without any changes to the application code.

However, we realised that the core model used in Hawkular APM (fragments of call traces) was not well aligned with the span-based model used in OpenTracing. This made it more complicated to implement client providers, as they needed to build the call trace fragments from multiple spans, and it meant the information stored in the server was not in the form expected by users of the OpenTracing API.

Therefore, at the beginning of this year, we made the decision to change the core model used by Hawkular APM to be compatible with the model used by the OpenTracing API - essentially providing an "OpenTracing native" solution.

During the initial design meetings to discuss the proposed changes, we became aware of a blog post from Uber regarding the future of their Jaeger project. Up until that point Jaeger used the Zipkin project to provide its backend tracing solution.

The new version of Jaeger announced in that post provides very similar functionality to Zipkin, focused on visualising individual traces. It does not have the aggregated views currently supported in Hawkular APM; however, from discussions with the Jaeger project, we know they are keen to be able to provide aggregated views.

Therefore we have decided that, rather than refactoring the Hawkular APM project’s model to be more OpenTracing compatible, it makes more sense to collaborate on the Jaeger project.

Figure 1: Individual trace captured from the Jaeger HotRod example

The current Hawkular APM codebase will still be available as an OpenTracing solution - however our intention is to focus more effort on the Jaeger project, initially with the aim of making it easy to use within OpenShift, and then adding aggregated views.

CloudNativeCon summary

07 April 2017, by Juraci Paixão Kröhling

CloudNativeCon happened last week in Berlin and Hawkular APM was represented there with a talk, "Operational and Business Insights from Tracing Distributed Microservices". Here’s a short summary of the event, with a focus on OpenTracing.


This event was co-located with KubeCon and, as you would expect, pretty much everything revolved around Kubernetes. There were quite a few tracing vendors with booths and/or in the audience: Instana, DataDog, Dynatrace, Sysdig, CA Technologies, among others. It was a great opportunity to talk to them, watch each other’s demos, exchange ideas and invite them to the OpenTracing community. A few of them have components similar in purpose to some that we are developing as part of "opentracing-contrib", which would be a perfect opportunity for collaboration!

After the talk, we were able to answer some questions about OpenTracing and Hawkular APM. Those with good questions got "data containers and books". There was also quite a lot of interest in OpenShift: most people knew about it already, and among those who didn’t, most seemed impressed by its user interface once it was presented to them.

On the last day, there was a Distributed Tracing Salon: in this workshop, people were able to get started with OpenTracing by using a demo application written in Go and adding instrumentation to it. The final goal was to see the traces on Zipkin’s user interface, and I believe most participants reached the goal! Those who didn’t could continue working on it after the workshop, as the source code is available on GitHub.

It was also a great opportunity to meet some OpenTracing members from LightStep, and I made sure to smuggle a few stickers and T-Shirts back to Munich, to be distributed during meetup talks there. All the videos from this conference, including the ones from the OpenTracing track, can be watched on YouTube.

Some interesting topics were brought up during the conference, a few of them more than once:

  • What’s the overhead? Hint: as OpenTracing is a specification and API, the overhead depends on the concrete implementation.

  • Multiplexer: sending traces to more than one backend seems to be a common use-case.

  • Should the "wire transport" be part of the specification?

  • Should an export format, like pcap, be part of the specification?

  • What can be used for "reacting" on data (alerting)? Hint: Hawkular Alerts is a great alternative and can be used as a standalone solution!

And you, what’s your opinion on the questions related to the specification? Join the OpenTracing community and make your opinion count!

Alerts and Notifications for Elasticsearch using Hawkular Alerting

06 April 2017, by Lucas Ponce

Elasticsearch Alerter

Hawkular Alerting includes a new Alerter capable of querying Elasticsearch servers and representing documents as Hawkular Events.

Managing Elasticsearch documents as Events allows Hawkular Alerting to define rich and complex conditions using EventConditions.

Triggers tagged with Elasticsearch are used to schedule periodic queries to an Elasticsearch server. The trigger’s context defines the interval between queries, as well as additional query filtering and mapping options used by the alerter to convert documents into events.

For example,

    "tags": {
      "Elasticsearch": "Localhost instance" (1)
    },
    "context": {
      "timestamp": "@timestamp", (2)
      "interval": "30s", (3)
      "index": "log",  (4)
      "filter": "{\"match\":{\"app\":\"AppA\"}}", (5)
      "mapping": "level:category,@timestamp:ctime,message:text,app:dataId,index:tags" (6)
    }
  1. Elasticsearch tag is used by the alerter to schedule a query from this trigger

  2. Elasticsearch field "@timestamp" will be used to fetch documents in regular intervals

  3. Interval between queries will be 30 seconds

  4. Documents are fetched from log index

  5. Optional filtering to refine the query

  6. Mapping between Elasticsearch document fields and Hawkular Events
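To make the mapping string concrete, here is a hypothetical Python sketch of how such an "esField:eventField" mapping could be applied to an Elasticsearch document (Hawkular Alerting itself is Java; the helper function and sample document here are invented purely to illustrate the semantics):

```python
# Hypothetical sketch: apply an "esField:eventField" mapping string to an
# Elasticsearch document, producing the fields of a Hawkular Event.

def apply_mapping(mapping, doc, index):
    """Map Elasticsearch document fields to Hawkular Event fields."""
    event = {}
    for pair in mapping.split(","):
        es_field, event_field = pair.split(":")
        if es_field == "index":
            # The special "index" source maps the index name itself.
            event[event_field] = index
        else:
            event[event_field] = doc.get(es_field)
    return event

mapping = "level:category,@timestamp:ctime,message:text,app:dataId,index:tags"
doc = {
    "level": "ERROR",
    "@timestamp": "2017-04-06T10:00:00Z",
    "message": "Message 6167 from Security",
    "app": "AppA",
}
event = apply_mapping(mapping, doc, index="log")
# event now carries category, ctime, text, dataId and tags fields
```

With this mapping, the document’s level becomes the event’s category and the app name becomes the dataId, which is what the trigger conditions below match on.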

The full reference of the Elasticsearch Alerter can be found under Elasticsearch Alerter.

Elasticsearch Plugin

Hawkular Alerting also includes a new Action Plugin capable of transforming Hawkular Events and Alerts into Elasticsearch documents.

The Elasticsearch Plugin properties let you define how an Event or Alert is transformed into a document, and under which index and type it will be stored.

For example,

      "actionPlugin": "elasticsearch",
      "actionId": "write-partial-alert",
      "properties": {
        "index": "alerts_summary",  (1)
        "timestamp_pattern": "yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ",   (2)
        "transform": "{\"tenantId\":\"tenant\",\"ctime\":\"timestamp\",\"text\":\"trigger\",\"context\":{\"interval\":\"fetch-interval\"},\"evalSets\":\"details\"}" (3)
      }
  1. Index used to store documents transformed by this plugin

  2. This timestamp_pattern indicates that all timestamp fields of Events/Alerts should be transformed into strings using this pattern

  3. JSON → JSON mapping to convert Events/Alerts into documents using JOLT Shiftr
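The transform above reads as a Shiftr-style spec: keys are paths in the source Alert, values are field names in the output document. Here is a hypothetical Python sketch of that idea, a simplified approximation for flat specs like this one (the sample alert fields are invented; the real plugin uses the JOLT library in Java):

```python
# Hypothetical sketch of a Shiftr-style transform: spec keys are source
# paths in the Alert, spec values are target field names in the document.

def shift(spec, source):
    """Build an output document by moving source fields to target names."""
    doc = {}
    for key, target in spec.items():
        if isinstance(target, dict):
            # Nested spec: recurse into the corresponding source object.
            doc.update(shift(target, source.get(key, {})))
        else:
            doc[target] = source.get(key)
    return doc

spec = {
    "tenantId": "tenant",
    "ctime": "timestamp",
    "text": "trigger",
    "context": {"interval": "fetch-interval"},
    "evalSets": "details",
}
alert = {
    "tenantId": "my-tenant",
    "ctime": 1491472800000,
    "text": "High WARN rate",
    "context": {"interval": "30s"},
    "evalSets": [],
}
document = shift(spec, alert)
# the alert's tenantId is stored as "tenant", its ctime as "timestamp", etc.
```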

The full reference of the Elasticsearch Plugin can be found under Elasticsearch Plugin.

Logging Example

A small but rich example can be found under Elasticsearch example.

This example simulates different log entries that are stored in Elasticsearch.

Each entry defines a log level (INFO, WARN or ERROR), a source application (AppA, AppB or AppC), and a message body that references the related subcomponent (Security, Frontend or Backend).

These logs are stored in Elasticsearch with the following format:

    "message":"Message 6167 from Security"

In the example, a first trigger is defined to fetch documents only for AppA; to do this, we define a match filter in the context:

    "context": {
      "timestamp": "@timestamp",
      "filter": "{\"match\":{\"app\":\"AppA\"}}", (1)
      "interval": "30s",
      "index": "log",
      "mapping": "level:category,@timestamp:ctime,message:text,app:dataId,index:tags" (2)
    }
  1. From all possible documents, only those from AppA are of interest to this trigger

  2. Note that the app field is used as the dataId for Events

Once events are mapped, we can define an EventCondition to detect any ERROR log related to Backend components.

          {
            "type": "EVENT",
            "dataId": "AppA",
            "expression": "category == 'ERROR',text contains 'Backend'"
          }

In a similar way, a second trigger is defined in the example to fetch documents for AppB.

    "context": {
      "filter": "{\"match\":{\"app\":\"AppB\"}}" (1)
    }
  1. From all possible documents, only those from AppB are of interest to this trigger

On this second trigger, we want to detect a suspiciously high number of WARN messages in the log. There are several ways to model this; in the example we use a Dampening:

      "dampenings": [
        {
          "triggerMode": "FIRING",
          "evalTrueSetting": 3,
          "evalTotalSetting": 10
        }
      ],
      "conditions": [
        {
          "type": "EVENT",
          "dataId": "AppB",
          "expression": "category == 'WARN'"
        }
      ]

So, this second trigger will alert when at least 3 of the last 10 processed events are WARN messages.
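The "3 true out of 10" dampening semantics can be pictured as a sliding window over the condition evaluations. Here is a hypothetical Python sketch of that idea (the function name and inputs are invented for illustration; this is not Hawkular Alerting code):

```python
from collections import deque

# Hypothetical sketch of the dampening above: fire once at least
# eval_true of the last eval_total condition evaluations were true
# (here: 3 WARN events out of the last 10 processed).

def fires(categories, eval_true=3, eval_total=10):
    """Return True once eval_true WARNs occur within eval_total events."""
    window = deque(maxlen=eval_total)
    for category in categories:
        window.append(category == "WARN")
        if sum(window) >= eval_true:
            return True
    return False
```

For example, a stream like INFO, WARN, INFO, WARN, WARN would fire (3 WARNs well within a window of 10), while two early WARNs followed by a long run of INFOs would not, because the old evaluations slide out of the window.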

Finally, the example defines several notification methods: sending an email to administrators and writing the fired alerts back into Elasticsearch.

Elasticsearch Integration

As a result, we can combine all the logs and alerts in custom searches and present them in a Kibana dashboard.

Figure: Kibana dashboard showing Hawkular Alerting data

Deploying Hawkular OpenShift Agent Easily

05 April 2017, by John Mazzitelli

The Hawkular OpenShift Agent has its Docker images released on Docker Hub, so it is easy to deploy to your OpenShift nodes.

You simply need to run a few "oc" commands to create the agent and its configmap from the provided OpenShift templates.

To make deploying the agent easier - especially for someone just wanting to try it out - a new script has been provided.

Now all you need to do in order to deploy HOSA is run this one command:

wget && sh ./

This will deploy the latest agent version in your OpenShift node in the "default" OpenShift project. If you wish to install a different version, use the DOCKER_VERSION environment variable like this:

wget && DOCKER_VERSION=1.4.0.Final sh ./

Note that this script will ask you for your OpenShift credentials if you are not yet logged in - it runs "oc login" which requires you to enter your username and password at the prompts. You may optionally log in ahead of time by running the "oc login" command prior to running this deploy script.

Deploying the Examples

Now that you have the Hawkular OpenShift Agent deployed, you can just as easily deploy some of the examples to see the agent at work. There is a similar deploy script located in the examples directory. You pass it a command line argument whose value is the name of an example. The names of the examples are found in the examples directory - names such as jolokia-wildfly-example and multiple-endpoints-example.

All you need to do in order to deploy an example is run this one command (note that this deploys the jolokia-wildfly-example but you can pass in any example name you want):

wget && sh ./ jolokia-wildfly-example

This will deploy the latest example version in your OpenShift node in your user’s default OpenShift project. If you wish to install a different version, use the DOCKER_VERSION environment variable. If you wish to install your example in a different OpenShift project, use the EXAMPLE_NAMESPACE environment variable:

wget && DOCKER_VERSION=1.4.0.Final EXAMPLE_NAMESPACE=myproject sh ./ jolokia-wildfly-example

If you have already logged into OpenShift, the script will not ask you to log in again; it only prompts for credentials if you have not yet done so.

Hawkular Metrics 0.26.0 - Release

04 April 2017, by Stefan Negrea

I am happy to announce release 0.26.0 of Hawkular Metrics. This release is anchored by performance improvements and enhancements.

Here is a list of major changes:

Hawkular Alerting - Included

Hawkular Metrics Clients

Release Links

A big "Thank you" goes to John Sanda, Matt Wringe, Michael Burman, Joel Takvorian, Jay Shaughnessy, Lucas Ponce, and Heiko Rupp for their project contributions.

Older posts:

RSS Feed


© 2016 | Hawkular is released under Apache License v2.0