
Hawkular Blog

Distributed Tracing with Apache Camel and OpenTracing

24 March 2017, by Gary Brown

In this post we will discuss how the OpenTracing standard is being supported in the upcoming release of Apache Camel (version 2.19).

OpenTracing is the vendor-neutral open standard for distributed tracing, supported across many languages and with a growing number of tracer implementations and framework integrations. To illustrate the vendor neutrality, we will demonstrate two examples, each using a different OpenTracing-compliant Tracer.

Figure 1: Distributed trace for the Camel Loan Broker JMS example

How to Instrument a Camel Application

The first thing to note is that it is not necessary to modify the Camel routes to have them instrumented using an OpenTracing-compliant Tracer. The only requirement is to add some relevant dependencies and initialize the tracer on the CamelContext.

This can be achieved in three ways:

Explicit initialization

The dependencies required to explicitly initialize OpenTracing are

    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-opentracing</artifactId>
    </dependency>

and any other dependencies specific to the OpenTracing-compliant Tracer implementation being used.

Then either add the initialization of the tracer to the Spring configuration,

  <bean id="tracer" class="..."/>

  <bean id="ottracer" class="org.apache.camel.opentracing.OpenTracingTracer">
    <property name="tracer" ref="tracer"/>
  </bean>

or create an observer on the CamelContextStartingEvent to trigger the initialization of the tracer,

@ContextName("....")
public class MyApplication {
    public void setupCamel(@Observes CamelContextStartingEvent event) {
	// Location for initializing the Tracer if not implicitly loaded
        OpenTracingTracer ottracer = new OpenTracingTracer();
        ottracer.init(event.getContext());
    }
}

Spring Boot Annotation

The dependencies required to use OpenTracing in a Spring Boot application are simply

    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-opentracing-starter</artifactId>
    </dependency>

and any other dependencies specific to the OpenTracing-compliant Tracer implementation being used. Then simply annotate the application with @CamelOpenTracing:

import org.apache.camel.opentracing.starter.CamelOpenTracing;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@CamelOpenTracing
public class MyApplication {
    public static void main(String[] args) {
        // Location for initializing the Tracer if not implicitly loaded
        SpringApplication.run(MyApplication.class, args);
    }
}

Java Agent

This approach uses a Java agent to handle the initialization, so it does not require any code or configuration changes. The dependencies required are:

    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-opentracing</artifactId>
    </dependency>
    <dependency>
      <groupId>io.opentracing.contrib</groupId>
      <artifactId>opentracing-agent</artifactId>
      <scope>provided</scope>
    </dependency>

Changes are also required in the pom.xml to download the opentracing-agent.jar; the application is then executed with the Java agent supplied on the command line.
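
As a rough sketch, the agent jar can be copied into place with the maven-dependency-plugin (the execution shown here is illustrative; the version is resolved from the opentracing-agent dependency declared above):

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
      <execution>
        <id>copy-opentracing-agent</id>
        <phase>package</phase>
        <goals>
          <goal>copy</goal>
        </goals>
        <configuration>
          <artifactItems>
            <artifactItem>
              <groupId>io.opentracing.contrib</groupId>
              <artifactId>opentracing-agent</artifactId>
              <destFileName>opentracing-agent.jar</destFileName>
            </artifactItem>
          </artifactItems>
          <outputDirectory>${project.build.directory}/lib</outputDirectory>
        </configuration>
      </execution>
    </executions>
  </plugin>

The application is then started with the agent on the command line, for example (the jar name is illustrative):

java -javaagent:target/lib/opentracing-agent.jar -jar target/myapp.jar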

Instrumenting Some Examples

This section will show how two examples can be instrumented using two different OpenTracing-compliant Tracer implementations.

The first is a new example, introduced to demonstrate the three ways a Camel application can be instrumented with OpenTracing, which we will update to use the Zipkin project.

The second shows how an existing example (Loan Broker JMS) can be instrumented using the Hawkular APM project.

Camel OpenTracing Example

This section will show how the new example, which uses a simple logging tracer implementation by default, can be changed to use Zipkin.

First edit each of the client/service1/service2 pom.xml files to:

  • Add properties

  <properties>
    ...
    <version.brave.opentracing>0.18.3</version.brave.opentracing>
    <version.zipkin.reporter>0.6.12</version.zipkin.reporter>
  </properties>

  • Replace the camel-example-opentracing-loggingtracer dependency with:

    <dependency>
      <groupId>io.opentracing.brave</groupId>
      <artifactId>brave-opentracing</artifactId>
      <version>${version.brave.opentracing}</version>
    </dependency>
    <dependency>
      <groupId>io.zipkin.reporter</groupId>
      <artifactId>zipkin-sender-urlconnection</artifactId>
      <version>${version.zipkin.reporter}</version>
    </dependency>

Then the following Java code changes are required:

  • Common code for ClientApplication, Service1Application and Service2Application

import brave.opentracing.BraveTracer;
import io.opentracing.Tracer;
import io.opentracing.contrib.global.GlobalTracer;
import zipkin.Span;
import zipkin.reporter.AsyncReporter;
import zipkin.reporter.Reporter;
import zipkin.reporter.urlconnection.URLConnectionSender;

....

    public static void initTracer() {
        System.out.println("Using Zipkin Tracer");
        String zipkinServerUrl = String.format("%s/api/v1/spans", System.getenv("ZIPKIN_SERVER_URL"));
        Reporter<Span> reporter = AsyncReporter.builder(URLConnectionSender.create(zipkinServerUrl)).build();
        // localServiceName identifies this application in Zipkin; use "service1"
        // or "service2" instead of "client" in the respective applications
        brave.Tracer tracer = brave.Tracer.newBuilder().localServiceName("client").reporter(reporter).build();
        GlobalTracer.register(BraveTracer.wrap(tracer));
    }

  • Invoking the initTracer method

    • In ClientApplication.java it should be called at the end of the setupCamel method.

    • In Service1Application.java it should be called before the SpringApplication.run method (see the sketch after this list).

    • In Service2Application.java it should be called as the first statement in the main method.
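
For example, in Service1Application the result would look roughly like this (a sketch; the annotations follow the Spring Boot approach shown earlier, and only the initTracer() call is the relevant addition):

@SpringBootApplication
@CamelOpenTracing
public class Service1Application {
    public static void main(String[] args) {
        initTracer(); // register the Zipkin-backed GlobalTracer before Camel starts
        SpringApplication.run(Service1Application.class, args);
    }
    // ... initTracer() as defined in the common code above ...
}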

Once the changes have all been made, compile the example using mvn compile from the examples/camel-example-opentracing folder. The next step is to start the Zipkin server:

docker run -p 9411:9411 openzipkin/zipkin

Then follow the instructions in the README.md for running the three applications. Before running each command, you will need to run:

export ZIPKIN_SERVER_URL=http://localhost:9411

After the example has executed, open the Zipkin UI in a browser. Select the client service and you should see some traces:

Figure 2: Zipkin trace instance showing communication between client, service1 and service2

Camel Loan Broker JMS Example

The standard example can be found in the examples/camel-example-loan-broker-jms folder within the Apache Camel source code. To enable tracing against the Hawkular APM server, you need to edit the pom.xml based on the differences shown here. This patch file can be used to quickly update the example.

These differences can be broken down into three main changes:

  • Added dependencies, including the camel-opentracing component, opentracing-agent and hawkular-apm (OpenTracing client and REST client publisher)

  • Retrieval of the opentracing-agent.jar, which is placed in a local lib folder for subsequent use

  • Changes to the exec-maven-plugin configuration to enable the services to be started with the opentracing-agent.jar (using mvn exec:exec instead of mvn exec:java); a sketch of this change follows the list
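
A sketch of that last change (the main class and agent path here are illustrative; the actual values are in the patch file):

  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <configuration>
      <executable>java</executable>
      <arguments>
        <argument>-javaagent:lib/opentracing-agent.jar</argument>
        <argument>-classpath</argument>
        <classpath/>
        <argument>com.example.LoanBrokerServer</argument>
      </arguments>
    </configuration>
  </plugin>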

To try out the example, it is first necessary to start the Hawkular APM server:

docker run -p 8180:8080 jboss/hawkular-apm-server-dev

The next step is to open up two command windows and set up the environment variables in each:

export HAWKULAR_APM_USERNAME=admin
export HAWKULAR_APM_PASSWORD=password
export HAWKULAR_APM_URI=http://localhost:8180

Then run the following in the first window:

mvn compile
mvn exec:java -PQueue.LoanBroker

Once it has fully started, run the following in the second window:

mvn exec:java -PQueue.Client

After the client has finished, open a browser window, log in with username admin and password password, and view the Distributed Tracing page to see the dependencies between the JMS endpoints:

Figure 3: Service endpoint dependencies for the loan broker JMS example

Selecting the Show 1 Instance(s) Details button will show the trace instance shown in Figure 1 above.

Summary

This post has shown how OpenTracing can be used to provide distributed tracing capabilities for Camel applications. This capability will be introduced in version 2.19 of Apache Camel, but we would encourage anyone interested in this functionality to obtain a snapshot version to try it out. If you find any issues, or would like additional capabilities prior to the release, then please get in touch.

For any Apache Camel users currently using the camel-zipkin component we would encourage you to try out the new camel-opentracing component with the brave-opentracing bridge.





Adjusting sampling rates for Hawkular APM on OpenShift

22 March 2017, by Juraci Paixão Kröhling

When doing distributed tracing of a busy application, it’s a good practice to limit the amount of data being collected or stored. The techniques for that are diverse, ranging from a simple "percentage-based sampling" of the incoming requests up to complex heuristics that keep or discard traces based on the code path.

Figure: Final result

It’s also a good practice to empower the admin, or an automated monitoring system, with the capability of adjusting the sampling based on current conditions. For instance, we’d probably want to get more traces from a newly deployed version of an existing application, as we might want to compare the performance and adjust the percentage in steps. Or we might want more information from an application that we suspect is behaving badly. Similarly, if we are experiencing expected traffic peaks and need every resource available, we might enable tracing for only a small percentage of the incoming requests.

For the simple scenario, let’s call it "percentage-based sampling", we have a few alternatives for implementing it:

Application-based decision

In this scenario, we embed the logic into the application, using the percentage as the probability that a random number falls within a range. In other words, 20% sampling is a 20% chance that a random number in the range 1-10 is less than or equal to 2. Anything more complex than that would be "too much" for our scenario.
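
A minimal sketch of that check in Java (class and method names are illustrative):

import java.util.concurrent.ThreadLocalRandom;

public final class PercentageSampler {

    private final int percentage; // e.g. 20 for 20% sampling

    public PercentageSampler(int percentage) {
        this.percentage = percentage;
    }

    // Returns true for roughly "percentage" percent of the calls
    public boolean shouldSample() {
        return ThreadLocalRandom.current().nextInt(100) < percentage;
    }
}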

As hinted before, we want to externalize the percentage so that an external actor can change it. One solution is to store this percentage in an environment variable on the Deployment Configuration (DC). This will, however, roll out a new deployment after every change, which might not be desirable in some scenarios.
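
For illustration, changing such an environment variable is a one-liner, at the cost of triggering that rollout (the variable name SAMPLING_PERCENTAGE is hypothetical):

$ oc set env dc/conditional-tracer SAMPLING_PERCENTAGE=20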

Another solution is to externalize this into a Configuration Map (ConfigMap), but then we also need to let our application listen to events related to this ConfigMap. For a microservice, we might end up with a bunch of unrelated dependencies just to decide whether or not to trace a request.

OpenShift-based decision

In this scenario, we delegate the decision to our runtime environment. Similar to a simple A/B testing scenario, our deployment architecture would allow for quick changes in the routing decision. It does require a simple change to our application, but it’s a small compromise for the benefits it provides.

For that, we’ll use a single Image Stream (is) with our code, two Deployment Configurations (dc), two Services (svc) and one Route object. Each of our dc objects has a different value for an environment variable called TRACER_ENABLED. Each svc has its own dc, and the router sends a percentage of the requests to each of the svc instances. The main advantage of this approach is that the application image is the same for both the tracing-enabled and tracing-disabled scenarios.

Doing it!

Enough talk, let’s start doing it. The first step is to create an OpenShift cluster. The following commands will give us a recent version of OpenShift Origin and will deploy Hawkular APM on it:

$ oc cluster up --version=latest
$ oc create -f https://raw.githubusercontent.com/jboss-dockerfiles/hawkular-apm/master/openshift-templates/hawkular-apm-server-deployment.yml

We’ll now bootstrap a Vert.x application from scratch via Obsidian Toaster. For our example, click on "… selecting a runtime project framework", select "Vert.x" and give it a name like conditional-tracer (leave the package name as com.example and the version as 1.0.0-SNAPSHOT). In the next step, set the Vert.x version to 3.3.3 and select Vert.x Web. Scroll down the list, click Finish, and download the ZIP file.

Note
Obsidian Toaster is still in alpha. If it’s not working for some reason, we have a ZIP file ready for you to download: conditional-tracer.zip

Let’s add our Hawkular APM dependencies to our new project by adding the following lines to the pom.xml:

<dependency>
  <groupId>org.hawkular.apm</groupId>
  <artifactId>hawkular-apm-trace-publisher-rest-client</artifactId>
  <version>0.14.2.Final</version>
</dependency>
<dependency>
  <groupId>org.hawkular.apm</groupId>
  <artifactId>hawkular-apm-client-opentracing</artifactId>
  <version>0.14.2.Final</version>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-core</artifactId>
  <version>2.6.0</version>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-annotations</artifactId>
  <version>2.6.0</version>
</dependency>
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.6.0</version>
</dependency>

Note
We need to override the Jackson dependencies because of incompatibilities between the version Hawkular APM uses and the one Vert.x uses.

We now remove the src/main/fabric8/svc.yml:

$ rm src/main/fabric8/svc.yml

We’ll create two service files instead:

src/main/fabric8/no-tracer-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: conditional-tracer-disabled
  labels:
    tracer: disabled
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    group: com.example
    project: conditional-tracer
    tracer: disabled
  type: ClusterIP

src/main/fabric8/tracer-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: conditional-tracer-enabled
  labels:
    tracer: enabled
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    group: com.example
    project: conditional-tracer
    tracer: enabled
  type: ClusterIP

We now create our two dc objects:

src/main/fabric8/no-tracer-dc.yml

apiVersion: v1
kind: DeploymentConfig
metadata:
  labels:
    group: com.example
    project: conditional-tracer
    tracer: disabled
  name: conditional-tracer-disabled
spec:
  replicas: 1
  selector:
    group: com.example
    project: conditional-tracer
    tracer: disabled
  template:
    metadata:
      labels:
        group: com.example
        project: conditional-tracer
        tracer: disabled
    spec:
      containers:
      - env:
        - name: HAWKULAR_APM_URI
          value: http://hawkular-apm
        - name: HAWKULAR_APM_USERNAME
          value: admin
        - name: HAWKULAR_APM_PASSWORD
          value: password
        - name: TRACER_ENABLED
          value: "false"
        - name: JAVA_APP_DIR
          value: /deployments
        - name: JAVA_MAIN_CLASS
          value: io.vertx.core.Launcher
        name: java-exec
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      dnsPolicy: ClusterFirst
  triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
      - java-exec
      from:
        kind: ImageStreamTag
        name: conditional-tracer:latest
    type: ImageChange

src/main/fabric8/tracer-dc.yml

apiVersion: v1
kind: DeploymentConfig
metadata:
  labels:
    group: com.example
    project: conditional-tracer
    tracer: enabled
  name: conditional-tracer-enabled
spec:
  replicas: 1
  selector:
    group: com.example
    project: conditional-tracer
    tracer: enabled
  template:
    metadata:
      labels:
        group: com.example
        project: conditional-tracer
        tracer: enabled
    spec:
      containers:
      - env:
        - name: HAWKULAR_APM_URI
          value: http://hawkular-apm
        - name: HAWKULAR_APM_USERNAME
          value: admin
        - name: HAWKULAR_APM_PASSWORD
          value: password
        - name: TRACER_ENABLED
          value: "true"
        - name: JAVA_APP_DIR
          value: /deployments
        - name: JAVA_MAIN_CLASS
          value: io.vertx.core.Launcher
        name: java-exec
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
      dnsPolicy: ClusterFirst
  triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
      - java-exec
      from:
        kind: ImageStreamTag
        name: conditional-tracer:latest
    type: ImageChange

And finally, we change our route.yml to alternate the requests among the services:

src/main/fabric8/route.yml

apiVersion: v1
kind: Route
metadata:
  name: conditional-tracer
spec:
  alternateBackends:
  - kind: Service
    name: conditional-tracer-disabled
    weight: 80
  to:
    kind: Service
    name: conditional-tracer-enabled
    weight: 20

The only remaining change is within our Java code: use a NoopTracer by default, switching to a concrete tracer (APMTracer) if the environment variable TRACER_ENABLED parses to true:

src/main/java/com/example/MainVerticle.java

import io.opentracing.NoopTracerFactory;
import io.opentracing.Span;
import io.opentracing.Tracer;
import io.vertx.core.AbstractVerticle;
import org.hawkular.apm.client.opentracing.APMTracer;

public class MainVerticle extends AbstractVerticle {
    // No-op by default, so the tracing calls below are safe even when disabled
    private Tracer tracer = NoopTracerFactory.create();

    @Override
    public void start() {
        boolean tracerEnabled = Boolean.parseBoolean(System.getenv("TRACER_ENABLED"));
        if (tracerEnabled) {
            tracer = new APMTracer();
        }
        vertx.createHttpServer()
                .requestHandler((req) -> {
                    Span span = tracer.buildSpan("hello-world-request").start();
                    span.setTag("enabled", tracerEnabled);
                    req.response().end(String.format("Hello World! Are we tracing this request? %s", tracerEnabled));
                    span.finish();
                })
                .listen(8080);
    }
}

At this point, we are ready to test it! As we already have an OpenShift cluster running with our Hawkular APM server, we just need to run:

$ mvn clean fabric8:deploy

Once the build finishes running, we should see the following on OpenShift’s Web Console:

Figure: Final result

To check whether our tracing is working, we can use a script like this:

$ export URL="http://conditional-tracer-myproject.192.168.2.107.xip.io"
$ for i in `seq 1 10` ; do curl ${URL} 2>/dev/null | awk '{print $NF}' ; done | sort | uniq -c

Output similar to this should be printed:

8 false
2 true

This means that eight requests did not have their traces published, while two requests had their traces published to Hawkular APM. You can check that by logging into Hawkular APM (username: admin, password: password) and looking at the Distributed Tracing screen. There should be only two traces there.

Tip
You might be tempted to try opening the URL in your browser and hitting refresh, but you’ll notice that the OpenShift router tries to send you to the same backend service all the time. For this reason, it makes more sense to test on the command line with curl.

From this point on, we can adjust the percentage by changing the route. One way to do that is by issuing the command oc edit route conditional-tracer (a scripted alternative is sketched below), or by navigating in the UI to Applications - Routes - conditional-tracer - Actions - Edit:

Figure: Editing the route
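
As a sketch, a 50/50 split could also be applied from the command line with a patch like this (the weights are illustrative):

$ oc patch route conditional-tracer -p '{"spec":{"to":{"kind":"Service","name":"conditional-tracer-enabled","weight":50},"alternateBackends":[{"kind":"Service","name":"conditional-tracer-disabled","weight":50}]}}'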

Wrapping up

Deciding whether a transaction should be traced is a decision that should be delegated as much as possible to the underlying Tracer, possibly with data coming from different sources. Having a simple switch in the code that loads the Tracer, however, might prove invaluable to admins in certain situations. Given the simplicity of this solution, there’s no reason not to have it!

The full source code for this example can be found at github.com/jpkrohling/conditional-tracer.





Distributed Tracing Workshop and OpenTracing Collaboration

15 March 2017, by Pavol Loffay

The distributed tracing workshop is organized by the Distributed Tracing Workgroup and takes place approximately once a year. This time it took place on 18-19 February in Berlin. It was a pleasure to be there with people from various companies, sharing ideas and a common goal of improving distributed tracing.

Agenda

The agenda covered various hot topics, such as in-process context propagation, a proposal for a trace context header, the state of the OpenZipkin ecosystem, and the latest happenings in OpenTracing. There were also research presentations by Jonathan Mace from Brown University about the Baggage protocol and Baggage Buffers. The workshop also included product presentations by:

  • Instana

  • Datadog

  • Amazon X-Ray

  • Jaeger by Uber

It was great to see how other projects present data to end users and what type of higher-level analysis they focus on.

Summary

We discussed many things, and here are my main takeaways.

Key upcoming changes in OpenTracing:

  • A formal project organization called the OpenTracing Specification Council (keep reading).

  • Clarification of the distinction between logging, metrics and tracing. The discussion continues in specification/pull/46.

  • In-process context propagation. This feature will probably be the biggest change in the next OpenTracing version. It should solve the inconvenient passing of the parent context: users would just start a new span, and if a parent is available it will be automatically linked (see the sketch after this list).
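
A purely hypothetical illustration of the difference, since the final API was still under discussion at the time:

// Today: the parent span context has to be passed around explicitly.
Span child = tracer.buildSpan("child-operation")
        .asChildOf(parentSpan.context())
        .start();

// With in-process propagation: the active parent would be picked up
// automatically, with no context threaded through the call chain.
Span child2 = tracer.buildSpan("child-operation").start();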

Trace context propagation:

  • Currently there are no standard headers for passing trace context. Several tracing systems have adopted OpenZipkin’s B3 Propagation (sketched below). Standardization of the context header should help with concerns like CORS and with linking together traces between different service providers. A proposed document is available.
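
For reference, OpenZipkin’s B3 propagation carries the trace context in HTTP headers like these (the values are illustrative):

X-B3-TraceId: 463ac35c9f6413ad48485a3953bb6124
X-B3-SpanId: a2fb4a1d1a96d312
X-B3-ParentSpanId: 0020000000000001
X-B3-Sampled: 1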

Hawkular on the OpenTracing Specification Council!

I am also happy to announce that the Hawkular project is part of the OpenTracing Specification Council (OTSC). This more formal organizational structure will help with project growth and ensure that all major constituencies are involved. There will be regular calls to discuss priorities for the OpenTracing standard and project, to strategize around third-party project integrations, and to offer tie-breaking votes for issues where consensus among the larger community has proven elusive.

A big "Thank you" goes to Adrian Cole for organizing it and also to all attendees for sharing great ideas and knowledge around distributed tracing. Do not forget to join the group and be informed about upcoming events!





Hawkular Alerting: Tutorial Now Available!

14 March 2017, by Jay Shaughnessy

Hawkular Alerting is a pluggable, API-driven component for adding alerting to your application or tool. It’s one of the many Hawkular offerings for monitoring and management. To help quick-start new users, and as a reference for current users, we’ve released the first five lessons of a new tutorial, with more to follow. It’s a great way to hit the ground running.

For Git Users:

git clone https://github.com/hawkular/hawkular-alerts.git
cd hawkular-alerts/examples/tutorial

For Others:

  1. Click this link to visit Hawkular Alerting on GitHub.

  2. Click Clone or download

  3. Click Download ZIP

    unzip hawkular-alerts-master.zip
    cd hawkular-alerts/examples/tutorial

For Everyone:

Start with the README.adoc. If you don’t have an asciidoc extension in your browser, you can read the rendered README directly from GitHub. From there, just click the links in the document to proceed.

Hawkular Alerting has several deployment options: it can be deployed standalone, embedded in Hawkular Metrics, or as part of Hawkular Services. The tutorial is built around a Docker container running alerting as part of Hawkular Metrics, generating alerts from incoming metric data. The first lesson will help you get set up.

Try it and let us know how it goes! If you have any questions or want to share your thoughts, join us on IRC at #hawkular on freenode.





Hawkular Metrics 0.25.0 - Release

07 March 2017, by Stefan Negrea

I am happy to announce release 0.25.0 of Hawkular Metrics. This release is anchored by general stability improvements and enhanced query capabilities for the external metrics alerter.

Here is a list of major changes. Among them is new REST API request logging, which produces output like the following:

INFO  [org.hawkular.metrics.api.jaxrs.util.RequestLoggingFilter] (default task-49)
REST API request:
--------------------------------------
path: /metrics
segments: [metrics]
method: GET
query parameters: {type=[availability]}
Tenant: T9a116f18-28cf-41b3-8ff8-c9752ac60e26232

Hawkular Alerting - Included

Hawkular Metrics Clients

Release Links

A big "Thank you" goes to John Sanda, Matt Wringe, Michael Burman, Joel Takvorian, Jay Shaughnessy, Lucas Ponce, and Heiko Rupp for their project contributions.




