Hawkular Blog

Kubernetes and OpenShift Templates for Jaeger

25 May 2017, by Pavol Loffay

I am happy to announce that we have made it easier to deploy Jaeger on Kubernetes. This work has been carried out in a new GitHub organization, jaegertracing, established to host contributions to the Jaeger ecosystem. At the moment the organization holds repositories for Kubernetes and OpenShift templates.

Currently, we provide templates for deploying an all-in-one distribution. These are intended for demo and test purposes only, since they use an in-memory database. Be careful and do not use them in production! More sophisticated templates using a pure microservices approach and Cassandra storage will be available soon.
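To try the templates out, a single command is typically enough. The following is only a sketch, assuming the all-in-one template file names and paths used in the jaegertracing Kubernetes and OpenShift repositories; check their READMEs for the exact locations:

kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml

# or, on OpenShift:
oc process -f https://raw.githubusercontent.com/jaegertracing/jaeger-openshift/master/all-in-one/jaeger-all-in-one-template.yml | oc create -f -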

The templates have been tested against minikube and minishift using the excellent integration testing framework Arquillian Cube.

Figure 1: Jaeger deployed on OpenShift





Hawkular Services 0.37.Final

03 May 2017, by Heiko W. Rupp

Hawkular Services is a ready-to-run distribution of Hawkular Metrics with Alerts, Inventory, the WildFly agent and other components. Version 0.37.0.Final has just been released and is ready to be tested and integrated with other projects.

What’s in this release

This release includes these changes:

  • Inventory.v3 (already introduced in 0.36). This version of Inventory stores its data inside Hawkular Metrics as string metrics. See this page for technical details.

  • Postgres is no longer needed for Inventory storage

  • Agent version 1.0.0.CR2

  • The agent inside the Hawkular-services server is now the javaagent instead of the subsystem-based agent.

  • Hawkular-Metrics version 0.26.1

  • Better support for running on OpenShift

  • Ruby client gem v3.0.1 to support Inventory.v3

Note
Inventory.v3 is incompatible with the previous version of Inventory. We made this change so we could get rid of the SQL database (Postgres) that was needed by the previous inventory. You need Ruby client gem version 3.0.1+ to use Inventory.v3, and similarly agent version 1.0.0.CR1 or later.

Hawkular Services is released every week on Tuesdays.

Get started

To get started with Hawkular Services, download the latest release, unzip it, add a user, and set the agent to use the credentials of the user just added.

This can be accomplished with the following commands:

export HAWKULAR_HOME="/path/to/hawkular-services"
export HAWKULAR_USERNAME="jdoe"
export HAWKULAR_PASSWORD="password"

cd "${HAWKULAR_HOME}"
"${HAWKULAR_HOME}/bin/add-user.sh" \
  -a \
  -u "${HAWKULAR_USERNAME}" \
  -p "${HAWKULAR_PASSWORD}" \
  -g read-write,read-only
"${HAWKULAR_HOME}/bin/standalone.sh" -Dhawkular.rest.user="${HAWKULAR_USERNAME}" -Dhawkular.rest.password="${HAWKULAR_PASSWORD}"

Before starting the server, you need to have Cassandra 3.0.9 up and running.
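If you do not have a Cassandra instance at hand, one quick way to get one is via Docker. This is just a sketch using the official cassandra image; adjust the container name, ports and networking to your setup:

docker run --name hawkular-cassandra -d -p 9042:9042 cassandra:3.0.9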

Get started via Docker builds

There are official Docker images of the release available on DockerHub at Hawkular services.

The latest tag will always point to the latest release.

NOTE

There are also images with a devel tag, which are pushed by internal CI on individual commits.
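As a rough sketch, running the release image against the Cassandra container started earlier could look like the following. The image name matches the DockerHub repository mentioned above, but the CASSANDRA_NODES variable and the container linking are assumptions, so check the image documentation for the exact options:

docker run --name hawkular-services -d -p 8080:8080 \
  --link hawkular-cassandra \
  -e CASSANDRA_NODES=hawkular-cassandra \
  hawkular/hawkular-services:latest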

Build from master and Cassandra

If you build from master, you can pass -Pembeddedc to get a distribution with embedded Cassandra for local development. Similarly, if you build with -Pdev, a default user jdoe/password will be installed and also used by the agent.
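For example, a local build using both profiles might look like this; the repository URL and the exact Maven invocation are assumptions based on the profile names above:

git clone https://github.com/hawkular/hawkular-services.git
cd hawkular-services
mvn clean install -Pembeddedc,dev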

Use Grafana with Hawkular-services

We now have a Grafana datasource that makes it easier to connect to Hawkular Services from Grafana. You can download it from Grafana.net. The datasource's page provides more details.

Unofficial WildFly + agent

There is also an instrumented WildFly server available at Instrumented WildFly.

Both images have been instrumented with the jdoe/password user for the Hawkular server and the agent.

See also http://pilhuhn.blogspot.de/2016/06/using-hawkular-services-via-docker.html for more details about using those images.





Hawkular Alerting Tutorial: Lesson 06 - Events!

27 April 2017, by Jay Shaughnessy

Announcing availability of Tutorial Lesson 06: Events!

This new lesson builds on the previously announced tutorial. It covers events in Hawkular Alerting, how they differ from alerts, and how to work with them. Check it out!
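As a small taste of what the lesson covers, an event can be injected directly through the Alerting REST API. The snippet below is only a sketch: the endpoint path, tenant header and field names are assumptions about the Hawkular Alerting API, and the tutorial shows the exact calls to use:

curl -X POST http://localhost:8080/hawkular/alerts/events \
  -H "Content-Type: application/json" \
  -H "Hawkular-Tenant: my-organization" \
  -d '{"id": "deploy-demo-1", "category": "deployment", "text": "Deployment of the demo app completed"}'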

Hawkular Alerting is a pluggable, API-driven component for adding alerting to your application or tool. It’s one of the many Hawkular offerings for monitoring and management. To help quick-start new users, and as a reference for current users, we’ve now released six tutorial lessons, with more to follow. It’s a great way to hit the ground running.

For Git Users:

git clone https://github.com/hawkular/hawkular-alerts.git
cd hawkular-alerts/examples/tutorial

For Others:

  1. Click this link to visit Hawkular Alerting on GitHub.

  2. Click Clone or download

  3. Click Download ZIP

    unzip hawkular-alerts-master.zip
    cd hawkular-alerts/examples/tutorial

For Everyone:

Start with the README.adoc. If you don't have an AsciiDoc extension in your browser, you can read the rendered README directly on GitHub. From there, just click the links in the document to proceed.

Hawkular Alerting has several deployment options: it can be deployed standalone, embedded in Hawkular Metrics, or as part of Hawkular Services. The tutorial is driven from a Docker container running Alerting as part of Hawkular Metrics, generating alerts from incoming metric data. The first lesson will help you get set up.

Try it and let us know how it goes! If you have any questions or want to share your thoughts, join us on IRC at #hawkular on freenode.





Hawkular APM: The Future

19 April 2017, by Gary Brown

Since October last year, the Hawkular APM project has been actively working on supporting, and contributing to, the OpenTracing standard.

We began by implementing our own Java OpenTracing provider, followed by a Node.js/JavaScript implementation. We have also been active in the OpenTracing community, developing Java framework integrations and, more recently, a Java agent that can be used to automatically install those framework integrations without any changes to the application code.

However, we realised that the core model used in Hawkular APM (fragments of call traces) was not particularly well aligned with the model used in OpenTracing (span based). This made it more complicated to implement client providers, as they needed to build the call trace fragments from multiple spans, and it meant the information stored in the server wasn't in the form expected by users of the OpenTracing API.

Therefore, at the beginning of this year we made the decision to change the core model used by Hawkular APM to be compatible with the model used by the OpenTracing API, and essentially provide an OpenTracing-native solution.

During the initial design meetings to discuss the proposed changes, we became aware of a blog post from Uber regarding the future of their Jaeger project. Up until that point, Jaeger had used the Zipkin project to provide its backend tracing solution; the post described a new version of Jaeger with its own backend and UI, replacing the Zipkin-based components.

This new version of Jaeger provides very similar functionality to Zipkin, which is focused on visualising individual traces. It does not have the aggregated views currently supported in Hawkular APM; however, from discussions with the Jaeger project, they are keen to be able to provide aggregated views.

Therefore we have made the decision that, rather than refactor the Hawkular APM project’s model to be more OpenTracing compatible, it makes more sense to collaborate on the Jaeger project.

Figure 1: Individual trace captured from the Jaeger HotRod example

The current Hawkular APM codebase will still be available as an OpenTracing solution; however, our intention is to focus more effort on the Jaeger project, initially with the aim of making it easy to use within OpenShift, and then adding aggregated views.





CloudNativeCon summary

07 April 2017, by Juraci Paixão Kröhling

CloudNativeCon happened last week in Berlin and Hawkular APM was represented there with a talk, "Operational and Business Insights from Tracing Distributed Microservices". Here’s a short summary of the event, with a focus on OpenTracing.

This event was co-located with KubeCon and, as you would expect, pretty much everything revolved around Kubernetes. There were quite a few tracing vendors with booths and/or in the audience: Instana, DataDog, Dynatrace, Sysdig, CA Technologies, among others. It was a great opportunity to talk to them, watch each other's demos, exchange ideas, and invite them to the OpenTracing community. A few of them have components similar in purpose to some that we are developing as part of "opentracing-contrib", which would be a perfect opportunity for collaboration!

After the talk, we were able to answer some questions about OpenTracing and Hawkular APM. Those with good questions got "data containers and books". There was also quite some interest in OpenShift: most people knew about it already, and among those who didn't, most seemed impressed by its user interface once it was presented to them.

On the last day, there was a Distributed Tracing Salon: in this workshop, people were able to get started with OpenTracing by taking a demo application written in Go and adding instrumentation to it. The final goal was to see the traces in Zipkin's user interface, and I believe most participants reached it! Those who didn't could continue working on it after the workshop, as the source code is available on GitHub.

It was also a great opportunity to meet some OpenTracing members from LightStep, and I made sure to smuggle a few stickers and T-shirts back to Munich to be distributed during meetup talks there. All the videos from this conference, including the ones from the OpenTracing track, can be watched on YouTube.

Some interesting topics were brought up during the conference, a few of them more than once:

  • What's the overhead? Hint: as OpenTracing is a specification and an API, the overhead depends on the concrete implementation.

  • Multiplexer: sending traces to more than one backend seems to be a common use case.

  • Should the "wire transport" be part of the specification?

  • Should an export format, like pcap, be part of the specification?

  • What can be used for "reacting" on data (alerting)? Hint: Hawkular Alerts is a great alternative and can be used as a standalone solution!

And you, what’s your opinion on the questions related to the specification? Join the OpenTracing community and make your opinion count!





© 2016 | Hawkular is released under Apache License v2.0