A Distributed Tracing Adventure in Apache Beam

 
Distributed systems are hard, and they get much harder when problems arise. This is only exacerbated by the fact that many of these systems are notoriously difficult to dig into once they are actually out in the wild and not just running "on your machine".
 
They say that a picture is worth a thousand words, but in the world of distributed systems, a picture can easily be worth a thousand hours. While I can't promise you that this post will in any way save you a thousand hours, I hope that you find value in the thought process that I explored when introducing tracing and visibility into an Apache Beam pipeline.
 

What is Your Quest? What Do You Seek?

 
 
Before embarking on this journey, it was important to establish a few reasons why tracing matters, both in a development and a production sense, as well as what the overarching goal of introducing it would be.
 
These are the ones that I came up with,
  • Improve Development Story / Observability
    Development and local testing can be challenging when working with distributed systems like Apache Beam, and with streaming in particular. Because of this, it can be difficult to debug and examine the data coming through your streaming pipeline and determine whether it matches the expected outputs at various points. Additionally, the ability to trace an error back to a given section of code can be invaluable.

  • Provide Production Value
    While you wouldn't want to trace every single request through your pipeline in a production environment, you can enable sampling to ensure that your production workflows are working as intended. In cases where you find inconsistent results, a trace can be a valuable tool to help investigate.

  • Ubiquitous Tracing
    While the story itself may focus on tracing within a distributed streaming infrastructure, when done properly it can extend outside of a streaming pipeline and provide an end-to-end tracing story, from when a given element was introduced to the system through all of the actions that were performed on that element, via the OpenTelemetry standard.

Choose Your Own Adventure

 
Approaching the problem with my previous experience with Kafka, and still (at the time) being relatively new to working with Beam, the following three approaches came to mind,
  • The "Kafka" Approach
    In Kafka, all messages that flow through the system contain a series of headers similar to those in HTTP Requests. In a tracing scenario, you would have an opportunity to inject a correlation id within the headers to persist the trace throughout the course of the pipeline. Even after a message lands within another topic, the trace would still be persisted and could be picked up further down the pipeline by simply extracting the trace from the header at any point.

  • The "Wrapper" Approach
    Apache Beam has no notion of headers similar to how Kafka stores the tracing identifier, which can make persisting the trace challenging. As a result, one approach is to create a "wrapper" for each of the elements within your pipeline, such as a TracingElement, which simply wraps an existing element and contains the key-value pairs for the record as well as the tracing id (a rough sketch of such a wrapper appears after this list). The downside of this approach is that it requires adjusting all of the entities and transforms throughout your system to look inside the wrapper each time.

  • The "Data" Approach
    As mentioned in the previous point, since Apache Beam has no ubiquitous record-level metadata storage, another option is simply to add an additional property to all entities / elements within the pipeline that denotes the tracing identifier. Storing this data on the record itself also allows the trace to be easily persisted into other technologies and requires no changes to the overall pipeline itself (as the records will be unchanged save for the property related to tracing).
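To make the "Wrapper" approach a bit more concrete, here is a rough sketch of what such a wrapper might look like (the TracingElement name comes from the description above; the rest of the shape is an assumption on my part),

    // A hypothetical wrapper: every element in the pipeline would be carried inside
    // a TracingElement that pairs the original key-value record with its tracing context.
    data class TracingElement<K, V>(
        val key: K,
        val value: V,
        val tracingContext: MutableMap<String, String> = mutableMapOf()
    )

    // Every downstream transform would then need to unwrap the element first
    // (and Beam would need a coder registered for the wrapper type), which is
    // exactly the overhead mentioned below.
    // val person = tracingElement.value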
After some exploration, we found that the approach with the least overhead was simply adjusting the records themselves such that each record could be responsible for persisting its own trace (aka the "Data" Approach).
 
The Wrapper approach had significant overhead with regards to coding issues after transforms and added another layer of complexity when trying to retrieve the elements to operate on. The Kafka approach leaned too heavily on Kafka itself and made transforming difficult, not to mention it was inefficient, since it persisted Kafka-specific information (e.g. topic names, partitions, etc.) throughout the process.
 

Take These, You’ll Need Them!

 
 
With the hope of following open standards like those defined by OpenTracing, I figured it best to explain a bit about what goes into a trace. These terms come up frequently when discussing tracing and the frameworks that handle it, so it couldn't hurt to cover them before we dive into the code.
  • Span
    A single building block representing an operation or some unit of work that you want to capture. They are capable of standing on their own, referencing (or following) from other spans, storing metadata, tags, etc.

  • Trace
    A trace is a visualization of the life of a request (or series of other operations) that is made up of one or more spans. This collection of spans works together to paint a picture of the overall request, allowing you to easily reconstruct what happened within each span.

  • SpanContext
    This is a wrapper of key-value pairs that associates a trace to one or more spans and is the key ingredient when carrying traces across data boundaries (different transforms, systems, etc.). This is the primary component that we store and work with in the context of a distributed system. A brief sketch of how these pieces fit together in code follows this list.
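To see how these pieces relate in code, here is a tiny, Beam-agnostic sketch using the OpenTracing API (the operation name and tag values are made up purely for illustration),

    // Illustrative only: a span is a unit of work, the trace is the collection of
    // related spans, and the SpanContext is the part that crosses boundaries.
    fun illustrate(tracer: io.opentracing.Tracer) {
        // A single span representing one operation
        val span = tracer.buildSpan("validate-record").start()
        try {
            // Metadata attached to the span
            span.setTag("record.id", "abc-123")
        } finally {
            // Finishing the span commits it to its trace
            span.finish()
        }

        // The SpanContext serialized into plain key-value pairs, ready to travel
        // with an element across transforms or systems
        val carrier = mutableMapOf<String, String>()
        tracer.inject(
            span.context(),
            io.opentracing.propagation.Format.Builtin.TEXT_MAP,
            io.opentracing.propagation.TextMapAdapter(carrier)
        )
    }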

Follow the Map

 
As mentioned earlier, a picture can say a thousand words, so it's probably worth providing a very rudimentary example of what these pieces look like composed together, or how a timeline of actual actions and operations in the system runs in parallel with a series of traces,
 
[Diagram: a timeline of Apache Beam pipeline operations alongside the spans that make up the corresponding trace]
 
If we look at the diagram above, we can see how a given series of operations within an Apache Beam pipeline can run in parallel with building a trace, allowing visibility into the pipeline. From the first encounter, a context will be created for the trace, and spans will be associated with that context as it travels through the pipeline. It'll provide opportunities to tag searchable properties, output exceptions and logs, and much, much more.
 

The Adventure Begins (Using and Building a Trace)

 
There are four components necessary to initialize or create a trace/span within Beam, which this section will cover:
  • Context
    You need some type of context, which is typically just a HashMap of Key-Value pairs that is used to store the tracing information and information about the span context. This can be done in a variety of ways, but the simplest can be just to add a property for it on one of your objects.

  • Tracing Configuration
    If you are planning on pushing the traces to be consumed through a service such as Jaeger, you'll need to have the appropriate configuration added to your pipeline to resolve the tracer and send the traces off.

  • Resolving the Tracer
    Once you have the tracer configured, the next step is to resolve it within the individual element-wise transformations that are part of your pipeline. You'll need a reference (a static one) to the tracer in order to properly send off traces.

  • Building a Trace
    After resolving the tracer, you can easily initialize and build a trace to send off to Jaeger within your function and add the appropriate tags, logs, etc.
Defining a Context
 
As mentioned in the previous section, a span context can come in a variety of forms (such as a byte[], map, HTTP Headers, etc.). If you want to perform tracing at the element level, you'll want to ensure that your specific class or element has something defined to store it,
    public class TraceablePerson {
        // Other properties omitted for brevity
        // Define a publicly accessible tracing context
        public val tracingContext = mutableMapOf<String, String>()
    }
Likewise, if you were defining an Avro schema, this context might be defined as follows,
    {
      "name": "tracing_context",
      "type": {
          "type": "map",
          "values": "string"
      },
      "default": {}
    }
Configuring a Tracer
 
Configuring the tracer is a requirement if you want to start sending your traces to Jaeger or another service that handles distributed tracing via the OpenTracing standard.
 
Thankfully, it's quite easy to configure via a custom TracingOptions class that your overall Apache Beam pipeline can inherit from,
    interface TracingOptions : PipelineOptions {
        @get:Description("The tracing application name")
        @get:Default.String("your_application_name")
        var tracingApplicationName: String

        @get:Description("The tracing host name")
        @get:Default.String("localhost")
        var tracingHost: String

        @get:Description("The tracing port")
        @get:Default.Integer(6831)
        var tracingPort: Int
    }
This allows the configuration to be driven from command line arguments, files, or environment variables. Next, you'll want to make sure that your overall pipeline options inherit from these so they are accessible via the pipelineOptions property within your transforms,
    // Define a pipeline configuration that is traceable and interacts with Kafka
    // (this is just an example, your mileage may vary)
    public interface YourPipelineOptions : TracingOptions, KafkaOptions {
        // Other pipeline specific configurations here
    }
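Since TracingOptions ultimately extends PipelineOptions, these values can be supplied like any other pipeline option. As a rough sketch (assuming the YourPipelineOptions interface above), resolving them from command-line arguments might look like this,

    // A minimal sketch: parse the tracing options from command-line arguments such as
    //   --tracingApplicationName=my-pipeline --tracingHost=localhost --tracingPort=6831
    fun buildOptions(args: Array<String>): YourPipelineOptions {
        return PipelineOptionsFactory
            .fromArgs(*args)
            .withValidation()
            .`as`(YourPipelineOptions::class.java)
    }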
Using a Tracer
 
After your individual elements and tracing have been configured, you are ready to build your first trace. Since tracing is done at the element level, you'll only be able to interact with the tracer at the DoFn level within Apache Beam. As such, there are two ways to handle this: you can explicitly initialize the tracer during the @StartBundle operation of a given transform, as seen below, or you can extract that logic into a reusable base class (covered shortly),
    class SomeTraceableFunction() : DoFn<KV<...>, KV<...>>() {
        private lateinit var tracer: Tracer

        @StartBundle
        fun initializeTracing(context: StartBundleContext) {
            // Resolve the tracer if configured from the pipeline options
            val tracingOptions = context.pipelineOptions.`as`(TracingOptions::class.java)

            if (tracingOptions != null) {
                tracer = TracingConfiguration.getTracer(
                    tracingOptions.tracingApplicationName,
                    tracingOptions.tracingHost,
                    tracingOptions.tracingPort
                )
            } else {
                // If no tracing configuration was found, use a no-op one
                tracer = NoopTracerFactory.create()
            }
        }

        @ProcessElement
        fun processElement(@Element element: KV<...>) {
            // Omitted for brevity
        }
    }
We can take a deeper look at TracingConfiguration, which is simply a wrapper class that creates our tracer using a specified configuration, which you can tailor to suit your needs,
    open class TracingConfiguration {
        companion object {
            fun getTracer(application: String, host: String, port: Int): Tracer {
                return io.jaegertracing.Configuration
                    .fromEnv(application)
                    .withSampler(
                        io.jaegertracing.Configuration.SamplerConfiguration
                            .fromEnv()
                            .withType(ConstSampler.TYPE)
                            .withParam(1)
                    )
                    .withReporter(
                        io.jaegertracing.Configuration.ReporterConfiguration
                            .fromEnv()
                            .withLogSpans(true)
                            .withFlushInterval(1000)
                            .withMaxQueueSize(10000)
                            .withSender(
                                io.jaegertracing.Configuration.SenderConfiguration
                                    .fromEnv()
                                    .withAgentHost(host)
                                    .withAgentPort(port)
                            )
                    )
                    .tracer
            }
        }
    }
This static reference gives you access to the tracer that will be used to build your traces and send them to Jaeger (or do nothing if you haven't configured it). If you plan on doing any decent amount of tracing, you'll likely find it beneficial to construct your own TraceableDoFn to handle this,
    abstract class TraceableDoFn<InputT, OutputT> : DoFn<InputT, OutputT>() {
        public lateinit var tracer: Tracer

        @StartBundle
        fun initializeTracing(context: StartBundleContext) {
            // Resolve the appropriate tracer if configured
            val tracingOptions = context.pipelineOptions.`as`(TracingOptions::class.java)
            if (tracingOptions != null) {
                tracer = TracingConfiguration.getTracer(
                    tracingOptions.tracingApplicationName,
                    tracingOptions.tracingHost,
                    tracingOptions.tracingPort
                )
            } else {
                tracer = NoopTracerFactory.create()
            }
        }
    }
This gives you a publicly accessible tracer instance within any subclass of TraceableDoFn, the usage of which we will cover in the next section.
 
Constructing a Trace
 
As we discussed earlier in this post, we would be adopting an element-wise tracing context that could follow each individual message as it flowed through the pipeline:
    public val tracingContext = mutableMapOf<String, String>()
Now, creating a trace can be somewhat involved, and might typically look like this,
    fun trace(context: MutableMap<String, String>, name: String, tracer: Tracer) {
        // Create a builder for this span
        val spanBuilder = tracer.buildSpan(name)

        // If we have some type of previous context, we need this to associate them
        if (context.isNotEmpty()) {
            // If so, indicate this is a continuation from the previous context
            // (TEXT_MAP here is io.opentracing.propagation.Format.Builtin.TEXT_MAP)
            val existingSpan = tracer.extract(TEXT_MAP, TracingContextExtractor(context))
            spanBuilder.addReference(References.FOLLOWS_FROM, existingSpan)
        }

        // Start the span
        val span = spanBuilder.start()
        try {
            // Activate this span and update the context
            tracer.scopeManager().activate(span)
            tracer.inject(span.context(), TEXT_MAP, TracingContextInjector(context))

            // Add tracing information here
            span
                .setTag("some-tag", "some-value")
                .log("log some message")

        } catch (ex: Exception) {
            Tags.ERROR.set(span, true)
            span.log("$ex")
        } finally {
            span.finish()
        }
    }
As you might imagine, that can be a lot, so we can create some extension methods to simplify this into two functions: one to initialize a trace and another to diverge from an existing trace,
    // Initializes a new trace/span
    fun Tracer.trace(context: MutableMap<String, String>, name: String, traceFunction: (span: Span) -> Unit) {
        // Create a builder for this span
        val spanBuilder = this.buildSpan(name)

        // If we have some type of previous context, we need this to associate them
        if (context.isNotEmpty()) {
            // If so, indicate this is a continuation from the previous context
            val existingSpan = this.extract(TEXT_MAP, TracingContextExtractor(context))
            spanBuilder.addReference(References.FOLLOWS_FROM, existingSpan)
        }

        // Start the span
        val span = spanBuilder.start()
        try {
            // Activate this span and update the context
            this@trace.scopeManager().activate(span)
            this@trace.inject(span.context(), TEXT_MAP, TracingContextInjector(context))

            // Apply any internal tracing
            traceFunction(span)
        } catch (ex: Exception) {
            Tags.ERROR.set(span, true)
            span.log("$ex")
        } finally {
            span.finish()
        }
    }

    // Creates a new span that follows from an existing one
    fun Tracer.follows(
        context: MutableMap<String, String>,
        name: String,
        traceFunction: (span: Span) -> Unit
    ): MutableMap<String, String> {
        // Create a copy of the context if one exists
        val contextualCopy = HashMap(context)

        // Create a builder for this span
        val spanBuilder = this.buildSpan(name)

        // If we have some type of previous context, we need this to associate them
        if (context.isNotEmpty()) {
            // If so, indicate this is a continuation from the previous context
            val existingSpan = this.extract(TEXT_MAP, TracingContextExtractor(context))
            spanBuilder.addReference(References.FOLLOWS_FROM, existingSpan)
        }

        // Start the span
        val span = spanBuilder.start()
        try {
            // Activate this span and update the context
            this@follows.scopeManager().activate(span)
            this@follows.inject(span.context(), TEXT_MAP, TracingContextInjector(contextualCopy))

            // Apply any internal tracing
            traceFunction(span)
        } catch (ex: Exception) {
            Tags.ERROR.set(span, true)
            span.log("$ex")
        } finally {
            span.finish()
        }

        // Return the copied context so the new element can
        // capture its own, independent trace context
        return contextualCopy
    }
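One note before moving on: the TracingContextExtractor and TracingContextInjector used above are small helper carriers that aren't shown here; they simply adapt the element's mutable map to OpenTracing's TextMap interface (the library's built-in TextMapAdapter can serve a similar purpose). A minimal sketch of what they could look like, with the implementation being an assumption on my part, is below,

    // Assumed implementations: carriers that adapt the element's tracing map to
    // OpenTracing's TextMap interface for use with tracer.extract(...) and tracer.inject(...)
    class TracingContextExtractor(private val context: MutableMap<String, String>) : io.opentracing.propagation.TextMap {
        // Read the existing span context entries out of the map
        override fun iterator(): MutableIterator<MutableMap.MutableEntry<String, String>> =
            context.entries.iterator()

        override fun put(key: String, value: String) {
            throw UnsupportedOperationException("This carrier is only used for extraction")
        }
    }

    class TracingContextInjector(private val context: MutableMap<String, String>) : io.opentracing.propagation.TextMap {
        override fun iterator(): MutableIterator<MutableMap.MutableEntry<String, String>> {
            throw UnsupportedOperationException("This carrier is only used for injection")
        }

        // Write the active span context entries into the map carried by the element
        override fun put(key: String, value: String) {
            context[key] = value
        }
    }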
After you've established your tracer within your appropriate element-wise transform, you can use the trace() method to build and start your trace, leveraging those extension methods as seen below,
    @ProcessElement
    fun processElement(@Element element: KV<...>) {
        // Omitted for brevity

        // Create a span (which will create a new trace behind the scenes)
        // that applies contextually to this specific element (via tracingContext)
        tracer.trace(element.tracingContext, "name_of_span") { span ->
            // In here you can perform any operations that you might care about
            // and use the span reference to add tagging, logging, etc. as seen below
            span
                .setTag("some_property", element.someProperty)
                .log("Log some message about the element here")
        }
    }
Behind the scenes here, the following is happening,
  • The element context is examined to determine if any previous spans exist.
  • If a span did exist, a FOLLOWS_FROM reference is added in order to relate this operation to the chain of other potential spans for this element.
  • If a span did not exist, a new span is generated and injected into the context.
  • The trace() call itself is finalized when the closing brace of the lambda is reached, at which point the span is finished and committed to the appropriate tracer.
  • Any errors within the body of the trace() function will be properly decorated as errors, and the log within the span/trace will contain the details of the exception.
  • Upon the finalization of a trace, it is committed to Jaeger (or your preferred/configured tracing system) and it should appear within the UI for those tools. This is performed within the trace() call automatically, so you don't need to worry about it yourself.
Support for Divergent Traces
 
Pipelines are seldom linear. Complex ones frequently branch, diverge, and split off onto multiple paths, so tracing needs to support those operations as well. Thankfully, it can.
 
Let's say you had a single event coming into your system and it had some notion of being traced. As we saw in the earlier example, we could easily accomplish this via the trace() extension shown in the previous step,
    // Start a trace for your event
    tracer.trace(event.tracingContext, "name_of_span") { ... }
This will establish your trace and expose it to Jaeger, Google Cloud Operations (formerly Stackdriver), or your preferred OpenTracing consumer. However, at some point in your pipeline you may want to trace other entities that branch off from your event (e.g. an event contains multiple user instances that we care about, so we want to initialize traces for those users that follow from our event).

To accomplish this, you can use the follows() API to create a new trace that follows from an existing one. What this means is that you can have an element traced independently downstream, but ultimately it can still be linked back to the originating record that introduced it into the system,
    // Initialize the trace for a new traceable instance from an existing context
    user.tracingContext = tracer.follows(event.tracingContext, "found_user") { ... }
After introducing this into your pipeline and running some data through, you can view the trace (covered in the next section) to visualize this branching within the trace,
 
[Diagram: a trace showing spans for two users branching off from the originating event's span]
 
What you can see in this chart are the following steps,
  1. An event was introduced into the system and its trace was initialized.
  2. Within the identification Apache Beam pipeline, two separate users were identified from this event with their own independent tracing contexts.
  3. Downstream, each of these users was sent to a Kafka topic with appropriate tracing during that process (this step having no notion of the existence of the event itself).
As you might imagine, an entirely separate Apache Beam pipeline could pick up one of these users, and apply an additional trace, which will ultimately appear on the overall graph for the originating event.
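To make that concrete, a downstream transform in that separate pipeline might look something like the following sketch (the TraceableUser type and the names used here are purely illustrative; it reuses the TraceableDoFn base class and the trace() extension from earlier),

    // A hypothetical downstream transform in a completely separate pipeline: it only
    // knows about the user element, but because the user carries its own tracingContext,
    // the span created here still links back to the originating event's trace.
    class EnrichUserFn : TraceableDoFn<KV<String, TraceableUser>, KV<String, TraceableUser>>() {
        @ProcessElement
        fun processElement(
            @Element element: KV<String, TraceableUser>,
            receiver: OutputReceiver<KV<String, TraceableUser>>
        ) {
            val user = element.value

            tracer.trace(user.tracingContext, "enrich_user") { span ->
                span.setTag("user_key", element.key)
                // ... enrichment logic omitted ...
            }

            receiver.output(element)
        }
    }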
 

Accessing Your Trace

 
NOTE
This assumes that you either have a production instance of Jaeger running or a local instance (perhaps inside of a Docker container) where you can send your traces.
Once you've updated your applications to configure a tracer, updated your elements to contain and support contextual tracing, and actually run them, you are ready to leave your application behind and take a look at the traces themselves in Jaeger.
 
After running your application, you should be able to visit your Jaeger instance, which provides a UI to visualize the traces themselves,
 
[Screenshot: the Jaeger UI search view listing traces across applications]
 
From the Jaeger UI, you can do quite a bit in terms of exploration. You get an overview of all of the applications that are currently reporting traces and can filter down to a specific application. You can also search all of the known spans for a given tag that was defined within your application (e.g. a search for error=true would display every span that contained an error, so you could easily find errors within your pipeline).
 
Additionally, you can drill into any given trace to see more information about it such as timings, individual tags, logging information and more,
 
[Screenshot: the Jaeger trace detail view showing timings, tags, and logs for each span]
 
While this example is a very simple use case, you can imagine the value in more complex systems, especially during the development process: aggregating logs in real time, ensuring that transformations are being performed correctly, and so on. The UI also provides a graphical representation of a trace as well,
 
[Screenshot: the Jaeger graph view of a single trace]
 
You can also take two traces and compare them against each other to see where one might diverge (e.g. if one contained erroneous data it might be offloaded to a Kafka topic for manual review while another would continue on to its expected destination). As you add more traces, spans, and applications, this image begins to expand into a complete graph of your entire system.
 

The End?

 
Obviously, tracing and logging are massive topics in their own right that entire books have been dedicated to, so this is really just the tip of the iceberg. This post covered only the simplest of use cases for getting started with a tracing framework in a distributed system. There is a wide range of different frameworks, implementations, and strategies out there; this was just one of the options.