Today we're excited to officially introduce support for annotations, a powerful tool for correlating your metrics to aperiodic events such as deploy operations. Here's an example using annotations to surface a strong correlation between our batch summarization process and a spike in the number of read operations:
Annotations are a first-class resource in our RESTful API. This means that unlike tools that model events as simple RRD values of either 0 or 1, our annotations enable you to record (and later modify) complex context with each event. Related annotations are modeled in Librato as event streams identified by a common name (e.g. deploys).
At a minimum, each individual annotation requires a title (e.g. Deployed v45) and a timestamp (start_time). You can additionally specify:
- description - A more verbose account of the event.
- end_time - A second timestamp that, in concert with start_time, conveys the event's duration.
- source - As with your metrics, you can scope an annotation more tightly to a particular spatial reference.
- links - A proper sub-resource of annotations that lets you link the event to multiple external contexts.
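Putting those fields together, here's a rough sketch of what creating an annotation in a deploys stream might look like. The host, auth scheme, and timestamps are illustrative; consult the API documentation for the exact endpoint details:

```python
import json

# Payload for a single annotation in the "deploys" stream.
# title and start_time are required; the rest are optional.
annotation = {
    "title": "Deployed v45",            # required
    "start_time": 1357158000,           # required: Unix timestamp
    "end_time": 1357158060,             # optional: conveys duration
    "source": "web-frontend-1",         # optional: spatial scope
    "description": "Rolled out v45 to the frontend tier",
    "links": [                          # optional sub-resource
        {"rel": "github", "href": "https://github.com/acme/app/compare/v44...v45"},
    ],
}

# You would POST this to the stream's endpoint, e.g. (not executed here):
#   requests.post("https://metrics-api.librato.com/v1/annotations/deploys",
#                 auth=(USER, API_TOKEN), json=annotation)
print(json.dumps(annotation, indent=2))
```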
Adding Annotations to a Graph
Once you're creating annotations through the API, you can easily overlay them on your graphs using the existing instruments interface. Just specify the name of an annotation stream (and optionally a source) in the Add Metrics form and hit submit:
As with metrics, annotation streams support both wildcard and dynamic sources. You can also toggle their display from expanded vertical lines that span the graph (the default) to compact badges. In either display mode, hovering over a badge brings up a tooltip showing the annotation title, start/end times, and description:
Annotations are always displayed at the same resolution as the graph: if the graph is currently showing hourly data, any annotations within a given hour are displayed as a single coalesced entity on the chart. In expanded display mode, a small badge at the top of each annotation line indicates how many events were coalesced, and the tooltip lists the title/source of each coalesced event. Annotation downsampling is purely a display function, however, so the complete set of raw events is always available in list form through the annotation feed:
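The coalescing behavior can be sketched roughly as bucketing events at the graph's resolution. This is an illustrative model only, not Librato's actual implementation; the event titles and timestamps are made up:

```python
from collections import defaultdict

# Hypothetical events: (title, Unix start_time)
events = [
    ("Deployed v45", 1357158000),
    ("Deployed v46", 1357159800),   # lands in the same hour as v45
    ("Deployed v47", 1357165200),   # lands in the next hour
]

def coalesce(events, resolution=3600):
    """Group events into buckets at the graph's resolution (in seconds)."""
    buckets = defaultdict(list)
    for title, ts in events:
        buckets[ts - ts % resolution].append(title)
    return dict(buckets)

# At hourly resolution the first two deploys collapse into one chart
# entity whose badge would read "2"; the third stands alone.
for bucket_start, titles in sorted(coalesce(events).items()):
    print(bucket_start, len(titles), titles)
```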
Once you've overlaid annotations on your graphs, you can tell at a glance whether a particular event correlates with a shift in system behavior. Whenever a correlation exists, the immediate next step is establishing (or disproving) it as causal. This typically requires the full context of the event, and we've worked hard to make it as fast as possible to shift from recognizing a correlation to understanding it. The annotation feed is a scrollable list, in temporal order, of the complete set of raw events that correspond to the date range and annotation streams covered by the graph. Each entry in the feed contains the complete context of the event, including links. Clicking an annotation badge in the graph automatically scrolls the feed to the corresponding entry and highlights it:
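Because the feed is backed by the same API, you can also pull the raw events for a stream and date range yourself. A minimal URL-building sketch follows; the host and query parameter names mirror the fields described above but should be checked against the API documentation:

```python
import urllib.parse

def feed_url(stream, start_time, end_time,
             base="https://metrics-api.librato.com/v1/annotations"):
    """Build a GET URL for a stream's raw events over a time window.

    stream      -- the annotation stream name (e.g. "deploys")
    start_time  -- Unix timestamp opening the window
    end_time    -- Unix timestamp closing the window
    """
    query = urllib.parse.urlencode(
        {"start_time": start_time, "end_time": end_time})
    return f"{base}/{stream}?{query}"

# Fetch a day's worth of deploy events (URL only; no request is made here).
url = feed_url("deploys", 1357158000, 1357244400)
print(url)
```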
That's everything you need to get started tracking important events through annotations! Let us know if you have any questions, and we hope you'll find them as useful as we do :-).