Saturday, February 17, 2018

Data for DevOps : Part I

The movement toward DevOps is transforming IT departments worldwide, providing improvements across a host of metrics. The growth of DevOps-centered IT departments is happening simultaneously with an explosion of interest in data analytics for operations applications. Within the DevOps framework, people are just beginning to dig beneath the surface of how data analytics can improve DevOps implementation, and the possibilities are bountiful. While it would be difficult to develop an exhaustive list of what can be done, we think it is useful to understand the opportunity through the lens of the types of data available for analysis.

Data Types

Broadly, IT departments have available to them “Operations Data”, “Monitoring Data”, and “Event Data”.

Operations Data can come from varied sources, though perhaps the largest source is the IT department’s ticketing and service logging system, often provided through third-party vendors such as ServiceNow. In addition, Operations Data comprise many other forms of data, including staffing schedules and detailed project timelines. Much of the latter is captured in tools designed to facilitate agile development methodologies, such as Jira. These data provide insights into IT workloads that can help inform operations efficiency improvements.

Monitoring Data are generated automatically by infrastructure systems. These data take the form of continuous data streams showing the evolution of critical metrics over time. Public cloud vendors provide hundreds of metrics that users can collect and analyze through portals such as Amazon Web Services’ CloudWatch. On-premises and co-located infrastructure also generate numerous data streams. These data provide a treasure trove of information that can be used to improve IT outcomes.
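To make the idea concrete, here is a minimal, vendor-neutral sketch of one way such a metric stream might be analyzed: flagging sustained breaches of a utilization threshold. The sample values and the 90% threshold are illustrative assumptions, not data from any real system.

```python
# Minimal sketch: detecting sustained threshold breaches in a metric stream.
# The sample data and threshold below are hypothetical.

def sustained_breaches(samples, threshold, min_run=3):
    """Return (start_index, run_length) for each run where the metric stays
    at or above `threshold` for at least `min_run` consecutive samples."""
    runs = []
    start = None
    for i, value in enumerate(samples):
        if value >= threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i - start))
            start = None
    if start is not None and len(samples) - start >= min_run:
        runs.append((start, len(samples) - start))
    return runs

# Example: CPU utilization (%) sampled every 5 minutes (hypothetical values).
cpu = [42, 55, 91, 93, 95, 60, 88, 97, 96, 94, 50]
print(sustained_breaches(cpu, threshold=90))  # → [(2, 3), (7, 3)]
```

The same pattern generalizes to any continuous metric: once breaches are identified as runs rather than single spikes, they can be correlated with tickets or events from the other data types.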

Event Data are a special case of Monitoring Data, but their value warrants separate consideration. These data are also generated by monitoring systems, but they are not continuous and instead log information about changes in state. Analyzing these data can provide critical insight into performance metrics such as persistence, availability, and discoverability. Coupling Event Data with other data types can also inform improvements in system design and operations practices. Below, we will take a more detailed look at how these data may be used to facilitate DevOps practices.
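As one illustration of deriving a performance metric from state-change logs, the sketch below computes availability over a period from a list of up/down events. The event format and timestamps are hypothetical; real event schemas will vary by monitoring system.

```python
from datetime import datetime

def availability(events, period_start, period_end):
    """Compute the fraction of a period a service was 'up', given
    (timestamp, state) state-change events sorted by time. Assumes the
    service enters the period in the state of the last event at or before
    period_start, defaulting to 'up'."""
    state = "up"
    for ts, s in events:
        if ts <= period_start:
            state = s
        else:
            break
    up_seconds = 0.0
    cursor = period_start
    for ts, s in events:
        if ts <= period_start or ts > period_end:
            continue
        if state == "up":
            up_seconds += (ts - cursor).total_seconds()
        cursor = ts
        state = s
    if state == "up":
        up_seconds += (period_end - cursor).total_seconds()
    return up_seconds / (period_end - period_start).total_seconds()

# Hypothetical event log: a single outage from 02:00 to 02:30.
events = [
    (datetime(2018, 2, 1, 2, 0), "down"),
    (datetime(2018, 2, 1, 2, 30), "up"),
]
start = datetime(2018, 2, 1)
end = datetime(2018, 2, 2)
print(round(availability(events, start, end), 4))  # → 0.9792
```

Joining the resulting outage windows with Monitoring Data or ticket metadata is what makes the "coupling" described above actionable.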

Diving Deeper: Operations Data

Operations Data are the most heterogeneous of the data types introduced above. They come in a variety of structures and from a variety of sources. Taken together, though, they can provide an in-depth view of how an IT department operates. These data include service tickets, service logs, personnel schedules, and digital communications about IT issues.

Often, analyzing only the metadata related to many of these sources — for example, service ticket metadata — can provide useful insights:

  • Patterns in IT workloads – Examining service ticket metadata – such as the date on which tickets are created, the date on which they are resolved, and the type of issue they address – can help identify patterns in how IT work is distributed. Tracking daily, or even hourly, counts of ticket creation and/or resolution can help identify times of day and days of the week or month that regularly see heavy workloads. Knowing these patterns, departments can either take steps to match staffing levels to needs or modify processes in order to distribute work more evenly over time.
  • Resource Optimization – Analyzing service ticket metadata alongside other operational information can inform operations decision-making. Earlier this year, Datapipe examined how combining staffing schedules with ticketing data could help operations managers better allocate staffing resources.
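A minimal sketch of the first idea: tallying ticket creation counts by weekday and hour from exported metadata. The records and field names here are hypothetical; real exports (e.g. from ServiceNow) will use different schemas.

```python
from collections import Counter
from datetime import datetime

# Hypothetical ticket metadata; field names are illustrative only.
tickets = [
    {"created": "2018-02-05T09:15:00", "type": "access"},
    {"created": "2018-02-05T09:40:00", "type": "outage"},
    {"created": "2018-02-06T14:05:00", "type": "access"},
    {"created": "2018-02-12T09:55:00", "type": "access"},
]

by_weekday_hour = Counter()
for t in tickets:
    created = datetime.fromisoformat(t["created"])
    by_weekday_hour[(created.strftime("%A"), created.hour)] += 1

# Busiest (weekday, hour) buckets first.
for bucket, count in by_weekday_hour.most_common():
    print(bucket, count)
```

Even this simple tally surfaces recurring hot spots (here, Monday mornings), which is the kind of pattern a staffing or process change can then address.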

In both of the above examples, incorporating keywords gleaned from the content of operations data such as tickets, logs, and emails can further enrich the analysis. Our next post will address the use of Monitoring Data.

About Arti Garg

As a Principal Consultant at Datapipe, Arti strives to ensure that data and analytics solutions complement and support the existing processes and culture of Datapipe clients. Arti has a deep understanding of how any new solution, technical or otherwise, must align with an organization's mission, culture, and values. She writes about her experience helping enterprises identify gaps in their existing processes and ways that data-driven software solutions can effectively close them.
