
Infrastructure Analytics: A Beginner's Guide

Infrastructure analytics is the process of parsing the data produced by enterprise IT infrastructure to extract actionable insights. Essentially, infrastructure analytics processes and correlates log data and events produced by network devices to help organizations better understand their infrastructure operations, make informed decisions and understand their impact.

The emergence of the Internet of Things (IoT) over the last 15 years, along with automation and more recent cloud migration initiatives, has increased the complexity of enterprise networks and systems, including the volume of data they produce, which can reach terabytes each day. The resulting heterogeneous mix of hardware and applications has made monitoring, optimization, resource allocation, troubleshooting and performance reporting a bigger challenge than ever.

Infrastructure analytics can alleviate some of these challenges. It provides organizations with comprehensive, real-time visibility into complex networks and the data center. It can help anticipate resource consumption and adjust allocation to meet dynamic user demands. And it can improve network resilience, streamline the big data life cycle and recommend preventative measures that reduce the likelihood of failure.

Infrastructure analytics has the potential to transform the way your organization views its infrastructure. In this article, we’ll look at available modern infrastructure tools; how real-time IT infrastructure analytics is changing the way environments are maintained; how to start using infrastructure analytics for business intelligence insights; and the benefits you can realize from this technology.




Real-Time IT Infrastructure Analytics

Real-time IT infrastructure analytics describes the use of machine learning to continuously extract insights from log files and events.

Historically, infrastructure analytics has been performed manually by humans, whether it’s IT teams or external service providers. Infrastructure administrators comb through running programs or log files looking for clues as to why a process or system has failed — due to a security issue or bandwidth issue, for example — then intuit an appropriate solution from the data. The goal of the analysis is to understand or address a specific question about a past event. This is typically conducted after the event has been resolved as part of a client impact report or a root cause analysis.

Modern infrastructures pose a much bigger challenge for human analysis. Their microservice-based architectures and heavy reliance on the cloud make them inherently decentralized. While designed for flexibility and speed, they increasingly have no discernible perimeter. The result is often a comparatively formless, fluid infrastructure that is harder to understand, let alone monitor and troubleshoot.

However, machine learning and automation have made the process of maintaining modern infrastructures more efficient, while also helping organizations understand their exploding volume of data and rapidly expanding data warehouses. Instead of scrutinizing logs to understand an incident after the fact, self-learning algorithms can parse millions of logs to find correlations in real time. Rather than responding to an event that has led to a security breach or taken a server offline, IT teams can identify triggers and anticipate events before they occur, leading to more informed decision making.
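
As a rough illustration of that shift, the sketch below ingests batches of log lines in a made-up format, counts errors per service for each collection interval and flags any service whose error count jumps well above its recent baseline. The log pattern, window size and threshold are all assumptions chosen for the example, not the behavior of any particular product.

    import re
    import statistics
    from collections import defaultdict, deque

    # Hypothetical log line format for the example:
    # "2023-04-01T12:00:00 payment-api ERROR timeout contacting db"
    LOG_PATTERN = re.compile(r"^(?P<ts>\S+)\s+(?P<service>\S+)\s+(?P<level>\w+)\s+(?P<msg>.*)$")

    # Rolling window of per-service error counts from the last 60 intervals.
    error_history = defaultdict(lambda: deque(maxlen=60))

    def ingest_interval(log_lines):
        """Count ERROR lines per service for one interval and flag unusual spikes."""
        counts = defaultdict(int)
        for line in log_lines:
            match = LOG_PATTERN.match(line)
            if match and match.group("level") == "ERROR":
                counts[match.group("service")] += 1

        alerts = []
        for service in set(counts) | set(error_history):
            current = counts.get(service, 0)
            history = error_history[service]
            if len(history) >= 10:                      # wait until there is a baseline
                mean = statistics.mean(history)
                spread = statistics.pstdev(history) or 1.0
                if current > mean + 3 * spread:         # crude spike detector
                    alerts.append((service, current, round(mean, 1)))
            history.append(current)
        return alerts

In practice, a forwarder or collection agent would feed something like ingest_interval on a schedule. The point is simply that baselining and correlation can run continuously instead of waiting for a post-incident review.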

Most IT teams initially adopt infrastructure analytics to increase uptime, as it can greatly impact revenue. Its ability to detect, and even predict and prevent, system faults makes it an increasingly essential business need. However, there are a growing number of use cases for real-time infrastructure analytics, ranging from anticipating service spikes to automatically adjusting resource allocation to meet real-time demands. Infrastructure analytics can even be used to improve the design of the infrastructure itself.

Types of Infrastructure Analytics Tools

Infrastructure analytics tools are machine-learning-powered products that can interpret and correlate events from different device logs and reports that infrastructure produces. These tools typically deliver insights in real time through custom dashboards, alerts and notifications.

Infrastructure analytics requires a deep understanding of data sources and the infrastructure environment itself, such as what caused a system to fail or where an event or incident originated. Compute power and machine intelligence have recently improved enough to perform infrastructure analytics, but they still struggle to accurately understand and correlate events over an entire ecosystem. Thus, organizations often rely on separate tools that focus on specific areas such as event analysis, log analysis, data management and endpoint detection and response.

Specific features and functionalities will vary with these tools, but all offer some important shared features and capabilities, including:

  • the ability to aggregate, correlate and process all the relevant data.
  • predesigned, interactive dashboards, drag-and-drop widgets and other shortcuts to help you get up and running quickly (with customization options as your needs change).
  • an array of visualizations, such as real-time graphs and charts, that you can modify to view the data from different perspectives.
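
To make the first capability above concrete, here is a minimal sketch of aggregating two hypothetical event feeds (a firewall and a web server) into one timeline, then correlating events that land within a short window of each other. The feeds, field names and 30-second window are invented for illustration; commercial tools do this at far larger scale and with much richer context.

    import pandas as pd

    # Two hypothetical event feeds from different devices (a firewall and a web server).
    firewall_events = pd.DataFrame({
        "time": pd.to_datetime(["2023-04-01 12:00:05", "2023-04-01 12:03:10"]),
        "event": ["port scan detected", "connection flood"],
    })
    web_events = pd.DataFrame({
        "time": pd.to_datetime(["2023-04-01 12:00:07", "2023-04-01 12:30:00"]),
        "event": ["5xx error spike", "deploy finished"],
    })

    # Aggregate: normalize both feeds into a single timeline.
    firewall_events["source"] = "firewall"
    web_events["source"] = "web"
    timeline = pd.concat([firewall_events, web_events]).sort_values("time")

    # Correlate: pair each web event with any firewall event from the preceding 30 seconds.
    correlated = pd.merge_asof(
        web_events.sort_values("time"),
        firewall_events.sort_values("time"),
        on="time",
        tolerance=pd.Timedelta("30s"),
        direction="backward",
        suffixes=("_web", "_firewall"),
    )
    print(timeline)
    print(correlated[["time", "event_web", "event_firewall"]])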

Generating Insights from Data With Infrastructure Analytics


While infrastructure analytics tools make data analysis easier and faster, gaining insights isn’t always simple. The following steps offer a rough map for setting your implementation up for success.

Understand the different types of data analytics: It’s critical to know what you want to achieve with infrastructure analytics before implementing a system. There are four basic types of big data analytics:

  • Descriptive: The simplest type of analytics, descriptive analytics identifies a problem or answers the question “What happened?” by drawing on multiple data sources and key metrics to provide insight into a past event. While effective for describing a problem, it won’t explain why the problem happened, so it is often used alongside one or more other types of analytics.
  • Diagnostic: Diagnostic analytics dives deeper into data to make correlations to explain why something happened. It can help clarify what caused a system to fail or how a security threat was able to enter the environment.
  • Predictive: Relying on machine learning and predictive models, predictive analytics uses the insights from descriptive and diagnostic analytics to predict what is likely to happen.
  • Prescriptive: Prescriptive analytics suggests what course of action to take to solve or prevent a problem, basing output on past and current performance, available resources and likely scenarios. It relies on machine learning and other algorithms and is the most sophisticated type of data analytics.
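
A small worked example helps separate the four types. The sketch below uses made-up hourly CPU and request-volume numbers: descriptive analytics summarizes what happened, diagnostic analytics looks for a correlating cause, predictive analytics extrapolates a trend, and prescriptive analytics suggests an action when the projection crosses a threshold. The data, the 90 percent threshold and the simple linear trend are all assumptions chosen to keep the example readable.

    import numpy as np

    # Hypothetical hourly CPU utilization (%) and request volume for one server.
    cpu = np.array([42, 45, 44, 48, 55, 63, 71, 78, 85, 91], dtype=float)
    requests = np.array([300, 320, 310, 350, 430, 520, 610, 700, 800, 880], dtype=float)
    hours = np.arange(len(cpu))

    # Descriptive: what happened?
    print(f"Peak CPU was {cpu.max():.0f}% (average {cpu.mean():.0f}%).")

    # Diagnostic: why did it happen? Strong correlation with request volume is one clue.
    print(f"CPU vs. request volume correlation: {np.corrcoef(cpu, requests)[0, 1]:.2f}")

    # Predictive: what is likely to happen next? Extrapolate a simple linear trend.
    slope, intercept = np.polyfit(hours, cpu, 1)
    next_hour = slope * len(cpu) + intercept
    print(f"Projected CPU next hour: {next_hour:.0f}%")

    # Prescriptive: what should be done about it?
    if next_hour > 90:
        print("Recommendation: add capacity or shed load before the next hour.")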


Measure what’s important: When starting out, it may be tempting to track data on everything. But this approach will lead to spending more time monitoring and maintaining data than actually analyzing it for insights. Analytics only provides benefits if you track information that provides critical business intelligence and insights. A good starting point is to have stakeholders such as the CIO or other decision makers identify what critical business questions need to be answered, and create corresponding SLAs that can set appropriate expectations for action items.

Collect and analyze the data: An infrastructure analytics tool does most of the heavy lifting here, collecting the relevant data from its various sources and processing it using either pre-trained or customized machine learning models. Raw data is transformed into meaningful insights in real time.
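
As a hedged sketch of what “processing with a machine learning model” can look like, the example below trains an anomaly detector on a few hundred intervals of simulated healthy metrics and then scores a clearly abnormal interval. The features, the IsolationForest model and the contamination setting are illustrative choices, not a description of how any specific tool works internally.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-interval features an analytics tool might collect:
    # [CPU %, memory %, error count]. Real tools ingest far more dimensions.
    rng = np.random.default_rng(0)
    healthy = rng.normal(loc=[50, 60, 2], scale=[5, 5, 1], size=(200, 3))
    incident = np.array([[95, 97, 40]])   # an interval that looks nothing like the baseline

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(healthy)                    # learn what "normal" looks like

    # predict() returns -1 for intervals the model considers anomalous, 1 otherwise.
    print(model.predict(incident))        # expected: [-1]
    print(model.predict(healthy[:3]))     # mostly [1 1 1]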

Contextualize and visualize: To successfully interpret and act on analytics, you must put the raw, unstructured data in context. Understanding who the stakeholders are will help determine what information needs to be communicated and how. Infrastructure analytics tools can help, allowing you to view data from different perspectives and create the appropriate visualizations that best relay the ideas you want to communicate.
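
For instance, plotting a raw error-rate series against the SLA threshold a stakeholder actually cares about turns a column of numbers into something a decision maker can read at a glance. The service name, numbers and threshold below are invented for the example.

    import matplotlib.pyplot as plt

    # Hypothetical hourly error rate for a checkout service, plus its SLA threshold.
    hours = list(range(24))
    error_rate = [0.2, 0.3, 0.2, 0.4, 0.3, 0.5, 0.9, 1.8, 2.5, 1.1, 0.6, 0.4,
                  0.3, 0.3, 0.2, 0.4, 0.5, 0.6, 0.4, 0.3, 0.2, 0.2, 0.3, 0.2]

    plt.plot(hours, error_rate, label="Error rate (%)")
    plt.axhline(1.0, color="red", linestyle="--", label="SLA threshold")
    plt.xlabel("Hour of day")
    plt.ylabel("Errors per 100 requests")
    plt.title("Checkout service error rate vs. SLA")
    plt.legend()
    plt.show()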

Draw conclusions: Evaluate the insights in your dashboard and decide on the appropriate action. With this new clarity, you can respond accordingly and make more informed decisions for the future.

Using Infrastructure Analytics to Drive Infrastructure Development

Infrastructure analytics can drive infrastructure development over time by creating a proactive, self-learning environment that can observe and diagnose infrastructure events and respond quickly. In the short term, it shifts the burden of troubleshooting, resource allocation, optimization, performance reporting and other tasks from the end user or service provider to the infrastructure itself. Over time, infrastructure analytics can move beyond just predicting events to suggesting preventative measures and other performance adjustments.

AI & ML for Infrastructure Analytics

AI is a critical component of infrastructure analytics, and a foundational understanding of AI is crucial for any implementation to succeed.

AI is an umbrella term that describes machines or software engineered to observe, think and react like human beings. AI comprises many subfields that mimic specific behaviors we associate with a human’s natural intelligence — speech recognition and natural language processing, for example. Machine learning is perhaps the most widely applied sub-field of AI as well as the biggest driver of infrastructure analytics, allowing a computer system to learn from experience by processing the data it receives and autonomously improving the performance of its task.

Machine learning algorithms are classified as “supervised” or “unsupervised.” Supervised machine learning requires someone, usually a data scientist, to “teach” the algorithm by providing it with labeled training data that includes a set of examples and a specific outcome for each. The data scientist indicates what variables to analyze and then provides feedback on the accuracy of the predictions based on that data. After sufficient training, the computer is able to predict trends in future data.
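
A toy supervised example, with invented metrics and labels, shows the shape of that workflow: labeled examples (intervals that did or did not precede a failure) train a model, which can then score new, unlabeled intervals.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical labeled training data: [CPU %, disk latency ms] per interval.
    # The label (1 = a failure followed shortly after) is the "known outcome"
    # a data scientist supplies during training.
    X_train = np.array([[45, 5], [50, 6], [55, 8], [90, 40], [95, 55],
                        [85, 35], [40, 4], [60, 9], [92, 60], [88, 45]])
    y_train = np.array([0, 0, 0, 1, 1, 1, 0, 0, 1, 1])

    model = LogisticRegression()
    model.fit(X_train, y_train)               # the "teaching" phase

    # After training, the model scores new, unlabeled intervals.
    print(model.predict([[93, 50]]))          # likely [1]: at risk of failure
    print(model.predict([[48, 5]]))           # likely [0]: healthy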

Unsupervised machine learning algorithms also require administrators or data scientists to provide them with training data, but are not given known outcomes for comparison, instead analyzing data and inferring previously unknown patterns. Unsupervised machine learning algorithms can cluster similar data together, detect anomalies within a data set and find rules that associate multiple variables.
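
A comparable unsupervised sketch, again with invented numbers, hands an algorithm unlabeled host metrics and lets it find the structure itself: it groups similar hosts into clusters, and a host far from every cluster center surfaces as a candidate anomaly.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical unlabeled observations: [CPU %, network I/O MB/s] per host.
    # No outcomes are provided; the algorithm has to find structure on its own.
    X = np.array([[20, 5], [25, 7], [22, 6],       # lightly loaded hosts
                  [70, 80], [75, 85], [72, 78],    # busy hosts
                  [98, 2]])                        # odd profile: high CPU, almost no traffic

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)                          # which cluster each host fell into

    # A point far from every cluster center is a candidate anomaly.
    distances = np.linalg.norm(X - kmeans.cluster_centers_[kmeans.labels_], axis=1)
    print(distances.round(1))                      # the last host stands out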

Both supervised and unsupervised machine learning tactics are essential to performing real-time infrastructure analytics. Supervised machine learning allows infrastructure analytics tools to build predictive models that allow them to anticipate system failures and other events in the infrastructure. Unsupervised machine learning makes it possible for machines to discover faulty hardware or recognize the patterns that indicate an event trigger.

The Business Case for Infrastructure Analytics

There are many things to consider before adopting infrastructure analytics, but the success of an implementation will depend on understanding the following:

  • Infrastructure analytics requires cultural change: Infrastructure analytics calls for humans to give up control of fault analysis and other infrastructure tasks and trust machines. Training machine learning models is both complex and time-consuming and comes with some risk of service impact, but it’s critical to yielding accurate insights. Everyone from the top down in the organization has to be on board and committed to the new system for it to succeed.
  • There is no one-size-fits-all tool: Currently, there is no universal master infrastructure analytics tool that can detect every error and find all the answers across your infrastructure. Instead, you have to piece together infrastructure analytics with different tools and services across the organization. This may necessitate one tool for event analytics, another for security, and so on, to provide a complete picture of your infrastructure operations.
  • Organizations will have to shift their focus to the data: Once infrastructure analytics is implemented correctly, people will need to focus more on the data and the results than on the network itself. This means the organization will have to spend more time developing and testing new services and less time on implementation and deployment.



Infrastructure analytics improves a business’ visibility into its increasingly complex environment. It makes sense of the volumes of data the business produces and delivers data insights to make better, more strategic decisions.

By themselves, end users struggle to correlate large amounts of data. Machine learning, however, can help because it learns from data to make predictions, draw inferences, discover patterns and set benchmarks, allowing for more rapid and accurate data analysis. The more quickly a company can process its data, the faster it can act on important insights.

Infrastructure analytics can help an organization and its end users to resolve and even prevent system failures quickly, more accurately allocate resources and improve the quality of performance reporting, among other things. The result is less downtime, increased efficiency and reduced costs.

Perhaps more importantly, infrastructure analytics can push an organization down the path of data literacy. Although IDC forecasts that worldwide revenue for big data and analytics products will reach $274 billion by 2022, 50 percent of organizations will still lack the data literacy and AI skills needed to achieve business value. Regardless of how much data an organization collects, that data provides no benefit if the organization can’t turn it into business value. Infrastructure analytics fosters a greater understanding of, and ability to communicate about, data, which you can apply to many other projects across your organization.


This posting does not necessarily represent Splunk's position, strategies or opinion.

Posted by Stephen Watts

Stephen Watts works in growth marketing at Splunk. Stephen holds a degree in Philosophy from Auburn University and is an MSIS candidate at UC Denver. He contributes to a variety of publications including CIO.com, Search Engine Journal, ITSM.Tools, IT Chronicles, DZone, and CompTIA.