Table of Contents
- Current State of Data Observability
- Top 8 Data Observability Trends for 2024
- Data-Driven FinOps
- Observability Pipelines
- Declined Usage of Siloed Open-Source Tools
- Observability Tool Consolidation
- Emergence of AI-driven Observability
- Rise of Platform Engineering
- Bring-Your-Own Storage Backend
- The Growing Challenge of Kubernetes Environments
Current State of Data Observability
Observability is no longer just a buzzword. Although it's a relatively new concept, it has rightfully earned its place in IT operations. Observability measures a system's state based on the data it generates, usually outputs such as logs, events, metrics, and traces.
With cloud-native environments becoming more mainstream, finding the root cause of anomalies or failures has become increasingly difficult. According to a 2023 Splunk report, close to 66% of organizations said every hour of downtime cost them more than $150,000. Organizations need data observability now more than ever.
Grand View Research estimated the global data observability market size at $2.14 billion and expects it to grow at a compound annual rate of 12.2% from 2024 to 2030. This growth is driven by organizations needing a clear picture of their systems and applications and a consolidated analysis of logs and metrics. Organizations have also started combining security data with other telemetry and applying pipeline analytics to optimize data more widely, leading to an uptick in the usage of observability tools.
That’s what the picture looks like at the moment. Let’s discuss the key data observability trends shaping the market in 2024 and how this space might evolve.
Top 8 Data Observability Trends for 2024
Data-Driven FinOps
With more organizations focusing on optimizing their IT budgets for efficiency and impact, we’ll see a more data-driven approach to FinOps.
Many organizations, especially the more established banks, are building their services on a technology stack that contains, on average, 12 different cloud platforms along with legacy on-premise systems. These hybrid environments are pretty complex, and according to a Dynatrace report, close to 54% of surveyed financial businesses feel they’ll become more complicated in the coming year. The growing demand for data-driven insights has led most fintech leaders to turn to AIOps to reduce multi-cloud complexity.
A benefit of data-driven FinOps is understanding why and where companies spend their money in the cloud. This will help inform decisions that avoid the trap of unused potential in revenue-generating areas.
According to Richard Hartmann at Grafana Labs, data collection, organization, and analysis will be the key to this evolution.
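To make that concrete, here is a minimal sketch of the kind of analysis data-driven FinOps relies on: grouping cloud billing records by service and flagging high spend with low utilization. The record fields, figures, and the 15% threshold are hypothetical assumptions for illustration, not any provider's billing export format.

```python
# A minimal sketch of data-driven FinOps analysis: aggregate cloud spend by
# service and flag high spend with low utilization. The record fields, the
# numbers, and the 15% threshold are hypothetical, for illustration only.
from collections import defaultdict

billing_records = [  # hypothetical billing-export rows
    {"service": "checkout-api", "cost_usd": 4200.0, "avg_cpu_util": 0.62},
    {"service": "batch-reports", "cost_usd": 9800.0, "avg_cpu_util": 0.07},
    {"service": "search", "cost_usd": 3100.0, "avg_cpu_util": 0.41},
]

spend = defaultdict(lambda: {"cost": 0.0, "util": []})
for row in billing_records:
    spend[row["service"]]["cost"] += row["cost_usd"]
    spend[row["service"]]["util"].append(row["avg_cpu_util"])

# Rank services by spend and flag likely waste for a FinOps review.
for service, agg in sorted(spend.items(), key=lambda kv: -kv[1]["cost"]):
    avg_util = sum(agg["util"]) / len(agg["util"])
    flag = "  <- review: high spend, low utilization" if avg_util < 0.15 else ""
    print(f"{service:15s} ${agg['cost']:>9,.2f}  util={avg_util:.0%}{flag}")
```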
Observability Pipelines
Global data creation is projected to grow to more than 180 zettabytes by 2025, up sharply from 2020 levels. With increasing data volume and costs, organizations want more granular access to their telemetry data. One approach that has gained traction as a way to reduce ingestion volumes and expenses is the observability pipeline.
Observability pipelines allow you to easily collect, route, and process logs across your infrastructure. By filtering out low-value telemetry and enriching what remains with more context, they help security and DevOps teams make the right decisions while reducing processing and storage costs.
Along with cost savings, another common benefit of observability pipelines for enterprises is data reformatting. According to Sapphire Ventures, observability pipeline customers can convert legacy data structures to more standards-based formats on the fly without touching or reinstrumenting legacy code bases.
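As a rough sketch of what a single pipeline stage does, the Python snippet below parses hypothetical legacy log lines, drops low-value levels, enriches the rest with context, and emits structured JSON. The field names, drop rules, and enrichment tags are assumptions for illustration, not the configuration of any particular pipeline product.

```python
# A minimal sketch of an observability pipeline stage, assuming plain-text
# legacy log lines flow in and structured JSON events flow out. The field
# names, drop rules, and enrichment tags are illustrative assumptions.
import json
import re

LEGACY_LINE = re.compile(r"^(?P<ts>\S+) (?P<level>\w+) (?P<msg>.*)$")
LOW_VALUE_LEVELS = {"DEBUG", "TRACE"}                  # drop noisy, low-value telemetry
STATIC_CONTEXT = {"env": "prod", "team": "payments"}   # enrichment example

def process(line: str) -> str | None:
    """Parse, filter, enrich, and reformat one legacy log line."""
    match = LEGACY_LINE.match(line)
    if not match:
        return None        # unparseable; a real pipeline would route this to a dead-letter sink
    event = match.groupdict()
    if event["level"].upper() in LOW_VALUE_LEVELS:
        return None        # reduce ingestion volume and cost
    event.update(STATIC_CONTEXT)   # add context downstream teams need
    return json.dumps(event)       # standards-based structured output

for raw in ["2024-05-01T10:00:00Z DEBUG heartbeat ok",
            "2024-05-01T10:00:01Z ERROR payment gateway timeout"]:
    out = process(raw)
    if out:
        print(out)
```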
Declined Usage of Siloed Open-Source Tools
Teams are facing significant challenges due to the rising complexity, cost, and data volumes associated with observability, which is pushing more of them toward open-source observability tools. This is the year when development teams and companies focus on what they can control. More specifically, they want control and flexibility over their data without worrying about vendor lock-in.
So, what's the solution? Companies are still seeking out open-source tools but are leaning away from siloed, single-purpose options.
Enter OpenTelemetry (OTEL), a way to centralize telemetry collection without worrying about specific vendor software.
OTEL is a CNCF-incubated project, and its vendor-neutral, standards-based approach makes it a reliable alternative in this space.
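As a small example of what vendor-neutral instrumentation looks like in practice, here is a minimal sketch using the OpenTelemetry Python SDK (the opentelemetry-api and opentelemetry-sdk packages). It prints spans to the console; swapping ConsoleSpanExporter for an OTLP exporter would send the same telemetry to any compatible backend. The service and attribute names are illustrative.

```python
# A minimal OpenTelemetry sketch: configure a tracer provider, then record a
# span with an attribute. Spans go to the console here; an OTLP exporter
# would route them to any compatible backend instead.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("order.id", "ord-12345")  # domain-specific context on the span
```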
Observability Tool Consolidation
On average, organizations use ten observability tools to manage infrastructure, applications, and user experience. This fragmentation means IT teams have to spend too much time piecing together different information from various monitoring tools before manually querying it for insights.
Close to 80% of log data has no analytical value, yet most teams spend a lot of money to analyze it. Finding valuable data is like looking for a needle in a haystack. With a highly fragmented observability stack, that haystack keeps getting bigger, making it more challenging to find the needle.
Common problems borne out of a fragmented observability stack are as follows:
- Cost and Resource Drain: One financial services firm was billed $65 million for observability in the first quarter of 2022 alone. How did that happen? A commonly cited cause is the unpredictability of data source growth; costs also depend on how much data you route into your logging platform. Add multiple tools to the mix, and you have near-zero visibility into where your costs are piling up. This has pushed more companies to focus on gaining visibility into their observability costs, with efficient data management practices and collecting less monitoring data being the most common ways to overcome the hurdle.
- Data Overload: The emergence of more distributed and dynamic cloud-native technology stacks has opened the data dams, and organizations are, quite frankly, drowning. Data is being generated at a rate that fragmented monitoring tools can't capture and analyze without costing a fortune. In this scenario, precise insights are a pipe dream.
Tool consolidation needs to be high on your radar to avoid downtime and improve customer retention and experience. By consolidating your monitoring tools into a single observability system, you can free up the time of your system admins and engineers, allowing them to focus on the core business.
According to Zongjie Dao of Splunk, tool consolidation will become more common because of the complexity of hybrid cloud environments.
One way to manage this complexity is through AI and automation:
Emergence of AI-driven Observability
AI has exploded onto the observability scene, partly because it allows organizations to parse huge volumes of data and figure out the state of their systems from a cybersecurity and site reliability POV.
To keep up with the pace of cloud-native delivery, organizations will need more predictive AI-driven analytics instead of traditional AIOps solutions that rely on training-based learning models.
According to Sapphire Ventures, an exciting development in this space has been the rise of LLM observability. This technology builds on traditional ML monitoring to capture necessary signals for tuning and operating LLMs.
The most widely cited use of AI in observability has been anomaly detection, according to Grafana Labs' 2024 Observability Survey, with other promising use cases emerging alongside it.
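To make the anomaly-detection idea concrete, here is a minimal sketch that flags metric samples whose rolling z-score exceeds a threshold. The window size, threshold, and latency series are made-up assumptions; real AIOps platforms use far more sophisticated models.

```python
# A minimal anomaly-detection sketch: flag samples whose z-score against a
# rolling window exceeds a threshold. Window, threshold, and data are
# illustrative assumptions only.
from statistics import mean, stdev

def detect_anomalies(series, window=10, threshold=3.0):
    """Yield (index, value) pairs whose rolling z-score exceeds the threshold."""
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat history: skip rather than divide by zero
        if abs((series[i] - mu) / sigma) > threshold:
            yield i, series[i]

# Hypothetical request-latency samples in milliseconds, with one obvious spike.
latency_ms = [120, 118, 125, 122, 119, 121, 123, 120, 118, 124, 480, 121]
for idx, value in detect_anomalies(latency_ms):
    print(f"anomaly at sample {idx}: {value} ms")
```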
Rise of Platform Engineering
In a Logz.io report, 87% of surveyed participants were using some form of platform engineering model, one in which a single group enables observability for multiple teams.
Platform engineering is a discipline focused on accelerating the development and deployment of resilient and effective software at scale. The goal is to provide an internal self-service platform to operationalize DevSecOps and SRE practices and reduce the cognitive load on engineering teams.
According to JetBrains' 2023 report, The State of the Developer Ecosystem, 73% of developers had experienced burnout, with ineffective time management cited as a key contributing factor.
Through practical platform engineering, organizations tend to see a spike in development velocity, among other benefits:
- Improved efficiency and productivity of work by reducing the cognitive load on developers
- Faster delivery time
- Improved system reliability
Bring-Your-Own Storage Backend
Sapphire Ventures believes that an architectural shift is on the horizon. Data warehouses are becoming the new “backend,” and providers aim to decouple their compute and storage layers fully.
This separation would allow each tier to be scaled independently and aligned with individual capacity needs. Ultimately, this would help customers achieve more granular control over data access and residency.
The Growing Challenge of Kubernetes Environments
Kubernetes has become ubiquitous as the platform for organizations shifting to a cloud-native environment.
Even though scaling services for new users on Kubernetes architectures is pretty easy, monitoring/troubleshooting, security, and networking are key challenges preventing organizations from migrating more of their mission-critical services.
Monitoring and troubleshooting are the biggest obstacles to more widespread adoption of Kubernetes. IT and security teams struggle to maintain visibility into Kubernetes' frequent updates (76% of surveyed leaders felt this way in a Dynatrace report), which can cause these teams to miss out on valuable real-time insights.
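As one small example of how teams regain basic visibility, the sketch below uses the official Kubernetes Python client (the kubernetes package) to list pods that look unhealthy across a cluster. It assumes a reachable kubeconfig, and the restart threshold is an arbitrary illustration rather than a recommended setting or a substitute for a full observability stack.

```python
# A minimal sketch of basic Kubernetes visibility: list pods that are not in a
# healthy phase or have restarted repeatedly. Assumes a working kubeconfig;
# the restart threshold of 5 is an arbitrary illustration.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

unhealthy = []
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    if phase not in ("Running", "Succeeded") or restarts > 5:
        unhealthy.append((pod.metadata.namespace, pod.metadata.name, phase, restarts))

for ns, name, phase, restarts in unhealthy:
    print(f"{ns}/{name}: phase={phase}, restarts={restarts}")
```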
In summary, organizations are still getting started with observability. At this stage, observability tools are needed to solve the most pressing IT operations challenges, and visibility takes the cake. As the data volumes and costs keep increasing, companies will focus on finding unified platforms that provide more visibility into their observability stack.
That means there are plenty of opportunities for startups to change the status quo and for established companies to evolve. AI will be a game-changer for disruption, and the organizations that can take advantage of these trends to help companies improve their visibility and productivity while reducing costs will win.
Take, for instance, a platform like Secoda. Secoda consolidates your data monitoring and observability, lineage, catalog, and documentation in one central platform so you can reduce complexity and save budget. You also get visibility into the health of your entire stack and can prevent data asset sprawl. Add to that the capabilities of Secoda AI to profile data, tag PII, analyze trends, generate documentation, and more, and you have a strong contender for disruption.