
Updated
December 2, 2024

Key Data Observability Trends for 2025

We look at the top data observability trends for 2025, from AI-driven insights to tool consolidation and platform engineering. Learn how observability pipelines and data-driven FinOps are transforming IT operations for better system visibility and cost optimization.

Dexter Chu
Head of Marketing

Table of Contents

  • Current State of Data Observability
  • Top 8 Data Observability Trends for 2025
    • Data-Driven FinOps
    • Observability Pipelines
    • Declined Usage of Siloed Open-Source Tools
    • Observability Tool Consolidation
    • Emergence of AI-driven Observability
    • Rise of Platform Engineering
    • Bring-Your-Own Storage Backend
    • The Growing Challenge of Kubernetes Environments

Current State of Data Observability

Data observability is no longer just a buzzword. Although it’s a relatively new concept, it has rightfully found its place in IT operations. Observability measures a system’s state from the data it generates, usually logs, events, metrics, and traces.

With cloud-native environments becoming more mainstream, finding the root cause of anomalies or failures has become increasingly difficult. According to a 2023 Splunk report, close to 66% of organizations said every hour of downtime was costing them more than $150,000. Organizations need data observability now more than ever.

Grand View Research estimated the global data observability market size at $2.14 billion and expects it to grow at a compound annual growth rate of 12.2% from 2024 to 2030. This growth is driven by organizations’ need for a clear picture of their systems and applications and a consolidated analysis of logs and metrics. Organizations have also started combining security data with other telemetry and applying pipeline analytics to optimize data more widely, leading to an uptick in the usage of observability tools.

That’s what the picture looks like at the moment. Let’s discuss the key data observability trends shaping the market for 2025 and how this space might evolve.

Top 8 Data Observability Trends for 2025

Data-Driven FinOps

With more organizations focusing on optimizing their IT budgets for efficiency and impact, we’ll see a more data-driven approach to FinOps. 

Many organizations, especially the more established banks, are building their services on a technology stack that contains, on average, 12 different cloud platforms along with legacy on-premises systems. These hybrid environments are pretty complex, and according to a Dynatrace report, close to 54% of surveyed financial businesses expect them to become even more complicated in the coming year. The growing demand for data-driven insights has led most fintech leaders to turn to AIOps to reduce multi-cloud complexity.

A key benefit of data-driven FinOps is understanding why and where companies spend their money in the cloud. That understanding informs decisions and helps companies avoid leaving potential untapped in revenue-generating areas.

According to Richard Hartmann at Grafana Labs, data collection, organization, and analysis will be the key to this evolution.
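
To make that collect-organize-analyze loop concrete, here is a minimal, hypothetical sketch in Python: rolling raw cloud billing records up along a tagging dimension so teams can see where spend concentrates. All field names and dollar figures are illustrative, not from any real billing export.

```python
from collections import defaultdict

# Hypothetical cost records, shaped roughly like rows from a cloud billing
# export; the teams, services, and amounts are made up for the example.
cost_records = [
    {"team": "payments", "service": "compute", "usd": 1820.50},
    {"team": "payments", "service": "storage", "usd": 240.10},
    {"team": "risk",     "service": "compute", "usd": 960.75},
    {"team": "risk",     "service": "observability", "usd": 410.00},
]

def spend_by_dimension(records, dimension):
    """Roll up spend along one tagging dimension (e.g., team or service)."""
    totals = defaultdict(float)
    for record in records:
        totals[record[dimension]] += record["usd"]
    # Sort descending so the biggest cost centers surface first.
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

if __name__ == "__main__":
    print(spend_by_dimension(cost_records, "team"))
    print(spend_by_dimension(cost_records, "service"))
```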

Observability Pipelines

Global data creation is estimated to grow to over 180 zettabytes by 2025. With data volumes and costs increasing, organizations want more granular access to their telemetry data. An approach that has gained traction as a way to reduce ingestion volumes and expenses is the observability pipeline.

Observability pipelines let you collect, route, and process logs across your infrastructure. By filtering out low-value telemetry and enriching the rest with more context, they help security and DevOps teams make the right decisions while reducing processing and storage costs.

Along with cost savings, another common benefit of observability pipelines is data reformatting for enterprises. According to Sapphire Ventures, observability pipeline customers can convert legacy data structures into more standards-based formats on the fly, without touching or reinstrumenting legacy code bases.
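
As a rough illustration of those three jobs, filtering, enriching, and reformatting, here is a toy pipeline stage in Python. The event fields and the target schema are invented for the example; real pipelines (such as OpenTelemetry Collector processors or vendor routers) work on much richer schemas.

```python
import json

# Log levels we treat as low-value telemetry and drop before they are billed.
LOW_VALUE_LEVELS = {"DEBUG", "TRACE"}

def process(event: dict, environment: str) -> dict | None:
    """One toy pipeline stage: filter, reformat, then enrich a log event."""
    # Filter: discard noisy, low-value events.
    if event.get("lvl", "INFO").upper() in LOW_VALUE_LEVELS:
        return None
    # Reformat: map a legacy field layout onto a more standards-based shape.
    normalized = {
        "severity": event.get("lvl", "INFO").upper(),
        "body": event.get("msg", ""),
        "attributes": {k: v for k, v in event.items() if k not in ("lvl", "msg")},
    }
    # Enrich: add routing context so downstream teams can make better decisions.
    normalized["attributes"]["deployment.environment"] = environment
    return normalized

if __name__ == "__main__":
    raw = [{"lvl": "debug", "msg": "cache miss"},
           {"lvl": "error", "msg": "payment failed", "order_id": "A-42"}]
    for event in raw:
        out = process(event, environment="production")
        if out is not None:
            print(json.dumps(out))
```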

Declined Usage of Siloed Open-Source Tools

Teams are facing significant challenges from the rising complexity, cost, and data volumes associated with observability, which is pushing more of them toward open-source observability tools. This is the year when development teams and companies focus on what they can control. More specifically, they want control and flexibility over their data without worrying about vendor lock-in.

So, what’s the solution? Companies are still seeking out open-source tools but leaning away from siloed, single-purpose options.

Enter OpenTelemetry (OTel), a way to centralize telemetry collection without being tied to any specific vendor’s software.

OTel is a CNCF-incubated project, and its vendor-neutral, standards-based approach makes it a reliable choice in this space.
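
For a sense of what that vendor neutrality looks like in practice, here is a minimal tracing setup using the OpenTelemetry Python SDK. It exports spans to the console; pointing the same instrumented code at a vendor is just a matter of swapping the exporter. The span and attribute names are illustrative.

```python
# Requires: pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire the SDK to a console exporter; swapping in a vendor's OTLP exporter
# later requires no changes to the instrumented code below.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-demo")  # instrumentation scope name

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "A-42")  # illustrative attribute
    # ... business logic would run here ...
```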

Observability Tool Consolidation

On average, organizations use ten observability tools to manage infrastructure, applications, and user experience. This fragmentation means IT teams spend too much time piecing together information from various monitoring tools before manually querying it for insights.

Close to 80% of log data has no analytical value, yet most teams spend a lot of money to analyze it. Finding valuable data is like looking for a needle in a haystack. With a highly fragmented observability stack, that haystack keeps getting bigger, making it more challenging to find the needle.

Common problems borne out of a fragmented observability stack are as follows:

  • Cost and Resource Drain: A financial services firm was reportedly charged $65 million for observability in the first quarter of 2022, a staggering figure. How did that happen? A commonly cited cause was the unpredictability of data source growth; costs also depend on how much data you route to your logging platform. Add multiple tools to the mix, and you have near-zero visibility into where your costs are piling up. This has pushed more companies to seek visibility into their observability costs, typically by adopting efficient data management practices and collecting less monitoring data.
  • Data Overload: The emergence of more distributed and dynamic cloud-native technology stacks has opened the data dams, and organizations are, quite frankly, drowning. Data is being generated at a rate that teams simply can’t capture and analyze with fragmented monitoring tools, at least not without spending a fortune. In this scenario, precise insights are a pipe dream.

Tool consolidation needs to be high on your radar to avoid downtime and improve customer retention and experience. By consolidating your monitoring tools into a single observability system, you can free up the time of your system admins and engineers, allowing them to focus on the core business.

According to Zongjie Dao of Splunk, tool consolidation will become more common because of the complexity of hybrid cloud environments.

One way to manage this complexity is through AI and automation:

Emergence of AI-driven Observability

AI has exploded onto the observability scene, partly because it allows organizations to parse huge volumes of data and assess the state of their systems from both a cybersecurity and a site reliability standpoint.

To keep up with the pace of cloud-native delivery, organizations will need more predictive AI-driven analytics rather than traditional AIOps solutions that run on training-based learning models.

According to Sapphire Ventures, an exciting development in this space has been the rise of LLM observability. This technology builds on traditional ML monitoring to capture necessary signals for tuning and operating LLMs.

The most widely cited use of AI in observability has been anomaly detection, according to Grafana Labs’ 2024 Observability Survey. The other promising AI use cases are as follows:

[Figure: promising AI use cases in observability, from Grafana Labs’ 2024 Observability Survey]
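
To ground the anomaly-detection use case, here is a deliberately simple sliding-window z-score detector over a latency series. It is a sketch of the basic idea only; production AI-driven observability relies on far more sophisticated models, and the sample values here are synthetic.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) pairs that sit far outside recent behavior."""
    history = deque(maxlen=window)  # sliding window of recent samples
    for i, value in enumerate(samples):
        if len(history) >= 2:  # need at least two points for a stdev
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

if __name__ == "__main__":
    latencies = [102, 98, 105, 99, 101, 97, 103, 100, 480, 99]  # ms, synthetic
    for index, value in detect_anomalies(latencies, window=5, threshold=3.0):
        print(f"anomaly at sample {index}: {value} ms")
```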

Rise of Platform Engineering

In a Logz.io report, 87% of surveyed participants were using some form of platform engineering model, one in which a single group enables observability for multiple teams.

[Figure: adoption of platform engineering models, from the Logz.io report]

Platform engineering is a discipline focused on accelerating the development and deployment of resilient and effective software at scale. The goal is to provide an internal self-service platform to operationalize DevSecOps and SRE practices and reduce the cognitive load on engineering teams.

According to JetBrains’ 2023 report, The State of the Developer Ecosystem, 73% of developers had experienced burnout, and ineffective time management was a key contributing factor.

Through practical platform engineering, organizations tend to see a sustained boost in development velocity, among other benefits:

  • Improved efficiency and productivity of work by reducing the cognitive load on developers
  • Faster delivery time
  • Improved system reliability

Bring-Your-Own Storage Backend

Sapphire Ventures believes that an architectural shift is on the horizon. Data warehouses are becoming the new “backend,” and providers aim to decouple their compute and storage layers fully.

This separation would allow each tier to be scaled independently and aligned with individual capacity needs. Ultimately, this would help customers achieve more granular control over data access and residency.
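
One way to picture that decoupling: the compute layer talks to storage only through a narrow interface, so the backend can be swapped for a customer-owned warehouse or object store. The sketch below is purely illustrative; the interface and class names are hypothetical, not any vendor’s actual API.

```python
from abc import ABC, abstractmethod

class TelemetryStore(ABC):
    """Hypothetical storage contract the compute/query layer depends on."""

    @abstractmethod
    def write(self, records: list[dict]) -> None: ...

    @abstractmethod
    def scan(self, start_ts: float, end_ts: float) -> list[dict]: ...

class InMemoryStore(TelemetryStore):
    """Stand-in backend; a real one might target S3, BigQuery, or Snowflake."""

    def __init__(self):
        self._records: list[dict] = []

    def write(self, records):
        self._records.extend(records)

    def scan(self, start_ts, end_ts):
        return [r for r in self._records if start_ts <= r["ts"] <= end_ts]

def error_rate(store: TelemetryStore, start_ts: float, end_ts: float) -> float:
    """Compute-layer logic: works against any backend honoring the interface."""
    rows = store.scan(start_ts, end_ts)
    return sum(r["level"] == "ERROR" for r in rows) / max(len(rows), 1)

if __name__ == "__main__":
    store = InMemoryStore()
    store.write([{"ts": 1.0, "level": "INFO"}, {"ts": 2.0, "level": "ERROR"}])
    print(error_rate(store, 0.0, 3.0))  # 0.5
```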

The Growing Challenge of Kubernetes Environments

Kubernetes has become ubiquitous as the platform for organizations shifting to a cloud-native environment.

[Figure: Kubernetes adoption among organizations shifting to cloud-native environments]

Even though scaling services for new users on Kubernetes architectures is pretty easy, monitoring/troubleshooting, security, and networking are key challenges preventing organizations from migrating more of their mission-critical services.

[Figure: top challenges preventing broader Kubernetes adoption, led by monitoring/troubleshooting]

As the figure shows, monitoring/troubleshooting is the biggest obstacle to the more widespread adoption of Kubernetes. IT and security teams can’t maintain visibility into Kubernetes’ frequent updates (76% of surveyed leaders felt this way in a Dynatrace report), which can cause these teams to miss out on valuable real-time insights.
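
Closing that visibility gap usually starts with basic cluster introspection. As a small example, assuming the official Kubernetes Python client and kubeconfig access to a cluster, this snippet lists pods that aren’t healthy, one coarse signal an observability tool would build on.

```python
# Requires: pip install kubernetes, plus kubeconfig access to a cluster.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Surface pods that are neither running nor completed: a first, very coarse
# visibility signal for the monitoring/troubleshooting gap described above.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```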

In summary, organizations are still getting started with observability. At this stage, observability tools are needed to solve the most pressing IT operations challenges, and visibility takes the cake. As the data volumes and costs keep increasing, companies will focus on finding unified platforms that provide more visibility into their observability stack. 

That means there are plenty of opportunities for startups to change the status quo and for established companies to evolve. AI will be a game-changer for disruption, and the organizations that can take advantage of these trends to help companies improve their visibility and productivity while reducing costs will win. 

Take, for instance, a platform like Secoda. Secoda consolidates your data monitoring and observability, lineage, catalog, and documentation in one central platform so you can reduce complexity and save budget. You also get visibility into the health of your entire stack and prevent data asset sprawl. Add to that the capabilities of Secoda AI to profile data, tag PII, analyze trends, generate documentation, and more, and you have a strong contender for disruption.
