This article was contributed by Buddy Brewer, GVP and general manager of Strategic Partnerships at New Relic.
In today’s global marketplace, every organization, across all industry sectors, faces intense pressure to deliver exceptional experiences for customers, employees, and partners alike. It’s not enough to release an outstanding product: companies must deliver consistent innovation as quickly as possible while maintaining reliable services and efficient operations. In the long term, a company’s success depends on its engineers’ ability to plan, build, deploy, and run outstanding software.
But while engineering teams want to focus on agility and innovation, they’re often stuck tracking down and fixing errors in complex tech stacks. This complexity has a real and immediate business cost in the form of shipping delays, slow responses to outages, and poor customer experiences. Perhaps the most significant cost is the time engineers waste that could otherwise go toward creating new value for users and driving sustained business growth.
While engineering teams recognize the importance of monitoring performance and identifying anomalies, today’s digital experiences are built on sprawling webs of microservices whose interactions are difficult to trace. Often, a patchwork of analytical tools gives engineers only limited glimpses of their tech stack: enough to see that an error is occurring, but not why, let alone how to fix it.
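To make that concrete, the sketch below shows what unified instrumentation can look like in practice, using the open source OpenTelemetry SDK for Python. The service and span names are hypothetical, and a real deployment would export spans to an observability backend rather than the console; this is an illustration of the approach, not any particular vendor’s integration.

```python
# A minimal sketch of unified instrumentation with OpenTelemetry for Python
# (pip install opentelemetry-sdk). The checkout example is hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# One tracer provider, shared across the service; a real setup would swap
# ConsoleSpanExporter for an exporter that ships spans to a backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

def checkout(cart_id: str) -> None:
    # Each unit of work becomes a span; spans emitted by different
    # microservices are stitched into one end-to-end trace, replacing
    # per-tool glimpses with a single view of the request.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("cart.id", cart_id)
        with tracer.start_as_current_span("charge-payment"):
            pass  # the call to the payment service would go here

checkout("cart-42")
```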
According to a recent report from New Relic, 90% of IT leaders and engineers claim that observability is critical to the success of their business, with 94% stating that it’s critical to their role. True full-stack observability is becoming mission-critical to the success of modern businesses, enabling engineers to gain a comprehensive view of their operations and make fixes on an accelerated timeline.
This research finds that the coming year and beyond will see the data-driven observability landscape gain significant momentum. Here are four reasons why:
1. Fragmented monitoring fails to keep up with increasing outages in the changing observability landscape
The days of monolithic, DIY tech stacks are over. Modern engineering teams are adopting an overwhelming number of tools — both proprietary and open source — at a rapid pace. Seventy-two percent of survey respondents have to toggle between two or more tools, while 13% use ten or more different tools to monitor the health of their systems.
This proliferation of tools has generated an onslaught of new problems. Instead of innovating faster and improving mean time to detect (MTTD) and mean time to resolution (MTTR), engineers must spend an inordinate amount of time stitching together siloed data and context-switching between tools. Following a year defined by massive outages across applications, cloud services, and internet providers, IT leaders have recognized the importance of observability for addressing sudden, costly outages.
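As a rough illustration, MTTD and MTTR boil down to simple averages over incident timestamps; the sketch below computes both from entirely hypothetical incident data.

```python
# A minimal sketch of the MTTD/MTTR arithmetic, using hypothetical
# incident timestamps (hours relative to when each failure began).
incidents = [
    # (started, detected, resolved) -- illustrative values only
    (0.0, 0.5, 2.0),
    (0.0, 1.5, 6.0),
    (0.0, 0.2, 1.0),
]

# Mean time to detect: average gap between failure and detection.
mttd = sum(detected - started for started, detected, _ in incidents) / len(incidents)
# Mean time to resolution: average gap between failure and the fix.
mttr = sum(resolved - started for started, _, resolved in incidents) / len(incidents)

print(f"MTTD: {mttd:.2f}h, MTTR: {mttr:.2f}h")  # MTTD: 0.73h, MTTR: 3.00h
```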
2. Usage-based pricing shifts customer strategies
Many monitoring tools that rely on subscription pricing structures actually discourage IT leaders, engineers, and developers from ingesting all of their data: the pricing is difficult to predict and scale, and it is too expensive for most organizations. As a result, teams compromise on visibility. Sixty percent of survey respondents still monitor telemetry data at the application level only, leaving massive amounts of data in their software stack unmonitored.

Modern observability tools, however, are shifting to usage-based consumption and pricing models. These offerings provide full visibility into a team’s telemetry data, with organizations paying only for what they use. By removing upfront guesswork about usage and potential overage penalties, this pricing model lets engineers build a comprehensive picture of their operations and reap the benefits of a true observability practice.
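The underlying arithmetic is straightforward. The sketch below compares the two models with entirely hypothetical rates and volumes, purely to illustrate why a usage-based bill is easier to forecast.

```python
# A back-of-the-envelope comparison of the two pricing models. All
# figures here are hypothetical and exist only to illustrate the trade-off.
gb_ingested_per_month = 800          # telemetry actually sent
price_per_gb = 0.30                  # usage-based rate (hypothetical)
hosts = 40
price_per_host = 25.00               # subscription rate (hypothetical)

usage_based_cost = gb_ingested_per_month * price_per_gb
subscription_cost = hosts * price_per_host

print(f"usage-based:  ${usage_based_cost:,.2f}/month")   # $240.00/month
print(f"subscription: ${subscription_cost:,.2f}/month")  # $1,000.00/month
# Under usage-based pricing the bill tracks data volume directly, so a
# team can ingest everything and still forecast its spend.
```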
3. Tech teams try to keep pace with containerization in the observability landscape
We found that just 10% of IT decision-makers are using Kubernetes and containers in production. However, other responses point to significant near-term growth for containerization: 88% of respondents are exploring Kubernetes, with 25% conducting research, 25% evaluating, and 29% in development. As organizations make fundamental changes to their architectures, they will need to build monitoring in from the start to maintain reliability and performance. Kubernetes observability will play a vital role in the next generation of tech stacks, delivering both operational visibility and tools for defending against malicious applications.
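As a sketch of what “building monitoring in” can look like for a containerized service, the example below exposes application metrics with the open source prometheus_client library, which a cluster-level scraper such as Prometheus can then collect. The metric names and port are illustrative assumptions, not a prescribed setup.

```python
# A minimal sketch of baking monitoring into a containerized service with
# prometheus_client (pip install prometheus-client). Metric names and the
# port are hypothetical; in Kubernetes, a scrape annotation or
# ServiceMonitor would point the collector at the exposed endpoint.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

@LATENCY.time()  # records how long each call takes
def handle_request():
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics for the cluster's scraper
    while True:
        handle_request()
```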
4. Observability improves service and reliability
The past two years have shown how essential digital services are, as even the best-laid return-to-office plans have been upended by new COVID-19 variants and shifting corporate policies. In this always-online world, application data can give us greater detail and insight into real-world performance. For example, an increase in web traffic or application demand will typically be linked to higher transaction volumes. This increase can be seen and tracked across both application components and revenue figures, demonstrating observability’s value in both driving and quantifying business success.
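As a toy illustration of that linkage, the snippet below correlates hourly transaction counts against revenue; both series are hypothetical.

```python
# A toy illustration of lining telemetry up against business figures.
# Both the transaction counts and the revenue numbers are hypothetical.
from statistics import correlation  # Python 3.10+

transactions_per_hour = [120, 150, 180, 240, 310, 280]
revenue_per_hour = [1900, 2300, 2750, 3700, 4800, 4300]

# A coefficient near 1.0 suggests the transaction volume tracked in an
# observability platform is a usable proxy for revenue impact.
print(f"correlation: {correlation(transactions_per_hour, revenue_per_hour):.3f}")
```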
Looking to the 2022 observability landscape and beyond, IT leaders face a key decision point in how to manage complexity in their tech stacks. The growing wave of containers and microservices is unavoidable, leaving organizations to choose how they build and monitor their expanding architectures. Survey data shows that industry leaders view observability as key to managing this complexity, and customer-friendly tools and pricing models should pave the way for a significant increase in observability deployments in the year to come.
For more information on the survey findings, please visit the 2021 Observability Forecast.
Buddy Brewer is the GVP and general manager of Strategic Partnerships at New Relic. A former entrepreneur in the observability space, Buddy has helped companies across every region and industry improve their software’s speed, quality, and user experience.