Managing Software Reliability Metrics: How to Build SRE Dashboards That Drive Positive Business Outcomes

Customers expect your business application to perform consistently and reliably at all times—and for good reason. Many have built their own business systems based on the reliability of your application. This reliability target is your service level objective (SLO), a measurable target defined within the service level agreement (SLA) between a service provider and its customer.

The SLO sets target values and expectations on how your service(s) will perform over time. It includes service level indicators (SLIs)—quantitative measures of key aspects of the level of service—which may include measurements of availability, frequency, response time, quality, throughput and so on.

If your application goes down for longer than the SLO dictates, fair warning: All hell may break loose, and you may experience frantic pages from customers trying to figure out what’s going on. Furthermore, a breach of your SLO error budget—the rate at which service level objectives can be missed—could have serious financial implications as defined in the SLA.

Why an Error Budget?

Developers are always eager to release new features and functionality. But these upgrades don’t always turn out as expected, and the result can be an SLO violation. Your SRE team should still be able to do deployments and system upgrades as needed, but anytime you make changes to applications, you introduce the potential for instability.

An error budget states the numeric expectations of SLA availability. Without one, your customer may expect 100% reliability at all times. The benefit of an error budget is that it allows your product development and site reliability engineering (SRE) teams to strike a balance between innovation and reliability. If you frequently violate your SLO, the teams will need to decide whether it’s best to pull back on deployments and spend more time investigating the cause of the SLO breach.

For example, imagine that an SLO requires a service to successfully serve 99.999% of all queries per quarter. This means the service’s error budget has a failure rate of 0.001% for a given quarter. If a problem causes a 0.0002% failure rate, it will consume 20% of the service’s quarterly error budget.
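To make the arithmetic concrete, here’s a minimal sketch in plain Python of the same calculation; the numbers mirror the example above and are purely illustrative:

```python
# Error-budget arithmetic for the example above (illustrative values only).
slo_target = 0.99999              # 99.999% of queries must succeed this quarter
error_budget = 1 - slo_target     # allowed failure rate: 0.001%

incident_failure_rate = 0.000002  # an incident causing a 0.0002% failure rate

budget_consumed = incident_failure_rate / error_budget
print(f"Quarterly error budget consumed: {budget_consumed:.0%}")  # -> 20%
```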

Don’t Aim for Perfection

Developing a workable SLO isn’t easy. You need to set realistic goals, as aiming for perfection (e.g., 100% availability) can prove very expensive and nearly impossible to achieve. Your SRE team, which is responsible for the daily operation of an application in production, must work with interested parties (e.g., product owners) to find the correct transactions to monitor for your SLO.

To begin, you must define your SLIs to determine healthy levels of service, and then use metrics that expose a negative user experience. Your engineering and application teams must decide which metric(s) to monitor, since they know the application best. A typical approach is to find a key metric that represents your SLO. For instance, Netflix uses its starts-per-second metric as an indicator of overall system health, because its baselining has led the company to expect X number of starts within any given timeframe.

Once you’ve found the right metrics, make them visible on a dashboard. Of course, not all metrics are useful. Some won’t need alerts or dashboard visibility, and you’ll want to avoid cluttering your dashboard with too many widgets. Treat this as an iterative process. Start with just a few metrics as you gain a better understanding of your system’s performance. You also can implement alerting—email, Slack, ticketing and so on—to encourage a quick response to outages and other problems.
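As a rough illustration of that iterative approach (not an AppDynamics API), the sketch below computes a simple availability SLI from request counts and raises an alert when it drops below an assumed SLO target; the function names, the 99.9% target and the print-based alert are all hypothetical placeholders:

```python
# Minimal sketch: compute an availability SLI and alert on an SLO breach.
# In practice the alert transport might be email, Slack or a ticketing system.

def availability_sli(successful_requests: int, total_requests: int) -> float:
    """Fraction of requests served successfully (a simple availability SLI)."""
    if total_requests == 0:
        return 1.0
    return successful_requests / total_requests

def check_slo(sli: float, slo_target: float = 0.999) -> None:
    """Emit an alert (here, just a print) when the SLI breaches the SLO target."""
    if sli < slo_target:
        print(f"ALERT: SLI {sli:.4%} is below the SLO target of {slo_target:.3%}")

# Example: 99,850 of 100,000 requests succeeded in the last measurement window.
check_slo(availability_sli(99_850, 100_000))  # fires: 99.85% < 99.9%
```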

People often ask, “What happens when SLOs aren’t met?”

Because an SLA establishes that service availability will meet certain thresholds over time, an SLO breach and a depleted error budget can have serious consequences for your business—including reputational harm and, of course, financial loss. Since the penalty for an SLA violation can be severe, your SRE team should be empowered to fix problems within the application stack. Depending on the team’s composition, it may release a fix to the feature code, make changes to the underlying platform architecture or, in a severe case, ask the feature team to halt all new development until your service returns to an acceptable level of stability as defined by the error budget.

How AppDynamics Helps You

AppDynamics enables you to track numerous metrics for your SLI.

But you may be wondering, “Which metrics should I use?”

AppD users are often excited—maybe even a bit overwhelmed—by all the data collected, and they assume everything is important. But your team shouldn’t constantly monitor every metric on a dashboard. While our core APM product provides many valuable metrics, AppDynamics includes many additional tools that deliver deep insights as well, including End User Monitoring (EUM), Business iQ and Browser Synthetic Monitoring.

Let’s break down which AppDynamics components your SRE team should use to achieve faster MTTR:

  • APM: Say your application relies heavily on APIs and automation. Start with a few APIs you want to monitor and ask, “Which one of these APIs, if it fails, will impact my application or affect revenue?” These calls usually have a very demanding SLO.

  • End User Monitoring: EUM is the best way to truly understand the customer experience because it automatically captures key metrics, including end-user response time, network requests, crashes, errors, page load details and so on.

  • Business iQ: Monitoring your application is not just about reviewing performance data. Business iQ helps expose application performance from a business perspective, showing whether your app is generating revenue as forecasted or experiencing a high abandon rate due to degraded performance.

  • Browser Synthetic Monitoring: While EUM shows the full user experience, sometimes it’s hard to know if an issue is caused by the application or the user. Generating synthetic traffic will allow you to differentiate between the two.

So how does AppDynamics help monitor your error budget?

After determining the SLI, SLO and error budget for your application, you can display your error budget on a dashboard. First, convert your SLO target to minutes of allowable downtime—for example, a 99.99% SLO leaves a 0.01% error budget, or only about 52.6 minutes of downtime per year. You can create a custom metric to count the duration of SLO violations and display it in a graph. Of course, you’ll need to take maintenance and planned downtime into consideration as well.
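Here’s a minimal sketch of that conversion, assuming nothing beyond the SLO target itself; it also shows how quickly the allowance shrinks as you add nines:

```python
# Convert an SLO availability target into an annual downtime allowance.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def allowed_downtime_minutes(slo_target: float) -> float:
    """Minutes of downtime per year permitted by a given availability target."""
    return (1 - slo_target) * MINUTES_PER_YEAR

for target in (0.999, 0.9999, 0.99999):
    print(f"{target:.3%} -> {allowed_downtime_minutes(target):.1f} minutes/year")
# 99.900% -> 526.0 minutes/year (~8.77 hours)
# 99.990% -> 52.6 minutes/year
# 99.999% -> 5.3 minutes/year
```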

With AppDynamics you can use key metrics such as response time, HTTP error count, and timeout errors. Try to avoid using system metrics like CPU and memory because they tell you very little about the user experience. In addition, you can configure Slow Transaction Percentile to show which transactions are healthy.

Availability is another great metric to measure, but keep in mind that even if your application availability is 100%, that doesn’t mean it’s healthy. It’s best to start building your dashboard in the pre-prod environment, as you’ll need time to tweak thresholds and determine which metric to use with each business transaction. The sooner AppDynamics is introduced to your application SDLC, the more time your developers and engineers will have to get acclimated to it.

What does the ideal SRE dashboard look like? Make sure it has these KPIs (a sketch of deriving a few of them follows the list):

  • SLO violation duration graph, response time (99th percentile) and load for your critical API calls

  • Error rate

  • Database response time

  • End-user response time (99th percentile)

  • Requests per minute

  • Availability

  • Session duration
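If your monitoring tooling can export per-request records, a few of these KPIs can be derived directly. The sketch below is illustrative only; the record fields and sample data are assumptions, not an AppDynamics export format:

```python
# Illustrative derivation of two dashboard KPIs from hypothetical request records.
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class Request:
    duration_ms: float   # response time of the request
    status_code: int     # HTTP status returned

def p99_response_time(requests: list[Request]) -> float:
    """99th-percentile response time in milliseconds."""
    durations = sorted(r.duration_ms for r in requests)
    return quantiles(durations, n=100)[-1]  # last cut point = 99th percentile

def error_rate(requests: list[Request]) -> float:
    """Fraction of requests that returned a server-side error."""
    errors = sum(1 for r in requests if r.status_code >= 500)
    return errors / len(requests)

# Small synthetic sample: mostly fast successes, two slow calls, one failure.
sample = [Request(120, 200)] * 97 + [Request(900, 200)] * 2 + [Request(1500, 503)]
print(f"p99 response time: {p99_response_time(sample):.0f} ms")
print(f"error rate: {error_rate(sample):.1%}")
```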

Providing Value to Customers with Software Reliability Metric Monitoring

SLI, SLO, SLA and error budget aren’t just fancy terms. They’re critical to determining if your system is reliable, available or even useful to your users. You should be able to measure these metrics and tie them to your business objectives, as the ultimate goal of your application is to provide value to your customers.

Learn how AppDynamics can help measure your business success.

Mean Time to Repair: What it Means to You

We’ve all been there: Flying home, late at night, a few delays. Our flight arrives at the airport and we’re anxious to get out of the tin can. Looking outside, we see no one is connecting the jet bridge to the aircraft. Seconds seem like minutes as the jet bridge just sits there. “This is not a random event; they should have been expecting the flight,” we tell ourselves over and over again. Finally, a collective sigh of relief as the jet bridge lights up and inches ever closer to our freedom.

Even though the jet bridge was not broken per se, the process of attaching the bridge seemed broken to the end user, a.k.a. “the passenger.” The latency of this highly anticipated action was angst-inducing.

As technologists, we deal with increasingly complex systems and platforms. The advent of the discipline around site reliability/chaos engineering brings rigor to mean-time-to-repair (MTTR) and mean-time-between-failure (MTBF) metrics.

For failure to occur, a system doesn’t have to be in a nonresponsive or crashed state. Going back to my jet bridge example, even high latency can be perceived as “failure” by your customers. This is why we have service level agreements (SLAs), which establish acceptable levels of service and the consequences of noncompliance. Violate an SLA, for example, and your business could find itself facing a sudden drop in customer sentiment as well as a hefty monetary fine.

Site reliability engineers (SREs) push for elastic and self-healing infrastructure that can anticipate and recover from SLA violations. However, these infrastructures are not without complexity to implement and instrument.

Mobile Launch Meltdown

I remember back when I was a consulting engineer with a major mobile carrier as a client. This was about a decade ago, when ordering a popular smartphone on its annual release date was an exercise in futility. I would wait up into the wee hours of the morning to be one of the first to preorder the device. After doing so on one occasion, I headed into the office.

By midday, after preordering had been open for some time, a cascading failure was occurring at my company, one of many vendors crucial to the preorder process. My manager called me to her office to listen in on a bridge call with the carrier. Stakeholders from the carrier were rightfully upset: “We will make more in an hour today than your entire company makes in a year,” they repeated multiple times.

The pressure was on to rectify the issues and allow the business to continue. As in the novel The Phoenix Project, representatives from different technology verticals joined forces in a war room to fix things fast.

The failure was complex—multiple transaction and network boundaries, with incoming orders arriving at massive speed and scale. However, a large set of orders coming in on a specific date was hardly random, since the device manufacturer had set the launch date well in advance.

The Importance of Planning Ahead

The ability to tell when a violation state is going to occur—and to take corrective action ahead of time—is crucial. The more insight and time you have, the easier it is to get ahead of a violation, and the less pressure you’ll feel to push out a deployment or provision additional infrastructure.
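One way to build in that lead time, sketched below with purely illustrative numbers, is to project the current error-budget burn rate forward and act before the budget runs out:

```python
# Illustrative burn-rate projection: estimate when the remaining error budget
# will be exhausted if the current failure rate continues.

def hours_until_budget_exhausted(budget_remaining: float,
                                 burn_per_hour: float) -> float:
    """Hours left before the error budget is fully consumed at the current rate."""
    if burn_per_hour <= 0:
        return float("inf")
    return budget_remaining / burn_per_hour

# Example: 40% of the quarterly budget remains and 5% is being burned per hour.
hours_left = hours_until_budget_exhausted(0.40, 0.05)
print(f"Error budget exhausted in ~{hours_left:.0f} hours")  # -> ~8 hours
# Enough lead time to roll back a deployment or provision capacity
# before the SLO itself is breached.
```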

With the rise of cloud-native systems, platforms and applications are increasingly distributed across multiple infrastructure providers. Design patterns such as Martin Fowler’s Strangler Pattern have become cemented as legacy applications evolve to handle the next generation of workloads. Managing a hybrid infrastructure becomes a challenge, a delicate balance between the granular control of an on-prem environment and the convenience and scalability of a public cloud provider.

Usually there is no silver bullet to fix problems at scale. If there were a single glaring issue, as the old adage goes, it would have been addressed already. In performance testing, death by a thousand paper cuts plays itself out in complex distributed systems. Fixing and addressing issues is an iterative process. During a production-impacting event, haste can make waste. With all of the investment in infrastructure-as-code and CI/CD, these deployments can occur faster and more systematically than ever.

We might not all experience an incident as major as a mobile phone preorder meltdown, but as technologists we strive to make our systems as robust as possible. We also invest in technologies that enable us to change our systems faster—an essential capability today, when so many of us are under the gun to fix what’s broken rather than add new features that delight the customer.

I am very excited to join AppDynamics! I’ve been building and implementing large, distributed web-scale systems for many years now, and I’m looking forward to my new role as evangelist in the cloud and DevOps spaces. With the ever-increasing complexity of architectures, and the focus on performance and availability to enhance the end-user experience, it’s crucial to have the right data to make insightful changes and investments. And with the synergies and velocity of the DevOps movement, it’s equally important to make educated changes.

Site Reliability Engineering: DevOps 2.0

Has there ever been a better time to be in DevOps? TV shows like “Person of Interest” and “Mr. Robot” are getting better at showing what developers actually do, using chunks of working code. Movies like Michael Mann’s “Blackhat” (2015) won praise from Google’s security team for its DevOps accuracy in a few scenes. Look around and you’ll discover elements of DevOps culture filtering out into wider society, such as people in all walks of life discussing their uptime or a fast-approaching code lock.

On the other hand, perhaps the biggest thorn in the side of DevOps is that developers and operations teams don’t normally get along well. Developers want to rush ahead and compile some groundbreaking code under extremely tight schedules, while operations teams try to slow everyone down to identify systemic risks from accidents or malicious actors. Both teams want to end up with a better user experience, but getting there becomes a power struggle over what that experience truly means.

The dream that brought DevOps together is a person who can be half dev and half ops. That split role is exactly the point of the site reliability engineer (SRE).

Defining the SRE

In introducing the term SRE, Google’s VP of Engineering, Ben Treynor, stated,

“It’s what happens when you ask a software engineer to design an operations function…. The SRE is fundamentally doing work that has historically been done by an operations team, but using engineers with software expertise, and banking on the fact that these engineers are inherently both predisposed to, and have the ability to, substitute automation for human labor.”

Way back in 2010, Facebook SRE Mark Schonbach explained what he did this way:

“I’m part of a small team of Site Reliability Engineers (SRE) that works day and night to ensure that you and the other 400+ million users around the world are able to access Facebook, that the site loads quickly, and all of the features are working…. We regularly hack tools on the fly that help us manage and perform complex maintenance procedures on one of the largest, if not the largest memcached footprints in the world. We develop automated tools to provision new servers, reallocate existing ones, and detect and repair applications or servers that are misbehaving.”

Where Did SREs Come From?

Reliability engineering is a concept that grew out of the operations world and has been around for more than 100 years. It became more closely connected with electronic systems after World War II, when the IEEE created the Reliability Society. In the past 10 years, five 9s (99.999%) became the gold standard for application performance management. That standard led to the creation of a class of operations experts who knew enough code to recover the site and put the last stable release back into production as fast as possible.

Treynor explained the impetus for creating this new category at Google with his typical deadpan humor: “One of the things you normally see in operations roles as opposed to engineering roles is that there’s a chasm not only with respect to duty, but also of background and of vocabulary, and eventually, of respect. This, to me, is a pathology.”

Which Toolsets Do SREs Use?

For SREs, stability and uptime are top priorities. However, they should be able to take responsibility and code their own way out of hazards, instead of adding to the to-do lists of the development team. At Google, SREs are often software engineers with a layer of network training on top. Typically, Google software engineers must demonstrate proficiency in:

  1. Google’s own Golang and OO languages such as C++, Python or Java

  2. A secondary language like JavaScript, CSS & HTML, PHP, Ruby, Scheme, Perl, etc.

  3. Advanced fields like AI research, cryptography, compilers, UX design, etc.

  4. Getting along with other coders

On top of those proficiencies, Google’s SREs must have experience in network engineering, Unix sys admin or more general networking/ops skills such as LDAP and DNS.

The Critical Role of SRE

Downtime is costing businesses around $300,000 per hour, according to a report from Emerson Network Power. The most obvious impact is when traffic spikes bring down e-commerce sites, which was covered in a recent AppDynamics white paper. However, Treynor also pointed out how standard dev vs. ops friction can be costly to businesses in other ways. The classic conflict starts with the support checklist that ops presents to dev before feature updates are released. Developers win when users like newly developed features, the sooner the better. Meanwhile, operations wins when there are the maximum 9s in their uptime reports. All change brings instability; how do you align their interests?

Treynor’s answer is a relief for those with compensation tied to user satisfaction metrics, but not so much for those with heart conditions. He said,

“100% is the wrong reliability target for basically everything. Perhaps a pacemaker is a good exception! But, in general, for any software service or system you can think of, 100% is not the right reliability target because no user can tell the difference between a system being 100% available and, let’s say, 99.999% available. Because typically there are so many other things that sit in between the user and the software service that you’re running that the marginal difference is lost in the noise of everything else that can go wrong.”

This response shifts the focus from specific uptime metrics, which may not act as accurate proxies for user expectations, to a reliability index based on market realities. Treynor explained,

“If 100% is the wrong reliability target for a system, what, then, is the right reliability target for the system? I propose that’s a product question. It’s not a technical question at all. It’s a question of what will the users be happy with, given how much they’re paying, whether it’s direct or indirect, and what their alternatives are.”

Who Is Hiring SREs?

The simple answer is “Everyone”—from software/hardware giants like Apple to financial portals like Morningstar to non-profit institutions like the Lawrence Berkeley National Laboratory. Berkeley Lab is a great example of an organization that sits at the cutting edge of energy research yet also maintains some very old legacy systems. Assuring reliability across several generations of technologies can be an enormous challenge. Here’s a look at what SREs at Berkeley Lab are responsible for:

  • Use Linux system administration skills to monitor and manage the reliability of the systems under the responsibility of the Control Room Bridge.

  • Develop and maintain monitoring tools used to support the HPC community within NERSC using programming languages like C, C++, Python, Java or Perl.

  • Provide input in the design of software, workflows and processes that improve the monitoring capability of the group to ensure the high availability of the HPC services provided by NERSC and ESnet.

  • Support in the testing and implementation of new monitoring tools, workflows and new capabilities for providing high availability for the systems in production.

  • Assist in direct hardware support of data clusters by managing component upgrades and replacements (DIMMs, hard drives, cards, cables, etc.) to ensure the efficient return of nodes to production service.

  • Help investigate and evaluate new technologies and solutions that push the group’s capabilities forward, get ahead of users’ needs, and support staff who are incentivized to transform, innovate and continually improve.

Contrast that skill profile with an online company like Wikipedia, where an SRE assignment tends to be less technical and more diplomatic:

  • Improve automation, tooling and processes to support development and deployment

  • Form deep partnership with engineering teams to work on improving user site experience

  • Participate in sprint planning meetings, and support intra-department coordination

  • Troubleshoot site outages and performance issues, including on-call response

  • Help with the provisioning of systems and services, including configuration management

  • Support capacity planning, profiling of site performance, and other analysis

  • Help with general ops issues, including tickets and other ongoing maintenance tasks

Within the past year, there has been a marked shift to a more strategic level of decision-making that reflects the increasing automation of customer requests and failover procedures. Even at traditional companies like IBM, SREs work with some of the newest platforms available due to the advance of IoT agendas. For example, one opening for an SRE at IBM in Ireland requires experience in OpenStack Heat, UrbanCode Deploy, Chef, Jenkins, ELK, Splunk, collectd and Graphite.

How SREs Are Changing

The online world is quite different now than when SREs entered the scene nearly a decade ago. Since then, mobile has redefined development cycles, and easy access to cloud-based data centers has brought microservices into the mainstream IT infrastructure. Startups regularly come out of the gate using REST and JSON as their preferred protocols for mobile apps. In accordance with the principles of Lean Startup, DevOps groups are often smaller, more focused teams that function as collective SREs.

You’ll find there’s a great deal more collaboration and less conflict between development and operations, simply because the continuous delivery model has collapsed the responsibilities of development and operations into a single cycle. The term DevOps is likely to disappear as the two distinct divisions merge in a new world where UX is everything and updates may be pushed out weekly. Regardless of how many 9s are in any given SRE’s job description, this career path appears to offer maximum reliability with job security.