Mean Time to Repair: What it Means to You

We’ve all been there: flying home, late at night, after a few delays. Our flight arrives at the airport and we’re anxious to get out of the tin can. Looking outside, we see no one is connecting the jet bridge to the aircraft. Seconds seem like minutes as the jet bridge just sits there. “This is not a random event; they should have been expecting the flight,” we tell ourselves over and over again. Finally, a collective sigh of relief as the jet bridge lights up and inches ever closer to our freedom.

Even though the jet bridge was not broken per se, the process of attaching it seemed broken to the end user, a.k.a. “the passenger.” The latency of this highly anticipated action caused real angst.

As technologists, we deal with increasingly complex systems and platforms. The rise of disciplines like site reliability engineering and chaos engineering has brought rigor to metrics such as mean time to repair (MTTR) and mean time between failures (MTBF).

For failure to occur, a system doesn’t have to be in a nonresponsive or crashed state. Going back to my jet bridge example, even high latency can be perceived as “failure” by your customers. This is why we have service level agreements (SLAs), which establish acceptable levels of service and the consequences of noncompliance. Violate an SLA, for example, and your business could find itself facing a sudden drop in customer sentiment as well as a hefty monetary fine.
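
To make MTTR and MTBF concrete, here is a minimal sketch of how the two metrics might be derived from a simple incident log. It is illustrative only; the Incident structure and the sample timestamps are assumptions, not the output of any particular monitoring product.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import List

    @dataclass
    class Incident:
        start: datetime  # when the failure (or SLA violation) began
        end: datetime    # when service was restored

    def mttr(incidents: List[Incident]) -> timedelta:
        """Mean time to repair: the average duration of an incident."""
        total_repair = sum((i.end - i.start for i in incidents), timedelta())
        return total_repair / len(incidents)

    def mtbf(incidents: List[Incident]) -> timedelta:
        """Mean time between failures: the average gap from the end of
        one incident to the start of the next."""
        ordered = sorted(incidents, key=lambda i: i.start)
        gaps = [later.start - earlier.end for earlier, later in zip(ordered, ordered[1:])]
        return sum(gaps, timedelta()) / len(gaps)

    # Hypothetical incident history, purely for illustration
    incidents = [
        Incident(datetime(2023, 1, 3, 9, 0), datetime(2023, 1, 3, 9, 45)),
        Incident(datetime(2023, 1, 10, 14, 0), datetime(2023, 1, 10, 14, 20)),
        Incident(datetime(2023, 1, 21, 2, 30), datetime(2023, 1, 21, 3, 10)),
    ]
    print("MTTR:", mttr(incidents))  # average repair time
    print("MTBF:", mtbf(incidents))  # average time between failures

The same arithmetic applies whether an “incident” is a full outage or an SLA-level latency violation like the jet bridge delay.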

Site reliability engineers (SREs) push for elastic and self-healing infrastructure that can anticipate and recover from SLA violations. However, such infrastructure is complex to implement and instrument.

Mobile Launch Meltdown

I remember back when I was a consulting engineer with a major mobile carrier as a client. This was about a decade ago, when ordering a popular smartphone on its annual release date was an exercise in futility. I would wait up into the wee hours of the morning to be one of the first to preorder the device. After doing so on one occasion, I headed into the office.

By midday, after preordering had been open for some time, a cascading failure was occurring at my company, one of many vendors crucial to this preorder process. My manager called me to her office to listen in on a bridge call with the carrier. Stakeholders from the carrier were rightfully upset: “We will make more in an hour today than your entire company makes in a year,” they repeated multiple times.

The pressure was on to rectify the issues and allow the business to continue. As in the novel The Phoenix Project, representatives from different technology verticals joined forces in a war room to fix things fast.

The failure was complex: multiple transaction and network boundaries, and incoming orders arriving at massive scale. The surge itself, however, was not random, since the device manufacturer had set the launch date well in advance.

The Importance of Planning Ahead

The ability to tell when a violation state is going to occur—and to take corrective action ahead of time—is crucial. The more insight and time you have, the easier it is to get ahead of a violation, and the less pressure you’ll feel to push out a deployment or provision additional infrastructure.
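
As a toy illustration of what “getting ahead of a violation” can look like, the sketch below fits a straight-line trend to recent latency samples and estimates how long until an assumed SLA threshold is crossed. The sample numbers, the 500 ms threshold, and the one-minute sampling interval are all hypothetical; a real system would rely on far more robust forecasting.

    from statistics import mean

    def minutes_until_violation(samples, threshold_ms, interval_min=1):
        """Project when a rising latency trend will cross the SLA threshold.

        samples: latency in ms, observed once every interval_min minutes.
        Returns estimated minutes until violation, or None if the trend
        is flat or improving.
        """
        xs = list(range(len(samples)))
        x_bar, y_bar = mean(xs), mean(samples)
        slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / \
                sum((x - x_bar) ** 2 for x in xs)
        if slope <= 0:
            return None
        intercept = y_bar - slope * x_bar
        steps_to_cross = (threshold_ms - intercept) / slope - (len(samples) - 1)
        return max(steps_to_cross, 0) * interval_min

    latency_ms = [180, 195, 210, 240, 260, 300, 330]  # trending upward
    eta = minutes_until_violation(latency_ms, threshold_ms=500)
    if eta is None:
        print("No violation projected")
    else:
        print(f"Projected SLA breach in roughly {eta:.0f} minutes")

Even a crude projection like this buys you time to scale out or shed load before the SLA is breached rather than after.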

With the rise of cloud-native systems, platforms and applications are increasingly distributed across multiple infrastructure providers. Design patterns such as Martin Fowler’s Strangler Pattern have become cemented as legacy applications evolve to handle the next generation of workloads. Managing a hybrid infrastructure becomes a challenge, a delicate balance between the granular control of an on-prem environment and the convenience and scalability of a public cloud provider.
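
At the heart of the Strangler Pattern is a routing facade that peels traffic away from the legacy system one capability at a time. The sketch below is a simplified illustration; the route table and backend URLs are made up, and in practice this logic usually lives in a proxy or API gateway rather than in application code.

    # Hypothetical route table: endpoints already migrated to new,
    # cloud-native services versus everything still on the legacy system.
    MIGRATED_PREFIXES = {
        "/orders":  "https://orders.new.internal",
        "/catalog": "https://catalog.new.internal",
    }
    LEGACY_BACKEND = "https://legacy.internal"

    def route(path: str) -> str:
        """Return the backend that should serve this request path."""
        for prefix, backend in MIGRATED_PREFIXES.items():
            if path.startswith(prefix):
                return backend
        return LEGACY_BACKEND  # not yet "strangled"

    assert route("/orders/42") == "https://orders.new.internal"
    assert route("/billing/7") == LEGACY_BACKEND

As more prefixes move into the migrated table, the legacy system serves less and less traffic until it can finally be retired.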

Usually there is no silver bullet for fixing problems at scale. If there were a single glaring issue, as the old adage goes, it would have been addressed already. In performance testing, death by a thousand paper cuts plays itself out across complex distributed systems, and fixing those issues is an iterative process. During a production-impacting event, haste can make waste. Yet with all of the investment in infrastructure-as-code and CI/CD, deployments can happen faster and more systematically than ever.

We might not all experience an incident as major as a mobile phone preorder meltdown, but as technologists we strive to make our systems as robust as possible. We also invest in technologies that enable us to change our systems faster, an essential capability today when so many of us are under the gun to fix what’s broken rather than add new features that delight the customer.

I am very excited to join AppDynamics! I’ve been building and implementing large, distributed web-scale systems for many years now, and I’m looking forward to my new role as evangelist in the cloud and DevOps spaces. With the ever-increasing complexity of architectures, and the focus on performance and availability to enhance the end-user experience, it’s crucial to have the right data to make insightful changes and investments. And with the synergies and velocity of the DevOps movement, it’s equally important to make educated changes.

Site Reliability Engineering: DevOps 2.0

Has there ever been a better time to be in DevOps? TV shows like “Person of Interest” and “Mr. Robot” are getting better at showing what developers actually do, even using chunks of working code. Michael Mann’s 2015 film “Blackhat” won praise from Google’s security team for the accuracy of a few of its scenes. Look around and you’ll discover elements of DevOps culture filtering out into wider society, such as people in all walks of life discussing their uptime or a fast-approaching code lock.

On the other hand, perhaps the biggest thorn in the side of DevOps is that developers and operations teams don’t normally get along well. Developers want to rush ahead and compile some groundbreaking code under extremely tight schedules, while operations teams try to slow everyone down to identify systemic risks from accidents or malicious actors. Both teams want to end up with a better user experience, but getting there becomes a power struggle over what that experience truly means.

The dream that brought DevOps together was finding someone who could be half dev and half ops. That hybrid role is exactly the point of the site reliability engineer (SRE).

Defining the SRE

In introducing the term SRE, Google’s VP of Engineering, Ben Treynor, stated,

“It’s what happens when you ask a software engineer to design an operations function…. The SRE is fundamentally doing work that has historically been done by an operations team, but using engineers with software expertise, and banking on the fact that these engineers are inherently both predisposed to, and have the ability to, substitute automation for human labor.”

Way back in 2010, Facebook SRE Mark Schonbach explained what he did this way:

“I’m part of a small team of Site Reliability Engineers (SRE) that works day and night to ensure that you and the other 400+ million users around the world are able to access Facebook, that the site loads quickly, and all of the features are working…. We regularly hack tools on the fly that help us manage and perform complex maintenance procedures on one of the largest, if not the largest memcached footprints in the world. We develop automated tools to provision new servers, reallocate existing ones, and detect and repair applications or servers that are misbehaving.”

Where Did SREs Come From?

Reliability engineering is a concept that grew out of the operations world and has been around for more than 100 years. It became more closely connected with electronic systems after World War II, when the IEEE created the Reliability Society. In the past 10 years, five 9s (99.999 percent availability) became the gold standard for application performance management. That standard led to the creation of a class of operations experts who knew enough code to recover the site and put the last stable release back into production as fast as possible.
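
To put five 9s in perspective, a quick back-of-the-envelope calculation (using a simple 365-day year) shows how little downtime each additional 9 actually allows:

    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    for label, availability in [("two 9s", 0.99), ("three 9s", 0.999),
                                ("four 9s", 0.9999), ("five 9s", 0.99999)]:
        downtime_min = (1 - availability) * MINUTES_PER_YEAR
        print(f"{label} ({availability:.3%}): about {downtime_min:,.1f} minutes of downtime per year")

At five 9s, that works out to roughly five minutes of allowable downtime per year, which leaves no room for a leisurely recovery.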

Treynor explained the impetus for creating this new category at Google with his typical deadpan humor: “One of the things you normally see in operations roles as opposed to engineering roles is that there’s a chasm not only with respect to duty, but also of background and of vocabulary, and eventually, of respect. This, to me, is a pathology.”

Which Toolsets Do SREs Use?

For SREs, stability and uptime are top priorities. However, they should be able to take responsibility and code their own way out of hazards, instead of adding to the to-do lists of the development team. In the case of Google, SREs are often software engineers with a layer of network training on top. Typically, Google software engineers must demonstrate proficiency in:

  1. Google’s own Go (Golang) and object-oriented languages such as C++, Python or Java

  2. A secondary language like JavaScript, CSS & HTML, PHP, Ruby, Scheme, Perl, etc.

  3. Advanced fields like AI research, cryptography, compilers, UX design, etc.

  4. Getting along with other coders

On top of those proficiencies, Google’s SREs must have experience in network engineering, Unix system administration, or more general networking/ops skills such as LDAP and DNS.

The Critical Role of SRE

Downtime costs businesses around $300,000 per hour, according to a report from Emerson Network Power. The most obvious impact is when traffic spikes bring down e-commerce sites, a scenario covered in a recent AppDynamics white paper. However, Treynor also pointed out how standard dev vs. ops friction can be costly to businesses in other ways. The classic conflict starts with the support checklist that ops presents to dev before feature updates are released. Developers win when users like newly developed features, the sooner the better. Meanwhile, operations wins when there are as many 9s as possible in its uptime reports. All change brings instability; how do you align their interests?

Treynor’s answer is a relief for those with compensation tied to user satisfaction metrics, but not so much for those with heart conditions. He said,

“100% is the wrong reliability target for basically everything. Perhaps a pacemaker is a good exception! But, in general, for any software service or system you can think of, 100% is not the right reliability target because no user can tell the difference between a system being 100% available and, let’s say, 99.999% available. Because typically there are so many other things that sit in between the user and the software service that you’re running that the marginal difference is lost in the noise of everything else that can go wrong.”

This response shifts the focus from specific uptime metrics, which may not act as accurate proxies for user expectations, to a reliability index based on market realities. Treynor explained,

“If 100% is the wrong reliability target for a system, what, then, is the right reliability target for the system? I propose that’s a product question. It’s not a technical question at all. It’s a question of what will the users be happy with, given how much they’re paying, whether it’s direct or indirect, and what their alternatives are.”
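
One practical way teams operationalize a below-100% target is an error budget: the slice of unreliability the chosen target permits, which dev and ops then spend, or protect, together. The sketch below illustrates the general idea only; the 99.9% monthly target and the sample downtime figure are assumptions, not any particular vendor’s implementation.

    TARGET_AVAILABILITY = 0.999        # the answer to Treynor's "product question"
    MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200

    # Total unreliability the target permits this month
    budget_min = (1 - TARGET_AVAILABILITY) * MINUTES_PER_MONTH

    downtime_so_far_min = 18           # hypothetical measured outage minutes
    remaining_min = budget_min - downtime_so_far_min

    print(f"Monthly error budget: {budget_min:.1f} minutes")
    print(f"Remaining budget:     {remaining_min:.1f} minutes")
    if remaining_min <= 0:
        print("Budget exhausted: slow down risky releases and focus on reliability work.")

When the budget runs out, the release-faster versus stay-stable argument largely resolves itself: the numbers, rather than the loudest voice in the room, make the call.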

Who Is Hiring SREs?

The simple answer is “Everyone,” from software/hardware giants like Apple to financial portals like Morningstar to non-profit institutions like the Lawrence Berkeley National Laboratory. Berkeley Lab is a great example of an organization that is at the cutting edge of energy research yet also maintains some very old legacy systems. Assuring reliability across several generations of technologies can be an enormous challenge. Here’s a look at what SREs at Berkeley Lab are responsible for:

  • Use Linux system administration skills to monitor and manage the reliability of the systems under the responsibility of the Control Room Bridge.

  • Develop and maintain monitoring tools used to support the HPC community within NERSC using programming languages like C, C++, Python, Java or Perl.

  • Provide input in the design of software, workflows and processes that improve the monitoring capability of the group to ensure the high availability of the HPC services provided by NERSC and ESnet.

  • Support the testing and implementation of new monitoring tools, workflows and new capabilities for providing high availability for the systems in production.

  • Assist in direct hardware support of data clusters by managing component upgrades and replacements (DIMMs, hard drives, cards, cables, etc.) to ensure the efficient return of nodes to production service.

  • Help investigate and evaluate new technologies and solutions to push the group’s capabilities forward, getting ahead of users’ needs and convincing staff who are incentivized to transform, innovate and continually improve.

Contrast that skill profile with an online company like Wikipedia, where an SRE assignment tends to be less technical and more diplomatic:

  • Improve automation, tooling and processes to support development and deployment

  • Form deep partnerships with engineering teams to work on improving the user site experience

  • Participate in sprint planning meetings, and support intra-department coordination

  • Troubleshoot site outages and performance issues, including on-call response

  • Help with the provisioning of systems and services, including configuration management

  • Support capacity planning, profiling of site performance, and other analysis

  • Help with general ops issues, including tickets and other ongoing maintenance tasks

Within the past year, there has been a marked shift to a more strategic level of decision-making, reflecting the increasing automation of customer requests and failover procedures. Even at traditional companies like IBM, SREs work with some of the newest platforms available due to the advance of IoT agendas. For example, one opening for an SRE at IBM in Ireland requires experience with OpenStack Heat, UrbanCode Deploy, Chef, Jenkins, ELK, Splunk, collectd and Graphite.

How SREs Are Changing

The online world is quite different now than when SREs entered the scene nearly a decade ago. Since then, mobile has redefined development cycles, and easy access to cloud-based data centers has brought microservices into mainstream IT infrastructure. Startups regularly come out of the gate using REST and JSON as the preferred interface for their mobile apps. In keeping with the principles of the Lean Startup, DevOps teams are often smaller, more focused groups that function as collective SREs.

You’ll find there’s a great deal more collaboration and less conflict between development and operations, simply because the continuous delivery model has collapsed the responsibilities of development and operations into a single cycle. The term DevOps is likely to disappear as the two distinct divisions merge in a new world where UX is everything and updates may be pushed out weekly. Regardless of how many 9s are in any given SRE’s job description, this career path appears to offer maximum reliability when it comes to job security.