Fewer Bugs, Faster Releases: How APM Improves the Software Development Life Cycle

Breaking down a series of interdependent movements into discrete actions is often the first step to improving the performance of everything from professional orchestras to sports teams. And so it is with software.

The Software Development Life Cycle (SDLC) evolved out of an effort in the late 1960s to put formal quality processes in place in a relatively immature industry. Although the implementation of SDLC varies from organization to organization, a typical cycle includes six stages: plan, design, develop, test, release, and monitor. Decades after it was introduced, SDLC still delivers results, though the recent interest in DevOps has created some confusion about its role. Regardless of terminology, the principles remain the same. Yet solely following these principles cannot address the enormous challenge of developing applications that run on multiple platforms and in multiple geographies securely and at scale. To increase SDLC’s effectiveness for today’s enterprise development projects, organizations should consider supporting it with application performance management (APM).

APM helps developers determine whether changes are helping or hindering performance. Unified views of the application, infrastructure, and user experience eliminate guesswork while machine-learning algorithms automatically set baselines and correlate transactions within and across software tiers, stacks, and platforms, exposing dependencies and capturing errors and exceptions. Whether you are deploying an update to data centers around the world or rolling out a new containerized service in the cloud, APM makes developers’ jobs easier. Below I’ll explain how APM can lead to better results in each of the common SDLC stages.

Plan

The authors of “Solid Code: Optimizing the Software Development Life Cycle” may have put it best when they advised: “Think first, code later.” In the planning stage, product managers gather requirements and collect input from customers/end-users, salespeople, and other stakeholders. Who is using or will use the software? How will they use it? What is working and what needs to be fixed? Equally important, what resources will be needed for implementation and what is the estimated project cost? To answer these questions, developers need an accurate understanding of their current system architecture in order to surface additional requirements or identify necessary refactoring. This is where a unified monitoring solution comes in handy: it automatically detects application servers, databases, and infrastructure, along with the relationships between them, as established by business transactions and visualized through flow maps.

Design

Insights gained during planning support better design decisions. During design, a developer may be determining a new architecture, choosing what frameworks or libraries to use, or making algorithmic or data management decisions. APM helps by clearly showing which systems and frameworks are in place today and how they’re being used, making it easier to decide whether existing code can be repurposed or something new needs to be created. Developers and product managers can also take advantage of the insights provided by end-user monitoring capabilities to gain an understanding of application usage patterns, user locations, browser capabilities, and any existing performance bottlenecks or failures that should be addressed in new designs.

Develop

Coders code, and for many developers this is the most rewarding life cycle stage. During the era when the waterfall approach to development prevailed, developers wrote and debugged their code and then handed it over for integration and broad testing. However, as agile methods have gained popularity, developers have gotten used to receiving feedback early and often.

By deploying APM in development environments, developers can identify upstream and downstream dependencies based on actual code execution. They can also directly observe the impact their new code has on the larger application and address scalability concerns. As a feature takes shape, developers use APM to investigate potential issues. Will the new code contribute to network latency? How will it affect memory consumption? How does the new design perform under load? What are the proper host environment specifications? How much CPU and memory will most efficiently serve the application?
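
For instance, before full APM data is available, a developer can get a first read on latency and memory impact by comparing the old and new code paths locally. The following is a minimal sketch using only the Python standard library; legacy_lookup and new_lookup are hypothetical stand-ins for the implementations being compared, not part of any APM product.

```python
# A minimal local harness (not an APM product API): compare a hypothetical
# "legacy" and "new" code path for latency and memory before the change
# reaches a fully instrumented environment.
import statistics
import time
import tracemalloc

def legacy_lookup(key: int) -> int:
    return sum(i for i in range(key))   # hypothetical old implementation

def new_lookup(key: int) -> int:
    return key * (key - 1) // 2         # hypothetical new implementation

def profile(fn, runs: int = 1000, key: int = 10_000):
    """Return (median latency in ms, peak traced memory in KiB) over repeated runs."""
    latencies = []
    tracemalloc.start()
    for _ in range(runs):
        start = time.perf_counter()
        fn(key)
        latencies.append((time.perf_counter() - start) * 1000)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return statistics.median(latencies), peak / 1024

for name, fn in [("legacy", legacy_lookup), ("new", new_lookup)]:
    latency_ms, peak_kib = profile(fn)
    print(f"{name}: median {latency_ms:.3f} ms, peak {peak_kib:.1f} KiB")
```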

It’s not uncommon for developers with access to APM to identify and pre-empt application issues before their code reaches Testing & QA. In addition, by paying rigorous attention to scale factors, developers will minimize bugs and code changes after an application is deployed in production.

Test

The ongoing shift to Continuous Release has increased the use of automated testing in the build pipeline—often at the cost of visibility. The result is that code containing damaging performance regressions can pass functional testing and be pushed into production. This potentially costly scenario is avoided when the test environment is monitored by APM, allowing performance metrics to be easily compared against established thresholds or prior releases. Also, as in the development environment, APM speeds up root-cause analysis of performance problems.
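
As an illustration, a performance gate in the pipeline can be as simple as comparing the current build’s latency percentiles against a stored baseline and failing the build when a regression budget is exceeded. The sketch below assumes hypothetical baseline.json and current.json exports and an arbitrary 10 percent budget; a real gate would pull these numbers from the APM system monitoring the test environment.

```python
# A minimal sketch of a pipeline performance gate. The file names and the
# 10 percent regression budget are assumptions for illustration only.
import json
import sys

REGRESSION_BUDGET = 1.10  # fail the build if p95 grows more than 10% over baseline

def p95(samples):
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

def main():
    with open("baseline.json") as f:   # assumed export from the prior release's test run
        baseline = json.load(f)["response_times_ms"]
    with open("current.json") as f:    # assumed export from this build's test run
        current = json.load(f)["response_times_ms"]

    base_p95, curr_p95 = p95(baseline), p95(current)
    print(f"baseline p95: {base_p95:.1f} ms, current p95: {curr_p95:.1f} ms")

    if curr_p95 > base_p95 * REGRESSION_BUDGET:
        print("Performance regression exceeds the budget; failing the build.")
        sys.exit(1)
    print("Within budget; the build can proceed.")

if __name__ == "__main__":
    main()
```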

Release

One of the biggest benefits APM offers to release engineers is the peace of mind of knowing that an application has been stress-tested and all existing issues have been identified in pre-production environments. Visualizations like flow maps that show what calls an application is making and heat maps that reveal performance anomalies and outliers in a microservices architecture provide additional confidence as you compare against the previous release in canary or blue-green deployments.
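
To make that comparison concrete, a canary decision often reduces to a small set of guardrails: promote only if the canary’s error rate and latency stay within agreed budgets relative to the baseline release. The sketch below uses invented numbers and thresholds purely for illustration.

```python
# A minimal sketch of a canary guardrail with made-up numbers: promote only if
# the canary's error rate and p95 latency stay within agreed budgets relative
# to the currently released version.
from dataclasses import dataclass

@dataclass
class ReleaseMetrics:
    error_rate: float      # fraction of failed requests
    p95_latency_ms: float  # 95th percentile response time

def should_promote(baseline: ReleaseMetrics, canary: ReleaseMetrics,
                   max_error_delta: float = 0.005,
                   max_latency_ratio: float = 1.10) -> bool:
    """Promote only when the canary stays within both the error and latency budgets."""
    return (canary.error_rate <= baseline.error_rate + max_error_delta
            and canary.p95_latency_ms <= baseline.p95_latency_ms * max_latency_ratio)

baseline = ReleaseMetrics(error_rate=0.002, p95_latency_ms=180.0)  # illustrative values
canary = ReleaseMetrics(error_rate=0.003, p95_latency_ms=175.0)    # illustrative values
print("promote" if should_promote(baseline, canary) else "roll back")
```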

Monitor

APM was first adopted for use in production because of the critical role it plays in reducing the mean time to resolution of performance issues. It continues to do that, but as the agility of developer teams grows, APM is also increasingly leveraged to kick off the next development cycle, providing insights into end-user behavior and the impact of application performance on business objectives. In this way, APM grounds developers in the realities of the most recent changes in the application, allowing them to make better design decisions for the next release.

While APM contributes value at every stage of the Software Development Life Cycle, adopting APM across development, testing, and production environments brings an additional reward. Giving developers and IT operations equal ability to drill down into code-level diagnostics provides the common language that is needed for a true DevOps culture to take root. DevOps is all about communication, and effective communication requires a shared perspective. APM delivers a single source of truth, eliminating finger-pointing and academic disputes over what happened, when, and why. Developers and IT operations can focus on the shared goal: the rapid delivery of high-performing, low-maintenance code.

Take a tour to see how AppDynamics APM can help your own organization throughout your SDLC!

A version of this article originally appeared in the SDTimes.

The Top 5 Trends That Changed Software Development in 2016

Software development is a moving target. You have to keep your eye on trends that are still taking shape just to stay current. Consider what happened with augmented reality (AR) this year alone. If you said you were working on an AR app in 2015, you might have gotten a lot of blank stares or jokes about Google Glass. Then Pokémon GO happened. Like AR, the trends listed below have been building steam for some time, but took off in a whole new direction in this past year. Here’s a review of the top trends that changed software development this year.

1. Linking Application Performance and Business Performance

Application performance management (APM) has grown incredibly sophisticated over the past decade. By 2010, Gartner had defined five dimensions of end-to-end performance for best-in-class software:

  • Monitoring how end users experience the application and surfacing points of dissatisfaction
  • Defining the scope of problems in execution, runtime architecture, and communications
  • Mapping user-defined transactions across components (a.k.a. business transaction management)
  • Drilling deeper into the components identified as sources of problems
  • Applying behavioral learning analytics to spot breakdown patterns and forecast issues

Management has been so impressed with the results of reducing Mean Time to Resolution (MTTR) that they want to apply these lessons to reduce the overall Mean Time to Business Awareness (MTBA). Today, highly advanced tools like AppDynamics are doing more than organizing the priorities of development teams. Real-time insights into the customer experience can automatically correlate specific performance data with business goals.
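
As a toy example of what correlating performance data with business goals can look like, the sketch below computes the correlation between hourly response times and conversion rates. The samples are invented, and a real analysis would draw both series from the monitoring and analytics platform; statistics.correlation requires Python 3.10 or later.

```python
# A toy illustration of tying a performance metric to a business metric.
# The hourly samples are invented; each pair is (average response time in ms,
# conversion rate for that hour). Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

hourly_samples = [
    (220, 0.041), (250, 0.039), (310, 0.035), (400, 0.028),
    (180, 0.044), (530, 0.021), (290, 0.036), (350, 0.031),
]

response_times = [rt for rt, _ in hourly_samples]
conversion_rates = [cr for _, cr in hourly_samples]

r = correlation(response_times, conversion_rates)
print(f"Pearson correlation between response time and conversion rate: {r:.2f}")
```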

At AppSphere 2016, David Wadhwani, President and CEO at AppDynamics, explained it best:

“I can’t say this enough: there are very few times in a person’s career where you’re sitting on a precipice of a change like this. Take advantage of it. Accelerate your careers, redefine your goals. Don’t think of yourselves as IT professionals, think of yourselves as business owners who happen to run the technology as well.”

Ultimately, applying APM-style expectations to MTBA measurements is helping CIOs understand how technical choices and priorities affect the business. More than ever, that’s what the CIO is expected to explain to senior managers. At the intersection of technology and finance, the role of the CIO has become the locus for the most critical data analytics.

2. Application Teams as Human Microservices

The microservices model applies to more than just software. Software tends to mirror the organizational structure of the team that designs it (a pattern often described as Conway’s Law), just as the switch from waterfall to agile required a restructuring of development teams.

Teams of application developers have always divided up work such as subroutines or specific software integrations. What’s different in 2016 is that software engineering teams are acting more like independent business units. The microservices model has taken hold at companies like Google and Amazon, where individual and autonomous “application teams” are organized around specific business objectives. At Google, these application teams include a crucial new role: Site Reliability Engineers (SREs), who combine development and operations skills. As Google’s Ben Treynor defined it, “The SRE is fundamentally doing work that has historically been done by an operations team, but using engineers with software expertise, and banking on the fact that these engineers are inherently both predisposed to, and have the ability to, substitute automation for human labor.”


Figure 1: Decoupled applications with autonomous application teams centered around individual business capabilities

In the year ahead, expect this to spread to more organizations inside and outside of the software industry. You will see more work teams that include their own developers, performance engineers, business analysts, and product managers, along with their own deployment models. Like miniature companies within a company, they will operate as autonomous groups responsible for innovation, execution, deployment, application performance monitoring, and business performance monitoring.

In early experiments with this sort of microservices team structure, here are some of the challenges that commonly arise:

  • Displaced business priorities: When the microservice goal becomes the team’s primary responsibility, the team may drift off course from the overall company strategy, which strengthens the argument that more insight into business performance is necessary.
  • Microservices that don’t communicate: APIs connecting the functions of microservices can fall through the cracks as teams argue over who is responsible for making sure they work together (a lightweight contract check like the sketch after this list can help). Attaching and detaching microservices from the main functionality of the application is never as easy in practice as it is in theory.
  • Struggles with team cohesion: Many developers honed their skills working in isolation and may have difficulty aligning their work habits with a tighter team structure.
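
One way teams address the question of who owns the seams between services is a consumer-driven contract check that runs in the consuming team’s pipeline. The sketch below is illustrative only, assuming a hypothetical inventory endpoint and field names; it simply verifies that the fields one team depends on are still present in another team’s API response.

```python
# A minimal consumer-side contract check. The endpoint and field names are
# hypothetical; the idea is that the consuming team encodes the fields it
# depends on and fails fast when the providing team's API stops supplying them.
import json
import urllib.request

INVENTORY_URL = "http://localhost:8080/api/v1/items/42"   # hypothetical provider endpoint
REQUIRED_FIELDS = {"sku", "quantity_available", "warehouse_id"}

def check_contract(url: str) -> None:
    with urllib.request.urlopen(url, timeout=5) as resp:
        payload = json.load(resp)
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise AssertionError(
            f"Provider response is missing fields this service depends on: {sorted(missing)}")

if __name__ == "__main__":
    check_contract(INVENTORY_URL)
    print("Contract satisfied.")
```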

In the end, this model can only work in the presence of leaders who reinforce communication, collaboration, and shared success measurements among application teams.

3. Microservices, Containers, and DevOps

One of the most massive shifts in the world of software development hit at the same time as the dot-com bubble. It was the shift from monolithic apps residing on bare metal to distributed applications populating virtual machines. This was partially due to the improved reliability of networked infrastructures. However, it was also a reaction to waterfall development methodologies built on the aging manufacturing model of ideation to coding to testing to production, and then shifting into maintenance mode. This was the period that introduced agile methodologies that made much more sense in the bootstrapping world of software startups.

The point is that we are now headed into another shift that will be at least as pervasive. It has grown out of agile principles like individuals and interactions over processes, minimum viable products, and responding to change over following a plan. The emerging app-driven world will be defined by the rules of DevOps, where feature development and application performance monitoring have to happen simultaneously. Enterprise software is now a whirling mass of microservices, APIs, and containers in constant communication with each other through the hybrid cloud.

Agile was a powerful framework for development teams, but on its own it couldn’t keep up with the demands of near-perfect uptime and spiraling customer experience expectations. At the same time, it’s clear that developers and testers can provide critical input into solving operational issues. Everyone suffers when there is internal friction between functionality and security.

When you combine this trend with the bridge between application performance and business performance described above, the image that emerges is a new and comprehensive BizDevOps, one that folds business strategy and analysis into the DevOps formula.

4. Scale as a Service

Popularity can be a problem, as too many startups have discovered. Brooks’ Law, established four decades ago but still disputed, states unequivocally that “Adding manpower to a late software project makes it later.” Updating Brooks’ Law for the age of enterprise application development means adding warnings like “Rails doesn’t scale” and “Green dashboards make users see red.”

Going into 2017, watch the boom in vendors supporting services like Elasticsearch to help applications scale without blowing up. To get ready to scale, most companies are already running a mixture of six clouds, both public and private: three for running their applications and another three for developing their next level of services and features.

There are many sides to scaling issues, like bigger nodes vs. more nodes, so scaling up has to be done as a company-wide collaboration. Channel vendors are better positioned to see the bigger picture of inter-related adjustments to security, stability, performance, and cost.

In a turbulent market, which won’t be calming down in the foreseeable future, the ability to scale rapidly is the most essential survival skill.

5. Remote Work and Crowdsourcing

In the past, remote work was merely a geographical extension of work. Managers oversaw projects and directed teams of developers; instead of being in another wing of the building, the team was in another time zone. The biggest structural change was in the communication channel, which shifted from in-person interaction to collaboration technology. In many cases, the APM and Business iQ platform served as the collaboration engine, with voice/video/chat software like Skype or Slack layered on top.

What’s happening in 2016 is that crowdsourcing is further abstracting the work from the worker to take advantage of the model’s essential efficiencies. The manager still sets expectations and manages routines, but now the coder’s primary transaction is with automation. They submit code and move on to the next assignment. Managers may not even know the people (or bots) who submitted the code.

A good example is Elastic.co, the 100 percent remote-driven group that created Elasticsearch. The open-source ELK stack (Elasticsearch, Logstash, and Kibana) has built up enough contributors to challenge Splunk for the log analysis market. Flexjobs lists 125 virtual companies running on globally distributed teams so far this year, up from 76 a year ago, and only 26 in 2014.

Moving Ahead of the Trends

There are several ways AppDynamics can help businesses take advantage of the areas where these trends converge and take on a leadership position. Microservices iQ is a good way to efficiently track microservices deployed in elastic infrastructures, such as containers or clouds where nodes scale up and down rapidly. Use Business iQ to transform your application performance monitoring into business results. Advance your digital transformation, discover real-time business awareness, and improve customer experiences with deep application analytics.

Learn more about Business iQ: Business iQ Correlates Business Metrics with Application Performance