11 Best Practices for Continuous Performance Testing

Capturing reliable telemetry on your apps is less about which toolchains you use and more about using them effectively to automate the processes and techniques we discussed in my prior blogs (part 1, part 2, part 3) in this series. Don’t get me wrong, having the right tools is paramount, but the real value comes from maximizing integration and automation and using everything those tools have to offer. When incorporating and integrating the APM tools on the market, keep the following best practices in mind.

1. Leverage hierarchy such as applications and tiers to logically group and represent the business by process or functional group. For example, use application definitions to separate lines of business, or functional groups within an LOB. Also consider aggregating your SOA and backend landscapes together as common shared services. Come up with a naming standard and stick to it; a consistent standard for naming and/or tagging will go a long way toward reducing complexity over the long term.

2. Leverage tiers to group together common processes. Tiers should define the boundaries of process groups in which every process is doing the same job. Even if processes run separate versions (such as in canary deployments) or in different geographic locations, they should share a common identity when they serve the same purpose. If you are running multiple applications or app components in a single Java process, consider breaking them out into their own processes; this will make the topology and its dependencies much easier to understand.

3. Automate the configuration management of your monitoring. Monitoring configuration should follow the same config-as-code mantra as everything else: keep a tight grip on the definitions that produce your metrics, change them in lower environments first, and promote those changes along with your deploys.
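
As a minimal sketch of this idea, the snippet below keeps monitoring configuration in version control and pushes it to the target environment during a deploy. The endpoint, token, and payload schema are hypothetical stand-ins, not any specific vendor’s API.

```python
import json
import os

import requests  # third-party HTTP client

# Hypothetical APM configuration endpoint and token; substitute your vendor's real API.
APM_API = os.environ.get("APM_API", "https://apm.example.com/api/config")
APM_TOKEN = os.environ["APM_TOKEN"]


def promote_monitoring_config(path: str, environment: str) -> None:
    """Push a version-controlled monitoring config (health rules, transaction
    definitions, etc.) to the given environment as part of a deploy."""
    with open(path) as f:
        config = json.load(f)
    resp = requests.put(
        f"{APM_API}/{environment}",
        json=config,
        headers={"Authorization": f"Bearer {APM_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # Run from the same pipeline step that deploys the application.
    promote_monitoring_config("monitoring/health-rules.json", environment="staging")
```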

4. Use the built-in capabilities of your APM tooling to create robust gates during integration and performance tests. Use APIs to integrate with orchestration tools such as Jenkins or TeamCity to fail builds when baselines and trends are violated. Embed links to APM violations in the console logs within these orchestration tools so they surface to developers quickly.
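
To make the gate concrete, here is a hedged sketch of a script that a Jenkins or TeamCity job could run after a test stage. The violations endpoint and response fields are assumptions standing in for whatever your APM tool actually exposes; the essential pattern is that a non-zero exit code fails the build and the violation links land in the console log.

```python
import os
import sys

import requests

APM_API = os.environ.get("APM_API", "https://apm.example.com/api")  # hypothetical endpoint
APP = os.environ.get("APM_APPLICATION", "checkout-service")         # illustrative app name


def fetch_violations(window_minutes: int = 30) -> list:
    """Return health-rule violations recorded during the test window (assumed schema)."""
    resp = requests.get(
        f"{APM_API}/applications/{APP}/violations",
        params={"duration-mins": window_minutes},
        headers={"Authorization": f"Bearer {os.environ['APM_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    violations = fetch_violations()
    for v in violations:
        # Print deep links into the console log so developers can jump straight to the issue.
        print(f"VIOLATION: {v.get('name')} -> {v.get('deepLink')}")
    if violations:
        sys.exit(1)  # a non-zero exit fails the Jenkins/TeamCity build step
```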

5. Leverage header injection within test harnesses to decorate transactions with test metadata that can be captured by the APM tools. This lets you tie failed transactions back to specific points within the scripts driving load and gives the developer or test engineer valuable context when diagnosing an issue.
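
As an illustration, a load script can stamp every request with test metadata for the APM tool to capture. The sketch below uses Locust (which also appears later in this series); the header names are simply a convention you would agree on with whoever configures data collection on the APM side.

```python
import uuid

from locust import HttpUser, between, task

TEST_RUN_ID = str(uuid.uuid4())  # one id per load-test run


class CheckoutUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def add_to_cart(self):
        # Decorate the transaction with test metadata; the header names are a team
        # convention, and the APM agent is assumed to be configured to capture them.
        self.client.post(
            "/cart/items",
            json={"sku": "ABC-123", "qty": 1},
            headers={
                "X-Test-Run-Id": TEST_RUN_ID,
                "X-Test-Case": "add_to_cart_peak_load",
                "X-Test-Step": "step-03",
            },
        )
```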

6. Surface throughput and response time metrics through real-time dashboards. These should be considered primary gating metrics for every endpoint on an API and for every component in your app. Everyone should be focused on lowering latency and increasing throughput, and if an endpoint’s response time is not predictable, it should be rewritten.

7. Integrate your pipeline orchestration tools so they signal your APM software when something significant happens. Signal the start and completion of jobs that deploy software, execute tests, and so on. Doing this lets you overlay environment-changing events on your performance data, so you can see the impact of each change in real time.
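
A minimal sketch of such a signal, assuming a hypothetical APM events endpoint: the pipeline calls this script at the start and end of a deploy or test job so the change shows up as a marker next to the performance data.

```python
import os
import sys

import requests

APM_API = os.environ.get("APM_API", "https://apm.example.com/api")  # hypothetical endpoint


def post_event(summary: str, event_type: str = "DEPLOYMENT") -> None:
    """Record a pipeline event (deploy started/finished, test run started, etc.)."""
    requests.post(
        f"{APM_API}/events",
        json={
            "type": event_type,
            "summary": summary,
            "properties": {
                "job": os.environ.get("JOB_NAME", "unknown"),
                "build": os.environ.get("BUILD_NUMBER", "unknown"),
            },
        },
        headers={"Authorization": f"Bearer {os.environ['APM_TOKEN']}"},
        timeout=10,
    ).raise_for_status()


if __name__ == "__main__":
    # Example: python post_event.py "deploy of checkout-service started"
    post_event(" ".join(sys.argv[1:]) or "pipeline event")
```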

8. Performance metrics used as gates, such as response time and transactions per second, should be incorporated alongside other quality-gating metrics, such as those tracked in tools like Sonar.

9. Automate performance tests so components are routinely taken to their breaking point. Create an environment with enough flexibility and automation to run performance test cases in parallel with ease.
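
One way to automate a break-point style test is a stepped load shape that keeps adding users until latency or error rates fall over. The Locust sketch below is illustrative; it assumes a user class (like the one shown earlier) lives in the same locustfile, and the step sizes and durations are arbitrary.

```python
from locust import LoadTestShape


class SteppedBreakPointShape(LoadTestShape):
    """Ramp load in fixed steps until the run is stopped, to find the breaking point."""

    step_users = 50      # users added per step (illustrative values)
    step_duration = 120  # seconds per step
    max_steps = 20

    def tick(self):
        step = int(self.get_run_time() // self.step_duration) + 1
        if step > self.max_steps:
            return None  # returning None ends the test
        return (step * self.step_users, self.step_users)  # (total users, spawn rate)
```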

10. Leverage mocks in component tests and real dependencies in system-wide tests. Mocking allows you to test components in isolation and should let you play out any number of scenarios under duress.
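
As a sketch of the component-test side, the example below swaps a real downstream dependency for a stub that injects latency and intermittent failures, so the component can be exercised under duress in isolation. The order-placement function and inventory stub are invented for illustration.

```python
import random
import time


def place_order(sku: str, qty: int, inventory) -> str:
    """Toy stand-in for the component under test; in practice this is your own code."""
    try:
        available = inventory.get_stock(sku)
    except TimeoutError:
        return "backordered"  # the component's degraded-mode behavior
    return "accepted" if available >= qty else "backordered"


class SlowFlakyInventoryStub:
    """Mocks the real inventory dependency, simulating a service under duress."""

    def get_stock(self, sku: str) -> int:
        time.sleep(random.uniform(0.05, 0.3))  # injected latency
        if random.random() < 0.2:              # simulated intermittent timeouts
            raise TimeoutError("inventory service timed out")
        return 42


def test_place_order_survives_flaky_inventory():
    # Exercise the component in isolation against the misbehaving stub.
    for _ in range(50):
        assert place_order("ABC-123", 1, SlowFlakyInventoryStub()) in {"accepted", "backordered"}
```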

11. Test continuously, and when something breaks, fix it before anything else moves forward.

In this blog series, we have examined the principles of performance engineering, component testing, and how to achieve robust lean-agile states. We talked about app decomposition, the importance of defining a strategy for measuring applications and components, and the four types of performance test cases I personally use as the basis for understanding application performance. I hope you found these posts as useful to read as they were enjoyable to write!

How Continuous Integration Works, and The Big Benefit No One Talks About

In a digital world that moves as fast as ours, programmers are applying new, creative ways of thinking to the software development process in a non-stop push for ever-faster turnaround times. In DevOps, Continuous Integration (CI) is increasingly the integration method of choice, in large part because of the speed at which it enables the release of new features, bug fixes, and product updates.

CI dictates that every time a developer pushes code to an application, an automated process grabs the current code from a shared repository, integrates the new build, and tests the software for problems. This approach leads to faster results and ensures software is tested on a regular basis, which enables further DevOps automation processes such as delivery, deployment, and experimentation.

How CI Works

With CI, the software code is contained in a shared repository, accessible by developers so they can “check out” code to individual workstations. When ready, they push the code back into the repository to commit changes. From there, the automated CI server takes over, automatically building the system and running unit and integration tests to ensure the code did not break another part of the software. If the build is unsuccessful, the server pinpoints where in the testing process the code failed, letting the team address it at the earliest point of failure.

This CI process occurs many times per day, meaning the system constantly builds and tests new code. The updated software can then be released manually, or DevOps teams can further automate the project by electing to have the system deploy and deliver the product.

Since developers don’t have to backtrack to find where code breaks, DevOps teams save big in terms of time and resources. And because the process is continuous, programmers never work on out-of-date products, or try to hurriedly push changes to keep up with demand or internal deadlines.

CI allows developers to automate long-running processes and use parallel containers or virtual machines to run tests and builds. Because these processes are automated, programmers can work on other tasks while everything runs in the background. And since the code is only merged once the build passes testing, the chances of breaking master code are greatly reduced.

The (not so) Secret Benefit of CI

Sure, CI saves time and reduces costs, but so does every other noteworthy innovation in technology or business processes these days. There’s another major reason CI is so successful that isn’t talked about as much because it’s more difficult to quantify than productivity and cost: team morale.

If you talk to any development team, they’ll tell you that almost nothing in the world is as frustrating as building a process, integrating it with the code, and then watching the software break. Not only have hours of work been wasted, but team members know that more hours lie in front of them trying to comb back through the process to pinpoint where it failed. As any business leader knows, an unhappy team results in an inferior and/or more costly product. As frustration mounts, work slows down. Then as a deadline approaches, changes are frantically pushed through, increasing the probability of a flaw in the master branch or a bug being deployed with the product.

The transparency of CI can be a big boost to the confidence level within DevOps. Suddenly, as developers work, they can see exactly where problems arise, which allows for a much faster response and repair. And if the build passes, team members can feel good about a job well done.

Takeaways for CIOs

The continuous integration approach to DevOps increases transparency, builds automation into processes, decreases costs by maximizing developers’ time, and creates repeatable processes upon proven successes. On top of all that, it relieves pressure from programmers and helps teams gain confidence.

Though there are variations among details of different platforms and approaches, the key tenets of CI hold true among development teams:

  • Maintain a single source repository with easy access for developers
  • Automate the build and testing processes
  • Make sure every build occurs on an integration container or VM
  • Utilize a production-like environment for testing
  • Make the testing results and processes transparent and visible to teams

Conclusion

If you want more speed and more smiles out of your development team, consider applying a continuous integration approach to your DevOps processes. Make sure to consult both with your team and a CI service provider to determine what makes sense for your organization and ensures a smooth implementation. Then sit back and watch the code fly.


The Role of APM in Continuous Integration and Continuous Release

Today’s software-defined and software-driven business requires fast changes to business models, and that need permeates successful companies. Almost every company is learning how to make small, rapid changes and adjustments within its business, especially within software systems. As a result, IT is under immense pressure to evolve, and we are seeing a major uptick in private clouds (especially Apprenda, Pivotal Cloud Foundry, and RedHat OpenShift) and public clouds (especially Amazon and Microsoft) within our install base of large enterprises. These new platforms enable faster development, testing, and releasing of software. Enterprise customers are getting much smarter, using automation to drive the software lifecycle, including continuous integration, and even experimenting with and beginning to use continuous release processes. This is no longer a startup scenario; it is a strategy for our large enterprise customers. These companies are trying to build agile development and operations teams by changing people, process, and technology to enable a DevOps feedback loop.


Wikipedia describes the continuous delivery loop as follows:


Within this loop, APM can be part of build, unit, regression, and load tests, driving automated acceptance testing before new code is pushed to production. In fact, at the recent AppSphere 2015 user conference we had several great talks on this subject.

The first talk was given by our customer The Container Store. In this presentation, August Azzarello digs into “How The Container Store uses AppDynamics in the Development Lifecycle.” In their environment, they began with APM in production. The goal was to improve software quality before production, so they expanded by installing APM within test and integrating it into their functional and performance test suites. They also enabled alerting from dev/test so that developers and testers knew about performance deviations and issues before reaching production. Key features they leveraged in pre-production included dashboards such as this one:


Comparative analysis views such as this one:

And the ability to understand, between releases, whether performance was improving or degrading:

The Container Store uses open source tools including Selenium for functional testing and Locust.io for performance testing. In the video, August also explains some of the major benefits they saw from implementing APM in pre-production. Here are some best practices outlined by August in his discussion:

  • Monitor everything

  • Test continuously

  • Performance test early in development life cycle

  • Empower development & QA team members

Some benefits The Container Store has seen, according to August, are:

  • Set performance expectations before production deployments – ~40% improvement since we started using AppDynamics in test

  • Fine tune alert and Business Transactions policies prior to production deployments

  • Identify testing requirements, and testing gaps

  • Decrease performance test result reporting from 5 hours to 20 minutes

One of my favorite talks (which is why I selected it for the deep-dive track at AppSphere) was given by one of our Senior Sales Engineers, Steve Sturtevant, who came to AppDynamics from PayPal two years ago. In his talk, “DevOps and Agile: AppDynamics in Continuous Integration Environments,” Steve explains the integration work he did, but more importantly he asks: in dynamic environments, can you scale and automatically implement monitoring? Can you quantify the impact of change? Steve gives a good demo of integration between Puppet, Jenkins, ServiceNow, and AppDynamics, with automation throughout the lifecycle.

One of our partners, Electric Cloud, specializes in continuous delivery and embraces an open ecosystem of technology providers and open source software. They’ve put together some great resources for people looking to explore the rest of the feedback loop.