Automation Framework in Analytics – Part 1

September 27 2017
 

This blog series highlights how we use our own products to test our events service, which currently ingests more than three trillion events per month.

With fast iterations and deliverables, testing has always been a continuously evolving machine, and a reason why AppDynamics is aligning toward microservices-based architectures. While there are multiple ways to prudently handle the problem of testing, we'd like to share some of the lessons and key requirements that have shaped our elastic-testing framework, powered by Docker and AWS.

Applying this framework helped us deliver stellar results:

  • Ability to bring up complex test environments on the fly, based on testing needs.
  • An 80% increase in the speed of running tests, which helps us find bugs earlier in the release cycle.
  • The flexibility to simulate environment instabilities that can occur in any production (or production-like) environment.
  • Support for our plans to move towards continuous integration (CI).
  • Predictable testing time.
  • A robust environment that allows us to run pre-checkin as well as nightly build tests.
  • The ease of running tests more frequently for small changes instead of a full cycle.

Below, we will share some of the challenges we faced while end-to-end testing the AppDynamics Events Service, the data store for on-premises Application Analytics, End User Monitoring (EUM), and Database Monitoring deployments. We'll describe our approach to solving these challenges, discuss best practices for integrating with a continuous development cycle, and share ways to reduce the cost of testing infrastructure.

By sharing our experience, we hope to provide a case study that will help you and your team avoid similar challenges.

What is Application Analytics?

Application Analytics refers to the real-time analysis and visualization of automatically collected and correlated data. In our case, analytics reveals insights into IT operations, customer experience, and business outcomes. With this next-generation IT operations analytics platform, IT and business users are empowered to quickly answer more meaningful questions than ever before, all in real time. Analytics is backed by a very powerful events service that stores ingested events so the data can be queried back. This service is highly scalable, handling more than three trillion events per month.
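To make the publish-and-query idea concrete, here is a minimal sketch in Python using the requests library. The host, endpoint paths, headers, and schema name are illustrative assumptions, not the exact AppDynamics API.

    # Minimal sketch of publishing an event and querying it back.
    # The host, endpoint paths, and headers are illustrative assumptions.
    import requests

    EVENTS_SERVICE = "http://localhost:9080"   # assumed local events-service endpoint
    HEADERS = {
        "X-Events-API-AccountName": "<account>",   # assumed auth headers
        "X-Events-API-Key": "<api-key>",
    }

    def publish_event(schema, event):
        """Publish a single analytics event to an assumed publish endpoint."""
        resp = requests.post(f"{EVENTS_SERVICE}/events/publish/{schema}",
                             json=[event], headers=HEADERS, timeout=10)
        resp.raise_for_status()

    def query_events(schema, query):
        """Query events back with an ADQL-style query string (illustrative)."""
        resp = requests.post(f"{EVENTS_SERVICE}/events/query",
                             data=query, headers=HEADERS, timeout=10)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        publish_event("transactions", {"page": "/checkout", "responseTimeMs": 42})
        print(query_events("transactions", "SELECT * FROM transactions"))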

Deployment Background

Our Unified Analytics product can be deployed in two ways:

  • on-premises deployment
  • SaaS deployment

Events Service

The AppDynamics events service is architected to cater to customers based on the deployment they choose. For on-premises deployments, it offers a lightweight package with minimal components to ease the handling of operational data. For SaaS, it includes the additional components needed to handle the scalability and data volumes typical of any SaaS-based service.

The SaaS events service has:

  1. API layer: the entry-point service
  2. Kafka queue
  3. Indexer layer, which consumes data from the Kafka queue and writes it to the event store
  4. Event store: Elasticsearch
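An end-to-end test against this pipeline can publish through the API layer and then poll the event store until the indexer has drained the event from Kafka. Below is a minimal sketch of that pattern; the endpoints, index name, and marker field are assumptions for illustration.

    # Minimal sketch of an end-to-end SaaS-pipeline test: publish through the API
    # layer, then poll the event store until the indexer has consumed the event
    # from Kafka. Endpoints, index name, and the marker field are assumptions.
    import time, uuid, requests

    API_LAYER = "http://localhost:9080"       # assumed API-layer endpoint
    EVENT_STORE = "http://localhost:9200"     # assumed Elasticsearch endpoint
    INDEX = "test_events"                     # assumed index name

    def publish(marker):
        requests.post(f"{API_LAYER}/events/publish/test_events",
                      json=[{"marker": marker}], timeout=10).raise_for_status()

    def indexed(marker):
        resp = requests.get(f"{EVENT_STORE}/{INDEX}/_search",
                            params={"q": f"marker:{marker}"}, timeout=10)
        if not resp.ok:
            return False
        total = resp.json()["hits"]["total"]          # int in older Elasticsearch,
        return (total["value"] if isinstance(total, dict) else total) > 0   # dict in 7+

    def test_saas_ingestion_round_trip():
        marker = uuid.uuid4().hex
        publish(marker)
        deadline = time.time() + 60
        while time.time() < deadline:    # indexing is asynchronous, so poll
            if indexed(marker):          # instead of asserting immediately
                return
            time.sleep(2)
        raise AssertionError("event never reached the event store")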

The on-premises events service has:

  1. API interface / REST endpoint for the service
  2. Event store

[Figure: Architecture of the events platform]

Operation/Environment Matrix

On-premises deployments bypass a few of these layers. In SaaS, the ingestion path guards against data loss by buffering events in a Kafka layer that helps coordinate ingestion into the event store. In an on-premises environment, ingestion happens directly into Elasticsearch through the API interface.
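Because the two deployment types exercise different layers, tests need to know which path they are verifying. The sketch below shows one way to express that branching with pytest; the layer names and environment variable are illustrative, not our exact implementation.

    # Minimal sketch of deployment-aware tests: SaaS verifies the API layer,
    # Kafka, the indexer, and Elasticsearch; on-premises verifies only the API
    # interface and Elasticsearch. Names and the environment variable are illustrative.
    import os
    import pytest

    DEPLOYMENT = os.environ.get("DEPLOYMENT_TYPE", "saas")   # "saas" or "onprem"

    SAAS_LAYERS = ["api", "kafka", "indexer", "elasticsearch"]
    ONPREM_LAYERS = ["api", "elasticsearch"]

    def layers_under_test():
        return SAAS_LAYERS if DEPLOYMENT == "saas" else ONPREM_LAYERS

    @pytest.mark.skipif(DEPLOYMENT != "saas", reason="Kafka exists only in the SaaS path")
    def test_kafka_backlog_is_drained():
        ...   # consumer-lag check; only meaningful when a queue sits in the path

    def test_ingestion_round_trip():
        # Runs for both deployment types; only the layers actually in the path
        # are verified.
        for layer in layers_under_test():
            assert layer   # placeholder for a per-layer health/verification check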

Objectives for testing the Events Service:

  • CI tests can run consistently in build systems.
  • The tests are easily pluggable and can run based on the deployment type.
  • Tests are easy to run in different environment types (locally or in the cloud), both to save time and to ensure they are environment agnostic (see the sketch below).
  • The framework should be scalable and reusable for functionality, performance, and scalability tests.

These objectives are mandatory to take us towards continuous deployment, where production deployment is just one click away from committing the code.
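One practical consequence of these objectives is that configuration has to live outside the tests themselves. A minimal sketch of that idea, with assumed variable names and defaults:

    # Minimal sketch of environment-agnostic test configuration: endpoints come
    # from the environment, so the same suite runs locally, in CI, or in the cloud.
    # Variable names and defaults are assumptions.
    import os
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TestEnvironment:
        events_api: str
        event_store: str
        deployment_type: str   # "saas" or "onprem"

    def load_environment() -> TestEnvironment:
        return TestEnvironment(
            events_api=os.environ.get("EVENTS_API_URL", "http://localhost:9080"),
            event_store=os.environ.get("EVENT_STORE_URL", "http://localhost:9200"),
            deployment_type=os.environ.get("DEPLOYMENT_TYPE", "onprem"),
        )

With endpoints injected this way, pre-checkin runs on a laptop and nightly runs in a build system can execute the exact same test code.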

Building the Test Framework

To build our testing framework, we analyzed the various solutions available. These are the options we considered:

  1. Bring the whole SaaS environment into a local environment by running the individual processes (such as Elasticsearch, Kafka, and web servers) and test them on a local box.
  2. Allocate dedicated VMs or bare-metal hosts for these tests, deploy the components there, and run.
  3. Use AWS to deploy these components and use them for testing.
  4. Use Docker containers to create a secluded environment, deploy, and test (see the sketch below).

We reviewed each option listed above and conducted a detailed analysis to understand the pros and cons of each. The outcome of this exercise enabled us to pick the right choice for the testing environment.
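As a taste of option 4, the sketch below uses the docker-py client to bring up and tear down a secluded environment. The image tags, container names, and ports are illustrative rather than our exact setup.

    # Minimal sketch of option 4: bring up a secluded test environment in Docker
    # containers (via the docker-py client) and tear it down afterwards.
    # Image tags, names, and ports are illustrative, not our exact setup.
    import docker

    def start_environment():
        client = docker.from_env()
        containers = [
            client.containers.run("elasticsearch:2.4", detach=True,
                                  name="test-es", ports={"9200/tcp": 9200}),
            client.containers.run("spotify/kafka", detach=True,
                                  name="test-kafka", ports={"9092/tcp": 9092}),
        ]
        return containers

    def stop_environment(containers):
        for container in containers:
            container.stop()
            container.remove()

    if __name__ == "__main__":
        env = start_environment()
        try:
            pass   # run the test suite against the containers here
        finally:
            stop_environment(env)   # the environment is disposable by design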

Stay Tuned

We will publish a follow-up blog to shed more light on:

  1. The pros and cons of each option we had
  2. Which option we chose, and why
  3. The architecture of our framework
  4. The test flow
  5. The performance of our infrastructure setup time and of test runs on that infrastructure

Swamy Sambamurthy works as a Principal Engineer at AppDynamics and has 11+ years of experience building scalable automation frameworks. Both in past roles and at AppDynamics, Swamy has helped build automation frameworks for distributed systems and big-data environments that scale to handle huge numbers of ingestion and query requests.

Swamy Sambamurthy
