The AppD Approach: Composing Docker Containers for Monitoring

Since its introduction four years ago, Docker has vastly changed how modern applications and services are built. But while the benefits of microservices are well documented, the bad habits aren’t.

Case in point: As people began porting more of their monolithic applications to containers, Dockerfiles became bloated, defeating the original purpose of containers. Any package or service you thought you might need was installed on the image, which meant a minor change to source or server forced a rebuild of the image. People would package multiple processes into a single Dockerfile. And as the images got bigger, things became much less efficient, because you would spend all of your time waiting on a rebuild just to check a simple change in source code.

The quick fix was to layer your applications. Maybe you had a base image, a language-specific image, a server image, and then your source code. While your images became more contained, any change to a bottom-level image still required a rebuild of the entire image set. So although your Dockerfiles became less bloated, you suffered from the same upgrade issues. With the industry becoming more and more agile, this practice simply couldn’t keep up.

The purpose of this blog is to show how we migrated an application to Docker—highlighting the Docker best practices we implemented—and how we achieved our end goal of monitoring the app in AppDynamics. (Source code located here)

Getting Started

With these best (and worst) practices in mind, we began by taking a multi-service Java application and putting it into Docker Compose. We wanted to build out the containers with the Principle of Least Privilege: each system component or process should have the least authority needed to complete its tasks. The containers needed to be ephemeral too, always shutting down when a SIGTERM is received. Since there were going to be environment variables reused across multiple services, we created a docker-compose.env file (image below) that could be leveraged across every service.

[AD-Capital-Docker/docker-compose.env]
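As a sketch, such an env file holds the settings every service shares, such as controller connection details, so they are defined exactly once (the variable names below are illustrative, not necessarily those in the repo):

```
# docker-compose.env (illustrative) -- settings shared by every service
APPDYNAMICS_CONTROLLER_HOST_NAME=controller.example.com
APPDYNAMICS_CONTROLLER_PORT=8090
APPDYNAMICS_AGENT_ACCOUNT_NAME=customer1
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=changeme
```

Each service then sources this one file instead of repeating the values.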

Lastly, we knew that for our two types of log data—Application and Agent—we would need to create a shared volume to house them.

[AD-Capital-Docker/docker-compose.yml]
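A minimal sketch of how the shared env file and a shared log volume might be wired into Compose (service and volume names here are illustrative):

```yaml
# docker-compose.yml (fragment, illustrative): every service sources the
# shared env file and mounts the same named volume for agent/app logs.
version: '2'
services:
  web:
    env_file: docker-compose.env
    volumes:
      - appd-logs:/appdynamics/logs
volumes:
  appd-logs:
```

Any other service that mounts `appd-logs` sees the same log directory.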

Instead of downloading and installing Java or Tomcat in the Dockerfile, we decided to pull the image directly from the official Tomcat repository on the Docker Store. This would let us know exactly which version we were on without having to install either Java or Tomcat. Upgrading versions of Java or Tomcat would be easy, and the maintenance work would fall to the official image rather than to us.
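In practice that means the Dockerfile starts from a pinned official image rather than installing Java and Tomcat by hand; a one-line sketch (the tag shown is an example, not necessarily the one the project uses):

```dockerfile
# Pull the official Tomcat image, which bundles a matching JRE; pinning the
# tag makes the Java/Tomcat versions explicit, and an upgrade is a one-line change.
FROM tomcat:8.0-jre8
```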

We knew we were going to have a number of services dependent on each other and linking through Compose, and that a massive bash script could cause problems. Enter Dockerize, a utility that simplifies running applications in Docker containers. Its primary role is to wait for other services to be available using TCP, HTTP(S) and Unix before starting the main process.

Some backstory: When using tools like Docker Compose, it’s common to depend on services in other linked containers. But oftentimes relying on links is not enough; while the container itself may have started, the service(s) within it may not be ready, resulting in shell-script hacks to work around race conditions. Dockerize gives you the ability to wait for services on a specified protocol (file, TCP, TCP4, TCP6, HTTP, HTTPS, and Unix) before starting your application. You can use the -timeout argument (default: 10 seconds) to specify how long to wait for the services to become available. If the timeout is reached and a service is still not available, the process exits with status code 1.

[AD-Capital-Docker/ADCapital-Tomcat/startup.sh]
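The pattern boils down to something like the following Dockerfile CMD (the repo's actual startup.sh is referenced above; service names and ports here are illustrative):

```dockerfile
# Block until the database and an upstream service answer, then start Tomcat;
# give up after 60 seconds instead of hanging forever.
CMD ["dockerize", \
     "-wait", "tcp://db:3306", \
     "-wait", "http://rest-service:8080", \
     "-timeout", "60s", \
     "catalina.sh", "run"]
```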

We then separated the source code from the agent monitoring. (The project uses a Docker volume to store the agent binaries and log/config files.) Now that we had a single image pulled from Tomcat, we could place our source code in a single Dockerfile and replicate it anywhere. Using prebuilt WAR files, we could pull a build from any point in time and place it in the Tomcat webapps subdirectory.

[AD-Capital-Docker/ADCapital-Project/Dockerfile]
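A sketch of what the source-code image boils down to (the base image name and WAR path are placeholders, not the repo's exact values):

```dockerfile
# Start from the server image built above and drop a prebuilt WAR into
# Tomcat's webapps directory; Tomcat auto-deploys it on startup.
FROM adcapital-tomcat
COPY target/portal.war /usr/local/tomcat/webapps/portal.war
```

Swapping in a WAR from a different build is a one-line change, with no rebuild of the server image.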

We now had a Dockerfile containing everything needed for our servers, and a Dockerfile for the source code, allowing you to run it with or without monitoring enabled. The next step was to split out the AppDynamics Application and Machine Agent.

We knew we wanted to instrument with our agents, but we didn’t want a configuration file with duplicate information for every container. So we created a docker-compose.env. Since our agents require minimal configuration—and the only differences between “tiers” and “nodes” are their names—we knew we could pass these environment variables across the agents without using multiple configs. In our compose file, we could then specify the tier and node name for each individual service.

[AD-Capital-Docker/docker-compose.yml]
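A sketch of the per-service overrides: the shared env file carries everything common, and each service sets only the names that distinguish it (variable and service names here are illustrative):

```yaml
# docker-compose.yml (fragment, illustrative): services differ only in
# their tier and node names; everything else comes from the shared env file.
services:
  portal:
    env_file: docker-compose.env
    environment:
      - TIER_NAME=Portal
      - NODE_NAME=Portal-Node-1
  rest:
    env_file: docker-compose.env
    environment:
      - TIER_NAME=REST
      - NODE_NAME=REST-Node-1
```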

For the purpose of this blog, we downloaded the agent and passed in the filename and SHA-256 checksum via shell scripts in the ADCapital-Appdynamics/docker-compose.yml file. We passed the application agent and its configuration script into the shared volume, which allowed the individual projects to use them on startup (see image below). Now that we had enabled application monitoring for our apps, we wanted to install the machine agent to enable analytics. We followed the same instrumentation process, downloading the agent and verifying the filename and checksum. The machine agent is a standalone process, so its configuration script was a little different, but it took advantage of the docker-compose.env variables to set the right parameters for the machine agent (ADCapital-Monitor/start-appdynamics).

[AD-Capital-Docker/ADCapital-AppDynamics/startup.sh]
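The checksum-verification step can be sketched as follows: a minimal example of checking a downloaded archive against a known SHA-256 digest before installing it (file names are stand-ins, not the repo's actual script):

```shell
# Refuse to install an agent archive whose SHA-256 doesn't match the expected value.
verify_sha256() {
  # $1 = file to check, $2 = expected hex digest
  echo "$2  $1" | sha256sum -c - >/dev/null 2>&1 || {
    echo "checksum mismatch: $1" >&2
    return 1
  }
}

# Demo with a stand-in file: compute its digest, then verify it.
printf 'agent-bytes' > /tmp/agent.zip
expected=$(sha256sum /tmp/agent.zip | awk '{print $1}')
verify_sha256 /tmp/agent.zip "$expected" && echo "checksum OK"
```

Failing loudly here keeps a corrupted or tampered download from ever reaching the shared volume.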

The payoff? We now have an image responsible for the server, one responsible for the load, and another responsible for the application. In addition, another image monitors the application, and a final image monitors the application’s logs and analytics. Updating an individual component no longer requires a rebuild of the entire application. We’re using Docker as it was intended: each container has one responsibility. Lastly, by using volumes to share data across services, we can easily check agent and application logs. This makes it much easier to gain visibility into the entire landscape of our software.

If you would like to see the source code used for this blog, it is located here, with instructions on how to build and set it up. In the next blog, we will show you how to migrate from host agents using Docker images from the Docker Store.

Scaling with Containers at AppSphere 2016

Containers have grown tremendously in popularity in recent years. Originally conceived as a way to replace legacy systems completely, container technology has instead become a way to extend monolithic systems with newer, faster technology. As an example of this growth, the 2016 RightScale State of the Cloud Report™ shows Docker adoption moving from thirteen percent in 2015 to twenty-seven percent in 2016. Another thirty-five percent of respondents say they have plans to use Docker in the near future.

What Are Containers?

Containers allow you to move software from one environment to another without worrying about different applications, SSL libraries, network topology, storage systems, or security policies — for example, moving from a machine in your data center to a virtual environment in the cloud. They are able to do this because everything you need to run the software travels as one unit. The application, binaries, libraries, and configuration files all live together inside a single container.

You can move a container to a wide variety of software environments with no problem because the program is self-contained. In contrast, virtualization also includes the operating system. Containers share the same operating system kernel, so they are lighter and more energy-efficient than a virtual machine. Hypervisors are an abstraction of the entire machine, while containers are an abstraction of only the OS kernel.

There are a variety of container technologies to support different use cases. The most popular container technology right now is Docker. It grew rapidly a few years ago with major adoption in enterprise computing, including three of the biggest financial institutions in the world — unusual for the slow-to-adopt world of banking. Docker allows software applications to run on a large number of machines at the same time, an important quality for huge sites like Facebook that must deliver data to millions of consumers simultaneously.

Container Technologies

Recent surveys performed by DevOps.com and ClusterHQ show Docker is the overwhelming favorite in container technology at this point. One of the most talked-about competitors to Docker that has emerged recently is Rocket, an open-source project from CoreOS, which ironically was one of Docker’s early proponents. Backed heavily by Google, CoreOS developed the technology because its founders thought Docker had grown and moved too far away from its original purpose. While Docker has been embraced as almost an industry standard, competitors are making inroads. Rocket’s founders say one of its strengths is that it is not controlled by a single organization.

One of the pioneers of container technology, dating back to 2001, is a product from Parallels called Virtuozzo. It gets a lot of attention from OEMs, works well on cloud servers, and features near-instant provisioning. Other fast-growing container technologies include LXC and LVE.

Container Best Practices

One of the challenges of containers is monitoring their performance. AppDynamics is able to monitor containers using our innovative Microservices iQ. It provides automatic discovery of entry and exit service endpoints, tracks important performance indicators, and isolates the cause of performance issues.

At AppSphere 2016, you can learn more about containers and performance monitoring at 10 AM on Thursday, November 17, where AppDynamics’ CTO, Steve Sturtevant, will be presenting his talk, “Best Practices for Managing IaaS, PaaS, and Container-Based Deployments.” Register today to ensure your spot at Steve’s session, and much more, in just a few weeks at AppSphere 2016. We’re looking forward to seeing you there!

The Importance of Monitoring Containers [Infographic]

With the rise of Docker, Kubernetes, and other container technologies, the growth of microservices has skyrocketed among dev teams looking to innovate on a faster release cycle. This has enabled teams to finally realize their DevOps goals to ship and iterate quickly in a continuous delivery model. It’s no surprise that containers are growing in popularity: they’re extremely easy to spin up or down. But they come with a catch.

Without the right foresight, DevOps and IT teams may lose visibility into these containers, resulting in operational blind spots and even more haystacks in which to find the proverbial performance-issue needle.

If your team is looking towards containers and microservices as an operational change in how you decide to ship your product, you can’t afford bugs or software issues affecting your performance, end-user experience, or ultimately your bottom line.

Ed Moyle, Director of Emerging Business & Technology at ISACA, said it best in his blog: “Consider what happens to these issues when containers enter into the mix. Not only are all the VM issues still there, but they’re now potentially compounded. Inventories that were already difficult to keep current because of VM sprawl might now have to accommodate containers, too. For example, any given VM could contain potentially dozens of individual containers. Issues arising from unexpected migration of VM images might be made significantly worse when the containers running on them can be relocated with a few keystrokes.”

Earlier this year, AppDynamics unveiled Microservices iQ to address these visibility issues daunting DevOps teams today.

Infographic – Container Monitoring 101 from AppDynamics

With Microservices iQ, DevOps teams can:

  • Automatically discover the entry and exit points of your microservices as service endpoints for focused microservices monitoring

  • Track the key performance indicators of your microservice without worrying about the entire distributed business transaction that uses it

  • Drill down and isolate the root cause of any performance issues affecting the microservice

Interested in learning more? Check out our free ebook, The Importance of Monitoring Containers.

AppDynamics Monitoring Excels for Microservices; New Pricing Model Introduced

It’s no news that microservices are one of the top trends, if not the top trend, in application architectures today. The idea: take large monolithic applications, which are brittle and difficult to change, and break them into smaller, manageable pieces that provide flexibility in deployment models and facilitate the agile release and development that today’s rapidly shifting digital businesses demand. Unfortunately, with this change, application and infrastructure management becomes more complex due to size and technology changes, most often adding significantly more virtual machines and/or containers to handle the growing footprint of application instances.

Fortunately, this is just the kind of environment the AppDynamics Application Intelligence Platform is built for, delivering deep visibility across even the most complex, distributed, heterogeneous environments. We trace and monitor every business transaction from end-to-end — no matter how far apart those ends are, or how circuitous the path between — including any and all API calls across any and all microservices tiers. Wherever there is an issue, the AppDynamics platform pinpoints it and steers the way to rapid resolution. This data can also be used to analyze usage patterns and scaling requirements, and even to gain visibility into infrastructure usage.

This is just the beginning of the microservices trend. With the rise of the Internet of Things, all manner of devices and services will be driven by microservices. The applications themselves will be extended into the “Things,” causing even further exponential growth over the next five years. Gartner predicts over 25 billion connected devices by 2020, with the majority in the utilities, manufacturing, and government sectors.

AppDynamics microservices pricing is based on the size of the Java Virtual Machine (JVM) instance; any JVM running with a maximum heap size of less than one gigabyte is considered a microservice.

We’re excited to help usher in this important technology, and to make it feasible and easy for enterprises to deploy AppDynamics Java microservices monitoring and analytics. For a more detailed perspective, see our post, Visualizing and tracking your microservices.

Complete visibility into Docker containers with AppDynamics

Today we announced the AppDynamics Docker monitoring solution, which provides an application-centric view inside and across Docker containers. Distributed applications and business transactions can be tagged, traced, and monitored even as they transit multiple containers.

Before I talk more about the AppDynamics Docker monitoring solution, let me quickly review the premise of Docker and point you to a recent blog, “Visualizing and tracking your microservices,” by my colleague Jonah Kowall, which highlights Docker’s synergy with another hot technology trend: microservices.

What is Docker?

Docker is an open platform for developers and sysadmins of distributed applications that enables them to build, ship, and run any app anywhere. Docker allows applications to run on any platform irrespective of the tools used to build them, making it easy to distribute, test, and run software. I found this 5 Minute Docker video very helpful for a quick and digestible overview. If you want to learn more, you can go to Docker’s web page and start with this Docker introduction video.

Docker makes it very easy to make changes and package software quickly for others to test, without requiring a lot of resources. At AppDynamics, we have embraced Docker completely in our development, testing, and demo environments. For example, as you can see in the attached screenshot from our demo environment, we are using Docker to provision various demo use cases with different application environments like JBoss, Tomcat, MongoDB, AngularJS, and so on.

Screen Shot 2015-05-11 at 11.15.07 AM.png

In addition, you can test drive AppDynamics by downloading, deploying, and testing with the packaged applications from the AppDynamics Docker repos.

Complete visibility into Docker environment with AppDynamics

AppDynamics provides visibility into applications and business transactions composed of multiple smaller, decoupled (micro)services deployed in a Docker environment. The AppDynamics Docker Monitoring Extension monitors and reports various metrics, such as the total number of containers, running containers, images, CPU usage, memory usage, and network traffic. The extension gathers metrics from the Docker Remote API, using either a Unix socket or TCP, giving you a choice of data-collection protocol.
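For illustration, the same Remote API can be queried by hand over the default Unix socket; the endpoints below are standard Docker Engine API paths (this sketch assumes curl is installed and a Docker daemon is running locally):

```shell
# List running containers and local images via the Docker Remote API over
# the Unix socket; fall back to a message when no daemon socket is present.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  curl -s --unix-socket "$SOCK" http://localhost/containers/json || true
  curl -s --unix-socket "$SOCK" http://localhost/images/json || true
else
  echo "Docker socket not found at $SOCK"
fi
```

The extension polls endpoints like these on a schedule and forwards the parsed values as AppDynamics metrics.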

The Docker metrics can now be correlated with the metrics from the applications running in the containers. For example, in the screenshot below, the overall performance (calls per minute, in red) of a web server deployed in a Docker container is correlated with Docker performance metrics (network transmit/receive and CPU usage). As the number of calls per minute to the web server increases, you can see that network traffic and CPU usage increase as well.

docker_metric_browser_with_cpu.png

Customers can leverage all the core functionality of AppDynamics (e.g., dynamic baselining, health rules, policies, actions, etc.) for all the Docker metrics, while correlating them with the metrics from the applications already running in the Docker environment.

The Docker monitoring extension also creates an out-of-the-box custom dashboard with key Docker metrics, as shown in the screenshot below. This dashboard will jump-start your monitoring of the Docker environment.

docker_custom_dashboard.png

Download the AppDynamics Docker monitoring extension, set it up and configure it following the instructions on the extension page, and get end-to-end visibility into your Docker environment and the applications running within it.