Deploying AppDynamics Agents to OpenShift Using Init Containers

There are several ways to instrument an application on OpenShift with an AppDynamics application agent. The most straightforward way is to embed the agent into the main application image. (For more on this topic, read my blog Monitoring Kubernetes and OpenShift with AppDynamics.)

Let’s consider a Node.js app. All you need to do is add a require reference to the agent libraries and pass the necessary information about the controller. The reference itself becomes part of the app and will be embedded in the image. The variables the agent needs to communicate with the controller (e.g., controller host name, app/tier name, license key) can be embedded as well, though it is best practice to pass them into the app on initialization as configurable environment variables.

In the world of Kubernetes (K8s) and OpenShift, this task is accomplished with config maps and secrets. Config maps are reusable key-value stores that can be made accessible to one or more applications. Secrets are very similar to config maps, with the additional capability of obfuscating key values. When you create a secret, K8s automatically encodes the value of the key as a base64 string. The actual value is no longer visible, so you are protected from people looking over your shoulder. When the key is requested by the app, Kubernetes automatically decodes the value. Secrets can be used to store any sensitive data, such as license keys, passwords, and so on. In our example below, we use a secret to store the license key.

Here is an example of AppD instrumentation where the agent is embedded, and the configurable values are passed as environment variables by means of a configMap, a secret, and the pod spec.

// Instrument the app; this must run before the rest of the app loads
var appDobj = {
   controllerHostName: process.env['CONTROLLER_HOST'],
   controllerPort: process.env['CONTROLLER_PORT'],
   controllerSslEnabled: true,
   accountName: process.env['ACCOUNT_NAME'],
   accountAccessKey: process.env['ACCOUNT_ACCESS_KEY'],
   applicationName: process.env['APPLICATION_NAME'],
   tierName: process.env['TIER_NAME'],
   nodeName: 'process'
};
require('appdynamics').profile(appDobj);

Pod Spec
- env:
    - name: TIER_NAME
      value: MyAppTier
    - name: ACCOUNT_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          key: appd-key
          name: appd-secret
  envFrom:
    - configMapRef:
        name: controller-config

A ConfigMap with AppD variables.

AppD license key stored as secret.
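Since the ConfigMap and secret above are shown only as screenshots, here is a minimal sketch of what the pair might look like in YAML (all values are placeholders, not real controller settings):

apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-config
data:
  CONTROLLER_HOST: mycontroller.example.com   # hypothetical controller host
  CONTROLLER_PORT: "8181"
  CONTROLLER_SSL_ENABLED: "true"
  ACCOUNT_NAME: customer1
  APPLICATION_NAME: MyApp
---
apiVersion: v1
kind: Secret
metadata:
  name: appd-secret
type: Opaque
data:
  appd-key: bXktbGljZW5zZS1rZXk=   # base64 of the placeholder "my-license-key"

Note that secret values supplied in YAML must already be base64-encoded; Kubernetes decodes them when the key is requested by the app.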

The Init Container Route: Best Practice

The straightforward way is not always the best. Application developers may want to avoid embedding a “foreign object” into the app images for a number of good reasons—for example, image size, granularity of testing, or encapsulation. Being developers ourselves, we respect that and offer an alternative, a less intrusive way of instrumentation. The Kubernetes way.

An init container is a design feature in Kubernetes that allows decoupling of app logic from any type of initialization routine, such as monitoring, in our case. While the main app container lives for the entire duration of the pod, the lifespan of the init container is much shorter. The init container does the required prep work before orchestration of the main container begins. Once the initialization is complete, the init container exits and the main container is started. This means the init container does not run in parallel to the main container as, for example, a sidecar container would. However, like a sidecar container, the init container, while still active, has access to the ephemeral storage of the pod.

We use this ability to share storage between the init container and the main container to inject the AppDynamics agent into the app. Our init container image, in its simplest form, can be described with this Dockerfile:

FROM openjdk:8-jdk-alpine
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN mkdir -p /sharedFiles/AppServerAgent
ADD AppServerAgent.zip /sharedFiles/
RUN unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent/
CMD ["tail", "-f", "/dev/null"]

The above example assumes you have already downloaded the archive with the AppDynamics app agent binaries locally. When the image is built, the binaries are unzipped into a new directory. To the pod spec, we then add a directive that copies the directory with the agent binaries to a shared volume on the pod:

spec:
  initContainers:
    - name: agent-repo
      image: agent-repo:x.x.x
      imagePullPolicy: IfNotPresent
      command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
      volumeMounts:
        - mountPath: /mountPath
          name: shared-files
  volumes:
    - name: shared-files
      emptyDir: {}
  serviceAccountName: my-account

After the init container exits, the AppDynamics agent binaries sit on the shared volume of the pod, waiting to be picked up by the application.

Let’s assume we are deploying a Java app, one normally initialized via a script that calls the java command with Java options. The script, startup.sh, may look like this:

# startup.sh
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.tierName=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.agent.reuse.nodeName.prefix=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -javaagent:/sharedFiles/AppServerAgent/javaagent.jar"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.controller.hostName=$CONTROLLER_HOST -Dappdynamics.controller.port=$CONTROLLER_PORT -Dappdynamics.controller.ssl.enabled=$CONTROLLER_SSL_ENABLED"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.accountName=$ACCOUNT_NAME -Dappdynamics.agent.accountAccessKey=$ACCOUNT_ACCESS_KEY -Dappdynamics.agent.applicationName=$APPLICATION_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.socket.collection.bci.enable=true"
JAVA_OPTS="$JAVA_OPTS -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"

java $JAVA_OPTS -jar myapp.jar

It is embedded into the image and invoked via Docker’s ENTRYPOINT directive when the container starts.

FROM openjdk:8-jdk-alpine
COPY startup.sh startup.sh
RUN chmod +x startup.sh
ADD myapp.jar /usr/src/myapp.jar
EXPOSE 8080
ENTRYPOINT ["/bin/sh", "startup.sh"]

To make the consumption of startup.sh more flexible and Kubernetes-friendly, we can trim it down to this:

# a more flexible startup.sh
java $JAVA_OPTS -jar myapp.jar

And declare all the necessary Java options in the spec as a single environment variable.

containers:
  - name: my-app
    image: my-app-image:x.x.x
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
    envFrom:
      - configMapRef:
          name: controller-config
    env:
      - name: ACCOUNT_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            key: appd-key
            name: appd-secret
      - name: JAVA_OPTS
        value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
          -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
          -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
          -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
          -Xms64m -Xmx512m -XX:MaxPermSize=256m
          -Djava.net.preferIPv4Stack=true
          …"
    ports:
      - containerPort: 8080
    volumeMounts:
      - mountPath: /sharedFiles
        name: shared-files

The dynamic values for the Java options are populated from the ConfigMap. First, we reference the entire configMap, where all shared values are defined:

envFrom:
  - configMapRef:
      name: controller-config

We also reference our secret as a separate environment variable. Then, using the $() notation, we can reference the individual variables to build up the value of the JAVA_OPTS variable.

Thanks to these Kubernetes features (init containers, configMaps, secrets), we can add AppDynamics monitoring into an existing app in a noninvasive way, without the need to rebuild the image.

This approach has multiple benefits. The app image remains unchanged in terms of size and encapsulation. From a Kubernetes perspective, no extra processing is added, as the init container exits before the main container starts. There is added flexibility in what can be passed into the application initialization routine without the need to modify the image.

Note that OpenShift does not allow running Docker containers as root by default. If you must (for whatever good reason), add the service account you use for deployments to the anyuid SCC. Assuming your service account is my-account, as in the provided examples, run this command:

oc adm policy add-scc-to-user anyuid -z my-account

Here’s an example of a complete app spec with AppD instrumentation:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: my-app
    spec:
      initContainers:
        - name: agent-repo
          image: agent-repo:x.x.x
          imagePullPolicy: IfNotPresent
          command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
          volumeMounts:
            - mountPath: /mountPath
              name: shared-files
      volumes:
        - name: shared-files
          emptyDir: {}
      serviceAccountName: my-account
      containers:
        - name: my-app
          image: my-service
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: controller-config
          env:
            - name: TIER_NAME
              value: WebTier
            - name: ACCOUNT_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: appd-key
                  name: appd-key-secret
            - name: JAVA_OPTS
              value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
                -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
                -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
                -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
                -Xms64m -Xmx512m -XX:MaxPermSize=256m
                -Djava.net.preferIPv4Stack=true
                …"
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /sharedFiles
              name: shared-files
      restartPolicy: Always

Learn more about how AppDynamics can help monitor your applications on Kubernetes and OpenShift.

Advances In Mesh Technology Make It Easier for the Enterprise to Embrace Containers and Microservices

More enterprises are embracing containers and microservices, which bring along additional networking complexities. So it’s no surprise that service meshes are in the spotlight now. There have been substantial advances recently in service mesh technologies (including Istio’s 1.0 release, HashiCorp’s Consul 1.2.1, and Buoyant merging Conduit into Linkerd), and for good reason.

Some background: service meshes are pieces of infrastructure that facilitate service-to-service communication—the backbone of all modern applications. A service mesh allows for codifying more complex networking rules and behaviors such as a circuit breaker pattern. AppDev teams can start to rely on service mesh facilities, and rest assured their applications will perform in a consistent, code-defined manner.
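For example, in Istio’s API a circuit breaker can be codified as a DestinationRule. The sketch below uses a hypothetical service name and thresholds; it ejects a backend that returns five consecutive errors:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: shopping-cart
spec:
  host: shopping-cart            # hypothetical service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100      # cap concurrent connections to the service
    outlierDetection:
      consecutiveErrors: 5       # trip the breaker after 5 consecutive errors
      interval: 30s              # how often hosts are scanned
      baseEjectionTime: 60s      # keep an unhealthy host ejected for 60s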

Endpoint Bloom

The more services and replicas you have, the more endpoints you have. And with the container and microservices boom, the number of endpoints is exploding. With the rise of Platform-as-a-Service offerings and container orchestrators, new terms like ingress and egress are becoming part of the AppDev team vernacular. As you go through your containerization journey, multiple questions will arise around the topic of connectivity. Application owners will have to define how and where their services are exposed.

The days of providing the networking team with a context/VIP to add to web infrastructure (such as services.acme.com/shoppingCart over port 443) are fading. Today, AppDev teams are more likely to hand over a Kubernetes YAML, as sketched below, to add services.acme.com/shoppingCart to the Ingress controller, and then describe a behavior. Example: the shopping cart Pod needs to talk to the shopping cart validation Pod, which only the shopping cart may access, because the inventory is kept on another set of Redis Pods that can’t be exposed to the outside world.
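As a rough sketch, that hand-off might be an Ingress resource like this (the host and path come from the example above; the service name, TLS secret, and port are hypothetical):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: shopping-cart
spec:
  tls:
    - hosts:
        - services.acme.com
      secretName: acme-tls              # hypothetical TLS secret terminating port 443
  rules:
    - host: services.acme.com
      http:
        paths:
          - path: /shoppingCart
            backend:
              serviceName: shopping-cart  # hypothetical Service fronting the shopping cart Pods
              servicePort: 8080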

You’re juggling all of this while navigating constraints set by defined and deployed Kubernetes networking. At this point, don’t be alarmed if you’re thinking, “Wow, I thought I was in AppDev—didn’t know I needed a CCNA to get my application deployed!”

The Rise of the Service Mesh

When navigating the “fog of system development,” it’s tricky to know all the moving pieces and connectivity options. With AppDev teams focusing mostly on feature development rather than connectivity, it’s very important to make sure all the services are discoverable to them. Investments in API management are the norm now, with teams registering and representing their services in an API gateway or documenting them in Swagger, for example.

But what about the underlying networking stack? Services might be discoverable, but are they available? Imagine a Venn diagram of AppDev vs. Sys Engineer vs. SRE: Who’s responsible for which task? And with multiple pieces of infrastructure to traverse, what would be a consistent way to describe networking patterns between services?

Service Mesh to the Rescue

Going back to the endpoint bloom, consistency and predictability are king. Over the past few years, service meshes have been maturing and gaining popularity. Here are some great places to learn more about them:

Service Mesh 101

In the Istio model, applications participate in a service mesh. Istio acts as the mesh, and then applications can participate in the mesh via a sidecar proxy—Envoy, in Istio’s case.

Your First Mesh

DZone has a very well-written article about standing up your first Java application in Kubernetes to participate in an Istio-powered service mesh. The article goes into detail about deploying Istio itself in Kubernetes (in this case, Minikube). For an AppDev team, the new piece would be creating the all-important routing rules, which are deployed to Istio.
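For a flavor of what such a rule looks like, here is a minimal sketch of an Istio VirtualService (names and values are hypothetical, not taken from the DZone article):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: shopping-cart
spec:
  hosts:
    - shopping-cart              # hypothetical service participating in the mesh
  http:
    - route:
        - destination:
            host: shopping-cart
            subset: v1           # send all traffic to the v1 subset
      timeout: 2s                # fail the call if no response within 2s
      retries:
        attempts: 3
        perTryTimeout: 1s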

Which One of these Meshes?

The New Stack has a very good article comparing the pros and cons of the major service mesh providers. The post lays out the problem in granular format, and discusses which factors you should consider to determine if your organization is even ready for a service mesh.

Increasing Importance of AppDynamics

With the advent of the service mesh, barriers are falling and enabling services to communicate more consistently, especially in production environments.

If tweaks are needed to the routing rules (for example, a timeout), it’s best to have the ability to pinpoint which remote calls would make the most sense for this task. AppDynamics has the ability to examine service endpoints, which can provide much-needed data for these tweaks.

For the service mesh itself, AppDynamics can monitor the health of your applications deployed on a Kubernetes cluster.

With the rising velocity of new applications being created or broken into smaller pieces, AppDynamics can help make sure all of these components are humming at their optimal frequency.

Getting Started with Containers and Microservices

Get Ahead of Microservices and Container Proliferation with Robust App Monitoring

Containers and microservices are growing in popularity, and why not? They enable agility, speed, and resource efficiency for many of the tasks developers work on daily. They are light in terms of coding and interdependencies, which makes it much easier and less time-consuming to deliver apps to app users or migrate applications from legacy systems to cloud servers.

What Are Containers and Microservices?

Containers are isolated workload environments in a virtualized operating system. They speed up workload processes and application delivery because they can be spun up quickly; and they provide a solution for application-portability challenges because they are not tied to software on physical machines.

Microservices are a type of software architecture that is light and limited in scope. Single-function applications comprise small, self-contained units working together through APIs that are not dependent on a specific language. A microservices architecture is faster and more agile than traditional application architecture.

The Importance of Monitoring

For containers and microservices to be most effective and impactful as they are adopted, technology leaders must prepare a plan on how to monitor and code within them. They also must understand how developers will use them.

Foundationally, all pieces and parts of an enterprise technology stack should be planned, monitored, and measured. Containers and microservices are no exception. Businesses should monitor them to manage their use according to a planned strategy, so that best practice standards (i.e., security protocols, sharing permissions, when to use and not use, etc.) can be identified, documented, and shared. Containers and microservices also must be monitored to ensure both the quality and security of digital products and assets.

To do all of this, an organization needs robust application monitoring capabilities that provide full visibility into the containers and microservices, as well as insight into how they are being used and their influence on goals such as better productivity or faster time-to-market.

Assessing Your Application Monitoring Capabilities

Some of the questions that enterprises should ask as they assess their application-monitoring capabilities are:

  • How can we ensure development and operations teams are working together to use containers and microservices in alignment with enterprise needs?

  • Will we build our own system to manage container assignment, clustering, etc.? Or should we use third-party vendors that will need to be monitored?

  • Will we be able to monitor code inside containers and the components that make up microservices with our current application performance management (APM) footprint?

  • Do we need more robust APM to effectively manage containers and microservices? And how do we determine the best solution for our needs?

To answer those questions, and to learn more about containers and microservices and how to effectively use and manage them, read Getting Started With Containers and Microservices: A Mini Guide for Enterprise Leaders.

This mini eBook expands on the topics discussed in this blog and includes an 8-point plan for choosing an effective APM solution.

Go to the guide.

The AppD Approach: Leveraging Docker Store Images with Built-In AppDynamics

In my previous blog we explored some of the best and worst practices of Docker, taking a hands-on approach to refactoring an application, always with containers and monitoring in mind. In that project, we chose to use physical agents from the AppDynamics download site as our monitoring method. But this time we are going to take things one step further: using images from the Docker Store to improve the same application.

Modern applications are very complex, of course, and we will show the three most common ways to use AppDynamics Docker Store Images to monitor your app, all while adhering to Docker best practices. We will continue to use this repo and move between the “master” and “docker-store-images” branches. If you haven’t read my previous post, I recommend doing so first, as we will build on the source code used there.

First Things First: The Image

Over at the AppDynamics page on the Docker Store (login required), we have three types of images for Java applications, each with our agents on them. In this project, we will work solely with the Machine Agent and Java images but, in principle, the scenarios and implementations are language-agnostic. The images are based on OpenJDK, Tomcat, and Jetty. Since our application uses Tomcat, we will use that image (store/appdynamics/java:4.3.7.1_tomcat9-jre8).

You can see how every image is versioned as store/appdynamics/<language>:<agent-version>_<server-runtime-environment>.

By inspecting the image (docker inspect store/appdynamics/java:4.3.7.1_tomcat9-jre8), we are able to verify important environment variables, including the fact that we’re running Java 8. We’re also able to identify the JAVA_HOME variable, which will play an important role (more on this below). Furthermore, we can verify that Tomcat is installed with the correct versions and paths. Lastly, we notice the command to start the agent is simply the catalina.sh run command. On startup, the image runs Tomcat and the agent. (This is important to note as we dive deeper.)

Lastly, if you plan to use a third-party image in your production application, the image must be trusted. This means it must not modify any other containers at runtime. Having the OS level agent force itself into a containerized app—and intrusively modify code at runtime—defeats one of the main advantages of containerization: the ability to run the container anywhere without worrying about how your code executes. Always keep this in mind when evaluating container monitoring software. (AppDynamics’ images pass this test, by the way.)

Here are the three most practical migrations you’re likely to face:

Scenario 1: A Perfect World

This is the best-case scenario, but also the least practical. In a perfect world, we’d be able to pull the image as a top layer, pass in only environment variables, and see our application discovered within minutes. However, in our situation, we can’t do this because we have a custom startup script that we want to run when our container starts. In this example, we’ve chosen to use Dockerize (https://github.com/jwilder/dockerize) to simplify the process of converting our application to use Docker, but of course there are many situations where you might need some custom start logic in your containers. If you’re not using Dockerize in your script, simply pull the image and pass in the environment variables that name the individual components. Since the agents run on startup, this method will be seamless.

Scenario 2: Install Tomcat

Ideally, we’d like to make as few changes as possible. The problem here is that we have a unique startup script that needs to run when each project is started. In this scenario, your workaround is to use the agent image that doesn’t have Tomcat—in other words, use store/appdynamics/java:4.3.7.1 and install Tomcat on the image. With this approach, you remove the overlapping start commands and top-level agent. The downside is reinstalling Tomcat on every image rebuild.

Scenario 3: Refactor Start Script

Here’s the most common scenario when migrating from a physical agent—and one we found ourselves in. A specific run script brings up all of your applications. Refactoring your apps to pull from the image and start your application would be too time consuming, and would ask too much of the customer. The solution: Combine the two start scripts.

In our scenario, we had a directory responsible for the server, and another responsible for downloading and installing the agents. Since we were using Tomcat, we decided to leverage the image with Tomcat and our monitoring software already installed (store/appdynamics/java:4.3.7.1_tomcat9-jre8). (We went with the official Tomcat image because it’s the one used by the AppDynamics image.)

In our startup script, AD-Capital-Docker/ADCapital-Tomcat/startup.sh, we used Dockerize to spin up all the services. You’ll notice that we added a couple of environment variables, ${APPD_JAVAAGENT} and ${APPD_PROPERTIES}, to each start command. In our existing version, these changes enable the script to see if AppD properties are set and, if so, to start the application agent.

The next step was to refactor the startup script to use our new image. (To get the agent start command, simply pull the image, run the container, and run ps -ef at the command line.)

Since Java was installed to a different location, we had to put its path in our start command, replacing “java” with “/docker-java-home/jre/bin/java”. This approach allowed us to ensure that our application was using the Java provided from the image.

Next, we needed to make sure we were starting the services using Tomcat, and with the start command from the AppDynamics agent image. By using the command from above, we were able to replace our Catalina startup:

-cp ${CATALINA_HOME}/bin/bootstrap.jar:${CATALINA_HOME}/bin/tomcat-juli.jar org.apache.catalina.startup.Bootstrap

…with the agent startup:

-Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djdk.tls.ephemeralDHKeySize=2048
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources
-javaagent:/opt/appdynamics/javaagent.jar -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
-Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat
-Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start

If you look closely, though, not all of the services were using Tomcat on startup. The last two services simply needed to start the agent. By reusing the same environment variable (APPD_JAVA_AGENT), we were able to set its value to the path of the agent jar. And with that, we had our new startup script.

[startup.sh (BEFORE)]

[startup.sh (AFTER)]

Not only did this approach allow us to get rid of our AppDynamics directory, it also enabled a seamless transition to monitoring via Docker images.

The AppD Approach: Composing Docker Containers for Monitoring

Since its introduction four years ago, Docker has vastly changed how modern applications and services are built. But while the benefits of microservices are well documented, the bad habits aren’t.

Case in point: As people began porting more of their monolithic applications to containers, Dockerfiles ended up becoming bloated, defeating the original purpose of containers. Any package or service you thought you needed was installed on the image. Minor changes in source or server then forced a rebuild of the image. People would package multiple processes into a single Dockerfile. And obviously, as the images got bigger, things became much less efficient, because you would spend all of your time waiting on a rebuild to check a simple change in source code.

The quick fix was to layer your applications. Maybe you had a base image, a language-specific image, a server image, and then your source code. While your images became more contained, any change to your bottom-level images would require an entire rebuild of the image set. Although your Dockerfiles became less bloated, you still suffered from the same upgrade issues. With the industry becoming more and more agile, this practice didn’t feel aligned.

The purpose of this blog is to show how we migrated an application to Docker—highlighting the Docker best practices we implemented—and how we achieved our end goal of monitoring the app in AppDynamics. (Source code located here)

Getting Started

With these best (and worst) practices in mind, we began by taking a multi-service Java application and putting it into Docker Compose. We wanted to build out the containers with the Principle of Least Privilege: each system component or process should have the least authority needed to complete its tasks. The containers needed to be ephemeral too, always shutting down when a SIGTERM is received. Since there were going to be environment variables reused across multiple services, we created a docker-compose.env file (image below) that could be leveraged across every service.

[AD-Capital-Docker/docker-compose.env]

Lastly, we knew that for our two types of log data, application and agent, we would need to create a shared volume to house them.

[AD-Capital-Docker/docker-compose.yml]

Instead of downloading and installing Java or Tomcat in the Dockerfile, we decided to pull the images directly from the official Tomcat repository on the Docker Store. This lets us know exactly which version we are on without having to install either Java or Tomcat. Upgrading versions of Java or Tomcat becomes easy, and leaves the work to the Tomcat image instead of our end.

We knew we were going to have a number of services dependent on each other and linking through Compose, and that a massive bash script could cause problems. Enter Dockerize, a utility that simplifies running applications in Docker containers. Its primary role is to wait for other services to be available using TCP, HTTP(S) and Unix before starting the main process.

Some backstory: When using tools like Docker Compose, it’s common to depend on services in other linked containers. But oftentimes relying on links is not enough; while the container itself may have started, the service(s) within it may not be ready, resulting in shell script hacks to work around race conditions. Dockerize gives you the ability to wait for services on a specified protocol (file, TCP, TCP4, TCP6, HTTP, HTTPS and Unix) before starting your application. You can use the -timeout # argument (default: 10 seconds) to specify how long to wait for the services to become available. If the timeout is reached and the service is still not available, the process exits with status code 1.

[AD-Capital-Docker/ADCapital-Tomcat/startup.sh]
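The repo’s startup.sh isn’t reproduced here, but a minimal Compose sketch of the Dockerize pattern looks like this (service, image, and port names are hypothetical, not the repo’s actual values):

version: '2'
services:
  web:
    image: adcapital-tomcat:latest        # hypothetical image name
    depends_on:
      - rest
    # start Tomcat only once the 'rest' service accepts TCP connections
    command: dockerize -wait tcp://rest:8080 -timeout 30s catalina.sh run
  rest:
    image: adcapital-rest:latest          # hypothetical dependency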

We then separated the source code from the agent monitoring. (The project uses a Docker volume to store the agent binaries and log/config files.) Now that we had a single image pulled from Tomcat, we could place our source code in the single Dockerfile and replicate it anywhere. Using prebuilt war files, we could download source from a different time, and place it in the Tomcat webapps subdirectory.

[AD-Capital-Docker/ADCapital-Project/Dockerfile]

We now had a Dockerfile containing everything needed for our servers, and a Dockerfile for the source code, allowing you to run it with or without monitoring enabled. The next step was to split out the AppDynamics Application and Machine Agent.

We knew we wanted to instrument with our agents, but we didn’t want a configuration file with duplicate information for every container. So we created a docker-compose.env. Since our agents require minimal configuration, and tiers and nodes differ only by name, we knew we could share these env variables across the agents without using multiple configs. In our compose file, we could then specify the tier and node name for the individual services.

[AD-Capital-Docker/docker-compose.yml]
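Since the repo’s compose file isn’t reproduced here, a minimal sketch of the idea (service and variable names are hypothetical): shared controller settings come in via docker-compose.env, while each service overrides only its tier and node name.

version: '2'
services:
  portal:
    env_file: docker-compose.env   # shared controller/account variables
    environment:
      - TIER_NAME=Portal           # hypothetical per-service tier name
      - NODE_NAME=Portal_Node
  rest:
    env_file: docker-compose.env
    environment:
      - TIER_NAME=REST
      - NODE_NAME=REST_Node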

For the purpose of this blog, we downloaded the agent and passed in the filename and SHA-256 checksum via shell scripts in the ADCapital-Appdynamics/docker-compose.yml file. We were able to pass the application agent and the configuration script that runs AppDynamics to the shared volume, which allows the individual projects to use it on startup (see image below). Now that we had enabled application monitoring for our apps, we wanted to install the machine agent to enable analytics. We followed the same instrumentation process, downloading the agent and verifying the filename and checksums. The machine agent is a standalone process, so our configuration script was a little different, but it took advantage of the docker-compose.env variable names to set the right parameters for the machine agent (ADCapital-Monitor/start-appdynamics).

[AD-Capital-Docker/ADCapital-AppDynamics/startup.sh]

The payoff? We now have an image responsible for the server, one responsible for the load, and another responsible for the application. In addition, another image monitors the application, and a final image monitors the application’s logs and analytics. Updating an individual component will not require an entire rebuild of the application. We’re using Docker as it was intended: each container has one responsibility. Lastly, by using volumes to share data across services, we can easily check agent and application logs. This makes it much easier to gain visibility into the entire landscape of our software.

If you would like to see the source code used for this blog, it is located here with instructions on how to build and set up. In the next blog, we will show you how to migrate from host agents using Docker images from the Docker Store.

Scaling with Containers at AppSphere 2016

Containers have grown tremendously in popularity in recent years. Originally conceived as a way to replace legacy systems completely, container technology has instead become a way to extend monolithic systems with newer, faster technology. As an example of this growth, the 2016 RightScale State of the Cloud Report™ shows Docker adoption moving from thirteen percent in 2015 to twenty-seven percent. Another thirty-five percent of respondents say they have plans to use Docker in the near future.

What Are Containers?

Containers allow you to move software from one environment to another without worrying about different applications, SSL libraries, network topology, storage systems, or security policies — for example, moving from a machine in your data center to a virtual environment in the cloud. They are able to do this because everything you need to run the software travels as one unit. The application, binaries, libraries, and configuration files all live together inside a single container.

You can move a container to a wide variety of software environments with no problem because the program is self-contained. In contrast, virtualization also includes the operating system. Containers share the same operating system kernel, so they are lighter and more energy-efficient than a virtual machine. Hypervisors are an abstraction of the entire machine, while containers are an abstraction of only the OS kernel.

There are a variety of container technologies to support different use cases. The most popular container technology right now is Docker. It grew rapidly a few years ago with major adoption in enterprise computing, including three of the biggest financial institutions in the world — unusual for the slow-to-adopt world of banking. Docker allows software applications to run on a large number of machines at the same time, an important quality for huge sites like Facebook that must deliver data to millions of consumers simultaneously.

Container Technologies

Recent surveys performed by DevOps.com and ClusterHQ show Docker is the overwhelming favorite in container technology at this point. One of the most talked-about competitors to Docker that has emerged recently is Rocket, an open-source project from CoreOS, which ironically was one of Docker’s early proponents. Backed heavily by Google, Rocket’s founders developed the technology because they thought Docker had grown and moved too far away from its original purpose. While Docker has been embraced as almost an industry standard, competitors are making inroads. Rocket’s founders say one of its strengths is that it is not controlled by a single organization.

One of the pioneers of container technology, dating back to 2001, is Virtuozzo, a product from Parallels. It gets a lot of attention from OEMs, works well on cloud servers, and features near-instant provisioning. Other fast-growing container technologies include LXC and LVE.

Container Best Practices

One of the challenges of containers is monitoring their performance. AppDynamics is able to monitor containers using our innovative Microservices iQ. It provides automatic discovery of exit and entry service endpoints, tracks important performance indicators, and isolates the cause of performance issues.

At AppSphere 2016, you can learn more about containers and performance monitoring at 10 AM on Thursday, November 17, when AppDynamics’ CTO, Steve Sturtevant, will present his talk, “Best Practices for Managing IaaS, PaaS, and Container-Based Deployments.” Register today to ensure your spot at AppSphere 2016. We’re looking forward to seeing you there!

The Importance of Monitoring Containers [Infographic]

With the rise of Docker, Kubernetes, and other container technologies, the growth of microservices has skyrocketed among dev teams looking to innovate on a faster release cycle. This has enabled teams to finally realize their DevOps goals to ship and iterate quickly in a continuous delivery model. It’s no surprise that containers are growing in popularity: they’re extremely easy to spin up or down. But they come with an unforeseen issue.

Without the right foresight, DevOps and IT teams may lose visibility into these containers, resulting in operational blind spots and even more haystacks in which to find the presumptive performance-issue needle.

If your team is looking towards containers and microservices as an operational change in how you decide to ship your product, you can’t afford bugs or software issues affecting your performance, end-user experience, or ultimately your bottom line.

Ed Moyle, Director of Emerging Business & Technology at ISACA, said it best in his blog: “Consider what happens to these issues when containers enter into the mix. Not only are all the VM issues still there, but they’re now potentially compounded. Inventories that were already difficult to keep current because of VM sprawl might now have to accommodate containers, too. For example, any given VM could contain potentially dozens of individual containers. Issues arising from unexpected migration of VM images might be made significantly worse when the containers running on them can be relocated with a few keystrokes.”

Earlier this year, AppDynamics unveiled Microservices iQ to address these visibility issues daunting DevOps teams today.

Infographic – Container Monitoring 101 from AppDynamics

With Microservices iQ, DevOps teams can:

  • Automatically discover the entry and exit points of your microservice as service endpoints for focused microservices monitoring

  • Track the key performance indicators of your microservice without worrying about the entire distributed business transaction that uses it

  • Drill down and isolate the root cause of any performance issues affecting the microservice

Interested in learning more? Check out our free ebook, The Importance of Monitoring Containers.

AppDynamics Monitoring Excels for Microservices; New Pricing Model Introduced

It’s no news that microservices are one of the top trends, if not the top trend, in application architectures today. The idea: take large monolithic applications, which are brittle and difficult to change, and break them into smaller, manageable pieces that provide flexibility in deployment models, facilitating the agile release and development that today’s rapidly shifting digital businesses demand. Unfortunately, with this change, application and infrastructure management becomes more complex due to size and technology changes, most often adding significantly more virtual machines and/or containers to handle the growing footprint of application instances.

Fortunately, this is just the kind of environment the AppDynamics Application Intelligence Platform is built for, delivering deep visibility across even the most complex, distributed, heterogeneous environments. We trace and monitor every business transaction from end-to-end — no matter how far apart those ends are, or how circuitous the path between — including any and all API calls across any and all microservices tiers. Wherever there is an issue, the AppDynamics platform pinpoints it and steers the way to rapid resolution. This data can also be used to analyze usage patterns, scaling requirements, and even visibility into infrastructure usage.

This is just the beginning of the microservices trend. With the rise of the Internet of Things, all manner of devices and services will be driven by microservices. The applications themselves will be extended into the “Things,” causing even further exponential growth over the next five years. Gartner predicts over 25 billion devices connected by 2020, with the majority being in the utilities, manufacturing, and government sectors.

AppDynamics microservices pricing is based on the size of the Java Virtual Machine (JVM) instance; any JVM running with a maximum heap size of less than one gigabyte is considered a microservice.
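As a rough sketch of what that means in a deployment spec (hypothetical values; the -Xmx flag is what determines the heap ceiling):

env:
  - name: JAVA_OPTS
    value: "-Xms64m -Xmx512m"   # max heap of 512 MB is under 1 GB, so this JVM prices as a microservice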

We’re excited to help usher in this important technology, and to make it feasible and easy for enterprises to deploy AppDynamics Java microservices monitoring and analytics. For a more detailed perspective, see our post, Visualizing and tracking your microservices.

Complete visibility into Docker containers with AppDynamics

Today we announced the AppDynamics Docker monitoring solution that provides an application-centric view inside and across Docker containers. Performance of distributed applications and business transactions can be tagged, traced, and monitored even as they transit multiple containers.

Before I talk more about the AppDynamics Docker monitoring solution, let me quickly review the premise of Docker and point you to a recent blog, “Visualizing and tracking your microservices,” by my colleague Jonah Kowall, which highlights Docker’s synergy with another hot technology trend: microservices.

What is Docker?

Docker is an open platform for developers and sysadmins of distributed applications that enables them to build, ship, and run any app anywhere. Docker allows applications to run on any platform irrespective of what tools were used to build them, making it easy to distribute, test, and run software. I found this 5 Minute Docker video very helpful for a quick and digestible overview. If you want to learn more, you can go to Docker’s web page and start with this Docker introduction video.

Docker makes it very easy to make changes and package the software quickly for others to test without requiring a lot of resources. At AppDynamics, we embraced Docker completely in our development, testing, and demo environments. For example, as you can see in the attached screenshot from our demo environment, we are using Docker to provision various demo use cases with different application environments like JBoss, Tomcat, MongoDB, AngularJS, and so on.

[Screenshot: AppDynamics demo environment provisioned with Docker]

In addition, you can test drive AppDynamics by downloading, deploying, and testing with the packaged applications from the AppDynamics Docker repos.

Complete visibility into Docker environment with AppDynamics

AppDynamics provides visibility into applications and business transactions made up of multiple smaller, decoupled (micro)services deployed in a Docker environment using the Docker monitoring solution. The AppDynamics Docker Monitoring Extension monitors and reports on various metrics, such as total number of containers, running containers, images, CPU usage, memory usage, and network traffic. The extension gathers metrics from the Docker Remote API, using either a Unix socket or TCP, giving you a choice of data-collection protocol.
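Extension configuration is typically a small YAML file on the machine agent. The sketch below is purely hypothetical, illustrating only the Unix-socket-or-TCP choice; consult the extension page for the actual schema:

# Hypothetical config sketch; see the extension page for the real configuration keys
dockerRemoteApi:
  unixSocketPath: /var/run/docker.sock   # collect metrics over the Docker Unix socket...
  # ...or collect over TCP instead:
  # tcpBaseUrl: http://localhost:2375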

The Docker metrics can now be correlated with the metrics from the applications running in the containers. For example, in the screenshot below, you can see the overall performance (calls per minute, in red) of a web server deployed in a Docker container correlated with Docker performance metrics (network transmit/receive and CPU usage). As the number of calls per minute to the web server increases, you can see that the network traffic and CPU usage increase as well.

[Screenshot: Docker performance metrics correlated with web server calls per minute]

Customers can leverage all the core functionality of AppDynamics (e.g., dynamic baselining, health rules, policies, actions, etc.) for all the Docker metrics while correlating them with the metrics from the applications running in the Docker environment.

The Docker monitoring extension also creates an out-of-the-box custom dashboard with key Docker metrics, as shown in the screenshot below. This dashboard will jump-start your monitoring of the Docker environment.

[Screenshot: out-of-the-box Docker custom dashboard]

Download the AppDynamics Docker monitoring extension, set it up and configure it following the instructions on the extension page, and get end-to-end visibility into your Docker environment and the applications running within it.