Best Practices for Instrumenting Containers with AppDynamics Agents

In this blog I will show some best practices for instrumenting Docker containers, using docker-compose with a few popular AppDynamics application agent types. The goal here is to avoid rebuilding your application containers in the event of an agent upgrade, or having to hard-code AppDynamics configuration into your container images. In my role as a DevOps engineer working on AppDynamics’ production environments, I use these techniques to simplify our instrumented container deployments. I’ll cover the install of binary agents like the Java agent, as well as agents installed via a repository such as Node.js or Python.

Before getting into the best practices, let’s review the most common deployment pattern—which isn’t a best practice at all.

Common (but not best-practice) Pattern:  Install Agent During Container Image Build

The first approach we’ll cover is installing the agent via Dockerfile as part of the application container build. This has the advantage of following the conventional practice of easily copying in your source files and providing transparency of the build in your Dockerfile, making adoption simpler and more intuitive. AppDynamics does not recommend this approach, however, as it requires a fresh copy of your application image to be rebuilt every time an agent needs an upgrade. This is inefficient and unnecessary because the agent is not a central part of your application code. Additionally, hard-coding the agent install in this manner may prove more difficult when you automate your builds and deployments.

Java Example

In this Dockerfile example for installing the Java agent, we have the binary stored in AWS S3 and simply copy over the agent during build time of the application image.

Dockerfile snippet: Copy from S3
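A minimal sketch of that build step (bucket URL, agent version, and install path are placeholders):

FROM tomcat:9.0-jre8
# Fetch the agent from S3 and unpack it at image build time
RUN apt-get update && apt-get install -y curl unzip \
 && curl -fSL "https://s3.amazonaws.com/my-agent-bucket/AppServerAgent-4.4.1.zip" -o /tmp/AppServerAgent.zip \
 && unzip /tmp/AppServerAgent.zip -d /opt/appdynamics/AppServerAgent \
 && rm /tmp/AppServerAgent.zip
# The agent is now baked into the image; a JVM flag points Tomcat at it
ENV CATALINA_OPTS="-javaagent:/opt/appdynamics/AppServerAgent/javaagent.jar"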

Here is a similar step where we copy the agent locally.

Dockerfile snippet: Copy locally
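A sketch of the local-copy variant, assuming the agent zip sits in the build context next to the Dockerfile:

FROM tomcat:9.0-jre8
COPY AppServerAgent.zip /tmp/AppServerAgent.zip
RUN apt-get update && apt-get install -y unzip \
 && unzip /tmp/AppServerAgent.zip -d /opt/appdynamics/AppServerAgent \
 && rm /tmp/AppServerAgent.zip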

Node.js Example

In this example, we use npm to install a specific Node.js agent version during build time.

Dockerfile snippet
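Something along these lines, assuming index.js requires the appdynamics module (the version is a placeholder):

FROM node:8
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
# Pin a specific agent version at image build time
RUN npm install appdynamics@4.4.5
COPY . .
CMD ["node", "index.js"]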

Python Example

In this example, we use pip to install a specific Python agent version during build time.

Dockerfile snippet
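A comparable sketch for pip (the version is a placeholder, and agent configuration is supplied separately):

FROM python:2.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
# Pin a specific agent version at image build time
RUN pip install appdynamics==4.4.1
COPY . .
# pyagent wraps the normal start command so the agent loads with the app
CMD ["pyagent", "run", "--", "python", "app.py"]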

Best Practices Pattern: Install Agents at Runtime Using Environment Variables and Sidecar Container

The below examples cover two different patterns, depending on agent type. For Java and similarly packaged agents, we’ll use something called a “sidecar container” to install the agent at container runtime.  For repository-installed agents like Node.js and Python, we’ll use environment variables and a startup script that will install the agent at container runtime.

Java Example

For the sidecar container pattern, we build a container image with the agent binary that we want to install. We then volume-mount the directory that contains the agent, so our application container can copy the agent at container runtime and then install it. This can be simplified by unpackaging the agent in the sidecar container, volume-mounting the newly unpackaged agent directory, and then having the application container point to the volume-mounted directory and use it as its agent directory. We’ll cover both examples below, starting with how we create the sidecar container, or “agent-repo.”

In the Dockerfile example for the Java agent, we store the binary in AWS S3 (in an agent version-specific bucket) and simply copy the agent during build time. We then unzip the agent, which lets us either copy the agent to the application container and unzip it there, or simply point to the unzipped agent directory. Notice we use a build ARG, which allows for a more automated build using a build script.

Agent Repo Dockerfile: Copy from S3
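A sketch of the agent-repo image (the bucket layout and version format are placeholders):

FROM alpine:3.7
# The build ARG selects which version-specific S3 path to pull from
ARG AGENT_VERSION
RUN apk add --no-cache curl unzip
RUN mkdir -p /sharedFiles \
 && curl -fSL "https://s3.amazonaws.com/my-agent-bucket/${AGENT_VERSION}/AppServerAgent.zip" -o /sharedFiles/AppServerAgent.zip \
 && unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent
# Keep the container alive so its volume stays mountable
CMD ["tail", "-f", "/dev/null"]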

Here’s the same example as above, but one where we copy the agent locally without using a build ARG.

Agent Repo Dockerfile: Copy locally
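A local-copy sketch, assuming the agent zip sits next to the Dockerfile:

FROM alpine:3.7
RUN apk add --no-cache unzip
RUN mkdir -p /sharedFiles
ADD AppServerAgent.zip /sharedFiles/
RUN unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent
CMD ["tail", "-f", "/dev/null"]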

The build script utilizes a build ARG. If you’re using the S3 pattern above, this allows you to pass in the agent version you’d like.

build.sh
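One plausible version (the image tag and default version are placeholders):

#!/bin/bash
# Pass the desired agent version through to the Dockerfile's build ARG
AGENT_VERSION=${1:-4.4.1}
docker build --build-arg AGENT_VERSION=${AGENT_VERSION} -t agent-repo:${AGENT_VERSION} .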

Now that we have built our sidecar container image, let’s cover how to build the Java agent container image to utilize this agent deployment pattern.

In the Docker snippet below, we copy in two new scripts, extractAgent.sh and startup.sh. The extractAgent.sh script copies and extracts the agent from the volume-mounted directory, /sharedFiles/, to the application container. The startup.sh script is used as our ENTRYPOINT.  This script will call extractAgent.sh and start the application.

Java Dockerfile snippet
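A sketch of the application image (the war filename is a placeholder):

FROM tomcat:9.0-jre8
RUN apt-get update && apt-get install -y unzip
COPY extractAgent.sh /extractAgent.sh
COPY startup.sh /startup.sh
RUN chmod +x /extractAgent.sh /startup.sh
ADD myapp.war /usr/local/tomcat/webapps/
ENTRYPOINT ["/bin/sh", "/startup.sh"]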

extractAgent.sh
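In its simplest form, extractAgent.sh might look like this:

#!/bin/sh
# Copy the agent archive from the volume-mounted sidecar directory and unpack it
cp /sharedFiles/AppServerAgent.zip ${CATALINA_HOME}/
unzip -o ${CATALINA_HOME}/AppServerAgent.zip -d ${CATALINA_HOME}/AppServerAgent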

The startup.sh script (below) calls extractAgent.sh, which copies and unzips the agent into the $CATALINA_HOME directory. We then pass in that directory as part of our Java options in the application-startup command.

startup.sh snippet
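A sketch of the startup script; the -D system properties mirror the agent-dependent environment variables set in the docker-compose.yml:

#!/bin/sh
/extractAgent.sh
export JAVA_OPTS="$JAVA_OPTS -javaagent:${CATALINA_HOME}/AppServerAgent/javaagent.jar \
  -Dappdynamics.controller.hostName=${CONTROLLER_HOST} \
  -Dappdynamics.controller.port=${CONTROLLER_PORT} \
  -Dappdynamics.agent.accountName=${ACCOUNT_NAME} \
  -Dappdynamics.agent.accountAccessKey=${ACCOUNT_ACCESS_KEY} \
  -Dappdynamics.agent.applicationName=${APPLICATION_NAME} \
  -Dappdynamics.agent.tierName=${TIER_NAME}"
catalina.sh run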

In the docker-compose.yml, we simply add the agent-repo container with volume mount. Our Tomcat container references the agent-repo container and volume, but also uses agent-dependent environment variables so that we don’t have to edit any configuration files. This makes the deployment much more automated and portable/reusable.

docker-compose.yml
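A minimal compose file in this spirit, sharing the agent through a named volume (image names and values are placeholders):

version: '2'
services:
  agent-repo:
    image: agent-repo:4.4.1
    volumes:
      - sharedFiles:/sharedFiles
  tomcat-app:
    image: my-java-app:latest
    depends_on:
      - agent-repo
    ports:
      - "8080:8080"
    volumes:
      - sharedFiles:/sharedFiles
    environment:
      - CONTROLLER_HOST=controller.example.com
      - CONTROLLER_PORT=8090
      - ACCOUNT_NAME=customer1
      - ACCOUNT_ACCESS_KEY=changeme
      - APPLICATION_NAME=MyApp
      - TIER_NAME=WebTier
volumes:
  sharedFiles: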

In the example below, we show another way to do this. We skip the entire process of adding the extractAgent.sh and startup.sh scripts, electing instead to copy a customized catalina.sh script and using that as our CMD. This pattern still uses the agent-repo sidecar container, but points to the volume-mounted, unzipped agent directory as part of the $CATALINA_OPTS.

Java Dockerfile snippet
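A sketch of that variant:

FROM tomcat:9.0-jre8
# A customized catalina.sh appends the agent flags to CATALINA_OPTS
COPY catalina.sh /usr/local/tomcat/bin/catalina.sh
ADD myapp.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]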

catalina.sh snippet
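The customization itself can be as small as a few lines added near the top of catalina.sh:

# Point the JVM at the unzipped agent on the volume-mounted directory
CATALINA_OPTS="$CATALINA_OPTS -javaagent:/sharedFiles/AppServerAgent/javaagent.jar \
  -Dappdynamics.controller.hostName=${CONTROLLER_HOST} \
  -Dappdynamics.controller.port=${CONTROLLER_PORT} \
  -Dappdynamics.agent.applicationName=${APPLICATION_NAME} \
  -Dappdynamics.agent.tierName=${TIER_NAME}"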

OK, that covers the sidecar container agent deployment pattern. So what about agents that utilize a repository to install an agent? How do we automate that process so we don’t have to rebuild our application container image every time we want to upgrade our agents to a specific version? The answer is quite simple and similar to the examples above. We add a startup.sh script, which is used as our ENTRYPOINT, and then use environment variables set in the docker-compose.yml to install the specific version of our agent.

Node.js Example

Dockerfile snippet
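Note that no agent is installed at build time; startup.sh handles that when the container starts. A sketch:

FROM node:8
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY index.js startup.sh ./
RUN chmod +x startup.sh
ENTRYPOINT ["./startup.sh"]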

In our index.js that is copied in (not shown in the above Dockerfile snippet), we reference our agent-dependent environment variables, which are set in the docker-compose.yml.

index.js snippet
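The relevant lines might look like this:

// Agent settings come from the environment rather than hard-coded values
require("appdynamics").profile({
  controllerHostName: process.env.CONTROLLER_HOST,
  controllerPort: process.env.CONTROLLER_PORT,
  accountName: process.env.ACCOUNT_NAME,
  accountAccessKey: process.env.ACCOUNT_ACCESS_KEY,
  applicationName: process.env.APPLICATION_NAME,
  tierName: process.env.TIER_NAME
});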

In the startup.sh script, we use npm to install the agent. The version installed will depend on whether we specifically set the $AGENT_VERSION variable in the docker-compose.yml. If set, the version set in the variable will get installed. If not, the latest version will be installed.

startup.sh
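A sketch of the install-then-start logic:

#!/bin/sh
# Install the agent at container start; pin a version only if AGENT_VERSION is set
if [ -n "$AGENT_VERSION" ]; then
  npm install appdynamics@"$AGENT_VERSION"
else
  npm install appdynamics
fi
node index.js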

In the docker-compose.yml, we set the $AGENT_VERSION to the agent version we want npm to install. We also set our agent-dependent environment variables, allowing us to avoid hard-coding these values. This makes the deployment much more automated and portable/reusable.

docker-compose.yml
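Along these lines (values are placeholders):

version: '2'
services:
  node-app:
    image: my-node-app:latest
    ports:
      - "3000:3000"
    environment:
      - AGENT_VERSION=4.4.5
      - CONTROLLER_HOST=controller.example.com
      - CONTROLLER_PORT=8090
      - ACCOUNT_NAME=customer1
      - ACCOUNT_ACCESS_KEY=changeme
      - APPLICATION_NAME=MyNodeApp
      - TIER_NAME=WebTier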

Python Example

This example is very similar to the Node.js example, except that we are using pip to install our agent.

Dockerfile snippet
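A sketch:

FROM python:2.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py startup.sh ./
RUN chmod +x startup.sh
ENTRYPOINT ["./startup.sh"]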

In the startup.sh script, we use pip to install the agent. The version installed will depend on whether we specifically set the $AGENT_VERSION variable in the docker-compose.yml. If set, the version set in the variable will get installed. If not, the latest version will be installed.

startup.sh
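A sketch; the config keys follow the agent's documented appdynamics.cfg format, and the variable names are ours:

#!/bin/sh
if [ -n "$AGENT_VERSION" ]; then
  pip install appdynamics=="$AGENT_VERSION"
else
  pip install appdynamics
fi
# Write the agent config from environment variables, then start via pyagent
cat > /etc/appdynamics.cfg <<EOF
[agent]
app = ${APPLICATION_NAME}
tier = ${TIER_NAME}
node = ${NODE_NAME}

[controller]
host = ${CONTROLLER_HOST}
port = ${CONTROLLER_PORT}
account = ${ACCOUNT_NAME}
accesskey = ${ACCOUNT_ACCESS_KEY}
EOF
pyagent run -c /etc/appdynamics.cfg -- python app.py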

In the docker-compose.yml, we set the $AGENT_VERSION to the agent version we want pip to install. We also set our agent-dependent environment variables, allowing us to avoid hard-coding these values. This makes the deployment much more automated and portable/reusable.

docker-compose.yml
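And a matching compose sketch:

version: '2'
services:
  python-app:
    image: my-python-app:latest
    ports:
      - "5000:5000"
    environment:
      - AGENT_VERSION=4.4.1
      - CONTROLLER_HOST=controller.example.com
      - CONTROLLER_PORT=8090
      - ACCOUNT_NAME=customer1
      - ACCOUNT_ACCESS_KEY=changeme
      - APPLICATION_NAME=MyPythonApp
      - TIER_NAME=WebTier
      - NODE_NAME=python-node-1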


Pick the Best Pattern

There are many ways to instrument your Docker containers with AppDynamics agents.  We have covered a few patterns and shown what works well for my team when managing a large Docker environment.

In the Common Pattern (but not best-practice) example, I showed how you must rebuild your application container every time you want to upgrade the agent version—not an ideal approach.

But with the Best Practices Pattern, you decouple the agent specifics from the application container images, and direct that responsibility to the sidecar container and the docker-compose environment variables.

Automation, whenever possible, is always a worthy goal. Following the Best Practices Pattern will allow you to improve script deployments, leverage version control and configuration management, and plug them all into CI/CD pipelines.

For in-depth information on related techniques, read these AppDynamics blogs:

Deploying AppDynamics Agents to OpenShift Using Init Containers

The AppD Approach: Composing Docker Containers for Monitoring

The AppD Approach: Leveraging Docker Store Images with Built-In AppDynamics



Deploying AppDynamics Agents to OpenShift Using Init Containers

There are several ways to instrument an application on OpenShift with an AppDynamics application agent. The most straightforward way is to embed the agent into the main application image. (For more on this topic, read my blog Monitoring Kubernetes and OpenShift with AppDynamics.)

Let’s consider a Node.js app. All you need to do is add a require reference to the agent libraries and pass in the necessary information about the controller. The reference itself becomes a part of the app and will be embedded in the image. The list of variables the agent needs to communicate with the controller (e.g., controller host name, app/tier name, license key) can be embedded, though it is best practice to pass them into the app on initialization as configurable environment variables.

In the world of Kubernetes (K8s) and OpenShift, this task is accomplished with config maps and secrets. Config maps are reusable key-value stores that can be made accessible to one or more applications. Secrets are very similar to config maps, with an additional capability to obfuscate key values. When you create a secret, K8s automatically encodes the value of the key as a base64 string. Now the actual value is not visible, and you are protected from people looking over your shoulder. When the key is requested by the app, Kubernetes automatically decodes the value. Secrets can be used to store any sensitive data such as license keys, passwords, and so on. In our example below, we use a secret to store the license key.

Here is an example of AppD instrumentation where the agent is embedded, and the configurable values are passed as environment variables by means of a configMap, a secret and the pod spec.

var appDobj = {
   controllerHostName: process.env['CONTROLLER_HOST'],
   controllerPort: process.env['CONTROLLER_PORT'],
   controllerSslEnabled: true,
   accountName: process.env['ACCOUNT_NAME'],
   accountAccessKey: process.env['ACCOUNT_ACCESS_KEY'],
   applicationName: process.env['APPLICATION_NAME'],
   tierName: process.env['TIER_NAME'],
   nodeName: 'process'
}
require("appdynamics").profile(appDobj);

Pod Spec
- env:
   - name: TIER_NAME
     value: MyAppTier
   - name: ACCOUNT_ACCESS_KEY
     valueFrom:
       secretKeyRef:
         key: appd-key
         name: appd-secret
  envFrom:
    - configMapRef:
        name: controller-config

A ConfigMap with AppD variables.
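For illustration, a configMap backing the controller-config references above might look like this (values are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-config
data:
  CONTROLLER_HOST: controller.example.com
  CONTROLLER_PORT: "8090"
  CONTROLLER_SSL_ENABLED: "true"
  ACCOUNT_NAME: customer1
  APPLICATION_NAME: MyApp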

AppD license key stored as secret.
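And the key itself can be stored like this (the value must be base64-encoded; here it encodes the placeholder string "my-license-key"):

apiVersion: v1
kind: Secret
metadata:
  name: appd-secret
type: Opaque
data:
  appd-key: bXktbGljZW5zZS1rZXk=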

The Init Container Route: Best Practice

The straightforward way is not always the best. Application developers may want to avoid embedding a “foreign object” into the app images for a number of good reasons—for example, image size, granularity of testing, or encapsulation. Being developers ourselves, we respect that and offer an alternative, a less intrusive way of instrumentation. The Kubernetes way.

An init container is a design feature in Kubernetes that allows decoupling of app logic from any type of initialization routine, such as monitoring in our case. While the main app container lives for the entire duration of the pod, the lifespan of the init container is much shorter. The init container does the required prep work before orchestration of the main container begins. Once the initialization is complete, the init container exits and the main container is started. This way the init container does not run parallel to the main container as, for example, a sidecar container would. However, like a sidecar container, the init container, while still active, has access to the ephemeral storage of the pod.

We use this ability to share storage between the init container and the main container to inject the AppDynamics agent into the app. Our init container image, in its simplest form, can be described with this Dockerfile:

FROM openjdk:8-jdk-alpine
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN mkdir -p /sharedFiles/AppServerAgent
ADD AppServerAgent.zip /sharedFiles/
RUN unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent/
CMD ["tail", "-f", "/dev/null"]

The above example assumes you have already downloaded the archive with AppDynamics app agent binaries locally. When the image is built, the binaries are unzipped into a new directory. To the pod spec, we then add a directive that copies the directory with the agent binaries to a shared volume on the pod:

spec:
     initContainers:
     - name: agent-repo
       image: agent-repo:x.x.x
       imagePullPolicy: IfNotPresent
       command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
       volumeMounts:
       - mountPath: /mountPath
         name: shared-files
     volumes:
       - name: shared-files
         emptyDir: {}
     serviceAccountName: my-account

After the init container exits, the AppDynamics agent binaries are waiting on the pod’s shared volume to be picked up by the application.

Let’s assume we are deploying a Java app, one normally initialized via a script that calls the java command with Java options. The script, startup.sh, may look like this:

# startup.sh
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.tierName=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.agent.reuse.nodeName.prefix=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -javaagent:/sharedFiles/AppServerAgent/javaagent.jar"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.controller.hostName=$CONTROLLER_HOST -Dappdynamics.controller.port=$CONTROLLER_PORT -Dappdynamics.controller.ssl.enabled=$CONTROLLER_SSL_ENABLED"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.accountName=$ACCOUNT_NAME -Dappdynamics.agent.accountAccessKey=$ACCOUNT_ACCESS_KEY -Dappdynamics.agent.applicationName=$APPLICATION_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.socket.collection.bci.enable=true"
JAVA_OPTS="$JAVA_OPTS -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"

java $JAVA_OPTS -jar myapp.jar

It is embedded into the image and invoked via Docker’s ENTRYPOINT directive when the container starts.

FROM openjdk:8-jdk-alpine
COPY startup.sh startup.sh
RUN chmod +x startup.sh
ADD myapp.jar /usr/src/myapp.jar
EXPOSE 8080
ENTRYPOINT ["/bin/sh", "startup.sh"]

To make the consumption of startup.sh more flexible and Kubernetes-friendly, we can trim it down to this:

#a more flexible startup.sh
java $JAVA_OPTS -jar myapp.jar

And declare all the necessary Java options in the spec as a single environmental variable.

containers:
  - name: my-app
    image: my-app-image:x.x.x
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
    envFrom:
      - configMapRef:
          name: controller-config
    env:
      - name: ACCOUNT_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            key: appd-key
            name: appd-secret
      - name: JAVA_OPTS
        value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
          -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
          -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
          -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
          -Xms64m -Xmx512m -XX:MaxPermSize=256m
          -Djava.net.preferIPv4Stack=true
          …"
    ports:
      - containerPort: 8080
    volumeMounts:
      - mountPath: /sharedFiles
        name: shared-files

The dynamic values for the Java options are populated from the ConfigMap. First, we reference the entire configMap, where all shared values are defined:

envFrom:
  - configMapRef:
      name: controller-config

We also reference our secret as a separate environmental variable. Then, using the $() notation, we can reference the individual variables in order to concatenate the value of the JAVA_OPTS variable.

Thanks to these Kubernetes features (init containers, configMaps, secrets), we can add AppDynamics monitoring into an existing app in a noninvasive way, without the need to rebuild the image.

This approach has multiple benefits. The app image remains unchanged in terms of size and encapsulation. From a Kubernetes perspective, no extra processing is added, as the init container exits before the main container starts. There is added flexibility in what can be passed into the application initialization routine without the need to modify the image.

Note that OpenShift does not allow running Docker containers as user root by default. If you must (for whatever good reason), add the service account you use for deployments to the anyuid SCC. Assuming your service account is my-account, as in the provided examples, run this command:

oc adm policy add-scc-to-user anyuid -z my-account

Here’s an example of a complete app spec with AppD instrumentation:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: my-app
    spec:
      initContainers:
      - name: agent-repo
        image: agent-repo:x.x.x
        imagePullPolicy: IfNotPresent
        command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
        volumeMounts:
        - mountPath: /mountPath
          name: shared-files
      volumes:
        - name: shared-files
          emptyDir: {}
      serviceAccountName: my-account
      containers:
        - name: my-app
          image: my-service
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: controller-config
          env:
            - name: TIER_NAME
              value: WebTier
            - name: ACCOUNT_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: appd-key
                  name: appd-key-secret
            - name: JAVA_OPTS
              value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
                -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
                -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
                -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
                -Xms64m -Xmx512m -XX:MaxPermSize=256m
                -Djava.net.preferIPv4Stack=true
                …"
          ports:
          - containerPort: 8080
          volumeMounts:
            - mountPath: /sharedFiles
              name: shared-files
      restartPolicy: Always

Learn more about how AppDynamics can help monitor your applications on Kubernetes and OpenShift.

Migrating from Docker Compose to Kubernetes

The AppDynamics Demo Platform never sleeps. It is a cloud-based system that hosts a number of applications designed to help our global sales team demonstrate the many value propositions of AppDynamics.

Last fall, we added several new, larger applications to our demo platform. With these additions, our team started to see some performance challenges with our standard Docker Compose application deployment model on a single host. Specifically, we wanted to support multiple host machines as opposed to being limited to a single host machine like Docker Compose. We had been talking about migrating to Kubernetes for several months before this and so we knew it was time to take the leap.

Before this I had extensive experience with dockerized applications and even with some Kubernetes-managed applications. However, I had never taken part in the actual migration of an application from Docker Compose to Kubernetes.

For our first attempt at migrating to Kubernetes, we chose an application that was relatively small, but which contained a variety of different elements—Java, NodeJS, GoLang, MySQL and MongoDB. The application used Docker Compose for container deployment and “orchestration.” I use the term orchestration loosely, because Docker Compose is pretty light when compared to Kubernetes.

Docker Compose

For those who have never used Docker Compose, it’s a framework that allows developers to define container-based applications in a single YAML file. This definition includes the Docker images used, exposed ports, dependencies, networking, etc. Looking at the snippet below, each block of 5 to 20 lines represents a separate service. Docker Compose is a very useful tool and makes application deployment fairly simple and easy.

Figure 1.1 – docker-compose.yaml Snippet

Preparing for the Migration

The first hurdle to converting the project was learning how Kubernetes is different from Docker Compose. One of the most dramatic ways it differs is in container-to-container communication.

In a Docker Compose environment, the containers all run on a single host machine. Docker Compose creates a local network that the containers are all part of. Take this snippet, for example:
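A minimal block of that kind, as it might appear under the services: key (the image name is a placeholder):

quoteServices:
  image: quote-services:latest
  hostname: quote-services
  expose:
    - "8080"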

This block will create a container called quoteServices with a hostname of quote-services and port 8080. With this definition, any container within the local Docker Compose network can access it using http://quote-services:8080. Anything outside of the local network would have to know the IP address of the container.

By comparison, Kubernetes usually runs on multiple servers called nodes, so it can’t simply create a local network for all the containers. Before we started, I was very concerned that this might lead to many code changes, but those worries would prove to be unfounded.

Creating Kubernetes YAML Files

The best way to understand the conversion from Docker Compose to Kubernetes is to see a real example of what the conversion looks like. Let’s take the above snippet of quoteServices and convert it to a form that Kubernetes can understand.

The first thing to understand is that the above Docker Compose block will get converted into two separate sections, a Deployment and a Service.

As its name implies, the deployment tells Kubernetes most of what it needs to know about how to deploy the containers. This information includes things like what to name the containers, where to pull the images from, how many containers to create, etc. The deployment for quoteServices is shown here:
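A representative sketch (image and label names are placeholders):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: quote-services
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: quoteServices
    spec:
      containers:
      - name: quote-services
        image: quote-services:latest
        ports:
        - containerPort: 8080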

As we mentioned earlier, networking is done differently in Kubernetes than in Docker Compose. The Service is what enables communication between containers. Here is the service definition for quoteServices:
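A sketch matching the deployment above:

apiVersion: v1
kind: Service
metadata:
  name: quote-services
spec:
  selector:
    name: quoteServices
  ports:
  - port: 8080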

This service definition tells Kubernetes to take the containers that have a name = quoteServices, as defined under selector, and to make them reachable using quote-services as hostname and port 8080. So again, this service can be reached at http://quote-services:8080 from within the Kubernetes application. The flexibility to define services this way allows us to keep our URLs intact within our application, so no changes are needed due to networking concerns.

By the end, we had taken a single Docker Compose file with about 24 blocks and converted it into about 20 different files, most of which contained a deployment and a service. This conversion was a big part of the migration effort. Initially, to “save” time, we used a tool called Kompose to generate deployment and services files automatically. However, we ended up rewriting all of the files anyway once we knew what we were doing. Using Kompose is sort of like using Word to create webpages. Sure, it works, but you’re probably going to want to re-do most of it once you know what you’re doing because it adds a lot of extra tags that you don’t really want.

Instrumenting AppDynamics

This was the easy part. Most of our applications are dockerized, and we have always monitored these and our underlying Docker infrastructure with AppDynamics. Because our Docker images already had application agents baked in, there was nothing we had to change. If we had wanted, we could have left them the way they were, and they would have worked just fine. However, we decided to take advantage of something that is fairly common in the Kubernetes world: sidecar injection.

We used the sidecar model to “inject” the AppDynamics agents into the containers. The advantage of this is that we can now update our agents without having to rebuild our application images and redeploy them. It is also more fitting with best practices. To update the agent, all we have to do is update our sidecar image, then change the tag used by the application container. Just like that, our application is running with a new agent!

Server Visibility Agent

Incorporating the Server Visibility (SVM) agent was also fairly simple. One difference to note is that Docker Compose runs on a single host, whereas Kubernetes typically uses multiple nodes, which can be added or removed dynamically.

In our Docker Compose model, our SVM agent was deployed to a single container, which monitored both the host machine and the individual containers. With Kubernetes, we would have to run one such container on each node in the cluster. The best way to do this is with a structure called a DaemonSet.

You can see from the snippet below that a DaemonSet looks a lot like a Deployment. In fact, the two are virtually identical. The main difference is how they act. A Deployment typically doesn’t say anything about where in the cluster to run the containers defined within it, but it does state how many containers to create. A DaemonSet, on the other hand, will run a container on each node in the cluster. This is important, because the number of nodes in a cluster can increase or decrease at any time.

Figure: DaemonSet definition
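A representative sketch (the image and configMap names are placeholders):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: appd-machine-agent
spec:
  template:
    metadata:
      labels:
        name: appd-machine-agent
    spec:
      containers:
      - name: machine-agent
        image: machine-agent:x.x.x
        envFrom:
        - configMapRef:
            name: controller-config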

What Works Great

From development and operations perspectives, migrating to Kubernetes involves some extra overhead, but there are definite advantages. I’m not going to list all the advantages here, but I will tell you about my two favorites.

First of all, I love the Kubernetes Dashboard. It shows information on running containers, deployments, services, etc. It also allows you to update/add/delete any of your definitions from the UI. So when I make a change and build a new image, all I have to do is update the image tag in the deployment definition. Kubernetes will then delete the old containers and create new ones using the updated tag. It also gives easy access to log files or a shell to any of the containers.

Figure: Kubernetes Dashboard

Another thing that worked well for us is that we no longer need to keep and maintain the host machines that were running our Docker Compose applications. Part of the idea behind containerizing applications is to treat servers more like cattle than pets. While this is true to an extent, the Docker Compose host machines have become the new pets. We have seen issues with the host machines starting to have problems, needing maintenance, running out of disk space, etc. With Kubernetes, there are no more host machines, and the nodes in the cluster can be spun up and down anytime.

Conclusion

Before starting our Kubernetes journey, I was a little apprehensive about intra-application networking, deployment procedures, and adding extra layers to all of our processes. It is true that we have added a lot of extra configuration, going from a 300-line docker-compose.yaml file to about 1,000 lines spread over 20 files. This is mostly a one-time cost, though. We also had to rewrite some code, but that needed to be rewritten anyway.

In return, we gained all the advantages of a real orchestration tool: scalability, increased visibility of containers, easier server management, and many others. When it comes time to migrate our next application, which won’t be too far away, the process will be much easier and quicker.

Other Resources

The Illustrated Children’s Guide to Kubernetes

Getting Started with Docker

Kubernetes at GitHub

Migrating a Spring Boot service


The AppD Approach: Monitoring a Docker-on-Windows App

Here at AppDynamics, we’ve developed strong support for .NET, Windows, and Docker users. But something we haven’t spent much time documenting is how to instrument a Docker-on-Windows app. In this blog, I’ll show you how straightforward it is to get one up and running using our recently announced micro agent. Let’s get started.

Sample Reference Application

Provided with this guide is a simple ASP.NET MVC template app running on the full .NET Framework. The sample application link is provided below:

source.zip

If you have your own source code, feel free to use it.

Guide System Information

This guide was written and built on the following platform:

  • Windows Server 2016 Build 14393.rs1_release.180329-1711 (running on VirtualBox)

  • AppDynamics .NET Micro Agent Distro 4.4.3

Prerequisite Steps

Before instrumenting our sample application, we first need to download and extract the .NET micro agent. This step assumes you are not using an IDE such as Visual Studio, and are working manually on your local machine.

Step 1: Get NuGet Package Explorer

If you already have a way to view and/or download NuGet packages, skip this step. There are many ways to extract and view a NuGet package, but one method is with a tool called NuGet Package Explorer, which can be downloaded here.

Step 2: Download and Extract the NuGet Package

We’ll need to download the appropriate NuGet package to instrument our .NET application.

  1. Go to https://www.nuget.org/

  2. Search for “AppDynamics”

  3. The package we need is called “AppDynamics.Agent.Distrib.Micro.Windows.”

  4. Click “Manual Download” or use the various standard NuGet packages.

  5. Now open the package with NuGet Package Explorer.

  6. Choose “Open a Local Package.”

  7. Find the location of your downloaded NuGet package and open it. You should see the screen below:

  8. Choose “File” and “Export” to export the NuGet package to a directory on your local machine.

  9. Navigate to the directory where you exported the NuGet package, and confirm that you see this:

Step 3: Create Directory and Configure Agent

Now that we’ve extracted our NuGet package, we will create a directory structure to deploy our sample application.

  1. Create a local directory somewhere on your machine. For example, I created one on my Desktop:
    C:\Users\Administrator\Docker Sample\

  2. Navigate to the directory in Step 1, create a subfolder called “source” and add the sample application code provided above (or your own source code) to this directory. If you used the sample source provided, you’ll see this:

  3. Go back to the root directory and create a directory called “agent”.

  4. Add the extracted AppDynamics micro agent components from Step 1 to this directory.

  5. Edit “AppDynamicsConfig.json” and add in your controller and application information.

{
  "controller": {
    "host": "",
    "port": ,
    "account": "",
    "password": "",
    "ssl": false,
    "enable_tls12": false
  },
  "application": {
    "name": "Sample Docker Micro Agent",
    "tier": "SampleMVCApp"
  }
}
  6. Navigate to the root of the folder, create a file called “dockerFile” and add the following text:

Sample Docker Config

FROM microsoft/iis
SHELL ["powershell"]

RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
   Install-WindowsFeature Web-Asp-Net45

ENV COR_ENABLE_PROFILING="1"
ENV COR_PROFILER="{39AEABC1-56A5-405F-B8E7-C3668490DB4A}"
ENV COR_PROFILER_PATH="C:\appdynamics\AppDynamics.Profiler_x64.dll"

RUN mkdir C:\webapp
RUN mkdir C:\appdynamics

RUN powershell -NoProfile -Command \
  Import-module IISAdministration; \    
  New-IISSite -Name "WebSite" -PhysicalPath C:\webapp -BindingInformation "*:8000:" 

EXPOSE 8000

ADD agent /appdynamics
ADD source /webapp

RUN powershell -NoProfile -Command Restart-Service wmiApSrv
RUN powershell -NoProfile -Command Restart-Service COMSysApp

Here’s what your root folder will now look like:

Building the Docker Container

Now let’s build the Docker container.

  1. Open Powershell Terminal and navigate to the location of your Docker sample app. In this example, I will call my image “appdy_dotnet” but feel free to use a different name if you desire.

  2. Run the following command to build the Docker image:

docker build --no-cache -t appdy_dotnet .

  3. Now run the container:

docker run --name appdy_dotnet -d appdy_dotnet ping -t localhost

  4. Log into the container via powershell/cmd:

docker exec -it appdy_dotnet cmd

  5. Get the container IP by running the “ipconfig” command:
C:\ProgramData\AppDynamics\DotNetAgent\Logs>ipconfig
Windows IP Configuration


Ethernet adapter vEthernet (Container NIC 69506b92):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::7049:8ad9:94ad:d255%17
   IPv4 Address. . . . . . . . . . . : 172.30.247.210
   Subnet Mask . . . . . . . . . . . : 255.255.240.0
  6. Copy the IPv4 address, add port 8000, and request the URL from a browser. You should get the following site back (below). This is just a simple ASP.NET MVC template app that is provided with Visual Studio. In our example, the address would be:

http://<ip4-address>:8000

Here’s what the application would look like:

  7. Generate some load in the app by clicking the Home, About, and Contact tabs. Each will be registered as a separate business transaction.

Killing the Container (Optional)

In the event you get some errors and want to rebuild the container, here are some helpful commands that can be used for stopping and removing the container, if needed.

  1. Stop the container:

docker stop appdy_dotnet

  2. Remove the container:

docker rm appdy_dotnet

  3. Remove image:

docker rmi appdy_dotnet

Verify Successful Configuration via Controller

Log in to your controller and verify that you are seeing load. If you used the sample app, you’ll see the following info:

Application Flow Map

Business Transactions

Tier Information


As you can see, it’s fairly easy to instrument a Docker-on-Windows app using AppDynamics’ recently announced micro agent. To learn more about AppD’s powerful approach to monitoring .NET Core applications, read this blog from my colleague Meera Viswanathan.

The AppD Approach: Leveraging Docker Store Images with Built-In AppDynamics

In my previous blog we explored some of the best and worst practices of Docker, taking a hands-on approach to refactoring an application, always with containers and monitoring in mind. In that project, we chose to use physical agents from the AppDynamics download site as our monitoring method. But this time we are going to take things one step further: using images from the Docker Store to improve the same application.

Modern applications are very complex, of course, and we will show the three most common ways to use AppDynamics Docker Store Images to monitor your app, all while adhering to Docker best practices. We will continue to use this repo and move between the “master” and “docker-store-images” branches. If you haven’t read my previous post, I recommend doing so first, as we will build on the source code used there.

First Things First: The Image

Over at the AppDynamics page on the Docker Store (login required), we have three types of images for Java applications, each with our agents on them. In this project, we will work solely with the Machine Agent and Java images but, in principle, the scenarios and implementations are language-agnostic. The images are based on OpenJDK, Tomcat, and Jetty. Since our application uses Tomcat, we will use that image. (store/appdynamics/java:4.3.7.1_tomcat9-jre8).

You can see how every image is versioned as store/appdynamics/<language>:<agent-version>_<server-runtime-environment>.

By inspecting the image <docker inspect store/appdynamics/java:4.3.7.1_tomcat9-jre8>, we are able to verify important environment variables, including the fact that we’re running Java 8. We’re also able to identify the JAVA_HOME variable, which will play an important role (more on this below). Furthermore, we can verify that Tomcat is installed with the correct versions and paths. Lastly, we notice the command to start the agent is simply the catalina.sh run command. On startup, the image runs Tomcat and the agent. (This is important to note as we dive deeper.)

Lastly, if you plan to use a third-party image in your production application, the image must be trusted. This means it must not modify any other containers at runtime. Having the OS level agent force itself into a containerized app—and intrusively modify code at runtime—defeats one of the main advantages of containerization: the ability to run the container anywhere without worrying about how your code executes. Always keep this in mind when evaluating container monitoring software. (AppDynamics’ images pass this test, by the way.)

Here are the three most practical migrations you’re likely to face:

Scenario 1: A Perfect World

This is the best-case scenario, but also the least practical. In a perfect world, we’d be able to pull the image as a top layer, pass in only environment variables, and see our application discovered within minutes. However, in our situation, we can’t do this because we have a custom startup script that we want to run when our container starts. In this example, we’ve chosen to use Dockerize (https://github.com/jwilder/dockerize) to simplify the process of converting our application to use Docker, but of course there are many situations where you might need some custom start logic in your containers. If you’re not using Dockerize in your script, simply pull the image and pass in the environment variables that name the individual components. Since the agents run on startup, this method will be seamless.

Scenario 2: Install Tomcat

Ideally, we’d like to make as few changes as possible. The problem here is that we have a unique startup script that needs to run when each project is started. In this scenario, your workaround is to use the agent image that doesn’t have Tomcat—in other words, use store/appdynamics/java:4.3.7.1 and install Tomcat on the image. With this approach, you remove the overlapping start commands and top-level agent. The downside is reinstalling Tomcat on every image rebuild.

Scenario 3: Refactor Start Script

Here’s the most common scenario when migrating from a physical agent—and one we found ourselves in. A specific run script brings up all of your applications. Refactoring your apps to pull from the image and start your application would be too time consuming, and would ask too much of the customer. The solution: Combine the two start scripts.

In our scenario, we had a directory responsible for the server, and another responsible for downloading and installing the agents. Since we were using Tomcat, we decided to leverage the image with Tomcat and our monitoring software, which was already installed <store/appdynamics/java:4.3.7.1_tomcat9-jre8>. (We went with the official Tomcat image because it’s the one used by the AppDynamics image.)

In our startup script, AD-Capital-Docker/ADCapital-Tomcat/startup.sh, we used Dockerize to spin up all the services. You’ll notice that we added a couple of environment variables, ${APPD_JAVAAGENT} and ${APPD_PROPERTIES}, to each start command. In our existing version, these changes enable the script to see if AppD properties are set and, if so, to start the application agent.

The next step was to refactor the startup script to use our new image. (To get the agent start command, simply pull the image, run the container, and run ps -ef at the command line.)

Since Java was installed to a different location, we had to put its path in our start command, replacing “java” with “/docker-java-home/jre/bin/java”. This approach allowed us to ensure that our application was using the Java provided from the image.

Next, we needed to make sure we were starting the services using Tomcat, and with the start command from the AppDynamics agent image. By using the command from above, we were able to replace our Catalina startup:

-cp ${CATALINA_HOME}/bin/bootstrap.jar:${CATALINA_HOME}/bin/tomcat-juli.jar org.apache.catalina.startup.Bootstrap

…with the agent startup:

-Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djdk.tls.ephemeralDHKeySize=2048
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources
-javaagent:/opt/appdynamics/javaagent.jar -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
-Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat
-Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start

If you look closely, though, not all of the services were using Tomcat on startup. The last two services simply needed to start the agent. By reusing the same environment variable (APPD_JAVA_AGENT), we were able to repoint it to the path of the agent jar. And with that, we had our new startup script.

[startup.sh (BEFORE)]

[startup.sh (AFTER)]
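As a rough illustration of the change for one service (paths and names are illustrative, not the actual repo contents):

# BEFORE: plain java, with optional agent flags injected via env vars
java ${APPD_JAVAAGENT} ${APPD_PROPERTIES} -jar myservice.jar

# AFTER: Java from the image, started with the image's agent command
/docker-java-home/jre/bin/java -javaagent:/opt/appdynamics/javaagent.jar \
  -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar \
  -Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat \
  org.apache.catalina.startup.Bootstrap start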

Not only did this approach allow us to get rid of our AppDynamics directory, it also enabled a seamless transition to monitoring via Docker images.

The AppD Approach: Composing Docker Containers for Monitoring

Since its introduction four years ago, Docker has vastly changed how modern applications and services are built. But while the benefits of microservices are well documented, the bad habits aren’t.

Case in point: As people began porting more of their monolithic applications to containers, Dockerfiles ended up becoming bloated, defeating the original purpose of containers. Any package or service you thought you needed was installed on the image, so even minor changes to source or server configuration forced you to rebuild the image. People would package multiple processes into a single Dockerfile. And obviously, as the images got bigger, things became much less efficient because you would spend all of your time waiting on a rebuild to check a simple change in source code.

The quick fix was to layer your applications. Maybe you had a base image, a language-specific image, a server image, and then your source code. While your images became more contained, any change to your bottom-level images would require an entire rebuild of the image set. Although your Dockerfiles became less bloated, you still suffered from the same upgrade issues. With the industry becoming more and more agile, this practice didn’t feel aligned.

The purpose of this blog is to show how we migrated an application to Docker—highlighting the Docker best practices we implemented—and how we achieved our end goal of monitoring the app in AppDynamics. (Source code located here)

Getting Started

With these best (and worst) practices in mind, we began by taking a multi-service Java application and putting it into Docker Compose. We wanted to build out the containers with the Principle of Least Privilege: each system component or process should have the least authority needed to complete its tasks. The containers needed to be ephemeral too, always shutting down when a SIGTERM is received. Since there were going to be environment variables reused across multiple services, we created a docker-compose.env file (image below) that could be leveraged across every service.

[AD-Capital-Docker/docker-compose.env]
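The shape of that file is roughly this (variable names and values are illustrative):

# Shared agent settings consumed by every service
APPD_CONTROLLER_HOST=controller.example.com
APPD_CONTROLLER_PORT=8090
APPD_ACCOUNT_NAME=customer1
APPD_ACCESS_KEY=changeme
APPD_APP_NAME=ADCapital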

Lastly, we knew that for our two types of log data—Application and Agent—we would need to create a shared volume to house it.

[AD-Capital-Docker/docker-compose.yml]

Instead of downloading and installing Java or Tomcat in the Dockerfile, we decided to pull the images directly from the official Tomcat in the Docker Store. This would allow us to know which version we were on without having to install either Java or Tomcat. Upgrading versions of Java or Tomcat would be easy, and would leave the work to Tomcat instead of on our end.

We knew we were going to have a number of services dependent on each other and linking through Compose, and that a massive bash script could cause problems. Enter Dockerize, a utility that simplifies running applications in Docker containers. Its primary role is to wait for other services to be available using TCP, HTTP(S) and Unix before starting the main process.

Some backstory: When using tools like Docker Compose, it’s common to depend on services in other linked containers. But oftentimes relying on links is not enough; while the container itself may have started, the service(s) within it may not be ready, resulting in shell script hacks to work around race conditions. Dockerize gives you the ability to wait for services on a specified protocol (file, TCP, TCP4, TCP6, HTTP, HTTPS and Unix) before starting your application. You can use the -timeout # argument (default: 10 seconds) to specify how long to wait for the services to become available. If the timeout is reached and the service is still not available, the process exits with status code 1.

[AD-Capital-Docker/ADCapital-Tomcat/startup.sh]
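A single Dockerize invocation in that script might look like this (service names are placeholders, and the dockerize binary is assumed to be on the image):

# Wait for the database and a dependent service before starting the app
dockerize -wait tcp://db:3306 -wait http://quote-services:8080 -timeout 60s java -jar myservice.jar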

We then separated the source code from the agent monitoring. (The project uses a Docker volume to store the agent binaries and log/config files.) Now that we had a single image pulled from Tomcat, we could place our source code in the single Dockerfile and replicate it anywhere. Using prebuilt war files, we could download source from a different time, and place it in the Tomcat webapps subdirectory.

[AD-Capital-Docker/ADCapital-Project/Dockerfile]
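In spirit, the project Dockerfile reduces to pulling a prebuilt war into Tomcat’s webapps directory (the URL is a placeholder):

FROM tomcat:9.0
ADD https://example.com/builds/ADCapital.war /usr/local/tomcat/webapps/ADCapital.war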

We now had a Dockerfile containing everything needed for our servers, and a Dockerfile for the source code, allowing you to run it with or without monitoring enabled. The next step was to split out the AppDynamics Application and Machine Agent.

We knew we wanted to instrument with our agents, but we didn’t want a configuration file with duplicate information for every container. So we created a docker-compose.env. Since our agents require minimal configuration—and the only difference between “tiers” and “nodes” are their names—we knew we could pass these env variables across the agents without using multiple configs. In our compose file, we could then specify the tier and node name for the individual services.

[AD-Capital-Docker/docker-compose.yml]

For the purpose of this blog, we downloaded the agent and passed in the filename and SHA-256 checksum via shell scripts in the ADCapital-Appdynamics/docker-compose.yml file. We placed the application agent and the configuration script that runs AppDynamics on the shared volume, allowing the individual projects to use it on startup (see image below). Now that we had enabled application monitoring for our apps, we wanted to install the machine agent to enable analytics. We followed the same instrumentation process, downloading the agent and verifying the filename and checksums. The machine agent is a standalone process, so our configuration script was a little different, but it took advantage of the docker-compose.env variable names to set the right parameters for the machine agent (ADCapital-Monitor/start-appdynamics).

[AD-Capital-Docker/ADCapital-AppDynamics/startup.sh]
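The download-and-verify step might be sketched like this (AGENT_URL and AGENT_SHA256 are hypothetical variables standing in for the values we passed):

#!/bin/bash
# Download the agent, verify its checksum, and stage it on the shared volume
curl -fSL "${AGENT_URL}" -o /tmp/javaagent.zip
echo "${AGENT_SHA256}  /tmp/javaagent.zip" | sha256sum -c -
unzip /tmp/javaagent.zip -d /sharedFiles/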

The payoff? We now have an image responsible for the server, one responsible for the load, and another responsible for the application. In addition, another image monitors the application, and a final image monitors the application’s logs and analytics. Updating an individual component will not require an entire rebuild of the application. We’re using Docker as it was intended: each container has one responsibility. Lastly, by using volumes to share data across services, we can easily check agent and application Logs. This makes it much easier to gain visibility into the entire landscape of our software.

If you would like to see the source code used for this blog, it is located here with instructions on how to build and setup. In the next blog, we will show you how to migrate from host agents, using Docker images from the Docker Store.

Updates to Microservices iQ: Gain Deeper Visibility into Docker Containers and Microservices

Enterprises have never been under more pressure to deliver digital experiences at the high bar set by the likes of Facebook, Google, and Amazon. According to our recent App Attention Index 2017, consumers expect more from applications than ever before. And if you don’t meet those expectations? More than 50 percent delete an app after a single use due to poor app performance, and 80 percent (!) have deleted an app after it didn’t meet their expectations.

Because microservices and containers have been shown to help businesses ship better software faster, many are adopting these architectures. According to Gartner (“Innovation Insight for Microservices” 2017), early adopters of microservices (like Disney, GE, and Goldman Sachs) have cut development lead times by as much as 75 percent. However, containers and microservices also introduce new levels of complexity that make it challenging to isolate the issues that can degrade the entire performance of applications.

Updated Microservices iQ

Today, we’re excited to announce Microservices iQ Integrated Docker Monitoring. With Microservices iQ, you get a three-way drill-down across baseline metrics, container metrics, and underlying host server metrics, all within the context of Business Transactions and a single pane of glass.

Now, together with the baseline metrics that you rely on to run the world’s largest applications, you can click to view critical container metadata plus key resource indicators for single containers or clusters of containers. You can then switch seamlessly to a view of the underlying host server to view all the containers running on that host and its resource utilization.

To troubleshoot a problem with a particular microservice running inside a container, the most important determination to make is where to start. And that’s where Microservices iQ Integrated Docker Monitoring stands out.

Is a container unresponsive because another container on the same host is starving it of CPU, disk or memory? Or is there an application issue that has been exposed by the particular code path followed by this business transaction that needs to be diagnosed using Transaction Snapshots or other traditional tools?

Sometimes the source of the problem is easy to spot, but often not: and that’s where another significant enhancement to Microservices iQ comes into play: heat maps.

Heat Maps

Heat maps are a powerful visual representation of complex, multi-dimensional data. You’ve probably seen them used to show things like changes in climate and snow cover over time, financial data, and even for daily traffic reports. Because heat maps can abstract the complexity of huge amounts of data to quickly visualize complex data patterns, we’re leveraging the technique to help address one of the hardest challenges involved in managing a microservice architecture – pinpointing containers for performance anomalies and outliers.

When a cluster of containers is deployed, the expectation is that each container will behave identically. We know from experience that that isn’t always true. While the majority of the containers running a given microservice may perform within expected baselines, some may exhibit slowness or higher than usual error rates, resulting in the poor user experience that leads to uninstalled apps. Ops teams managing business-critical applications need a way to quickly identify when and where these outliers are occurring, and then view performance metrics for those nodes to look for potential correlations that help cut through the noise.

With the latest Microservices iQ, we have added support for heat maps in our new Tier Metrics Correlator feature, which shows load imbalances and performance anomalies across all the nodes in a tier, with heat maps highlighting correlation between these occurrences and the key resource metrics (CPU, disk, memory, I/O) for the underlying servers or container hosts. Issues that would have taken hours to investigate using multiple dashboards and side-by-side metric comparisons are often immediately apparent, thanks to the unique visualization advantages that heat maps provide. Think of it like turning on the morning traffic report and finding an unused backroad that’ll get you where you’re going in half the time.

Learn more

Find out more about updates to Microservices iQ, Docker Monitoring, and a new partnership with Atlassian Jira.


A Deep Dive into Docker – Part 2

In Part One of this Docker primer I gave you an overview of Docker, how it came about, why it has grown so fast and where it is deployed. In the second section, I’ll delve deeper into technical aspects of Docker, such as the difference between Docker and virtual machines, the difference between Docker elements and parts, and the basics of how to get started.

Docker Vs. Virtual Machines

First, I will contrast Docker containers with virtual machines like VirtualBox or VMWare. With virtual machines the entire operating system is found inside the environment, running on top of the host through a hypervisor layer. In effect, there are two operating systems running at the same time.

In contrast, Docker has all of the services of the host operating system virtualized inside the container, including the file system. Although there is a single operating system, containers are self-contained and cannot see the files or processes of another container.

Differences Between Virtual Machines and Docker

  • Each virtual machine has its own operating system, whereas all Docker containers share the same host kernel.

  • Virtual machines do not stop after a primary command; on the other hand, a Docker container stops after it completes the original command.

  • Due to the high CPU and memory usage, a typical computer can only run one or two virtual machines at a time. Docker containers are lightweight and can run alongside several other containers on an average laptop computer. Docker’s excellent resource efficiency is changing the way developers approach creating applications.

  • Virtual machines have their own operating system, so they might take several minutes to boot up. Docker containers do not need to load an operating system and take milliseconds to start.

  • Virtual machines do not have effective diff, and they are not version controlled. You can run diff on Docker images and see the changes in the file systems; Docker also has a Docker Hub for checking images in and out, and private and public repositories are available.

  • A single virtual machine can be launched from a set of VMDK or VMX files, while several Docker containers can be started from a single Docker image.

  • A virtual machine host operating system does not have to be the same as the guest operating system. Docker containers do not have their own independent operating system, so they must match the host (the Linux kernel).

  • Virtual machines do not use snapshots often — they are expensive and mostly used for backup. Docker containers use an imaging system with new images layered on top, and containers can handle large snapshots.

Similarities Between Virtual Machines and Docker

  • For both Docker containers and virtual machines, processes in one cannot see the processes in another.

  • Docker containers are instances of the Docker image, whereas virtual machines are considered running instances of physical VMX and VMDK files.

  • Docker containers and virtual machines both have a root file system.

  • A single virtual machine has its own virtual network adapter and IP address; Docker containers can also have a virtual network adapter, IP address, and ports.

Virtual machines let you access multiple platforms, so users across an organization will have similar workstations. IT professionals have plenty of flexibility in building out new workstations and servers in response to expanding demand, which provides significant savings over investing in costly dedicated hardware.

Docker is excellent for coordinating and replicating deployment. Instead of using a single instance for a robust, full-bodied operating system, applications are broken down into smaller pieces that communicate with each other.

Installing Docker

Docker gives you a fast and efficient way to port apps across machines and systems. Using Linux Containers (LXC), you can place apps in their own containers and operate them in a secure, self-contained environment. The important Docker parts are as follows:

  1. Docker daemon manages the containers.

  2. Docker CLI is used to communicate with and command the daemon.

  3. Docker image index is either a private or public repository for Docker images.

Here are the major Docker elements:

  1. Docker containers hold everything, including the application.

  2. Docker images are snapshots of containers or of a base operating system.

  3. Dockerfiles are scripts that build images automatically.

Applications using the Docker system employ these elements.

Linux Containers – LXC

Docker containers can be thought of as directories that can be archived or packed up and shared across a variety of platforms and machines. All dependencies and libraries live inside the container; the container itself relies on Linux Containers (LXC). Linux Containers let developers box up applications and their dependent resources in their own environment inside the container. The container takes advantage of Linux kernel features such as AppArmor profiles, cgroups, chroots and namespaces to manage the app and limit its resources.
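To get a feel for these kernel features outside of Docker, you can try namespaces directly. This is a minimal sketch using the unshare tool from util-linux on a recent Linux host:

    # Start a shell in new PID and mount namespaces
    $ sudo unshare --fork --pid --mount-proc /bin/bash

    # Inside the new namespace this shell runs as PID 1
    # and cannot see any other process on the host
    $ ps aux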

Docker Containers

Among other things, Docker containers provide process isolation, application portability, resource management and protection from outside attacks. At the same time, a container cannot interfere with the processes of another container, cannot run an operating system other than the host's, and cannot abuse the resources of the host system.

This flexibility allows containers to be launched quickly and easily. Gradual, layered changes keep containers lightweight, and the simple file system means rolling back is neither difficult nor expensive.

Docker Images

Docker containers begin with an image, the platform upon which applications and additional layers are built. Images are almost like disk images for a desktop machine, and they create a solid base for all operations inside the container. Because images are read-only, they are not affected by outside modifications and are highly resistant to tampering.

As developers create applications and tools and add them to the base image, they can commit the changes to create new image layers. Docker uses a union file system to present all of these layers as a single, unified file system.
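Here is a minimal sketch of how a committed change becomes a new layer; the names mycontainer and myrepo/myimage are placeholders.

    # Start from a base image and make a change inside the container
    $ sudo docker run --name mycontainer ubuntu bash -c "echo hello > /greeting.txt"

    # Commit the change; a new layer is stacked on top of the base image
    $ sudo docker commit mycontainer myrepo/myimage:v1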

Dockerfiles

Docker images can be created automatically by reading a Dockerfile, a text document that contains all of the commands needed to build the image. The instructions are executed in succession against a build context: either a PATH on the local file system, whose subdirectories are included in the context, or the URL of a Git repository, whose submodules are included.
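As a minimal sketch, this Dockerfile installs a package on top of a base image, and the build commands show both kinds of context; the image tag and repository URL are placeholders.

    # Dockerfile
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y curl
    CMD ["bash"]

    # Build from a local PATH context (the current directory)
    $ sudo docker build -t myrepo/myimage .

    # Build from a Git repository context
    $ sudo docker build -t myrepo/myimage https://github.com/example/repo.git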

Getting Started

Here is a shortened example of how to get started using Docker on Ubuntu Linux; enter these Docker Engine CLI commands in a terminal window. If you are familiar with package managers, you can install Docker with apt on Ubuntu (or yum on RPM-based distributions).

  1. Log in to Ubuntu as a user with sudo privileges.

  2. Make sure curl is installed:
    $ which curl

  3. If not, install it, updating the package manager first:
    $ sudo apt-get update
    $ sudo apt-get install curl

  4. Grab the latest Docker version with the official install script:
    $ curl -fsSL https://get.docker.com/ | sh

  5. You'll be prompted for your sudo password. When the script finishes, Docker and its dependencies will have been downloaded and installed.

  6. Check that Docker is installed correctly:
    $ docker run hello-world

You should see a "Hello from Docker" message on the screen, which indicates Docker appears to be working correctly. Consult the Docker installation guide for more details and for installation instructions for Mac and Windows.

Ubuntu Images

Docker is reasonably easy to work with once it is installed, since the Docker daemon should already be running. Get a list of all Docker commands by running:
    $ sudo docker

You can search for a Docker image among the public Ubuntu images. Keep in mind an image must be on the host machine where the containers will reside; you can pull an image, or view all the images on the host, as follows:
    $ sudo docker search ubuntu
    $ sudo docker pull ubuntu
    $ sudo docker images

Commit an image to ensure everything is preserved where you last left off, so it is at the same point for when you are ready to use it again:
    $ sudo docker commit [container ID] [image name]

To create a container, start with an image and indicate the command to run. You'll find complete instructions and commands in the official Linux installation guide.
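For example, this minimal sketch creates one container to run a single command and a second container with an interactive shell:

    # Run one command in a new container; the container stops when the command exits
    $ sudo docker run ubuntu /bin/echo "hello world"

    # Run an interactive shell inside a new container
    $ sudo docker run -i -t ubuntu /bin/bash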

Technical Differences

In this second part of my two-part series on Docker, I compared the technical differences between Docker and virtual machines, broke down the Docker components and reviewed the steps for getting started on Linux. The process is straightforward; it just takes some practice implementing these steps to start launching containers with ease.

Begin with a small, controlled environment to ensure the Docker ecosystem will work properly for you; you'll probably find, as I did, that the application delivery process becomes easy and seamless. In the end, the containers themselves are not the real advantage: the real game-changer is the opportunity to deliver applications in a much more efficient and controlled way. I believe you will enjoy how Docker lets you migrate from dated monolithic architectures to fast, lightweight microservices faster than you thought possible.

Docker is changing app development at a rapid pace. It allows you to create and test apps quickly in any environment, provides access to big data analytics for the enterprise, helps knock down the walls separating Dev and Ops, improves the app development process and brings down the cost of infrastructure while improving efficiency.

An Introduction to Docker – Part 1

What is Docker?

In simple terms, the Docker platform is all about making it easier to create, deploy and run applications by using containers. Containers let developers package up an application with all of the necessary parts, such as libraries and other elements it is dependent upon, and then ship it all out as one package. By keeping an app and associated elements within the container, developers can be sure that the apps will run on any Linux machine no matter what kind of customized settings that machine might have, or how it might differ from the machine that was used for writing and testing the code. This is helpful for developers because it makes it easier to work on the app throughout its life cycle.

Docker is kind of like a virtual machine, but instead of creating a whole virtual operating system (OS), it lets applications take advantage of the same Linux kernel as the system they’re running on. That way, the app only has to be shipped with things that aren’t already on the host computer instead of a whole new OS. This means that apps are much smaller and perform significantly better than apps that are system dependent. It has a number of additional benefits.

Docker is an open platform for distributed applications, aimed at developers and system admins. It provides an integrated suite of capabilities for an infrastructure-agnostic Containers-as-a-Service (CaaS) model. With Docker, IT operations teams can secure, provision and manage both infrastructure resources and base application content, while developers can build and deploy their applications in a self-service manner.

Key Benefits

  • Open Source: A key aspect of Docker is that it is completely open source. This means anyone can contribute to the platform and adapt and extend it to meet their own needs if they require extra features that don't come with Docker right out of the box. All of this makes it an extremely convenient option for developers and system administrators.

  • Low Overhead: Because developers don't have to provide a truly virtualized environment all the way down to the hardware level, they keep overhead costs down by shipping only the libraries and OS components their application actually needs to run.

  • Agile: Docker was built with speed and simplicity in mind and that’s part of the reason it has become so popular. Developers can now very simply package up any software and its dependencies into a container. They can use any language, version and tooling because they are packaged together into a container that, in effect, standardizes all elements without having to sacrifice anything.

  • Portable: Docker also makes application containers completely portable in a totally new way. Developers can now ship apps from development to testing and production without breaking the code. Differences in the environment won't have any effect on what is packaged inside the container. There's also no need to change the app for it to work in production, which is great for IT operations teams because now they can avoid vendor lock-in by moving apps across data centers.

  • Control: Docker provides ultimate control over the apps as they move along the life cycle because the environment is standardized. This makes it a lot easier to answer questions about security, manageability and scale during this process. IT teams can customize the level of control and flexibility needed to keep service levels, performance and regulatory compliance in line for particular projects.

How Was It Created and How Did It Come About?

Apps used to be developed in a very different fashion. Off-the-shelf software ran in countless private data centers, governed by gigantic code bases that were updated perhaps once a year. With the development of the cloud, all of that changed. And now that companies worldwide depend on software to connect with their customers, software offerings are becoming more and more customized.

As software continued to get more complex, with an expanding matrix of services, dependencies and infrastructure, it posed many challenges in reaching the end state of the app. That’s where Docker comes in.

In 2013, Docker was developed as a way to build, ship and run applications anywhere using containers. A software container is a standard unit of software that runs the same regardless of the code and dependencies included within it. Containers helped developers and system administrators transport software across infrastructures and environments without modification.

Docker was launched at PyCon Lightning Talk – The future of Linux Containers on March 13, 2013. The Docker mascot, Moby Dock, was created a few months later. In September, Docker and Red Hat announced a major alliance, introducing Fedora/RHEL compatibility. The company raised $15 million in Series B funding in January of 2014. In July 2014 Docker acquired Orchard (Fig) and in August 2014 the Docker Engine 1.2 was launched. In September 2014 they closed a $40 million Series C funding and by December 31, 2014, Docker had reached 100 million container downloads. In April 2015, they secured another $95 million in Series D funding and reached 300 million container downloads.

How Does It Work?

Docker is a platform for Containers as a Service (CaaS). To understand how it works, it's important to first look at what a Linux container is.

Linux Containers

In a normal virtualized environment, virtual machines run on top of a physical machine with the aid of a hypervisor (e.g., Xen, Hyper-V). Containers run in user space on top of an operating system's kernel. Each container has its own isolated user space, and it's possible to run many different containers on one host. Containers are isolated on a host using two Linux kernel features: namespaces and control groups.

There are six namespaces in Linux (mount, UTS, IPC, PID, network and user), and they allow a container to have its own network interfaces, IP address, process table and so on. The resources a container uses are managed by control groups, which let you limit the amount of CPU and memory a container can consume.
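For instance, here is a minimal sketch of starting a container with its memory and CPU shares capped by control groups:

    # Limit the container to 256 MB of RAM and half the default CPU weight
    $ sudo docker run -it -m 256m --cpu-shares 512 ubuntu /bin/bash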

Docker

Docker is a container engine that uses these Linux kernel features to create containers on top of an OS and automate app deployment into them. It provides a lightweight environment for running app code, creating a more efficient workflow for moving your app through the life cycle. Docker runs on a client-server architecture: the Docker daemon is responsible for all actions related to containers, and it receives commands from the Docker client through the CLI or a REST API.
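You can see the client-server split for yourself:

    # The client prints its own version, then queries the daemon for the server's
    $ sudo docker version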

Containers are built from images, which can be configured with apps and used as templates for creating containers. Images are organized in layers, and every change to an image is added as a new layer on top of it. The Docker registry is where Docker images are stored; developers use a public or private registry to build and share images with their teams. Docker's hosted registry service is called Docker Hub, and it allows you to upload and download images from a central location.
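A minimal sketch of that registry workflow, assuming a Docker Hub account named myuser and a local image named myimage (both placeholders):

    # Download a public image from Docker Hub
    $ sudo docker pull ubuntu

    # Tag a local image under your own repository, then upload it
    $ sudo docker tag myimage myuser/myimage:v1
    $ sudo docker login
    $ sudo docker push myuser/myimage:v1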

Once you have your images, you can create a container, which adds a writable layer on top of the image. The image tells Docker what the container holds, what process to run when the container is launched and other configuration data. Once the container is running, you can manage it, interact with the app, and then stop and remove the container when you're done. This makes it simple to work with the app without having to alter the code.
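The full lifecycle looks something like this minimal sketch; the container name web and the nginx image are just examples:

    # Launch a long-running container in the background
    $ sudo docker run -d --name web nginx

    # Manage and inspect it while it runs
    $ sudo docker ps
    $ sudo docker logs web

    # Stop and remove it when you're done
    $ sudo docker stop web
    $ sudo docker rm web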

Why Should a Developer Care?

Docker is perfect for helping developers with the development cycle. It lets you develop on local containers that have your apps and services, and can then integrate into a continuous integration and deployment workflow. Basically, it can make a developer’s life much easier. It’s especially helpful for the following reasons:

Easier Scaling

Docker makes it easy to keep workloads highly portable. Containers can run on a developer's local host, on physical or virtual machines, or in the cloud. Managing workloads becomes much simpler, as you can scale apps and services up or tear them down easily and nearly in real time.
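As a minimal sketch of that kind of scaling with docker-compose, assuming a compose file that defines a web service:

    # Start the application stack in the background
    $ docker-compose up -d

    # Scale the web service from one container to three
    $ docker-compose scale web=3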

Higher Density and More Workloads

Docker is a lightweight and cost-effective alternative to hypervisor-based virtual machines, which is great for high density environments. It’s also useful for small and medium deployments, where you want to get more out of the resources you already have.

Key Vendors and Supporters Behind Docker

The Docker project relies on community support channels like forums, IRC and Stack Overflow. Docker has received contributions from many big organizations, including:

  • Project Atomic

  • Google

  • GitHub

  • FedoraCloud

  • AlphaGov

  • Tsuru

  • Globo.com

Docker is supported by many cloud vendors, including:

  • Microsoft

  • IBM

  • Rackspace

  • Google

  • Canonical

  • Red Hat

  • VMware

  • Cisco

  • Amazon

Stay tuned for our next installment, where we will dig even deeper into Docker and its capabilities. In the meantime, read this blog post to learn how AppDynamics provides complete visibility into Docker containers.


5 Things Your CIO Needs to Know about Docker

It's no secret that Docker has revolutionized the application virtualization space. Today it is one of the fastest-adopted technologies across enterprises of all sizes, and it's now more than just a developer's preferred open source framework. It also presents an ideal business case to C-level decision makers, creating a transition path from operational efficiency and optimized IT budgets to innovation and expansion. We've listed a few of the many reasons why your CIO needs to be paying attention to the potential around Docker.

Docker is the closest thing to a complete DevOps technology available today

DevOps has a lot to gain from container-based software. As collaboration and integration between development and operations teams have increased with technical advances, so has the need to manage application dependencies throughout the dev cycle. Docker is a point of convergence for Development and Operations, creating a seamless link that lets the two collaborate without manual barriers and processes.

Docker comes with low overhead, and because containers maintain a low memory footprint, multiple services can run at once, allowing for better collaboration. Docker can also use shared volumes to make application code on the host operating system available inside containers, so a developer can access and edit source code from any platform and see changes instantly. Docker's flexibility also gives a front-end engineer the opportunity to explore how back-end systems work, gaining an understanding of the full stack and driving a more encompassing workflow.
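Here is a minimal docker-compose sketch of that shared-volume pattern; the service name, image and paths are placeholders:

    # docker-compose.yml
    web:
      image: node:4
      working_dir: /usr/src/app
      volumes:
        # Mount the host's source tree; edits show up in the container instantly
        - ./src:/usr/src/app
      command: npm start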

Docker is more manageable and lightweight compared to traditional virtualization

While many PaaS options are built to handle most tasks for development teams, the overhead costs of maintaining the architecture begin to offset its benefits. Docker lets you create flexible environments so you can reach deeper layers of the stack and work without disrupting other workflows. Docker containers are easier to manage than traditional heavyweight virtualization: an image is a whole series of layers, and changing one layer doesn't impact the rest. Before Docker, engineers would have to build out a virtual machine and generate synthetic load inside the environment. Now they can package applications into containers, reducing how many virtual machines they run, and with them, cost and overhead.

Docker has the competitive advantage

It's clear that Docker is not the only container name out there today; that said, it easily owns the mindshare of IT leaders and developers alike. In the short time since its 1.0 release, Docker has already seen support from leaders like Red Hat, IBM, Amazon and even VMware. As the pioneer of a business model tailored for developers, Docker has paved the path for rapid adoption in the container space. And as an open source technology, it sustains a growing community of contributors and stakeholders who steer it toward innovation and advancement.

Docker allows for increased developer productivity, and in turn, increased innovation

Container-based software already creates seamless collaboration and handoffs among development, operations and testing teams. It's more than likely that your engineers would benefit from time away from redundant tasks and troubleshooting; returning their focus to creating, innovating and responding to demand yields a better outcome, and ultimately a better product, benefiting them and your organization the most.

Creating better use of the cloud

Using containers in the cloud drives up instance utilization. By deploying multiple Docker applications onto a single cloud instance, you move much closer to 100% utilization of your resources. Docker allows you to run multiple apps on the same cloud instance safely by abstracting and isolating their dependencies.
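A minimal sketch of packing two independent apps onto one instance, each isolated in its own container and exposed on its own host port (image names are placeholders):

    # Two services on the same host, each with its own isolated dependencies
    $ sudo docker run -d -p 8080:80 --name app1 myorg/frontend
    $ sudo docker run -d -p 8081:80 --name app2 myorg/api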

Your CIO's role is already transitioning from what it used to be: instead of focusing on operational efficiencies and cost centers, they have the power to drive innovation and productivity across IT and development teams. Docker may still have plenty of room to grow and pain points to adjust to, but it already has the potential to be adopted as a best practice throughout organizations. It instills a methodology of collaboration, sharing, education and efficiency on teams. As DevOps and Agile practices become a necessity rather than an option within enterprise teams, Docker represents much more than container-based software. It represents a new era of digital innovation, one that helps your team excel in innovation, development, cultural practices and more.