The AppD Approach: Monitoring a Docker-on-Windows App

Here at AppDynamics, we’ve developed strong support for .NET, Windows, and Docker users. But something we haven’t spent much time documenting is how to instrument a Docker-on-Windows app. In this blog, I’ll show you how straightforward it is to get one up and running using our recently announced micro agent. Let’s get started.

Sample Reference Application

Provided with this guide is a simple ASP.NET MVC template app running on the full .NET Framework. The sample application link is provided below:

source.zip

If you have your own source code, feel free to use it.

Guide System Information

This guide was written and built on the following platform:

  • Windows Server 2016 Build 14393.rs1_release.180329-1711 (running on VirtualBox)

  • AppDynamics .NET Micro Agent Distro 4.4.3

Prerequisite Steps

Before instrumenting our sample application, we first need to download and extract the .NET micro agent. This step assumes you are not using an IDE such as Visual Studio, and are working manually on your local machine.

Step 1: Get NuGet Package Explorer

If you already have a way to view and/or download NuGet packages, skip this step. There are many ways to extract and view a NuGet package, but one method is with a tool called NuGet Package Explorer, which can be downloaded here.

Step 2: Download and Extract the NuGet Package

We’ll need to download the appropriate NuGet package to instrument our .NET application.

  1. Go to https://www.nuget.org/

  2. Search for “AppDynamics”

  3. The package we need is called “AppDynamics.Agent.Distrib.Micro.Windows.”

  4. Click “Manual Download,” or fetch the package with your standard NuGet client tooling.

  5. Now open the package with NuGet Package Explorer.

  6. Choose “Open a Local Package.”

  7. Find the location of your downloaded NuGet package and open it. You should see the screen below:

  8. Choose “File” and “Export” to export the NuGet package to a directory on your local machine. (A command-line alternative to steps 5 through 9 is sketched just after this list.)

  9. Navigate to the directory where you exported the NuGet package, and confirm that you see this:
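If you prefer to skip the GUI entirely, note that a .nupkg file is just a ZIP archive, so PowerShell can extract it directly. A minimal sketch, assuming PowerShell 5 or later; the destination folder name is arbitrary:

# A .nupkg is simply a ZIP archive; copy it to a .zip extension and expand it
# (the wildcard matches whichever package version you downloaded)
Copy-Item .\AppDynamics.Agent.Distrib.Micro.Windows.*.nupkg .\appd-micro-agent.zip
Expand-Archive .\appd-micro-agent.zip -DestinationPath .\appd-micro-agent

The expanded folder contains the same files you would get from NuGet Package Explorer’s “File” > “Export.”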

Step 3: Create Directory and Configure Agent

Now that we’ve extracted our NuGet package, we will create a directory structure to deploy our sample application.

  1. Create a local directory somewhere on your machine. For example, I created one on my Desktop:
    C:\Users\Administrator\Docker Sample\

  2. Navigate to the directory from Step 1, create a subfolder called “source”, and add the sample application code provided above (or your own source code) to this directory. If you used the sample source provided, you’ll see this:

  3. Go back to the root directory and create a directory called “agent”.

  4. Add the AppDynamics micro agent components you extracted earlier to this directory.

  5. Edit “AppDynamicsConfig.json” and add your controller and application information:

{
  "controller": {
    "host": "",
    "port": ,
    "account": "",
    "password": "",
    "ssl": false,
    "enable_tls12": false
  },
  "application": {
    "name": "Sample Docker Micro Agent",
    "tier": "SampleMVCApp"
  }
}
  6. Navigate to the root of the folder, create a file called “dockerFile”, and add the following text:

Sample Docker Config

FROM microsoft/iis
SHELL ["powershell"]

RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
   Install-WindowsFeature Web-Asp-Net45

ENV COR_ENABLE_PROFILING="1"
ENV COR_PROFILER="{39AEABC1-56A5-405F-B8E7-C3668490DB4A}"
ENV COR_PROFILER_PATH="C:\appdynamics\AppDynamics.Profiler_x64.dll"

RUN mkdir C:\webapp
RUN mkdir C:\appdynamics

RUN powershell -NoProfile -Command \
  Import-module IISAdministration; \    
  New-IISSite -Name "WebSite" -PhysicalPath C:\webapp -BindingInformation "*:8000:" 

EXPOSE 8000

ADD agent /appdynamics
ADD source /webapp

RUN powershell -NoProfile -Command Restart-Service wmiApSrv
RUN powershell -NoProfile -Command Restart-Service COMSysApp

Here’s what your root folder will now look like:
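Since the screenshot isn’t reproduced here, a rough sketch of the expected layout (the output of tree /F from the root; the exact files inside “agent” and “source” will vary with your agent version and application):

C:\Users\Administrator\Docker Sample
│   dockerFile
│
├───agent
│       AppDynamics.Profiler_x64.dll
│       AppDynamicsConfig.json
│       ...
│
└───source
        (the sample ASP.NET MVC application files)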

Building the Docker Container

Now let’s build the Docker container.

  1. Open a PowerShell terminal and navigate to the location of your Docker sample app. In this example, I will call my image “appdy_dotnet,” but feel free to use a different name if you desire.

  2. Run the following command to build the Docker image:

docker build --no-cache -t appdy_dotnet .

  3. Now run the container:

docker run --name appdy_dotnet -d appdy_dotnet ping -t localhost

  4. Log into the container via PowerShell/cmd:

docker exec -it appdy_dotnet cmd

  5. Get the container IP by running the “ipconfig” command:
C:\ProgramData\AppDynamics\DotNetAgent\Logs>ipconfig
Windows IP Configuration


Ethernet adapter vEthernet (Container NIC 69506b92):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::7049:8ad9:94ad:d255%17
   IPv4 Address. . . . . . . . . . . : 172.30.247.210
   Subnet Mask . . . . . . . . . . . : 255.255.240.0
  6. Copy the IPv4 address, add port 8000, and request the URL from a browser. You should see the site shown below: a simple ASP.NET MVC template app that ships with Visual Studio. In our example, the address would be:

http://<ip4-address>:8000

Here’s what the application would look like:

  7. Generate some load in the app by clicking the Home, About, and Contact tabs. Each will be registered as a separate business transaction. (If nothing shows up in the controller, see the log-check sketch below.)
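If no business transactions appear, a quick sanity check is to look at the agent logs inside the container. A minimal sketch, using the log path shown in the ipconfig step above:

REM from the PowerShell prompt on the host
docker exec -it appdy_dotnet cmd

REM then, inside the container, list the .NET agent log files
cd C:\ProgramData\AppDynamics\DotNetAgent\Logs
dir

Recent log files here confirm the profiler attached; errors (for example, controller connection problems) are typically reported in these logs as well.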

Killing the Container (Optional)

In the event you get errors and want to rebuild the container, here are some helpful commands for stopping and removing the container and image. (A one-step alternative follows the list.)

  1. Stop the container:

docker stop appdy_dotnet

  2. Remove the container:

docker rm appdy_dotnet

  3. Remove the image:

docker rmi appdy_dotnet
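Alternatively, a running container can be stopped and removed in a single step with the force flag; a convenience, not a requirement:

docker rm -f appdy_dotnet
docker rmi appdy_dotnet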

Verify Successful Configuration via Controller

Log in to your controller and verify that you are seeing load. If you used the sample app, you’ll see the following info:

Application Flow Map

Business Transactions

Tier Information


As you can see, it’s fairly easy to instrument a Docker-on-Windows app using AppDynamics’ recently announced micro agent. To learn more about AppD’s powerful approach to monitoring .NET Core applications, read this blog from my colleague Meera Viswanathan.

The AppD Approach: Leveraging Docker Store Images with Built-In AppDynamics

In my previous blog we explored some of the best and worst practices of Docker, taking a hands-on approach to refactoring an application, always with containers and monitoring in mind. In that project, we chose to use physical agents from the AppDynamics download site as our monitoring method. But this time we are going to take things one step further: using images from the Docker Store to improve the same application.

Modern applications are very complex, of course, and we will show the three most common ways to use AppDynamics Docker Store Images to monitor your app, all while adhering to Docker best practices. We will continue to use this repo and move between the “master” and “docker-store-images” branches. If you haven’t read my previous post, I recommend doing so first, as we will build on the source code used there.

First Things First: The Image

Over at the AppDynamics page on the Docker Store (login required), we have three types of images for Java applications, each with our agents on them. In this project, we will work solely with the Machine Agent and Java images but, in principle, the scenarios and implementations are language-agnostic. The images are based on OpenJDK, Tomcat, and Jetty. Since our application uses Tomcat, we will use that image. (store/appdynamics/java:4.3.7.1_tomcat9-jre8).

You can see how every image is versioned as store/appdynamics/<language>:<agent-version>_<server-runtime-environment>.

By inspecting the image <docker inspect store/appdynamics/java:4.3.7.1_tomcat9-jre8>, we are able to verify important environment variables, including the fact that we’re running Java 8. We’re also able to identify the JAVA_HOME variable, which will play an important role (more on this below). Furthermore, we can verify that Tomcat is installed with the correct versions and paths. Lastly, we notice the command to start the agent is simply the catalina.sh run command. On startup, the image runs Tomcat and the agent. (This is important to note as we dive deeper.)
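If you only want specific fields rather than the full JSON dump, docker inspect also accepts a Go-template format string. A minimal sketch against the same image:

# environment variables baked into the image (JAVA_HOME, Tomcat paths, etc.)
docker inspect --format '{{.Config.Env}}' store/appdynamics/java:4.3.7.1_tomcat9-jre8

# the command the image runs on startup
# (check .Config.Entrypoint as well, depending on how the image is built)
docker inspect --format '{{.Config.Cmd}}' store/appdynamics/java:4.3.7.1_tomcat9-jre8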

Finally, if you plan to use a third-party image in your production application, the image must be trusted. This means it must not modify any other containers at runtime. Having the OS level agent force itself into a containerized app—and intrusively modify code at runtime—defeats one of the main advantages of containerization: the ability to run the container anywhere without worrying about how your code executes. Always keep this in mind when evaluating container monitoring software. (AppDynamics’ images pass this test, by the way.)

Here are the three most practical migrations you’re likely to face:

Scenario 1: A Perfect World

This is the best-case scenario, but also the least practical. In a perfect world, we’d be able to pull the image as a top layer, pass in only environment variables, and see our application discovered within minutes. However, in our situation, we can’t do this because we have a custom startup script that we want to run when our container starts. In this example, we’ve chosen to use Dockerize (https://github.com/jwilder/dockerize) to simplify the process of converting our application to use Docker, but of course there are many situations where you might need some custom start logic in your containers. If you’re not using Dockerize in your script, simply pull the image and pass in the environment variables that name the individual components, as sketched below. Since the agents run on startup, this method will be seamless.
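For illustration, that “perfect world” flow would look roughly like the sketch below. The environment variable names and values are illustrative; check the image’s listing on the Docker Store for the exact names it expects:

docker pull store/appdynamics/java:4.3.7.1_tomcat9-jre8

docker run -d -p 8080:8080 \
  -e APPDYNAMICS_CONTROLLER_HOST_NAME=controller.example.com \
  -e APPDYNAMICS_CONTROLLER_PORT=8090 \
  -e APPDYNAMICS_AGENT_ACCOUNT_NAME=customer1 \
  -e APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=<access-key> \
  -e APPDYNAMICS_AGENT_APPLICATION_NAME=AD-Capital \
  -e APPDYNAMICS_AGENT_TIER_NAME=web \
  -e APPDYNAMICS_AGENT_NODE_NAME=web-1 \
  store/appdynamics/java:4.3.7.1_tomcat9-jre8

Drop your WAR into the container’s Tomcat webapps directory (or bake it into a thin image layered on top), and the agent that starts alongside Tomcat reports into the named application and tier.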

Scenario 2: Install Tomcat

Ideally, we’d like to make as few changes as possible. The problem here is that we have a unique startup script that needs to run when each project is started. In this scenario, your workaround is to use the agent image that doesn’t have Tomcat—in other words, use store/appdynamics/java:4.3.7.1 and install Tomcat on the image. With this approach, you remove the overlapping start commands and top-level agent. The downside is reinstalling Tomcat on every image rebuild.

Scenario 3: Refactor Start Script

Here’s the most common scenario when migrating from a physical agent—and one we found ourselves in. A specific run script brings up all of your applications. Refactoring your apps to pull from the image and start your application would be too time consuming, and would ask too much of the customer. The solution: Combine the two start scripts.

In our scenario, we had a directory responsible for the server, and another responsible for downloading and installing the agents. Since we were using Tomcat, we decided to leverage the image with Tomcat and our monitoring software, which was already installed <store/appdynamics/java:4.3.7.1_tomcat9-jre8>. (We went with the official Tomcat image because it’s the one used by the AppDynamics image.)

In our startup script, AD-Capital-Docker/ADCapital-Tomcat/startup.sh, we used Dockerize to spin up all the services. You’ll notice that we added a couple of environment variables, ${APPD_JAVAAGENT} and ${APPD_PROPERTIES}, to each start command. These changes let the script check whether AppD properties are set and, if so, start the application agent.

The next step was to refactor the startup script to use our new image. (To get the agent start command, simply pull the image, run the container, and run ps -ef at the command line, as sketched below.)
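Concretely, that discovery step looks something like this (the container name is arbitrary):

docker pull store/appdynamics/java:4.3.7.1_tomcat9-jre8
docker run -d --name appd-java-peek store/appdynamics/java:4.3.7.1_tomcat9-jre8
docker exec appd-java-peek ps -ef    # note the full java command line, including -javaagent
docker rm -f appd-java-peek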

Since Java was installed to a different location, we had to put its path in our start command, replacing “java” with “/docker-java-home/jre/bin/java”. This approach allowed us to ensure that our application was using the Java provided from the image.

Next, we needed to make sure we were starting the services using Tomcat, and with the start command from the AppDynamics agent image. By using the command from above, we were able to replace our Catalina startup:

-cp ${CATALINA_HOME}/bin/bootstrap.jar:${CATALINA_HOME}/bin/tomcat-juli.jar org.apache.catalina.startup.Bootstrap

…with the agent startup:

-Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djdk.tls.ephemeralDHKeySize=2048
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources
-javaagent:/opt/appdynamics/javaagent.jar -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
-Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat
-Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start

If you look closely, though, not all of the services were using Tomcat on startup. The last two services simply needed to start the agent. By reusing the same environment variable (APPD_JAVA_AGENT), we were able to set it to the path of the agent jar for those services. And with that, we had our new startup script.

[startup.sh (BEFORE)]

[startup.sh (AFTER)]

Not only did this approach allow us to get rid of our AppDynamics directory, it also enabled a seamless transition to monitoring via Docker images.

The AppD Approach: Composing Docker Containers for Monitoring

Since its introduction four years ago, Docker has vastly changed how modern applications and services are built. But while the benefits of microservices are well documented, the bad habits aren’t.

Case in point: As people began porting more of their monolithic applications to containers, Dockerfiles ended up becoming bloated, defeating the original purpose of containers. Any package or service you thought you needed was installed on the image. As a result, even minor changes to source code or server configuration forced a rebuild of the image. People would package multiple processes into a single Dockerfile. And as the images got bigger, things became much less efficient, because you would spend all of your time waiting on a rebuild just to check a simple change in source code.

The quick fix was to layer your applications. Maybe you had a base image, a language-specific image, a server image, and then your source code. While your images became more contained, any change to your bottom-level images would require an entire rebuild of the image set. Although your Dockerfiles became less bloated, you still suffered from the same upgrade issues. With the industry becoming more and more agile, this practice felt out of step.

The purpose of this blog is to show how we migrated an application to Docker—highlighting the Docker best practices we implemented—and how we achieved our end goal of monitoring the app in AppDynamics. (Source code located here)

Getting Started

With these best (and worst) practices in mind, we began by taking a multi-service Java application and putting it into Docker Compose. We wanted to build out the containers with the Principle of Least Privilege: each system component or process should have the least authority needed to complete its tasks. The containers needed to be ephemeral too, always shutting down when a SIGTERM is received. Since there were going to be environment variables reused across multiple services, we created a docker-compose.env file (image below) that could be leveraged across every service.

[AD-Capital-Docker/docker-compose.env]

Lastly, we knew that for our two types of log data—Application and Agent—we would need to create a shared volume to house it.

[AD-Capital-Docker/docker-compose.yml]

Instead of downloading and installing Java or Tomcat in the Dockerfile, we decided to pull the official Tomcat image directly from the Docker Store. This let us know exactly which version we were on without having to install either Java or Tomcat. Upgrading versions of Java or Tomcat would be easy, and would leave that work to the Tomcat image rather than to us.

We knew we were going to have a number of services dependent on each other and linking through Compose, and that a massive bash script could cause problems. Enter Dockerize, a utility that simplifies running applications in Docker containers. Its primary role is to wait for other services to be available using TCP, HTTP(S) and Unix before starting the main process.

Some backstory: When using tools like Docker Compose, it’s common to depend on services in other linked containers. But oftentimes relying on links is not enough; while the container itself may have started, the service(s) within it may not be ready, resulting in shell script hacks to work around race conditions. Dockerize gives you the ability to wait for services on a specified protocol (file, TCP, TCP4, TCP6, HTTP, HTTPS and Unix) before starting your application. You can use the -timeout # argument (default: 10 seconds) to specify how long to wait for the services to become available. If the timeout is reached and the service is still not available, the process exits with status code 1.
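For example, a Dockerize-wrapped start command might look like the sketch below (the service names, ports, health endpoint, and timeout are illustrative):

dockerize -wait tcp://database:5432 -wait http://rest-api:8080/health -timeout 60s \
  catalina.sh run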

[AD-Capital-Docker/ADCapital-Tomcat/startup.sh]

We then separated the source code from the agent monitoring. (The project uses a Docker volume to store the agent binaries and log/config files.) Now that we had a single image pulled from Tomcat, we could place our source code in the single Dockerfile and replicate it anywhere. Using prebuilt WAR files, we could pull in source built at any point in time and place it in the Tomcat webapps subdirectory.

[AD-Capital-Docker/ADCapital-Project/Dockerfile]

We now had a Dockerfile containing everything needed for our servers, and a Dockerfile for the source code, allowing you to run it with or without monitoring enabled. The next step was to split out the AppDynamics Application and Machine Agent.

We knew we wanted to instrument with our agents, but we didn’t want a configuration file with duplicate information for every container. So we created a docker-compose.env. Since our agents require minimal configuration—and the only difference between “tiers” and “nodes” are their names—we knew we could pass these env variables across the agents without using multiple configs. In our compose file, we could then specify the tier and node name for the individual services.

[AD-Capital-Docker/docker-compose.yml]

For the purpose of this blog, we downloaded the agent and passed in the filename and SHA-256 checksum via shell scripts in the ADCapital-Appdynamics/docker-compose.yml file. We copied the application agent and the configuration script that runs AppDynamics into the shared volume, allowing the individual projects to use them on startup (see image below). Now that we had enabled application monitoring for our apps, we wanted to install the machine agent to enable analytics. We followed the same instrumentation process, downloading the agent and verifying the filename and checksum. The machine agent is a standalone process, so our configuration script was a little different, but it took advantage of the docker-compose.env variable names to set the right parameters for the machine agent (ADCapital-Monitor/start-appdynamics).

[AD-Capital-Docker/ADCapital-AppDynamics/startup.sh]

The payoff? We now have an image responsible for the server, one responsible for the load, and another responsible for the application. In addition, another image monitors the application, and a final image monitors the application’s logs and analytics. Updating an individual component will not require an entire rebuild of the application. We’re using Docker as it was intended: each container has one responsibility. Lastly, by using volumes to share data across services, we can easily check agent and application logs. This makes it much easier to gain visibility into the entire landscape of our software.

If you would like to see the source code used for this blog, it is located here with instructions on how to build and set up. In the next blog, we will show you how to migrate from host agents, using Docker images from the Docker Store.

Updates to Microservices iQ: Gain Deeper Visibility into Docker Containers and Microservices

Enterprises have never been under more pressure to deliver digital experiences at the high bar set by the likes of Facebook, Google, and Amazon. According to our recent App Attention Index 2017, consumers expect more from applications than ever before. And if you don’t meet those expectations? More than 50 percent delete an app after a single use due to poor app performance, and 80 percent (!) have deleted an app after it didn’t meet their expectations.

Because microservices and containers have been shown to help businesses ship better software faster, many are adopting these architectures. According to Gartner (“Innovation Insight for Microservices” 2017), early adopters of microservices (like Disney, GE, and Goldman Sachs) have cut development lead times by as much as 75 percent. However, containers and microservices also introduce new levels of complexity that make it challenging to isolate the issues that can degrade the entire performance of applications.

Updated Microservices iQ

Today, we’re excited to announce Microservices iQ Integrated Docker Monitoring. With Microservices iQ, you get a three-way drill-down into baseline metrics, container metrics, and underlying host server metrics — all within the context of Business Transactions and a single pane of glass.

Now, together with the baseline metrics that you rely on to run the world’s largest applications, you can click to view critical container metadata plus key resource indicators for single containers or clusters of containers. You can then switch seamlessly to a view of the underlying host server to view all the containers running on that host and its resource utilization.

To troubleshoot a problem with a particular microservice running inside a container, the most important determination to make is where to start. And that’s where Microservices iQ Integrated Docker Monitoring stands out.

Is a container unresponsive because another container on the same host is starving it of CPU, disk or memory? Or is there an application issue that has been exposed by the particular code path followed by this business transaction that needs to be diagnosed using Transaction Snapshots or other traditional tools?

Sometimes the source of the problem is easy to spot, but often not: and that’s where another significant enhancement to Microservices iQ comes into play: heat maps.

Heat Maps

Heat maps are a powerful visual representation of complex, multi-dimensional data. You’ve probably seen them used to show things like changes in climate and snow cover over time, financial data, and even daily traffic reports. Because heat maps can abstract the complexity of huge amounts of data to quickly visualize complex data patterns, we’re leveraging the technique to help address one of the hardest challenges involved in managing a microservice architecture – pinpointing the containers responsible for performance anomalies and outliers.

When a cluster of containers is deployed, the expectation is that each container will behave identically. We know from experience that this isn’t always true. While the majority of the containers running a given microservice may perform within expected baselines, some may exhibit slowness or higher-than-usual error rates, resulting in the poor user experience that leads to uninstalled apps. Ops teams managing business-critical applications need a way to quickly identify when and where these outliers are occurring, and then view performance metrics for those nodes to look for correlations that help cut through the noise.

With the latest Microservices iQ, we have added heat map support in our new Tier Metrics Correlator feature, which shows load imbalances and performance anomalies across all the nodes in a tier, using heat maps to highlight the correlation between these occurrences and the key resource metrics (CPU, disk, memory, I/O) for the underlying servers or container hosts. Issues that would have taken hours to investigate using multiple dashboards and side-by-side metric comparisons are often immediately apparent, thanks to the unique visualization advantages that heat maps provide. Think of it like turning on the morning traffic report and finding an unused backroad that’ll get you where you’re going in half the time.

Learn more

Find out more about updates to Microservices iQ, Docker Monitoring, and a new partnership with Atlassian Jira.

 

A Deep Dive into Docker – Part 2

In Part One of this Docker primer I gave you an overview of Docker, how it came about, why it has grown so fast, and where it is deployed. In this second part, I’ll delve deeper into the technical aspects of Docker, such as the difference between Docker and virtual machines, the different Docker elements and parts, and the basics of how to get started.

Docker Vs. Virtual Machines

First, I will contrast Docker containers with virtual machines like VirtualBox or VMWare. With virtual machines the entire operating system is found inside the environment, running on top of the host through a hypervisor layer. In effect, there are two operating systems running at the same time.

In contrast, Docker has all of the services of the host operating system virtualized inside the container, including the file system. Although there is a single operating system, containers are self-contained and cannot see the files or processes of another container.

Differences Between Virtual Machines and Docker

  • Each virtual machine has its own operating system, whereas all Docker containers share the host’s kernel.

  • Virtual machines do not stop after a primary command; on the other hand, a Docker container stops after it completes the original command.

  • Due to the high CPU and memory usage, a typical computer can only run one or two virtual machines at a time. Docker containers are lightweight and can run alongside several other containers on an average laptop computer. Docker’s excellent resource efficiency is changing the way developers approach creating applications.

  • Virtual machines have their own operating system, so they might take several minutes to boot up. Docker containers do not need to load an operating system and start in a fraction of a second.

  • Virtual machines do not have effective diff, and they are not version controlled. You can run diff on Docker images and see the changes in the file systems; Docker also has a Docker Hub for checking images in and out, and private and public repositories are available.

  • A single virtual machine is launched from a set of VMDK or VMX files, while several Docker containers can be started from a single Docker image.

  • A virtual machine host operating system does not have to be the same as the guest operating system. Docker containers do not have their own independent operating system, so they must share the host’s kernel (the Linux kernel).

  • Virtual machines do not use snapshots often — they are expensive and mostly used for backup. Docker containers use an imaging system with new images layered on top, and containers can handle large snapshots.

Similarities Between Virtual Machines and Docker

  • For both Docker containers and virtual machines, processes in one cannot see the processes in another.

  • Docker containers are instances of the Docker image, whereas virtual machines are considered running instances of physical VMX and VMDK files.

  • Docker containers and virtual machines both have a root file system.

  • A single virtual machine has its own virtual network adapter and IP address; Docker containers can also have a virtual network adapter, IP address, and ports.

Virtual machines let you access multiple platforms, so users across an organization will have similar workstations. IT professionals have plenty of flexibility in building out new workstations and servers in response to expanding demand, which provides significant savings over investing in costly dedicated hardware.

Docker is excellent for coordinating and replicating deployment. Instead of using a single instance for a robust, full-bodied operating system, applications are broken down into smaller pieces that communicate with each other.

Installing Docker

Docker gives you a fast and efficient way to port apps across machines and systems. Using Linux Containers (LXC), you can place apps in their own containers and operate them in a secure, self-contained environment. The important Docker parts are as follows:

  1. Docker daemon manages the containers.

  2. Docker CLI is used to communicate with and command the daemon.

  3. Docker image index is either a private or public repository for Docker images.

Here are the major Docker elements:

  1. Docker containers hold everything, including the application.

  2. Docker images are snapshots of containers or of the operating system.

  3. Dockerfiles are scripts that build images automatically.

Applications using the Docker system employ these elements.

Linux Containers – LXC

Docker containers can be thought of as directories that can be archived or packed up and shared across a variety of platforms and machines. All dependencies and libraries are inside the container; the container itself depends on Linux Containers (LXC). Linux Containers let developers create applications and their dependent resources, which are boxed up in their own environment inside the container. The container takes advantage of Linux features such as profiles, cgroups, chroots, and namespaces to manage the app and limit resources.

Docker Containers

Among other things, Docker containers provide isolation of processes, portability of applications, resource management, and security from outside attacks. At the same time, they cannot interfere with the processes of another container, do not work on other operating systems and cannot abuse the resources on the host system.

This flexibility allows containers to be launched quickly and easily. Gradual, layered changes lead to a lightweight container, and the simple file system means it is not difficult or expensive to roll back.

Docker Images

Docker containers begin with an image, which is the platform upon which applications and additional layers are built. Images are almost like disk images for a desktop machine, and they create a solid base to run all operations inside the container. Each image is not dependent on outside modifications and is highly resistant to outside tampering.

As developers create applications and tools and add them to the base image, they can create new image layers when the changes are committed. Developers use a union file system to keep everything together as a single item.

Dockerfiles

Docker images can be created automatically by reading a Dockerfile, which is a text document that contains all commands needed to build the image. Many instructions can be completed in succession, and the context includes files at a specific PATH on the local file system or a Git repository location; related subdirectories are included in the PATH. Likewise, the URL will include the submodules of the repository.

Getting Started

Here is a shortened example of how to get started using Docker on Ubuntu Linux — enter these Docker Engine CLI commands in a terminal window. If you are familiar with package managers, you can use apt or yum for installation.

  1. Log into Ubuntu with sudo.

  2. Make sure curl is installed:
    $ which curl

  3. If not, install it, updating the package manager first:
    $ sudo apt-get update
    $ sudo apt-get install curl

  4. Grab the latest Docker version:
    $ curl -fsSL

  5. You’ll need to enter your sudo password. Docker and its dependencies should be downloaded by now.

  6. Check that Docker is installed correctly:
    $ docker run hello-world

You should see “Hello from Docker” on the screen, which indicates that Docker is installed and working correctly. Consult the Docker installation guide for more details and for installation instructions for Mac and Windows.

Ubuntu Images

Docker is reasonably easy to work with once it is installed, since the Docker daemon should already be running. Get a list of all Docker commands by running sudo docker with no arguments.

Here is a reference list that lets you search for a Docker image from a list of Ubuntu images. Keep in mind an image must be on the host machine where the containers will reside; you can pull an image, or view all the images on the host, by running sudo docker images.

Commit an image to ensure everything is the same as where you last left off — that way it is at the same point when you are ready to use it again:

sudo docker commit [container ID] [image name]

To create a container, start with an image and indicate a command to run. You’ll find complete instructions and commands with the official Linux installation guide.
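For example, a minimal first container might look like this (the image and command are just an illustration):

$ sudo docker pull ubuntu
$ sudo docker run -it ubuntu /bin/bash

The -it flags attach an interactive terminal, so you land in a shell inside the new container; type exit to stop it.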

Technical Differences

In this second part of my two-part series on Docker, I compared the technical differences between Docker and virtual machines, broke down the Docker components, and reviewed the steps to get started on Linux. The process is straightforward — it just takes some practice implementing these steps to start launching containers with ease.

Begin with a small, controlled environment to ensure the Docker ecosystem will work properly for you; you’ll probably find, as I did, that the application delivery process is easy and seamless. In the end, the containers themselves are not the real advantage: the real game-changer is the opportunity to deliver applications in a much more efficient and controlled way. I believe you will enjoy how Docker allows you to migrate from dated monolithic architectures to fast, lightweight microservices faster than you thought possible.

Docker is changing app development at a rapid pace. It allows you to create and test apps quickly in any environment, provides access to big data analytics for the enterprise, helps knock down walls separating Dev and Ops, makes the app development process better and brings down the cost of infrastructure while improving efficiency.

An Introduction to Docker – Part 1

What is Docker?

In simple terms, the Docker platform is all about making it easier to create, deploy and run applications by using containers. Containers let developers package up an application with all of the necessary parts, such as libraries and other elements it is dependent upon, and then ship it all out as one package. By keeping an app and associated elements within the container, developers can be sure that the apps will run on any Linux machine no matter what kind of customized settings that machine might have, or how it might differ from the machine that was used for writing and testing the code. This is helpful for developers because it makes it easier to work on the app throughout its life cycle.

Docker is kind of like a virtual machine, but instead of creating a whole virtual operating system (OS), it lets applications take advantage of the same Linux kernel as the system they’re running on. That way, the app only has to be shipped with things that aren’t already on the host computer instead of a whole new OS. This means that apps are much smaller and perform significantly better than apps that are system dependent. It has a number of additional benefits.

Docker is an open platform for distributed applications for developers and system admins. It provides an integrated suite of capabilities for an infrastructure agnostic CaaS model. With Docker, IT operations teams are able to secure, provision and manage both infrastructure resources and base application content while developers are able to build and deploy their applications in a self-service manner.

Key Benefits

  • Open Source: Another key aspect of Docker is that it is completely open source. This means anyone can contribute to the platform and adapt and extend it to meet their own needs if they require extra features that don’t come with Docker right out of the box. All of this makes it an extremely convenient option for developers and system administrators.

  • Low-Overhead: Because developers don’t have to provide a truly virtualized environment all the way down to the hardware level, they can keep overhead costs down by including only the libraries and OS components their apps need to run.

  • Agile: Docker was built with speed and simplicity in mind and that’s part of the reason it has become so popular. Developers can now very simply package up any software and its dependencies into a container. They can use any language, version and tooling because they are packaged together into a container that, in effect, standardizes all elements without having to sacrifice anything.

  • Portable: Docker also makes application containers completely portable in a totally new way. Developers can now ship apps from development to testing and production without breaking the code. Differences in the environment won’t have any effect on what is packaged inside the container. There’s also no need to change the app for it to work in production, which is great for IT operations teams because now they can avoid vendor lock-in by moving apps across data centers.

  • Control: Docker provides ultimate control over the apps as they move along the life cycle because the environment is standardized. This makes it a lot easier to answer questions about security, manageability and scale during this process. IT teams can customize the level of control and flexibility needed to keep service levels, performance and regulatory compliance in line for particular projects.

How Was It Created and How Did It Come About?

Apps used to be developed in a very different fashion. There were tons of private data centers where off-the-shelf software was being run and controlled by gigantic code bases that had to be updated once a year. With the development of the cloud, all of that changed. Also, now that companies worldwide are so dependent on software to connect with their customers, the software options are getting more and more customized.

As software continued to get more complex, with an expanding matrix of services, dependencies and infrastructure, it posed many challenges in reaching the end state of the app. That’s where Docker comes in.

In 2013, Docker was developed as a way to build, ship and run applications anywhere using containers. Software containers are a standard unit of software that isn’t affected by what code and dependencies are included within it. This helped developers and system administrators deal with the need to transport software across infrastructures and various environments without any modifications.

Docker was launched at PyCon Lightning Talk – The future of Linux Containers on March 13, 2013. The Docker mascot, Moby Dock, was created a few months later. In September, Docker and Red Hat announced a major alliance, introducing Fedora/RHEL compatibility. The company raised $15 million in Series B funding in January of 2014. In July 2014 Docker acquired Orchard (Fig) and in August 2014 the Docker Engine 1.2 was launched. In September 2014 they closed a $40 million Series C funding and by December 31, 2014, Docker had reached 100 million container downloads. In April 2015, they secured another $95 million in Series D funding and reached 300 million container downloads.

How Does It Work?

Docker is a Container as a Service (CaaS). To understand how it works, it’s important to first look at what a Linux container is.

Linux Containers

In a normal virtualized environment, virtual machines run on top of a physical machine with the aid of a hypervisor (e.g. Xen, Hyper-V). Containers run in user space on top of an operating system’s kernel. Each container has its own isolated user space, and it’s possible to run many different containers on one host. Containers are isolated in a host using two Linux kernel features: namespaces and control groups.

There are six namespaces in Linux and they allow a container to have its own network interfaces, IP address, etc. The resources that a container uses are managed by control groups, which allow you to limit the amount of CPU and memory resources a container should use.
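Those control groups are exactly what Docker’s resource flags map onto. As a preview of the CLI covered below, a hedged sketch of capping a container’s memory and CPU share at launch (the image is illustrative):

$ sudo docker run -d --memory 256m --cpu-shares 512 nginx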

Docker

Docker is a container engine that uses Linux kernel features to create containers on top of an OS and automates app deployment in the container. It provides a lightweight environment to run app code in order to create a more efficient workflow for moving your app through the life cycle. It runs on a client-server architecture: the Docker daemon is responsible for all the actions related to containers, and it receives commands from the Docker client through the CLI or REST API.

The containers are built from images, and these images can be configured with apps and used as templates for creating containers. Images are organized in layers, and every change to an image is added as a new layer on top of it. The Docker registry is where Docker images are stored, and developers use a public or private registry to build and share images with their teams. The Docker-hosted registry service is called Docker Hub, and it allows you to upload and download images from a central location.
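Sharing an image through a registry is a tag-and-push operation. A minimal sketch, with a hypothetical repository name (and assuming you have already run docker login):

$ sudo docker tag my-app-image myteam/my-app:1.0    # tag a local image for a repository
$ sudo docker push myteam/my-app:1.0                # upload it to Docker Hub
$ sudo docker pull myteam/my-app:1.0                # pull it down on another host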

Once you have your images, you can create a container, which is a writable layer of the image. The image tells Docker what the container holds, what process to run when the container is launched and other configuration data. Once the container is running, you can manage it, interact with the app and then stop and remove the container when you’re done. It makes it simple to work with the app without having to alter the code.

Why Should a Developer Care?

Docker is perfect for helping developers with the development cycle. It lets you develop on local containers that have your apps and services, and can then integrate into a continuous integration and deployment workflow. Basically, it can make a developer’s life much easier. It’s especially helpful for the following reasons:

Easier Scaling

Docker makes it easy to keep workloads highly portable. The containers can run on a developer’s local host, as well as on physical or virtual machines or in the cloud. It makes managing workloads much simpler, as you can use it to scale up or tear down apps and services easily and nearly in real time.

Higher Density and More Workloads

Docker is a lightweight and cost-effective alternative to hypervisor-based virtual machines, which is great for high density environments. It’s also useful for small and medium deployments, where you want to get more out of the resources you already have.

Key Vendors and Supporters Behind Docker

The Docker project relies on community support channels like forums, IRC and StackOverflow. Docker has received contributions from many big organizations, including:

  • Project Atomic

  • Google

  • GitHub

  • FedoraCloud

  • AlphaGov

  • Tsuru

  • Globo.com

Docker is supported by many cloud vendors, including:

  • Microsoft

  • IBM

  • Rackspace

  • Google

  • Canonical

  • Red Hat

  • VMware

  • Cisco

  • Amazon

Stay tuned for our next installment, where we will dig even deeper into Docker and its capabilities. In the meanwhile, read this blog post to learn how AppDynamics provides complete visibility into Docker Containers.

 

5 Things Your CIO Needs to Know about Docker

It’s no secret that Docker has revolutionized the application virtualization space. Today, it’s one of the fastest adopted technologies across enterprises of all sizes—and now it’s more than just a developer’s preferred open source framework. It also drives an ideal business case to C-level decision makers, creating an opportunity to shift from operational efficiency and optimized IT budgets toward driving innovation and expansion. We’ve listed a few of the many reasons why your CIO needs to be paying attention to the potential around Docker.

Docker is the closest thing to a complete DevOps technology available today

DevOps has a lot to gain from container-based software. As the collaboration and integration between development and operations teams have increased with technical advances, the need to manage application dependencies throughout dev cycles has increased as well. Docker is a point of convergence for Development and Operations, creating a seamless link for the two to collaborate without manual barriers and processes.

Docker comes with low overhead and a small memory footprint, allowing multiple services to run at once for better collaboration. It also uses shared volumes to make application code available to containers from a host operating system, so a developer can access and edit source code from any platform and see changes instantly. Docker’s flexibility also gives a front-end engineer the opportunity to explore how back-end systems work, gaining an understanding of the full stack and driving a more encompassing workflow.

Docker is more manageable and lightweight compared to virtual machines

While many PaaS options are built to handle most tasks for development teams, the overhead costs of maintaining the architecture begin to offset its benefits. Docker allows you to create flexible environments so you can enter deeper layers of the stack and work without disrupting any other workflows. Docker containers are easier to manage than traditional heavyweight virtualization: a container is a series of layers, and changing one layer doesn’t mean impacting the rest. Before Docker, engineers would have to build out a virtual machine with some fake load inside the environment. Now, they’re able to package applications into containers, reducing how many virtual machines they run and cutting costs and overhead along the way.

Docker has the competitive advantage

It’s clear that Docker is not the only container name out there today; that said, it easily owns the mindshare of IT leaders and developers alike. In the short time since its 1.0 release, Docker has already seen support from leaders like Red Hat, IBM, Amazon, and even VMware. As the pioneer of a business model tailored for developers, Docker has paved the path for rapid adoption in the container space. And as an open source technology, it also sustains a growing community of contributors and stakeholders to lead the channels toward innovation and advancement.

Docker allows for increased developer productivity, and in turn, increased innovation

Using container-based software already creates seamless collaboration and handoffs between development, operations, and testing teams. It’s more than likely that your engineers benefit from time away from redundant tasks and troubleshooting. Returning their focus to creating, innovating, and responding to demand produces a better outcome, and ultimately a better product, which benefits both them and your organization.

Creating better use of the cloud

Using containers in the cloud increases instance utilization. By deploying multiple Docker applications onto a single cloud instance, you are much closer to achieving 100% utilization of your resources. Docker allows you to run multiple apps on the same cloud instance safely by abstracting and isolating their dependencies.

Your CIO’s role is already transitioning from what it used to be. Instead of focusing on operational efficiencies and cost centers, they have the power to drive innovation and productivity for their IT and development teams. Docker might still have a lot of room to grow and pain points to address, but it already has the potential to be implemented as a best practice throughout organizations. It instills a methodology of collaboration, sharing, education, and efficiency on teams. As DevOps and Agile practices become a necessity rather than an option within enterprise teams, Docker represents much more than container-based software. It represents a new era of digital innovation, one that helps your team excel in innovation, development, cultural practices, and more.

Docker: The Secret Sauce to Fuel Innovation

Much has already been written about the virtues of Docker, and of containers in general, along with related technologies like CoreOS and Kubernetes: how life-changing Docker is, how innovative, etc. However, the real secret to Docker’s success in the marketplace is the innovation it quietly unlocks. Innovation and R&D are the lifeblood of today’s technology success. Companies, no matter how large, must iterate constantly to stay ahead of their legacy competitors and new upstarts threatening disruption. The rise of Agile methodologies and DevOps teams comes with the expectation of more releases, more features, and ultimately a better product.

How can you maintain this pace of innovation? Allow your developers to develop, instead of focusing on tedious — and time consuming — tasks dealing with distributed application upkeep and maintenance.

Pre-Docker Life

At AppDynamics, we primarily use Docker for our field enablement resources, such as demo environments. Before Docker, we would have to spin up a virtual machine and create some fake load inside the environment to show the benefits of AppDynamics’ monitoring. There was no quick or easy way to make an update to the VM — even a small one. Any minor change (which, as an Agile company, happened often) would require some heavy lifting from our developers. There was no version control.

Productivity Gain

Removing redundant work such as updating a demo environment VM — which, let’s face it, devs don’t want to do in the first place — frees up vital time for developers to get back to doing what they do best. Setting up machines becomes obsolete, and devs gonna dev.

At any company, you’re likely paying a substantial wage for quality engineers. With that expense, you should expect innovation.

Docker, in our case, also removes the project abandonment risk. If a project owner is sick or leaves the company there is typically an audit process of analyzing the code. More often than not, a good chunk would have to be rebuilt in a more consistent manner. With Docker, you insource your code into a standardized container, allowing seamless handoff to the next project owner.

Fostering DevOps

Along with passing work to the next project owner, the handoff between dev, QA, and Ops becomes seamless as well — which is a main foundation of DevOps. How we use Docker, and I assume others do as well, allows us to maintain repeatable processes and enable our field teams.

The shareability allows us to incorporate best practices among the entire team and provide a consistent front with engagements.

Interested to see how AppDynamics and Docker work together? Check out this blog!

AppDynamics Monitoring Excels for Microservices; New Pricing Model Introduced

It’s no news that microservices are one of the top trends, if not the top trend, in application architectures today. Take large monolithic applications, which are brittle and difficult to change, and break them into smaller, manageable pieces to gain flexibility in deployment models and facilitate agile release and development for today’s rapidly shifting digital businesses. Unfortunately, with this change, application and infrastructure management becomes more complex due to size and technology changes, most often adding significantly more virtual machines and/or containers to handle the growing footprint of application instances.

Fortunately, this is just the kind of environment the AppDynamics Application Intelligence Platform is built for, delivering deep visibility across even the most complex, distributed, heterogeneous environments. We trace and monitor every business transaction from end-to-end — no matter how far apart those ends are, or how circuitous the path between — including any and all API calls across any and all microservices tiers. Wherever there is an issue, the AppDynamics platform pinpoints it and steers the way to rapid resolution. This data can also be used to analyze usage patterns, scaling requirements, and even visibility into infrastructure usage.

This is just the beginning of the microservices trend. With the rise of the Internet of Things, all manner of devices and services will be driven by microservices. The applications themselves will be extended into the “Things” causing even further exponential growth over the next five years. Gartner predicts over 25 billion devices connected by 2020, with the majority being in the utilities, manufacturing, and government sectors.

AppDynamics microservices pricing is based on the size of the Java Virtual Machine (JVM) instance; any JVM running with a maximum heap size of less than one gigabyte is considered a microservice.

We’re excited to help usher in this important technology, and to make it feasible and easy for enterprises to deploy AppDynamics Java microservices monitoring and analytics. For a more detailed perspective, see our post, Visualizing and tracking your microservices.

Complete visibility into Docker containers with AppDynamics

Today we announced the AppDynamics Docker monitoring solution that provides an application-centric view inside and across Docker containers. Performance of distributed applications and business transactions can be tagged, traced, and monitored even as they transit multiple containers.

Before I talk more about the AppDynamics Docker monitoring solution, let me quickly review the premise of Docker and point you to a recent blog, “Visualizing and tracking your microservices,” by my colleague, Jonah Kowall, that highlights Docker’s synergy with another hot technology trend — microservices.

What is Docker?

Docker is an open platform for developers and sysadmins of distributed applications that enables them to build, ship, and run any app anywhere. Docker allows applications to run on any platform irrespective of what tools were used to build them, making it easy to distribute, test, and run software. I found this 5 Minute Docker video very helpful for a quick and digestible overview. If you want to learn more, you can go to Docker’s web page and start with this Docker introduction video.

Docker makes it very easy to make changes and package the software quickly for others to test without requiring a lot of resources. At AppDynamics, we embraced Docker completely in our development, testing, and demo environments. For example, as you can see in the attached screenshot from our demo environment, we are using Docker to provision various demo use cases with different application environments like jBoss, Tomcat, MongoDB, Angularjs, and so on.

[Screenshot: AppDynamics demo environment provisioned with Docker]

In addition, you can test drive AppDynamics by downloading, deploying, and testing with the packaged applications from the AppDynamics Docker repos.

Complete visibility into Docker environment with AppDynamics

AppDynamics provides visibility into applications and business transactions composed of multiple smaller, decoupled (micro) services deployed in a Docker environment using the Docker monitoring solution. The AppDynamics Docker Monitoring Extension monitors and reports on various metrics, such as total number of containers, running containers, images, CPU usage, memory usage, and network traffic. The extension gathers metrics from the Docker Remote API, using either a Unix socket or TCP, giving you a choice of data-collection protocol.
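That Remote API is the same one you can query by hand, which is a handy way to see the raw data behind the extension’s metrics. A hedged sketch over the default Unix socket (these are standard Docker Remote API endpoints, but verify against your Docker version; curl 7.40+ is assumed for --unix-socket support):

# list running containers
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# one-shot CPU/memory/network stats for a single container
curl --unix-socket /var/run/docker.sock "http://localhost/containers/<container-id>/stats?stream=false"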

The Docker metrics can now be correlated with the metrics from the applications running in the containers. For example, in the screenshot below, you can see the overall performance (calls per minute, in red) of a web server deployed in a Docker container correlated with Docker performance metrics (network transmit/receive and CPU usage). As the number of calls per minute to the web server increases, you can see that the network traffic and CPU usage increase as well.

[docker_metric_browser_with_cpu.png]

Customers can leverage all the core functionality of AppDynamics (e.g. dynamic baselining, health rules, policies, actions, etc.) for all the Docker metrics while correlating them with the metrics from the applications already running in the Docker environment.

The Docker monitoring extension also creates an out-of-the-box custom dashboard with key Docker metrics, as shown in the screenshot below. This dashboard will jump-start your monitoring of the Docker environment.

[docker_custom_dashboard.png]

Download the AppDynamics Docker monitoring extension, set it up and configure it by following the instructions on the extension page, and get end-to-end visibility into your Docker environment and the applications running within it.