Hands Off My Docker Containers: Dynamic Java Instrumentation in Three Easy Steps

Many AppDynamics customers face challenges modifying startup scripts or updating images in order to inject Java agents, especially in containerized environments. Other customers might not want to change their build process or completely restructure their projects just to try out a monitoring solution.

Fortunately, there are ways to instrument Java applications without having to access startup scripts or docker-compose.yml files. This blog will show a solution that bundles the dynamic-attach functionality with infrastructure monitoring to instrument Java processes in a Docker environment. This method requires no changes to images or deployment files. It relies instead on the unique ability of AppDynamics’ Java agents to dynamically attach to running Java processes.

Common Approaches to Inject Java Agent

Typically, an AppDynamics Java agent is injected into a process by adding runtime parameters to the Java command that creates the JVM. When done this way, the Java agent lives alongside the JVM and monitors it for its entire lifespan. A runtime injection might look like this in the Java command:

java -javaagent:/appdynamics/AppServerAgent/javaagent.jar -jar customer-app.jar

This approach requires the agent files to be either installed locally or volume-mounted. The controller connection information is also often supplied via runtime parameters, as is the name of the AppDynamics Application and Tier that the agent will report to.

The above example is a very common way to inject the Java agent, but there are many variations of this method that can be customized to fit a customer’s needs.

Dynamic Attach

An alternative to the persistent Java agent injection described above is the Dynamic Attach method: the AppDynamics Java agent can be attached to a JVM that is already running, with no JVM restart required. This ability is a competitive differentiator; other products cannot do this.

This method takes the Java process ID and uses the Java -Xbootclasspath parameter to dynamically inject the Java agent into the JVM process. The JVM then performs the class retransformation necessary to monitor the application. Performance is temporarily impacted during retransformation, but returns to normal after completion.

Dynamic Agent Solution

We have developed a solution that uses Dynamic Attach within a Dockerized environment. This solution is packaged with infrastructure monitoring and is designed to make agent injection very simple to roll out and use. First, we will look at the steps required to use Dynamic Agent, and then talk about how it works.

Requirements

Use of this solution assumes you have a Java application (Java 1.7 – 1.10) running in Docker containers on a Linux host machine. It will not work with IBM Java or JRockit JVMs. JBoss processes can use it, but only if the standalone.conf file is modified to include the following setting:

JBOSS_MODULES_SYSTEM_PKGS="org.jboss.byteman,com.singularity"

Using Dynamic Agent in Three Easy Steps

It is very easy to get started with Dynamic Agent.

Step 1: Clone the following repo to your host machine:

git clone https://github.com/Appdynamics/Dynamic-Agent-MA

Step 2: Modify controller.env with your controller connection information.

Step 3: Run the run.sh script.

Step 4: (optional) Sit back and let your boss and coworkers marvel at how quickly you were able to give them valuable performance and business insights into your applications.

Details on the Three Easy Steps

Step 1:

Aside from the git metadata, cloning the repo gives you four small files:

  • controller.env
  • run.sh
  • stop.sh
  • readme.txt


Step 2:

controller.env is the only file you need to modify. It contains the usual information on how to connect to the controller:

CONTROLLER_HOST=my.domain.com
CONTROLLER_PORT=8090
CONTROLLER_SSL_ENABLED=false
APPLICATION_NAME=MyApp
ACCOUNT_NAME=customer1
ACCOUNT_ACCESS_KEY=a5f426acdff0

Step 3:

Once you have updated the values in controller.env, you are ready to use the run.sh script.

./run.sh

That’s it! Within a few minutes, you should start to see your newly attached agents reporting to your controller.

After a few more minutes, you should see traffic in the flowmap.

controller.env Parameter Reference

A complete and up-to-date reference of controller.env parameters is contained in the readme.txt file in the git repo.

What is a Tier?

As described in the Common Approaches section above, manual injection of the Java agent requires you to specify the Tier name that each process will report to. In contrast, the Dynamic Agent solution can extract the Tier name from the container itself, and it gives you options for how the name is extracted.

By default, the container’s Hostname will be used as the Tier name in AppDynamics. If you want to change this, there are some optional parameters you can set in the controller.env file:

TIER_NAME_FROM

  • TIER_NAME_FROM=HOSTNAME uses the container’s hostname as the Tier name; this is also the behavior if the parameter is left blank or omitted from controller.env
  • TIER_NAME_FROM=CONTAINER_NAME uses the name of the container as the Tier name
  • TIER_NAME_FROM=JVM_PARAM looks for the JVM parameter specified in TIER_NAME_PARAM and uses its value


For example, the following controller.env file will make the process look for a JVM parameter named -Dservice-name and use its value as the Tier name in AppDynamics.

CONTROLLER_HOST=my.controller.com
CONTROLLER_PORT=8090
CONTROLLER_SSL_ENABLED=false
APPLICATION_NAME=Jetty
ACCOUNT_NAME=customer1
ACCOUNT_ACCESS_KEY=4e76-a7f8-a5f426acdff0
TIER_NAME_FROM=JVM_PARAM
TIER_NAME_PARAM=Dservice-name

How It Works

For reasons that will soon become clear, the relevant parts of this solution run as an extension inside a standalone machine agent. Here are some of the more important files involved in the solution.

dynamicAttach.go

The heart of the extension is a Go process. It uses the Docker API to loop through all running containers on the host machine, inspecting each one for a running Java process. When it finds one, it deploys the appropriate files (housed in agentArchive.tar) to the container and then runs one of those files from inside the container.

This process also keeps track of the container IDs that have been instrumented, allowing it to run every minute without excessive overhead. Any already-instrumented containers will not be re-instrumented. More importantly, new containers will be quickly identified and processed.
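A hypothetical sketch of this discovery loop, using the Docker Go SDK (simplified: no bookkeeping of already-instrumented containers, and minimal error handling):

package main

import (
	"context"
	"fmt"
	"strings"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewEnvClient() // connects using DOCKER_HOST and friends
	if err != nil {
		panic(err)
	}

	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{})
	if err != nil {
		panic(err)
	}

	for _, c := range containers {
		// Inspect the container's process table for a Java process.
		top, err := cli.ContainerTop(ctx, c.ID, nil)
		if err != nil {
			continue
		}
		for _, proc := range top.Processes {
			// The last column of the ps output is the command line.
			if strings.Contains(proc[len(proc)-1], "java") {
				fmt.Printf("Java process found in container %s\n", c.ID[:12])
				// Next steps (omitted here): cli.CopyToContainer pushes
				// agentArchive.tar, and an exec runs attachAgent.sh inside.
			}
		}
	}
}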

agentArchive.tar

Tar files are the only type of file allowed to be pushed to containers by the Docker API. This file contains the Java agent, a copy of tools.jar (because most containers will not have a full JDK), and a script named attachAgent.sh.

The attachAgent.sh file is responsible for doing the actual dynamic attaching, and includes the Java command with the -Xbootclasspath parameter mentioned earlier.

This script must be run from inside the container. If dynamic attachment is attempted from the host machine, it will fail because the attaching user must match the user that owns the JVM process.
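For reference, the attach command inside attachAgent.sh has roughly this shape (paths and the process ID are illustrative):

java -Xbootclasspath/a:/opt/appdynamics/tools.jar -jar /opt/appdynamics/AppServerAgent/javaagent.jar <PID>

The tools.jar on the boot classpath supplies the JDK attach API that most JRE-only containers lack, which is why it ships inside agentArchive.tar.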

The Power of Dynamic Attach

Even though the Dynamic Attach approach to agent injection is not commonly used, we have seen that it can be a powerful tool for getting a Dockerized Java environment instrumented very quickly. It also provides new possibilities to customers who might not have the ability to modify a Docker image or its runtime parameters.

We’re just beginning to explore the possibilities of what we can do with the Dynamic Attach approach. We’re working to expand this solution to work with other agents, too, such as NodeJS. Also, we’ve already started working on a similar Java solution packaged in a Kubernetes operator…but we’ll talk more about that next time.

This blog may contain product roadmap information of AppDynamics. AppDynamics reserves the right to change any product roadmap information at any time, for any reason and without notice. This information is intended to outline AppDynamics’ general product direction, it is not a guarantee of future product features, and it should not be relied on in making a purchasing decision. The development, release, and timing of any features or functionality described for AppDynamics’ products remains at AppDynamics’ sole discretion. AppDynamics reserves the right to change any planned features at any time before making them generally available as well as never making them generally available.


AWS re:Invent—What do Black Friday and Cyber Monday Have in Common?

With the genesis of Amazon Web Services, enterprises of all sizes can now take advantage of the public cloud to deliver significantly more agility and control. With AWS, elastic infrastructure is easier to attain, and usage spikes are an afterthought.

Only days apart, Black Friday and Cyber Monday are arguably the two biggest days in retail. They’re what make “web scale” a requirement for leading eCommerce organizations throughout the world. For months in advance, IT operations teams map, plan and prepare for the impending shopping crush. It’s capacity planning at its finest. And beyond eCommerce, a move to the cloud is prudent for many businesses that may encounter major traffic events like a security attack or a runaway product success.

History of the AWS Cloud

From the humble beginnings of Amazon Elastic Compute Cloud (EC2) in 2006, Amazon Web Services (AWS) put public cloud on the map. AWS continues to invest heavily in innovation, helping both the public and private sector harness the power of cloud computing. Ironically, during Cyber Monday, the largest eCommerce day of the year, AWS is kicking off its largest event of the year—AWS re:Invent—which showcases the latest and greatest AWS has to offer, as well as the transformational journeys of its clients.

AWS re:Invent 2018

For the past several years, AWS re:Invent has been the showcase of large web scale in the public cloud, and 2018 will be no different. AppDynamics is excited to return to showcase our strong partnership with AWS and tell the stories of our joint customers. Plenty of very impressive sessions will take place across the Las Vegas Strip from a wide ecosystem of clients and vendors sharing challenges, triumphs, and best practices.

Join AppDynamics at re:Invent

During the show, you’ll have an opportunity to network with 50,000 of your closest friends, as well as attend workshops, sessions, parties and everything in between. It is Vegas, after all.

As part of your AWS re:Invent experience, please stop by and learn from experts and customers at the AppDynamics Theatre, located in booth #810. No matter where you are in your cloud journey, we’re confident you’ll learn something new, be better prepared to migrate to the cloud with confidence, and monitor your workloads once you’ve arrived.

We will be running continuous sessions throughout the show (every half hour) covering the shift of cloud workloads to serverless and containers, maturing DevOps capabilities and processes, and the impending shift to AIOps.

AIOps: Fix Before Failure

Here’s one more term for your enterprise software buzzword bingo: AIOps—the adoption of artificial intelligence in IT operations. Continuous improvement of your platforms without administrator intervention is closer than you think. With advancements in visibility, insight and action, the AppDynamics Platform can now take action for our customers—making AIOps a reality, not a buzzword.

The Cloud’s Best Friends

Serverless and container technologies are not new, but thanks to advancements in the AWS stack around Lambda, ECS/EKS and Fargate, the adoption of these underlying concepts is exploding. In Gartner’s bimodal IT model, both mode-one and mode-two organizations can benefit from AWS services. Join our experts to make sure you’re maximizing your investment in the latest and greatest AWS has to offer.

See AppDynamics on Stage

AppDynamics Senior Solutions Architect Subarno Mukherjee is leading an AWS session, “Five Ways Application Insights Impact Migration Success,” on Tuesday, November 27th, at 10 AM PT at The Venetian. No matter where you are in your cloud journey, you’ll have a great opportunity to learn from our experts and customers.

Time to Party?

AppDynamics and AWS are throwing a fantastic Happy Hour after Thursday’s sessions close and before the re:Play party. If you’d like to get in on the action, contact your AWS or AppDynamics account manager and we’ll add you to the list. It’s not a party you’ll want to miss!

Looking to re:Invent

We’re really excited to see you at AWS re:Invent. Be sure to sign up for our AWS Session and stop by booth #810. We’ll be active on our social channels (Twitter / Instagram) during the event as well. Hopefully you’ll make a guest appearance. See you there!

Best Practices for Instrumenting Containers with AppDynamics Agents

In this blog I will show some best practices for instrumenting Docker containers, using docker-compose with a few popular AppDynamics application agent types. The goal here is to avoid rebuilding your application containers in the event of an agent upgrade, or having to hard-code AppDynamics configuration into your container images. In my role as a DevOps engineer working on AppDynamics’ production environments, I use these techniques to simplify our instrumented container deployments. I’ll cover the install of binary agents like the Java agent, as well as agents installed via a repository such as Node.js or Python.

Before getting into the best practices, let’s review the most common deployment pattern—which isn’t a best practice at all.

Common (but not best-practice) Pattern:  Install Agent During Container Image Build

The first approach we’ll cover is installing the agent via Dockerfile as part of the application container build. This has the advantage of following the conventional practice of easily copying in your source files and providing transparency of the build in your Dockerfile, making adoption simpler and more intuitive. AppDynamics does not recommend this approach, however, as it requires a fresh copy of your application image to be rebuilt every time an agent needs an upgrade. This is inefficient and unnecessary because the agent is not a central part of your application code. Additionally, hard-coding the agent install in this manner may prove more difficult when you automate your builds and deployments.

Java Example

In this Dockerfile example for installing the Java agent, we have the binary stored in AWS S3 and simply copy over the agent during build time of the application image.

Dockerfile snippet: Copy from S3
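The snippet itself is roughly as follows (bucket URL and agent version are illustrative, and we assume curl and unzip exist in the base image):

FROM tomcat:9-jre8
# Pull the agent binary from S3 at build time
RUN curl -sL -o /tmp/AppServerAgent.zip \
      https://s3.amazonaws.com/my-agent-bucket/AppServerAgent-4.5.0.zip \
 && unzip /tmp/AppServerAgent.zip -d /opt/appdynamics/AppServerAgent \
 && rm /tmp/AppServerAgent.zip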

Here is a similar step where we copy the agent locally.

Dockerfile snippet: Copy locally
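A representative version (paths illustrative):

FROM tomcat:9-jre8
# Copy the agent from the local build context instead of S3
COPY AppServerAgent.zip /tmp/
RUN unzip /tmp/AppServerAgent.zip -d /opt/appdynamics/AppServerAgent \
 && rm /tmp/AppServerAgent.zip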

Node.js Example

In this example, we use npm to install a specific Node.js agent version during build time.

Dockerfile snippet
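Something like the following (version number illustrative; the agent package on npm is named appdynamics):

FROM node:8
WORKDIR /usr/src/app
COPY . .
# Pin a specific agent version at build time
RUN npm install appdynamics@4.5.0
CMD ["node", "index.js"]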

Python Example

In this example, we use pip to install a specific Python agent version during build time.

Dockerfile snippet
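Something like the following (version number illustrative; the agent package on PyPI is named appdynamics):

FROM python:2.7
WORKDIR /usr/src/app
COPY . .
# Pin a specific agent version at build time
RUN pip install appdynamics==4.5.0
CMD ["pyagent", "run", "--", "python", "app.py"]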

Best Practices Pattern: Install Agents at Runtime Using Environment Variables and Sidecar Container

The below examples cover two different patterns, depending on agent type. For Java and similarly packaged agents, we’ll use something called a “sidecar container” to install the agent at container runtime.  For repository-installed agents like Node.js and Python, we’ll use environment variables and a startup script that will install the agent at container runtime.

Java Example

For the sidecar container pattern, we build a container image containing the agent binary we want to install. We then volume-mount the directory that contains the agent, so our application container can copy and install the agent at container runtime. This can be simplified further by unpackaging the agent in the sidecar container, volume-mounting the newly unpackaged agent directory, and having the application container point to the volume-mounted directory and use it as its agent directory. We’ll cover both examples below, starting with how we create the sidecar container, or “agent-repo.”

In the Dockerfile example for the Java agent, we store the binary in AWS S3 (in an agent version-specific bucket) and simply copy the agent during build time. We then unzip the agent, allowing us either to copy the agent to the application container and unzip it there, or simply to point to the unzipped agent directory. Notice we use a build ARG, which allows for a more automated build using a build script.

Agent Repo Dockerfile: Copy from S3
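A plausible reconstruction (base image and bucket layout are illustrative):

FROM alpine:3.8
ARG AGENT_VERSION
RUN apk add --no-cache curl unzip \
 && mkdir -p /sharedFiles \
 && curl -sL -o /sharedFiles/AppServerAgent.zip \
      https://s3.amazonaws.com/my-agent-bucket/${AGENT_VERSION}/AppServerAgent.zip \
 && unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent
# Keep the container alive so its volume stays mountable
CMD ["tail", "-f", "/dev/null"]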

Here’s the same example as above, but one where we copy the agent locally without using a build ARG.

Agent Repo Dockerfile: Copy locally
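The local variant, without a build ARG:

FROM alpine:3.8
RUN apk add --no-cache unzip && mkdir -p /sharedFiles
COPY AppServerAgent.zip /sharedFiles/
RUN unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent
CMD ["tail", "-f", "/dev/null"]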

The build script utilizes a build ARG. If you’re using the S3 pattern above, this allows you to pass in whichever agent version you like.

build.sh
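A minimal build.sh along these lines (image tag naming is illustrative):

#!/bin/bash
# Pass the desired agent version through to the Dockerfile ARG
AGENT_VERSION=${1:-4.5.0}
docker build --build-arg AGENT_VERSION=${AGENT_VERSION} -t agent-repo:${AGENT_VERSION} .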

Now that we have built our sidecar container image, let’s cover how to build the Java agent container image to utilize this agent deployment pattern.

In the Docker snippet below, we copy in two new scripts, extractAgent.sh and startup.sh. The extractAgent.sh script copies and extracts the agent from the volume-mounted directory, /sharedFiles/, to the application container. The startup.sh script is used as our ENTRYPOINT.  This script will call extractAgent.sh and start the application.

Java Dockerfile snippet
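A representative version of the snippet (war file name illustrative):

FROM tomcat:9-jre8
COPY extractAgent.sh startup.sh /
RUN chmod +x /extractAgent.sh /startup.sh
COPY myapp.war /usr/local/tomcat/webapps/
ENTRYPOINT ["/bin/sh", "/startup.sh"]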

extractAgent.sh
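A plausible reconstruction, based on the description that follows:

#!/bin/sh
# Copy the agent archive from the volume-mounted directory and unzip it
# into the Tomcat home directory
cp /sharedFiles/AppServerAgent.zip ${CATALINA_HOME}/
unzip ${CATALINA_HOME}/AppServerAgent.zip -d ${CATALINA_HOME}/AppServerAgent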

The startup.sh script (below) calls extractAgent.sh, which copies and unzips the agent into the $CATALINA_HOME directory. We then pass in that directory as part of our Java options in the application-startup command.

startup.sh snippet
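Roughly:

#!/bin/sh
/extractAgent.sh
# Point the JVM at the freshly extracted agent, then start Tomcat
export JAVA_OPTS="$JAVA_OPTS -javaagent:${CATALINA_HOME}/AppServerAgent/javaagent.jar"
catalina.sh run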

In the docker-compose.yml, we simply add the agent-repo container with volume mount. Our Tomcat container references the agent-repo container and volume, but also uses agent-dependent environment variables so that we don’t have to edit any configuration files. This makes the deployment much more automated and portable/reusable.

docker-compose.yml
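A sketch of such a compose file (service names, values, and variable names are illustrative; the environment variables feed the agent’s startup parameters):

version: '3'
services:
  agent-repo:
    image: agent-repo:4.5.0
    volumes:
      - sharedFiles:/sharedFiles
  tomcat-app:
    build: .
    depends_on:
      - agent-repo
    ports:
      - "8080:8080"
    environment:
      - CONTROLLER_HOST=my.controller.com
      - CONTROLLER_PORT=8090
      - APPLICATION_NAME=MyApp
      - TIER_NAME=WebTier
    volumes:
      - sharedFiles:/sharedFiles
volumes:
  sharedFiles: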

In the example below, we show another way to do this. We skip the entire process of adding the extractAgent.sh and startup.sh scripts, electing instead to copy a customized catalina.sh script and using that as our CMD. This pattern still uses the agent-repo sidecar container, but points to the volume-mounted, unzipped agent directory as part of the $CATALINA_OPTS.

Java Dockerfile snippet
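A representative version:

FROM tomcat:9-jre8
COPY myapp.war /usr/local/tomcat/webapps/
# Ship a customized catalina.sh that points at the volume-mounted agent
COPY catalina.sh /usr/local/tomcat/bin/catalina.sh
CMD ["catalina.sh", "run"]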

catalina.sh snippet
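The relevant addition near the top of the customized catalina.sh might be:

# Use the unzipped agent from the agent-repo volume directly
CATALINA_OPTS="$CATALINA_OPTS -javaagent:/sharedFiles/AppServerAgent/javaagent.jar"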

OK, that covers the sidecar container agent deployment pattern. So what about agents that utilize a repository to install an agent? How do we automate that process so we don’t have to rebuild our application container image every time we want to upgrade our agents to a specific version? The answer is quite simple and similar to the examples above. We add a startup.sh script, which is used as our ENTRYPOINT, and then use environment variables set in the docker-compose.yml to install the specific version of our agent.

Node.js Example

Dockerfile snippet
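Something like this (port illustrative):

FROM node:8
WORKDIR /usr/src/app
COPY package.json index.js startup.sh ./
RUN chmod +x startup.sh
EXPOSE 3000
# startup.sh installs the agent at container runtime, then starts the app
ENTRYPOINT ["./startup.sh"]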

In our index.js that is copied in (not shown in the above Dockerfile snippet), we reference our agent-dependent environment variables, which are set in the docker-compose.yml.

index.js snippet
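A plausible reconstruction:

require("appdynamics").profile({
  controllerHostName: process.env.CONTROLLER_HOST,
  controllerPort: process.env.CONTROLLER_PORT,
  accountName: process.env.ACCOUNT_NAME,
  accountAccessKey: process.env.ACCOUNT_ACCESS_KEY,
  applicationName: process.env.APPLICATION_NAME,
  tierName: process.env.TIER_NAME
});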

In the startup.sh script, we use npm to install the agent. The version installed will depend on whether we specifically set the $AGENT_VERSION variable in the docker-compose.yml. If set, the version set in the variable will get installed. If not, the latest version will be installed.

startup.sh
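A plausible reconstruction:

#!/bin/sh
# Install the agent at container runtime; pin a version only if one is set
if [ -n "$AGENT_VERSION" ]; then
  npm install appdynamics@"$AGENT_VERSION"
else
  npm install appdynamics
fi
node index.js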

In the docker-compose.yml, we set the $AGENT_VERSION to the agent version we want npm to install. We also set our agent-dependent environment variables, allowing us to avoid hard-coding these values. This makes the deployment much more automated and portable/reusable.

docker-compose.yml
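A sketch (values illustrative):

version: '3'
services:
  node-app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - AGENT_VERSION=4.5.0
      - CONTROLLER_HOST=my.controller.com
      - CONTROLLER_PORT=8090
      - ACCOUNT_NAME=customer1
      - ACCOUNT_ACCESS_KEY=secret
      - APPLICATION_NAME=MyApp
      - TIER_NAME=NodeTier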

Python Example

This example is very similar to the Node.js example, except that we are using pip to install our agent.

Dockerfile snippet
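Along these lines:

FROM python:2.7
WORKDIR /usr/src/app
COPY app.py startup.sh ./
RUN chmod +x startup.sh
ENTRYPOINT ["./startup.sh"]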

In the startup.sh script, we use pip to install the agent. The version installed will depend on whether we specifically set the $AGENT_VERSION variable in the docker-compose.yml. If set, the version set in the variable will get installed. If not, the latest version will be installed.

startup.sh
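A plausible reconstruction (the agent also needs controller settings, supplied via environment variables or a config file):

#!/bin/sh
# Install the agent at container runtime; pin a version only if one is set
if [ -n "$AGENT_VERSION" ]; then
  pip install appdynamics=="$AGENT_VERSION"
else
  pip install appdynamics
fi
# pyagent wraps the application process so the agent can bootstrap it
pyagent run -- python app.py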

In the docker-compose.yml, we set the $AGENT_VERSION to the agent version we want pip to install. We also set our agent-dependent environment variables, allowing us to avoid hard-coding these values. This makes the deployment much more automated and portable/reusable.

docker-compose.yml


Pick the Best Pattern

There are many ways to instrument your Docker containers with AppDynamics agents. I have covered a few patterns and shown what works well for my team when managing a large Docker environment.

In the Common Pattern (but not best-practice) example, I showed how you must rebuild your application container every time you want to upgrade the agent version—not an ideal approach.

But with the Best Practices Pattern, you decouple the agent specifics from the application container images, and direct that responsibility to the sidecar container and the docker-compose environment variables.

Automation, whenever possible, is always a worthy goal. Following the Best Practices Pattern will allow you to improve script deployments, leverage version control and configuration management, and plug them all into CI/CD pipelines.

For in-depth information on related techniques, read these AppDynamics blogs:

Deploying AppDynamics Agents to OpenShift Using Init Containers

The AppD Approach: Composing Docker Containers for Monitoring

The AppD Approach: Leveraging Docker Store Images with Built-In AppDynamics


Deploying AppDynamics Agents to OpenShift Using Init Containers

There are several ways to instrument an application on OpenShift with an AppDynamics application agent. The most straightforward way is to embed the agent into the main application image. (For more on this topic, read my blog Monitoring Kubernetes and OpenShift with AppDynamics.)

Let’s consider a Node.js app. All you need to do is add a require reference to the agent libraries and pass in the necessary information about the controller. The reference itself becomes part of the app and will be embedded in the image. The list of variables the agent needs to communicate with the controller (e.g., controller host name, app/tier name, license) can also be embedded, though it is best practice to pass them into the app on initialization as configurable environment variables.

In the world of Kubernetes (K8s) and OpenShift, this task is accomplished with config maps and secrets. Config maps are reusable key value stores that can be made accessible to one or more applications. Secrets are very similar to config maps with an additional capability to obfuscate key values. When you create a secret, K8s automatically encodes the value of the key as a base64 string. Now the actual value is not visible, and you are protected from people looking over your shoulder. When the key is requested by the app, Kubernetes automatically decodes the value. Secrets can be used to store any sensitive data such as license keys, passwords, and so on. In our example below, we use a secret to store the license key.

Here is an example of AppD instrumentation where the agent is embedded, and the configurable values are passed as environment variables by means of a configMap, a secret and the pod spec.

var appDobj = {
  controllerHostName: process.env['CONTROLLER_HOST'],
  controllerPort: process.env['CONTROLLER_PORT'],
  controllerSslEnabled: true,
  accountName: process.env['ACCOUNT_NAME'],
  accountAccessKey: process.env['ACCOUNT_ACCESS_KEY'],
  applicationName: process.env['APPLICATION_NAME'],
  tierName: process.env['TIER_NAME'],
  nodeName: 'process'
}
require("appdynamics").profile(appDobj);

Pod Spec

- env:
  - name: TIER_NAME
    value: MyAppTier
  - name: ACCOUNT_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        key: appd-key
        name: appd-secret
  envFrom:
  - configMapRef:
      name: controller-config

A ConfigMap with AppD variables.
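For example (values illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-config
data:
  CONTROLLER_HOST: my.controller.com
  CONTROLLER_PORT: "8090"
  CONTROLLER_SSL_ENABLED: "false"
  ACCOUNT_NAME: customer1
  APPLICATION_NAME: MyApp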

AppD license key stored as secret.
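For example (the value is the base64 encoding of a sample key, as Kubernetes stores it):

apiVersion: v1
kind: Secret
metadata:
  name: appd-secret
type: Opaque
data:
  appd-key: YTVmNDI2YWNkZmYw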

The Init Container Route: Best Practice

The straightforward way is not always the best. Application developers may want to avoid embedding a “foreign object” into the app images for a number of good reasons—for example, image size, granularity of testing, or encapsulation. Being developers ourselves, we respect that and offer an alternative, a less intrusive way of instrumentation. The Kubernetes way.

An init container is a design feature in Kubernetes that allows decoupling of app logic from any type of initialization routine, such as monitoring, in our case. While the main app container lives for the entire duration of the pod, the lifespan of the init container is much shorter. The init container does the required prep work before orchestration of the main container begins. Once the initialization is complete, the init container exits and the main container is started. This way the init container does not run parallel to the main container as, for example, a sidecar container would. However, like a sidecar container, the init container, while still active, has access to the ephemeral storage of the pod.

We use this ability to share storage between the init container and the main container to inject the AppDynamics agent into the app. Our init container image, in its simplest form, can be described with this Dockerfile:

FROM openjdk:8-jdk-alpine
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN mkdir -p /sharedFiles/AppServerAgent
ADD AppServerAgent.zip /sharedFiles/
RUN unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent/
CMD ["tail", "-f", "/dev/null"]

The above example assumes you have already downloaded the archive with AppDynamics app agent binaries locally. When the container is initialized, it unzips the binaries into a new directory. To the pod spec, we then add a directive that copies the directory with the agent binaries to a shared volume on the pod:

spec:
  initContainers:
  - name: agent-repo
    image: agent-repo:x.x.x
    imagePullPolicy: IfNotPresent
    command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
    volumeMounts:
    - mountPath: /mountPath
      name: shared-files
  volumes:
  - name: shared-files
    emptyDir: {}
  serviceAccountName: my-account

After the init container exits, the AppDynamics agent binaries sit on the shared volume of the pod, waiting to be picked up by the application.

Let’s assume we are deploying a Java app, one normally initialized via a script that calls the java command with Java options. The script, startup.sh, may look like this:

# startup.sh
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.tierName=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.agent.reuse.nodeName.prefix=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -javaagent:/sharedFiles/AppServerAgent/javaagent.jar"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.controller.hostName=$CONTROLLER_HOST -Dappdynamics.controller.port=$CONTROLLER_PORT -Dappdynamics.controller.ssl.enabled=$CONTROLLER_SSL_ENABLED"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.accountName=$ACCOUNT_NAME -Dappdynamics.agent.accountAccessKey=$ACCOUNT_ACCESS_KEY -Dappdynamics.agent.applicationName=$APPLICATION_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.socket.collection.bci.enable=true"
JAVA_OPTS="$JAVA_OPTS -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"

java $JAVA_OPTS -jar myapp.jar

It is embedded into the image and invoked via Docker’s ENTRYPOINT directive when the container starts.

FROM openjdk:8-jdk-alpine
COPY startup.sh startup.sh
RUN chmod +x startup.sh
ADD myapp.jar /usr/src/myapp.jar
EXPOSE 8080
ENTRYPOINT ["/bin/sh", "startup.sh"]

To make the consumption of startup.sh more flexible and Kubernetes-friendly, we can trim it down to this:

#a more flexible startup.sh
java $JAVA_OPTS -jar myapp.jar

And declare all the necessary Java options in the spec as a single environmental variable.

containers:
- name: my-app
  image: my-app-image:x.x.x
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: true
  envFrom:
  - configMapRef:
      name: controller-config
  env:
  - name: ACCOUNT_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        key: appd-key
        name: appd-secret
  - name: JAVA_OPTS
    value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
      -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
      -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
      -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
      -Xms64m -Xmx512m -XX:MaxPermSize=256m
      -Djava.net.preferIPv4Stack=true
      …"
  ports:
  - containerPort: 8080
  volumeMounts:
  - mountPath: /sharedFiles
    name: shared-files

The dynamic values for the Java options are populated from the ConfigMap. First, we reference the entire configMap, where all shared values are defined:

envFrom:
- configMapRef:
    name: controller-config

We also reference our secret as a separate environmental variable. Then, using the $() notation, we can reference the individual variables in order to concatenate the value of the JAVA_OPTS variable.

Thanks to these Kubernetes features (init containers, configMaps, secrets), we can add AppDynamics monitoring into an existing app in a noninvasive way, without the need to rebuild the image.

This approach has multiple benefits. The app image remains unchanged in terms of size and encapsulation. From a Kubernetes perspective, no extra processing is added, as the init container exits before the main container starts. There is added flexibility in what can be passed into the application initialization routine without the need to modify the image.

Note that OpenShift does not allow running Docker containers as user root by default. If you must (for whatever good reason), add the service account you use for deployments to the anyuid SCC. Assuming your service account is my-account, as in the provided examples, run this command:

oc adm policy add-scc-to-user anyuid -z my-account

Here’s an example of a complete app spec with AppD instrumentation:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: my-app
    spec:
      initContainers:
      - name: agent-repo
        image: agent-repo:x.x.x
        imagePullPolicy: IfNotPresent
        command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
        volumeMounts:
        - mountPath: /mountPath
          name: shared-files
      volumes:
      - name: shared-files
        emptyDir: {}
      serviceAccountName: my-account
      containers:
      - name: my-app
        image: my-service
        imagePullPolicy: IfNotPresent
        envFrom:
        - configMapRef:
            name: controller-config
        env:
        - name: TIER_NAME
          value: WebTier
        - name: ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-key-secret
        - name: JAVA_OPTS
          value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
            -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
            -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
            -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
            -Xms64m -Xmx512m -XX:MaxPermSize=256m
            -Djava.net.preferIPv4Stack=true
            …"
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /sharedFiles
          name: shared-files
      restartPolicy: Always

Learn more about how AppDynamics can help monitor your applications on Kubernetes and OpenShift.

Advances In Mesh Technology Make It Easier for the Enterprise to Embrace Containers and Microservices

More enterprises are embracing containers and microservices, which bring along additional networking complexities. So it’s no surprise that service meshes are in the spotlight now. There have been substantial advances recently in service mesh technologies—including Istio’s 1.0, HashiCorp’s Consul 1.2.1, and Buoyant merging Conduit into Linkerd—and for good reason.

Some background: service meshes are pieces of infrastructure that facilitate service-to-service communication—the backbone of all modern applications. A service mesh allows for codifying more complex networking rules and behaviors such as a circuit breaker pattern. AppDev teams can start to rely on service mesh facilities, and rest assured their applications will perform in a consistent, code-defined manner.

Endpoint Bloom

The more services and replicas you have, the more endpoints you have. And with the container and microservices boom, the number of endpoints is exploding. With the rise of Platform-as-a-Services and container orchestrators, new terms like ingress and egress are becoming part of the AppDev team vernacular. As you go through your containerization journey, multiple questions will arise around the topic of connectivity. Application owners will have to define how and where their services are exposed.

The days of providing the networking team with a context/VIP to add to web infrastructure—such as services.acme.com/shoppingCart over port 443—are fading. Today, AppDev teams are more likely to hand over a Kubernetes YAML to add services.acme.com/shoppingCart to the Ingress controller, and then describe a behavior. Example: the shopping cart Pod needs to talk to the shopping cart validation Pod, which can only be accessed by the shopping cart because the inventory is kept on another set of Redis Pods, which can’t be exposed to the outside world.

You’re juggling all of this while navigating constraints set by defined and deployed Kubernetes networking. At this point, don’t be alarmed if you’re thinking, “Wow, I thought I was in AppDev—didn’t know I needed a CCNA to get my application deployed!”

The Rise of the Service Mesh

When navigating the “fog of system development,” it’s tricky to know all the moving pieces and connectivity options. With AppDev teams focusing mostly on feature development rather than connectivity, it’s very important to make sure all the services are discoverable to them. Investments in API management are the norm now, with teams registering and representing their services in an API gateway or documenting them in Swagger, for example.

But what about the underlying networking stack? Services might be discoverable, but are they available? Imagine a Venn diagram of AppDev vs. Sys Engineer vs. SRE: Who’s responsible for which task? And with multiple pieces of infrastructure to traverse, what would be a consistent way to describe networking patterns between services?

Service Mesh to the Rescue

Going back to the endpoint bloom, consistency and predictability are king. Over the past few years, service meshes have been maturing and gaining popularity. Here are some great places to learn more about them:

Service Mesh 101

In the Istio model, applications participate in a service mesh. Istio acts as the mesh, and then applications can participate in the mesh via a sidecar proxy—Envoy, in Istio’s case.

Your First Mesh

DZone has a very well-written article about standing up your first Java application in Kubernetes to participate in an Istio-powered service mesh. The article goes into detail about deploying Istio itself in Kubernetes (in this case, Minikube). For an AppDev team, the new piece would be creating the all-important routing rules, which are deployed to Istio.
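To give a flavor, a minimal Istio 1.0 routing rule (a VirtualService; the service and subset names here are hypothetical) looks like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: shopping-cart
spec:
  hosts:
  - shopping-cart
  http:
  - route:
    - destination:
        host: shopping-cart
        subset: v1
    timeout: 5s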

Which One of these Meshes?

The New Stack has a very good article comparing the pros and cons of the major service mesh providers. The post lays out the problem in granular format, and discusses which factors you should consider to determine if your organization is even ready for a service mesh.

Increasing Importance of AppDynamics

With the advent of the service mesh, barriers are falling and enabling services to communicate more consistently, especially in production environments.

If tweaks are needed on the routing rules—for example, a time out—it’s best to have the ability to pinpoint which remote calls would make the most sense for this task. AppDynamics has the ability to examine service endpoints, which can provide much-needed data for these tweaks.

For the service mesh itself, AppDynamics can even monitor the health of your applications deployed on a Kubernetes cluster.

With the rising velocity of new applications being created or broken into smaller pieces, AppDynamics can help make sure all of these components are humming at their optimal frequency.

Getting Started with Containers and Microservices

Get Ahead of Microservices and Container Proliferation with Robust App Monitoring

Containers and microservices are growing in popularity, and why not? They enable agility, speed, and resource efficiency for many tasks that developers work on daily. They are light in terms of coding and interdependencies, which makes it much easier and less time-consuming to deliver apps to app users or migrate applications from legacy systems to cloud servers.

What Are Containers and Microservices?

Containers are isolated workload environments in a virtualized operating system. They speed up workload processes and application delivery because they can be spun up quickly; and they provide a solution for application-portability challenges because they are not tied to software on physical machines.

Microservices are a type of software architecture that is light and limited in scope. Single-function applications comprise small, self-contained units working together through APIs that are not dependent on a specific language. A microservices architecture is faster and more agile than traditional application architecture.

The Importance of Monitoring

For containers and microservices to be most effective and impactful as they are adopted, technology leaders must prepare a plan on how to monitor and code within them. They also must understand how developers will use them.

Foundationally, all pieces and parts of an enterprise technology stack should be planned, monitored, and measured. Containers and microservices are no exception. Businesses should monitor them to manage their use according to a planned strategy, so that best practice standards (i.e., security protocols, sharing permissions, when to use and not use, etc.) can be identified, documented, and shared. Containers and microservices also must be monitored to ensure both the quality and security of digital products and assets.

To do all of this, an organization needs robust application monitoring capabilities that provide full visibility into the containers and microservices, as well as insight into how they are being used and their influence on goals, such as better productivity or faster time-to-market.

Assessing Your Application Monitoring Capabilities

Some of the questions that enterprises should ask as they assess their application-monitoring capabilities are:

  • How can we ensure development and operations teams are working together to use containers and microservices in alignment with enterprise needs?

  • Will we build our own system to manage container assignment, clustering, etc.? Or should we use third-party vendors that will need to be monitored?

  • Will we be able to monitor code inside containers and the components that make up microservices with our current application performance management (APM) footprint?

  • Do we need more robust APM to effectively manage containers and microservices? And how do we determine the best solution for our needs?

To answer those questions and learn more about containers and microservices—and how to effectively use and manage them—read Getting Started With Containers and Microservices: A Mini Guide for Enterprise Leaders.

This mini eBook expands on the topics discussed in this blog and includes an 8-point plan for choosing an effective APM solution.

Go to the guide.

The AppD Approach: Leveraging Docker Store Images with Built-In AppDynamics

In my previous blog we explored some of the best and worst practices of Docker, taking a hands-on approach to refactoring an application, always with containers and monitoring in mind. In that project, we chose to use physical agents from the AppDynamics download site as our monitoring method. But this time we are going to take things one step further: using images from the Docker Store to improve the same application.

Modern applications are very complex, of course, and we will show the three most common ways to use AppDynamics Docker Store Images to monitor your app, all while adhering to Docker best practices. We will continue to use this repo and move between the “master” and “docker-store-images” branches. If you haven’t read my previous post, I recommend doing so first, as we will build on the source code used there.

First Things First: The Image

Over at the AppDynamics page on the Docker Store (login required), we have three types of images for Java applications, each with our agents on them. In this project, we will work solely with the Machine Agent and Java images but, in principle, the scenarios and implementations are language-agnostic. The images are based on OpenJDK, Tomcat, and Jetty. Since our application uses Tomcat, we will use that image. (store/appdynamics/java:4.3.7.1_tomcat9-jre8).

You can see how every image is versioned as store/appdynamics/<language>:<agent-version>_<server-runtime-environment>.

By inspecting the image (docker inspect store/appdynamics/java:4.3.7.1_tomcat9-jre8), we are able to verify important environment variables, including the fact that we’re running Java 8. We’re also able to identify the JAVA_HOME variable, which will play an important role (more on this below). Furthermore, we can verify that Tomcat is installed with the correct versions and paths. Finally, we notice the command to start the agent is simply the catalina.sh run command. On startup, the image runs Tomcat and the agent. (This is important to note as we dive deeper.)

If you plan to use a third-party image in your production application, the image must be trusted. This means it must not modify any other containers at runtime. Having an OS-level agent force itself into a containerized app—and intrusively modify code at runtime—defeats one of the main advantages of containerization: the ability to run the container anywhere without worrying about how your code executes. Always keep this in mind when evaluating container monitoring software. (AppDynamics’ images pass this test, by the way.)

Here are the three most practical migrations you’re likely to face:

Scenario 1: A Perfect World

This is the best-case scenario, but also the least practical. In a perfect world, we’d be able to pull the image as a top layer, pass in only environment variables, and see our application discovered within minutes. However, in our situation, we can’t do this because we have a custom startup script that we want to run when our container starts. In this example, we’ve chosen to use Dockerize (https://github.com/jwilder/dockerize) to simplify the process of converting our application to use Docker, but of course there are many situations where you might need some custom start logic in your containers. If you’re not using Dockerize in your script, simply pull the image and pass in the environment variables that name the individual components. Since the agents run on startup, this method will be seamless.
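In that ideal case, starting an instrumented container is a single docker run. The variable names below are representative; check the image’s Docker Store documentation for the exact list:

docker run -d -p 8080:8080 \
  -e APPDYNAMICS_CONTROLLER_HOST_NAME=my.controller.com \
  -e APPDYNAMICS_CONTROLLER_PORT=8090 \
  -e APPDYNAMICS_AGENT_ACCOUNT_NAME=customer1 \
  -e APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=secret \
  -e APPDYNAMICS_AGENT_APPLICATION_NAME=MyApp \
  -e APPDYNAMICS_AGENT_TIER_NAME=WebTier \
  store/appdynamics/java:4.3.7.1_tomcat9-jre8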

Scenario 2: Install Tomcat

Ideally, we’d like to make as few changes as possible. The problem here is that we have a unique startup script that needs to run when each project is started. In this scenario, your workaround is to use the agent image that doesn’t have Tomcat—in other words, use store/appdynamics/java:4.3.7.1 and install Tomcat on the image. With this approach, you remove the overlapping start commands and top-level agent. The downside is reinstalling Tomcat on every image rebuild.

Scenario 3: Refactor Start Script

Here’s the most common scenario when migrating from a physical agent—and one we found ourselves in. A specific run script brings up all of your applications. Refactoring your apps to pull from the image and start your application would be too time consuming, and would ask too much of the customer. The solution: Combine the two start scripts.

In our scenario, we had a directory responsible for the server, and another responsible for downloading and installing the agents. Since we were using Tomcat, we decided to leverage the image with Tomcat and our monitoring software, which was already installed <store/appdynamics/java:4.3.7.1_tomcat9-jre8>. (We went with the official Tomcat image because it’s the one used by the AppDynamics image.)

In our startup script, AD-Capital-Docker/ADCapital-Tomcat/startup.sh, we used Dockerize to spin up all the services. You’ll notice that we added a couple of environment variables, ${APPD_JAVAAGENT} and ${APPD_PROPERTIES}, to each start command. In our existing version, these changes enable the script to see if AppD properties are set and, if so, to start the application agent.

The next step was to refactor the startup script to use our new image. (To get the agent start command, simply pull the image, run the container, and run ps -ef at the command line.)

Since Java was installed to a different location, we had to put its path in our start command, replacing "java" with "/docker-java-home/jre/bin/java". This approach allowed us to ensure that our application was using the Java provided by the image.

Next, we needed to make sure we were starting the services using Tomcat, and with the start command from the AppDynamics agent image. By using the command from above, we were able to replace our Catalina startup:

-cp ${CATALINA_HOME}/bin/bootstrap.jar:${CATALINA_HOME}/bin/tomcat-juli.jar org.apache.catalina.startup.Bootstrap

…with the agent startup:

-Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djdk.tls.ephemeralDHKeySize=2048
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources
-javaagent:/opt/appdynamics/javaagent.jar -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
-Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat
-Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start

If you look closely, though, not all of the services were using Tomcat on startup. The last two services simply needed to start the agent. By reusing the same environment variable, we were able to set its value to the path of the agent jar. And with that, we had our new startup script.

[startup.sh (BEFORE)]

[startup.sh (AFTER)]

Not only did this approach allow us to get rid of our AppDynamics directory, it also enabled a seamless transition to monitoring via Docker images.

The AppD Approach: Composing Docker Containers for Monitoring

Since its introduction four years ago, Docker has vastly changed how modern applications and services are built. But while the benefits of microservices are well documented, the bad habits aren’t.

Case in point: As people began porting more of their monolithic applications to containers, Dockerfiles ended up becoming bloated, defeating the original purpose of containers. Any package or service you thought you needed was installed on the image. Any minor change in source or server configuration forced you to rebuild the image. People would package multiple processes into a single Dockerfile. And obviously, as the images got bigger, things became much less efficient, because you would spend all of your time waiting on a rebuild just to check a simple change in source code.

The quick fix was to layer your applications. Maybe you had a base image, a language-specific image, a server image, and then your source code. While your images became more contained, any change to your bottom-level images would require an entire rebuild of the image set. Although your Dockerfiles became less bloated, you still suffered from the same upgrade issues. With the industry becoming more and more agile, this practice didn’t feel aligned.

The purpose of this blog is to show how we migrated an application to Docker—highlighting the Docker best practices we implemented—and how we achieved our end goal of monitoring the app in AppDynamics. (Source code located here)

Getting Started

With these best (and worst) practices in mind, we began by taking a multi-service Java application and putting it into Docker Compose. We wanted to build out the containers with the Principle of Least Privilege: each system component or process should have the least authority needed to complete its tasks. The containers needed to be ephemeral too, always shutting down when a SIGTERM is received. Since there were going to be environment variables reused across multiple services, we created a docker-compose.env file (image below) that could be leveraged across every service.

[AD-Capital-Docker/docker-compose.env]
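A sketch of what such a shared env file contains (variable names and values are illustrative, not the repo’s actual contents):

CONTROLLER_HOST=my.controller.com
CONTROLLER_PORT=8090
CONTROLLER_SSL_ENABLED=false
ACCOUNT_NAME=customer1
ACCOUNT_ACCESS_KEY=secret
APPLICATION_NAME=ADCapital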

Lastly, we knew that for our two types of log data—Application and Agent—we would need to create a shared volume to house it.

[AD-Capital-Docker/docker-compose.yml]

Instead of downloading and installing Java or Tomcat in the Dockerfile, we decided to pull the images directly from the official Tomcat in the Docker Store. This would allow us to know which version we were on without having to install either Java or Tomcat. Upgrading versions of Java or Tomcat would be easy, and would leave the work to Tomcat instead of on our end.

We knew we were going to have a number of services dependent on each other and linking through Compose, and that a massive bash script could cause problems. Enter Dockerize, a utility that simplifies running applications in Docker containers. Its primary role is to wait for other services to be available using TCP, HTTP(S) and Unix before starting the main process.

Some backstory: When using tools like Docker Compose, it’s common to depend on services in other linked containers. But oftentimes relying on links is not enough; while the container itself may have started, the service(s) within it may not be ready, resulting in shell script hacks to work around race conditions. Dockerize gives you the ability to wait for services on a specified protocol (file, TCP, TCP4, TCP6, HTTP, HTTPS and Unix) before starting your application. You can use the -timeout # argument (default: 10 seconds) to specify how long to wait for the services to become available. If the timeout is reached and the service is still not available, the process exits with status code 1.

[AD-Capital-Docker/ADCapital-Tomcat/startup.sh]
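The shape of that startup script, greatly simplified (service names, ports, and variables are illustrative):

#!/bin/bash
# Block until a dependent service is accepting connections, then start
dockerize -wait tcp://rest:8081 -timeout 120s
java ${APPD_JAVAAGENT} ${APPD_PROPERTIES} \
  -cp ${CATALINA_HOME}/bin/bootstrap.jar:${CATALINA_HOME}/bin/tomcat-juli.jar \
  org.apache.catalina.startup.Bootstrap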

We then separated the source code from the agent monitoring. (The project uses a Docker volume to store the agent binaries and log/config files.) Now that we had a single image pulled from Tomcat, we could place our source code in the single Dockerfile and replicate it anywhere. Using prebuilt war files, we could download source from a different time, and place it in the Tomcat webapps subdirectory.

[AD-Capital-Docker/ADCapital-Project/Dockerfile]
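In essence (war file names illustrative):

FROM tomcat:9-jre8
# Drop prebuilt war files into the Tomcat webapps directory
COPY portal.war processor.war /usr/local/tomcat/webapps/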

We now had a Dockerfile containing everything needed for our servers, and a Dockerfile for the source code, allowing you to run it with or without monitoring enabled. The next step was to split out the AppDynamics Application and Machine Agent.

We knew we wanted to instrument with our agents, but we didn’t want a configuration file with duplicate information for every container. So we created a docker-compose.env. Since our agents require minimal configuration—and the only difference between “tiers” and “nodes” are their names—we knew we could pass these env variables across the agents without using multiple configs. In our compose file, we could then specify the tier and node name for the individual services.

[AD-Capital-Docker/docker-compose.yml]

For the purpose of this blog, we downloaded the agent and passed in the filename and SHA-256 checksum via shell scripts in the ADCapital-Appdynamics/docker-compose.yml file. We were able to pass the application agent and the configuration script that runs AppDynamics to the shared volume, which would allow the individual projects to use it on startup (see the startup.sh snippet below).

Now that we had enabled application monitoring for our apps, we wanted to install the machine agent to enable analytics. We followed the same instrumentation process, downloading the agent and verifying the filename and checksums. The machine agent is a standalone process, so our configuration script was a little different, but it took advantage of the docker-compose.env variable names to set the right parameters for the machine agent (ADCapital-Monitor/start-appdynamics).

[AD-Capital-Docker/ADCapital-AppDynamics/startup.sh]

The payoff? We now have an image responsible for the server, one responsible for the load, and another responsible for the application. In addition, another image monitors the application, and a final image monitors the application’s logs and analytics. Updating an individual component will not require an entire rebuild of the application. We’re using Docker as it was intended: each container has one responsibility. Lastly, by using volumes to share data across services, we can easily check agent and application Logs. This makes it much easier to gain visibility into the entire landscape of our software.

If you would like to see the source code used for this blog, it is located here with instructions on how to build and set up. In the next blog, we will show you how to migrate from host agents, using Docker images from the Docker Store.

Scaling with Containers at AppSphere 2016

Containers have grown tremendously in popularity in recent years. Originally conceived as a way to replace legacy systems completely, container technology has instead become a way to extend monolithic systems with newer, faster technology. As an example of this growth, the 2016 RightScale State of the Cloud Report™ shows Docker adoption rates in 2015 moved from thirteen percent to twenty-seven percent. Another thirty-five percent of respondents say they have plans to use Docker in the near future.

What Are Containers?

Containers allow you to move software from one environment to another without worrying about different applications, SSL libraries, network topology, storage systems, or security policies — for example, moving from a machine in your data center to a virtual environment in the cloud. They are able to do this because everything you need to run the software travels as one unit. The application, binaries, libraries, and configuration files all live together inside a single container.

You can move a container to a wide variety of software environments with no problem because the program is self-contained. In contrast, virtualization also includes the operating system. Containers share the same operating system kernel, so they are lighter and more energy-efficient than a virtual machine. Hypervisors are an abstraction of the entire machine, while containers are an abstraction of only the OS kernel.

There are a variety of container technologies to support different use cases. The most popular container technology right now is Docker. It grew rapidly a few years ago with major adoption in enterprise computing, including three of the biggest financial institutions in the world — unusual for the slow-to-adopt world of banking. Docker allows software applications to run on a large number of machines at the same time, an important quality for huge sites like Facebook that must deliver data to millions of consumers simultaneously.

Container Technologies

Recent surveys performed by DevOps.com and ClusterHQ show Docker is the overwhelming favorite in container technology at this point. One of the most talked-about competitors to Docker that has emerged recently is Rocket, an open-source project from CoreOS, which ironically was one of Docker’s early proponents. Backed heavily by Google, Rocket’s founders developed the technology because they thought Docker had grown and moved too far away from its original purpose. While Docker has been embraced as almost an industry standard, competitors are making inroads. Rocket’s founders say one of its strengths is that it is not controlled by a single organization.

One of the pioneers in container technology, dating back to 2001, is a product from Parallels called Virtuozzo. It gets a lot of attention from OEMs, works well on cloud servers, and features near-instant provisioning. Other fast-growing container technologies include LXC and LVE.

Container Best Practices

One of the challenges of containers is monitoring their performance. AppDynamics is able to monitor containers using our innovative Microservices iQ. It provides automatic discovery of exit and entry service endpoints, tracks important performance indicators, and isolates the cause of performance issues.

At AppSphere 2016, you can learn more about containers and performance monitoring at 10 AM on Thursday, November 17, when AppDynamics CTO Steve Sturtevant will present his talk, “Best Practices for Managing IaaS, PaaS, and Container-Based Deployments.” Register today to reserve your spot for Steve’s session and everything else AppSphere 2016 has to offer in just a few weeks. We’re looking forward to seeing you there!

The Importance of Monitoring Containers [Infographic]

With the rise of Docker, Kubernetes, and other container technologies, the growth of microservices has skyrocketed among dev teams looking to innovate on a faster release cycle. This has enabled teams to finally realize their DevOps goals to ship and iterate quickly in a continuous delivery model. It’s no surprise that containers are growing in popularity: they’re extremely easy to spin up or down. But they come with an unforeseen cost.

Without the right foresight, DevOps and IT teams may lose visibility into these containers, resulting in operational blind spots and even more haystacks in which to find the presumptive performance-issue needle.

If your team is looking towards containers and microservices as an operational change in how you decide to ship your product, you can’t afford bugs or software issues affecting your performance, end-user experience, or ultimately your bottom line.

Ed Moyle, Director of Emerging Business & Technology at ISACA, said it best in his blog: “Consider what happens to these issues when containers enter into the mix. Not only are all the VM issues still there, but they’re now potentially compounded. Inventories that were already difficult to keep current because of VM sprawl might now have to accommodate containers, too. For example, any given VM could contain potentially dozens of individual containers. Issues arising from unexpected migration of VM images might be made significantly worse when the containers running on them can be relocated with a few keystrokes.”

Earlier this year, AppDynamics unveiled Microservices iQ to address these visibility issues daunting DevOps teams today.

Infographic – Container Monitoring 101 from AppDynamics

With Microservices iQ, DevOps teams can:

  • Automatically discover the entry and exit points of your microservice as service endpoints for focused microservices monitoring

  • Track the key performance indicators of your microservice without worrying about the entire distributed business transaction that uses it

  • Drill down and isolate the root cause of any performance issues affecting the microservice

Interested in learning more? Check out our free ebook, The Importance of Monitoring Containers.