Deploying AppDynamics Agents to OpenShift Using Init Containers

There are several ways to instrument an application on OpenShift with an AppDynamics application agent. The most straightforward way is to embed the agent into the main application image. (For more on this topic, read my blog Monitoring Kubernetes and OpenShift with AppDynamics.)

Let’s consider a Node.js app. All you need to do is add a require reference to the agent libraries and pass the necessary information about the controller. The reference itself becomes part of the app and will be embedded in the image. The list of variables the agent needs to communicate with the controller (e.g., controller host name, app/tier name, license) can be embedded, though it is best practice to pass them to the app on initialization as configurable environment variables.

In the world of Kubernetes (K8s) and OpenShift, this task is accomplished with config maps and secrets. Config maps are reusable key-value stores that can be made accessible to one or more applications. Secrets are very similar to config maps, with the additional capability to obfuscate key values. When you create a secret, K8s automatically encodes the value of the key as a base64 string. The actual value is no longer visible, and you are protected from people looking over your shoulder. When the key is requested by the app, Kubernetes automatically decodes the value. Secrets can be used to store any sensitive data such as license keys, passwords, and so on. In our example below, we use a secret to store the license key.
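For illustration, here is roughly what that encoding and decoding looks like. The secret name `appd-secret`, key `appd-key` and the license value `my-license-key` are placeholders used throughout this post:

```shell
# Creating the secret on the cluster would look like this (shown for context):
#   oc create secret generic appd-secret --from-literal=appd-key=my-license-key
# Kubernetes stores the value base64-encoded -- obfuscation, not encryption:
encoded=$(printf '%s' 'my-license-key' | base64)
echo "$encoded"                       # bXktbGljZW5zZS1rZXk=
printf '%s' "$encoded" | base64 -d    # decodes back to my-license-key
```

Note that base64 is trivially reversible; for real protection of license keys you would combine secrets with RBAC and encryption at rest.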

Here is an example of AppD instrumentation where the agent is embedded, and the configurable values are passed as environment variables by means of a configMap, a secret and the pod spec.

var appDobj = {
   controllerHostName: process.env['CONTROLLER_HOST'],
   controllerPort: process.env['CONTROLLER_PORT'],
   controllerSslEnabled: true,
   accountName: process.env['ACCOUNT_NAME'],
   accountAccessKey: process.env['ACCOUNT_ACCESS_KEY'],
   applicationName: process.env['APPLICATION_NAME'],
   tierName: process.env['TIER_NAME'],
   nodeName: 'process'
};
require("appdynamics").profile(appDobj);

Pod Spec
- env:
    - name: TIER_NAME
      value: MyAppTier
    - name: ACCOUNT_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          key: appd-key
          name: appd-secret
  envFrom:
    - configMapRef:
        name: controller-config

A ConfigMap with AppD variables.

AppD license key stored as secret.
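The ConfigMap and secret referenced above appear in the original post as screenshots. A minimal YAML sketch, assuming the variable names used in this post (the host, account and key values are placeholders), might look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-config
data:
  CONTROLLER_HOST: mycontroller.saas.appdynamics.com   # placeholder
  CONTROLLER_PORT: "443"
  CONTROLLER_SSL_ENABLED: "true"
  ACCOUNT_NAME: customer1                              # placeholder
  APPLICATION_NAME: MyApp
---
apiVersion: v1
kind: Secret
metadata:
  name: appd-secret
type: Opaque
data:
  appd-key: bXktbGljZW5zZS1rZXk=   # base64-encoded license key (placeholder)
```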

The Init Container Route: Best Practice

The straightforward way is not always the best. Application developers may want to avoid embedding a “foreign object” into the app images for a number of good reasons—for example, image size, granularity of testing, or encapsulation. Being developers ourselves, we respect that and offer an alternative, less intrusive way of instrumentation: the Kubernetes way.

An init container is a design feature in Kubernetes that allows decoupling of app logic from any type of initialization routine, such as monitoring in our case. While the main app container lives for the entire duration of the pod, the lifespan of the init container is much shorter. The init container does the required prep work before orchestration of the main container begins. Once the initialization is complete, the init container exits and the main container is started. This way the init container does not run in parallel to the main container as, for example, a sidecar container would. However, like a sidecar container, the init container, while still active, has access to the ephemeral storage of the pod.

We use this ability to share storage between the init container and the main container to inject the AppDynamics agent into the app. Our init container image, in its simplest form, can be described with this Dockerfile:

FROM openjdk:8-jdk-alpine
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN mkdir -p /sharedFiles/AppServerAgent
ADD AppServerAgent.zip /sharedFiles/
RUN unzip /sharedFiles/AppServerAgent.zip -d /sharedFiles/AppServerAgent/
CMD ["tail", "-f", "/dev/null"]

The above example assumes you have already downloaded the archive with the AppDynamics app agent binaries locally. When the image is built, the binaries are unzipped into a new directory. To the pod spec, we then add a directive that copies the directory with the agent binaries to a shared volume on the pod:

spec:
     initContainers:
     – name: agent-repo
       image: agent-repo:x.x.x
       imagePullPolicy: IfNotPresent
       command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
       volumeMounts:
       – mountPath: /mountPath
         name: shared-files
     volumes:
       – name: shared-files
         emptyDir: {}
     serviceAccountName: my-account

After the init container exits, the AppDynamics agent binaries are waiting on the shared volume of the pod, ready to be picked up by the application.

Let’s assume we are deploying a Java app, one normally initialized via a script that calls the java command with Java options. The script, startup.sh, may look like this:

#!/bin/sh
# startup.sh
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.tierName=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.agent.reuse.nodeName.prefix=$TIER_NAME"
JAVA_OPTS="$JAVA_OPTS -javaagent:/sharedFiles/AppServerAgent/javaagent.jar"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.controller.hostName=$CONTROLLER_HOST -Dappdynamics.controller.port=$CONTROLLER_PORT -Dappdynamics.controller.ssl.enabled=$CONTROLLER_SSL_ENABLED"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.agent.accountName=$ACCOUNT_NAME -Dappdynamics.agent.accountAccessKey=$ACCOUNT_ACCESS_KEY -Dappdynamics.agent.applicationName=$APPLICATION_NAME"
JAVA_OPTS="$JAVA_OPTS -Dappdynamics.socket.collection.bci.enable=true"
JAVA_OPTS="$JAVA_OPTS -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true"
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"

java $JAVA_OPTS -jar myapp.jar

It is embedded into the image and invoked via Docker’s ENTRYPOINT directive when the container starts.

FROM openjdk:8-jdk-alpine
COPY startup.sh startup.sh
RUN chmod +x startup.sh
ADD myapp.jar /usr/src/myapp.jar
EXPOSE 8080
ENTRYPOINT ["/bin/sh", "startup.sh"]

To make the consumption of startup.sh more flexible and Kubernetes-friendly, we can trim it down to this:

# a more flexible startup.sh
java $JAVA_OPTS -jar myapp.jar

And declare all the necessary Java options in the spec as a single environment variable:

containers:
  - name: my-app
    image: my-app-image:x.x.x
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
    envFrom:
      - configMapRef:
          name: controller-config
    env:
      - name: ACCOUNT_ACCESS_KEY
        valueFrom:
          secretKeyRef:
            key: appd-key
            name: appd-secret
      - name: JAVA_OPTS
        value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
          -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
          -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
          -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
          -Xms64m -Xmx512m -XX:MaxPermSize=256m
          -Djava.net.preferIPv4Stack=true
          …"
    ports:
      - containerPort: 8080
    volumeMounts:
      - mountPath: /sharedFiles
        name: shared-files

The dynamic values for the Java options are populated from the ConfigMap. First, we reference the entire configMap, where all shared values are defined:

envFrom:
  - configMapRef:
      name: controller-config

We also reference our secret as a separate environment variable. Then, using the $() notation, we can reference the individual variables to concatenate the value of the JAVA_OPTS variable.

Thanks to these Kubernetes features (init containers, configMaps, secrets), we can add AppDynamics monitoring into an existing app in a noninvasive way, without the need to rebuild the image.

This approach has multiple benefits. The app image remains unchanged in terms of size and encapsulation. From a Kubernetes perspective, no extra processing is added, as the init container exits before the main container starts. There is added flexibility in what can be passed into the application initialization routine without the need to modify the image.

Note that OpenShift does not allow running Docker containers as user root by default. If you must (for whatever good reason), add the service account you use for deployments to the anyuid SCC. Assuming your service account is my-account, as in the provided examples, run this command:

oc adm policy add-scc-to-user anyuid -z my-account

Here’s an example of a complete app spec with AppD instrumentation:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: my-app
    spec:
      initContainers:
        - name: agent-repo
          image: agent-repo:x.x.x
          imagePullPolicy: IfNotPresent
          command: ["cp", "-r", "/sharedFiles/AppServerAgent", "/mountPath/AppServerAgent"]
          volumeMounts:
            - mountPath: /mountPath
              name: shared-files
      volumes:
        - name: shared-files
          emptyDir: {}
      serviceAccountName: my-account
      containers:
        - name: my-app
          image: my-service
          imagePullPolicy: IfNotPresent
          envFrom:
            - configMapRef:
                name: controller-config
          env:
            - name: TIER_NAME
              value: WebTier
            - name: ACCOUNT_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  key: appd-key
                  name: appd-key-secret
            - name: JAVA_OPTS
              value: "-javaagent:/sharedFiles/AppServerAgent/javaagent.jar
                -Dappdynamics.agent.accountName=$(ACCOUNT_NAME)
                -Dappdynamics.agent.accountAccessKey=$(ACCOUNT_ACCESS_KEY)
                -Dappdynamics.controller.hostName=$(CONTROLLER_HOST)
                -Xms64m -Xmx512m -XX:MaxPermSize=256m
                -Djava.net.preferIPv4Stack=true
                …"
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /sharedFiles
              name: shared-files
      restartPolicy: Always

Learn more about how AppDynamics can help monitor your applications on Kubernetes and OpenShift.

Monitoring Kubernetes and OpenShift with AppDynamics

Here at AppDynamics, we build applications for both external and internal consumption. We’re always innovating to make our development and deployment process more efficient. We refactor apps to get the benefits of a microservices architecture, to develop and test faster without stepping on each other, and to fully leverage containerization.

Like many other organizations, we are embracing Kubernetes as a deployment platform. We use both upstream Kubernetes and OpenShift, an enterprise Kubernetes distribution on steroids. The Kubernetes framework is very powerful. It allows massive deployments at scale, simplifies new version rollouts and multi-variant testing, and offers many levers to fine-tune the development and deployment process.

At the same time, this flexibility makes Kubernetes complex in terms of setup, monitoring and maintenance at scale. Each of the Kubernetes core components (api-server, kube-controller-manager, kubelet, kube-scheduler) has quite a few flags that govern how the cluster behaves and performs. The default values may be OK initially for smaller clusters, but as deployments scale up, some adjustments must be made. We have learned to keep these values in mind when monitoring OpenShift clusters—both from our own pain and from published accounts of other community members who have experienced their own hair-pulling discoveries.

It should come as no surprise that we use our own tools to monitor our apps, including those deployed to OpenShift clusters. Kubernetes is just another layer of infrastructure. Along with the server and network visibility data, we are now incorporating Kubernetes and OpenShift metrics into the bigger monitoring picture.

In this blog, we will share what we monitor in OpenShift clusters and give suggestions as to how our strategy might be relevant to your own environments. (For more hands-on advice, read my blog Deploying AppDynamics Agents to OpenShift Using Init Containers.)

OpenShift Cluster Monitoring

For OpenShift cluster monitoring, we use two plug-ins that can be deployed with our standalone machine agent. AppDynamics’ Kubernetes Events Extension, described in our blog on monitoring Kubernetes events, tracks every event in the cluster. Kubernetes Snapshot Extension captures attributes of various cluster resources and publishes them to the AppDynamics Events API. The snapshot extension collects data on all deployments, pods, replica sets, daemon sets and service endpoints. It captures the full extent of the available attributes, including metadata, spec details, metrics and state. Both extensions use the Kubernetes API to retrieve the data, and can be configured to run at desired intervals.

The data these plug-ins provide ends up in our analytics data repository and instantly becomes available for mining, reporting, baselining and visualization. The data retention period is at least 90 days, which offers ample time to go back and perform an exhaustive root cause analysis (RCA). It also allows you to reduce the retention interval of events in the cluster itself. (By default, this is set to one hour.)

We use the collected data to build dynamic baselines, set up health rules and create alerts. The health rules, baselines and aggregate data points can then be displayed on custom dashboards where operators can see the norms and easily spot any deviations.

An example of a customizable Kubernetes dashboard.

What We Monitor and Why

Cluster Nodes

At the foundational level, we want monitoring operators to keep an eye on the health of the nodes where the cluster is deployed. Typically, you would have a cluster of masters, where core Kubernetes components (api-server, controller-manager, kube-scheduler, etc.) are deployed, as well as a highly available etcd cluster and a number of worker nodes for guest applications. To paint a complete picture, we combine infrastructure health metrics with the relevant cluster data gathered by our Kubernetes data collectors.

From an infrastructure point of view, we track CPU, memory and disk utilization on all the nodes, and also zoom into the network traffic on etcd. In order to spot bottlenecks, we look at various aspects of the traffic at a granular level (e.g., reads/writes and throughput). Kubernetes and OpenShift clusters may suffer from memory starvation, disks overfilled with logs or spikes in consumption of the API server and, consequently, etcd. Ironically, it is often monitoring solutions that are known for bringing clusters down by pulling excessive amounts of information from the Kubernetes APIs. It is always a good idea to establish how much monitoring is enough and dial it up when necessary to diagnose issues further. If a high level of monitoring is warranted, you may need to add more masters and etcd nodes. Another useful technique, especially with large-scale implementations, is to have a separate etcd cluster just for storing Kubernetes events. This way, the spikes in event creation and event retrieval for monitoring purposes won’t affect performance of the main etcd instances. This can be accomplished by setting the --etcd-servers-overrides flag of the api-server, for example:

--etcd-servers-overrides=/events#https://etcd1.cluster.com:2379;https://etcd2.cluster.com:2379;https://etcd3.cluster.com:2379

From the cluster perspective we monitor resource utilization across the nodes that allow pod scheduling. We also keep track of the pod counts and visualize how many pods are deployed to each node and how many of them are bad (failed/evicted).

A dashboard widget with infrastructure and cluster metrics combined.

Why is this important? Kubelet, the component responsible for managing pods on a given node, has a setting, --max-pods, which determines the maximum number of pods that can be orchestrated. In Kubernetes the default is 110; in OpenShift it is 250. The value can be changed up or down depending on need. We like to visualize the remaining headroom on each node, which helps with proactive resource planning and prevents sudden overflows (which could mean an outage). Another data point we add there is the number of evicted pods per node.
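One way to compute that headroom is to count pods per node and subtract from the kubelet limit. Since actual output depends on your cluster, the sketch below simulates the pod-to-node mapping (which in a real cluster would come from something like `kubectl get pods --all-namespaces -o wide --no-headers`) with a fixed sample:

```shell
MAX_PODS=250   # OpenShift default; upstream Kubernetes defaults to 110

# Simulated "pod node" pairs standing in for real kubectl output:
cat <<'EOF' > /tmp/pod-nodes.txt
pod-a node1
pod-b node1
pod-c node2
EOF

# Count pods per node and print the remaining headroom on each:
awk -v max="$MAX_PODS" '{count[$2]++}
  END {for (n in count) printf "%s: %d pods, headroom %d\n", n, count[n], max - count[n]}' \
  /tmp/pod-nodes.txt
```

On the sample data this reports 2 pods (headroom 248) for node1 and 1 pod (headroom 249) for node2.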

Pod Evictions

Evictions are caused by disk-space or memory starvation. We recently had an issue with the disk space on one of our worker nodes due to a runaway log. As a result, the kubelet produced massive evictions of pods from that node. Evictions are bad for many reasons. They will typically affect the quality of service or may even cause an outage. If the evicted pods have an exclusive affinity with the node experiencing disk pressure, and as a result cannot be re-orchestrated elsewhere in the cluster, the evictions will result in an outage. Evictions of core component pods may lead to the meltdown of the cluster.

Long after the incident where pods were evicted, we saw that the evicted pods were still lingering. Why was that? Garbage collection of evictions is controlled by a setting in kube-controller-manager called --terminated-pod-gc-threshold. The default value is 12,500, which means that garbage collection won’t occur until you have that many evicted pods. Even in a large implementation, it may be a good idea to dial this threshold down to a smaller number.

If you experience a lot of evictions, you may also want to check whether kube-scheduler has a custom --policy-config-file defined without the CheckNodeMemoryPressure or CheckNodeDiskPressure predicates.

Following our recent incident, we set up a new dashboard widget that tracks a metric of any threats that may cause a cluster meltdown (e.g., massive evictions). We also associated a health rule with this metric and set up an alert. Specifically, we’re now looking for warning events that tell us when a node is about to experience memory or disk pressure, or when a pod cannot be reallocated (e.g., NodeHasDiskPressure, NodeHasMemoryPressure, ErrorReconciliationRetryTimeout, ExceededGracePeriod, EvictionThresholdMet).

We also look for daemon pod failures (FailedDaemonPod), as they are often associated with cluster health rather than issues with the daemon set app itself.

Pod Issues

Pod crashes are an obvious target for monitoring, but we are also interested in tracking pod kills. Why would someone be killing a pod? There may be good reasons for it, but it may also signal a problem with the application. For similar reasons, we track deployment scale-downs, which we do by inspecting ScalingReplicaSet events. We also like to visualize the scale-down trend along with the app health state. Scale-downs, for example, may happen by design through auto-scaling when the app load subsides. They may also be issued manually or in error, and can expose the application to an excessive load.

Pending state is supposed to be a relatively short stage in the lifecycle of a pod, but sometimes it isn’t. It may be a good idea to track pods whose pending time exceeds a certain reasonable threshold—one minute, for example. In AppDynamics, we also have the luxury of baselining any metric and then tracking any configurable deviation from the baseline. If you catch a spike in pending-state duration, the first thing to check is the size of your images and the speed of image download. One big image may clog the pipe and affect other containers. Kubelet has a flag, --serialize-image-pulls, which is set to “true” by default, meaning images are loaded one at a time. Change the flag to “false” if you want to load images in parallel and avoid the potential clogging by a monster-sized image. Keep in mind, however, that you have to use Docker’s overlay2 storage driver to make this work (in newer Docker versions this is the default). In addition to the kubelet setting, you may also need to tweak the --max-concurrent-downloads flag of the Docker daemon to ensure the desired parallelism.
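If you manage the kubelet through a configuration file rather than command-line flags, the equivalent setting might look like the sketch below (the exact mechanism varies by Kubernetes/OpenShift version):

```yaml
# KubeletConfiguration fragment (config-file style kubelet configuration)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false   # pull images in parallel instead of one at a time
```

On the Docker side, the parallelism and storage driver can be set in /etc/docker/daemon.json, e.g. `"storage-driver": "overlay2"` and `"max-concurrent-downloads": 6` (the value 6 is an arbitrary example).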

Large images that take a long time to download may also cause a different type of issue that results in a failed deployment. The kubelet flag --image-pull-progress-deadline determines the point at which the image will be deemed “too long to pull or extract.” If you deal with big images, make sure you dial up the value of this flag to fit your needs.

User Errors

Many big issues in the cluster stem from small user errors (human mistakes). A typo in a spec—for example, in the image name—may bring down the entire deployment. Similar effects may occur due to a missing image or insufficient rights to the registry. With that in mind, we track image errors closely and pay attention to excessive image-pulling. Unless it is truly needed, image-pulling is something you want to avoid in order to conserve bandwidth and speed up deployments.

Storage issues also tend to arise due to spec errors, lack of permissions or policy conflicts. We monitor storage issues (e.g., mounting problems) because they may cause crashes. We also pay close attention to resource quota violations because they do not trigger pod failures. They will, however, prevent new deployments from starting and existing deployments from scaling up.

Speaking of quota violations, are you setting resource limits in your deployment specs?

Policing the Cluster

On our OpenShift dashboards, we display a list of potential red flags that are not necessarily a problem yet but may cause serious issues down the road. Among these are pods without resource limits or health probes in the deployment specs.

Resource limits can be enforced by resource quotas across the entire cluster or at a more granular level. Violation of these limits will prevent the deployment. In the absence of a quota, pods can be deployed without defined resource limits. Having no resource limits is bad for multiple reasons. It makes cluster capacity planning challenging. It may also cause an outage. If you create or change a resource quota when there are active pods without limits, any subsequent scale-up or redeployment of these pods will result in failures.
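As a reminder of what explicit limits look like in a deployment spec, here is a minimal container fragment (the CPU and memory values are arbitrary examples, not recommendations):

```yaml
containers:
  - name: my-app
    image: my-app-image:x.x.x
    resources:
      requests:        # what the scheduler reserves for the pod
        cpu: 250m
        memory: 256Mi
      limits:          # hard ceiling enforced at runtime
        cpu: 500m
        memory: 512Mi
```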

The health probes, readiness and liveness, are not enforceable, but it is a best practice to have them defined in the specs. They are the primary mechanism for the pods to tell the kubelet whether the application is ready to accept traffic and is still functioning. If the readiness probe is not defined and the pod takes a long time to initialize (beyond the kubelet’s default), the pod will be restarted. This loop may continue for some time, taking up cluster resources for no reason and effectively causing a poor user experience or an outage.

The absence of the liveness probe may cause a similar effect if the application is performing a lengthy operation and the pod appears to Kubelet as unresponsive.
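A sketch of both probes for a Java app like the one used earlier in this post, assuming it serves HTTP on port 8080; the `/healthz` path and all timing values are hypothetical and should be tuned to your application:

```yaml
containers:
  - name: my-app
    image: my-app-image:x.x.x
    readinessProbe:            # gates traffic until the app is ready
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 30  # give the JVM time to start
      periodSeconds: 10
    livenessProbe:             # restarts the container if the app hangs
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 60
      periodSeconds: 20
```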

We provide easy access to the list of pods with incomplete specs, allowing cluster admins to have a targeted conversation with development teams about corrective action.

Routing and Endpoint Tracking

As part of our OpenShift monitoring, we provide visibility into potential routing and service endpoint issues. We track unused services, including those created by someone in error and those without any pods behind them because the pods failed or were removed.

We also monitor bad endpoints pointing at old (deleted) pods, which effectively cause downtime. This issue may occur during rolling updates when the cluster is under increased load and API request throttling is lower than it needs to be. To resolve the issue, you may need to increase the --kube-api-burst and --kube-api-qps config values of kube-controller-manager.

Every metric we expose on the dashboard can be viewed and analyzed in the list and further refined with ADQL, the AppDynamics query language. After spotting an anomaly on the dashboard, the operator can drill into the raw data to get to the root cause of the problem.

Application Monitoring

Context plays a significant role in our monitoring philosophy. We always look at application performance through the lens of the end-user experience and desired business outcomes. Unlike specialized cluster-monitoring tools, we are not only interested in cluster health and uptime per se. We’re equally concerned with the impact the cluster may have on application health and, subsequently, on the business objectives of the app.

In addition to having a cluster-level dashboard, we also build specialized dashboards with a more application-centric point of view. There we correlate cluster events and anomalies with application or component availability, end-user experience as reported by real-user monitoring, and business metrics (e.g., conversion of specific user segments).

Leveraging K8s Metadata

Kubernetes makes it super easy to run canary deployments, blue-green deployments, and A/B or multivariate testing. We leverage these conveniences by pulling deployment metadata and using labels to analyze performance of different versions side by side.

Monitoring Kubernetes or OpenShift is just a part of what AppDynamics does for our internal needs and for our clients. AppDynamics covers the entire spectrum of end-to-end monitoring, from the foundational infrastructure to business intelligence. Inherently, AppDynamics is used by many different groups of operators who may have very different skills. For example, we look at the platform as a collaboration tool that helps translate the language of APM to the language of Kubernetes and vice versa.

By bringing these different datasets together under one umbrella, AppDynamics establishes a common ground for diverse groups of operators. On the one hand you have cluster admins, who are experts in Kubernetes but may not know the guest applications in detail. On the other hand, you have DevOps in charge of APM or managers looking at business metrics, both of whom may not be intimately familiar with Kubernetes. These groups can now have a productive monitoring conversation, using terms that are well understood by everyone and a single tool to examine data points on a shared dashboard.

Learn more about how AppDynamics can help you monitor your applications on Kubernetes and OpenShift.

Using AppDynamics with Red Hat OpenShift v3

As customers and partners transition their applications from monoliths to microservices using Platform-as-a-Service (PaaS) providers such as Red Hat OpenShift v3, AppDynamics has made a significant investment in providing first-class integrations with such providers.

OpenShift v3 has been significantly re-architected from its predecessor, OpenShift v2: cartridges have been replaced with Docker images, and gears with Docker containers. The complete set of differences can be found here.

AppDynamics integrates its agents with Red Hat OpenShift v3 using the Source-to-Image (S2I) methodology. S2I is a tool for building reproducible Docker images. It produces ready-to-run images by injecting application source into a Docker image and assembling a new Docker image. The new image incorporates the base image (the builder) and the built source, and is ready to use with the docker run command. S2I supports incremental builds, which reuse previously downloaded dependencies, previously built artifacts, and so on.

Workflow

The overall workflow for using AppDynamics with Red Hat OpenShift v3 is shown below:

Step 1 is already provided by Red Hat.

To perform Steps 2 and 3, we have provided the S2I scripts in the following GitHub repository and instructions on how to build enhanced builder images for JBoss Wildfly and EAP servers.

https://github.com/Appdynamics/sti-wildfly

Let’s explore this with an actual example, using the sample application at https://github.com/jim-minter/ose3-ticket-monster.

Prerequisites:

- Ensure oc is installed (https://docs.openshift.com/enterprise/3.0/cli_reference/get_started_cli.html#installing-the-cli)
- Ensure s2i is installed (https://github.com/openshift/source-to-image)
- Ensure you have a Docker Hub account (https://hub.docker.com/)

Step 2: Create an AppDynamics builder image

$ git clone https://github.com/Appdynamics/sti-wildfly.git

$ cd sti-wildfly

$ make build VERSION=eap6.4

Step 3: Create an Application image

$ s2i build -e "APPDYNAMICS_APPLICATION_NAME=os3-ticketmonster,APPDYNAMICS_TIER_NAME=os3-ticketmonster-tier,APPDYNAMICS_ACCOUNT_NAME=customer1_xxxxxxxxxxxxxxxxxxf,APPDYNAMICS_ACCOUNT_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxx,APPDYNAMICS_CONTROLLER_HOST=xxxx.saas.appdynamics.com,APPDYNAMICS_CONTROLLER_PORT=443,APPDYNAMICS_CONTROLLER_SSL_ENABLED=true" https://github.com/jim-minter/ose3-ticket-monster appdynamics/sti-wildfly-eap64-centos7:latest pranta/appd-eap-ticketmonster

$ docker tag pranta/appd-eap-ticketmonster pranta/appd-eap-ticketmonster:latest

$ docker push pranta/appd-eap-ticketmonster:latest

Step 4: Deploy the Application into OpenShift

$ oc login 10.0.32.128:8443

$ oc new-project wildfly

$ oc project wildfly

$ oc new-app --docker-image=pranta/appd-eap-ticketmonster:latest --name=ticketmonster-demo

Now you should be able to log in to the controller and see the ticketmonster application on the application dashboard:

See a live demo of Red Hat OpenShift v3 with AppDynamics from AppSphere ’15 below.

If you’re already using Red Hat OpenShift v3 but are new to AppDynamics, you can sign up for a free trial here: https://portal.appdynamics.com/account/signup/signupForm/

AppDynamics Partners with OpenShift by Red Hat to Empower DevOps

We’re proud to partner with OpenShift by Red Hat to help monitor their open-source platform-as-a-service (PaaS). Together we make it easier to scale into the cloud. The integration helps foster DevOps by increasing the visibility and collaboration between the typically fragmented development and operations teams throughout the product lifecycle. We caught up with Chris Morgan, Technical Director of Partner Ecosystem at Red Hat, to discuss all the ways Agile and rapid-release cycles have changed development and sped up innovation.

Morgan credits these new DevOps tools with driving innovation and empowering developers by cultivating a constant feedback loop, providing end-to-end visibility, and helping scale applications.

“We have a great partner that’s able to provide [APM] to enhance the platform and make it more desirable to developers and for our customers. Ease of use and deployment is what everyone wants.”

“Using AppDynamics, we can monitor the existing application and understand how best it’s performing and then re-architect it so it can take advantage of the things that platform-as-a-service has to offer and you move to OpenShift.”

AppDynamics is excited to announce we are available in the OpenShift marketplace, making it easier than ever to add application performance monitoring to OpenShift-based applications.

AppDynamics in the OpenShift Marketplace

Bootstrapping DropWizard apps with AppDynamics on OpenShift by Red Hat

Getting started with DropWizard, OpenShift, and AppDynamics

In this blog post, I’ll show you how to deploy a Dropwizard-based application on OpenShift by Red Hat and monitor it with AppDynamics.

DropWizard is a high-performance Java framework for building RESTful web services. It is built by the smart folks at Yammer and is available as an open-source project on GitHub. The easiest way to get started with DropWizard is with the example application. The DropWizard example application was developed to, as its name implies, provide examples of some of the features present in DropWizard.


OpenShift can be used to deploy any kind of application with the DIY (do it yourself) cartridge. To get started, log in to OpenShift and create an application using the DIY cartridge.

With the official OpenShift quick start guide for AppDynamics, getting started with AppDynamics on OpenShift couldn’t be easier.

1) Sign up for an account on OpenShift by Red Hat

2) Set up the Red Hat client tools on your local machine

$ gem install rhc
$ rhc setup

3) Create a Do It Yourself application on OpenShift

$ rhc app create appdynamicsdemo diy-0.1
 --from-code https://github.com/Appdynamics/dropwizard-sample-app.git

Getting started is as easy as creating an application from an existing git repository: https://github.com/Appdynamics/dropwizard-sample-app.git



% rhc app create appdynamicsdemo diy-0.1 --from-code https://github.com/Appdynamics/dropwizard-sample-app.git

Application Options
-------------------
Domain:      appddemo
Cartridges:  diy-0.1
Source Code: https://github.com/Appdynamics/dropwizard-sample-app.git
Gear Size:   default
Scaling:     no

Creating application 'appdynamicsdemo' ... done
Waiting for your DNS name to be available ... done

Cloning into 'appdynamicsdemo'...
Your application 'appdynamicsdemo' is now available.

URL: http://appdynamicsdemo-appddemo.rhcloud.com/
SSH to: 52b8adc15973ca7e46000077@appdynamicsdemo-appddemo.rhcloud.com
Git remote: ssh://52b8adc15973ca7e46000077@appdynamicsdemo-appddemo.rhcloud.com/~/git/appdynamicsdemo.git/

Run 'rhc show-app appdynamicsdemo' for more details about your app.

With the OpenShift Do-It-Yourself cartridge you can easily run any application by adding a few action hooks to your application. In order to make DropWizard work on OpenShift, we need to create three action hooks for building, deploying, and starting the application. Action hooks are simply scripts that are run at different points during deployment. To get started, create a .openshift/action_hooks directory:

mkdir -p .openshift/action_hooks
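OpenShift only runs hooks that have the execute bit set, so it helps to scaffold all three files and mark them executable up front. A quick sketch (the hook names match the ones used in this post):

```shell
# Scaffold the three action hooks (build, deploy, start) and mark
# them executable; OpenShift silently skips non-executable hooks.
mkdir -p .openshift/action_hooks
for hook in build deploy start; do
    printf '#!/bin/bash\n' > ".openshift/action_hooks/${hook}"
    chmod +x ".openshift/action_hooks/${hook}"
done
ls .openshift/action_hooks
```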

Here is the example for the above sample application:

When the repository is checked out, the build hook uses Maven to download the project dependencies and package the project for production from source code:

.openshift/action_hooks/build

cd $OPENSHIFT_REPO_DIR

mvn -s $OPENSHIFT_REPO_DIR/.openshift/settings.xml -q package

When deploying the code, you need to replace the IP address and port placeholders for the DIY container. The properties are made available as environment variables:

.openshift/action_hooks/deploy

cd $OPENSHIFT_REPO_DIR

sed -i 's/@OPENSHIFT_DIY_IP@/'"$OPENSHIFT_DIY_IP"'/g' example.yml
sed -i 's/@OPENSHIFT_DIY_PORT@/'"$OPENSHIFT_DIY_PORT"'/g' example.yml
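To see what this substitution does, here is a self-contained demonstration against a scratch file; the YAML keys below are illustrative stand-ins for the real example.yml, and the IP/port values fake what the platform would inject:

```shell
# Fake the platform-provided variables for this demonstration.
OPENSHIFT_DIY_IP=127.0.0.1
OPENSHIFT_DIY_PORT=8080

# A scratch config using the same @...@ placeholders as example.yml
# (the keys here are illustrative, not the real file's contents).
cat > /tmp/example-demo.yml <<'EOF'
server:
  applicationConnectors:
    - type: http
      bindHost: @OPENSHIFT_DIY_IP@
      port: @OPENSHIFT_DIY_PORT@
EOF

# Same sed invocations as the deploy hook, pointed at the scratch file.
sed -i 's/@OPENSHIFT_DIY_IP@/'"$OPENSHIFT_DIY_IP"'/g' /tmp/example-demo.yml
sed -i 's/@OPENSHIFT_DIY_PORT@/'"$OPENSHIFT_DIY_PORT"'/g' /tmp/example-demo.yml
cat /tmp/example-demo.yml
```

After the two sed passes, every placeholder has been replaced with the environment's actual values, which is exactly what the deploy hook does to example.yml on each push.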

Let’s recap some of the smart decisions we have made so far:

  • Leverage OpenShift platform as a service (PaaS) for managing the infrastructure
  • Use DropWizard as a solid foundation for our Java application
  • Monitor the application performance with AppDynamics Pro

With a solid Java foundation we are prepared to build our new application. Next, try adding another machine or dive into the DropWizard documentation.

Combining DropWizard, OpenShift, and AppDynamics

AppDynamics allows you to instrument any Java application by simply adding the AppDynamics agent to the JVM. Sign up for an AppDynamics Pro self-service account, then log in using the account details from your “Welcome to your AppDynamics Pro SaaS Trial” email or the account details you entered during an on-premises installation.

The last step to combine the power of OpenShift and DropWizard is to instrument the app with AppDynamics. Simply update your AppDynamics credentials in the Java agent’s AppServerAgent/conf/controller-info.xml configuration file.
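For reference, a minimal sketch of that file follows; every value is a placeholder to replace with your own controller details, and the tier/node names are assumptions for this example:

```xml
<controller-info>
    <!-- Placeholder values: substitute your controller host, account, and key -->
    <controller-host>mycontroller.saas.appdynamics.com</controller-host>
    <controller-port>443</controller-port>
    <controller-ssl-enabled>true</controller-ssl-enabled>
    <application-name>dropwizard-example</application-name>
    <tier-name>web</tier-name>
    <node-name>node-1</node-name>
    <account-name>myaccount</account-name>
    <account-access-key>my-access-key</account-access-key>
</controller-info>
```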

Finally, to start the application we need to run any database migrations and add the AppDynamics Java agent to the startup command:

.openshift/action_hooks/start

cd $OPENSHIFT_REPO_DIR

java -jar target/dropwizard-example-0.7.0-SNAPSHOT.jar db migrate example.yml

java -javaagent:${OPENSHIFT_REPO_DIR}AppServerAgent/javaagent.jar \
     -jar ${OPENSHIFT_REPO_DIR}target/dropwizard-example-0.7.0-SNAPSHOT.jar \
     server example.yml > ${OPENSHIFT_DIY_LOG_DIR}/helloworld.log &


Additional resources on running DropWizard on OpenShift:

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.