Monitor Amazon EKS with AppDynamics

On the heels of announcing the general availability of AppDynamics for Kubernetes at KubeCon Europe, we’ve partnered with Amazon Web Services (AWS) to bring Amazon EKS to the broader Kubernetes community. AppDynamics provides enterprise-grade, end-to-end performance monitoring for applications orchestrated by Kubernetes.

Amazon EKS, AWS’s managed Kubernetes service, shoulders the heavy lifting of installing and operating Kubernetes clusters so that you don’t have to run your own. Beyond the operational agility and simplicity of managing Kubernetes clusters, Amazon EKS brings additional value to enterprises, including the following:

1. Choice and Portability: Built on open source and upstream Kubernetes, EKS passes CNCF’s conformance tests, enabling enterprises to run applications confidently on EKS without having to make changes to the app or learn new Kubernetes tooling. You can choose where to run applications on various Kubernetes deployment venues—on-premises, AWS clusters managed with kops, Amazon EKS, or any other cloud provider.

2. High Availability: EKS deploys the control plane across at least two availability zones, monitors the health of the master nodes, and automatically re-instantiates them if needed. It also patches and updates Kubernetes versions.

3. Network Isolation and Performance: Worker nodes run in the subnets within your VPC, giving you control over network isolation via security groups.

Amazon EKS brings VPC networking to Kubernetes pods and removes the burden of running and managing an overlay networking fabric. The CNI plugin runs as a DaemonSet on every node and allocates each pod an IP address from the pool of secondary IP addresses attached to the worker node’s elastic network interface (ENI). Communication between the control plane and worker nodes occurs over the AWS networking backbone, resulting in better performance and security.

Monitoring Amazon EKS with AppDynamics

EKS makes it easier to operate Kubernetes clusters; however, performance monitoring remains one of the top challenges in Kubernetes adoption. In fact, according to a recent CNCF survey, 46% of enterprises reported monitoring as their biggest challenge. Organizations deploying containers on the public cloud cite monitoring as a particular challenge, perhaps because cloud providers’ monitoring tools may not play well with the tools an organization already uses to monitor its on-premises resources.

We are therefore excited that AppDynamics and AWS have teamed up to accelerate your EKS adoption.

How Does it Work?

AppDynamics seamlessly integrates into EKS environments. The machine agent runs as a DaemonSet on EKS worker nodes, and application agents are deployed alongside your application binaries within the application pods. Out-of-the-box integration gives you the deepest visibility into EKS cluster health, AWS resources and Docker containers, and provides insights into the performance of every microservice deployed—all through a single pane of glass.


Unified, end-to-end monitoring helps AppDynamics customers expedite root-cause analysis, reduce MTTR, and confidently adopt modern application architectures such as microservices. AppDynamics provides a consistent approach to monitoring applications orchestrated by Kubernetes regardless of where the clusters are deployed, on Amazon EKS or on-premises, enabling enterprises to leverage their existing people, processes, and tools.

Correlate Kubernetes performance with business metrics: For deeper visibility into business performance, organizations can create tagged metrics, such as customer conversion rate or revenue per channel correlated with the performance of applications on the Kubernetes platform. Health rules and alerts based on business metrics provide intelligent validation so that every code release can drive business outcomes.

Get Started Today

To get started with enterprise-grade monitoring of EKS, follow these easy steps:

1. Sign-up for a free AppDynamics trial and configure the environment’s ConfigMap. Sample configuration and instructions are available on our GitHub page.

2. Create the EKS cluster and worker nodes, and configure kubectl with the EKS control plane endpoint. Deploy your Kubernetes services and deployments.

3. Start end-to-end performance monitoring with AppDynamics!
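As a sketch of what step 1 involves, a minimal controller ConfigMap might look like the following. The namespace, key names, and values here are illustrative, not the exact schema; the actual sample configuration is on our GitHub page:

```yaml
# Illustrative controller connection settings for the AppDynamics agents.
apiVersion: v1
kind: ConfigMap
metadata:
  name: appd-controller-config       # hypothetical name
  namespace: appdynamics
data:
  APPDYNAMICS_CONTROLLER_HOST_NAME: "mycompany.saas.appdynamics.com"
  APPDYNAMICS_CONTROLLER_PORT: "443"
  APPDYNAMICS_CONTROLLER_SSL_ENABLED: "true"
  APPDYNAMICS_AGENT_ACCOUNT_NAME: "mycompany"
```

Agents deployed in the cluster can then pick these values up as environment variables via `envFrom`.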

The AppD Approach: Monitoring Kubernetes Events

Just recently we launched AppDynamics for Kubernetes, giving enterprises end-to-end, unified visibility into their entire Kubernetes stack and Kubernetes-orchestrated applications in both on-premises and public cloud environments. Our industry-leading APM provides visibility into Kubernetes by leveraging labels on objects such as Namespaces, Pods, and ReplicaSets, and AppDynamics customers can organize, group, query, or filter Kubernetes objects and performance metrics based on those labels.

Of course, we’re always finding ways to make things better. As a preview of what’s to come, we’re now offering the AppDynamics Kubernetes Events Monitor Extension, which we plan to incorporate into future builds of our core solution.

Events Monitoring

In our 4.4.3 release, built-in Kubernetes capabilities focus on monitoring the containers and applications that run on top of Kubernetes. This new extension adds the capability of monitoring the events exposed by the Kubernetes Events API.

By monitoring these events, our extension enables enterprises to troubleshoot everything that goes wrong in the Kubernetes orchestration platform—scaling up or down, new deployments, application deletions and creations, and so on. If an event reaches a warning state, users can drill down into the warning to see where it occurred, making troubleshooting easier.

How It Works

Kubernetes stores events for only a limited time; by default, they are purged after an hour. The AppDynamics Machine Agent, in addition to reporting basic hardware metrics (CPU, memory, disk, etc.), is the hook for custom extensions, including our new Kubernetes Events Monitor Extension.

It’s fairly easy to install our new extension. You’ll find detailed instructions here, but here’s a quick overview:

  • Deploy the AppDynamics Machine Agent as you normally would, and then add the Kubernetes Events Monitor Extension to it. If you’re deploying the Machine Agent with Docker as a Kubernetes DaemonSet, simply add the extension to the container.

  • Configuration is simple. The extension just needs to know how to connect to the Kubernetes Cluster (from your kubectl client config), as well as your credentials for logging into the AppDynamics platform.

  • Once setup is complete, you’ll be able to push Kubernetes events to AppDynamics.

Once configured, the Kubernetes Events Monitor Extension will query Kubernetes events every minute.

The extension tracks all events happening in Kubernetes, including time-stamping information and messages. Below is a sample dashboard:

Here’s a closer view:

The Event Details view provides more information. Below, the Message field shows that Kubernetes tried to attach a volume to a running deployment, but couldn’t mount it due to a timeout.

The Events Monitor Extension also shows the Kubernetes Namespace—important information for locating where the event occurred (e.g., the specific host and component) in the Kubernetes cluster.

Like all AppDynamics extensions, the Kubernetes Events Monitor Extension is user-configurable. For example, with a few simple edits to the extension configuration file, you can change the default one-minute query time for Kubernetes events, the duration of timeouts, and other settings.
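For instance, changing the query interval is a one-line edit in the extension’s configuration file. The sketch below uses illustrative field names; check the extension’s documentation for the actual schema:

```yaml
# Illustrative config.yml for the Kubernetes Events Monitor Extension.
# Field names here are hypothetical, not the extension's real schema.
kubeClientConfig: "/root/.kube/config"   # kubectl client config used to reach the cluster
eventQueryIntervalSeconds: 60            # default one-minute query cadence
connectionTimeoutSeconds: 10
metricPrefix: "Custom Metrics|Kubernetes Events|"
```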

Seamless Integration with Business iQ

The Kubernetes extension takes full advantage of the Business iQ real-time performance monitoring toolkit, allowing you to create metrics, visualizations and alarms. You can also use Business iQ to analyze a transaction in conjunction with Kubernetes events.

Below are some sample visualizations:

Our new extension adds the powerful capability of monitoring Kubernetes events to our industry-leading AppDynamics for Kubernetes solution. Get started today!

Future features and functionality are subject to change at the sole discretion of AppDynamics, and AppDynamics will have no liability for delay in the delivery or failure to deliver any of the features and functionality set forth in this document.

Migrating from Docker Compose to Kubernetes

The AppDynamics Demo Platform never sleeps. It is a cloud-based system that hosts a number of applications designed to help our global sales team demonstrate the many value propositions of AppDynamics.

Last fall, we added several new, larger applications to our demo platform. With these additions, our team started to see performance challenges with our standard single-host Docker Compose deployment model. Specifically, we wanted to run across multiple host machines, rather than being limited to the single host that Docker Compose supports. We had been talking about migrating to Kubernetes for several months, so we knew it was time to take the leap.

Before this, I had extensive experience with dockerized applications, and even with some Kubernetes-managed applications. However, I had never taken part in an actual migration of an application from Docker Compose to Kubernetes.

For our first attempt at migrating to Kubernetes, we chose an application that was relatively small, but which contained a variety of different elements—Java, NodeJS, GoLang, MySQL and MongoDB. The application used Docker Compose for container deployment and “orchestration.” I use the term orchestration loosely, because Docker Compose is pretty light when compared to Kubernetes.

Docker Compose

For those who have never used Docker Compose, it’s a framework that allows developers to define container-based applications in a single YAML file. This definition includes the Docker images used, exposed ports, dependencies, networking, etc. Looking at the snippet below, each block of 5 to 20 lines represents a separate service. Docker Compose is a very useful tool and makes application deployment fairly simple and easy.

Figure 1.1 – docker-compose.yaml Snippet

Preparing for the Migration

The first hurdle to converting the project was learning how Kubernetes is different from Docker Compose. One of the most dramatic ways it differs is in container-to-container communication.

In a Docker Compose environment, the containers all run on a single host machine. Docker Compose creates a local network that the containers are all part of. Take this snippet, for example:
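The snippet itself isn’t reproduced here, but a Docker Compose service of the kind described would look roughly like this (the image name is illustrative):

```yaml
quoteServices:
  image: demo/quote-services:latest   # illustrative image name
  hostname: quote-services
  ports:
    - "8080:8080"
```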

This block creates a container called quoteServices with the hostname quote-services, exposed on port 8080. With this definition, any container within the local Docker Compose network can reach it at http://quote-services:8080. Anything outside the local network would have to know the container’s IP address.

By comparison, Kubernetes usually runs on multiple servers called nodes, so it can’t simply create a local network for all the containers. Before we started, I was very concerned that this might lead to many code changes, but those worries would prove to be unfounded.

Creating Kubernetes YAML Files

The best way to understand the conversion from Docker Compose to Kubernetes is to see a real example of what the conversion looks like. Let’s take the above snippet of quoteServices and convert it to a form that Kubernetes can understand.

The first thing to understand is that the Docker Compose block above gets converted into two separate Kubernetes resources: a Deployment and a Service.

As its name implies, the deployment tells Kubernetes most of what it needs to know about how to deploy the containers. This information includes things like what to name the containers, where to pull the images from, how many containers to create, etc. The deployment for quoteServices is shown here:
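In place of the original figure, here is a minimal sketch of what that deployment could look like (the image name is illustrative; the `name: quoteServices` label is what the service selector described in the text matches):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote-services
spec:
  replicas: 1                      # how many pods to create
  selector:
    matchLabels:
      name: quoteServices
  template:
    metadata:
      labels:
        name: quoteServices        # the label the service selector matches
    spec:
      containers:
        - name: quote-services
          image: demo/quote-services:latest   # illustrative; where to pull the image from
          ports:
            - containerPort: 8080
```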

As we mentioned earlier, networking is done differently in Kubernetes than in Docker Compose. The Service is what enables communication between containers. Here is the service definition for quoteServices:
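In place of the original figure, here is a minimal sketch of that service definition (names chosen to match the deployment described in the text):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: quote-services     # becomes the in-cluster DNS hostname
spec:
  selector:
    name: quoteServices    # targets pods labeled name=quoteServices
  ports:
    - port: 8080           # reachable at http://quote-services:8080
      targetPort: 8080
```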

This service definition tells Kubernetes to take the containers that have a name = quoteServices, as defined under selector, and to make them reachable using quote-services as hostname and port 8080. So again, this service can be reached at http://quote-services:8080 from within the Kubernetes application. The flexibility to define services this way allows us to keep our URLs intact within our application, so no changes are needed due to networking concerns.

By the end, we had taken a single Docker Compose file with about 24 blocks and converted it into about 20 different files, most of which contained a deployment and a service. This conversion was a big part of the migration effort. Initially, to “save” time, we used a tool called Kompose to generate deployment and services files automatically. However, we ended up rewriting all of the files anyway once we knew what we were doing. Using Kompose is sort of like using Word to create webpages. Sure, it works, but you’re probably going to want to re-do most of it once you know what you’re doing because it adds a lot of extra tags that you don’t really want.

Instrumenting AppDynamics

This was the easy part. Most of our applications are dockerized, and we have always monitored these and our underlying Docker infrastructure with AppDynamics. Because our Docker images already had application agents baked in, there was nothing we had to change. If we had wanted, we could have left them the way they were, and they would have worked just fine. However, we decided to take advantage of something that is fairly common in the Kubernetes world: sidecar injection.

We used the sidecar model to “inject” the AppDynamics agents into the containers. The advantage of this is that we can now update our agents without having to rebuild our application images and redeploy them. It is also more fitting with best practices. To update the agent, all we have to do is update our sidecar image, then change the tag used by the application container. Just like that, our application is running with a new agent!
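One common way to wire this up is an init container that copies the agent into a shared volume before the application starts. This is a sketch under assumptions: the image names, paths, and env-var plumbing below are illustrative, not our exact configuration.

```yaml
# Pod template fragment: an init container "injects" the agent via a shared volume.
spec:
  initContainers:
    - name: appd-agent
      image: demo/appd-java-agent:1.0   # bump this tag to roll out a new agent
      command: ["cp", "-r", "/opt/appdynamics/.", "/agent-repo/"]
      volumeMounts:
        - name: agent-repo
          mountPath: /agent-repo
  containers:
    - name: quote-services
      image: demo/quote-services:latest
      env:
        - name: JAVA_TOOL_OPTIONS
          value: "-javaagent:/agent-repo/javaagent.jar"
      volumeMounts:
        - name: agent-repo
          mountPath: /agent-repo
  volumes:
    - name: agent-repo
      emptyDir: {}
```

The application image never needs to change: updating the init container’s tag is enough to ship a new agent version.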

Server Visibility Agent

Incorporating the Server Visibility (SVM) agent was also fairly simple. One difference to note is that Docker Compose runs on a single host, whereas Kubernetes typically uses multiple nodes, which can be added or removed dynamically.

In our Docker Compose model, our SVM agent was deployed to a single container, which monitored both the host machine and the individual containers. With Kubernetes, we would have to run one such container on each node in the cluster. The best way to do this is with a structure called a DaemonSet.

You can see from the snippet below that a DaemonSet looks a lot like a Deployment. In fact, the two are virtually identical. The main difference is how they act. A Deployment typically doesn’t say anything about where in the cluster to run the containers defined within it, but it does state how many containers to create. A DaemonSet, on the other hand, will run a container on each node in the cluster. This is important, because the number of nodes in a cluster can increase or decrease at any time.

Figure: DaemonSet definition
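Since the figure isn’t reproduced here, a minimal sketch of such a DaemonSet follows (the image name is illustrative). Note that there is no `replicas` field; Kubernetes runs one copy on each node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: appd-machine-agent
spec:
  selector:
    matchLabels:
      name: appd-machine-agent
  template:
    metadata:
      labels:
        name: appd-machine-agent
    spec:
      containers:
        - name: machine-agent
          image: demo/appd-machine-agent:latest   # illustrative image name
```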

What Works Great

From development and operations perspectives, migrating to Kubernetes involves some extra overhead, but there are definite advantages. I’m not going to list all the advantages here, but I will tell you about my two favorites.

First of all, I love the Kubernetes Dashboard. It shows information on running containers, deployments, services, etc. It also allows you to update/add/delete any of your definitions from the UI. So when I make a change and build a new image, all I have to do is update the image tag in the deployment definition. Kubernetes will then delete the old containers and create new ones using the updated tag. It also gives easy access to log files or a shell to any of the containers.

Figure: Kubernetes Dashboard

Another thing that worked well for us is that we no longer need to keep and maintain the host machines that were running our Docker Compose applications. Part of the idea behind containerizing applications is to treat servers more like cattle than pets. While this is true to an extent, the Docker Compose host machines have become the new pets. We have seen issues with the host machines starting to have problems, needing maintenance, running out of disk space, etc. With Kubernetes, there are no more host machines, and the nodes in the cluster can be spun up and down anytime.

Conclusion

Before starting our Kubernetes journey, I was a little apprehensive about intra-application networking, deployment procedures, and adding extra layers to all of our processes. It is true that we have added a lot of extra configuration, going from a 300-line docker-compose.yaml file to about 1,000 lines spread over 20 files. This is mostly a one-time cost, though. We also had to rewrite some code, but that needed to be rewritten anyway.

In return, we gained all the advantages of a real orchestration tool: scalability, increased visibility of containers, easier server management, and many others. When it comes time to migrate our next application, which won’t be too far away, the process will be much easier and quicker.

Other Resources

The Illustrated Children’s Guide to Kubernetes

Getting Started with Docker

Kubernetes at GitHub

Migrating a Spring Boot service


Introducing AppDynamics for Kubernetes

Today we’re excited to announce AppDynamics for Kubernetes, which will give enterprises end-to-end, unified visibility into their entire Kubernetes stack and Kubernetes-orchestrated applications for both on-premises and public cloud environments. Enterprises use Kubernetes to fundamentally transform how they deploy and run applications in distributed, multicloud environments. With AppDynamics for Kubernetes, they will have a production-grade monitoring solution to deliver a flawless end-user experience.

Why is Kubernetes so popular? Because it delivers on the promise of doing more with less. By leveraging the portability, isolation, and immutability provided by containers and Kubernetes, development teams can ship more features faster by simplifying application packaging and deployment—all while keeping the application highly available without downtime. And Kubernetes’ self-healing properties not only enable operations teams to ensure application reliability and hyper-scalability, but also boost efficiency through increased resource utilization.

According to the latest survey by the Cloud Native Computing Foundation (CNCF), 69% of respondents said Kubernetes was their top choice for container orchestration. And Gartner recently proclaimed that “Kubernetes has emerged as the de facto standard for container orchestration.” The rapid expansion of Kubernetes is also due to its vibrant community: with over 35,000 GitHub stars and some 1,600 unique contributors spanning every time zone, Kubernetes is the most engaged community on GitHub.

Challenges Emerge

However, Kubernetes brings new operational workflows and complexities, many involving application performance management. As enterprises expand their use of Kubernetes beyond dev/test and into production environments, these challenges become even more profound.

The CNCF survey reveals that 38% of respondents identified monitoring as one of their biggest Kubernetes-adoption challenges—one that grows even larger to 46% as the size of the enterprise increases.

Shortcomings of Current Monitoring Approaches

When experimenting with Kubernetes in dev/test environments, organizations typically either start with the monitoring tools that come with Kubernetes or use those that are developed, maintained and supported by the community. Examples include the Kubernetes dashboard, kube-state-metrics, cAdvisor or Heapster. While these tools provide information about the current health of Kubernetes, they lack data storage capabilities. So either InfluxDB or Prometheus (two popular time-series databases) is added to provide persistence. For data visualization, open-source tools such as Grafana or Kibana are tacked on. The system still lacks log collection, though, so log collectors are added as well. Quickly, organizations realize that monitoring Kubernetes is much more involved than capturing metrics.

But wait: additional third-party integration may be needed to achieve reliability. By default, monitoring data is stored on local disk, leaving it susceptible to loss during node outages. And to secure access to their data, organizations must develop or integrate additional tools for authentication and role-based access control (RBAC). Bottom line: while this approach may work well for small development or DevOps teams, a production-grade solution is needed, especially as enterprises start to adopt Kubernetes for their mission-critical applications.

Unfortunately, traditional APM tools often aren’t up to the task here, as they fail to address the dynamic nature of application provisioning in Kubernetes, as well as the complexities of microservices architecture.

Introducing AppDynamics for Kubernetes

The all-new AppDynamics for Kubernetes will give organizations the deepest visibility into application and business performance. With it, companies will have unparalleled insights into containerized applications, Kubernetes clusters, Docker containers, and underlying infrastructure metrics—all through a single pane of glass.

To effectively monitor the performance of applications deployed in Kubernetes, organizations must reimagine their monitoring strategies. In Kubernetes, containerized applications are deployed in pods, which are dynamically created within virtual clusters called namespaces. Since Kubernetes decouples developers and operations from deploying to specific machines, it significantly simplifies day-to-day operations by abstracting the underlying infrastructure. However, it also leaves limited control over which physical machine the pods are deployed to, as shown in Fig. 1 below:


Fig. 1: Dynamic deployments of applications across a Kubernetes cluster.

To gather performance metrics for any resource, AppDynamics leverages labels, the identifying metadata that serves as the foundation for grouping, searching, filtering, and managing Kubernetes objects. This enables organizations to gather performance insights and set intelligent thresholds and alerts on the performance of Pods, Namespaces, ReplicaSets, Services, Deployments, and other Kubernetes objects.

With AppDynamics for Kubernetes, enterprises can:

  1. Achieve end-to-end visibility: From end-user touch points such as a browser, mobile app, or IoT device, all the way to the Kubernetes platform, AppDynamics provides line-of-code-level detail for every application deployed (either traditional app or microservice), granular metrics on Docker container resources, infrastructure metrics, log analytics, and the performance of every database query—all correlated and within the context of Business Transactions, a logical representation of end-user interaction with applications. AppDynamics for Kubernetes will help enterprises avoid silos, and enables them to leverage existing skill sets and processes to monitor Kubernetes and non-Kubernetes applications from a unified monitoring solution across multiple, hybrid clouds.
  2. Expedite root cause analysis: Cascading failures from microservices can cause alert storms. Triaging the root cause via traditional monitoring tools is often time-consuming, and can lead to finger-pointing in war-room scenarios. By leveraging unique machine learning capabilities, AppDynamics makes it simple to identify the root cause of failure.
  3. Correlate Kubernetes performance with business metrics: For deeper visibility into business performance, organizations can create tagged metrics, such as customer conversion rate or end-user experience correlated with the performance of applications on the Kubernetes platform. Health rules and alerts based on business metrics provide intelligent validation so that every code release can drive business outcomes.
  4. Get a seamless, out-of-the-box experience: AppDynamics’ Machine agent is deployed by Kubernetes as a DaemonSet on all the worker nodes, thereby leveraging Kubernetes’ capability to ensure that the AppDynamics agent is always running and reporting performance data.
  5. Accelerate ‘Shift-Left’: AppDynamics is integrated with Cisco CloudCenter, which creates immutable application profiles with built-in AppDynamics agents. Leveraging this capability, customers can dramatically streamline Day 2 operations of application deployment in various Kubernetes environments, such as dev, test and pre-production. And proactive monitoring enables customers to catch performance-related issues before they impact the user experience. Go here to learn more about Cisco CloudCenter.

AppDynamics at KubeCon Europe

We are excited to be a sponsor of KubeCon + CloudNativeCon Europe 2018, a premier Kubernetes and cloud-native event. Our team will be there in full force to help you get started with production-grade monitoring of your Kubernetes deployments. And don’t forget to load up on cool new AppD schwag at the event.

Stop by AppD booth S-C36 in the expo hall. Additionally, I will be presenting the following sessions at Cisco Lounge in the expo hall:

  1. Introduction to Application Performance Monitoring—Wed-Fri, May 2-4, 12:30 PM
  2. Enterprise-grade Application Performance Monitoring for Kubernetes—Wed-Thu, May 2-3, 3:30 PM, Friday 3:00 PM

We are looking forward to engaging with all of our fellow Kubernauts. See you in Copenhagen!