Updates to Microservices iQ: Gain Deeper Visibility into Docker Containers and Microservices

Enterprises have never been under more pressure to deliver digital experiences at the high bar set by the likes of Facebook, Google, and Amazon. According to our recent App Attention Index 2017, consumers expect more from applications than ever before. And if you don’t meet those expectations? More than 50 percent delete an app after a single use due to poor app performance, and 80 percent (!) have deleted an app after it didn’t meet their expectations.

Because microservices and containers have been shown to help businesses ship better software faster, many are adopting these architectures. According to Gartner (“Innovation Insight for Microservices,” 2017), early adopters of microservices (like Disney, GE, and Goldman Sachs) have cut development lead times by as much as 75 percent. However, containers and microservices also introduce new levels of complexity that make it challenging to isolate the issues that can degrade the performance of an entire application.

Updated Microservices iQ

Today, we’re excited to announce Microservices iQ Integrated Docker Monitoring. With Microservices iQ, you get a three-way drill-down across baseline metrics, container metrics, and underlying host server metrics, all within the context of Business Transactions and a single pane of glass.

Now, together with the baseline metrics that you rely on to run the world’s largest applications, you can click to view critical container metadata plus key resource indicators for single containers or clusters of containers. You can then switch seamlessly to a view of the underlying host server to view all the containers running on that host and its resource utilization.

To troubleshoot a problem with a particular microservice running inside a container, the most important determination to make is where to start. And that’s where Microservices iQ Integrated Docker Monitoring stands out.

Is a container unresponsive because another container on the same host is starving it of CPU, disk or memory? Or is there an application issue that has been exposed by the particular code path followed by this business transaction that needs to be diagnosed using Transaction Snapshots or other traditional tools?

Sometimes the source of the problem is easy to spot, but often it isn’t. That’s where another significant enhancement to Microservices iQ comes into play: heat maps.

Heat Maps

Heat maps are a powerful visual representation of complex, multi-dimensional data. You’ve probably seen them used to show changes in climate and snow cover over time, financial data, and even daily traffic reports. Because heat maps can abstract huge amounts of data to quickly reveal complex patterns, we’re leveraging the technique to help address one of the hardest challenges in managing a microservices architecture: pinpointing the containers responsible for performance anomalies and outliers.

When a cluster of containers is deployed, the expectation is that each container will behave identically. We know from experience that that isn’t always true. While the majority of the containers running a given microservice may perform within expected baselines, some may exhibit slowness or higher-than-usual error rates, resulting in the poor user experience that leads to uninstalled apps. Ops teams managing business-critical applications need a way to quickly identify when and where these outliers are occurring, and then view performance metrics for those nodes to look for correlations that help cut through the noise.

With the latest Microservices iQ, we have added heat map support in our new Tier Metrics Correlator feature, which shows load imbalances and performance anomalies across all the nodes in a tier and uses heat maps to highlight correlation between these occurrences and the key resource metrics (CPU, disk, memory, I/O) of the underlying servers or container hosts. Issues that would have taken hours to investigate using multiple dashboards and side-by-side metric comparisons are often immediately apparent, thanks to the unique visualization advantages that heat maps provide. Think of it like turning on the morning traffic report and finding an unused backroad that will get you where you’re going in half the time.

Learn more

Find out more about updates to Microservices iQ, Docker Monitoring, and a new partnership with Atlassian Jira.

 

A Deep Dive into Docker – Part 2

In Part One of this Docker primer I gave you an overview of Docker, how it came about, why it has grown so fast, and where it is deployed. In Part Two, I’ll delve deeper into the technical aspects of Docker, such as the difference between Docker and virtual machines, the various Docker parts and elements, and the basics of how to get started.

Docker Vs. Virtual Machines

First, I will contrast Docker containers with virtual machines like VirtualBox or VMWare. With virtual machines the entire operating system is found inside the environment, running on top of the host through a hypervisor layer. In effect, there are two operating systems running at the same time.

In contrast, Docker virtualizes the services of the host operating system inside the container, including the file system. Although there is only a single operating system kernel, each container is self-contained and cannot see the files or processes of another container.

Differences Between Virtual Machines and Docker

  • Each virtual machine has its own operating system, whereas all Docker containers share the host’s kernel.

  • Virtual machines do not stop after a primary command; on the other hand, a Docker container stops after it completes the original command.

  • Due to the high CPU and memory usage, a typical computer can only run one or two virtual machines at a time. Docker containers are lightweight and can run alongside several other containers on an average laptop computer. Docker’s excellent resource efficiency is changing the way developers approach creating applications.

  • Virtual machines have their own operating system, so they might take several minutes to boot up. Docker containers do not need to load an operating system and start in milliseconds.

  • Virtual machines do not have an effective diff mechanism, and they are not version controlled. With Docker you can run docker diff on a container to see what has changed in its file system relative to its image (see the sketch after this list); Docker also has Docker Hub for checking images in and out, with both private and public repositories available.

  • A single virtual machine is launched from a set of VMDK or VMX files, while several Docker containers can be started from a single Docker image.

  • A virtual machine’s host operating system does not have to be the same as the guest operating system. Docker containers do not have their own independent operating system, so they must use the same kernel as the host (the Linux kernel).

  • Virtual machines do not use snapshots often; they are expensive and mostly reserved for backups. Docker containers use an imaging system in which new images are layered on top of existing ones, so snapshots are cheap and routine.
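To make the diff and layering point concrete, here is a minimal sketch; the container name web01 is hypothetical:

    # List files added (A), changed (C), or deleted (D) in the container's
    # file system relative to the image it was started from:
    $ sudo docker diff web01

    # Show the layers that make up an image and the command that created each one:
    $ sudo docker history ubuntu:14.04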

Similarities Between Virtual Machines and Docker

  • For both Docker containers and virtual machines, processes in one cannot see the processes in another.

  • Docker containers are instances of the Docker image, whereas virtual machines are considered running instances of physical VMX and VMDK files.

  • Docker containers and virtual machines both have a root file system.

  • A single virtual machine has its own virtual network adapter and IP address; Docker containers can also have a virtual network adapter, IP address, and ports.

Virtual machines let you access multiple platforms, so users across an organization will have similar workstations. IT professionals have plenty of flexibility in building out new workstations and servers in response to expanding demand, which provides significant savings over investing in costly dedicated hardware.

Docker is excellent for coordinating and replicating deployment. Instead of using a single instance for a robust, full-bodied operating system, applications are broken down into smaller pieces that communicate with each other.

Installing Docker

Docker gives you a fast and efficient way to port apps across machines and systems. Using Linux Containers (LXC), you can place apps in their own containers and operate them in a secure, self-contained environment. The important Docker parts are as follows:

  1. Docker daemon manages the containers.

  2. Docker CLI is used to communicate and command the daemon.

  3. Docker image index is either a private or public repository for Docker images.

Here are the major Docker elements:

  1. Docker containers hold everything, including the application.

  2. Docker images are read-only snapshots of containers or of the operating system.

  3. Dockerfiles are scripts that build images automatically.

Applications using the Docker system employ these elements.
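To see the daemon and CLI working together, you can ask the client to report on both sides of that connection using standard Docker commands:

    # The CLI (client) talks to the Docker daemon (server); this prints the
    # version of each, confirming that the two can communicate:
    $ sudo docker version

    # A summary of the daemon's state: containers, images, storage driver,
    # kernel version, and so on:
    $ sudo docker info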

Linux Containers – LXC

Docker containers can be thought of as directories that can be archived or packed up and shared across a variety of platforms and machines. All dependencies and libraries live inside the container; the container itself depends on Linux Containers (LXC). Linux Containers let developers box up applications and their dependent resources in their own environment inside the container. The container takes advantage of Linux features such as profiles, cgroups, chroots, and namespaces to manage the app and limit resources.

Docker Containers

Among other things, Docker containers provide isolation of processes, portability of applications, resource management, and security from outside attacks. At the same time, a container cannot interfere with the processes of another container, cannot reach outside its own environment, and cannot abuse the resources of the host system.

This flexibility allows containers to be launched quickly and easily. Gradual, layered changes lead to a lightweight container, and the simple file system means it is not difficult or expensive to roll back.

Docker Images

Docker containers begin with an image, which is the platform upon which applications and additional layers are built. Images are almost like disk images for a desktop machine, and they create a solid base to run all operations inside the container. Each image is not dependent on outside modifications and is highly resistant to outside tampering.

As developers create applications and tools and add them to the base image, they can create new image layers when the changes are committed. Developers use a union file system to keep everything together as a single item.

Dockerfiles

Docker images can be created automatically by reading a Dockerfile, a text document that contains all the commands needed to build the image. Instructions are executed in succession, and the build context consists of files at a specified PATH on the local file system or at a Git repository location; subdirectories of the PATH are included, and for a repository URL the submodules are included as well.
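As a minimal, illustrative sketch (app.py stands in for your own code), a Dockerfile might look like this, with each instruction adding a layer:

    # Base image layer
    FROM ubuntu:14.04
    # Install a dependency in a new layer
    RUN apt-get update && apt-get install -y python
    # Copy the application code into the image
    COPY app.py /opt/app/app.py
    # Default command the container runs on start
    CMD ["python", "/opt/app/app.py"]

Building it produces a tagged image from the files in the current directory (the build context):

    $ sudo docker build -t my-app:latest .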

Getting Started

Here is a shortened example of how to get started using Docker on Ubuntu Linux; enter these Docker Engine CLI commands at a terminal window command line. If you are familiar with package managers, you can use apt or yum for installation.

  1. Log into Ubuntu as a user with sudo privileges.

  2. Make sure curl is installed:
    $ which curl

  3. If it isn’t, update the package manager and then install it:
    $ sudo apt-get update
    $ sudo apt-get install curl

  4. Grab the latest Docker version using the convenience script from get.docker.com:
    $ curl -fsSL https://get.docker.com/ | sh

  5. You’ll need to enter your sudo password. Docker and its dependencies should be downloaded by now.

  6. Check that Docker is installed correctly:
    $ docker run hello-world

You should see “Hello from Docker” on the screen, which indicates Docker seems to be working correctly. Consult the Docker installation guide to get more details and find installation instructions for Mac and Windows.

Ubuntu Images

Docker is reasonably easy to work with once it is installed, since the Docker daemon should already be running. Get a list of all Docker commands by running sudo docker with no arguments.

You can search the Docker image index for an image from the list of available Ubuntu images. Keep in mind an image must be on the host machine where the containers will reside; you can pull an image with sudo docker pull and view all the images on the host using sudo docker images.
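For example, assuming the standard Ubuntu images on Docker Hub:

    # Search the public index for Ubuntu images:
    $ sudo docker search ubuntu

    # Pull a specific image down to the host where the containers will run:
    $ sudo docker pull ubuntu:14.04

    # List all images now present on this host:
    $ sudo docker images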

Commit an image to preserve its state exactly where you last left it, so it is at the same point when you are ready to use it again: sudo docker commit [container ID] [image name]

To create a container, start with an image and indicate the command to run inside it. You’ll find complete instructions and commands in the official Linux installation guide.
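As a minimal sketch using the Ubuntu image pulled earlier:

    # Start an interactive container from the image and run a shell inside it;
    # exiting the shell stops the container:
    $ sudo docker run -it ubuntu:14.04 /bin/bash

    # Or run a command in a detached (background) container and give it a name:
    $ sudo docker run -d --name worker ubuntu:14.04 sleep 300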

Technical Differences

In this second part of my two-part series on Docker, I compared the technical differences between Docker and virtual machines, broke down the Docker components, and reviewed the steps to get started on Linux. The process is straightforward; it just takes some practice implementing these steps to start launching containers with ease.

Begin with a small, controlled environment to ensure the Docker ecosystem will work properly for you; you’ll probably find, as I did, that the application delivery process is easy and seamless. In the end, the containers themselves are not the real advantage: the real game-changer is the opportunity to deliver applications in a much more efficient and controlled way. I believe you will enjoy how Docker allows you to migrate from dated monolithic architectures to fast, lightweight microservices faster than you thought possible.

Docker is changing app development at a rapid pace. It allows you to create and test apps quickly in any environment, provides access to big data analytics for the enterprise, helps knock down walls separating Dev and Ops, makes the app development process better and brings down the cost of infrastructure while improving efficiency.

An Introduction to Docker – Part 1

What is Docker?

In simple terms, the Docker platform is all about making it easier to create, deploy and run applications by using containers. Containers let developers package up an application with all of the necessary parts, such as libraries and other elements it is dependent upon, and then ship it all out as one package. By keeping an app and associated elements within the container, developers can be sure that the apps will run on any Linux machine no matter what kind of customized settings that machine might have, or how it might differ from the machine that was used for writing and testing the code. This is helpful for developers because it makes it easier to work on the app throughout its life cycle.

Docker is kind of like a virtual machine, but instead of creating a whole virtual operating system (OS), it lets applications take advantage of the same Linux kernel as the system they’re running on. That way, the app only has to be shipped with things that aren’t already on the host computer instead of a whole new OS. This means that apps are much smaller and perform significantly better than apps that are system dependent. It has a number of additional benefits.

Docker is an open platform for distributed applications for developers and system admins. It provides an integrated suite of capabilities for an infrastructure agnostic CaaS model. With Docker, IT operations teams are able to secure, provision and manage both infrastructure resources and base application content while developers are able to build and deploy their applications in a self-service manner.

Key Benefits

  • Open Source: Another key aspect of Docker is that it is completely open source. This means anyone can contribute to the platform and adapt and extend it to meet their own needs if they require extra features that don’t come with Docker right out of the box. All of this makes it an extremely convenient option for developers and system administrators.

  • Low-Overhead: Because developers don’t have to provide a truly virtualized environment all the way down to the hardware level, they keep overhead down by shipping only the libraries and OS components the application needs to run.

  • Agile: Docker was built with speed and simplicity in mind and that’s part of the reason it has become so popular. Developers can now very simply package up any software and its dependencies into a container. They can use any language, version and tooling because they are packaged together into a container that, in effect, standardizes all elements without having to sacrifice anything.

  • Portable: Docker also makes application containers completely portable in a totally new way. Developers can now ship apps from development to testing and production without breaking the code. Differences in the environment won’t have any effect on what is packaged inside the container. There’s also no need to change the app for it to work in production, which is great for IT operations teams because now they can avoid vendor lock in by moving apps across data centers.

  • Control: Docker provides ultimate control over the apps as they move along the life cycle because the environment is standardized. This makes it a lot easier to answer questions about security, manageability and scale during this process. IT teams can customize the level of control and flexibility needed to keep service levels, performance and regulatory compliance in line for particular projects.

How Was It Created and How Did It Come About?

Apps used to be developed in a very different fashion. There were tons of private data centers running off-the-shelf software controlled by gigantic code bases that were updated perhaps once a year. With the development of the cloud, all of that changed. And now that companies worldwide depend on software to connect with their customers, the software options are getting more and more customized.

As software continued to get more complex, with an expanding matrix of services, dependencies and infrastructure, it posed many challenges in reaching the end state of the app. That’s where Docker comes in.

In 2013, Docker was developed as a way to build, ship, and run applications anywhere using containers. Software containers are a standard unit of software that packages code and all of its dependencies, so an application behaves the same regardless of environment. This helped developers and system administrators transport software across infrastructures and environments without any modifications.

Docker was launched at PyCon Lightning Talk – The future of Linux Containers on March 13, 2013. The Docker mascot, Moby Dock, was created a few months later. In September, Docker and Red Hat announced a major alliance, introducing Fedora/RHEL compatibility. The company raised $15 million in Series B funding in January of 2014. In July 2014 Docker acquired Orchard (Fig) and in August 2014 the Docker Engine 1.2 was launched. In September 2014 they closed a $40 million Series C funding and by December 31, 2014, Docker had reached 100 million container downloads. In April 2015, they secured another $95 million in Series D funding and reached 300 million container downloads.

How Does It Work?

Docker is a Container as a Service (CaaS). To understand how it works, it’s important to first look at what a Linux container is.

Linux Containers

In a normal virtualized environment, virtual machines run on top of a physical machine with the aid of a hypervisor (e.g. Xen, Hyper-V). Containers run on user space on top of an operating system’s kernel. Each container has its own isolated user space, and it’s possible to run many different containers on one host. Containers are isolated in a host using two Linux kernel features: Namespaces and Control Groups.

There are six namespaces in Linux and they allow a container to have its own network interfaces, IP address, etc. The resources that a container uses are managed by control groups, which allow you to limit the amount of CPU and memory resources a container should use.
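For instance, assuming the official nginx image, the limits you pass on the command line are what control groups enforce:

    # Cap this container at 256 MB of memory and give it a reduced CPU share
    # relative to other containers (both limits are enforced via cgroups):
    $ docker run -d --name capped-web --memory=256m --cpu-shares=512 nginx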

Docker

Docker is a container engine that uses Linux kernel features to create containers on top of an OS and automates app deployment into those containers. It provides a lightweight environment to run application code, creating a more efficient workflow for moving your app through the life cycle. It runs on a client-server architecture: the Docker daemon is responsible for all actions related to containers, and it receives commands from the Docker client through the CLI or REST APIs.

Containers are built from images, and these images can be configured with apps and used as templates for creating containers. Images are organized in layers, and every change to an image is added as a layer on top of it. The Docker registry is where Docker images are stored; developers use a public or private registry to build and share images with their teams. The Docker-hosted registry service is called Docker Hub, and it allows you to upload and download images from a central location.

Once you have your images, you can create a container, which is a writable layer of the image. The image tells Docker what the container holds, what process to run when the container is launched and other configuration data. Once the container is running, you can manage it, interact with the app and then stop and remove the container when you’re done. It makes it simple to work with the app without having to alter the code.
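A sketch of that life cycle with the official nginx image (the container name is arbitrary):

    # Download the image from the registry:
    $ docker pull nginx

    # Create and start a container from it, mapping container port 80 to host port 8080:
    $ docker run -d --name web -p 8080:80 nginx

    # Interact with and inspect the running container:
    $ docker ps
    $ docker logs web

    # Stop and remove the container when you're done; the image remains for reuse:
    $ docker stop web
    $ docker rm web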

Why Should a Developer Care?

Docker is perfect for helping developers with the development cycle. It lets you develop on local containers that have your apps and services, and can then integrate into a continuous integration and deployment workflow. Basically, it can make a developer’s life much easier. It’s especially helpful for the following reasons:

Easier Scaling

Docker makes it easy to keep workloads highly portable. The containers can run on a developer’s local host, as well as on physical or virtual machines or in the cloud. It makes managing workloads much simpler, as you can use it to scale up or tear down apps and services easily and nearly in real time.

Higher Density and More Workloads

Docker is a lightweight and cost-effective alternative to hypervisor-based virtual machines, which is great for high density environments. It’s also useful for small and medium deployments, where you want to get more out of the resources you already have.

Key Vendors and Supporters Behind Docker

The Docker project relies on community support channels like forums, IRC and StackOverflow. Docker has received contributions from many big organizations, including:

  • Project Atomic

  • Google

  • GitHub

  • FedoraCloud

  • AlphaGov

  • Tsuru

  • Globo.com

Docker is supported by many cloud vendors, including:

  • Microsoft

  • IBM

  • Rackspace

  • Google

  • Canonical

  • Red Hat

  • VMware

  • Cisco

  • Amazon

Stay tuned for our next installment, where we will dig even deeper into Docker and its capabilities. In the meanwhile, read this blog post to learn how AppDynamics provides complete visibility into Docker Containers.

 

5 Things Your CIO Needs to Know about Docker

It’s no secret that Docker has revolutionized the application virtualization space. Today, it’s one of the fastest-adopted technologies across enterprises of all sizes, and it’s now more than just a developer’s preferred open source framework. It also presents a compelling business case to C-level decision makers, creating a transitional opportunity from operational efficiency and optimized IT budgets toward driving innovation and expansion. We’ve listed a few of the many reasons why your CIO needs to be paying attention to the potential around Docker.

Docker is the closest thing to a complete DevOps technology available today

DevOps has a lot to gain from container-based software. As collaboration and integration between development and operations teams have increased with technical advances, so has the need to manage application dependencies throughout the development cycle. Docker is a point of convergence for Development and Operations, creating a seamless link that lets the two collaborate without manual barriers and processes.

Docker comes with low overhead; because it maintains a small memory footprint, it allows multiple services to run at once and supports better collaboration. It also uses shared volumes to make application code on the host operating system available inside containers, so a developer can access and edit source code from any platform and see changes instantly. Docker’s flexibility also gives a front-end engineer the opportunity to explore how back-end systems work, gaining an understanding of the full stack and driving a more encompassing workflow.

Docker is more manageable and lightweight compared to virtual machines

While many PaaS options are built to handle most tasks for development teams, the overhead cost of maintaining the architecture begins to offset their benefits. Docker allows you to create flexible environments so you can work in deeper layers of the stack without disrupting other workflows. Docker containers are easier to manage than traditional heavyweight virtualization: an image is a series of layers, and changing one layer doesn’t mean impacting the rest. Before Docker, engineers would have to build out a virtual machine with synthetic load inside the environment. Now, they can package applications into containers and reduce how many virtual machines they run, cutting costs and overhead.

Docker has the competitive advantage

It’s clear that Docker is not the only container name out there today; that said, it easily owns the mindshare of IT leaders and developers alike. In the short amount of time since its 1.0 release, Docker has already seen support from leaders like Red Hat, IBM, Amazon, and even VMware. As the pioneer of a business model tailored for developers, Docker has paved the path for rapid adoption in the container space. And as an open source technology, it sustains a growing community of contributors and stakeholders who lead the channels toward innovation and advancement.

Docker allows for increased developer productivity, and in turn, increased innovation

Using container-based software already creates seamless collaboration and handoff among development, operations, and testing teams. Your engineers benefit from time away from redundant tasks and troubleshooting; returning their focus to creating, innovating, and responding to demand produces a better outcome and ultimately a better product, which benefits both them and your organization.

Creating better use of the cloud

Using containers in the cloud improves instance utilization. By deploying multiple Docker applications onto a single cloud instance, you get much closer to 100% utilization of your resources. Docker allows you to run multiple apps safely on the same cloud instance by abstracting and isolating their dependencies.

Your CIO’s role is already transitioning from what it used to be. Instead of focusing on operational efficiencies and cost centers, they have the power to drive innovation and productivity across their IT and development teams. Docker may still have a lot of room to grow and pain points to address, but it already has the potential to be implemented as a best practice throughout organizations. It instills a methodology of collaboration, sharing, education, and efficiency on teams. As DevOps and Agile practices become a necessity rather than an option within enterprise teams, Docker represents much more than container-based software. It represents a new era of digital innovation, one that helps your team excel in innovation, development, cultural practices, and more.

Docker: The Secret Sauce to Fuel Innovation

Much has already been written about the virtues of Docker, and of containers in general, along with related projects like CoreOS and Kubernetes: how life-changing Docker is, how innovative, and so on. However, the real secret to Docker’s success in the marketplace is the hidden dividend it pays in innovation. Innovation and R&D are the lifeblood of today’s technology success. Companies, no matter how large, must iterate constantly to stay ahead of their legacy competitors and the new upstarts threatening disruption. The rise of Agile methodologies and DevOps teams comes with the expectation of more releases, more features, and ultimately a better product.

How can you maintain this pace of innovation? Allow your developers to develop, instead of focusing on tedious — and time consuming — tasks dealing with distributed application upkeep and maintenance.

Pre-Docker Life

At AppDynamics, we primarily use Docker for our field enablement resources, such as demo environments. Before Docker, we would have to spin up a virtual machine and create some synthetic load inside the environment to show the benefits of AppDynamics’ monitoring. There was no quick or easy way to make an update to the VM, even a small one. Any minor change (which, as an Agile company, happened often) required some heavy lifting from our developers. There was no version control.

Productivity Gain

Removing redundant work such as updating a demo environment VM (which, let’s face it, devs don’t want to do in the first place) frees up vital time for developers to get back to doing what they do best. Setting up machines becomes a thing of the past, and devs gonna dev.

At any company, you’re likely paying a substantial wage for quality engineers. With that expense, you should expect innovation.

Docker, in our case, also removes project abandonment risk. If a project owner gets sick or leaves the company, there is typically an audit process of analyzing the code, and more often than not a good chunk has to be rebuilt in a more consistent manner. With Docker, your code lives in a standardized container, allowing a seamless handoff to the next project owner.

Fostering DevOps

Along with the handoff to the next project owner, the handoff between Dev, QA, and Ops becomes seamless as well, which is a main foundation of DevOps. The way we use Docker, and I assume others do as well, allows us to maintain repeatable processes and enable our field teams.

That shareability allows us to incorporate best practices across the entire team and present a consistent front in our engagements.

Interested to see how AppDynamics and Docker work together? Check out this blog!

AppDynamics Monitoring Excels for Microservices; New Pricing Model Introduced

It’s no news that microservices are one of the top trends, if not the top trend, in application architectures today. The idea is to take large monolithic applications, which are brittle and difficult to change, and break them into smaller, manageable pieces that provide flexibility in deployment models and facilitate agile release and development to meet the needs of today’s rapidly shifting digital businesses. Unfortunately, with this change, application and infrastructure management becomes more complex due to size and technology changes, most often adding significantly more virtual machines and/or containers to handle the growing footprint of application instances.

Fortunately, this is just the kind of environment the AppDynamics Application Intelligence Platform is built for, delivering deep visibility across even the most complex, distributed, heterogeneous environments. We trace and monitor every business transaction from end-to-end — no matter how far apart those ends are, or how circuitous the path between — including any and all API calls across any and all microservices tiers. Wherever there is an issue, the AppDynamics platform pinpoints it and steers the way to rapid resolution. This data can also be used to analyze usage patterns, scaling requirements, and even visibility into infrastructure usage.

This is just the beginning of the microservices trend. With the rise of the Internet of Things, all manner of devices and services will be driven by microservices. The applications themselves will be extended into the “Things” causing even further exponential growth over the next five years. Gartner predicts over 25 billion devices connected by 2020, with the majority being in the utilities, manufacturing, and government sectors.

AppDynamics microservices pricing is based on the size of the Java Virtual Machine (JVM) instance; any JVM running with a maximum heap size of less than one gigabyte is considered a microservice.
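In other words, the classification follows the JVM’s maximum heap setting. As a hypothetical example, a service launched with a 512 MB max heap would fall under microservices pricing:

    # -Xmx sets the JVM's maximum heap size; anything under one gigabyte
    # (512 MB here) is treated as a microservice under this pricing model.
    # order-service.jar is a hypothetical service artifact.
    $ java -Xmx512m -jar order-service.jar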

We’re excited to help usher in this important technology, and to make it feasible and easy for enterprises to deploy AppDynamics Java microservices monitoring and analytics. For a more detailed perspective, see our post, Visualizing and tracking your microservices.

Complete visibility into Docker containers with AppDynamics

Today we announced the AppDynamics Docker monitoring solution that provides an application-centric view inside and across Docker containers. Performance of distributed applications and business transactions can be tagged, traced, and monitored even as they transit multiple containers.

Before I talk more about the AppDynamics Docker monitoring solution, let me quickly review the premise of Docker and point you to a recent blog, “Visualizing and tracking your microservices,” by my colleague Jonah Kowall, which highlights Docker’s synergy with another hot technology trend: microservices.

What is Docker?

Docker is an open platform for developers and sysadmins of distributed applications that enables them to build, ship, and run any app anywhere. Docker allows applications to run on any platform irrespective of what tools were used to build them, making it easy to distribute, test, and run software. I found this 5 Minute Docker video very helpful when you want a quick and digestible overview. If you want to learn more, you can go to Docker’s web page and start with this Docker introduction video.
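The build, ship, and run workflow boils down to three commands; the image and repository names below are hypothetical:

    # Build an image from the Dockerfile in the current directory:
    $ docker build -t myorg/my-app:1.0 .

    # Ship it by pushing it to a registry (Docker Hub or a private registry):
    $ docker push myorg/my-app:1.0

    # Run it anywhere a Docker daemon is available:
    $ docker run -d -p 8080:8080 myorg/my-app:1.0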

Docker makes it very easy to make changes and package software quickly for others to test, without requiring a lot of resources. At AppDynamics, we embraced Docker completely in our development, testing, and demo environments. For example, as you can see in the attached screenshot from our demo environment, we are using Docker to provision various demo use cases with different application environments like JBoss, Tomcat, MongoDB, AngularJS, and so on.

[Screenshot: AppDynamics demo environment provisioned with Docker]

In addition, you can test drive AppDynamics by downloading, deploying, and testing with the packaged applications from the AppDynamics Docker repos.

Complete visibility into Docker environment with AppDynamics

AppDynamics provides visibility into applications and business transactions composed of multiple smaller, decoupled (micro) services deployed in a Docker environment using the Docker monitoring solution. The AppDynamics Docker Monitoring Extension monitors and reports on various metrics, such as total number of containers, running containers, images, CPU usage, memory usage, network traffic, and so on. The extension gathers metrics from the Docker Remote API, either over a Unix socket or TCP, giving you a choice of data collection protocol.
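To get a feel for the kind of data the Remote API exposes, you can query it directly with curl; this sketch assumes curl 7.40 or newer (for --unix-socket support) and the default daemon socket path:

    # List running containers as JSON over the local Unix socket:
    $ curl --unix-socket /var/run/docker.sock http://localhost/containers/json

    # Daemon-wide information: container counts, images, storage driver, and more:
    $ curl --unix-socket /var/run/docker.sock http://localhost/info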

The Docker metrics can now be correlated with the metrics from the applications running in the containers. For example, in the screenshot below, you can see the overall performance (calls per minute, in red) of a web server deployed in a Docker container correlated with Docker performance metrics (network transmit/receive and CPU usage). As the number of calls per minute to the web server increases, you can see that the network traffic and CPU usage increase as well.

[Screenshot: web server calls per minute correlated with Docker network and CPU metrics in the Metric Browser]

Customers can leverage all the core functionality of AppDynamics (e.g., dynamic baselining, health rules, policies, actions, etc.) for all of the Docker metrics while correlating them with the metrics from the applications already running in the Docker environment.

The Docker monitoring extension also creates an out-of-the-box custom dashboard with key Docker metrics, as shown in the screenshot below. This dashboard will jump-start the monitoring of your Docker environment.

[Screenshot: out-of-the-box Docker custom dashboard]

Download the AppDynamics Docker monitoring extension, set it up and configure it following the instructions on the extension page, and get end-to-end visibility into your Docker environment and the applications running within it.

Visualizing and tracking your microservices

There is no question that microservices architectures are the current rage in software design. IT professionals and developers I speak to are migrating to this pattern pretty consistently. Meeting dozens of prospects and customers in the last ten weeks at AppDynamics, I’ve asked one question regularly: are you using or thinking about moving to microservices? Most often the answer is “yes.” I typically follow up with a question on the use of container technology (such as Docker), and the answer is “maybe.” I suspect that as containers mature and become cross-platform, that answer will likely change.

I’m not going to explain the basics of microservices, as that’s handled elsewhere. The pattern of using APIs, initially built to cross application boundaries within a single enterprise or organization, is now being leveraged within a single application architecture to deliver functionality. Microservices adoption is being driven by two forces: the need for agility and speed, and the re-composing of applications to enable experimentation and support new delivery platforms such as web, mobile web, native apps, and partners. Defining these boundaries allows independent development of microservices.

There are several design criteria identified by early adopters, such as those at Netflix. In this great article, NGINX interviews Adrian Cockcroft, formerly of Netflix fame and now with Battery Ventures (which happens to also be one of our major investors). There is some discussion of the architecture, and one item that was specifically concerning to me, thinking with my IT operations hat on, was the separate back-end storage for each microservice. Disparate storage requires a complex master data management solution or strategy to keep data in sync. Inconsistent storage also causes issues should a disaster arise and recovery be necessary. The level of complexity in managing all of these separate back ends seems like a recipe for technical debt. Technical debt is the buildup of old and possibly short-term decisions that causes systems rigidity. I reached out to Adrian Cockcroft on this specific topic and got the following back from him:

“Replication across data centers is handled using Cassandra or Riak for each data store; it’s an orthogonal problem.

Keeping foreign keys in sync across data stores can be done on an exception basis when inconsistencies are detected (like read-repair) or as a nightly batch job using a throttled full table scan.

Each data store is extremely simple and will be maintained independently. In practice, this is vastly easier than a single central database with ‘kitchen sink’ schema.”

This insight provided the guidance other posts didn’t. Adrian specifically states that development should use a standard data storage technology; in his use cases, that would be Cassandra or Riak, which keeps things consistent from a support perspective. How many enterprises wish they had just two standard platforms for data storage? These architectures were pioneered by the web-scale innovators to meet service demands and agile release velocity. I found many of these stats to be compelling:

  • Amazon.com calls between 100-150 services (APIs) to build a page.
  • Netflix’s microservices architecture services 5 billion API calls per day, 99.7% of which are internal. Netflix (2014)
  • API traffic makes up over 60% of the traffic serviced by our application tier overall. Salesforce.com (October 2013)

DevOps engineers and teams responsible for operating microservices are realizing a few things, aside from the level of complexity and scale created by these new architectures:

  • Determining which services are being called to deliver application functionality to a specific user is difficult.
  • Documenting and/or visualizing the fluid application topology is something few have been able to do.
  • Building an architecture map or blueprint of the services design is nearly impossible.

Today’s monitoring approaches consist of the following broken strategies:

In the first approach, status codes are logged and examined for an individual microservice. There is no way to determine the health of the application being delivered to a user, which is a major issue. This design is then coupled with another tool for visualizing basic metrics about the microservice. The approach helps determine the health of each component, but once again, this flawed approach is similar to the way server monitoring works today, which is completely broken. Views that exist in silos do not provide the visibility required to understand the end-to-end transaction path. If a service failure cascades into other service failures, determining root cause is virtually impossible due to the asynchronous nature of microservices. Services often call additional services, which means there is an n-to-n relationship between services:

[Diagram: n-to-n call relationships between microservices]

Another approach is to use seven different tools to visualize each component individually, which once again is the root issue of #monitoringsucks.

The last example is a common pattern I’ve seen, but once again it consists of a component-level view using several monitoring tools that collect data independently. In this case, the architecture consists of CloudWatch for the infrastructure, Zabbix for the server, and statsd and collectd for metrics (which feed into Graphite). The result is three consoles, three tools, and three views of each component. These tools and consoles stop at component-level infrastructure monitoring and never touch application performance data.

Clearly, there needs to be a visualization of the services path, including end-to-end traceability for each interaction, from the user through all of the service calls, along with the infrastructure. AppDynamics delivers this capability. Here is an example of a microservices architecture running on Docker being monitored with AppDynamics. This is actually our demo environment, where we are running load generators on Docker along with each microservice for our demo application instances. We don’t publish all of this, but some of it is on our Docker repository:

 

[Screenshot: AppDynamics topology map of a microservices architecture running on Docker]

We hope to present more details at DockerCon if our talks are selected.

So what about those who don’t want to pay for software? You’ll likely pay with people and time, or with actual money. If you evaluate or select AppDynamics, it can be deployed on premises or as SaaS (and you can switch between both deployment models). Adrian Cockcroft is working on a cool new open source project called Spigo, which visualizes topologies (instrumentation is actually the hardest part, and Spigo doesn’t do that). The project is built on the D3 JavaScript library, and you can see early examples and download the source code. Today, the tool doesn’t have real-time capabilities, but those will come over time. AppDynamics views are also pure HTML5 and JavaScript, including the rich topology map pictured above. We also animate and show detailed data about usage and performance across the communication paths. Expect additional visibility as we add new data sources to enhance the topology maps.

Topology and application paths are key to managing complex architectures, and with the addition of microservices and Docker, everyone will need these capabilities. AppDynamics is the most advanced topology visualization on the market to manage these new and increasingly popular complex architectures, but open source projects such as Spigo will improve visualization.

Try it for yourself, download a FREE trial of AppDynamics today!

Priorities for Application Operations Teams in 2015 [INFOGRAPHIC]

As we start 2015, IT and Application Operations teams are prioritizing goals such as improving overall efficiency, migrating to the cloud, and putting big data to work, among other departmental goals.

In case you’ve been living under a rock and haven’t heard about the monumental success of AppDynamics AppSphere™ 2014, well, now you have some required reading. As part of the event, we surveyed all those present on their IT priorities, and the results were quite surprising.

Here are a few noteworthy stats:

  • Docker and other containers are growing. 25% said they plan to use a container solution in the next year.
  • Nearly half of respondents (46%) listed “Improving Operational Efficiency” as their number one priority.
  • Enterprises still prefer private cloud to public.

Check out the full infographic below…

 

Interested in attending the next AppDynamics AppSphere? You can pre-register now and save your seat!

Want to see how AppDynamics can help your IT and Application Operations teams? Download a FREE trial now!

 

Docker and DevOps: Why it Matters

Unless you have been living under a rock the last year, you have probably heard about Docker. Docker describes itself as an open platform for distributed applications for developers and sysadmins. That sounds great, but why does it matter?

Wait, virtualization isn’t new!?

Virtualization technology has existed for more than a decade, and in the early days it revolutionized how the world managed server environments. The virtualization layer later became the basis for the modern cloud, with virtual servers being created and scaled on demand. Traditionally, virtualization software was expensive and came with a lot of overhead. Linux cgroups have existed for a while, but more recently Linux containers came along and added namespace support to provide isolated environments for applications. Vagrant + LXC + Chef/Puppet/Ansible have been a powerful combination for a while, so what does Docker bring to the table?

Virtualization isn’t new and neither are containers, so let’s discuss what makes Docker special.

The cloud made it easy to host complex and distributed applications, and therein lies the problem. Ten years ago, applications looked straightforward and had few complex dependencies.


The reality is that application complexity has evolved significantly in the last five years, and even simple services are now extremely complex.


It has become a best practice to build large distributed applications using independent microservices. The model has changed from monolithic, to distributed, to containerized microservices. Every microservice has its own dependencies and unique deployment scenarios, which makes managing operations even more difficult. The default is not a single stack being deployed to a single server, but rather loosely coupled components deployed to many servers.

Docker makes it easy to deploy any application on any platform.

The need for Docker

It is not just that applications are more complex; more importantly, the development model and culture have evolved. When I started engineering, developers had dedicated servers with their own builds if they were lucky. More often than not, your team shared a development server, as it was too expensive and cumbersome for every developer to have their own environment. Times have changed significantly: the cultural norm nowadays is for every developer to be able to run complex applications off a virtual machine on their laptop (or a dev server in the cloud). With the cheap on-demand resources provided by cloud environments, it is common to have many application environments: dev, QA, and production. Docker containers are isolated but share the same kernel and core operating system files, which makes them lightweight and extremely fast. Using Docker to manage containers makes it easier to build distributed systems, allowing applications to run on a single machine or across many virtual machines with ease.
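As a small illustration of that workflow, a developer’s laptop can host a multi-container application with a couple of commands; the application image name is hypothetical, and --link reflects the pre-Compose networking style of the time:

    # Start a Redis container to act as the app's datastore:
    $ docker run -d --name redis redis

    # Start the application container, linked to Redis and exposed on port 5000:
    $ docker run -d --name web --link redis:redis -p 5000:5000 myorg/demo-web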

Docker is both a great software project (Docker engine) and a vibrant community (DockerHub). Docker combines a portable, lightweight application runtime and packaging tool and a cloud service for sharing applications and automating workflows.

Docker makes it easy for developers and operations to collaborate

DevOps professionals appreciate Docker as it makes it extremely easy to manage the deployment of complex distributed applications. Docker also manages to unify the DevOps community whether you are a Chef fan, Puppet enthusiast, or Ansible aficionado. Docker is also supported by the major cloud platforms including Amazon Web Services and Microsoft Azure which means it’s easy to deploy to any platform. Ultimately, Docker provides flexibility and portability so applications can run on-premise on bare metal or in a public or private cloud.

DockerHub provides official language stacks and repos


The Docker community is built on a mature open source mentality, with the corporate backing required to offer a polished experience. There is a vibrant and growing ecosystem brought together on DockerHub. That means official language stacks for the common app platforms, so the community has officially supported, high-quality Docker repos, which in turn means wider and better support.

Because Docker is so well supported, many companies offer support for Docker as a platform, with official repos on DockerHub.


Want to take a test drive of AppDynamics? It has never been easier: use Docker to deploy a complex distributed application with application performance management built in via the AppDynamics Docker repos.

Find out more and get started with Docker today.