Attaining Nirvana: The Four Levels of Cloud Maturity

Cloud adoption is atop every CIO’s priority list, and for good reason. Technology stacks are advancing at lightning speed. Application architectures of the past decade are aging fast and being replaced with modern, public and private cloud-based ones. But while cloud adoption is inevitable, the vast majority of organizations are still searching for an effective application migration strategy, notes Gartner.

If you feel like you are falling behind the competition in your cloud journey, there’s no need to panic. A structured and comprehensive migration model, combined with a smart investment strategy, will go a long way toward ensuring success. Our cloud maturity model is based on insights we’ve gleaned from hundreds of conversations with CIOs about their cloud adoption strategies, as well as numerous customer migrations we’ve supported successfully. We’ve identified common patterns—or “maturity levels”—in the adoption process. By understanding this maturity model, you may find it easier to develop your own cloud strategy.

Below we describe four levels of cloud maturity. It is important to note that progression does not require adopting every level along the way. Some organizations skip levels, jumping from Level 1 to Level 3, for instance, and bypassing Level 2 altogether. Not all organizations need to end up at Level 4, and most will have different applications in their portfolio at different levels of maturity at the same time. For example, since customer-facing apps that generate revenue need to be the most agile and responsive, it makes sense to migrate them to Level 3 or Level 4, which are optimized for rapid application delivery and scale. On the other hand, older apps in maintenance mode can be kept at Level 1 or Level 2 without incurring too much additional investment. Also, keep in mind that companies are adopting a DevOps operational philosophy to accomplish these transformational tasks (more on this below).

Hybrid apps, where parts of the application continue to run on-premises due to immovable mainframe systems or data gravity—Dave McCrory’s concept where data becomes so large it’s nearly impossible to move—are a reality as well.

Level 1: Traditional Data Center Apps

Traditional data center apps run in classic virtual machines (such as VMware) or on bare metal in a private data center, and are typically monitored by the IT Ops team. There are pros and cons to these architectures and deployments, of course. Advantages include total control over your hardware, software, and data; less reliance on external factors such as internet connectivity; and possibly a lower total cost of ownership (TCO) when deployed at scale. Disadvantages include large upfront costs that often require capital expenditure (CapEx), as well as maintenance responsibilities. They also result in longer implementation times that begin with hardware and software procurement before any code is written. Given these drawbacks, it’s quite likely that by 2020 a corporate “no cloud” policy will be extremely rare.

Level 2: Lifted and Shifted Apps

Given the cloud’s promise of elasticity and agility, nearly every IT organization has embarked on a cloud adoption journey. Those who haven’t actually started migrating their production workloads are definitely prototyping or experimenting in this area. However, cloud migration seldom runs smoothly. Oftentimes an organization will take the same virtual machines it was running on-premises and “lift and shift” them to the cloud. The target environment is typically a public cloud provider such as Amazon AWS, Microsoft Azure, or Google Cloud, or at times a private data center configured as a private cloud.

A sound migration strategy? Not really. Despite the expected cost savings of this approach, a company often finds it’s far more expensive to run its applications in the cloud in the same way it did on-premises. The large VM configurations these applications were architected for are very expensive in the cloud. In other cases, the underlying infrastructure lacks the support it had on-premises. For example, some application servers that rely on multicast to communicate with each other no longer work in the cloud when purely lifted and shifted. These shortcomings demand an application refactoring.

Lastly—and perhaps most importantly—these traditional data center applications were written with certain hardware reliability assumptions that may no longer be valid in the cloud. On-premises hardware is typically custom built and designed for higher reliability, whereas the cloud is built on a tenet that hardware is cheap and unreliable, while the software layer delivers application resiliency and robustness. This is another reason why application refactoring becomes necessary.

Level 3: Refactored Apps

Once an organization realizes that some of its lifted-and-shifted applications are fundamentally hamstrung in the cloud, it needs to refactor its apps. Several modern platforms that are purpose-built for running apps in cloud environments are a perfect choice for refactored programs. These modern platforms generally include application platform-as-a-service (PaaS) technologies or cloud native services. The typical PaaS technologies include Pivotal Cloud Foundry, Red Hat OpenShift, AWS Elastic Beanstalk, Azure PaaS, and Google App Engine. The cloud native offerings include hundreds of managed services offered by AWS, Azure and Google Cloud, such as database, Kubernetes, and messaging services, to name a few.

In a traditional data center apps environment, the IT organization is responsible for deploying, managing, and scaling the application code. By comparison, these modern platforms automate tasks and abstract out a lot of complexities; applications become elastic, dynamically consuming computing resources to meet varying workloads. The modern approach frees the IT organization from maintaining something that has no intellectual property benefit to its business, allowing it to focus on business problems as opposed to infrastructure issues.

Level 3 is where organizations begin to see signs of the cloud’s true potential. This is also where companies can dabble in container technology, and start to build the organizational muscle to run apps in modern architectures. However, not all is perfect in this world; organizations need to use caution when adopting some cloud-native services, which can result in vendor lock-in and reduced application portability, a top priority for some companies.

And while it may appear the cloud journey is now complete, obstacles remain. Refactored apps are often the same code from monolithic apps, only broken down into more manageable components. These smaller components can now scale automatically using PaaS or cloud-native services, but their code was not originally architected for truly modular, stateless deployments that would make them linearly scalable. Yes, a car may run faster after getting a custom performance chip upgrade, but in order to negotiate race-track turns at high speeds, it needs to be built from the ground up for high performance.

Level 4: Microservices—the Nirvana State

Microservices, as the name suggests, is an architecture in which the application is a collection of small, modular and independently deployable services that scale on demand to meet application workload requirements. These services are usually architected either using container and orchestration technologies (Docker and Kubernetes, or AWS, Azure, and Google container services) or serverless computing (AWS Lambda, Azure Functions, or Google Cloud Functions).
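As a minimal sketch of what “small, modular and independently deployable” means in practice, the hypothetical service below owns one narrow capability (a book catalog) and exposes a health endpoint for an orchestrator’s liveness probe. The service name, port, and endpoints are illustrative, not taken from any particular platform:

```python
# Sketch of a stateless microservice using only the Python standard library.
# Service name, endpoints, and port are hypothetical examples.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":     # liveness probe for the orchestrator
            self._reply(200, {"status": "ok"})
        elif self.path == "/books":    # the one capability this service owns
            self._reply(200, {"books": ["The Phoenix Project"]})
        else:
            self._reply(404, {"error": "not found"})

    def _reply(self, code, body):
        payload = json.dumps(body).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

def serve(port=8080):
    """Run the service; in a container this would be the entrypoint."""
    HTTPServer(("0.0.0.0", port), CatalogHandler).serve_forever()
```

Because the service holds no state between requests, an orchestrator can run as many replicas as the current load requires and route around any replica that fails its health check.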

The reason this level is regarded as the “Nirvana State” is that applications built on microservices architectures are ultra-scalable and fault tolerant, ensuring an ultra-responsive and uninterrupted end-user experience. Organizations can distribute smaller services to nimble DevOps teams, which run independently and enjoy the freedom to innovate faster, bringing new features to market in less time.

It requires a big commitment, though: a company must either rearchitect its applications from the core, or write all new applications with the microservices approach.

While these granular services offer many benefits, and the individual services are simpler to manage, combining them into broader business applications can get complicated, particularly from a monitoring and troubleshooting standpoint.

Where DevOps Fits In

Although this maturity model explores cloud adoption through a technology lens, organizational evolution is just as critical to the success of the adoption journey. Moving from siloed Dev and IT organizations to a DevOps model is essential for achieving the full benefits of the higher levels of maturity.

A DevOps team generally consists of 8 to 10 people, including developers and site reliability engineers (SREs). Each super-agile team is responsible for only a few services, not the entire application. Teams take great pride in the responsiveness of their services, as well as the speed at which they deliver new functionality. The key tenet here is: You build it, you own it, you run it! That’s what makes DevOps different from the traditional Dev and Ops split.

Nirvana Takes Time—and Hard Work

Cloud adoption is a journey where the adoption of microservices on cloud platforms (public or private) can lead to greater agility, significant cost savings, and superior elasticity for organizations. The road may seem treacherous at times, but don’t get discouraged! Everyone is on the same path. The global IT sector is poised for another year of explosive growth—5.0 percent, CompTIA forecasts—and is embracing fast-paced innovation. Taking a considered approach and adopting DevOps practices is the fastest way to achieving the Nirvana State.

Learn more about how AppDynamics can help you on the path to cloud maturity.

Top 3 Challenges of Adopting Microservices as Part of Your Cloud Migration

IDC estimates 60% of worldwide enterprises are migrating existing applications to the cloud. With the promise of greater flexibility, a reduction in overhead, and the potential for significant cost savings, it’s a logical decision. But instead of performing a “lift and shift” (simply moving an existing application to a cloud platform as-is), many businesses use the migration period as an opportunity to modernize the architecture of their applications.

What’s more, in a survey by NGINX, over 70% of organizations say they’re adopting or exploring microservices for their new architectures – and with good reason. Breaking a monolithic application into manageable microservices allows development teams to rapidly respond to an ever-evolving set of business requirements, choose the right technology stack for each task, and readily provide support for a variety of delivery platforms, including web, mobile, and native apps.

However, adopting microservices as part of your cloud migration isn’t always easy. Below are three common challenges we’ve seen enterprises face, and solutions to help mitigate these risks.

Challenge 1: Identifying what needs to be migrated to microservices

Before you can begin breaking your application out into individual microservices, you first need to understand its full scope and architecture. This can prove challenging, as the overarching view of the application is often based on “tribal knowledge” or cobbled together from a collection of disparate tools. Sometimes a more holistic view may be available, but it is based on outdated information that does not reflect the current architecture of the application.

You need to find a solution that will help you discover and map every component, dependency, and third party call of your application. This solution should help you understand the relationship of these pieces and how each impacts your application’s behavior and user experience. Armed with this information you’ll have a clear picture of what needs to be migrated, and will be able to make more informed decisions regarding the architecture of your microservices.
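The end product of that discovery step is essentially a dependency map. As an illustrative sketch (the component names are made up, and the observed calls would in practice come from an instrumentation agent rather than a hand-written list), assembling one from observed caller/callee pairs might look like:

```python
# Sketch: assemble a component dependency map from observed (caller, callee)
# calls so the migration scope is explicit. Component names are hypothetical;
# in practice the call list would be discovered by an APM agent.
from collections import defaultdict

def dependency_map(observed_calls):
    """Group observed (caller, callee) pairs into an adjacency map."""
    graph = defaultdict(set)
    for caller, callee in observed_calls:
        graph[caller].add(callee)
    return {component: sorted(deps) for component, deps in graph.items()}

calls = [
    ("web", "catalog"),
    ("web", "auth"),
    ("catalog", "inventory-db"),
    ("catalog", "payment-gateway"),   # third-party call: stays external post-migration
]
# web depends on auth and catalog; catalog on inventory-db and payment-gateway
print(dependency_map(calls))
```

Walking such a map immediately surfaces which components can move independently and which third-party or data dependencies will anchor parts of the application in place.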

Challenge 2: Ensuring your microservices meet or beat pre-migration performance

To ensure your application runs smoothly post-migration, and that user experience is not negatively impacted, you need a way to compare performance metrics from pre- and post-migration environments. This can be extremely difficult, as the architecture of these two environments (with the changes to hardware and the move to a distributed architecture) can look drastically different. To make things even more difficult, the monitoring tools supplied by individual hosting providers give insight into only a small portion of the entire architecture, and have no way of creating a more holistic set of data.

To combat these issues and establish a consistent baseline by which to measure your performance and user experience, you will need to capture key user interactions (often referred to as business transactions), prior to beginning your migration. The business transactions are likely to remain the same through migration whereas other metrics may change as you take different code paths and deploy on different infrastructures. Armed with baseline data about your business transactions, you can easily compare the performance of your pre and post-migration environments and ensure that there is no impact to your user experience or overall performance.
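A minimal sketch of that comparison, assuming you have captured per-transaction response-time baselines before the move (the transaction names, numbers, and tolerance below are made up for illustration):

```python
# Sketch: flag business transactions whose post-migration response time
# regresses beyond a tolerance against the pre-migration baseline.
# Transaction names and numbers are illustrative.

def regressions(pre_ms, post_ms, tolerance=0.10):
    """Return transactions whose post-migration response time exceeds
    the pre-migration baseline by more than `tolerance` (10% default)."""
    flagged = {}
    for txn, baseline in pre_ms.items():
        current = post_ms.get(txn)
        if current is not None and current > baseline * (1 + tolerance):
            flagged[txn] = (baseline, current)
    return flagged

pre  = {"Login": 120, "Search": 250, "Checkout": 400}   # on-premises baseline (ms)
post = {"Login": 115, "Search": 310, "Checkout": 410}   # post-migration (ms)
print(regressions(pre, post))   # → {'Search': (250, 310)}
```

Because the comparison is keyed on business transactions rather than hosts or code paths, it survives the architectural churn of the migration itself.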

Challenge 3: Monitoring your new microservice environment

With large monolithic applications running a single codebase on a few pieces of hardware, two or three tools could once provide complete, straightforward monitoring of application and infrastructure performance. However, with the introduction of microservices, and the potential for each service to implement its own solution for technology stack, database, and hosting provider, a single service may now require a greater number of monitoring tools than the entire application once did. And microservice monitoring brings specific challenges: they are often short-lived, which means monitoring over a longer period of time can be more complicated; and there may be more pathways through which the service is reached, potentially exposing issues such as thread contention.

Finally, while development teams previously didn’t require a monitoring solution which took infrastructure into account, the move to DevOps and the reliance on cloud native technologies means this factor can no longer be ignored.

The goal then becomes finding a unified monitoring platform that supports all of your environments, regardless of language or technology stack. This solution must collate metrics from across your application and infrastructure into a single source of truth, and allow for correlation of those metrics to user experience.

Are you ready?

AppDynamics has been helping customers like Nasdaq, eHarmony, and Telkom with their cloud migration and microservice adoptions. Schedule a demo to see what AppDynamics can do for you.

IDC White Paper: Critical Application And Business KPIs for Successful Cloud Migration

Today, enterprises worldwide are moving towards a cloud-first strategy with the promise of benefits like agility, scalability, and innovation-at-speed.

However, migrating to the cloud can also present issues with security, compliance, performance, and more. As a result, it’s critical for businesses to understand the types of application and infrastructure monitoring, analytics, and performance information needed for successful migration.

To gain more insight into the best practices and KPIs for cloud migration, IDC surveyed 600 global enterprise decision makers about their cloud migration challenges and the information they needed to make informed decisions before, during, and post-migration.

Below are key findings from the IDC white paper, Critical Application and Business KPIs for Successful Cloud Migration (August 2017), sponsored by AppDynamics.

Application Performance Management (APM) is Imperative to Support Effective Migration  

To make smarter migration decisions, surveyed respondents reported a need for insight into KPIs for business and technical metrics, end user experience and business impact analysis, cloud capacity utilization, and cost-per-application evaluation. IDC’s research shows that application performance monitoring and analytics shed light onto these KPIs, making APM increasingly required to support effective planning and validation.

Cost Savings are the Biggest Benefit Expected of Migration to Cloud Infrastructure

Some 60% of respondents indicate that IT and development cost savings are the most important business benefits expected from migration. While the survey suggests that these expectations are often met, it’s important to note that this requires modernization of application architectures, supporting technology, people, and processes. In support of this, IDC highlights that containers are playing an increasing role in successful cloud migration, with almost two-thirds of surveyed respondents either currently containerizing new or existing applications, or planning to implement containers to support existing applications.

iOS and Android Applications are the Least Likely to Have Current or Planned Migrations

Enterprises no longer fear migrating existing applications. Nearly half (45.9%) of the respondents said they’ve already migrated some custom-developed browser-based applications to the cloud, and another 38.7% plan to do so within the next two years. Interestingly, iOS and Android applications are currently the least likely to have been migrated to date, but are important priorities for the next two years.

AppDynamics’ Role in Cloud Migration

The AppDynamics platform gives your enterprise real-time, end-to-end data about your users, transactions, code, and infrastructure to arm you with the information you need to support your application migration to the cloud. AppDynamics’ APM solution plays a central role in any cloud journey by offering:  

  • Breadth of visibility into complex and distributed applications, including every dependency, user experience, and transaction to help accelerate cloud migration evaluation and planning.
  • Pre and post-move business and technical KPI assessments to prove migration success.
  • End user experience and business impact analysis of cloud computing.

For additional data and insights, download the IDC white paper now: Critical Application and Business KPIs for Successful Cloud Migration.

Good Migrations: Five Steps to Successful Cloud Migration

Unless you’ve been living in a cave for the last decade, you’ve seen cloud computing spread like wildfire across every industry. You also probably know that the cloud plays a pivotal role when it comes to digital transformation. Whether “the app is the business” is a well-worn subject or not, it doesn’t change the fact that companies are spinning up their apps faster because they can scale, test, and optimize them in real production environments in the cloud. But, like any technology, the cloud isn’t perfect. If it isn’t configured specifically to your application and business needs, you can find yourself dealing with performance issues, unhappy users, and one splitting headache.

On the flip side, developing a successful cloud migration strategy can be a rewarding experience that reverberates across your enterprise. The fact that 48 of the Fortune 50 Corporations have announced plans to adopt the cloud or have done so already speaks to the fact that the cloud isn’t just good for business. It’s a must-have, basic requirement. Like Wi-Fi. And a laptop.

In our new eBook, Good Migrations: Five Steps to Successful Cloud Migration, we focus on the right steps in migration, and also the varying ways enterprises can take those steps. Simply put, no two enterprises — nor their apps — are alike. So there’s no single one-size-fits-all cloud migration plan or solution that works for every enterprise. As for company happy hours? Those seem to work pretty well for everyone.

Here are just a few of the valuable points covered:

Why Migrate at All

Every company has different reasons for migrating to the cloud. So, you do need to ask yourself a few questions about why you want (or need) to move to the cloud. It’s critical that you focus on precisely what it is you need specific to your app and business needs. Define what cloud environment fits your objectives. Determine how you’ll make the move. Plan for every phase: before, during — and yes — after you migrate.

Where Your App Lives Impacts How it Lives

New environments and IT configurations come with new rules. So you’ll scrutinize your app through a different lens after migration. You have to understand every system that connects to and interacts with it. You’re not reinventing the wheel, but you will likely need to modify it. Yes, it’s a time investment, but it’s one that will ultimately save you in the long run.

The Phases of Cloud Migration

Migration is a process that rolls out in phases. Here’s a summary of them:

  1. Choose your providers: Pick one or several, depending on your needs.
  2. Assign responsibilities to the providers: Who does what? Who is in control of the app?
  3. Adjust the internal configuration: Manage IT expectations and apps that aren’t migrated.
  4. Get users on board: If your team isn’t comfortable with the technology and aligned with the change, the migration won’t make a difference.

Rising Currency in the Cloud

Migrating an app or two isn’t going to fetch you the ROI it’s capable of. You need to go big or go home (that is, if you determine that the cloud is appropriate for your enterprise at all). When you implement a cloud migration on a large scale across your enterprise, you can create considerable value. To measure that value you need to set clear goals that can be measured across a variety of operations. In time, you’ll be able to calculate how much time and money your company saves, spends, and earns in the cloud.

Migration is a Journey, Not a Destination

Your apps are never truly complete. You’re always improving them. The same is true with cloud migration. Priorities change. Business inevitably changes. Users change. Like your apps and your business in general, you’ll continue to evaluate your cloud environment, making sure your teams are in alignment, along with tweaking networks and devices.

Learn More

To learn more so you can make informed decisions, be sure to download the eBook Good Migrations: Five Steps to Successful Cloud Migration.


The Enterprise is Ready for the Cloud …

Recently, we here at AppDynamics have seen two major transformations in adoption of the public cloud. First is the adoption of the cloud for production workloads, and second is adoption of the cloud by large enterprises.

Transformation 1: From Dev/Test/QA to Production Workloads

Because dev/test cycles for applications have different capacity requirements at different times, the on-demand computing resources available through the cloud are ideally suited to serve these elastic requirements. But recently we’ve seen a significant transition by customers to run production workloads on AWS and other public clouds. Perhaps there was a trust element in the cloud that has taken some time to solidify. I suppose relentless price cuts in cloud don’t hurt either. But it seems clear that enterprises are embracing the cloud for production workloads.

Transformation 2: From Startups to Enterprises

First adopters of the cloud were primarily startups who did not want to (or could not) lay out the capital investment necessary to stand up their own datacenters. The cloud has afforded many startups the computing capacity, storage, and resources of a major datacenter without the upfront investment. This on-demand computing power broke down many barriers and enabled significant innovation. Now, the late majority — enterprises — are catching on and signing up for the cloud en masse. Not only are these enterprises adopting AWS for new development and innovation, but they are migrating existing applications from on-premises data centers to AWS — and using AppDynamics to help gather valuable pre- and post-migration data about their applications.

Expanded AppDynamics Support for AWS

In support of these transformations, AppDynamics is making additional investments in AWS. First, we have released new capabilities to support additional AWS native services such as Amazon DynamoDB, Amazon SQS, and Amazon S3, adding to our existing support of Amazon EC2, Amazon CloudWatch, and Amazon RDS. AppDynamics now monitors more AWS native services, providing even greater visibility and control for your applications running on the AWS Cloud.

Second, because we want our customers and potential customers to be successful migrating to AWS, we have launched a special 60-day trial to help customers migrate on-premises production workloads to AWS. Using AppDynamics to instrument on-premises workloads as part of a pre-migration assessment, customers are able to draw accurate, real-time topology maps of their applications, and benchmark the performance of the application in its on-premises state prior to fork-lifting it to the cloud. This visibility gives the enterprise a clear picture of what components can and should be migrated, as well as providing demonstrable data about the actual performance of the existing application. With our unique “compare release” function, customers can visualize on a single screen the pre-migration and post-migration application architecture, as well as the performance of key application transactions.

Try AppDynamics for AWS, and we are certain you will see, as Nasdaq OMX and other enterprise customers have, that the visibility AppDynamics provides is especially valuable when migrating a platform from your internal infrastructure to the AWS Cloud.

The answer for government applications migrating to the cloud: visibility

Recently, I’ve had several conversations with US Federal Government agencies about monitoring applications moving to FedRAMP (Federal Risk and Authorization Management Program) data centers. Because of the Government’s Cloud First policy, which mandates that agencies take full advantage of cloud computing benefits, agencies are increasingly required to move applications outside of their own data centers. With less control over the infrastructure, the new focus is on the performance and availability of their applications running in the cloud. Agencies want assurance that their applications will run at the same level of performance (or better) once they make the move. This is where I believe an APM solution like AppDynamics is a perfect fit to mitigate risk by providing agencies 100% visibility into their application performance.

With cloud environments, I’ve found traditional approaches for monitoring simply don’t work. This is because the agencies have limited access to the underlying IT infrastructure in the cloud. Federal agencies need the help of companies such as AppDynamics to provide them visibility into application performance from the end user down to the infrastructure to truly understand the health of their critical applications.

Before the cloud, when agencies ran applications on-premises, they had physical access to the underlying IT infrastructure, which meant they could deploy element-monitoring tools and gain access to the network to try to infer the health of the applications. At AppDynamics, we take a modern approach to APM by monitoring performance from the top down through the concept of Business Transactions. The Business Transaction is the mechanism by which AppDynamics orders and aligns load traffic (response time, throughput, and so on) with the business perspective (for example, Login, Search, and so on).


I’ve found AppDynamics is flexible enough to help customers monitor applications both on-premises and in cloud environments. AppDynamics was designed from the beginning to be cloud-portable by working within the constraints of cloud environments. The three main reasons why I believe AppDynamics is perfect for federal agencies monitoring critical cloud applications are:

Firstly, AppDynamics is an all-software, agent-based solution that doesn’t require a high-bandwidth network connection. The agents can report across the internet using a one-way HTTP(S) connection back to the controller software. This means agency applications that span multiple FedRAMP clouds can be monitored with a single AppDynamics controller. The controller has the intelligence to stitch the transactions that flow between clouds into one view (think highly layered service-oriented architectures). The self-discovering flow map and single-pane-of-glass view is vital for obtaining the necessary visibility into your application.

Secondly, AppDynamics doesn’t require privileged network access to components such as a SPAN port or a network TAP. The traditional approach for End User Monitoring was to receive a copy of the traffic to and from the application and decrypt the packets to make sense of the end user experience. This approach fails in cloud environments since the cloud providers typically will not provide access to the underlying physical infrastructure. AppDynamics captures end user experience in a cloud-friendly way through JavaScript on the browser. Through this approach, AppDynamics not only captures and correlates all end user activity with the application code execution, but also measures the page render time in the end user’s browser.


By not requiring privileged network access or a high-bandwidth management network, AppDynamics can follow the workload as applications are migrated to the cloud. Agencies will have visibility into the before and after states of their applications’ performance.

Thirdly, AppDynamics can help agencies scale their applications automatically by utilizing our cloud auto-scaling capabilities. Cloud auto-scaling decisions are typically made based on infrastructure metrics such as CPU utilization. However, I believe the better and more accurate way to auto-scale in cloud environments is to make decisions based on application metrics such as requests per minute. For more information, see “Cloud Auto Scaling using AppDynamics” below.

Other reasons why AppDynamics is a perfect solution for modern applications moving to the cloud are:

  1. It’s easy to deploy
  2. It requires minimal configurations (Instrumentation works out of the box)
  3. It requires minimal care and feeding on an ongoing basis (Supports rapid change with Agile development)
  4. It has a built-in Dynamic Baselining Engine to proactively alert teams of performance issues

For agencies to be successful running applications in the cloud they need end-to-end visibility into their application performance. With AppDynamics, federal agencies can finally migrate to the cloud without impacting or worrying about their applications. As all critical software applications become more complex, the visibility AppDynamics provides isn’t just a luxury feature, it’s a necessity.

Take five minutes to get complete visibility into the performance of your cloud applications with AppDynamics today.

Cloud Auto Scaling using AppDynamics

Are your applications moving to an elastic cloud infrastructure? The question is no longer if, but when – whether that is a public cloud, a private cloud, or a hybrid cloud.

Classic computing capacity models clearly indicate that over-provisioning is essential to keep up with peak loads of traffic while the over-provisioned capacity is largely left under-utilized during non-peak periods. Such over-provisioning and under-utilization can be avoided by moving to an elastic cloud-computing capacity model where just-in-time provisioning and deprovisioning can be achieved by automatically scaling up and down on-demand.


Cloud auto-scaling decisions are often made based on infrastructure metrics such as CPU Utilization. However, in a cloud or virtualized environment, infrastructure metrics may not be reliable enough for making auto-scaling decisions. Auto-scaling decisions based on application metrics, such as request-queue depth or requests per minute, are much more useful since the application is intimately familiar with conditions such as:

  • When the existing number of compute instances cannot handle the incoming arrival rate of traffic and must elastically scale up additional instances based on a high-watermark threshold on a given application metric

  • When it’s time to scale back down based on a low-watermark threshold on the same application metric.
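The high- and low-watermark logic above can be sketched in a few lines; the thresholds mirror the Calls per Minute values used later in this walkthrough, and the function itself is an illustration, not an AppDynamics API:

```python
def scaling_decision(requests_per_min, current_instances,
                     high_watermark=3500, low_watermark=3000,
                     min_instances=2):
    """Decide on scaling based on an application metric
    (requests per minute for the tier)."""
    if requests_per_min > high_watermark:
        return "scale_up"
    if requests_per_min < low_watermark and current_instances > min_instances:
        return "scale_down"
    return "hold"

print(scaling_decision(4000, 3))  # above the high watermark
print(scaling_decision(2500, 3))  # below the low watermark
print(scaling_decision(3200, 3))  # between the watermarks: do nothing
```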

Every application service can be expressed as a statistical model of traffic, queues and resources as shown in the diagram below.

  • For a given arrival rate λ, we need to maximize the service rate μ with an optimal number n of resources. Monitoring either the arrival rate λ itself (for synchronous requests) or the queue depth q (for asynchronous requests) helps us tune the application system and decide whether additional service compute instances are needed to meet the demands of the current arrival rate.

  • Having visibility into this data allows us not only to find bottlenecks in the code but also possibly flaws in design and architecture. AppDynamics provides visibility into these application metrics.
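From this queueing model, a rough capacity calculation falls out directly: for arrival rate λ and per-instance service rate μ, you need at least n = ⌈λ / (μ · target utilization)⌉ instances. A minimal sketch, with made-up rates:

```python
import math

def instances_needed(arrival_rate, service_rate_per_instance,
                     target_utilization=0.7):
    """Minimum n so that rho = lambda / (n * mu) stays at or below
    the target utilization."""
    return math.ceil(arrival_rate /
                     (service_rate_per_instance * target_utilization))

# e.g. 3500 requests/min arriving, each instance sustaining ~1000 req/min
print(instances_needed(3500, 1000))  # 5 instances keep utilization <= 70%
```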

The basic flow for auto-scaling using AppDynamics is shown in the diagram below:

Let’s take an example to illustrate how this actually works in AppDynamics. ACME Corporation has a multi-tier distributed online bookstore application running on AWS EC2:

The front-end E-Commerce tier is experiencing a very heavy volume of requests resulting in the tier going into a Warning (Yellow) state.

Now we will walk through the six steps ACME Corporation uses to take advantage of the Cloud Auto Scaling features of AppDynamics.


Step 1: Enable display of Cloud Auto Scaling features

To do this, they first select “Setup -> My Preferences” and check the box to “Show Cloud Auto Scaling features” under “Advanced Features”:

Step 2: Define a Compute Cloud and an Image

Then they click on the Cloud Auto Scaling option at the bottom left of the screen:

 Next, they click on Compute Clouds and register a new Compute Cloud:

and fill in their AWS EC2 account info and credentials:

Next, they register a new image from which new instances of the E-Commerce tier nodes can be spawned:


and provide the details of that machine image:

Using the Launch Instance button, they can manually test whether an instance launches successfully from this image.

Step 3: Define a scale-up and a scale-down workflow

 Then, they define a scale-up workflow for the E-Commerce tier with a step to create a new compute instance from the AMI defined earlier:

Next, they define a scale-down workflow for the E-Commerce tier with a step to terminate a running compute instance from the same AMI:

Now, you may be wondering why these workflows are so simple and why there are no additional steps to rebalance the load balancer after a compute instance is added or terminated. The magic lies in the Ubuntu AMI that bootstraps the Tomcat JVM for the E-Commerce tier: its startup logic automatically joins the cluster, and a shutdown hook automatically leaves it, each by communicating directly with the Apache load balancer’s mod_proxy.
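The join-on-startup, leave-on-shutdown pattern can be sketched generically. This is not the actual AMI bootstrap code (which talks to Apache mod_proxy directly); the node name and the in-process membership set are hypothetical stand-ins:

```python
import atexit

cluster = set()  # stand-in for the load balancer's worker pool

def register_with_balancer(node):
    # Startup logic: announce this node so traffic is routed to it.
    cluster.add(node)

def deregister_from_balancer(node):
    # Shutdown hook: remove this node so no new requests arrive.
    cluster.discard(node)

NODE = "ecommerce-node-1"  # hypothetical instance name
register_with_balancer(NODE)
atexit.register(deregister_from_balancer, NODE)  # runs on clean shutdown
```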

Step 4: Define an auto-scaling health rule

Now, they define an auto-scaling health rule for the E-Commerce tier and select the E-Commerce Server tier as the scope for the health rule:


and specify a Critical Condition of “Calls per Minute > 3500”, which in this case represents the arrival rate λ:

and a Warning Condition of “Calls per Minute > 3000”:

Note: Choose the Calls per Minute threshold values in the Critical and Warning conditions carefully. Thresholds that are too close together can cause scaling thrash, where instances are repeatedly added and removed as the load hovers around a single value.
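To see why a narrow gap between thresholds causes thrash, consider this toy simulation. It scales on a per-instance load metric for illustration, and all numbers are invented:

```python
def simulate(demand, up, down, start=2):
    """Scale on a per-instance load metric; count scaling actions."""
    instances, actions = start, 0
    for total in demand:
        per_instance = total / instances
        if per_instance > up:
            instances += 1
            actions += 1
        elif per_instance < down and instances > 1:
            instances -= 1
            actions += 1
    return actions

steady = [3300] * 20  # constant total demand across the tier
print(simulate(steady, up=1600, down=1500))  # narrow gap: thrashes every sample
print(simulate(steady, up=1600, down=1000))  # wide gap: one scale-up, then stable
```

With the narrow gap, every scale-up drops the per-instance load below the scale-down threshold, so the tier oscillates on every sample; widening the gap lets it settle after a single action.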

Step 5: Define a scale-up policy

Now, they define a Scale Up Policy, which binds their newly defined health rule to a Cloud Auto Scaling action:

Step 6: Define a scale-down policy

Finally, they define another policy that will invoke the Scale-down workflow when the Health rule violation is resolved.

And they’re done!

After a period of time, when Calls per Minute exceeds the configured threshold, they see that the auto-scaling health rule was violated, as it shows up in the Events list:


When they drill down into the event, they can see the details of the Health Rule violation:


And when they click on the Actions Executed for the Cloud Auto-Scaling Workflows, they see:


Also, under Workflow executions, they see:

and when they drill-down into it, they see:


Finally, under the Machines item under Cloud Auto Scaling, they can see the actual compute instance that was started as a result of auto-scaling:

Thus, without any manual intervention, additional capacity is provisioned automatically whenever the E-Commerce tier needs it, as indicated by the Calls per Minute threshold in the auto-scaling health rule. Those additional instances are also released automatically when Calls per Minute drops back below that threshold.


AppDynamics has cloud connectors for all the major cloud providers:



If you have your own cloud platform, you can always develop your own Cloud Connector using the AppDynamics Cloud Connector API and SDKs that are available via the AppDynamics Community. Find out more in the AppDynamics Connector Development Guide. Our cloud connector code is all open-source and can be found on GitHub.

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

Cloud Migration Tips Part 4: Failure Breeds Success

Welcome back to my series on migration to the cloud. In my last post we discussed all of the effort you need to put into the planning phase of your migration. In this post we are going to focus on what should happen directly after the migration has been completed.

Regardless of how well you planned or if you just decided to dive right in without any forethought, there are steps that need to be taken after your migration to ensure your application is working properly and performing up to snuff. These steps need to be performed whether you chose to use a public, private or hybrid cloud implementation.

Step 1: Take Your New Cloud Based Application for a Test Drive

Go easy at first and just roll through the functionality as a user would. If it doesn’t work well for you, then you know it won’t work well when a bunch of users are hitting it.

Assuming things went well with your functional test it’s time to go bigger. Lay down a load test and see step 2 below.

Step 2: Monitoring is Not the Job of Your Users

If you’re relying on the users of your application to let you know about performance or stability issues, you are already a major step behind your competition. If you planned properly, you have a monitoring system in place. If you’re just winging it, put a monitoring system in place now!

Here are the things your monitoring tool should help you understand:

  • Architecture and Flow: You design an application architecture to support the type of application you are building. How do you really know whether you deployed the architecture you designed in the first place? How do you know whether your application flow changes over time and causes problems? Cloud computing environments are dynamic and can shift at any given time. You need a tool in place that lets you know exactly what happened, when it happened, and whether it caused any impact.

E-Commerce Website Architecture

What happens if you don’t have a flow map? Simple, when there’s a problem you waste a bunch of time trying to figure out what components were involved in the problematic transaction so that you can isolate the problem to the right component.

  • Response Times: Slow sucks! You moved to the cloud for any number of reasons, but one thing is certain: your users don’t want your application(s) to run slowly. It seems obvious to monitor the response time of your applications, yet I’m constantly amazed by how many organizations still don’t have this type of monitoring in place. There are really only two options in this category: let your users tell you when (notice I didn’t say if) your application is slow, or have a monitoring tool alert you right away.


  • Resources: You need to keep an eye on the resources you are consuming in the cloud. New instances of your application can quickly add up to a large expense if your code is inefficient. You need to understand how well your application scales under load and fix the resource hogs so that you can drive better value out of your application as usage increases.
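To illustrate how code efficiency translates directly into cloud spend, here is a hypothetical cost comparison; the request rates, per-instance throughput, and hourly rate are all made-up numbers:

```python
def monthly_cost(peak_rpm, rpm_per_instance, rate_per_hour=0.10, hours=730):
    """Monthly cost when the tier is sized for peak requests/min."""
    instances = -(-peak_rpm // rpm_per_instance)  # ceiling division
    return instances * rate_per_hour * hours

print(monthly_cost(10000, 1000))  # efficient code
print(monthly_cost(10000, 500))   # same app with half the per-instance throughput
```

Halving per-instance throughput doubles the instance count and the monthly bill, which is why fixing resource hogs pays for itself as usage grows.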


Step 3: Elasticity

Elasticity is a key benefit of migrating your application to the cloud. Traditional application architectures accounted for periodic spikes in workload by permanently over-allocating resources. Put simply, we used to buy a bunch of servers so that we could handle the monthly or yearly spikes in activity. Most of these servers sat nearly idle the rest of the year and generated heat.

If you’re going to take advantage of the inherent elasticity within your cloud environment you need to understand exactly how your application will respond to being overloaded and how your infrastructure adapts to this condition. Cloud providers have tools to execute the dynamic shift in resources but ultimately you need a tool to detect the trigger conditions and then interface with the dynamic provisioning features of your cloud.

The combination of slow transactions AND resource exhaustion would be a great trigger to spin up new application instances. Each condition on its own does not justify adding a new resource.
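A composite trigger along these lines might look like the following sketch; the thresholds and the function itself are illustrative assumptions, not an AppDynamics API:

```python
def should_scale_up(avg_response_ms, baseline_ms, cpu_percent,
                    slow_factor=2.0, cpu_limit=85):
    """Scale up only when transactions are slow AND instances are
    resource-exhausted. Either condition alone is not enough:
    slow + idle CPU suggests a code or dependency problem, while
    busy CPU + fast responses means the tier is still coping."""
    slow = avg_response_ms > baseline_ms * slow_factor
    exhausted = cpu_percent > cpu_limit
    return slow and exhausted

print(should_scale_up(900, 300, 92))  # slow and exhausted: add capacity
print(should_scale_up(900, 300, 40))  # slow but idle: don't add capacity
print(should_scale_up(320, 300, 95))  # busy but fast: don't add capacity
```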


The point here is that migrating to the cloud is not a magic bullet. You need to know how to use the features that are available and you need the right tools to help you understand exactly when to use those features. You need to stress your new cloud application to the point of failure and understand how to respond BEFORE you set users free on your application. Your users will certainly break your application and during an event is not the proper time to figure out how to manage your application in the cloud.

Let failure be your guide to success. Fail when it doesn’t matter so that you can succeed when the pressure is on. The cloud auto-scaling features shown in this post are part of AppDynamics Pro 3.7. Click here to start your free trial today.

Cloud Migration Tips #3: Plan to Fail

Planning to deploy or migrate an application to a cloud environment is a big deal. In my last post we discussed the value of using real business and IT requirements to drive the justification of using a cloud architecture. We also explored the importance of using monitoring information to understand your before and after picture of application performance and overall success.

In this post I am going to dive deeper into the planning phase. You can’t expect to throw a half-baked plan together and just deal with problems as they pop up during an application migration. That will almost certainly result in frustration for the end users, the IT staff, and the business that relies upon the application.

In reality, at least 90% of your total project time should be dedicated to planning and at most 10% to the actual implementation. The big question is, “What are the most important aspects of the planning phase?” Here’s my top-10 list for cloud migration planning:

  1. Application Portfolio Rationalization – Let’s face reality for a moment… If you’re in a large enterprise, you have multiple apps that perform very similar business functions at some level. Application Portfolio Rationalization is a method of discovering the overlap among your applications and consolidating where it makes sense. It’s like spring cleaning for your IT department. Get your house in order before you start moving applications, or you will waste a lot of time and money migrating duplicate business functionality across your portfolio.
  2. Business Justification and Goal Identification – If there is one thing I try to make clear in every blog post it is the fact that you need to justify your activities using business logic. If there is no business driver for a change then why make the change? Even very techie-like activities can be related back to business drivers.
    Example: Techie activity – quarterly server patching. Business driver – failure to patch exposes the business to the risk of being hacked, which could cause brand damage and loss of revenue.
    I included goal identification with business justification because your goals should align with the business drivers responsible for the change.
  3. Current State Architecture Assessment (App and Infra) – This task sounds simple but is really difficult for most companies. Current State Architecture Assessment is all about documenting the actually deployed application components, infrastructure components, and application dependencies. Why is this so difficult? Most enterprises have implemented a CMDB to try to document this information, but the CMDB is typically populated and updated manually. What happens in reality is that, over time, the CMDB is neglected as application and infrastructure changes occur. To solve this problem, some companies have turned to automated discovery and dependency mapping tools. These tools are usually agentless: they log in to each server at pre-defined intervals and scan for processes, network connections, and so on, creating a very detailed map that includes all persistent connections to and from each host, whether or not they are application related. However, these periodic scans miss short-lived service calls between applications unless a scan happens to run at roughly the same moment as the transient call. An agent-based APM tool covers the gaps these other methods leave.


    How well do you know the current architecture and dependencies of your application?

  4. Current State Performance Assessment – Traditional monitoring metrics (CPU, memory, disk I/O, network I/O, etc.) will help you size your cloud environment but tell you nothing about actual application performance. The performance metrics that matter are end-user response time, business transaction response time, external service response time, error and exception rates, and transaction throughput, with baselines for each. This is also a good time to make sure there are no glaring performance issues that you would otherwise promote into your cloud environment. It’s better to fix known issues before you migrate, as the extra complexity of the cloud can amplify your application problems.


    You don’t want to carry application problems into your new environment.

  5. Architectural Change Impact Assessment – Now that you know what your real application and infrastructure components are, you need to assess the impact of the differences between traditional and cloud architectures. Are there components that won’t work well (or at all) in a cloud architecture? Are code changes required to take advantage of the dynamic features available in your cloud of choice? You need a very good understanding of how your application works today and how you want it to work after migration, and you must plan accordingly.
  6. Problem Resolution Planning – Problem resolution planning is about committing to your monitoring tools and strategy as a core layer of your overall application architecture. The number of potential points of failure increases dramatically from traditional to cloud environments due to increased virtualization and dynamic scaling. In highly distributed applications you need monitoring tools that tell you exactly where problems are occurring, or you will spend too much time isolating the problem’s location. Make monitoring part of your application deployment and support culture!
  7. Process Re-alignment – Just hearing the word “process” makes me cringe and have flashbacks to the giant, bloated, slow-moving enterprise environments I used to call my workplace. The unfortunate truth is that we really do need solid processes if we want to maintain order and have any chance of managing a large environment sustainably. Many traditional IT development and operations processes need to be modified when you migrate to the cloud, so you can’t overlook this task.
  8. Application re-development – The fruits of your Architectural Change Impact Assessment work will probably precipitate some level of development work within your application. Maybe only minor tweaks are required, maybe significant portions of your code need to change, maybe this application should never have been selected as a cloud migration candidate. If you need to change the application code you need to test it all over again and measure the performance.
  9. Application Functional and Performance Testing – No surprises here, after the code has been modified to function as intended with your cloud deployment it needs to be tested. APM tools really speed up the testing process since they show you the root of your performance problems down to the line of code level. If you rely only upon the output of your application testing suite your developers will spend hours trying to figure out what code to change instead of minutes fixing the problematic code.

    Call Graph

    Know exactly where the problem is whether it is in your code or an external service.

  10. Training (New Processes and Technology) – With all of those new and modified processes, plus the new software required to support your cloud application, training is imperative. Never forget the “people” part of “people, process, technology”.

There’s a lot more that really goes into planning a cloud migration but these major tasks are the big ones in my book. Give these 10 tasks the attention they deserve and the odds will shift in your favor for a successful cloud migration. Next week we’ll talk about important work that should happen after your application gets migrated.

Cloud Migration Tips #2: We Should Use the Cloud Because…

Welcome back to my blog series on deploying applications to the cloud.

What’s the point of deploying an application to the cloud versus just hosting it in your own data center? Is it really a good idea? Will it save you money? Will it work better? Will it cause new deployment and management problems? How do you monitor it?


These are all basic questions you should ask yourself before deciding IF your new or existing application will end up in a cloud environment.

The answers might be different for each application supporting your business. Cloud is really a set of architectural patterns available to help you solve business problems with technology. If you’re considering the cloud, you’d better have a business problem you need to solve.

Here are a few business problems that would make me consider a cloud implementation for my application(s):

  • We’re out of space in our data center and most of our applications are used via the internet–should we build another data center or move some applications to public cloud providers?
  • Our new mobile application will need to scale rapidly as it becomes more popular–we have to be able to scale as needed so our customers have a good user experience.
  • We need to accelerate our time to market and make our business more agile–we don’t have time to wait for IT and all of our productivity sapping processes.

No matter what your business reasons are, you need to come up with quantifiable, measurable success criteria so you can prove out the benefits (or failure) of your cloud computing initiative. This implies you are already measuring something BEFORE you move to the cloud, so you can compare metrics before and after. Here are some example KPIs that might be applicable:

  • Time to deliver requested environment to developers
  • Number of application impact incidents
  • Infrastructure cost per application
  • Time to scale / Cost to scale Application
  • Transaction throughput
  • SLA (yeah, you really should have one of these)
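Whichever KPIs you pick, capture them before the migration and compare afterwards. A minimal sketch of that comparison, with hypothetical measurements and KPI names:

```python
def kpi_change(before, after):
    """Percent change for each KPI captured before and after migration."""
    return {k: round((after[k] - before[k]) / before[k] * 100, 1)
            for k in before}

# hypothetical before/after measurements
before = {"env_delivery_days": 14, "incidents_per_month": 9, "cost_per_app": 4200}
after = {"env_delivery_days": 2, "incidents_per_month": 6, "cost_per_app": 3100}
print(kpi_change(before, after))  # negative values are improvements here
```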

So You’ve Decided To Go For It

You’ve got your business justification nailed down and decided you really do need a cloud based application. Great! If this is a brand new application you can design it from the ground up and just deploy it, right? No!!! Remember, you need to monitor and manage this application if you stand any chance of providing a good user experience over the long haul.

“My cloud provider has all the monitoring and management tools I need.” – Wrong! Your cloud provider has basic monitoring tools that show you infrastructure metrics (CPU usage, memory usage, I/O usage, etc.). These tools don’t tell you anything about your application. Here’s what you need to know about your cloud application (at a minimum):

  • Which application nodes are in use at any given time. (Dynamic scaling, provisioning, de-provisioning will change this picture at any given time)
  • Application calls to external services, with response times and error rates. (External service calls are performance killers for cloud applications and drive up cost, as most providers charge for network traffic leaving their cloud.)
  • The response times and errors of all of your users’ business transactions. (This applies to any application architecture, but cloud deployments can experience greater variance due to factors outside the application owner’s control, such as network congestion and regional provider issues.)
  • When a problem occurs – full application call stack for code analysis. (Applies to any application architecture)
  • Host level KPIs correlated with all of the application activity. (Really important in the cloud due to host virtualization, shared resources, and multiple sizing options when you select a host to deploy. Select the wrong size by mistake and you just limited your max application performance)
  • Historic baselines for everything so you know what normal behavior looks like. (Critical to identifying problems regardless of architecture)
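That last point, historic baselines, can be approximated crudely with a mean-and-deviation check; real dynamic-baselining engines are far more sophisticated, and the sample data here is invented:

```python
import statistics

def is_anomalous(history, current, sigmas=3):
    """Flag a sample deviating more than `sigmas` standard deviations
    from its historic baseline (a crude stand-in for dynamic baselining)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(current - mean) > sigmas * stdev

response_times = [210, 190, 205, 195, 200, 198, 202, 196]  # "normal" ms samples
print(is_anomalous(response_times, 204))  # within the baseline
print(is_anomalous(response_times, 320))  # well outside it
```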

If you’re deploying a new application, you should have a really good idea of any external application dependencies (like calling a payment gateway to process credit card orders). If you are moving an existing application, there is more work to be done up front. In particular, you need to really understand your existing application dependencies. Is there a service or backend database your application relies upon that you’re not planning to move with it? If so, you can really screw up the entire cloud implementation by making a bunch of calls to a component that lives outside of your chosen cloud environment.


Modern applications have many external dependencies. You absolutely MUST know what they are before moving to the cloud.

If you’re moving an existing application, you’d better deploy a tool that can dynamically detect and display application flow maps. I’m not talking about those agentless tools that scan your hosts every day looking for network connections (those usually miss all of the short-lived service calls). I mean a solution that gives you the entire picture regardless of persistent or transient connection methodologies.
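A toy simulation shows why periodic snapshot scans miss short-lived calls; the connection lifetimes and scan schedule are invented for illustration:

```python
def snapshot_scan(connections, scan_times):
    """Agentless-style discovery: sample open connections at fixed times.
    A connection is seen only if a scan lands inside its lifetime."""
    seen = set()
    for t in scan_times:
        for name, start, end in connections:
            if start <= t <= end:
                seen.add(name)
    return seen

connections = [
    ("app -> database", 0, 86400),              # persistent, open all day
    ("app -> payment-gateway", 41_000, 41_002),  # two-second service call
]
daily_scans = [0, 21600, 43200, 64800]  # one scan every six hours
print(snapshot_scan(connections, daily_scans))  # the gateway call never appears
```

No matter how detailed each snapshot is, the transient payment-gateway call is invisible unless a scan happens to coincide with its two-second lifetime, which is exactly the gap an agent-based flow map closes.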

Since you need to monitor your existing environment anyway you might as well collect performance data and save it so that you have a good point of comparison for your “before and after” application environment (We’ll discuss this item more in a future blog post).

There are a ton of considerations when you choose to implement your application using cloud computing architecture patterns. In my next post I’ll go into more detail about the planning phase. Having all your ducks in a row before you begin the migration is critical to success.