A Guide to Performance Challenges with AWS EC2: Part 4

If you’re just starting, check out Part 1, Part 2, and Part 3 to get up to speed on this guide to the top 5 performance challenges you might come across managing AWS Elastic Compute Cloud (EC2) instances, and how best to address them. We kicked off with the ins and outs of running your virtual machines in Amazon’s cloud, how to navigate your way through a multi-tenancy environment, and how to manage different storage options. Last week, we went over how to match the right EC2 instance types to your unique workloads. This week, we’ll wrap up by handling Amazon’s Elastic Load Balancer (ELB) performance and overall AWS outages.

Poor ELB Performance

Amazon’s Elastic Load Balancer (ELB) is Amazon’s answer to load balancing, and it integrates seamlessly into the AWS ecosystem. Rather than sending calls directly to individual EC2 instances, we can insert an ELB in front of our EC2 instances, send load to the ELB, and let the ELB distribute that load across the EC2 instances. This allows us to more easily add and remove EC2 instances from our environment, and it affords us the optimization of auto-scaling groups that grow and shrink our EC2 environment based on our rules or on performance metrics. This relationship is shown in Figure 3.


Figure 3. ELB-to-EC2 Relationship

While we may think of an ELB as a stand-alone appliance, like a Cisco LocalDirector or an F5 BIG-IP, under the hood an ELB is a proprietary load-balancing application running on EC2 instances. As such, it benefits from the same elastic capabilities as your own EC2 instances, but it also suffers from the same constraints as any load balancer running in a virtual environment: namely, it must be sized appropriately. If your application receives substantial load, you need enough ELB instances to handle and distribute that load across your EC2 instances. Unfortunately (or fortunately, depending on how you look at it), you do not have visibility into the number of ELB instances or their configurations; you must rely on Amazon to manage that complexity for you.

So how do you handle that scale-up requirement for applications that receive substantial load? There are two things that you need to keep in mind:

  • Pre-warming: if you know that your application will receive substantial load or you are expecting flash traffic then you should contact Amazon and ask them to “pre-warm” the load balancer for you. Amazon will then configure the load balancer to have the appropriate capacity to handle the load that you expect.

  • Recycling ELBs: for most applications it is advisable to recycle your EC2 instances regularly to clean up memory or other clutter that might appear on a machine, but in the case of ELBs, you do not want to recycle them if at all possible. Because an ELB consists of EC2 instances that have grown, over time, to facilitate your user load, recycling them would effectively reset their capacity back to zero and force them to start over.

To detect whether you are suffering from a poor ELB configuration and need to pre-warm your environment, measure the response times of synthetic business transactions (simulated load) or response times from the client perspective (such as via JavaScript in the browser or instrumented mobile applications), and then compare the total response time with the response time reported by the application itself. The difference between the client’s response time and your application’s response time consists of both network latency and the wait/queue time of your requests in the load balancer. Historical data should help you understand network latency, but if you are seeing consistently poor latency or, even worse, your clients are receiving HTTP 503 errors reporting that the server cannot handle any more load, then you should contact Amazon and ask them to pre-warm your ELB.
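As a rough illustration of this comparison, the following Python sketch estimates the load-balancer wait component and flags when pre-warming might be worth requesting. The function names and the 200 ms threshold are my own assumptions for illustration, not part of any AWS API:

```python
# Hypothetical sketch: estimate the load-balancer wait component by
# subtracting the application response time and a historical network
# latency baseline from the end-to-end (client-observed) response time.
def elb_wait_estimate(client_ms, app_ms, baseline_network_ms):
    """Return the estimated ELB queue/wait time in milliseconds."""
    return max(0, client_ms - app_ms - baseline_network_ms)

def needs_prewarming(samples, threshold_ms=200):
    """Flag the ELB for pre-warming when the estimated wait stays
    above the threshold across all samples (threshold is an assumption)."""
    waits = [elb_wait_estimate(c, a, n) for c, a, n in samples]
    return all(w > threshold_ms for w in waits)

# Client sees 900 ms, app reports 300 ms, baseline latency 80 ms:
print(elb_wait_estimate(900, 300, 80))  # 520
```

Consistently large estimates across many samples, rather than a single spike, are what would justify the call to Amazon.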

Handling AWS Failures and Outages

Amazon failures are very infrequent, but they can and do occur. For example, AWS had an outage in its North Virginia data center (US-EAST-1) for 6-8 hours on September 20, 2015, that affected more than 20 of its services, including EC2. Many big-name clients were affected by the outage, but one notable company that managed to avoid any “significant impact” was Netflix. Netflix has created what it calls its Simian Army, a set of processes with colorful names like Chaos Monkey, Latency Monkey, and Chaos Gorilla (get the simian reference?) that regularly wreak havoc on its application and on its Amazon deployment. As a result, Netflix has built its application to handle failure, so the loss of an entire data center did not significantly impact its service.

AWS runs across more than a dozen data centers around the world (source: https://aws.amazon.com/about-aws/global-infrastructure/).


Amazon divides the world into regions, and each region maintains more than one availability zone (AZ), where each AZ represents a data center. When a data center fails, redundant data centers in the same region can take over. For services like S3 and RDS, data in a region is safely replicated between AZs so that if one fails, the data is still available. With EC2, you choose the AZs to which you deploy your applications, so it is advisable to deploy multiple instances of your applications and services across multiple AZs in your region. You are in control of how redundant your application is, but more redundancy means running more instances (in different AZs), which equates to more cost.
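To make the trade-off concrete, here is a minimal Python sketch of spreading instances round-robin across a region’s AZs; the AZ names are illustrative examples, not a recommendation:

```python
# Illustrative sketch: spread application instances evenly across the
# availability zones of a region so the loss of one AZ removes only
# roughly 1/len(azs) of your capacity.
def spread_across_azs(instance_count, azs):
    """Assign each instance index to an AZ round-robin."""
    return {i: azs[i % len(azs)] for i in range(instance_count)}

placement = spread_across_azs(6, ["us-east-1a", "us-east-1b", "us-east-1c"])
print(placement[0], placement[3])  # us-east-1a us-east-1a
```

With six instances over three AZs, an AZ failure takes out two instances while four keep serving traffic.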

Netflix’s Chaos Gorilla is a tool that simulates a full region outage, and Netflix has tested its application to ensure that it can sustain one. Cross-AZ data replication is available to you for free in AWS, but replicating data across regions is far more complicated. S3 supports cross-region replication, but at the cost of transferring data out to and in from other regions. RDS cross-region replication varies by database engine, sometimes at much higher cost. Overall, Adrian Cockcroft, the former chief architect of Netflix, tweeted that the cost to maintain active-active data replication across regions is about 25% of the total cost.

All of this is to say that resiliency and high availability are at odds with both financial costs as well as the performance overhead of data replication, but are all available to the diligent. In order to be successful at handling Amazon failures (and scheduled outages for that matter), you need to architect your application to protect against failure.


Amazon Web Services may have revolutionized computing in the cloud, but it also introduced new concerns and challenges that we need to be able to identify and respond to. This series presented five challenges that we face when managing an EC2 environment:

  • Running in a Multi-Tenancy Environment: how do we determine when our virtual machines are running on hardware shared with other virtual machines and those other virtual machines are noisy?

  • Poor Disk I/O Performance: how do we properly interpret AWS’s IOPS metric and determine when we need to opt for a higher IOPS EBS volume?

  • The Wrong Tool for the Job: how do we align our application workload with Amazon’s optimized EC2 instance types?

  • Poor ELB Performance: how do ELBs work under the hood, and how do we plan for and manage our ELBs to match expected user load?

  • Handling AWS Failures and Outages: what do we need to consider when building a resilient application that can sustain AZ or even full region failures?

Hopefully this series gave you some starting points for your own performance management exercises and helped you identify some key challenges in your own environment that may be contributing to performance issues.


A Guide to Performance Challenges with AWS EC2: Part 3

If you’re just starting, check out Part 1 and Part 2 to get up to speed on this guide to the top 5 performance challenges you might come across managing AWS Elastic Compute Cloud (EC2) instances, and how best to address them. We kicked off with the ins and outs of running your virtual machines in Amazon’s cloud, and how to navigate your way through a multi-tenancy environment along with managing different storage options. This week, we’ll discuss matching the right EC2 instance types to your unique workloads.

The Wrong Tool for the Job

It is quite common to see EC2 deployments that start simple and eventually evolve into mission-critical components of the business. While companies often want to move into the cloud, they tend to do so cautiously, initially creating sandbox deployments and moving non-critical applications to the cloud. The danger, however, is that as this sandbox environment grows into something more substantial, initial decisions about things like base AMIs and EC2 instance types are not re-evaluated over time. As a result, applications may end up running on EC2 instances that are not best suited to their workloads.

Amazon has defined a host of different EC2 instance types and it is important to choose the right one for your application’s use case. Amazon has defined instance types that provide different combinations of CPU, memory, disk, and network capabilities and are categorized as follows:

  • General Purpose
  • Compute Optimized
  • Memory Optimized
  • GPU
  • Storage Optimized

General purpose instances are good starting points that provide a balance of compute, memory, and network resources. They come in two flavors: fixed performance and burstable performance. Fixed performance instances (M3 and M4) guarantee you specific performance capacity and are good for applications and services that require consistent capacity, such as small and mid-sized databases, data processing tasks, and backend services. Burstable performance instances (T2) provide a baseline level of CPU performance with the ability to burst above that baseline for a period of time. These instances are good for applications that vary in their compute requirements, such as development environments, build servers, code repositories, low traffic websites and web applications, microservices, early production experiments, and small databases.

Compute optimized instances (C3 and C4) provide high-powered CPUs and favor compute capacity over memory and network resources; they provide the best compute/cost value. They are best for applications that require a lot of computational power, such as high-performance front-end applications, web servers, batch processes, distributed analytics, high-performance science and engineering applications, ad serving, MMO gaming, and video encoding.

Memory optimized instances (R3) are optimized for memory-intensive applications and provide the best memory (GB) / cost value. The best use cases are high-performance databases, distributed memory caches, in-memory analytics, genome assembly and analysis, larger deployments of SAP, Microsoft SharePoint, and other enterprise applications.

GPU instances (G2) are optimized for graphics and general purpose GPU computing applications and best for 3D streaming, machine learning, video encoding, and other server-side graphics or GPU compute workloads.

Storage optimized instances come in two flavors: High I/O instances (I2) and Dense-storage instances (D2). High I/O instances provide very fast SSD-backed instance storage and are optimized for very high random I/O performance; they are best for applications like NoSQL databases (Cassandra and MongoDB), scale-out transactional databases, data warehouses, Hadoop, and cluster file systems. Dense-storage instances deliver high disk throughput and provide the best disk throughput performance; they are best for Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing, distributed file systems, network file systems, log, or data-processing applications.

The best strategy for selecting an instance type is to choose the closest matching category (instance type family) from the list above, select an instance from that family, and then load test your application. Monitoring the performance of your application under load will reveal whether it is compute-bound, memory-bound, or network-bound; if you have selected the wrong instance type family, adjust accordingly. Finally, load test results will help you choose the right-sized instance within that family.
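One way to operationalize that interpretation is a simple heuristic like the following Python sketch; the 85% saturation threshold and the family mapping are assumptions for illustration, not AWS guidance:

```python
# Hedged heuristic: map the most saturated resource observed under
# load to a candidate EC2 instance family (instance families per the text).
def classify_bottleneck(cpu_pct, mem_pct, net_pct, threshold=85.0):
    """Return a suggested instance family from load-test utilization peaks."""
    families = {
        "cpu": "Compute Optimized (C3/C4)",
        "mem": "Memory Optimized (R3)",
        "net": "General Purpose (M3/M4) or enhanced networking",
    }
    usage = {"cpu": cpu_pct, "mem": mem_pct, "net": net_pct}
    worst = max(usage, key=usage.get)
    if usage[worst] < threshold:
        # Nothing is saturated: a balanced general purpose type is fine.
        return "General Purpose (M3/M4)"
    return families[worst]

print(classify_bottleneck(95, 40, 30))  # Compute Optimized (C3/C4)
```

In practice you would feed this with the peak utilization numbers captured by your monitoring tool during the load test, then validate the suggestion with another test run on the new instance type.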

A Guide to Performance Challenges with AWS EC2: Part 2

Amazon Web Services (AWS) revolutionized production application deployments through elastic scalability and an hourly payment plan. Companies can pay for the infrastructure they need at any given hour of the day, scaling up to meet high user demand during peak hours and scaling down during off-peak hours. The AWS Elastic Compute Cloud (EC2) is one of its core components. It has many of the same characteristics as traditional virtual machines, but it is tightly integrated into the AWS ecosystem, so many of its capabilities differ from a traditional virtual machine as well.

Last week, we kicked off our series on your guide to the top 5 performance challenges you might come across managing an AWS EC2 environment, and how to best address them. We started off with the ins and outs of running your virtual machines in Amazon’s cloud, and how to navigate your way through a multi-tenancy environment.  

Poor Disk I/O Performance

AWS supports several different types of storage options, the core of which include the following: 

  • EC2 Instance Store
  • Elastic Block Store (EBS)
  • Simple Storage Service (S3) 

EC2 instances can access the physical disks attached to the machine hosting the instance for temporary storage. The important thing to note about this type of storage is that it is ephemeral: it persists only for the lifetime of the EC2 instance and is destroyed when the instance stops. Data that must outlive the instance, therefore, should not be kept in the EC2 instance store.

For more common storage needs we’ll opt for either EBS or S3. From the perspective of how they are accessed, the main difference between the two is that EBS can be accessed through disk operations whereas S3 provides a RESTful API to store and retrieve objects. With respect to use cases, S3 is designed to store web-scale amounts of data whereas EBS is more akin to a hard drive. Therefore, when you need to access a block device from an application running on an EC2 instance and you need that data to persist between EC2 restarts, such as storage to support a database, you’ll typically leverage an EBS volume.

EBS volumes come in three flavors:

  • Magnetic Volumes: magnetic volumes can range in size from 1GB to 1TB and support 40-90 MB/sec throughput. They are good for workloads where data is accessed infrequently.
  • General Purpose SSD: general purpose SSDs can range in size from 1GB to 16TB and support 160 MB/sec throughput. They are good for use cases such as system boot volumes, virtual desktops, small to medium sized databases, and development and test environments.
  • Provisioned IOPS SSD: provisioned IOPS (Input/Output Operations Per Second) SSDs can range from 4GB to 16TB in size and support 320 MB/sec throughput. They are good for critical business applications that require sustained IOPS performance, or more than 10,000 IOPS (160 MB/sec), such as MongoDB, SQL Server, MySQL, PostgreSQL, and Oracle.

Choosing the correct EBS volume type is important, but it is also important to understand what these metrics mean and how they impact your EC2 instance and disk I/O operations. 

  • First, IOPS are measured in terms of a 16K I/O block size, so if your application writes a 64K block, it will consume 4 IOPS
  • Next, in order to realize the IOPS capacity, you need to send enough requests to the EBS volume to match its queue length, i.e., the number of pending operations supported by the volume
  • You must use an EC2 instance type that is EBS-optimized; the supported instance types are listed in the AWS documentation
  • The first time that you access a block from EBS, there will be approximately 50% IOPS overhead; the IOPS measurement assumes that you’ve already accessed the block at least once
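The arithmetic in the first bullet can be sketched in a few lines of Python (the helper names are mine, for illustration only):

```python
import math

IOPS_BLOCK_KB = 16  # EBS measures IOPS in 16 KB I/O units, per the text

def iops_consumed(io_size_kb):
    """IOPS consumed by a single I/O operation of the given size."""
    return math.ceil(io_size_kb / IOPS_BLOCK_KB)

def effective_iops(requests_per_sec, io_size_kb):
    """Approximate IOPS demand generated by a request rate at an I/O size."""
    return requests_per_sec * iops_consumed(io_size_kb)

# A 64 KB write consumes 4 IOPS, as the first bullet describes.
print(iops_consumed(64))  # 4
```

Note how larger application writes multiply the IOPS drawn against the volume: 100 requests/sec of 64 KB writes demand roughly 400 IOPS, not 100.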

With these constraints in mind, you can better understand the CloudWatch EBS metrics, such as VolumeReadOps and VolumeWriteOps, and how IOPS are computed. Review these metrics in light of the EBS volume type you are using to see if you are approaching a limit. If you are approaching a limit, opt for a volume that supports higher IOPS.
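As a hedged sketch of that review, the following Python converts per-period operation counts into an average IOPS figure and checks it against a volume’s supported IOPS; the 80% headroom threshold is an assumption, not an AWS recommendation:

```python
# CloudWatch reports VolumeReadOps/VolumeWriteOps as operation counts
# per sample period, so dividing by the period length yields average IOPS.
def average_iops(read_ops, write_ops, period_seconds=300):
    """Convert per-period op counts into an average IOPS figure."""
    return (read_ops + write_ops) / period_seconds

def approaching_limit(iops, supported_iops, headroom=0.8):
    """True when measured IOPS exceeds the chosen fraction of the
    volume's supported IOPS (the 0.8 threshold is an assumption)."""
    return iops > supported_iops * headroom

# 600,000 reads + 120,000 writes over a 5-minute period ≈ 2400 IOPS
print(average_iops(600_000, 120_000))  # 2400.0
```

A sustained result near the volume’s ceiling, rather than a brief burst, is the signal to move to a higher-IOPS volume type.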

Figure 1 shows the VolumeReadOps and VolumeWriteOps for an EBS volume. From this example we can see that this particular EBS volume is experiencing about 2400 IOPS.

Figure 1. Measuring EBS IOPS

A Guide to Performance Challenges with AWS EC2: Part 1

Amazon Web Services (AWS) revolutionized production application deployments through elastic scalability and an hourly payment plan. Companies can pay for the infrastructure that they need for any given hour of the day, scaling up to meet high user demand during peak hours and scaling down during off peak hours. One of the core components of AWS is the Elastic Compute Cloud (EC2), which is Amazon’s abstraction of a virtual machine running in the cloud. It has many of the same characteristics as traditional virtual machines, but it is tightly integrated into the AWS ecosystem, which means that it has many characteristics that are new or different than traditional virtual machines.

This blog kicks off a series on your guide to the top 5 performance challenges you might come across managing an AWS EC2 environment, and how to best address them.

Running in a Multi-Tenancy Environment

Amazon EC2 instances are virtual machines that run in Amazon’s cloud and, like all virtual machines, they ultimately run on physical hardware. One side effect of running virtual machines in an environment that you do not own is that you cannot control what other virtual machines run next to you on the same physical hardware and some of your neighbors may be noisier than others. Basically, the performance of EC2 instances can sometimes be spotty and you need to be able to effectively identify when your virtual machines are running on spotty hardware and react accordingly.

So, how do you know if you have noisy neighbors?

Answer: You need to review both the physical runtime characteristics of your virtual machines and Amazon’s CloudWatch metrics. You can monitor the AWS CloudWatch metrics alongside OS- and application-level metrics and correlate them using an APM tool, such as AppDynamics. When examining the runtime behavior of a virtual machine using tools like top or vmstat, you’ll observe that, in addition to returning the CPU usage and idle percentages, the operating system also returns the amount of “stolen” CPU. Stolen CPU is not as diabolical as it sounds: it is a relative measure of the CPU cycles that should have been available to run your processes but were not, because the hypervisor diverted them away from your instance. This may happen because you have met your allotted quota of CPU usage or because another instance running on the same physical hardware is occupying the available CPU capacity. A little investigation will help you discern between the two.

First, you need to know the underlying CPU powering the hardware running your virtual machine. You can connect into your machine and execute the following command to view the CPU information:

# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 62
model name      : Intel(R) Xeon(R) CPU E5-2651 v2 @ 1.80GHz
stepping        : 4
cpu MHz         : 1800.083
cache size      : 30720 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu de tsc msr pae cx8 apic cmov pat clflush acpi mmx fxsr sse sse2 ss ht nx constant_tsc up pni ssse3 popcnt
bogomips        : 3647.67
clflush size    : 64

In this example, the EC2 instance is running on a 1.8 GHz Xeon processor. Because an ECU is equivalent to a 1.0-1.2 GHz Xeon processor, 100% ECU utilization corresponds to between 55% and 66% of the physical CPU. Next, we need to look at the runtime behavior of our virtual machine, which can be accomplished with the top or vmstat command:

# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 1101516 162504 383016    0    0     0     6    1    1  0  0 100  0  0

The “us” metric reports the CPU usage, the “id” metric reports the idle percentage of the CPU, and the “st” metric reports the amount of stolen CPU. The key to identifying noisy neighbors is to observe these metrics over time, paying particular attention to the stolen CPU time when your virtual machine is idle. If you see discrepancies in the stolen time while your CPU is idle, that indicates you are sharing the hardware with other customers. For example, if your CPU is idle and the stolen CPU percentage is 10% at one point and 40% at another, then chances are you are not simply observing hypervisor activity, but rather the behavior of another virtual machine.
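That eyeball check can be sketched as a small Python helper; the idle floor and spread thresholds below are illustrative assumptions, not established cut-offs:

```python
# Sketch: flag a possible noisy neighbor when the stolen-CPU percentage
# fluctuates widely while the instance itself is essentially idle.
def noisy_neighbor_suspected(samples, idle_floor=90.0, spread=20.0):
    """samples: list of (idle_pct, stolen_pct) pairs taken from vmstat's
    'id' and 'st' columns over time. Returns True if stolen time varies
    by more than `spread` points across idle samples."""
    idle_steals = [st for idle, st in samples if idle >= idle_floor]
    if len(idle_steals) < 2:
        return False  # not enough idle samples to compare
    return max(idle_steals) - min(idle_steals) > spread

# 10% stolen in one idle sample, 40% in another: suspicious.
print(noisy_neighbor_suspected([(95, 10), (96, 40), (94, 12)]))  # True
```

Feeding this from a periodic vmstat capture (say, one sample per minute over an hour) gives a more defensible answer than a single observation.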

In addition to reviewing the operating system CPU utilization and stolen percentage, you need to cross reference this with Amazon’s CloudWatch metrics. The CloudWatch CPUUtilization is defined by Amazon as follows:

The percentage of allocated EC2 compute units that are currently in use on the instance. This metric identifies the processing power required to run an application upon a selected instance. Depending on the instance type, tools in your operating system may show a lower percentage than CloudWatch when the instance is not allocated a full processor core.

The CloudWatch CPUUtilization metric is your source of truth for understanding your virtual machine’s processing capability and, combined with the operating system’s CPU metrics, provides enough information to discern the presence of a noisy neighbor. If the CloudWatch CPUUtilization metric is at 100% for your EC2 instance, your operating system is not reporting that you’ve reached 100% of your ECU capacity (one core of your physical CPU type / 1.0-1.2 GHz), and the CPU stolen percentage is high, then you have a noisy neighbor that is draining compute capacity from your virtual machine. For example, with 1 ECU on a machine with a 2.4 GHz physical CPU, we would expect 100% ECU usage to be about 50% of the physical CPU. If the operating system reports that we’re at 30% CPU utilization with high stolen CPU usage, and CloudWatch reports that we’re at 100% usage, then we have identified a noisy neighbor.
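Putting those pieces together, a minimal Python sketch of the decision rule might look like this; the thresholds and the 1.2 GHz ECU figure are assumptions drawn from the text, and in practice the inputs would come from CloudWatch and your OS monitoring:

```python
# Sketch combining CloudWatch and OS metrics to detect a noisy neighbor.
def ecu_capacity_fraction(physical_ghz, ecu_ghz=1.2):
    """Fraction of the physical CPU corresponding to 100% of one ECU."""
    return ecu_ghz / physical_ghz

def noisy_neighbor(cloudwatch_util, os_cpu_pct, stolen_pct, physical_ghz):
    """CloudWatch says we're maxed out, yet the OS shows we're well
    under our ECU share and steal is high: neighbor suspected."""
    ecu_pct = ecu_capacity_fraction(physical_ghz) * 100
    return (cloudwatch_util >= 99.0
            and os_cpu_pct < ecu_pct
            and stolen_pct > 10.0)

# 2.4 GHz host: one ECU is about 50% of the core. CloudWatch at 100%,
# OS at 30% with 15% steal: noisy neighbor identified.
print(noisy_neighbor(100.0, 30.0, 15.0, 2.4))  # True
```

The point of the `ecu_capacity_fraction` step is exactly the discrepancy the paragraph describes: CloudWatch reports against your allotted ECUs while the OS reports against the physical core.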

Figure 1 presents an example of the CloudWatch CPU Utilization as compared to the operating system CPU utilization and stolen percentages, captured graphically over time. This illustrates the discrepancy between how CloudWatch sees your instance and how your instance sees itself.


Figure 1. CloudWatch vs OS CPU Metrics


Once you’ve identified that you have a noisy neighbor, how should you react? You have a few options, but here’s what we recommend:

  • Observe the behavior to determine if it is a systemic problem (is the neighbor constantly noisy?)
  • If it is a systemic problem, then move. While this might not be the best choice if you live in an apartment building, in Amazon it is simple: launch a new instance from your AMI, register it with your ELB, and decommission the old one. If you get into the habit of viewing EC2 instances as disposable, it will help you resolve these types of issues quickly
  • Consider implementing a rolling EC2 instance strategy. Again, building on the idea that EC2 instances are disposable, it is good practice to keep your EC2 instances around for only a short period of time, such as a few hours. Just as your laptop benefits from being restarted regularly, so will your server instances. Over time your instances may accumulate clutter in memory and, rather than work overly hard to clean them up, it is far easier to replace them with new ones. The best strategy I have seen in this space is to manage your rolling instances through your elasticity strategy. The cloud allows you to scale up when you need more capacity (peak hours) and scale down when you need less (off-peak hours). Getting into the habit of decommissioning the oldest instances when you scale down shortens the average lifespan of your EC2 instances and achieves a rolling EC2 instance strategy.
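The “retire the oldest on scale-down” habit is easy to sketch; the instance ids and timestamps below are made up for illustration, and in practice the launch times would come from the EC2 API:

```python
# Illustrative rolling-instance policy: on scale-down, retire the
# oldest instances first so the fleet's average age stays low.
def instances_to_retire(instances, scale_down_by):
    """instances: list of (instance_id, launch_timestamp) pairs.
    Returns the ids of the oldest `scale_down_by` instances."""
    oldest_first = sorted(instances, key=lambda inst: inst[1])
    return [inst_id for inst_id, _ in oldest_first[:scale_down_by]]

fleet = [("i-aaa", 1700000000), ("i-bbb", 1700003600), ("i-ccc", 1700007200)]
print(instances_to_retire(fleet, 1))  # ['i-aaa']
```

Wiring this into your auto-scaling termination logic means every off-peak scale-down doubles as a fleet refresh, with no separate maintenance window.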

Expanding Amazon Web Services Monitoring with AppDynamics

Enterprises are increasingly moving their applications to the cloud and Amazon Web Services (AWS) is the leading cloud provider. We announced expanded AWS monitoring with the AppDynamics Winter ‘16 release earlier this year. In this blog, I will provide some additional details on the expanded support of AWS monitoring.

Native Support of AWS components

Before the Winter ’16 release, only Amazon Simple Queue Service (SQS) was automatically discovered by AppDynamics Java APM agents and shown in the Application flow map with key performance metrics. For other AWS components, customers had to configure the discovery of the backends manually in AppDynamics or use the old Amazon CloudWatch Monitoring extension to get the AWS metrics and track them in the metric browser or dashboards.

In the AppDynamics Winter ’16 release, the following AWS components are natively supported by Java APM agents:

  • Amazon DynamoDB: a fast and flexible NoSQL database service
  • Amazon Simple Storage Service (S3): secure, durable, highly scalable object storage
  • Amazon Simple Notification Service (SNS): a pub-sub service for mobile and enterprise messaging

By native support, I mean automatic discovery and display of the Application flow map with key performance metrics without any manual configuration. The screenshot of AppDynamics application flow map in Figure 1 below shows an application that uses Amazon DynamoDB, S3, and SNS.


Figure 1 – Application using Amazon DynamoDB, S3, and SNS

19 New AWS Monitoring Extensions

The AppDynamics platform is highly extensible to monitor various technology solutions that are not discovered natively. In the past, we had an Amazon CloudWatch Monitoring extension that collected all the AWS metrics via the Amazon CloudWatch APIs and passed them to the AppDynamics controller, where they were tracked in the metric browser or dashboards. This extension was not very efficient because it collected and passed a lot of data for all AWS components, even if a customer only needed to monitor one or two of them.

The AppDynamics Winter ’16 release announced 19 new AWS monitoring extensions for different AWS components. These extensions still use the Amazon CloudWatch APIs, but each collects metrics for a specific AWS component and passes them to the AppDynamics controller, making tracking, health rules, and dashboard visualization more efficient.

Here is the list of all the new AWS monitoring extensions:

  1. AWS Custom Namespace Monitoring Extension
  2. AWS SQS Monitoring Extension
  3. AWS S3 Monitoring Extension
  4. AWS Lambda Monitoring Extension
  5. AWS CloudSearch Monitoring Extension
  6. AWS StorageGateway Monitoring Extension
  7. AWS SNS Monitoring Extension
  8. AWS Route53 Monitoring Extension
  9. AWS Redshift Monitoring Extension
  10. AWS ElasticMapReduce Monitoring Extension
  11. AWS RDS Monitoring Extension


Figure 2 – Amazon RDS Metrics in AppDynamics Metric Browser

  12. AWS ELB Monitoring Extension
  13. AWS OpsWorks Monitoring Extension
  14. AWS EBS Monitoring Extension


Figure 3 – Amazon EBS Metrics in AppDynamics Metric Browser

  15. AWS Billing Monitoring Extension
  16. AWS AutoScaling Monitoring Extension
  17. AWS DynamoDB Monitoring Extension
  18. AWS ElastiCache Monitoring Extension
  19. AWS EC2 Monitoring Extension

Customers can leverage all the core functionalities of AppDynamics (e.g., dynamic baselining, health rules, policies, actions, etc.) for all of these AWS metrics while correlating them with the performance of the applications using these AWS services.

AppDynamics Customers using AWS Monitoring

With AppDynamics, many of our customers have accelerated their application migration to the cloud, while others continue to monitor cloud applications as application complexity explodes with the move toward microservices and dynamic web services. As the workload on these cloud applications grew, AppDynamics came to the rescue by helping them elastically scale their AWS resources to meet the exponential demand for their applications.

For example, Nasdaq accelerated their application migration to AWS and gained complete visibility into their complex application ecosystem. Heather Abbott, Senior Vice President of Corporate Solutions Technology at Nasdaq, summarized their experience with AppDynamics: “The ability to trace a transaction visually and intuitively through the interface was a major benefit AppDynamics delivered. This visibility was especially valuable when Nasdaq was migrating a platform from its internal infrastructure to the AWS Cloud. We used AppDynamics extensively to understand how our system was functioning on AWS, a completely new platform for us.”

How AWS Views the Future of Enterprise IT [VIDEO]

Amazon Web Services pioneered cloud computing in 2008. Since then, they’ve learned a lot about what enterprises have done to meaningfully adopt the cloud to benefit their businesses. However, migrating to the cloud is often an exhausting process.  

Stephen Orban, Global Head of Enterprise Strategy at AWS, was one of the keynote speakers at AppDynamics AppSphere 2015 and spoke about the trends he’s seen and some of the lessons learned from his experiences with numerous enterprise customers. In his session, Orban discusses the pattern that has emerged, organizationally and architecturally, in enterprises that are using the cloud to meet their business objectives.

Here’s the video of his presentation:

You can view his deck below: 

AppSphere 15 – The Future of Enterprise IT from AppDynamics

Are you an existing AWS customer? Get an exclusive 60-day free trial of AppDynamics, CLICK HERE!

Top 10 Tweets Seen at AWS re:Invent

What happens in Vegas stays in Vegas, except when you tweet about it. With more than 18,000 people in attendance at Amazon Web Services’ re:Invent, you can be sure there were plenty of tweets. We highlighted a few of our favorites:

1.  That time it rained in Vegas…in October.

2. For Jim Fowler, the CIO at GE, there is no denying the future of the cloud.

3. Capital One using the cloud to redirect focus to making great apps for customers.

4. Apparently a tech conference can become a serious drinking game.

5.  AWS dropped the 47lb, 50 TB migration bomb called Snowball.

6. For many, the future of the cloud is clear.

7. The love of the cloud knows no distance.

8. Those at re:Invent discovered great technology and swag at AppDynamics booth.

9. Christmas came early for many developers with AWS’ newest releases.

10. The only thing people seem to love more than the cloud is Zedd.

Whether you’re moving to the cloud or already there, AppDynamics can help you migrate, scale and monitor your applications on the cloud: www.appdynamics.com/aws

Get Into the Cloud: AppDynamics at Amazon Web Services re:Invent


The AppDynamics and Amazon Web Services (AWS) relationship grows stronger each year, with marquee joint customers such as Nasdaq. Our Application Intelligence Platform continues to evolve, enabling enterprises to manage their cloud applications more efficiently and gain complete visibility into and control over an expanded set of AWS services, with an exclusive 60-day free trial for AWS customers.

AppDynamics at AWS re:Invent

October 6 – 9, 2015

AWS re:Invent, the Amazon Web Services annual user conference, is the Mecca for the AWS community. The AWS team has pulled together another great event in Las Vegas featuring keynote announcements, training and certification opportunities, over 250 technical sessions, a partner expo, after hours activities, and more.

AppDynamics will have expanded presence at the sold-out event.


Join us at Booth #550, during the expo hours from October 6 to 9, where we’ll showcase how we can help our customers:

  • Accelerate application migration to the cloud

  • Manage cloud applications as complexity explodes

  • Scale cloud applications as workloads grow

We’ll be handing out AWS exclusive t-shirts highlighting AppDynamics’ cloud story, with daily surprise social media giveaways.  Be sure to follow us on Twitter @AppDynamics for the latest developments on AppDynamics and AWS.

AppDynamics at AWS re:Invent

Global Partner Summit

October 6, 2015

AWS leadership will discuss the future of the business and the AWS Partner Network (APN) at the Global Partner Summit at AWS re:Invent. Join us during the following session:

Topic: Realizing the Benefits of Cloud: Cloud Migration & Application Optimization

Speaker: Tom Laszewski, Global Partner Solution Architect Senior Manager at Amazon Web Services.

Tom will discuss application optimization and highlight the value AppDynamics brings to our joint customers.

Wait.  It’s Just the Beginning.  

The AppDynamics ADVANTAGE at AppSphere 2015

November 30 – December 4, 2015

Las Vegas

We’re happy to exclusively announce that Stephen Orban, Global Head of Amazon Web Services Enterprise Strategy, will deliver the AppSphere keynote on Tuesday, December 1. Stephen will discuss the Future of Enterprise IT.

AppSphere is AppDynamics’ annual user conference — we’re focusing on enabling businesses to bring a competitive advantage to market and driving success for their organizations. Attendees will explore the future landscape of the software-defined business and gain insights from topics around SaaS at scale, mobile performance, business data analytics, working towards a DevOps structure and much more.

For AWS customers, we’re offering an exclusive $100 discount on AppSphere tickets if you register here with promo code AWSCUST. Customers who register by 11/6 with promo code AWSCUST will be entered to win a free trip to Vegas.

To learn more about the AppDynamics solution for the AWS ecosystem, please visit appdynamics.com/aws and sign up for a 60-day free trial of AppDynamics designed exclusively for AWS customers.

The AppDynamics Team and I are looking forward to sharing these customer stories, best practices and demonstrating the AppDynamics solution at re:Invent 2015 in Las Vegas.

Monitoring Amazon SQS with AppDynamics

AppDynamics recently announced support for applications running on an expanded suite of services from Amazon Web Services (AWS). As many enterprises are migrating or deploying their new applications in the AWS Cloud, it is important to have deeper insight and control over the applications and the underlying infrastructure in order to ensure they can deliver exceptional end-user experience.

AppDynamics offers the same performance monitoring, management, automated processes, and analytics for applications running on AWS that are available for applications running on-premises. With the AppDynamics Summer ’15 Release, applications deployed on AWS are now easily instrumented to provide complete visibility and control into an expanded set of AWS services, including Amazon Simple Queue Service (Amazon SQS), Amazon Simple Storage Service (Amazon S3), and Amazon DynamoDB.

In this blog, I will focus on monitoring applications that use Amazon SQS. As per the AWS web page, “Amazon SQS is a fast, reliable, scalable, fully managed message queuing service. SQS makes it simple and cost-effective to decouple the components of a cloud application. You can use SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available.”

The Amazon SQS Java Messaging Library, which is a Java Messaging Service (JMS) interface to Amazon SQS, enables you to use Amazon SQS in applications that already use JMS.

Message queues in SQS can be created either manually, or via the SQS Java Messaging Library and AWS Java SDK, and messages can be sent to and received from the queues in various ways for different use cases.
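As an illustrative sketch (not from the original post), the basic and batched send/receive use cases might look like the following with the AWS SDK for Java. The queue name is hypothetical, and the client assumes AWS credentials and a default region are available from the environment:

```java
import java.util.List;

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SendMessageBatchRequest;
import com.amazonaws.services.sqs.model.SendMessageBatchRequestEntry;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class SqsDemo {
    public static void main(String[] args) {
        AmazonSQS sqs = new AmazonSQSClient();

        // Create the queue (idempotent for an existing queue with the same name)
        // and capture its URL, which all subsequent calls require.
        String queueUrl =
                sqs.createQueue(new CreateQueueRequest("demo-queue")).getQueueUrl();

        // Basic send: one message per request.
        sqs.sendMessage(new SendMessageRequest(queueUrl, "hello from SQS"));

        // Batched send: up to 10 messages per request, each with a caller-assigned id.
        sqs.sendMessageBatch(new SendMessageBatchRequest(queueUrl).withEntries(
                new SendMessageBatchRequestEntry("msg-1", "first"),
                new SendMessageBatchRequestEntry("msg-2", "second")));

        // Basic receive: poll the queue and print each message body.
        List<Message> messages =
                sqs.receiveMessage(new ReceiveMessageRequest(queueUrl)).getMessages();
        for (Message m : messages) {
            System.out.println(m.getBody());
        }
    }
}
```

The async use case follows the same request/response shapes via the SDK’s asynchronous SQS client, whose operations return futures instead of blocking.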

Here is an application flow map in AppDynamics for a sample application using Amazon SQS for the following three use cases:

  • Basic send/receive

  • Batched send/receive

  • Async send/receive




AppDynamics supports all exit points for Amazon SQS out of the box. Each exit point is treated exactly like JMS, .NET messaging, etc. for all of the use cases outlined above.

At this time, the entry point to Amazon SQS is supported only as part of a continuing transaction. For example, if a transaction originates at some tier “foo” and continues, via an exit through an SQS queue, to a downstream tier “bar”, the transaction may continue on “bar” given the appropriate configuration. The user must specify a custom-interceptors.xml configuration file to apply the special SQS entry-point interceptor to a given method and to configure where to obtain the correlation header.

My colleague Anthony Kilman shared the following example for the case in which a downstream application processes messages received from an SQS queue:

public abstract class ASQSConsumer extends ASQSActor {

    protected void processMessage(Message message) {
        log.info("Message");
        log.info("MessageId: " + message.getMessageId());
        log.info("ReceiptHandle: " + message.getReceiptHandle());
        log.info("MD5OfBody: " + message.getMD5OfBody());
        log.info("Body: " + message.getBody());
        for (Map.Entry<String, String> entry : message.getAttributes().entrySet()) {
            log.info("Attribute");
            log.info("Name: " + entry.getKey());
            log.info("Value: " + entry.getValue());
        }
        Map<String, MessageAttributeValue> messageAttributes = message.getMessageAttributes();
        log.info("message attributes: " + messageAttributes);
    }
}


Then, the configuration to continue the transaction would be as follows:




    <match-class type="matches-class">

        <name filter-type="equals">aws.sqs.test.ASQSConsumer</name>

    </match-class>

    <configuration type="param" param-index="0" operation="getter-chain" operation-config="this"/>

This configuration will result in a snapshot like the following:




To learn more about cloud application performance monitoring and AWS cloud, please go to http://www.appdynamics.com/aws/.

Read our complimentary white paper, Managing the Performance of Cloud-Based Applications.


The Enterprise is Ready for the Cloud …

Recently, we here at AppDynamics have seen two major transformations in adoption of the public cloud. First is the adoption of the cloud for production workloads, and second is adoption of the cloud by large enterprises.

Transformation 1: From Dev/Test/QA to Production Workloads

Because dev/test cycles for applications have different capacity requirements at different times, the on-demand computing resources available through the cloud are ideally suited to serve these elastic requirements. But recently we’ve seen a significant transition by customers to run production workloads on AWS and other public clouds. Perhaps there was a trust element in the cloud that has taken some time to solidify. I suppose relentless price cuts in cloud don’t hurt either. But it seems clear that enterprises are embracing the cloud for production workloads.

Transformation 2: From Startups to Enterprises

First adopters of the cloud were primarily startups who did not want to (or could not) lay out the capital investment necessary to stand up their own datacenters. The cloud has afforded many startups the computing capacity, storage, and resources of a major datacenter without the upfront investment. This on-demand computing power broke down many barriers and enabled significant innovation. Now, the late majority — enterprises — are catching on and signing up for the cloud en masse. Not only are these enterprises adopting AWS for new development and innovation, but they are migrating existing applications from on-premises data centers to AWS — and using AppDynamics to help gather valuable pre- and post-migration data about their applications.

Expanded AppDynamics Support for AWS

In support of these transformations, AppDynamics is making additional investments in AWS. First, we have released new capabilities to support additional AWS native services such as Amazon DynamoDB, Amazon SQS, and Amazon S3, adding to our existing support of Amazon EC2, Amazon CloudWatch, and Amazon RDS. AppDynamics now monitors more AWS native services, providing even greater visibility and control for your applications running on the AWS Cloud.

Second, because we want our customers and potential customers to be successful migrating to AWS, we have launched a special 60-day trial to help customers migrate on-premises production workloads to AWS. Using AppDynamics to instrument on-premises workloads as part of a pre-migration assessment, customers are able to draw accurate, real-time topology maps of their applications, and benchmark the performance of the application in its on-premises state prior to fork-lifting it to the cloud. This visibility gives the enterprise a clear picture of what components can and should be migrated, as well as providing demonstrable data about the actual performance of the existing application. With our unique “compare release” function, customers can visualize on a single screen the pre-migration and post-migration application architecture, as well as the performance of key application transactions.

Try AppDynamics for AWS and we are certain you will see, as Nasdaq OMX and other enterprise customers have, that the visibility provided by AppDynamics is especially valuable when migrating a platform from your internal infrastructure to the AWS Cloud.