A Guide to Performance Challenges with AWS EC2: Part 4

If you’re just starting, check out Part 1, Part 2, and Part 3 to get up to speed on your guide to the top 5 performance challenges you might come across managing an AWS Elastic Compute Cloud (EC2) environment, and how best to address them. We kicked off with the ins and outs of running your virtual machines in Amazon’s cloud and how to navigate your way through a multi-tenancy environment, along with managing different storage options. Last week, we went over how to identify the right EC2 instance types for your unique workloads. This week, we’ll wrap up by handling Amazon’s Elastic Load Balancer (ELB) performance and overall AWS outages.

Poor ELB Performance

Amazon’s Elastic Load Balancer (ELB) is Amazon’s answer to load balancing, and it integrates seamlessly into the AWS ecosystem. Rather than sending calls directly to individual EC2 instances, we can insert an ELB in front of our EC2 instances, send load to the ELB, and let the ELB distribute that load across the EC2 instances. This allows us to more easily add and remove EC2 instances from our environment, and it lets us leverage auto-scaling groups that grow and shrink our EC2 environment based on our rules or performance metrics. This relationship is shown in Figure 3.


Figure 3. ELB-to-EC2 Relationship

While we may think of an ELB as a stand-alone appliance, like a Cisco LocalDirector or F5 BigIP, under the hood ELB is a proprietary load balancing application running on EC2 instances. As such, it benefits from the same elastic capabilities as your own EC2 instances, but it also suffers from the same constraints as any load balancer running in a virtual environment, namely that it must be sized appropriately. If your application receives substantial load, then you need enough ELB instances to handle and distribute that load across your EC2 instances. Unfortunately (or fortunately, depending on how you look at it), you do not have visibility into the number of ELB instances or their configurations; you must rely on Amazon to manage that complexity for you.

So how do you handle that scale-up requirement for applications that receive substantial load? There are two things that you need to keep in mind:

  • Pre-warming: if you know that your application will receive substantial load or you are expecting flash traffic then you should contact Amazon and ask them to “pre-warm” the load balancer for you. Amazon will then configure the load balancer to have the appropriate capacity to handle the load that you expect.

  • Recycling ELBs: for most applications it is advisable to recycle your EC2 instances regularly to clean up memory and other clutter that accumulates on a machine, but in the case of ELBs, avoid recycling them if at all possible. Because an ELB consists of EC2 instances that have grown, over time, to facilitate your user load, recycling them would effectively reset their capacity back to zero and force them to start over.

To detect whether you are suffering from a poor ELB configuration and need to pre-warm your environment, measure the response times of synthetic business transactions (simulated load) or response times from the client perspective (such as via JavaScript in the browser or instrumented mobile applications), and then compare the total response time with the response time reported by the application itself. The difference between the client’s response time and your application’s response time consists of both network latency and the wait / queue time of your requests in the load balancer. Historical data should help you understand network latency, but if you are seeing consistently poor latency or, even worse, your clients are receiving HTTP 503 errors reporting that the server cannot handle any more load, then you should contact Amazon and ask them to pre-warm your ELB.
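
As a rough illustration, here is a minimal Python sketch of that comparison. It assumes a hypothetical health-check endpoint and a hypothetical X-App-Time-Ms response header through which the application reports its own processing time; adapt both to your environment:

import requests

URL = "https://my-elb.example.com/health"  # hypothetical endpoint behind the ELB

def probe(url):
    resp = requests.get(url, timeout=10)
    if resp.status_code == 503:
        print("HTTP 503: the ELB or backend is shedding load; consider pre-warming")
        return
    client_ms = resp.elapsed.total_seconds() * 1000           # client-observed round trip
    app_ms = float(resp.headers.get("X-App-Time-Ms", "nan"))  # app-reported processing time
    # The gap is network latency plus time spent queued in the load balancer.
    print("client=%.0fms app=%.0fms gap=%.0fms" % (client_ms, app_ms, client_ms - app_ms))

if __name__ == "__main__":
    probe(URL)

Run such a probe on a schedule and compare the gap against your historical network latency; a steadily growing gap, or a rising rate of 503s, is your cue to contact Amazon.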

Handling AWS Failures and Outages

Amazon failures are very infrequent, but they can and do occur. For example, AWS had an outage in its Northern Virginia data center (US-EAST-1) for 6-8 hours on September 20, 2015 that affected more than 20 of its services, including EC2. Many big-name clients were affected by the outage, but one notable company that managed to avoid any “significant impact” was Netflix. Netflix has created what it calls its Simian Army, a set of processes with colorful names like Chaos Monkey, Latency Monkey, and Chaos Gorilla (get the simian reference?) that regularly wreak havoc on its application and on its Amazon deployment. As a result, Netflix has built its application to handle failure, so losing an entire data center did not significantly impact the service.

AWS runs more than a dozen data centers around the world:

Source: https://aws.amazon.com/about-aws/global-infrastructure/


Amazon divides the world into regions, and each region maintains more than one availability zone (AZ), where each AZ represents a data center. When a data center fails, there are redundant data centers in the same region that can take over. For services like S3 and RDS, the data in a region is safely replicated between AZs so that if one fails, the data is still available. With respect to EC2, you choose the AZs to which you deploy your applications, so it is advisable to deploy multiple instances of your applications and services across multiple AZs in your region. This puts you in control of how redundant your application is, but it also means that you need to run more instances (in different AZs), which equates to more cost.
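
As a sketch of what multi-AZ redundancy looks like in practice, the following boto3 snippet (the AMI ID and AZ names are hypothetical) launches one copy of a service per availability zone:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical AMI and AZ names; substitute your own.
for az in ["us-east-1a", "us-east-1b", "us-east-1c"]:
    ec2.run_instances(
        ImageId="ami-12345678",
        InstanceType="m4.large",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": az},  # pin each copy to a different AZ
    )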

Netflix’s Chaos Gorilla is a tool that simulates a full region outage, and Netflix has tested its application to ensure that it can sustain one. Cross-AZ data replication is available to you for free in AWS, but if you want to replicate data across regions, the problem is far more complicated. S3 supports cross-region replication, but at the cost of transferring data out to and in from other regions. RDS cross-region replication varies by database engine, sometimes at much higher cost. Overall, Adrian Cockcroft, the former chief architect of Netflix, tweeted that the cost to maintain active-active data replication across regions is about 25% of the total cost.
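
For S3 specifically, cross-region replication is a bucket-level configuration. A minimal sketch, assuming both buckets already exist with versioning enabled and that an IAM replication role is in place (all names hypothetical):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
s3.put_bucket_replication(
    Bucket="my-source-bucket",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication",  # hypothetical role
        "Rules": [{
            "ID": "replicate-everything",
            "Prefix": "",            # replicate all objects
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
        }],
    },
)

Remember that every replicated object incurs cross-region data transfer charges, which is where the cost figures above come from.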

All of this is to say that resiliency and high availability come with both financial costs and the performance overhead of data replication, but both are attainable for the diligent. To be successful at handling Amazon failures (and scheduled outages, for that matter), you need to architect your application to protect against failure.

Conclusion

Amazon Web Services may have revolutionized computing in the cloud, but it also introduced new concerns and challenges that we need to be able to identify and respond to. This series presented five challenges that we face when managing an EC2 environment:

  • Running in a Multi-Tenancy Environment: how do we determine when our virtual machines are running on hardware shared with other, noisier virtual machines?

  • Poor Disk I/O Performance: how do we properly interpret AWS’s IOPS metric and determine when we need to opt for a higher IOPS EBS volume?

  • The Wrong Tool for the Job: how do we align our application workload with Amazon’s optimized EC2 instance types?

  • Poor ELB Performance: how do ELBs work under the hood, and how do we plan for and manage our ELBs to match expected user load?

  • Handling AWS Failures and Outages: what do we need to consider when building a resilient application that can sustain AZ or even full region failures?

Hopefully this series gave you some starting points for your own performance management exercises and helped you identify some key challenges in your own environment that may be contributing to performance issues.


A Guide to Performance Challenges with AWS EC2: Part 3

If you’re just starting, check out Part 1 and Part 2 to get up to speed on your guide to the top 5 performance challenges you might come across managing an AWS Elastic Compute Cloud (EC2) environment, and how best to address them. We kicked off with the ins and outs of running your virtual machines in Amazon’s cloud and how to navigate your way through a multi-tenancy environment, along with managing different storage options. This week, we’ll discuss identifying the right EC2 instance types for your unique workloads.

The Wrong Tool for the Job

It is quite common to see EC2 deployments that start simple and eventually evolve into mission-critical components of the business. While companies often want to move into the cloud, they tend to do so cautiously, initially creating sandbox deployments and moving non-critical applications to the cloud. The danger, however, is that as this sandbox environment grows into something more substantial, initial decisions about things like base AMIs and EC2 instance types are not re-evaluated over time. As a result, applications may end up running on EC2 instance types that are not necessarily best suited to their workloads.

Amazon has defined a host of different EC2 instance types, and it is important to choose the right one for your application’s use case. These instance types provide different combinations of CPU, memory, disk, and network capability and are categorized as follows:

  • General Purpose
  • Compute Optimized
  • Memory Optimized
  • GPU
  • Storage Optimized

General purpose instances are good starting points that provide a balance of compute, memory, and network resources. They come in two flavors: fixed performance and burstable performance. Fixed performance instances (M3 and M4) guarantee you specific performance capacity and are good for applications and services that require consistent capacity, such as small and mid-sized databases, data processing tasks, and backend services. Burstable performance instances (T2) provide a baseline level of CPU performance with the ability to burst above that baseline for a period of time. These instances are good for applications that vary in their compute requirements, such as development environments, build servers, code repositories, low traffic websites and web applications, microservices, early production experiments, and small databases.
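
If you are considering a burstable T2 instance, CloudWatch’s CPUCreditBalance metric shows whether burst capacity actually matches your workload. A minimal boto3 sketch, assuming a hypothetical instance ID:

import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch", region_name="us-east-1")
stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",  # T2 burst-credit metric
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
# A balance that trends toward zero means the instance bursts more than it
# earns credits; a fixed performance instance is likely a better fit.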

Compute optimized instances (C3 and C4) provide high-powered CPUs and favor compute capacity over memory and network resources; they provide the best compute/cost value. They are best for applications that require a lot of computational power, such as high-performance front-end applications, web servers, batch processes, distributed analytics, high-performance science and engineering applications, ad serving, MMO gaming, and video encoding.

Memory optimized instances (R3) are optimized for memory-intensive applications and provide the best memory (GB) / cost value. The best use cases are high-performance databases, distributed memory caches, in-memory analytics, genome assembly and analysis, larger deployments of SAP, Microsoft SharePoint, and other enterprise applications.

GPU instances (G2) are optimized for graphics and general purpose GPU computing applications and best for 3D streaming, machine learning, video encoding, and other server-side graphics or GPU compute workloads.

Storage optimized instances come in two flavors: High I/O instances (I2) and Dense-storage instances (D2). High I/O instances provide very fast SSD-backed instance storage and are optimized for very high random I/O performance; they are best for applications like NoSQL databases (Cassandra and MongoDB), scale-out transactional databases, data warehouses, Hadoop, and cluster file systems. Dense-storage instances deliver high disk throughput and provide the best disk throughput performance; they are best for Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing, distributed file systems, network file systems, and log- or data-processing applications.

The best strategy for selecting an instance type is to choose the closest matching category (instance type family) from the list above, select an instance from that family, and then load test your application. Monitoring the performance of your application under load will reveal whether it is compute-bound, memory-bound, or network-bound; if you have selected the wrong instance type family, adjust accordingly. Finally, the load test results will help you choose the right sized instance within that family.
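
To make that decision concrete, here is an illustrative helper that maps load test observations to the instance family to try next; the thresholds are examples of this heuristic, not AWS guidance:

def suggest_family(cpu_pct, mem_pct, net_pct):
    """Given peak utilization percentages from a load test, suggest a family."""
    if cpu_pct > 80 and cpu_pct >= max(mem_pct, net_pct):
        return "compute optimized (C3/C4)"
    if mem_pct > 80 and mem_pct >= max(cpu_pct, net_pct):
        return "memory optimized (R3)"
    if net_pct > 80:
        return "a larger size in the current family (more network capacity)"
    return "general purpose (M3/M4, or T2 if the load is spiky)"

print(suggest_family(cpu_pct=92, mem_pct=40, net_pct=25))  # compute optimized (C3/C4)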

A Guide to Performance Challenges with AWS EC2: Part 2

Amazon Web Services (AWS) revolutionizes production application deployments through elastic scalability and an hourly payment plan. Companies can pay for the infrastructure that they need for any given hour of the day, scaling up to meet high user demand during peak hours and scaling down during off-peak hours. The AWS Elastic Compute Cloud (EC2) is one of its core components. It has many of the same characteristics as traditional virtual machines, but it is tightly integrated into the AWS ecosystem, so many of its capabilities differ from those of a traditional virtual machine as well.

Last week, we kicked off our series on your guide to the top 5 performance challenges you might come across managing an AWS EC2 environment, and how best to address them. We started with the ins and outs of running your virtual machines in Amazon’s cloud and how to navigate your way through a multi-tenancy environment.

Poor Disk I/O Performance

AWS supports several different storage options, the core of which include the following:

  • EC2 Instance Store
  • Elastic Block Store (EBS)
  • Simple Storage Service (S3) 

EC2 instances can access the physical disks attached to the machine hosting the EC2 instance and use them for temporary storage. The important thing to note about this type of storage is that it is ephemeral, meaning that it only persists for the duration of the EC2 instance and is destroyed when the EC2 instance stops. Data that needs to outlive the instance, therefore, should never be kept solely in an EC2 instance store.

For more common storage needs we’ll opt for either EBS or S3. From the perspective of how they are accessed, the main difference between the two is that EBS can be accessed through disk operations whereas S3 provides a RESTful API to store and retrieve objects. With respect to use cases, S3 is designed to store web-scale amounts of data whereas EBS is more akin to a hard drive. Therefore, when you need to access a block device from an application running on an EC2 instance and you need that data to persist between EC2 restarts, such as storage to support a database, you’ll typically leverage an EBS volume.
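
The access-pattern difference is easy to see in code. A brief sketch (the bucket name, key, and mount point are hypothetical): S3 is reached through its API, while an attached EBS volume is ordinary file I/O once the device is formatted and mounted:

import boto3

# S3: object storage accessed through an API.
s3 = boto3.client("s3")
s3.put_object(Bucket="my-app-bucket", Key="reports/2016-01.json", Body=b"{}")
print(s3.get_object(Bucket="my-app-bucket", Key="reports/2016-01.json")["Body"].read())

# EBS: a block device. After attaching, formatting, and mounting it
# (e.g., /dev/xvdf mounted at /data), it is just the filesystem:
with open("/data/reports/2016-01.json", "wb") as f:
    f.write(b"{}")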

EBS volumes come in three flavors:

  • Magnetic Volumes: magnetic volumes can range in size from 1GB to 1TB and support 40-90 MB/sec throughput. They are good for workloads where data is accessed infrequently.
  • General Purpose SSD: general purpose SSDs can range in size from 1GB to 16TB and support 160 MB/sec throughput. They are good for use cases such as system boot volumes, virtual desktops, small to medium sized databases, and development and test environments.
  • Provisioned IOPS SSD: provisioned IOPS (Input/Output Per Second) SSDs can range from 4GB to 16TB in size and support 320 MB/sec throughput. They are good for critical business applications that require sustained IOPS performance, or more than 10,000 IOPS (160 MB/sec), such as MongoDB, SQL Server, MySQL, PostgreSQL, and Oracle.

Choosing the correct EBS volume type is important, but it is also important to understand what these metrics mean and how they impact your EC2 instance and disk I/O operations. 

  • First, IOPS are measured in terms of a 16K I/O block size, so if your application is writing 64K then it will use 4 IOPS (see the sketch after this list)
  • Next, in order to realize the IOPS capacity, you need to send enough requests to the EBS volume to match its queue length, or number of pending operations supported by the volume
  • You must use an instance type that is EBS-optimized; the supported instance types are listed in the AWS documentation
  • The first time that you access a block from EBS, there will be approximately 50% IOPS overhead. The IOPS measurement assumes that you’ve already accessed the block at least once
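
To illustrate the first point above, a few lines of arithmetic show how application I/O sizes translate into consumed IOPS under the 16K accounting rule:

import math

IO_BLOCK_KB = 16  # size of one counted I/O operation

def iops_consumed(io_size_kb):
    """Number of IOPS one application I/O of the given size consumes."""
    return max(1, int(math.ceil(io_size_kb / float(IO_BLOCK_KB))))

print(iops_consumed(64))  # 4: a 64K write counts as four operations
print(iops_consumed(4))   # 1: small writes still cost a whole operation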

With these constraints you can better understand the CloudWatch EBS metrics, such as VolumeReadOps and VolumeWriteOps, and how IOPS are computed. Review these metrics in light of the EBS volume type that you are using and see if you are approaching a limit. If you are approaching a limit then you will want to opt for a higher IOPS supported volume.
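
As a sketch of how you might derive IOPS from these metrics with boto3 (the volume ID is hypothetical): VolumeReadOps and VolumeWriteOps report total operations per sampling period, so dividing by the period length yields operations per second:

import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch", region_name="us-east-1")

def avg_iops(volume_id, metric, period=300):
    stats = cw.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric,  # VolumeReadOps or VolumeWriteOps
        Dimensions=[{"Name": "VolumeId", "Value": volume_id}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=period,
        Statistics=["Sum"],
    )
    points = stats["Datapoints"]
    if not points:
        return 0.0
    # Each datapoint is a total for `period` seconds; convert to ops/sec.
    return sum(p["Sum"] for p in points) / (len(points) * float(period))

vol = "vol-0123456789abcdef0"  # hypothetical volume ID
print(avg_iops(vol, "VolumeReadOps") + avg_iops(vol, "VolumeWriteOps"))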

Figure 1 shows the VolumeReadOps and VolumeWriteOps for an EBS volume. From this example we can see that this particular EBS volume is experiencing about 2400 IOPS.

Figure 1. Measuring EBS IOPS

A Guide to Performance Challenges with AWS EC2: Part 1

Amazon Web Services (AWS) revolutionized production application deployments through elastic scalability and an hourly payment plan. Companies can pay for the infrastructure that they need for any given hour of the day, scaling up to meet high user demand during peak hours and scaling down during off-peak hours. One of the core components of AWS is the Elastic Compute Cloud (EC2), which is Amazon’s abstraction of a virtual machine running in the cloud. It has many of the same characteristics as traditional virtual machines, but it is tightly integrated into the AWS ecosystem, which means that it has many characteristics that are new or different from those of traditional virtual machines.

This blog kicks off a series on your guide to the top 5 performance challenges you might come across managing an AWS EC2 environment, and how to best address them.

Running in a Multi-Tenancy Environment

Amazon EC2 instances are virtual machines that run in Amazon’s cloud and, like all virtual machines, they ultimately run on physical hardware. One side effect of running virtual machines in an environment that you do not own is that you cannot control which other virtual machines run next to you on the same physical hardware, and some of your neighbors may be noisier than others. In short, the performance of EC2 instances can sometimes be inconsistent, and you need to be able to identify when your virtual machines are running on contended hardware and react accordingly.

So, how do you know if you have noisy neighbors?

Answer: You need to review both the physical runtime characteristics of your virtual machines and Amazon’s CloudWatch metrics. You can monitor the AWS CloudWatch metrics alongside OS- and application-level metrics and correlate them using an APM tool, such as AppDynamics. When examining the runtime behavior of a virtual machine using tools like top or vmstat, you’ll observe that, in addition to returning the CPU usage and idle percentages, the operating system also returns the amount of “stolen” CPU. Stolen CPU is not as diabolical as it sounds: it is a relative measure of the CPU cycles that should have been available to run your processes but were not, because the hypervisor diverted them away from your instance. This may have happened because you have met your allotted quota of CPU usage or because another instance running on the same physical hardware is occupying the available CPU capacity. A little investigation will help you discern between the two.

First, you need to know the underlying CPU powering the hardware running your virtual machine. You can connect into your machine and execute the following command to view the CPU information:

# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2651 v2 @ 1.80GHz
stepping : 4
cpu MHz : 1800.083
cache size : 30720 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu de tsc msr pae cx8 apic cmov pat clflush acpi mmx fxsr sse sse2 ss ht nx constant_tsc up pni ssse3 popcnt
bogomips : 3647.67
clflush size : 64

In this example, the EC2 instance is running on a 1.8 GHz Xeon processor. Because an ECU is equivalent to a 1.0-1.2 GHz Xeon processor, 100% ECU utilization would correspond to between 55% and 66% of the physical CPU usage. Next, we need to look at the runtime behavior of our virtual machine, which can be accomplished using the top or vmstat command:

# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
0  0      0 1101516 162504 383016    0    0     0     6    1    1  0  0 100  0  0

The “us” metric reports the CPU usage, the “id” metric reports the idle percentage of the CPU, and the “st” metric reports the amount of stolen CPU. The key to identifying noisy neighbors is to observe these metrics over time, paying particular attention to the stolen CPU time when your virtual machine is idle. Discrepancies in the stolen time while your CPU is idle indicate that you are sharing the hardware with other customers. For example, if the stolen CPU percentage is 10% at one point while your CPU is idle and 40% at another, chances are that you are not simply observing hypervisor activity, but rather the behavior of another virtual machine.
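
Here is a small sketch of that sampling approach, parsing the st column from repeated vmstat reports (column positions vary between vmstat versions, so treat it as illustrative):

import subprocess

def sample_steal(interval=5, count=12):
    """Collect the stolen-CPU percentage once per interval using vmstat."""
    out = subprocess.check_output(
        ["vmstat", str(interval), str(count)], universal_newlines=True
    ).splitlines()
    samples = []
    for line in out:
        fields = line.split()
        if fields and fields[0].isdigit():   # data rows start with the "r" count
            samples.append(int(fields[-1]))  # "st" is the last column here
    return samples

steals = sample_steal()
print("stolen CPU %% samples: %s" % steals)
if steals and max(steals) - min(steals) > 20:
    print("large swings in stolen time: likely a noisy neighbor on shared hardware")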

In addition to reviewing the operating system’s CPU utilization and stolen percentage, you need to cross-reference this with Amazon’s CloudWatch metrics. The CloudWatch CPUUtilization metric is defined by Amazon as follows:

The percentage of allocated EC2 compute units that are currently in use on the instance. This metric identifies the processing power required to run an application upon a selected instance. Depending on the instance type, tools in your operating system may show a lower percentage than CloudWatch when the instance is not allocated a full processor core.

The CloudWatch CPUUtilization metric is your source of truth for truly understanding your virtual machine’s processing capability and, combined with the operating system’s CPU metrics, can provide you with enough information to discern the presence of a noisy neighbor. If the CloudWatch CPUUtilization metric is at 100% for your EC2 instance, your operating system is not reporting that you’ve reached 100% of your ECU capacity (one core of your physical CPU type / 1.0-1.2 GHz), and the CPU stolen percentage is high, then you have a noisy neighbor that is draining compute capacity from your virtual machine. For example, with 1 ECU on a machine with a 2.4 GHz physical CPU, we would expect 100% ECU usage to be about 50% of the physical CPU. If the operating system reports that we’re at 30% CPU utilization with a high stolen CPU percentage and CloudWatch reports that we’re at 100% usage, then we have identified a noisy neighbor.
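
That decision rule can be written down directly. A sketch, where ecu_equiv_pct is what 100% of your ECU allocation maps to on the host CPU (about 50 for 1 ECU on a 2.4 GHz Xeon) and the thresholds are illustrative:

def noisy_neighbor(cloudwatch_pct, os_cpu_pct, steal_pct, ecu_equiv_pct):
    maxed_in_cloudwatch = cloudwatch_pct >= 99.0           # CloudWatch says we are at capacity
    below_ecu_ceiling = os_cpu_pct < 0.8 * ecu_equiv_pct   # OS says we are well under quota
    return maxed_in_cloudwatch and below_ecu_ceiling and steal_pct > 10.0

# CloudWatch at 100%, OS at 30% with 25% steal against a ~50% ceiling:
print(noisy_neighbor(100.0, 30.0, 25.0, 50.0))  # True: noisy neighbor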

Figure 1 presents an example of the CloudWatch CPU Utilization as compared to the operating system CPU utilization and stolen percentages, captured graphically over time. This illustrates the discrepancy between how CloudWatch sees your instance and how your instance sees itself.


Figure 1. CloudWatch vs OS CPU Metrics


Once you’ve identified that you have a noisy neighbor, how should you react? You have a few options, but here’s what we recommend:

  • Observe the behavior to determine if it is a systemic problem (is the neighbor constantly noisy?)
  • If it is a systemic problem, then move. While this might not be the best choice if you live in an apartment building, in Amazon it is simple: launch a new instance from your AMI, register it with your ELB, and decommission the old one. If you get into the habit of viewing EC2 instances as disposable, it will help you resolve these types of issues quickly
  • Consider implementing a rolling EC2 instance strategy. Again, building on the idea that EC2 instances are disposable, it is a good practice to only keep your EC2 instances around for a short period of time, such as a few hours. Just as your laptop benefits from being restarted regularly, so will your server instances. Over time your instances may accumulate clutter in memory and, rather than work overly hard to clean it up, it is far easier to replace them with new ones. The best strategy that I have seen in this space is to manage your rolling instances through your elasticity strategy. The cloud allows you to scale up when you need more capacity (peak hours) and scale down when you need less capacity (off-peak hours). Getting into the habit of decommissioning the oldest instances when you scale down shortens the average lifespan of your EC2 instances and achieves a rolling EC2 instance strategy, as the sketch below shows.
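
If your instances already sit behind an Auto Scaling group, one way to approximate this rolling strategy is to have scale-in events retire the oldest instances first. A sketch with boto3, assuming a hypothetical group name:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",
    # Terminate the longest-running instances first when scaling in,
    # capping the average lifespan of any one EC2 instance.
    TerminationPolicies=["OldestInstance"],
)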