APM vs aaNPM – Cutting Through the Marketing BS

Marketing; mixed feelings! I’m really looking forward to the new Super Bowl ads (some are pure marketing genius) in a few weeks, but I really dislike all of the confusion that marketing tends to create in the technology world. In today’s blog post I’m going to attempt to cut through all of the marketing BS and clearly articulate the differences between APM (Application Performance Management) and aaNPM (Application Aware Network Performance Management). I’m not going to try to convince you that you need one instead of the other. This isn’t a sales pitch, it’s an education session.

Definitions

APM – Gartner has been hard at work trying to clearly define the APM space. According to Gartner, APM tools should contain the following 5 dimensions:

  • End-user experience monitoring (EUM) – how the application is performing according to what the end user sees
  • Runtime application architecture discovery, modeling and display – a view of the logical application flow at any given point in time
  • User-defined transaction profiling – tracking user activity across the entire application
  • Component deep-dive monitoring in application context – application code execution, SQL query execution, message queue behavior, etc.
  • Analytics

aaNPM – Again, Gartner has created a definition for this market segment, which you can read about here:

“These solutions allow passive packet capture of network traffic and must include the following features, in addition to packet capture technology:

  • Receive and process one or more of these flow-based data sources: NetFlow, sFlow and Internet Protocol Flow Information Export (IPFIX).
  • Provide roll-ups and dashboards of collected data into business-relevant views, consisting of application-centric performance displays.
  • Monitor performance in an always-on state, and generate alarms based on manual or automatically generated thresholds.
  • Offer protocol analysis capabilities to decode and understand multiple applications, including voice, video, HTTP and database protocols. The tool must provide end-user experience information for these applications.
  • Have the ability to decrypt encrypted traffic if the proper keys are provided to the solution.”

Reality

So what do these definitions mean in real life?

APM tools are typically used by application support, operations, and development teams to rapidly identify, isolate, and repair application issues. These teams usually have a high-level understanding of how networks operate, but not nearly the detailed knowledge required to resolve network-related issues. They live and breathe application code, integration points, and component (server, OS, VM, JVM, etc.) metrics. They call in the network team when they think there is a network issue (hopefully based upon some high-level network indicators). These teams usually have no control over the network and must follow the network team’s process to get a physical device connected to the network. APM tools do not help you solve network problems.

aaNPM tools are network packet sniffers by definition. The majority of these tools (NetScout, Cisco, Fluke Networks, Network Instruments, etc.) are hardware appliances that you connect to your network. They need to be connected to each segment of the network where you want to collect packets or they must be fed filtered and aggregated packet streams by NPB (Network Packet Broker) devices. aaNPM tools contain a wealth of network level details in addition to some valuable application data (EUM metrics, transaction details, and application flows). aaNPM tools help network engineers solve network problems that are manifesting themselves as application problems. aaNPM tools are not capable of solving code issues as they have no visibility into application code.

If I were on a network support team I would want to understand if applications were being impacted by network issues so I could prioritize remediation properly and have a point of reference if I was called out by an application support team.

Network and Application Team Convergence

I’ve been asked if I see network and application teams converging in a similar way to how dev and ops teams are converging as a result of the DevOps movement. I have not seen this at any of the companies I get to interact with. Based on my experience working in operations at various companies in the past, network teams and application support teams think in very different ways. Is it impossible for these teams to work in unison? No, but I see it as unlikely for at least the next few years.

But what about SDN (Software Defined Networking)? I think most network engineers see SDN as a threat to their livelihood, whether or not that is true. In any case, SDN will take a long time to make its way into operational use, and in the meantime network and application teams will remain separate.

I hope this was helpful in cutting through the marketing spin that many vendors are using to expand their reach. When it comes right down to it, your use case may require APM, aaNPM, a combination of both, or some other technology not discussed today. Technology is made to solve problems. I highly recommend starting by defining your problems and then exploring the best solutions available to solve them.

If you’ve decided to explore your APM tool options you can try AppDynamics for free by clicking here.

The Digital Enterprise – Problems and Solutions

According to a recent article featured in Wall Street and Technology, Financial Services (FS) companies have a problem. The article explains that FS companies built more data center capacity than they needed when profits were up and demand was rising. Now that profits are lower and demand has not risen as expected, the data centers are partially empty and very costly to operate.

FS companies are starting to outsource their IT infrastructure and this brings a new problem to light…

“It will take a decade to complete the move to a digital enterprise, especially in financial services, because of the complexity of software and existing IT architecture. “Legacy data and applications are hard to move” to a third party, Bishop says, adding that a single application may touch and interact with numerous other applications. Removing one system from a datacenter may disrupt the entire ecosystem.”

Serious Problems

The article calls out a significant problem that FS companies are facing now, and will be facing for the next decade, but doesn’t mention a solution.

The problem is that you can’t just pick up an application and move it without impacting other applications. Based upon my experience working with FS applications I see multiple related problems:

  1. Disruption of other applications
  2. Baselining performance and availability before the move
  3. Ensuring performance and availability after the move

All of these problems increase risk and the chance that users will be impacted.

Solutions

1. Disruption of other applications – The solution to this problem is easy in theory and traditionally difficult in practice. The theory is that you need to understand all of the external interactions with the application you want to move.

One solution is to use ADDM (Application Discovery and Dependency Mapping) tools that scan your infrastructure looking for application components and the various communications to and from them. This method works okay (I have used it in the past) but typically requires a lot of manual data manipulation after the fact to improve the accuracy of the discovered information.


ADDM product view of application dependencies.

Another solution is to use an APM (Application Performance Management) tool to gather the information from within the running application. The right APM tool will automatically see all application instances (even in a dynamically scaled environment) as well as all of the communications into and out of the monitored application.


APM visualization of an application and its components with remote service calls.


APM tool list of remote application calls with response times, throughput and errors.

A combination of these two types of tools would provide the ultimate in accurate and easy-to-consume information (an APM strength) along with the flexibility to cover all of the one-off custom application processes that might not be supported by an APM tool (an ADDM strength).

2. Baselining performance and availability before the move – It’s critically important to understand the performance characteristics of your application before you move it. This provides the baseline required for comparison’s sake after the move. The last thing you want to do is degrade application performance and user satisfaction by moving an application. The solution here is to leverage the APM tool referenced in solution #1. This is a core strength of APM and should be leveraged from multiple perspectives:

  1. Overall application throughput, response times, and availability
  2. Individual business transaction throughput and response times
  3. External dependency throughput and response times
  4. Application error rate and type

Application overview with baseline information.


Business transaction overview and baseline information.

3. Ensuring performance and availability after the move – Now that your application has moved to an outsourcer, it’s more important than ever to understand performance and availability. Invariably your application performance will degrade, and the finger-pointing between you and your outsourcer will begin. That is, unless you are using an APM tool to monitor your application. The whole point of APM tools is to end finger-pointing and to reduce mean time to restore service (MTRS) as much as possible. By using APM after the application move, you will provide the highest possible level of service to your customers.


Comparison of two application releases. Granular comparison to understand before and after states. – Key Performance Indicators


Comparison of two application releases. Granular comparison to understand before and after states. – Load, Response Time, Errors

If you’re considering or in the process of transitioning to a digital enterprise you should seriously consider using APM to solve a multitude of problems. You can click here to sign up for a free trial of AppDynamics and get started today.

Instrumenting .NET applications with AppDynamics using NuGet

Introduction

One of the coolest things to come out of the .NET stable at AppD this week was the NuGet package for Azure Cloud Services. NuGet makes it a breeze to deploy our .NET agent along with your web and worker roles from inside Visual Studio. For those unfamiliar with NuGet, more information can be found here.

Our NuGet package ensures that the .NET agent is deployed at the same time as the role is published to the cloud. After adding it to the project, you’ll never have to worry about deploying the agent when you swap your hosting environment from staging to production in Azure, or when Azure changes the machine under your instance. For the remainder of the post I’ll use a web role to demonstrate how to quickly install our NuGet package, the changes it makes to your solution, and how to edit the configuration by hand if needed. Even though I’ll use a web role, things work exactly the same way for a worker role.

Installation

So, without further ado, let’s take a look at how to quickly instrument .NET code in Azure using AppD’s NuGet package for Windows Azure Cloud Services. NuGet packages can be added via the command line or the GUI. In order to use the command line, we need to bring up the Package Manager Console in Visual Studio, as shown below.

[Screenshot: the Package Manager Console in Visual Studio]

In the console, type the following to install the package:

install-package AppDynamics.WindowsAzure.CloudServices

This will bring up the following UI, where you can enter the information the agent needs to talk to the controller and upload metrics. You should have received this information in the welcome email from AppDynamics.

[Screenshot: the AppDynamics agent configuration dialog]

The ‘Application Name’ is the name of the application in the controller under which the metrics reported by this agent will be stored. When ‘Test Connection’ is checked, we verify the information you entered by trying to connect to the controller, and an error message is displayed if the test connection is unsuccessful. That’s it: enter the information, click Apply, and we’re done. Easy peasy. No more adding files one by one or modifying scripts by hand. Once deployed, instances of this web role will start reporting metrics as soon as they receive any traffic. Oh, and by the way, if you prefer a GUI to typing commands in the console, the same thing can be done by right-clicking the solution in Visual Studio and choosing ‘Manage NuGet Packages’.

Anatomy of the package

If you look closely at the solution explorer you’ll notice that a new folder called ‘AppDynamics’ has been created. On expanding the folder you’ll find the following two files:

  • The installer for the latest and greatest .NET agent
  • Startup.cmd

The startup script makes sure the agent gets installed as part of the deployment process on Azure. In addition to adding these files, we also change the ServiceDefinition.csdef file to add a startup task, as shown below.

[Screenshot: the startup task added to ServiceDefinition.csdef]

If you need to change the controller information you entered in the GUI while installing the package, you can edit the startup section of the csdef file shown above. The application name, controller URL, port, account key, etc. can all be changed. On re-deploying the role to Azure, the new values take effect.
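
To give you a feel for it, here is a minimal sketch of what that startup section might look like. The Startup/Task/Environment elements are standard Azure service definition schema, but the command line and variable names below are illustrative assumptions rather than the literal markup the package generates, so treat your own csdef as the source of truth:

<Startup>
  <!-- Illustrative sketch only: installs the AppDynamics .NET agent before the role starts -->
  <Task commandLine="AppDynamics\Startup.cmd" executionContext="elevated" taskType="simple">
    <Environment>
      <!-- Placeholder values: use the controller details from your AppDynamics welcome email -->
      <Variable name="CONTROLLER_HOST" value="mycontroller.saas.appdynamics.com" />
      <Variable name="CONTROLLER_PORT" value="443" />
      <Variable name="APPLICATION_NAME" value="MyAzureApp" />
      <Variable name="ACCOUNT_KEY" value="your-account-key" />
    </Environment>
  </Task>
</Startup>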

Next Steps

Microsoft Developer Evangelist Bruno Terkaly blogged about monitoring the performance of multi-tiered Windows Azure based web applications. Find out more on the Microsoft Developer Network.

Find out more in our step-by-step guide on instrumenting .NET applications using AppDynamics Pro. Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.

Bootstrapping DropWizard apps with AppDynamics on OpenShift by Red Hat

Getting started with DropWizard, OpenShift, and AppDynamics

In this blog post, I’ll show you how to deploy a DropWizard-based application on OpenShift by Red Hat and monitor it with AppDynamics.

DropWizard is a high-performance Java framework for building RESTful web services. It is built by the smart folks at Yammer and is available as an open-source project on GitHub. The easiest way to get started with DropWizard is with the example application. The DropWizard example application was developed to, as its name implies, provide examples of some of the features present in DropWizard.


OpenShift can be used to deploy any kind of application with the DIY (do it yourself) cartridge. To get started, log in to OpenShift and create an application using the DIY cartridge.

With the official OpenShift quick start guide to AppDynamics, getting started with AppDynamics on OpenShift couldn’t be easier.

1) Sign up for an account on OpenShift by Red Hat

2) Set up the Red Hat client tools on your local machine

$ gem install rhc
$ rhc setup

3) Create a Do It Yourself application on OpenShift

$ rhc app create appdynamicsdemo diy-0.1
 --from-code https://github.com/Appdynamics/dropwizard-sample-app.git

Getting started is as easy as creating an application from an existing git repository: https://github.com/Appdynamics/dropwizard-sample-app.git

% rhc app create appdynamicsdemo diy-0.1 --from-code https://github.com/Appdynamics/dropwizard-sample-app.git

Application Options
-------------------
Domain: appddemo
Cartridges: diy-0.1
Source Code: https://github.com/Appdynamics/dropwizard-sample-app.git
Gear Size: default
Scaling: no

Creating application 'appdynamicsdemo' ... done
Waiting for your DNS name to be available ... done

Cloning into 'appdynamicsdemo'...
Your application 'appdynamicsdemo' is now available.

URL: http://appdynamicsdemo-appddemo.rhcloud.com/
SSH to: 52b8adc15973ca7e46000077@appdynamicsdemo-appddemo.rhcloud.com
Git remote: ssh://52b8adc15973ca7e46000077@appdynamicsdemo-appddemo.rhcloud.com/~/git/appdynamicsdemo.git/

Run 'rhc show-app appdynamicsdemo' for more details about your app.

With the OpenShift Do-It-Yourself container you can easily run any application by adding a few action hooks to your application. In order to make DropWizard work on OpenShift we need to create three action hooks for building, deploying, and starting the application. Action hooks are simply scripts that are run at different points during deployment. To get started, simply create a .openshift/action_hooks directory:

mkdir -p .openshift/action_hooks
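
One detail that is easy to miss: OpenShift only runs action hooks that are marked executable, so after creating the scripts below it is worth setting the execute bit (a small precaution; adjust the path if your hooks live elsewhere):

chmod +x .openshift/action_hooks/*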

Here are the hooks used by the sample application above.

The build hook uses Maven to download the project dependencies and package the project for production from source code:

.openshift/action_hooks/build

cd $OPENSHIFT_REPO_DIR

mvn -s $OPENSHIFT_REPO_DIR/.openshift/settings.xml -q package

When deploying the code, you need to replace the IP address and port for the DIY container. These properties are made available as environment variables:

.openshift/action_hooks/deploy

cd $OPENSHIFT_REPO_DIR

sed -i 's/@OPENSHIFT_DIY_IP@/'"$OPENSHIFT_DIY_IP"'/g' example.yml
sed -i 's/@OPENSHIFT_DIY_PORT@/'"$OPENSHIFT_DIY_PORT"'/g' example.yml
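
For context, the sed commands above assume example.yml contains literal @OPENSHIFT_DIY_IP@ and @OPENSHIFT_DIY_PORT@ placeholder tokens, which only become valid configuration after substitution. A rough sketch of the relevant fragment (the exact keys depend on the DropWizard version used by the sample app):

server:
  applicationConnectors:
    - type: http
      bindHost: @OPENSHIFT_DIY_IP@    # replaced by the deploy hook
      port: @OPENSHIFT_DIY_PORT@      # replaced by the deploy hook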

Let’s recap some of the smart decisions we have made so far:

  • Leverage OpenShift platform as a service (PaaS) for managing the infrastructure
  • Use DropWizard as a solid foundation for our Java application
  • Monitor the application performance with AppDynamics Pro

With a solid Java foundation we are prepared to build our new application. Next, try adding another machine or dive into the DropWizard documentation.

Combining DropWizard, OpenShift, and AppDynamics

AppDynamics allows you to instrument any Java application by simply adding the AppDynamics agent to the JVM. Sign up for an AppDynamics Pro self-service account, then log in using the account details from your email titled “Welcome to your AppDynamics Pro SaaS Trial” or the details you entered during an on-premise installation.

The last step in combining the power of OpenShift and DropWizard is to instrument the app with AppDynamics. Simply update your AppDynamics credentials in the Java agent’s AppServerAgent/conf/controller-info.xml configuration file.
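
As a rough guide, that file looks something like the sketch below. These element names match a typical Java agent install, but check the copy shipped with your agent version; all values shown are placeholders for your own account details:

<controller-info>
    <!-- Controller connection details from your AppDynamics welcome email (placeholder values) -->
    <controller-host>youraccount.saas.appdynamics.com</controller-host>
    <controller-port>443</controller-port>
    <controller-ssl-enabled>true</controller-ssl-enabled>
    <account-name>youraccount</account-name>
    <account-access-key>your-access-key</account-access-key>
    <!-- How this JVM appears in the controller UI -->
    <application-name>appdynamicsdemo</application-name>
    <tier-name>web</tier-name>
    <node-name>node1</node-name>
</controller-info>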

Finally, to start the application we need to run any database migrations and add the AppDynamics Java agent to the startup command:

.openshift/action_hooks/start

cd $OPENSHIFT_REPO_DIR

java -jar target/dropwizard-example-0.7.0-SNAPSHOT.jar db migrate example.yml

java -javaagent:${OPENSHIFT_REPO_DIR}AppServerAgent/javaagent.jar \
     -jar ${OPENSHIFT_REPO_DIR}target/dropwizard-example-0.7.0-SNAPSHOT.jar \
     server example.yml > ${OPENSHIFT_DIY_LOG_DIR}/helloworld.log &



Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.

Insights from an Investment Banking Monitoring Architect

To put it very simply, Financial Services companies have a unique set of challenges that they deal with every day. They are a high-priority target for hackers, they are heavily regulated by federal and state governments, they deal with and employ some of the most demanding people on the planet, and problems with their applications can impact every other industry across the globe. I know this from first-hand experience; I was an Architect at a major investment bank for over 5 years.

In this blog post I’m going to show you what’s really important when Financial Services companies consider application monitoring solutions, and warn you about the hidden realities that only reveal themselves after you’ve installed a large enough monitoring footprint.

1 – Product Architecture Plays a Major Role in Long Term Success or Failure

Every monitoring tool has a different core architecture. On the surface these architectures may look similar but it is imperative to dive deeper into the details of how all monitoring products work. We’ll use two real product architectures as examples.

“APM Solution A” is an agent-based solution. This means a piece of vendor code is deployed to gather monitoring information from your running applications. This agent is intelligent: it knows exactly what to monitor, how to monitor it, and when to dial itself back to do no harm. The agent sends data back to a central collector (called a controller) where the data is correlated, analyzed, and categorized automatically to provide actionable intelligence to the user. With this architecture the agent and the controller are very loosely coupled, which lends itself well to the highly distributed, virtualized environments you see in modern application architectures.

“APM Solution B” is also agent based. It has a 3-tiered architecture consisting of agents, collectors, and servers. On the surface this architecture seems reasonable, but when we look at the details a different story emerges. The agent is not intelligent, and therefore does not know how to instrument an application. This means that every time an application is restarted, the agent must send all of its methods to the collector so that the collector can tell the agent how and what to instrument. This places a large load on the network, delays application startup, and adds to the amount of hardware required to run your monitoring tool. After the collector has told the agent what to monitor, the collector’s job is to gather the monitoring data from the agent and pass it back to the server where it is stored and viewed. For a single application this architecture may seem acceptable, but you must consider the implications across a larger deployment.

Choosing a solution with the wrong product architecture will impact your ability to monitor and manage your applications in production. Production monitoring is a requirement for rapid identification, isolation and repair of problems.

2 – Monitoring Philosophy

Monitoring isn’t as straightforward as collecting, storing, and showing data. You could use that approach, but it would not provide much value. When looking at monitoring tools it’s really important to understand the impact of monitoring philosophy on your overall project and goals. When I was evaluating monitoring tools I needed to be able to solve problems fast, and I didn’t want to spend all of my time managing the monitoring tools themselves. Let’s use examples to illustrate again.

“APM Solution A” monitors every business transaction flowing through whatever application it is monitoring. Whenever any business transaction has a problem (slow or error) it automatically collects all of the data (deep diagnostics) you need to figure out what caused the problem. This, combined with periodic deep diagnostic sessions at regular intervals, allows you to solve problems while keeping network, storage, and CPU overhead low. It also keeps clutter down (as compared to collecting everything all the time) so that you solve problems as fast as possible.

“APM Solution B” also monitors every transaction for each monitored application but collects deep diagnostic data for all transactions all the time. This monitoring philosophy greatly increases network, storage, and CPU overhead while providing massive amounts of data to work with regardless of whether or not there are application problems.

When I was actively using monitoring tools in the Investment Bank I never looked at deep diagnostic data unless I was working on resolving a problem.

3 – Analytics Approach

Analytics comes in many shapes and sizes these days. Regardless of the business or technical application, analytics does what humans could never do. It creates actionable intelligence from massive amounts of data and allows us to solve problems much faster than ever before. Part of my process for evaluating monitoring solutions has always been determining just how much extra help each tool would provide in identifying and isolating (root cause) application problems using analytics. Back to my example…

“APM Solution A” is an analytics product at its core. Every business transaction is analyzed to create a picture of “normal” response time (a baseline). When new business transactions deviate from this baseline they are automatically classified as either slow or very slow, and deep diagnostic information is collected, stored, and analyzed to help identify and isolate the root cause. Static thresholds can be set for alerting but, by default, alerts are based upon deviation from normal so that you can proactively identify service degradation instead of waiting for small problems to become major business impact.

“APM Solution B” only provides baselines for the business transactions you have specified, and you have to manually configure the business transactions for each application. Again, on a small scale this methodology is usable, but it quickly becomes a problem when managing the configuration of tens, hundreds, or thousands of applications that keep changing as development continues. Searching through a large set of data for a problem is much slower without the assistance of analytics.


4 – Vendor Focus

When you purchase software from a vendor you are also committing to working with that vendor. I always evaluated how responsive every vendor was during the pre-sales phase but it was hard to get a good measure of what the relationship would be like after the sale. No matter how good the developers are, there are going to be issues with software products. What matters the most is the response you get from the vendor after you have made the purchase.

5 – Ease of Use

This might seem obvious, but ease of use is a major factor in software delivering a solid return on investment or becoming shelf-ware. Modern APM software is powerful AND easy to use at the same time. One of the worst mistakes I made as an Architect was not paying enough attention to ease of use during product evaluation and selection. If only a few people in a company are capable of using a product then it will never reach its full potential, and that is exactly what happened with one of the products I selected. Two weeks after training a team on product usage, almost nobody remembered how to use it. That is a major issue with legacy products.

Enterprise software is undergoing a major disruption. If you already have monitoring tools in place, now is the right time to explore the marketplace and see how your environment can benefit from modern tools. If you don’t have any APM software in place yet, you need to catch up to your competition, since most of them are already evaluating or have already implemented APM for their critical applications. Either way, you can get started today with a free trial of AppDynamics.

IT holds more business influence than it realises

A ‘well-oiled’ organization is one where IT and the rest of the business are working together and on the same page. In order to achieve this there needs to be good communication, and for good communication there needs to be a common language.

In most organizations, while IT are striving to achieve their goal of 99.999% availability, the rest of the business is looking to drive additional revenue, increase user satisfaction, and reduce customer churn.

Ultimately everyone should be working towards a common goal: SUCCESS. Unfortunately, different teams define their success in different ways, and this lack of alignment often results in mistrust between IT departments and the rest of the business.

Measuring success

Let’s look at how various teams within a typical organization define success today:

Operations:
IT ops teams are responsible for reducing risk and ensuring the application is available and the ‘lights are green’. The number ‘99.9’ can be either IT Ops’ best friend or its worst enemy. Availability targets such as these are often the only measure of ops success or failure, meaning many of the other things you are doing often go unnoticed.

Availability targets don’t show business insight, or the positive impact you’re having on the business. For instance, how much did performance improve after you implemented that change last week? Has the average order size increased? How many additional orders can the application process since re-platforming?

Development:
Dev teams are focussed on change. The Business demands they release more frequently, with more features, fewer defects, fewer resources, and often less sleep! Dev teams are often targeted on the number of updates and changes they can release, but nobody is measuring the success of those changes. Can anyone in your dev team demonstrate the impact of your last code release? Did revenues climb? Were users more satisfied? Were more orders placed?

‘The Business’:
The business is focussed on targets; last month’s achievements and end of year goals. This means they concentrate on the past and the future, but have little or no idea what impact IT is having on the business in the present. Consulting a data warehouse to gather ‘Business Intelligence’ at the end of the month does not allow you to keep your finger on the pulse of the business.

With everyone focussing on different targets, there is no real alignment to the overall business goals across the organization. One reason for this disconnect is the lack of meaningful shared metrics. More specifically, it’s access to these metrics in real time that is the missing link.

If I asked how much revenue has passed through your application since you started reading this blog post, or what impact your last code release had on customer adoption, how quickly could you find the answers? How quickly could anyone in your organization find the answers?

What if answers to these questions only took seconds?

Monitoring the Business in Real-time

In a previous post, I introduced AppDynamics Real-time Business Metrics which enables you to easily collect, auto-baseline, alert, and report on the Business data that is flowing through your applications… as it’s really happening.

This post demonstrates how to configure AppDynamics to extract all checkout revenue values from every business transaction and make this available as a new metric “Checkout Revenue” which can be reported in real-time just like any other AppDynamics metric.

With IT Ops, dev, and business owners all supporting business-critical applications that are responsible for generating revenue, checkout revenue is a great example of a business metric that every team could use to measure success.

Let’s look at a few examples of how this could change the way you do business, if everyone was jointly focussed on the same business metric.

Outage cost
The example below shows the revenue per minute vs. the response time per minute of an application. This application has obviously suffered an outage that lasted approximately 50 minutes, and it’s clear to see the impact it has had on the business in lost revenue. The short spike in revenue seen after the outage indicates users who returned to complete their transactions, but this is not enough to recover the revenue lost during the period.


Impact of agile releases
This example shows the result of a performance improvement program. The overall response time has improved by over a second across three code releases, and you can clearly see the additional revenue that has been generated as a result of those releases.


Here a 1-second improvement in response time has increased the revenue generated by the online booking system by more than 30%. The value a development team is delivering back to the business is clearly visible with each new release, allowing developers to focus on the areas that drive the most return and to quantify the value they deliver.

Marketing campaign
This example is a little more complex. At midday there is a massive increase in the number of people visiting this eCommerce website due to an expensive TV advertising campaign. The increased load on the system results in a small increase in the overall response time, but nothing too significant. However, despite the increased traffic to the site, revenue has not improved. If we take a look at the Number of Checkouts, a second business metric that has been configured, it’s clear the advertising campaign has driven additional users to the site, but these users have not generated additional revenue.


Common metrics for common success

With traditional methods measuring success in different ways, it’s impossible to align towards a common goal. This creates siloed working environments that make it impossible for teams to collaborate and prioritise.

By enabling all parts of the business to focus on the business metrics that really matter, organizations can proactively prioritise and resolve issues when they occur. It helps IT truly align with the priorities and needs of the business, allowing everyone to speak the same language and manage the bottom line. For example, after implementing AppDynamics Real-time Business Metrics, Doug Strick, the Internet Application Admin at Garmin, said the following:

“We can now understand how the application is growing over time. This data will prove invaluable in guiding future decisions in IT.”
-Doug Strick, Internet Application Admin

AppDynamics Real-time Business Metrics enables you to identify business challenges and react to them immediately, instead of waiting hours, days, or even weeks for answers. It correlates performance, user experience, and business metrics together in real time, in one place.

If you want to capture business performance and measure your success against it in real time, you can get started today with Real-time Business Metrics by signing up for a free trial of AppDynamics Pro here.

AppDynamics Pro on the Windows Azure Store

Over a year ago, AppDynamics announced a partnership with Microsoft and launched AppDynamics Lite on the Windows Azure Store. With AppDynamics Lite, Windows Azure users were able to easily monitor their applications at the code level, allowing them to identify and diagnose performance bottlenecks in real time. Today we’re happy to announce that AppDynamics Pro is now available as an add-on in the Windows Azure Store, which makes it easier for developers to get complete visibility into their mission-critical applications running on Windows Azure.

  • Easier/simplified buying experience in Windows Azure Store
  • Tiered pricing based on number of agents and VM size
  • Easy deployment from Visual Studio with NuGet
  • Out-of-the-box support for more Windows Azure services

“AppDynamics is one of only a handful of application monitoring solutions that works on Windows Azure, and the only one that provides the level of visibility required in our distributed and complex application environments,” said James Graham, project manager at MacMillan Publishers. “The AppDynamics monitoring solution provides insight into how our .NET applications perform at a code level, which is invaluable in the creation of a dynamic, fulfilling user experience for our students.”

Easy buying experience

Purchasing the AppDynamics Pro add-on in the Windows Azure Store only takes a couple of minutes. In the Azure portal, click NEW at the bottom left of the screen and then select STORE. Search for AppDynamics, then choose your plan, add-on name, and region.


Tiered pricing

AppDynamics Pro for Windows Azure features new tiered pricing based on the size of your VM (extra small, small or medium, large, or extra large) and the number of agents required (1, 5 or 10). This new pricing allows organizations with smaller applications to pay less to store their monitoring data than those with larger, more heavily trafficked apps. The cost is added to your monthly Windows Azure bill, and you can cancel or change your plan at any time.


Deploying with NuGet

Use the AppDynamics NuGet package to deploy AppDynamics Pro with your solution from Visual Studio. For detailed instructions check out the how-to guide.
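
If you prefer the console to the GUI, the same package can be installed from the Package Manager Console using the package ID mentioned earlier:

PM> Install-Package AppDynamics.WindowsAzure.CloudServices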


Monitoring with AppDynamics

  • Monitor the health of Windows Azure applications
  • Troubleshoot performance problems in real time
  • Rapidly diagnose root cause of performance problems
  • Dynamically scale your Windows Azure application up and down based on performance metrics


Additional platform support

AppDynamics Pro automatically detects and monitors most Azure services out-of-the-box, including web and worker roles, SQL, Azure Blob, Azure Queue and Windows Azure Service Bus. In addition, AppDynamics Pro now supports MVC 4. Find out more in our getting started guide for Windows Azure.

Get started monitoring your Windows Azure app by adding the AppDynamics Pro add-on in the Windows Azure Store.

Monitoring Java Applications with AppDynamics on OpenShift by Red Hat

At AppDynamics, we are all about making it easy to monitor complex applications. That is why we are excited to announce our partnership with OpenShift by Red Hat to make it easier than ever to deploy to the cloud with application performance monitoring built in.

Getting started with OpenShift

OpenShift is Red Hat’s Platform-as-a-Service (PaaS) that allows developers to quickly develop, host, and scale applications in a cloud environment. With OpenShift you have a choice of offerings, including online, on-premise, and open source project options.

OpenShift Online is Red Hat’s public cloud application development and hosting platform that automates the provisioning, management and scaling of applications so that you can focus on writing the code for your business, startup, or next big idea.


OpenShift is a platform as a service (PaaS) from Red Hat that is ideal for deploying large distributed applications. With the official OpenShift quick start guide to AppDynamics, getting started with AppDynamics on OpenShift couldn’t be easier.

1) Sign up for a Red Hat OpenShift account

2) Set up the Red Hat client tools on your local machine

$ gem install rhc
$ rhc setup

3) Create a JBoss application on OpenShift

$ rhc app create appdynamicsdemo jbossews-2.0 --from-code https://github.com/Appdynamics/appdynamics-openshift-quickstart.git


Get started today with the AppDynamics OpenShift getting started guide.

Production monitoring with AppDynamics Pro

Monitor your critical cloud-based applications with AppDynamics Pro for code level visibility into application performance problems.


Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

Going live with a mobile app: Part 4 – Monitoring your mobile application

In the third part of this series I discussed preparing to launch a mobile application with load testing and beta testing and highlighted the differences between the iOS and Android ecosystems. In this post I will dive into monitoring your production mobile application.

Production consideration: Crash and Error Reporting

Crash and error reporting is a requirement not only for the development of your application, but also for testing and production. There are quite a few crash-reporting tools available, including AppDynamics, Crashlytics, Crittercism, New Relic, BugSense, HockeyApp, InstaBug, and TestFlight. All of these tools are capable of reporting fatal errors in your application to help developers track down the root cause of bugs. The problem with crash and error reporting is that it only tracks issues after they have affected users. Both the Apple App Store and the Google Play Store provide basic crash reporting metrics.

The harsh reality is that mobile applications have a fickle audience that is heavily reliant on curated app stores and reviews. Reviews can make or break a mobile application, as can being featured in an app store. A common best practice is to allow in-app feedback to preempt negative reviews by engaging the user early on. There are a variety of services that make it easy to provide in-app feedback like Apptentive, Appboy, and Helpshift.

This is why being proactive with quality assurance and production monitoring has a significant impact on the success of an application. Not only must the application work as designed, but the experience must also be polished. Expectations in the mobile community are significantly higher than on the web.

Production consideration: Analytics & Instrumentation

Smart executives are data-driven, and mobile applications can be a rich source of business intelligence. When it comes to instrumentation, the earlier you instrument and the more metrics you track, the better informed you will be. Analytics and instrumentation are crucial for making informed, smart decisions about your business.

Who is your audience? What platforms and devices do they use? What user flows are the most common? Why do users abandon? Where are users located? What is the performance of your application?


Tracking important demographics of your audience like operating systems, devices, carriers, application versions, and geography of users is key. These metrics allow you to target your limited resources to where they are needed most. There are quite a few analytics platforms built for mobile including Google Analytics, Flurry, Amazon Analytics, FlightPath, MixPanel, KissMetrics, Localytics, and Kontagent.

All of these tools will give you better insights into your audience and enable you to make smarter decisions. There are important metrics to track like total # of installations, average session lifetime, engagement and growth, and geography of your users. Once you have basic user demographics you can use MixPanel or KissMetrics to track user activity with custom event tracking. The more instrumentation you add to your application the more metrics and customer intelligence you will have to work with.

Production consideration: Application Performance Monitoring

Application performance management tools enable you to discover the performance of your mobile and server-side applications in production. APM allows you to understand your application topology, third-party dependencies, and the performance of your production application on both the client side and the server side. Modern application performance management solutions like AppDynamics track crashes, errors, and the performance of the mobile application, and correlate that performance with your backend platform, while providing rich user demographics and metrics on the end user experience. With modern business reporting tools you can evaluate the health of your application and be proactive when performance starts to deteriorate.


End User Monitoring allows you to understand the application experience through the eyes of your real users. There are quite a few solutions for monitoring the end user experience in the marketplace. AppDynamics, Crittercism, New Relic, and Compuware allow you to instrument your application and gain visibility into production performance problems.

Business consideration: Real-time business metrics

Once you have launched a successful mobile experience you need to understand how that experience affects your business. If you have a business-critical application like the Apple Store checkout application or the FedEx package management application, your business depends on the performance and use of that application. You can gain valuable insight into your business if you track and correlate the right metrics. For example, how does performance affect revenue, and what is the average value of a checkout transaction? Understand your core business metrics and correlate them to your mobile experience for maximum business impact.
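
As a quick illustration (the numbers here are made up): if your app records $12,000 of checkout revenue across 400 checkout transactions in an hour, the average order value for that hour is 12,000 / 400 = $30. Watching that ratio alongside response time tells you whether slowdowns are driving away your biggest orders.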


Business consideration: Monetization

If your plan is to retire on this application’s earnings, you need a monetization strategy. The most common ways to make money from applications are pay to play (charge a fee for your application), freemium (offer a free version with a pro upgrade), in-app purchases (levels, tokens, and credits), and traditional advertising. There are many services that enable mobile advertising, like Apple’s iAds, Google’s AdMob, Amazon’s Mobile Ads, Flurry, inMobi, Millennial Media, and MoPub. All of these strategies require precise execution, and some work better for specific types of apps. Experiment with multiple strategies and do what works best for your business.

Moving forward

It is no longer enough just to have a presence on the web. In fact, more and more companies are going mobile first. The mobile landscape is constantly evolving, and the mobile market continues to grow year over year.

Want to start monitoring your iOS or Android application today? Sign up for our beta program to get the full power of AppDynamics for your mobile apps. Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.

Going live with a mobile app: Part 3 – Launching a mobile application

In the second part of this series I discussed developing a mobile application, choosing a backend platform, and building for various network conditions. In this post I will dive into some considerations when launching a mobile application.

Mobile app audiences are a notoriously fickle bunch, and a poor first impression often results in a very harsh app store review that will negatively impact your app’s growth. When an app store rating can make or break your application, you have to be diligent in making sure every user has a stellar experience. The best way to do this is to thoroughly test your mobile experience and load test your backend to ensure you can handle peak traffic.

The key to a successful launch is great planning and testing. Launching a mobile application is significantly more difficult than launching a common web application. Not only is the audience more fickle, but you also have to adhere to third-party processes and procedures. Thorough quality assurance, crash and error reporting, load testing, and proactive production monitoring are essential to launching a successful mobile application.

Launch consideration: Testing native applications across mobile devices

Testing mobile applications is notoriously difficult due to the vast number of devices, but there are a few services that make this easier for engineers. The strategy I have seen most often is to go to Amazon, buy the top twenty Android and iOS devices, and manually test your application on every one. Mobile device labs of this sort are quite expensive to set up and maintain, and often require some level of automation to be productive. An alternative to setting up your own mobile lab is to use a mobile app testing platform like TheBetaFamily, which offers an easy way to test your native application across many different devices and audiences.

Launch consideration: Capacity planning and load testing

Capacity planning is key to the successful launch of any web application (or mobile backend). If you want to understand what can go wrong, look no further than the failed launch of healthcare.gov. Understanding your limits and potential capacity is a requirement for planning how much traffic you can handle. Make an educated assumption about potential growth and you can come up with a plan for how many concurrent users you might need to support.
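
As a rough, purely illustrative calculation (every number here is an assumption): if you project 500,000 daily active users, expect 10% of them to arrive in the peak hour, and measure an average session length of 5 minutes, then peak concurrency is roughly 500,000 × 0.10 × (5 / 60) ≈ 4,200 concurrent users. That is the load your backend tests should comfortably exceed.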


Once you understand your maximum concurrent users, you can load test your backend infrastructure to be sure your mobile experience doesn’t suffer. There are quite a few tools available to help you load test and evaluate the scalability of your backend platform. Apica, Soasta, and Blazemeter offer services that let you simulate your mobile application being used at high levels of concurrency.

Launch consideration: Beta testing

Beta testing is the last quality assurance step before you make your app generally available. TestFlight, HockeyApp, and Ubertesters allow you to distribute your application to a select group of test users. When it comes to beta testing, the more users you can convince to give feedback, and the larger the distribution of devices, the better. These beta testing and distribution tools enable you to gather feedback early about what isn’t working in your application, and save you from the embarrassment of negative app store reviews due to obvious problems. A/B testing is also a great way to find out which flows work best as part of your beta testing experience. This is an essential step to a successful launch: the more beta testers you can find, the better.

Launch consideration: Hard launch or Soft launch?

Once you have beta tested and decided you have a great application that is battle-tested for production, you need to decide how to launch. The real question is: hard launch or soft launch? The traditional hard launch is straightforward: your app is approved in the app store and you go live. There are a few different strategies for soft-launching a major application. The most common is to soft launch outside of your primary market. If you are planning to release in the USA, you can pick another region with similar characteristics, like Canada, Australia, or the United Kingdom. Soft launching in a secondary market means you can validate assumptions early and beta test before reaching your key audience. A soft launch can validate product/market fit, app experience, usability, and app/game/social mechanics. The result is that your first encounter with your key demographic is informed by the data from your sample audience, giving you a much more polished and proven app experience.

Launch consideration: App store submission process

The application submission process varies greatly depending on the app store. This is where you get to sell your application with a marketing description, search keywords, and screenshots of your app in action. You can specify pricing and what regions/markets you want your app to be available in.


With Apple, it is customary to wait up to two weeks for your application to be reviewed and approved for production. Apple routinely rejects applications for being low quality, using unsupported APIs, and not following design guidelines. Google, on the other hand, offers a streamlined release process that takes less than one hour, but doesn’t provide the first line of protection that Apple does by rejecting apps with obvious flaws.

Mobile insights with AppDynamics

With AppDynamics for Mobile, you get complete visibility into the end user experience across mobile and web with the end user experience dashboard. Get a better understanding of your audience and where to focus development efforts with analytics on devices, carriers, OS versions, and application versions.


Want to start monitoring your iOS or Android application today? Sign up for our beta program to get the full power of AppDynamics for your mobile apps.


In the next post in this series I will dive into monitoring a production mobile app and the various tools that are available. As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.