A non-technical guy’s take on Business Transaction Monitoring

I began my journey in the Application Performance Management (APM) space a little over two years ago. Transitioning from a security background, the biggest thing I was concerned about was picking up the technology quickly. Somewhere between JVMs, CLRs, JMX, IIS, and APIs I was a little overwhelmed.

The thing that caught my eye most in the early stages of learning about the APM space (outside of AppDynamics’ growth & the growing IT Ops market) was the term Business Transaction Monitoring.

Gartner defines one of its key criteria as “User Defined Transaction Profiling – The tracing of user-grouped events, which comprise a transaction as they occur within the application as they interact with components discovered in the second dimension; this is generated in response to a user’s request to the application.”

I know what you’re thinking… “That is what caught your eye about APM?!” Actually, yes! Despite Gartner’s Merriam-Webster-esque phrasing, I was able to grasp what the term is attempting to communicate: end-to-end visibility is critical. Or, in other words, pockets of visibility (silos) are bad. Unfortunately, many organizations have silos that keep them in the dark about understanding and managing their customers’ digital experience. That’s often because as companies grow, they become plagued by structure: individual toolsets purposed with proving innocence rather than achieving a resolution, and a lack of decentralization.

In talking with hundreds of customers over the past two years, it’s amazing how many mature application teams still struggle with end-to-end visibility. The market only makes the problem worse, since every solution provider out there claims to provide true end-to-end visibility. So, when you’re researching potential application monitoring & analytics solutions, make sure to ask the vendor what information they’re collecting and how they’re presenting that data to the end user. For example, if you look under the hood of a potential app monitoring solution and find that it takes a “machine-centric” approach, displaying JMX metrics and CPU usage in a fat-client UI, then you’re probably looking at the wrong solution. Compare that against a purpose-built APM solution that counts, measures, and scores every single user’s interaction with your mobile / web app and presents that data in the vehicle of a business transaction. It’ll be like finally getting your eyes checked and getting prescription glasses!

Alright, let’s get to the core message of this post. A solid APM strategy must be based on business transactions. Why? The end user’s experience (the BT) is the only constant unit of measurement in today’s applications. Architectures, languages, cloud platforms, and frameworks all come and go, but the user experience stays the same. With such dynamic change occurring in the app landscape of today, it’s also important that your APM solution can dynamically instrument in those environments. If you have to tell your monitoring solution what to monitor any time there’s a new code release, then you’re wasting time. The reason so many companies have chosen AppDynamics is that AppDynamics has architected every feature of its platform around the concept of a BT. AppDynamics provides Unified Monitoring through business transactions. One can pivot and view data across all tiers of an app, including code, end user browser experience, machine data, and even database queries, in just a few clicks.

I wish I had a helpful analogy for a Business Transaction, but I’m going to have to settle for encouraging you to view a demo of our solution. Or, download our free trial, install the app agent, generate load, and watch how we automatically categorize similar requests into BTs. If you have any questions, feel free to reach out to me or anyone else in our sales org.

The title of this blog promised a non-technical guy’s take on BTs, so I’ll simplify my ranting with the following conclusion: your business is likely being disrupted, and now defined, by software. If so, having a tool like AppDynamics is key to managing your app-first approach. Your APM solution (hopefully you’re already using AppDynamics) must have a proven approach to handling complex apps by focusing on unit groupings called “Business Transactions,” or BTs, that provide end-to-end visibility. A proper approach to monitoring with BTs is critical because it marries the 10,000-foot view of the end user’s digital interaction with the business (what the business wants) and the 1-foot view of class/method visibility (what your app teams need). A successful BT monitoring strategy will enable your business to effectively monitor, manage, and scale your critical apps, and provide rich context for making intelligent, data-driven decisions.

2015 APM Predictions from AppDynamics

This article originally appeared on APMDigest.com

In addition to our predictions on APMdigest’s list of 15 Predictions for 2015, AppDynamics offers some additional predictions:

1. In 2014, we continued to see the rapid ascent of a new generation of APM companies that leverage big data, cloud, SOA, analytics, and Agile technologies to out-innovate the solutions provided by legacy APM providers. We will see more of this in 2015, with many older companies going private due to their internal structure and their inability to compete with younger, more innovative companies.

2. Mobile APM as a standalone offering will no longer exist in 2015. Businesses will realize how much their mobile apps rely on backend infrastructure, which will necessitate end-to-end visibility across all of their applications from one comprehensive APM solution.

3. The gap between business analytics and IT analytics is quickly narrowing. In 2015, software analytics and business analytics will be viewed as one and the same, and as a critical piece of business intelligence by stakeholders on both sides of the equation.

4. APM is not a new space, but it is changing rapidly with the ever-changing design and delivery of software. Old APM solutions cannot keep up with modern applications and the amount of distributed infrastructure they require. Companies that do not adopt a modern APM solution by the end of 2015 will be severely hindered and fall behind their competitors.

5. Shop Direct’s CEO commented this year that 50% of the company’s consumers viewed its site through a mobile device, and that in 2015, 100% of its customers will test content through its mobile app. 2015 will be do or die in terms of mobile. Mobile channels are exploding, and businesses need to get the mobile experience right this year, not just one time but on an ongoing basis, with the right kind of APM tools.

If you can’t see it, you can’t manage it – ITOA use case #1

“There were 5 exabytes of information created between the dawn of civilization through 2003, but that much information is now created every 2 days, and the pace is increasing…” – Eric Schmidt, Former CEO, Google.

If IT leaders hadn’t already heard Schmidt’s famous quotation, today they are definitely facing the challenge he describes. Gone are the days when IT leaders were tasked with just keeping an organization running; now IT teams are charged with driving innovation. As businesses become defined by the software that runs them, IT leaders must not only collect and make sense of the increasing amount of information these systems generate, but leverage this data as a competitive advantage in the marketplace. That advantage may come in many forms, but generally speaking, the more IT leaders know about their environments and the ways end users interact with them, the better off they (and the business) will be. Gleaning this type of insight from IT environments is what analysts refer to as IT Operations Analytics (ITOA). ITOA solutions collect the structured and unstructured data generated by IT environments, process that data, and display the information in an actionable way so operations teams can make better-informed decisions in real time. In this series, I’d like to discuss five common ITOA use cases we see across our customer base, starting with visualizing your environment. In the rest of the series I’ll examine each of the other use cases and describe how a solution like the Application Intelligence Platform can address each one and, in turn, provide value for operations teams.

The five common ITOA use cases I’ll delve into are:

  • Visualize the environment
  • Rapid troubleshooting
  • Prioritize issues and opportunities
  • Analyze business impact
  • Create action plans

Visualizing the environment

The first use case refers to the ability of an ITOA system to model the infrastructure and / or application stack being monitored. These models vary in nature but are often topological representations of the environment. Being able to visualize the application environment and see its dependencies is an important foundation for the rest of the use cases on this list.

In the Summer ‘14 release announcement blog, we highlighted the enhancements we’ve made to our flow maps, the visual representation of the application environment, including application servers, databases, web services, and more.

What’s great about the AppDynamics approach is that this flow map is discovered automatically out of the box, unlike legacy monitoring solutions that require significant manual configuration to get the same kind of view. We also automatically adjust this flow map on the fly when your application changes (re-architected app, code release, etc.). Because we know all the common entry and exit points of each node, we simply tag and trace the paths the different user requests take to paint a picture of the flow of requests and all the interactions between different components inside the application. Most customers see something like the flow map below within minutes of installing AppDynamics in their environment.
[Screenshot: an automatically discovered flow map]
Now, a flow map like this is obviously very valuable, but what happens when the application environment is very large and complex? How does this kind of view scale for the kinds of enterprise applications many AppDynamics customers have deployed, with thousands of nodes and potentially hundreds of tiers? Luckily for our customers, the Application Intelligence Platform was built from the ground up to handle these kinds of environments with ease. Two characteristics of our flow maps enable operations teams to manage large-scale application performance management deployments: self-aggregation and self-organizing layouts.

Self-aggregation refers to our powerful algorithms that make these complex environments more manageable by condensing and expanding the visualization to enable intelligent zooming in and zooming out of the topology of the application. This allows us to automatically deliver the right level of application health indicators to match the zoom level.

For example, this is what a complex application could look like when zoomed all the way out:
[Screenshot: a complex application zoomed all the way out]
As one zooms in, relevant metrics information becomes visible:
[Screenshot: zooming in reveals relevant metrics]
Until you are zoomed all the way in on a particular tier and can see all of the associated metrics you’d care about:
[Screenshot: a single tier with all of its associated metrics]
The ability to iterate back and forth between a macro-level view of the application and a close-up of a particular part of the environment gives operations teams the visibility they need to understand exactly how an application functions and how the different components interact with each other.

Self-organizing layouts refers to our ability to automatically arrange service and tier dependencies, using auto-grouping heuristics to dynamically determine tier and node weightings. By leveraging static data (like application tier profiles) and dynamic KPIs (like transaction response times), we organize the business-critical tiers in a way that brings the most important parts of the application to the forefront, depending on the type of layout you prefer.

One can automatically group the flow map into a circular view:
[Screenshot: circular layout]
You can let AppDynamics suggest a layout:
[Screenshot: suggested layout]
You can create a custom layout just by dragging and dropping individual components:
[Screenshot: custom drag-and-drop layout]
And you can auto-fit your layout to the screen for efficient zooming in / out:
[Screenshot: layout auto-fit to the screen]
You’ve seen how AppDynamics can visualize individual applications, but what if, like many of our large enterprise customers, you have many different complex applications that have dependencies on one or more other applications? How does one obtain a data-center view to understand, at a high level, what application health looks like across all applications?

With the cross-app business flow feature, customers can do just that. AppDynamics even supports role-based access control (RBAC), so administrators can limit user access to a particular application. We allow customers to group, define, and limit access to applications however makes the most sense for their individual environments and their business.

[Screenshot: cross-application flow view]

As you can see, AppDynamics provides a great way for IT Operations teams to discover and visualize their application environment. We automatically map the application out of the box, provide flexible layout options so customers can customize the view to their liking, and offer a way for Ops teams to understand how different applications interact with each other.

In the next post in this series, we’ll discuss how the Application Intelligence Platform can address the second common ITOA use case: rapid troubleshooting. In the meantime, I encourage you to sign up for free and try AppDynamics for yourself.

5 Secrets to Better PHP Performance

Wait! Do you really need to profile that PHP code? Are you sure you want to start down that time-consuming, tedious path? If you’re looking to squeeze some more performance out of your PHP web application, there are a few relatively quick and easy checks to perform that can give your performance a boost before you dive into refactoring the code. And even if you’re intent on profiling your PHP code, you should still look at these areas to make sure you’re getting maximum performance.

Cache In On OPCache

One of PHP’s strengths is that your source is compiled on the fly into instructions called opcodes, so you can develop rapidly and test without pausing to compile your code with every change you make. However, it’s inefficient and slow to recompile identical code each time that code runs on your website.

For many years, opcode caches have been the go-to solution for this particular slowdown. These caches are PHP extensions that hook into the compilation system and save the output of compiled code into memory. On future runs, PHP checks that the source file has not changed (via timestamp and file-size checks) and, if it hasn’t, runs the cached copy of the code.

The most famous of these caches was APC, the Alternative PHP Cache. APC not only provided an opcode cache, but also allowed user data to be persisted in shared memory.

Given the importance of having an opcode cache configured to get optimal performance out of a PHP application, the PHP core team decided to include one by default with all versions of PHP since 5.5. They chose OPCache, formerly Zend Optimizer+. Once part of the commercial Zend Server offering, it has since been open-sourced back to the community.
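Enabling it is usually just a php.ini change. As an illustrative sketch, the directive names below are real OPCache settings, but the values are generic starting points you should tune for your own app:

```ini
; OPCache is bundled with PHP 5.5+; these values are starting points, not rules
opcache.enable=1
; shared memory for compiled opcodes, in megabytes
opcache.memory_consumption=128
; maximum number of scripts to cache
opcache.max_accelerated_files=4000
; how often (in seconds) to re-check source files for changes
opcache.revalidate_freq=60
```

Raising `revalidate_freq` trades freshness for fewer stat calls, which matters most on busy production servers.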

(To learn more about the importance of OpCache to PHP application performance, see this excellent article by my colleague Rob Bolton.)


Look Outside Your Application

Maybe it’s not your PHP code at all that’s slowing your application down, especially if you’ve implemented an opcode cache. It’s more than likely that at least several of your application’s bottlenecks occur when accessing external resources. Let’s look at a couple of suspects.

Database Delays

It’s not unusual for the database layer to account for 90 percent of measured execution time in a PHP application. So it makes sense to spend the necessary time reviewing your codebase for all instances of database access.

First and most obvious: turn on the slow SQL log, then find and fix the slow queries it catches. Then question the queries themselves. Are they efficient? Do you make the same query multiple times in one execution of your code? (Even with a query cache, that’s still inefficient.) Are you making too many queries? Do you have queries hitting a table without an appropriate index?

Investing a little time in fixing your queries can noticeably reduce your database access time and noticeably improve your application performance.
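As a quick sketch of the “same query multiple times” fix, request-scoped memoization keeps a repeated lookup from ever reaching the database twice. The fetchUserFromDb() helper and $queryCount counter here are hypothetical stand-ins for a real data layer:

```php
<?php
// Sketch: request-scoped memoization so a repeated lookup hits the
// database only once. fetchUserFromDb() is a hypothetical stand-in
// for a real data-access layer; $queryCount just counts round-trips.
$queryCount = 0;

function fetchUserFromDb(int $id): array
{
    global $queryCount;
    $queryCount++; // imagine a real SELECT ... WHERE id = ? here
    return ['id' => $id, 'name' => "user-$id"];
}

function fetchUser(int $id): array
{
    static $cache = [];
    if (!array_key_exists($id, $cache)) {
        $cache[$id] = fetchUserFromDb($id);
    }
    return $cache[$id];
}

$a = fetchUser(42);
$b = fetchUser(42); // served from memory, no second query
```

The static $cache lives only for the current request, so there is no invalidation problem to manage.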

Filesystem Snafus

I/O, I/O, who knows where the time goes? Some of it is with all the in-and-out of your file system. So study your filesystem for the same kinds of inefficiencies you looked for in your database queries. Some likely time-consuming culprits: reading in local files, processing XML, image processing, or using the filesystem for session storage.

Specifically, look for code that causes a file stat to happen (the reading of a file’s statistics, such as the date it was last modified). Functions such as file_exists(), filesize(), or filemtime() trigger file stats, and are easy to accidentally leave inside a loop. Never do something twice that only needs to be done once; that’s the worst kind of wasted time.
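To make that concrete, here’s a minimal, self-contained sketch of hoisting the stat out of the loop. (PHP does maintain its own stat cache, which softens repeated calls, but the per-iteration function-call overhead is still avoidable.)

```php
<?php
// Sketch: hoist file stats out of hot loops.
$path = tempnam(sys_get_temp_dir(), 'stat');
file_put_contents($path, 'hello');

// Wasteful: one filesize() call per iteration
$total = 0;
for ($i = 0; $i < 1000; $i++) {
    $total += filesize($path);
}

// Better: stat once, reuse the value
$size = filesize($path);
$betterTotal = 0;
for ($i = 0; $i < 1000; $i++) {
    $betterTotal += $size;
}

unlink($path);
```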

Keep Your Eye On APIs

What other external resources do you rely on? It’s a rare application that doesn’t leverage APIs. Unfortunately, in most cases you don’t have control over the remote APIs you’re using, so you can’t do anything directly about their performance. You can, however, mitigate the effect of API performance on your code through techniques such as caching API output or making API calls in the background.

Your main goal is to protect your user from a failing or misbehaving API. Make sure you have reasonable timeouts in place for any API requests and, to the best of your application’s ability, be ready to display your application’s output without the API’s response.
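A minimal sketch of that defensive posture, using a deliberately unreachable URL (an assumption for illustration) so that the fallback path is the one that runs:

```php
<?php
// Sketch: call a remote API with a short timeout and degrade gracefully.
// The URL below is deliberately unreachable to demonstrate the fallback.
function fetchJsonWithFallback(string $url, array $fallback, float $timeoutSecs = 2.0): array
{
    $ctx = stream_context_create(['http' => ['timeout' => $timeoutSecs]]);
    $body = @file_get_contents($url, false, $ctx);
    if ($body === false) {
        return $fallback; // API slow or down: protect the user experience
    }
    $decoded = json_decode($body, true);
    return is_array($decoded) ? $decoded : $fallback;
}

$recommendations = fetchJsonWithFallback('http://127.0.0.1:9/recs', ['items' => []]);
```

In production you would likely also cache the last good response and serve that instead of an empty default.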

Now Profile Your PHP

If you’re lucky, just enabling an opcode cache and optimizing external resource usage is enough to get the performance gains you need at the moment. But eventually, as your application needs increase, you’ll need or want to go deeper to get better performance to maintain or boost user experience and conserve hosting costs.

A number of open source tools can help you profile your PHP code and discover where the most time is being spent. One of the most common is Xdebug. Xdebug requires compiling and running a special extension on your server, and was originally intended, as its name implies, as a debugging tool; the profiling capabilities were added later.
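As a sketch, profiling with Xdebug 2.x is typically switched on in php.ini; the trigger mode below keeps profiling overhead off until a request explicitly asks for it. The extension path shown is illustrative and varies by system:

```ini
; Load Xdebug and enable on-demand profiling (Xdebug 2.x directives)
zend_extension=/usr/lib/php5/modules/xdebug.so ; path varies by system
xdebug.profiler_enable=0
; profile only requests that carry XDEBUG_PROFILE in GET/POST/cookie
xdebug.profiler_enable_trigger=1
; cachegrind output for tools like KCachegrind or Webgrind
xdebug.profiler_output_dir=/tmp
```

The resulting cachegrind files show per-function call counts and inclusive/exclusive time, which is exactly what you need to find hot spots.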

It’s not easy to keep application performance in step with ever-increasing user expectations. But looking into and optimizing the functionality described above can help make sure your PHP application is performing at its best.

To gain better visibility and ensure optimal PHP application performance, try AppDynamics for FREE today!

Before PHP Performance, Looking at Your Software Process

In an effort to optimize your application’s performance, you benchmark and profile your code, build a solid testing environment, collect key metrics, the whole nine yards. Yet a realization eventually dawns on you: your team isn’t pushing out new features as fast as it used to without placing the performance of your app in jeopardy. You’ve hit a tipping point, you’re neck-deep in a backlog pile-up, and you can’t evolve your software as fast as you once did. You miss those good ol’ days when new features shipped almost every iteration, bugs were few and far between, the company couldn’t keep up with how fast your team was developing, and life was good.

The answer may be simpler than you think: the solution to reaching maximum development velocity probably lies in your development process itself. Your team’s velocity should evolve to balance new features, bug resolution, and application optimizations. Striking the right balance among these variables will help you reach maximum effectiveness.

There are a number of things that you need to look at and evaluate, such as:

Project Management Methodology

Software lifecycle management patterns have been observed and summarized into best practices, but, as with software design patterns, there is no perfect solution that applies to every situation. The implementation of a methodology can (and probably should) be tailored to best fit the personality of your team.

A shop may implement an Agile process, keep track of velocity, estimate story points, and have the art of estimating down to a science. But where do the lines of effectiveness and efficiency cross? You may be extremely efficient at implementing your process, yet so caught up with following the “rules” that you’re losing effectiveness.

That’s right, effectiveness and efficiency are not the same thing.

Of course, certain rules and checkpoints are necessary: code review, ensuring proper builds are run before deployment, etc. But ask yourself whether each checkpoint is necessary or whether it’s hindering progress. You should also explore whether you’re lacking certain processes that could bring some control to the chaos. At some point you’ll need to determine for yourself whether you have too many checkpoints or not enough.

Building Your Team

You need to ensure that your development team is right for your project. That means having both the right skills and the right size.

Understaffing a project leaves you struggling to get everything done in time, with everyone pulling long hours and getting burned out. Too many people can be just as bad. Not only will people feel left out, as if they can’t contribute, but they may find themselves poking at pieces of the code in their spare time that didn’t really need to be touched. The team will lose focus, morale will drop, and the entire team dynamic will suffer.

It’s also just as important to make sure that you have the right mix of skills. Do you need PHP experts? Dedicated JavaScript developers? HTML/CSS wizards? DevOps people to integrate tightly with the systems? Or do a few general-purpose, jack-of-all-trades types fit your project best?

There really is no right answer here; there is only a right answer per project. That answer will constantly change as the project itself evolves, and it takes a good manager to stay on top of it.

Speaking of building teams, check out AppDynamics openings here!


Budgeting

Going hand-in-hand with building out your team, you (unfortunately) always have to think about your budget as well. You need to make sure from the very beginning that you have enough budget allocated. There is nothing worse than getting halfway through a project and suddenly having to shelve it because there’s no money left.

Beware of feature creep! When scoping software, you’ll need to stay disciplined and keep true to the original MVP (minimum viable product). Once the minimum feature set has been designed and approved, stay true to that roadmap. Of course every feature can always be improved! Of course you’ll think of new ways to do things! But in order to get a product out the door, you need to learn when to draw the line and call a feature freeze. Otherwise, you’ll burn right through your budget, miss critical deadlines, and turn your project into a speeding train without brakes. Remember, done is better than perfect.

Don’t drastically over-budget, either. Having padding is a smart idea, but your company could probably have used that extra budget in better ways, and it leads you down a path of over-staffing and running into the same issues mentioned above.

Using Version Control

Let’s turn back to the application itself. At the very top of the list is the need to use version control. To many of us it seems like such an obvious thing, yet it’s amazing how many people still haven’t seen the need for it.

Whether you are a team of one or a team of 1,000, you really need to be using some form of version control. Beyond the obvious need to “find that code that was blown away two days ago,” the ability of a version control system to let multiple developers work together on the same project, committing code while the software ensures there are no conflicts, is simply invaluable.

There are plenty of VCS options out there, with Git probably the most popular at the moment. But there is a slew of others, such as CVS, Mercurial, Bazaar, SourceSafe, and more.

You need to find a version-control workflow that best matches how your team is (or will be) working together. Git, for example, is designed around the idea of an extremely distributed workflow, with lots of developers working independently of one another, throwing away lots of local code they only kept temporarily, and committing back just the final pieces they are ready to share.

Tracking Issues/Bugs

It’s a fact of life: your software is going to have bugs. When they are found, you are going to need a good solution that lets you store the details of the bug, assign it to a developer to fix, and then track the bug as it’s fixed and verified.

At the same time, most bug tracking software is also designed to let you track new product features and enhancements. If you are the sole developer on a project, software like this can be crucial for remembering all the moving parts and where you are trying to go. If you are running a large team, then it’s extremely important to make sure that issues are captured, shared with the team, and clearly assigned. There is nothing worse than finding out that the last two days of work you just put into a bug duplicated a fix another developer had already made.

There are lots of issue/ticket-tracking solutions out there, ranging from software packages such as Trac or JIRA to hosted solutions such as those provided by GitHub or Unfuddle. There are simply too many options to list. The best option for your team is the one they will actually use. Check out the options available and find what matches your development process best.

Avoiding Code Debt

Hopefully you are familiar with the concept of code debt: code in your system that is subpar and that you need to fix before you can add new features (or that you have to work around in order to do anything). Code debt can be created by bad architecture decisions (even if they were good decisions at the time), or simply by a programmer choosing to write subpar code in the first place, saying “I’ll come back and fix this later” just to get a feature out the door more quickly. An easy way to estimate your code debt is to search your entire codebase for the string “// TODO”. You’d be surprised how often programmers know their code will need to be refactored before they’ve even finished writing it.
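That census can be a one-liner. The sketch below builds a throwaway directory with two sample files so it is self-contained; in real life you would point grep at your own source tree:

```shell
# Rough code-debt census: count files still carrying TODO markers.
# Uses a throwaway directory with two sample files for illustration.
workdir=$(mktemp -d)
printf '<?php\n// TODO: refactor before v2\n' > "$workdir/legacy.php"
printf '<?php\n// nothing more needed here\n' > "$workdir/clean.php"

todo_files=$(grep -ril 'todo:' "$workdir" | wc -l)
echo "$todo_files file(s) flagged for refactoring"
```

Tracking that number release over release gives you a crude but honest trend line for your debt.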

The more code debt you build up, the longer it takes to complete any task, be it adding a new feature or fixing an old bug. Not only that, but there’s a good chance that any instance of code debt is also a potential source of performance issues in your software product.

One common way to reduce code debt (though there is no magic bullet) is the use of Agile development processes. In Agile development you aren’t locked into a huge architecture decision at the beginning; you build the software in small pieces, one at a time, modifying the product as you go. In a proper Agile environment, you shouldn’t ever push off bad code to “get a feature out quickly,” nor realize too late that a massive architecture mistake was made, because the whole time you are developing in a constant iterative process where the entire system is always getting slightly better.

Deployment Process & Rollbacks

Of course, you also need a solid system for deploying your code to your server(s). This is a highly “personal” thing. Solutions range from a small script that checks out code, zips it, and uploads it, all the way to full-blown and very sophisticated systems like Capistrano.

There are two important things to make sure your team has covered. First of all, you want deploying code to be as easy as possible. Optimally, you want to deploy as often as possible, with deployments as small as possible, so there is less code movement per release. A deployment shouldn’t involve having the entire team on call for an hour-long process; a single script should be able to make all the magic happen.

Secondly, the opposite of deployment is just as (if not more) important: you need to be able to roll back a deployment as well. Remember how we said bugs are inevitable? That applies not only to small bugs in existing code, but to show-stopping bugs upon deployment. Hopefully this won’t happen to you, but you need to be prepared for when it does. Make rolling back a deployment just as simple as pushing it live in the first place.

That one fact will save you countless hours, countless issues, and keep the hair on your head.
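One popular shape for this (the approach tools like Capistrano formalize) is per-release directories behind a `current` symlink, so deploys cut over atomically and a rollback is one line. The paths and release names below are illustrative:

```shell
# Sketch of symlink-based deploys: each release lands in its own
# directory and 'current' is an atomically repointed symlink, so a
# rollback is a one-line operation.
app=$(mktemp -d)
mkdir -p "$app/releases"

deploy() {
    release="$app/releases/$1"
    mkdir -p "$release"
    echo "code for $1" > "$release/index.php"  # stand-in for checkout/rsync/build
    ln -sfn "$release" "$app/current"          # atomic cut-over
}

rollback() {
    prev=$(ls "$app/releases" | sort | tail -n 2 | head -n 1)
    ln -sfn "$app/releases/$prev" "$app/current"
}

deploy 2014-12-01
deploy 2014-12-02
rollback   # the new release misbehaves: point 'current' back at the old one
```

Because the web server only ever sees the `current` path, a rollback never leaves the site serving a half-copied release.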

Wrapping it Up

Well, that’s a tour of some of the biggest process items for your team to think about. As we continue this series of posts, we’ll cover some of these things in more detail, and really start getting into the various performance issues that can happen in your code itself.

Start solving your PHP performance problems, start a FREE trial of AppDynamics today!

Focusing on Business Transactions is a Proven Best Practice

In today’s software-defined business era, uptime and availability are key to business survival. The application is the business. However, ensuring proper application performance in production environments remains a daunting task. Where do you start?

Enter Business Transactions.

By focusing on the end-user experience and measuring application performance based on users’ interactions, we can correctly gauge how the entire environment is performing. We follow each individual user request as it flows through the application architecture, comparing its response time to its optimal performance. This strategy allows AppDynamics to instantly identify performance bottlenecks and allows application owners to get to the root cause of issues that much faster.

By becoming business transaction-centric, application owners can ensure uptime and availability even within a challenging application environment. Business transactions give them the insight required to react to quickly changing conditions and respond accordingly.

So, what exactly is a Business Transaction?

Simply: any and every online user request.

Consider a business transaction to be a user-generated action within your system. The best practice for determining the performance of your application isn’t to measure CPU usage, but to track the flow of a transaction that your customer, the end user, has requested.

It can be requests such as:

  • logging in
  • adding an item to a cart
  • checking out
  • searching for an item
  • navigating different tabs

Shifting your focus to business transactions completely changes the game in terms of your ability to support application performance.
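To make the idea concrete, here is a rough sketch of how an agent might group raw requests into named business transactions. (The URL patterns and transaction names below are hypothetical illustrations, not AppDynamics’ actual discovery rules, which are derived automatically from application entry points.)

```python
import re

# Hypothetical mapping of URL patterns to business transaction names.
# A real APM agent auto-discovers these from servlet/controller entry points.
TRANSACTION_RULES = [
    (re.compile(r"^/login"), "Login"),
    (re.compile(r"^/cart/add"), "Add to Cart"),
    (re.compile(r"^/checkout"), "Checkout"),
    (re.compile(r"^/search"), "Search"),
]

def name_transaction(url_path: str) -> str:
    """Group an incoming request under a business transaction name."""
    for pattern, name in TRANSACTION_RULES:
        if pattern.match(url_path):
            return name
    return "Default"  # unmatched requests fall into a catch-all bucket

# Every request is counted under its transaction, no matter which user,
# node, or JVM served it.
print(name_transaction("/cart/add?item=42"))  # Add to Cart
```

The point of the grouping is that “Add to Cart” becomes the unit you count, measure, and score, rather than any individual server or process.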

    Business Transactions and APM

    Business Transactions equip application owners with three important advantages.

    Knowledge of User Experience

    If a business transaction is a “user-generated action,” then it’s pretty clear how monitoring business transactions can have a tremendous effect on your ability to understand the experience of your end user.

    If your end user adds a book to a shopping cart, is the transaction performing as expected or is it taking 3 seconds longer? (And what kind of impact will that have on end users? Will they decide to surf away and buy books somewhere else, thus depriving your business of not just the immediate purchase but the potential loss of lifetime customer revenue?)

    Monitoring business transactions gives you a powerful insight into the experience of your end user.

    Service Assurance – the ability to track baseline performance metrics

    AppDynamics hears from our clients all the time that it’s difficult to know what “normal” actually is. This is particularly true in an ever-changing application environment. If you try to determine normal performance by correlating code-level metrics – while at the same time reacting to frequent code drops – you will never get there.

    Business transactions offer a Service Assurance constant that you can use for ongoing monitoring. The size of your environment may change and the number of nodes may come and go, but by focusing on business transactions as your ongoing metric, you can begin to create baseline performance for your application. Understanding this baseline performance is exactly what you need in order to understand whether your application is running as expected and desired, or whether it’s completely gone off the rails.

For example, you may have a sense of how your application is supposed to perform. But do you really know how it performs every Sunday at 6 p.m.? Or the last week of December? And if you don’t, how will you know when the application is deviating from acceptable performance? Figuring out “normal” in terms of days, weeks, and even seasons is what you need to truly understand your application’s baseline performance.
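The idea of a seasonal baseline can be sketched in a few lines of code: bucket response-time samples by day of week and hour of day, then flag anything that deviates far from that bucket’s norm. (This is a deliberately simplified illustration, not AppDynamics’ actual baselining algorithm.)

```python
from collections import defaultdict
from statistics import mean, stdev

# Response-time samples bucketed by (weekday, hour), e.g. (6, 18) = Sunday 6 p.m.
samples = defaultdict(list)

def record(weekday: int, hour: int, response_ms: float) -> None:
    samples[(weekday, hour)].append(response_ms)

def is_anomalous(weekday: int, hour: int, response_ms: float,
                 n_sigma: float = 3.0) -> bool:
    """Flag a response time deviating beyond n_sigma from this bucket's baseline."""
    bucket = samples[(weekday, hour)]
    if len(bucket) < 2:
        return False  # not enough history to establish a baseline yet
    return response_ms > mean(bucket) + n_sigma * stdev(bucket)

# Build a baseline for Sunday 6 p.m. traffic, then test a slow outlier.
for ms in (100, 110, 95, 105, 102):
    record(6, 18, ms)
print(is_anomalous(6, 18, 400))  # True: well above the ~102 ms baseline
```

The same request can be perfectly normal at one time of week and anomalous at another, which is exactly why a single static threshold fails.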

    Triage & Diagnosis – always knowing where to look to solve problems

    Finally, when problems occur, business transactions prevent you from hunting through logs and swimming through lines of code. The transaction’s poor performance immediately shines a spotlight on the problem – and your ability to get to root cause quickly is dramatically improved.

    If you’re tracking code-level metrics in a large environment instead of monitoring business transactions, the chances are that the fire you’re troubleshooting is going to roar out of hand before you’re able to douse it.


    Application owners are under extraordinary pressure to incorporate frequent code changes while still being held responsible for 100% application uptime and performance. In a distributed and rapidly changing environment, meeting these high expectations becomes tremendously challenging.

    A strong focus on business transactions becomes absolutely essential for maintaining application performance. Transaction-centric monitoring provides the basis for a stable performance assurance metric, it delivers powerful insights into user experience, and it ensures the ability to know where to hunt during troubleshooting.

    The right APM solution can automate much of this work. It can help application owners identify and bucket their business transactions, as well as assist with triage, troubleshooting, and root cause diagnosis when transactions violate their performance baselines. In this way, business transactions are essential to ensuring the success of Developers, Operations, and Architects – anyone with a stake in application performance.

    The Incredible Extensible Machine Agent

    Our users tell us all the time: The AppDynamics platform is amazing right out of the box. But everybody has something special they want to do, whether it’s to add some functionality, set up a unique monitoring scenario, whatever. That’s what makes AppDynamics’ emphasis on open architecture so important and useful. The functionality of the AppDynamics machine agent can be customized and extended to perform specific tasks to meet specific user needs, either through existing extensions from the AppDynamics Exchange or through user customizations.

It helps to understand what the machine agent is and how it works. The machine agent is a stand-alone Java application that can run in conjunction with application agents or separately from them. This means monitoring can be extended to environments outside the realm of the application being monitored. It can be deployed to application servers, database servers, web servers — really anything running Linux, UNIX, Windows, or Mac OS.


The real elegance of the machine agent is its tremendous extensibility. For non-Windows environments, there are three ways to extend the machine agent: through a script, with Java, or by sending metrics to the agent’s HTTP listener. If you have a .NET environment, you can also collect additional hardware metrics over and above these three methods.
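As a taste of the script approach, here is a minimal sketch of what a script-style extension might emit. It assumes the agent’s documented convention of parsing `name=...,value=...` lines from the script’s standard output; check the documentation for your agent version, and note that the metric path and value here are made up for illustration.

```python
# A minimal sketch of a script-style machine agent extension: the script
# periodically prints metric lines to stdout, which the agent parses.
# The "name=...,value=..." line format follows the convention documented
# for AppDynamics script extensions; verify it against your agent version.

def metric_line(path: str, value: int) -> str:
    # Metric paths are pipe-delimited under the "Custom Metrics" tree.
    return "name=Custom Metrics|%s,value=%d" % (path, value)

if __name__ == "__main__":
    # A hypothetical metric, hard-coded for the sketch; a real script
    # would measure something (open sessions, queue depth, etc.).
    print(metric_line("MyApp|Open Sessions", 42))
```

Scheduled by the agent, a script like this lets any number you can compute on a box show up alongside the metrics the agent collects out of the box.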

Let’s look at a real-life example. Say I want to create an extension using cURL that would give the HTTP status of certain websites. My first step is to look for one in the AppDynamics Exchange, our library of all the extensions and integrations currently available. It’s also the place where you can request extensions you need or submit extensions you’ve built.

    Sure enough, there’s one already available (community.appdynamics.com/t5/AppDynamics-eXchange/idbp/extensions) called Site Monitor, written by Kunal Gupta. I decided to use it, and followed these steps to create my HTTP status collection functionality.

    1. Download the extension to the machine agent on a test machine.
    2. Edit the Site Monitor configuration file (site-config.xml) to ping the sites that I wanted (in this case www.appdynamics.com). The sites can also be HTTPS sites if needed.
    3. Restart the machine agent.

    That’s it. It started pulling in the status code right away and, as a bonus, also the response time for requesting the status code of the URL that I wanted.


    It’s great that I can now see the status code (200 in this case), but now I can truly use its power. I can quickly build dashboards displaying the information.


    There also is the ability to hook the status code into custom health rules, which provide alerts when performance becomes unacceptable.
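Conceptually, a health rule like this is just a threshold check over recent samples. Here is a rough sketch (the thresholds and rule logic are hypothetical illustrations, not AppDynamics’ health rule engine):

```python
def violates_health_rule(status_codes, response_times_ms,
                         max_response_ms=2000):
    """Return True if the site looks unhealthy: any non-200 status,
    or an average response time above the threshold."""
    if any(code != 200 for code in status_codes):
        return True
    avg = sum(response_times_ms) / len(response_times_ms)
    return avg > max_response_ms

# Healthy: all 200s and fast responses.
print(violates_health_rule([200, 200, 200], [120, 140, 110]))  # False
# Unhealthy: a 500 among recent checks trips the rule.
print(violates_health_rule([200, 500, 200], [120, 140, 110]))  # True
```

In the product, the rule’s violation would fire an alert or policy action rather than just returning a boolean.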


So there it is. In just a matter of minutes, the extension was up and running, giving me valuable data about the ongoing status of my application. If the extension I wanted didn’t exist, it would have been just as easy to use the cURL command directly (curl -sL -w "%{http_code}\n" www.appdynamics.com -o /dev/null).

    Either way, the machine agent can be extended to support your specific needs and solve specific challenges. Check out the AppDynamics Exchange to see what kinds of extensions are already available, and experiment with the machine agent to see how easily you can expand its capabilities.

    If you’d like to try AppDynamics check out our free trial and start monitoring your apps today!

    Transforming IT: Building a business-driven infrastructure for the software defined business

    Executives charged with building business-driven applications have an extremely challenging task ahead of them. However, the cavalry has arrived with useful tools and strategies built specifically to keep modern applications working efficiently.

We partnered with Gigaom Research to understand, and articulate, how these modern methodologies are improving the lives of IT professionals in today’s software-driven businesses. Until now, this knowledge has been so fragmented that it has been hard to find in one cohesive place. Several blogs and research reports touch on various aspects, but what we learned from our research was astounding.

    We carefully identified these challenges as the major hurdles facing IT today:

    • Customers are digital and connected
    • Business demand is growing
    • Apps are complex, distributed, and changing rapidly
• Traditional app performance management isn’t keeping up

Clearly these have become major issues affecting companies everywhere; more importantly, they affect end users and, in turn, companies’ bottom lines. Customers have grown accustomed to getting things instantly, and when apps aren’t performing adequately, they will quickly take their business elsewhere.

    Here are some key takeaways we noticed:

    • Customer experience is driving business performance
    • Proactively managing this experience requires new methods and tools
    • Modernize your infrastructure and approaches, but don’t forget the humans
    • Analytics is rapidly changing, fueled by the growth of big data

    This report highlights the value of proactively managing the customer experience with new methods and tools built for modern, complex applications in order to help drive business performance.

Interested in next-gen IT strategy and trends? Check out the report!

    The future of Ops, part 2

    In my first post, I discussed how software and various tools are dramatically changing the Ops department. This post centers on the automation process.

    When I was younger, you actually had to build a server from scratch, buy power and connectivity in a data center, and manually plug a machine into the network. After wearing the operations hat for a few years, I have learned many operations tasks are mundane, manual, and often have to be done at two in the morning once something has gone wrong. DevOps is predicated on the idea that all elements of technology infrastructure can be controlled through code and automated. With the rise of the cloud it can all be done in real-time via a web service.

Infrastructure automation plus virtualization solves the problem of having to be physically present in a data center to provision hardware and make network changes, and automating the mundane tasks reduces the staffing those tasks require. The benefit of using cloud services is that costs scale linearly with demand, and you can provision automatically as needed without having to pay for hardware up front.


    Platform Overload

    The various platforms you’re likely to encounter in this new world can be divided into 3 main groups:

• IaaS services like Amazon Web Services & Windows Azure — These allow you to quickly create servers and storage as needed. With IaaS you are responsible for provisioning compute + storage and own everything from the operating system up.
• PaaS services like Pivotal Web Services, Heroku, and EngineYard — These are platforms built on top of IaaS providers that allow you to deploy a specific stack with ease. With PaaS you are responsible for provisioning apps and own only your app + data.
• SaaS services — These are platforms, usually built on top of PaaS providers, designed to deliver a specific app (like a hosted ecommerce shop or blog).

All of these are clouds (IaaS, PaaS, SaaS); pick which problems you want to spend time solving. However, the most complex environments often can’t be managed by a third party.



    Monitoring Complex Environments

Modern monitoring is not focused on infrastructure and availability alone, but extends down through the application itself. The simple reality is that perceived user experience is the only metric that matters: either applications are working or they are not. The complexity of monitoring applications is compounded by their availability on many platforms, such as web, mobile, and desktop.


    By leveraging monitoring tools and strategic product integrations, the future of Ops can be focused on efficiency, optimization, and providing a seamless user experience. At AppDynamics we have a robust list of extensions aimed to leverage existing technology to help Ops (and Dev) departments in this modern era. You can check out our list of extensions at the AppDynamics Exchange.

    Ops people, don’t just take my word for it, bring your department into the modern age and try AppDynamics for FREE today!

    The future of Ops, part 1

    The disruption of industries through software

Marc Andreessen famously stated in 2011 that “software is eating the world.” The world now runs on software-defined businesses. These businesses realize that in order to be efficient and stay ahead of the competition they must innovate or die. Technology is no longer secondary to your business; it is the business.

Nowadays there is an app for nearly everything, and consumers expect most processes to be automated. Access to these apps is ubiquitous, from the web and mobile. Every disruptive billion-dollar company in the last decade has innovated through applications by fundamentally changing the market and the user experience. Companies like Netflix, Uber, Square, Tesla, Nest, Instacart, and many others have capitalized on this new user experience by catering to elevated expectations. The disruption stems from an improved user experience, enabled through technology.

    The evolution of application complexity

Gone are the days when applications were this simple:

[Screenshot: a simple application architecture]

The reality nowadays is that applications are extremely complex and distributed across several platforms. Most application architectures we come across utilize several languages such as Java, .NET, PHP, and Node.js. Operations becomes even more complex with virtualization and cloud environments, deployments to containers, and managing applications made up of many microservices.


    It is not DevOps, it’s the next generation of Ops

Most people and companies abuse the term DevOps to no end. It is a bit embarrassing, but buzzwords run rampant on the expo floor of a technology convention. The reality is quite simply that the operations tools engineers use to build and manage complex applications have evolved to match that complexity. I believe the operations complexity breaks down into a few main categories: infrastructure automation, configuration management, deployment automation, log management, performance management, and monitoring.

    The evolution of the Ops problem

The modern operations reality is that the cloud is the standard platform, operations are automated through code, testing and quality assurance are automated through code, deployments are automated through code, and monitoring and instrumentation are critical to success.


    The DevOps Report from Puppet Labs surveyed the DevOps community and found some interesting results, most notably: “companies with high-performing IT organizations are twice as likely to exceed their profitability, market share and productivity goals.”

    The report also found successful DevOps teams tended to share these characteristics:

    • use continuous delivery to ensure consistent and stable deployments
    • leverage version control not just for code, but infrastructure and configuration to track and manage all environments states
    • automate testing to have confidence about the quality of every release
    • invest in monitoring and logging to be proactive about problems
    • correlate IT performance with organizational performance

    Download the entire report from Puppet Labs

    The enterprise catch up game

Most enterprises are not able to adopt cutting-edge technology at a rapid pace, so they are in a constant state of migration and catching up. Furthermore, their challenges are exacerbated when dealing with hybrid environments consisting of on-premise legacy systems combined with new public and private cloud environments. Larger, less flexible legacy companies are just starting to invest in the latest generation of programming languages such as Scala, Node.js, and Go and NoSQL datastores like Cassandra and Redis.

Though enterprises may experience challenges adapting to the latest operations trends, there are several tools out there that will help ease the transition. A good APM solution helps foster DevOps best practices and increases collaboration between the traditionally separated Dev and Ops teams.

    Don’t believe me? Try AppDynamics for FREE today!