A non-technical guy’s take on Business Transaction Monitoring

I began my journey in the Application Performance Management (APM) space a little over two years ago. Transitioning from a security background, the biggest thing I was concerned about was picking up the technology quickly. Somewhere between JVMs, CLRs, JMX, IIS, and APIs I was a little overwhelmed.

The thing that caught my eye most in the early stages of learning about the APM space (outside of AppDynamics’ growth & the growing IT Ops market) was the term Business Transaction Monitoring.

Gartner defines one of its key criteria as “User Defined Transaction Profiling – The tracing of user-grouped events, which comprise a transaction as they occur within the application as they interact with components discovered in the second dimension; this is generated in response to a user’s request to the application.”

I know what you’re thinking… “That is what caught your eye about APM?!” Actually, yes! Despite Gartner’s Merriam-Webster-esque response, I was able to understand what the term is attempting to communicate: end-to-end visibility is critical. Or, in other words, pockets of visibility (silos) are bad. Unfortunately, many organizations have silos that keep them in the dark, preventing them from understanding and managing their customers’ digital experience. That’s often because, as companies grow, they become plagued by structure: individual toolsets purposed with proving innocence rather than achieving a resolution, and a lack of decentralization.

In talking with hundreds of customers over the past two years, it’s amazing how many mature application teams still struggle with end-to-end visibility. The market only makes the problem worse, since every solution provider out there claims to provide true end-to-end visibility. So, when you’re researching potential application monitoring and analytics solutions, make sure to ask the vendor what information they’re collecting and how they’re presenting that data to the end user. For example, if you look under the hood of a potential app monitoring solution and find that it takes a “machine-centric” approach, displaying JMX metrics and CPU usage in a fat-client UI, then you’re probably looking at the wrong solution. Compare that against a purpose-built APM solution that counts, measures, and scores every single user’s interaction with your mobile or web app and presents that data in the vehicle of a business transaction. It’ll be like finally getting your eyes checked and putting on prescription glasses!

Alright, let’s get to the core message of this post. A solid APM strategy must be based on business transactions. Why? The end user’s experience (the BT) is the only constant unit of measurement in today’s applications. Architectures, languages, cloud platforms, and frameworks all come and go, but the user experience stays the same. With such dynamic changes occurring in today’s app landscape, it’s also important that your APM solution can dynamically instrument those environments. If you have to tell your monitoring solution what to monitor every time there’s a new code release, then you’re wasting time. The reason so many companies have chosen AppDynamics is that AppDynamics has architected every feature of its platform around the concept of a BT. AppDynamics provides Unified Monitoring through business transactions: one can pivot and view data across all tiers of an app, including code, end-user browser experience, machine data, and even database queries, in just a few clicks.

I wish I had a helpful analogy for a Business Transaction, but I’m going to have to settle for encouraging you to view a demo of our solution. Or, download our free trial, install the app agent, generate load, and watch how we automatically categorize similar requests into BTs. If you have any questions, feel free to reach out to me or anyone else in our sales org.

The title of this blog was a non-technical guy’s take on BTs, so I’ll simplify my ranting with the following conclusion: your business is likely being disrupted by, and now defined by, software. If so, having a tool like AppDynamics is key to managing your app-first approach. Your APM solution (hopefully you’re already using AppDynamics) must have a proven approach to handling complex apps by focusing on unit groupings that provide end-to-end visibility, called “Business Transactions” or BTs. A proper approach to monitoring with BTs is critical because it marries the 10,000-foot view of the end user’s digital interaction with the business (what the business wants) and the 1-foot view of class/method visibility (what your app teams need). A successful BT monitoring strategy will enable your business to effectively monitor, manage, and scale your critical apps, and provide rich context for making intelligent, data-driven decisions.

If you can’t see it, you can’t manage it – ITOA use case #1

“There were 5 exabytes of information created between the dawn of civilization through 2003, but that much information is now created every 2 days, and the pace is increasing…” – Eric Schmidt, Former CEO, Google.

If IT leaders hadn’t already heard Schmidt’s famous quotation, today they are definitely facing the challenge he describes. Gone are the days when IT leaders were tasked with just keeping an organization running; now IT teams are charged with driving innovation. As businesses become defined by the software that runs them, IT leaders must not only collect and try to make sense of the increasing amount of information these systems generate, but leverage this data as a competitive advantage in the marketplace. This advantage may come in many forms, but generally speaking, the more IT leaders know about their environments and the ways end users interact with them, the better off they (and the business) will be.

Gleaning this type of insight from IT environments is what analysts refer to as IT Operations Analytics (ITOA). ITOA solutions collect the structured and unstructured data generated by IT environments, process that data, and display the information in an actionable way so operations teams can make better-informed decisions in real time. In this series, I’d like to discuss five common ITOA use cases we see across our customer base, starting with visualizing your environment. In the rest of the series I’ll examine each of the other use cases and describe how a solution like the Application Intelligence Platform can address each and, in turn, provide value for operations teams.

The five common ITOA use cases I’ll delve into are:

  • Visualize the environment
  • Rapid troubleshooting
  • Prioritize issues and opportunities
  • Analyze business impact
  • Create action plans

Visualizing the environment

The first use case refers to the ability of an ITOA system to model the infrastructure and/or application stack being monitored. These models vary in nature, but oftentimes they are topological representations of the environment. Being able to visualize the application environment and see its dependencies is an important foundation for the rest of the use cases on this list.

In the Summer ‘14 release announcement blog, we highlighted the enhancements we’ve made to our flow maps, which are visual representations of the application environment, including application servers, databases, web services, and more.

What’s great about the AppDynamics approach is that this flow map is discovered automatically out of the box, unlike legacy monitoring solutions that require significant manual configuration to get the same kind of view. We also automatically adjust this flow map on the fly when your application changes (re-architected app, code release, etc.). Because we know all the common entry and exit points of each node, we simply tag and trace the paths the different user requests take to paint a picture of the flow of requests and all the interactions between different components inside the application. Most customers see something like the flow map below within minutes of installing AppDynamics in their environment.
[Screenshot: an auto-discovered application flow map]
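To make the tag-and-trace idea concrete, here is a toy sketch in Python. This is purely illustrative and not AppDynamics internals; all names and structures are hypothetical: a correlation ID is attached at the entry point, travels with each outbound call, and the recorded hops reconstruct the request’s path through the tiers.

```python
import uuid
from collections import defaultdict

# correlation id -> ordered list of components the request touched
hops = defaultdict(list)

def entry(component):
    """Tag a request at its entry point with a fresh correlation ID."""
    cid = uuid.uuid4().hex
    hops[cid].append(component)
    return cid

def exit_call(cid, downstream):
    """Record an exit call; the ID travels with the outbound request."""
    hops[cid].append(downstream)
    return cid

# One user request flowing web tier -> service -> database.
cid = entry("web-tier")
exit_call(cid, "order-service")
exit_call(cid, "orders-db")
print(" -> ".join(hops[cid]))   # web-tier -> order-service -> orders-db
```

Stitching these recorded paths together across many requests is, in spirit, how a flow map of component interactions can be painted.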
A flow map like this is obviously very valuable, but what happens when the application environment is very large and complex? How does this kind of view scale for the kinds of enterprise applications many AppDynamics customers have deployed, with thousands of nodes and potentially hundreds of tiers? Luckily for our customers, the Application Intelligence Platform was built from the ground up to handle these kinds of environments with ease. There are two characteristics of our flow maps that enable operations teams to manage flow maps of large-scale application performance management deployments: self-aggregation and self-organizing layouts.

Self-aggregation refers to the powerful algorithms that make complex environments more manageable by condensing and expanding the visualization, enabling intelligent zooming in and out of the application’s topology. This allows us to automatically deliver the right level of application health indicators to match the zoom level.

For example, this is what a complex application could look like when zoomed all the way out:
[Screenshot: a complex application flow map, fully zoomed out]
As one zooms in, relevant metrics information becomes visible:
[Screenshot: the flow map at an intermediate zoom level, with metrics visible]
Until you are zoomed all the way in on a particular tier and can see all of the associated metrics you’d care about:
[Screenshot: a single tier fully zoomed in, with all associated metrics]
The ability to iterate back and forth between a macro-level view of the application and a close-up of a particular part of the environment gives operations teams the visibility they need to understand exactly how an application functions and how the different components interact with each other.
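The zoom-level aggregation idea can be sketched in a few lines. This is an illustrative toy in Python, not AppDynamics’ actual algorithm; the node data and the choice of averaging as the roll-up are assumptions for the example:

```python
from statistics import fmean

# Node-level data as it might exist fully zoomed in (hypothetical values).
nodes = [
    {"tier": "web",       "name": "web-1", "avg_response_ms": 120},
    {"tier": "web",       "name": "web-2", "avg_response_ms": 180},
    {"tier": "inventory", "name": "inv-1", "avg_response_ms": 640},
]

def zoomed_out(nodes):
    """Collapse individual nodes into one entry per tier,
    reporting a single aggregated health metric per tier."""
    tiers = {}
    for n in nodes:
        tiers.setdefault(n["tier"], []).append(n["avg_response_ms"])
    return {tier: round(fmean(ms)) for tier, ms in tiers.items()}

print(zoomed_out(nodes))   # {'web': 150, 'inventory': 640}
```

Zooming in is then just a matter of replacing a tier’s aggregate with its underlying node-level detail.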

Self-organizing layouts refers to our ability to automatically format service and tier dependencies, using auto-grouping heuristics to dynamically determine tier and node weightings. By leveraging static data (like application tier profiles) and dynamic KPIs (like transaction response times), we organize the business-critical tiers in a way that brings the most important parts of the application to the forefront, depending on the type of layout you prefer.

One can automatically group the flow map into a circular view:
[Screenshot: the flow map auto-grouped into a circular layout]
You can let AppDynamics suggest a layout:
[Screenshot: an AppDynamics-suggested layout]
You can create a custom layout just by dragging and dropping individual components:
[Screenshot: a custom layout built by dragging and dropping components]
And you can auto-fit your layout to the screen for efficient zooming in / out:
[Screenshot: a layout auto-fit to the screen]
You’ve seen how AppDynamics can visualize individual applications, but what if, like many of our large enterprise customers, you have many complex applications with dependencies on one or more other applications? How do you obtain a data-center view to understand, at a high level, what application health looks like across all of them?

With the cross-app business flow feature, customers can do just that. AppDynamics even supports role-based access control (RBAC), so administrators can limit user access to a particular application. We allow customers to group, define, and limit access to applications however it makes the most sense for their individual environments and for their business.

[Screenshot: the cross-app business flow view]

As you can see, AppDynamics provides a great way for IT operations teams to discover and visualize their application environment. We automatically map the application out of the box, we provide flexible layout options so customers can customize the view to their liking, and we offer a way for Ops teams to understand how different applications interact with each other.

In the next post in this series, we’ll discuss how the Application Intelligence platform can address the second common ITOA use case, rapid troubleshooting. In the meantime, I encourage you to sign up for free and try AppDynamics for yourself.

The Critical Use Cases for Application Intelligence Among Higher Ed IT Departments

Universities are extremely dependent on the performance of third-party applications in their environments, such as Blackboard and Kuali Financial Systems. These applications are vital tools for ensuring a seamless user experience and for modernizing course registration, records management, and even financial systems. For schools that had been using homegrown applications or archaic card systems, these applications have made life immensely easier. However, there are real risks in using third-party applications, risks that can be easily mitigated with Application Intelligence.

Third-Party Application Monitoring

We’ve all been there: something goes wrong, and you’re on hold with your vendor’s support team trying to resolve the issue. The only problem in a university scenario? It’s midnight, and 20,000 students are attempting to sign up for that semester’s classes. Higher education environments are unique in their inconsistent loads and zero-tolerance policy for downtime.

In the above scenario, you’re reliant on either the vendor or, if it’s an open source solution, your own wits to get the application back up and running before your students start complaining.

The right Application Intelligence tool can utilize Application Performance Monitoring (APM) to help universities troubleshoot performance problems in their third-party applications on the fly. (For further reading, check out how Cornell University, Washington University in St. Louis, and Missouri State use AppDynamics to solve their performance issues.)

The IT staff at these universities are able to provide support teams with fine-grained detail about performance problems in production. When selecting an APM tool, however, these universities discovered that not all solutions are created equal. A good APM solution must be extremely intuitive, far-ranging in its capabilities, and able to speak the language of business – not the language of developers.

Use Business Transactions to Obtain End-to-end Visibility

Traditional monolithic applications often consisted of little more than a few JVMs talking to one another. In the present day, a typical IT environment consists of a multitude of open source and proprietary components and distributed tiers, all attempting to communicate together in order to perform complex business transactions.

Too often, APM tools capture just a piece of this massive web of communication but are unable to reveal the entire architecture, leaving blind spots in place. The right tool should not just provide visibility into what already exists, but should automatically discover, trace, correlate, and visualize transaction performance for new releases as well.

This is made possible with business transactions — check out my blog on the importance of using business transactions as a best practice here. Business transactions allow the tool to create a common language between developers and IT operations by representing the end user request, rather than a snippet of code.

A business transaction represents a “user generated” action. For example, a student might register for a class in Blackboard, or a member of your financial services department might approve a purchase order in Kuali Financial System. The APM tool needs to be able to make these actions highly visible to the IT operations team. This is an essential part of the simplicity and usability of an APM tool: the ability to talk in the language of business.

Dynamic Baselines Help With Alerting

Once the APM tool gives you a clear view into the application’s business transactions, it’s possible to measure the performance of those transactions. In legacy APM tools, you have to set manual thresholds for each transaction. For example, you might say that the class registration transaction should take two seconds on average, or the student log-in half a second.

The problem with this approach is you may not have the data to set those thresholds yourself. You could make an educated guess, but the burden is on you to tell the APM tool how well your application performs. If the transaction responds to load differently on Sunday versus Monday, or at 9 a.m. versus 8 p.m., or in late November versus the middle of July, it’s up to you to specify the appropriate threshold for each time period. If your performance policies aren’t granular enough to reflect the true performance profile that occurs during the hours in which you operate, as well as to account for periodic variations, you’ll either lack visibility or be flooded with false alarms.

The importance of dynamic baselines is compounded for university IT teams, since inconsistent load (registration periods, exams, and so on) will wreak havoc on static baselines.

An APM tool that leverages best practices should be able to set those thresholds for you. This means being able to set baselines for your application by discovering how each transaction’s performance may vary over specified operating periods. It observes periodic variations, accounts for them, and sets thresholds accordingly. A tool that sets dynamic baselines for each business transaction will be highly accurate and eliminate false alarms.
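To make the idea concrete, here is a toy sketch in Python of hour-of-week baselining. The class, the three-standard-deviations threshold, and the sample data are all illustrative assumptions, not AppDynamics’ actual algorithm: each (day, hour) period keeps its own history, and a new sample is flagged only if it deviates from that period’s own normal.

```python
from collections import defaultdict
import statistics

class DynamicBaseline:
    """Per-period baseline: a sample is anomalous only relative to the
    history of its own (day, hour) slot, not a single global threshold."""

    def __init__(self, deviations=3.0):
        self.samples = defaultdict(list)   # (day, hour) -> response times (ms)
        self.deviations = deviations

    def record(self, day, hour, response_ms):
        self.samples[(day, hour)].append(response_ms)

    def is_anomalous(self, day, hour, response_ms):
        history = self.samples[(day, hour)]
        if len(history) < 2:
            return False                   # not enough data to judge yet
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        return response_ms > mean + self.deviations * stdev

baseline = DynamicBaseline()
# Hypothetical history: Sunday 6 p.m. is normally slow (registration rush),
# Monday 9 a.m. is normally fast.
for ms in (1900, 2100, 2000, 1950):
    baseline.record("Sun", 18, ms)
for ms in (480, 510, 500, 495):
    baseline.record("Mon", 9, ms)

# 2.1 s on Sunday evening is normal; the same latency Monday morning is not.
print(baseline.is_anomalous("Sun", 18, 2100))  # False
print(baseline.is_anomalous("Mon", 9, 2100))   # True
```

A single static two-second threshold would either miss the Monday regression or drown Sunday evenings in false alarms; the per-period baseline catches one without flagging the other.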

In Summary

Managing higher education applications is hard, but there are things you can do to make it easier on yourself and your university’s IT department. Use the tools available to you to help your students have a smooth user experience and avoid application downtime.

If you’re interested in a deeper discussion on how AppDynamics helps universities with their application issues, check out our white paper: 4 Critical Strategies for Managing Higher Ed Apps.

Also, don’t just take our word for it, test drive a FREE trial of AppDynamics today!

Focusing on Business Transactions is a Proven Best Practice

In today’s software-defined business era, uptime and availability are key to business survival. The application is the business. However, ensuring proper application performance remains a daunting task in production environments. Where do you start?

Enter Business Transactions.

By focusing on the end-user experience and measuring application performance based on user interactions, we can correctly gauge how the entire environment is performing. We follow each individual user request as it flows through the application architecture, comparing its response time to its optimal performance. This inside-out strategy allows AppDynamics to instantly identify performance bottlenecks and allows application owners to get to the root cause of issues that much faster.

By becoming business transaction-centric, application owners can ensure uptime and availability even within a challenging application environment. Business transactions give them the insight required to react to quickly changing conditions and respond accordingly.

So, what exactly is a Business Transaction?

Simply: any and every online user request.

Consider a business transaction to be a user-generated action within your system. The best practice for determining the performance of your application isn’t to measure CPU usage, but to track the flow of a transaction that your customer, the end user, has requested.

It can be requests such as:

  • logging in
  • adding an item to a cart
  • checking out
  • searching for an item
  • navigating different tabs

    Shifting your focus to business transactions completely changes the game in terms of your ability to support application performance.

    Business Transactions and APM

    Business Transactions equip application owners with three important advantages.

    Knowledge of User Experience

    If a business transaction is a “user-generated action,” then it’s pretty clear how monitoring business transactions can have a tremendous effect on your ability to understand the experience of your end user.

    If your end user adds a book to a shopping cart, is the transaction performing as expected or is it taking 3 seconds longer? (And what kind of impact will that have on end users? Will they decide to surf away and buy books somewhere else, thus depriving your business of not just the immediate purchase but the potential loss of lifetime customer revenue?)

    Monitoring business transactions gives you a powerful insight into the experience of your end user.

    Service Assurance – the ability to track baseline performance metrics

    AppDynamics hears from our clients all the time that it’s difficult to know what “normal” actually is. This is particularly true in an ever-changing application environment. If you try to determine normal performance by correlating code-level metrics – while at the same time reacting to frequent code drops – you will never get there.

    Business transactions offer a Service Assurance constant that you can use for ongoing monitoring. The size of your environment may change and the number of nodes may come and go, but by focusing on business transactions as your ongoing metric, you can begin to create baseline performance for your application. Understanding this baseline performance is exactly what you need in order to understand whether your application is running as expected and desired, or whether it’s completely gone off the rails.

    For example, you may have a sense of how your application is supposed to perform. But do you really know how it performs every Sunday at 6 p.m.? Or the last week of December? And if you don’t, how will you know when the application is deviating from acceptable performance? Figuring out “normal” in terms of days, weeks, and even seasons is what you need in order to truly understand your application’s baseline performance.

    Triage & Diagnosis – always knowing where to look to solve problems

    Finally, when problems occur, business transactions prevent you from hunting through logs and swimming through lines of code. The transaction’s poor performance immediately shines a spotlight on the problem – and your ability to get to root cause quickly is dramatically improved.

    If you’re tracking code-level metrics in a large environment instead of monitoring business transactions, the chances are that the fire you’re troubleshooting is going to roar out of hand before you’re able to douse it.

    Summary

    Application owners are under extraordinary pressure to incorporate frequent code changes while still being held responsible for 100% application uptime and performance. In a distributed and rapidly changing environment, meeting these high expectations becomes tremendously challenging.

    A strong focus on business transactions becomes absolutely essential for maintaining application performance. Transaction-centric monitoring provides the basis for a stable performance assurance metric, it delivers powerful insights into user experience, and it ensures the ability to know where to hunt during troubleshooting.

    The right APM solution can automate much of this work. It can help application owners identify and bucket their business transactions, as well as assist with triage, troubleshooting, and root cause diagnosis when transactions violate their performance baselines. In this way, business transactions are essential to ensuring the success of Developers, Operations, and Architects – anyone with a stake in application performance.

    The Incredible Extensible Machine Agent

    Our users tell us all the time: the AppDynamics platform is amazing right out of the box. But everybody has something special they want to do, whether it’s adding some functionality or setting up a unique monitoring scenario. That’s what makes AppDynamics’ emphasis on open architecture so important and useful. The functionality of the AppDynamics machine agent can be customized and extended to perform specific tasks that meet specific user needs, either through existing extensions from the AppDynamics Exchange or through user customizations.

    It helps to understand what the machine agent is and how it works. The machine agent is a stand-alone Java application that can run in conjunction with application agents or separately from them. This means monitoring can be extended to environments outside the realm of the application being monitored. It can be deployed to application servers, database servers, web servers: really, anything running Linux, UNIX, Windows, or Mac OS.

    [Screenshot: machine agent deployment overview]

    The real elegance of the machine agent is its tremendous extensibility. For non-Windows environments, there are three ways to extend the machine agent: through a script, with Java, or by sending metrics to the agent’s HTTP listener. If you have a .NET environment, you also have the ability to add additional hardware metrics, over and above these three ways.
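As an illustration of the script route, a script extension is essentially a program that prints metric lines for the agent to pick up on each run. The sketch below is hypothetical: the metric path is invented for this example, the `name=...,value=...` line format follows the machine agent’s script-output convention as I understand it, and the script would still need to be registered in the agent’s monitor configuration.

```shell
#!/bin/sh
# Hypothetical script-based machine agent extension: report free disk
# space on the root filesystem, in megabytes.

# -P forces single-line POSIX output; -m reports sizes in MB.
FREE_MB=$(df -Pm / | awk 'NR==2 {print $4}')

# One metric per line; the metric path under "Custom Metrics|" is
# illustrative and would appear in the metric browser under that tree.
echo "name=Custom Metrics|Disk|Root Free (MB),value=${FREE_MB}"
```

The Java and HTTP-listener routes follow the same spirit: get a number from somewhere, hand it to the agent under a metric path of your choosing.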

    Let’s look at a real-life example. Say I want to create an extension using cURL that gives the HTTP status of certain websites. My first step is to look for one in the AppDynamics Exchange, our library of all the extensions and integrations currently available. It’s also the place to request extensions you need or to submit extensions you have built.

    Sure enough, there’s one already available (community.appdynamics.com/t5/AppDynamics-eXchange/idbp/extensions) called Site Monitor, written by Kunal Gupta. I decided to use it and followed these steps to create my HTTP status collection functionality.

    1. Download the extension to the machine agent on a test machine.
    2. Edit the Site Monitor configuration file (site-config.xml) to ping the sites that I wanted (in this case www.appdynamics.com). The sites can also be HTTPS sites if needed.
    3. Restart the machine agent.

    That’s it. It started pulling in the status code right away and, as a bonus, also the response time for requesting the status code of the URL that I wanted.

    [Screenshot: the HTTP status code metric in the metric browser]

    It’s great that I can now see the status code (200 in this case), but now I can truly use its power. I can quickly build dashboards displaying the information.

    [Screenshot: a custom dashboard displaying the status code]

    There is also the ability to hook the status code into custom health rules, which provide alerts when performance becomes unacceptable.

    [Screenshots: configuring a health rule on the status code, and the resulting alert]

    So there it is. In just a matter of minutes, the extension was up and running, giving me valuable data about the ongoing status of my application. If the extension I wanted didn’t exist, it would have been just as easy to use the cURL command directly (curl -sL -w "%{http_code}\n" www.appdynamics.com -o /dev/null).

    Either way, the machine agent can be extended to support your specific needs and solve specific challenges. Check out the AppDynamics Exchange to see what kinds of extensions are already available, and experiment with the machine agent to see how easily you can expand its capabilities.

    If you’d like to try AppDynamics check out our free trial and start monitoring your apps today!

    Transforming IT: Building a business-driven infrastructure for the software-defined business

    Executives charged with building business-driven applications have an extremely challenging task ahead of them. However, the cavalry has arrived with useful tools and strategies built specifically to keep modern applications working efficiently.

    We partnered with Gigaom Research to understand, and articulate, how these modern methodologies are improving the lives of IT professionals in today’s software-driven businesses. Typically, this knowledge has been so fragmented that it’s been hard to find in one cohesive place. Several blogs and research reports touch on various aspects, but what we learned from our research was astounding.

    We carefully identified these challenges as the major hurdles facing IT today:

    • Customers are digital and connected
    • Business demand is growing
    • Apps are complex, distributed, and changing rapidly
    • Traditional app performance management can’t keep up

    Clearly, these have become major issues affecting companies everywhere. More importantly, they are affecting end users and, in turn, companies’ bottom lines. Customers have grown accustomed to getting things instantly, and when apps aren’t performing adequately, they will quickly take their business elsewhere.

    Here are some key takeaways we noticed:

    • Customer experience is driving business performance
    • Proactively managing this experience requires new methods and tools
    • Modernize your infrastructure and approaches, but don’t forget the humans
    • Analytics is rapidly changing, fueled by the growth of big data

    This report highlights the value of proactively managing the customer experience with new methods and tools built for modern, complex applications in order to help drive business performance.

    Interested in next-gen IT strategy and trends? Check out the report!

    The future of Ops, part 2

    In my first post, I discussed how software and various tools are dramatically changing the Ops department. This post centers on the automation process.

    When I was younger, you actually had to build a server from scratch, buy power and connectivity in a data center, and manually plug the machine into the network. After wearing the operations hat for a few years, I have learned that many operations tasks are mundane, manual, and often have to be done at two in the morning after something has gone wrong. DevOps is predicated on the idea that all elements of technology infrastructure can be controlled through code and automated. With the rise of the cloud, it can all be done in real time via a web service.

    Infrastructure automation plus virtualization solves the problem of having to be physically present in a data center to provision hardware and make network changes. Also, by automating the mundane tasks, you can remove unnecessary personnel. The benefit of using cloud services is that costs scale linearly with demand, and you can provision automatically as needed without having to pay for hardware up front.

     

    Platform Overload

    The various platforms you’re likely to encounter in this new world can be divided into three main groups:

    • IaaS services like Amazon Web Services and Windows Azure. These allow you to quickly create servers and storage as needed. With IaaS you are responsible for provisioning compute and storage, and you own everything from the operating system up.
    • PaaS services like Pivotal Web Services, Heroku, and EngineYard. These are platforms built on top of IaaS providers that allow you to deploy a specific stack with ease. With PaaS you are responsible for provisioning apps and own only your app and data.
    • SaaS services. These are platforms, usually built on top of PaaS providers, designed to deliver a specific app (like a hosted ecommerce shop or blog).

    All of these are clouds. With IaaS, PaaS, or SaaS, you pick which problems you want to spend time solving. However, the most complex environments often can’t be managed by a third party.

    [Screenshot: IaaS vs. PaaS vs. SaaS responsibilities]

     

    Monitoring Complex Environments

    Modern monitoring is not focused on infrastructure and availability; rather, it takes the perspective down through the application. The simple reality is that perceived user experience is the only metric that matters: either applications are working or they are not. The complexity of monitoring applications is compounded by their availability on many platforms, such as web, mobile, and desktop devices.

    [Screenshot: monitoring across web, mobile, and desktop platforms]

    By leveraging monitoring tools and strategic product integrations, the future of Ops can be focused on efficiency, optimization, and providing a seamless user experience. At AppDynamics we have a robust list of extensions aimed at leveraging existing technology to help Ops (and Dev) departments in this modern era. You can check out our list of extensions at the AppDynamics Exchange.

    Ops people, don't just take my word for it: bring your department into the modern age and try AppDynamics for FREE today!

    Monitoring IBM Maximo Performance

    If you’re like most of our customers running IBM Maximo, you’ve probably run into a performance issue or two recently — maybe it was something obvious, maybe it took a long time to debug, maybe you bought some more consulting services to fix it or maybe you are the consultant who’s trying to fix it. Regardless of the circumstance, I’ve found that most IBM Maximo implementations, regardless of industry and implementation, have a few things in common:

    • They’re Big Implementations span thousands of users, managing large portfolios of assets
    • They’re Mission Critical Business can’t function without them. Service calls can’t be scheduled, work can’t be done and it’s hard to know what’s going on.
    • They’re Complex Typically deployed across farms of servers, integrating with backend ERP, database, inventory, scheduling and a variety of homegrown applications. Usually deployed and managed by a team of folks, patches and updates can go through a long validation cycle before ever seeing production
    • They’re Opaque From an operations perspective, you may understand the general data flow through your system, but it’s difficult to know when something is down or slow and what the root cause is. It’s easy to get into a finger pointing scenario because no one’s sure what’s really going on.

    Sounds like a perfect candidate for Application Performance Monitoring.

    The nice thing about Maximo is that it's written on an IBM WebSphere Java application framework, which makes it a good candidate for instrumentation and performance monitoring. One small modification to the WebSphere JVM arguments to include the AppDynamics application agent and you're off to the races.
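    For reference, enabling the agent amounts to adding a `-javaagent` flag and a handful of AppDynamics system properties to the server's generic JVM arguments. The install path, controller host, and application/tier/node names below are placeholders for illustration, not a definitive configuration:

```shell
# Appended to the WebSphere server's Generic JVM arguments
# (Process definition > Java Virtual Machine in the admin console).
# All paths and names below are example values.
-javaagent:/opt/appdynamics/appagent/javaagent.jar
-Dappdynamics.controller.hostName=controller.example.com
-Dappdynamics.controller.port=8090
-Dappdynamics.agent.applicationName=Maximo
-Dappdynamics.agent.tierName=Maximo-UI
-Dappdynamics.agent.nodeName=maximo-node-1
```

    Restart the JVM after saving the arguments so the agent is picked up.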


    Once you have the agent installed on your IBM Maximo WebSphere server, you should immediately start seeing performance data and a flow map for any requests made to IBM Maximo.


    Business Transaction Configuration

    IBM Maximo operates on a relatively straightforward URI scheme, which makes Business Transaction detection easy. Since traffic is initiating from the IBM Maximo tier, all BT detection happens there. Most of the action happens within a few main URLs:

    /maximo/ui/

    /maximo/ui/login

    /maximo/report

    /maximo/ui/XXXX

    From there, the request is broken down further based on the URL parameter “value”. OK, easy enough: modifying the default BT naming to include three segments and creating a custom match rule to split transactions on “value” gives us exactly what we're looking for.
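    To make the naming scheme concrete, here is a small Python sketch — not AppDynamics code, just an illustration of the rule — that builds a transaction name from the first three URI segments and then splits on the “value” parameter. The sample URL (and its maximo.jsp segment) is an assumption for illustration:

```python
# Illustration of the naming rule described above: take the first three
# URI segments, then append the "value" query parameter when present.
from urllib.parse import urlsplit, parse_qs

def business_transaction_name(url: str) -> str:
    """Derive a business transaction name from a Maximo-style URL."""
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s][:3]
    name = "/" + "/".join(segments)
    value = parse_qs(parts.query).get("value")
    if value:
        name += "." + value[0]  # split the BT on the "value" parameter
    return name

print(business_transaction_name(
    "http://host/maximo/ui/maximo.jsp?event=loadapp&value=wotrack"))
```

    Requests to the same URL with different “value” parameters end up as distinct business transactions, which is exactly what the custom match rule accomplishes.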


    Backend Applications

    Backend applications for Maximo come in a variety of different flavors. From a performance monitoring perspective, they can be categorized into:

    1. Off the shelf (e.g., BO, SAP)
    2. Custom built
    3. Third party (web-based)

    In addition, these apps can be built in either native or managed languages. AppDynamics can view performance for each of these, but the approach varies depending on the architecture of the application and how much instrumentation you want for that application.

    For most IBM Maximo implementations, the deployment typically includes a mixture of fully instrumented applications and applications instrumented as exit points.

    Typically your custom built application backends would be the primary candidates for full instrumentation. I won’t go into full detail here on how to set these up, but with basic configuration, you should be able to see full data flow throughout your IBM Maximo deployment.

    So… Now What?

    With this newfound visibility into IBM Maximo, you can start realizing the core benefits of APM: alerting, root-cause analysis, and, best of all, performance metrics! If you'd like to try this out for yourself, you can get an environment here.

    The future of Ops, part 1

    The disruption of industries through software

    Marc Andreessen famously stated in 2011 that “software is eating the world”. The world now runs on software-defined businesses, which realize that to stay efficient and ahead of the competition they must innovate or die. Technology is no longer secondary to the business; it is the business.

    Nowadays there is an app for nearly everything, and consumers expect most processes to be automated. Access to these apps is ubiquitous, from web and mobile. Every disruptive billion-dollar company in the last decade has innovated through applications, fundamentally changing the market and the user experience. Companies like Netflix, Uber, Square, Tesla, Nest, Instacart, and many others have capitalized on this new user experience, catering to users' elevated expectations. The disruption stems from an improved user experience, enabled through technology.

    The evolution of application complexity

    Gone are the days when application architectures were simple.

    The reality nowadays is that applications are extremely complex and distributed across several platforms. Most application architectures we come across use several languages, such as Java, .NET, PHP, and Node.js. Operations becomes even more complex with virtualization and cloud environments, deploying to containers, and managing applications made up of many microservices.


    It is not DevOps, it’s the next generation of Ops

    Most people and companies abuse the term DevOps to no end. It's a bit embarrassing, but buzzwords run rampant on the expo floor of any technology convention. The reality is quite simply that the operations tools engineers use to build and manage complex applications have evolved to match that complexity. I believe operations complexity breaks down into a few main categories: infrastructure automation, configuration management, deployment automation, log management, performance management, and monitoring.

    The evolution of the Ops problem

    The modern operations reality is that the cloud is the standard platform, operations are automated through code, testing and quality assurance are automated through code, deployments are automated through code, and monitoring and instrumentation are critical to success.


    The DevOps Report from Puppet Labs surveyed the DevOps community and found some interesting results, most notably: “companies with high-performing IT organizations are twice as likely to exceed their profitability, market share and productivity goals.”

    The report also found successful DevOps teams tended to share these characteristics:

    • use continuous delivery to ensure consistent and stable deployments
    • leverage version control not just for code, but for infrastructure and configuration, to track and manage the state of all environments
    • automate testing to have confidence about the quality of every release
    • invest in monitoring and logging to be proactive about problems
    • correlate IT performance with organizational performance

    Download the entire report from Puppet Labs

    The enterprise catch up game

    Most enterprises are not able to adopt cutting-edge technology at a rapid pace, so they are in a constant state of migration and catching up. Their challenges are exacerbated when dealing with hybrid environments consisting of on-premise legacy systems combined with new public and private cloud environments. Larger, less flexible legacy companies are just starting to invest in the latest generation of programming languages, such as Scala, Node.js, and Go, and NoSQL datastores like Cassandra and Redis.

    Though enterprises may experience challenges adapting to the latest operations trends, there are several tools that will help ease the transition. A good APM solution fosters DevOps best practices and increases collaboration between the traditionally separated Dev and Ops teams.

    Don’t believe me? Try AppDynamics for FREE today!

    How to Triage a Busy Thread Count Alert in 14 Minutes

    This is a real example of troubleshooting a production application issue provided by an AppDynamics customer. What you are about to see is a combination of run time analytics, adaptive data collection, intelligent alerting, and a proven problem solving workflow. From first alert to DBA handoff took only 14 minutes.

    5:26 p.m. – Operations receives an email alert about Busy Threads breaching a threshold. The incident was automatically detected and alerted upon by AppDynamics when the Busy Threads JMX metric shot up to 182.
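    The rule that fired here can be sketched as a simple static-threshold check on the JMX metric; the threshold of 150 below is an assumed example, not the customer's actual health-rule setting:

```python
# Illustrative static-threshold health rule: fire when the busy-thread
# JMX metric breaches a configured ceiling. The threshold is an assumption;
# AppDynamics evaluates rules like this and sends the notification for you.
def busy_thread_alert(busy_threads, threshold=150):
    """Return an alert message when the metric breaches, else None."""
    if busy_threads > threshold:
        return "ALERT: busy threads at %d (threshold %d)" % (busy_threads, threshold)
    return None

print(busy_thread_alert(182))
```

    With 182 busy threads against a ceiling of 150, the rule fires and the on-call email goes out automatically.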

    AppDynamics sends notifications detailing busy thread counts

    5:34 p.m. – Details from AppDynamics show that call volume is down, response time is up, errors are up and network I/O is down. Initial suspicion is that the load balancer may be throttling traffic due to poor performance.


    5:38 p.m. – Following company procedure, the server is disabled in the load balancer so that it will not receive any more traffic. Recycling the application server is considered as a possible temporary resolution.

    5:40 p.m. – Details from AppDynamics show that transactions are backing up because of a database issue, so there is no need to recycle the application server. The issue is handed off to the DBA team with full application context for resolution.

     

    Screenshot showing problematic JDBC call as the culprit.

    Later that day: DBA team fixes the issue and application response time returns to normal. All nodes are restored into the load balancer rotation.

    This is the kind of scenario IT Operations teams deal with regularly. Without AppDynamics in place to provide fault-domain isolation, this type of problem usually ends up in a long conference call where every support person for the application must participate until service is restored. There's no need to waste significant company resources anymore. Stop the “all hands on deck” madness and see how AppDynamics can help your company today.