DevOps Introduction Course

DevOps and AppDynamics

Need a crash course on what DevOps is, how to foster a collaborative environment, and how to measure the effectiveness?

We created a DevOps course that presents you with a series of interactive questions to test your knowledge. Having just viewed the course myself, I have to say it’s a great introduction to DevOps and what it means for the enterprise. Plus, it covers a number of key use cases showing how our Application Intelligence Platform can help an organization transform into a DevOps operating model.

Here’s a quick snippet of the course:

Introduction to DevOps – Intro from AppDynamics on Vimeo.

The following learning objectives are covered:

  • How to define DevOps
  • Articulate the trends in enterprise IT that increasingly require DevOps
  • Implement the tools and practices that enable the DevOps workflow to proactively solve application performance issues

DevOps Use Cases

One of the highlights of this course, for me, is the last section on implementing the tools and practices. It explains a number of good DevOps use cases structured around the software lifecycle and shows how our solution can be applied:

For developer teams, the video shows how our ability to monitor Business Transactions is extremely useful when running scenario load or stress tests. The example in the video shows how Business Transaction monitoring can be configured to detect test transactions so that new features are easily identifiable in the AppDynamics console during such a test.
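The course doesn’t show the configuration itself, but the underlying pattern is easy to sketch. Below is a minimal, hypothetical illustration (the header name and data shapes are invented, not AppDynamics configuration) of how a load-test harness might tag synthetic transactions so a monitoring console can separate them from real user traffic:

```python
# Hypothetical sketch: tag synthetic load-test traffic so monitoring can
# distinguish it from real user transactions. The marker header and the
# filtering logic are illustrative only.

TEST_MARKER = "X-Load-Test"  # assumed custom header set by the test harness

def tag_test_request(headers: dict, scenario: str) -> dict:
    """Return a copy of the request headers with a load-test marker added."""
    tagged = dict(headers)
    tagged[TEST_MARKER] = scenario
    return tagged

def partition_transactions(transactions: list) -> tuple:
    """Split recorded transactions into (real, synthetic) buckets."""
    real = [t for t in transactions if TEST_MARKER not in t["headers"]]
    synthetic = [t for t in transactions if TEST_MARKER in t["headers"]]
    return real, synthetic

if __name__ == "__main__":
    txns = [
        {"name": "checkout", "headers": {}},
        {"name": "checkout", "headers": tag_test_request({}, "stress-v2")},
    ]
    real, synthetic = partition_transactions(txns)
    print(len(real), len(synthetic))  # 1 1
```

With traffic tagged this way, the new feature’s transactions stand out in the console during a stress run instead of being averaged into production metrics.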

Another key use case the video highlights is the need to easily compare releases, and there is a detailed walkthrough that shows how easily this feature can be used.

Next Steps

So if you want to learn more about DevOps and see how AppDynamics can help an enterprise’s DevOps transformation, then please check out the course. If you want a little more information, then also check out this promo video for the course.

Just a quick post to encourage you to check out our new AppDynamics DevOps Introduction course. This course is part of our AppDynamics University resources.

DevOps in 2015 – Beyond Basic Metrics

This article originally appeared on CodeProject.com

It’s 2015: do you know how your web applications are behaving in their production environments? Maybe you’ve got a front-end analytics plugin installed, with some server monitoring also configured so you can see the processor and memory utilization of your servers. The outstanding question is: do you really know what the code is doing? Can you get a snapshot at a point in time of the entire server farm and analyze line by line what the application servers are executing? Memory and processor utilization are wonderful metrics for determining the health of a server, but what about the health of your users’ experience? Hardware metrics only tell part of the story, and there are other important metrics in your web application, such as server response time, bounce rate, and sales conversion rates, that speak volumes to your business units.

DevOps has quickly become the buzzword du jour for the software industry and promotes something that many developers and IT administrators don’t enjoy: collaboration. According to Wikipedia, “DevOps acknowledges the interdependence of software development, quality assurance, and IT operations.” By having all three arms of a technology department collaborating, software products and services can be produced with a high quality bar and operational efficiency. In this article, we’ll review how IT operations can engage more with the customer’s experience and assist software development in isolating problems in a production environment, with no code changes needed from the development engineers. We’ll use the AppDynamics Application Intelligence Platform to illustrate how these synergies can drive a better product for customers.

Web Performance – More Important Than Ever

There is a saying among web developers: performance is a feature. This seems simple and straightforward to the technically savvy, but to the average business analyst it may not ring true. Some of the anecdotal numbers around this phrase include:

  • Google found that a half-second slower page resulted in a 20% drop in traffic – Marissa Mayer, Google IO 2008 Keynote
  • Amazon found that a page load slowdown of one second could cost them $1.6 billion in sales each year. –Fast Company

Those are numbers from the big publicly traded companies in your 401(k) or mutual fund portfolio, and you can trust that they take performance seriously. Google takes page performance so seriously that it has added performance as a component of its page rank algorithm.

Ironically, Google’s analytics and Webmaster Tools do not provide much insight into the speed of your pages. There is a “Pages / Session” number presented on the default dashboard, but that doesn’t show anything about individual page speed.

Figure 1 – Google Analytics Dashboard

Google’s Webmaster Tools provide some insight into page performance with a graph indicating what the Google web crawler experienced when it requested content from your site. Once again, it is not very clear which pages were performing slowly, nor what other events were going on during the request:

Figure 2 – Google Webmaster Tools – Crawl Stats

We need something that has insight into the inner workings of our application in order to provide better performance instrumentation. If we move to our hosting provider, like Microsoft Azure, I can view information about the processor and memory utilization through the online Azure portal:

Figure 3 – Microsoft Azure Performance Reporting

Nice, and I can add alerts when the CPU is burdened, or the memory is taxed. I can also add to that mix the ability to automatically scale my web application based on these metrics:

Figure 4 – Microsoft Azure Web App Scaling Configuration

Once again, it feels like a shot in the dark: are my customers having a good experience? Are page requests being fulfilled with appropriate responsiveness? I can only monitor the health of my servers, which the folks in IT operations take very seriously. Are the servers taxed because of visitor growth, or because of a lack of performance tuning on the part of the software engineers?

Get the Whole Picture

To get a complete picture of your application (and you will want a holistic view of the entire application if it is mission-critical to your organization), you must engage an application monitoring platform such as the AppDynamics Application Intelligence Platform. This platform is similar to the other tools discussed previously in that you do not need to update your application source code for it to operate. All of the functionality discussed in this article is available without writing or changing a line of code; it comes straight out of the box.

Unlike the previous examples, which expose information about an entire host or just a single request to the application, Application Intelligence measures each interaction as a complete business transaction. This granularity of measurement allows us to review the impact of each interaction across the entire platform in a clear systems dashboard.

Figure 5 – AppDynamics Dashboard

Immediately on this dashboard we can review the average response time for the application and spot any slower-operating elements within our production server environment, because they are highlighted in red. This complete overview of the application allows IT operators to instantly identify their standard server performance, and shows the beginnings of the customer experience within the application.

Why are some services identified as slowing down and highlighted in red? The advantage this tool brings is that a customer experience baseline is automatically calculated for your application, and performance is then measured against it. This dynamically calculated baseline is key to verifying that your customers have a uniform experience on every visit.

When an operator finds that the application is performing outside of normal parameters, they can begin investigations from this dashboard.

From this view, we can drill down on a specific transaction and see how its data flowed through the entire platform:

Figure 6 – Transaction Data Flow Through the entire platform

When viewing a transaction at this level, it becomes easy to see that there is significant slowness interacting with the database, with ADO.NET transactions from the PaymentsWS node on the bottom right taking more than 16 seconds to complete. Our operators can click the ‘Drill Down’ indicator over the PaymentsWS node and review those interactions in more detail:

Figure 7 – Details of a Checkout interaction with the SQL database

We can see two things happening in this view: the connection open step is taking almost five seconds to complete, and an ExecuteNonQuery call (highlighted in blue) is taking longer than five seconds. The next analysis point available is what got me most interested in this tool: I can inspect the exact SQL statement:

Figure 8 – SQL Code Review

This is where my bread is buttered: I can now take real action on this code with my database administrator. We have concrete metrics about performance of this query and how it is being used within the platform. At this point, a developer and database administrator can decide whether to optimize this query or to optimize the database indexes to improve the performance of the application.
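As a rough illustration of the kind of decision this data supports, the sketch below (using SQLite and invented table and column names, purely for demonstration) times the same lookup before and after adding an index, the classic trade-off a developer and DBA would weigh here:

```python
import sqlite3
import time

# Illustrative only: show how an index changes query time for the kind of
# slow lookup the walkthrough surfaces. Table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?, ?)",
    [(i, i % 1000, float(i)) for i in range(50_000)],
)

def timed(query, args=()):
    """Run a query and return (rows, elapsed seconds)."""
    start = time.perf_counter()
    rows = conn.execute(query, args).fetchall()
    return rows, time.perf_counter() - start

_, before = timed("SELECT SUM(amount) FROM payments WHERE customer_id = ?", (42,))
conn.execute("CREATE INDEX idx_customer ON payments (customer_id)")
_, after = timed("SELECT SUM(amount) FROM payments WHERE customer_id = ?", (42,))
print(f"full scan: {before:.4f}s, indexed: {after:.4f}s")
```

The measured numbers from production, not guesses, are what let the team choose between rewriting the query and adding an index.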

Oh, and we can also check the performance of that credit card processing HTTP call to “visa.com” (note: this is not a real address, is not affiliated with VISA, and is presented for demonstration purposes only). Clicking on the MovieProcessingRole Drill Down button leads to a complete stack trace leading to the HTTP call:

Figure 9 – Call Stack for an HTTP Call

Clearly, we can see that the slow-down occurs in the UploadValues and ServiceBus transmission. Inspecting the UploadValues call reflects the HTTP calls that were made to upload data:

Figure 10 – Inspecting HTTP traffic

Now we have a clear picture that the synchronous HTTP transaction is delaying processing by almost three seconds. With this information, we can decide to take action in changing architecture to minimize the impact of this network interaction.
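One common way to "change architecture to minimize the impact", assuming the confirmation does not need to happen inline, is to move the slow call off the request path. This is a hedged sketch, not the application's actual code; the gateway call is simulated with a sleep:

```python
import concurrent.futures
import time

def slow_gateway_call(order_id: str) -> str:
    """Stand-in for the real ~3-second synchronous HTTP transaction."""
    time.sleep(0.2)  # simulate network latency (shortened for the demo)
    return f"confirmed:{order_id}"

executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def checkout(order_id: str) -> dict:
    """Return to the user at once; settle the payment in the background."""
    future = executor.submit(slow_gateway_call, order_id)
    return {"order_id": order_id, "status": "pending", "settlement": future}

start = time.perf_counter()
result = checkout("A-100")
elapsed = time.perf_counter() - start  # well under the simulated call time
print(result["status"], result["settlement"].result())  # pending confirmed:A-100
```

In a real system the background worker would be a durable queue rather than an in-process thread pool, so a crash does not lose the payment; the sketch only shows the request-path change.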

Real Business Metrics

Once customer experience is optimized and monitored, the IT Operations team can then rest easy knowing that their servers are running well and with healthy metrics for memory and processor utilization. However, that’s not the end of the story when it comes to measurements and metrics that we want to collect from our application.

The real metrics the business cares about revolve around costs and revenue. Many business-focused applications already have reports available in back-office applications to show sales numbers and utilization of the application. But what about other lost opportunities that don’t immediately come out on standard sales reports? What about the analysis of those visitors to your application who did not click through a ‘buy’ button or use the application in a way that you expected?

For the marketing and sales groups that are building the business-side of an application, these additional metrics are necessary but very hard to unearth. How do you track what actions were NOT taken in an application? With Google Analytics, I can start to see some of this information using their Visitor Flow feature:

Figure 11 – Google Analytics Visitor Flow Initial Analysis by Country

That gives an interesting starting point, but we can do better. AppDynamics has designed features to support collecting exactly these types of custom business metrics. The average business user doesn’t want to drill down into source code, or even know where in a database to extract data from, to make business decisions. They’re typically looking for a quick-view dashboard that presents the key metrics they care about. For example, the sales group is very focused on capturing revenue, so a suitable dashboard for an appliance retailer may look like this:

Figure 12 – A Revenue focused dashboard based on a custom metric

This looks good, but it doesn’t feel like something a salesperson could put together quickly. With a little help from a developer friend, a custom revenue metric can be added to the process and then tracked and presented. Custom metrics can read from any public method call in your code, intercepting calls and recording values without needing to add an extra Windows Event Log metric:

Figure 13 – Creating a custom metric for each time a customer ‘checks out’

As this value is collected, we can map it against any other metrics that are already collected about our application visitors. In this case, we are collecting the total amount charged to the customer and the currency that the customer is paying with.
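Conceptually, this interception pattern resembles a decorator that records a method's return value as a metric sample. The sketch below is a generic illustration, not AppDynamics' actual mechanism; the metric name and the in-memory sink are invented:

```python
import functools

# Plain list standing in for the agent's metric pipeline; a real agent
# would forward samples to the monitoring platform.
metrics = []

def record_metric(metric_name):
    """Wrap a public method and record its return value as a metric sample."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            metrics.append((metric_name, result))  # intercept and record
            return result
        return wrapper
    return decorator

@record_metric("checkout.revenue")  # hypothetical metric name
def complete_checkout(amount: float, currency: str) -> float:
    # ...existing business logic, unchanged...
    return amount

complete_checkout(499.99, "USD")
complete_checkout(129.00, "USD")
print(metrics)  # [('checkout.revenue', 499.99), ('checkout.revenue', 129.0)]
```

The appeal of the interception approach is exactly what the article describes: the business logic itself never changes, yet every call now produces a revenue sample.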

With additional information being collected about business transactions over time, combined with other known metrics about our visitors, we can start to see some interesting trends develop. In the lower right corner of the dashboard, we notice that curved televisions are seeing a two-to-one sales advantage over flat-screen televisions. Perhaps it’s time to change up the layout of the website to promote those older flat-screen televisions in inventory while back-ordering the curved screens.

Summary

DevOps is a brave new world in 2015, and we need to have our development teams work together with operations to ensure smooth running of our production web applications. However, we’ve seen in our examples that smooth operations and profitable operations are not just the concerns of development and operations. With an advanced system management tool like AppDynamics, business analysts can get the metrics they need to make appropriate decisions to improve the business performance of an application. When the chips are on the table at the end of the day, it’s not the up-time of an application that matters but rather the return on investment on that web application. Advanced metrics and analytics should be a standard tool for you in 2015 to meet those business goals.

Want to try these advanced metrics for yourself? Check out a FREE trial of AppDynamics today.

Keep CALM And Embrace DevOps – Lean

Following on from my last blog posts on culture and automation, I want to move on to the next letter of the CALMS model – the letter L for Lean. 

Defining Lean

For many, myself included, the basis of modern Lean principles comes from Toyota and the Toyota Production System. If you want to read about this in detail, I would recommend The Toyota Way by Jeffrey Liker, a book that I read a couple of years ago and enjoyed. Often referred to as Lean Manufacturing, the definition on Wikipedia states that it’s ‘a systemic method for the elimination of waste within a manufacturing process’. Since then, Lean IT has emerged as an extension of Lean Manufacturing; it focuses on eliminating waste in IT services while providing increased value to customers or employees.

The Principles Of Lean Applied to DevOps

The main principles of Lean focus on identifying value streams, which in the case of DevOps are applications that form services provided to customers, employees or partners. From this, value stream mapping visualization techniques are used to break down and analyze identified value streams into steps or processes, from design through to delivery, so that waste activities can be identified and eliminated. In terms of monitoring, for example, waste would be tools that are not providing value to certain processes in the value stream, thus inhibiting flow and undermining the speed and quality essential to DevOps.

Six Lean Essentials For DevOps And Application Performance Management (APM)

Understanding and optimizing value streams is crucial. But Lean has a number of other key concepts, which can be applied to your APM strategy in order to ensure that it provides value to DevOps adoption:

  1. Understand how applications create value for customers, employees or partners. Lean defines a concept called ‘Gemba’. In Japanese this means ‘actual place’, and it is about understanding where value is created for customers. In terms of APM, it’s about understanding the value that an application creates for its users. This is important, as you can’t create a business case for an APM solution without understanding the business importance of the application you are going to monitor. In relation to DevOps (delivering at speed while maintaining quality), features such as Real User Monitoring (RUM) help safeguard the value created, while AppDynamics’ ability to identify Business Transactions in an application means that all key interactions, which deliver value to the user, are automatically identified and proactively monitored.
  2. An APM solution must be ‘second nature’ in terms of operation. Students studying the Japanese martial art of Karate memorize and practice moves until they become ‘second nature’. This concept has been adopted by Lean and is about reinforcing certain activities, patterns and ways of thinking. In DevOps, this means reinforcing culture characteristics such as ‘fail fast, fail forward’. APM is an essential activity that should be reinforced in DevOps adoption, and it’s vital that an APM solution is easy to use so that it becomes ‘second nature’ in terms of operation. For that reason, one of our driving principles at AppDynamics is ease of use.
  3. Eliminate waste, inconsistency and absurdity from your APM strategy. There are three categories of waste in Lean: “muda” is physical waste (e.g. time, money, and resources); “mura” is inconsistency or unevenness of process; and “muri” is absurdity and unreasonableness. For DevOps, removing these three categories of waste is critical for speed. But they are also critical for your APM strategy, as multiple overlapping monitoring tools in a typical siloed enterprise mean physical waste (licenses, etc.), inconsistency in the way they are used, and ultimately absurdity, as alerts are not representative of the business. Hence at AppDynamics we provide end-to-end visibility of application performance and smart alerting, which means alerts are always in the business context.
  4. APM must act as a ‘fail-safe’ against application performance issues. “Poka-yoke” is a Lean principle that means ‘fail-safing’ or ‘mistake-proofing’. Essentially this is what any good APM solution should provide – the ability to make sure that application issues are remediated before they impact the customer or employee. As explained in my blog post on automation, our platform makes it easy to run a response to an issue or event meaning that a problem can be remedied before it impacts the business.
  5. Contextual dashboards are critical to APM strategy and DevOps. Data from any APM solution only becomes information and knowledge if it’s displayed in context of the audience. The Lean principle of “kanban” means “visual signal” or “card” and is about making sure that visuals are used to allow teams to communicate more easily on what work needs to be done and when. In DevOps, this easy communication is central to speed while maintaining quality and it’s also central to APM. A great APM solution should allow you to generate visual reports and dashboards quickly in order to communicate information relevant for the audience. 
  6. APM should provide the basis for continuous improvement in DevOps. DevOps success means delivering quality applications and new features at speed, which internally support employee productivity and externally delight customers. Therefore, to stay ahead of the competition, the Lean principle of “kaizen”, or continuous improvement of your people, process and technology, is critical in regards to DevOps. As a great APM solution provides visibility of the end-to-end performance of an application, it should be a primary source of information for performance improvement. At AppDynamics, with our built-in application analytics features, we provide the ability to continually improve not only performance, but your overall software strategy.

So these are my six essentials for Lean, DevOps and APM. I find that Lean principles always promote lots of discussion so I would love to hear your thoughts and comments. Also if you want to hear more about DevOps and Culture then please join or listen to a recording of my webinar which I will be running this Thursday, 9th April. More information can be found here.   

Keep CALM And Embrace DevOps – Automation

Following on from my blog post on Culture, I want to turn my attention to the next letter in the CALMS model – the letter A for Automation. If you boil down DevOps to its basics, it’s all about releasing new applications or features at speed while maintaining quality. In order to do this, automation solutions are vitally important, but…

I have problems with Automation

For me, the mantra of achieving speed via automation tools is nothing new. In fact, I was ‘automating’ Citrix MetaFrame builds using Windows scripting techniques back in 2004. The market, though, has become awash with different automation products, and it’s fair to say that many enterprises now suffer from ‘automation sprawl’. This results in a tactical, rather than the required strategic, approach to automation, meaning that benefits are never fully realized. In fact, I have spoken to many companies for which the word ‘automation’ leaves a bitter taste in the mouth, as they have been burned by failed implementations.

Now, before everyone says, “John, what about solutions like Chef and Puppet? They have been very successful so far”: yes, they have, I agree. They have both captured the market when it comes to turning infrastructure into code, so that environments can be rapidly and automatically built. I have no doubt that their solutions are easy to use, but I would argue that their success does not lie with the product alone but with their execution: they made it very easy to share, and to encourage sharing of, automation scripts and modules.

Here are my problems with Automation in relation to DevOps.

Problem 1: Speed can be dangerous

Speed is the nirvana for many enterprises when it comes to DevOps initiatives. The issue is that for many, their processes around release and change are centered on rigor and control, at the detriment of speed. I know I have sat through countless, pointless Change Approval Boards (CAB) in my time – one of the symptoms of this approach.

But going in all guns blazing in regards to release and change automation, specifically, can damage your business. A widely accepted stat, backed up by formal research at Gartner, is that 80% of business service outages (read: application outages) are caused by release, change and configuration processes. The need for speed could easily amplify this stat. The solution is to make sure that you proactively monitor what matters in regards to business services. This means the end user’s experience and the application itself, through to the infrastructure workload, including the database. Our Application Intelligence Platform provides this, which is why it’s a perfect partner to any infrastructure or application release automation tool.

Problem 2: Automation success isn’t about the tools but the people and process

I have heard two great comments in regards to automation in last couple of years. Firstly, “Don’t automate what you don’t understand” and secondly, “A bad process automated, is still a bad process”. I think these comments summarize nicely the problems which enterprises face in any strategic automation or DevOps initiative. While the automation product may be simple to use, understanding what to automate and what effect this will have on the organization and its people is a different matter.

If you are going to be successful with automation and DevOps, it’s essential to address how an automation script, or series of scripts/modules, will interact with or change your current processes – processes around which jobs, roles and emotions are centered. This becomes difficult in an enterprise that has defined processes for areas such as release, change and configuration, plus all the politics that come with them. Any strategic enterprise automation initiative has to overcome FUD (Fear, Uncertainty and Doubt) among employees whose job activities automation will change. By the way, automation may well improve an employee’s role, but this does not mean that FUD will not be present initially. To overcome this barrier you need to think about the people and process first. Involving people in automation initiatives from the outset (I would even suggest shying away from using the word ‘automation’) and being as transparent as possible through each stage of adoption is vital.

Problem 3: Automation is more than just infrastructure and application automation

In my discussions with IT professionals in regard to DevOps, I hear a lot of excitement, quite rightly, in regards to infrastructure as code, application release and configuration release automation solutions. But this should not be your only focus for DevOps. It’s still essential to be able to respond rapidly to emerging application and associated infrastructure workload issues before they impact the end-user (customer or employee). This means that automated issue response or remediation is an essential capability that you should look for in any APM solution.

One of our driving principles at AppDynamics is to enable businesses to act fast. This means that we make it easy to run a response to an issue or event. Another key capability is that we help enable modern application architectures by providing cloud auto-scaling. Our platform understands application demand via real user monitoring so it automatically knows when to scale up or scale down application components in a cloud-based environment. This ensures that the customer or employee continues to have great performance even during heavy utilization periods. This feature is essential for those organizations who have to deal with periodic events such as Black Friday or Cyber Monday.
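The scaling decision itself can be thought of as a simple control loop over a demand metric. The thresholds, bounds, and names below are invented for illustration and are not AppDynamics' algorithm:

```python
# Hedged sketch of demand-driven auto-scaling: adjust the instance count
# based on an observed load metric (here, requests per second per instance).

MAX_LOAD_PER_INSTANCE = 100   # scale out above this
MIN_LOAD_PER_INSTANCE = 30    # scale in below this

def desired_instances(current: int, requests_per_sec: float,
                      floor: int = 2, ceiling: int = 20) -> int:
    """Return the next instance count given current load, clamped to bounds."""
    load = requests_per_sec / current
    if load > MAX_LOAD_PER_INSTANCE:
        current += 1          # overloaded: add an instance
    elif load < MIN_LOAD_PER_INSTANCE and current > floor:
        current -= 1          # underused: remove an instance
    return max(floor, min(ceiling, current))

print(desired_instances(4, 500))   # 5: overloaded, scale out
print(desired_instances(4, 100))   # 3: underused, scale in
print(desired_instances(2, 50))    # 2: already at the floor
```

The value a real user monitoring feed adds over this toy loop is the input: scaling on actual user demand rather than on raw CPU means capacity tracks the experience, not just the hardware.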

Well, here are my thoughts. I would love to hear yours. If you would like to try our rapid response, remediation and cloud auto-scaling features then please visit our free trial page.

BizDevOps: Taking DevOps a Step Further

Traditional siloed IT organizations, which were primarily driven by strictly gated Information Technology Infrastructure Library (ITIL) processes and workflows, created tremendously effective controls around operating and evolving IT systems. This operational methodology was effective when the business expected methodical change to happen via large releases of hardware and software functionality that most often delivered new capabilities every six to 12 months.

As businesses continue to transform to digital and become software-defined by nature, the need to iterate on software releases more rapidly has driven organizations to build specialized, faster-moving teams in order to adjust the business model and execution quickly. This began with development organizations adopting agile methodologies, including automated testing of software and continuous integration within the development lifecycle. On the IT operations side, there was a need to match the development velocity, requiring breaking down of some of the rigid infrastructure and application support silos.

DevOps is the resultant philosophy that brings together developers and operations teams organizationally, culturally and technically. These changes overcome the work blockages present in non-integrated teams. This requires automation, but also flexibility in the software and infrastructure to adapt and learn. Doing this effectively aligns the development and operations teams with the needs of the business. The ability to learn, pivot quickly, and experiment is critical. Having both teams on the same page, using the same data, and speaking the same language is essential.

Businesses today have stable, consistent business processes that run the bulk of the business; today’s shift requires coupling these with smaller, deliberately unstable processes that provide differentiation. These unstable processes support customer interactions that are unpredictable and require ad hoc decision-making, and they are agile, adaptable and maneuverable according to shifting customer needs. Deliberately unstable processes mandate a shift in the ability of an enterprise and its people to change fluidly. This holistic approach, blending business model, processes, technology and people, will fuel digital business success and lay the foundation for the digital transformation agenda.

Product management, or the business application owners, are responsible for taking business requirements, and both stable and unstable processes, and translating them into product requirements and criteria that the DevOps teams must interpret, build and operate. What DevOps fails to do is capture the business transactions needed to drive shifting business model decisions, which are required for digital business success. In order for business, development, and operations teams to work together, key business transactions must be captured and factored into decision-making and automation. Business transactions should not be considered isolated measures, but instead the path of the user and the execution of user-initiated processes across application and infrastructure components.

Making decisions to shift or break business processes must be data-driven, using a combination of metrics and data across business, users, applications and infrastructure. This data includes contextual data such as user, location, and device; transactional data such as revenue, channel, and product; and operational data such as request response times, device experience, etc. BizDevOps is the extension of DevOps into the business. Supporting these merged teams will require new technologies to keep them on the same page when driving digital business. Having a common language based on business transactions allows for more effective meetings and greater collaboration across normally disjointed parts of the organization. Future use cases for BizDevOps platforms and data may include teams that have often sat within data silos, such as marketing, sales, customer support, and even back-office business service teams.

My colleague, Anand Akela, wrote about BizDevOps success yesterday and is also hosting an upcoming webinar. Register now!


5 Steps to DevOps Success with Application Analytics and BizDevOps

In my last blog, I talked about how the process of BizDevOps is emerging to utilize DevOps practices that further drive the overall business agenda and how a business transaction can be that common language for collaboration between Dev, Ops, and Biz.

DevOps is an approach that improves collaboration between Dev and Ops teams to enable fast delivery of applications and ensure an impeccable end-user experience. BizDevOps takes the concept of DevOps to a new level by bringing business context and insights to day-to-day DevOps activities. BizDevOps ensures that Dev and Ops focus on what matters to the business, and also introduces the Biz persona (line-of-business manager, product manager) as a key stakeholder in the process.

IDC recently published research on these two topics in its report, “DevOps and the Cost of Downtime: Fortune 1000 Best Practice Metrics Quantified,” by IDC Vice President Stephen Elliot. In addition to the eye-opening cost of application downtime and the increasing momentum for DevOps, it also highlighted that “improved customer experience” is the most expected business outcome from DevOps practices.

Improving end-user customer experience is one of the key objectives of an application performance management solution in a production environment. By leveraging application performance data earlier in the development cycle, DevOps teams can ensure readiness for an exceptional customer experience before deploying any application in production. Finally, harnessing the business data in application transactions and logs and correlating it with operational data can provide actionable business insights.

In this blog, I will discuss five steps to DevOps success leveraging an application performance management solution in production. I’ll also discuss how application performance analytics earlier in the development lifecycle will help foster BizDevOps success.

1. Monitor and manage performance with the business in mind

In order to minimize application downtime and expedite remediation of performance issues, you need to understand the business impact of different transactions and their dependencies on various application components and the underlying infrastructure. You need to look at every aspect of a business transaction: the user experience, the application performance, how the application interacts with the infrastructure, and finally the business impact and how the business is performing.

AppDynamics automatically discovers application topology and interdependencies, traces key business transactions, and visualizes and prioritizes end-to-end business transaction performance, in addition to monitoring the health of the application and underlying infrastructure.

The AppDynamics Application Intelligence Platform is a self-learning platform that tracks metrics for every business transaction, automatically benchmarks what’s considered normal performance, and adjusts that baseline dynamically over time using the metrics collected. This dynamic baselining allows for intelligent alerting and remedial actions in case of deviations from normal.
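Dynamic baselining can be hard to picture in the abstract. The sketch below illustrates the general idea with an exponentially weighted moving average and a deviation band; it is a toy model of the concept, not AppDynamics’ actual algorithm:

```python
from math import sqrt

class DynamicBaseline:
    """Toy rolling baseline: an exponentially weighted moving average of
    response times plus a deviation band. Illustrative only, not
    AppDynamics' real implementation."""

    def __init__(self, alpha=0.1, tolerance=3.0):
        self.alpha = alpha          # how quickly the baseline adapts
        self.tolerance = tolerance  # deviations from normal before alerting
        self.mean = None
        self.var = 0.0

    def observe(self, response_ms):
        """Fold a new measurement into the baseline; return True if it
        deviates from what the baseline currently considers normal."""
        if self.mean is None:
            self.mean = response_ms
            return False
        diff = response_ms - self.mean
        abnormal = abs(diff) > self.tolerance * max(sqrt(self.var), 1.0)
        # Adjust the baseline with every observation, so "normal" drifts
        # along with genuine changes in traffic patterns over time.
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return abnormal
```

Fed a steady stream of ~100 ms response times, the baseline settles around that value; a sudden 500 ms measurement then trips the deviation check.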

AppDynamics collects metrics for every transaction, but, for those transactions that are trending towards abnormal performance, it intelligently collects call graph snapshots and other diagnostics. A transaction snapshot depicts a set of diagnostic data, taken at a certain point in time, for an individual transaction across all application servers through which the transaction has passed. It gives you the code-level visibility for a business transaction, not just at a single component in your environment, but for the transaction as touched by all instrumented tiers involved in processing the business transaction.

After you find the root cause of a problem by starting from the entry point of a business transaction and drilling down all the way to the calls in your code, AppDynamics also allows you to automate remedial actions via run-book automation so that you can resolve issues quickly and ensure an exceptional end-user experience.
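Conceptually, run-book automation is a mapping from a detected condition to a scripted remedial action. The sketch below illustrates that pattern; the event names and actions are hypothetical, not AppDynamics’ API:

```python
# Minimal sketch of run-book automation: map health-rule violation events
# to remedial actions. All names here are invented for illustration.

RUNBOOK = {
    "jvm_heap_critical":  lambda node: f"restart app server on {node}",
    "thread_pool_maxed":  lambda node: f"increase thread pool size on {node}",
    "tier_capacity_high": lambda node: f"provision extra node alongside {node}",
}

def remediate(event, node):
    """Look up and describe the remedial action for a violation event."""
    action = RUNBOOK.get(event)
    if action is None:
        # No automated fix known: fall back to a human.
        return f"no run-book entry for {event}; paging on-call"
    return action(node)
```

In a real deployment the lambdas would invoke orchestration scripts rather than return strings, but the lookup-and-dispatch shape is the same.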

2. Don’t just manage production apps; ensure readiness in pre-production

The goal of any mobile application is to give end customers an exceptional experience and earn a five-star rating on the iTunes Store or Google Play. Web applications have similar goals for earning end-user loyalty.

In order to achieve these challenging goals in production, you will need to ensure that your applications are tested and ready to perform as desired before they are deployed to production. It helps if you can use the same application performance management tool that you use in production to monitor tests in a production-like pre-production environment. An APM solution such as AppDynamics lets you set policies that trigger automated actions to report issues, or simply to notify you of successful or unsuccessful test runs, during pre-production.
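One way to wire such a policy into a pre-production pipeline is to gate the build on load-test results. The sketch below is illustrative; the transaction names and SLA thresholds are invented for the example:

```python
# Sketch of a pre-production gate: fail a test run when a load test's
# business-transaction percentiles exceed SLA targets. The transactions
# and thresholds here are hypothetical.

SLA_MS = {"Checkout": 800, "Search": 300, "Login": 500}

def percentile(samples, pct):
    """Simple nearest-rank percentile over a list of measurements."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[index]

def gate_test_run(results, pct=95):
    """results: {transaction: [response times in ms from the load test]}.
    Return the transactions that violate their SLA at the given percentile;
    an empty list means the run passed the gate."""
    return [tx for tx, times in results.items()
            if tx in SLA_MS and percentile(times, pct) > SLA_MS[tx]]
```

A CI job could call `gate_test_run` after each load test and mark the build unsuccessful when the returned list is non-empty.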

Having deep application transaction traces and detailed snapshots of applications and the underlying infrastructure is also very important for understanding the root cause of any performance issue, so that developers can fix problems before they surface in production. AppDynamics can be run in Developer Mode to capture full snapshots for every transaction and provide full diagnostics data.

3. When stuff happens (and it will), collaborate effectively with Dev, Ops, and Biz

In addition to ensuring production readiness before deployment and having complete end-to-end visibility into the production environment, it is very important to have processes and tools that foster collaboration between development, operations, and business teams. It helps to get everyone on the same page looking at the same business transaction data, to focus on metrics that translate to the business value the application delivers, and to dive in deeper when appropriate.

The AppDynamics Virtual War Room enables collaboration between development, operations, and business teams. It allows them to see and resolve problems together in a shared virtual space. It enables scheduled, auto-delivered reports, and adds a new iOS app that sends push notifications to team members when system alerts are triggered.

The Virtual War Room helps track changes in real time by correlating streaming metrics at one-second intervals. It allows multiple developers, operations professionals, and business users to look at the same dashboard at the same time and make changes to it in real time. It shows a timeline decorated with annotated artifacts that can represent notes and observations.

4. Change is most often the cause of poor performance, so understand changes to improve performance

Once the application is deployed in production, it is critical to watch for any changes in the environment, since the majority of IT outages are caused by improperly implemented changes. In order to minimize very costly application downtime, it is important to understand the performance impact of every change: software, server, and database upgrades, as well as infrastructure changes.

AppDynamics enables you to compare your application before and after a new code release, code sprints, and even bug fixes, allowing you to assess the impact the new code had on application performance in both pre-production and production environments. You can compare business transaction snapshots for code path differences between versions, between fixes, or between two different hosts.
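At its core, a release comparison of this kind comes down to comparing per-transaction performance between two data sets. The sketch below shows the idea in its simplest form; the input format is invented for illustration and is not how AppDynamics stores its data:

```python
def compare_releases(before, after, threshold=0.10):
    """Flag business transactions whose average response time regressed by
    more than `threshold` (as a fraction) between two releases.
    Inputs are illustrative: {transaction: [response times in ms]}."""
    regressions = {}
    for tx in before.keys() & after.keys():
        old = sum(before[tx]) / len(before[tx])
        new = sum(after[tx]) / len(after[tx])
        if old > 0 and (new - old) / old > threshold:
            regressions[tx] = round((new - old) / old * 100, 1)  # % slower
    return regressions
```

Running this against the same load-test scenario before and after a deploy surfaces only the transactions that got meaningfully slower, which is exactly the short list a release review wants.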

5. Unlock actionable business insights with Application Analytics

Harnessing the business data in transactions and logs and correlating it with operational data can help you unlock actionable business insights. For example, understanding which users had trouble checking out of your e-commerce application during an outage, and what products were in their carts, gives your marketing team the data to execute a win-back campaign.

Similarly, if multiple business transactions are having performance issues, you can prioritize resolution based on the revenue impact of each transaction.
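Revenue-based prioritization is straightforward to express. The sketch below assumes you already know (or can estimate) the revenue each transaction drives per minute; the data shapes are illustrative only:

```python
def prioritize_by_revenue(slow_transactions, revenue_per_min):
    """Order problematic business transactions by estimated revenue at risk.
    slow_transactions: names currently violating their baseline (a list).
    revenue_per_min: illustrative map of transaction -> $/minute it drives.
    Transactions with no known revenue sort last."""
    return sorted(slow_transactions,
                  key=lambda tx: revenue_per_min.get(tx, 0.0),
                  reverse=True)
```

So a slow Checkout worth $5,000/minute jumps ahead of a slow Search worth $800/minute in the remediation queue.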

The AppDynamics Application Intelligence Platform delivers true business intelligence and supports both your immediate operational needs and your longer-term business objectives. It helps you analyze your business and IT outcomes in real time based on streaming business and operational data in the dashboard. It gives you clear visibility into how your customers are using your products, helping you drive customer enablement and prioritize product development efforts.

To learn more about these key strategies for DevOps success, join me in a webinar with IDC Vice President Stephen Elliot, the author of the “DevOps and the Cost of Downtime: Fortune 1000 Best Practice Metrics Quantified” report. Register now!

Keep CALM And Embrace DevOps – Culture

This is my first blog post at AppDynamics, and I have to say that it’s great to be aboard. It’s been a hectic first couple of weeks, but the energy, enthusiasm, and friendliness of everyone I have met has made me very excited about 2015! AppDynamics has a market-leading APM and analytics platform, but it also takes great people to make a great company – and AppDynamics has a wealth of talent!

So to start my blogging life at AppDynamics, I want to focus on something that is a red-hot buzzword in IT at the moment – DevOps! For the last couple of years, every conference I attended and nearly every call I had at Forrester included a discussion on DevOps. There is good reason for this: from my perspective, DevOps promises that the software-powered business can rapidly deliver quality, app-fueled business services that delight customers.

I would also say that the term DevOps is not encompassing enough, as it’s not just about development and operations. It’s also about the business, because in today’s digital business, every function is accountable for software strategy success. Take a look at this blog by my colleague Anand Akela for our view on BizDevOps.

The CALMS Framework For DevOps

In most discussions I have had with ops professionals, one question came up again and again in regards to DevOps: “Is there a framework for DevOps adoption?” There are good reasons for this question; one is that many enterprise ops folks are aware of frameworks such as ITIL. But rather than a framework, ITIL morphed into a whole, daunting library of books – something I hope never happens with DevOps!

But one framework has emerged, originally coined by Jez Humble (@jezhumble), a pioneer in DevOps. Jez devised the CALMS framework, which makes perfect sense to me (OK, I know the title of my blog says ‘Keep Calm’ but I am in marketing now ☺):

C – Culture
A – Automation
L – Lean
M – Measurement
S – Sharing

Let’s Start With Culture

Culture – one of the most used words in business today. “It’s all about the culture…”, “Culture is how organizations do things…”, “Culture is about rituals…”, etc., etc. Well, that clears up what culture means?!? The reality is that culture is such a fluffy term that it means something different in different circumstances – to different people, businesses, industries, and countries.

To define culture and what it means for DevOps, you have to understand the starting point – the challenges that your enterprise’s operating model currently faces in regards to software strategy – and what the new ‘DevOps’ operating model looks like. Here are a few examples:

  1. Shift from a fear of failure, to a fail-fast, fail-forward approach. DevOps is about speed. If your operating model today promotes zero failure and employees are scared of failing then you will stifle the ability to innovate, to promote new ways of doing things, to move fast. Failure can be good so long as we learn and improve. This is all part of innovation, which is central to DevOps.
  2. Shift from a tech obsession, to a customer obsession. In today’s digital world, it can be so easy to get caught up with tech buzzwords such as mobile, wearables and cloud. But the rules of business have not changed. Deliver what your customers want, delight them, and hopefully they will help to promote your brand. This means that every employee has to think about the external customer, their needs, their wants in order to guide strategic decisions. It’s about moving from an inside-out to an outside-in operating model.
  3. Shift from organizational silos to a collaborative model. Let’s face it, the desire for collaboration across different business functions has always been a goal. But collaboration is never as good as it can be. I could write a whole book about why, but largely this is because business = people = different agendas = politics = failure to collaborate. To move fast and deliver quality software rapidly, it’s essential that we get collaboration right. This is not just about collaboration between Dev and Ops, but about an operating model that promotes collaboration across business functions (e.g. digital teams, marketing), development, and operations.
  4. Shift from big data confusion to real-time, information-driven insight. In the fast-moving world of DevOps, information and insights in context will be your business lifeline. This means that quickly turning application data – engagement, technical, and business data (revenue, etc.) – into information that different business audiences can consume is essential to making fact-based strategic decisions. We have to move away from the current confusion around big data and analytics, and shift to an operating model that makes an application-focused analytics solution a core part of strategic decisions about software.

Why APM And Analytics Is Key To DevOps Culture

Having worked with many enterprises and APM solution vendors in my time at Forrester, I believe that a great APM solution can support all four of my points above. We are at a turning point in the APM market: APM is no longer just about incident management and response, but about making sense of application data and turning it into insight to support business decisions. A great APM solution today has analytics baked in, and it has to be simple to use – simple to turn data into information, and simple to display that information to different audiences. At AppDynamics, we uphold three core principles for our application intelligence platform:

  • See – Our platform is able to safeguard and optimize application performance from the end-user (customer or employee) through to the infrastructure workload and database/data store backend. This means that we can detect potential issues before they impact the customer. This supports a fail-fast, fail-forward operating model in a customer-obsessed business.
  • Act – Our platform includes automation features, meaning that the business can respond quickly to potential problems that could impact the customer. For example, if an application server is being maxed out, we make it easy to automatically spin up another server before a performance issue or outage occurs. On top of this, our war room feature makes it simple for the business, development, and operations to collaborate, looking at the same information in real time, to resolve issues quickly.
  • Know – Our integrated application analytics makes it easy to display information in the context of the audience. We collect all application data and make it simple for technical or non-technical audiences to turn this data into information so that strategic insights can be drawn.

In my next blog I will tackle Automation but for now I would love to hear your comments on DevOps and culture.

Anand and IDC Vice President Stephen Elliot, the author of the “DevOps and the Cost of Downtime: Fortune 1000 Best Practice Metrics Quantified” report, will be speaking on an upcoming webinar about DevOps best practices. Register now!

APM & DevOps: How To Collaborate Effectively with the Virtual War Room

We’ve all been there: the dreaded War Room.

When a critical application starts behaving badly, everyone and their uncle is on high alert, on conference bridges, chat sessions, and often physically in the same room trying to get to the bottom of the problem. And here’s how the conversation usually goes:

    Non-Technical Lead: “Okay, what do we know?”
    IT Ops Lead: “The system is slow and crashing intermittently.”
    Non-Technical Lead: “And?”
    IT Ops Lead: “And we don’t know why. Server team, what can you tell us?”
    Server Lead: “Servers are fine. Maybe there’s a problem with the network?”
    Network Lead: “Network looks fine. Maybe it’s the database?”
    Database Lead: “Databases look fine. The application might be doing something it shouldn’t though.”
    Non-Technical Lead: “$#?@! Customers can’t check out and we’re losing revenue by the minute — how can you say each of your systems is fine?!?!”

At this point, everyone looks to their individual tools with their system-specific metrics. The server team talks CPU, memory, and disk I/O; the network team talks throughput, packet loss, and latency; the database team talks top queries, cache-hit ratios, and connection counts.

And round-and-round they go. Eventually, the dev team is called in. Then it just gets worse.

But there’s a better way.

Over the past 10 years, application performance monitoring has revolutionized IT, specifically because measuring application behavior in terms of Business Transactions is something *everyone* can agree on: everyone understands what it means when “Checkout” is slow, stalled, or errored – even the Non-Technical Lead.

And the best part is, Dev and Ops can agree on it too.

That said, until now there’s been a significant limitation when it comes to accessing this data: there hasn’t been a simple way for everyone to get on the same page, looking at the same data, on the same screen, in a purpose-built collaboration solution designed around application performance monitoring.

Which is why AppDynamics’ new Virtual War Room is so groundbreaking.

Starting a session is as easy as can be – one click and you’re up and running. From there, invitations can be sent to anyone to join.

Once in the Virtual War Room, everyone’s immediately on the same page, with dashboards and widgets providing business context with application transaction data. For instance, the example below depicts an e-commerce scenario in which an order processing issue is preventing orders from being fulfilled. The Business Transaction view of Orders Processed is something everyone can understand, and therefore everyone can see there’s an issue:

As demonstrated in this example, chat functionality allows team members to collaborate in real-time, with annotations added to the graphs when War Room Notes are entered. The conversation begins with a team member inquiring if any diagnostics have been performed (looking at Snapshots), and when it’s discovered there have been timeout exceptions, a second graph is added displaying blocked threads. Seeing the inverse relationship between Blocked Threads and Orders Processed, thread capacity is increased to resolve the issue.

It might seem simple – and for demonstration purposes the scenario definitely is – but the reality of coming to these conclusions in today’s enterprises is far from straightforward. In day-to-day practice, it’s just too easy to be consumed by a tunnel-vision worldview based on silos of responsibility. Without unbiased data, presented in clear business context, the blame game is the outcome to be expected. Thankfully, with the AppDynamics Virtual War Room, that doesn’t have to be the case.

In a nutshell, this is how AppDynamics enables true DevOps collaboration with our new Virtual War Room:

  1. Get everyone on the same page by looking at the same Business Transaction data
  2. Keep the focus on metrics that translate to the business value the application delivers; dive in deeper when appropriate
  3. Include as broad an audience as possible and foster communication, with chat and annotation capability
  4. Identify resolution criteria, assign ownership
  5. Take lessons learned to improve development, test, deployment, and production processes.

Now imagine doing this not just during a hair-on-fire production issue, but during a load test, deployment or pilot scenario. Powerful stuff. Only from AppDynamics.

Docker and DevOps: Why it Matters

Unless you have been living under a rock the last year, you have probably heard about Docker. Docker describes itself as an open platform for distributed applications for developers and sysadmins. That sounds great, but why does it matter?

Wait, virtualization isn’t new!?

Virtualization technology has existed for more than a decade, and in the early days it revolutionized how the world managed server environments. The virtualization layer later became the basis for the modern cloud, with virtual servers being created and scaled on demand. Traditionally, virtualization software was expensive and came with a lot of overhead. Linux cgroups have existed for a while, but more recently Linux containers came along and added namespace support to provide isolated environments for applications. Vagrant + LXC + Chef/Puppet/Ansible have been a powerful combination for a while, so what does Docker bring to the table?

Virtualization isn’t new and neither are containers, so let’s discuss what makes Docker special.

The cloud made it easy to host complex and distributed applications, and therein lies the problem. Ten years ago, applications looked straightforward and had few complex dependencies.

The reality is that application complexity has evolved significantly in the last five years, and even simple services are now extremely complex.

It has become a best practice to build large distributed applications using independent microservices. The model has changed from monolithic, to distributed, to now containerized microservices. Every microservice has its own dependencies and unique deployment scenarios, which makes managing operations even more difficult. The default is no longer a single stack deployed to a single server, but rather loosely coupled components deployed to many servers.

Docker makes it easy to deploy any application on any platform.

The need for Docker

It is not just that applications are more complex; more importantly, the development model and culture have evolved. When I started engineering, developers had dedicated servers with their own builds if they were lucky. More often than not, your team shared a development server, as it was too expensive and cumbersome for every developer to have their own environment. Times have changed significantly: the cultural norm nowadays is for every developer to be able to run complex applications off a virtual machine on their laptop (or a dev server in the cloud). With the cheap, on-demand resources provided by cloud environments, it is common to have many application environments: dev, QA, production. Docker containers are isolated but share the same kernel and core operating system files, which makes them lightweight and extremely fast. Using Docker to manage containers makes it easier to build distributed systems by allowing applications to run on a single machine or across many virtual machines with ease.

Docker is both a great software project (Docker engine) and a vibrant community (DockerHub). Docker combines a portable, lightweight application runtime and packaging tool and a cloud service for sharing applications and automating workflows.
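As a concrete illustration of that packaging idea, a minimal Dockerfile for a single microservice might look like the sketch below; the base image, port, and entry point are assumptions for the example, not a prescription:

```dockerfile
# Illustrative Dockerfile for one hypothetical microservice.
# Start from an official language stack on DockerHub.
FROM python:3

WORKDIR /app

# Bake the service's dependencies into the image so the container runs
# identically on a laptop, a shared dev server, or in the cloud.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself.
COPY . .

EXPOSE 8080
CMD ["python", "service.py"]
```

Once built, the same image can be run unchanged on bare metal, in a private cloud, or on a public cloud provider, which is exactly the portability argument above.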

Docker makes it easy for developers and operations to collaborate

DevOps professionals appreciate Docker as it makes it extremely easy to manage the deployment of complex distributed applications. Docker also manages to unify the DevOps community, whether you are a Chef fan, Puppet enthusiast, or Ansible aficionado. Docker is also supported by the major cloud platforms, including Amazon Web Services and Microsoft Azure, which means it’s easy to deploy to any platform. Ultimately, Docker provides flexibility and portability so applications can run on-premise on bare metal or in a public or private cloud.

DockerHub provides official language stacks and repos

The Docker community is built on a mature open-source mentality, with the corporate backing required to offer a polished experience. There is a vibrant and growing ecosystem brought together on DockerHub. This means official language stacks for the common app platforms, so the community has officially supported, high-quality Docker repos – which means wider and better support.

Since Docker is so well supported, you see many companies offering support for Docker as a platform, with official repos on DockerHub.

Want to take a test drive of AppDynamics? It has never been easier: use Docker to deploy a complex distributed application, with application performance management built in, via the AppDynamics Docker repos.

Find out more and get started with Docker today.