Gartner Report Reveals Why Your APM Strategy Needs AIOps

Is application performance monitoring (APM) without artificial intelligence a waste of resources?

It turns out, the answer may be yes. Gartner’s newly released report, Artificial Intelligence for IT Operations Delivers Improved Business Outcomes, reveals that using artificial intelligence for IT operations (AIOps) in tandem with APM might be the key to optimizing business performance.

So why exactly do AIOps and APM make such a powerful pair and, perhaps more importantly, how can you start applying an AIOps mindset to APM in your own organization?

AIOps and APM: Great Alone, Better Together

Application performance monitoring (APM) is the key to proactively diagnosing and fixing performance issues, but a new study from Gartner reveals the many incremental benefits IT teams can derive from leveraging AIOps in conjunction with APM. Adding artificial intelligence into the mix gives IT and business leaders visibility into the right data at the right time to make decisions that maximize business impact. AI is powerful in the APM context because most APM environments generate massive quantities of data, far more than humans can parse and derive meaning from quickly enough to be useful. Through machine learning, we can ingest that data and, over time, develop intelligence around what matters within an application ecosystem. As Gartner reveals, “AIOps with APM can deliver the actionable insight needed to achieve business outcomes such as improved revenue, cost and risk.”

Consider the process of assessing customer satisfaction based on customer sentiment data and related service desk data. Without using both AIOps and APM, infrastructure and operations (I&O) leaders might come to the conclusion that customers are delighted based on fast page load times. But by using AI to also ingest and analyze data from order management and customer service applications, I&O leaders can find correlations between IT metrics and business data such as revenue or customer retention. This level of insight offered by AIOps allows business leaders to make informed decisions and prioritize actions that will quickly improve customer satisfaction and, ultimately, the bottom line.

Applying AIOps to APM

Here are three ways I&O leaders can leverage AIOps together with APM to achieve incremental benefits—the step-by-step technical strategies for which can be found in Gartner’s new report:

1. Map application performance metrics to business objectives by using AIOps to detect unsuspected dependencies.

AIOps can be used to help measure IT’s activities in terms of benefits to the business—such as an increase in orders or improved customer satisfaction. To do this, I&O leaders should start by collaborating with key business stakeholders to identify the mission-critical priorities of the business relative to applications. Next, acquire the data supporting the measurement of these selected objectives by capturing the flow of business transactions, such as orders, registrations and renewals, and inspecting their payloads. You can then use AIOps algorithms to detect patterns or clusters in the combined business and IT data, infer relationships, and determine causality.
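As a toy illustration of relating business and IT data, a Pearson correlation between two hypothetical metric streams—orders per minute and average page response time—can surface a candidate dependency worth investigating. The metric names and values below are invented for illustration, not drawn from the Gartner report:

```python
# Hypothetical sketch: correlating a business metric (orders/min) with an
# IT metric (avg response time, ms) to surface a candidate dependency.
# Metric names and values are illustrative only.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

orders_per_min   = [120, 115, 118, 90, 60, 55, 110, 119]
response_time_ms = [210, 220, 215, 480, 900, 950, 230, 212]

r = pearson(orders_per_min, response_time_ms)
if r < -0.7:
    print(f"strong inverse relationship (r={r:.2f}): "
          "slow pages may be suppressing orders")
```

A real AIOps platform would run this kind of analysis continuously across thousands of metric pairs; the point here is only the shape of the inference, not the mechanism.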

2. Expand the ability to support prediction by using AIOps to forecast high probability future problems.

“AIOps provides insight into future events using its ability to extrapolate what is likely to happen next, enabling I&O leaders to take action in order to prevent impact,” Gartner states. As such, I&O leaders should take advantage of the many ways machine learning algorithms can provide value: predicting trends, detecting anomalies, determining causality and classifying data. Use AIOps algorithms to predict future values of time-series data such as end-user response time, engage in root-cause analysis of predicted issues to determine the true fault, and take preventive measures to avert the impact of predicted problems.
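One minimal version of the forecasting idea is a least-squares trend extrapolated forward to flag a predicted threshold breach before it happens. The response-time series and the 500 ms threshold below are assumed values for the sketch:

```python
# Illustrative sketch: fit a least-squares line to a response-time series
# and extrapolate it forward to flag a predicted SLA breach. The series
# and the 500 ms threshold are made-up values.

def linear_forecast(series, steps_ahead):
    n = len(series)
    xs = range(n)
    mx = (n - 1) / 2
    my = sum(series) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, series)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return intercept + slope * (n - 1 + steps_ahead)

response_ms = [200, 210, 225, 250, 280, 315, 355]   # creeping upward
predicted = linear_forecast(response_ms, steps_ahead=10)
if predicted > 500:
    print(f"predicted breach: ~{predicted:.0f} ms within 10 intervals")
```

Production AIOps systems use far richer models (seasonality, ensembles, confidence bands), but the workflow—forecast, compare to a threshold, act early—is the same.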

3. Improve business outcomes by applying AIOps to customer and transaction data.

The pattern recognition, advanced analytics and machine learning capabilities of an AIOps solution can extend APM’s historical insight into application availability and performance to provide insight into business impact. By using AIOps’ machine learning capabilities—including anomaly detection, classification, clustering and extrapolation—you can analyze behavior (e.g., customer actions during the order process) and relate that behavior to events afflicting the underlying IT infrastructure. Use the clustering and extrapolation algorithms contained within AIOps to detect unexpected patterns or groupings in your data and predict future outcomes. From there, you can correlate IT problems with changes in business metrics and establish how changes in application performance and availability impact customer sentiment.
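The clustering idea can be sketched with a tiny one-dimensional two-means pass that separates transaction response times into a normal group and a degraded group. The data and the "checkout" label are made up for illustration:

```python
# Toy sketch of the clustering idea: a two-cluster 1-D k-means that splits
# transaction response times (ms) into "normal" and "degraded" groups.
# Data and labels are illustrative only.

def two_means(values, iters=20):
    c1, c2 = min(values), max(values)          # seed centers at the extremes
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

normal, degraded = two_means([180, 195, 210, 205, 1400, 1550, 190, 1480])
print("degraded checkout transactions:", degraded)
# prints: degraded checkout transactions: [1400, 1480, 1550]
```

Once a degraded group is isolated, its members can be joined against business metrics (orders, abandonment) to quantify the impact the paragraph above describes.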

Augmenting APM with Artificial Intelligence

The verdict is in and the evidence is compelling: AIOps is the key to maximizing the business impact of your APM investment.

Using AIOps together with APM can help I&O leaders more effectively align IT and business objectives, expand the ability to support prediction, and improve business performance. Leveraging AIOps can take your APM strategy to the next level, giving IT and business leaders the deep insight they need to make decisions that increase revenue, reduce costs, and lower risk.

Application performance management is already a critical tool that belongs in every IT leader’s toolbox, and AIOps is a game-changing technology set to transform APM and IT operations in a major way. As one analyst recently wrote for Forbes, “AIOps is gearing up to be the next big thing in IT management…When the power of AI is applied to operations, it will redefine the way infrastructure is managed.” In today’s competitive business landscape, companies need an edge to survive and thrive—and it seems APM with AIOps might just be the golden ticket.

Access the Full Research

For more exclusive insights into Gartner’s research on why—and how—you should apply AIOps to APM, download the report Artificial Intelligence for IT Operations Delivers Improved Business Outcomes.

Gartner, Artificial Intelligence for IT Operations Delivers Improved Business Outcomes, Charley Rich, 12 June 2018

AWS re:Invent Recap: Freedom for Builders

AWS re:Invent continues to build momentum. Amazon’s annual user conference, now in its seventh year, took place last week in Las Vegas with more than 50,000 builders on hand to share stories of cloud successes and challenges. The atmosphere was exciting and fast-paced, with Amazon once again raising the bar in the public cloud space. And AWS, which unveiled a plethora of new capabilities before and during the show, continues to delight and innovate at a rapid pace.

Are We Builders?

In his opening keynote, AWS CEO Andy Jassy shared a new theme that reverberated throughout the event: software engineers are now “builders,” not “developers.” Indeed, the internet has recently been ablaze with discussion and debate over this moniker shift.

Whether you see yourself as a builder or developer, Jassy helped categorize builders into different personas for enterprises that either like or dislike guardrails. (In AWS, a guardrail is a high-level rule that prevents deployment of resources that don’t conform to policies, thereby providing ongoing governance for the overall environment.)

If you like guardrails, you could easily implement machine learning labeling with SageMaker. Not a fan of guardrails? AWS still focuses on core compute, which can autoscale, for example, to the exact number of GPUs a machine learning process needs with Amazon Elastic Inference. Still, the question remains: Are we builders or developers? The debate likely won’t end soon, but one thing is certain: the lower bar of entry for application infrastructure gives us the freedom to pick the best building blocks for our AWS environments.

Simplifying Cloud Migration

Depending on where you are in your cloud migration journey, the move to a public cloud vendor could cannibalize your development stack. AWS certainly has lowered the bar of entry, making diverse application infrastructure available to more individuals and organizations. In fact, you potentially could build a robust API without writing a single line of code on AWS.

Freedom to Transform

Another builder-related term popular at re:Invent was “freedom.” The most notable example was the series of database announcements, including Amazon Timestream, a managed time-series database—check out the hashtag #databasefreedom to learn more. Regardless of whether you agree or disagree with Jassy’s “builder” designation, today’s enterprise is ripe for transformation. Partnering with AppDynamics can help you become an Agent of Transformation.

AWS in Your DataCenter

Some big news from the show made migrating your datacenter to AWS even more tangible. AWS has partnered with VMware to introduce AWS Outposts, a combined software and hardware offering that pairs AWS infrastructure with VMware’s established management capabilities—an interesting proposition that addresses the one place AWS was not reaching: the physical datacenter. Microsoft Azure has a similar product in Azure Stack, and innovation in both platforms is sure to drive greater competition.

A Driverless Future

The dream of autonomous vehicles has been building ever since the first automobile hit the road. For re:Invent attendees who used Lyft to get around, this dream is now far more real. Lyft partnered with Aptiv to provide autonomous transport up and down the Vegas Strip, allowing those who opted in to be taken from session to session in a self-driving car.

Amazon Web Services also garnered a lot of buzz by introducing AWS DeepRacer, a scaled-down autonomous model race car. This isn’t just another toy, however. DeepRacer is a great tool for teaching machine learning concepts such as reinforcement learning, and for mastering underlying AWS services such as Amazon SageMaker. Between the many Aptiv-powered autonomous rides and DeepRacer itself, attendees will no doubt be inspired to study up on ML concepts.

AppD at re:Invent


AppDynamics’ Subarno Mukherjee leading an AWS session.

AppDynamics Senior Solutions Architect Subarno Mukherjee led an AWS session called “Five Ways Application Insights Impact Migration Success.” Subarno offered great insights into the importance of the customer and user experience, and how it impacts the success of your public cloud migration.

AppDynamics ran ongoing presentations in our booth throughout the show, covering the shift of cloud workloads to serverless and containers, maturing DevOps capabilities and processes, and the impending shift to AIOps.

Attendees showed a lot of interest in our Lambda Beta Program, too. Feel free to take a look at our program and, if interested, sign up today!

AppD Social Team

The AppDynamics social team was in full swing at re:Invent. From handing out prizes and swag to helping expand the AppDynamics community, we had a great time meeting our current and future customers. AppDynamics and AWS also co-hosted a fantastic happy hour at The Yardbird after Thursday’s sessions.

See You Next Year

We were thrilled to be a part of this year’s re:Invent. The great conversations and shared insights were fantastic. We’re looking forward to expanding our AWS ecosystem and partnerships—and to returning to re:Invent in 2019!


AIOps: A Self-Healing Mentality

When I first watched Minority Report back in 2002, the film’s premise made me optimistic: Crime could be prevented with the help of Precogs, a trio of mutant psychics capable of “previsualizing” crimes and enabling police to stop murderers before they act. What a great utopia!

I quickly realized, however, that this “utopia” was in fact a dystopian nightmare. I left the theater feeling confident that key elements of Minority Report’s bleak future—city-wide placement of iris scanners, for instance—would never come to pass. Fast forward to today, however, and ubiquitous iris-scanning doesn’t seem so far-fetched. Don’t believe me? Simply glance at your smartphone and the device unlocks.

This isn’t dystopian stuff, however. Rather, today’s consumer is enjoying the benefits that machine learning and artificial intelligence provide. From Amazon’s product recommendations to Netflix’s show suggestions to Lyft’s passenger predictions, these services—while not foreseeing crime—greatly enhance the user experience.

The systems that run these next-generation features are vastly complex, ingesting a large corpus of data and continually learning and adapting to help drive different decisions. Similarly, a new enterprise movement is underway to combine machine learning and AI to support IT operations. Gartner calls it “AIOps,” while Forrester favors “Cognitive Operations.”

A Hypothesis-Driven World

Hypothesis-driven analysis is not new to the business world. It impacts the average consumer in many ways, such as when a credit card vendor tweaks its credit-scoring rules to determine who should receive a promotional offer (and you get another packet in your mailbox). Or when the TSA decides to expand or contract its TSA PreCheck program.

Of course, systems with AI/ML are not new to the enterprise. Some parts of the stack, such as intrusion detection, have been using artificial intelligence and machine learning for some time.

But with AIOps, we are entering an age where the entire soup-to-nuts process of measuring user sentiment—everything from A/B testing to canary deployment—can be automated. And while there’s a sharp increase in the number of systems that can take action—CI/CD, IaaS, and container orchestrators are particularly well-suited to instruction—the harder part is drawing the right conclusions, which is where AIOps systems will come into play.

The ability to make dynamic decisions and test multiple hypotheses without administrative intervention is a huge boon to business. In addition to myriad other skills, AIOps platforms could monitor user sentiment in social collaboration tools like Slack, for instance, to determine if some type of action or deeper introspection is required. This action could be something as simple as redeploying with more verbose logging, or tracing for a limited period of time to tune, heal, or even deploy a new version of an application.

AIOps: Precog, But in a Good Way

AIOps and cognitive operations may sound like two more enterprise software buzzwords to bounce around, but their potential should not be dismissed. According to Google’s Site Reliability Engineering workbook, self-healing and auto-healing infrastructures are critically important to the enterprise. What’s important to remember about AIOps and cognitive operations is that they enable self-healing before a problem occurs.

Of course, this new paradigm is no replacement for good development and operation practices. But more often than not, we take on new projects that may be ill-defined, or find ourselves dropped into the middle of a troubled project (or firestorm). In what I call the “fog of development,” no one person has an unobstructed, 360-degree view of the system.

What if the system could deliver automated insights that you could incorporate into your next software release? Having a systematic record of real-world performance and topology—rather than just tribal knowledge—is a huge plus. Much as the security world relies on runtime application self-protection (RASP) platforms to shield an application while engineers fix the underlying issues in future versions, AIOps can keep an application healthy while root causes are addressed properly. In some ways, AIOps and cognitive operations have much in common with the CAMS Model, the core values of the DevOps Movement: culture, automation, measurement and sharing. Wouldn’t it be nice to automate the healing as well?

The Human Masterminds Behind AI at AppDynamics

A renowned data scientist at Bell Laboratories, Tian Bu was ready for a new challenge in early 2015. But of all the places he imagined himself working, Cisco wasn’t on the list. Bu thought of Cisco as a hardware company whose business appeared to lack the very thing that mattered most to him—compelling problems that could be solved through a deep understanding of data. However, at the urging of a friend, Bu agreed to take a closer look.

What he found surprised and intrigued him. Earlier that year, Cisco had begun talking up a more software-centric approach with the announcement of the Cisco ONE software licensing program. But there was a great deal more to the new software-centric strategy than what had been publicly announced. Cisco was planning to disrupt the market and itself with a highly secure, intelligent networking platform designed to continually learn, adapt, automate, and protect. Such a platform would depend on machine learning and artificial intelligence. Cisco was offering Bu an opportunity he had been preparing for his entire career.

Bu had joined the Labs in 2002 as a member of the technical staff after distinguishing himself as a Ph.D. student at the University of Massachusetts, Amherst. With the support of DARPA and in collaboration with the Lawrence Berkeley National Laboratory, he had applied the same tomographic techniques used in medical imaging to the Internet, creating algorithms for predicting bottlenecks and other issues. A paper he co-authored on the project, “Network Tomography on General Topologies,” was published in the Proceedings of the 2002 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems and recognized ten years later with a “Test of Time” award.

In 2007, Bell Labs Ventures approached Bu about creating an internal startup to commercialize his research on analyzing and optimizing wireless networks. Within 18 months, the technology was deployed in several Tier One networks. Momentum continued to build, and the startup was acquired by Alcatel-Lucent’s network intelligence business unit in 2010. In 2012, the Labs lured Bu back with the promise of applied research. For nearly three more years, he delved into questions about wireless networking and data monetization.

Joining Cisco would represent a radical change. If Cisco succeeded in its transformation, Bu would be at the forefront of figuring out how to automate IT and design genuinely self-healing systems. Not all the pieces were in place, but neither Cisco nor Bu could afford to wait. He decided to take a leap of faith and begin building a team.

His first hire was Anne Sauve, an expert in forecasting with a Ph.D. in electrical engineering. Sauve had a unique background, which Bu believed would be useful in finding insights into the millions of metrics per second that were streaming in from modern IT systems. During her doctoral studies at the University of Michigan, Sauve had specialized in statistical signal processing. Since then she had built up six years of experience in bioinformatics and genomics and nine years in medical imaging and 3D modeling. Her last job before joining Cisco was at a startup, where she developed a churn predictor for customer renewals and natural language processing algorithms to derive insights from customer tickets.

“What I liked about Cisco was its culture of rigorous engineering and the fact that it is grounded in reality,” she said. As Sauve dove into her work, producing a time series clustering algorithm to help determine the root cause of performance issues from streaming data and a new ensemble approach to forecasting, a second data scientist named Jiabin Zhao joined the group. An internal transfer from Cisco, Zhao brought more than a decade of experience working with IT data.

When Cisco acquired AppDynamics and Perspica in 2017, the size of the team more than doubled. AppDynamics had two seasoned data scientists: Yuchen Zhao and Yi Hong. Zhao and Hong both had worked for several years applying machine learning to the root cause analysis of problems affecting application performance. Their work included the algorithms that allowed customers to search for the relevant fields that were causing a business transaction to slow down. In addition, Zhao had shared two patents with Arjun Iyer, the senior engineering director, on automating log analysis and anomaly detection.

While AppDynamics’ strength lay in surfacing insights from stored data, Perspica applied machine learning and artificial intelligence to massive amounts of streaming data. Its cloud-based analysis engine could ingest and process millions of data points in real time. It offered the ability to automate threshold management and root cause analysis (RCA) and to predict problems at scale, complementing AppDynamics’ approach to those problems. While the pieces would have to be integrated, together they represented an extremely powerful AI solution.

From Bu’s point of view, the influx of talent from AppD and Perspica was as important as the technology. J.F. Huard, Perspica’s founder, now CTO of Data Science at AppDynamics, and Philip Labo, Perspica’s principal data scientist, were particularly strong additions to the team. Like Bu, Huard had spent time in the early 1990s at Bell Labs while simultaneously earning a doctorate at Columbia University. His research focus in those days was expert systems for network management. After graduating, he pioneered the application of advanced math to provide QoS in programmable networks at a company he co-founded called Xbind. He subsequently started three more companies including one that managed dynamic resource allocation based on game theory and another that focused on predictive analytics. Perspica was Huard’s fifth company.

Years of experience had brought Bu and Huard to the same conclusion: progress in machine learning and AI came from applying the right solution. It was insight and experience that distinguished one data scientist from another.

Labo was a post-doc at Stanford University when he met Huard to interview for a job at Perspica. He remembered how Huard had enthusiastically described a problem and then asked him to solve it. “I was thinking of elaborate solutions based on my work at Stanford,” Labo recalled. “JF was like, ‘No! Principal Component Analysis.’” PCA was a statistical procedure invented in 1901, and Labo was initially unimpressed. But as he thought about it more, he realized PCA represented an elegant and simple solution to the problem Huard had posed.
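A minimal sketch of what makes PCA so appealing here: for two correlated metrics, the leading eigenvector of the covariance matrix—computable in closed form in the 2×2 case—gives the single direction that explains most of the variance. The data points are invented; real PCA on wide telemetry would use a linear-algebra library:

```python
# Closed-form PCA for two dimensions: find the leading eigenvector of the
# 2x2 covariance matrix. Data is made up (e.g., CPU% vs. response time);
# this is a sketch of the idea, not a production implementation.
import math

def leading_component(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Leading eigenvalue of [[sxx, sxy], [sxy, syy]], closed form
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    vx, vy = lam - syy, sxy                  # unnormalized eigenvector
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm), lam / (sxx + syy)  # direction, variance share

# Two correlated metrics: nearly all variance lies along one direction
data = [(10, 100), (20, 210), (30, 290), (40, 410), (50, 500)]
direction, explained = leading_component(data)
print(f"principal direction {direction}, explains {explained:.0%} of variance")
```

When two metrics move together like this, one principal component captures almost everything—which is why a 1901-vintage technique can still collapse high-dimensional telemetry into a few meaningful signals.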

Labo was drawn to the opportunity to put his background in applied math to work solving real-world problems for customers. In graduate school he had developed expertise in real-time multivariate analysis. Though the focus of his work was change point detection in yeast population evolution, the underlying ideas were curiously applicable to multivariate anomaly detection in computer data. “There’s something really funny about math in general and applied math in particular,” he said. “It just kind of works in a lot of different situations.”

Bu said Labo’s training has indeed been useful as the team has doubled down on multivariate anomaly detection. Overall, the diversity of backgrounds and depth of experience ensures that AppD will not blindly apply AI, but will choose the most appropriate solutions—ones that are both high quality and efficient to implement.

Given an industry shortage of senior data scientists, Bu said he feels particularly lucky to have a team that has spent years applying machine learning and AI to the entire stack—from applications to the network and beyond. “The strength of the team is that we are not just data scientists who know our math, we are also very familiar with the IT analytics domain,” he said.

The automation of IT at AppDynamics and Cisco is well on its way, Bu noted, with the right people applying the right solutions to important industry problems. For now, the team is focused on time series analysis, classification, and clustering. AppDynamics will be talking more in the near future about how customers can leverage their progress to spot problems sooner, find the root cause faster, and reduce system downtime.

Until then? “We are full speed ahead,” Bu said.


8 Reasons Enterprises Are Slow to Adopt Machine Learning

As CTO of Data Science at AppDynamics, and in my previous role as co-founder of Perspica, I’ve seen machine learning make huge strides in recent years. ML has helped Netflix perfect binge watching, taught Siri how to sound more human and made Amazon Echo a fashion consultant. But when it comes to machine learning use cases for the enterprise, it gets a whole lot more complicated. It’s easy to apply an algorithm to a one-off use case, but comprehensive enterprise applications of machine learning don’t exist today.

Here are the top 8 challenges standing in the way of widespread adoption of machine learning in the enterprise.

1) Confusion Over What Constitutes Machine Learning

Part of the problem is a lack of understanding around what machine learning is. Machine learning is an application or subset of AI, which is generally thought of as higher-order decision-making intelligence.

Machine learning is really about applying mathematics to different domains. It locates meaning within extremely large volumes of data by canceling out the noise. It uses algorithms to parse the data and draw conclusions about it, such as what constitutes normal behavior.
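A bare-bones sketch of "learning what constitutes normal behavior": fit a mean and standard deviation to historical samples, then flag anything more than three standard deviations out. Production systems use far richer models; the numbers here are invented:

```python
# Minimal "normal behavior" baseline: learn mean and standard deviation
# from history, then flag values outside a 3-sigma band. Illustrative
# numbers only; real systems model seasonality, trends, and more.
import statistics

def fit_baseline(history):
    return statistics.mean(history), statistics.stdev(history)

def is_anomaly(value, mean, stdev, k=3.0):
    return abs(value - mean) > k * stdev

history = [101, 98, 103, 99, 100, 102, 97, 100]
mean, stdev = fit_baseline(history)
print(is_anomaly(100, mean, stdev))  # False: within the learned band
print(is_anomaly(180, mean, stdev))  # True: far outside normal
```

The "canceling out the noise" in the paragraph above is exactly what the learned band does: ordinary fluctuation stays silent, and only genuine departures surface.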

2) Uncertainty About What Machine Learning Can Do

Machine-learning algorithms don’t enter chess tournaments. What they are really good at is adapting to changing systems without human intervention while continuing to differentiate between expected and anomalous behavior. This makes machine learning useful in all kinds of applications—think everything from security to health care—as well as classification and recommendation engines, and voice and image identification systems.

Consumers interact daily with dozens of machine learning systems including Google Search, Google ads, Facebook ads, Siri and Alexa, as well as virtually any online product recommendation engine from Amazon to Netflix. The challenge for enterprises is understanding how machine learning can add value to their business.

3) Getting Started Can Be Daunting

Machine learning is usually introduced into an enterprise in one of two ways. The first is that one or two employees start applying machine learning to gain insight into data they already have access to. This requires a certain amount of expertise in data science and domain knowledge—skills that are in short supply.

The second is by purchasing a solution, such as security software or an application performance management solution, that uses machine learning. This is by far the easiest way to begin to realize some of the benefits of machine learning, but the downside is that the enterprise is dependent on the vendor and is not developing its own machine learning capabilities.

4) The Challenge of Data Preparation

Machine learning can sound deceptively simple. It’s easy to assume that all you have to do is collect the data and run it through some algorithms. The reality is very different. Once you collect the data, you have to aggregate it. You need to determine if there are any problems with it. Your algorithm needs to be able to adapt to missing data, outlying data, garbage data, and data that’s out of sequence.
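To make those data-quality issues concrete, here is a hypothetical cleanup pass over raw metric samples exhibiting out-of-order timestamps, a missing value, and a garbage reading. The field names and the forward-fill policy are illustrative choices, not a prescribed method:

```python
# Hypothetical data-preparation pass: re-order out-of-sequence samples,
# forward-fill missing values, and discard-or-fill garbage readings.
# Field names and the fill policy are illustrative assumptions.

RAW = [
    {"ts": 3, "value": 210.0},
    {"ts": 1, "value": 205.0},
    {"ts": 2, "value": None},      # missing reading
    {"ts": 4, "value": -1.0},      # garbage: negative latency
    {"ts": 5, "value": 215.0},
]

def prepare(samples):
    ordered = sorted(samples, key=lambda s: s["ts"])   # fix out-of-sequence data
    cleaned, last_good = [], None
    for s in ordered:
        v = s["value"]
        if v is None or v < 0:                         # missing or garbage
            v = last_good                              # forward-fill from history
            if v is None:
                continue                               # nothing to fill with yet
        last_good = v
        cleaned.append({"ts": s["ts"], "value": v})
    return cleaned

print([s["value"] for s in prepare(RAW)])  # [205.0, 205.0, 210.0, 210.0, 215.0]
```

Even this toy pass involves three distinct policy decisions (ordering, filling, rejecting), which is the real point: preparation is judgment-laden work, not a mechanical prelude.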

5) The Lack of Public Labelled Datasets

In order for an algorithm to make sense of a collection of data points, it needs to understand what those points represent. In other words, it needs to be able to apply pre-established labels to the data.

The availability of public labelled datasets would make it much easier for companies to get started with machine learning. Unfortunately, these do not yet exist, and without them, most companies are looking at a “cold start.”

6) The Need for Domain Knowledge

At its best, machine learning represents the perfect marriage between an algorithm and a problem. This means domain knowledge is a prerequisite for effective machine learning, but there is no off-the-shelf way to obtain domain knowledge. It is built up in organizations over time and includes not just the inner workings of specific companies and industries, but the IT systems they use and the data that is generated by them.

7) Hiring Brilliant Data Scientists Is Not a Panacea

Most data scientists are mathematicians. Depending on their previous job experience, they may have zero domain knowledge that is relevant to their employer’s business. They need to be paired up with analysts and domain experts, which increases the cost of any machine learning project. And these people are hard to find and in high demand. We are lucky at AppDynamics to have a team of data scientists with broad experience in multiple fields who are doing ground-breaking work.

8) Machine Learning Lacks a Shared Vocabulary

One of the challenges encountered by organizations with successful machine learning initiatives is the lack of conventions around communicating findings. They end up with silos of people, each with their own definition of input and their own approach to sampling data. Consequently, they end up with wildly different results. This makes it difficult to inspire confidence in machine learning initiatives and will slow adoption until it is addressed.

At AppDynamics we’re excited to apply our machine learning expertise to solving enterprise IT problems. And you may be interested in my insights on how the arrival of AI and machine learning in the enterprise will have a profound impact on IT departments.

AI’s Arrival in the Enterprise Will Have Profound Implications for IT

At the end of the twentieth century, in the wake of Deep Blue’s triumph over Garry Kasparov, it was popular for computer scientists to speculate about when human beings would begin to interact with artificial intelligence. It was generally believed that machines with true reasoning capabilities were decades away. In an interview with author Michio Kaku, published in Visions: How Science Will Revolutionize the 21st Century, AI expert and Carnegie Mellon professor Hans Moravec predicted that robots would be able to model the world and anticipate the consequences of different actions sometime between 2020 and 2030.

Twenty years later, we are no longer wondering about how artificial intelligence (AI) will first appear in our lives. It has arrived in the form of virtual assistants like Alexa and self-driving cars. But this can give a misleading impression of what we can expect from AI in the next few years. AI software is not going to evolve human-like reasoning capabilities anytime soon.

Indeed, most of what is described as AI is really machine-learning algorithms that act largely as detectors. These algorithms analyze massive amounts of data and learn to discriminate between normal and anomalous behavior. AI, where it exists, is similar to a decision-support system for reacting to behavior as it changes. But even in these early stages, machine learning and AI are changing the game for IT operations. In the next few years, the impact of machine learning and AI will be profound.

The problem enterprises are facing is that computing environments have simply grown too large and too complex for human beings to monitor alone. To effectively monitor enterprise systems, IT must track millions of metrics per second. This is not a challenge that can be met by putting another screen on the wall of the network operations center. There are already too many screens, and just contemplating the number of screens that would be required is overwhelming. Even more daunting is figuring out the five or ten metrics that matter the most out of five or ten million as every new millisecond brings the system to a new dynamic state.

The company I founded, Perspica, which was acquired last year by AppDynamics/Cisco, solved this problem for our customers by applying machine learning and AI to massive amounts of streaming telemetry data generated by applications and IT infrastructure. What Perspica did was surface all the relevant metrics and then use those metrics to accelerate root cause analysis and reduce the mean time to repair. But Perspica’s ability to grow beyond that was limited by the data that we had access to. In fact, everyone involved in machine learning and AI at that time faced the same limitation. We lacked a source of truth on which to train our algorithms to go beyond what they had already achieved.

But this limitation is rapidly being overcome. Increasingly, data scientists are gaining access to new sets of what we call labelled data—sets of numbers or strings of text that a computer can understand as a true representation of something else. Data scientists who work with IT data, in particular, are finding that enough labelled data exists that we can realistically begin talking about automating large parts of IT in the next two or three years. And that is only the beginning.

In the future, every enterprise is going to have some combination of machine learning and AI to monitor its computing environments and, equally important, to understand how changes to those environments affect business goals. As these systems are deployed, they will become smarter and more sophisticated. Every application, every server, and every port on that server will have its own unique AI model, which means that if you have 50 applications running on 10,000 servers, you will need to train 500,000 models. This is not something that will be created overnight. But once these models are in place, self-healing systems will become standard.

We’ll see AI playing a role in everyday IT and business events. For example, imagine a large airline that is planning on holding a worldwide promotion. The airline’s IT department rolls out new application code as a canary deployment. But the monitoring system soon reveals the new release is performing worse than the old code. While the business owner and IT staff are realizing that the code push has failed, the airline’s AI system is determining the root cause is a disk space issue and taking steps to address the problem.
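A toy version of the canary gate in that scenario might look like the sketch below; the tolerance and metric names are illustrative, not how any particular product decides:

```python
def canary_is_healthy(baseline_latencies, canary_latencies, tolerance=1.2):
    """Toy canary gate: the new release passes only if its mean latency
    stays within `tolerance` times the baseline release's mean latency."""
    base_mean = sum(baseline_latencies) / len(baseline_latencies)
    canary_mean = sum(canary_latencies) / len(canary_latencies)
    return canary_mean <= base_mean * tolerance

# The old code averages ~100 ms; the new release averages ~180 ms.
print(canary_is_healthy([95, 100, 105], [170, 180, 190]))  # False: roll back
```

A production gate would compare error rates and distributions, not just means, but the principle of judging new code against a live baseline is the same.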

For many years, Perspica and others were doing detection. Today, as we broaden the libraries, increase the sets of problems that have solutions and bundle those solutions together, we’ll be able to start doing remediation. Moravec, it seemed, had the timeline correct.

What will happen in the next twenty years? The media sometimes promotes “fear of AI.” But I see AI making business more profitable and people more productive. It will improve service quality, reliability, and availability not only in IT but across all industries. And in this way, AI will not only have profound implications for IT. It is also bound to improve the human condition.

Accelerate Your Digital Business with AppDynamics Winter ‘17 Product Release

Last month at AppD Summit New York, we unveiled the latest innovations in our Business iQ and App iQ platforms, paving the way for a new era of the CIO and digital business. Delivering on this vision, we’re excited to announce the general availability of AppDynamics’ Winter ‘17 Release for our customers.

As application and business success become indistinguishable, enterprises are increasing their investment in digital initiatives. According to Gartner, 71% of enterprises are actively implementing digital strategies, and IDC predicts that companies will spend $1.2 trillion on their digital transformation in 2017 alone.

But without effective tools to correlate application and business performance – and without end-to-end visibility across customer touchpoints, application code, infrastructure, and network – customer experiences and employee productivity degrade, and executives can’t analyze or justify technology investments. In fact, according to McKinsey, the digital promise still seems more of a hope than a reality, with only 12% of technology and C-level executives confident that IT organizations have been effective in this shift.

Winter ‘17 Release is Here

Business iQ just got better. Bridging the gap between the app and the business, BiQ capabilities have expanded to include:

Business Journeys

With AppDynamics Business Journeys, application teams can link multiple, distributed business events into a single process that reflects the way customers interact with the business. Business events can include transaction, log, mobile, browser, synthetics, or custom events, and journeys can be long-running, spanning hours to days.

Application teams can create performance thresholds and quickly visualize where performance issues are impacting the customer experience. KPIs for each Business Journey inform technology investments and effectively prioritize code development and release.
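One way to picture how such linking could work, purely as an illustration with hypothetical field names, is grouping events from different sources by a shared correlation key and ordering each journey by time:

```python
from collections import defaultdict

def build_journeys(events, key="loan_id"):
    """Toy sketch of linking distributed business events into journeys:
    group events by a shared correlation key, then order by timestamp.
    The field names (loan_id, timestamp, step) are hypothetical."""
    journeys = defaultdict(list)
    for event in events:
        journeys[event[key]].append(event)
    for steps in journeys.values():
        steps.sort(key=lambda e: e["timestamp"])
    return dict(journeys)

events = [
    {"loan_id": "L-1", "timestamp": 3, "source": "log", "step": "credit_check"},
    {"loan_id": "L-1", "timestamp": 1, "source": "transaction", "step": "application"},
    {"loan_id": "L-1", "timestamp": 7, "source": "transaction", "step": "approval"},
]
steps = [e["step"] for e in build_journeys(events)["L-1"]]
print(steps)  # ['application', 'credit_check', 'approval']
```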

In the two figures below, you can see how easy it is to set up a new Business Journey for loan approvals and visualize the impact of delays through the lens of the business.


Fig 1: Author an end-to-end Business Journey by joining multiple distributed events.


Fig 2: Quickly and easily create custom dashboards visualizing business performance.

Experience Level Management (XLM)

With XLM, enterprises can establish custom service-level thresholds by customer segment, location, or device. For example, the CIO of a major retailer may deliver tailored experiences to top customers by setting performance thresholds across customer channels — including website, mobile apps, in-store wireless, and in-store checkout. XLM also provides an immutable audit trail for service-level agreements with your customers or internal business units. The product image below shows the service-level setup for a connected streaming device, giving an instant view of how services are performing against set SLAs.
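The shape of per-segment thresholds can be sketched in a few lines; the segment names and limits here are invented for illustration, not real XLM configuration:

```python
# Hypothetical response-time thresholds (ms) by (segment, channel).
THRESHOLDS = {
    ("top-tier", "mobile"): 200,
    ("top-tier", "web"): 300,
    ("standard", "web"): 800,
}
DEFAULT_THRESHOLD = 1000  # fallback for unlisted segment/channel pairs

def meets_service_level(segment, channel, response_ms):
    """Check one measurement against its segment/channel threshold."""
    limit = THRESHOLDS.get((segment, channel), DEFAULT_THRESHOLD)
    return response_ms <= limit

print(meets_service_level("top-tier", "mobile", 150))  # True: within SLA
print(meets_service_level("top-tier", "mobile", 450))  # False: SLA breach
```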


Fig 3: Service levels setup for a connected streaming device.

Network Visibility

Application developers, IT Ops, and network teams often work in silos, using a myriad of different monitoring tools. To troubleshoot application performance issues, war rooms are created, and the lack of a common language and of visibility across different tools results in finger-pointing, endless debates, and slower Mean Time to Resolution (MTTR).

With the introduction of AppDynamics Network Visibility, a capability AppDynamics is uniquely positioned to deliver now as part of Cisco, enterprises will be able to understand the impact the network is having on application and business performance. Network performance measurements are automatically correlated with application performance in the context of the Business Transaction. IT teams will be able to triage network issues from a single pane of glass and provide the right information to network teams before end-user experience is impacted. Finally, an answer to end-to-end visibility from customer, to code, to network is here.

AppDynamics automatically discovers network devices such as reverse proxy load balancers deployed on-premises and in cloud environments and eliminates the need to use expensive network tools such as SPAN/TAP to capture and analyze network traffic.

The animation below shows out-of-box visibility into network flow maps, network metrics such as latency, throughput, retransmission rates, and critical errors, enabling IT Ops to quickly identify and isolate root cause without the need to engage network teams.


Fig 4: Correlated and out-of-box view of network performance in context of application performance.

AppDynamics IoT

IoT devices create another channel to engage with customers, and if properly measured and optimized, can create game-changing business benefits. With new IoT visibility, businesses can convert rich and invaluable insights into consumer behavior, buying patterns, and business impacts. IoT visibility includes:

Device analytics — Together with Business iQ, IoT visibility provides unprecedented insight into how IoT devices are driving business impact. And because these insights are delivered through a single platform, IoT visibility is the first and only solution that maps and correlates entire customer journeys — from the device, to customer touchpoint, to business conversions.

Device application visibility and troubleshooting — AppDynamics’ new IoT visibility provides an aggregated view into device uptime, version status, and performance, enabling drill-down views into the device to simplify the troubleshooting of IoT applications. The screenshot below shows a list view of all active devices. A simple double-click on a specific device takes you to the device details.

Custom dashboards — Every company measures success differently. With custom dashboards in IoT visibility, companies from any vertical can quickly build new visualizations to measure the business impact of IoT devices — from the revenue impact of a slow checkout for a brick and mortar retailer, to the customer impact of a software change in a connected car.


Fig 5: Consolidated list view of all active smart-shelf IoT devices and key KPIs.

Synthetic Private Agent

AppDynamics Winter ‘17 Release brings Browser Synthetic Monitoring to your internal network. By running Synthetic Private Agent on-premises, you can monitor the availability and performance of internal websites and services that aren’t accessible from the public Internet. You can also test specific locations within your company and set alerts when performance issues occur and fix them before end-user experience is impacted.
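In spirit, a synthetic probe boils down to timing a request and classifying the result. The sketch below illustrates that idea with Python's standard library and a hypothetical internal URL; it is not the agent's actual implementation:

```python
import time
import urllib.request

def synthetic_check(url, timeout=5.0, slow_ms=2000):
    """Toy synthetic probe: fetch a URL, time it, and classify the result.
    Returns a dict resembling what a monitoring agent might report."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:
        return {"url": url, "ok": False, "error": str(exc)}
    elapsed_ms = (time.monotonic() - start) * 1000
    return {"url": url, "ok": status == 200 and elapsed_ms < slow_ms,
            "status": status, "elapsed_ms": round(elapsed_ms, 1)}

# Probe a hypothetical internal endpoint and alert on failure or slowness:
result = synthetic_check("http://intranet.example.local/health")
if not result["ok"]:
    print("ALERT:", result)
```

A real private agent would drive a full browser for page-level timings and schedule probes from multiple locations, but the check-then-alert loop is the essence.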

Cross-Controller Federation

As application teams adopt microservices architectures, scalability requirements have exploded, demanding APM that scales with them. With Cross-Controller Federation, AppDynamics is taking unified monitoring to the next level. Our customers can achieve limitless scalability and the flexibility to deploy application components across multiple public and private clouds.

Because controllers can participate in a federation, only with AppDynamics do customers get complete correlated visibility and quick drill-down to the line of code, irrespective of where application components and controllers are deployed. Another important use case is keeping APM data isolated across multiple controllers – for compliance, architecture, or business reasons – while still maintaining correlated visibility.

KPI Analyzer

KPI Analyzer applies machine learning to automate root cause analysis. With KPI Analyzer, customers can automatically isolate the metrics that are the most likely contributors to poor performance and identify each metric’s likely degree of impact on the KPI. KPI Analyzer makes troubleshooting as simple as clicking a prompt to surface the underlying issue most likely to be the root cause of degraded performance.

The following figure shows KPI Analyzer in action. KPIs such as average response time are displayed with metrics that are automatically identified as the root cause and scored in ranked order for quick resolution.
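One simple way to approximate this kind of ranking, shown here purely as an illustration of the idea rather than the product's actual algorithm, is to score each candidate metric by its correlation with the KPI:

```python
import math

def rank_root_causes(kpi, metrics):
    """Toy KPI analysis: score each candidate metric by the absolute
    Pearson correlation of its series with the KPI series, then rank."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy) if sx and sy else 0.0
    scores = {name: abs(pearson(kpi, series)) for name, series in metrics.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

response_time = [100, 110, 250, 400, 390]       # KPI: avg response time, ms
candidates = {
    "gc_pause_ms": [5, 6, 80, 150, 140],        # tracks the KPI closely
    "cpu_percent": [40, 42, 41, 43, 40],        # roughly flat
}
ranking = rank_root_causes(response_time, candidates)
print(ranking[0][0])  # gc_pause_ms scores highest
```

Correlation alone cannot prove causation, which is why production systems layer in topology and temporal ordering, but a ranked shortlist like this is what turns millions of metrics into a handful worth investigating.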


Fig 6: Key application KPIs and automatically detected root causes in ranked order.

Learn More

AppDynamics’ Winter ‘17 Release is rich with other important features such as Universal Agent to simplify agent installation and configuration, Enterprise Console for streamlined controller lifecycle management, and Node.js flame graph for deeper visibility, among several other features.

Join us for a webinar on November 16th to get an in-depth look into the latest innovations and features in our Winter ‘17 Release. You can also get started with the free trial of AppDynamics Winter ‘17 today!

A Look Back Before Leaping Forward: How We Got Here and Where We’re Heading

BlackBerrys, Blockbusters and AS/400s ruled supreme when we started building the company nine years ago. At that time, most phones didn’t have GPS, most shopping happened in person and most computing happened in one place. It seemed like a much simpler time, right? Yet massive disruption was on the horizon that would completely shake up how people used and thought about technology.

Brace yourself – SaaS is coming. This radical movement proved that abstracting complexity can not only free businesses from fretting over the nuts and bolts of infrastructure, but also free them to think bigger about what they can accomplish through technology. This was a huge wake-up call for our entire industry; it became my obsession and our inspiration for AppDynamics.

What if the principles of SaaS were applied to monitoring solutions? What if stripping away complexities of reporting and alerting about applications could free businesses in a similar way?

After many late nights working out how software could be more of a business enabler and less of a management burden came the invention of our machine-learning-powered Business Transaction, the foundation of AppDynamics.

It’s hard enough for businesses to stay on top of the latest trends and shifting consumer needs; managing applications should come second to hitting business goals. So we engineered our product with business performance as the top priority. By pairing the right business metrics with the noise-cancelling abilities of machine learning, the root cause of business-impacting problems is brought to the forefront, while many of the intricacies of related symptoms are collapsed underneath. As a result, enterprises get a straightforward, dynamic baseline that intelligently evolves with the business. And, for the first time, the world’s most complex systems can become real competitive advantages.

As time went on, digital strategies became synonymous with business strategies, and consumer expectations rose to the point where “next-day” isn’t fast enough. To keep up with the pace set by titans like Apple, Google and Amazon, enterprises entered uncharted territory in cloud, DevOps and IoT, putting new levels of strain on technical teams. On top of that, these developers, IT pros and CIOs were challenged to justify the changes to business colleagues asking whether they were worth the time and money.

The evolution of enterprises’ needs has always been the fuel for our innovations, and today’s announcement is no exception. With systems continuing to sprawl, businesses need a way to make sense of it all – from the depths of networking to the edge of multi-cloud. So we’re widening our scope to capture exactly how devices and the network impact the business. Another side effect of distributed systems is blind spots in customer interactions, which make it harder for CIOs to map customer journeys. To provide a more complete view, our vision for the next generation of Business iQ is to link distributed business events into a fuller picture that helps businesses stay competitive on customer experience.

With the unrelenting rush of data coming in from countless sources, we see machine learning as the next big disruptor on the horizon. Machine learning, which sounded like science fiction not too long ago, has reached critical mass in its ability to spot patterns for predictive analytics and automation. It can also be found in our latest announcement, simplifying troubleshooting to a click. With new devices coming out daily, we don’t see data slowing down anytime soon, so expect more developments in machine learning from us that help enterprises achieve the scale and speed needed to take on whatever is next in this on-demand, data-driven world.