A Look Back: Evolven Software at AppD Summit New York

One of the highlights of 2017 for AppDynamics was AppD Summit New York. We took over NYC’s Pier 36 with over 1200 attendees and made big announcements around the next generation of Business iQ, new machine learning capabilities, and upcoming IoT and network visibility.

We were also able to share the spotlight with some amazing partners, including Evolven Software, who was an exhibitor at the event. We’re excited to share this Q&A with Evolven’s CEO, Sasha Gilenson, as we reflect back on the event. 

You say that “change is the root of all evil” – what do you mean by that? 

SG: Gartner research states that 85% of performance and availability issues are caused by some kind of change: changes in configuration, data, code, workload, or infrastructure. Unless it's a hardware failure, there is almost always a change that triggered the incident. Evolven customers also confirm these statistics. Everyone agrees that one of the first questions in an incident war room is "What changed?" The challenge is that many of the changes are unknown to the IT organization. Identifying them, and then deciding which change is the culprit, takes a lot of time and effort.

What value do you deliver to AppDynamics users? 

SG: AppDynamics provides a fantastic solution for detecting performance and availability issues and narrowing an issue down to the exact service call, function, or query that is taking too much time or failing. In other words, AppDynamics tells you "what's wrong." However, in many cases the "what's changed?" question still stands: why is this function or query slow now when it worked fine just a few hours ago? Is it a change in the function itself, in the call path, in the data the function expects, in the workload, or in the environment configuration? Evolven automatically detects all these changes as they happen, correlates them with AppDynamics findings (health rule violations, performance KPIs, transaction topology), and identifies the changes that are the most probable root cause of the incident.
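The change-to-incident correlation described here can be sketched, in a deliberately simplified form, as a time-window ranking. This is a hypothetical illustration with invented change records, not Evolven's actual algorithm, which draws on far richer signals (risk scoring, topology, machine learning):

```javascript
// Hypothetical sketch: rank detected changes by proximity to an incident.
// Data and function names are invented for illustration only.

function rankChangesForIncident(incidentTime, changes, windowMs = 24 * 60 * 60 * 1000) {
  return changes
    .filter(c => c.time <= incidentTime && incidentTime - c.time <= windowMs)
    .sort((a, b) => (incidentTime - a.time) - (incidentTime - b.time)); // most recent first
}

const incident = Date.parse('2017-10-19T09:00:00Z');
const changes = [
  { id: 'config-tweak', time: Date.parse('2017-10-18T23:30:00Z') },
  { id: 'old-release',  time: Date.parse('2017-10-10T12:00:00Z') }, // outside the window
  { id: 'sp-recompile', time: Date.parse('2017-10-19T02:00:00Z') },
];

const suspects = rankChangesForIncident(incident, changes);
console.log(suspects.map(c => c.id)); // ['sp-recompile', 'config-tweak']
```

Even this toy version shows the point of the integration: filtering thousands of recorded changes down to the handful that are plausibly connected to an AppDynamics-detected incident.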

We asked Jonah Kowall, AppDynamics’ VP Market Development and Insights, to share his insights on the joint value proposition.

Jonah: We've seen different practices within the silos of IT Operations: availability and performance as one practice, and change and configuration management as another. Oftentimes configuration management is tied to security or other initiatives. The result is that monitoring tools can isolate the root cause of a failure or slowdown from a technical perspective: for example, poorly performing code, slow SQL, malformed data coming into an API call, failed hardware, or degraded hardware. By combining actual enterprise changes, configuration management, and APM, that isolation can be taken to the level of what changed, who changed it, and which change management ticket, build, or release was linked. This is much more powerful, especially when the technology can support both legacy enterprise technologies and modern technology stacks.

What is unique about Evolven and its approach?

SG: The main goal of IT is to deliver and operate changes rapidly, safely, and securely in support of the organization's business requirements. To support this goal, IT implements tools to manage and automate the processes that drive change, along with tools to monitor the symptoms of issues caused by wrong changes. However, a critical component is still missing today: a technology for detecting and analyzing what actually changed. Evolven Change Analytics is the only solution that automatically detects actual changes, at the most granular level, across end-to-end IT environments, estimates the risk of these changes, and correlates them with the issues under investigation. And Evolven's solution does all this across data centers, virtualized environments, and private and public clouds.

The Summit featured a session on machine learning (ML). How does Evolven use ML to benefit its customers?

SG: Machine learning is the heart of Evolven's analytics engine. Thousands and thousands of granular changes are implemented on a daily basis in IT environments. Just knowing what changed is not enough to prevent incidents and problems or to investigate them faster. IT specialists cannot spend their time going through all this data looking for relevant information.

This is the goal of our machine learning analytics: to identify the changes that put the performance and availability of business systems at risk, and to point directly to the changes that caused the issues that need to be investigated. Evolven also uses machine learning to correlate the changes we detect with other data from existing IT tools, like AppDynamics.

Can you share some examples of how existing customers are using Evolven?

SG: Our customers apply Evolven in a wide range of scenarios, but most use it to accelerate issue investigation and to prevent issues. For example, we have numerous customers using Evolven to analyze the consistency of application and environment configuration across load-balanced and clustered servers. Detecting and remediating risky inconsistencies improves system reliability and resiliency. Let me illustrate the incident investigation process using the integration between AppDynamics and Evolven, with a few technical examples of real issues solved by our customers.

Example 1:

AppDynamics detects a transaction taking more than 5 seconds to respond. It rapidly narrows the issue down to a JDBC query that looks legitimate. Without Evolven, you would need a DBA to debug the query. The DBA might need to evaluate the schema, the database configuration, and even the environment configuration. This can take a lot of time.

But the query has worked before. What changed?

Evolven instantly presents a change deployed the night before in a stored procedure that is invoked by the query: for some reason, a developer added an option to recompile one of the statements in the procedure. The issue was not caught in testing because test data sets are smaller than production ones.

Example 2:

AppDynamics detects that a significant portion of users executing a specific transaction wait for more than 10 seconds. AppDynamics points to a third-party component that slows down the transaction. The component is a black box. Without Evolven, production support would need to escalate the issue to the vendor supplying the component.

But only some of the users experience poor response times. What is different?

Evolven instantly compares the virtual hosts running the third-party component and detects a difference in the patch level of the .NET Framework across the servers. The component relies on .NET. Evidently, some of the virtual hosts are still based on an older image that was never patched.
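The host comparison in this example can be sketched as a simple configuration-drift check. The host names, attribute keys, and patch values below are invented for illustration; a real change analytics product compares thousands of attributes per host:

```javascript
// Hypothetical sketch of Example 2: compare configuration snapshots across
// load-balanced hosts and report only the attributes whose values differ.

function findDrift(snapshots) {
  const keys = new Set(Object.values(snapshots).flatMap(Object.keys));
  const drift = {};
  for (const key of keys) {
    const values = Object.entries(snapshots).map(([host, cfg]) => [host, cfg[key]]);
    // An attribute "drifts" when more than one distinct value exists.
    if (new Set(values.map(([, v]) => v)).size > 1) {
      drift[key] = Object.fromEntries(values);
    }
  }
  return drift;
}

const snapshots = {
  'vhost-01': { dotNetPatch: '4.7.1', componentVersion: '2.3' },
  'vhost-02': { dotNetPatch: '4.7.1', componentVersion: '2.3' },
  'vhost-03': { dotNetPatch: '4.6.2', componentVersion: '2.3' }, // stale image
};

console.log(findDrift(snapshots));
// -> { dotNetPatch: { 'vhost-01': '4.7.1', 'vhost-02': '4.7.1', 'vhost-03': '4.6.2' } }
```

The identical `componentVersion` is filtered out, while the mismatched .NET patch level surfaces immediately, which is exactly the kind of difference that explains why only a subset of users sees slow responses.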

What was your experience from the NYC Summit?

SG: It was a unique event in many senses. The open layout of the venue enabled productive and quite intensive interaction between the attendees, the AppDynamics team, and partners like Evolven. The list of attendees and the companies they represented was impressive. This was the first time Evolven was able to introduce its unique approach, its technology, and the value of the integration with AppDynamics to such a wide range of AppDynamics users. I was excited to find that everyone who spoke with me or my team confirmed the value and relevance of our solution and the integration for their organizations.

Check out highlights of AppD Summit New York here.

NYC Summit Exhibitor Focus: xMatters

In anticipation of our New York City Summit Event on October 19th, we’re highlighting some of the great partners who will be in attendance in this Exhibitor Series. We’re excited to share our latest Q&A with Abbas Haider Ali, CTO of xMatters. 

AppD: What issues was xMatters founded to address?

AHA: The core principles behind our product are built around making the lives of IT teams better by automating collaboration and improving handling time for all manner of events: failed builds, incidents, major outages, service disruptions, application performance issues, and more. Our market focus has been on enterprises that have to support both autonomy and velocity for the teams leading their company's digital transformation, while meeting the compliance, governance, and collaboration requirements that come into play when you have tens of thousands of people in your technology organization.

AppD: What are the main three differentiators for xMatters?

AHA: Our primary differentiators stem from the “integration-driven collaboration” philosophy that we deliver in our product. Whether you’re a developer, infrastructure engineer, DBA, ops engineer, SRE, service desk process owner, or any other member of an IT function in an enterprise, you have a core system or two that serve as the focal point of your day. Our goal is to make it really easy to connect any product into xMatters so that when it needs to get a hold of you or someone on your team, it can do so, utilizing either an on-call schedule or other ruleset, or using other systems in your enterprise to figure out who needs the info and can act on it. Three differentiators that stem from this philosophy include:

  • Enriched Notifications: Since xMatters connects your entire IT ecosystem, every notification can become an actual business insight. Our view is that a single system often provides only one piece of the puzzle required to solve the complex issues that arise from managing the software delivery lifecycle.

  • Actionable Alerts: Actionable doesn’t entail just a simple acknowledge, escalate, or resolve sort of behavior—it connects the recipient of an xMatters alert to the workflow to get their job done. The actions in xMatters are not only useful for the immediate recipient and the team that they’re on, but also across teams. This means you can get help from teams outside your service, or if needed, move ownership over to other teams with full context using the appropriate task management system (e.g. JIRA or ServiceNow).

  • Multi-step Workflows: xMatters lays the foundation to connect your data, tools, teams, and people. This means that any business process spanning any combination of those entities can run in an automated, context-driven, and curated fashion. As data moves at unprecedented velocity to support continuous delivery initiatives, xMatters can make sure that the right issues get the visibility and efficient resolution they deserve.

AppD: We have several sessions on DevOps at AppD Summit New York – how can xMatters help speed DevOps adoption?

AHA: DevOps teams in enterprises face unique challenges that xMatters is particularly suited to address. Collaboration and automation are key tenets of a DevOps approach to building and running applications. In an enterprise, the tricky thing is that even the newest applications have dependencies on services that may not run with a DevOps approach. That means you have different parts of enterprise tech teams that run at various velocities, with different tools, under different constraints, and somehow they all have to work under a governance and accountability umbrella that is very different than what you’d find at a startup.

Our integration-driven collaboration approach steps in to allow the DevOps team to get their work done at the pace they need, while the job of moving information across tools and teams is handled transparently by xMatters. Systems of record get current state, updates, and activity logs, while operations teams can move alerts between teams as they need to. And if any of them need to work on something together, it’s a breeze to engage just the people you need into collaborative sessions in the enterprise’s platform of choice – conference calls, web meetings, Slack, Hipchat, etc.

AppD: Tell us about the joint xMatters and AppDynamics story; what does it offer?

AHA: There's no doubt that application performance management systems are at the heart of how enterprises should be managing their IT business. At the end of the day, it's the information in AppDynamics that tells companies how effective they are at serving their internal and external customers. And if something goes wrong, or is about to, it's a great place to see what's happening and get into resolution mode quickly and in a targeted fashion. That also means there's a very natural connection point between our respective products. When AppDynamics sees something starting to go wrong, call it a "yellow alert," the integration between our products, and the larger xMatters integration ecosystem, determines who is best suited to take action on the information, or which other system in the tech stack needs to be contextually aware. The better the AppDynamics early detection is, and the better the xMatters resolution process is, the smaller the business impact will be. For "red alerts," the impact tends to be immediate, in which case the joint value proposition is even stronger.

AppD: Can you share your favorite xMatters customer success story?

AHA: One of my favorite xMatters and AppDynamics joint customer stories is FamilySearch. FamilySearch is the world's largest free genealogy organization. They run 20+ petabytes of data, serve 15 million page hits per day, and see over 800 new names added to family trees every minute. In 2011, the site had performance issues, and their ability to deliver new features was trumped by their need to respond to incidents and fix existing stability issues. Today their team is broad but nimble, and includes a DevOps organization of 300+ engineers split into 45 teams, with each team having responsibility over its specific area. They also have several hundred volunteers who help index and archive assets (photographs, digital documentation, metadata, etc.).

Their event management system was built in-house. For monitoring and operational tools, they use AppDynamics, Splunk, Apica, and Jira, which all feed into their event management system. They also integrated xMatters with Slack, giving software developers yet another means of getting events, as well as the ability to collaborate on the solution to problems. Monitoring occurs at every stage of the application lifecycle. They gather the results of unit testing, acceptance testing, regression testing, and deployment, along with metrics related to application, server, and network performance. AppDynamics is their primary application performance monitoring tool, with Apica, CloudWatch, and Splunk as supplementary tools. The organization redefined the architecture, automated everything, and now can continuously deliver features and fixes on a common platform, and measure for improvement. Uptime is now in the three-nines range for most components, with some in the two-nines range.

AppD: Why should delegates come and visit the xMatters booth?

AHA: The xMatters booth is a great place to swing by and talk to one of our engineers, who can help map out a delegate's immediate and cross-team toolchain, and walk them through how our product can help them get things done more quickly while minimizing the bombardment of alerts that turn out not to be useful or actionable.

Register here to book your free place at the NYC Summit on October 19th and meet the xMatters team there.

Abbas Haider Ali brings over 16 years of experience in networking, cloud services, software, and cloud communications. As CTO of xMatters, he is responsible for evangelizing the adoption of communications-enabled business processes, and has worked closely with more than 400 global enterprises and IT organizations to create a vision for adopting intelligent communication strategies across business scenarios. Abbas holds a BASc in Computer Engineering from the University of Toronto.

Announcing IBM Z APM Connect for AppDynamics

Today’s enterprises, built upon legacy systems, continue to innovate.

During its Symposium last week (October 1st-5th), the most significant gathering of CIOs in the world, Gartner said that legacy systems will continue to underpin digital transformation in the enterprise. As a result, these systems must be improved, with Gartner's Tina Nunno saying it's time to stop thinking of legacy applications as a "dirty word." She added that by 2023, 90 percent of current applications will still be in use.

Our customers who are transforming – but doing so on top of existing systems – echo similar sentiments, asking for improvement and visibility into their legacy systems.

Similarly, Gartner suggests that “CIOs should build on their legacy systems. They should combine their modernized legacy applications and their digital platform for massive integration complexity on a massive scale,” notes a recent press release.

Visibility into existing legacy systems is critical when measuring the digital enterprise, precisely what AppDynamics does today for countless businesses. This visibility has become a reality as AppDynamics now has an answer for the Mainframe (thanks to our partnership with IBM), and it’s just what customers have been asking for. Our goal with this solution is not to create something which goes deep into Mainframe subsystems themselves, but one which eliminates the painful and lengthy triage process by getting teams on the same page. It also reduces unnecessary escalations.

The typical Mainframe z/OS team includes many specialists, one or more for each subsystem: MQ, IIB, CICS, IMS, and DB2. Each group has its own tooling and processes for handling issues. Our goal is not to replace or change those tools; in fact, the IBM team that built this new capability also creates those diagnostic tools (IBM OMEGAMON). Instead, the goal is to provide an additional layer of transactional visibility that is missing within the Mainframe and across the distributed systems that call the Mainframe.

This new solution, IBM Z APM Connect, is designed to give the end-to-end picture, allowing for unified monitoring and business measurement all the way back to the mysterious system of record – the Mainframe. Plus, there is no requirement to have IBM OMEGAMON tooling installed.

Mainframes are among the most reliable, secure, and available systems on the planet. Many customers report years of uptime, with 100% availability. What’s more, these technological marvels transact vast amounts of information and business logic with a high degree of efficiency compared to their distributed brethren. This means that attention to overhead when monitoring is critical. IBM has created an incredibly efficient way of tracking these transactions while keeping the additional resource utilization very low. Low overhead aligns with what makes AppDynamics unique in production environments for the world’s biggest enterprise production systems.

Learn more about this new offering from IBM along with more detailed information in the IBM announcement letter which slates this solution for general availability on December 8th, 2017.

You can also see a demo of the new solution at AppD Summit NYC. Register here to attend the session and the entire day’s conference free of charge.

NYC Summit Exhibitor Focus: Column Technologies

In anticipation of our New York City Summit Event on October 19th, we’re highlighting some of the great partners who will be in attendance in this Exhibitor Series. We’re excited to share our latest Q&A with Blaine Pryce, Vice President of Sales, Column Technologies.

AppD: Why do customers choose Column? What makes Column so unique?

BP: There are several reasons why our customers value us. We partner with the leading best-of-breed companies in the service management, DevOps, and infosecurity spaces, such as AppDynamics.

Two thirds of our employees are consultants, and they are focused on making that software successful through a service-based approach. We have established methodologies that encompass process evaluation, requirements analysis, architecture, implementation, and managed services.

AppD: What do you see as the three most significant ways in which enterprise IT Operations will change over the next 24 months?

BP: I’d have to say…

1. Adopting DevOps, and the need to take an agile approach to building software. Legacy tools and point solutions won’t allow you to succeed within a DevOps framework. Retooling needs to occur to survive in business, and this in turn requires a high degree of automation.

2. More and more apps coming downstream, which means increased complexity, release velocity, and monitoring challenges.

3. The business is now taking a more collaborative approach and working in fewer silos. The increasing take-up of Business iQ is evidence of IT and the business working more closely together and needing a consolidated view.

AppD: Why has DevOps adoption within the enterprise accelerated? And what needs to be done to continue adoption?

BP: Enterprises need to adopt DevOps or face the market consequences. How badly do you want to keep your customers or go after new markets? Companies need to be competitive, optimizing their eCommerce capabilities, for example, or go out of business.

My tips for continuing the adoption momentum include undertaking an automation tool assessment, addressing bottlenecks, and working on core issues without getting hung up on the DevOps philosophy. Lastly, don't be too preoccupied with rework.

AppD: How do you see APM supporting DevOps as an approach?

BP: APM is critical to DevOps. It helps by improving the customer experience, identifying issues early, reducing MTTR, facilitating feedback, enabling A/B testing, and reducing dev rework and release time, whilst acting as part of an agile automation process.

AppD: What typical challenges do your customers come to you with?

BP: They usually have a huge range of disparate monitoring tools, with over 10 not being an unusual number. Automation gaps are another issue, with too many manual activities and processes that a machine can complete faster and better. Velocity of development is also a huge hurdle, and there is a strong move towards reducing sprint cycles to keep up with business demands.

AppD: Tell us about your partnership with AppDynamics. What does the partnership mean for joint customers?

BP: Column was awarded "AppDynamics 2016 Implementation Partner of the Year," which was an honor to win. With Column, joint customers benefit from one of the leading technologies, delivered by a world-class services organization. We can then help customers as they expand their usage, show them how they can do more with the tool, and achieve better results overall.

AppD: Why should Summit delegates visit the Column booth?

BP: There are two main reasons: to accelerate their DevOps journey, and to automate as much as possible.

Register here to book your free place at the NYC Summit on October 19th and meet the Column team there.

Blaine Pryce has more than 30 years’ experience in enterprise sales and professional services within the Information Technology sector. Currently he serves as VP of DevOps Sales for Column Technologies. He and his team are chartered with developing solutions for their clients via a consultative approach based on business and technology requirements.

NYC Summit Exhibitor Focus: NodeSource

In anticipation of our New York City Summit Event on October 19th, we’re highlighting some of the great partners who will be in attendance in this Exhibitor Series. We’re excited to share our latest Q&A with Joe McCann, CEO, NodeSource.

AppD: Where do you think Node.js is currently in the hype cycle?

JM: We are at the tail end of phase 4, the slope of enlightenment, as there are a number of success stories and case studies as well as NodeSource’s enterprise-grade offerings shoring up the standardization of Node.js workloads for Fortune 500 enterprises.

AppD: What does your runtime do that the core Node.js runtime cannot?

JM: N|Solid is a fully compatible, enhanced Node.js runtime built for mission-critical applications. N|Solid enables organizations to develop, deploy, manage, secure, and analyze Node.js applications.

N|Solid gathers detailed metrics with minimal performance overhead, giving you unparalleled visibility into application performance, with some valuable features:

  • Installation is painless. Simply install the N|Solid runtime in place of open source Node.js; no changes to application code are necessary.
  • Real-time event loop delay alerts with detailed stack trace information can help you immediately expose and resolve issues that are otherwise tricky to detect. N|Solid is the only commercial product which offers this type of alert.
  • Notifications based on CPU and heap thresholds provide an early warning when application behavior changes, helping you resolve problems before they lead to an outage.
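The threshold-based early warning described in the last bullet can be approximated by hand in plain Node.js. To be clear, this is a hand-rolled sketch and does not use the N|Solid API; it only relies on the standard `process.memoryUsage()` call to show the underlying idea:

```javascript
// Hand-rolled sketch of a heap-threshold early warning in plain Node.js.
// NOT the N|Solid API: function names and thresholds here are invented.

function checkHeapThreshold(thresholdBytes, notify) {
  const { heapUsed } = process.memoryUsage();
  if (heapUsed > thresholdBytes) {
    notify(`heap usage ${Math.round(heapUsed / 1e6)} MB exceeds threshold`);
    return true;
  }
  return false;
}

// A 1-byte threshold always fires; an enormous threshold never does.
const fired = checkHeapThreshold(1, msg => console.log('ALERT:', msg));
const quiet = checkHeapThreshold(Number.MAX_SAFE_INTEGER, () => {});
console.log(fired, quiet); // true false
```

In practice you would sample on an interval and feed the notifications into an alerting pipeline; a commercial runtime bundles that sampling, baselining, and delivery for you.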

Additionally, N|Solid offers enhanced security through real-time vulnerability scanning and configurable security policies to help protect data and services.

AppD: What are your thoughts on ChakraCore? Is this a trend we may see more of?

JM: ChakraCore is Microsoft’s JavaScript Virtual Machine (VM) that runs in their Edge browser. It is a high-performance JavaScript VM with some interesting debugging capabilities that are currently not possible with V8, Google Chrome’s JavaScript VM, which is the VM Node.js uses.

For the past few years, NodeSource has been working alongside Microsoft, IBM and a few other companies to get Node.js in a technical position where we can “swap out” the underlying JavaScript VM, thus creating competition among JavaScript VM vendors which the users of Node.js ultimately benefit from. We saw this exact thing play out in 2009 with the “browser wars,” and now, web browsers across the board are much better for end users.

AppD: What are some of the major changes we can expect to see in the Node.js project over the next few years?

JM: Going forward, Node.js likely won't see a substantial amount of change, and this is a good thing. The engine in a Tesla doesn't really change that much, as it has only 17 moving parts, and Node.js is similar in that it has only a small set of core APIs. All of the improvements to a Tesla are in other areas, such as software to make it faster and/or safer; the engine doesn't change much, if at all. For Node.js, the majority of the innovation and change we can expect to see will be on the periphery of the core runtime, particularly around Node modules, which is one of the reasons NodeSource created Certified Modules: to bring some trust to the wild, wild west of the npm ecosystem.

AppD: What performance challenges are enterprise Node.js users facing that we haven't seen in the past?

JM: Node.js is unlike any other application runtime or framework given its asynchronous, non-blocking architecture. This means that all of the tools and many best practices that applied to every other application runtime prior to Node.js do not work with Node.js. This is one of the reasons NodeSource’s N|Solid exists today. Many users of Node.js will attempt to find performance bottlenecks with traditional tools that don’t work with Node.js’ event loop. The event loop itself can be the root cause of many performance issues, and the only tool that can notify you of this is N|Solid. N|Solid coupled with AppDynamics can give you the most complete view into your Node.js performance issues, but, without both, enterprises will struggle to improve their apps’ performance.
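A minimal, hand-rolled illustration of the event-loop problem described above (no N|Solid code involved, just plain Node.js timing; the numbers are arbitrary):

```javascript
// A synchronous busy loop starves every timer and I/O callback: the 10 ms
// timer below cannot fire until the ~100 ms of blocking work finishes.

const scheduledAt = Date.now();
setTimeout(() => {
  const lag = Date.now() - scheduledAt - 10;
  console.log(`10 ms timer fired about ${lag} ms late`); // lag ~ busy-wait length
}, 10);

// Synchronous work that blocks the event loop for ~100 ms.
const start = Date.now();
while (Date.now() - start < 100) { /* busy-wait */ }
const blockedMs = Date.now() - start;
```

Traditional sampling profilers attribute CPU time to functions, but they don't tell you that a pending request sat in the queue the whole time, which is why event-loop delay is the metric to watch in Node.js.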

AppD: If you could go back to 2009, when Node.js was first created, what would you have done differently?

JM: I wouldn't change much, as the success of Node.js has been its focus on broadening the community to reach as many people around the world as possible; however, the one thing that comes to mind is perhaps focusing on migration paths for developers from other programming languages. In the early days, there wasn't a big focus on getting Java and .NET developers to join the Node.js revolution. I think if there had been a bigger focus on migrating from Java to Node.js, for example, we would have seen faster acceleration of enterprise adoption of Node.js.

AppD: Why should Summit delegates pay a visit to the NodeSource booth?

JM: In July, NodeSource and AppDynamics announced a new native integration. Users can now capture, view, and analyze the most comprehensive set of Node.js performance metrics available, directly in the AppDynamics controller.

Visit our booth during Summit to see a demo of this integration and to learn more about how N|Solid and AppDynamics together can offer you unparalleled visibility into the system health and behavior of your Node.js applications.

Register here to book your free place at the NYC Summit on October 19th and meet the NodeSource team there.

Joe McCann is the Founder and CEO of NodeSource and is a self-taught hacker, programmer and designer with more than seventeen years of web, mobile and software development experience. Prior to NodeSource, Joe was the CTO of the award-winning ad agency, Mother. 

AppDynamics Summit NYC: Strategic Partnerships Get Tighter

Watching the registration numbers grow for AppDynamics Summit New York City is exciting.

Summit NYC will be the largest event we’ve hosted to date, packed into a single day – October 19th, with a smaller number of tracks. The venue looks unique, and we are excited to announce and deliver high-quality content and facilitate networking conversations around the next generation of Enterprise IT challenges.

I am fortunate to have been working on several new products co-developed with our strategic partners. The two sessions I will be participating in include in-depth looks at, and demos of, two key innovations driven by partners.

Below is a brief overview of what you can expect from the partner integration sessions:

IBM Z APM Connect for AppDynamics

The recently announced Mainframe capability, IBM Z APM Connect, is a compelling product offering developed by none other than the IBM team that builds the software that runs Mainframes.

We could not ask for a better partner than IBM, with their deep expertise and passion for Mainframe technology. The result of the last year of work is that AppDynamics now has a scalable agent for Mainframe with significantly lower overhead than other alternatives. Overhead is a major concern in the world of Mainframe where processing and cost are linked, making efficiency paramount.

Did you know that Mainframes power most credit card swipes, which equates to over $7 trillion per year? They also process over 30 billion business transactions per day. As new ways of accessing information are introduced, including the use of new application types like IoT, wearables, and augmented reality – there will be an additional workload for Mainframes.

Assuring performance and troubleshooting end-to-end transactions is a critical problem for IT Operations teams, one they could handle far more efficiently with the right tooling and workflows. AppDynamics and IBM look to solve these problems, or at the very least, make them much easier than they are today. Contrary to popular belief, innovation is alive and well on the Mainframe.

The IBM Z APM Connect gateway for AppDynamics has been running in public beta since May, and we’ve already seen success in mutual customer deployments. We look forward to continuing to drive Mainframe APM innovation with IBM.

Come to the session from 3:30PM to 4:30PM on Thursday, October 19th, where I will be joined on stage by Nathan Brice, Senior Offering Manager at IBM, and Aaron Young, Senior Software Engineer and Lead Architect.

ServiceNow Integration

The second strategic partner session is with ServiceNow. AppDynamics and ServiceNow share a large number of mutual customers who are driven to get work done more efficiently.

I regularly hear stories about the change ServiceNow customers accomplish by leveraging a consistent platform across the Enterprise. Just two weeks ago, ServiceNow was kind enough to host a joint webinar, where we spoke about our mutual solutions, and how our integration is used by a large number of customers. We have also seen an acceleration in adoption of the integration as we continually improve and work in close collaboration with ServiceNow.

During Summit NYC, I will be joined onstage by Zaki Bajwa, Sr. Director at ServiceNow, where we’ll discuss our mutual strategies and how we will continue to drive innovation and use cases.

We will also demo the joint integration and solution. You’ll hear more about how AppDynamics is becoming the platform for Systems of Intelligence, and ServiceNow is becoming the platform of choice for Systems of Action. These two worlds must be intrinsically linked. Come and listen. Hopefully, it applies well to your strategy. The session is scheduled for 1:00 pm to 2:00 pm on Thursday, October 19th.

We look forward to seeing you! Click here for a full agenda at AppDynamics Summit New York City.

Space is still available for this free event. Register here.

NYC Summit Exhibitor Focus: IBM Z Systems

In anticipation of our New York City Summit Event on October 19th, we’re highlighting some of the great partners who will be in attendance in this Exhibitor Series. We’re excited to share our latest Q&A with Nathan Brice, Senior Offering Manager, Z Systems Monitoring & APM at IBM. 

AppD: Can you share some details of the partnership between IBM and AppDynamics so far?

NB: In my role, I’m very focused on the tools required to manage our clients’ mainframes. Many of our clients have systems that have been running without any unplanned outage for decades, and the tools to manage these systems are critical.

It seemed to me that the rapid rise of APM software, looking at the entire end-to-end application, hasn’t to-date been able to deliver meaningful visibility into the mainframe. For many of the largest enterprises in the world, it’s often the most critical application components that are running on the mainframe.

I thought there was a tremendous opportunity to partner with AppDynamics, bringing together your market-leading APM product, together with our expertise on the mainframe, to deliver true end-to-end visibility for large enterprises with mainframes.

AppD: The integration extends the visibility of AppDynamics’ Map iQ and Diagnostic iQ into mainframe subsystems such as CICS and DB2. Can you explain this in more detail?

NB: With our planned IBM offering, you’ll install new agent code on the mainframe to track transactions in key mainframe z/OS subsystems. We’ve focused on CICS Transaction Server for the initial release as it’s one of the most commonly used subsystems. We support MQ, HTTP, and SOAP as entry points into CICS, with Db2 and IMS DB as backend databases. So now, in the AppDynamics flow map, you’ll see additional nodes for the MQ, CICS, Db2, and IMS DB components running on the mainframe.

AppD: What are the main benefits of the integration for IBM Z and AppDynamics customers?

NB: The key benefit is going to be faster isolation of problems. When transactions are slowing down somewhere in the mainframe, clients today might start by investigating the entry point, then the transaction server, and then finally the database. Sometimes all in parallel. Being able to clearly isolate which component is causing the slowdown is going to significantly speed problem determination and help get directly to the right mainframe engineer who can debug the root cause.

There is also a huge benefit in understanding the true structure of the application. Being able to visualize where the transactions really flow, and which systems they interact with is very important when some of the back-end services have been enhanced, modified and tweaked over many, many years.

AppD: How can the integration support collaboration between two traditionally siloed teams?

NB: Typically, mainframe teams are still siloed away from other teams. The mainframe is often perceived as old, difficult, and complicated, and unless you work on an IBM Z System today, this is probably what you believe too. I believe the integration of mainframe components into APM dashboards is going to help application teams realize that the mainframe is just another platform, and foster greater collaboration between the teams. They will be able to understand the topology – and see just how quickly the transactions are processed!

AppD: More than $6 trillion in card payments are processed annually by mainframes. What other evidence do you have that having insights into mainframe performance is still very relevant to applications today?

NB: Most people don’t realize it, but you interact with mainframes every day. When you take cash out of an ATM, book a flight or a hotel room, or pay for something with a credit card, typically the back-end system of record that processes that transaction is going to be running on a mainframe. The latest IBM Z14 machines can process 12 billion encrypted transactions every single day. That’s the sort of scale you now need as a large retailer on a busy Black Friday.

In today’s digital world, the total volume of transactions is exploding as end-users’ expectations rapidly evolve. How many more times do you check your balance using a smartphone banking app compared to when you had to visit a physical bank branch? These systems of record running on the mainframe are the backbone of the modern economy.

AppD: One beta customer has said, “Before this integration, the mainframe was just a black box and we couldn’t truly manage our applications end-to-end.” Is this typical of the feedback you have received to date?

NB: Yes, that’s a very common response. In fact, one of the most enjoyable aspects of this project has been working closely with many clients as we designed the product. In my 20 years with IBM, I’ve never worked on a project that has had so much positive feedback. There is a clear demand for this capability and I’m really excited about working closely with our beta clients and sponsor users as we continue to design and develop additional capability.

AppD: Why should delegates come to the IBM booth and attend the breakout session at the NYC Summit?

NB: If you work in an enterprise that uses a mainframe and you either already have, or are considering purchasing AppDynamics, then come along and learn more about what we’ve been doing in this space. I’ll be there in person as will our project architect, Aaron Young. We’ll be able to show you a demo, talk about our future plans and answer any question you may have. Come along and find us at our booth.

Register here to book your free place at the NYC Summit on October 19th and meet the IBM team there.

NYC Summit Exhibitor Focus: Apica

In anticipation of our New York City Summit Event on October 19th, we’re highlighting some of the great partners who will be in attendance in this Exhibitor Series. We’re excited to share our latest Q&A with Carmen Carey, CEO of Apica.

AppD: What was the impetus behind Apica’s formation?

CC: Apica was founded out of the need to ensure that enterprises can deliver high-quality application performance and user experience while maintaining speed, visibility and control in the delivery process. As user-facing applications are increasingly critical to revenue generation, high-performance application delivery is a must-have component of business success, competitiveness and differentiation. We believe our unique and comprehensive load testing and synthetic monitoring platform ensures uncompromised performance across internal applications, legacy systems, websites, mobile devices, content streaming, APIs and IoT use cases.

AppD: What chief pain points have you encountered when load testing has been inadequate (or even non-existent)?

CC: Without load testing, two key pain points impact an enterprise and its customers. First, there is a lack of understanding of the expected traffic volume – the customer interactions an application can support, and where and when an increase in capacity is required. Second, if the customer journeys or use cases (IoT, etc.) are not tested, bottlenecks in the application aren’t identified. This means enterprises are leaving the customer experience to chance, and if at any point the experience is suboptimal, customers might abandon the application or service. Ultimately, a failure to ensure the best performance outcome puts customer satisfaction, revenue, brand and competitive advantage at risk.

AppD: Scalability is a massive issue for some enterprises – how does Apica address this challenge?

CC: Apica enables enterprises to understand and implement a fit-for-purpose scaling strategy. We do this by using load testing as a means to understand performance characteristics of applications enabling the identification and resolution of capacity thresholds and application bottlenecks before deployment. Load testing carried out across various deployment characteristics will inform a scaling model that can be well defined in the pre-production phase. As a result, enterprises can develop and implement a defined scaling strategy and plan to ensure the highest level of user experience.

AppD: Typically, what are the three main drivers behind customer investment in Apica?

CC: Customers choose Apica to gain greater confidence in the delivery of their business-critical applications. This means deploying applications faster and with higher quality, while realizing full visibility into application performance and user experience.

AppD: How does the partnership between Apica and AppDynamics help enterprise customers?

CC: The partnership between Apica and AppDynamics ensures that enterprises can confidently deliver highly scalable, available and high-performing applications. Our integrated offer ensures end-to-end visibility in pre and post production environments and enables organizations to adopt a proactive approach to issue identification and resolution in order to ensure the best user experience.

AppD: Can you share some of Apica’s growth plans for the next 12-18 months?

CC: We are excited to be expanding globally with a focus on the US, UK and Nordics markets.  We are especially investing heavily in growing our US team, building on the foundations we have established to date. As a company we are experiencing significant growth in both new customers and the expansion of our footprint in our existing customer base. We see the partnership with AppDynamics as a further catalyst to our growth plans and look forward to continuing to build on the success we have experienced to date.

AppD: Why should delegates at the NYC Summit visit the Apica booth?

CC: We look forward to seeing familiar faces and meeting new ones at AppDynamics’ NYC Summit. If you want to learn more about best practices for load testing, get a personal demo of our solution, and see how seamlessly Apica integrates with AppDynamics, come visit our booth. We are passionate about solving performance challenges in proactive ways, and we like to have a good laugh.

Register here to book your free place at the NYC Summit on October 19th and meet the Apica team there.

Carmen Carey is the CEO of Apica. Her career to date encompasses leadership roles as an executive in fast-growing global technology companies. Prior to Apica, she scaled and exited Big Data Partnership and ControlCircle as CEO, and was COO of MetaPack, COO of MessageLabs and VP of Global Services at BroadVision.

Celebrating Partnership at AppSphere 2016

With only a few weeks to go until our third annual AppSphere 2016 user conference, the AppDynamics offices are busier than ever preparing for the big event, and this year we’ve committed to outdoing ourselves as we host Partner Day. As one of our partners, what does that mean for you? Prepare for star treatment — you’re our guest of honor!

Partner Day will kick off the week-long conference event, allowing you to get the inside scoop before everyone else in the industry. Here is a sneak peek of some of the highlights for this year’s Partner Day:

Partner Insights and Executive Q&A

AppDynamics CEO, David Wadhwani, has an important message to share regarding the future of the company and the essential role our partners play in helping us reach our goals. Join the discussion—our executives are holding a panel to answer all of your questions during the open floor Q&A session.

Gain new perspective in APM

Are you ready to disrupt the market? We are — but we need your help to do it. Our team will spend the day going in depth on topics of interest to you. Sessions include Growing an AppDynamics Practice, where we’ll present new revenue streams and share what enablement, training, and investment can mean for you. Another popular session is sure to be Product Roadmap, where Bhaskar Sunkara, CTO and Head of Product, will provide insight into future product releases. Lastly, you won’t want to miss Kendall Collins, our CMO, as he hosts a session on Building Our Unique Value Proposition — where you’ll learn how partnering with AppDynamics will give you an edge over the competition.

Partner Reception

They say “it’s all about who you know”, so don’t miss the opportunity to build up your network with like-minded professionals. AppDynamics executives are hosting the reception on Sunday evening at The Cosmopolitan Hotel’s Chandelier Bar. Enjoy an evening on us, get to know your fellow guests, and connect with some of the biggest names speaking at AppSphere 2016.

Partner Day lasts only one day, but our conference runs all week. Stay for the whole event and you can add hands-on training, 70+ sessions, and maybe even take advantage of the AppDynamics Certification, available for the first time this year. For a more comprehensive look at AppSphere 2016, make sure to check out the event page.

Exploring the AppDynamics Integration with OpenShift Container Platform

This was originally posted on OpenShift’s blog.


Regardless of whether you are developing traditional applications or microservices, you will inevitably have a requirement to monitor the health of your applications, and the components upon which they are built.

Furthermore, users of OpenShift Container Platform will often have their own existing enterprise standard monitoring infrastructure into which they will be looking to integrate applications they build. As part of Red Hat’s Professional Services organisation, one of the monitoring platforms I encounter whilst working with OpenShift in the field is the AppDynamics SaaS offering.

In this post, I will run through how we can take a Source-to-Image (S2I) builder, customise it to add the monitoring agent, and then use it as the basis of a Fuse Integration Services application, written in Apache Camel, and using the Java CDI approach.

Register with AppDynamics

Before getting into the process of modifying the S2I builder image and building the application, the first thing we need to do is register with the AppDynamics platform. If you’re an existing consumer of this service, then this step obviously isn’t required!

Either way, once registered, we need to download the Java agent. In this example, I’ve used the Standalone JVM agent, but there are many more options to choose from, and one of those may better suit your requirements.

Adding the Agent to your Image

There are two primary ways you can go about adding the AppDynamics Java agent to your image.

Firstly, you can use Source-To-Image (S2I) to add the Java agent to the standard fis-java-openshift base image at the same time as pulling in all your other dependencies – mainly source code and libraries.

Secondly, you can extend the fis-java-openshift S2I builder image itself, add your own layer containing the Java agent, and use this new image as the basis for your builds.

Using S2I

When using S2I to create an image, OpenShift can execute a number of scripts as part of this process. The two scripts we are interested in in this context are assemble and run.

In the fis-java-openshift image, the S2I scripts are located in /usr/local/s2i. We can override the actions of these scripts by adding an .s2i/bin directory into the code repository, and creating our new scripts there.
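As a quick illustration of that layout (the repository path here is just a placeholder), S2I only picks up the overrides if they sit under `.s2i/bin` in the repository root and are executable; the placeholder script bodies below simply delegate to the builder's originals, with the real assemble and run customisations shown later in this post:

```shell
# Sketch: laying out S2I script overrides in an application repository.
# /tmp/myapp stands in for your own code repository root.
REPO=/tmp/myapp
mkdir -p "$REPO/.s2i/bin"

# Placeholder overrides that just delegate to the builder's original scripts.
printf '#!/bin/bash\n/usr/local/s2i/assemble\n' > "$REPO/.s2i/bin/assemble"
printf '#!/bin/bash\nexec /usr/local/s2i/run\n' > "$REPO/.s2i/bin/run"

# S2I ignores scripts that are not executable.
chmod +x "$REPO/.s2i/bin/assemble" "$REPO/.s2i/bin/run"

find "$REPO/.s2i" -type f
```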


The assemble script is going to be the script that pulls in the Java agent and unpacks it ready for use. Whilst we need to override it to carry out this task, we also need it to carry on performing the tasks it currently performs in addition to any customisations we might add:


# run original assemble script
/usr/local/s2i/assemble

# install appdynamics agent
curl -H "Accept: application/zip" https://codeload.github.com/benemon/fis-java-appdynamics-plugin/zip/master -o /deployments/appdynamics.zip

pushd /deployments
unzip appdynamics.zip
pushd fis-java-appdynamics-plugin-master/
mv appdynamics/ ../
popd
rm -rf fis-java-appdynamics-plugin-master/
rm -f appdynamics.zip
popd

As can be seen above, we actually get this script to execute the original assemble script before we add the AppDynamics agent – this way, if the Maven build fails, we haven’t wasted any time downloading any artifacts we’re not going to use.


The run script is going to be the script that sets up the environment to allow us to use the AppDynamics Java agent, and – you’ve guessed it – run our app! Just as with the assemble script, we still want run to carry on executing our application when our customisations are complete. Therefore, all we do here is get it to check for the presence of an environment variable, and if it’s found, configure the environment to use AppDynamics.


if [ x"$APPDYNAMICS_AGENT_ACCOUNT_NAME" != "x" ]; then
    mkdir -p /deployments/logs
    export JAVA_OPTIONS="-javaagent:/deployments/appdynamics/javaagent.jar -Dappdynamics.agent.logs.dir=/deployments/logs $JAVA_OPTIONS"
fi

exec /usr/local/s2i/run

In this case, we’re looking for a variable called APPDYNAMICS_AGENT_ACCOUNT_NAME. After all, if we haven’t configured any credentials for the Java agent, then it can’t connect to the AppDynamics SaaS anyway.
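The guard pattern can be demonstrated locally (the account name below is a placeholder, and `configure_appdynamics` is just an illustrative wrapper around the same test used in the run script):

```shell
# Demonstrates the guard used in the run script: JAVA_OPTIONS is only
# extended when APPDYNAMICS_AGENT_ACCOUNT_NAME is set.
configure_appdynamics() {
    if [ x"$APPDYNAMICS_AGENT_ACCOUNT_NAME" != "x" ]; then
        export JAVA_OPTIONS="-javaagent:/opt/appdynamics/javaagent.jar $JAVA_OPTIONS"
    fi
}

unset APPDYNAMICS_AGENT_ACCOUNT_NAME
JAVA_OPTIONS=""
configure_appdynamics
echo "agent disabled: JAVA_OPTIONS='$JAVA_OPTIONS'"

export APPDYNAMICS_AGENT_ACCOUNT_NAME="my-account"   # placeholder value
configure_appdynamics
echo "agent enabled:  JAVA_OPTIONS='$JAVA_OPTIONS'"
```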


Finally, we can use a Template to bring all of these components together, begin the build process, and deploy our application.
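A minimal sketch of such a Template follows; the object names and parameters here are illustrative only, and the real artifacts live in the sample repository:

```yaml
# Illustrative OpenShift Template grouping the build objects and exposing
# the AppDynamics account as a parameter (names are placeholders).
apiVersion: v1
kind: Template
metadata:
  name: fis-appdynamics-example
parameters:
- name: SERVICE_NAME
  description: Name applied to the generated objects
  required: true
- name: APPDYNAMICS_AGENT_ACCOUNT_NAME
  description: AppDynamics account name (leave empty to run without the agent)
objects:
- apiVersion: v1
  kind: ImageStream
  metadata:
    name: ${SERVICE_NAME}
- apiVersion: v1
  kind: BuildConfig
  metadata:
    name: ${SERVICE_NAME}
  spec:
    strategy:
      type: Source
```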

The S2I process is possibly the simpler of the two methods outlined here for adding the AppDynamics Java agent to your application, but it does present some points of which you need to be aware:

  • The Java agent needs to be hosted somewhere accessible to your build process. It also needs to be version controlled separately from the build, which adds extra build management overhead.
  • It will be downloaded every single time you run a build – not the most efficient way of deploying it if you have an effective CI / CD pipeline and are doing multiple builds per hour!
  • Whilst it’s simpler to configure, it can present confusing problems during the build process. For example, if your assemble script creates some directories for your application to use (logging directories, for example), you may need to think about how your build and application are being executed, and who owns what in that process.

Regardless of these minor issues, this is still a powerful (and useful!) mechanism, and as such I have provided a sample repository that allows you to execute an S2I build that should pull in the Java agent and run it alongside an application.

NOTE: If you’re still interested in using the S2I process, and want to know more about how to configure the Java agent with environment variables, skip ahead to ‘Adding the AppDynamics agent to a FIS application’.

Extending the Fuse Integration Services (FIS) Base Image

My preference for using the AppDynamics Java agent with applications built on FIS (and for similar use cases) is to add it into the base image once, so that it is accessible by any applications based on that image.


In this example, this is done by creating a new Docker image based on fis-java-openshift:latest, adding the Java agent to the project as an artifact that is then copied into the image:

FROM registry.access.redhat.com/jboss-fuse-6/fis-java-openshift:latest

USER root

ADD appdynamics/ /opt/appdynamics/

RUN chgrp -R 0 /opt/appdynamics/
RUN chmod -R g+rw /opt/appdynamics/
RUN find /opt/appdynamics/ -type d -exec chmod g+x {} +

#jboss from FIS
USER 185

In this Dockerfile, we are adding the content of the appdynamics directory in our Git repository to the fis-java-openshift base image, and altering its permissions so that it is owned by the JBoss user in that image.


In order to consume this Dockerfile and turn it into a useable image, we have a number of options. By far the simplest is to execute an oc new-build command against the repository hosting the Docker image – in the case of this image, this would be:

oc new-build https://github.com/benemon/fis-java-appdynamics/#0.2-SNAPSHOT  --context-dir=src/main/docker

Note the use of the --context-dir switch pointing to the directory containing the Dockerfile. This informs OpenShift that it needs to look in a sub-directory, not the root of the Git repository, for its build artifacts.

Once we’ve executed the above command, we can tail the OpenShift logs from the CLI (or view them from the Web Console), and see the Dockerfile build taking place. The output will be similar to this:

[vagrant@rhel-cdk ~]$ oc logs -f fis-java-appdynamics-1-build           
I0709 09:03:35.440237       1 source.go:197] Downloading "https://github.com/benemon/fis-java-appdynamics" ...
Step 1 : FROM registry.access.redhat.com/jboss-fuse-6/fis-java-openshift
 ---> 771d26abb75d
Step 2 : USER root
 ---> Using cache
 ---> c66c5f1378be
Step 3 : ADD appdynamics/ /opt/appdynamics/
 ---> ef153cb350d8
Removing intermediate container 44c776871f6f
Step 4 : RUN chown -R 185:185 /opt/appdynamics/
 ---> Running in 861f8c27225e
 ---> ee1ac493f88d
Removing intermediate container 861f8c27225e
Step 5 : USER 185
 ---> Running in 1d9fe0a02e6a
 ---> 73f598d8a0e9
Removing intermediate container 1d9fe0a02e6a
Step 6 : ENV "OPENSHIFT_BUILD_NAME" "fis-java-appdynamics-1" "OPENSHIFT_BUILD_NAMESPACE" "dev1" "OPENSHIFT_BUILD_SOURCE" "https://github.com/benemon/fis-java-appdynamics" "OPENSHIFT_BUILD_COMMIT" "d025f9961896b25fcae479d62779ae455df334d3"
 ---> Running in 510a4b51db5a
 ---> c4e938d189eb
Removing intermediate container 510a4b51db5a
Step 7 : LABEL "io.openshift.build.commit.message" "Updated the FIS build artifacts" "io.openshift.build.source-location" "https://github.com/benemon/fis-java-appdynamics" "io.openshift.build.source-context-dir" "src/main/docker" "io.openshift.build.commit.author" "Benjamin Holmes \u003canonymous@email.com\u003e" "io.openshift.build.commit.date" "Sat Jul 9 11:26:11 2016 +0100" "io.openshift.build.commit.id" "d025f9961896b25fcae479d62779ae455df334d3" "io.openshift.build.commit.ref" "master"
 ---> Running in 213844392db7
 ---> 44fede9609fd
Removing intermediate container 213844392db7
Successfully built 44fede9609fd
I0709 09:04:06.573966       1 docker.go:118] Pushing image ...
I0709 09:04:10.970516       1 docker.go:122] Push successful

NOTE: As an alternative to a standard Dockerfile build, we can use the Kubernetes Fluent DSL to generate the BuildConfig and ImageStream objects as part of a Template that will tell OpenShift to do a Dockerfile build based on the supplied project content. Using Kubernetes DSL is optional (you are more than welcome to define the objects manually), but as a Java developer this is a simple process to understand, it allows you to version control your whole image build process, and also falls nicely into the ‘configuration as code’ discipline so prominent in the DevOps world. An example of how to use the Fluent DSL is supplied in the Github repository for the AppDynamics base image.

Whichever process you decide upon (the supplied Github repository contains artifacts for both builds), OpenShift will generate a number of Kubernetes artifacts. What we are interested in here is the Image Stream…

apiVersion: v1
kind: ImageStream
metadata:
  generation: 1
  labels:
    app: fis-java-appdynamics
  name: fis-java-appdynamics
  namespace: dev1
spec: {}
status:
  tags:
  - tag: latest

…and the BuildConfig:

apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    app: fis-java-appdynamics
  name: fis-java-appdynamics
  namespace: dev1
spec:
  output:
    to:
      kind: ImageStreamTag
      name: fis-java-appdynamics:latest
  postCommit: {}
  resources: {}
  source:
    contextDir: src/main/docker
    git:
      uri: https://github.com/benemon/fis-java-appdynamics
    secrets: []
    type: Git
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: fis-java-openshift:latest
    type: Docker
  triggers:
  - github:
      secret: 9Y66CCaSoOipX2pgeEXs
    type: GitHub
  - generic:
      secret: IrYOFwVX0pZKSkceG4D_
    type: Generic
  - type: ConfigChange
  - imageChange:
      lastTriggeredImageID: registry.access.redhat.com/jboss-fuse-6/fis-java-openshift:latest
    type: ImageChange
status:
  lastVersion: 1

Please note that lines have been removed from the above objects for the sake of brevity.

Once the build of the fis-java-appdynamics image has completed successfully, we will have a new base image present in our namespace that contains the AppDynamics agent plugin.


Adding the AppDynamics agent to a FIS application

Given that I have elected to follow the second method of creating a new base image with the AppDynamics Java agent added to it, I now need a way of configuring it.

NOTE: These steps are much the same as those performed if you were to use the S2I builder process. However, there are subtle differences, such as the addition of JAVA_OPTIONS being performed by the .s2i/bin/run script, as opposed to the DeploymentConfig in the Template in the sample repository here.

Configuring the Agent

The AppDynamics agent follows a similar agent model to many other profiling tools, in that it is added to a JVM using the -javaagent switch. When thinking in terms of immutable containers, we obviously want this whole configuration process to be as loosely coupled from the application image as possible.

With this in mind, the simplest way to configure the AppDynamics Java agent is via environment variables. This is helpful, as the AppDynamics agent prioritises environment variables over any other forms of configuration available to it (such as controller-info.xml within the agent distribution). The AppDynamics Agent Configuration guide has further information.
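For instance, the controller connection could be supplied entirely through environment variables like the following; the variable names come from the agent's documented environment overrides (several also appear in the agent logs later in this post), while the values are placeholders for a SaaS account:

```shell
# Placeholder AppDynamics agent configuration, expressed purely as
# environment variables (substitute your own account details).
export APPDYNAMICS_CONTROLLER_HOST_NAME="myaccount.saas.appdynamics.com"
export APPDYNAMICS_CONTROLLER_PORT="443"
export APPDYNAMICS_CONTROLLER_SSL_ENABLED="true"
export APPDYNAMICS_AGENT_ACCOUNT_NAME="myaccount"
export APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY="changeme"
export APPDYNAMICS_AGENT_APPLICATION_NAME="greeting-service"

# Show what the agent will see at startup.
env | grep '^APPDYNAMICS_' | sort
```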

One option we have here is to hard-code all of the requisite environment variables into an application DeploymentConfig. However, in this brave new world of immutable containers, short-lived cloud application workloads, and CI/CD pipelines, we can be a bit cleverer than that.

Using a Template for the application, we can still define all of the environment variables required by the AppDynamics agent, but we can also use a mixture of templated parameters, and Kubernetes’ Downward API to effectively allow the container to introspect itself at runtime, and feed useful information about itself to the agent.

Therefore, we can produce a Template that includes an environment variables component in its DeploymentConfig section which looks a little like this:

      - name: JAVA_OPTIONS
        value: '-javaagent:/opt/appdynamics/javaagent.jar'
      - name: TZ
        value: Europe/London
      - name: APPDYNAMICS_AGENT_APPLICATION_NAME
        value: ${SERVICE_NAME}
      - name: APPDYNAMICS_AGENT_TIER_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
      - name: APPDYNAMICS_AGENT_NODE_NAME
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.name

Note the use of the Downward API references for APPDYNAMICS_AGENT_TIER_NAME and APPDYNAMICS_AGENT_NODE_NAME.

NOTE: If you would like to try this with my sample template, you should execute the following command against your OpenShift environment:

oc create -f https://raw.githubusercontent.com/benemon/camel-cxf-cdi-java-example/appdynamics/openshift/cxf-cdi-java-example.yml

This creates the template within the current OpenShift project. This should be the same project in which you have done the fis-java-appdynamics build, otherwise OpenShift won’t be able to locate the new base image!

When we present this via the OpenShift Web Console, we are shown a much more user friendly version of the above, allowing you to key in your AppDynamics account details without the need to store them in potentially troublesome static configuration files within the container.


Once all the mandatory fields have been completed, click on ‘Create’, and a screen will be presented confirming the successful creation of all template objects.

Once this Template has been instantiated successfully, OpenShift will start a build against the source code branch, using fis-java-appdynamics as the S2I builder image.

Be aware that this project repository contains a standard Maven settings.xml which can be used to define how Maven resolves the build dependencies. If you experience long build times, this file can be updated to resolve to a local Maven repository such as Sonatype Nexus or JFrog Artifactory.
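For reference, a mirror entry in settings.xml routing dependency resolution through an internal repository might look like the following sketch; the id and URL are illustrative, not part of the sample project:

```xml
<settings>
  <mirrors>
    <mirror>
      <!-- route all repository traffic through an internal mirror -->
      <id>internal-mirror</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.example.com/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
```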

After the build has completed successfully, a new Pod will be started, running the application with its embedded Java agent (parts omitted for brevity):

Executing /deployments/bin/run ...
Launching application in folder: /deployments
Running  java  -javaagent:/opt/appdynamics/javaagent.jar
Install Directory resolved to[/opt/appdynamics]
log4j:WARN No appenders could be found for logger (com.singularity.MissingMethodGenerator).
log4j:WARN Please initialize the log4j system properly.
[Thread-0] Sat Jul 09 05:29:26 BST 2016[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver is running
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [greeting-service]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [dev1]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_NODE_NAME] for node name [greeting-service-1-fbcaa]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [true]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [true]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using application name [greeting-service]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using tier name [dev1]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using node name [greeting-service-1-fbcaa]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver finished running
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Agent runtime directory set to [/opt/appdynamics/ver4.1.7.1]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: AgentInstallManager - Agent node directory set to [greeting-service-1-fbcaa]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: JavaAgent - Using Java Agent Version [Server Agent v4.1.7.1 GA #9949 ra4a2721d52322207b626e8d4c88855c846741b3d 18-4.1.7.next-build]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: JavaAgent - Running IBM Java Agent [No]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: JavaAgent - Java Agent Directory [/opt/appdynamics/ver4.1.7.1]
[Thread-0] Sat Jul 09 05:29:26 BST 2016[INFO]: JavaAgent - Java Agent AppAgent directory [/opt/appdynamics/ver4.1.7.1]
Agent Logging Directory [/opt/appdynamics/ver4.1.7.1/logs/greeting-service-1-fbcaa]
Running obfuscated agent
Started AppDynamics Java Agent Successfully.
Registered app server agent with Node ID[8494] Component ID[6859] Application ID [4075]

Verifying Successful Integration

Once the application has started successfully, and the agent has registered itself with AppDynamics, you should be able to see your application on the AppDynamics Dashboard:


Drilling down into the Application in the Dashboard also confirms that the Downward API has done its job, and we’ve automatically pulled in both the container name, and the Kubernetes namespace.


Testing Integration

To get something a bit more meaningful out of the AppDynamics platform, I've put together a small test harness in SoapUI that runs a load test against the Fuse application's RESTful endpoint:
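If you don't have SoapUI to hand, a minimal shell loop can generate similar load. The route hostname and endpoint path below are placeholder assumptions; substitute the ones exposed by your own OpenShift route:

```shell
# Drive repeated requests at the Fuse application's REST endpoint.
# Hostname and path are illustrative placeholders, not from the source.
for i in $(seq 1 500); do
  curl -s "http://greeting-service-myproject.example.com/greeting" > /dev/null
done
```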


In OpenShift's container logs, we can see these requests arriving at the application, viewed either via the Web Console or via the CLI.
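Via the CLI, this might look like the following. The pod and DeploymentConfig names are taken from the agent log output above; substitute your own:

```shell
# Follow the logs of a specific pod (name taken from the agent logs above)
oc logs -f greeting-service-1-fbcaa

# Or follow the logs of the latest deployment via the DeploymentConfig
oc logs -f dc/greeting-service
```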

Once the test harness has completed its run, returning to the AppDynamics dashboard gives us a glimpse of something far more useful from an application monitoring and operations point of view:


We can even drill down into the Web Service endpoints themselves and examine the load each is experiencing.


Application Scaling

One of the really nice things about using OpenShift, the Downward API, and AppDynamics in this way is that it even gives us useful information about health, request distribution and throughput when we scale out the application. Here the application has been scaled to 3 nodes:
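Scaling out is a one-line operation with the `oc` CLI. The DeploymentConfig name is taken from the agent logs above, and the label selector is an assumption; adjust both for your own deployment:

```shell
# Scale the application out to 3 replicas
oc scale dc/greeting-service --replicas=3

# Confirm the new pods are running (label selector is an assumption)
oc get pods -l app=greeting-service
```

Each new pod registers its own node with AppDynamics automatically, which is why the dashboard picks up the additional JVMs without any extra configuration.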


We can also look at the load and response times being experienced by users of the application service. Whilst this particular view aggregates data across the whole service, it's a simple operation to drill down into an individual JVM to see how it's performing.


I have barely scratched the surface of what we can monitor, log, and alert on with OpenShift, Fuse Integration Services, and AppDynamics. Hopefully, though, this gives you a glimpse of what is possible using the tools provided by the OpenShift Container Platform, and a template for integrating not only AppDynamics but also other toolsets that follow a similar agent model.


The full source repository for fis-java-appdynamics is here: https://github.com/benemon/fis-java-appdynamics/tree/0.2-SNAPSHOT

The full source repository for the FIS/Camel application based on the fis-java-appdynamics image is here: https://github.com/benemon/camel-cxf-cdi-java-example/tree/appdynamics

The full source repository for the FIS/Camel application with the Java agent added using S2I is here:

Please note branches and tags in these repositories.