AppDynamics Hosts FinTech Leaders on the Future of Digital Banking

Last week AppDynamics hosted an exclusive networking evening for Financial Services executives at SushiSamba in London. Over 70 tech leaders gathered to enjoy great views over the City (the weather was on our side), excellent food, and a panel discussion moderated by Shane Richmond, former technology editor for the Telegraph. We sat down with a few IT leaders from top financial services institutions, joined by our CEO David Wadhwani, to hear their thoughts on the current and future state of digital banking.

‘Silicon Valley-izing’ Banking

Shane’s first question was about the concept of ‘Silicon Valley-izing’ banking. Digital transformation, software is eating the world, every business is a digital business – we can’t open a magazine or browse a news article without coming across phrases like these. The question is: are consumers demanding digital, or are innovators setting the pace of change?

The panel was in agreement: consumers are in the driving seat. Customer expectations are changing, and the way they fundamentally want to engage with their bank is different. They want to do business how and when they want, not be dictated to by branch opening hours. For the Challenger Bank, it’s not about disruption for the sake of it. “All banks have the same problems, but the capabilities to solve these problems have changed. Banks are taking different approaches, but the end game is the same: making sure customers can access their money when they want and how they want.”

One panelist focused on Investment Banking explained that to keep pace with and drive innovation, they are developing labs around the world, bringing in FinTech organizations both as potential investments and as companies to learn from.

Changing the Mindset

With disruption comes change in technology, processes, and culture. Established banks are huge organizations whose employees may have worked for them for upwards of 20 years, so changing the mindset of these employees is no small task.

Shane asked the panel what they considered to be their biggest barrier to change. One panelist focused on Retail Banking said it’s the culture. “It needs to be accepted that someone will emerge as the Uber in Financial Services.”

Our panelist focused on Investment Banking agreed that culture and communication present the biggest challenge. “It takes a long time to get a new message through the Investment Bank because of the sheer size of the company. It’s difficult to convince the organisation to let go of thinking 24/7 about regulation and security and to think more about innovation. The labs are the beginning of leading that change.”

Winter is Coming

Lastly, Shane asked the panel, “What excites you most about the future of digital banking, and what keeps you up at night?”

The sleep spoilers included the momentum of change and global social change. The warning that ‘winter is coming’ (Game of Thrones fans will absolutely understand) was used to sum up the sense that greater change is always imminent and we don’t know what’s next. Artificial intelligence and machine learning could be the next to Uberize the market.

One panelist focused on Infrastructure Services discussed the next generation of employees as both a potential challenge and a great opportunity. “Millennials are less likely to stay in a job for a long period of time. This means recruitment and retention are a top priority, but it brings massive opportunity for innovation and new ideas.”

The overall sentiment from our panel is one of optimism, with the clear message that organizations and individuals in all markets need to move faster, listen to their customers, focus on innovation, and embrace change. The Challenger Bank believes we’re only 1% of the way through the digital journey. As a customer of a couple of banks, I’m excited to be along for the ride.

Insights from an Investment Banking Monitoring Architect

To put it simply, Financial Services companies have a unique set of challenges that they have to deal with every day. They are a high-priority target for hackers, they are heavily regulated by federal and state governments, they deal with and employ some of the most demanding people on the planet, and problems with their applications can have an impact on every other industry across the globe. I know this from first-hand experience; I was an Architect at a major investment bank for over 5 years.

In this blog post I’m going to show you what’s really important when Financial Services companies consider application monitoring solutions, and warn you about the hidden realities that only reveal themselves after you’ve deployed a large enough monitoring footprint.

1 – Product Architecture Plays a Major Role in Long Term Success or Failure

Every monitoring tool has a different core architecture. On the surface these architectures may look similar, but it is imperative to dive deeper into the details of how each monitoring product works. We’ll use two real product architectures as examples.

“APM Solution A” is an agent-based solution. This means that a piece of vendor code is deployed to gather monitoring information from your running applications. This agent is intelligent and knows exactly what to monitor, how to monitor, and when to dial itself back to do no harm. The agent sends data back to a central collector (called a controller) where the data is correlated, analyzed, and categorized automatically to provide actionable intelligence to the user. With this architecture the agent and the controller are very loosely coupled, which lends itself well to the highly distributed, virtualized environments you see in modern application architectures.
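To make that loosely coupled reporting pattern concrete, here is a minimal sketch in Python. The controller endpoint, metric names, and values are all made up for illustration; a real agent instruments the application itself and is far more sophisticated, but the periodic, fault-tolerant reporting loop is the core idea.

```python
import json
import time
import urllib.request

CONTROLLER_URL = "https://controller.example.com/metrics"  # hypothetical controller endpoint


def collect_local_metrics():
    """Gather a small metric payload locally; a real agent instruments the app itself."""
    return {
        "app": "order-service",          # placeholder application name
        "timestamp": time.time(),
        "avg_response_ms": 42.0,         # placeholder values for illustration
        "error_count": 0,
    }


def report(metrics):
    """Send metrics to the controller; failures are tolerated so the app is never harmed."""
    try:
        req = urllib.request.Request(
            CONTROLLER_URL,
            data=json.dumps(metrics).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=2)
    except Exception:
        pass  # a well-behaved agent degrades silently rather than impacting the application


if __name__ == "__main__":
    while True:
        report(collect_local_metrics())
        time.sleep(60)  # agent and controller stay loosely coupled via periodic reporting
```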

“APM Solution B” is also agent-based. It has a 3-tiered architecture consisting of agents, collectors, and servers. On the surface this architecture seems reasonable, but when we look at the details a different story emerges. The agent is not intelligent, so it does not know how to instrument an application. This means that every time an application is restarted, the agent must send all of its methods to the collector so that the collector can tell the agent how and what to instrument. This places a large load on the network, delays application startup, and adds to the amount of hardware required to run your monitoring tool. After the collector has told the agent what to monitor, the collector’s job is to gather the monitoring data from the agent and pass it back to the server, where it is stored and viewed. For a single application this architecture may seem acceptable, but you must consider the implications across a larger deployment.

Choosing a solution with the wrong product architecture will impact your ability to monitor and manage your applications in production. Production monitoring is a requirement for rapid identification, isolation and repair of problems.

2 – Monitoring Philosophy

Monitoring isn’t as straightforward as collecting, storing, and showing data. You could take that approach, but it would not provide much value. When looking at monitoring tools it’s really important to understand the impact of monitoring philosophy on your overall project and goals. When I was evaluating monitoring tools I needed to be able to solve problems fast, and I didn’t want to spend all of my time managing the monitoring tools themselves. Let’s use examples to illustrate again.

“APM Solution A” monitors every business transaction flowing through whatever application it is monitoring. Whenever any business transaction has a problem (it is slow or throws an error), the solution automatically collects all of the data (deep diagnostics) you need to figure out what caused the problem. This, combined with periodic deep-diagnostic sessions at regular intervals, allows you to solve problems while keeping network, storage, and CPU overhead low. It also keeps clutter down (compared to collecting everything all the time) so that you can solve problems as fast as possible.
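To illustrate the philosophy (this is my own simplified sketch, not the product’s implementation), deep diagnostics are only captured when a transaction is slow or throws an error, plus an occasional periodic sample; the threshold and sample rate below are assumptions for the example.

```python
import random

SLOW_THRESHOLD_MS = 500        # assumed threshold for this sketch
PERIODIC_SAMPLE_RATE = 0.01    # occasionally capture diagnostics even for healthy transactions


def capture_deep_diagnostics(txn):
    """Stand-in for collecting call graphs, SQL timings, etc. (the expensive part)."""
    print(f"deep diagnostics captured for {txn['name']} ({txn['duration_ms']:.0f} ms)")


def record_transaction(txn):
    """Always record lightweight timing; only go deep when something looks wrong."""
    if txn["error"] or txn["duration_ms"] > SLOW_THRESHOLD_MS:
        capture_deep_diagnostics(txn)
    elif random.random() < PERIODIC_SAMPLE_RATE:
        capture_deep_diagnostics(txn)  # periodic sample keeps a baseline level of detail
    # otherwise keep only summary metrics: low network, storage, and CPU overhead


if __name__ == "__main__":
    for _ in range(1000):
        record_transaction({
            "name": "checkout",
            "duration_ms": random.expovariate(1 / 200),  # simulated response times, ~200 ms mean
            "error": random.random() < 0.01,
        })
```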

“APM Solution B” also monitors every transaction for each monitored application, but it collects deep diagnostic data for all transactions all the time. This monitoring philosophy greatly increases network, storage, and CPU overhead while producing massive amounts of data to sift through regardless of whether or not there are application problems.

When I was actively using monitoring tools in the Investment Bank I never looked at deep diagnostic data unless I was working on resolving a problem.

3 – Analytics Approach

Analytics comes in many shapes and sizes these days. Regardless of the business or technical application, analytics does what humans could never do on their own: it creates actionable intelligence from massive amounts of data and allows us to solve problems much faster than ever before. Part of my process for evaluating monitoring solutions has always been determining just how much extra help each tool provides in identifying and isolating (root-causing) application problems using analytics. Back to my examples…

“APM Solution A” is an analytics product at its core. Every business transaction is analyzed to create a picture of “normal” response time (a baseline). When new business transactions deviate from this baseline they are automatically classified as either slow or very slow, and deep diagnostic information is collected, stored, and analyzed to help identify and isolate the root cause. Static thresholds can be set for alerting, but by default alerts are based upon deviation from normal, so you can proactively identify service degradation instead of waiting for small problems to become major business impact.
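A rough way to picture baseline-driven classification (again, my own simplified sketch, not the product’s actual algorithm) is a rolling window of recent response times per business transaction, with each new measurement judged by how far it deviates from that window’s mean; the window size and deviation thresholds below are assumptions.

```python
from collections import defaultdict, deque
import statistics

WINDOW = 200  # number of recent samples that define "normal" in this sketch


class BaselineClassifier:
    def __init__(self):
        # one rolling window of response times per business transaction name
        self.history = defaultdict(lambda: deque(maxlen=WINDOW))

    def classify(self, txn_name, duration_ms):
        """Return 'normal', 'slow', or 'very slow' relative to the rolling baseline."""
        samples = self.history[txn_name]
        verdict = "normal"
        if len(samples) >= 30:  # need enough history before judging
            mean = statistics.mean(samples)
            stdev = statistics.pstdev(samples) or 1.0
            deviation = (duration_ms - mean) / stdev
            if deviation > 4:
                verdict = "very slow"  # would trigger deep diagnostics and alerting
            elif deviation > 2:
                verdict = "slow"
        samples.append(duration_ms)
        return verdict


# usage: classifier = BaselineClassifier(); classifier.classify("checkout", 180.0)
```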

“APM Solution B” only provides baselines for the business transactions you have explicitly specified. You have to manually configure the business transactions for each application. Again, on a small scale this methodology is usable, but it quickly becomes a problem when managing the configuration of tens, hundreds, or thousands of applications that keep changing as development continues. Searching through a large set of data for a problem is much slower without the assistance of analytics.


4 – Vendor Focus

When you purchase software from a vendor you are also committing to working with that vendor. I always evaluated how responsive each vendor was during the pre-sales phase, but it was hard to get a good measure of what the relationship would be like after the sale. No matter how good the developers are, there are going to be issues with software products. What matters most is the response you get from the vendor after you have made the purchase.

5 – Ease of Use

This might seem obvious, but ease of use is a major factor in whether software delivers a solid return on investment or becomes shelf-ware. Modern APM software is powerful AND easy to use at the same time. One of the worst mistakes I made as an Architect was not paying enough attention to ease of use during product evaluation and selection. If only a few people in a company are capable of using a product, then it will never reach its full potential, and that is exactly what happened with one of the products I selected. Two weeks after training a team on product usage, almost nobody remembered how to use it. That is a major issue with legacy products.

Enterprise software is undergoing a major disruption. If you already have monitoring tools in place, now is the right time to explore the marketplace and see how your environment can benefit from modern tools. If you don’t have any APM software in place yet, you need to catch up to your competition, since most of them are already evaluating or have already implemented APM for their critical applications. Either way, you can get started today with a free trial of AppDynamics.

Synthetic vs Real-User Monitoring: A Response to Gartner

Recently Jonah Kowall of Gartner released a research note titled “Use Synthetic Monitoring to Measure Availability and Real-User Monitoring for Performance”. After reading this paper I had some thoughts I wanted to share, based on my experience as a Monitoring Architect (and certifiable performance geek) working within large enterprise organizations. I highly recommend reading the research note, as the information and findings within are spot on and highlight important differences between synthetic and real-user monitoring as applied to availability and performance.

My Apps Are Not All 24×7

During my time working at a top-10 Investment Bank I came across many different applications with varying service-level requirements. I say requirements because there were rarely any formal agreements or contracts in place, usually just an organizational understanding of how important each application was to the business and the service level expected. Many of the applications in the Investment Bank portfolio were only used during the trading hours of the exchanges they interfaced with. These applications also had to be available right as the exchanges opened and performing well for the entire duration of trading activity. Having no real user activity before the open meant that the only way to gain any insight into the availability and performance of these applications was by using synthetically generated transactions.

Was this an ideal situation? No, but it was all we had to work with in the absence of real user activity. If the synthetic transactions were slow or throwing errors, at least we could attempt to repair the platform before the opening bell. Once the trading day got started we measured real user activity to see the true picture of performance and made adjustments based upon that information.
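To give a flavor of what a pre-open synthetic check looks like, here is a bare-bones sketch in Python; the URL and thresholds are made up, and real synthetic monitoring tools script full multi-step transactions rather than a single request.

```python
import time
import urllib.request

HEALTH_URL = "https://trading-app.example.com/health"  # hypothetical synthetic target
SLOW_THRESHOLD_S = 2.0                                 # assumed acceptable response time


def synthetic_check():
    """Run one synthetic transaction and report availability and response time."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    elapsed = time.monotonic() - start

    if not ok:
        print("ALERT: synthetic transaction failed; investigate before the opening bell")
    elif elapsed > SLOW_THRESHOLD_S:
        print(f"WARN: synthetic transaction slow ({elapsed:.1f}s)")
    else:
        print(f"OK: synthetic transaction completed in {elapsed:.1f}s")


if __name__ == "__main__":
    synthetic_check()  # typically scheduled (e.g. via cron) in the hours before market open
```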


Can’t Script It All

Having to rely upon synthetic transactions as a measure of availability and performance is definitely suboptimal. The problem is amplified in environments where you shouldn’t be exercising certain application functionality due to regulatory and other restrictions. Do you really want to be trading securities, derivatives, currencies, etc. with your synthetic transaction monitoring tool? Methinks not!

So there is a gaping hole in your monitoring strategy if you rely on synthetic transactions alone. You can’t test all of your business-critical functionality even if you wanted to spend the long hours scripting and testing your synthetics. The scripting and testing time investment gets amplified when there are changes to your application code: if those code updates change the application’s responses, you will need to re-script for the new responses. It’s a vicious cycle that doesn’t happen when you use the right kind of real user monitoring.

Real User Monitoring: Accurate and Meaningful

When you monitor real user transactions you get more accurate and relevant information. Here is a list (what would a good blog post be without a list?) of some of the benefits, with a small sketch after the list of what aggregating real-user measurements can look like:

  • Understand exactly how your application is being used.
  • See the performance of each application function as the end user does, not just within your data center.
  • No scripting required (scripting can take a significant amount of time and resources).
  • Ensure full visibility of application usage and performance, not just what was scripted.
  • Understand the real geographic distribution of your users and the impact of that distribution on end-user experience.
  • Track the performance of your most important users (particularly useful in trading environments).
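Picking up on the last two bullets, here is a minimal, entirely hypothetical sketch of aggregating captured real-user measurements by business transaction and region and reporting 95th-percentile response times; the transaction names, regions, and timings are invented, and a real RUM product captures and aggregates this automatically at far greater scale.

```python
from collections import defaultdict
import statistics

# each beacon represents one real user interaction captured from the browser or client
beacons = [
    {"txn": "view-portfolio", "region": "London", "duration_ms": 220},
    {"txn": "view-portfolio", "region": "New York", "duration_ms": 480},
    {"txn": "place-order", "region": "London", "duration_ms": 350},
    # in practice, thousands of these stream in per minute
]


def p95_by_region(beacons):
    """Group real-user timings by (transaction, region) and report the 95th percentile."""
    grouped = defaultdict(list)
    for b in beacons:
        grouped[(b["txn"], b["region"])].append(b["duration_ms"])

    report = {}
    for key, durations in grouped.items():
        # quantiles() needs at least two data points; fall back to the raw value otherwise
        if len(durations) > 1:
            report[key] = statistics.quantiles(durations, n=20)[-1]  # ~95th percentile
        else:
            report[key] = durations[0]
    return report


if __name__ == "__main__":
    for (txn, region), p95 in p95_by_region(beacons).items():
        print(f"{txn:>15} @ {region:<9}: p95 {p95:.0f} ms")
```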

Conclusion

Synthetic transaction monitoring and real user monitoring can definitely co-exist within the same application environment. Every business is different and has its own unique requirements that can impact the type of monitoring you choose to implement. If you’ve not yet read the Gartner research note, I suggest you check it out now. It provides a solid analysis of synthetic and real user monitoring tools, vendors, and usage scenarios that goes well beyond what I have covered here.

Has synthetic or real transaction monitoring saved the day for your company? I’d love to hear about it in the comments below.