Q&A with Ravi Mhatre of Lightspeed Venture Partners

Recently I had the opportunity to speak with Ravi Mhatre of Lightspeed Venture Partners, an investor in and board member of AppDynamics. Ravi focuses on enterprise, mobility, and cloud-based services and applications, and has led investments in Bromium, MuleSoft, and Nutanix, among several others.

Similar to my conversations with Steve and Matt, Ravi and I spoke about the future of technology, how the role of the CIO is changing, and where the next disruption is bound to take place.

Below you can read our full discussion…

In your opinion what category of technology will be most impactful to businesses in the next 3-5 years?

I think we’re still in the early innings in terms of the eventual impact cloud computing will have on business. More specifically, a significant share of cloud-related advancement to this point has come from harvesting marginal efficiencies & capabilities by applying new layers of abstraction. That trajectory of innovation reflects path dependencies imposed by antecedent technologies like the hypervisor and virtualization. As enterprises’ comfort with the cloud increases, ongoing advancement will by contrast more often consist of re-platforming as opposed to “abstraction stacking”. For a business to “win” over the coming years, its IT leadership will need to internalize the implications of a fully composable environment and accordingly rethink fundamental stack architecture from the bottom up.

How would you describe the future role of the CIO?

The future role of the CIO will increasingly be one of managing constant fluidity. Themes we’re seeing at the application layer like microservices & containers foreshadow that large swaths of underlying infrastructure will continue along a pathway of decomposition. As functions & services get disaggregated, the highest value that a CIO can deliver is in optimizing at the edges of an enterprise IT topology as opposed to trying to optimize at its nodes. That entails that the role itself will be less about building & maintaining a hardened stack, and more about perpetually monitoring, orchestrating and provisioning a federation of components that collectively deliver equal or superior functionality with far greater agility, efficiency & cost effectiveness.

What is the next category to be ‘ubered’?

There are a number of ways one could interpret what constitutes the “uberification” of a category. As far as “collaborative consumption” or the notion of a disintermediated production model applies, I think an interesting analogue resides in open source licensing as a paradigm for IT innovation. Survey data shows that the number of enterprises leveraging open source technology has more than doubled since 2010 and that nearly two-thirds of corporate IT organizations now contribute to open source projects. Perhaps equally notable, businesses are increasingly launching external OSS projects and reducing barriers to employee participation in OSS in order to strengthen recruiting & retention of IT talent. At the heart of Uber’s business is a question that goes back at least as far as Ronald Coase’s 1937 theory of the firm, which suggests that a formally bounded “company” only comes about when the high coordination costs of production preclude simpler options like a market-price mechanism. To that end, ubiquitous smartphones changed the fundamental math for Uber in the taxi industry. With open source, a similar pattern is emerging: the coordination costs to collaborate & innovate outside the boundaries of the traditional firm continue to decline. That pattern points to an interesting future in terms of how the most impactful IT innovations get built over the coming years.

(Survey source: http://www.slideshare.net/blackducksoftware/2015-future-of-open-source-survey-results)

If you were not an Investor, what would you be doing right now?

I take a lot of inspiration from my father, who used his PhD background in biochemistry to grow from junior engineering roles to eventually leading a number of successful medical technology companies in a managerial capacity. My father gave me a strong appreciation for the value of primary research as well as a love for technology in its own right. Before entering the venture capital industry, my background was in electrical engineering and I had the opportunity to work in settings that gave me an ongoing fascination with computing. One such position was with Silicon Graphics, which was building 3D graphics display terminals back then. Being in that environment, I became quite enamored with the vast possibility held in virtual reality & immersive computing. Had I not entered the investing side, there’s a chance I might have pursued an entrepreneurial path around that opportunity, though it’s difficult to know where that might have put me decades after the fact!

Thanks again Ravi for your time!

Q&A with Matt Murphy of Menlo Ventures

Following up on my discussion with Steve Harrick, I had the opportunity to speak with Matt Murphy of Menlo Ventures. Matt is a longtime industry veteran focusing on mobile and cloud enterprise investments including Instart Logic, Puppet Labs, Shazam, and several others. He’s also an investor and board observer of AppDynamics.

Along with asking about the future of technology and how the CIO role is transforming, we also spoke about what he’d be doing if he wasn’t an investor.

Here’s our chat…

In your opinion what category of technology will be most impactful to businesses in the next 3-5 years?

I’m most excited about the power of machine learning and how it will impact everything from IT to consumer and enterprise apps. ML will help IT use resources more efficiently and move even further toward automation. On the app front, integrating ML helps deliver a more targeted/custom experience for the user. The intelligence and learning that cloud apps put back into the product to make it smarter will have a huge impact.

What is the next category to be ‘ubered’?

So many categories are being “ubered” or “airbnbed” that the big question is which category has the potential to be as big as Uber, and if I were clear on that, I’d already be an investor. On-demand services and the rental economy are going to be massive going forward. We’ll continue to be surprised at the scale of these businesses.

How would you describe the future role of the CIO?

The CIO of the future is an innovator. The role has shifted from a manager of teams and process to being a driver of technology change in the organization.

We’re at a rare time where there is an abundance of new technologies and platforms. The future CIO will embrace those and drive new tech adoption for competitive advantage. We’re finally at a stage where IT will not only streamline, simplify, and lower cost, but really be used to outrun one’s competitors.

If you were not an Investor, what would you be doing right now?

Trying to find the next AppDynamics or Uber to start or join. Beyond that, spending more time with my wife and boys, traveling, sporting, laughing. There is never enough of that and we all have to remember to keep the balance. You only live once. Make the most of it on the dimensions that matter in the end.

Thanks again Matt for your time!

Ten Minutes with Steve Harrick of Institutional Venture Partners

I recently had the opportunity to sit down with Steve Harrick of Institutional Venture Partners (IVP) to discuss current trends and future outlook within the technology industry. Along with leading IVP’s investment in AppDynamics — and being a Board Observer — Steve also led investments in notable IT companies such as Pure Storage, Sumo Logic, MySQL, Spiceworks, and several others.

Here’s a quick insight into our chat…

In your opinion, what category of technology will be most impactful to businesses in the next three to five years?

Three areas are top of mind. Security is foremost. The IT landscape has changed dramatically over the last several years. Global smartphone adoption happened even faster than we expected, and file sharing via Dropbox, iCloud, and other services is common practice. Accordingly, the boundaries between your personal and professional life are extremely porous. Huge volumes of confidential information move among these spheres without adequate security or checks and balances. It’s become increasingly difficult to ensure your data is safe or that your company’s data policies are enforced. All of your company’s intellectual property and hard work could be compromised without the right security tools and appropriate emergency response plans.

Next is Data Analytics. IDC recently reported that the digital universe is doubling in size every two years, and by 2020 the amount of data we create and copy annually will reach 44 zettabytes, or 44 trillion gigabytes. It’s essential that companies are able to store, analyze, and make sense of all this information in order to make better decisions. Businesses that learn how to efficiently leverage all this information will enjoy a distinct competitive advantage; those that don’t will quickly become dinosaurs.

And finally, Application Performance Management is becoming essential as we navigate a highly distributed and rapidly changing IT landscape. We’ve already seen how AppDynamics’ software can test, measure and monitor app performance in heterogeneous environments. The next phase involves a unified solution that continually monitors an enterprise’s entire infrastructure, including not only applications but databases, servers and network performance.

What is the next category to be ‘Ubered’?

A lot of startups come to us claiming to be the “Uber of this,” or the “Uber of that.” Uber has a remarkably compelling business model and is global in its ambitions, but it is a unique company. People shouldn’t confuse Uber’s on-demand product with a simple marketplace model that connects buyers and sellers. Marketplace models can only provide value at scale, so you have to be very cautious when evaluating small marketplace startups. That said, I believe the marketplace model makes sense for healthcare and health insurance because in that market, you have an odd confluence of fragmentation and regulation. Zenefits’ free software allows customers to navigate the highly distributed HR and benefits provider landscape. The genius of Zenefits’ business model is that the company uses software to create a marketplace and in doing so, it disrupts traditional brokers – similar to the disruption that Uber has brought to the taxi industry.

How would you describe the future role of the CIO?

CIOs are responsible for each of the areas I outlined in my first answer. But perhaps more importantly, a successful CIO must efficiently manage distributed operations, whether we’re talking about people, or processors. It is becoming increasingly expensive to scale operations in the Bay Area and that’s forcing companies to establish second and third sites to cut costs. They might locate their sales team in another state, or their manufacturing in another country. And the CIO must manage the company’s information systems, on premise or in the cloud, in a manner that is not only distributed but consistent, efficient and cost effective.

If you were not an investor, what would you be doing right now?

If I wasn’t working and my kids were suddenly in college, maybe I’d go fly fishing around the world with my wife.  If I continued to work and wasn’t in VC, I would love to convince someone that I was qualified to become the president of a major research university like Stanford, Harvard or Yale. Great universities teach students responsibility and put them on a path not to memorize and repeat, but to think effectively and contribute their gifts to society. I believe it would be exciting to fashion an environment that would have a positive impact upon the leaders of tomorrow. This is an age of advancement and technology is leading the way.

Thanks again to Steve for his time and insights!

DevOps Scares Me – Part 4: Dev and Ops Collaborate Across the Lifecycle

Today we’re blogging something a little different from our usual. I’m Jim Hirschauer, the Operations Guy, and this is my esteemed colleague Dustin Whittle, the Developer. In this blog post we’re going to discuss how we would take an application from inception through development, testing, QA, and into production. We’ll each comment on the different stages and give our perspective on the tools we need at each stage and how they help with automation, testing, and monitoring. Along the way we’ll call out the potential collaboration points to identify the areas where the DevOps approach provides the most value.

The software development loop looks like this:

[Image: DevOps Cycle]

Inception and working with a product team

From an operational perspective, my first instinct is to understand the application architecture so that I can start thinking about the proper deployment model for the infrastructure components. Here are some of my operational questions and considerations for this stage:

  • Are we using a public or private cloud?
  • What is the lead time for spinning up each component and ensuring it complies with my company’s regulations?
  • When do I need to provide a development environment to my dev team or will they handle it themselves?
  • Does this application perform functions that other applications or services already handle? Operations should have high-level visibility into the application and service portfolio.

From a development perspective, my first milestone is to make sure the ops team fully understands the application and what it takes to deploy it to a pre-production environment. This is where we, the developers, sync with the product and ops teams and make sure we are aligned.

Planning for the product team:

  • Is the project scope well defined? Is there a product requirements document?
  • Do we have a well defined product backlog?
  • Are there mocks of the user experience?

Planning for the ops team:

  • What tools will we use for deployment and configuration management?
  • How will we automate the deployment process and does the ops team understand the manual steps?
  • How will we integrate our builds with our continuous integration server?
  • How will we automate the provisioning of new environments?
  • Capacity Planning – Do we know the expected production load?

There’s not a ton of activity at this stage for the operations team. This is really where the devops synergy comes into play. DevOps is simply operations working together with engineers to get things done faster in an automated and repeatable way. When it comes to scaling, the more automation in place the easier things will be in the long run.
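
To make “automated and repeatable” concrete, here is a minimal sketch of what scripted environment provisioning can look like. It assumes AWS and the boto3 SDK purely for illustration; the AMI ID, instance type, and tag values are placeholders, not anything prescribed in this post.

```python
# A minimal sketch of automated environment provisioning, assuming AWS and the
# boto3 SDK. The AMI ID, instance type, and tags are placeholders; your own
# images, sizes, and naming conventions will differ.
import boto3

def provision_environment(env_name, count=2):
    """Spin up identical, tagged instances so every environment is repeatable."""
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId="ami-12345678",        # placeholder: a pre-baked image of your app stack
        InstanceType="t3.medium",      # placeholder: size according to capacity planning
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Environment", "Value": env_name}],
        }],
    )
    return [i["InstanceId"] for i in response["Instances"]]

if __name__ == "__main__":
    print(provision_environment("qa"))
```

Whether you drive a cloud SDK directly or use a configuration management tool, the design point is the same: a new environment comes from code that anyone on the team can run, not from a checklist executed by hand.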

Development and scoping production

This should start with a conversation between the dev and ops teams to establish domain ownership. Depending on your organization and your peers’ strengths, this is a good time to decide who will be responsible for automating the provisioning and deployment of the application. Here are the ops questions for deploying complex web applications:

  • How do you provision virtual machines?
  • How do you configure network devices and servers?
  • How do you deploy applications?
  • How do you collect and aggregate logs?
  • How do you monitor services?
  • How do you monitor network performance?
  • How do you monitor application performance?
  • How do you alert and remediate when there are problems?

During the development phase, the operations-focused staff normally make sure the development environment is managed and actively work to set up the test, QA, and production environments. This can take a lot of time if automation tools aren’t used.

Here are some tools you can use to automate server build and configuration:

Meanwhile, the operations staff should also make sure that the developers have access to tools which will help them with release management and application monitoring and troubleshooting. Here are some of those tools:

Deployment Automation:

Infrastructure Monitoring:

Logging:

Application + Network Performance Management:
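
Whatever products end up in those categories, the underlying loop is the same: continuously check the things you care about and record the results where a human (or an alerting tool) can find them. Here’s a deliberately simple, illustrative Python health-check sketch with basic logging; the endpoints are hypothetical, and real monitoring tools do far more.

```python
# A deliberately simple infrastructure health-check sketch. The endpoints are
# hypothetical; real monitoring tools do far more, but the core loop is the same.
import logging
import requests

logging.basicConfig(
    format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO
)

ENDPOINTS = {
    "web": "https://example.com/health",       # placeholder URL
    "api": "https://api.example.com/health",   # placeholder URL
}

def check_endpoints():
    results = {}
    for name, url in ENDPOINTS.items():
        try:
            response = requests.get(url, timeout=5)
            healthy = response.status_code == 200
        except requests.RequestException as exc:
            logging.error("%s check failed: %s", name, exc)
            healthy = False
        else:
            logging.info("%s returned %s in %.0f ms",
                         name, response.status_code,
                         response.elapsed.total_seconds() * 1000)
        results[name] = healthy
    return results

if __name__ == "__main__":
    print(check_endpoints())
```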

Testing and Quality Assurance

Once developers have built unit and functional tests, we need to ensure the tests run after every commit and that we don’t allow regressions into our promoted environments. In theory, developers should do this before they commit any code, but often problems don’t show up until you have production traffic running on production infrastructure. The goal of this step is to simulate, as much as possible, everything that can go wrong, find out what happens, and figure out how to remediate.
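
In practice, “the tests run after every commit” means a gate in the CI pipeline that refuses to promote a build when anything fails. Here is a minimal sketch of such a gate, assuming a pytest-based suite that your CI server invokes on each commit:

```python
# A minimal CI gate sketch: run the test suite on every commit and block the
# build (non-zero exit) on any failure. Assumes a pytest-based suite.
import subprocess
import sys

def run_test_gate():
    result = subprocess.run(["pytest", "--maxfail=5", "-q"])
    if result.returncode != 0:
        print("Tests failed -- blocking promotion of this build.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_test_gate())
```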

[Image: QA Problems]

The next step is to do capacity planning and load testing to be confident the application doesn’t fall over when it is needed most. There are a variety of tools for load testing:

  • Apica Load Test – Cloud-based load testing for web and mobile applications
  • Soasta – Build, execute, and analyze performance tests on a single, powerful, intuitive platform.
  • Bees with Machine Guns – A utility for arming (creating) many bees (micro EC2 instances) to attack (load test) targets (web applications).
  • MultiMechanize – Multi-Mechanize is an open source framework for performance and load testing. It runs concurrent Python scripts to generate load (synthetic transactions) against a remote site or service. Multi-Mechanize is most commonly used for web performance and scalability testing, but can be used to generate workload against any remote API accessible from Python.
  • Google PageSpeed Insights – PageSpeed Insights analyzes the content of a web page, then generates suggestions to make that page faster. Reducing page load times can reduce bounce rates and increase conversion rates.
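
To show the shape of what these tools automate, here is a bare-bones Python load generator: concurrent synthetic transactions against a target URL with simple latency statistics. It’s only a sketch (the URL and concurrency numbers are made up), not a substitute for the tools listed above.

```python
# A bare-bones load-generation sketch: fire concurrent requests at a target and
# report simple latency statistics. The URL and concurrency level are made up.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://example.com/"   # placeholder target
CONCURRENCY = 20
REQUESTS_PER_WORKER = 10

def worker(_):
    latencies = []
    for _ in range(REQUESTS_PER_WORKER):
        start = time.monotonic()
        requests.get(TARGET_URL, timeout=10)
        latencies.append(time.monotonic() - start)
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        all_latencies = [l for result in pool.map(worker, range(CONCURRENCY))
                         for l in result]
    print("requests: %d" % len(all_latencies))
    print("median latency: %.3fs" % statistics.median(all_latencies))
    print("95th percentile: %.3fs" %
          statistics.quantiles(all_latencies, n=20)[18])
```

A real load test should of course replay a realistic mix of business transactions, which is exactly the point made below about using production usage patterns.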

The last step of testing is discovering all of the possible failure scenarios and coming up with a disaster recovery plan. For example, what happens if we lose a database or a data center, or have a 100x surge in traffic?

During the test and QA stages operations needs to play a prominent role. This is often overlooked by ops teams but their participation in test and QA can make a meaningful difference in the quality of the release into production. Here’s how.

If the application is already in production (and monitored properly), operations has access to production usage and load patterns. These patterns are essential to the QA team for creating a load test that properly exercises the application. I once watched a functional test where 20+ business transactions were tested manually by the application support team. Directly after the functional test I watched the load test that ran the same 2 business transactions over and over again. Do you think the load test was an accurate representation of production load? No way! When I asked the QA team why there were only 2 transactions they said “Because that is what the application team told us to model.”

The development and application support teams usually don’t have time to sit with the QA team and give them an accurate assessment of what needs to be modeled for load testing. Operations teams should work as the middle man and provide business transaction information from production or from development if this is an application that has never seen production load.

Here are some of the operational tasks during testing and QA:

  • Ensure monitoring tools are in place
  • Ensure environments are properly configured
  • Participate in functional, load, stress, leak, and other tests, and provide analysis and support
  • Provide guidance to the QA team

Production

Production is traditionally the domain of the operations team. For as long as I can remember, the development teams have thrown applications over the production wall for the operations staff to deal with when there are problems. Sure, some problems like hardware issues, network issues, and cooling issues are purely on the shoulders of operations–but what about all of those application specific problems? For example, there are problems where the application is consuming way too many resources, or when the application has connection issues with the database due to a misconfiguration, or when the application just locks up and has to be restarted.

I recall getting paged in the middle of the night for application-related issues and thinking how much better each release would be if the developers had to support their applications once they made it to production. It was really difficult back in those days to say with any certainty that the problem was application related and that a developer needed to be involved. Today’s monitoring tools have changed that and allow for problem isolation in just minutes. In organizations where developers are not allowed access to production servers, such as financial services, having the proper tools is all the more important.

Production devops is all about:

  • deploying code in a fast, repeatable, scalable manner
  • rapidly identifying performance and stability problems
  • alerting the proper team when a problem is detected
  • rapidly isolating the root cause of problems
  • automatic remediation of known problems and rapid manual remediation of new problems (runbooks and runbook automation)

Your application must always be available and operating correctly during business hours (this may be 24×7 for your specific application).

Alerting Tools:

In case of failure, alerting tools are crucial for notifying the ops team of serious issues. The operations team will usually have a runbook to turn to when things go wrong. A best practice is to collaborate on incident response plans.
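
The mechanics of that alerting can be as simple as a check that posts to whatever paging or chat service your team uses. Here is a minimal sketch, assuming a generic incoming-webhook URL; the threshold and URL are placeholders.

```python
# A minimal alerting sketch: evaluate a health condition and notify the on-call
# channel via a generic incoming webhook. URL and threshold are placeholders.
import requests

WEBHOOK_URL = "https://hooks.example.com/oncall"   # placeholder webhook
ERROR_RATE_THRESHOLD = 0.05                        # placeholder: 5% errors

def alert_if_unhealthy(error_rate):
    if error_rate <= ERROR_RATE_THRESHOLD:
        return False
    message = ("Error rate %.1f%% exceeds threshold %.1f%% -- "
               "see the runbook for first-response steps."
               % (error_rate * 100, ERROR_RATE_THRESHOLD * 100))
    requests.post(WEBHOOK_URL, json={"text": message}, timeout=5)
    return True

if __name__ == "__main__":
    alert_if_unhealthy(error_rate=0.12)
```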

Maintenance

Finally we’ve made it to the last major category of the SDLC, maintenance. As an operations guy my mind focuses on the following tasks:

  • Capacity planning – Do we have enough resources available to the application? If we use dynamic scaling, this is not an issue but a task to ensure the scaling is working properly.
  • Patching – are we up to date with patches on the infrastructure and application components? This is supposed to help with performance and/or security and/or stability but it doesn’t always work out that way.
  • Support – are we current with our software support levels (aka, have we paid and are we on supported versions)?
  • New releases (application updates) – New releases always made me cringe since I assumed the release would have issues the first week. I learned this reaction from some very late nights immediately following those new releases.
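
For the capacity-planning task in the list above, much of the work boils down to arithmetic against measured utilization. Here is a rough sketch, with made-up numbers for peak load and headroom policy:

```python
# A rough capacity-planning sketch: given measured peak load and a target
# headroom, estimate how many instances are needed. All numbers are made up.
import math

def instances_needed(peak_rps, rps_per_instance, target_utilization=0.6):
    """How many instances keep peak load at or below the target utilization?"""
    return math.ceil(peak_rps / (rps_per_instance * target_utilization))

if __name__ == "__main__":
    current_instances = 8                       # placeholder
    needed = instances_needed(peak_rps=4500,    # placeholder measured peak
                              rps_per_instance=400)
    print("need %d instances, have %d, add %d"
          % (needed, current_instances, max(0, needed - current_instances)))
```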

As a developer, the biggest issue during the maintenance phase is working with the operations team to deploy new versions and make critical bug fixes. The other primary concern is troubleshooting production problems. Even when no new code has been deployed, failures sometimes happen. If you have a great process, application performance monitoring, and a DevOps mentality, collaborating with ops to resolve the root cause of failures becomes easy.

As you can see, the dev and ops perspectives are pretty different, but that’s exactly why those two sides of the house need to tear down the walls and work together. DevOps isn’t just a set of tools; it’s a philosophical shift that requires buy-in from everyone involved to really succeed. It’s only through a high level of collaboration that things will change for the better. AppDynamics can’t change the mindset of your organization, but it is a great way to foster collaboration across all of your organizational silos. Sign up for your free trial today and make a difference for your organization.