The AppD Approach: Deployment Options for .NET Microservices Agent

There are numerous ways to develop .NET applications, and several ways to run them. As the landscape expands for .NET development—including advances in .NET Core with its cross-platform capabilities, self-contained deployments, and even the ability to run an ASP.NET Core app on a Raspberry Pi with the upcoming .NET Core 2.1 ARM32 support—it’s only fitting that AppDynamics should advance its ability to monitor this new landscape.

One of these advancements is our new .NET Microservices Agent. Like .NET Core, this agent has evolved to become more portable and easier to use, providing more value to our customers who monitor .NET Core applications. Its portability and refinement enable a couple of installation options, both of which align closely with the movement to host .NET applications in the cloud, the development of microservices, and the growing use of containers. This flexibility in deployment was a requirement of our customers, as they had concerns over the one-size-fits-all deployment options of some of our competitors. These deployment methods include:

  • Installing via the AppDynamics Site Extension in Azure

  • Installing via the NuGet package bundled with the application

Each method has its advantages and disadvantages:

AppDynamics Site Extension

    • Advantage: Azure Site Extension is an easy deployment method that decouples the AppDynamics agent from the code. A couple of clicks and some basic configuration settings and—voila!—an Azure App Service has an AppDynamics monitoring solution.

    • Disadvantage: It is an Azure App Service-only option. Should the application need to be moved to another service such as Azure Service Fabric, a different installation method would be needed.

AppDynamics NuGet Package

  • Advantage: The NuGet package installation method is extremely versatile. Since it’s bundled with the application, wherever the application goes, the agent and monitoring go too. It’s an excellent option for microservices and containers.

  • Disadvantage: Its biggest advantage is also a drawback, as coupling the agent with the application increases operational requirements. Agent updates, for instance, would require small configuration changes and redeployments.

The Easy Option: AppDynamics Site Extension

Azure provides the ability to add Site Extensions, a simple way to add functionality and tooling to an Azure App Service.

In the case of AppDynamics’ .NET Microservices Agent, Site Extensions is a wonderful deployment method that allows you to set up monitoring on an Azure App Service without having to modify your application. This method is great for an operations team that either wants to monitor an existing Azure App Service without deploying new bits, or decouple the monitoring solution from the application.

The installation and configuration of the AppDynamics Site Extension is simple:

  1. Add the Site Extension to the App Service from the Site Extension Gallery.

  2. Launch the Controller Configuration Form and set up the Agent.

As always, Azure provides multiple ways to do things. Let’s break down these simple steps and show installation from two perspectives: from the Azure Portal, and from the Kudu service running on the Azure App Service Control Manager site.

Installing the Site Extension via the Azure Portal

The Azure Portal provides a very easy method to install the AppDynamics Site Extension. As the Portal is the most common interface when working with Azure resources, this method will feel the most comfortable.

Step 1: Add the Site Extension

  • Log into the Azure Portal at https://portal.azure.com and navigate to the Azure App Service on which you want to install the AppDynamics Site Extension.

  • In the menu sidebar, click the Extensions option to load the list of currently installed Site Extensions for the Azure App Service. Click the Add button near the top of the page to load the Site Extension Gallery, where you can search for the latest AppDynamics Site Extension.

  • In the “Add extension” blade, select the AppDynamics Site Extension to install.
    (The Portal UI is not always the most friendly. If you hover over the names, a tooltip should appear showing the full extension name.)

  • After choosing the extension, click OK to accept the legal terms, and OK again to finish the selection. Installation will start, and after a moment the AppDynamics Site Extension will be ready to configure.

Step 2: Launch and Configure

  • To configure the AppDynamics Agent, click the AppDynamics Site Extension to bring up the details blade, and then click the Browse button at the top. This will launch the AppDynamics Controller Configuration form for the agent.

  • Fill in the configuration settings from your AppDynamics Controller, and click the Validate button. Once the agent setup is complete, monitoring will start.

  • Now add some load to the application. In a few moments, the app will show up in the AppDynamics Controller.

Installing the Site Extension via Kudu

Every Azure App Service is created with a secondary site running the Kudu service, which you can learn more about in the projectkudu repository on GitHub. The Kudu service is a powerful tool that gives you a behind-the-scenes look at your Azure App Service. It’s also the place where Site Extensions run. Installing the AppDynamics Site Extension from the Kudu service is just as simple as from the Azure Portal.

Step 1: Add Site Extension

  • Log in to the Azure Portal at https://portal.azure.com and navigate to the Azure App Service on which you want to install the AppDynamics Site Extension.

  • The Kudu service is easy to access via the Advanced Tools selection on the App Service sidebar.

  • Another option is to browse directly to the secondary site’s URL, which is formed by inserting “.scm” between the site name and the “.azurewebsites.net” domain. For example: http://appd-appservice-example.azurewebsites.net becomes http://appd-appservice-example.scm.azurewebsites.net. (You can read more about accessing the Kudu service in the projectkudu wiki.)

  • On the Kudu top menu bar, click the Site Extensions link to view the currently installed Site Extensions. To access the Site Extension Gallery, click the Gallery tab.

  • A simple search for “AppDynamics” will bring up all the available AppDynamics Site Extensions. Click the add “+” icon on the Site Extension tile to install.

  • On the “terms acknowledgement” dialog pop-up, click the Install button.

  • Finish the setup by clicking the “Restart Site” button on the upper right. This will restart the SCM site and prepare the AppDynamics Controller Configuration form.
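The “.scm” URL convention is purely mechanical, so deriving the Kudu URL is easy to script. A minimal sketch in Python (the site name is just a placeholder):

```python
def kudu_url(site_url: str) -> str:
    """Derive the Kudu (SCM) site URL from an Azure App Service URL."""
    return site_url.replace(".azurewebsites.net", ".scm.azurewebsites.net", 1)

# The site name here is a placeholder for illustration.
print(kudu_url("http://appd-appservice-example.azurewebsites.net"))
# -> http://appd-appservice-example.scm.azurewebsites.net
```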

Step 2: Launch and Configure

  • Once the restart completes, click the “Launch” icon (play button) on the Site Extension tile. This will launch the AppDynamics Controller Configuration form.

  • Follow the same process as before by filling in the details and clicking the Verify button.

  • The agent is now set up, and AppDynamics is monitoring the application.
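The UI steps above can also be automated. Kudu exposes a REST API for managing Site Extensions (documented in the projectkudu wiki); as a rough sketch, an install is a PUT against the SCM site using the App Service’s deployment credentials. The endpoint path, extension id, and credentials below are illustrative assumptions, not verified values—check the wiki and the Site Extension Gallery for the actual ids. The snippet only builds the request; it does not send it:

```python
import base64
import urllib.request

def build_install_request(site_name: str, extension_id: str,
                          user: str, password: str) -> urllib.request.Request:
    """Build (but do not send) a Kudu Site Extension install request.
    The /api/siteextensions path and the extension id are illustrative."""
    url = (f"https://{site_name}.scm.azurewebsites.net"
           f"/api/siteextensions/{extension_id}")
    # Kudu accepts HTTP basic auth with the App Service deployment credentials.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request = urllib.request.Request(url, method="PUT")
    request.add_header("Authorization", f"Basic {token}")
    return request

# Placeholder site name, extension id, and deployment credentials:
req = build_install_request("appd-appservice-example", "AppDynamics.Example",
                            "$appd-appservice-example", "<deployment-password>")
print(req.get_method(), req.full_url)
```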

AppDynamics Site Extension in Kudu Debug Console

One of the advantages of the Kudu service is the ability to use the Kudu Debug Console to locate App Service files, including the AppDynamics Site Extension installation and AppDynamics Agent log files. Should the Agent need configuration changes, such as adding a “tier” name, you can use the Kudu Debug Console to locate the AppDynamicsConfig.json file and make the necessary modifications.
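Because the file is plain JSON, a change like adding a tier name can also be scripted rather than edited by hand in the console. A sketch, with illustrative field names (match them against the schema your agent version actually ships):

```python
import json
from pathlib import Path

def set_tier(config_path: str, tier_name: str) -> None:
    """Add or update the tier name in an AppDynamicsConfig.json file.
    The "application"/"tier" field names are assumptions for illustration."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config.setdefault("application", {})["tier"] = tier_name
    path.write_text(json.dumps(config, indent=2))

# Example against a throwaway file:
Path("AppDynamicsConfig.json").write_text('{"application": {"name": "appd-example"}}')
set_tier("AppDynamicsConfig.json", "web-frontend")
print(json.loads(Path("AppDynamicsConfig.json").read_text())["application"]["tier"])
# -> web-frontend
```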

The Versatile Option: AppDynamics NuGet Packages

The NuGet package installation option is the most versatile deployment method, as the agent is bundled with the application. Wherever the application goes, the agent and monitoring solutions go too. This method is great for monitoring .NET applications running in Azure Service Fabric and Docker containers.

AppDynamics currently has four separate NuGet packages for the .NET Microservices Agent, and each is explained in greater detail in the AppDynamics documentation. Your choice of package should be based on where your application will be hosted, and which .NET framework you will use.

In the example below, we will use the package best suited for an Azure App Service, for comparison with the Site Extension.

Installing the AppDynamics App Service NuGet Package

The method for installing a NuGet package will vary by tooling, but for simplicity we will assume a simple web application is open in Visual Studio, and that we’re using Visual Studio to manage NuGet packages. If you’re working with a more complex solution with multiple applications bundled together, NuGet package installation will vary by project deployment.

Step 1: Getting the Correct Package

  • On the web app project, right-click and bring up the context menu. Locate and click “Manage NuGet Packages…”.  This should bring up the NuGet Package Manager, where you can search for “AppDynamics” under the Browse tab.  

  • Locate the correct package—in this case, the “AppService” option—select the appropriate version and click Install.

  • Build your project to add the AppDynamics directory to your project.

  • The agent is now installed and ready to configure.

Step 2: Configure the Agent

  • Locate the AppDynamicsConfig.json in the AppDynamics directory and fill in the Controller configuration information.

  • Publish the application to Azure and add some load to the application to test if monitoring was set up properly.
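For reference, a filled-in AppDynamicsConfig.json looks roughly like the fragment below. The exact schema varies by agent version, and every field name and value here is illustrative—consult the AppDynamics .NET Microservices Agent documentation for the authoritative layout:

```json
{
  "controller": {
    "host": "mycompany.saas.appdynamics.com",
    "port": 443,
    "ssl": true,
    "account": "mycompany",
    "accessKey": "<your-access-key>"
  },
  "application": {
    "name": "appd-appservice-example",
    "tier": "web-frontend"
  }
}
```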

I hope these steps give you an overview of how easy it is to get started with our .NET Microservices Agent. Make sure to review our official .NET Microservices Agent and Deploy AppDynamics for Azure documentation for more information.

Getting Started with Containers and Microservices

Get Ahead of Microservices and Container Proliferation with Robust App Monitoring

Containers and microservices are growing in popularity, and why not? They enable agility, speed, and resource efficiency for many tasks that developers work on daily. They are light in terms of code and interdependencies, which makes it much easier and less time-consuming to deliver apps to users or migrate applications from legacy systems to cloud servers.

What Are Containers and Microservices?

Containers are isolated workload environments in a virtualized operating system. They speed up workload processes and application delivery because they can be spun up quickly, and they solve application-portability challenges because they are not tied to software on physical machines.

Microservices are a type of software architecture that is light and limited in scope. Applications are composed of small, self-contained, single-function units that work together through APIs that are not dependent on a specific language. A microservices architecture is faster and more agile than traditional application architecture.

The Importance of Monitoring

For containers and microservices to be most effective and impactful as they are adopted, technology leaders must prepare a plan for how to monitor and code within them. They also must understand how developers will use them.

Foundationally, all pieces and parts of an enterprise technology stack should be planned, monitored, and measured. Containers and microservices are no exception. Businesses should monitor them to manage their use according to a planned strategy, so that best-practice standards (e.g., security protocols, sharing permissions, when to use and not use them) can be identified, documented, and shared. Containers and microservices also must be monitored to ensure both the quality and security of digital products and assets.

To do all of this, an organization needs robust application monitoring capabilities that provide full visibility into the containers and microservices, as well as insight into how they are being used and their influence on goals, such as better productivity or faster time-to-market.

Assessing Your Application Monitoring Capabilities

Some of the questions that enterprises should ask as they assess their application-monitoring capabilities are:

  • How can we ensure development and operations teams are working together to use containers and microservices in alignment with enterprise needs?

  • Will we build our own system to manage container assignment, clustering, etc.? Or should we use third-party vendors that will need to be monitored?

  • Will we be able to monitor code inside containers and the components that make up microservices with our current application performance management (APM) footprint?

  • Do we need more robust APM to effectively manage containers and microservices?

  • How do we determine the best solution for our needs?

To answer those questions and learn more about containers and microservices—and how to effectively use and manage them—read Getting Started With Containers and Microservices: A Mini Guide for Enterprise Leaders.

This mini eBook expands on the topics discussed in this blog and includes an 8-point plan for choosing an effective APM solution.

Go to the guide.

KubeCon + CloudNativeCon: A Diverse and Growing Community

The motto for this year’s KubeCon + CloudNativeCon, “Keep Cloud Native Weird,” proved to be as much a prediction as a slogan when temperatures plummeted last week and snow began to fall in Austin, Texas. Despite Austin uncharacteristically turning into a winter wonderland, attendance for this third annual event was truly impressive, boasting over 4,100 attendees. Contrast this with just a few hundred attendees at the first event back in 2015, and you can see how quickly the communities focused on containerization, dynamic orchestration, and microservices have grown. And with good reason.

These practices, all key tenets of cloud native, have seen a huge upshift in adoption over the past few years. Couple this with the growing utilization and support of open source software by even the largest companies, and it’s easy to see why the community around the projects hosted by the Cloud Native Computing Foundation (CNCF) has exploded over the past three years. And as the CNCF has grown, so too has the number of projects created and maintained by that community.

As Dan Kohn, executive director of the CNCF, said during his opening keynote, the number of projects expanded from just four in 2016 (Kubernetes, Prometheus, OpenTracing, and fluentd) to 14 projects in 2017.

In addition to nurturing technical innovation, the CNCF has been going the extra mile to keep its community open to all. This commitment was exemplified by the $250,000 raised to support 103 diversity scholarships for this year’s event. These scholarships were awarded to people from underrepresented and/or marginalized groups in the technology and/or open source communities. Working for a company which prides itself on diversity, I’m glad to see more groups making the effort to ensure that their communities are open and accepting of everyone.

Overall, KubeCon + CloudNativeCon was an incredible event, and one not to be missed. But if you did miss this year’s event, fear not: KubeCon + CloudNativeCon will be coming to Copenhagen, Shanghai, and Seattle in 2018!

Application Architecture With Azure Service Fabric

Is Azure the dominant cloud-based development infrastructure of the future? There’s some good evidence to support that claim. At last year’s Dell World conference in Austin, TX, Microsoft CEO Satya Nadella announced on stage that there are only two horses in the contest for control of the cloud. “It’s a Seattle race,” Nadella said. “Amazon clearly is the leader, but we are number two. We have a huge run-rate. All up, our cloud business last time we talked about it was over $8 billion of run-rate.”

Normally, you could dismiss that as typical marketing speak, but market analysts tend to agree with him. Gartner’s Magic Quadrant for Cloud Infrastructure as a Service Report found that there are only two leaders in the space. AWS is ahead, but Microsoft Azure’s offerings are growing faster. Gartner concluded, “Microsoft Azure, in addition to Amazon Web Services, is showing strong legs for longevity in the cloud marketplace, with other vendors falling further to the rear and confined to more of a vendor-specific or niche role.”

The Rundown on Azure Service Fabric and Microservices

Service Fabric is the new middleware layer from Microsoft designed to help companies scale, deploy, and manage microservices. Service Fabric supports both stateless and stateful microservices. For stateful microservices, Service Fabric co-locates your application code and its state, reducing latency, and automatically provides replication services in the background to improve the availability of your services.

Azure Service Fabric improves the deployment process for customers embracing DevOps with features like rolling upgrades and automatic rollback during deployments.

Empowering customers to deliver microservices using Azure Service Fabric is a key contributor to Microsoft’s revenue growth, with Azure revenue expanding 102 percent year-over-year.

Top enterprises betting on Azure services today include global chocolatier The Hershey Company, Amazon e-commerce competitor Jet.com, digital textbook builder Pearson, GE Healthcare, and broadcaster NBC Universal. Azure is an optimized multi-platform cloud solution that can power workloads running on Windows and Linux using .NET, Node.js, and a host of other runtimes, making it easy for customers deploying applications that scale using microservices to adopt, regardless of language or underlying OS.

Why Microsoft Chose Microservices Over Monolithic

When Microsoft started running cloud-scale services such as Bing and Cortana, it ran into several challenges with designing, developing, and deploying apps at cloud scale. These were services that were always on and in high demand, requiring frequent updates with zero downtime. The microservices architecture made much more sense than a traditional monolithic approach.

Microsoft’s Mark Fussell defined the problem with monolithic: “During the client-server era, we tended to focus on building tiered applications by using specific technologies in each tier. The term ‘monolithic application’ has emerged for these approaches. The interfaces tended to be between the tiers, and a more tightly coupled design was used between components within each tier. Developers designed and factored classes that were compiled into libraries and linked together into a few executables and DLLs.”

There were certainly benefits to that methodology at the time in terms of simplicity and faster calls between components using inter-process communication (IPC). Everybody is on one team testing a single piece of software, so it’s easier to coordinate tasks and collaborate without explaining what each person is working on at a given moment.

Azure and Microservices

The monolithic approach started to fail when the app ecosystem turbocharged the speed of user expectations. If you want to scale a monolithic app, you have to clone it onto multiple servers or virtual machines (or containers, but that’s another story). In short, there was no easy way to break out and scale components rapidly enough to satisfy the business needs of enterprise-level app customers. The entire development cycle was tightly interconnected by dependencies and divided by functional layers, such as web, business, and data. If you wanted to do a quick upgrade or fix, you had to wait until testing was finished on the earlier work. The monolithic approach and agility didn’t mix.

The microservices approach is to organize a development project based on independent business functionalities. Each can scale up or down at its own rate. Each service is its own unique instance that can be deployed, tested, and managed across all the virtual machines. This aligns more closely with the way that business actually works in the world of no latency and rapid traffic spikes.

In reality, many development teams start with the monolithic approach and then break it up into microservices based on which functional areas need to be changed, upgraded, or scaled. Today, DevOps teams that are responsible for microservices projects tend to be highly cost-effective but insular. APIs and communications channels to other microservices can suffer without strong leadership and foresight.

How Azure Service Fabric Helps

Azure Service Fabric is a distributed systems platform that assigns each microservice a unique name, which can be stateless or stateful. Service Fabric streamlines the management, packaging, and deploying of microservices, so DevOps teams and admins can just forget about the infrastructure complexities and get down to implementing workloads. Microsoft defined Azure Service Fabric as “the next-generation middleware platform for building and managing these enterprise-class, tier-1, cloud-scale applications.”

Azure Service Fabric is behind services like Azure SQL Database, Azure DocumentDB, Cortana, Microsoft Power BI, Microsoft Intune, Azure Event Hubs, Azure IoT Hub, and Skype for Business. You can create a wide variety of cloud native services that can immediately scale up across thousands of virtual machines. Service Fabric is flexible enough to run on Azure, on your own bare-metal on-premises servers, or on any third-party cloud. More importantly, especially if you’re an open-source shop, Service Fabric can also deploy services as processes or in containers.

Azure Container Service

Open-source developers can use Azure Container Service along with Docker for container orchestration and scale operations. You’re free to work with Mesos-based DC/OS, Kubernetes, or Docker Swarm and Compose, and Azure will optimize the configuration for .NET. The containers and your app configuration are fully portable. You can modify the size, the number of hosts, and which orchestrator tools you want to use, and then leave the rest to Azure Container Service.

All of the most popular development tools and frameworks are compatible because Azure Container Service exposes the standard API endpoints for each orchestration engine. That opens the door for all of the most common visualizers, monitoring platforms, continuous integration tools, and whatever the future brings. For .NET developers, or those who have worked with the Visual Studio IDE, the Azure interface presents a familiar user experience. Developers can use Azure and .NET Core, the cross-platform version of .NET, to create open-source projects running ASP.NET applications on Linux, Windows, or even Mac.

Taking on New Challenges With Service Fabric

Microsoft’s role as a hybrid cloud expert gives Azure an edge over virtual-only competitors like AWS and Google Cloud. Azure’s infrastructure comprises hundreds of thousands of servers, content distribution networks, edge computing nodes, and fiber optic networks. Azure is built and managed by a team of experts working around the clock to support services for millions of businesses all over the planet.

Developers experienced with microservices have found it valuable to architect around the concept of smart endpoints and dumb pipes. In this approach, the end goal of microservices applications is to function independently, decoupled but as cohesive as possible. Each should receive requests, act on its own domain logic, and then send off a response. Microservices can then be choreographed using RESTful protocols, as detailed by James Lewis and Martin Fowler in their microservices guide from 2014.

If you’re dealing with workloads that have unpredictable bursting, you want an infrastructure that’s reliable and secure while knowing that the data centers are environmentally sustainable. Azure lets you instantly generate a virtual machine with 32TB of storage driving more than 50,000 IOPS. Then, your team can tap into data centers with hundreds of thousands of CPU cores to solve seemingly impossible computational problems.

AppDynamics for Azure

In the end, the user evaluates the app as a singular experience. You need application monitoring that makes sure all the microservices are working together seamlessly and with no downtime. The AppDynamics App iQ platform is what you need to handle the flood of data coming through .NET and Azure applications. You can monitor all of the .NET performance data from inside Azure, as well as frameworks and runtimes like WebAPI, OWIN, MVC, and ASP.NET Core on full framework, deploying AppDynamics agents in Azure websites, worker roles, Service Fabric, and containers. In addition, you can monitor the performance of queues and storage for services like Azure SQL Database and Service Bus. This provides end-to-end visibility into your production services running in the cloud.

The asynchronous nature of microservices makes it nearly impossible to track down a root failure as it cascades through services unless you have solid monitoring in place. With AppDynamics, you’ll be able to visualize the service path from end to end for every single interaction, all the way from origination through the service calls. Otherwise, you’ll get lost in the complexity of microservices and lose all the benefits of building on the Azure infrastructure.

While we see many developers in the Microsoft space attracted to Azure, AppDynamics recognizes that Azure is a cross-platform solution supporting both Windows and Linux. In addition to the .NET runtimes, AppDynamics provides a rich set of monitoring capabilities for many of the modern technologies used in the Azure cloud, including Node.js, PHP, Python, and Java applications.

Learn more

Learn more about our .NET monitoring solution or take our free trial today.

Microservices Sprawl: How Not to be Overrun

The rise of containers and microservices has skyrocketed the rate at which new applications move into production environments today. While developers have been deploying containers to speed up development processes for some time, challenges remain in running microservices efficiently. Most existing IT monitoring tools don’t maintain visibility into the containers that make up microservices. As those container applications move into production, some IT operations teams are suddenly finding themselves flying blind. Unless IT operations teams upgrade to modern monitoring solutions that support DevOps practices, containers and microservices will wind up creating more issues to troubleshoot than they save in development speed.

So what steps should you take to ensure that your containers and microservices framework is performing up to speed?

Microservices enabled by containers are popular with developers because they enable easily isolated functions. That makes it simpler to either build a new application or update an older one. So when product teams are under more pressure to build more dynamic, faster, and more agile applications, microservices and containers can provide those new capabilities.

However, this marks only one part of IT complexity. Every tool and channel that plays a role in the development-to-deployment process requires monitoring and optimization. Some teams might elect to implement a different monitoring system for every function, but having to connect several tools into one cohesive process can become taxing. What’s needed is a single IT monitoring platform that provides all the context an IT organization needs to respond to problems in rapidly changing IT conditions. It’s necessary to be able to share metrics pertaining to both a specific container and the rest of the IT infrastructure environment. After all, while the performance attributes of a specific container might be interesting, that information only becomes truly useful when it can be compared against everything else that is happening across the IT environment.

Without that capability, an IT team will waste endless hours in war rooms trying to prove their innocence whenever a problem arises. Given the thousands of containers that might be operating at any given time, IT teams could easily wind up chasing their tails trying to replicate a problem that might only exist for a few intermittent minutes.

Some organizations are also adopting a DevOps mindset by allowing developers to own the entire lifecycle management of containerized applications built from microservices. Rather than handing off the task, developers can manage and own the maintenance of the code they have created. Practices like these are a sign that microservices and DevOps are working together to fundamentally change the way we optimize an application’s performance.

Learn more

Read more about how you can do more to maintain the microservices framework so it can reach its highest capability in our latest guide on the microservices sprawl. Download the eBook today.

Introducing Microservices iQ

As part of the AppDynamics Summer ‘16 release, we are announcing Microservices iQ, a new intelligent application performance engine that enables enterprises to efficiently manage microservice-based application environments and deliver performance that delights their customers while exceeding their scale, sophistication, and velocity expectations.

Microservice architecture is an increasingly popular style of enterprise application development in which, instead of large monolithic code bases, applications are composed of many fine-grained components or services developed and operated by smaller teams. These independent services may be used in conjunction with other services to support one or more business transactions.

 


Figure 1: Monolith vs. Microservices 

A microservices architecture significantly enhances agility and accelerates the velocity of continuous integration and delivery of enterprise applications. However, this approach can result in an exponentially larger number of microservices that are loosely coupled and communicate primarily via asynchronous mechanisms, creating increased complexity and a significant management challenge.

AppDynamics, now powered by Microservices iQ, automatically detects the service endpoints of a microservices architecture and allows them to be viewed in isolation from the distributed business transactions that use them. It can track microservice lifecycles and ensure data continuity despite the intermittent presence of the underlying application infrastructure. It can check the availability of microservices within your network as well as the availability of third-party services. Our new Contention Analysis provides the next level of performance diagnostics for microservices, ensuring that a particular service is not a bottleneck blocking business transactions.

Here are the key capabilities of the AppDynamics Microservices iQ:

Service Endpoints: AppDynamics automatically detects the service endpoints of your microservice architecture, enabling you to shine a spotlight on a microservice without worrying about the entire distributed business transaction that uses it.


Figure 2: Service Endpoint Dashboard

DevOps teams can monitor key performance indicators (KPIs) like calls per minute, average response time, and errors per minute for their microservices, not only in production but also in early development and throughout the entire lifecycle, using the Service Endpoint Dashboard (Figure 2).

The dashboard also lists snapshots with detailed diagnostics that enable DevOps teams to drill down and isolate the root cause of any performance issues affecting the microservices.

Thread Contention Analysis: Given the independent nature of components in microservice architectures, it is likely that a particular microservice is invoked as part of multiple business transactions, and it can become a performance bottleneck for those transactions if it blocks their execution. The new thread contention analyzer helps identify methods, within the scope of service endpoints, where threads are blocked, by identifying the block time, the blocking object, and the blocking line of code. As you can see in the screenshot (Figure 3) of the new thread contention analysis window for a service endpoint, the blocking threads, blocking object, block time, and the reference to the line of code are highlighted.


Figure 3: Thread Contention Analysis

This feature can significantly reduce the time required to isolate and resolve application performance issues with microservices and the business transactions that invoke them.

Elasticity Management: In highly dynamic environments, with microservices deployed on elastic infrastructure like containers or the cloud, the underlying infrastructure nodes may scale up or down rapidly, making it a nightmare to track these microservices and infrastructure nodes in the context of the associated business transactions.

AppDynamics maintains a logical identity and historical data for these transient nodes for a certain period, making it easy to track them in the context of a business transaction. In addition, it minimizes the system’s overhead by recycling the logical node identity after a certain period, ensuring that enterprise applications can scale to meet growing business needs.

Extending the AppDynamics App iQ Platform: Microservices iQ extends AppDynamics’ existing App iQ Platform, which enables enterprises to deliver performance that exceeds the scale, sophistication, and velocity expectations of today’s customers. The platform is the foundation of AppDynamics customers’ success and is powered by intelligent Application Performance Engines that work in concert to help enterprises deliver peak performance across any application, user engagement and business transaction.

The new Microservices iQ capabilities enhance the core AppDynamics platform, which is already designed to provide end-to-end visibility into the agile application infrastructure where microservices are deployed. For example, AppDynamics can automatically discover a large number of microservices, dynamically baseline their performance, collect deep diagnostics and alert when performance deviates from the normal baseline. Manually instrumenting such a large number of microservices and setting static thresholds for alerting would be difficult, if not impossible.

To learn more about AppDynamics Microservices iQ, visit http://www.appdynamics.com/microservices.

4 Challenges You Need to Address with Microservices Adoption

In the last few weeks, we’ve introduced the concept of microservices, its role as a business initiative and how to migrate your organization toward a microservices model. Transitioning to microservices creates significant challenges for organizations. This week, I’ll delve into some of the obstacles you might face and the ultimate benefits of your efforts.

Microservices Architecture

Microservices architecture is much more complex than legacy systems. In turn, the environment becomes more complicated because teams have to manage and support many moving parts. Some of the things you must be concerned about include:

  • As you add more microservices, you have to be sure they can scale together. More granularity means more moving parts, which increases complexity.
  • When more services interact, you increase possible failure points. Smart developers stay one step ahead and plan for failure.
  • Transitioning functions of a monolithic app to microservices creates many small components that constantly communicate. Tracing performance problems across tiers for a single business transaction can be difficult. This can be handled by correlating calls with a variety of methods, including custom headers, tokens or IDs.
  • Traditional logging is ineffective because microservices are stateless, distributed and independent: you would produce too many logs to easily locate a problem. Logging must be able to correlate events across several platforms.
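A common way to implement the call-correlation idea above is to mint an ID at the first service and forward it in a custom header, so every log line across services can be stitched together. Here is a minimal, illustrative Python sketch; the header name and service names are hypothetical:

```python
import logging
import uuid

CORRELATION_HEADER = "X-Correlation-ID"  # hypothetical header name

def ensure_correlation_id(headers):
    """Reuse the caller's correlation ID, or mint one at the edge."""
    if CORRELATION_HEADER not in headers:
        headers[CORRELATION_HEADER] = str(uuid.uuid4())
    return headers[CORRELATION_HEADER]

def handle_request(headers, service_name):
    """Each service logs with the same ID and forwards the header downstream."""
    cid = ensure_correlation_id(headers)
    logging.info("[%s] cid=%s handling request", service_name, cid)
    return dict(headers)  # headers to pass to the next service in the chain

# The ID minted at the first hop survives every downstream hop:
edge_headers = handle_request({}, "api-gateway")
downstream_headers = handle_request(edge_headers, "billing-service")
assert edge_headers[CORRELATION_HEADER] == downstream_headers[CORRELATION_HEADER]
```

With every service logging the same ID, a log search on one value reconstructs the whole business transaction, which is exactly what traditional per-host logging fails to do.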

Other considerations include:

  1. Operations and Infrastructure: The development group has to work more closely with operations than ever before. Otherwise, things will spin out of control due to the multitude of operations going on at once.
  2. Support: It is significantly harder to support and maintain a microservices setup than a monolithic app. Each one may be made from a wide variety of frameworks and languages. The infinite complexities of support influence decisions on adding services. If a team member wants to create a new service in an esoteric language, it impacts the whole team because they have to make sure it can work with the existing setup.
  3. Monitoring: As you add new services, maintaining and configuring monitoring for them becomes a challenge. You will have to lean on automation to make sure monitoring can keep up with changes in the scale of services.
  4. Security of Application: The proliferation of services in this architecture creates more soft targets for hackers, crackers and criminals. With a variety of operating systems, frameworks and languages to keep track of, the security group has their hands full making sure the system is not vulnerable.
  5. Requests: One way to send data between services is using request headers. Request headers can contain details like authentication that ultimately reduce the number of requests you need to make. However, when this is happening across a myriad of services, it can increase the need for coordination with members of other teams.
  6. Caching: Caching helps reduce the number of requests you’ll need to make. Caching requests that involve a multitude of services can grow complicated quickly, necessitating communication from different services and their development teams.
  7. Fault Tolerance: The watchword with microservices is “interdependence.” Services have to be able to withstand outright failures and inexplicable timeouts. Failures can multiply quickly, creating a cascading effect through some services, potentially spiking services needlessly. Fault tolerance in this environment is much more complicated than a monolithic system.
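The fault-tolerance point above is often addressed with patterns such as timeouts, retries and circuit breakers. As a rough illustration, here is a minimal (and deliberately simplified, not production-ready) circuit breaker in Python that fails fast after repeated failures and probes again after a cool-down:

```python
import time

class CircuitBreaker:
    """Illustrative breaker: opens after N consecutive failures, then
    fails fast until a cool-down elapses and one probe call is allowed."""
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one probe call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0  # any success closes the circuit again
        return result

breaker = CircuitBreaker(max_failures=2, reset_after=60.0)
assert breaker.call(lambda: "ok") == "ok"
```

Failing fast while a dependency is down is what stops one slow service from cascading its latency through every business transaction that touches it.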

Spotlight on DevOps

In an old-school development environment, there was little integration among the separate functions within the IT department. DevOps is the evolution of collaboration, where the operations, development and quality assurance teams collaborate and communicate throughout the software development process. It’s not a separate role held by a single person or group of individuals; rather, it describes the structure needed to help operations and development work closely together. With a microservices architecture, developers are responsible for creating a system to deliver the final product successfully.

Along with the continuing migration of large and small organizations to microservices, developers must also evolve. Because it is so easy to deploy microservices, developers are getting involved in code deployments and production monitoring. This contrasts with the traditional model, in which developers would write code and hand it off to another team (DevOps) to deploy and maintain. Today, developers and DevOps are merging into smaller application teams responsible for three main components: building, deployment and monitoring.

Microservices are changing how teams are structured, allowing organizations to create teams centered on specific services and giving them autonomy and responsibility in a constrained area. This approach lets the company rapidly adjust in response to fluctuating business demand, without interrupting core activities. It also makes it easier to onboard new staff quickly.

Developers may have to handle some additional challenges including:

  • A shortage of developers with JavaScript experience who know how to put together microservices architectures.
  • Understanding and implementing services for the Internet of Things.
  • The ability to help companies introduce technology into business planning and strategy.
  • Teaching business leaders how open APIs can augment their current business lines and open new opportunities in the marketplace.
  • How to simplify the development stack, choose the right technology and push back when vendors offer unproductive middleware.
  • Learning from industry leaders like Netflix and deciding which implementations of microservices will best serve their organizations.
  • Understanding that many vendors have still not created a stable microservices platform.
  • Handling the pressure of managing and operating possibly hundreds of individual microservices at the same time.
  • Managing an increasingly complex network of teams, including operations, architects, coders, quality assurance and integrators, that still may not completely understand the microservices approach.

Beginning the Transition

Once you launch the transition process, you’ll notice that new challenges emerge that you did not expect, including:

  • How much of the workload should be moved to microservices?
  • Should you allow code to be migrated to different services?
  • How do you decide what the boundaries of each microservice will be while the operation is running?
  • How do you monitor the performance of microservices?

 

Want to read more on how enterprise teams scale with microservices? Read the full eBook on “How to Build (and Scale) with Microservices” here!

How to Migrate to Microservices

Today, modern enterprise is rushing head first into an always-on, digital-centric, mobile world. Organizations that fail to modify their approach to technology will be left by the wayside as others incorporate highly flexible and scalable architectures that adapt quickly and efficiently to the demands of the modern marketplace.

The rapid rise in popularity of microservices was driven by these market influences. In just a few short years, companies have implemented various configurations of technologies to offer the best user experience. 

Challenges with Migrating 

One of the primary challenges in migrating to microservices is that monolithic legacy systems cannot be changed overnight. DevOps and IT managers must decide where and when they can incorporate microservices into their existing applications. In “The Four-Tier Engagement Platform,” a report for Forrester Research, Ted Schadler, Michael Facemire, and John McCarthy argue that it is time to move the technology industry to a four-tier architecture.

In an article for InfoWorld, Patrick Nommensen summarized the four-tier architecture. As he explains, dramatic changes in computing, including the incredible market penetration of mobile devices, mean developers must take an entirely new approach to thinking about application development. The four-tier approach is broken down into the following layers:

  • Client Tier: The delivery of customer experience through mobile clients and the Internet of Things.

  • Delivery Tier: Optimizes user experience by device while personalizing content by monitoring user actions.

  • Aggregation Tier: Aggregates data from the services tier while performing data protocol translation.

  • Services Tier: The portfolio of external services such as Twilio and Box, as well as existing data, services, and record systems already in-house.

Perhaps the biggest difference with this new approach is the separation of the client tier. With this model, the layers underneath are constantly changing based on real-time interaction with users. 

A Practical Approach to Migration

So what do you need to move to microservices? First, you must decide on a microservices architecture: figure out how the services will interact before trying to optimize their implementation. Next, while microservices architectures provide significant speed gains, you have to work continually to preserve them. This means being flexible in the tools you use to deploy the architecture.

Owen Garret shares with InfoWorld a practical, three-step approach to handle a migration to microservices: 

  1. Componentize: Choose a component from your existing applications, and create a microservices implementation on a pilot basis.

  2. Collaborate: Share the techniques and lessons learned from the pilot in Stage One with all stakeholders, programmers, and developers on the team. This gets them on board with new processes and initiatives.

  3. Connect: Complete the application and connect to users in a real-world scenario.

Data Coupling

A microservices architecture is loosely coupled, with data often communicated through APIs. It is not unusual for one microservice to have fewer than a couple of hundred lines of code and manage a single task. Loose coupling relies on three things:

  1. Limited scope and focused intelligence built-in.

  2. Intelligence set apart from the messaging function.

  3. Tolerance for a wide variety of modifications of microservices with similar functions — changes are not forced or coordinated.

The API expresses a specification that acts as a contract, indicating what service is provided and how other programs are supposed to use it. Using APIs to decouple services creates a tremendous amount of freedom and flexibility.
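To make the contract idea concrete, here is a hypothetical Python sketch: provider and consumer agree only on a small field-and-type contract, so either side can change its internals freely as long as the payload still validates. The contract and field names are invented for illustration:

```python
# Hypothetical contract for an "orders" microservice: field name -> expected type.
ORDER_CONTRACT = {"order_id": str, "customer_id": str, "amount_cents": int}

def validate(payload, contract):
    """Return a list of violations; an empty list means the payload honors the contract."""
    errors = []
    for field, expected in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

# A conforming payload passes; a partial one is rejected with specific errors.
assert validate({"order_id": "A1", "customer_id": "C9", "amount_cents": 1250},
                ORDER_CONTRACT) == []
assert validate({"order_id": "A1"}, ORDER_CONTRACT) == [
    "missing field: customer_id", "missing field: amount_cents"]
```

Real deployments typically express the same idea with JSON Schema, OpenAPI or protocol buffers, but the principle is identical: the contract, not shared code, is the only coupling between services.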

New Service Platforms

Platforms for microservices are evolving rapidly. New platforms are emerging while more established platforms are modifying their approach. Some examples include:

  • Microsoft’s Azure BizTalk Microservices lets clients using Azure build microservices applications in their choice of cloud. It is part of a greater effort to move Azure to a model of small components.

  • Gilliam is a Platform as a Service (PaaS) custom made for creating, deploying and scaling microservices. It creates a Docker image of every code repository onsite.

  • LSQ is a PaaS with pre-made templates, documentation editor, and NPM package manager. It includes a development environment, assembly and testing area, and cloud deployment.

  • Pivotal is a native cloud platform that focuses on developing microservices for companies like Ford, Allstate, Mercedes Benz, GE, Humana, and Comcast.

Microservices to Help Legacy Apps

Consider a legacy system coded in C and running on multiple mainframes. It has been running for years without any major hiccups and reliably delivers the core competency of the business. Should you attempt to rewrite the code to accommodate new features? A gradual approach is recommended, because new microservices can be tested quickly without compromising the reliability of the current monolithic structure. You can easily use microservices to create new features through the legacy API. Another approach is to modularize the monolithic architecture so that it still shares code and deployments, but individual modules can be moved into microservices independently as needed.
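The gradual approach described above is often realized with a routing facade: requests for features already carved out go to the new microservice, while everything else falls through to the monolith. A hypothetical Python sketch, with invented handler and route names:

```python
def legacy_app(path):
    """Stand-in for the existing monolith, which still handles most traffic."""
    return f"legacy handled {path}"

def recommendations_service(path):
    """Stand-in for a newly extracted microservice."""
    return f"microservice handled {path}"

# Routes carved out so far; everything else still falls through to the monolith.
MIGRATED_PREFIXES = {"/recommendations": recommendations_service}

def route(path):
    for prefix, handler in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return handler(path)
    return legacy_app(path)

assert route("/recommendations/42").startswith("microservice")
assert route("/accounts/7").startswith("legacy")
```

Each time another feature is extracted, one more prefix is added to the routing table, so the monolith shrinks incrementally with no big-bang cutover.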

People and Processes

Deploying microservices involves more than incorporating new technology. You have to be able to adopt new processes and team dynamics to make the transition effective over time. Oftentimes managers break applications down by technology, assigning responsibility to different teams. With microservices, applications are separated into services that are grouped by business capability. All software, such as user experience, external connections, and storage, is implemented within each business domain. Team members handle full application development, from user interfaces down to databases.

This change in structure affects the people within it, too. Developers used to monolithic systems may have a difficult time switching from a world of one language to a multi-language land of new technologies. Microservices free them up to be more autonomous and responsible for the “big picture.”

However, operating with this newfound freedom can be overwhelming for programmers with years of experience in the old ways of doing things. You must be constantly aware of your team’s ability to change; they may need time to adjust to new guidelines and procedures. Clear communication is key: detail their responsibilities in this new style of working and explain why they are important. Unless you have buy-in from your team members at the start, making adjustments later may be difficult at best and dead on arrival at worst.

Entering the New Era of Computing

This new era of computing is based on ultra-fast data processing. Events are monitored, analyzed and processed as they happen. We can make timely decisions based on this continually updated flow of data, resulting in better service for clients, improved operations and instant monitoring of business performance against predetermined targets.

Microservices are not a panacea. There are potential drawbacks such as code duplication, mismatch of interfaces, operations overhead and the challenge of continuous testing of multiple systems. However, the benefits of creating loosely coupled components by independent teams using a variety of languages and tools far outweigh the disadvantages. In our current computing environment, speed and flexibility are the keys to success — microservices deliver both.

 

Want to read more on the process of migrating to microservices? Read the full eBook on “How to Build (and Scale) with Microservices” here! 

How Microservices are Transforming Python Development

The goal of any tech business worth its salt is to provide the best product or service to its clients in the most efficient and cost-effective way possible. This is just as true in the development of software products as it is in other product design services.

Microservices, an app architecture style that leans mostly on independent, self-contained programs, are quickly becoming the new norm. With this change comes a declining reliance on older SOA-era standards like CORBA, a push toward more sustainable API approaches and fewer monolithic development and deployment models.

So why are microservices suddenly at the forefront of the software architecture conversation? They are changing how Python-based developers are getting things done in a way that’s far more efficient than before, and in more ways than one.

The Differences Between Microservices and SOAs

Diving deeper into the differences between microservices and SOAs, remember that, at their core, microservices are essentially an offshoot of SOA, although they are built and deployed independently of one another.

SOAs also follow four major tenets during the development and deployment phases:

  • Their boundaries are inherently explicit.

  • They provide autonomous services.

  • Those services share both schema and contract but not class.

  • The compatibility of those services is policy-based.

Once you’ve established these distinctions, you can make a far more accurate comparison between microservices and SOAs: an SOA is an architectural pattern whose components provide services to other components, within or outside the same application, whereas in microservices those components are deployed as services independent of the application in question.

Although microservices are not a novel or inherently “new” architectural style (much of their lineage derives from the founding design principles of the Unix philosophy), wider adoption of microservices still promises gains in productivity and innovation.

The Evolution of Microservices

Overall, the timeline for the evolution of Python-based apps, from monolithic to microservices, has been a relatively short one. On top of that, much of the evolution was born out of a necessity for forward progression and increased ease among developers.

It is widely accepted that microservices have more substance because they’ve done away with the bulky XML-based schemas that large corporations are known for, in favor of slimmer applications that carry far less bloat. Ultimately, microservices have become more common over time because they:

  • Can deploy independently of the core application

  • Can function properly while remaining separate from dependent responsibilities

  • Possess strong backward compatibility, making them less prone to breakage

Development team advantages include:

  • Allow for the decentralization of data management so teams and subteams can be responsible for maintenance on a far more granular level

  • Enable the use of infrastructure automation, from testing to deployment, without much need for human supervision

  • Offer faster ramp-up time for new team members, who can learn processes faster by focusing on smaller chunks of the application

There are still widely accepted, monolithic-first approaches within the development phase that development teams can break down into SOAs and, further still, into microservices. Some of the more successful applications still employ monolith-first patterns but in conjunction with the use of microservices and even nano-services.

A Word (or Two) on Nano Services

There’s plenty of support (and animosity) toward just how deep developers should go down the rabbit hole concerning the development and use of nano-services. Just as you’d think, nano-services are simply components that designers have drilled down to an even more granular level than their microservices predecessors.

For some, it is a virtual splitting of hairs while, for others, it is yet another landscape that we have yet to understand fully and, therefore, properly utilize. Both sides can agree, however, that the status quo will likely share neither sentiment anytime soon.

Advantages of Microservices

While developers and the enterprise-level businesses they work for tend to flock toward the single-codebase approach of a monolithic architecture for its benefits, there are also pros that come with building a software product incrementally.

Microservices come with a unique set of advantages, some briefly mentioned earlier, that allow developers to create building blocks that they can then retrofit into an existing codebase as needed. Other significant advantages include:

  • There’s the ability to change the implementation of a public API, without breaking it, even after you define it and others start using it.

  • The services are so small that they make maintenance from one developer to another easier to facilitate and understand.

  • There are no development language limitations, so you can use what’s best for you and your team.

  • It is easier to upgrade systems one microservice at a time than it is to upgrade a monolithic system.

  • Cross-implementation compatibility allows you to prototype in one language and re-implement in another.

  • Regardless of the size of your operation, if most of your product builds require more detailed components and adaptive development, then microservices are a better approach.

Current Microservices Implementations

Arguably, the go-to implementation for most microservices today, Python-based or otherwise, is Docker. Aside from increased agility and control, many of today’s developers are embracing the ability to work remotely, so naturally any implementation that allows for more portability than the competition is greatly appreciated by the developer community.

Other popular implementations include but are not limited to:

  • Flask

  • MicroService4Net

  • Microsoft Service Fabric

  • NetKernel

  • Nirmata

  • Spring Cloud

The trend toward component development and product compartmentalization will continue as the need for customizable applications and modular design becomes more prevalent.

Why Python?

Most Python developers who implement microservices during development likely use the RESTful approach to creating an API, which is an all-inclusive way of utilizing available Web protocols and software to remotely search and manipulate objects.

Formalized by Dr. Roy Fielding in his 2000 doctoral dissertation, the RESTful approach has a basic premise that follows three distinct canons:

  • You are required to use any provided links or other resources, making your application’s API browseable.

  • You are expected to recognize the uniform interface of HTTP.

  • You are expected to use each of the verbs (e.g., GET, POST, PUT, DELETE) without violating their semantics.
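As a rough illustration of honoring each verb’s semantics, here is a hypothetical, framework-free Python sketch of a single resource handler: GET is safe, PUT and DELETE are idempotent, and POST creates. The function and in-memory store are invented for illustration, not a real framework API:

```python
# In-memory "users" resource keyed by ID.
_store = {}

def handle(method, resource_id=None, body=None):
    """Dispatch on the HTTP verb, respecting each verb's semantics."""
    if method == "GET":           # safe: no side effects
        return _store.get(resource_id)
    if method == "POST":          # create a new resource, return its ID
        new_id = str(len(_store) + 1)
        _store[new_id] = body
        return new_id
    if method == "PUT":           # idempotent full replace
        _store[resource_id] = body
        return body
    if method == "DELETE":        # idempotent removal
        return _store.pop(resource_id, None)
    raise ValueError(f"unsupported verb: {method}")

uid = handle("POST", body={"name": "Ada"})
assert handle("GET", uid) == {"name": "Ada"}
handle("PUT", uid, {"name": "Grace"})
assert handle("GET", uid) == {"name": "Grace"}
handle("DELETE", uid)
assert handle("GET", uid) is None
```

A framework like Flask maps URL routes and HTTP methods onto handlers in much the same way; the point is that each verb keeps its contract, so clients and caches can rely on it.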

Python’s Development Advantages

As mentioned before, you can implement and re-implement microservices across virtually every language, but with Python, there are several advantages that make working within it straightforward and convenient. They include:

  • So long as the API is formatted correctly, prototyping is easier and quicker than in other languages.

  • Instead of relying on full-fledged frameworks like Django, you can use lighter-weight options that are just as capable, like Flask and others.

  • Looking toward the future, it is a fantastic opportunity to start coding in Python 3, if you do not already.

  • Interoperability with services written in legacy languages, like PHP and ASP, allows you to build Web service front ends to a host of microservices.

Furthermore, microservices help to optimize the performance of Python-developed applications two-fold:

  • They become easier to monitor, because apps are now broken up into components.

  • Performance issues become easier to identify, allowing for more granular diagnoses of flawed, bottlenecked or buggy services.
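As an illustration of the per-component monitoring point above, a simple decorator can record the latency of every call to a service function; a real APM agent does far more, but the idea is similar. All names here are hypothetical:

```python
import time
from collections import defaultdict
from functools import wraps

LATENCIES = defaultdict(list)  # endpoint name -> list of call durations (seconds)

def monitored(name):
    """Record wall-clock latency of each call under the given endpoint name."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                LATENCIES[name].append(time.perf_counter() - start)
        return wrapper
    return decorator

@monitored("cart-service")
def add_to_cart(item):
    return f"added {item}"

add_to_cart("book")
add_to_cart("pen")
assert len(LATENCIES["cart-service"]) == 2
```

Because each component reports under its own name, a slow or buggy service shows up immediately in its latency series instead of hiding inside a monolithic aggregate.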

Moreover, for a design pattern that’s used by the likes of Google, Amazon, Microsoft, Netflix, Uber and more, there are no signs of this architecture going anywhere any time soon.

The Future of Microservices

Although it is easy to agree that the future of software architecture development is moving in the direction of increased modularity and microservices, that does not mean it will not come with its fair share of complications. This is doubly true for larger companies that have created much of their codebase with a monolithic approach in mind.

There are plenty of reasons why an industry-wide shift to microservices might fail, but there are a few challenges that developers and software architects should be mindful of in the coming years:

  • Complete software compatibility: With componentization, much of software’s success depends on its compatibility with its respective components and vice versa. Moving code from service to service becomes difficult, and a development team would need to orchestrate complete coordination.

  • Clean composition: If microservice components do not compose cleanly, you are simply overcomplicating the inner workings of the connections between them. This shifts unnecessary stress and complexity to an area that’s harder to control.

  • Evolutionary design considerations: When you decide to work in an environment where you can break down components, you are faced with the challenge of figuring out how and where to break them down. It calls for making the risky decision of knowing what you can scrap or save from version 1.0 to version 5.1 and beyond far ahead of time.

  • Required skill sets: Not all teams are created equal. Just because one team has the skills required to adopt new techniques does not mean yours will. Forcing an ill-equipped team into uncharted territory could prove disastrous for your entire infrastructure.

Although the distinctions between SOAs and microservices might seem minor, each one still has its intended purposes, whether you are using Python or any other development language.

The fact remains that microservices will only become more of a necessity as development projects require more specific (and complex) functions and harness the interconnectedness that comes with coding for mutually independent services; however, these changes can, and likely will, come with their own collection of improvements and subsequent headaches.

Using Microservices as a Business Initiative

For microservices to work in an organization, there must be a business initiative attached to them. Questions arise among IT professionals as to whether microservices are suited only for giant Web applications like Google and Facebook. However, scale is only one of the business benefits of microservices.

In today’s computing environment, innovation and speed are critical. The movement toward microservices is generated by the need to create new software that can enhance and improve a monolithic system but is separate from it. This decoupling from the legacy system provides the freedom to experiment with new approaches and rapidly iterate changes and modifications.

Traditional systems cannot move at that speed, and that may leave companies at a disadvantage. At the AppSphere ’15 conference, Boris Scholl from Microsoft shared a situation his team once had with a monolithic system: it had become so complex that when they added new code, the system would stop working, and it took engineers two days to figure out why. That is simply too slow.

Companies are trying to decide where microservices fit in with their traditional systems. Developers used to worry simply about coding, but now with the modular approach to technology, they need to widen their view of all the technologies involved and how they work together. They now share responsibility and accountability for the project as a whole — the micro view of their direct assignment, say coding the UX; and the macro view of the final product, a home banking app for example.

Code must be monitored the minute it is deployed. The feedback loop is instantaneous. DevOps may be monitoring 50 different microservices. The data is available right away, but that means IT teams must also continuously monitor, tweak and adjust on-the-fly. It is a challenge.

The Business Case for Microservices

Allan Naim, Product Manager of Container Engine and Kubernetes at Google, told the audience during a panel discussion at AppSphere ’15 that it is not easy for IT organizations to incorporate microservices, so they must have an associated business initiative. Business objectives often originate with the CEO and board of directors. From there, the CMO or the CSO begins to implement them, which pushes the IT staff to start working with microservices. Naim said he sees a time in the not-too-distant future when every organization, no matter the industry or market segment, will ultimately become a software company, because customer data is becoming as valuable as the product or service itself.

To leverage that asset, organizations must act quickly, changing their offerings based on a constantly evolving landscape. Legacy apps have a hard time adjusting to the new demands of the market such as mobility and the Internet of Things. Competition, especially in the form of aggressive startups that look to disrupt industries, is forcing organizations to integrate microservices architecture with their legacy systems, whether the data is in a relational database or not.

From Highly Specialized to Highly Adaptable

It comes down to the need to provide the highest-quality software to large amounts of customers as quickly as possible. Microservices are not only changing the way companies write code; they are changing the companies themselves. For example, in a monolithic system, the roles of each team member tended to be highly specialized.

In the world of microservices, that approach is highly devalued. Instead, it is better for each team member to be free to operate on different parts of the application without interruption. Rather than hand off development to the next stage, the application is constantly being monitored and modified as it is being developed.

Homegrown Analytics and Monitoring Tools

Another development resulting from these market pressures is that IT teams have started building their own tools. Netflix created its own monitoring system, including several custom, non-unified tools, a very different approach from that taken by companies like Facebook and Google.

For example, Netflix built its analytics software to process huge volumes of data. How much volume are we talking about? Consider this eye-opening statistic: networking provider Sandvine reports that just over 30 percent of prime-time Web traffic comes from Netflix customers streaming movies.

The development of microservices is changing more than software code itself. It is making an enormous impact on how organizations think through their business processes, what products they bring to market, and how they will support those products with customers.

Because of the explosion of mobile devices and the ever-shifting wants and needs of consumers, IT professionals have to adapt just as quickly. Microservices architecture is the vehicle through which they are creating rapid change. It is changing not only the technology but also how organizations evaluate business opportunities. On another level, it is altering how talent is organized, encouraging a culture of innovation, expanding the scope of individual responsibility, and empowering smart people to take chances.

Agility and Speed are Paramount

Large firms such as Condé Nast and Gilt have always handled high volumes of customer data. But they see the future and are adapting their legacy systems to use microservices architecture. They want to eliminate dependencies and be able to test and deploy code changes quickly. Similar changes across enterprises are helping them become more adaptable to customer needs and pushing them to make greater use of the cloud to operate with more agility and speed.

Microservices architecture shares a mindset with other fast development methodologies such as agile software development. Fast-moving Web properties like Netflix are constantly looking for greater simplicity and the ability to make changes rapidly without going through numerous committees. Each piece of code is small, and every software engineer makes production changes on an ongoing basis.

Sea Change in Software Development

That is why microservices architecture is a natural fit for Web languages such as Node.js that work well in small components. You want to be able to move rapidly and integrate changes to applications quickly. Because microservices are self-contained, it is easy to change a code base and to replace or remove services. Instead of rewriting an entire module and trying to propagate the change across a massive legacy code base, you simply add a microservice. Any other service that wants to tap into the functionality of the new service can do so without delay.
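The replace-or-remove property described above can be sketched in TypeScript (a natural companion to the Node.js ecosystem mentioned here). This is a minimal in-process illustration, not a real networked service: the `PricingService` contract and both implementations are hypothetical names chosen only to show how a self-contained service can be swapped out without touching any of its callers.

```typescript
// A stable contract that callers depend on; implementations can change freely.
interface PricingService {
  priceFor(sku: string): number;
}

// The original self-contained service.
class LegacyPricingService implements PricingService {
  priceFor(sku: string): number {
    return sku.length * 10; // placeholder pricing rule
  }
}

// A replacement service, added later. Callers need no changes,
// because they only know the PricingService contract.
class DiscountPricingService implements PricingService {
  constructor(private base: PricingService, private discount: number) {}
  priceFor(sku: string): number {
    return this.base.priceFor(sku) * (1 - this.discount);
  }
}

// A caller "taps into" whichever implementation is wired in.
function checkoutTotal(pricing: PricingService, skus: string[]): number {
  return skus.reduce((total, sku) => total + pricing.priceFor(sku), 0);
}
```

In a real deployment the contract would be an HTTP or message API rather than a TypeScript interface, but the principle is the same: as long as the contract holds, a service can be replaced or removed without rippling through the rest of the code base.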

This is a sea change in how traditional software development takes place. The speed at which code changes in mobile apps and modern websites is way too fast for the legacy software development system. Constantly evolving apps require a new way of thinking.

Changes in Organizations

Back in the 1980s, the role of IT departments began to change with the debut of the personal computer. Every year, PCs became more powerful, and technology staff not only supported individual business functions, but they also had to maintain complete processes. Technology and data were moving closer to the center of the business.

By the 1990s, the IT department had become a critical system in every major company. If the computer systems were down for any length of time, it created bottlenecks for every department of the company.

Data-Driven Design

With microservices, the data belonging to each service can be reached only through its API; the data inside the service is private. This keeps services loosely coupled so they can operate and evolve independently. It also creates two challenges: maintaining consistency across several services, and implementing queries that pull information from multiple services. With data-driven design, you can experiment and create transactions that span multiple services consistently.
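The private-data rule and the cross-service query challenge can be sketched as follows. This is an assumed, in-process stand-in for what would really be network calls: the service names, fields, and the `orderSummary` composer are hypothetical, illustrating the common API-composition approach in which a query spanning services is answered by calling each service's public API rather than reaching into its data store.

```typescript
// Each "service" owns its data privately; outsiders can only use its API.
class OrderService {
  private orders = new Map<number, { customerId: number; total: number }>([
    [1, { customerId: 42, total: 99.5 }],
  ]);
  getOrder(id: number) {
    return this.orders.get(id); // the only way in
  }
}

class CustomerService {
  private customers = new Map<number, { name: string }>([
    [42, { name: "Ada" }],
  ]);
  getCustomer(id: number) {
    return this.customers.get(id);
  }
}

// API composition: a query spanning both services calls each public API,
// never touching another service's private data directly.
function orderSummary(
  orders: OrderService,
  customers: CustomerService,
  orderId: number
): string | undefined {
  const order = orders.getOrder(orderId);
  if (!order) return undefined;
  const customer = customers.getCustomer(order.customerId);
  return `${customer?.name ?? "unknown"} owes ${order.total}`;
}
```

The cost of this loose coupling is visible even in the sketch: the composer must tolerate missing data from either service, which is exactly the consistency challenge the paragraph above describes.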

Unfortunately, many companies still maintain the old software engineering model. However, today they are under pressure to shorten the time to bring new Web and mobile applications to consumers. Speed has become the “coin of the realm.”

Changing Culture in Traditional IT Departments

The rise of microservices is changing a culture in IT that is deeply ingrained. There has always been a division between software development and operations. Now software development is integrated much more tightly with DevOps. Over many years, IT departments had established standards on which technologies they would run. Since these technologies represented serious investments in time and capital, they budgeted carefully for capacity, upgrades and security.

In the brave new world of microservices, department leaders must make significant changes to their organizations so that developers play a bigger role in monitoring software throughout its lifecycle, from development through to production. Interestingly, something similar happened decades ago, when data centers were so complex that only a select few IT engineers could operate all of their disparate functions. In many cases, the staff maintaining applications were the same people who built them.

Breaking Down Barriers

In effect, microservices are breaking down the barriers between the development of software and its operation. That means any firm considering implementing microservices on a substantial scale needs to evaluate whether it is ready to operate in this new way.

That does not mean legacy systems are being discarded for the new kid in town. In many cases, the traditional system is doing an excellent job for the organization, so changing it without a business case would be folly.

However, the larger trends of cloud computing, mobile device adoption, and low-cost bandwidth are forever changing the way consumers buy and interact with software applications. The pace of change is dizzying, and the need for speed in application development is greater than ever before.