KubeCon + CloudNativeCon: A Diverse and Growing Community

The motto for this year’s KubeCon + CloudNativeCon, “Keep Cloud Native Weird,” proved to be as much a prediction as a slogan when temperatures plummeted last week and snow began to fall in Austin, Texas. Despite Austin uncharacteristically turning into a winter wonderland, the attendance for this third annual event was truly impressive, boasting over 4,100 attendees. Contrast this with just a few hundred attendees at its first rendition back in 2015, and you can see how quickly the communities focused on containerization, dynamic orchestration, and microservices have grown. And with good reason.

These practices, all key tenets of cloud native, have seen a huge upshift in adoption over the past few years. Couple this with the growing utilization and support for open source software by even the largest companies, and it’s easy to see why the community around the projects hosted by the Cloud Native Computing Foundation (CNCF) has exploded over the past three years. And as the CNCF has grown, so too has the number of projects created and maintained by that community.

As Dan Kohn, executive director of the CNCF, said during his opening keynote, the number of projects expanded from just four in 2016 (Kubernetes, Prometheus, OpenTracing, and Fluentd) to 14 projects in 2017.

In addition to nurturing technical innovation, the CNCF has been going the extra mile to keep its community open to all. This commitment was exemplified by the $250,000 raised to support 103 diversity scholarships for this year’s event. These scholarships were awarded to people from underrepresented and/or marginalized groups in the technology and/or open source communities. Working for a company which prides itself on diversity, I’m glad to see more groups making the effort to ensure that their communities are open and accepting of everyone.

Overall KubeCon + CloudNativeCon was an incredible event, and one not to be missed. But if you did miss this year’s event, fear not, KubeCon + CloudNativeCon will be coming to Copenhagen, Shanghai, and Seattle in 2018!

Application Architecture With Azure Service Fabric

Is Azure the dominant cloud-based development infrastructure of the future? There’s some good evidence to support that claim. At last year’s Dell World conference in Austin, TX, Microsoft CEO Satya Nadella announced on stage that there are only two horses in the contest for control of the cloud. “It’s a Seattle race,” Nadella said. “Amazon clearly is the leader, but we are number two. We have a huge run-rate. All up, our cloud business last time we talked about it was over $8 billion of run-rate.”

Normally, you could dismiss that as typical marketing speak, but market analysts tend to agree with him. Gartner’s Magic Quadrant for Cloud Infrastructure as a Service Report found that there are only two leaders in the space. AWS is ahead, but Microsoft Azure’s offerings are growing faster. Gartner concluded, “Microsoft Azure, in addition to Amazon Web Services, is showing strong legs for longevity in the cloud marketplace, with other vendors falling further to the rear and confined to more of a vendor-specific or niche role.”

The Rundown on Azure Service Fabric and Microservices

Service Fabric is the new middleware layer from Microsoft designed to help companies scale, deploy, and manage microservices. Service Fabric supports both stateless and stateful microservices. For stateful microservices, Service Fabric colocates your application code and storage, reducing latency, and it automatically provides replication services in the background to improve the availability of your services.

Azure Service Fabric improves the deployment process for customers embracing DevOps with features like rolling upgrades and automatic rollback during deployments.

Empowering customers to deliver microservices using Azure Service Fabric is a key contributor to Microsoft’s revenue growth through the success of Azure, which expanded 102 percent year-over-year.

Top enterprises betting on Azure services today include global chocolatier The Hershey Company, Amazon e-commerce competitor Jet.com, digital textbook builder Pearson, GE Healthcare, and broadcaster NBC Universal. Azure is an optimized, multi-platform cloud solution that can power applications running on Windows and Linux using .NET, Node.js, and a host of other runtimes, making it easier for customers deploying applications that scale using microservices to adopt, regardless of language or underlying OS.

Why Microsoft Chose Microservices Over Monolithic

When Microsoft started running cloud-scale services such as Bing and Cortana, it ran into several challenges with designing, developing, and deploying apps at cloud-scale. These were services that were always on and in high-demand. They required frequent updates with zero latency. The microservices architecture made much more sense than a traditional monolithic approach.

Microsoft’s Mark Fussell defined the problem with monolithic: “During the client-server era, we tended to focus on building tiered applications by using specific technologies in each tier. The term ‘monolithic application’ has emerged for these approaches. The interfaces tended to be between the tiers, and a more tightly coupled design was used between components within each tier. Developers designed and factored classes that were compiled into libraries and linked together into a few executables and DLLs.”

There were certainly benefits to that methodology at the time in terms of simplicity and faster calls between components using inter-process communication (IPC). Everyone is on one team testing a single piece of software, so it’s easier to coordinate tasks and collaborate without explaining what each person is working on at a given moment.

Azure and Microservices

The monolithic approach started to fail when the app ecosphere turbocharged the speed of user expectations. If you want to scale a monolithic app, you have to clone it out onto multiple servers or virtual machines (or containers, but that’s another story). In short, there was no easy way to break out and scale components rapidly enough to satisfy the business needs of enterprise-level app customers. The entire development cycle was tightly interconnected by dependencies and divided by functional layers, such as web, business, and data. If you wanted to do a quick upgrade or fix, you had to wait until testing was finished on the earlier work. Monolithic and agility didn’t mix.

The microservices approach is to organize a development project based on independent business functionalities. Each can scale up or down at its own rate. Each service is its own unique instance that can be deployed, tested, and managed across all the virtual machines. This aligns more closely with the way that business actually works in the world of no latency and rapid traffic spikes.

In reality, many development teams start with the monolithic approach and then break it up into microservices based on which functional areas need to be changed, upgraded, or scaled. Today, DevOps teams that are responsible for microservices projects tend to be highly cost-effective but insular. APIs and communications channels to other microservices can suffer without strong leadership and foresight.

How Azure Service Fabric Helps

Azure Service Fabric is a distributed systems platform that assigns each microservice a unique name, which can be stateless or stateful. Service Fabric streamlines the management, packaging, and deploying of microservices, so DevOps teams and admins can just forget about the infrastructure complexities and get down to implementing workloads. Microsoft defined Azure Service Fabric as “the next-generation middleware platform for building and managing these enterprise-class, tier-1, cloud-scale applications.”

Azure Service Fabric is behind services like Azure SQL Database, Azure DocumentDB, Cortana, Microsoft Power BI, Microsoft Intune, Azure Event Hubs, Azure IoT Hub, and Skype for Business. You can create a wide variety of cloud native services that can immediately scale up across thousands of virtual machines. Service Fabric is flexible enough to run on Azure, your own bare-metal on-premises servers, or any third-party cloud. More important, especially if you’re an open-source house, Service Fabric can also deploy services as processes or in containers.

Azure Container Services

Open-source developers can use Azure Container Service for Docker container orchestration and scale operations. You’re free to work with Mesos-based DC/OS, Kubernetes, or Docker Swarm and Compose, and Azure will optimize the configuration for .NET and Azure. The containers and your app configuration are fully portable. You can modify the size, the number of hosts, and which orchestrator tools you want to use, and then leave the rest to Azure Container Service.

The most popular development tools and frameworks are compatible because Azure Container Service exposes the standard API endpoints for each orchestration engine. That opens the door for all of the most common visualizers, monitoring platforms, continuous integration tools, and whatever the future brings. For .NET developers or those who have worked with the Visual Studio IDE, the Azure interface presents a familiar user experience. Developers can use Azure and the cross-platform fork of .NET known as .NET Core to create open-source projects running ASP.NET applications on Linux, Windows, or even Mac.

Taking on New Challenges With Service Fabric

Microsoft’s role as a hybrid cloud expert gives Azure an edge over virtual-only competitors like AWS and Google Cloud. Azure’s infrastructure comprises hundreds of thousands of servers, content distribution networks, edge computing nodes, and fiber optic networks. Azure is built and managed by a team of experts working around the clock to support services for millions of businesses all over the planet.

Developers experienced with microservices have found it valuable to architect around the concept of smart endpoints and dumb pipes. In this approach, the end goal of microservices applications is to function independently, decoupled but as cohesive as possible. Each should receive requests, act on its own domain logic, and then send off a response. Microservices can then be choreographed using RESTful protocols, as detailed by James Lewis and Martin Fowler in their microservices guide from 2014.
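To make the “smart endpoints and dumb pipes” idea concrete, here is a minimal sketch in Python using Flask. The service name, route, and pricing logic are purely illustrative assumptions; the point is that the domain logic lives inside the service, while HTTP stays a simple transport.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# The domain logic lives inside the service; HTTP stays a "dumb pipe".
def quote_price(sku: str, quantity: int) -> dict:
    unit_price = 9.99  # stand-in for a real price lookup
    return {"sku": sku, "quantity": quantity, "total": round(unit_price * quantity, 2)}

@app.route("/quotes", methods=["POST"])
def create_quote():
    payload = request.get_json(force=True)
    # Receive a request, apply the service's own domain logic, send a response.
    quote = quote_price(payload["sku"], int(payload.get("quantity", 1)))
    return jsonify(quote), 201

if __name__ == "__main__":
    app.run(port=5001)
```

Other services would call this endpoint over plain HTTP rather than through a heavyweight message bus, which is what keeps the pipes “dumb.”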

If you’re dealing with workloads that have unpredictable bursting, you want an infrastructure that’s reliable and secure while knowing that the data centers are environmentally sustainable. Azure lets you instantly generate a virtual machine with 32TB of storage driving more than 50,000 IOPS. Then, your team can tap into data centers with hundreds of thousands of CPU cores to solve seemingly impossible computational problems.

AppDynamics for Azure

In the end, the user evaluates the app as a singular experience. You need application monitoring that makes sure all the microservices are working together seamlessly and with no downtime. The AppDynamics App iQ platform is what you need to handle the flood of data coming through .NET and Azure applications. You can monitor all of the .NET performance data from inside Azure, as well as frameworks and runtimes like WebAPI, OWIN, MVC, and ASP.NET Core on full framework, deploying AppDynamics agents in Azure websites, worker roles, Service Fabric, and containers. In addition, you can monitor the performance of queues and storage for services like Azure SQL Server and Service Bus. This provides end-to-end visibility into your production services running in the cloud.

The asynchronous nature of microservices makes it nearly impossible to track down the root failure as it cascades through services unless you have solid monitoring in place. With AppDynamics, you’ll be able to visualize the service path from end to end for every single interaction, all the way from origination through the service calls. Otherwise, you’ll get lost in the complexity of microservices and lose all the benefits of building on the Azure infrastructure.

While we see many developers in the Microsoft space attracted to Azure, AppDynamics recognizes that Azure is a cross-platform solution supporting both Windows and Linux. In addition to .NET runtimes, AppDynamics provides a rich set of monitoring capabilities for many of the modern technologies used in the Azure cloud, including Node.js, PHP, Python, and Java applications.

Learn more

Learn more about our .NET monitoring solution or take our free trial today.

Microservices Sprawl: How Not to be Overrun

The rise of containers and microservices has skyrocketed the rate at which new applications move into production environments today. While developers have been deploying containers to speed up development processes for some time, challenges remain in running microservices efficiently. Most existing IT monitoring tools don’t actually maintain visibility into the containers that make up microservices. As those container applications move into production, some IT operations teams are suddenly finding themselves flying blind. Unless IT operations teams adopt more modern monitoring solutions and new approaches to managing DevOps, containers and microservices will wind up creating more issues to troubleshoot than they save in development time.

So what steps should you take to ensure that your containers and microservices framework are performing up to speed?

Microservices enabled by containers are popular with developers because they enable easily isolated functions. That makes it simpler to either build a new application or update an older one. So when product teams are under more pressure to build more dynamic, faster, and more agile applications, microservices and containers can provide those new capabilities.

However, this only marks one part of IT complexity. Every tool and channel that plays a role in the development-to-deployment process requires monitoring and optimization. Some teams might elect to implement a different monitoring system for every function, but having to connect several tools into one cohesive process can become taxing. A single IT monitoring platform that provides all the context an IT organization needs to respond to problems amid rapidly changing IT conditions is a better approach. It’s necessary to be able to share metrics pertaining to both a specific container and the rest of the IT infrastructure environment. After all, while the performance attributes of a specific container might be interesting, that information only becomes truly useful when it can be compared against everything else that is happening across the IT environment.

Without that capability, an IT team will waste endless hours in war rooms trying to prove their innocence whenever a problem arises. Given the thousands of containers that might be operating at any given time, IT teams could easily wind up chasing their tails trying to replicate a problem that might only exist for a few intermittent minutes.

Some organizations are also adopting a DevOps mindset by allowing developers to own the entire lifecycle of containerized applications built from microservices. Rather than handing off the task, developers can manage and own the maintenance for the code they have created. Tasks like these are a sign that microservices and DevOps are working together to fundamentally change the way we optimize an application’s performance.

Learn more

Read more about how you can do more to maintain the microservices framework so it can reach its highest capability in our latest guide on the microservices sprawl. Download the eBook today.

Introducing Microservices iQ

As part of the AppDynamics Summer ‘16 release, we are announcing Microservices iQ, a new intelligent application performance engine that enables enterprises to efficiently manage microservice-based application environments and deliver performance that delights their customers while exceeding their scale, sophistication, and velocity expectations.

Microservices architecture is an increasingly popular style of enterprise application development where, instead of large monolithic code bases, applications are composed of many fine-grained components or services developed and operated by smaller teams. These independent services may be used in conjunction with other services to support one or more business transactions.

 


Figure 1: Monolith vs. Microservices 

A microservices architecture significantly enhances the agility and accelerates the velocity of continuous integration and delivery of enterprise applications. However, this approach can result in an exponentially larger number of microservices that are loosely coupled and communicate primarily via asynchronous mechanisms, creating increased complexity and a significant management challenge.

AppDynamics, now powered by Microservices iQ, automatically detects the service endpoints of a microservices architecture and allows them to be viewed in isolation from the distributed business transactions that use them. We can understand microservice lifecycles and ensure data continuity despite the intermittent presence of the underlying application infrastructure. We can check the availability of microservices within your network as well as the availability of third-party services. Our new Contention Analysis provides the next level of performance diagnostics for microservices, ensuring that a particular service is not a bottleneck blocking business transactions.

Here are the key capabilities of the AppDynamics Microservices iQ:

Service Endpoints: AppDynamics automatically detects service endpoints of your microservice architecture, enabling you to shine a spotlight on microservices without worrying about the entire distributed business transaction that uses it.


Figure 2: Service Endpoint Dashboard

DevOps teams can monitor key performance indicators (KPIs) like calls per minute, average response time, and errors per minute for their microservices, not only in production but also in early development and throughout the entire lifecycle, using the Service Endpoint Dashboard (Figure 2).

The dashboard also lists snapshots with detailed diagnostics that enable DevOps teams to drill down and isolate the root cause of any performance issues affecting the microservices.

Thread Contention Analysis: Given the independent nature of components in microservice architectures, it is more likely that a particular microservice is invoked as part of multiple business transactions and becomes a performance bottleneck for those transactions if it blocks their execution. The new thread contention analyzer helps identify methods, within the scope of service endpoints, where threads are blocked, by reporting the block time, the blocking object, and the blocking line of code. As you can see in the screenshot of the new thread contention analysis window for a service endpoint (Figure 3), the blocking threads, blocking object, block time, and a reference to the line of code are highlighted.


Figure 3: Thread Contention Analysis

This feature can significantly minimize the time required to isolate and resolve application performance issues with the microservices and the business transactions invoking them.
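As a rough illustration of the kind of problem this feature surfaces (this is not AppDynamics code, just a hypothetical Python sketch), the snippet below serializes every request on a single lock, so threads spend most of their time blocked on one object at one line of code:

```python
import threading
import time

inventory_lock = threading.Lock()  # the "blocking object"

def reserve_stock(item_id: str) -> None:
    with inventory_lock:   # threads queue here; this is the blocking line of code
        time.sleep(0.5)    # stand-in for a slow, serialized operation

threads = [threading.Thread(target=reserve_stock, args=(f"sku-{i}",)) for i in range(8)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()

# With eight threads serialized on one lock, elapsed time is roughly 8 x 0.5s,
# even though each request does almost no real work.
print(f"elapsed: {time.time() - start:.1f}s")
```

A contention analyzer points you at the lock and the line rather than leaving you to infer them from response-time graphs.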

Elasticity Management: In highly dynamic environments, with microservices deployed in elastic infrastructure like containers or the cloud, the underlying infrastructure nodes may scale up or down rapidly, creating a management nightmare to track these microservices and the infrastructure nodes in the context of the associated business transaction.

AppDynamics maintains logical identity and historical data about these transient nodes for a certain period making it easy to track them in context of a business transaction. In addition, it minimizes the system’s overhead by recycling the logical node identity after a certain period to ensure that the enterprise applications can scale to meet their growing business needs.

Extending the AppDynamics App iQ Platform: Microservices iQ extends AppDynamics’ existing App iQ Platform, which enables enterprises to deliver performance that exceeds the scale, sophistication, and velocity expectations of today’s customers. The platform is the foundation of AppDynamics customers’ success and is powered by intelligent Application Performance Engines. These engines work in concert to help ensure enterprises can deliver peak performance across any application, user engagement, and business transaction.

The new Microservices iQ capabilities enhance the core AppDynamics platform, which is already designed to provide end-to-end visibility into the agile application infrastructure where microservices are deployed. For example, AppDynamics can automatically discover a large number of microservices, dynamically baseline their performance, collect deep diagnostics, and alert when performance deviates from the normal baseline. Manually instrumenting such a large number of microservices and setting static thresholds for alerting would be a very difficult task, if not an impossible one.

To learn more about AppDynamics Microservices iQ, visit http://www.appdynamics.com/microservices.

4 Challenges You Need to Address with Microservices Adoption

In the last few weeks, we’ve introduced the concept of microservices and its role as a business initiative and how to migrate your organization towards a microservices model. Transitioning to microservices creates significant challenges for organizations. This week, I’ll delve into some of the obstacles you might face and the ultimate benefits of your efforts.

Microservices Architecture

Microservices architecture is much more complex than legacy systems. In turn, the environment becomes more complicated because teams have to manage and support many moving parts. Some of the things you must be concerned about include:

  • As you add more microservices, you have to be sure they can scale together. More granularity means more moving parts which increases complexity.
  • When more services are interacting, you increase possible failure points. Smart developers stay one step ahead and plan for failure.
  • Transitioning functions of a monolithic app to microservices creates many small components that constantly communicate. Tracing performance problems across tiers for a single business transaction can be difficult. This can be handled by correlating calls with a variety of methods, including custom headers, tokens, or IDs (see the sketch after this list).
  • Traditional logging is ineffective because microservices are stateless, distributed and independent — you would produce too many logs to easily locate a problem. Logging must be able to correlate events across several platforms.
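Here is a minimal sketch of the header-based correlation mentioned above, written in Python with Flask and requests. The X-Correlation-ID header name and the downstream URL are illustrative conventions, not prescribed by any particular tool:

```python
import logging
import uuid

import requests
from flask import Flask, request

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

@app.route("/orders", methods=["POST"])
def create_order():
    # Reuse the caller's ID if one is present so the whole transaction shares one ID.
    correlation_id = request.headers.get("X-Correlation-ID", str(uuid.uuid4()))
    logging.info("order received correlation_id=%s", correlation_id)

    # Pass the same ID to the next service so its logs can be correlated with ours.
    resp = requests.post(
        "http://payments.internal/charges",  # hypothetical downstream service
        json={"amount": 42},
        headers={"X-Correlation-ID": correlation_id},
        timeout=2,
    )
    return {"status": resp.status_code, "correlation_id": correlation_id}

if __name__ == "__main__":
    app.run(port=5000)
```

Every log line and downstream call then carries the same ID, so a single business transaction can be stitched back together across services.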

Other considerations include:

  1. Operations and Infrastructure: The development group has to work more closely with operations than ever before. Otherwise, things will spin out of control due to the multitude of operations going on at once.
  2. Support: It is significantly harder to support and maintain a microservices setup than a monolithic app. Each one may be made from a wide variety of frameworks and languages. The infinite complexities of support influence decisions on adding services. If a team member wants to create a new service in an esoteric language, it impacts the whole team because they have to make sure it can work with the existing setup.
  3. Monitoring: When you add new services, your ability to maintain and configure monitoring for them becomes a challenge. You will have to lean on automation to make sure monitoring can keep up with changes in the scale of services.
  4. Security of Application: The proliferation of services in this architecture creates more soft targets for hackers, crackers and criminals. With a variety of operating systems, frameworks and languages to keep track of, the security group has their hands full making sure the system is not vulnerable.
  5. Requests: One way to send data between services is using request headers. Request headers can contain details like authentication that ultimately reduce the number of requests you need to make. However, when this is happening across a myriad of services, it can increase the need for coordination with members of other teams.
  6. Caching: Caching helps reduce the number of requests you’ll need to make. Caching requests that involve a multitude of services can grow complicated quickly, necessitating communication from different services and their development teams.
  7. Fault Tolerance: The watchword with microservices is “interdependence.” Services have to be able to withstand outright failures and inexplicable timeouts. Failures can multiply quickly, creating a cascading effect through some services and potentially spiking others needlessly. Fault tolerance in this environment is much more complicated than in a monolithic system; a minimal sketch of failing fast with a fallback follows this list.
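As a minimal sketch of failing fast with a fallback (the service URL and default values are assumptions for illustration), a caller can bound how long it waits on a flaky dependency and degrade gracefully instead of letting the failure cascade:

```python
import requests

FALLBACK_RECOMMENDATIONS = ["bestseller-1", "bestseller-2"]  # safe default

def get_recommendations(user_id: str) -> list:
    try:
        resp = requests.get(
            f"http://recommendations.internal/users/{user_id}",  # hypothetical service
            timeout=0.3,  # fail fast instead of tying up threads upstream
        )
        resp.raise_for_status()
        return resp.json()["items"]
    except requests.RequestException:
        # Degrade gracefully rather than propagating the failure.
        return FALLBACK_RECOMMENDATIONS

# If the recommendations service is down or slow, the caller still gets a usable answer.
print(get_recommendations("user-1"))
```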

Spotlight on DevOps

In an old-school development environment, there was little integration among the separate functions within the IT department. DevOps is the evolution of collaboration, where the operations, development, and quality assurance teams collaborate and communicate throughout the software development process. It’s not a separate role held by a single person or group of individuals. Rather, it conceptualizes the structure needed to help operations and development work closely together. With a microservices architecture, developers are responsible for creating a system to deliver the final product successfully.

Along with the continuing migration of large and small organizations to microservices, developers must also evolve. Because it is so easy to deploy microservices, developers are getting involved in code deployments and production monitoring. This contrasts with the traditional instance where developers would write code and hand it off for another team (DevOps) to deploy and maintain. Today, developers and DevOps are merging into smaller application teams responsible for three main components: building, deployment and monitoring.

Microservices are changing how teams are structured, allowing organizations to create teams centered on specific services and giving them autonomy and responsibility in a constrained area. This approach lets the company rapidly adjust in response to fluctuating business demand, without interrupting core activities. It also makes it easier to onboard new staff quickly.

Developers may have to handle some additional challenges including:

  • A shortage of developers with JavaScript experience who know how to put together microservices architectures.
  • Understanding and implementing services for the Internet of Things.
  • The ability to help companies introduce technology into business planning and strategy.
  • Teaching business leaders how open APIs can augment their current business lines and open new opportunities in the marketplace.
  • How to simplify the development stack, choose the right technology and push back when vendors offer unproductive middleware.
  • Learn from industry leaders like Netflix, and decide which implementations of microservices will best serve their organizations.
  • Understand that many vendors have still not created a stable microservices platform.
  • Be able to handle the pressure of managing and operating possibly hundreds of individual microservices at the same time.
  • Manage an increasingly complex network of teams including operations, architects, coders, quality assurance and integrators that still may not completely understand the microservices approach.

Beginning the Transition

Once you launch the transition process, you’ll notice that new challenges emerge that you did not expect, including:

  • How much of the workload should be moved to microservices?
  • Should you allow code to be migrated to different services?
  • How do you decide what the boundaries of each microservice will be while the operation is running?
  • How do you monitor the performance of microservices?

 

Want to read more on how enterprise teams scale with microservices? Read the full eBook on “How to Build (and Scale) with Microservices” here!

How to Migrate to Microservices

Today, modern enterprise is rushing head first into an always-on, digital-centric, mobile world. Organizations that fail to modify their approach to technology will be left by the wayside as others incorporate highly flexible and scalable architectures that adapt quickly and efficiently to the demands of the modern marketplace.

The rapid rise in popularity of microservices was driven by these market influences. In just a few short years, companies have implemented various configurations of technologies to offer the best user experience. 

Challenges with Migrating 

One of the primary challenges when considering migrating to microservices is that monolithic legacy systems cannot be changed overnight. DevOps and IT managers must decide where and when they can incorporate microservices into their existing applications. In “The Four-Tier Engagement Platform,” a report for Forrester Research, Ted Schadler, Michael Facemire, and John McCarthy say it is time to move the technology industry to a four-tier architecture.

In an article for InfoWorld, Patrick Nommensen summarized the Four-Tier Architecture. As he explains, the dramatic changes in computing, including the incredible market penetration of mobile devices, mean developers must take an entirely new approach to thinking about application development. The Four-Tier approach is broken down into different layers:

  • Client Tier: The delivery of customer experience through mobile clients and the Internet of Things.

  • Delivery Tier: Optimizes user experience by device while personalizing content by monitoring user actions.

  • Aggregation Tier: Aggregates data from the services tier while performing data protocol translation.

  • Services Tier: The portfolio of external services such as Twilio and Box, as well as existing data, services, and record systems already in-house.

Perhaps the biggest difference with this new approach is the separation of the client tier. With this model, the layers underneath are constantly changing based on real-time interaction with users. 

A Practical Approach to Migration

So what tools do you need to move into microservices? The first consideration is that you must decide on a microservices architecture. Figure out how the services will interact before trying to optimize their implementation. Next, while microservices architectures provide much speed, you have to continually optimize those speed gains. This means that you have to be flexible in the tools that you use to deploy the architecture.

Owen Garret shares with InfoWorld a practical, three-step approach to handle a migration to microservices: 

  1. Componentize: Choose a component from your existing applications, and create a microservices implementation on a pilot basis.

  2. Collaborate: Share the techniques and lessons learned from the pilot in Stage One with all stakeholders, programmers, and developers on the team. This gets them on board with new processes and initiatives.

  3. Connect: Complete the application and connect to users in a real-world scenario.

Data Coupling

Microservices architecture is loosely coupled, with data often communicated through APIs. It is not unusual for one microservice to have fewer than a couple hundred lines of code and manage a single task. Loose coupling relies on three things:

  1. Limited scope and focused intelligence built-in.

  2. Intelligence set apart from the messaging function.

  3. Tolerance for a wide variety of modifications of microservices with similar functions — changes are not forced or coordinated.

The API expresses a specification, creating a contract that indicates what service is provided and how other programs are supposed to use it. Using APIs to decouple services creates a tremendous amount of freedom and flexibility.
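As a small, hypothetical example of that decoupling in Python (the endpoint and field names are assumptions), a consumer only needs to honor the documented request and response shapes; it never sees the provider’s code, language, or database:

```python
import requests

def get_shipping_estimate(postcode: str, weight_kg: float) -> float:
    """Assumed contract: POST /estimates with {"postcode", "weight_kg"};
    the response is JSON shaped like {"days": <int>, "cost": <float>}."""
    resp = requests.post(
        "http://shipping.internal/estimates",  # hypothetical service
        json={"postcode": postcode, "weight_kg": weight_kg},
        timeout=1,
    )
    resp.raise_for_status()
    return resp.json()["cost"]
```

As long as the provider keeps that contract, it can change its implementation freely without breaking the consumer.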

New Service Platforms

Platforms for microservices are evolving rapidly. New platforms are emerging while more established platforms are modifying their approach. Some examples include:

  • Microsoft’s Azure BizTalk Microservices lets clients using Azure build microservices applications in their choice of cloud. It is part of a greater effort to move Azure to a model of small components.

  • Gilliam is a Platform as a Service (PaaS) custom made for creating, deploying and scaling microservices. It creates a Docker image of every code repository onsite.

  • LSQ is a PaaS with pre-made templates, documentation editor, and NPM package manager. It includes a development environment, assembly and testing area, and cloud deployment.

  • Pivotal is a native cloud platform that focuses on developing microservices for companies like Ford, Allstate, Mercedes Benz, GE, Humana, and Comcast.

Microservices to Help Legacy Apps

Consider a legacy system coded in C and running on multiple mainframes. It’s been running for years without any major hiccups and delivers the core competency of the business reliably. Should you attempt to rewrite the code to accommodate new features? A gradual approach is recommended because new microservices can be tested quickly without interrupting the reliability of the current monolithic structure. You can easily use microservices to create new features through the legacy API. Another approach is to modularize the monolithic architecture so that it can still share code and deployments, but move modules into microservices independently if needed.

People and Processes

Deploying microservices involves more than incorporating new technology. You have to be able to adopt new processes and team dynamics to make the transition effective over time. Oftentimes managers break applications down by technology, assigning responsibility to different teams. With microservices, applications are separated into services that are grouped by business capability. All software such as user experience, external connections, and storage are implemented within each business domain. Team members handle full application development from user interfaces down to databases.

This change in structure affects the people within it, too. Developers used to monolithic systems may have a difficult time switching from a world of one language to a multi-language land of new technologies. Microservices free them up to be more autonomous and responsible for the “big picture.”

However, operating with this newfound freedom can be overwhelming for programmers with years of experience in the old ways of doing things. You must be constantly aware of your team’s ability to change. They may need time to adjust to new guidelines and procedures. Clear communication is the key. Detail their responsibilities in this new style of working and why they are important. Unless you have buy-in from your team members at the start, making adjustments later may be difficult at best and dead on arrival at worst.

Entering the New Era of Computing

This new era of computing is based on ultra-fast data processing. Events are monitored, analyzed and processed as they happen. We can make timely decisions based on this continually updated flow of data, resulting in better service for clients, improved operations and instant monitoring of business performance against predetermined targets.

Microservices are not a panacea. There are potential drawbacks such as code duplication, mismatch of interfaces, operations overhead and the challenge of continuous testing of multiple systems. However, the benefits of creating loosely coupled components by independent teams using a variety of languages and tools far outweigh the disadvantages. In our current computing environment, speed and flexibility are the keys to success — microservices deliver both.

 

Want to read more on the process of migrating to microservices? Read the full eBook on “How to Build (and Scale) with Microservices” here! 

How Microservices are Transforming Python Development

The goal of any tech business worth its salt is to provide the best product or service to its clients in the most efficient and cost-effective way possible. This is just as true in the development of software products as it is in other product design services.

Microservices, an app architecture style that leans mostly on independent, self-contained programs, are quickly becoming the new norm, so to speak. With this change comes a declining reliance on older SOA technologies like CORBA, a push toward more sustainable API approaches, and fewer monolithic development and deployment models.

So why are microservices suddenly at the forefront of the software architecture conversation? They are changing how Python-based developers are getting things done in a way that’s far more efficient than before, and in more ways than one.

The Differences Between Microservices and SOAs

Diving deeper into the differences between microservices and SOAs, you have to remember that, at their core, microservices are essentially an offshoot of SOAs, although they both act and deploy independently from each other.

SOAs also follow four major tenets during the development and deployment phases:

  • Their boundaries are inherently explicit.

  • They provide autonomous services.

  • Those services share both schema and contract but not class.

  • The compatibility of those services is policy-based.

Once you’ve established these distinctions, you can then make a far more accurate comparison between microservices and SOAs in that SOAs are architectural patterns that use their respective components to provide services to other components, within or without the same application. In microservices, only services independent of the application in question deploy those same components.

Although microservices are not a novel or inherently “new” architecture style, since much of their roots derive from the founding design principles of Unix, wider adoption by developers still carries significant implications for productivity and innovation.

The Evolution of Microservices

Overall, the timeline for the evolution of Python-based apps, from monolithic to microservices, has been a relatively short one. On top of that, much of the evolution was born out of a necessity for forward progression and increased ease among developers.

It is widely accepted that microservices have more substance attached to them because they’ve done away with bulky XML-based schemas that large corporations are known for using in favor of slimmer applications that rely far less on bloat. Ultimately, microservices have become more common over time because they:

  • Can deploy independently of the core application

  • Can function properly while remaining separate from dependent responsibilities

  • Possess strong backward compatibility, making them less prone to breakage

Development team advantages include:

  • Allow for the decentralization of data management so teams and subteams can be responsible for maintenance on a far more granular level

  • Enable the use of infrastructure automation, from testing to deployment, without much need for human supervision

  • Faster ramp-up time for new team members means they can learn processes faster by focusing on smaller chunks of data.

There are still widely accepted, monolithic-first approaches within the development phase that development teams can break down into SOAs and, further still, into microservices. Some of the more successful applications still employ monolith-first patterns but in conjunction with the use of microservices and even nano-services.

A Word (or Two) on Nano Services

There’s plenty of support (and animosity) toward just how deep developers should go down the rabbit hole concerning the development and use of nano-services. Just as you’d think, nano-services are simply components that designers have drilled down to an even more granular level than their microservices predecessors.

For some, it is a virtual splitting of hairs while, for others, it is yet another landscape that we have yet to understand fully and, therefore, properly utilize. Both sides can agree, however, that the status quo will likely share neither sentiment anytime soon.

Advantages of Microservices

While developers and the enterprise-level businesses they work for tend to flock toward the main codebase approach of a monolithic architecture for its benefits, there are also some pros that come with building a software product incrementally.

Microservices come with a unique set of advantages, some briefly mentioned earlier, that allow developers to create building blocks that they can then retrofit into an existing codebase as needed. Other significant advantages include:

  • You can change the implementation behind a public API, without breaking it, even after you define it and others start using it.

  • The services are so small that they make maintenance from one developer to another easier to facilitate and understand.

  • There are no development language limitations, so you can use what’s best for you and your team.

  • It is easier to upgrade systems one microservice at a time than it is to upgrade a monolithic system.

  • Cross-implementation compatibility allows you to prototype in one language and re-implement in another.

  • Regardless of the size of your operation, if most of your product builds require more detailed components and adaptive development, then microservices are a better approach.

Current Microservices Implementations

Arguably, the go-to implementation for most microservices today, Python-based or otherwise, is Docker. Aside from increased agility and control, many of today’s developers are embracing the ability to work remotely, so naturally any implementation that allows for more portability than the competition is greatly appreciated by the developer community.

Other popular implementations include but are not limited to:

  • Flask

  • MicroService4Net

  • Microsoft Service Fabric

  • NetKernel

  • Nirmata

  • Spring Cloud

Currently, the trend toward component development and product compartmentalization will continue as the need for customizable applications and modular design becomes more prevalent.

Why Python?

Most Python developers who implement microservices during development likely use the RESTful approach to creating an API, which is an all-inclusive way of utilizing available Web protocols and software to remotely search and manipulate objects.

Defined by Dr. Roy Fielding in his 2000 doctoral dissertation, the RESTful approach has a basic premise that follows three distinct canons:

  • You are required to use any provided links or other resources, making your application’s API browseable.

  • You are expected to recognize the uniform interface of HTTP.

  • You are expected to use each of the verbs (e.g., GET, POST, PUT, DELETE) without violating their semantics (a sketch follows this list).
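A brief, hypothetical Flask sketch of the third canon: each verb keeps its usual semantics (GET reads, POST creates, PUT replaces idempotently, DELETE removes). The “notes” resource is invented for illustration:

```python
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
NOTES = {}    # in-memory store, for illustration only
NEXT_ID = 1

@app.route("/notes", methods=["GET"])                    # GET: read-only, no side effects
def list_notes():
    return jsonify(list(NOTES.values()))

@app.route("/notes", methods=["POST"])                   # POST: create a new resource
def create_note():
    global NEXT_ID
    note = {"id": NEXT_ID, "text": request.get_json(force=True)["text"]}
    NOTES[NEXT_ID] = note
    NEXT_ID += 1
    return jsonify(note), 201

@app.route("/notes/<int:note_id>", methods=["PUT"])      # PUT: idempotent replace
def replace_note(note_id):
    if note_id not in NOTES:
        abort(404)
    NOTES[note_id] = {"id": note_id, "text": request.get_json(force=True)["text"]}
    return jsonify(NOTES[note_id])

@app.route("/notes/<int:note_id>", methods=["DELETE"])   # DELETE: remove the resource
def delete_note(note_id):
    NOTES.pop(note_id, None)
    return "", 204

if __name__ == "__main__":
    app.run(port=5002)
```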

Python’s Development Advantages

As mentioned before, you can implement and re-implement microservices across virtually every language, but with Python, there are several advantages that make working within it straightforward and convenient. They include:

  • So long as the API is formatted correctly, prototyping is easier and quicker than in other languages.

  • Instead of having to rely on full-fledged installations of frameworks like Django, you can use lighter installs that are just as powerful, like Flask and others.

  • Looking toward the future, it is a fantastic opportunity to start coding in Python 3, if you do not already.

  • Backward compatibility with legacy languages, like PHP and ASP, allows you to build Web service front ends to a host of microservices.

Furthermore, microservices help to optimize the performance of Python-developed applications two-fold:

  • They become easier to monitor because the apps are now broken up into components.

  • Performance issues become easier to identify, allowing for more granular diagnoses of flawed, bottlenecked or buggy services.

Moreover, for a design pattern that’s used by the likes of Google, Amazon, Microsoft, Netflix, Uber and more, there are no signs of this architecture going anywhere any time soon.

The Future of Microservices

Although it is easy to agree that the future of software architecture development is moving in the direction of increased modularity and microservices, that does not mean it will not come with its fair share of complications. This is doubly true for larger companies that have created much of their codebase with a monolithic approach in mind.

There are plenty of reasons why an industry-wide shift to microservices might fail, but there are a few challenges that developers and software architects should be mindful of in the coming years:

  • Complete software compatibility: With componentization, much of software’s success depends on its compatibility with its respective components and vice versa. Moving code from service to service becomes difficult, and a development team would need to orchestrate complete coordination.

  • Clean composition: If microservice components do not compose code cleanly, you are simply overcomplicating the inner workings of connections between their respective components. This shifts unnecessary stress and complexity to an area that’s harder to control.

  • Evolutionary design considerations: When you decide to work in an environment where you can break down components, you are faced with the challenge of figuring out how and where to break them down. It calls for making the risky decision of knowing what you can scrap or save from version 1.0 to version 5.1 and beyond far ahead of time.

  • Required skill sets: Not all teams are created equal. Just because one team has the skills required to adopt new techniques does not mean your team will. Foisting an ill-equipped team into uncharted territory could prove disastrous for your entire infrastructure.

Although the distinctions between SOAs and microservices might seem a bit minute, each one still has its intended purposes, whether you are using Python or any other development language.

The fact remains that microservices are only going to become more of a necessity as development projects require more specific (and complex) functions and harness the interconnectedness that comes with coding for mutually independent services; however, these changes can, and may, come with a unique collection of augmentations and subsequent headaches.

Using Microservices as a Business Initiative

For microservices to work in an organization, there must be a business initiative attached to it. Questions arise among IT professionals on whether microservices are suited only for giant Web applications like Google and Facebook. However, scale is only one of the business benefits of microservices.

In today’s computing environment, innovation and speed are critical. The movement toward microservices is generated by the need to create new software that can enhance and improve a monolithic system but is separate from it. This decoupling from the legacy system provides the freedom to experiment with new approaches and rapidly iterate changes and modifications.

Traditional systems cannot move at that speed, and that may leave companies disadvantaged. At the AppSphere ’15 conference, Boris Scholl from Microsoft shared a situation his team once had with a monolithic system. It had become so complex that when they added new code, the system would stop working, and it took two days for engineers to figure out why. That is too slow.

Companies are trying to decide where microservices fit in with their traditional systems. Developers used to worry simply about coding, but now with the modular approach to technology, they need to widen their view of all the technologies involved and how they work together. They now share responsibility and accountability for the project as a whole — the micro view of their direct assignment, say coding the UX; and the macro view of the final product, a home banking app for example.

Code must be monitored the minute it is deployed. The feedback loop is instantaneous. DevOps may be monitoring 50 different microservices. The data is available right away, but that means IT teams must also continuously monitor, tweak and adjust on-the-fly. It is a challenge.

The Business Case for Microservices

Allan Naim, Product Manager of Container Engine and Kubernetes at Google, told the audience during a panel discussion at AppSphere ’15 that it is not easy for IT organizations to incorporate microservices, so they must have an associated business initiative. Often business objectives originate with the CEO and Board of Directors. From there, the CMO or the CSO begins to implement them, and it forces the IT staff to start working with microservices. Naim said he sees a time in the not-too-distant future where every organization, no matter the industry or segment of the market, will ultimately become a software company. That is because customer data is becoming as valuable as the product or service itself.

To leverage that asset, organizations must act quickly, changing their offerings based on a constantly evolving landscape. Legacy apps have a hard time adjusting to the new demands of the market such as mobility and the Internet of Things. Competition, especially in the form of aggressive startups that look to disrupt industries, is forcing organizations to integrate microservices architecture with their legacy systems, whether the data is in a relational database or not.

From Highly Specialized to Highly Adaptable

It comes down to the need to provide the highest-quality software to large amounts of customers as quickly as possible. Microservices are not only changing the way companies write code; they are changing the companies themselves. For example, in a monolithic system, the roles of each team member tended to be highly specialized.

In the world of microservices, that approach is highly devalued. Instead, it is better for each team member to be free to operate on different parts of the application without interruption. Rather than hand off development to the next stage, the application is constantly being monitored and modified as it is being developed.

Homegrown Analytics and Monitoring Tools

Another development resulting from these market pressures is that IT teams have started building their own tools. Netflix created its own monitoring system. In fact, they custom made some non-unified tools, a very different approach than that taken by companies like Facebook and Google.

For example, they built their analytics software to process huge volumes of data. How much volume are we talking about? Consider this eye-opening statistic: Networking provider Sandvine reports that just over 30 percent of Web traffic during prime time comes from Netflix customers streaming movies.

The development of microservices is changing more than software code itself. It is making an enormous impact on how organizations think through their business processes, what products they bring to market and how they are going to support their products with customers in the marketplace.

Because of the explosion of mobile devices and the always shifting wants and needs of consumers, IT professionals have to adapt just as quickly. Microservices architecture is the vehicle in which they are creating rapid change. It is changing not only the technology but also how organizations evaluate business opportunities. On another level, it is altering the organization of talent, encouraging a culture of innovation, expanding the scope of individual responsibility and empowering smart people to take chances.

Agility and Speed are Paramount

Large firms such as Condé Nast and Gilt have always been able to handle large volumes of customer data. However, they see the future and are adapting their legacy systems to utilize microservices architecture. They want to get rid of dependencies and be able to test and deploy code changes quickly. Similar changes across enterprises are helping them become more adaptable to customer needs. It is also pushing them to adopt greater use of the cloud to operate with more agility and speed.

Microservices architecture has a similar mindset as other fast development methodologies like agile software. Fast-moving Web properties like Netflix are constantly looking for greater simplicity and the ability to make changes rapidly without going through numerous committees. The code is small, and every software engineer makes production changes on an ongoing basis.

Sea Change in Software Development

That is why microservices architecture is a natural fit for Web languages such as Node.js that work well in small components. You want to be able to move rapidly and integrate changes to applications quickly. Because microservices are self-contained, it is easy to make changes to a code base and replace or remove services. Instead of rewriting an entire module and trying to propagate across a massive legacy code base, you simply add on a microservice. Any other services that want to tap into the functionality of the additional service can do so without delay.

This is a sea change in how traditional software development takes place. The speed at which code changes in mobile apps and modern websites is way too fast for the legacy software development system. Constantly evolving apps require a new way of thinking.

Changes in Organizations

Back in the 1980s, the role of IT departments began to change with the debut of the personal computer. Every year, PCs became more powerful, and technology staff not only supported individual business functions, but they also had to maintain complete processes. Technology and data were moving closer to the center of the business.

By the 1990s, the IT department had become a critical system in every major company. If the computer systems were down for any length of time, it created bottlenecks for every department of the company.

Data-Driven Design

With microservices, the data inherent to each microservice can only be tapped through its API. The data in the microservice is private. This allows services to be loosely coupled so they can operate and evolve independently. It also creates two challenges: maintaining consistency across several services and implementing queries that grab information from multiple services. With data-driven design, you can experiment and create transactions that cover multiple services consistently.
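One common way to implement a query that spans services is simple API composition: an aggregator reads each service only through its API and merges the results. The sketch below is hypothetical; the service URLs and field names are assumptions for illustration.

```python
import requests

def get_order_summary(order_id: str) -> dict:
    # Each service's data stays private; we read it only through its API.
    order = requests.get(
        f"http://orders.internal/orders/{order_id}", timeout=1
    ).json()
    customer = requests.get(
        f"http://customers.internal/customers/{order['customer_id']}", timeout=1
    ).json()
    return {
        "order_id": order_id,
        "status": order["status"],
        "customer_name": customer["name"],
    }
```

The trade-off is extra network calls per query, which is why consistency and query design deserve attention up front.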

Unfortunately, many companies still maintain the old software engineering model. However, today they are under pressure to shorten the time to bring new Web and mobile applications to consumers. Speed has become the “coin of the realm.”

Changing Culture in Traditional IT Departments

The rise of microservices is changing a culture in IT that is deeply ingrained. There has always been a division between software development and operations. Now software development is integrated much more tightly with DevOps. Over many years, IT departments had established standards on which technologies they would run. Since these technologies represented serious investments in time and capital, they budgeted carefully for capacity, upgrades and security.

In the brave new world of microservices, department leaders must make significant changes in their organizations so developers play a bigger role in monitoring software throughout its lifecycle, from development through to production. Interestingly, a similar development happened decades ago, when data centers were so complex that only a select few IT engineers could operate all of the disparate functions. In many cases, the staff maintaining applications were the same people who built them.

Breaking Down Barriers

In effect, microservices are breaking down the barriers between the development of software and its operation. That means any firm considering implementing microservices on any substantial level needs to evaluate whether it is ready to operate with this new approach.

It does not mean that legacy systems are being disregarded for the new kid in town. In many cases, the traditional system is doing an excellent job for the organization, so changing it without a business case would be folly.

However, the larger trends of cloud computing, mobile device adoption, and low-cost bandwidth are forever changing the way consumers buy and interact with software applications. The pace of change is dizzying, and the need for speed in application development is greater than ever before.

A Quick Primer on Microservices

Microservices are a type of software architecture where large applications are made up of small, self-contained units working together through APIs that are not dependent on a specific language. Each service has a limited scope, concentrates on a specific task and is highly independent. This setup allows IT managers and developers to build systems in a modular way. In his book, “Building Microservices,” Sam Newman said microservices are small, focused components built to do a single thing very well.

Martin Fowler and James Lewis’s “Microservices: A Definition of This New Architectural Term” is one of the seminal publications on microservices. They describe some of the key characteristics of microservices as:

  • Componentization: Microservices are independent units that are easily replaced or upgraded. The units communicate through mechanisms such as web service requests or remote procedure calls.

  • Business capabilities: Legacy application development often splits teams into areas like the “server-side team” and the “database team.” Microservices development is organized around business capabilities, with each team responsible for a complete stack of functions, from the user experience through to project management.

  • Products rather than projects: Instead of focusing on a software project that is handed off upon completion, microservices teams treat applications as products they own. They establish an ongoing dialogue with the goal of continually matching the app to the business function.

  • Dumb pipes, smart endpoints: Each microservice contains its own logic and communicates over simple, lightweight channels; frequently used resources are easily cached (a brief sketch follows this list).

  • Decentralized governance: Teams build and share tools so that other teams can solve similar problems with them.
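As a rough illustration of the “dumb pipes, smart endpoints” and caching points above (all names, prices, and routes here are hypothetical), the domain logic and a simple cache live entirely inside the service, and the only thing on the wire is a plain HTTP request and a JSON response:

    # pricing_service.py -- hypothetical "smart endpoint" sketch: the logic and
    # caching live inside the service; the pipe is plain HTTP carrying JSON.
    from functools import lru_cache
    from flask import Flask, jsonify

    app = Flask(__name__)

    @lru_cache(maxsize=1024)
    def price_for(sku: str) -> float:
        # Stand-in for the service's own domain logic (a rules engine or
        # database lookup); frequently requested SKUs are served from the cache.
        return 9.99 if sku.startswith("A") else 19.99

    @app.route("/prices/<sku>")
    def get_price(sku):
        return jsonify({"sku": sku, "price": price_for(sku)})

    if __name__ == "__main__":
        app.run(port=5005)  # illustrative port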

History of Microservices

The phrase “Micro-Web-Services” was first used at a cloud computing conference by Dr. Peter Rodgers in 2005, while the term “microservices” debuted at a conference of software architects in the spring of 2011. More recently, they have gained popularity because they’re able to handle many of the changes in modern computing, such as:

  • Mobile devices

  • Web apps

  • Containerization of operating systems

  • Cheap RAM

  • Server utilization

  • Multi-core servers

  • 10 Gigabit Ethernet

The concept of microservices is not new. Google, Facebook and Amazon have employed this approach at some level for more than 10 years. A simple Google search, for example, calls on more than 70 microservices before you get the results page.

Also, other architectures have been developed that address some of the same issues microservices handle. One is Service-Oriented Architecture (SOA), which provides services to components over a network, with every service able to exchange data with any other service in the system. One commonly cited drawback is its difficulty handling asynchronous communication.

How Microservices Differ From Service-Oriented Architecture

Service-oriented architecture (SOA) is a software design where components deliver services through a network protocol. This approach gained steam between 2005 and 2007, but has since lost momentum to microservices. As microservices began to move to the forefront a few years ago, a few engineers called it “fine-grained SOA.” Still others said microservices do what SOA should have done in the first place.

SOA embodies a different way of thinking than microservices. SOA relies on the Web Services Description Language (WSDL), which defines service endpoints rigidly and is strongly typed, while microservices favor dumb pipes and smart endpoints. SOA is generally stateless; microservices are stateful and use object-oriented programming (OOP) structures that keep data and logic together.

Some of the difficulties with SOA include:

  • SOA is heavyweight and complex, with multiple processes that can reduce speed.

  • While SOA originally helped prevent vendor lock-in, it eventually wasn’t able to move with the trend toward democratization of IT.

  • Just as CORBA fell out of favor when early Internet innovations provided a better option to implement applications for the Web, SOA lost popularity when microservices offered a better way to incorporate web services.

Problems Microservices Solve

Larger organizations run into problems when monolithic architectures can’t be scaled, upgraded, or maintained easily as they grow over time. Microservices architecture is an answer to that problem. It is an architectural style in which complex tasks are broken down into small processes that operate independently and communicate through language-agnostic APIs.

Monolithic applications are made up of a user interface on the client, a server-side application, and a database. The server application processes HTTP requests, gets information from the database, and sends it to the browser. Microservices handle HTTP requests and responses through APIs and messaging, and they respond with JSON, XML, or HTML sent to the presentation components. Microservices proponents rebel against the enforced standards of architecture groups in large organizations but enthusiastically embrace open protocols and formats such as HTTP and Atom.
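For the messaging side of that picture, a minimal sketch might publish an event that other services consume asynchronously. This example assumes a RabbitMQ broker reachable on localhost and the pika client library; the queue name and payload are purely illustrative.

    # publish_order_event.py -- hypothetical async-messaging sketch using
    # RabbitMQ via the pika client; broker, queue, and payload are illustrative.
    import json
    import pika

    def publish_order_created(order_id: str) -> None:
        connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="order_events", durable=True)
        # Other microservices subscribe to this queue and react on their own
        # schedule, so the publisher never blocks waiting for them.
        channel.basic_publish(
            exchange="",
            routing_key="order_events",
            body=json.dumps({"event": "order_created", "order_id": order_id}),
        )
        connection.close()

    if __name__ == "__main__":
        publish_order_created("42")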

As applications get bigger, intricate dependencies and connections grow. Microservices let you split a system, whether it began as a monolith or as smaller units, into components. This enables horizontal scaling and makes separate components much easier to manage and maintain.

The Relationship of Microservices to DevOps

Incorporating new technology is just part of the challenge. Perhaps a greater obstacle is developing a new culture that encourages risk-taking and taking responsibility for an entire project “from cradle to crypt.” Developers used to legacy systems may experience culture shock when they are given more autonomy than ever before. Communicating clear expectations for accountability and performance of each team member is vital.

DevOps is critical in determining where and when microservices should be utilized. It’s an important decision because trying to combine microservices with heavy, monolithic legacy systems may not always work. Changes can’t be made fast enough. With microservices, services are constantly being developed and refined on-the-fly. DevOps must ensure updated components are put into production, working closely with internal stakeholders and suppliers to incorporate updates.

The Move Toward Simpler Applications

As DreamWorks’ Doug Sherman said on a panel at the Appsphere 15 Conference, the film-production company tried an SOA approach several years ago but ultimately found it counterproductive. Sherman’s view is that IT is moving toward simpler applications. At times, SOA seemed more complicated than it should be; microservices were seen as an easier solution than SOA, much like JSON was seen as simpler than XML and people viewed REST as simpler than SOAP. We are moving toward systems that are simpler to build, deploy and understand. While SOA was originally designed with that in mind, it ended up being more complex than needed.

Another panelist, Allan Naim, product manager at Google, agreed. He explained that SOA is really geared for enterprise systems because you need a service registry, a service repository and other components that are expensive to purchase and maintain. They are also closed off from each other. Microservices handle problems that SOA attempted to solve more than a decade ago, yet they are much more open.

How Microservices Differ Among Different Platforms

Microservices are a conceptual approach, and as such they are handled differently in each language. This is a strength of the architecture because developers can use the language they are most familiar with; even older languages can adopt microservices through structures suited to their own platforms. Here are some of the characteristics of microservices on different platforms:

Java

  • Avoids using Web Archive or Enterprise Archive files

  • Components are not auto-deployed. Instead, Docker containers or Amazon Machine Images are auto-deployed.

  • Uses fat JARs that can be run as standalone processes

PHP

REST-style PHP microservices have been deployed for several years now because they are:

  • Highly scalable at enterprise level

  • Easy to test rapidly

Python

  • Easy to create a Python service that acts as a front-end web service for microservices in other languages such as ASP or PHP (see the sketch after this list)

  • Lots of good frameworks to choose from, including Flask and Django

  • Important to get the API right for fast prototyping

  • Can use PyPy, Cython, C++ or Go if more speed or efficiency is required
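As a sketch of the front-end pattern mentioned above (the backend URL, routes, and names are illustrative assumptions), a small Flask service can define the public API and delegate the real work to a microservice written in another language:

    # frontend_service.py -- hypothetical Flask front end for a microservice
    # written in another language; the PHP backend URL below is illustrative.
    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)

    LEGACY_PHP_API = "http://inventory-php:8080"  # assumed backend, for illustration

    @app.route("/inventory/<sku>")
    def inventory(sku):
        # Define the public API here first (useful for fast prototyping), then
        # delegate the actual work to the backend service over HTTP.
        upstream = requests.get(f"{LEGACY_PHP_API}/items/{sku}", timeout=2)
        return jsonify(upstream.json()), upstream.status_code

    if __name__ == "__main__":
        app.run(port=5000)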

Node.js

Node.js is a natural fit for microservices because it was made for modern web applications. Its benefits include:

  • Takes advantage of JavaScript and Google’s high-performance, open-source V8 engine

  • Machine code is optimized dynamically during runtime

  • HTTP server processes are lightweight

  • Nonblocking, event-driven I/O

  • High-quality package management

  • Easy for developers to create packages

  • Highly scalable with asynchronous I/O end-to-end

.NET

In the early 2000s, .NET was one of the first platforms to create applications as services using the Simple Object Access Protocol (SOAP), a goal similar to that of modern microservices. Today, one of .NET’s strengths is its deep presence in enterprise installations.

Responding to a Changing Market

The shift to microservices is clear. The confluence of mobile computing, inexpensive hardware, cloud computing, and low-cost storage is driving the rush to this new approach. In fact, organizations don’t have much choice. Matt Miller’s article in The Wall Street Journal sounded the alarm: “Innovate or Die: The Rise of Microservices” explains that software has become the major differentiator among businesses in every industry. The monolithic programs common to many companies cannot change fast enough to adapt to the new realities and demands of a competitive marketplace.

Service-oriented architecture attempted to address some of these challenges but eventually failed to achieve liftoff. Microservices arrived on the scene just as these pressures were coming to a head; they are agile, resilient, and efficient, qualities many legacy systems lack. Companies like Netflix, PayPal, Airbnb, and Goldman Sachs have heeded the alarm and are moving forward with microservices at a rapid pace.

AppDynamics Monitoring Excels for Microservices; New Pricing Model Introduced

It’s no news that microservices are one of the top trends, if not the top trend, in application architectures today. The idea is to take large monolithic applications, which are brittle and difficult to change, and break them into smaller, manageable pieces, giving teams flexibility in deployment models and enabling the agile release and development cycles that today’s rapidly shifting digital businesses demand. Unfortunately, with this change, application and infrastructure management becomes more complex due to the sheer number of pieces and the technology changes involved, most often adding significantly more virtual machines and/or containers to handle the growing footprint of application instances.

Fortunately, this is just the kind of environment the AppDynamics Application Intelligence Platform is built for, delivering deep visibility across even the most complex, distributed, heterogeneous environments. We trace and monitor every business transaction from end to end — no matter how far apart those ends are, or how circuitous the path between them — including any and all API calls across any and all microservices tiers. Wherever there is an issue, the AppDynamics platform pinpoints it and steers the way to rapid resolution. The same data can also be used to analyze usage patterns and scaling requirements, and to gain visibility into infrastructure usage.

This is just the beginning of the microservices trend. With the rise of the Internet of Things, all manner of devices and services will be driven by microservices. The applications themselves will extend into the “Things,” driving even further growth over the next five years. Gartner predicts over 25 billion connected devices by 2020, with the majority in the utilities, manufacturing, and government sectors.

AppDynamics microservices pricing is based on the size of the Java Virtual Machine (JVM) instance; any JVM running with a maximum heap size of less than one gigabyte is considered a microservice.

We’re excited to help usher in this important technology, and to make it feasible and easy for enterprises to deploy AppDynamics Java microservices monitoring and analytics. For a more detailed perspective, see our post, Visualizing and tracking your microservices.