Advances In Mesh Technology Make It Easier for the Enterprise to Embrace Containers and Microservices

More enterprises are embracing containers and microservices, which bring along additional networking complexities. So it’s no surprise that service meshes are in the spotlight now. There have been substantial advances recently in service mesh technologies—including Istio 1.0, HashiCorp’s Consul 1.2.1, and Buoyant merging Conduit into Linkerd—and for good reason.

Some background: service meshes are pieces of infrastructure that facilitate service-to-service communication—the backbone of all modern applications. A service mesh allows for codifying more complex networking rules and behaviors such as a circuit breaker pattern. AppDev teams can start to rely on service mesh facilities, and rest assured their applications will perform in a consistent, code-defined manner.
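To make the circuit breaker pattern concrete: the sidecar proxies in a mesh implement it at the network layer, but the behavior itself is simple. Below is a minimal in-process sketch in Python (the class name and thresholds are illustrative, not any mesh's actual API):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: trips open after max_failures
    consecutive errors, then rejects calls until reset_timeout passes."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Circuit is open: fail fast instead of hammering the backend.
                raise RuntimeError("circuit open: call rejected")
            # Timeout elapsed: half-open, allow one trial call through.
            self.opened_at = None
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

When the wrapped call fails repeatedly, the breaker "opens" and rejects traffic outright, giving the failing service time to recover instead of piling on retries; a mesh sidecar applies the same logic to service-to-service calls without any application code.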

Endpoint Bloom

The more services and replicas you have, the more endpoints you have. And with the container and microservices boom, the number of endpoints is exploding. With the rise of Platform-as-a-Service offerings and container orchestrators, new terms like ingress and egress are becoming part of the AppDev team vernacular. As you go through your containerization journey, multiple questions will arise around the topic of connectivity. Application owners will have to define how and where their services are exposed.

The days of providing the networking team with a context/VIP to add to web infrastructure—such as over port 443—are fading. Today, AppDev teams are more likely to hand over a Kubernetes YAML to add to the Ingress controller, and then describe a behavior. Example: the shopping cart Pod needs to talk to the shopping cart validation Pod, which can only be accessed by the shopping cart, because the inventory is kept on another set of Redis Pods that can’t be exposed to the outside world.

You’re juggling all of this while navigating constraints set by defined and deployed Kubernetes networking. At this point, don’t be alarmed if you’re thinking, “Wow, I thought I was in AppDev—didn’t know I needed a CCNA to get my application deployed!”

The Rise of the Service Mesh

When navigating the “fog of system development,” it’s tricky to know all the moving pieces and connectivity options. With AppDev teams focusing mostly on feature development rather than connectivity, it’s very important to make sure all the services are discoverable to them. Investments in API management are the norm now, with teams registering and representing their services in an API gateway or documenting them in Swagger, for example.

But what about the underlying networking stack? Services might be discoverable, but are they available? Imagine a Venn diagram of AppDev vs. Sys Engineer vs. SRE: Who’s responsible for which task? And with multiple pieces of infrastructure to traverse, what would be a consistent way to describe networking patterns between services?

Service Mesh to the Rescue

Going back to the endpoint bloom, consistency and predictability are king. Over the past few years, service meshes have been maturing and gaining popularity. Here are some great places to learn more about them:

Service Mesh 101

In the Istio model, applications participate in a service mesh via a sidecar proxy (Envoy, in Istio’s case), while Istio itself provides the mesh.

Your First Mesh

DZone has a very well-written article about standing up your first Java application in Kubernetes to participate in an Istio-powered service mesh. The article goes into detail about deploying Istio itself in Kubernetes (in this case, Minikube). For an AppDev team, the new piece would be creating the all-important routing rules, which are deployed to Istio.

Which One of these Meshes?

The New Stack has a very good article comparing the pros and cons of the major service mesh providers. The post lays out the problem in granular format, and discusses which factors you should consider to determine if your organization is even ready for a service mesh.

Increasing Importance of AppDynamics

With the advent of the service mesh, barriers are falling and enabling services to communicate more consistently, especially in production environments.

If tweaks are needed on the routing rules—for example, a timeout—it’s best to have the ability to pinpoint which remote calls would make the most sense for this task. AppDynamics has the ability to examine service endpoints, which can provide much-needed data for these tweaks.

For the service mesh itself, AppDynamics in Kubernetes can even monitor the health of your applications deployed on a Kubernetes cluster.

With the rising velocity of new applications being created or broken into smaller pieces, AppDynamics can help make sure all of these components are humming at their optimal frequency.

Battle of the PaaS: Python Apps in the Cloud

In the early days of the web, web pages were static. Today, however, most websites you visit behave like applications you might find on your desktop: they offer dynamic content that changes on the fly. Users can interact with the app and see different information than another user would, all without leaving the page.

Python has been, and still is, a popular language to build a web application stack to organize and display that data. In this article, we will examine several of the most popular Python Platform-as-a-Service (PaaS) platforms and evaluate their pros and cons.

You have several options for choosing a web host:

  • Virtualized servers
  • Platform as a service (PaaS)
  • Infrastructure as a service (IaaS)
  • Bare-metal

Bare-metal, IaaS, and virtualized servers are similar in practice: you start with a Linux-based server, then install the system packages, web server, WSGI server, Python environment and database yourself.
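As an aside on the "WSGI server" layer above: every Python web framework ultimately boils down to serving a WSGI callable. A minimal, standard-library-only sketch (the message and port are arbitrary):

```python
def app(environ, start_response):
    """Smallest possible WSGI application: every Python web framework
    ultimately reduces to a callable with this signature."""
    body = b"Hello from the Python stack!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# To serve it locally with the standard library's development server
# (blocks until interrupted):
#
#     from wsgiref.simple_server import make_server
#     make_server("", 8000, app).serve_forever()
```

Frameworks like Flask and Django generate this callable for you; the hosts reviewed below run a production WSGI server in front of it.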

One of the benefits of PaaS is the ability to deploy a project at no or low cost. You do not have to concern yourself with configuring the operating system or server setup. This speeds up your ability to deploy as compared to other options. On the other hand, if you are interested in developing your knowledge about the Python stack, a traditional server will give you more in-depth understanding. You will also keep more money in your wallet as you scale and have much more control over your environment. With that in mind, let’s review some of the well-known Python hosts and examine their similarities and differences.

Google App Engine

Google App Engine is an easy way to deploy your web app without worrying about large amounts of data processing and heavy usage loads. It will run in a secure environment, regardless of the operating system or server location. You can get a feel for how rapidly you can deploy to the platform by downloading a simple Hello World file to your computer, and then testing the app with the development server that comes with the App Engine Software Development Kit.

Google App Engine offers a standard environment and flexible environment options. The standard environment utilizes custom-made Google containers that are wrapped around your code, and that run on Google infrastructure. The flexible environment is still in beta and, instead of Google containers, it employs Docker containers to wrap the code.

The flexible environment is built on Google Compute Engine. Your app will scale, and the load will balance automatically in response to fluctuations in demand. The runtime is based on Debian Jessie, and you can locate the source code for the runtime on GitHub.

Amazon Web Services

Amazon Web Services has continually improved offerings over the last several years. You can use boto3, the Amazon Web Services Software Development Kit for Python, to rapidly deploy to AWS, and then combine your script, library or Python web app with Amazon services such as Amazon EC2, Amazon DynamoDB, Amazon S3 and others.

Boto3 offers two APIs. The first is a client API that lets you map to HTTP API operations. Resource APIs give you resource objects and collections that allow you to perform actions and tap attributes. Both APIs generate classes on a dynamic basis powered by JSON models, giving you rapid updates and stable consistency among all of the supported AWS services.

Boto3 was designed from the beginning to provide support for both Python 2 and 3 (specifically, versions 2.6.5+, 2.7, 3.3 and 3.4). Boto3 also has “waiters” that look for status changes in Amazon Web Services resources automatically. For instance, you can launch an Amazon EC2 instance and employ a waiter to stand by until it reaches the running state. Waiters are available on both resource and client APIs.
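Conceptually, a waiter is just a polling loop around a status check. This standard-library sketch simulates the idea without making real AWS calls (the EC2-style states here are stand-ins, not the boto3 API):

```python
import time

def wait_until(check, target, interval=0.01, max_attempts=40):
    """Poll check() until it returns target, sleeping between attempts;
    raise TimeoutError if the state never arrives. This is the pattern
    boto3 waiters implement for AWS resources."""
    for _ in range(max_attempts):
        if check() == target:
            return
        time.sleep(interval)
    raise TimeoutError("state never reached %r" % (target,))

# Hypothetical stand-in for polling an EC2 instance's state:
states = iter(["pending", "pending", "running"])
current = {"state": "pending"}

def poll_state():
    # Each call advances the simulated instance one step closer to running.
    current["state"] = next(states, current["state"])
    return current["state"]
```

A real boto3 waiter does the same thing, with delays and attempt limits tuned per service.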


Heroku

Regardless of whether you use Flask, Django or another Python framework, Heroku lets you deploy and scale your app the way you want. Heroku uses git as the method for deploying apps. When you set up an application on Heroku, a git remote (usually named ‘heroku’) is associated with your application’s repository. Deploying the code is then a ‘git push’ to that remote, much as you would push to any other, with the ‘heroku’ command-line tool handling app management.

However, that is not the only way to deploy to Heroku. You can integrate with GitHub and associate each additional pull request with a new app. You can also tap the power of Dropbox Sync, or create and release apps through the Heroku API.

Once the platform receives your source, it begins building the app. All of the publishing and infrastructure is handled automatically.


PythonAnywhere

PythonAnywhere is a full Python environment that allows you to begin hosting your Python application for free. The basic level includes everything you need to host the code and your website without installing software or managing a server or Linux machine. It takes less than a minute to get started, and you can upgrade as you grow with paid plans that start as low as five dollars a month. This plan can easily accommodate a website that is getting around 10,000 hits every day. As the site grows in popularity, you can upgrade seamlessly to a bigger account.

There are installers available for a wide variety of frameworks, including Django, Bottle, web2py, Flask and most WSGI web frameworks. Custom and web developer accounts can host their dedicated domains at PythonAnywhere.

You can also develop your code in a web-based editor and store it on PythonAnywhere servers. It will preserve your session, so you can pick it up on another device where you left off from anywhere in the world. Your Python installation includes some popular libraries like Mechanize and BeautifulSoup.

Although you can get started at a low cost, you can be confident that the servers are powerful. All servers are hosted on Amazon EC2. Experiment with various simple programs at no charge to develop your concept. When you are ready to ramp up with heavier processing, you only invest in what you use, allowing you to tap into teraflops of power.


WebFaction

WebFaction is an inexpensive, Python-friendly web hosting provider. Python is available on all servers, and versions 2.5 through 3.5 come pre-installed. The default version you get when you run the ‘python’ command depends on the server; to find out which it is, use ‘python -V.’ If you want a particular Python version, run ‘pythonX.Y,’ with X and Y representing the version numbers. Python modules and packages are distributed automatically as Python Eggs, in archive form along with a setup file, or as source ‘.py’ files in a collection.
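From inside a script, the equivalent of ‘python -V’ is sys.version_info, which is handy on hosts where several interpreters are installed side by side. A small sketch:

```python
import sys

# The in-script equivalent of running `python -V` at the shell:
major, minor = sys.version_info[:2]
print("Running under Python %d.%d" % (major, minor))

# Guard a script that needs a specific interpreter line:
if sys.version_info < (3,):
    raise SystemExit("This script expects Python 3")
```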

All servers are pre-installed with CentOS 7. Servers are continually monitored for security and high performance. They are patched on the fly and backed up regularly (I expect this to be the case with any modern hosting provider, of course). You will get complete shell access to control databases and files.

Which Is the Right Host for You?

Choosing the right Python web host depends on your goals. If you want to get started without spending any money and have access to a variety of installers, including Flask and Django, PythonAnywhere is a good choice. The web-based editor gives you a quick way to develop ideas wherever you are without needing to launch your own development environment. Once your application is launched, you can easily upgrade down the road if it grows in popularity.

WebFaction is another good selection if you are operating with a slim budget. Every version of Python is pre-installed along with CentOS 7. It is a no muss, no fuss option that gets you up and running with full shell access.

Power users with some resources will more than likely turn to stalwarts such as Google App Engine and Amazon Web Services. Frameworks that run on GAE include Pyramid, Flask, web2py and Django. However, you are not limited to those. Every Python stack that supports WSGI can be utilized to build an app using a CGI adapter. The framework is uploaded alongside the application. You can also call Python third-party libraries.

Amazon Web Services allows you to combine your Python app with the powerful spectrum of Amazon services, including Amazon S3 and the Elastic Compute Cloud. Heroku seems to be a “fan favorite,” with some developers praising it for its flexibility and deployment method options. If you are experimenting with new apps, or just want to save money, go with PythonAnywhere or WebFaction. Serious developers with a budget should choose Heroku, Google App Engine or Amazon Web Services.

Battle of the PaaS: PHP Apps in the Cloud

Platform as a Service (PaaS) providers have grown rapidly over the last few years. Now you can choose from a number of robust services that can help you rapidly develop, deploy and manage your PHP application. To help you make sense of this crowded market, this article will examine the features, benefits and drawbacks of five of the most popular platforms right now: Heroku, Google App Engine, Microsoft Windows Azure, Amazon Web Services (AWS) and Engine Yard — and help you determine which option is best for you.


Heroku

Heroku is a web hosting company that began with Ruby on Rails apps and now handles PHP, Java, Clojure, Go, Scala and Node.js. The service started operations in 2007, making it one of the pioneering cloud platforms. Acquired by Salesforce in 2010, it is free for small applications. If you get more traffic, you can expand your account and scale your costs economically.

Although there are cheaper providers, Heroku is well-known and popular. But Heroku can become expensive quickly when you order several dynos. Dynos are Linux containers that handle a single command — any command that is part of the default environment or in the slug, which is a pre-prepared and compressed copy of the app and related dependencies. One way to save money is to invest in additional services rather than defaulting to adding more dynos.

Heroku is ideal for building applications quickly. Setup is painless — much of the operation is hidden from you by design. The whole idea is to make the process simple. Heroku customers include Code for America, Rapportive, TED, Facebook, Lyft, Urban Dictionary, GitHub and Mailchimp.

Google App Engine

Google App Engine is ideal for creating scalable web apps and backends for mobile apps. You get a number of services and APIs including Memcache, NoSQL datastores and user authentication. Your apps will be scaled automatically depending on the amount of traffic they get, so you only lay out cash for what you use.

You do not have to worry about provisioning or maintaining servers. Services such as application logging, health checks and load-balancing allow you to deploy your app quickly. App Engine is compatible with common development tools including Git, Jenkins and Maven.

Google App Engine’s ease of use can also be a weakness: many things are handled automatically, and if you want to customize the platform to your liking, you may be frustrated. Customers using Google App Engine include Gigya, News Limited, Mimiboard, Khan Academy, WebFilings, Best Buy, MetOffice, Getaround and CloudLock.

Microsoft Windows Azure

Like Amazon AWS, Windows Azure is really a combination of IaaS and PaaS. It supports PHP, .NET, Node.js, Ruby, Python and Java. You can utilize Visual Studio for building and deploying PHP applications. Options include an SQL Database, Blobs and Tables for persistent storage. You can administer your app with the command line or Windows Azure dashboard.

Because Azure is effectively both a PaaS and IaaS at the same time, you have a broad selection of components you can assemble for a custom solution, giving you lots of control over the process. On the other hand, Azure has a stripped-down administrative portal that may seem too sparse to some developers.

There are no upfront costs to use Windows Azure. You pay for only what you use, and there are no termination fees. Azure has been used by companies such as BMW, easyJet, HarperCollins, TalkTalk, Telenor, Toyota, Avanade, NBC Sports and Aviva.

Amazon Web Services

Although Amazon Web Services is better known as an Infrastructure as a Service (IaaS), it offers many of the features available on a PaaS. You can utilize the services available in Amazon AWS without resorting to building and maintaining application servers on your own. Because the AWS server is a raw OS, you can implement any language you choose including PHP, Ruby, Python and other languages. You can tap the power of Amazon Elastic Beanstalk for autoscaling, application health monitoring and automatic load-balancing.

You can use the AWS Software Development Kit for PHP library, documentation and code samples. At the AWS PHP Developer Center, you’ll also discover:

  • How to deploy PHP apps on AWS Elastic Beanstalk and AWS OpsWorks.
  • Access to white papers created by the AWS team on an array of technical topics including economics, security and architecture.
  • How to connect with other developers via GitHub, the PHP Developers Blog, Community Forum and Twitter.

One great advantage is you can get started on AWS for free to give you hands-on experience. The Free Tier offers 12 months of service at no charge. You can employ any of the services and products within specific usage limits. Featured products include Amazon EC2 compute capacity, Amazon S3 storage infrastructure, Amazon RDS relational database service, AWS IoT for connecting devices to the cloud and Amazon EC2 Container Registry used to store and retrieve Docker images. One of the drawbacks of Amazon AWS is that you may need to handle more management than other PaaS providers. The AWS client list has included GE, Pinterest, Netflix, Pfizer and Nasdaq.

Engine Yard

Engine Yard is for developers who are creating Node.js, Ruby on Rails and PHP applications and want the power of the cloud without the hassle of operations management. Many of the services are provided on top of Amazon AWS; Engine Yard itself runs on Amazon. That’s why its strengths are management and orchestration more so than providing a deep bench of components. With Engine Yard, you can manage snapshots, administer databases, manage clusters, perform backups and do load-balancing. Engine Yard’s advantages include dedicated instances, lots of control over virtual machine instances and integration with private and public Git repositories. It is considered by some to be a “heavier” PaaS than Heroku, meaning they believe it should be used for more heavy-duty, serious applications.

One reviewer said that Heroku is nice for setting up apps quickly, but serious apps need Engine Yard. Not everyone agrees, however. Another reviewer felt Heroku was far better than Engine Yard, saying that you can install a gem and have your app deployed in just a few minutes. Pricing for Engine Yard is a pay-as-you-go model. There are premium options along with standard setups. Pricing ranges from $25/month for a solo instance to $150/month for a standard instance and $300/month for a premium instance. Engine Yard accounts include Appboy, Vitrue, TST Media, RepairPal, MTV, Badgeville and Estée Lauder.

Review and Recommendations

So what have we learned?

  • Heroku is easy to manage, well-known, simple to use and is great for building apps rapidly. It can get pricey, so you need to manage dynos carefully.
  • Google App Engine is well-suited for managing back-end operations of mobile apps and creating web apps that can scale. Although simple to use, it is not easy to customize.
  • Azure has gained market share quickly by providing lots of components and user control. Its hybrid IaaS/PaaS personality allows both Windows and Linux users to find a solution on the platform.
  • Amazon Web Services is a proven system that has recently cut prices due to competition from Azure and others. There are many support and educational resources to tap into including the Developer Center, a blog for programmers and an online forum for community members.
  • Engine Yard has excellent management and orchestration tools, as well as great support and robust scaling options. It can be harder to master than other platforms, but it is excellent for those new to PaaS platforms who need more support to get up and running.

Adoption of cloud technology will continue to grow as organizations shift apps from internal data centers to the cloud to cut expenses and become more nimble. These five PaaS platforms will help you get your PHP app up and running quickly to take advantage of the on-going move to the cloud.

Choosing the Right PaaS

Choosing the right PaaS comes down to evaluating your cloud goals and the needs of your developers. Start with your target language, in this case, PHP. Every layer of the LAMP stack has more depth than ever before, and most PaaS providers are language agnostic, even if they initially supported only a single language.

Also, consider if you will benefit from a PaaS that functions as a quasi-IaaS/PaaS. Hybrid models provide several advantages. For example, you may have a database that is too large to handle in the cloud and is better suited to be located on-site. A hybrid approach lets you access local data from the cloud quickly. One disadvantage of this setup is having to worry about configuring an abstraction layer, which means your team needs the training and know-how to maintain it.

Other considerations are: How will you achieve scalability? Will you be able to move apps quickly away from your PaaS if needed? PaaS does not always mean development in the cloud. The advantage is simple deployment of applications, which saves you time, money and hassle with your next PHP web application.


Battle of the PaaS: Node.js Apps in the Cloud

No matter why you came to Node.js to develop your app (blasting past slow I/O, free data exchange, moving out of the Flash sandbox, etc.), what you want to know now is the best place to host it. The answer to that question, though, really depends on what you want to do once you get to the cloud, which circles back to your original intent for the app.

Unless you have your own IT department, PaaS beats self-hosting in a variety of ways. Your app will be able to handle surges in traffic inside data centers that are high-powered and geographically distributed. PaaS comes pre-built with all of the programming languages, frameworks, libraries, services and tools that your app is going to need as it grows and evolves. Before we get there, though, we should review some of the issues around how a typical Node.js app is deployed.

Getting Ready for Deployment

As you develop in Node.js, you’ve probably become familiar with the active and enthusiastic community of JavaScript developers that has already faced many of your issues. Although there’s plenty of free support on the development end of Node.js, there’s nowhere near that amount when you get to deployment. Each hosting environment tends to offer its own advice on the right way to deploy.

Many developers use tools like the Node Package Manager to get their packages ready for deployment. On the other hand, if you end up deciding on a PaaS like Heroku, go directly to the PaaS deployment support site (for example, take a look at the Heroku Dev Center) to get step-by-step instructions for getting your app up and running, along with any advanced features you may need to access.

Top PaaS Destinations for Your Apps

1. Heroku

Heroku has a well-deserved reputation for making deployment easy, as you may have seen if you went to the above link to their dev center. It was built by Ruby developers for Ruby developers, but this has proven to be just as useful for Node.js apps. Although a good choice for beginning developers, Heroku has also been chosen for its simplicity in hosting larger commercial projects like Faces of NY Fashion Week and National VIP.


Pros:

  • 24-hour monitoring. You have professional status monitoring, including a frequently updated Status Site.
  • Low cost to start. You can use Heroku for free at the entry level, with 512 MB RAM and one web and one worker dyno.
  • Portability. It uses standard tools, so it is easy to pack up and move to another host if necessary.
  • Integration. You can use Heroku with several third parties, such as MongoDB and Redis.
  • Popularity. Heroku lists Macy’s and MalwareBytes among its users, as well as several smaller companies. Its popularity also means that there’s a broad range of supported plugins to choose from.


Cons:

  • Lack of control. The simplicity comes at the cost of freedom to select the precise configurations such as hardware, OS, and firewall.
  • Steep price jump. When you are ready to move up to the professional level of support, the costs can be much higher than the competition.
  • Limited performance. The fact that it was designed for Ruby on Rails means that Node.js apps can show performance strains.

Best for:

Heroku is the best choice for beginner Node.js developers and those who want to get online fast with ample support. Many open-source advocates around the world swear by Heroku. It is pretty common to deploy to Heroku and, once the app begins to get serious traffic, migrate to a PaaS with more freedom.

2. Modulus

Modulus has been likened to a “bicycle with a jet engine attached to it.” Its founders were developers originally trying to build a game in Node.js who were frustrated with the lack of Node.js hosting options, so they built their own premier Node.js hosting platform. Modulus has become popular among startups such as Habitat and iChamp.


Pros:

  • Excellent support. Modulus understands that building a loyal customer base takes time and dedication. There are several ways to contact support, and user reviews consider them to be very helpful.
  • Automatic Scaling. For those of you who mainly want to focus on the bare-bones building process, Modulus provides auto-scaling – one less thing to worry about during the day-to-day management of your app. For those who like a little more control, this feature is entirely optional.
  • Simplicity. Modulus is incredibly easy to use and is, therefore, suitable for absolute beginners. They have an app of their own which allows users to track statistics and manage their app on-the-go.


Cons:

  • Price. Unlike other PaaS solutions, Modulus has no free tier. At the very minimum (1 GB file storage, 64 MB database, one server) you will still be paying $7.20 per month. However, the higher data bands are not overpriced, and there is a free trial available (although it does not last long).
  • Smaller user base for support.

Best for:

Modulus is another good PaaS for start-ups and beginning developers. They have excellent customer support, a simple interface, and very few issues, which would make them a great choice to those of us who are only dipping our feet into the PaaS world – if it were not for the sheer cost of their services. A big draw of using a PaaS is that you are usually spending less money on hosting, and yet, with no free tier, Modulus is the most expensive option featured on this list. It is up to you whether you think the significant ‘pros’ of using Modulus are worth it.

3. Microsoft Azure

Many developers tend to shy away from Microsoft due to portability issues, but Azure is a solid PaaS offering with plenty of functionality. Azure’s site points out that more than 66 percent of Fortune 500 companies (and 10,000 new customers per week) rely on its 22 regional data centers for app hosting.


Pros:

  • Pay-as-you-use pricing. Microsoft Azure offers three different payment plans, each one tailored to the user’s needs. The basic app service is free.
  • Integration. Microsoft Azure works with other Microsoft services such as Visual Studio and WebMatrix.
  • Security. Microsoft Azure takes security very seriously. It uses penetration testing and even offers that same service to its customers. It also uses in-house security, instead of relying on Amazon Web Services.
  • Scale and performance. Microsoft Azure works with a specific set of runtime libraries, which creates excellent scaling and performance. However, this does create a risk of lock-in.


Cons:

  • Dealing with Windows. You’ll need to run npm install for all dependencies on the target platform, since any C code must be compiled separately for Linux, Mac or Windows; you can’t transfer a precompiled binary from one OS/architecture to another.
  • Risk of ‘lock-in.’ Microsoft Azure requires tailoring your code to the PaaS, which means that migration to another PaaS can be difficult.
  • Poor support. Despite their popularity, Microsoft Azure is lacking in the support department. Users report that getting hold of support was challenging, and the documentation provided is over two years old.

Best for:

Microsoft Azure is best for large-scale projects. It has advantages in terms of pricing, security, and functionality. However, it comes at the price of being difficult to migrate in from other OS environments and out to other PaaS options. You might be concerned about the level of support if you are a developer new to the Node.js environment.

4. Linode

Linux users have been significant supporters of Linode for more than a decade. It has seen a good deal of upgrades and development in the past couple of years, so it is a reliable choice. Linode features a hub of high-performance SSD Linux servers to cover variable infrastructure requirements. They now offer a 40-gigabit network with Intel E5 processors and support for IPv6.


Pros:

  • Active development. Linode recently added features like Longview server monitoring. Its capabilities are evolving rapidly.
  • Linux user base. The community of Node.js developers is supported by a wider community of Linux users who can troubleshoot most issues you encounter.
  • Free upgrades. The base level has been upgraded to offer 1GB ram, 48GB storage, and 2TB transfer. Long time users will see their capabilities grow with regular upgrades.


Cons:

  • Server attacks. Last Christmas, Linode was plagued with DDoS attacks. While it was not their fault, their reputation suffered for it. Linode managers blamed the size of the data center as one reason it is an attractive target.
  • No control panel. You have to configure your own virtual hosting. You will not be able to run CPanel or Plesk, so you have to be pretty comfortable with command line controls.

Best for:

Linode devotees tend to be pretty enthusiastic about it. If you are an experienced developer and want the support of a robust Linux community that goes beyond development, this is an excellent destination for your apps. Be aware that they may continue to be a target for hackers, but you can enjoy watching the latest technologies evolve on their servers.

Concluding Checklist

If you are a beginning developer or a startup: go with Heroku or Modulus. If cost or portability is a concern, start with Heroku.

If you are working in a team on a larger, collaborative project, particularly on Windows machines, Microsoft Azure is extensive enough to handle just about anything you can create.

If you are an advanced Linux aficionado with a deep interest in testing out the bounds of new technology as it appears, go with Linode.

AppDynamics Partners with OpenShift by Red Hat to Empower DevOps

We’re proud to partner with OpenShift by Red Hat to help monitor their open-source platform-as-a-service (PaaS). Together we make it easier to scale into the cloud. The integration helps foster DevOps by increasing the visibility and collaboration between the typically fragmented development and operations teams throughout the product lifecycle. We caught up with Chris Morgan, Technical Director of Partner Ecosystem at Red Hat, to discuss all the ways Agile and rapid-release cycles have changed development and sped up innovation.

Morgan describes these new DevOps tools as driving innovation and empowering developers by cultivating a constant feedback loop and providing end-to-end visibility while helping scale applications.


“We have a great partner that’s able to provide [APM] to enhance the platform and make it more desirable to developers and for our customers. Ease of use and deployment is what everyone wants.”

“Using AppDynamics, we can monitor the existing application and understand how best it’s performing and then re-architect it so it can take advantage of the things that platform-as-a-service has to offer and you move to OpenShift.”

AppDynamics is excited to announce that we are available in the OpenShift Marketplace, making it easier than ever to add application performance monitoring to OpenShift-based applications.

AppDynamics in the OpenShift Marketplace


Monitoring Apps on the Cloud Foundry PaaS

At AppDynamics, we pride ourselves on making it easier to monitor complex applications. This is why we are excited to announce our partnership with Pivotal to make it easier to deploy built-in application performance monitoring to the cloud.


Getting started with Pivotal’s Cloud Foundry Web Service

Cloud Foundry is an open platform as a service, developed and operated by Pivotal. You can deploy applications to the hosted Pivotal Web Services (much as you host apps on Heroku), or you can run your own Cloud Foundry PaaS on-premise using Pivotal CF. Naturally, Cloud Foundry is an open platform used and operated by many companies and service providers.

1) Sign up for a Pivotal CF account and AppDynamics Pro SaaS account

In the future, Pivotal Web Services will include the AppDynamics SaaS APM services, so you’ll only need to sign up for Pivotal Web Services and it will automatically create an AppDynamics account.

2) Download the Cloud Foundry CLI (Command Line Interface)

Pivotal Web Services has both a web-based GUI and a full-featured, Linux-style command line interface (CLI). Once you have a PWS account, you can download the CLI for OS X, Linux, or Windows from the “Tools” tab in the PWS dashboard.

Pivotal Web Services CLI

3) Sign in with your Pivotal credentials

Using the CLI, log in to your Pivotal Web Services account. Remember to preface all commands given to Cloud Foundry with “cf”. Individual Cloud Foundry PaaS clouds are identified by their API endpoint. The system will automatically target your default org (you can change this later) and ask you to select a space (a space is similar to a project or folder where you can keep a collection of apps).

$ cf login

Cloud Foundry CLI 

Monitoring Cloud Foundry apps on Pivotal Web Services

Cloud Foundry uses a flexible approach called buildpacks to dynamically assemble and configure a complete runtime environment for executing a particular class of applications. Rather than specifying how to run applications, your developers can rely on buildpacks to detect, download, and configure the appropriate runtimes, containers, and libraries. The AppDynamics agent is built into the Java buildpack for easy instrumentation, so if you have AppDynamics monitoring running, the Cloud Foundry DEA will auto-detect the service and enable the agent in the buildpack. If you start AppDynamics monitoring for an app that is already running, just restart the app and the DEA will auto-detect the new service.

1) Clone the Spring Trader demo application

The sample Spring Trader app is provided by Pivotal as a demonstration; we’ll use it to show how monitoring works. First, git clone the app from the GitHub repository.

$ git clone

2) Create a user provided service to auto-discover the AppDynamics agent

$ cf create-user-provided-service demo-app-dynamics-agent -p "host-name,port,ssl-enabled,account-name,account-access-key"

Cloud Foundry CLI

Find out more about deploying on PWS in the Java buildpack docs.
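The fields in the -p string correspond to your AppDynamics controller credentials. As a purely illustrative sketch (the host, account, and key below are made-up placeholder values, not real credentials), you can assemble the string in a shell variable and print the full command before running it:

```shell
# Hypothetical AppDynamics controller credentials -- substitute your own.
APPD_HOST="example.saas.appdynamics.com"
APPD_PORT="443"
APPD_SSL="true"
APPD_ACCOUNT="customer1"
APPD_KEY="abc123-access-key"

# Assemble the comma-separated credential string the service expects.
CREDS="${APPD_HOST},${APPD_PORT},${APPD_SSL},${APPD_ACCOUNT},${APPD_KEY}"
echo "cf create-user-provided-service demo-app-dynamics-agent -p \"${CREDS}\""
```

Echoing the command first lets you eyeball the credential string for typos before actually creating the service.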

3) Use the Pivotal Web Services add-on marketplace to add cloud-based AMQP and PostgreSQL service instances

$ cf create-service elephantsql turtle demo-db

$ cf create-service cloudamqp lemur demo-amqp

Cloud Foundry CLI

4) Bind the PostgreSQL, AMQP, and AppDynamics services to the app

$ git clone

$ cd rabbitmq-cloudfoundry-samples/spring

$ mvn package

$ cf bind-service demo-app demo-app-dynamics-agent

$ cf bind-service demo-app demo-amqp

$ cf bind-service demo-app demo-db

Cloud Foundry CLI

5) Push the app to production using the Cloud Foundry CLI (Command Line Interface)

$ cf push demo-app -i 1 -m 512M -n demo-app -p target/rabbitmq-spring-1.0-SNAPSHOT.war

Cloud Foundry CLI

Spring AMQP Stocks Demo App

Spring Trader

Pivotal Web Services Console

Pivotal PaaS CloudFoundry



Production monitoring with AppDynamics Pro

Monitor your critical cloud-based applications with AppDynamics Pro for code level visibility into application performance problems.

AppD Dashboard

Pivotal is the proud sponsor of Spring and the related open-source JVM technologies Groovy and Grails. Spring helps development teams build simple, portable, fast, and flexible JVM-based systems and applications, and it is the most popular application development framework for enterprise Java. The AppDynamics Java agent supports the latest Spring framework and Groovy natively. Monitor the entire Pivotal stack, including tc Server, Web Server, Greenplum, RabbitMQ, and the popular Spring framework:



Take five minutes to get complete visibility into the performance of your production applications with AppDynamics today.


Bootstrapping DropWizard apps with AppDynamics on OpenShift by Red Hat

Getting started with DropWizard, OpenShift, and AppDynamics

In this blog post, I’ll show you how to deploy a DropWizard-based application on OpenShift by Red Hat and monitor it with AppDynamics.

DropWizard is a high-performance Java framework for building RESTful web services. It is built by the smart folks at Yammer and is available as an open-source project on GitHub. The easiest way to get started with DropWizard is with the example application. The DropWizard example application was developed to, as its name implies, provide examples of some of the features present in DropWizard.


OpenShift can be used to deploy any kind of application with the DIY (do it yourself) cartridge. To get started, log in to OpenShift and create an application using the DIY cartridge.

With the official OpenShift quick start guide for AppDynamics, getting started with AppDynamics on OpenShift couldn’t be easier.

1) Sign up for an account on OpenShift by Red Hat

2) Set up the Red Hat client tools on your local machine

$ gem install rhc
$ rhc setup

3) Create a Do It Yourself application on OpenShift

$ rhc app create appdynamicsdemo diy-0.1

Getting started is as easy as creating an application from an existing git repository:

DIY Cartridge

$ rhc app create appdynamicsdemo diy-0.1 --from-code

Application Options
Domain: appddemo
Cartridges: diy-0.1
Source Code:
Gear Size: default
Scaling: no

Creating application 'appdynamicsdemo' ... done
Waiting for your DNS name to be available ... done

Cloning into 'appdynamicsdemo'...
Your application 'appdynamicsdemo' is now available.

SSH to:
Git remote: ssh://

Run ‘rhc show-app appdynamicsdemo’ for more details about your app.

With the OpenShift Do-It-Yourself container, you can easily run any application by adding a few action hooks. To make DropWizard work on OpenShift, we need to create three action hooks for building, deploying, and starting the application. Action hooks are simply scripts that are run at different points during deployment. To get started, create a .openshift/action_hooks directory:

mkdir -p .openshift/action_hooks
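As a sketch of what those hooks might look like for the sample application (build, deploy, and start are the standard OpenShift hook names; the commands mirror the snippets in this post, and the exact paths are assumptions to adapt to your repository):

```shell
mkdir -p .openshift/action_hooks

# build: compile and package the project with Maven after checkout.
cat > .openshift/action_hooks/build <<'EOF'
#!/bin/bash
mvn -s $OPENSHIFT_REPO_DIR/.openshift/settings.xml -q package
EOF

# deploy: substitute the DIY container's IP and port into the config.
cat > .openshift/action_hooks/deploy <<'EOF'
#!/bin/bash
cd $OPENSHIFT_REPO_DIR
sed -i 's/@OPENSHIFT_DIY_IP@/'"$OPENSHIFT_DIY_IP"'/g' example.yml
sed -i 's/@OPENSHIFT_DIY_PORT@/'"$OPENSHIFT_DIY_PORT"'/g' example.yml
EOF

# start: launch the DropWizard service in the background.
cat > .openshift/action_hooks/start <<'EOF'
#!/bin/bash
cd $OPENSHIFT_REPO_DIR
java -jar target/dropwizard-example-0.7.0-SNAPSHOT.jar server example.yml &
EOF

# Hooks must be executable, or OpenShift will skip them.
chmod +x .openshift/action_hooks/build .openshift/action_hooks/deploy .openshift/action_hooks/start
```

The single-quoted heredoc delimiters keep the `$OPENSHIFT_*` variables literal in the scripts, so they are expanded on the gear at deploy time rather than on your workstation.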

Here is the example for the above sample application:

When checking out the repository, use Maven to download the project dependencies and package the project for production from source code:



mvn -s $OPENSHIFT_REPO_DIR/.openshift/settings.xml -q package

When deploying the code, you need to replace the IP address and port for the DIY container. The properties are made available as environment variables:



sed -i 's/@OPENSHIFT_DIY_IP@/'"$OPENSHIFT_DIY_IP"'/g' example.yml
sed -i 's/@OPENSHIFT_DIY_PORT@/'"$OPENSHIFT_DIY_PORT"'/g' example.yml
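You can dry-run that substitution locally before pushing. A self-contained sketch (the two-line http section below is a made-up stand-in for the real example.yml, the IP and port are faked, and GNU sed's -i is assumed, as on the OpenShift gears):

```shell
# Fake the OpenShift environment variables for a local dry run.
export OPENSHIFT_DIY_IP="127.0.0.1"
export OPENSHIFT_DIY_PORT="8080"

# A minimal stand-in for example.yml containing the template tokens.
cat > example.yml <<'EOF'
http:
  bindHost: @OPENSHIFT_DIY_IP@
  port: @OPENSHIFT_DIY_PORT@
EOF

# The same substitution the deploy hook performs on the gear.
sed -i 's/@OPENSHIFT_DIY_IP@/'"$OPENSHIFT_DIY_IP"'/g' example.yml
sed -i 's/@OPENSHIFT_DIY_PORT@/'"$OPENSHIFT_DIY_PORT"'/g' example.yml

cat example.yml
```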

Let’s recap some of the smart decisions we have made so far:

  • Leverage OpenShift platform as a service (PaaS) for managing the infrastructure
  • Use DropWizard as a solid foundation for our Java application
  • Monitor the application performance with AppDynamics Pro

With a solid Java foundation we are prepared to build our new application. Next, try adding another machine or dive into the DropWizard documentation.

Combining DropWizard, OpenShift, and AppDynamics

AppDynamics allows you to instrument any Java application by simply adding the AppDynamics agent to the JVM. Sign up for an AppDynamics Pro self-service account, then log in using the account details in your email titled “Welcome to your AppDynamics Pro SaaS Trial” or the account details you entered during an on-premise installation.

The last step to combine the power of OpenShift and DropWizard is to instrument the app with AppDynamics. Simply update your AppDynamics credentials in the Java agent’s AppServerAgent/conf/controller-info.xml configuration file.
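If you prefer to script that edit, something like the following works. This is only a sketch: the XML here is a minimal stand-in for the real controller-info.xml, the host value is a made-up placeholder, and you should check the element names against your agent version (GNU sed assumed):

```shell
# Minimal stand-in for the agent's AppServerAgent/conf/controller-info.xml.
mkdir -p AppServerAgent/conf
cat > AppServerAgent/conf/controller-info.xml <<'EOF'
<controller-info>
  <controller-host></controller-host>
  <controller-port></controller-port>
  <account-name></account-name>
  <account-access-key></account-access-key>
</controller-info>
EOF

# Fill in hypothetical controller details from your welcome email.
sed -i \
  -e 's|<controller-host></controller-host>|<controller-host>example.saas.appdynamics.com</controller-host>|' \
  -e 's|<controller-port></controller-port>|<controller-port>443</controller-port>|' \
  AppServerAgent/conf/controller-info.xml

cat AppServerAgent/conf/controller-info.xml
```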

Finally, to start the application, we need to run any database migrations and add the AppDynamics Java agent to the startup command:



java -jar target/dropwizard-example-0.7.0-SNAPSHOT.jar db migrate example.yml

java -javaagent:${OPENSHIFT_REPO_DIR}AppServerAgent/javaagent.jar \
     -jar ${OPENSHIFT_REPO_DIR}target/dropwizard-example-0.7.0-SNAPSHOT.jar \
     server example.yml > ${OPENSHIFT_DIY_LOG_DIR}/helloworld.log &

OpenShift App

Additional resources on running DropWizard on OpenShift:

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.

Monitoring Java Applications with AppDynamics on OpenShift by Red Hat

At AppDynamics, we are all about making it easy to monitor complex applications. That is why we are excited to announce our partnership with OpenShift by Red Hat to make it easier than ever before to deploy to the cloud with application performance monitoring built in.

Getting started with OpenShift

OpenShift is Red Hat’s Platform-as-a-Service (PaaS) that allows developers to quickly develop, host, and scale applications in a cloud environment. With OpenShift you have a choice of offerings, including online, on-premise, and open source project options.

OpenShift Online is Red Hat’s public cloud application development and hosting platform that automates the provisioning, management and scaling of applications so that you can focus on writing the code for your business, startup, or next big idea.

RedHat OpenShift

OpenShift is a platform as a service (PaaS) by Red Hat that is ideal for deploying large distributed applications. With the official OpenShift quick start guide for AppDynamics, getting started with AppDynamics on OpenShift couldn’t be easier.

1) Sign up for a Red Hat OpenShift account

2) Set up the Red Hat client tools on your local machine

$ gem install rhc
$ rhc setup

3) Create a JBoss application on OpenShift

$ rhc app create appdynamicsdemo jbossews-2.0 --from-code

AppDynamics @ OpenShift

Get started today with the AppDynamics OpenShift getting started guide.

Production monitoring with AppDynamics Pro

Monitor your critical cloud-based applications with AppDynamics Pro for code level visibility into application performance problems.

OpenShift App

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.

Cloud Migration Tips #2: We Should Use the Cloud Because…

Welcome back to my blog series on deploying applications to the cloud.

What’s the point of deploying an application to the cloud versus just hosting it in your own data center? Is it really a good idea? Will it save you money? Will it work better? Will it cause new deployment and management problems? How do you monitor it?


These are all basic questions you should ask yourself before deciding IF your new or existing application will end up in a cloud environment.

The answers might be different for each application supporting your business. Cloud is really a set of architectural patterns available to help you solve business problems using technology. If you’re considering the cloud, you’d better have a business problem that you need to solve.

Here are a few business problems that would make me consider a cloud implementation for my application(s):

  • We’re out of space in our data center and most of our applications are used via the internet–should we build another data center or move some applications to public cloud providers?
  • Our new mobile application will need to scale rapidly as it becomes more popular–we have to be able to scale as needed so our customers have a good user experience.
  • We need to accelerate our time to market and make our business more agile–we don’t have time to wait for IT and all of our productivity sapping processes.

No matter what your business reasons are, you need to come up with quantifiable, measurable success criteria so that you can prove out the benefits (or failure) of your cloud computing initiative. This implies you are already measuring something BEFORE you move to the cloud, so you can compare metrics before and after. Here are some example KPIs that might be applicable:

  • Time to deliver requested environment to developers
  • Number of application impact incidents
  • Infrastructure cost per application
  • Time to scale / Cost to scale Application
  • Transaction throughput
  • SLA (yeah, you really should have one of these)

So You’ve Decided To Go For It

You’ve got your business justification nailed down and decided you really do need a cloud based application. Great! If this is a brand new application you can design it from the ground up and just deploy it, right? No!!! Remember, you need to monitor and manage this application if you stand any chance of providing a good user experience over the long haul.

“My cloud provider has all the monitoring and management tools I need.” – Wrong! Your cloud provider has basic monitoring tools that show you infrastructure metrics (CPU usage, memory usage, I/O usage, etc.). These monitoring tools don’t tell you anything about your application. Here’s what you need to know about your cloud application (at a minimum):

  • Which application nodes are in use at any given time. (Dynamic scaling, provisioning, de-provisioning will change this picture at any given time)
  • Application calls to external services, with response times and error rates. (External service calls are performance killers for cloud applications and drive up cost, as most providers charge for network traffic leaving their cloud.)
  • The response time and errors of all of your users’ business transactions. (This applies to any application architecture, but cloud deployments can experience greater variance due to factors outside the application owner’s control: network congestion, regional provider issues, etc.)
  • When a problem occurs – full application call stack for code analysis. (Applies to any application architecture)
  • Host level KPIs correlated with all of the application activity. (Really important in the cloud due to host virtualization, shared resources, and multiple sizing options when you select a host to deploy. Select the wrong size by mistake and you just limited your max application performance)
  • Historic baselines for everything so you know what normal behavior looks like. (Critical to identifying problems regardless of architecture)

If you’re deploying a new application, you should have a really good idea of any external application dependencies (like calling a payment gateway to process credit card orders). If you are moving an existing application, there is more work that needs to be done up front. In particular, you need to really understand your existing application dependencies. Is there a service or backend database that your application relies upon that you’re not planning to move with the application? If so, you can really screw up the entire cloud implementation by making a bunch of calls to a component that lives outside of your chosen cloud environment.


Modern applications have many external dependencies. You absolutely MUST know what they are before moving to the cloud.

If you’re moving an existing application, you had better deploy a tool that can dynamically detect and show application flow maps. I’m not talking about those agentless tools that scan your hosts every day looking for network connections (those usually miss all of the short-lived service calls). I mean a solution that will give you the entire picture, regardless of persistent and transient connection methodologies.

Since you need to monitor your existing environment anyway you might as well collect performance data and save it so that you have a good point of comparison for your “before and after” application environment (We’ll discuss this item more in a future blog post).

There are a ton of considerations when you choose to implement your application using cloud computing architecture patterns. In my next post, I’ll go into more detail about the planning phase. Having all your ducks in a row before you begin the migration is critical to success.

Cloud Migration won’t happen overnight

There is a massive difference between migrating some code to the cloud and migrating an entire application to the cloud. Yes, the heart of any application is indeed its codebase, but code can’t execute without data, and therein lies the biggest challenge of any cloud migration. “No problem,” you say. “We can just move our database to the cloud and our code will be able to access it.” Sounds great, except that most storage services in the cloud tend to run on cheap disk, which is often virtualized and shared across several tenants. It’s no secret that databases store and access data from disk; the problem these days is that disks have gotten bigger and cheaper, but they haven’t gotten much faster. The assumption that the cloud will offer your application a better quality of service (QoS) at a cheaper price is therefore not always true when you include application tiers that manage data. Your code might run faster with cheaper, elastic computing power, but it can only go as fast as the data it retrieves and processes.