AppDynamics Launches Extension Buildpack for Pivotal Cloud Foundry Applications

Not long ago, we told you about Pivotal Cloud Foundry (PCF) buildpacks and service brokers, and all the ways you can deploy AppDynamics agents in a PCF environment.

Buildpack is the key concept here. When you do a deployment to PCF, the buildpack is your foundation. You include the app with the buildpack, which incorporates all the logic needed to connect to various PCF services. Because the Cloud Foundry platform includes a mechanism for adding support for third-party services like AppDynamics, it’s really easy to add our APM instrumentation to all your applications without having to make any code changes. We’ve been doing this for some time, of course, and Pivotal recently recognized AppDynamics for our outstanding solutions and services, specifically our support for .NET in the Pivotal environment.

Here’s a new example of how we’re staying on the front edge of PCF development. For the first time, we’re using an innovative Cloud Foundry feature called multi-buildpacks. Starting with v4.5.514 of the AppDynamics Application Monitoring for PCF tile, we’re offering an AppDynamics Extension Buildpack that works in tandem with standard buildpacks using Cloud Foundry’s multi-buildpack workflow.

The .NET team at PCF has been leading the way in multi-buildpack development (more on this in a bit) and we’ve recognized the value of this approach. Now our goal is to apply the same model to AppDynamics’ APM support for all PCF applications.

The Standard Buildpack Model

With a traditional buildpack, we build the logic for integrating AppDynamics agents directly into the official Cloud Foundry community buildpack. We test our code against the main buildpack code, which is maintained by Pivotal on behalf of the Cloud Foundry community. We then send a pull request to Pivotal, which takes our code and releases it as an official part of the buildpack. This is a well-established model carefully managed by Pivotal and adhered to by third-party service providers like AppDynamics. It works because it’s a well-known and understood mechanism. But there’s a better way to do it.

The Advantages of Multi-Buildpack

Pivotal’s multi-buildpack concept is like a layer cake. The main buildpack—the base layer—is the official community buildpack. Third-party providers like AppDynamics provide additional functionality (or layers) on top of the base layer. The end result is a multi-buildpack that can be deployed as essentially a single piece. For example, here’s how we’d push a .NET HWC application with the AppDynamics-specific extension (appdbuildpack) and the base buildpack from Cloud Foundry (hwc_buildpack):

cf push -b appdbuildpack -b hwc_buildpack -s windows2016
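
Recent versions of the cf CLI also let you declare the same buildpack ordering in the application manifest, so the push command stays short. Here's a minimal sketch, assuming a manifest-based deployment (the app name is a placeholder; the standard buildpack must come last):

applications:
- name: my-hwc-app
  stack: windows2016
  buildpacks:
    - appdbuildpack
    - hwc_buildpack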

This is a good model with many benefits, including a clear separation of responsibilities. Pivotal is responsible for the core buildpack and how it links to the service broker and other parts of the Cloud Foundry platform. It also manages all the services your application needs, such as routing and deployment. Third-party providers like AppDynamics are responsible for how their agent installs. If a third-party service introduces a bug, the glitch won’t break the main buildpack.

From our perspective, another benefit of this model is that it gives AppDynamics more control over what goes inside our buildpack, such as custom configuration for our APM Agents. Suppose, for instance, you want to include a custom configuration definition file or custom logging capabilities. It’s very easy to do so. Our buildpack extension defines a folder where you can include the appropriate custom files when you push the application. Once deployed, the application will have the AppDynamics agent installed with the custom configuration file in place. This eliminates the need to fork a buildpack for the sake of customizing agent behavior.
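
As an illustration, the application folder you push might look like the sketch below. The exact folder name the extension scans for is defined by the buildpack, and every name here is hypothetical:

my-hwc-app/
  MyApp.dll                      (application binaries)
  appdynamics/
    AppDynamicsConfig.json       (custom agent configuration picked up at staging)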

From the customer’s perspective, the multi-buildpack model provides a strong support system. It’s very clear who they need to work with (e.g., AppDynamics or Pivotal) for help with specific components or services. Another plus is that we bundle this buildpack with the AppDynamics Service Broker tile. So when you install the latest version of our tile, it will automatically install the buildpack in your environment. And when you deploy an application using any of the main language buildpacks, our extension will be applied on top.

AppDynamics and Multi-Buildpack

Our goal isn’t simply to make AppDynamics work on PCF, it’s to make it work in the best way possible. We have already added support for .NET HWC applications and .NET Core in our AppDynamics Extension Buildpack, and we will soon bring this approach to other dynamic language environments as well, including Python, Go and Node.js. We will also add support for the Java buildpack to enable advanced configuration of AppDynamics Java Agents, although we will, of course, continue to support basic configuration in the standard Java buildpack.

See for yourself how the AppDynamics Extension Buildpack (multi-buildpack approach) can make your life easier!

The AppD Approach: Deployment Options for .NET Microservices Agent

There are numerous ways to develop .NET applications, and several ways to run them. As the landscape expands for .NET development—including advances in .NET Core with its cross-platform capabilities, self-contained deployments, and even the ability to run an ASP.NET Core app on a Raspberry Pi with the upcoming .NET Core 2.1 ARM32 support—it’s only fitting that AppDynamics should advance its abilities to monitor this new landscape.

One of these advancements is our new .NET Microservices Agent. Like .NET Core, this agent has evolved to become more portable and easier to use, providing more value to our customers who monitor .NET Core applications. Its portability and refinement enable a couple of installation options, both of which align closely with the movement to host .NET applications in the cloud, the development of microservices, and the growing use of containers. This flexibility in deployment was a requirement of our customers, as they had concerns over the one-size-fits-all deployment options of some of our competitors. These deployment methods include:

  • Installing via the AppDynamics Site Extension in Azure

  • Installing via the NuGet package bundled with the application

Each method has its advantages and disadvantages:

AppDynamics Site Extension

    • Advantage: Azure Site Extension is an easy deployment method that decouples the AppDynamics agent from the code. A couple of clicks and some basic configuration settings and—voila!—an Azure App Service has an AppDynamics monitoring solution.

    • Disadvantage: It is an Azure App Service-only option. Should the application need to be moved to another service such as Azure Service Fabric, a different installation method would be needed.

AppDynamics NuGet Package

  • Advantage: the NuGet package installation method is super versatile. Since it’s bundled with the application, wherever it goes, the agent and monitoring go too. An excellent option for microservices and containers.

  • Disadvantage: Its biggest advantage is also a drawback, as coupling the agent with the application increases operational requirements. Agent updates, for instance, would require small configuration changes and redeployments.

The Easy Option: AppDynamics Site Extension

Azure provides the ability to add Site Extensions, a simple way to add functionality and tooling to an Azure App Service.

In the case of AppDynamics’ .NET Microservices Agent, Site Extensions is a wonderful deployment method that allows you to set up monitoring on an Azure App Service without having to modify your application. This method is great for an operations team that either wants to monitor an existing Azure App Service without deploying new bits, or decouple the monitoring solution from the application.

The installation and configuration of the AppDynamics Site Extension is simple:

  1. Add the Site Extension to the App Service from the Site Extension Gallery.

  2. Launch the Controller Configuration Form and set up the Agent.

As always, Azure provides multiple ways to do things. Let’s break down these simple steps and show installation from two perspectives: from the Azure Portal, and from the Kudu service running on the Azure App Service’s SCM (Source Control Manager) site.

Installing the Site Extension via the Azure Portal

The Azure Portal provides a very easy method to install the AppDynamics Site Extension. As the Portal is the most common interface when working with Azure resources, this method will feel the most comfortable.

Step 1: Add the Site Extension

  • Log into the Azure Portal at https://portal.azure.com and navigate to the Azure App Service where you want to install the AppDynamics Site Extension.

  • In the menu sidebar, click the Extensions option to load the list of currently installed Site Extensions for the Azure App Service. Click the Add button near the top of the page (see below) to load the Site Extension Gallery, where you can search for the latest AppDynamics Site Extension.

  • In the “Add extension” blade, select the AppDynamics Site Extension to install.
    (The Portal UI is not always the most friendly. If you hover over the names, a tooltip should appear showing the full extension name.)

  • After choosing the extension, click OK to accept the legal terms, and OK again to finish the selection. Installation will start, and after a moment the AppDynamics Site Extension will be ready to configure.

Step 2: Launch and Configure

  • To configure the AppDynamics Agent, click the AppDynamics Site Extension to bring up the details blade, and then click the Browse button at the top. This will launch the AppDynamics Controller Configuration form for the agent.

  • Fill in the configuration settings from your AppDynamics Controller, and click the Validate button. Once the agent setup is complete, monitoring will start.

  • Now add some load to the application. In a few moments, the app will show up in the AppDynamics Controller.

Installing the Site Extension via Kudu

Every Azure App Service is created with a secondary site running the Kudu service, which you can learn more about in the projectkudu repository on GitHub. The Kudu service is a powerful tool that gives you a behind-the-scenes look at your Azure App Service. It’s also the place where Site Extensions are run. Installing the AppD Site Extension from the Kudu service is just as simple as from the Azure Portal.

Step 1: Add Site Extension

  • Log in to the Azure Portal at https://portal.azure.com and navigate to the Azure App Service where you want to install the AppDynamics Site Extension.

  • The Kudu service is easy to access via the Advanced Tools selection on the App Service sidebar.

  • Another option is to browse directly to the secondary site’s URL by adding an “.scm” segment before the “.azurewebsites.net” domain. For example: http://appd-appservice-example.azurewebsites.net becomes http://appd-appservice-example.scm.azurewebsites.net. (You can read more about accessing the Kudu service in the projectkudu wiki.)

  • On the Kudu top menu bar, click the Site Extensions link to view the currently installed Site Extensions. To access the Site Extension Gallery, click the Gallery tab.

  • A simple search for “AppDynamics” will bring up all the available AppDynamics Site Extensions. Simply click the add “+” icon on the Site Extension tile to install.

  • On the “terms acknowledgement” dialog pop-up, click the Install button.

  • Finish the setup by clicking the “Restart Site” button on the upper right. This will restart the SCM site and prepare the AppDynamics Controller Configuration form.

Step 2: Launch and Configure

  • Once the restart completes, click the “Launch” icon (play button) on the Site Extension tile. This will launch the AppDynamics Controller Configuration form.

  • Follow the same process as before by filling in the details and clicking the Verify button.

  • The agent is now set up, and AppDynamics is monitoring the application.

AppDynamics Site Extension in Kudu Debug Console

One of the advantages of the Kudu service is the ability to use the Kudu Debug Console to locate App Service files, including the AppDynamics Site Extension installation and AppDynamics Agent log files. Should the Agent need configuration changes, such as adding a “tier” name, you can use the Kudu Debug Console to locate the AppDynamicsConfig.json file and make the necessary modifications.
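
The tier lives under the application section of that file. A minimal sketch of the relevant fragment, with placeholder names (the controller section is elided):

{
  "application": {
    "name": "MyAzureApp",
    "tier": "MyTierName"
  }
}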

The Versatile Option: AppDynamics NuGet Packages

The NuGet package installation option is the most versatile deployment method, as the agent is bundled with the application. Wherever the application goes, the agent and monitoring solutions go too. This method is great for monitoring .NET applications running in Azure Service Fabric and Docker containers.

AppDynamics currently has four separate NuGet packages for the .NET Microservices Agent, and each is explained in greater detail in the AppDynamics documentation. Your choice of package should be based on where your application will be hosted, and which .NET framework you will use.

In the example below, we will use the package best suited to an Azure App Service, for comparison with the Site Extension.

Installing the AppDynamics App Service NuGet Package

The method for installing a NuGet package varies by tooling, but for simplicity we will assume a simple web application is open in Visual Studio, and that we’re using Visual Studio to manage NuGet packages. If you’re working with a more complex solution that bundles multiple applications, the installation steps will vary by project and deployment setup.

Step 1: Getting the Correct Package

  • On the web app project, right-click and bring up the context menu. Locate and click “Manage NuGet Packages…”.  This should bring up the NuGet Package Manager, where you can search for “AppDynamics” under the Browse tab.  

  • Locate the correct package—in this case, the “AppService” option—select the appropriate version and click Install.

  • Do a build of your project to add the AppDynamics directory to your project.

  • The agent is now installed and ready to configure.
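
If you prefer the Package Manager Console to the UI, the install is a single command. The package id below is illustrative; substitute the exact id of the “AppService” package you found in the gallery:

Install-Package AppDynamics.Agent.Distrib.Micro.AppService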

Step 2: Configure the Agent

  • Locate the AppDynamicsConfig.json in the AppDynamics directory and fill in the Controller configuration information.

  • Publish the application to Azure and add some load to the application to test if monitoring was set up properly.

I hope these steps give you an overview of how easy it is to get started with our .NET Microservices Agent. Make sure to review our official .NET Microservices Agent and Deploy AppDynamics for Azure documentation for more information.

The AppD Approach: Monitoring a Docker-on-Windows App

Here at AppDynamics, we’ve developed strong support for .NET, Windows, and Docker users. But something we haven’t spent much time documenting is how to instrument a Docker-on-Windows app. In this blog, I’ll show you how straightforward it is to get one up and running using our recently announced micro agent. Let’s get started.

Sample Reference Application

Provided with this guide is a simple ASP.NET MVC template app running on the full .NET Framework. The sample application link is provided below:

source.zip

If you have your own source code, feel free to use it.

Guide System Information

This guide was written and built on the following platform:

  • Windows Server 2016 Build 14393.rs1_release.180329-1711 (running on VirtualBox)

  • AppDynamics .NET Micro Agent Distro 4.4.3

Prerequisite Steps

Before instrumenting our sample application, we first need to download and extract the .NET micro agent. This step assumes you are not using an IDE such as Visual Studio, and are working manually on your local machine.

Step 1: Get NuGet Package Explorer

If you already have a way to view and/or download NuGet packages, skip this step. There are many ways to extract and view a NuGet package, but one method is with a tool called NuGet Package Explorer, which can be downloaded here.

Step 2: Download and Extract the NuGet Package

We’ll need to download the appropriate NuGet package to instrument our .NET application.

  1. Go to https://www.nuget.org/

  2. Search for “AppDynamics”

  3. The package we need is called “AppDynamics.Agent.Distrib.Micro.Windows.”

  4. Click “Manual Download” (or fetch the package with your usual NuGet tooling).

  5. Now open the package with NuGet Package Explorer.

  6. Choose “Open a Local Package.”

  7. Find the location of your downloaded NuGet package and open it to view its contents.

  8. Choose “File” and “Export” to export the NuGet package to a directory on your local machine.

  9. Navigate to the directory where you exported the NuGet package, and confirm that the agent files are present.
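
Alternatively, a .nupkg file is just a ZIP archive, so you can skip the GUI entirely. A PowerShell sketch, assuming the downloaded file name (yours will include a version number):

Copy-Item .\AppDynamics.Agent.Distrib.Micro.Windows.nupkg .\agent.zip
Expand-Archive .\agent.zip -DestinationPath .\agent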

Step 3: Create Directory and Configure Agent

Now that we’ve extracted our NuGet package, we will create a directory structure to deploy our sample application.

  1. Create a local directory somewhere on your machine. For example, I created one on my Desktop:
    C:\Users\Administrator\Docker Sample\

  2. Navigate to the directory created in Step 1, create a subfolder called “source”, and add the sample application code provided above (or your own source code) to it.

  3. Go back to the root directory and create a directory called “agent”.

  4. Add the AppDynamics micro agent components you extracted earlier to this directory.

  5. Edit “AppDynamicsConfig.json” and add your controller and application information:

{
  "controller": {
    "host": "",
    "port": ,
    "account": "",
    "password": "",
    "ssl": false,
    "enable_tls12": false
  },
  "application": {
    "name": "Sample Docker Micro Agent",
    "tier": "SampleMVCApp"
  }
}
  6. Navigate to the root of the folder, create a file called “Dockerfile” and add the following text:

Sample Docker Config

FROM microsoft/iis
SHELL ["powershell"]

# Enable ASP.NET 4.5 support in IIS
RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
   Install-WindowsFeature Web-Asp-Net45

# Hook the AppDynamics .NET profiler into every CLR process in the container
ENV COR_ENABLE_PROFILING="1"
ENV COR_PROFILER="{39AEABC1-56A5-405F-B8E7-C3668490DB4A}"
ENV COR_PROFILER_PATH="C:\appdynamics\AppDynamics.Profiler_x64.dll"

RUN mkdir C:\webapp
RUN mkdir C:\appdynamics

# Create an IIS site serving C:\webapp on port 8000
RUN powershell -NoProfile -Command \
  Import-module IISAdministration; \
  New-IISSite -Name "WebSite" -PhysicalPath C:\webapp -BindingInformation "*:8000:"

EXPOSE 8000

# Copy in the extracted agent and the application source
ADD agent /appdynamics
ADD source /webapp

# Restart the WMI Performance Adapter and COM+ System Application services the agent relies on
RUN powershell -NoProfile -Command Restart-Service wmiApSrv
RUN powershell -NoProfile -Command Restart-Service COMSysApp
Your root folder will now contain the “agent” and “source” directories alongside the Dockerfile.

Building the Docker Container

Now let’s build the Docker container.

  1. Open a PowerShell terminal and navigate to the location of your Docker sample app. In this example, I will call my image “appdy_dotnet,” but feel free to use a different name if you desire.

  2. Run the following command to build the Docker image:

docker build --no-cache -t appdy_dotnet .

  3. Now run the container (the ping command simply keeps the container alive):

docker run --name appdy_dotnet -d appdy_dotnet ping -t localhost

  4. Log into the container via PowerShell/cmd:

docker exec -it appdy_dotnet cmd

  5. Get the container IP by running the “ipconfig” command:
C:\ProgramData\AppDynamics\DotNetAgent\Logs>ipconfig
Windows IP Configuration


Ethernet adapter vEthernet (Container NIC 69506b92):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::7049:8ad9:94ad:d255%17
   IPv4 Address. . . . . . . . . . . : 172.30.247.210
   Subnet Mask . . . . . . . . . . . : 255.255.240.0
  6. Copy the IPv4 address, add port 8000, and request the URL from a browser. You should get back the simple ASP.NET MVC template site that ships with Visual Studio. In our example, the address would be:

http://<ip4-address>:8000


  7. Generate some load in the app by clicking the Home, About, and Contact tabs. Each will be registered as a separate business transaction.

Killing the Container (Optional)

In the event you get some errors and want to rebuild the container, here are some helpful commands for stopping and removing it.

  1. Stop the container:

docker stop appdy_dotnet

  2. Remove the container:

docker rm appdy_dotnet

  3. Remove the image:

docker rmi appdy_dotnet
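
If you simply want to start over, the stop and remove steps can be collapsed into one command, since the -f flag force-removes a running container:

docker rm -f appdy_dotnet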

Verify Successful Configuration via Controller

Log in to your controller and verify that you are seeing load. If you used the sample app, you’ll see the following info:

Application Flow Map

Business Transactions

Tier Information


As you can see, it’s fairly easy to instrument a Docker-on-Windows app using AppDynamics’ recently announced micro agent. To learn more about AppD’s powerful approach to monitoring .NET Core applications, read this blog from my colleague Meera Viswanathan.

The AppD Approach: How to Monitor .NET Core Apps

For the past few months we’ve been collecting customer feedback on a new agent designed specifically to monitor microservices built with .NET Core. As I discussed a few weeks ago in my post “The Challenges of App Monitoring with .NET Core,” the speed and portability that make .NET Core a popular choice for companies seeking to more fully embrace the world of complex, containerized applications placed new demands on monitoring solutions.

Today we’re announcing the general availability of the AppDynamics .NET Core agent for Windows. Please stay tuned for news about a native C++-based Linux agent we are working on, as well. Our goal is to design agents that address the three biggest challenges of monitoring .NET Core: performance, flexibility, and functionality. As companies modernize monolithic applications and increasingly shift parts of their IT infrastructure to the cloud, these agents will ensure deep visibility into rapidly evolving production, testing, and development environments.

In this blog post, I’d like to share some of the considerations that went into the choices we made in architecting the new agents. It was extremely important to our engineering team to create an agent as lightweight and reliable as the microservices and containers we monitor, without compromising functionality. One change we made was removing the Windows Service that required a machine-level install, which increased reliability and freed up CPU and considerable memory (70 MB). In addition, the new .NET Core agents for Windows require just half the disk space of traditional .NET agents and consist of only two DLLs and two configuration files.

Our approach to monitoring .NET Core recognizes that the deployment of .NET Core applications is fundamentally different from that of applications built with the full .NET Framework. In Windows environments, deployment depended on both the framework and the machine, and our agent was installed using the traditional Windows installer (via MSI files). In contrast, the advantage of .NET Core is that it runs on a variety of platforms and runtimes.

Last year, our team made the decision to mirror .NET Core’s flexibility in deployment. Unlike some other app monitoring solutions, the AppDynamics .NET Core agents reside next to the application. This architecture means containers can be spun up and spun down or moved around without affecting visibility. Operations engineers can integrate AppDynamics in any way that makes sense, while developers are able to leverage NuGet package-management tools. The pipeline for deploying and installing the agents on each platform is the same as for deploying applications and microservices there. For example, agents can be deployed with Azure Site Extensions for Azure or buildpacks for Pivotal Cloud Foundry (available soon). In the case of Docker, the agents can be embedded in a Docker image, with engineers setting a few environment variables for monitoring to then proceed automatically. During our recent beta it was great to see our customers deploying the AppDynamics .NET Core agents to Docker, Azure App Services, Azure Service Fabric, Pivotal Cloud Foundry, and other environments.
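
To make the Docker case concrete, here is a sketch of the environment variables involved, reusing the same COR_* profiler hooks from our Docker-on-Windows walkthrough (the image name is a placeholder):

docker run -d -e COR_ENABLE_PROFILING=1 -e COR_PROFILER="{39AEABC1-56A5-405F-B8E7-C3668490DB4A}" -e COR_PROFILER_PATH="C:\appdynamics\AppDynamics.Profiler_x64.dll" my-dotnet-image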

How it works

The .NET Core agents deliver all the functionality and automation you expect from AppDynamics. The agents auto-detect apps, which in the case of .NET Core could be running on Kestrel or WebListener. The agents then talk to the AppDynamics Controller, providing everything from business and performance-related transaction metrics to errors and health alerts.

Similar to the traditional .NET agent, the new .NET Core agents are particularly suited to monitoring the asynchronous transactions that often characterize microservices. We automatically instrument asynchronous apps and provide deep visibility at the code level with built-in visualizations such as snapshots and full-stack call graphs that include unrestricted views into the ASP.NET Core middleware.

Although certain Windows-environment specific machine metrics like performance counters are not available to the new .NET Core agents due to the new cross-platform architecture, as I previously discussed, AppDynamics continues to provide cross-stack and full-stack visibility by automatically correlating the metrics collected by the .NET Core agents with infrastructure and end-user metrics. This allows transactions to be traced from an end user to an application or microservice through databases such as Azure SQL, SQL Server, and MongoDB, across distributed tiers, and back to the end user, automatically discovering dependencies and identifying anomalies along the way. These unified full-stack and cross-stack topologies are critical to developing and deploying microservices that are responsive to business needs.

Drive business outcomes

AppDynamics’ Business iQ connects application performance with business results using a variety of data collectors to pull detailed, real-time information on everything from users to pricing. With the new .NET Core agents, it is even easier to create contextual information points to collect custom data from microservices. Thanks to run-time reinstrumentation, engineers can make changes in existing information points without restarting the microservice.

Customers have asked whether this functionality will be available in hybrid environments. Yes, this is one of the great advantages of the new .NET Core agents. Customers will have visibility into the performance of their business and their applications across on-premises installations running on the full .NET framework and .NET Core applications running on the Azure cloud or other public clouds. Just as .NET Core seeks to enable microservices to move between platforms, AppD is continually working to provide complete, end-to-end visibility into apps and microservices wherever they are running and regardless of the underlying technologies.

It is worth acknowledging that the first generation of .NET Core monitoring tools is shipping with a tradeoff between ease of deployment on one side and performance and reliability on the other. Some vendors, especially those who shipped early, emphasize the simplicity and speed of their agents. Deploying AppD’s agents does involve more than one step. However, customers assure us that the reliability of our agents, combined with their lack of overhead, more than compensates for the small upfront investment made in deployment. In the meantime, our engineering teams remain hard at work tuning and automating deployment and installation processes.

The AppD approach to monitoring .NET Core apps illustrates the importance of a unified solution for maintaining full-stack and cross-stack visibility. The ultimate goal of monitoring is to improve business performance. Ideally, performance issues—and potential problems— are automatically detected before they affect business goals. Achieving this requires real-time data collection on-premises, on IoT devices, and across clouds. It depends on the continuous monitoring of everything—applications, containers, microservices, machines, and databases—as well as on the continuous improvement of AI and machine learning algorithms. Our new agents represent one more step in this exciting journey. Onward!

The Challenges of App Monitoring with .NET Core

The evolution of software development from monolithic on-premises applications to containerized microservices running in the cloud took a major step forward last summer with the release of .NET Core 2. As I wrote in “Understanding the Momentum Behind .NET Core,” the number of developers using .NET Core recently passed the half-million mark. Yet in the rush to adoption, many developers have encountered a speed bump. It turns out the changes that make .NET Core so revolutionary create new challenges for application performance monitoring.

Unlike .NET apps that run on top of IIS and are tied to Windows infrastructure, microservices running on .NET Core can be deployed anywhere. The customers I’ve spoken with are particularly interested in deploying microservices on Linux systems, which they believe will deliver the greatest return on investment. But the flexibility comes at a cost.

When operations engineers move .NET applications to .NET Core they are seeking fully functional, performant environments that are designed for a microservice. What they are finding is that the .NET Core environment requirements vary substantially from the environments that the full framework runs on. While .NET Core’s independence from IIS and Windows machines provides flexibility, it also means that some performance tools for system metrics may no longer be relevant.

Engineers who are used to debugging apps in a traditional Windows environment find that valuable tools like Event Tracing for Windows (ETW) and performance counters are not consistently available. For example, an on-premises Windows machine allows you to read performance counters while Azure WebApps on Windows only provides access to application-specific performance counters. Neither ETW nor performance counters are available on Linux, so if you want to deploy an ASP.NET Core microservice on Linux you will need to modify your method of collecting system-level data.

In creating .NET Core and the ASP.NET Core framework Microsoft made improving performance a top priority. One of the biggest changes was replacing the highly versatile but comparatively slow IIS web server with Kestrel, a stripped-down, cross-platform web server. Unlike IIS, Kestrel does not maintain backwards compatibility with a decade-and-half of previous development and is specifically suited to the smaller environments that characterize microservices development and deployment. Open-source, event-driven, and asynchronous, Kestrel is built for speed. But the switch from IIS to Kestrel is not without tradeoffs. Tools we relied on before like IIS Request Failed logging don’t consistently work. The fact is, Kestrel is more of an application server than a web server, and many organizations will want to use a full-fledged web server like IIS, Apache, or Nginx in front as a reverse proxy. This means engineers have to now familiarize themselves with the performance tools, logging, and security setup for these technologies.

Beyond monitoring web servers, developers need performance metrics for the entire platform where a microservice is deployed—from Azure and AWS to Google Cloud Platform and Pivotal Cloud Foundry, not to mention additional underlying technologies like Docker. The increase in platforms has a tendency to add up to an unwelcome increase in monitoring tools.

At the same time, the volume, velocity, and types of data from heterogeneous, multi-cloud, microservices-oriented environments is set to increase at exponential rates. This is prompting companies who are adopting .NET Core and microservices to take a hard look at their current approach to application monitoring. Most are concluding that the traditional patchwork of multiple tools is not going to be up to the task.

While application performance monitoring has gotten much more complex with .NET Core, the need for it is even more acute. Migrating applications from .NET without appropriate monitoring solutions in place can be particularly risky.

One key concern is that not all .NET Framework functionality is available in .NET Core, including .NET Remoting, Code Access Security and AppDomains. Equivalents are available in ASP.NET Core, but they require code changes by a developer. Likewise, HTTP handlers and other IIS tools must be integrated into a simplified middleware pipeline in ASP.NET Core to ensure that the logic remains part of an application as it is migrated from .NET to .NET Core.

Not all third-party dependencies have a .NET Core-compatible release. In some cases, developers may be forced to find new libraries to address an application’s needs.

Given all of the above, mistakes in migration are possible. There may be errors in third-party libraries, functionality may be missing, and key API calls may cause errors. Performance tools are critical in helping this migration by providing granular visibility into the application and its dependencies. Problems can thus be identified earlier in the cycle, making the transition smoother.

AppDynamics has been tackling the challenges outlined in this post for more than a year. A beta release of support for .NET Core 2.0 on Windows became available in January, and we’ll have more news going forward.

Please stay tuned for my next blog post about AppDynamics’ approach to app monitoring with .NET Core.

Understanding the Momentum Behind .NET Core

Three years ago Satya Nadella took over as CEO of Microsoft, determined to spearhead a renewal of the iconic software maker. He laid out his vision in a famous July 10, 2014 memo to employees in which he declared that “nothing was off the table” and proclaimed his intention to “obsess over reinventing productivity and platforms.”

How serious was Nadella? In the summer of 2016, Microsoft took the bold step of releasing .NET Core, a free, cross-platform, open-source version of its globally popular .NET development platform. With .NET Core, .NET apps could run natively on Linux and macOS as well as Windows.

For customers, .NET Core solved a huge problem of portability. .NET shops could now easily modernize monolithic on-premises enterprise applications by breaking them up into microservices and moving them to cloud platforms like Microsoft Azure, Amazon Web Services, or Google Cloud Platform. They had been hearing about the benefits of containerization: speed, scale and, most importantly, the ability to create an application and run it anywhere. Their developers loved Docker’s ease of use and installation, as well as the automation it brought to repetitive tasks. But just moving a large .NET application to the cloud had presented daunting obstacles. The task of lifting and shifting the large system-wide installations that supported existing applications consumed massive amounts of engineering manpower and often did not deliver the expected benefits, such as cost savings. Meanwhile, the dependency on the Windows operating system limited cloud options, and microservices remained a distant dream.

.NET Core not only addressed these challenges, it was also ideal for containers. In addition to starting a container with an image based on the Windows Server, engineers could also use much smaller Windows Nano Server images or Linux images. This meant engineers had the freedom of working across platforms. They were no longer required to deploy server apps solely on Windows Server images.

Typically, the adoption of a new developer platform would take time, but .NET Core experienced a large wave of early adoption. Then, in August 2017, .NET Core 2.0 was released, and adoption increased exponentially. The number of .NET Core users reached half a million by January 2018. By achieving almost full feature parity with .NET Framework 4.6.1, .NET Core 2.0 took away all the pain that had previously existed in shifting from the traditional .NET Framework to .NET Core. Libraries that hadn’t existed in .NET Core 1.0 were added to .NET Core 2.0. And because .NET Core implemented all 32,000 APIs in .NET Standard 2.0, most applications could reuse their existing code.

Engineering teams that had struggled with DevOps initiatives found that .NET Core allowed them to accelerate their move to microservices architectures and to put in place a more streamlined path from development to testing and deployment. Lately, hiring managers have started telling their recruiters to be sure to mention the opportunity to work with .NET Core as an enticement to prospective hires—something that never would have happened with .NET.

At AppDynamics, we’re so excited about the potential of .NET Core that we’ve tripled the size of the engineering team working on .NET. And, just last month, we announced a beta release of support for .NET Core 2.0 on Windows using the new .NET micro agent released in our Winter ‘17 product release. This agent provides improved microservices support as more customers choose .NET Core to implement multicloud strategies. Reach out to your account team to participate in this beta.

Stay tuned for my next blog posts on how to achieve end-to-end visibility across all your .NET apps, whether they run on-premises, in the cloud, or in multi-cloud and hybrid environments.

Top 5 Conferences for .NET Developers

Partly due to the influence of software giant Microsoft, the .NET community is expansive. Developers, programmers and IT decision makers regularly meet at .NET conferences to share news, information and ideas to help each other keep up with the rapid digital transformation in today’s IT landscape. Here are five .NET conferences you should consider attending to advance your knowledge, skills and career growth.

Build

Build is Microsoft’s annual convention geared toward helping software and web programmers learn about the latest developments in .NET, Azure, Windows and related technologies. It began in 2011, and the 2017 conference will run May 10 – 12 at the Washington State Convention Center in Seattle, WA. Don’t get your hopes too high, though: registration has already closed, as the conference is sold out. However, there is a wait list if you’re hoping the stars align in your favor and you’re granted admission.

Build takes over for the now-defunct Professional Developers Conference, which focused on Windows, and MIX, which centered on developing web apps using Silverlight and ASP.NET. For 2017, major topic themes included the .NET Standard Library, the Edge browser, Windows Subsystem for Linux, ASP.NET Core and Microsoft Cortana. Sessions included debugging tricks for .NET using Visual Studio, a look at ASP.NET Core 1.0, deploying ASP.NET Core apps, a deep dive into MVC with ASP.NET Core, Entity Framework Core 1.0, a .NET overview, and creating desktop apps with Visual Studio vNext.

Reviews of the prior years were positive. One reviewer appreciated the introduction of a BASH Shell, the first environment that allowed cross-platform Windows developers to code completely in Windows without resorting to Linux or Mac OS X machines. Another commented that they liked getting Xamarin, a set of developer tools, for free, saving them hundreds of dollars. Both these moves were strong indicators of Microsoft’s re-commitment to developers as it embraces our new multi-platform world encompassing open-source and proprietary programs side by side.

DEVintersection

This year’s DEVintersection will be staged at the Walt Disney World Swan and Dolphin Resort in Lake Buena Vista, Florida, on May 21 – 24, 2017. This is the fifth year for the conference, which brings together engineers and key leaders from Microsoft with a variety of industry professionals. The goal is to help attendees stay on top of developments such as ASP.NET Core, Visual Studio, SQL Server, SharePoint and Windows 10.

Since ASP.NET and .NET are moving to open-source status, it is another sign Microsoft is further encouraging open source as a preeminent approach in web development. You will learn skills to tackle the transition to open source and handle the concomitant issues that come with that move. The IT landscape continues to shift and evolve, and software developers need to consider a wide variety of challenges, such as microservices, the cloud and containerization.

Major conference tracks include Visual Studio intersection, Azure intersection, ASP.NET intersection, IT EDGE intersection, Emerging Experiences and SQL intersection. IT Edge is a co-located event — attendees can take part in sessions from different tracks for no extra charge. There will be ten full-day workshops across the four days of the conference. More than 40 sessions focus on a range of technology topics, with the goal of giving you techniques and skills you can use right away in your day-to-day work.

This year, expect to see plenty of discussion around designing for scale, performance monitoring, the cloud, troubleshooting, and the features and benefits of the 2012, 2014 and 2016 editions of SQL Server. Are you considering migrating from 2008 all the way to 2016 in one go? You’ll get the feedback and advice you need to make these important decisions. Past attendees appreciated that every day of the conference started with a report from Microsoft specialists in the main hall. One reviewer called the session breakouts “involving and useful,” and another said the full-day workshops that ran before and after the main convention gave them both “practical and theoretical knowledge.”

Microsoft Ignite

Formerly known as TechEd, the conference was renamed Ignite in 2015. The original TechEd started in Orlando in 1993, and the last chapter of the series was staged in 2016 in Atlanta, Georgia. The 2017 Ignite conference is slated for September 25 – 29 in Orlando. Registration opens on March 28, so be sure to save the date; registration sold out for the 2016 conference.

The Microsoft Data Science Summit will span two days during Ignite and is geared to engineers, data scientists, machine learning professionals and others interested in the latest in the world of analytics.

MS Ignite is for IT professionals, developers and managers. Decision makers can see what .NET advancements and developments are available, while developers can get information on how to implement those platforms in their current IT profile. There are presentations, breakout sessions and lab demonstrations. Microsoft .NET experts and community members alike meet to socialize, share news and evaluate the latest software-defined tech. There are over 1,000 Microsoft Ignite sessions covering the latest developments in technology, each giving you a chance to meet face-to-face with industry experts.

For companies using .NET solutions, Ignite gives leaders and developers a chance to discuss current trends on the platform directly with Microsoft influencers. High-profile Microsoft attendees in the past have included Jeffrey Snover, the lead architect of Windows Server; Brad Anderson, Corporate VP of Enterprise Client and Mobility; and Mark Russinovich, Chief Technology Officer of Microsoft Azure.

IT/Dev Connections

Presented by Penton Media, the annual IT/Dev Connections conference is scheduled for October 23 – 26, 2017 at the Hilton Union Square in San Francisco. Topics to be covered include ASP.NET, Visual Studio, Azure, SQL Server, SharePoint, VMware and more. There are five main technical topic tracks with over 200 sessions, plus a sponsor track and a Community Track. Conference leaders known as Track Chairs hand-pick the best content and speakers. The goal is to omit any fluff and marketing hype, focusing only on high-value presenters and panelists. The five topic tracks are Cloud and Data Center; Enterprise Collaboration; Development and DevOps; Data Platform and Business Intelligence; and Enterprise Management, Mobility and Security.

Speakers at the 2017 conference include Windows technical specialist John Savill, data professional Tim Ford, and SharePoint expert Liam Cleary. A series of pre-conference workshops gives developers and programmers a chance for one-on-one training. Workshops include troubleshooting ASP.NET web applications, mastering the SharePoint dev toolchain, and skill-building for ASP.NET Core with Angular 2. Other sessions include Azure for ASP.NET programmers, Dockerizing ASP.NET apps, and ASP.NET development without using Windows. The State of the Union session will discuss .NET from the desktop and mobile device to the server.

The strength of the IT/Dev Connections conference is the focus on developers and programmers. Commercial interests are kept to a minimum, and speakers are vetted for the amount of take-away value in their presentations. Attendees from past events have lauded the “user focus” of the conference and “intensely personal” feel of the breakout sessions. In other events, session rooms may have hundreds of chairs, while sessions at IT/Dev Connections generally accommodate around 100 people, providing a more personal, hands-on feel to each session. The speakers are also well diversified among different sections of the developer community, including a number of MVP designated presenters.

Visual Studio Live

Visual Studio Live! events for 2017 are a series of conferences held throughout the year in cities around the country, such as Las Vegas, Chicago, and Washington, D.C. The subtitle for the series is “Rock Your Code Tour.” The meetings give .NET developers and programmers a chance to level up their skills in Visual Studio, ASP.NET and more.

Visual Studio Live! focuses on practical training for developers using Visual Studio. For example, the Austin meeting is five days of education on the .NET Framework, JavaScript/HTML5, Mobile Computing and Visual Studio. There are more than 60 sessions conducted by Microsoft leaders and industry insiders. Topics to be covered include Windows Client, Application Lifecycle Management, Database and Analytics, Web Server and Web Client, Software Practices, and Cloud Computing.

If you participated in the previous Live! 360 program for discounted rates, be sure to reach out to the organizing committee as they do have special pricing for their most frequent customers.

Visual Studio Live! is known for its hands-on approach, with extensive workshops that give developers a deep dive into each topic. The workshops are featured throughout each day, so attendees have lots of opportunity to get targeted learning.

Attendees have responded enthusiastically to the co-located conference arrangement. One said it was an ideal chance to catch up with a number of technologies after being out of the tech world for a few years, and another lauded the enthusiasm of the speakers and workshop leaders.

There is a myriad of software development conferences that will help you grow as a .NET developer, DevOps thinker, or business influencer. Check out these five to see which one best fits your needs and goals.

Top 10 New Improvements Found in the .NET Framework Version 4.6.2

In the late 1990s, Microsoft began working on a general-purpose development platform that quickly became the infrastructure for building, deploying, and running an unlimited number of applications and services with relative ease, while focusing on the Internet User Experience (IUE). Then, in February 2002, Microsoft finally launched the first version of these shared technology resources, originally developed under the name Next Generation Windows Services (NGWS). With DLL libraries and object-oriented support for web app development, .NET Framework 1.0 was a digital transformation that introduced us all to managed coding.

Although .NET Core has been a focus in recent years, work on the original .NET Framework has still progressed. In fact, on August 2, 2016, Microsoft announced the much-anticipated release of Version 4.6.2. According to MS, there are “dozens of bug fixes and improvements.” Actually, there are almost 14 dozen bug fixes — 166 to be exact — not to mention all the API changes. Moreover, many of the changes found in this new version were based on developer feedback. Needless to say, things have definitely improved. The following is a list of the top ten improvements found in .NET 4.6.2:

1. Windows Hello

The Windows 10 Anniversary Update was released the same day as the latest .NET Framework, and .NET 4.6.2 is already included with the update. Although it doesn’t show up as an installed application in “Programs and Features,” you can find it by searching for features and clicking “Turn Windows features on and off.” From there you can adjust your features accordingly, including “developer mode.” The headline addition is Windows Hello support, which lets developers and programmers use Windows Hello in their apps. For example, third-party developers can now allow users to log in with their face or fingerprint with ease. Simply download the framework update.

2. Removal of Character Limits (BCL)

Microsoft removed the 260-character path limitation, MAX_PATH, from the file I/O APIs in the BCL. In addition, characters in .NET 4.6.2 are now classified based on the Unicode Standard, Version 8.0.0. You’re probably used to getting the “path too long” error, especially with MSBuild definitions. The error details usually state something like:

TF270002: An error occurred copying files: The specified path, file name, or both are too long.

Or the error might state something similar to:

Unable to create folder. Filename or extension is too long.

Programs and server tools can also show problems in these areas, and solutions normally involved renaming something to fit within the limit. Usually not an issue for end users, this limitation is more common on developer machines that use specialized tools also running on Unix, or while building source trees. Now that the MAX_PATH limitation has been removed, we may never have to see this error message again.

However, long paths are not yet enabled by default, so you need to set the policy that enables support: “Enable Win32 long paths” (labeled “Enable NTFS long paths” in some builds). Your app must also declare long-path awareness in its manifest. In addition, long paths using the \\?\ syntax are now supported on any OS version.
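
For applications that target an older framework version but run on 4.6.2, the opt-in is typically done with AppContext switches in the app.config. A sketch (verify the switch names against Microsoft's current documentation):

<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.System.IO.UseLegacyPathHandling=false;Switch.System.IO.BlockLongPaths=false" />
  </runtime>
</configuration>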

3. Debugging APIs (CLR)

The main adjustment to the CLR is that, if the developer chooses, null reference exceptions will now provide much more extensive debugging data. The unmanaged debugging APIs can request more information and perform additional analysis, and a debugger can now determine which variable in a single line of source code is null, making your job a lot easier.

Microsoft’s release notes list the APIs that have been added to the unmanaged debugging API.

4. TextBoxBase Controls (WPF)

For security purposes, the copy and cut methods have been known to fail when they are called in partial trust. According to Microsoft, “developers can now opt-in to receiving an exception when TextBoxBase controls fail to complete a copy or cut operation.” Standard copy and cut through keyboard shortcuts, as well as the context menu, will still work the same way as before in partial trust.

5. Always Encrypted Enhancement (SQL)

Always Encrypted is a database engine feature designed to protect sensitive data, such as credit card numbers. The .NET Framework Data Provider for SQL Server contains two important enhancements for Always Encrypted, centered on performance and security:

  • Performance: Encryption metadata for query parameters is now cached. When the caching property is set to true (the default), database clients retrieve parameter metadata from the server only once, even if the same query is called multiple times.

  • Security: Column encryption key entries in the key cache are now evicted after a configurable time interval, set via the SqlConnection.ColumnEncryptionKeyCacheTtl property. The default is two hours, while zero means no caching at all.
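
Always Encrypted itself is enabled per connection. A sketch of a connection string, with placeholder server and database names:

Data Source=myServer;Initial Catalog=Clinic;Integrated Security=true;Column Encryption Setting=Enabled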

6. Best Match (WCF)

NetNamedPipeBinding has been upgraded to support a new pipe lookup option called “Best Match.” With this option, a NetNamedPipeBinding client searches for the service listening on the URI that best matches the requested endpoint, rather than the first match it finds. This matters because multiple WCF services frequently listen on named pipes, and with “First Match” (the default) a client can connect to the wrong service. To enable “Best Match,” add an AppSetting to the App.config or Web.config file of the client application.
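
A sketch of that setting; the key name below is taken from Microsoft's 4.6.2 release notes, so verify it against current documentation:

<appSettings>
    <add key="wcf:useBestMatchNamedPipeUri" value="true" />
</appSettings>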

7. Converting to UWP

According to Developer Resources, Windows now offers capabilities to bring existing Windows desktop apps, including WPF and Windows Forms apps, to the Universal Windows Platform. WPF, for example, is a powerful framework that has become a mature and stable UI platform suitable for long-term development. It can also be a complex beast at times, because it works differently from other GUI frameworks and has a steep learning curve. Microsoft, however, always seems to plan ahead, and that’s where the converting-to-UWP enhancement comes in. This improvement enables you to gradually migrate your existing codebase to UWP, which, in turn, can help you bring your app to all Windows 10 devices. It also makes UWP APIs more accessible, allowing you to enable features such as Live Tiles and notifications.

8. ClickOnce

Designed long before the invention of the App Store, ClickOnce allows applications to be distributed via URLs, and it can even self-update as new versions are released. Unfortunately, security has always been a big concern, and many DevOps teams have shown frustration over Microsoft’s slowness to adopt TLS standards. Finally, in addition to TLS 1.0, ClickOnce now supports TLS 1.1 and TLS 1.2. ClickOnce will automatically detect which protocol to use; no action is required to enable this feature.

9. SignedXml

An implementation of the W3C’s XML Digital Signature standard, SignedXml now supports the SHA-2 family of hashing algorithms.

The following frequently used signature methods and reference digest algorithms are now included:

  • RSA-SHA256

  • RSA-SHA384

  • RSA-SHA512 (PKCS#1)

For more information on these and other security concerns, along with update deployment and developer guidance, please see Microsoft Knowledge Base Article 3155464, as well as the MS Security Bulletin MS16-065.

10. Soft Keyboard Support

On previous versions of .NET, it wasn’t possible to utilize focus tracking without disabling WPF pen/touch gesture support. Developers were forced to choose between full WPF touch support or Windows mouse promotion. In the latest version of Microsoft’s .NET 4.6.2, Soft keyboard support allows the use of the touch keyboard in WPF applications without disabling WPF stylus/touch support on Windows 10.

To find out which version of the .NET Framework is installed on a computer:

  1. Tap on the Windows key, type regedit.exe, and hit enter.

  2. Confirm the UAC prompt.

  3. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full

Check for a DWORD value named Release; its presence indicates .NET Framework 4.5 or newer, and its value identifies the exact version installed.
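
You can read the same value from a command prompt. A Release value of 394802 or higher indicates 4.6.2, per Microsoft's published version table:

reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full" /v Release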

For all versions of the .NET Framework and its dependencies, please check the charts listed in the Microsoft library for more information.

If you want the complete set of .NET Framework versions on your computer, you’ll need to install the following:

  • .NET Framework 1.1 SP1

  • .NET Framework 3.5 SP1

  • .NET Framework 4.6

The above list is only the tip of the iceberg when describing all the features and improvements that can be found in the .NET Framework Version 4.6.2. There are numerous security and crash fixes, added support, networking improvements, active directory services updates, and even typo correction in EventSource. Because Microsoft took user feedback into consideration, developers, programmers, and engineers may feel that Microsoft is finally listening to their needs and giving them a little more of what they want in their .NET Framework.

Learn more

Find out how AppDynamics .NET application monitoring solution can help you today.

10 Things You Should Know About Microsoft’s .NET Core 1.0

On June 27, 2016, Microsoft announced the release of a project several years in the making — .NET Core. The solution resulted from the need for a nonproprietary version of Microsoft’s .NET Framework — one that runs on Mac and several versions of Linux, as well as on Windows. This cross-platform .NET product offers programmers new opportunities with its open-source design, flexible deployment, and command-line tools. These features are just part of what makes .NET Core an important evolution in software development. The following are ten key facts you should be aware of when it comes to Microsoft’s .NET Core 1.0 and its impact on software.

1. The .NET Core Platform Is Open-Source

.NET Core is part of the .NET Foundation, which exists to build a community around and innovate within the .NET development framework. The .NET Core project builds on these priorities, starting with its creation by both Microsoft’s .NET team and developers dedicated to the principles of open-source software.

Your advantages in using this open-source platform are many — you have more control in using and changing it, and transparency in its code can provide information and inspiration for your own projects based on .NET Core. In addition, .NET Core is more secure, since you and your colleagues can correct errors and security risks more quickly. Its open-source status also gives .NET Core more stability: unlike proprietary software that can be defined and later abandoned by its creators, the code behind this platform's tools will always remain publicly available.

2. It Was Created and Is Maintained Through a Collaborative Effort

Related to its development using open-source design principles, the .NET Core platform was built with the assistance of about 10,000 developers. Their contributions included creating pull requests and issues, as well as providing feedback on everything from design and UX to performance.

By implementing the best suggestions and requests, the development team turned .NET Core into a community-driven platform, making it more accessible and effective for the programming community than if it had been created purely in-house. The .NET Core platform continues to be refined through collaboration as it is maintained by both Microsoft and GitHub’s .NET community. As a developer, you have the opportunity to influence the future advancement of .NET Core by working with its code and providing your feedback.

3. The Main Composition of .NET Core Includes Four Key Parts

The first essential aspect is a .NET runtime, which gives .NET Core its basic services, including a type system, garbage collector, native interop, and assembly loading. Secondly, primitive data types, app composition types, and fundamental utilities are provided by a set of framework libraries (CoreFX). Thirdly, the .NET Core developer experience is created by a set of SDK tools and language compilers. Finally, the "dotnet" app host selects and hosts the runtime, allowing .NET Core applications to launch. As you develop, you'll access .NET Core through the .NET Core Software Development Kit (SDK). This includes the .NET Core Command Line Tools, the .NET Core runtime and libraries, and the dotnet driver — everything you need to create a .NET Core application or a .NET Core library.

4. Flexible Deployment Means More Options for Using .NET Core

One of the defining features of .NET Core is its flexible deployment — you can install the platform either as part of your application or as a separate installation. Framework-dependent deployment (FDD) relies on the presence of .NET Core on the target system and has many advantages. With FDD, your deployment package will be smaller. Also, disk space and memory use are minimized on devices, and you can execute the .NET Core app on any supported operating system without specifying a target in advance.

Self-contained deployment (SCD) packages all components (including .NET Core libraries and runtime) with your application, in isolation from other .NET Core applications. This type of deployment gives you complete control of the version of .NET Core used with your app and guarantees accessibility of your app on the target system. The unique characteristics of each deployment type ensure you can deploy .NET Core apps in a way that works best for your particular needs.
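
To illustrate with today's dotnet CLI (exact flags vary by SDK version, and the runtime identifier here is just an example), a framework-dependent build and a self-contained build for 64-bit Windows 10 look roughly like this:

dotnet publish -c Release

dotnet publish -c Release -r win10-x64

The first command produces a framework-dependent package that relies on a shared .NET Core installation; the second bundles the runtime with the app. Depending on the tooling version, a self-contained build may also require declaring the runtime identifier in your project file.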

5. The .NET Core Platform Is Cross-Platform by Design

This unique software platform already runs on Windows, Mac OS X, and Linux, as its cross-platform nature was one of the main priorities for its development. While this may seem like a strange move for Microsoft, it’s an important one in a technological world that’s increasingly focused on flexibility and segmented when it comes to operating systems and platforms. .NET Core’s availability on platforms other than Windows makes it a better candidate for use by all developers, including Mac and Linux developers, and also gives the entire .NET framework the benefit of feedback and use from a much wider set of programmers. This additional feedback results in a product that works better for all of its users and makes the .NET Core platform a move forward for software-defined, rather than platform-defined applications.

6. Modular Development Makes .NET Core an Agile Development Tool

As part of its cross-compatibility design, the software development platform includes a modular infrastructure. It is released through NuGet, and you can access it as feature-based packages rather than one large assembly. As a developer, you can design lightweight apps that contain only the necessary NuGet packages, resulting in better security and performance for your app. The modular infrastructure also allows faster updates of the .NET Core platform, as affected modules can be updated and released on an individual basis. The focus on agility and fast releases, along with the aforementioned collaboration, positively positions .NET Core within the DevOps movement.

7. .NET Core Features Command-Line Tools

Microsoft states that .NET Core's command-line tools mean that "all product scenarios can be exercised at the command-line." The .NET Core Command Line Interface (CLI) is the foundation for higher-level tools, such as integrated development environments, used for developing applications on the platform. Like the .NET Core platform itself, the CLI is cross-platform, so once you've learned the toolchain, you can use it the same way on any supported platform. The CLI also underpins the portability of applications, whether .NET Core is already installed on the target machine or the application ships self-contained.
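
As a quick sketch of the workflow (template names and default arguments have shifted between the 1.0-era tooling and later SDKs), creating and running a new console app looks something like this on any supported platform:

dotnet new console -o HelloCore
cd HelloCore
dotnet restore
dotnet run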

8. .NET Core Is Similar to .NET Framework

While .NET Core was designed to be an open-source, cross-platform version of the .NET Framework, there are differences between the two that go beyond those key features. Many of these differences result from the design itself as well as the relative newness of the .NET Core software development platform. App models built on Windows technologies are not supported by .NET Core, while console and ASP.NET Core app models are supported by both .NET Core and the .NET Framework.

.NET Core has fewer APIs than the .NET Framework, but it will include more as it develops. Also, .NET Core only implements some of .NET Framework’s subsystems in order to maintain the simplified, agile design of the platform. These differences may limit the .NET Core platform in some ways now — however, the advantages of its cross-platform, open-source design should definitely outweigh any limitations as the platform is further enhanced.

9. The .NET Core Platform Is Still Under Construction

The nature of this software development platform makes it a work in progress, continually refined by both Microsoft's .NET Core team and invested developers worldwide. The .NET Core 1.1 release, scheduled for this fall, is set to bring greater functionality to the platform. One of the intended features is increased support for APIs at the BCL level — enough to bring .NET Core in line with the .NET Framework as well as Mono. In addition, .NET Core 1.1 will transition the platform's default build system and project model to MSBuild and csproj. The .NET Core roadmap on GitHub also cites changes in middleware and Azure integration as goals for the 1.1 release. These features are just a small subset of the planned changes for .NET Core, driven by natural goals for its development as well as contributions from .NET developers.

10. The .NET Core Platform Is Part of a Digital Transformation

This uniquely conceived and crafted platform for software development is far more than just a new tool for application developers. It represents a much larger shift in technology — one in which you can more easily deploy applications to multiple platforms by using the same initial framework and tools. This is a big change from the traditionally fragmented implementation of the .NET Framework across various platforms — or even across different applications on the same platform.

This addition to software development puts more freedom and control into your hands while you develop, especially when it comes to deploying and updating .NET Core applications in the way that you choose. Although quite new and destined to undergo significant changes in the near future, .NET Core should definitely be a tool of interest to all developers, as it takes the field of programming in an exciting direction.

Learn more

Find out how the AppDynamics .NET application monitoring solution can help you today.

Top Performance Metrics for Java, .NET, PHP, Node.js, and Python

No two applications are the same. Some legacy apps were built in a monolithic environment on a homogeneous language, say Java or .NET. As environments have become more distributed and technology has innovated at near-breakneck speed, application architectures tend to be built using a multitude of languages, often leveraging more dynamic languages for specific use cases.

Luckily, these distributed and extremely complex environments are where AppDynamics thrives with monitoring. AppDynamics supports Java, .NET, PHP, Node.js, Python, C/C++, and any combination of them — fitting nearly any environment.

After speaking with several customers and analyzing their performance, we’ve compiled a list of the most common performance problems for each language and the performance metrics to help measure your application health.

Below, we’ve compiled a brief summary of our findings and link to the full analysis in the respective complimentary eBooks.

Top Java Performance Metrics

Java remains one of the most widely used languages in enterprise applications. Widespread as it is, however, it's a clunky legacy language that can often suffer from performance issues.

Along with monitoring external dependencies and garbage collection, and having a solid caching strategy, it's important to measure business transactions. We define a business transaction as any end-user interaction with the application, such as adding an item to a cart or logging in. It's vital to measure the response times of these transactions to fully understand your user experience. If a transaction takes longer than the norm, it's important to resolve the issue as quickly as possible to maintain an optimal user experience.
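
As a rough, hypothetical illustration of the idea (not how the AppDynamics agent itself works), here's a minimal sketch that times a transaction and flags anything slower than an assumed 500 ms baseline; the concept is language-agnostic, and we use C# here for consistency with the rest of this post:

using System;
using System.Diagnostics;
using System.Threading;

class BusinessTransactionTimer
{
    // Assumed baseline; a real baseline would be derived from historical data.
    static readonly TimeSpan Baseline = TimeSpan.FromMilliseconds(500);

    static void Main()
    {
        var stopwatch = Stopwatch.StartNew();

        Thread.Sleep(120); // stand-in for a real transaction such as add-to-cart

        stopwatch.Stop();
        var status = stopwatch.Elapsed > Baseline ? "SLOW" : "OK";
        Console.WriteLine($"{status}: add-to-cart took {stopwatch.ElapsedMilliseconds} ms");
    }
}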

Read the full eBook, Top 5 Java Performance Metrics, Tips & Tricks here.

Top .NET Performance Metrics

There are times in your application code when you want to ensure that only a single thread can execute a subset of code at a time. Examples include accessing shared software resources, such as a single-threaded rule execution component, and shared infrastructure resources, such as a file handle or a network connection. The .NET framework provides different types of synchronization strategies, including locks/monitors, inter-process mutexes, and specialized locks like the Reader/Writer lock.

Regardless of why you synchronize your code or which mechanism you choose, you are left with the same constraint: a portion of your code can only be executed by one thread at a time.
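
As a hypothetical sketch of one such strategy, here's how a shared cache might be guarded in C# with ReaderWriterLockSlim, which lets concurrent readers proceed while writers remain exclusive (a plain lock statement would serialize readers, too):

using System.Collections.Generic;
using System.Threading;

class PriceCache
{
    private readonly Dictionary<string, decimal> _prices =
        new Dictionary<string, decimal>();
    private readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();

    public decimal? Get(string sku)
    {
        _rwLock.EnterReadLock(); // many readers may hold this simultaneously
        try
        {
            return _prices.TryGetValue(sku, out var price)
                ? price
                : (decimal?)null;
        }
        finally { _rwLock.ExitReadLock(); }
    }

    public void Set(string sku, decimal price)
    {
        _rwLock.EnterWriteLock(); // writers block all readers and writers
        try { _prices[sku] = price; }
        finally { _rwLock.ExitWriteLock(); }
    }
}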

In addition to synchronization and locking, make sure to measure excessive or unnecessary logging, code dependencies, and underlying database and infrastructure issues.

Read the full eBook, Top 5 .NET Performance Metrics, Tips & Tricks here.

Top PHP Performance Metrics

Your PHP application may be utilizing a backend database, a caching layer, or possibly even a queue server as it offloads I/O-intensive blocking tasks onto worker servers to process in the background. Whatever backend your PHP application interfaces with, the latency of those backend services can affect your PHP application's performance. The various types of internal exit calls may include:

  • SQL databases
  • NoSQL servers
  • In-memory cache
  • Internal services
  • Queue servers

In some environments, your PHP application may be interfacing with an obscure backend or messaging/queue server. For example, you may have an old message broker serving as an interface between your PHP application and other applications. While this message broker may be outdated, it is nevertheless part of an older architecture and of the ecosystem through which your distributed applications communicate.

Along with monitoring the internal dependencies, make sure you measure your business transaction response times (as described above) and external calls, and maintain an optimal caching strategy with full visibility into your application topology.

Read the full eBook, Top 5 PHP Performance Metrics, Tips & Tricks here.

Top Node.js Performance Metrics

In order to understand which metrics to collect surrounding Node.js event loop behavior, it helps to first understand what the event loop actually is and how it can potentially impact your application performance. For illustrative purposes, you may think of the event loop as an infinite loop executing code in a queue. For each iteration of the loop, the event loop executes a block of synchronous code. Node.js, being single-threaded and non-blocking, then picks up the next block of code, or tick, waiting in the queue and continues executing. Although it is a non-blocking model, various operations that could potentially be considered blocking include:

  • Accessing a file on disk
  • Querying a database
  • Requesting data from a remote webservice

With JavaScript (the language of Node.js), you can perform all your I/O operations with callbacks. This gives you the advantage of the execution stream moving on to other code while your I/O runs in the background. Node.js picks up the code waiting in the event queue, dispatches the I/O work to a thread from the available thread pool, and moves on to the next code in the queue. When the I/O completes, the callback runs to execute any additional code and eventually complete the transaction.

In addition to event loops, make sure to monitor external dependencies, memory leaks, and business transaction response times, and maintain a full and complete view of your application topology.

Read the full eBook, Top 5 Node.js Performance Metrics, Tips & Tricks here.

Top Python Performance Metrics

It is always faster to serve an object from memory than it is to make a network call to retrieve the object from a system like a database; caches provide a mechanism for storing object instances locally to avoid this network round trip. But caches can present their own performance challenges if they are not properly configured. Common caching problems include:

  • Loading too much data into the cache
  • Not properly sizing the cache

When measuring the performance of a cache, you need to identify the number of objects loaded into the cache and then track the percentage of those objects that are being used. The key metrics to look at are the cache hit ratio and the number of objects being ejected from the cache. The cache hit ratio reports the portion of object requests that are served from the cache rather than requiring a network trip to retrieve the object. If the cache is huge, the hit ratio is tiny (under 10% or 20%), and you are not seeing many objects ejected from the cache, then this is an indicator that you are loading too much data into the cache. In other words, your cache is large enough that it is not thrashing and contains a lot of data that is not being used.
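
Although this eBook focuses on Python, the metric itself is language-agnostic; here's a minimal sketch of the counters behind that heuristic, kept in C# for consistency with the rest of this post (all names are illustrative):

using System;

class CacheMetrics
{
    // A real cache would increment these in its get/put/evict paths.
    public long Hits;
    public long Misses;
    public long Ejections;

    public double HitRatio =>
        Hits + Misses == 0 ? 0.0 : (double)Hits / (Hits + Misses);

    public void Report()
    {
        Console.WriteLine($"Hit ratio: {HitRatio:P1}, ejections: {Ejections}");

        // A tiny hit ratio with few ejections suggests an oversized cache
        // full of data that is never read.
        if (HitRatio < 0.2 && Ejections == 0)
            Console.WriteLine("Warning: cache may be loaded with unused data.");
    }
}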

In addition to measuring your caching, also monitor your external calls, application visibility, and internal dependencies.

Read the full eBook, Top 5 Python Performance Metrics, Tips & Tricks here.

To recap, if you’d like to read our language-specific best practices, please click on one of the links below: