The AppD Approach: Monitoring a Docker-on-Windows App

Here at AppDynamics, we’ve developed strong support for .NET, Windows, and Docker users. But something we haven’t spent much time documenting is how to instrument a Docker-on-Windows app. In this blog, I’ll show you how straightforward it is to get one up and running using our recently announced micro agent. Let’s get started.

Sample Reference Application

Provided with this guide is a simple ASP.NET MVC template app running on the full .NET Framework. The sample application link is provided below:

source.zip

If you have your own source code, feel free to use it.

Guide System Information

This guide was written and built on the following platform:

  • Windows Server 2016 Build 14393.rs1_release.180329-1711 (running on VirtualBox)

  • AppDynamics .NET Micro Agent Distro 4.4.3

Prerequisite Steps

Before instrumenting our sample application, we first need to download and extract the .NET micro agent. This step assumes you are not using an IDE such as Visual Studio and are working manually on your local machine.

Step 1: Get NuGet Package Explorer

If you already have a way to view and/or download NuGet packages, skip this step. There are many ways to extract and view a NuGet package, but one method is with a tool called NuGet Package Explorer, which can be downloaded here.

Step 2: Download and Extract the NuGet Package

We’ll need to download the appropriate NuGet package to instrument our .NET application.

  1. Go to https://www.nuget.org/

  2. Search for “AppDynamics”

  3. The package we need is called “AppDynamics.Agent.Distrib.Micro.Windows.”

  4. Click “Manual Download,” or fetch the package with your usual NuGet tooling.

  5. Now open the package with NuGet Package Explorer.

  6. Choose “Open a Local Package.”

  7. Find the location of your downloaded NuGet package and open it. You should see the screen below:

  8. Choose “File” and “Export” to export the NuGet package to a directory on your local machine.

  9. Navigate to the directory where you exported the NuGet package, and confirm that you see this:

Step 3: Create Directory and Configure Agent

Now that we’ve extracted our NuGet package, we will create a directory structure to deploy our sample application.

  1. Create a local directory somewhere on your machine. For example, I created one on my Desktop:
    C:\Users\Administrator\Docker Sample\

  2. Navigate to the directory created in Step 1, create a subfolder called “source”, and add the sample application code provided above (or your own source code) to this directory. If you used the sample source provided, you’ll see this:

  3. Go back to the root directory and create a directory called “agent”.

  4. Add the AppDynamics micro agent components you extracted earlier to this directory.

  5. Edit “AppDynamicsConfig.json” and add your controller and application information:

{
  "controller": {
    "host": "",
    "port": ,
    "account": "",
    "password": "",
    "ssl": false,
    "enable_tls12": false
  },
  "application": {
    "name": "Sample Docker Micro Agent",
    "tier": "SampleMVCApp"
  }
}
  6. Navigate to the root of the folder, create a file called “Dockerfile”, and add the following text:

Sample Docker Config

FROM microsoft/iis
SHELL ["powershell"]

RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
   Install-WindowsFeature Web-Asp-Net45

ENV COR_ENABLE_PROFILING="1"
ENV COR_PROFILER="{39AEABC1-56A5-405F-B8E7-C3668490DB4A}"
ENV COR_PROFILER_PATH="C:\appdynamics\AppDynamics.Profiler_x64.dll"

RUN mkdir C:\webapp
RUN mkdir C:\appdynamics

RUN powershell -NoProfile -Command \
  Import-module IISAdministration; \    
  New-IISSite -Name "WebSite" -PhysicalPath C:\webapp -BindingInformation "*:8000:" 

EXPOSE 8000

ADD agent /appdynamics
ADD source /webapp

RUN powershell -NoProfile -Command Restart-Service wmiApSrv
RUN powershell -NoProfile -Command Restart-Service COMSysApp

Here’s what your root folder will now look like:

Building the Docker Container

Now let’s build the Docker container.

  1. Open a PowerShell terminal and navigate to the location of your Docker sample app. In this example, I’ll call my image “appdy_dotnet,” but feel free to use a different name if you desire.

  2. Run the following command to build the Docker image:

docker build --no-cache -t appdy_dotnet .

  3. Now run the container:

docker run --name appdy_dotnet -d appdy_dotnet ping -t localhost

  4. Log in to the container via PowerShell/cmd:

docker exec -it appdy_dotnet cmd

  5. Get the container IP by running the “ipconfig” command:
C:\ProgramData\AppDynamics\DotNetAgent\Logs>ipconfig
Windows IP Configuration


Ethernet adapter vEthernet (Container NIC 69506b92):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::7049:8ad9:94ad:d255%17
   IPv4 Address. . . . . . . . . . . : 172.30.247.210
   Subnet Mask . . . . . . . . . . . : 255.255.240.0
  6. Copy the IPv4 address, append port 8000, and request the URL from a browser. You should see the site shown below: a simple ASP.NET MVC template app that ships with Visual Studio. In our example, the address would be:

http://<ipv4-address>:8000

Here’s what the application would look like:

  7. Generate some load in the app by clicking the Home, About, and Contact tabs. Each will be registered as a separate business transaction.

Killing the Container (Optional)

If you run into errors and want to rebuild the container, here are some helpful commands for stopping and removing the container and image.

  1. Stop the container:

docker stop appdy_dotnet

  2. Remove the container:

docker rm appdy_dotnet

  3. Remove the image:

docker rmi appdy_dotnet
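
To tear everything down in one go, the three commands can also be chained on a single PowerShell line (assuming you kept the “appdy_dotnet” name used above):

docker stop appdy_dotnet; docker rm appdy_dotnet; docker rmi appdy_dotnet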

Verify Successful Configuration via Controller

Log in to your controller and verify that you are seeing load. If you used the sample app, you’ll see the following info:

Application Flow Map

Business Transactions

Tier Information


As you can see, it’s fairly easy to instrument a Docker-on-Windows app using AppDynamics’ recently announced micro agent. To learn more about AppD’s powerful approach to monitoring .NET Core applications, read this blog from my colleague Meera Viswanathan.

The AppD Approach: How to Monitor .NET Core Apps

For the past few months we’ve been collecting customer feedback on a new agent designed specifically to monitor microservices built with .NET Core. As I discussed a few weeks ago in my post “The Challenges of App Monitoring with .NET Core,” the speed and portability that make .NET Core a popular choice for companies seeking to more fully embrace the world of complex, containerized applications placed new demands on monitoring solutions.

Today we’re announcing the general availability of the AppDynamics .NET Core agent for Windows. Please stay tuned for news about a native C++-based Linux agent we are working on, as well. Our goal is to design agents that address the three biggest challenges of monitoring .NET Core: performance, flexibility, and functionality. As companies modernize monolithic applications and increasingly shift parts of their IT infrastructure to the cloud, these agents will ensure deep visibility into rapidly evolving production, testing, and development environments.

In this blog post, I’d like to share some of the considerations that went into the choices we made in architecting the new agents. It was extremely important to our engineering team to create an agent that is as light-weight and reliable as the microservices and containers we monitor without compromising functionality. One change we made was removing the Windows Service that required a machine-level install, which increased reliability and freed up CPU and considerable memory (70 MB). In addition, the new .NET Core agents for Windows require just half the disk space of traditional .NET agents and consist of only two DLLs and two configuration files.

Our approach to monitoring .NET Core recognizes that the deployment of .NET Core applications is fundamentally different from those built with the full .NET framework. In Windows environments deployment was dependent on both the framework and the machine, and our agent was installed using the traditional Windows installer (via MSI files). In contrast, the advantage of .NET Core is that it runs on a variety of platforms and runtimes.

Last year, our team decided to mirror .NET Core’s flexibility in deployment. Unlike some other app monitoring solutions, the AppDynamics .NET Core agents reside next to the application. This architecture means containers can be spun up, spun down, or moved around without affecting visibility. Operations engineers can integrate AppDynamics in any way that makes sense, while developers are able to leverage NuGet package-management tools. The pipeline for deploying and installing the agents on each platform is the same as for deploying applications and microservices there. For example, agents can be deployed with Azure Site Extensions for Azure or buildpacks for Pivotal Cloud Foundry (available soon). In the case of Docker, the agents can be embedded in a Docker image; engineers set a few environment variables, and monitoring then proceeds automatically (a rough sketch follows below). During our recent beta it was great to see our customers deploying the AppDynamics .NET Core agents to Docker, Azure App Services, Azure Service Fabric, Pivotal Cloud Foundry, and other environments.
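
As a rough illustration of what setting “a few environment variables” can look like for a containerized .NET Core service, here is a hypothetical Dockerfile fragment. The CORECLR_* variables are the standard .NET Core profiler hooks; the GUID and agent path are placeholders reused from the full-framework example earlier in this post, so confirm the exact values against the agent documentation for your version.

# Hypothetical sketch only: enable the CoreCLR profiler hooks for the agent
ENV CORECLR_ENABLE_PROFILING="1"
ENV CORECLR_PROFILER="{39AEABC1-56A5-405F-B8E7-C3668490DB4A}"
ENV CORECLR_PROFILER_PATH="C:\appdynamics\AppDynamics.Profiler_x64.dll"

# Copy the agent files into the image next to the application
ADD agent /appdynamics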

How it works

The .NET Core agents deliver all the functionality and automation you expect from AppDynamics. The agents auto-detect apps, which in the case of .NET Core could be running on Kestrel or WebListener. The agents then talk to the AppDynamics Controller, providing everything from business and performance-related transaction metrics to errors and health alerts.

Similar to the traditional .NET agent, the new .NET Core agents are particularly suited to monitoring the asynchronous transactions that often characterize microservices. We automatically instrument asynchronous apps and provide deep visibility at the code level with built-in visualizations such as snapshots and full-stack call graphs that include unrestricted views into the ASP.NET Core middleware.

Although certain Windows-environment specific machine metrics like performance counters are not available to the new .NET Core agents due to the new cross-platform architecture, as I previously discussed, AppDynamics continues to provide cross-stack and full-stack visibility by automatically correlating the metrics collected by the .NET Core agents with infrastructure and end-user metrics. This allows transactions to be traced from an end user to an application or microservice through databases such as Azure SQL, SQL Server, and MongoDB, across distributed tiers, and back to the end user, automatically discovering dependencies and identifying anomalies along the way. These unified full-stack and cross-stack topologies are critical to developing and deploying microservices that are responsive to business needs.

Drive business outcomes

AppDynamics’ Business iQ connects application performance with business results using a variety of data collectors to pull detailed, real-time information on everything from users to pricing. With the new .NET Core agents, it is even easier to create contextual information points to collect custom data from microservices. Thanks to run-time reinstrumentation, engineers can make changes in existing information points without restarting the microservice.

Customers have asked whether this functionality will be available in hybrid environments. Yes, this is one of the great advantages of the new .NET Core agents. Customers will have visibility into the performance of their business and their applications across on-premises installations running on the full .NET framework and .NET Core applications running on the Azure cloud or other public clouds. Just as .NET Core seeks to enable microservices to move between platforms, AppD is continually working to provide complete, end-to-end visibility into apps and microservices wherever they are running and regardless of the underlying technologies.

It is worth acknowledging that the first generation of .NET Core monitoring tools is shipping with a tradeoff between ease of deployment on one hand and performance and reliability on the other. Some vendors, especially those who shipped early, emphasize the simplicity and speed of their agents. Deploying AppD’s agents does involve more than one step. However, customers assure us that the reliability of our agents, combined with their lack of overhead, more than compensates for the small upfront investment made in deployment. In the meantime, our engineering teams remain hard at work tuning and automating deployment and installation processes.

The AppD approach to monitoring .NET Core apps illustrates the importance of a unified solution for maintaining full-stack and cross-stack visibility. The ultimate goal of monitoring is to improve business performance. Ideally, performance issues and potential problems are automatically detected before they affect business goals. Achieving this requires real-time data collection on-premises, on IoT devices, and across clouds. It depends on the continuous monitoring of everything—applications, containers, microservices, machines, and databases—as well as on the continuous improvement of AI and machine learning algorithms. Our new agents represent one more step in this exciting journey. Onward!

The Challenges of App Monitoring with .NET Core

The evolution of software development from monolithic on-premises applications to containerized microservices running in the cloud took a major step forward last summer with the release of .NET Core 2. As I wrote in “Understanding the Momentum Behind .NET Core,” the number of developers using .NET Core recently passed the half-million mark. Yet in the rush to adoption, many developers have encountered a speed bump. It turns out the changes that make .NET Core so revolutionary create new challenges for application performance monitoring.

Unlike .NET apps that run on top of IIS and are tied to Windows infrastructure, microservices running on .NET Core can be deployed anywhere. The customers I’ve spoken with are particularly interested in deploying microservices on Linux systems, which they believe will deliver the greatest return on investment. But that flexibility comes at a cost.

When operations engineers move .NET applications to .NET Core they are seeking fully functional, performant environments that are designed for a microservice. What they are finding is that the .NET Core environment requirements vary substantially from the environments that the full framework runs on. While .NET Core’s independence from IIS and Windows machines provides flexibility, it also means that some performance tools for system metrics may no longer be relevant.

Engineers who are used to debugging apps in a traditional Windows environment find that valuable tools like Event Tracing for Windows (ETW) and performance counters are not consistently available. For example, an on-premises Windows machine allows you to read performance counters while Azure WebApps on Windows only provides access to application-specific performance counters. Neither ETW nor performance counters are available on Linux, so if you want to deploy an ASP.NET Core microservice on Linux you will need to modify your method of collecting system-level data.
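
For teams losing Windows performance counters, a few coarse system-level numbers can still be collected portably from inside the process itself. The sketch below uses only cross-platform BCL APIs (System.Diagnostics and the GC class), so it runs unchanged on Windows and Linux; it is a minimal illustration, not a replacement for a full monitoring agent.

using System;
using System.Diagnostics;

class RuntimeStats
{
    static void Main()
    {
        var proc = Process.GetCurrentProcess();

        // Process-level metrics that work on both Windows and Linux under .NET Core
        Console.WriteLine($"Working set (bytes): {proc.WorkingSet64}");
        Console.WriteLine($"Total CPU time: {proc.TotalProcessorTime}");

        // Managed-heap metrics from the garbage collector
        Console.WriteLine($"GC heap size (bytes): {GC.GetTotalMemory(false)}");
        Console.WriteLine($"Gen 0/1/2 collections: {GC.CollectionCount(0)}/{GC.CollectionCount(1)}/{GC.CollectionCount(2)}");
    }
}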

In creating .NET Core and the ASP.NET Core framework, Microsoft made improving performance a top priority. One of the biggest changes was replacing the highly versatile but comparatively slow IIS web server with Kestrel, a stripped-down, cross-platform web server. Unlike IIS, Kestrel does not maintain backwards compatibility with a decade and a half of previous development, and it is specifically suited to the smaller environments that characterize microservices development and deployment. Open-source, event-driven, and asynchronous, Kestrel is built for speed. But the switch from IIS to Kestrel is not without tradeoffs. Tools we relied on before, like IIS Failed Request Tracing, don’t consistently work. The fact is, Kestrel is more of an application server than a web server, and many organizations will want to use a full-fledged web server like IIS, Apache, or Nginx in front of it as a reverse proxy. This means engineers now have to familiarize themselves with the performance tools, logging, and security setup for these technologies.
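
To make the IIS-to-Kestrel shift concrete, here is a minimal ASP.NET Core 2.x entry point (a sketch, not production code). Kestrel is the default server wired up by WebHost.CreateDefaultBuilder; in most deployments you would still place IIS, Nginx, or Apache in front of it as a reverse proxy.

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

public class Program
{
    public static void Main(string[] args)
    {
        // CreateDefaultBuilder configures Kestrel as the web server.
        WebHost.CreateDefaultBuilder(args)
            .Configure(app => app.Run(context =>
                context.Response.WriteAsync("Hello from Kestrel")))
            .Build()
            .Run();
    }
}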

Beyond monitoring web servers, developers need performance metrics for the entire platform where a microservice is deployed—from Azure and AWS to Google Cloud Platform and Pivotal Cloud Foundry, not to mention additional underlying technologies like Docker. The increase in platforms has a tendency to add up to an unwelcome increase in monitoring tools.

At the same time, the volume, velocity, and types of data from heterogeneous, multi-cloud, microservices-oriented environments is set to increase at exponential rates. This is prompting companies who are adopting .NET Core and microservices to take a hard look at their current approach to application monitoring. Most are concluding that the traditional patchwork of multiple tools is not going to be up to the task.

While application performance monitoring has gotten much more complex with .NET Core, the need for it is even more acute. Migrating applications from .NET without appropriate monitoring solutions in place can be particularly risky.

One key concern is that not all .NET Framework functionality is available in .NET Core, including .NET Remoting, Code Access Security, and AppDomains. Equivalents are available in ASP.NET Core, but they require code changes by a developer. Likewise, HTTP handlers and other IIS tools must be reimplemented in ASP.NET Core’s simplified middleware pipeline to ensure that the logic remains part of an application as it is migrated from .NET to .NET Core.
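
As a simple illustration of that middleware pipeline, logic that previously lived in an IIS HTTP module or handler becomes an inline component registered in Startup.Configure. The request-tagging step below is hypothetical and included only to show the shape of the API.

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Runs for every request, where an IIS module or handler once would.
        app.Use(async (context, next) =>
        {
            context.Response.Headers["X-Request-Id"] = Guid.NewGuid().ToString();
            await next();
        });

        // Terminal middleware: the end of the pipeline.
        app.Run(context => context.Response.WriteAsync("Handled by middleware"));
    }
}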

Not all third-party dependencies have a .NET Core-compatible release. In some cases, developers may be forced to find new libraries to address an application’s needs.

Given all of the above, mistakes in migration are possible. There may be errors in third-party libraries, functionality may be missing, and key API calls may cause errors. Performance tools are critical in helping this migration by providing granular visibility into the application and its dependencies. Problems can thus be identified earlier in the cycle, making the transition smoother.

AppDynamics has been tackling the challenges outlined in this post for more than a year. A beta release of support for .NET Core 2.0 on Windows became available in January, and we’ll have more news going forward.

Please stay tuned for my next blog post about AppDynamics’ approach to app monitoring with .NET Core.

Understanding the Momentum Behind .NET Core

Three years ago Satya Nadella took over as CEO of Microsoft, determined to spearhead a renewal of the iconic software maker. He laid out his vision in a famous July 10, 2014 memo to employees in which he declared that “nothing was off the table” and proclaimed his intention to “obsess over reinventing productivity and platforms.”

How serious was Nadella? In the summer of 2016, Microsoft took the bold step of releasing .NET Core, a free, cross-platform, open-source version of its globally popular .NET development platform. With .NET Core, .NET apps could run natively on Linux and macOS as well as Windows.

For customers .NET Core solved a huge problem of portability. .NET shops could now easily modernize monolithic on-premises enterprise applications by breaking them up into microservices and moving them to cloud platforms like Microsoft Azure, Amazon Web Services, or Google Cloud Platform. They had been hearing about the benefits of containerization: speed, scale and, most importantly, the ability to create an application and run it anywhere. Their developers loved Docker’s ease of use and installation, as well as the automation it brought to repetitive tasks. But just moving a large .NET application to the cloud had presented daunting obstacles. The task of lifting and shifting the large system-wide installations that supported existing applications consumed massive amounts of engineering manpower and often did not deliver the expected benefits, such as cost savings. Meanwhile, the dependency on the Windows operating system limited cloud options, and microservices remained a distant dream.

.NET Core not only addressed these challenges, it was also ideal for containers. In addition to starting a container with an image based on Windows Server, engineers could also use much smaller Windows Nano Server images or Linux images. This meant engineers had the freedom of working across platforms. They were no longer required to deploy server apps solely on Windows Server images.

Typically, the adoption of a new developer platform would take time, but .NET Core experienced a large wave of early adoption. Then, in August 2017, .NET Core 2.0 was released, and adoption increased exponentially. The number of .NET Core users reached half a million by January 2018. By achieving almost full feature parity with .NET Framework 4.6.1, .NET Core 2.0 took away all the pain that had previously existed in shifting from the traditional .NET Framework to .NET Core. Libraries that hadn’t existed in .NET Core 1.0 were added to .NET Core 2.0. Because .NET Core implemented all 32,000 APIs in .NET Standard 2.0, most applications could reuse their existing code.

Engineering teams who have struggled with DevOps initiatives have found that .NET Core allows them to accelerate their move to microservices architectures and to put in place a more streamlined path from development to testing and deployment. Lately, hiring managers have started telling their recruiters to be sure to mention the opportunity to work with .NET Core as an enticement to prospective hires—something that never would have happened with .NET.

At AppDynamics, we’re so excited about the potential of .NET Core that we’ve tripled the size of the engineering team working on .NET. And, just last month, we announced a beta release of support for .NET Core 2.0 on Windows using the new .NET micro agent introduced in our Winter ‘17 product release. This agent provides improved microservices support as more customers choose .NET Core to implement multicloud strategies. Reach out to your account team to participate in this beta.

Stay tuned for my next blog posts on how to achieve end-to-end visibility across all your .NET apps, whether they run on-premises, in the cloud, or in multi-cloud and hybrid environments.

Top 5 Conferences for .NET Developers

Partly due to the influence of software giant Microsoft, the .NET community is expansive. Developers, programmers and IT decision makers regularly meet at .NET conferences to share news, information and ideas to help each other keep up with the rapid digital transformation in today’s IT landscape. Here are five .NET conferences you should consider attending to advance your knowledge, skills and career growth.

Build

Build is Microsoft’s annual convention geared toward helping software and web programmers learn about the latest developments in .NET, Azure, Windows and related technologies. It began in 2011, and the 2017 conference will run May 10 – 12 at the Washington State Convention Center in Seattle, WA. Don’t get your hopes up too high, though: registration has already closed because the conference is sold out. However, there is a wait list if you’re hoping the stars align in your favor and you’re granted admission.

Build takes over for the now-defunct Professional Developers Conference, which focused on Windows, and MIX, which centered on developing web apps using Silverlight and ASP.NET. For 2017, major topic themes included .NET Standard Library, Edge browser, Windows Subsystem for Linux, ASP.NET Core and Microsoft Cortana. Sessions included debugging tricks for .NET using Visual Studio, a look at ASP.NET Core 1.0, deploying ASP.NET Core apps, a deep dive into MVC with ASP.NET Core, Entity Framework Core 1.0, a .NET overview, and creating desktop apps with Visual Studio vNext.

Reviews of the prior years were positive. One reviewer appreciated the introduction of a BASH Shell, the first environment that allowed cross-platform Windows developers to code completely in Windows without resorting to Linux or Mac OS X machines. Another commented that they liked getting Xamarin, a set of developer tools, for free, saving them hundreds of dollars. Both these moves were strong indicators of Microsoft’s re-commitment to developers as it embraces our new multi-platform world encompassing open-source and proprietary programs side by side.

DEVintersection

This year’s DEVintersection will be staged at the Walt Disney World Swan and Dolphin Resort in Lake Buena Vista, Florida, on May 21 – 24, 2017. This is the fifth year for the conference, which brings together engineers and key leaders from Microsoft with a variety of industry professionals. The goal is to help attendees stay on top of developments such as ASP.NET Core, Visual Studio, SQL Server, SharePoint and Windows 10.

With ASP.NET, as well as .NET, moving to open-source status, it is another sign Microsoft is further encouraging open source as a preeminent approach in web development. You will learn skills to tackle the transition to open source and handle the concomitant issues that come with that move. The IT landscape continues to shift and evolve, and software developers need to consider a wide variety of challenges, such as microservices, the cloud and containerization.

Major conference tracks include Visual Studio interaction, Azure intersection, ASP.NET intersection, IT EDGE intersection, Emerging Experiences and SQL intersection. IT Edge is a co-located event — attendees can take part in sessions from different tracks for no extra charge. There will be ten workshops lasting throughout the day for the four days of the conference. More than 40 sessions are focused on a number of technology topics, with the goal to give you techniques and skills you can use right away in your day-to-day work.

This year, expect to see plenty of discussion around designing for scale, performance monitoring, the cloud, troubleshooting, and the features and benefits of the 2012, 2014 and 2016 editions of SQL Server. Are you considering migrating from SQL Server 2008 all the way to 2016 in one go? You’ll get the feedback and advice you need to make these important decisions. Past attendees appreciated that every day of the conference started with a report from Microsoft specialists in the main hall. One reviewer called the session breakouts “involving and useful,” and another said the full-day workshops that ran before and after the main convention gave them both “practical and theoretical knowledge.”

Microsoft Ignite

Formerly known as TechEd, the conference was renamed Ignite by Microsoft in 2015. The original TechEd started in Orlando in 1993, and the last chapter of the series was staged in 2016 in Atlanta, Georgia. The 2017 Ignite conference is slated for September 25 – 29 in Orlando. Registration opens on March 28, so be sure to save the date. Registration sold out last year for the 2016 conference.

The Microsoft Data Science Summit will span two days during Ignite and is geared to engineers, data scientists, machine learning professionals and others interested in the latest in the world of analytics.

MS Ignite is for IT professionals, developers and managers. Decision makers can see what .NET advancements and developments are available, while developers can get information on how to implement those platforms in their current IT profile. There are presentations, breakout sessions and lab demonstrations. Microsoft .NET experts and community members alike meet to socialize, share news and evaluate the latest software defined tech. There are over 1000 Microsoft Ignite sessions to learn the latest developments in technology, each giving you a chance to meet face-to-face with industry experts.

For companies using .NET solutions, Ignite gives leaders and developers a chance to discuss current trends on the platform directly with Microsoft influencers. High-profile Microsoft attendees in the past have included Jeffrey Snover, the lead architect of Windows Server; Brad Anderson, Corporate VP of Enterprise Client and Mobility; and Mark Russinovich, Chief Technology Officer of Microsoft Azure.

IT/Dev Connections

Presented by Penton Media, the annual IT/Dev Connections conference is scheduled for October 23 – 26, 2017 at the Hilton Union Square in San Francisco. Topics to be covered include ASP.NET, Visual Studio, Azure, SQL Server, SharePoint, VMware and more. There are five main technical topic tracks with over 200 sessions, plus a sponsor track and a community track. Conference leaders known as Track Chairs hand-pick the best content and speakers. The goal is to omit any fluff and marketing hype, focusing only on high-value presenters and panelists. The five topic tracks are Cloud and Data Center; Enterprise Collaboration; Development and DevOps; Data Platform and Business Intelligence; and Enterprise Management, Mobility and Security.

Speakers at the 2017 conference include Windows technical specialist John Savill, data professional Tim Ford, and SharePoint expert Liam Cleary. A series of pre-conference workshops give developers and programmers a chance for one-on-one training. Workshops include troubleshooting ASP.NET web applications, mastering the SharePoint dev toolchain, and skill-building for ASP.NET Core with Angular 2. Other sessions include Azure for ASP.NET programmers, Dockerizing ASP.NET apps, and ASP.NET development without using Windows. The State of the Union session will discuss .NET from the desktop and mobile devices to the server.

The strength of the IT/Dev Connections conference is the focus on developers and programmers. Commercial interests are kept to a minimum, and speakers are vetted for the amount of take-away value in their presentations. Attendees from past events have lauded the “user focus” of the conference and “intensely personal” feel of the breakout sessions. In other events, session rooms may have hundreds of chairs, while sessions at IT/Dev Connections generally accommodate around 100 people, providing a more personal, hands-on feel to each session. The speakers are also well diversified among different sections of the developer community, including a number of MVP designated presenters.

Visual Studio Live

Visual Studio Live! events for 2017 are a series of conferences held throughout the year in cities around the country such as Las Vegas, Chicago, and Washington, D.C. The subtitle for the series is “Rock Your Code Tour.” The meetings give .NET developers and programmers a chance to level up their skills in Visual Studio, ASP.NET and more.

Visual Studio Live! focuses on practical training for developers using Visual Studio. For example, the Austin meeting is five days of education on the .NET Framework, JavaScript/HTML5, mobile computing and Visual Studio. There are more than 60 sessions conducted by Microsoft leaders and industry insiders. Topics to be covered include Windows Client, Application Lifecycle Management, Database and Analytics, Web Server and Web Client, Software Practices, and Cloud Computing.

If you participated in the previous Live! 360 program for discounted rates, be sure to reach out to the organizing committee as they do have special pricing for their most frequent customers.

Visual Studio Live! is known for its hands-on approach, with extensive workshops that give developers a deep dive into each topic. The workshops are featured throughout each day, so attendees have lots of opportunity to get targeted learning.

Attendees have responded enthusiastically to the co-located conference arrangement. One said it was an ideal chance to catch up with a number of technologies after being out of the tech world for a few years, and another lauded the enthusiasm of the speakers and workshop leaders.

There is a myriad of software development conferences that will help you grow as a .NET developer, DevOps thinker, or business influencer. Check out these five to see which one best fits your needs and goals.

Top 10 New Improvements Found in the .NET Framework Version 4.6.2

In the late 1990s, Microsoft began working on a general-purpose development platform that quickly became the infrastructure for building, deploying, and running a wide range of applications and services with relative ease while focusing on the Internet user experience. Then, in February 2002, Microsoft finally launched the first version of these shared technology resources, known during development as Next Generation Windows Services (NGWS), as the .NET Framework 1.0. With DLL libraries and object-oriented support for web app development, it was a digital transformation that introduced us all to managed code.

Although .NET Core has been a focus in recent years, work on the original .NET Framework has still progressed. In fact, on August 2, 2016, Microsoft announced the much-anticipated release of Version 4.6.2. According to MS, there are “dozens of bug fixes and improvements.” Actually, there are almost 14 dozen bug fixes — 166 to be exact — not to mention all the API changes. Moreover, many of the changes found in this new version were based on developer feedback. Needless to say, things have definitely improved. The following is a list of the top ten improvements found in .NET 4.6.2:

1. Windows Hello

The Windows 10 Anniversary Update was released the same day as the latest .NET Framework, and .NET 4.6.2 is already included with the anniversary update. Although it doesn’t show up as an installed application in “Programs and Features,” you can find it by searching for features and clicking “Turn Windows features on or off.” From there, you can adjust your features accordingly and select specific features by utilizing developer mode. The update also lets developers and programmers use Windows Hello in their apps. For example, third-party developers can now allow users to log in with their face or fingerprint with ease. Simply download the framework update.

2. Removal of Character Limits (BCL)

In the base class library, Microsoft removed the 260-character MAX_PATH limitation on NTFS paths in Windows. In addition, characters in .NET 4.6.2 are now classified based on the Unicode Standard, Version 8.0.0. You’re probably used to seeing the “path too long” error, especially with MSBuild definitions. The error details usually state something like:

TF270002: An error occurred copying files: The specified path, file name, or both are too long.

Or the error might state something similar to:

Unable to create folder. Filename or extension is too long.

Programs and server tools can also show problems in these areas, and solutions normally involve renaming something to fit within the limit. Usually not an issue for end users, this limitation is more common on developer machines that use specialized tools also running on Unix, or while building source trees. However, now that the MAX_PATH limitation has been removed, we may never have to see this error message again.

However, long paths are not yet enabled by default, so you need to set the policy to enable the support: “Enable Win32 long paths” (also seen as “Enable NTFS long paths”). Your app must also have a specific manifest setting (a configuration sketch follows below). In addition, the feature now supports long paths on any OS when you use the \\?\ syntax.
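
For a .NET 4.6.2 application, the configuration side of this typically means opting out of the legacy path-handling behavior. The app.config fragment below shows the commonly documented AppContext switches; treat the exact switch names as something to verify against Microsoft’s documentation for your target version.

<configuration>
  <runtime>
    <!-- Opt in to long path support in .NET 4.6.2 -->
    <AppContextSwitchOverrides value="Switch.System.IO.UseLegacyPathHandling=false;Switch.System.IO.BlockLongPaths=false" />
  </runtime>
</configuration>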

3. Debugging APIs (CLR)

The main adjustment to the CLR is that, if the developer chooses, null reference exceptions will now provide much more extensive debugging data. The unmanaged debugging APIs can request more information and perform additional analysis. A debugger can now determine which variable in a single line of source code is null, making your job a lot easier.

Microsoft’s release notes list the specific APIs that have been added to the unmanaged debugging API.

4. TextBoxBase Controls (WPF)

For security purposes, the copy and cut methods have been known to fail when they are called in partial trust. According to Microsoft, “developers can now opt-in to receiving an exception when TextBoxBase controls fail to complete a copy or cut operation.” Standard copy and cut through keyboard shortcuts, as well as the context menu, will still work the same way as before in partial trust.

5. Always Encrypted Enhancement (SQL)

Always Encrypted is a database engine feature designed to protect sensitive data, such as credit card numbers. The .NET Framework Data Provider for SQL Server contains two important enhancements for Always Encrypted, centered around performance and security:

  • Performance: Encryption metadata for query parameters is now cached. When metadata caching is enabled (the default), database clients retrieve parameter metadata from the server only once, even if the same query is called multiple times.

  • Security: Column encryption key entries in the key cache are now evicted after a configurable time interval, set using the SqlConnection.ColumnEncryptionKeyCacheTtl property. The default is two hours; a value of zero means no caching at all.
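
In code, both enhancements surface through System.Data.SqlClient. A rough sketch follows; the connection string and TTL value are illustrative only.

using System;
using System.Data.SqlClient;

class AlwaysEncryptedExample
{
    static void Main()
    {
        // Security: shorten how long column encryption keys stay in the cache (the default is two hours).
        SqlConnection.ColumnEncryptionKeyCacheTtl = TimeSpan.FromMinutes(30);

        // "Column Encryption Setting=Enabled" turns on Always Encrypted for this connection.
        var connectionString =
            "Server=myServer;Database=myDb;Integrated Security=true;Column Encryption Setting=Enabled";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // Parameterized queries against encrypted columns now benefit from cached query metadata.
        }
    }
}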

6. Best Match (WCF)

NetNamedPipeBinding has been upgraded to support a new pipe lookup called “Best Match.” When this option is used, the client searches for the service listening on the URI that best matches the requested endpoint, rather than connecting to the first matching service found. This matters because multiple WCF services frequently listen on named pipes, and with “First Match” some clients could end up connected to the wrong service. “First Match” remains the default; to enable “Best Match,” add the AppSetting shown below to the App.config or Web.config file of the client application.
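
Per Microsoft’s release notes, the opt-in is an appSetting in the client’s configuration file; the key below reflects the documented name, but verify it against the WCF documentation for your framework version.

<configuration>
  <appSettings>
    <!-- Switch NetNamedPipeBinding clients from "First Match" to "Best Match" URI lookup -->
    <add key="wcf:useBestMatchNamedPipeUri" value="true" />
  </appSettings>
</configuration>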

7. Converting to UWP

According to Developer Resources, Windows now offers capabilities to bring existing Windows desktop apps to the Universal Windows Platform. This includes WPF as well as Windows Forms apps. For example, WPF is a powerful framework and has become a mature and stable UI platform suitable for long-term development. However, it is also a complex beast at times, because it works differently from other GUI frameworks and has a steep learning curve. Microsoft, though, always seems to plan ahead, and that’s where converting to the Universal Windows Platform (UWP) comes in. This improvement enables you to gradually migrate your existing codebase to UWP, which, in turn, can help you bring your app to all Windows 10 devices. It also makes UWP APIs more accessible, allowing you to enable features such as Live Tiles and notifications.

8. ClickOnce

Designed long before the invention of the App Store, ClickOnce allows applications to be distributed via URLs. It can even self-update as new versions are released. Unfortunately, security has always been a big concern, and many DevOps teams have shown frustration over Microsoft’s slowness in adopting newer TLS standards. Finally, in addition to the TLS 1.0 protocol, ClickOnce now supports TLS 1.1 and TLS 1.2. In fact, ClickOnce will automatically detect which protocol to use, and no action is required to enable this feature.

9. SignedXml

An implementation of the W3C’s XML Digital Signature standard, SignedXml now supports the SHA-2 family of hashing algorithms.

The following signature methods and frequently used reference digest algorithms are now included:

  • RSA-SHA256

  • RSA-SHA384

  • RSA-SHA512 PKCS#1

For more information on these and other security concerns, along with update deployment and developer guidance, please see Microsoft Knowledge Base Article 3155464, as well as the MS Security Bulletin MS16-065.

10. Soft Keyboard Support

On previous versions of .NET, it wasn’t possible to utilize focus tracking without disabling WPF pen/touch gesture support. Developers were forced to choose between full WPF touch support or Windows mouse promotion. In .NET 4.6.2, soft keyboard support allows the use of the touch keyboard in WPF applications without disabling WPF stylus/touch support on Windows 10.

To find out which version of the .NET Framework is installed on a computer:

  1. Tap on the Windows key, type regedit.exe, and hit enter.

  2. Confirm the UAC prompt.

  3. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full

Check for a DWORD value labeled Release, which indicates you have the .NET Framework 4.5 or newer.
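
If you prefer not to open regedit, the same check can be scripted. The PowerShell one-liner below reads the Release value; a value of roughly 394802 or higher indicates 4.6.2 (the exact threshold varies slightly by operating system, so confirm against Microsoft’s version table).

(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release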

For all versions of the .NET Framework and its dependencies, please check the charts listed in the Microsoft library for more information.

If you want to have the complete .NET Framework set in your computer, you’ll need to install the following versions:

  • .NET Framework 1.1 SP1

  • .NET Framework 3.5 SP1

  • .NET Framework 4.6

The above list is only the tip of the iceberg when describing all the features and improvements that can be found in the .NET Framework Version 4.6.2. There are numerous security and crash fixes, added support, networking improvements, active directory services updates, and even typo correction in EventSource. Because Microsoft took user feedback into consideration, developers, programmers, and engineers may feel that Microsoft is finally listening to their needs and giving them a little more of what they want in their .NET Framework.

Learn more

Find out how AppDynamics .NET application monitoring solution can help you today.

10 Things You Should Know About Microsoft’s .NET Core 1.0

On June 27th, Microsoft announced the release of a project several years in the making — .NET Core. The solution resulted from the need for a nonproprietary version of Microsoft’s .NET Framework — one that runs on Mac and several versions of Linux, as well as on Windows. This cross-platform .NET product offers programmers new opportunities with its open-source design, flexible deployment, and command-line tools. These features are just part of what makes .NET Core an important evolution in software development. The following are ten key facts you should be aware of when it comes to Microsoft’s .NET Core 1.0 and its impact on software.

1. The .NET Core Platform Is Open-Source

.NET Core is part of the .NET Foundation, which exists to build a community around and innovate within the .NET development framework. The .NET Core project builds on these priorities, starting with its creation by both Microsoft’s .NET team and developers dedicated to the principles of open-source software.

Your advantages in using this open-source platform are many — you have more control in using and changing it, and transparency in its code can provide information and inspiration for your own projects based on .NET Core. In addition, .NET Core is more secure, since you and your colleagues can correct errors and security risks more quickly. Its open-source status also gives .NET Core more stability, because unlike that of proprietary software defined and later abandoned by its creators, the code behind this platform’s tools will always remain publicly available.

2. It Was Created and Is Maintained Through a Collaborative Effort

Related to its development using open-source design principles, the .NET Core platform was built with the assistance of about 10,000 developers. Their contributions included creating pull requests and issues, as well as providing feedback on everything from design and UX to performance.

By implementing the best suggestions and requests, the development team turned .NET Core into a community-driven platform, making it more accessible and effective for the programming community than if it had been created purely in-house. The .NET Core platform continues to be refined through collaboration as it is maintained by both Microsoft and GitHub’s .NET community. As a developer, you have the opportunity to influence the future advancement of .NET Core by working with its code and providing your feedback.

3. The Main Composition of .NET Core Includes Four Key Parts

The first essential aspect is a .NET runtime, which gives .NET Core its basic services, including a type system, garbage collector, native interop, and assembly loading. Secondly, primitive data types, app composition types, and fundamental utilities are provided by a set of framework libraries (CoreFX). Thirdly, the .NET Core developer experience is created by a set of SDK tools and language compilers that are part of .NET Core. Finally, the “dotnet” app host selects and hosts the runtime, allowing .NET Core applications to launch. As you develop, you’ll access .NET Core as the .NET Core Software Development Kit (SDK). This includes the .NET Core Command Line Tools, .NET Core itself, and the dotnet driver — everything you need to create a .NET Core application or a .NET Core library.

4. Flexible Deployment Means More Options for Using .NET Core

One of the defining features of .NET Core is its flexible deployment — you can install the platform either as part of your application or as a separate installation. Framework-dependent deployment (FDD) relies on .NET Core being present on the target system and has many advantages. With FDD, your deployment package is smaller, disk space and memory use are minimized on devices, and you can execute the .NET Core app on any operating system without building for each one in advance.

Self-contained deployment (SCD) packages all components (including .NET Core libraries and runtime) with your application, in isolation from other .NET Core applications. This type of deployment gives you complete control of the version of .NET Core used with your app and guarantees accessibility of your app on the target system. The unique characteristics of each deployment type ensure you can deploy .NET Core apps in a way that works best for your particular needs.
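
With the .NET Core CLI, the difference between the two deployment types mostly comes down to how you publish. A rough example follows; the runtime identifier win-x64 is illustrative, so pick the one matching your target, and note that self-contained publishing also requires the runtime identifier to be declared in the project file.

dotnet publish -c Release                # framework-dependent deployment (FDD)
dotnet publish -c Release -r win-x64     # self-contained deployment (SCD) for 64-bit Windows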

5. The .NET Core Platform Is a Cross-Platform Design

This unique software platform already runs on Windows, Mac OS X, and Linux, as its cross-platform nature was one of the main priorities for its development. While this may seem like a strange move for Microsoft, it’s an important one in a technological world that’s increasingly focused on flexibility and segmented when it comes to operating systems and platforms. .NET Core’s availability on platforms other than Windows makes it a better candidate for use by all developers, including Mac and Linux developers, and also gives the entire .NET framework the benefit of feedback and use from a much wider set of programmers. This additional feedback results in a product that works better for all of its users and makes the .NET Core platform a move forward for software-defined, rather than platform-defined applications.

6. Modular Development Makes .NET Core an Agile Development Tool

As part of its cross-compatibility design, the software development platform includes a modular infrastructure. It is released through NuGet, and you can access it as feature-based packages rather than one large assembly. As a developer, you can design lightweight apps that contain only the necessary NuGet packages, resulting in better security and performance for your app. The modular infrastructure also allows faster updates of the .NET Core platform, as affected modules can be updated and released on an individual basis. The focus on agility and fast releases, along with the aforementioned collaboration, positively positions .NET Core within the DevOps movement.

7. .NET Core Features Command-Line Tools

Microsoft states that .NET Core’s command-line tools mean that “all product scenarios can be exercised at the command-line.” The .NET Core Command Line Interface (CLI) is the foundation for high-level tools, such as integrated development environments, used for developing applications on this platform. Like the .NET Core platform, the CLI is cross-platform, so once you’ve learned the toolchain, you can use it the same way on any supported platform. The .NET Core CLI is also the basis for application portability, whether .NET Core is already installed on the target machine or the application ships self-contained. A minimal workflow is shown below.
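
A minimal end-to-end use of the CLI looks the same on every supported platform (shown here for a console app; exact template names and the need for an explicit restore step vary a little between SDK versions).

dotnet new console     # scaffold a new console project in the current directory
dotnet restore         # pull down NuGet dependencies
dotnet run             # build and run the app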

8. .NET Core Is Similar to .NET Framework

While .NET Core was designed to be an open-source, cross-platform version of the .NET Framework, there are differences between the two that go beyond those key features. Many of these comparisons result from the design itself as well as the relative newness of the .NET Core software development platform. App models built on Windows technologies are not supported by .NET Core, but console and ASP.NET Core app models are supported by both .NET Core and .NET Framework.

.NET Core has fewer APIs than the .NET Framework, but it will include more as it develops. Also, .NET Core only implements some of .NET Framework’s subsystems in order to maintain the simplified, agile design of the platform. These differences may limit the .NET Core platform in some ways now — however, the advantages of its cross-platform, open-source design should definitely outweigh any limitations as the platform is further enhanced.

9. The .NET Core Platform Is Still Under Construction

The nature of this software development platform makes it a work in progress, continually refined by both Microsoft’s .NET Core team and invested developers worldwide. The .NET Core 1.1 release, scheduled for this fall, is set to bring greater functionality to the platform. One of the intended features is an increase in support for APIs at the BCL level — enough to make .NET Core equal to the .NET Framework as well as Mono. In addition, .NET Core 1.1 will transition the platform’s default build system and project model to MSBuild and csproj. The .NET Core roadmap on GitHub also cites changes in middleware and Azure integration as goals for the 1.1 release. These features are just a small subset of the purported changes for .NET Core based on natural goals for its development as well as contributions from .NET developers.

10. The .NET Core Platform Is Part of a Digital Transformation

This uniquely conceived and crafted platform for software development is far more than just a new tool for application developers. It represents a much larger shift in technology — one in which you can more easily deploy applications to multiple platforms by using the same initial framework and tools. This is a big change from the traditionally fragmented implementation of the .NET Framework across various platforms — or even across different applications on the same platform.

This addition to software development puts more freedom and control into your hands while you develop, especially when it comes to deploying and updating .NET Core applications in the way that you choose. Although quite new and destined to undergo significant changes in the near future, .NET Core should definitely be a tool of interest to all developers, as it takes the field of programming in an exciting direction.

Learn more

Find out how AppDynamics .NET application monitoring solution can help you today.

Top Performance Metrics for Java, .NET, PHP, Node.js, and Python

No two applications are the same. Some legacy apps were built in a monolithic environment on a single, homogeneous language, say Java or .NET. As environments become more distributed and technology innovates at breakneck speed, application architectures tend to be built using a multitude of languages, often leveraging the more dynamic languages for specific use cases.

Luckily, these distributed and extremely complex environments are where AppDynamics thrives with monitoring. AppDynamics supports Java, .NET, PHP, Node.js, Python, C/C++, and any combination of them — fitting nearly any environment.

After speaking with several customers and analyzing their performance, we’ve compiled a list of the most common performance problems for each language and the performance metrics to help measure your application health.

Below, we’ve compiled a brief summary of our findings and link to the full analysis in the respective complimentary eBooks.

Top Java Performance Metrics

Java remains one of the most widely used programming languages in enterprise applications. However, despite being so widespread, it’s a clunky legacy language that can often have performance issues.

Along with monitoring external dependencies, monitoring garbage collection, and having a solid caching strategy, it’s important to measure business transactions. We define a business transaction as any end-user interaction with the application. These could include adding something to a cart, logging in, or any other interaction. It’s vital to measure the response times of these transactions to fully understand your user experience. If a response time takes longer than the norm, it’s important to resolve it as quickly as possible to maintain optimal user experience.

Read the full eBook, Top 5 Java Performance Metrics, Tips & Tricks here.

Top .NET Performance Metrics

There are times in your application code when you want to ensure that only a single thread can execute a subset of code at a time. Examples include accessing shared software resources, such as a single threaded rule execution component, and shared infrastructure resources, such as a file handle or a network connection. The .NET framework provides different types of synchronization strategies, including locks/monitors, inter-process mutexes, and specialized locks like the Reader/Writer lock.

Regardless of why you have to synchronize your code or of the mechanism you choose to synchronize your code, you are left with a problem: there is a portion of your code that can only be executed by one thread at a time.

In addition to synchronization and locking, make sure to measure excessive or unnecessary logging, code dependencies, and underlying database and infrastructure issues.

Read the full eBook, Top 5 .NET Performance Metrics, Tips & Tricks here.

Top PHP Performance Metrics

Your PHP application may be utilizing a backend database, a caching layer, or possibly even a queue server as it offloads I/O-intensive blocking tasks onto worker servers to process in the background. Whatever backend your PHP application interfaces with, the latency of these backend services can affect your PHP application’s performance. The various types of internal exit calls may include:

  • SQL databases
  • NoSQL servers
  • In-memory cache
  • Internal services
  • Queue servers

In some environments, your PHP application may be interfacing with an obscure backend or messaging/queue server. For example, you may have an old message broker serving as an interface between your PHP application and other applications. While this message broker may be outdated, it is nevertheless part of an older architecture and of the ecosystem through which your distributed applications communicate.

Along with monitoring these internal dependencies, make sure you measure your business transaction response times (as described above) and external calls, and have an optimal caching strategy with full visibility into your application topology.

Read the full eBook, Top 5 PHP Performance Metrics, Tips & Tricks here.

Top Node.js Performance Metrics

In order to understand what metrics to collect surrounding Node.js event loop behavior, it helps to first understand what the event loop actually is and how it can potentially impact your application performance. For illustrative purposes, you may think of the event loop as an infinite loop executing code in a queue. For each iteration within the infinite loop, the event loop executes a block of synchronous code. Node.js – being single-threaded and non-blocking – will then pick up the next block of code, or tick, waiting in the queue as it continues to execute more code. Although it is a non-blocking model, various events that could potentially be considered blocking include:

  • Accessing a file on disk
  • Querying a database
  • Requesting data from a remote webservice

With JavaScript (the language of Node.js), you can perform all your I/O operations with callbacks. This lets the execution stream move on to other code while your I/O runs in the background. Node.js picks up the code waiting in the event queue, executes it on a thread from the available thread pool, and then moves on to the next code in the queue. When the I/O completes, the callback is invoked to execute the additional code that eventually completes the entire transaction.

In addition to the event loop, make sure to monitor external dependencies, memory leaks, and business transaction response time, and maintain a complete view of your application topology.

Read the full eBook, Top 5 Node.js Performance Metrics, Tips & Tricks here.

Top Python Performance Metrics

It is always faster to serve an object from memory than it is to make a network call to retrieve the object from a system like a database; caches provide a mechanism for storing object instances locally to avoid this network round trip. But caches can present their own performance challenges if they are not properly configured. Common caching problems include:

  • Loading too much data into the cache
  • Not properly sizing the cache

When measuring the performance of a cache, you need to identify the number of objects loaded into the cache and then track the percentage of those objects that are being used. The key metrics to look at are the cache hit ratio and the number of objects being ejected from the cache. The cache hit ratio reports the percentage of object requests that are served from the cache rather than requiring a network trip to retrieve the object. If the cache is huge, the hit ratio is tiny (under 10% or 20%), and you are not seeing many objects ejected from the cache, then this is an indicator that you are loading too much data into the cache. In other words, your cache is large enough that it is not thrashing (see below) and contains a lot of data that is not being used.
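
As a quick worked example with illustrative numbers: if your application makes 10,000 object requests and 9,200 of them are served from the cache, the hit ratio is 9,200 / 10,000 = 92%. If that ratio instead sits below 10% or 20% while evictions remain near zero, the cache is holding far more data than it is actually serving.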

In addition to measuring your caching, also monitor your external calls, application visibility, and internal dependencies.

Read the full eBook, Top 5 Python Performance Metrics, Tips & Tricks here.

To recap, if you’d like to read our language-specific best practices, please click on one of the links below:

Top 5 Performance Problems in .NET Applications

The last couple of articles presented an introduction to Application Performance Management (APM) and identified the challenges of effectively implementing an APM strategy. This article builds on those topics by reviewing five of the top performance problems you might experience in your .NET application.

Specifically this article reviews the following:

  • Synchronization and Locking
  • Excessive or unnecessary logging
  • Code dependencies
  • Underlying database issues
  • Underlying infrastructure issues

1. Synchronization and Locking

There are times in your application code when you want to ensure that only a single thread can execute a subset of code at a time. Examples include accessing shared software resources, such as a single-threaded rule execution component, and shared infrastructure resources, such as a file handle or a network connection. The .NET framework provides different types of synchronization strategies, including locks/monitors, inter-process mutexes, and specialized locks like the Reader/Writer lock.

Regardless of why you have to synchronize your code or of the mechanism you choose to synchronize it, you are left with a problem: there is a portion of your code that can only be executed by one thread at a time. Consider a supermarket that has only a single cashier to check people out: multiple people can enter the store, browse for products, and add them to their carts, but at some point they will all line up to pay for the food. In this example, the shopping activity is multithreaded, and each person represents a thread. The checkout activity, however, is single threaded, meaning that every person must line up and pay for their purchases one at a time. This process is shown in figure 1.

Figure 1 Thread Synchronization

We have seven threads that all need to access a synchronized block of code, so one-by-one they are granted access to the block of code, perform their function, and then continue on their way.  

The process of thread synchronization is summarized in figure 2. 

Figure 2 Thread Synchronization Process

A lock is created on a specific object (System.Object derivative), meaning that when a thread attempts to enter the synchronized block of code it must obtain the lock on the synchronized object. If the lock is available then that thread is granted permission to execute the synchronized code. In the example in figure 2, when the second thread arrives, the first thread already has the lock, so the second thread is forced to wait until the first thread completes. When the first thread completes, it releases the lock, and then the second is granted access.

As you might surmise, thread synchronization can present a big challenge to .NET applications. We design our applications to be able to support dozens and even hundreds of simultaneous requests, but thread synchronization can serialize all of the threads processing those requests into a single bottleneck!

The solution is two-fold:

  • Closely examine the code you are synchronizing to determine if another option is viable
  • Limit the scope of your synchronized block

There are times when you are accessing a shared resource that must be synchronized, but there are many times when you can restate the problem in such a way that you can avoid synchronization altogether. For example, we were using a rules processing engine that had a single-threaded requirement that was slowing down all requests in our application. It was obviously a design flaw, so we replaced that library with one that could parallelize its work. You need to ask yourself if there is a better alternative: if you are writing to a local file system, could you instead send the information to a service that stores it in a database? Can you make objects immutable so that it does not matter whether or not multiple threads access them? And so forth…
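
As a hedged illustration of restating the problem, the sketch below replaces a lock-guarded dictionary with a ConcurrentDictionary holding an immutable value type, so the calling code needs no explicit synchronization at all (the Quote and QuoteCache names are purely illustrative):

using System.Collections.Concurrent;

// Immutable: once constructed, a Quote can be shared freely across threads.
public sealed class Quote
{
    public string Symbol { get; }
    public decimal Price { get; }
    public Quote(string symbol, decimal price) { Symbol = symbol; Price = price; }
}

public class QuoteCache
{
    // Thread-safe without any explicit lock in our code.
    private readonly ConcurrentDictionary<string, Quote> _quotes =
        new ConcurrentDictionary<string, Quote>();

    public void Update(Quote quote) => _quotes[quote.Symbol] = quote;

    public Quote Get(string symbol) =>
        _quotes.TryGetValue(symbol, out var quote) ? quote : null;
}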

For those sections of code that absolutely must be synchronized, choose your locks wisely. Your goal is to isolate the synchronized code block down to the bare minimum requirement for synchronization. It is typically best to define a specific object to synchronize on, rather than to synchronize on the object containing the synchronized code because you might inadvertently slow down other interactions with that object. Finally, consider when you can use a Read/Write lock instead of a standard lock so that you can allow reads to a resource while only synchronizing changes to the resource.
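
A minimal sketch of these guidelines, with illustrative type names: synchronize on a private object rather than on the containing instance, and keep only the shared-state mutation inside the lock.

using System.Collections.Generic;

public class Order { }
public class OrderRequest { }

public class OrderBook
{
    // Dedicated lock target; locking on "this" or on the list itself invites interference.
    private readonly object _ordersLock = new object();
    private readonly List<Order> _orders = new List<Order>();

    public void Add(OrderRequest request)
    {
        // Expensive, thread-local work stays outside the synchronized block.
        Order order = BuildAndValidate(request);

        lock (_ordersLock)
        {
            // Only the shared-state mutation is serialized.
            _orders.Add(order);
        }
    }

    private Order BuildAndValidate(OrderRequest request)
    {
        // Placeholder for per-request work that needs no synchronization.
        return new Order();
    }
}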

2. Excessive or Unnecessary Logging

Logging is a powerful tool in your debugging arsenal that allows you to identify abnormalities that might have occurred at a specific time during the execution of your application. It is important to capture errors when they occur and to gather as much contextual information as you can. But there is a fine line between succinctly capturing error conditions and logging excessively.

Two of the most common problems are:

  • Logging exceptions at multiple levels
  • Misconfiguring production logging levels

It is important to log exceptions so that you can understand problems that are occurring in your application, but a common problem is to log exceptions at every layer of your application. For example, you might have a data access object that catches a database exception and raises its own exception to your service tier. The service tier might catch that exception and raise its own exception to the web tier. If we log the exception at the data tier, service tier, and web tier, then we have three stack traces of the same error condition. This incurs additional overhead in writing to the log file and it bloats the log file with redundant information. But this problem is so common that I assert that if you examine your own log files, you'll probably find at least a couple of examples of this behavior.
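
One way to avoid the triple stack trace is sketched below with illustrative class names: the lower layer wraps and rethrows without logging, and only the outermost layer writes the exception (with its inner exception chain) to the log once.

using System;

public class DataAccessException : Exception
{
    public DataAccessException(string message, Exception inner) : base(message, inner) { }
}

public class CustomerRepository
{
    public string LoadCustomer(int id)
    {
        try
        {
            return QueryDatabase(id);
        }
        catch (Exception ex)
        {
            // Data tier: add context and rethrow; do NOT log here.
            throw new DataAccessException($"Failed to load customer {id}", ex);
        }
    }

    private string QueryDatabase(int id) => throw new InvalidOperationException("connection lost");
}

public class CustomerController
{
    private readonly CustomerRepository _repository = new CustomerRepository();

    public string CustomerDetails(int id)
    {
        try
        {
            return _repository.LoadCustomer(id);
        }
        catch (DataAccessException ex)
        {
            // Web tier: the single place the full exception (and inner exception) is logged.
            Console.Error.WriteLine(ex);
            return "error";
        }
    }
}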

The other big logging problem that we commonly observe in production applications is related to logging levels. .NET loggers define the following logging levels (named differently between the .NET TraceLevel and log4net, but categorically similar):

  • Off
  • Fatal
  • Error
  • Warning
  • Info
  • Verbose / Debug

In a production application you should only ever be logging error- or fatal-level statements. In lower environments it is perfectly fine to capture warning and even informational logging messages, but once your application is in production, that same verbosity under user load will quickly saturate the logger and bring your application to its knees. If you inadvertently leave debug-level logging on in a production application, it is not uncommon to see response times two or three times higher than normal!
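
With log4net, for example, the production level belongs in configuration (a root level of ERROR or FATAL), and verbose statements can be guarded in code so the message string is never even built when the level is off. A minimal hedged sketch, with an illustrative service class:

using log4net;

public class OrderService
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderService));

    public void Process(int orderId)
    {
        // In production, the root logger level should be ERROR (or FATAL) in the log4net config.
        // Guard verbose statements so the message is not built when debug logging is off.
        if (Log.IsDebugEnabled)
        {
            Log.Debug("Processing order " + orderId + " with full item breakdown");
        }

        try
        {
            // ... business logic ...
        }
        catch (System.Exception ex)
        {
            Log.Error("Order " + orderId + " failed", ex);
        }
    }
}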

3. Code Dependencies

Developing applications is a challenging job. Not only are you building the logic to satisfy your business requirements, but you are also choosing the best libraries and tools to help you. Could you imagine building all of your own logging management code, all of your own XML and JSON parsing logic, or all of your own serialization libraries? You could build code to do this, but why should you when teams of open source developers have already done it for you? Furthermore, if you are integrating with a third-party system, should you read through a proprietary communication protocol specification or should you purchase a vendor library that does it for you?

I’m sure you’ll agree that if someone has already solved a problem for you, it is more efficient to use their solution than to roll your own. If an open source project has been adopted by a large number of companies, then chances are it is well tested and well documented, and you should be able to find plenty of examples of how to use it.

There are dangers to using dependent libraries, however. You need to ask the following questions:

  • Is the library truly well written and well tested?
  • Are you using the library in the same manner as the large number of companies that are using it?
  • Are you using it correctly?

Make sure that you do some research before choosing your external libraries and, if you have any question about a library’s performance, then run some performance tests. Open source projects are great in that you have full access to their source code as well as their test suites and build processes. Download their source code, execute their build process, and look through their test results. If you see a high percentage of test coverage then you can have more confidence than if you do not find any test cases!
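
If you want a quick sanity check before adopting a library, a rough Stopwatch harness like the sketch below is often enough to surface order-of-magnitude problems; the iteration count and the placeholder library call are assumptions you would replace with your own candidate API.

using System;
using System.Diagnostics;

public static class LibrarySmokeTest
{
    public static void Main()
    {
        const int iterations = 10000;

        // Warm up so JIT compilation does not distort the numbers.
        for (int i = 0; i < 100; i++) CandidateLibraryCall();

        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            CandidateLibraryCall();
        }
        stopwatch.Stop();

        Console.WriteLine($"{iterations} calls took {stopwatch.ElapsedMilliseconds} ms " +
                          $"({(double)stopwatch.ElapsedMilliseconds / iterations:F3} ms per call)");
    }

    // Placeholder for the third-party call you are evaluating (e.g. JSON parsing).
    private static void CandidateLibraryCall()
    {
        // e.g. deserialize a sample payload with the library under test
    }
}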

Finally, make sure that you are using dependent libraries correctly. I work in an organization that is strongly opposed to Object Relational Mapping (ORM) solutions because of performance problems it has experienced in the past. But because I have spent more than a decade in performance analysis, I can assure you that ORM tools can greatly enhance performance if they are used correctly. The problem with ORM tools is that if you do not take the time to learn how to use them correctly, you can easily shoot yourself in the foot and destroy the performance of your application. The point is that tools meant to help you can actually hurt you if you do not invest the time to learn them.
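
As one example of "using the tool correctly," the Entity Framework 6 sketch below contrasts the classic N+1 query pattern with eager loading via Include; the entity, context, and property names are illustrative assumptions, not part of any particular application.

using System;
using System.Collections.Generic;
using System.Data.Entity;   // Entity Framework 6
using System.Linq;

public class Invoice
{
    public int Id { get; set; }
    public virtual ICollection<InvoiceLine> Lines { get; set; }
}

public class InvoiceLine
{
    public int Id { get; set; }
    public int InvoiceId { get; set; }
}

public class BillingContext : DbContext
{
    public DbSet<Invoice> Invoices { get; set; }
}

public class InvoiceReport
{
    public void Print(BillingContext db)
    {
        // Problematic: one query for the invoices, then one extra query per invoice
        // when Lines is lazily loaded inside the loop (the classic N+1 pattern).
        foreach (var invoice in db.Invoices.ToList())
        {
            Console.WriteLine($"{invoice.Id}: {invoice.Lines.Count} lines");
        }

        // Better: a single query that eagerly loads the related lines.
        foreach (var invoice in db.Invoices.Include(i => i.Lines).ToList())
        {
            Console.WriteLine($"{invoice.Id}: {invoice.Lines.Count} lines");
        }
    }
}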

Before leaving this topic, it is important to note that if you are using a performance management solution, like AppDynamics, it can not only alert you to problems in your application, but it can alert you to problems in your dependent code, even if your dependent code is in a compiled binary form. If you find the root cause of a performance problem in an open source library, you can fix it and contribute an update back to the open source community. If you find the root cause of a performance problem in a vendor built library, you can greatly reduce the amount of time that the vendor will need to resolve the problem. If you have ever opened a ticket with a vendor to fix a performance problem that you cannot clearly articulate, then you have experienced a long and unproductive wait for a resolution. But if you are able to point them to the exact code inside their library that is causing the problem, then you have a far greater chance of receiving a fix in a reasonable amount of time.

4. Underlying Database Issues

Almost all applications of substance will ultimately store data in or retrieve data from a database or document store. As a result, the tuning of your database, your database queries, and your stored procedures is paramount to the performance of your application. 

There is a philosophical division between application architects/developers and database architects/developers. Application architects tend to feel that all business logic should reside inside the application and the database should merely provide access to the data. Database architects, on the other hand, tend to feel that pushing business logic closer to the database improves performance. The answer to this division is probably somewhere in the middle.

As an application architect I tend to favor putting more business logic in the application, but I fully acknowledge that database architects are far better at understanding the data and the best way of interacting with that data. I think it should be a collaborative effort between both groups to derive the best solution. But regardless of where you fall on this spectrum, be sure that your database architects review your data model, all of your queries, and your stored procedures. They have a wealth of knowledge to share about the best way to tune and configure your database, and they have a host of tools that can tune your queries for you. For example, there are tools that will optimize your SQL for you, following these steps:

  • Analyze your SQL
  • Determine the explain plan for your query
  • Use artificial intelligence to generate alternative SQL statements
  • Determine the explain plans for all alternatives
  • Present you with the best query options to accomplish your objective

When I was writing database code, I used one of these tools and quantified the results under load; a few tweaks and optimizations can make a world of difference.

5. Underlying Infrastructure Issues

Recall from the previous article that .NET applications run in a layered environment, shown in figure 3.

 Figure 3 .NET Layered Execution Model

Your application runs inside either an ASP.NET or Windows Forms container, uses the ADO libraries to interact with databases, runs inside of a CLR that runs on an operating system that runs on hardware.  That hardware is networked with other hardware that hosts a potentially different technology stack. We typically have one or more load balancers between the outside world and your application as well as between application components. We have API Management services and caches at multiple layers. All of this is to say that we have A LOT of infrastructure that can potentially impact the performance of your application!

You must therefore take care to tune your infrastructure. Examine the operating system and hardware upon which your application components and databases are running to determine whether they are behaving optimally. Measure the network latency between servers and ensure that you have enough available bandwidth to satisfy your application interactions. Look at your caches and validate that you are seeing high cache hit rates. Analyze your load balancer's behavior to ensure that requests are quickly routed to all available servers. In short, you need to examine your application performance holistically, including both your application business transactions and the infrastructure that supports them.
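
As a small hedged example of the "measure the network latency between servers" step, the .NET Ping class gives a quick round-trip-time check from your application host to a dependency; the host name below is a placeholder.

using System;
using System.Net.NetworkInformation;

public static class LatencyCheck
{
    public static void Main()
    {
        using (var ping = new Ping())
        {
            for (int i = 0; i < 5; i++)
            {
                // Replace the host name with your database or downstream service host.
                PingReply reply = ping.Send("db-server.internal.example", 2000);
                Console.WriteLine(reply.Status == IPStatus.Success
                    ? $"Round trip to database host: {reply.RoundtripTime} ms"
                    : $"Ping failed: {reply.Status}");
            }
        }
    }
}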

Conclusion 

This article presented a top-5 list of common performance issues in .NET applications. Those issues include:

  • Synchronization and Locking
  • Excessive or unnecessary logging
  • Code dependencies
  • Underlying database issues
  • Underlying infrastructure issues

In the next article we’re going to pull all of the topics in this series together to present the approach that AppDynamics took to implementing its APM strategy. This is not a marketing article, but rather an explanation of why certain decisions and optimizations were made and how they can provide you with a powerful view of the health of a virtual or cloud-based application.

Start solving your .NET application issues today: try AppDynamics for free

How Oceanwide gains visibility into their .NET private cloud environment

I recently had the opportunity to catch up with Jonathan Victor, COO at Oceanwide, a leading insurance software company. Jonathan and I discussed the challenges they faced prior to implementing AppDynamics APM, such as gaining visibility into their cloud operations and diagnosing the root cause of performance issues. We also discussed how they ultimately chose AppDynamics over competitors and the benefits they’ve seen since using AppDynamics.

Hannah Current: Please tell us a little about Oceanwide and your role there.

Jonathan Victor: Since 1996, Oceanwide has been delivering SaaS core processing solutions to property and casualty insurers of all sizes across the globe. Our configurable insurance software solutions enable insurers to react to market changes, configure new products and manage their products with increased speed and lower costs for any line of business, virtually eliminating professional service fees. Designed from the ground up to be web enabled and fully configurable without custom programming, our solutions automate policy, billing, claims, underwriting, document management, agent/consumer portals and more for insurers, MGAs, and brokers.

I’m responsible for managing the Oceanwide private cloud and the insurance applications which we (develop &) run on top of it. I have a dedicated Cloud Operations team which ensures that Oceanwide’s insurance platform is performing optimally and available 24/7/365 to meet the needs of our global user base.

HC: What challenges did you suffer before using APM? And how did you troubleshoot before using an APM tool?

JV: Prior to implementing an APM solution in our cloud operations center, we were limited to application logs and infrastructure-related monitoring and alerting. Identifying the source of a performance or application issue was a challenge, as the application stack was very much a black box and the data we were getting from the infrastructure and database tiers was disconnected. We worked with many disparate data sources, including logs and system performance monitors. The task of correlating this data to identify a root cause was a complex, manual effort.

HC: What was your APM selection process/criteria?

JV: We required an APM solution that fully supported the .NET / Windows / SQL technology stack and provided rich user and transaction details from the browser through to our underlying cloud infrastructure. We also required a solution that was scalable and had an intuitive UI that our operations and development teams could use with minimal training. We also required a solution that could run on the Oceanwide private cloud given our industry and data privacy constraints.

HC: Why AppDynamics over the other solutions out there?

JV: We reviewed several APM vendors and felt that AppDynamics' value proposition best suited our needs. The AppDynamics solution has proven to be a reliable, powerful troubleshooting and real-time monitoring tool that is at the core of our cloud operations centre. AppDynamics' excellent customer support team worked closely with our engineers as we implemented the solution across our private cloud. This was a critical point, validated during our pilot, as we needed to be confident that the AppDynamics agents would not have an adverse impact on our insurance applications.


HC: How has AppDynamics helped to solve some critical problems?

JV: AppDynamics has been central in our analysis of performance improvements and/or application issues on our insurance platform.

When investigating application performance, the ability to drill down through application snapshots and view the distribution of time across the transaction path allows for rapid identification of the components involved. Being able to then directly correlate this information with metrics such as CPU and memory utilization is incredibly valuable, as it would require hours of manual work to accomplish the same task.

AppDynamics has enabled our cloud operations team to quickly identify stored procedure calls, as well as to view many other granular details for a given business transaction. The live monitoring provides us with immediate notifications when an application performance issue occurs and gives our operations team the ability to be proactive and avert issues before they impact end users.

We are looking forward to migrating to the newest version of AppDynamics, which includes full database monitoring and a single pane of glass view across the application, database and infrastructure tiers.

Interested to see how AppDynamics APM can help you gain visibility into your environment? Check out a FREE trial now!