Understanding the Momentum Behind .NET Core

Three years ago Satya Nadella took over as CEO of Microsoft, determined to spearhead a renewal of the iconic software maker. He laid out his vision in a famous July 10, 2014 memo to employees in which he declared that “nothing was off the table” and proclaimed his intention to “obsess over reinventing productivity and platforms.”

How serious was Nadella? In the summer of 2016, Microsoft took the bold step of releasing .NET Core, a free, cross-platform, open-source version of its globally popular .NET development platform. With .NET Core, .NET apps could run natively on Linux and macOS as well as Windows.

For customers, .NET Core solved a huge portability problem. .NET shops could now modernize monolithic on-premises enterprise applications by breaking them up into microservices and moving them to cloud platforms like Microsoft Azure, Amazon Web Services, or Google Cloud Platform. They had been hearing about the benefits of containerization: speed, scale, and, most importantly, the ability to create an application and run it anywhere. Their developers loved Docker’s ease of use and installation, as well as the automation it brought to repetitive tasks. But moving a large .NET application to the cloud had presented daunting obstacles. Lifting and shifting the large system-wide installations that supported existing applications consumed massive amounts of engineering manpower and often failed to deliver the expected benefits, such as cost savings. Meanwhile, the dependency on the Windows operating system limited cloud options, and microservices remained a distant dream.

.NET Core not only addressed these challenges, it was also ideal for containers. Instead of starting a container from a full Windows Server image, engineers could use much smaller Windows Nano Server images or Linux images. This gave engineers the freedom to work across platforms; they were no longer required to deploy server apps solely on Windows Server.

Typically, the adoption of a new developer platform takes time, but .NET Core saw a large wave of early adoption. Then, in August 2017, .NET Core 2.0 was released, and adoption increased exponentially; the number of .NET Core users reached half a million by January 2018. By achieving almost full feature parity with .NET Framework 4.6.1, .NET Core 2.0 took away most of the pain that had previously existed in shifting from the traditional .NET Framework to .NET Core. Libraries that hadn’t existed in .NET Core 1.0 were added in 2.0, and because .NET Core implemented all 32,000 APIs in .NET Standard 2.0, most applications could reuse their existing code.

Engineering teams that had struggled with DevOps initiatives found that .NET Core allowed them to accelerate their move to microservices architectures and to put in place a more streamlined path from development to testing and deployment. Lately, hiring managers have started telling their recruiters to be sure to mention the opportunity to work with .NET Core as an enticement to prospective hires—something that never would have happened with .NET.

At AppDynamics, we’re so excited about the potential of .NET Core that we’ve tripled the size of the engineering team working on .NET. And, just last month, we announced a beta release of support for .NET Core 2.0 on Windows using the new .NET micro agent shipped in our Winter ‘17 product release. This agent provides improved microservices support as more customers choose .NET Core to implement multicloud strategies. Reach out to your account team to participate in this beta.

Stay tuned for my next blog posts on how to achieve end-to-end visibility across all your .NET apps, whether they run on-premises, in the cloud, or in multi-cloud and hybrid environments.

Top 5 Conferences for .NET Developers

Partly due to the influence of software giant Microsoft, the .NET community is expansive. Developers, programmers and IT decision makers regularly meet at .NET conferences to share news, information and ideas to help each other keep up with the rapid digital transformation in today’s IT landscape. Here are five .NET conferences you should consider attending to advance your knowledge, skills and career growth.

Build

Build is Microsoft’s annual convention geared toward helping software and web programmers learn about the latest developments in .NET, Azure, Windows, and related technologies. It began in 2011, and the 2017 conference will run May 10 – 12 at the Washington State Convention Center in Seattle, WA. Don’t get your hopes too high: registration has already closed because the conference is sold out. However, there is a wait list if you’re hoping the stars align in your favor and you’re granted admission.

Build took over for the now-defunct Professional Developers Conference, which focused on Windows, and MIX, which centered on developing web apps using Silverlight and ASP.NET. For 2017, major topic themes include the .NET Standard Library, the Edge browser, the Windows Subsystem for Linux, ASP.NET Core, and Microsoft Cortana. Sessions include debugging tricks for .NET using Visual Studio, a look at ASP.NET Core 1.0, deploying ASP.NET Core apps, a deep dive into MVC with ASP.NET Core, Entity Framework Core 1.0, a .NET overview, and creating desktop apps with Visual Studio vNext.

Reviews of prior years were positive. One reviewer appreciated the introduction of the Bash shell on Windows, the first environment that allowed cross-platform developers to work entirely in Windows without resorting to Linux or Mac OS X machines. Another commented that they liked getting Xamarin, a set of developer tools, for free, saving them hundreds of dollars. Both moves were strong indicators of Microsoft’s re-commitment to developers as it embraces our new multi-platform world, where open-source and proprietary programs sit side by side.

DEVintersection

This year’s DEVintersection will be staged at the Walt Disney World Swan and Dolphin Resort in Lake Buena Vista, Florida, on May 21 – 24, 2017. This is the fifth year for the conference, which brings together engineers and key leaders from Microsoft with a variety of industry professionals. The goal is to help attendees stay on top of developments such as ASP.NET Core, Visual Studio, SQL Server, SharePoint, and Windows 10.

The move of ASP.NET, as well as .NET, to open-source status is another sign that Microsoft is encouraging open source as a preeminent approach in web development. You will learn skills to tackle the transition to open source and handle the issues that come with that move. The IT landscape continues to shift and evolve, and software developers need to consider a wide variety of challenges, such as microservices, the cloud, and containerization.

Major conference tracks include Visual Studio intersection, Azure intersection, ASP.NET intersection, IT EDGE intersection, Emerging Experiences, and SQL intersection. IT Edge is a co-located event — attendees can take part in sessions from different tracks at no extra charge. Ten full-day workshops run across the four days of the conference, and more than 40 sessions cover a range of technology topics, with the goal of giving you techniques and skills you can use right away in your day-to-day work.

This year, expect to see plenty of discussion around designing for scale, performance monitoring, the cloud, troubleshooting, and the features and benefits of the 2012, 2014, and 2016 editions of SQL Server. Are you considering migrating from SQL Server 2008 all the way to 2016 in one go? You’ll get the feedback and advice you need to make these important decisions. Past attendees appreciated that every day of the conference started with a report from Microsoft specialists in the main hall. One reviewer called the session breakouts “involving and useful,” and another said the full-day workshops that ran before and after the main convention gave them both “practical and theoretical knowledge.”

Microsoft Ignite

Formerly known as TechEd, the conference was renamed Ignite by Microsoft in 2015. The original TechEd started in Orlando in 1993, and the most recent Ignite was staged in 2016 in Atlanta, Georgia. The 2017 Ignite conference is slated for September 25 – 29 in Orlando. Registration opens on March 28, so be sure to save the date; registration for the 2016 conference sold out.

The Microsoft Data Science Summit will span two days during Ignite and is geared to engineers, data scientists, machine learning professionals and others interested in the latest in the world of analytics.

MS Ignite is for IT professionals, developers, and managers. Decision makers can see what .NET advancements and developments are available, while developers can get information on how to implement those platforms in their current IT profile. There are presentations, breakout sessions, and lab demonstrations. Microsoft .NET experts and community members alike meet to socialize, share news, and evaluate the latest software-defined tech. With over 1,000 Microsoft Ignite sessions covering the latest developments in technology, you’ll have plenty of chances to meet face-to-face with industry experts.

For companies using .NET solutions, Ignite gives leaders and developers a chance to discuss current trends on the platform directly with Microsoft influencers. High-profile Microsoft attendees in the past have included Jeffrey Snover, the lead architect of Windows Server; Brad Anderson, Corporate VP of Enterprise Client and Mobility; and Mark Russinovich, Chief Technology Officer of Microsoft Azure.

IT/Dev Connections

Presented by Penton Media, the annual IT/Dev Connections conference is scheduled for October 23 – 26, 2017 at the Hilton Union Square in San Francisco. Topics to be covered include ASP.NET, Visual Studio, Azure, SQL Server, SharePoint, VMware, and more. There are five main technical tracks with over 200 sessions, plus a sponsor track and a community track. Conference leaders known as Track Chairs hand-pick the best content and speakers, with the goal of omitting fluff and marketing hype and focusing only on high-value presenters and panelists. The five topic tracks are Cloud and Data Center; Enterprise Collaboration; Development and DevOps; Data Platform and Business Intelligence; and Enterprise Management, Mobility and Security.

Speakers at the 2017 conference include Windows technical specialist John Savill, data professional Tim Ford, and SharePoint expert Liam Cleary. A series of pre-conference workshops gives developers and programmers a chance for one-on-one training. Workshops include troubleshooting ASP.NET web applications, mastering the SharePoint dev toolchain, and skill-building for ASP.NET Core with Angular 2. Other sessions include Azure for ASP.NET programmers, Dockerizing ASP.NET apps, and ASP.NET development without using Windows. The State of the Union session will cover .NET from the desktop and mobile device to the server.

The strength of the IT/Dev Connections conference is the focus on developers and programmers. Commercial interests are kept to a minimum, and speakers are vetted for the amount of take-away value in their presentations. Attendees from past events have lauded the “user focus” of the conference and “intensely personal” feel of the breakout sessions. In other events, session rooms may have hundreds of chairs, while sessions at IT/Dev Connections generally accommodate around 100 people, providing a more personal, hands-on feel to each session. The speakers are also well diversified among different sections of the developer community, including a number of MVP designated presenters.

Visual Studio Live

Visual Studio Live! events for 2017 are a series of conferences staged throughout the year in cities around the country, including Las Vegas, Chicago, and Washington, D.C. The subtitle for the series is “Rock Your Code Tour.” The meetings give .NET developers and programmers a chance to level up their skills in Visual Studio, ASP.NET, and more.

Visual Studio Live! focuses on practical training for developers using Visual Studio. For example, the Austin meeting is five days of education on the .NET Framework, JavaScript/HTML5, mobile computing, and Visual Studio. There are more than 60 sessions conducted by Microsoft leaders and industry insiders. Topics to be covered include Windows Client, Application Lifecycle Management, Database and Analytics, Web Server and Web Client, Software Practices, and Cloud Computing.

If you participated in the previous Live! 360 program for discounted rates, be sure to reach out to the organizing committee as they do have special pricing for their most frequent customers.

Visual Studio Live! is known for its hands-on approach, with extensive workshops that give developers a deep dive into each topic. The workshops are featured throughout each day, so attendees have lots of opportunity to get targeted learning.

Attendees have responded enthusiastically to the co-located conference arrangement. One said it was an ideal chance to catch up with a number of technologies after being out of the tech world for a few years, and another lauded the enthusiasm of the speakers and workshop leaders.

There are myriad software development conferences that will help you grow as a .NET developer, DevOps thinker, or business influencer. Check out these five to see which one best fits your needs and goals.

Top 10 New Improvements Found in the .NET Framework Version 4.6.2

In the late 1990s, Microsoft began working on a general-purpose development platform that quickly became the infrastructure for building, deploying, and running a nearly unlimited range of applications and services with relative ease while focusing on the Internet User Experience (IUE). Then, in February of 2002, Microsoft finally launched the first version of these shared technology resources, originally developed under the working name Next Generation Windows Services (NGWS). With DLL libraries and object-oriented support for web app development, the .NET Framework 1.0 was a digital transformation that introduced us all to managed coding.

Although .NET Core has been a focus in recent years, work on the original .NET Framework has still progressed. In fact, on August 2, 2016, Microsoft announced the much-anticipated release of Version 4.6.2. According to Microsoft, there are “dozens of bug fixes and improvements.” Actually, there are almost 14 dozen bug fixes — 166 to be exact — not to mention all the API changes. Moreover, many of the changes found in this new version were based on developer feedback. Needless to say, things have definitely improved. The following is a list of the top ten improvements found in .NET 4.6.2:

1. Windows Hello

The Windows 10 Anniversary Update was released the same day as the latest .NET Framework, and this version of the framework is already included with the update. Although it doesn’t show up as an installed application in “Programs and Features,” you can find it by searching for features and clicking on “Turn Windows features on and off.” From here, you can adjust your features accordingly, and select specific features by utilizing developer mode. More importantly, .NET 4.6.2 lets developers and programmers use Windows Hello in their apps. For example, third-party developers can now allow users to log in with their face or fingerprint with ease. Simply download the framework update.
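
If you want to light this up in your own app, the availability check is the natural first step. Below is a rough sketch, assuming a desktop project that references the Windows 10 UWP API surface and the System.Runtime.WindowsRuntime interop assembly; the class name and messages are hypothetical:

  using System;
  using System.Threading.Tasks;
  using Windows.Security.Credentials;

  class HelloCheck
  {
      // Requires a reference to System.Runtime.WindowsRuntime so that
      // IAsyncOperation<bool> can be awaited from regular .NET code.
      static async Task<bool> IsHelloAvailableAsync()
      {
          // True when a Windows Hello authenticator (face, fingerprint, or PIN) is set up.
          return await KeyCredentialManager.IsSupportedAsync();
      }

      static void Main()
      {
          bool available = IsHelloAvailableAsync().GetAwaiter().GetResult();
          Console.WriteLine(available
              ? "Windows Hello is available; offer biometric sign-in."
              : "Windows Hello is not set up; fall back to password sign-in.");
      }
  }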

2. Removal of Character Limits (BCL)

Microsoft removed the 260-character path limitation, MAX_PATH, from the Base Class Library’s file and directory APIs. (Separately, characters in .NET 4.6.2 are now classified based on the Unicode Standard, Version 8.0.0.) You’re probably used to getting the “path too long” prompt, especially with MSBuild definitions. The error details usually state something like:

TF270002: An error occurred copying files: The specified path, file name, or both are too long.

Or the error might state something similar to:

Unable to create folder. Filename or extension is too long.

Programs and server tools can also show problems in these areas, and solutions normally involved renaming something to fit within the limit. Usually not an issue for end users, this limitation is more common on developer machines that use specialized tools also running on Unix, or while building source trees. Now that the MAX_PATH limitation has been removed, we may never have to see this error message again.

However, long paths are not yet enabled by default, so you need to set the policy to enable the support: “Enable Win32 long paths” or “Enable NTFS long paths.” Your app must also have a specific manifest setting. In addition, long paths on any OS are now supported when you use the \\?\ syntax.
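
If your app targets an older framework version but runs on 4.6.2, the same behavior can be opted into programmatically. A minimal sketch using the documented AppContext switch names for the 4.6.2 path-handling changes (apps that target 4.6.2 directly get the new behavior without these switches):

  using System;

  class EnableLongPaths
  {
      static void Main()
      {
          // Must run before any System.IO path APIs are first used.
          AppContext.SetSwitch("Switch.System.IO.UseLegacyPathHandling", false);
          AppContext.SetSwitch("Switch.System.IO.BlockLongPaths", false);

          Console.WriteLine("Long path handling enabled for this process.");
      }
  }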

3. Debugging APIs (CLR)

The main adjustment to the CLR is that, if the developer chooses, null reference exceptions will now provide much more extensive debugging data. The unmanaged debugging APIs can request more information and perform additional analysis, and a debugger can now determine which variable in a single line of source code is null, making your job a lot easier.

Microsoft’s release notes list the specific interfaces that have been added to the unmanaged debugging API.

4. TextBoxBase Controls (WPF)

For security purposes, the copy and cut methods have been known to fail when they are called in partial trust. According to Microsoft, “developers can now opt-in to receiving an exception when TextBoxBase controls fail to complete a copy or cut operation.” Standard copy and cut through keyboard shortcuts, as well as the context menu, will still work the same way as before in partial trust.

5. Always Encrypted Enhancement (SQL)

Always Encrypted is a database engine feature designed to protect sensitive data, such as credit card numbers. The .NET Framework Data Provider for SQL Server contains two important enhancements for Always Encrypted, centered on performance and security:

  • Performance: Encryption metadata for query parameters is now cached. When the SqlConnection.ColumnEncryptionQueryMetadataCacheEnabled property is set to true (the default), database clients retrieve parameter metadata from the server only once, even if the same query is called multiple times.

  • Security: Column encryption key entries in the key cache are now evicted after a configurable time interval, set using the SqlConnection.ColumnEncryptionKeyCacheTtl property, as shown in the sketch below. The default is two hours, while zero means no caching at all.
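
Tuning the security side is a one-liner. A brief sketch, assuming the System.Data.SqlClient provider from the full framework (the 10-minute TTL is an arbitrary example):

  using System;
  using System.Data.SqlClient;

  class AlwaysEncryptedTuning
  {
      static void Main()
      {
          // Evict cached column encryption keys after 10 minutes instead of the 2-hour default.
          SqlConnection.ColumnEncryptionKeyCacheTtl = TimeSpan.FromMinutes(10);

          // TimeSpan.Zero would disable column encryption key caching entirely.
          Console.WriteLine("Key cache TTL: " + SqlConnection.ColumnEncryptionKeyCacheTtl);
      }
  }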

6. Best Match (WCF)

NetNamedPipeBinding has been upgraded to support a new pipe lookup option called “Best Match.” With this option, the client searches for the service listening on the URI that best matches the requested endpoint, instead of the first matching service found. This matters because multiple WCF services frequently listen on named pipes, and with “First Match” (the default), a client could connect to the wrong service. If you wish to enable “Best Match,” you can add an AppSetting to the App.config or Web.config file of the client application.
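
A minimal sketch of that opt-in, assuming the documented appSettings key for this feature:

  <appSettings>
    <add key="wcf:useBestMatchNamedPipeUri" value="true" />
  </appSettings>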

7. Converting to UWP

According to Microsoft’s developer resources, Windows now offers capabilities to bring existing Windows desktop apps, including WPF and Windows Forms apps, to the Universal Windows Platform (UWP). WPF, for example, is a powerful framework that has become a mature and stable UI platform suitable for long-term development. However, it can also be a complex beast, because it works differently from other GUI frameworks and has a steep learning curve. Microsoft always seems to plan ahead, and that’s where the conversion to UWP comes in. This improvement enables you to gradually migrate your existing codebase to UWP, which, in turn, can help you bring your app to all Windows 10 devices. It also makes UWP APIs more accessible, allowing you to enable features such as Live Tiles and notifications.

8. ClickOnce

Designed long before the invention of the app store, ClickOnce allows applications to be distributed via URLs, and it can even self-update as new versions are released. Unfortunately, security has always been a big concern, and many DevOps teams were frustrated by ClickOnce’s slow adoption of newer TLS standards. Finally, in addition to the TLS 1.0 protocol, ClickOnce now supports TLS 1.1 and TLS 1.2. In fact, ClickOnce will automatically detect which protocol to use, and no action is required to enable this feature.

9. SignedXml

An implementation of the W3C’s XML Digital Signature standard, SignedXml now supports the SHA-2 family of hashing algorithms.

The following are included signature methods, as well as reference digest algorithms that are frequently used:

  • RSA-SHA256

  • RSA-SHA384

  • RSA-SHA512 PKCS#1

For more information on these and other security concerns, along with update deployment and developer guidance, please see Microsoft Knowledge Base Article 3155464, as well as the MS Security Bulletin MS16-065.
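
To see what the new algorithms enable, here is a rough sketch of signing a document with RSA-SHA256 through SignedXml; the invoice payload and the ephemeral key are placeholders, and real code would load a persisted key instead:

  using System;
  using System.Security.Cryptography;
  using System.Security.Cryptography.Xml;
  using System.Xml;

  class Sha2Signing
  {
      static void Main()
      {
          var doc = new XmlDocument();
          doc.LoadXml("<invoice><amount>100</amount></invoice>"); // placeholder payload

          using (RSA rsa = RSA.Create())
          {
              var signedXml = new SignedXml(doc) { SigningKey = rsa };
              signedXml.SignedInfo.SignatureMethod = SignedXml.XmlDsigRSASHA256Url; // SHA-256, new in 4.6.2

              // Enveloped signature over the whole document with a SHA-256 digest.
              var reference = new Reference("") { DigestMethod = SignedXml.XmlDsigSHA256Url };
              reference.AddTransform(new XmlDsigEnvelopedSignatureTransform());
              signedXml.AddReference(reference);

              signedXml.ComputeSignature();
              doc.DocumentElement.AppendChild(doc.ImportNode(signedXml.GetXml(), true));
          }

          Console.WriteLine(doc.OuterXml);
      }
  }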

10. Soft Keyboard Support

In previous versions of .NET, it wasn’t possible to utilize focus tracking without disabling WPF pen/touch gesture support, so developers were forced to choose between full WPF touch support and Windows mouse promotion. In .NET 4.6.2, soft keyboard support allows the use of the touch keyboard in WPF applications without disabling WPF stylus/touch support on Windows 10.

To find out which version of the .NET Framework is installed on a computer:

  1. Tap on the Windows key, type regedit.exe, and hit enter.

  2. Confirm the UAC prompt.

  3. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full

Check for a DWORD value labeled Release; its presence indicates the .NET Framework 4.5 or newer is installed, and its value tells you exactly which version you have.
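
The same check can be scripted. A small sketch, assuming the documented minimum Release value of 394802 for the .NET Framework 4.6.2:

  using System;
  using Microsoft.Win32;

  class DetectNetFramework
  {
      static void Main()
      {
          const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";

          using (var key = Registry.LocalMachine.OpenSubKey(subkey))
          {
              object value = key == null ? null : key.GetValue("Release");
              int release = value is int ? (int)value : 0;

              // 394802 is the minimum Release value Microsoft documents for 4.6.2.
              Console.WriteLine(release >= 394802
                  ? ".NET Framework 4.6.2 or later is installed."
                  : ".NET Framework 4.6.2 is not installed (Release=" + release + ").");
          }
      }
  }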

For all versions of the .NET Framework and its dependencies, please check the charts listed in the Microsoft library for more information.

If you want the complete .NET Framework set installed on your computer, you’ll need to install the following versions:

  • .NET Framework 1.1 SP1

  • .NET Framework 3.5 SP1

  • .NET Framework 4.6

The above list is only the tip of the iceberg when describing all the features and improvements that can be found in the .NET Framework Version 4.6.2. There are numerous security and crash fixes, added support, networking improvements, active directory services updates, and even typo correction in EventSource. Because Microsoft took user feedback into consideration, developers, programmers, and engineers may feel that Microsoft is finally listening to their needs and giving them a little more of what they want in their .NET Framework.

Learn more

Find out how the AppDynamics .NET application monitoring solution can help you today.

10 Things You Should Know About Microsoft’s .NET Core 1.0

On June 27, 2016, Microsoft announced the release of a project several years in the making — .NET Core. The solution resulted from the need for a nonproprietary version of Microsoft’s .NET Framework — one that runs on Mac and several versions of Linux, as well as on Windows. This cross-platform .NET product offers programmers new opportunities with its open-source design, flexible deployment, and command-line tools. These features are just part of what makes .NET Core an important evolution in software development. The following are ten key facts you should be aware of when it comes to Microsoft’s .NET Core 1.0 and its impact on software.

1. The .NET Core Platform Is Open-Source

.NET Core is part of the .NET Foundation, which exists to build a community around and innovate within the .NET development framework. The .NET Core project builds on these priorities, starting with its creation by both Microsoft’s .NET team and developers dedicated to the principles of open-source software.

Your advantages in using this open-source platform are many: you have more control in using and changing it, and the transparency of its code can provide information and inspiration for your own projects based on .NET Core. In addition, .NET Core is more secure, since you and your colleagues can correct errors and security risks more quickly. Its open-source status also gives .NET Core more stability, because unlike proprietary software that can be abandoned by its creators, the code behind this platform’s tools will always remain publicly available.

2. It Was Created and Is Maintained Through a Collaborative Effort

Related to its development using open-source design principles, the .NET Core platform was built with the assistance of about 10,000 developers. Their contributions included creating pull requests and issues, as well as providing feedback on everything from design and UX to performance.

By implementing the best suggestions and requests, the development team turned .NET Core into a community-driven platform, making it more accessible and effective for the programming community than if it had been created purely in-house. The .NET Core platform continues to be refined through collaboration as it is maintained by both Microsoft and GitHub’s .NET community. As a developer, you have the opportunity to influence the future advancement of .NET Core by working with its code and providing your feedback.

3. The Main Composition of .NET Core Includes Four Key Parts

The first essential part is the .NET runtime, which gives .NET Core its basic services, including a type system, garbage collector, native interop, and assembly loading. Second, primitive data types, app composition types, and fundamental utilities are provided by a set of framework libraries (CoreFX). Third, the .NET Core developer experience is created by a set of SDK tools and language compilers. Finally, the “dotnet” app host selects and hosts the runtime, allowing .NET Core applications to launch. As you develop, you’ll access .NET Core through the .NET Core Software Development Kit (SDK). This includes the .NET Core Command Line Tools, the .NET Core runtime and libraries, and the dotnet driver — everything you need to create a .NET Core application or a .NET Core library.

4. Flexible Deployment Means More Options for Using .NET Core

One of the defining features of .NET Core is its flexible deployment — you can install the platform either as part of your application or as a separate installation. Framework-dependent deployment (FDD) relies on the presence of .NET Core on the target system and has many advantages: your deployment package is smaller, disk space and memory use are minimized on devices, and you can execute the .NET Core app on any operating system without specifying the target in advance.

Self-contained deployment (SCD) packages all components (including .NET Core libraries and runtime) with your application, in isolation from other .NET Core applications. This type of deployment gives you complete control of the version of .NET Core used with your app and guarantees accessibility of your app on the target system. The unique characteristics of each deployment type ensure you can deploy .NET Core apps in a way that works best for your particular needs.
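
As a quick illustration with the .NET Core command-line tools (the runtime identifier is only an example, and SCD with the 1.0-era tooling also required declaring target runtimes in the project file):

  dotnet publish                  # FDD: portable app, needs .NET Core on the target machine
  dotnet publish -r win10-x64     # SCD: bundles the runtime for one specific platform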

5. The .NET Core Platform Is a Cross-Platform Design

This unique software platform already runs on Windows, Mac OS X, and Linux, as its cross-platform nature was one of the main priorities for its development. While this may seem like a strange move for Microsoft, it’s an important one in a technological world that’s increasingly focused on flexibility and segmented when it comes to operating systems and platforms. .NET Core’s availability on platforms other than Windows makes it a better candidate for use by all developers, including Mac and Linux developers, and also gives the entire .NET framework the benefit of feedback and use from a much wider set of programmers. This additional feedback results in a product that works better for all of its users and makes the .NET Core platform a move forward for software-defined, rather than platform-defined applications.

6. Modular Development Makes .NET Core an Agile Development Tool

As part of its cross-compatibility design, the software development platform includes a modular infrastructure. It is released through NuGet, and you can access it as feature-based packages rather than one large assembly. As a developer, you can design lightweight apps that contain only the necessary NuGet packages, resulting in better security and performance for your app. The modular infrastructure also allows faster updates of the .NET Core platform, as affected modules can be updated and released on an individual basis. The focus on agility and fast releases, along with the aforementioned collaboration, positively positions .NET Core within the DevOps movement.

7. .NET Core Features Command-Line Tools

Microsoft states that .NET Core’s command-line tools mean that “all product scenarios can be exercised at the command-line.” The .NET Core Command Line Interface (CLI) is the foundation for high-level tools, such as Integrated Development Environments, which are used for developing applications on this platform. Like the .NET Core platform, this CLI is cross-platform, so that once you’ve learned the toolchain, you can use it the same way on any supported platform. The .NET Core CLI also underpins application portability, whether .NET Core is already installed on the target or the application is self-contained.
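
For instance, a sketch of the everyday workflow with the 1.0-era toolchain, identical on every supported OS:

  dotnet new        # scaffold a new console project in the current directory
  dotnet restore    # pull down NuGet dependencies
  dotnet run        # compile and run the app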

8. .NET Core Is Similar to, but Not the Same as, the .NET Framework

While .NET Core was designed to be an open-source, cross-platform version of the .NET Framework, there are differences between the two that go beyond those key features. Many of these differences result from the design itself as well as the relative newness of the .NET Core software development platform. App models built on Windows technologies, such as WPF and Windows Forms, are not supported by .NET Core, while console and ASP.NET Core app models are supported by both .NET Core and the .NET Framework.

.NET Core has fewer APIs than the .NET Framework, but it will include more as it develops. Also, .NET Core only implements some of .NET Framework’s subsystems in order to maintain the simplified, agile design of the platform. These differences may limit the .NET Core platform in some ways now — however, the advantages of its cross-platform, open-source design should definitely outweigh any limitations as the platform is further enhanced.

9. The .NET Core Platform Is Still Under Construction

The nature of this software development platform makes it a work in progress, continually refined by both Microsoft’s .NET Core team and invested developers worldwide. The .NET Core 1.1 release, scheduled for this fall, is set to bring greater functionality to the platform. One of the intended features is an increase in support for APIs at the BCL level — enough to bring .NET Core closer to parity with the .NET Framework as well as Mono. In addition, .NET Core 1.1 will transition the platform’s default build system and project model to MSBuild and csproj. The .NET Core roadmap on GitHub also cites changes in middleware and Azure integration as goals for the 1.1 release. These features are just a small subset of the planned changes for .NET Core, based on natural goals for its development as well as contributions from .NET developers.

10. The .NET Core Platform Is Part of a Digital Transformation

This uniquely conceived and crafted platform for software development is far more than just a new tool for application developers. It represents a much larger shift in technology — one in which you can more easily deploy applications to multiple platforms by using the same initial framework and tools. This is a big change from the traditionally fragmented implementation of the .NET Framework across various platforms — or even across different applications on the same platform.

This addition to software development puts more freedom and control into your hands while you develop, especially when it comes to deploying and updating .NET Core applications in the way that you choose. Although quite new and destined to undergo significant changes in the near future, .NET Core should definitely be a tool of interest to all developers, as it takes the field of programming in an exciting direction.

Learn more

Find out how the AppDynamics .NET application monitoring solution can help you today.

Top Performance Metrics for Java, .NET, PHP, Node.js, and Python

No two applications are the same. Some legacy apps were built as monoliths on a single, homogeneous language, say Java or .NET. As environments become more distributed and technology evolves at breakneck speed, application architectures tend to be built using a multitude of languages, often leveraging more dynamic languages for specific use cases.

Luckily, these distributed and extremely complex environments are where AppDynamics thrives with monitoring. AppDynamics supports Java, .NET, PHP, Node.js, Python, C/C++, and any combination of them — fitting nearly any environment.

After speaking with several customers and analyzing their performance, we’ve compiled a list of the most common performance problems for each language and the performance metrics to help measure your application health.

Below, we’ve compiled a brief summary of our findings and link to the full analysis in the respective complimentary eBooks.

Top Java Performance Metrics

Java remains one of the most widely used technology languages in enterprise applications. For all its ubiquity, however, it is a clunky legacy language that can often run into performance issues.

Along with monitoring external dependencies, watching garbage collection, and having a solid caching strategy, it’s important to measure business transactions. We define a business transaction as any end-user interaction with the application — adding an item to a cart, logging in, or any other interaction. It’s vital to measure the response times of these transactions to fully understand your user experience. If a response time takes longer than the norm, it’s important to resolve it as quickly as possible to maintain an optimal user experience.
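
As a minimal sketch of the idea (the transaction body and the 500 ms threshold are hypothetical stand-ins), you can time a business transaction and flag slow outliers:

  using System;
  using System.Diagnostics;

  class BusinessTransactionTimer
  {
      const long SlaMillis = 500; // hypothetical response-time threshold

      static void Main()
      {
          var sw = Stopwatch.StartNew();
          AddItemToCart();          // the business transaction being measured
          sw.Stop();

          if (sw.ElapsedMilliseconds > SlaMillis)
              Console.WriteLine("SLOW: add-to-cart took " + sw.ElapsedMilliseconds + " ms");
      }

      static void AddItemToCart()
      {
          System.Threading.Thread.Sleep(100); // stand-in for real work
      }
  }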

Read the full eBook, Top 5 Java Performance Metrics, Tips & Tricks here.

Top .NET Performance Metrics

There are times in your application code when you want to ensure that only a single thread can execute a subset of code at a time. Examples include accessing shared software resources, such as a single threaded rule execution component, and shared infrastructure resources, such as a file handle or a network connection. The .NET framework provides different types of synchronization strategies, including locks/monitors, inter-process mutexes, and specialized locks like the Reader/Writer lock.

Regardless of why you have to synchronize your code or of the mechanism you choose to synchronize your code, you are left with a problem: there is a portion of your code that can only be executed by one thread at a time.

In addition to synchronization and locking, make sure to watch for excessive or unnecessary logging, code dependencies, and underlying database and infrastructure issues.

Read the full eBook, Top 5 .NET Performance Metrics, Tips & Tricks here.

Top PHP Performance Metrics

Your PHP application may be utilizing a backend database, a caching layer, or possibly even a queue server as it offloads I/O-intensive blocking tasks onto worker servers to process in the background. Whatever backend your PHP application interfaces with, the latency of these backend services can affect your PHP application’s performance. The various types of internal exit calls may include:

  • SQL databases
  • NoSQL servers
  • In-memory cache
  • Internal services
  • Queue servers

In some environments, your PHP application may be interfacing with an obscure backend or messaging/queue server. For example, you may have an old message broker serving as an interface between your PHP application and other applications. While this message broker may be outdated, it is nevertheless part of an older architecture and of the ecosystem through which your distributed applications communicate.

Along with monitoring internal dependencies, make sure you measure your business transaction response times (as described above) and external calls, and maintain an optimal caching strategy with full visibility into your application topography.

Read the full eBook, Top 5 PHP Performance Metrics, Tips & Tricks here.

Top Node.js Performance Metrics

In order to understand which metrics to collect around Node.js event loop behavior, it helps to first understand what the event loop actually is and how it can potentially impact your application performance. For illustrative purposes, you may think of the event loop as an infinite loop executing code in a queue. For each iteration of the infinite loop, the event loop executes a block of synchronous code. Node.js – being single-threaded and non-blocking – then picks up the next block of code, or tick, waiting in the queue as it continues to execute more code. Although it is a non-blocking model, various events could potentially be considered blocking, including:

  • Accessing a file on disk
  • Querying a database
  • Requesting data from a remote webservice

With JavaScript (the language of Node.js), you can perform all of your I/O operations using callbacks. This lets the execution stream move on to other code while your I/O runs in the background. Node.js picks up the code waiting in the event queue, executes it (handing blocking I/O work to the available thread pool), and moves on to the next code in the queue. When the I/O completes, the callback is invoked to execute the remaining code and eventually complete the transaction.

In addition to the event loop, make sure to monitor external dependencies, memory leaks, and business transaction response times, and maintain a full and complete view of your application topography.

Read the full eBook, Top 5 Node.js Performance Metrics, Tips & Tricks here.

Top Python Performance Metrics

It is always faster to serve an object from memory than it is to make a network call to retrieve the object from a system like a database; caches provide a mechanism for storing object instances locally to avoid this network round trip. But caches can present their own performance challenges if they are not properly configured. Common caching problems include:

  • Loading too much data into the cache
  • Not properly sizing the cache

When measuring the performance of a cache, you need to identify the number of objects loaded into the cache and then track the percentage of those objects that are being used. The key metrics to look at are the cache hit ratio and the number of objects being ejected from the cache. The cache hit count, or hit ratio, reports the number of object requests that are served from the cache rather than requiring a network trip to retrieve the object. If the cache is huge, the hit ratio is tiny (under 10% or 20%), and you are not seeing many objects ejected from the cache, then this is an indicator that you are loading too much data into the cache. In other words, your cache is large enough that it is not thrashing (see below) and contains a lot of data that is not being used.
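
Here is a tiny sketch of the bookkeeping involved, using a stand-in dictionary as the cache (any real cache client exposes similar counters):

  using System;
  using System.Collections.Generic;

  class CacheHitTracker
  {
      static readonly Dictionary<string, object> Cache = new Dictionary<string, object>();
      static long _hits, _misses;

      static object Get(string key)
      {
          object value;
          if (Cache.TryGetValue(key, out value)) { _hits++; return value; }

          _misses++;
          value = LoadFromDatabase(key);   // the network round trip we want to avoid
          Cache[key] = value;
          return value;
      }

      static double HitRatio
      {
          get { return _hits + _misses == 0 ? 0 : (double)_hits / (_hits + _misses); }
      }

      static object LoadFromDatabase(string key) { return "value-for-" + key; } // stand-in

      static void Main()
      {
          Get("a"); Get("a"); Get("b");
          Console.WriteLine("Hit ratio: {0:P0}", HitRatio); // 1 hit of 3 lookups = 33 %
      }
  }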

In addition to measuring your caching, also monitor your external calls, application visibility, and internal dependencies.

Read the full eBook, Top 5 Python Performance Metrics, Tips & Tricks here.

To recap, if you’d like to read our language-specific best practices, check out the eBooks linked in each section above.

Top 5 Performance Problems in .NET Applications

The last couple of articles presented an introduction to Application Performance Management (APM) and identified the challenges of effectively implementing an APM strategy. This article builds on those topics by reviewing five of the top performance problems you might experience in your .NET application.

Specifically, this article reviews the following:

  • Synchronization and Locking
  • Excessive or unnecessary logging
  • Code dependencies
  • Underlying database issues
  • Underlying infrastructure issues

1. Synchronization and Locking

There are times in your application code when you want to ensure that only a single thread can execute a subset of code at a time. Examples include accessing shared software resources, such as a single-threaded rule execution component, and shared infrastructure resources, such as a file handle or a network connection. The .NET framework provides different types of synchronization strategies, including locks/monitors, inter-process mutexes, and specialized locks like the Reader/Writer lock.

Regardless of why you have to synchronize your code or of the mechanism you choose, you are left with a problem: there is a portion of your code that can only be executed by one thread at a time. Consider a supermarket that has only a single cashier to check people out: multiple people can enter the store, browse for products, and add them to their carts, but at some point they will all line up to pay for the food. In this example, the shopping activity is multithreaded, and each person represents a thread. The checkout activity, however, is single-threaded, meaning that every person must line up and pay for their purchases one at a time. This process is shown in figure 1.

Figure 1 Thread Synchronization

We have seven threads that all need to access a synchronized block of code, so one-by-one they are granted access to the block of code, perform their function, and then continue on their way.  

The process of thread synchronization is summarized in figure 2. 

Figure 2 Thread Synchronization Process

A lock is created on a specific object (a System.Object derivative), meaning that when a thread attempts to enter the synchronized block of code, it must obtain the lock on the synchronized object. If the lock is available, that thread is granted permission to execute the synchronized code. In the example in figure 2, when the second thread arrives, the first thread already has the lock, so the second thread is forced to wait until the first thread completes. When the first thread completes, it releases the lock, and the second thread is granted access.

As you might surmise, thread synchronization can present a big challenge to .NET applications. We design our applications to be able to support dozens and even hundreds of simultaneous requests, but thread synchronization can serialize all of the threads processing those requests into a single bottleneck!

The solution is two-fold:

  • Closely examine the code you are synchronizing to determine if another option is viable
  • Limit the scope of your synchronized block

There are times when you are accessing a shared resource that must be synchronized, but there are many times when you can restate the problem in such a way that you avoid synchronization altogether. For example, we were once using a rules processing engine with a single-threaded requirement that slowed down all requests in our application. It was obviously a design flaw, so we replaced that library with one that could parallelize its work. You need to ask yourself if there is a better alternative: if you are writing to a local file system, could you instead send the information to a service that stores it in a database? Can you make objects immutable so that it does not matter whether multiple threads access them? And so forth.

For those sections of code that absolutely must be synchronized, choose your locks wisely. Your goal is to isolate the synchronized code block down to the bare minimum required for synchronization. It is typically best to define a specific object to synchronize on, rather than synchronizing on the object containing the synchronized code, because you might inadvertently slow down other interactions with that object. Finally, consider when you can use a Reader/Writer lock instead of a standard lock, so that you can allow reads to a resource while only synchronizing changes to it.
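
To make these guidelines concrete, here is a minimal sketch (the PriceCache class and its members are hypothetical): a dedicated lock object guarding one small critical section, and a Reader/Writer lock that lets many readers proceed concurrently:

  using System.Threading;

  public class PriceCache
  {
      private readonly object _sync = new object();   // dedicated lock object, never exposed
      private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
      private decimal _lastPrice;
      private long _updateCount;

      public void RecordUpdate()
      {
          lock (_sync)              // synchronized block kept to the bare minimum
          {
              _updateCount++;
          }
      }

      public decimal ReadPrice()
      {
          _lock.EnterReadLock();    // many readers may hold this simultaneously
          try { return _lastPrice; }
          finally { _lock.ExitReadLock(); }
      }

      public void WritePrice(decimal value)
      {
          _lock.EnterWriteLock();   // writers get exclusive access
          try { _lastPrice = value; }
          finally { _lock.ExitWriteLock(); }
      }
  }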

2. Excessive or Unnecessary Logging

Logging is a powerful tool in your debugging arsenal that allows you to identify abnormalities that might have occurred at a specific time during the execution of your application.  It is important to capture errors when they occur and gather together as much contextual information as you can. But there is a fine line between succinctly capturing error conditions and logging excessively. 

Two of the most common problems are:

  • Logging exceptions at multiple levels
  • Misconfiguring production logging levels

It is important to log exceptions so that you can understand the problems occurring in your application, but a common mistake is to log exceptions at every layer. For example, you might have a data access object that catches a database exception and raises its own exception to your service tier. The service tier might catch that exception and raise its own exception to the web tier. If we log the exception at the data tier, service tier, and web tier, then we have three stack traces of the same error condition. This incurs additional overhead in writing to the log file, and it bloats the log file with redundant information. This problem is so common that I assert that if you examine your own log files, you’ll probably find at least a couple of examples of this behavior.
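
One common remedy is to wrap and rethrow at the lower tiers and log exactly once at the outermost boundary. A sketch of the pattern, in which OrderServiceException, the query helper, and the console logging are hypothetical stand-ins:

  using System;
  using System.Data.SqlClient;

  public class OrderServiceException : Exception
  {
      public OrderServiceException(string message, Exception inner) : base(message, inner) { }
  }

  public class OrderService
  {
      // Lower tier: wrap and rethrow with context, but do NOT log here.
      public string GetOrder(int id)
      {
          try
          {
              return QueryDatabase(id);
          }
          catch (SqlException ex)
          {
              throw new OrderServiceException("Failed to load order " + id, ex);
          }
      }

      // Outermost boundary: log the full exception chain exactly once.
      public void HandleRequest(int id)
      {
          try
          {
              Console.WriteLine(GetOrder(id));
          }
          catch (OrderServiceException ex)
          {
              Console.Error.WriteLine("Order lookup failed: " + ex); // one stack trace, full context
          }
      }

      private string QueryDatabase(int id) { return "order-" + id; } // stand-in for real data access
  }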

The other big logging problem that we commonly observe in production applications is related to logging levels. .NET loggers define the following logging levels (named differently between the .NET TraceLevel and log4net, but categorically similar):

  • Off
  • Fatal
  • Error
  • Warning
  • Info
  • Verbose / Debug

In a production application, you should only ever be logging error- or fatal-level statements. In lower environments, it is perfectly fine to capture warning and even informational messages, but once your application is in production, the user load will quickly saturate the logger and bring your application to its knees. If you inadvertently leave debug-level logging on in a production application, it is not uncommon to see response times two or three times higher than normal!
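
Even where lower-level logging is legitimate, guard expensive log statements so they cost almost nothing when the level is disabled. A quick sketch using log4net's IsDebugEnabled check (the serialization call stands in for any expensive argument):

  using log4net;

  public class CheckoutController
  {
      private static readonly ILog Log = LogManager.GetLogger(typeof(CheckoutController));

      public void Checkout(object cart)
      {
          // Without the guard, the argument string is built even when Debug is off.
          if (Log.IsDebugEnabled)
          {
              Log.Debug("Cart state: " + ExpensiveSerialize(cart));
          }
      }

      private static string ExpensiveSerialize(object cart) { return cart.ToString(); } // stand-in
  }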

3. Code Dependencies

Developing applications is a challenging job. Not only are you building the logic to satisfy your business requirements, but you are also choosing the best libraries and tools to help you. Could you imagine building all of your own logging management code, all of your own XML and JSON parsing logic, or all of your own serialization libraries? You could build code to do this, but why should you when teams of open source developers have already done it for you? Furthermore, if you are integrating with a third-party system, should you read through a proprietary communication protocol specification, or should you purchase a vendor library that does it for you?

I’m sure you’ll agree that if someone has already solved a problem for you, it is more efficient to use his or her solution than to roll your own. If it is an open source project that has been adopted by a large number of companies, then chances are it is well tested and well documented, and you should be able to find plenty of examples of how to use it.

There are dangers to using dependent libraries, however. You need to ask the following questions:

  • Is the library truly well written and well tested?
  • Are you using the library in the same manner as the large number of companies that are using it?
  • Are you using it correctly?

Make sure that you do some research before choosing your external libraries and, if you have any question about a library’s performance, then run some performance tests. Open source projects are great in that you have full access to their source code as well as their test suites and build processes. Download their source code, execute their build process, and look through their test results. If you see a high percentage of test coverage then you can have more confidence than if you do not find any test cases!

Finally, make sure that you are using dependent libraries correctly. I work in an organization that is strongly opposed to Object Relational Mapping (ORM) solutions because of performance problems that they have experienced in the past.  But because I have spent more than a decade in performance analysis, I can assure you that ORM tools can greatly enhance performance if they are used correctly. The problem with ORM tools is that if you do not take the time to learn how to use them correctly, you can easily shoot yourself in the foot and destroy the performance of your application. The point is that tools that are meant to help you can actually hurt you if you do not take the time to learn how to use them correctly.

Before leaving this topic, it is important to note that if you are using a performance management solution, like AppDynamics, it can not only alert you to problems in your application, but it can alert you to problems in your dependent code, even if your dependent code is in a compiled binary form. If you find the root cause of a performance problem in an open source library, you can fix it and contribute an update back to the open source community. If you find the root cause of a performance problem in a vendor built library, you can greatly reduce the amount of time that the vendor will need to resolve the problem. If you have ever opened a ticket with a vendor to fix a performance problem that you cannot clearly articulate, then you have experienced a long and unproductive wait for a resolution. But if you are able to point them to the exact code inside their library that is causing the problem, then you have a far greater chance of receiving a fix in a reasonable amount of time.

4. Underlying Database Issues

Almost all applications of substance will ultimately store data in or retrieve data from a database or document store. As a result, the tuning of your database, your database queries, and your stored procedures is paramount to the performance of your application. 

There is a philosophical division between application architects/developers and database architects/developers. Application architects tend to feel that all business logic should reside inside the application and the database should merely provide access to the data. Database architects, on the other hand, tend to feel that pushing business logic closer to the database improves performance. The answer to this division is probably somewhere in the middle.

As an application architect, I tend to favor putting more business logic in the application, but I fully acknowledge that database architects are far better at understanding the data and the best way of interacting with that data. I think it should be a collaborative effort between both groups to derive the best solution. But regardless of where you fall on this spectrum, be sure that your database architects review your data model, all of your queries, and your stored procedures. They have a wealth of information to share on the best way to tune and configure your database, and they have a host of tools that can tune your queries for you. For example, there are tools that will optimize your SQL for you, following these steps:

  • Analyze your SQL
  • Determine the explain plan for your query
  • Use artificial intelligence to generate alternative SQL statements
  • Determine the explain plans for all alternatives
  • Present you with the best query options to accomplish your objective

When I was writing database code, I used one of these tools and quantified the results under load; a few tweaks and optimizations can make a world of difference.

5. Underlying Infrastructure Issues

Recall from the previous article that .NET applications run in a layered environment, shown in figure 3.

 Figure 3 .NET Layered Execution Model

Your application runs inside either an ASP.NET or Windows Forms container, uses the ADO.NET libraries to interact with databases, and runs inside a CLR that runs on an operating system that runs on hardware. That hardware is networked with other hardware that hosts a potentially different technology stack. We typically have one or more load balancers between the outside world and your application, as well as between application components, plus API management services and caches at multiple layers. All of this is to say that we have a LOT of infrastructure that can potentially impact the performance of your application!

You must therefore take care to tune your infrastructure. Examine the operating system and hardware upon which your application components and databases are running to determine whether they are behaving optimally. Measure the network latency between servers and ensure that you have enough available bandwidth to satisfy your application interactions. Look at your caches and validate that you are seeing high cache hit rates. Analyze your load balancer behavior to ensure that requests are quickly routed to all available servers. In short, you need to examine your application performance holistically, including both your application business transactions and the infrastructure that supports them.

Conclusion 

This article presented a top-5 list of common performance issues in .NET applications. Those issues include:

  • Synchronization and Locking
  • Excessive or unnecessary logging
  • Code dependencies
  • Underlying database issues
  • Underlying infrastructure issues

In the next article we’re going to pull all of the topics in this series together to present the approach that AppDynamics took to implementing its APM strategy. This is not a marketing article, but rather an explanation of why certain decisions and optimizations were made and how they can provide you with a powerful view of the health of a virtual or cloud-based application.

Start solving your .NET application issues today: try AppDynamics for free.

How Oceanwide gains visibility into their .NET private cloud environment

I recently had the opportunity to catch up with Jonathan Victor, COO at Oceanwide, a leading insurance software company. Jonathan and I discussed the challenges they faced prior to implementing AppDynamics APM, such as gaining visibility into their cloud operations and diagnosing the root cause of performance issues. We also discussed how they ultimately chose AppDynamics over competitors and the benefits they’ve seen since using AppDynamics.

Hannah Current: Please tell us a little about Oceanwide and your role there.

Jonathan Victor: Since 1996, Oceanwide has been delivering SaaS core processing solutions to property and casualty insurers of all sizes across the globe. Our configurable insurance software solutions enable insurers to react to market changes, configure new products and manage their products with increased speed and lower costs for any line of business, virtually eliminating professional service fees. Designed from the ground up to be web enabled and fully configurable without custom programming, our solutions automate policy, billing, claims, underwriting, document management, agent/consumer portals and more for insurers, MGAs, and brokers.

I’m responsible for managing the Oceanwide private cloud and the insurance applications we develop and run on top of it. I have a dedicated Cloud Operations team which ensures that Oceanwide’s insurance platform is performing optimally and available 24/7/365 to meet the needs of our global user base.

HC: What challenges did you face before using APM? And how did you troubleshoot before using an APM tool?

JV: Prior to implementing an APM solution in our cloud operations center, we were limited to application logs and infrastructure-related monitoring and alerting. Identifying the source of a performance or application issue was a challenge, as the application stack was very much a black box and the data we were getting from the infrastructure and database tiers was disconnected. We worked with many disparate data sources, including logs and system performance monitors. The task of correlating this data to identify a root cause was a complex, manual effort.

HC: What was your APM selection process/criteria?

JV: We required an APM solution that fully supported the .NET / Windows / SQL technology stack and provided rich user and transaction details from the browser through to our underlying cloud infrastructure. We also required a solution that was scalable, had an intuitive UI that our operations and development teams could use with minimal training, and could run on the Oceanwide private cloud given our industry and data privacy constraints.

HC: Why AppDynamics over the other solutions out there?

JV: We reviewed several APM vendors and felt AppDynamics’ value proposition best suited our needs. The AppDynamics solution has proven to be a reliable, powerful troubleshooting and real-time monitoring tool that is at the core of our cloud operations center. AppDynamics’ excellent customer support team worked closely with our engineers as we implemented the solution across our private cloud. This was a critical component, validated during our pilot, as we needed to be confident that the AppDynamics agents would not have an adverse impact on our insurance applications.


HC: How has AppDynamics helped to solve some critical problems?

JV: AppDynamics has been central in our analysis of performance improvements and/or application issues on our insurance platform.

When investigating application performance, the ability to drill down through application snapshots and view the distribution of time across the transaction path allows for rapid identification of the components involved. Being able to then directly correlate this information with metrics such as CPU and memory utilization is incredibly valuable, as it would require hours of manual work to accomplish the same task.

AppDynamics has enabled our cloud operations team to quickly identify stored procedure calls and to view many other granular details for a given business transaction. The live monitoring provides us with immediate notifications when an application performance issue has occurred and gives our operations team the ability to be proactive and avert issues before they impact end users.

We are looking forward to migrating to the newest version of AppDynamics, which includes full database monitoring and a single pane of glass view across the application, database and infrastructure tiers.

Interested in seeing how AppDynamics APM can help you gain visibility into your environment? Check out a FREE trial now!

 

A UNIX Bigot Learns About .NET and Azure Performance – Part 1

This blog post is the beginning of my journey to learn more about .NET and Microsoft Azure as they apply to performance monitoring. I’ve long admitted to being a UNIX bigot, but recently I’ve seen a lot of good things going on at Microsoft. As a performance monitoring geek, I feel compelled to understand these technologies at a deep enough level to provide good guidance when asked by peers and acquaintances.

The Importance of .NET

Here are some of the reasons why .NET is so important:

  •       In a 2013 Computer World article, C# was listed as the #6 most important programming language to learn, with ASP.NET ranking #14.
  •       In a 2010 Forrester article, .NET was cited as the top development platform used by respondents.
  •       .NET is also very widely used in financial services. An article published by WhiteHat Security stated that, among financial services companies, “.NET, Java and ASP are the most widely used programming languages at 28.1%, 25% and 16% respectively.”
The Rise of Azure

.NET alone is pretty interesting from a statistical perspective, but the rise of Azure in the cloud computing PaaS and IaaS world is a compounding factor. In a “State of the Cloud” survey conducted by RightScale, Azure was found to be the third most popular public cloud computing platform for enterprises. In a report published by Capgemini, 73% of respondents globally stated that Azure was part of their cloud computing strategy, with strong support across the retail, financial services, energy/utilities, public, and telecommunications/media verticals.

Developer influence

Not to be underestimated in this .NET/Azure world is the influence that developers will have on overall adoption levels of each technology platform. Microsoft has created an integration between Visual Studio (the IDE used to develop on the .NET platform) and Azure that makes it extremely easy to deploy .NET applications onto the Azure cloud. Ease of deployment is one of the key factors in the success of new enterprise technologies, and Microsoft has created a great opportunity for itself by ensuring that .NET apps can be deployed to Azure through the interface developers already know.

The fun part is yet to come

Before I started my research for this blog series, I didn’t realize how far Microsoft had come with its .NET and Azure technologies. If you work in IT operations, you absolutely must understand these important technologies and embrace the fact that Microsoft has really entrenched itself in the enterprise. I’m looking forward to learning more about the performance considerations of .NET and Azure and sharing that information with you in my follow-up posts. Keep an eye out for my next post as I dive into the relevant IIS and WMI/Perfmon performance counters.

Diving Into What’s New in Java & .NET Monitoring

In the AppDynamics Spring 2014 release we added quite a few features to our Java and .NET APM solutions. With the addition of service endpoints, an improved JMX console, JVM crash detection and crash reports, additional support for many popular frameworks, and async support, we have the best APM solution in the marketplace for Java and .NET applications.

Added support for frameworks:

• Typesafe Play/Akka

• Google Web Toolkit

• JAX-RS 2.0

• Apache Synapse

• Apple WebObjects

Service Endpoints

With the addition of service endpoints, customers with large SOA environments can define specific service points to track metrics and get associated business transaction information. Service endpoints help service owners monitor and troubleshoot their own specific services within a large set of services.

JMX Console

The JMX console has been greatly improved: it can now manage complex attributes, execute MBean methods, and update MBean attributes.

JVM Crash Detector

The JVM crash detector has been improved to provide crash reports with dump files that allow tracing the root cause of JVM crashes.

Async Support

We added improved support for asynchronous calls and a waterfall timeline for better clarity into where time is spent during requests.
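To picture the kind of asynchronous fan-out the waterfall timeline visualizes, here is a minimal C# sketch; the endpoint URLs are hypothetical placeholders, and the manual stopwatch stands in for the per-call timing an APM agent would capture automatically.

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class AsyncFanOut
{
    static async Task Main()
    {
        var stopwatch = Stopwatch.StartNew();
        using (var http = new HttpClient())
        {
            // Two downstream calls run concurrently; a waterfall view shows
            // their overlapping time segments instead of one opaque total.
            Task<string> inventory = http.GetStringAsync("http://inventory.example.com/api/stock");
            Task<string> pricing   = http.GetStringAsync("http://pricing.example.com/api/quote");

            await Task.WhenAll(inventory, pricing);
        }
        Console.WriteLine("Total request time: {0} ms", stopwatch.ElapsedMilliseconds);
    }
}
```

Because the two calls overlap, total request time tracks the slower call rather than the sum of both, which is exactly the relationship a waterfall timeline makes obvious.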

      

AppDynamics for .NET applications has been greatly improved with better Windows Azure integration, support for ASP.NET MVC 5, improved Windows Communication Foundation support, and RabbitMQ support.

     

     

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics today.

Announcing AppDynamics 2014 Spring Release

AppDynamics is excited to announce the 2014 Spring Release of the AppDynamics platform, which continues our tradition of disruptive innovation and extends our leadership position through:

• Expanded support for a wide range of applications and frameworks, including Java, .NET, PHP, Scala, and now Node.js applications

• Mobile application performance monitoring with end-to-end visibility across iOS and Android

• Support for Big Data and NoSQL databases, with MongoDB and extensions for Cassandra, Couchbase, HBase, and Redis

• Cloud partnerships with Azure, Amazon Web Services, Pivotal, and OpenShift by Red Hat

• More than 50 new integrations and partnerships with infrastructure and other third-party solutions, including NetApp, Keynote, ServiceNow, MuleSoft, Typesafe, Apica, Splunk, Boundary, Amazon Web Services, and Google Compute Engine

“We chose AppDynamics because they are the proven leader in application intelligence,” said Ravi Nekkalapu, director of technology and infrastructure architecture, Wyndham Hotel Group. “The latest improvements including enhanced support for apps built in Java, .NET and PHP and brand new support for apps built in Node.js and Scala provide us better and deeper visibility into all our real time production applications and allows us to better manage and improve application performance 24/7. With AppDynamics Spring 2014 Release the company strengthens its leadership position in application performance management.”

At AppDynamics we strive to deliver an intuitive user experience that is useful not only for developers and operations professionals but also for the executive team. In this release we have improved the user experience of application flow maps to be more scalable for complex applications.

We provide the best APM solution in the market for Java, .NET, and PHP, and now support Node.js, Scala, iOS, and Android. The extensive new capabilities allow organizations to proactively monitor, manage, and analyze the most complex software environments. All of this happens in real time, in production, giving increased visibility, understanding, and control across applications, infrastructure, and user experience. By eliminating blind spots, IT can resolve issues faster, reducing downtime costs.

AppDynamics Spring 2014 Release features expanded support for the Java ecosystem, including support for the Scala language and the Typesafe Reactive Platform. Get complete visibility into applications built on top of Play/Akka.

“Our new partnership with AppDynamics provides end to end visibility into production applications running on the Typesafe Reactive Platform. With AppDynamics’ unique support of Scala, Akka and Play, developers will be able to build reactive applications in record time, troubleshoot issues in real-time, and most importantly be certain that every user has a great experience with their application,” said Dave Martin, Vice President of Worldwide Sales and Business Development, Typesafe.

AppDynamics is now available for mobile apps running on iOS and Android. Through AppDynamics Mobile APM you can get complete visibility into the end user experience of your iOS and Android users globally, in real time.

• Crash Reporting – Understand the root cause of application crashes and hangs

• Network Request Snapshots with server-side correlation – Get end-to-end visibility from the mobile device all the way to multiple tiers on the server side

• Device & User Analytics – Analytics on devices, carriers, OS versions, and application versions

AppDynamics Mobile APM features crash reports and network request snapshots to get to the root cause of performance problems, whether on the mobile device or the server side.

AppDynamics Spring 2014 Release features beta support for Node.js applications, with support for all the core AppDynamics features users know and love, including auto-discovery of business transactions, dynamic baselining, application flow maps, and transaction/process snapshots.

Understand exactly what is happening in your Node.js applications with process snapshots and support for PostgreSQL, MySQL, MongoDB, Riak, Cassandra, Memcache, and Redis backends.

AppDynamics for Java continues to be the best APM solution in the market, with new support for Google Web Toolkit, JAX-RS 2.0, Apache Synapse, and Apple WebObjects. We have also released Service Endpoints, which enables customers with large SOA environments to define specific service points to track metrics and get associated business transaction information, helping service owners monitor and troubleshoot their own services within a large set of services.

AppDynamics for .NET continues to be the best APM solution in the market, with new support for ASP.NET MVC 5 and RabbitMQ and improved Windows Azure integration. We introduced support for async calls and added a waterfall visualization to easily identify problems in your async applications.

AppDynamics for PHP is now available for PHP 5.2-5.5 with distributed transaction correlation. We introduced support for command line scripts and for Redis and RabbitMQ backends.

     

AppDynamics End User Experience Management has been greatly improved with client-side waterfall timing in browser snapshots and server-side correlation.

With the new client-side waterfall timings you get granular insight into performance on both the client side and the server side.

     

AppDynamics’ Spring 2014 Release includes new support for NoSQL and Big Data stores, including MongoDB, Hadoop, Couchbase, and Cassandra. NoSQL databases are growing in popularity because they allow for design simplicity, horizontal scaling, and greater control over availability. AppDynamics for Databases now supports MongoDB natively and can auto-detect replica sets and sharded clusters, monitor all queries, and drill down into query executions.

“Together with AppDynamics and MongoDB, organizations can now leverage application performance management solutions to gain further insight into their MongoDB-based applications. This partnership allows users end-to-end visibility for optimal performance in production, an important feature for companies as they scale their MongoDB deployments,” said Matt Asay, Vice President of Marketing and Business Development, MongoDB.

AppDynamics formed strategic alliances with leading web infrastructure companies like NetApp, MongoDB, Typesafe, and MuleSoft. The AppSphere community delivered more than 50 new extensions, providing integrations to Keynote, ServiceNow, Splunk, and Apica. Use our machine agent to track, graph, and correlate metrics from your underlying infrastructure (databases, caches, queues, hardware, etc.) in the AppDynamics metrics browser.

The release also enhances support for major cloud providers, including Amazon Web Services, Windows Azure, Pivotal, and OpenShift by Red Hat, and lets you monitor infrastructure and costs on Amazon Web Services and Google Compute Engine. The AppDynamics Amazon Web Services extension allows users to integrate CloudWatch into AppDynamics and get richer metrics around their Amazon cloud applications by combining AppDynamics’ own application metrics with those from CloudWatch. For example, by looking at CloudWatch’s billing metrics you can analyze the costs associated with various levels of performance.
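As a rough illustration of the billing data involved, here is a minimal C# sketch that queries CloudWatch’s EstimatedCharges metric directly, assuming the classic AWS SDK for .NET with its synchronous client; this is a sketch of the underlying CloudWatch query, not the extension’s actual code, and the one-day window and hourly period are arbitrary choices.

```csharp
using System;
using System.Collections.Generic;
using Amazon;
using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;

class BillingSpotCheck
{
    static void Main()
    {
        // AWS billing metrics are only published to the us-east-1 region.
        var client = new AmazonCloudWatchClient(RegionEndpoint.USEast1);

        var request = new GetMetricStatisticsRequest
        {
            Namespace  = "AWS/Billing",
            MetricName = "EstimatedCharges",
            Dimensions = new List<Dimension>
            {
                new Dimension { Name = "Currency", Value = "USD" }
            },
            StartTime  = DateTime.UtcNow.AddDays(-1), // last 24 hours
            EndTime    = DateTime.UtcNow,
            Period     = 3600,                        // one datapoint per hour
            Statistics = new List<string> { "Maximum" }
        };

        // Each datapoint is the running month-to-date charge at that hour.
        var response = client.GetMetricStatistics(request);
        foreach (Datapoint point in response.Datapoints)
            Console.WriteLine("{0:u}  ${1}", point.Timestamp, point.Maximum);
    }
}
```

Plotting datapoints like these alongside response-time metrics is the kind of cost-versus-performance comparison the extension automates inside the AppDynamics metrics browser.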

“With our Spring 2014 Release, we are providing organizations enterprise-wide visibility into the performance and behavior of the applications that drive their software-defined business,” said Jyoti Bansal, founder and CEO of AppDynamics. “Once again, we are innovating with a new and enhanced set of capabilities that apply intelligence to instantly identify performance bottlenecks, anomalies, enable automatic fixes and continuously measure business impact. We do this in real time, in production, with cloud or on-premise deployment flexibility. This goes way beyond monitoring—it’s true application intelligence.”

We released far too many features and improvements to cover in one blog post, so stay tuned for deep dives into what is new in Java & .NET, PHP & Node.js, and Mobile & End User Monitoring.

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics Pro today.