Monitoring ADO.NET Performance

One of the most overlooked parts of .NET is the glue that links your application to a database: ADO.NET (an evolution of ActiveX Data Objects). The lowly connection string is the only real access we get to the underlying technology.

I was with a client recently and noticed they had the following connection string:
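A representative string is sketched below (the server, database, and credential values are placeholders; the detail that stood out is the Max Pool Size setting):

```
Data Source=AppServer01;Initial Catalog=OrdersDB;User ID=appUser;Password=*****;Max Pool Size=1500
```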

I was interested in why they had a pool size setting of 1500. The answer lay in the fact that two years ago they had hit a pooling issue (much like the one we’ll explore shortly!) and had set the Max Pool Size to 15 times the default! We then set about seeing whether that was still needed with their current application.

Using AppDynamics, we monitored the calls to their database from the application’s perspective; the calls are detected automatically:

Down to the methods where the connection and queries are run:

Extracting the Connection String (without the password) and seeing how long it takes to get a connection from the pool:

These snapshots provide deep visibility into an application’s health and performance, but in the case of performance issues, what’s the easiest way to tune production applications? What happens when you start to see the following?

ADO.NET pools connections for reuse, and there are a number of key rules for using these connections:

  •       “Open Late and Close Early” – Don’t hold on to connections in code; only open them when you need them.
  •       Beware of “Pool Fragmentation” – Connections are pooled according to the connection string plus the user identity. Therefore, if you use Integrated Security, you may get a Connection pool for every user.
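In code, “Open Late and Close Early” usually means wrapping the connection in a `using` block so it goes back to the pool the moment you’re done. A minimal sketch (the query, table, and connection string are placeholders):

```csharp
using System.Data.SqlClient;

public int GetOrderCount(string connectionString)
{
    // "Open Late": the connection is only opened once we actually need it.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
    {
        connection.Open();
        return (int)command.ExecuteScalar();
    } // "Close Early": Dispose() runs here and returns the connection to the pool
}
```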

To aid tuning, the following parameters can be used:
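For SqlClient, the most commonly used pooling keywords in the connection string are the following (the values shown are the defaults):

```
Pooling=true            (enable connection pooling; default: true)
Min Pool Size=0         (connections kept open even when idle)
Max Pool Size=100       (hard cap on connections in the pool)
Connect Timeout=15      (seconds to wait when acquiring a connection)
Load Balance Timeout=0  (seconds an idle connection survives before being destroyed)
```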

Using Windows performance counters, you can monitor multiple ADO.NET metrics; for any .NET deployment I’d recommend monitoring:

  •       Number of Pooled Connections
  •       Number of Active Connections
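These counters can also be read programmatically rather than through perfmon. A minimal C# sketch, assuming the SQL Server provider; the instance name is a placeholder that follows the provider’s `AppName[pid]` pattern:

```csharp
using System;
using System.Diagnostics;

// Read the ADO.NET pool counters exposed by the SqlClient provider.
// The instance name identifies your process, e.g. "MyApp[1234]" -
// check perfmon for the exact value on your machine.
string instance = "MyApp[1234]"; // placeholder

var pooled = new PerformanceCounter(
    ".NET Data Provider for SqlServer", "NumberOfPooledConnections", instance, readOnly: true);
var active = new PerformanceCounter(
    ".NET Data Provider for SqlServer", "NumberOfActiveConnections", instance, readOnly: true);

Console.WriteLine($"Pooled: {pooled.NextValue()}, Active: {active.NextValue()}");
```

Note that NumberOfActiveConnections is one of the off-by-default counters covered below, so it must be enabled before it will report values.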

Within AppDynamics, you can monitor and baseline the health of your ADO.NET connection pool strategy in your production application.

The following illustrates a custom dashboard created simply to monitor the database throughput rate and response time against the size of a connection pool. In this scenario, by slowly ramping up the number of concurrent users on the application, you can visualise whether your application will scale.

As new users are added to the application, the Connection Pool size increases automatically. This metric can be monitored in conjunction with database response time, error rates, and overall load.

SQL Server, for example, has a default maximum pool size of 100 and a default connection timeout of 15 seconds (for the SqlClient provider). I often see organisations dropping the Connection Timeout further still: if you are going to fail to get a database connection, it’s better to fail sooner rather than leave your user hanging.

Use the AppDynamics database view to monitor average and maximum pool times:

Once a performance counter is registered, AppDynamics will automatically baseline it and enable you to configure effective alerts for your application.

The following performance counters are off by default (they can be enabled in the Web.config file; see ActivatingOffByDefault):

  •       NumberOfFreeConnections
  •       NumberOfActiveConnections
  •       SoftDisconnectsPerSecond
  •       SoftConnectsPerSecond
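Concretely, these counters are switched on via a trace switch in the application’s configuration file. A sketch following the ActivatingOffByDefault guidance:

```xml
<configuration>
  <system.diagnostics>
    <switches>
      <!-- 4 = verbose: exposes the off-by-default connection pool counters -->
      <add name="ConnectionPoolPerformanceCounterDetail" value="4" />
    </switches>
  </system.diagnostics>
</configuration>
```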

By enabling NumberOfActiveConnections, you can visualise how the connection pool size increases.

To add any performance counter into AppDynamics, use the following extension:

http://community.appdynamics.com/t5/eXchange-Community-AppDynamics/Windows-Performance-Counter-Configuration-Extension/idi-p/9713

Start monitoring your .NET application: check out a FREE trial today!