Obtaining leading Application Intelligence is now a seamless part of the Oracle Cloud Platform experience

Developing high-performance applications for private, public, and hybrid cloud environments just became easier with Oracle’s recent enhancements to its cloud platform. Whether you are looking for an IaaS, PaaS, or SaaS environment, Oracle and AppDynamics are committed to providing their customers with a robust environment that delivers a broad portfolio of integrated services across applications, platforms, and infrastructure.

But the benefits to Oracle Cloud Platform users don’t stop there. Through the Oracle Cloud Marketplace, customers now have instant access to all the benefits of AppDynamics’ leading Application Intelligence Platform, which is being announced this week at AppSphere. As Oracle mentions, AppDynamics is one of a handful of partners driving innovation and customer success with the Oracle Cloud Platform.

Today’s complex software applications often contain frustrating blind spots and mysterious, recurring problems. AppDynamics eliminates that complexity by delivering the simplicity, visibility, and deep diagnostics that Ops and Dev teams require. By selecting AppDynamics on Oracle’s Cloud Marketplace, every application you run on the Oracle Cloud Platform can perform the way it should: any time, on any cloud.

Now you can build high performance into your applications at every stage of the software development lifecycle – from dev/test, to scaling/deployment, to production.

So, add the power of application intelligence to all your activities on Oracle Cloud Platform. Just visit AppDynamics on the Oracle Cloud Marketplace today!

10 Popular Oracle Performance Metrics that Every DBA Should Know

Before you jump into creating KPIs for your Oracle database, clearly state and test your assumptions right up front. That has to come first. Otherwise, you’ll repeatedly chase performance gains that can never materialize because your assumptions were wrong in the first place. Be prepared to fail as often as necessary to discover which assumptions have to go. It also helps to have a more general grounding in how Oracle came about and where Oracle excels now.

The Origin of Oracle

Oracle started off as Software Development Laboratories, founded by Larry Ellison, Bob Miner and Ed Oates in 1977. They wrote Oracle Version 1 in assembly language, but that version was never released.

By 1979, they had Oracle Version 2 on the market, billed as the first commercially available SQL relational database management system (RDBMS). The founders changed the name to Relational Software Inc. (RSI), but shortly afterward it was rebranded again in honor of the company’s star performer: the Oracle database. The unofficial story is that it was RSI’s first major client, the CIA, who chose the more imaginative name Oracle.

Oracle Now

The latest release, as of the date of this article, is Oracle Database 12c, which came out in 2013. The 12c release was specifically designed to be more cloud-friendly. It introduced a multitenant architecture so that a company with many geographically dispersed databases could bring them together rapidly and manage them in the cloud from a central location. Many sysadmins have been pushing for in-memory data processing as part of their analytics requirements, and 12c finally gave them that capability as well. There are five editions that businesses can choose from: Express, Standard One, Standard, Enterprise, and Personal.

  • Express is the stripped down, entry-level database for small companies.

  • Standard One is the next step up for small businesses.

  • Standard is for larger organizations that require application clustering and have particular types of hardware.

  • Enterprise is for high-volume online transactions, data warehouses with intense query requirements, and Internet-based applications with the strictest uptime parameters.

  • Personal is the same as Enterprise except that it is for single-user development with no clustering. Because it is for database developers, features like tuning and diagnostics are not available.

Other than Express, which is distributed separately, all of these editions are in the same download, so you can choose one at deployment time rather than at purchase.

What Oracle Does Best

Databases are the backbone of most mission-critical applications today. Hundreds of thousands of businesses are using Oracle for on-premise data centers as well as cloud-based and hybrid architectures. Databases run everything from back office applications to reporting on daily business intelligence statistics to strategic analysis and forecasting for the C-suite.

Oracle has been optimized for dealing with massive data sets common to business challenges involving big data. Solutions that depend on Oracle are frequently targeted at enterprise architects searching for an IT infrastructure robust enough to handle changing business needs. Also, Oracle is often chosen by developers who need a reliable database solution for their applications. Database administrators and sysadmins often turn to Oracle when they are looking for quick provisioning and high performance. IT execs regularly cite Oracle as a flexible and well-supported database option that has a widely established reputation across industries.

While your Oracle database is designed for high-volume usage, you’ll need to carefully monitor performance levels to see if further resources are required to support the deployment. You do not want to find out that you need massive configuration changes or many extra servers with additional CPUs after your database crashes right in the middle of the business week. Proactively keep an eye on KPIs of overall database health, potential bottlenecks, and indications that the system is not working up to its potential.

Any active monitoring operation requires you to stay on your toes and strive to measure things you have not thought of before. Here are 10 well-known metrics that you will need to use at some point.

1. Under-allocated RAM regions – Most of the time you can rely on the automatic memory management processes recommended by Oracle. Sometimes, though, you can observe significant speed improvements by increasing RAM allocations instead of falling back on much slower disk access. When you do not allocate sufficient RAM for shared_pool_size, pga_aggregate_target, and db_cache_size, your database will chug along at the rate of physical I/O. Use Memory Advisor whenever possible when making manual adjustments.
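If you prefer to check these regions by hand, the standard dynamic performance views are a reasonable starting point; a sketch (target sizes and thresholds are deployment-specific):

```sql
-- Current sizes of the key memory regions
SELECT name, value
FROM   v$parameter
WHERE  name IN ('shared_pool_size', 'db_cache_size', 'pga_aggregate_target');

-- Oracle's own estimate of how resizing the buffer cache
-- would change the number of physical reads
SELECT size_for_estimate, size_factor, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;
```

Rows where estd_physical_reads keeps dropping as size_for_estimate grows suggest the cache is under-allocated.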

2. In Memory Sort Ratio – When your database is slow, this can be an important piece of the puzzle, because disk sorts must be handled in the temporary tablespace, which is vastly slower than sorting in RAM.
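This ratio can be computed directly from v$sysstat; a sketch (the statistic names are standard, but your target percentage depends on workload):

```sql
-- Percentage of sorts completed in memory rather than on disk
SELECT ROUND(100 * mem.value / NULLIF(mem.value + dsk.value, 0), 2)
       AS in_memory_sort_pct
FROM   v$sysstat mem,
       v$sysstat dsk
WHERE  mem.name = 'sorts (memory)'
AND    dsk.name = 'sorts (disk)';
```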

3. Parse to Execute Ratio – The first time a SQL statement executes, it must be parsed, which includes a syntax check, a semantic check, a decision tree, and an execution plan so it runs with maximum efficiency. Execution plans are then stored in the library cache to save time on the next execution. Parses can be hard or soft, but you want to reduce both. A hard parse should occur only on the initial run, when the statement must be parsed from scratch; a soft parse reuses the cached plan and only binds the variables. For a better parse to execute ratio, increase session_cached_cursors from its default of 50 and look for your best performance somewhere between 100 and 1000.
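A rough way to watch this ratio from v$sysstat, sketched below (the parameter value shown is an illustrative starting point, not a universal recommendation):

```sql
-- Execute-to-parse percentage: low values mean statements
-- are being re-parsed instead of re-executed from cache
SELECT ROUND(100 * (1 - prs.value / NULLIF(exe.value, 0)), 2)
       AS execute_to_parse_pct
FROM   v$sysstat prs,
       v$sysstat exe
WHERE  prs.name = 'parse count (total)'
AND    exe.name = 'execute count';

-- Raise the cursor cache and re-measure
-- (with SCOPE = SPFILE the change applies after a restart)
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE = SPFILE;
```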

4. Excessive nested loop joins – Iterative loops are going to be slow; there’s no way around it. You’ll need to dig into the code to find faster solutions than nested loop joins wherever possible. If you have no better options, 64-bit Oracle systems are meant for you because they can devote gigabytes of RAM to sorts and hash joins. Be sure you have enough RAM to allow the CBO to choose hash joins by setting the pga_aggregate_target parameter for faster turnaround.
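To see which cached statements lean most heavily on nested loops, something along these lines against v$sql_plan can help:

```sql
-- Statements in the shared pool whose plans contain nested loop joins
SELECT sql_id, COUNT(*) AS nested_loop_steps
FROM   v$sql_plan
WHERE  operation = 'NESTED LOOPS'
GROUP  BY sql_id
ORDER  BY nested_loop_steps DESC;
```

The sql_id values at the top of the list are the candidates worth reworking or steering toward hash joins.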

5. Page Cleaning Ratio – This has become a more important metric for online publishers and e-commerce pages. Page cleaners write old pages to the disk asynchronously so new pages can be read into the buffer pool. A great page cleaning ratio is around 95 percent.

6. Average Buffer Pool I/O Response Time – This is something that end users will be very interested in finding out. People can sense I/O response bottlenecks quickly and tend to complain loudly. Look for an average buffer pool read/write time in the neighborhood of 10 milliseconds.
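One way to approximate this per datafile from the standard views; a sketch (readtim is recorded in centiseconds, hence the conversion to milliseconds):

```sql
-- Average read latency per datafile, in milliseconds
SELECT df.file_name,
       ROUND(10 * fs.readtim / NULLIF(fs.phyrds, 0), 2) AS avg_read_ms
FROM   v$filestat     fs
JOIN   dba_data_files df ON df.file_id = fs.file#
ORDER  BY avg_read_ms DESC;
```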

7. Long Full Table Scans – If you are consistently seeing full table scans, something has gone terribly wrong. Online transactions and high-volume operations need to work faster. Look at your transaction design again, check for missing or poorly designed indexes, and fully optimize your SQL. If your full table scan brings back under 20 percent of table rows, it is likely there are missing indexes.
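A query along these lines can surface full scans against larger tables (the 100,000-row cutoff is an arbitrary example):

```sql
-- Cached statements doing full scans of larger tables
SELECT p.sql_id, p.object_owner, p.object_name, t.num_rows
FROM   v$sql_plan p
JOIN   dba_tables t
       ON  t.owner      = p.object_owner
       AND t.table_name = p.object_name
WHERE  p.operation = 'TABLE ACCESS'
AND    p.options   = 'FULL'
AND    t.num_rows  > 100000;
```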

8. Transaction Log Response Time – Latency in transactions can be a huge problem when payments are involved and log response times can have a big influence over latency. Look for your log response to be no more than 10 milliseconds, just like your buffer pool I/O.
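The system-wide average for the relevant wait events can be sketched from v$system_event:

```sql
-- Average commit (redo log) latency in milliseconds
SELECT event,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2)
       AS avg_wait_ms
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');
```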

9. Rows Read/Rows Selected Ratio – This can save you hours of research. It answers how many rows of the database were read before the specified rows were returned. If you are dealing with a ratio higher than 20, you may have an indexing problem. From there you can go deeper and investigate cases where the number of rows read is much larger than the total executions.

10. Human Misfeasance – Yes, it is true: the biggest problem with your database may be the DBA. Misfeasance is better than malfeasance, however; it means you did not intend to do anything wrong. Misfeasance means you may have neglected to monitor your database (via STATSPACK/AWR), for example, or forgot to set up custom exception reporting alerts on the OEM performance screen. This is where you’ll find instance efficiency, wait bottlenecks, slow SQL response time, or wasted space in the shared global area. A wise man once said, “Man is the measure of all things.” Don’t neglect to measure yourself.

Keys to Improving Performance

There are two areas you need to investigate before you measure for any of these KPIs.

1. Ask your users what they need.

Are processes running too slow? Do they have to be executed within a particular time window, such as before the boss arrives on Monday morning? What drove people crazy in the past? What are their biggest frustrations now? The answers will guide you to the KPIs that apply to you most urgently.

2. Diagnose problems.

Whenever things get buggy (the web page will not refresh, or searches just hang there), get a snapshot of all the vital info, such as the OS state and the processes running. You should also take notes when everything is working great so you can make a precise comparison. Oracle’s Automatic Database Diagnostic Monitor can help you diagnose the most prevalent problems.

The Database And the Total System

Approach your performance improvements as a recursive and iterative process. Even after you make adjustments to the database, you may not see the improvements you expect because your assumptions were wrong. You’ll need to experiment to find the bottlenecks, and each one may reveal another series of bottlenecks somewhere else in the system.

Your total system performance is also affected by other system resources that interface with the database. For example, CPU cycles are often the culprit when SQL drags. Make sure you have enough CPUs for your run-queue and that there is a large amount of unused RAM. Data buffers and in-memory sorts all depend on your available RAM. The sure sign of a beginner system administrator is someone who tries to tune the database before checking out the stressors on the external environment.

When you are sure that the external environment is stable, it is time to start planning out your metrics in association with what the users need. Keep in mind as you develop your toolbox of metrics that having too many metrics will just slow you down in the end. Make sure your metrics are easy to collect, so you are not becoming part of the problem.

KPIs Are Personal

Remember that nothing is written in stone, and every business has its challenges. There are no specific numbers you are trying to hit. For example, 100 percent uptime is ideal, and 0 percent is uniformly bad, but there’s a great deal of room in the middle for establishing your individual KPI goals.

Go over your KPIs a thousand times to make sure you precisely measure what you think you are. Remember that if the metric is fluctuating drastically, there is probably something wrong in the measurement itself. Enjoy your Oracle database and never stop looking for ways to improve it.

Dressing for dinner with Oracle Tuxedo

Can you imagine a world where you have a single, centrally managed platform instrumenting and collecting data from your whole IT landscape in real time? One that allows you to troubleshoot issues, analyze their business impact, and even track the flow of business operations (order shipments, transaction settlements, phone line installations, etc.) in real time? We can at AppDynamics, which is why we have adopted the “Application Intelligence Platform” moniker.

The breadth and depth of this vision is a perfect illustration of how far we have come from our genesis as just a Java performance troubleshooting solution.

Of course, the world is not just composed of Java, which is why we now support heterogeneous applications composed of not only Java code, but also any combination of .NET, PHP, C++, and Node.js.

This composition of supported runtimes covers the majority of modern (post-Y2K) application platforms that large enterprise environments use, where the biggest challenges exist. Of course, the technology landscape in most business environments has accumulated over the decades rather than being built anew since Java roamed the earth, which takes me back to my past…

So, what is Tuxedo, then?

Today, if you want to write an application that serves a user base of any size, you wouldn’t hesitate to put it into an application runtime of some sort (a Java app server, or a plugin to a web server for PHP, .NET, or Node.js), since this takes care of a lot of the issues raised by multiplexing a single application between multiple users and provides facilities like database connection pooling on the back end.

Back in the 90s, however, there was no such thing as an app server. Mainframe applications were coded in assembler and COBOL and had TP Monitors like CICS to provide these capabilities. UNIX applications were implemented in C for the most part but had to reinvent all these non-functional wheels every time (or used a 2-tier architecture with a fat client connected directly to a database, an approach which scales very poorly as the application user population grows).

In a nutshell, Tuxedo is an application server that allows you to write C/C++ or COBOL applications on UNIX (or Windows). Until the dot-com boom swept Java to pre-eminence, it was the only game in town for creating scalable UNIX-based applications. Many people today are Tuxedo users without even realizing it. Use PeopleSoft? Clarify? Amdocs? Those are just a few examples. All of these packaged apps have Java application server based web-facing tiers, but a large proportion of the application logic is implemented in Tuxedo-based back-end services.

So, the net of this is that many “complex, distributed, modern (post 1990) applications” use Tuxedo to provide the necessary distribution and other runtime services.

Now for the good news

One piece of news announced at AppSphere is that AppDynamics has just gone into beta with support for a new application runtime: C/C++ on Linux or Windows. Moreover, our approach allows instrumentation of C/C++ applications purely via configuration, requiring no development-time code changes or application rebuilds; it is the only tool that can boast this. An SDK is also available, though, so you can hand-instrument the application code if you are feeling “old-school”.

Now for the better news

Better still, because Tuxedo imposes constraints on the applications that use it, there is a well-known set of APIs that can be used to identify transactions entering a Tuxedo environment and trace them through it, so the configuration is nearly identical for all Tuxedo applications.

So, what next?

To cut a long story short, AppDynamics now provides the capability to trace transactions end-to-end across a hybrid Java/Tuxedo environment. In the next article, I will present a simple demo application and walk step-by-step through what’s required to achieve this feat, without introducing any AppDynamics dependencies into the application code!

Try AppDynamics Java monitoring for FREE today!

Database Monitoring for MariaDB and Percona Server

Both MariaDB and Percona Server are forks of MySQL and strive to be drop-in replacements for MySQL from a binary, API compatibility, and command line perspective.

It’s great to have an alternative to MySQL since you never know what might happen to it now that it belongs to Oracle (Sun paid a billion dollars for MySQL before Oracle acquired Sun). In this blog post I set out to see if these MySQL forks would work 100% with AppDynamics for Databases. If you’re not familiar with the AppDynamics for Databases product, I suggest you take a few minutes to read this other blog post.

The Setup

Getting both MariaDB and Percona Server installed onto test instances was pretty simple. I chose to use 2 Red Hat Enterprise Linux (RHEL) servers running on Amazon Web Services (AWS) for no particular reason other than they were quick and easy to get running. My first step was to make sure that MySQL was gone from my RHEL servers by running “yum remove mysql-server”.

Installing both MariaDB and Percona Server consisted of setting up yum repository files (documented here and here) and running the yum installation commands. This took care of getting the binaries installed so the rest of the process was related to starting and configuring the individual database servers.

The startup command for both MariaDB and Percona Server is “/etc/init.d/mysql start”, so you can see that these products really do strive for drop-in compatibility with MySQL. As you can see in the screen grabs below, I ended up running MariaDB 10.0.3 and Percona Server 5.5.31-30.3.
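A quick sanity check that you are actually talking to the fork you installed is to ask the server itself; a minimal sketch:

```sql
-- MariaDB and Percona Server both report their identity here
SELECT VERSION();
SHOW VARIABLES LIKE 'version%';
```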


Connected to each of these databases were one instance of WordPress and one instance of Drupal in a nearly “out of the box” configuration, besides adding a couple of new posts to each CMS to help drive a small amount of load. I didn’t want to set up a load testing tool, so I induced a high disk I/O load on each server by running the UNIX command “cat /dev/zero > /tmp/zerofile”. This command pumps the number 0 into that file as fast as it can, basically crushing the disk. (Use Ctrl-C to kill this command before you fill up your disk.)

The Monitoring

Getting the monitoring set up was really easy. I used a test instance of AppDynamics for Databases to remotely monitor each database instance (yep, no agent install required). To initiate monitoring I opened up my AppDynamics for Databases console, navigated to the agent manager, clicked the “add agent” button, and filled in the fields as shown below (I selected MySQL as the database type):


My remote agent didn’t connect the first time I tried this because I forgot to configure iptables to let my connection through, even though I had set up my AWS firewall rules properly (facepalm). After getting iptables out of the way (I just turned it off since these were test instances), my database monitoring connections came to life and I was off and running.

The Result

Taking a look at all of the data pouring into AppDynamics for Databases I can see that it is 100% compatible with MariaDB and Percona Server. There are no errors being thrown and the data is everything that it should be.

The beauty of my induced disk I/O load was that just by clicking around the web interface of WordPress and Drupal I was getting slow response times. That always makes data more interesting to look at. So here are some screen grabs for each database type for you to check out…

MariaDB Activity

AppDynamics for Databases activity screen for the MariaDB database.

Percona Activity

AppDynamics for Databases activity screen for the Percona database.

MariaDB Explain Statement

Explain statement for a select statement in MariaDB.

Percona Explain Statement

Explain statement for a select statement in Percona.

MariaDB Statistics

A couple of statistics charts for MariaDB.

Percona Statistics

A couple of statistics charts for Percona.

If you’re currently running MySQL you might want to check out MariaDB and Percona Server. It’s possible that you might see some performance improvements since the storage engine for MariaDB and Percona is XtraDB as opposed to MySQL’s InnoDB. Having choices in technology is a great thing. Having a unified monitoring platform for your MySQL, MariaDB, Percona Server, Oracle, SQL Server, Sybase, IBM DB2, and PostgreSQL database is even better. Click here to get started with your free trial of AppDynamics for Databases today.

How To Set Up and Monitor Amazon RDS Databases

Relational databases are still an important application component even in today’s modern application architectures. There is usually at least one relational database lurking somewhere within the overall application flow, and understanding the behavior of these databases is a major factor in rapidly troubleshooting application problems. In 2009, Amazon launched its RDS service, which basically allows anyone to spin up a MySQL, Oracle, or MS-SQL instance whenever the urge strikes.

While this service is amazingly useful there are also some drawbacks:

  1. You cannot login and access the underlying OS of your database instance. This means that you can’t use any agent based monitoring tools to get the visibility you really want.
  2. The provided CloudWatch monitoring metrics are high level statistics and not helpful in troubleshooting SQL issues.

The good news is that you can monitor all of your Amazon RDS instances using AppDynamics for Databases (AppD4DB) and in this article I will show you how. If you’re unfamiliar with AppD4DB click here for an introduction.

Setting Up A Database Instance In RDS

Creating a new database instance in RDS is really simple.

Step 1, log in to your Amazon AWS account and open the RDS interface.


Step 2, Initiate the “Launch a DB Instance” workflow.


Step 3, select the type of instance you want to launch. In this case we will use MySQL but I did test Oracle and MS-SQL too.


Step 4, fill in the appropriate instance details. Pay attention to the master user name and password as we will use those later when we create our monitoring configuration (although we could create a new user only for monitoring if we want).
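If you would rather not hand the master credentials to a monitoring tool, a dedicated monitoring account is an alternative; a sketch for the MySQL case (the username here is hypothetical, and the exact privilege list AppD4DB requires is an assumption, so check the product documentation):

```sql
-- Hypothetical monitoring-only account; adjust the host mask
-- and privileges to fit your environment
CREATE USER 'monitor'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'monitor'@'%';
GRANT SELECT ON performance_schema.* TO 'monitor'@'%';
```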


Step 5, finish the RDS workflow. Notice I called the database “wordpress” as I will use it to host a WordPress instance. Also notice that we chose to use the “default” DB security group. You will need to access the security group settings after your new instance is created so that you allow access to the database from the internet. For the sake of testing I opened up my database (not shown in this workflow) so that the entire internet could connect to it with the right credentials. You should be much more selective if you have a real database instance with production applications connected.




Step 6, wait for your instance to be created and watch for the “available” status. When you click on the database instance row you will see the details populate in the “Description” tab below. We will use the “Endpoint” information to connect AppD4DB to our new instance. (At this point you can also build the database structure and connect your application to your running instance.)


Monitor Your Database With AppD4DB

Step 1, enable database monitoring from the “Agent Manager” tab in AppD4DB. Notice we map RDS “Endpoint” to AppD4DB “Hostname or IP Address” and in this case we are using the RDS “Master Username” and “Master Password” for “Username” and “Password” in AppD4DB. Also, since Amazon does not allow any access to the associated OS (via SSH or any other method) we cannot enable OS monitoring.


Step 2, start your new database monitoring and use your application. Here is a screen grab showing a couple of slow SQL queries.

RDS SQL Activity

The Results

So here is what I found for each type of database offered by Amazon RDS.

  • MySQL: Fully functional database monitoring.
  • Oracle: Fully functional database monitoring.
  • MS-SQL: All database monitoring functionality works except for File I/O Statistics. This means that we are 99% functional and capture everything else as expected including the ability to show SQL execution plans.

Amazon RDS makes it fast and easy to stand up MySQL, MS-SQL and Oracle databases. AppDynamics for Databases makes it fast and easy to monitor your RDS databases at the level required to solve your application and database problems. Sounds like a perfect match to me. Sign up for your free trial of AppD4DB and see for yourself today.

Glassdoor proves AppDynamics is a Great Place to Work!

It’s been almost two years since I joined AppDynamics and it’s been one of the best career moves I’ve ever made. I used to work at a competitor, and quickly realized I was working for the wrong company. Sometimes you just have to trust your gut feeling when it comes to technology–you’ve either got a product that’s special or you don’t, and I know what it’s like to experience both feelings.

At AppDynamics the technology is definitely special, but I also joined a group of like-minded people who share the same passion I do for application monitoring. The no-compromise approach to figuring out new ways of doing things that couldn’t be done previously, along with a laser focus on solving real-world problems for customers, is pretty inspiring. Things are never perfect at any company, but the passion to make our customers successful, and the will to win business professionally, is unique to AppDynamics. We really believe that enterprise software doesn’t have to suck, should never be shelfware, and should be affordable for everyone, which is one of the reasons we created AppDynamics Lite, a free product that now has over 100,000 users, and why our commercial product AppDynamics Pro is reasonably priced.

In just two years we’ve disrupted an application monitoring market that was previously dominated by expensive complex solutions that quite frankly sucked. This disruption was one of the reasons why Gartner recognized AppDynamics as a Leader in their 2012 APM Magic Quadrant, and we’ve only been selling our product for two years! This speaks volumes for what we’ve achieved in such a short period of time. What’s also great is that our customers are very vocal about their success; our case study page is packed with customer success stories, with several customers willing to publish actual ROI results from their AppDynamics deployments. How many real customer ROI stories have you read recently from any vendor? My guess is not many.

One online community that provides an accurate inside look at companies is Glassdoor.com. It basically lets employees rate different aspects of the company they work for, from compensation all the way through to culture and leadership. If you search Glassdoor.com for all the APM companies currently recognized in Gartner’s APM Magic Quadrant, here is what the top 10 looks like:

Glassdoor APM ratings

*Glassdoor ratings correct as of 1/10/2013

I’m pretty proud to work for a company where employees are very satisfied and give their CEO 100% approval. That says a lot about the success and leadership of the company–happy employees also means a happy place to work and trust me, this is pretty important when you spend most of your life at work!

One company that didn’t score well was Compuware. Only 38% of employees would recommend a friend and only 68% approve of their CEO. Not particularly encouraging when you need your employees to innovate, run through walls, and beat the competition. A hedge fund recently put an offer on the table to take Compuware private–let’s hope those guys can get the employees jazzed.

If you’re looking for the next challenge, cool technology and a great place to work, you should consider joining AppDynamics. We’ve got 21 positions currently open and we need great people to help scale the great company we’re building!

With customers like Netflix, Orbitz, Fox News, Vodafone and Yahoo you’ll experience the ins and outs of monitoring some of the largest applications in the world.

Oh, and you get to work with a superhero like me!


Application Monitoring with JConsole, VisualVM and AppDynamics Lite

What happens when mission-critical Java applications slow down or keep crashing in production? The vast majority of IT Operations (Ops) teams today bury their heads in log files. Why? Because that’s what they’ve been doing since IBM invented the mainframe. Diving into the weeds feels good; everyone feels productive looking at log entries, hoping that one will eventually explain the unexplainable. IT Ops may also look at system and network metrics, which tell them how server resources and network bandwidth are being consumed. Again, looking at lots of metrics feels good, but what is causing those server and network metrics to change in the first place? Answer: the application.

IT Ops monitor the infrastructure that applications run on, but they lack visibility of how applications actually work and utilize the infrastructure. To get this visibility, Ops must monitor the application run-time. A quick way to get started is to use the free tools that come with the application run-time. In the case of Java applications, both JConsole and VisualVM ship with the standard SDK and have proved popular choices for monitoring Java applications. When we built AppDynamics Lite we felt their was a void of free application monitoring solutions for IT Ops, the market had plenty of tools aimed at developers but many were just too verbose and intrusive for IT Ops to use in production. If we take a look at how JConsole, VisualVM and AppDynamics Lite compare, we’ll see just how different free application monitoring solutions can be.