What kind of developer are you? [QUIZ]

Are you a code maverick? Maybe you’re a buzzword bandit quick to use “SoMoClo” and other trendy terms? Or maybe you’re just not sure which stereotypical bucket you fall under. That’s why we created this highly scientific personality quiz to assess your true developer personality.

This quick test asks about your food preferences, tools you use, and even your TV show habits. What can we say, it’s pretty accurate.

Are you a developer looking for a new and exciting challenge? You’re in luck, we’re hiring!

Buzzword Bingo by UrbanDictionary.com

It seems like in every article, tweet, and blog post I read, someone has a different definition of the same buzzwords – especially in technology. Mentioning cloud or big data on a tech blog is like bringing sand to the beach. That’s one of the reasons we made The Real DevOps of Silicon Valley – to make fun of the hype. I got to thinking: has anyone taken the time to shed some light on these ambiguous terms? I investigated on Urban Dictionary, and this is what I found…


IT According to UrbanDictionary.com
(Not kidding, look it up…)

 


1. CLOUD COMPUTING
cloud com·put·ing, noun.

“Utilizing the resonance of water molecules in clouds when disturbed by wireless signals to transmit data around the globe from cloud to cloud. I use cloud computing so I don’t have to worry about viruses, I only have to worry about birds flying through my cloud.”

 

2. AGILE

ag·ile, adj.

“Agile is a generalized term for a group of anti-social behaviors used by office workers to avoid doing any work while simultaneously giving the appearance of being insanely busy. Agile methods include visual distraction, subterfuge, camouflage, psycho-babble, buzzwords, deception, disinformation, and ritual humiliation. It has nothing to do with the art and practice of software engineering.”

 

3. BIG DATA
big da·ta, noun.

“Modern day version of Big Brother. Online searches, store purchases, Facebook posts, Tweets or Foursquare check-ins, cell phone usage, etc. is creating a flood of data that, when organized and categorized and analyzed, reveals trends and habits about ourselves and society at large.”

 

4. DEVOPS
dev·ops, adj.

“When developers and operations get together to drink beer and color on whiteboards to avoid drama in the War Room.  Also a buzzword for recruiters to use to promote overpaid dev or ops jobs.”

Watch episode HERE.

 

5. SOFTWARE
soft·ware, noun.


“The parts of a computer that can’t be kicked, but ironically deserve it most.”

 

6. IT
i·t, noun. 

“The word the Knights of Ni cannot hear or say.”
(Monty Python & the Holy Grail reference)

 

QCon: Enough with the theory, already—Let’s see the code

San Francisco’s QCon was expecting a smaller crowd, but ended up bursting at the seams: the event sold out weeks ahead of time and in many sessions it was standing room only.

Targeted at architects and operations folks as much as developers, QCon was heavy on the hot topics of the day: SOA, agile, and DevOps. But if there was a consistent trend throughout the three days, it was “No more theory. Show us the practice.”

Jesper Boeg’s talk, for example—“Raising the Bar: Using Kanban and Lean to Super Optimize Your Agile Implementation”—was peppered with good sound bites (“If it hurts, do it more often and bring the pain forward”). But it also delivered the meat: Boeg demonstrated a “deployment pipeline,” an automated implementation of the build, deploy, test, and release process and a way to find and eliminate bottlenecks in agile delivery.

Similarly, John Allspaw started high in his talk—sharing his ideas on the areas of ownership and specialization between Ops and Dev, a typical DevOps presentation—but backed up the theory with code-level discussions of how logging, metrics, and monitoring work at Etsy. (His entry on the subject and his complete QCon slides can be found on his blog, Kitchen Soap.)

Adrian Cockroft, who is leading a trailblazing public cloud deployment of production-level applications at Netflix, also wrapped theory around juicy substance. He “showed the code” and the screenshots of his company’s app scaling and monitoring tools (you can find his complete slide presentation here).

Not everyone took the time to drill down, though. Tweets from QCon attendees showed that the natives got restless in talks that stayed too high level:

“OK, just because you can draw a block diagram out of something doesn’t mean it makes sense.”

“Ok, we get it. Your company is very interesting, now get to the nerd stuff.”

“These sessions are high-level narratives. Show me the code, guys! Devil’s in the details.”

At the same time, they would shower plaudits and congratulations on speakers who gave them what they wanted: something new to learn.

When the Twitter stream started to compare QCon’s activities with an event happening concurrently in the city, Cloud Expo, the nature of the attendees was drawn into sharp relief:

“At #cloudexpo people used laptops during sessions to check email… At #qconsf they are writing code.”

When it comes to agile, SOA, DevOps, and other problems of the day, people are ready for answers.

What’s Under Your Hood? APM for the Non Java Guru: ORMs & Slow SQL

It’s time for another update to our series on Top Application Performance Challenges. Last time I looked at Java’s synchronization mechanism as a source for performance problems. This time around I take on what is likely the Performance Engineer’s bread and butter … slow database access!

Behind this small statement lies a tricky and multifaceted discussion. For now, I’m going to focus on just one particular aspect – the Object Relational Mapper (ORM).

The ORM has become a method of choice for bringing together the two foundational technologies that we base business applications on today – object-oriented applications (Java, .NET) and relational databases (Oracle, MySQL, PostgreSQL, etc.). For many developers, this technology can seem like a godsend, eliminating the need to drill down into the intricacies of how these two technologies interact. But at the same time, ORMs can place an additional burden on applications, significantly impacting performance while everything looks fine on the surface.

Here’s my two cents on ORMs and why developers should take a longer look under the hood:

In the majority of cases, the time and resources taken to retrieve data are orders of magnitude greater than what’s required to process it. It is no surprise that performance considerations should always include the means and ways of accessing and storing data.

I already mentioned the two major technology foundations on which we build business applications today: object-oriented applications, used to model and execute the business logic, and relational databases, used to manage and store the data. Object-oriented programs unite data and logic into object instances; relational databases, on the other hand, isolate data into columns and tables, joined by keys and indexes.

This leaves a pretty big gap to bridge, and it falls upon the application to do the legwork. Since bridging this gap is something many applications must do, enter the ORM as a convenient, reusable framework. Hibernate, for instance, is quite likely the most popular ORM out there.

While intuitive for an application developer to use (ORMs do hide the translation complexities), an ORM can also be a significant weight on an application’s performance.

Let me explain.

Take a Call Graph from AppDynamics and follow the execution path of a transaction, method by method, from the moment a user request hits the application until calls to the database are issued to retrieve the requested information. Then size up the layers of code this path has to go through to get to the data. If your application uses an ORM like Hibernate, I assure you you’ll be surprised how much is actually going on in there.

No finger-pointing intended. A developer will emphasize that just using the ORM component (without having to understand how it’s working) greatly increases his productivity. Point taken. It does pay, however, to review an ORM’s data access strategy.

I recently worked with a company and saw transactions (single user requests) that each bombarded the database with 3,000+ distinct calls. Seems a little excessive? You would be right. Moreover, nobody knew this was happening, and certainly nobody intended it.

In many cases, simple configuration settings or a different ‘fetch’ method offered by the ORM itself can affect performance significantly. Whether, for instance, the ORM accesses each row of the customer table individually to fill an array of customers in your code, or constructs a single query that encompasses the whole expected result set and retrieves it in one fell swoop, makes a big difference.
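The difference is easy to see in miniature. The sketch below is hypothetical (it is not any real ORM’s API): a `Map` stands in for the database, and a counter stands in for round trips, contrasting the row-by-row pattern with a single set-based fetch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: simulates the two fetch strategies described above
// by counting "round trips" against an in-memory stand-in for the database.
public class FetchStrategyDemo {

    // Simulated customer table: id -> name.
    static Map<Integer, String> seed(int n) {
        Map<Integer, String> table = new HashMap<>();
        for (int i = 0; i < n; i++) table.put(i, "customer-" + i);
        return table;
    }

    // Row-by-row style: one query per row, as a naively configured ORM might issue.
    static int runRowByRow(int n) {
        Map<Integer, String> table = seed(n);
        int queries = 0;
        List<String> result = new ArrayList<>();
        for (int id = 0; id < n; id++) {
            queries++;                    // each access is its own round trip
            result.add(table.get(id));
        }
        return queries;                   // n round trips for n rows
    }

    // Set-based style: one query covering the whole expected result set.
    static int runSetBased(int n) {
        Map<Integer, String> table = seed(n);
        int queries = 1;                  // a single round trip
        List<String> result = new ArrayList<>(table.values());
        return queries;
    }

    public static void main(String[] args) {
        System.out.println("row-by-row: " + runRowByRow(1000) + " queries");
        System.out.println("set-based:  " + runSetBased(1000) + " queries");
    }
}
```

One thousand rows fetched lazily means one thousand round trips; fetched as a set, it means one. That gap is exactly where the 3,000-call transactions come from.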

Seems obvious, right? But do you actually know what your ORM really does when retrieving the information?

You might be surprised.

AppDynamics Lite Turns 5000!

It’s a big day for us here at AppDynamics as we mark (and pass!) the 5000th download of AppDynamics Lite! Released just a few months ago, AppDynamics Lite is our free APM tool for troubleshooting Java performance in production.  We hear every day from companies who have deployed Lite in production environments and are using it to solve real problems.  That’s why we built it, and it’s very rewarding to hear about IT Operations professionals and developers getting real value.

We’re very pleased with the wave of enthusiasm that the tool has been able to generate in such a short time.   Here’s just one example: one of our sales reps was speaking to a company this morning, and he asked how they had heard of us.  The prospect said, “I downloaded and used your Lite tool, which my team thought was phenomenal. One of our business requirements was that our potential vendors could give us immediate access to their product and demonstrate their unique approach to application performance management.  You met this requirement–no one else we talked to did. And therefore, you immediately made our short list.”

This conversation represents another goal of ours: removing the friction that typically exists between software companies and evaluators. With Lite, companies can download it whenever they want and put it immediately into action.  No waiting, no negotiating…just instant access.  This strategy works for us because, as a startup, we believe that if someone becomes familiar with our solution, they’ll see that it’s considerably better than the legacy offerings on the market.

The success of the tool continues to demonstrate that there’s unquestionably a need from both dev and ops teams to have greater insight and understanding of performance as they deploy applications across cloud, virtual and physical environments. Companies big and small are taking advantage of AppDynamics Lite already; you might even recognize some of the names (Ikea, Nokia, MasterCard, Dell, FranklinCovey and Samsung among others).

The great thing about AppDynamics Lite (besides the fact that it takes two minutes to download and it’s free) is that, with thousands of users worldwide, we’ve been able to gather a lot of product feedback in just a few months. Lite has now been deployed on every conceivable combination of Java application servers and application stacks. This means our core agent technology has been battle-tested in many environments, and we’ve been able to incorporate all that feedback into our commercial product. This is quite different from the slow feedback loop that exists in the traditional enterprise software go-to-market approach. The benefit to our customers and partners is that our technology is mature beyond its years.

As we’ve always promised, we’ll continue to offer AppDynamics Lite for free. Our goal is to empower developers and IT operations teams to combat one-off performance issues as they occur, while giving them a first-hand look at the AppDynamics approach to application performance management.

Exceptional Performance Improvement

This post is part of our series on the Top Application Performance Challenges.

Over the past couple of weeks we helped a few DevOps groups troubleshoot performance problems in their production Java-based applications. While the applications themselves bore little resemblance to one another, they all shared one common problem. They were full of runtime exceptions.

When I brought the exceptions to the attention of the teams, I got a variety of answers.

  • “Oh yeah, we know all about that one, just some old debug info … I think”
  • “Can you just tell the APM tool to just ignore that exception because it happens way too much”
  • “I have no idea where that exception comes from, I think it’s the vendor/contractor/off shore team’s/someone else’s code”
  • “Sometimes that exception indicates an error, sometimes we use it for flow control”

The underlying attitude was always the same: because the exception did not affect the functionality of (read: did not break) the application, it was deemed unimportant. But what about performance?

In one high-volume production application (hundreds of calls per second) we observed an NPE (NullPointerException) rate of more than 1,000 exceptions per minute (17,800 per 15 minutes). This particular NPE occurred in the exact same line of code and propagated some 20 calls up the stack before a catch clause handled it. The pseudo code for the line read: if the string equaled null and the length of the trimmed string was zero, then do x; otherwise just exit. When we read the line out loud the problem was immediately obvious – you cannot trim a null string. While the team agreed that they needed to fix the code, they remained skeptical that simply changing ‘==’ to ‘!=’ would really improve performance.
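Reconstructed as code (names are hypothetical; only the pattern comes from the incident above), the line looks like this:

```java
// The blank-string check described above, buggy and fixed side by side.
public class BlankCheck {

    // Buggy: when s is null the first condition is true, so evaluation
    // proceeds to s.trim() and throws a NullPointerException.
    static boolean isBlankBuggy(String s) {
        return s == null && s.trim().length() == 0;
    }

    // Fixed: '!=' makes the && short-circuit on null, so trim() only
    // ever runs on a real string.
    static boolean isBlankFixed(String s) {
        return s != null && s.trim().length() == 0;
    }

    public static void main(String[] args) {
        System.out.println(isBlankFixed(null));    // false, no exception
        System.out.println(isBlankFixed("   "));   // true
        try {
            isBlankBuggy(null);
        } catch (NullPointerException e) {
            System.out.println("buggy version threw NPE");
        }
    }
}
```

A one-character fix, yet it eliminates a thousand exceptions a minute.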

Using basic simulations, we estimate this error rate imposes at minimum a 3–4% slowdown on response time. You heard me: a 3–4% slowdown, at a minimum.

And these response issues get compounded in multi-threaded application server environments. When multiple pooled threads slow down to cope with error handling, the application has fewer resources available to process new requests. At the JVM level, the errors cause more IO, more objects created in the heap, and more garbage collection. In a multi-threaded environment, the error threatens more than response times; it threatens load capacity.

And if the application is distributed across multiple JVMs … well, you get the picture.

The web contains many articles and discussions on the performance impact of Java’s exception handling. The examples in these posts provide sufficient data to show that a real performance penalty exists for throwing and catching exceptions. Doing the fix really does improve the performance!
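As a rough sketch of the kind of comparison those posts make (not any specific published benchmark; absolute numbers vary by JVM and load, so only the gap is meaningful), here is the same null check implemented once with a thrown-and-caught exception and once with a plain conditional:

```java
// Illustrative micro-benchmark: exception-as-flow-control vs. a conditional.
public class ExceptionCostDemo {

    // Every 10th iteration hits a null and pays for a full throw/catch.
    static long viaException(int iterations) {
        long misses = 0;
        for (int i = 0; i < iterations; i++) {
            try {
                String s = (i % 10 == 0) ? null : "x";
                s.trim();                       // throws NPE on the null passes
            } catch (NullPointerException e) {
                misses++;                       // exception used as flow control
            }
        }
        return misses;
    }

    // Same logic, same result, no throw/catch machinery.
    static long viaConditional(int iterations) {
        long misses = 0;
        for (int i = 0; i < iterations; i++) {
            String s = (i % 10 == 0) ? null : "x";
            if (s == null) misses++;
        }
        return misses;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long t0 = System.nanoTime();
        long a = viaException(n);
        long t1 = System.nanoTime();
        long b = viaConditional(n);
        long t2 = System.nanoTime();
        System.out.printf("exception path:   %d misses in %d ms%n", a, (t1 - t0) / 1_000_000);
        System.out.printf("conditional path: %d misses in %d ms%n", b, (t2 - t1) / 1_000_000);
    }
}
```

Both paths compute the identical answer; the only difference is whether the JVM has to construct an exception, fill in a stack trace, and unwind to a catch clause a hundred thousand times.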

Runtime exceptions happen. When they occur frequently, they appreciably slow down your application, and the slowness becomes contagious to every transaction the application serves. Don’t mute them. Don’t ignore them. Don’t dismiss them. Don’t convince yourself they are harmless. If you want a simple way to improve your application’s performance, start by fixing these regularly occurring exceptions. Who knows, you just might help everyone’s code run a little faster.

APM for the Non Java Guru – Threading & Synchronization Issues

This is the first post in a series on the Top Application Performance Challenges.

Of the many issues affecting the performance of Java/.NET applications, synchronization ranks near the top. Issues arising from synchronization are often hard to recognize, and their impact on performance can become significant. What’s more, they are often, at least in principle, avoidable.

The fundamental need to synchronize lies with Java’s support for concurrency, implemented by allowing separate threads within the same process to execute code. Separate threads can share the same resources: objects in memory. While this is a very efficient way to get more work done (while one thread waits for an IO operation to complete, another thread gets the CPU to run a computation), it also exposes the application to interference and consistency problems.

The JVM/CLR does not guarantee an execution order for code running in concurrent threads. If multiple threads reference the same object, there is no telling what state that object will be in at a given moment in time. The repercussions of that simple fact can be enormous – for example, one thread running calculations can return wrong results because a concurrent thread is accessing and modifying shared bits of information at the same time.

To prevent such a scenario (a program needs to execute correctly, after all), a programmer uses the “synchronized” keyword to force order on concurrent thread execution. Using “synchronized” prevents multiple threads from holding the same object’s lock at the same time.

In practice, however, this simple mechanism comes with substantial side effects. Modern business applications are typically highly multi-threaded. Many threads execute concurrently, and consequently “contend” heavily for shared objects. Contention occurs when a thread wants to access a synchronized object that is already held by another thread. All threads contending effectively “block,” halting their execution until they can acquire the object. Synchronization effectively forces concurrent processing back into sequential execution.
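A minimal sketch of the mechanism (illustrative only, not taken from any application discussed here): several threads increment one shared counter through a synchronized method. The lock makes the result exact, but every contending thread blocks while another holds it, which is precisely the sequential-execution effect described above.

```java
// Many threads, one lock: correctness bought at the price of contention.
public class SyncCounter {
    private long count = 0;

    synchronized void increment() { count++; }   // only one thread at a time
    synchronized long get() { return count; }

    static long run(int threads, int perThread) {
        SyncCounter c = new SyncCounter();
        Thread[] pool = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            pool[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) c.increment();
            });
            pool[t].start();
        }
        for (Thread t : pool) {
            try { t.join(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return c.get();
    }

    public static void main(String[] args) {
        // Exact because of the lock; without 'synchronized' the lost-update
        // race would make the total unpredictable.
        System.out.println(run(8, 100_000));
    }
}
```

Remove the `synchronized` keywords and the count comes back wrong under load; keep them and eight threads spend much of their time queued behind one monitor.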

With just a few metrics we can show the effects of synchronization on an application’s performance. For instance, take a look at the graph below.

As load increases (number of users = blue), we see that at some point midway the response time (yellow) curves upward, while resource usage (CPU = red) increases somewhat, eventually plateaus, and even recedes. It almost looks like the application runs with the “handbrake on” – a classic, albeit high-level, symptom of an application that has been “over-synchronized.”

With every new version of the JVM/CLR, improvements are made to mitigate this issue. While helpful, however, these improvements can’t fully resolve it or undo its negative effect on the application’s performance.

Developers have also come to adopt “defensive” coding practices, synchronizing large pieces of code to prevent possible problems. In large development organizations this problem is magnified further, as no one developer or team has full ownership of an application’s entire code base. Erring on the side of safety can quickly get out of hand, with large portions of synchronized code significantly limiting an application’s potential throughput.

It is often too arduous a task to maintain a locking strategy fine-grained enough to ensure that only the necessary minimum of execution paths is synchronized. New approaches to managing state in a concurrent environment are available in newer versions of Java, such as ReadWriteLock, but they are not widely adopted yet. These approaches promise a higher degree of concurrency, but it will always be up to the developer to implement and use the mechanism correctly.
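For illustration, here is what the finer-grained alternative looks like with `java.util.concurrent.locks.ReentrantReadWriteLock` (the class and scenario are a generic sketch, not drawn from any application above): readers proceed concurrently and only writers are serialized, instead of one monitor serializing everything.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A shared value guarded by a read/write lock rather than one monitor.
public class CachedValue {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int value = 0;

    int read() {
        lock.readLock().lock();          // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    void write(int v) {
        lock.writeLock().lock();         // exclusive: one writer, no readers
        try {
            value = v;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        CachedValue c = new CachedValue();
        c.write(42);
        System.out.println(c.read());    // 42
    }
}
```

For read-heavy workloads this removes most of the contention a single `synchronized` block would create, but, as noted, the burden of using it correctly (locking in a `finally`, never upgrading a read lock to a write lock) stays with the developer.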

Is synchronization, then, always going to result in a high MTTR?

New technologies exist on the horizon that may lend some relief.  Software Transactional Memory Systems (STM), for example, might become a powerful weapon for dealing with synchronization issues. They may not be ready for prime time yet, but given what we’ve seen with database systems, they might be the key to taming the concurrency challenges affecting applications today. Check out JVSTM, Multiverse and Clojure for examples of STMs.

For now, the best development organizations are the ones that can walk the fine line between code review/rewrite burdens and concessions to performance. APM tools can help quite a lot in such scenarios, allowing teams to monitor application execution under high load (aka “in production”) and quickly pinpoint the execution times of particular highly contended objects, database connections being a prime example. With the right APM in place, the ability to identify thread synchronization issues is greatly increased, and overall MTTR drops dramatically.

Who owns the application?

The debate over who owns an application in production continues to unfold. What I’ve found interesting is that, after a period in which writers and bloggers looked to IT Operations to be application owners, I’ve started to detect a bit of a backlash.

The arguments for IT Operations owning the application are simple:

  • Development should stay focused on creating new applications and adding features, not maintaining what’s already in production
  • If you ask a developer, they’ll always say they’d rather spend their time on innovation, rather than production support or bug fixes
  • IT Operations generally oversees the production infrastructure (servers, storage, networks etc), so they are the natural caretakers of production applications as well

Analysts often point out that this is easier said than done: what is sometimes called the “required cooperation” between operations and development is often difficult to obtain. I’ve also seen it suggested that putting production applications in the hands of operations is a Utopian dream, leading to performance issues and SLA violations.

If I were to describe my own view of “natural selection” with regard to managing application performance, it would be more along the lines of collaboration. The development team is likely to help support applications as long as IT Operations can provide the information and data they need to fix root-cause issues quickly. Because much development is now done in agile environments, this sort of teamwork is becoming less of a philosophical choice and more of a business necessity.

If you look through blogs and Twitter, you’ll find some interesting grassroots movements such as #devops and #agileoperations.  These are communities forming that acknowledge the need to break down the traditional walls that exist between Dev and Ops, and radically restructure those relationships so that they are focused on shared goals and outcomes.

One devops proponent, James Turnbull at Kartar.net, explains the problem:

“So … why should we merge or bring together the two realms?  Well there are lots of reasons but first and foremost because what we’re doing now is broken.  Really, really broken.  In many shops the relationship between development (or engineering) and operations is dysfunctional to the point of occasional toxicity.”

(I love the phrase “occasional toxicity”…)

He goes on to add:

“DevOps is all about trying to avoid that epic failure and working smarter and more efficiently at the same time. It is a framework of ideas and principles designed to foster cooperation, learning and coordination between development and operational groups. In a DevOps environment, developers and sysadmins build relationships, processes, and tools that allow them to better interact and ultimately better service the customer.

“DevOps is also more than just software deployment – it’s a whole new way of thinking about cooperation and coordination between the people who make the software and the people who run it.  Areas like automation, monitoring, capacity planning & performance, backup & recovery, security, networking and provisioning can all benefit from using a DevOps model to enhance the nature and quality of interactions between development and operations teams.”

I believe the question that comes naturally out of these conversations is this: does IT Operations have the tools they need to facilitate this collaboration with their peers in development? Traditionally, they haven’t. The tools either didn’t provide much deep visibility into the application, or, when they did, they were too complicated for IT Operations to understand and use. But creating those tools, and encouraging that collaboration, is one of my own company’s guiding principles.

Applications are becoming more complex and distributed, and development is increasingly taking place in the context of agile release cycles. So really, the question isn’t “who owns the app” but how best to foster the collaborative process that enables dev and ops both to build out applications and to resolve their performance problems – and to do so in record time.