Buzzword Bingo

It seems like in every article, tweet, and blog post I read, someone has a different definition of the same buzzwords – especially in technology. Mentioning cloud or big data on a tech blog is like bringing sand to the beach. That’s one of the reasons we made The Real DevOps of Silicon Valley – to make fun of the hype. I got to thinking… has anyone taken the time to shed some light on these ambiguous terms? I investigated on Urban Dictionary, and this is what I found…


IT According to Urban Dictionary
(Not kidding, look it up…)



cloud com·put·ing, noun.

“Utilizing the resonance of water molecules in clouds when disturbed by wireless signals to transmit data around the globe from cloud to cloud. I use cloud computing so I don’t have to worry about viruses; I only have to worry about birds flying through my cloud.”



ag·ile, adj.

“Agile is a generalized term for a group of anti-social behaviors used by office workers to avoid doing any work while simultaneously giving the appearance of being insanely busy. Agile methods include visual distraction, subterfuge, camouflage, psycho-babble, buzzwords, deception, disinformation, and ritual humiliation. It has nothing to do with the art and practice of software engineering.”


big da·ta, noun.

“Modern day version of Big Brother. Online searches, store purchases, Facebook posts, Tweets or Foursquare check-ins, cell phone usage, etc. is creating a flood of data that, when organized and categorized and analyzed, reveals trends and habits about ourselves and society at large.”


dev·ops, adj.

“When developers and operations get together to drink beer and color on whiteboards to avoid drama in the War Room.  Also a buzzword for recruiters to use to promote overpaid dev or ops jobs.”

Watch the episode here.


soft·ware, noun.


“The parts of a computer that can’t be kicked, but ironically deserve it most.”


i·t, noun.

“The word the Knights of Ni cannot hear or say.”
(Monty Python & the Holy Grail reference)


Top 5 Gotchas for Monitoring Applications in the Cloud

If you haven’t already noticed, many IT organizations are migrating some of their applications to the cloud to become more agile, alleviate operational complexity, and spend less time managing infrastructure and servers. The next question you may ask yourself is, “How will we monitor these applications, and where should we even begin with so many monitoring tools on the market?”

I’m glad you asked. Here is a list of gotchas you should look out for. If you have your own list, feel free to comment below and share with us.

1. Lack of End User or Business Context – With apps running in the cloud, monitoring infrastructure metrics tells you very little about your end-user experience or the performance of the apps and business transactions running in the cloud. End users experience business transactions, so make sure your monitoring gives you that visibility.

2. Node Churn – How well does your application monitoring solution deal with node churn – the provisioning and de-provisioning of servers and application nodes? The monitoring solution has to work in dynamic, virtual, and elastic environments where change is constant; otherwise you’ll end up with blind spots in your application monitoring. Many monitoring solutions today are unable to adapt to dynamic cloud infrastructure changes, requiring manual intervention by operations before new nodes can be registered and monitored.
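The core of churn-aware monitoring is a reconciliation loop: diff the set of nodes the monitoring system knows about against the set currently running, register the new ones, and retire the missing ones. A minimal sketch of that diff, where the node names and the surrounding discovery/registry machinery are hypothetical illustrations rather than any real product’s API:

```python
# Keep a monitoring registry in sync with a dynamic node list, so freshly
# provisioned nodes get an agent and terminated nodes stop generating
# false "host down" alerts.

def reconcile(registered, discovered):
    """Diff the registry against live nodes.

    Returns (to_register, to_retire): nodes that need an agent attached,
    and nodes that should be retired from the registry.
    """
    registered, discovered = set(registered), set(discovered)
    return discovered - registered, registered - discovered

# Example: app-01 was de-provisioned, app-04 was just spun up.
to_register, to_retire = reconcile(
    registered=["app-01", "app-02", "app-03"],
    discovered=["app-02", "app-03", "app-04"],
)
print(to_register, to_retire)  # {'app-04'} {'app-01'}
```

Run on a schedule (or triggered by cloud provisioning events), this removes the manual registration step that creates blind spots.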

3. Agent-less is Tough in the Cloud – You may not have any major issues installing a packet sniffer or network-monitoring appliance in your own private cloud or data center, but you won’t be able to place these kinds of devices in PaaS or IaaS environments to monitor your application performance. Monitoring agents, by comparison, can easily be embedded or piggy-backed as part of an application deployment in the cloud. Agent-less may not be an option when trying to monitor many cloud applications.

4. High Network Bandwidth Costs – Cloud providers typically charge per gigabyte of inbound and outbound traffic. If your cloud application has 100 nodes and you’re collecting megabytes of performance data every minute, all of that data has to travel outside the cloud to your monitoring solution’s management server, which may be on-premise or in another cloud. Monitoring what’s relevant in your application, rather than monitoring everything, means you’ll avoid exorbitant bandwidth charges just for transferring monitoring data.
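To put a number on that, here is a back-of-the-envelope sketch. The per-gigabyte rate and the per-node data volume are illustrative assumptions, not any provider’s actual pricing:

```python
# Rough estimate of monitoring-data egress for a cloud-hosted app.
# 100 nodes emitting 1 MB of performance data per minute adds up fast.

MINUTES_PER_MONTH = 60 * 24 * 30  # ~one month of continuous collection

def monthly_egress_gb(nodes, mb_per_node_per_minute):
    """Gigabytes of monitoring data leaving the cloud per month."""
    return nodes * mb_per_node_per_minute * MINUTES_PER_MONTH / 1024.0

def monthly_egress_cost(nodes, mb_per_node_per_minute, usd_per_gb=0.12):
    # usd_per_gb is a placeholder rate; check your provider's price sheet.
    return monthly_egress_gb(nodes, mb_per_node_per_minute) * usd_per_gb

print(round(monthly_egress_gb(100, 1.0), 2))  # 4218.75 GB per month
print(round(monthly_egress_cost(100, 1.0), 2))  # cost at the placeholder rate
```

Cutting collection from “everything” to only the relevant transactions scales this bill down linearly with the data volume.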

5. Inflexible Licensing – If you want to monitor specific nodes, will your application monitoring vendor lock each license to a physical server, hostname, or IP, or can your licenses float to monitor any server or node? This can be a severe limitation, as your agents become locked to a specific node indefinitely. Even if you weren’t monitoring applications in the cloud, it’s still a nuisance to have a monitoring agent handcuffed to a physical server without the licensing flexibility to move agents around to monitor different servers or nodes. As stated above, with node churn occurring frequently in cloud environments, you need a monitoring solution that is as flexible as possible so you can deploy agents anywhere, at any time.

The good news is, monitoring application performance in the cloud is hardly a new concept for AppDynamics as we address all of these requirements with flying colors. If you’re interested in a robust application monitoring solution in the cloud, you can take a free 30-day trial of AppDynamics Pro.


Making The Business More Agile

We’re pretty lucky these days to work and play with lots of cool stuff. In a consumer world of HD TVs, MacBooks, iPhones, Droids, Angry Birds, Facebook, and tweets, life is rarely boring. Working in IT is the same. We’ve got clouds, NoSQL, agile, SOA, RIAs, Pythons, Scalas, Rubies, and lots of ideas and technologies to play with every week. If only our friends and relatives outside of IT could figure out what the hell we’re all so excited about – and the simple fact that most of us aren’t millionaires.

Self Tuning Applications in the Cloud: It’s about time!

In my previous blog I wrote about the hard work needed to successfully migrate applications to the cloud. But why go through all that work to get to the cloud? To take advantage of its dynamic nature – the ability (and agility) to quickly scale applications. Your application’s load probably changes all day, all week, and all year, and in the cloud your application can use more or fewer resources as that load changes. Just ask the cloud for as much computing capacity as you need at any given time; unlike in a data center, the resources are available at the push of a button.

But that only works during the marketing video. Back in the real world, no one can find that magic button to push. Instead, scaling in the cloud involves pushing many buttons, running many scripts, configuring various software, and then fixing whatever didn’t quite work. Oh, and even that is the easy part compared to actually knowing when to scale, how much to scale, and even which parts of your application to scale. And this repeats all day, every day, at least until everyone gets discouraged.
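The “when and how much” decision itself can be stated very compactly. Here is a minimal sketch of a throughput-based sizing rule; the per-node capacity figure and the min/max bounds are illustrative assumptions, and a real self-tuning policy would also need cooldowns and smoothing to avoid flapping:

```python
# Size the fleet from measured throughput: enough nodes to serve the
# current request rate, clamped to sane bounds. Integer ceiling division
# keeps the arithmetic exact.

def desired_nodes(total_rps, rps_per_node, lo=2, hi=50):
    """Nodes needed to serve total_rps requests/sec, clamped to [lo, hi]."""
    wanted = -(-total_rps // rps_per_node)  # ceiling division
    return max(lo, min(hi, wanted))

print(desired_nodes(900, 60))   # peak traffic: 15 nodes
print(desired_nodes(100, 60))   # quiet hours: floor of 2 keeps redundancy
```

The lower bound matters: scaling to a single node during quiet hours saves pennies but removes all redundancy.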

Cloudfail: Lessons Learned from AWS Outage

The Amazon AWS outage has raised questions about whether AWS (and the cloud in general) is ready to host revenue-critical production applications. The outage lasted more than a day for popular sites like Reddit and Zuora, and it stirred up plenty of doubts about cloud computing.

But before we write off the cloud, let’s review a few lessons we can learn from this outage.

Some survived, many did not
The number one lesson is that not EVERY application running in AWS died. Netflix, one of the biggest web apps running on AWS, survived the outage without any issues, while sites like Reddit and Zuora were down for more than a day. So why did some survive while many did not? Simply because many of these companies forgot that the cloud is not a magical solution to everything: you still have to implement the architectural techniques that have been perfected over years in the physical data center world as you move into the cloud.
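One of those data-center techniques is plain redundancy with failover: run replicas in more than one availability zone and route around any zone that fails its health check. A minimal sketch of the routing side, where the zone endpoints and the health checker are hypothetical:

```python
# Pick the first healthy endpoint from replicas spread across zones,
# rather than hard-wiring the app to a single zone.

def pick_endpoint(endpoints, is_healthy):
    """Return the first endpoint that passes its health check."""
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    raise RuntimeError("all replicas unhealthy -- page a human")

zones = [
    "app.us-east-1a.example.com",
    "app.us-east-1b.example.com",
    "app.us-west-2a.example.com",
]

# Simulate a zone-wide outage: everything in us-east-1a fails its check.
healthy = lambda ep: "us-east-1a" not in ep
print(pick_endpoint(zones, healthy))  # traffic falls over to us-east-1b
```

Applications that had replicas in multiple zones (and tested the failover path) rode out the outage; those pinned to one zone did not.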

How should you manage performance in the cloud?

I’m looking forward to my Cloud Connect panel, “Instrumenting Applications When Access Goes Away,” on Monday, March 7th in Santa Clara. I’ve seen a lot of companies migrate their mission-critical applications to the cloud. And what changes when companies start managing cloud-based apps? To quote our customer Adrian Cockcroft at Netflix: “Everything. Data center oriented tools don’t work in a cloud environment.”

QCon: Enough with the theory, already—Let’s see the code

San Francisco’s QCon was expecting a smaller crowd, but ended up bursting at the seams: the event sold out weeks ahead of time and in many sessions it was standing room only.

Targeted at architects and operations folks as much as developers, QCon was heavy on the hot topics of the day: SOA, agile, and DevOps. But if there was a consistent trend throughout the three days, it was “No more theory. Show us the practice.”

Jesper Boeg’s talk, for example—“Raising the Bar: Using Kanban and Lean to Super Optimize Your Agile Implementation”—was peppered with good sound bites (“If it hurts, do it more often and bring the pain forward”). But it also delivered the meat: Boeg demonstrated a “deployment pipeline,” an automated implementation of the build, deploy, test, and release process—a way to find and eliminate bottlenecks in agile delivery.

Similarly, John Allspaw started high in his talk—sharing his ideas on the areas of ownership and specialization between Ops and Dev, a typical DevOps presentation—but backed up the theory with code-level discussions of how logging, metrics, and monitoring work at Etsy. (His blog entry on the subject and complete QCon slides can be found on his blog, Kitchen Soap.)

Adrian Cockcroft, who is leading a trailblazing public cloud deployment of production applications at Netflix, also wrapped theory around juicy substance. He “showed the code” and screenshots of his company’s app scaling and monitoring tools (you can find his complete slide presentation here).

Not everyone took the time to drill down, though. Tweets from QCon attendees showed that the natives got restless in talks that stayed too high level:

“OK, just because you can draw a block diagram out of something doesn’t mean it makes sense.”

“Ok, we get it. Your company is very interesting, now get to the nerd stuff.”

“These sessions are high-level narratives. Show me the code, guys! Devil’s in the details.”

At the same time, they would shower plaudits and congratulations on speakers who gave them what they wanted: something new to learn.

When the Twitter stream started to compare QCon’s activities with Cloud Expo, an event happening concurrently in the city, the nature of the attendees was drawn into sharp relief:

“At #cloudexpo people used laptops during sessions to check email… At #qconsf they are writing code.”

When it comes to agile, SOA, DevOps, and other problems of the day, people are ready for answers.

If Your App Had a Facebook Status It’d Be, “It’s Complicated”

AppDynamics is founded on a set of deeply held beliefs regarding our industry and how it’s changed over the last several years.  But it’s never good to let deeply held beliefs stay unchallenged.  So every now and then, we do a reality check.

Our most recent reality check came during our webinar presentation of AppDynamics 3.0. We attracted hundreds of IT ops and dev professionals who wanted to learn about both our solution and the specific features of our new release—so we took the opportunity to poll them with a few questions. First, we asked if they operated in a SOA environment:

AppDynamics believes that applications are increasingly moving to SOA, turning monolithic web architectures from the early 2000s into obsolete antiques.  As you can see, that belief was confirmed; the vast majority of our webinar attendees have already entered that world.

Then we asked if they followed an agile development approach:

Again, the vast majority of attendees have embraced agile—in fact, nearly 50% release new features or capabilities at least monthly! Only 8% report that they follow the traditional waterfall approach to development. With those kinds of relentless deadlines, AppDynamics remains impressed that these hardy souls were able to take enough time out of their schedules to watch our presentation.

Finally, we wanted to know the punch line: what’s the effect of all this change on their ability to manage application performance?

Over half were really feeling the crunch, and only a scant few had escaped unscathed.

It’s not that AppDynamics enjoys the pain of others. (We don’t. Really.)  But having our fundamental beliefs confirmed—that the world of applications has changed, and application management solutions need to change with them—simply lets us know that we exist for the right reasons.

Take the example of one of our most recent customers, TiVo.  Operating in a highly distributed environment, TiVo has hundreds of individual Java and proprietary applications, designed to work together to deliver service to its customers.

“We used to spend hours troubleshooting issues,” Richard Rothschild, Senior Director of IT, told us. “If a service was running slowly and we didn’t know the cause, finding that root cause was like looking for a needle in a haystack.”

He continued, “We used to spend up to 6 hours on root cause. AppDynamics brought that time down to ten minutes.  We’ve already seen a big improvement in the reliability and uptime of our services—anything that simplifies our job in this complex environment makes us feel much more confident about taking on new business projects.”

It’s complicated out there, and with the advent of cloud and virtual environments, it’s not getting any easier. But we went into this business in order to simplify application performance management and support application teams in their quest for both performance and availability.  So far, it looks like we chose the right reasons to exist.

Application Virtualization Survey Reveals Hesitation

Taking a break from the rush of activity at VMWorld, today we released the results of our first Application Virtualization Outlook survey.  With virtualization a top priority for many CIOs in the next year, we wanted to hear from IT professionals about their plans to virtualize mission-critical apps. Most importantly, we wanted to learn how quickly they’re making the transition, or whether there are obstacles getting in their way.

We found the survey results pretty interesting – overall, it seems like many application owners recognize the potential benefits of virtualization, but lack the confidence that their Tier-1 apps will continue to perform as needed when moved to a virtual environment. Let’s take a closer look at a few of these results and how we came to this conclusion:

More than 80 percent of professionals polled had virtualized their non-critical systems, but on the flip side, only 14 percent had virtualized three quarters or more of their Tier 1 apps (see charts below). That’s a big divide in the pace of adoption.

The number one obstacle to virtualizing apps? “People Issues,” i.e., application owners who impede the virtualization process.  Respondents also cited worries about performance degradation and application design.

And one of our favorite questions: almost one-third of respondents reported that there are people in their organization who would say, “My Tier 1 application will be virtualized over my dead body!!” This shows that application virtualization can elicit an emotional response from concerned application owners.

But, let’s not get discouraged. Just about everyone surveyed agreed that there are some great benefits to be had with virtualization. In particular, respondents noted that server consolidation, power and cost savings were likely payoffs, along with disaster recovery and agility improvements.

So how can virtualization teams assuage the concerns of their colleagues and reap these benefits? By providing hard evidence that the owners’ application will perform well in a virtual environment.

Application owners are responsible for meeting SLAs and maintaining 100% uptime – and they’re not going to hand over their applications without proof that those business objectives won’t be compromised.

The best way to get these facts is to run tests that establish baseline performance in a non-virtual environment, and then compare application performance pre- and post-virtualization. This “apples-to-apples” strategy presents app owners with evidence that their application will continue to perform to their high standards in a virtual environment.
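A sketch of what that comparison can look like, assuming you have response-time samples (in milliseconds) from identical load tests run before and after virtualizing. The nearest-rank percentile method and the 10% tolerance are illustrative choices, and the sample data is made up:

```python
# Compare pre- and post-virtualization response-time percentiles and
# flag a regression if the virtualized run is more than `tolerance`
# slower at the chosen percentile.

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty sample list."""
    s = sorted(samples)
    idx = min(len(s) - 1, round(p / 100 * (len(s) - 1)))
    return s[idx]

def regressed(pre_ms, post_ms, p=95, tolerance=0.10):
    """True if the post run's p-th percentile exceeds baseline by > tolerance."""
    base, post = percentile(pre_ms, p), percentile(post_ms, p)
    return (post - base) / base > tolerance

pre  = [100, 110, 120, 130, 140]   # physical baseline
post = [105, 115, 125, 135, 300]   # virtualized run with a slow outlier
print(regressed(pre, post))  # True: p95 jumped from 140 ms to 300 ms
```

Comparing percentiles rather than averages matters here: a virtualized run can match the baseline on average while hiding exactly the tail-latency outliers that worry application owners.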

For more information about how we help companies manage application performance in virtual environments, check this out.