The Consumerization of IT and What It Means for Modern DevOps

As professionals in the IT space, we’re constantly introduced to new terms, concepts, technologies and practices. In many cases, we treat these terms as IT-specific tools for becoming more proficient and cutting-edge, or at least that’s what we strive for. With many companies trying to disrupt the verticals they target, it’s important for us to understand how these new facets of technology impact the bottom line.

Business is under pressure to deliver more groundbreaking ideas than ever. Understanding the impact of new technology will empower you to engage business leaders and win their support, both in principle and in budget. When you look at companies that have successfully disrupted a specific space, you’ll see one key element in the mix: the end-user experience. This used to mean how nice your app looks. Today, the user experience is more about speed and ease of access, while maintaining confidence that the app will do what it’s supposed to. If you fail to deliver this, your end user will simply move on. In fact, Bloomberg reports that approximately $6 trillion is moving from digital laggards to businesses that provide the best user experience through digital transformation.

The first point we all must realize is that in the past 10 years, the consumerization of IT has taken the industry by storm. What this really means is that consumers of your IT services are not much different from consumers of your public-facing applications. We know that public-facing applications are the front door to your business—this is where customers are won and lost by the adoption of technology. As an IT professional, your internal business leaders are your customers, and it’s up to you to deliver technology solutions that drive the business metrics so near and dear to your “customers’” hearts. So when you articulate changes to your principles, make sure you frame them in terms of how IT will impact the business.

Secondly, a DevOps shift for internal process and procedure is no easy feat, especially when you’re dealing with years of hardened policies and practices. But it’s crucial for building a modern DevOps function. This means you’ll need to coordinate a holistic effort that includes development and operations teams, as well as the line of business. Otherwise, the moves you need to make will become exponentially more difficult as business demands become more severe, particularly with the rising number of disruptors in your market.

Lastly, the proof is in the pudding. When you begin your journey to the DevOps shift, it’s critically important to keep all the key players engaged, thereby enabling them to see the value you’re bringing to the table. This is particularly important when you’re demonstrating how new technology implementations are impacting the business in a positive—or negative—way. In this scenario, what you’ll show is either, “Yes, our technology is on the right path,” or “No, the implementation is giving us a negative response from our customers, so we need to quickly course-correct to minimize the damage and regain a positive direction.”

When all is said and done, we must understand that the user experience is the new currency in the hyper-connected world we live in. But what is even more critical is that frequent change is required to stay ahead of the competition. This is where your business leaders come in. It’s in nobody’s best interest to stay stagnant, regardless of your industry. Disruption has hit retail, transportation, finance, healthcare and the list goes on. Making frequent changes to beat the disruptors requires you to build out a DevOps practice to ensure you have the ability and tools to respond to high business demands. Here’s how this impacts the business and helps push your DevOps plan forward:

  1. Business leaders are under tremendous pressure to drive continuous growth. A flat-line approach is a leading indicator that your company is falling behind competitors. Highlighting that you want to build out a practice that enables you to quickly develop, monitor, analyze and respond is exactly what your business wants to hear. But be prepared to knuckle down, as this is a never-ending loop to ensure you’re on the right path. When your leaders understand they’re now part of the process, they’ll become more tightly aligned with your strategy.

  2. Defining the critical metrics with your business leaders will allow you to understand how your technology provides the greatest impact. This can include any array of vital measurements that enable you to correlate your application performance to key business metrics. And not necessarily just monetary metrics either—they can be tied to conversions, promotion success, how frequently users are using (or not using) your application, and overall customer satisfaction. Having these metrics in place will ensure your business leaders’ involvement moving forward, and gain their confidence that your strategy is on target.

  3. Embrace analytics to gain the ability to understand business transactions and the user journey. The key part here is that you’re building out a DevOps strategy to be lean and nimble, but the end goal must be to understand how your end users are reacting to your applications. Leveraging an analytical platform like AppDynamics Business iQ is key to showing how your application ties directly to metrics defined by your business leaders. These leaders will gain immediate value from key data they’re not accustomed to seeing. This effort will also help you set priorities on which items you should develop first.

  4. As with Agile development and DevOps, this is an iterative process and a continuous cycle. Automating the process to remove the human element is key: Leveraging AI to help predict anomalies and stay ahead of the consumer will build the highest degree of confidence in your new DevOps implementation. However, this can’t be done in a silo. Once everyone is involved and engaged, showcasing your strategy to other parts of the business will be as celebratory as a ticker-tape parade by a championship-winning team. Take your success and show how you’re an innovative technology leader—not one who sits in the server room, but rather one who’s engaged with the business. One who proudly bears the title, “Disruptor.”

Smart monitoring and automation help business leaders see issues that concern them most, and should be your first priority when rolling out organizational changes. In addition to pinpointing issues within applications, these tools help predict future issues by identifying trends as they arise. Consumerization of IT has taken our world by storm. Business leaders have lost faith in IT, which needs to reinvent itself as a leader driving business, rather than a team of technicians responding to the crisis of the day. By implementing a valuable, new technological shift—one with all the right tools in place, a keen understanding of the business, and impactful solutions—you’ll be seen as a key partner and leader with innovations that disrupt the competition and make your business a success.

Learn more about how AppDynamics can help you succeed with your business transformation.

How to Monitor Code You Didn’t Write

Almost no one writes all their own code. Whether you are an engineer in IT operations or a developer, you are most likely working with code written by someone outside your organization. As a result, one question we hear repeatedly is: How do you monitor third-party code?

The answer is that it depends on whether you have access to the runtime, the runtime and the source code, or neither. The good news is that in every case, monitoring the code you didn’t write is easier than you might think.

Standalone Software

Commercial off-the-shelf software (COTS) provides a complete, self-contained solution running in the equivalent of its own box in your environment. While you generally have access to the runtime, it can be difficult to instrument because you don’t know what business transactions are most meaningful. With AppDynamics, the challenge of figuring out what to monitor is made easier with pre-built support for many popular COTS solutions such as SAP, Atlassian Confluence, or IBM Integration Bus.

For products where we do not provide out-of-the-box visibility, you can write an extension that will provide insight into what is going on inside the software. For example, by ingesting metrics such as the number of requests per minute or the associated disk usage provided by the COTS software, you can correlate unexpected changes to baseline behavior to the corresponding business transactions.
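To make that concrete, here’s a minimal sketch of the script-style extension approach, assuming the common convention that the agent parses metrics from a script’s standard output; the metric path and the COTS data directory are hypothetical placeholders:

```php
<?php
// A minimal sketch of a script-style machine agent extension. It assumes
// the convention that the agent parses metrics from the script's standard
// output as "name=<metric path>,value=<integer>". The metric path and the
// data directory below are hypothetical placeholders.
$dataDir = '/var/lib/cots-app/data';

$bytes = 0;
$files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($dataDir, FilesystemIterator::SKIP_DOTS)
);
foreach ($files as $file) {
    if ($file->isFile()) {
        $bytes += $file->getSize();
    }
}

// Report disk usage in whole megabytes so baselines stay readable.
printf(
    "name=Custom Metrics|COTS|Data Disk Usage (MB),value=%d\n",
    (int) ($bytes / 1048576)
);
```

Run on the agent’s schedule, a script like this turns an opaque COTS install into a metric you can baseline and alert on like any other.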

Remote Services

In contrast to COTS software, third-party remote services can only be accessed via the network and offer no access to the runtime whatsoever. By monitoring the endpoints—calls to and responses from a service—AppDynamics will give you a clear indication of whether a particular service is responsible for slowness in an application. We also have additional capabilities that help you monitor the health of these endpoints through Service Availability Monitoring or Synthetic Monitoring. If your third-party service offers visibility metrics, we can integrate those into your AppDynamics dashboard. We can also set up alerts that trigger when the average response time rises above a certain level, so you can easily ensure compliance with SLAs.

Libraries

Third-party libraries are another common scenario. What is the best way to troubleshoot issues with a threading library, a logging library, or a client library for a cache? If a library is integrated with your code, your business transactions will flow through it. If there are any performance issues with your application, the third-party code will, in most cases, show up in a snapshot and you will be able to see if it is to blame. In some more advanced cases, however, additional instrumentation may be necessary. Please see documentation on how to Configure Instrumentation for more information.

Conclusion

Third-party software may seem like a black box, but extracting performance insights from someone else’s code can be surprisingly simple. AppDynamics offers pre-built support for standalone software and provides visibility into the behavior of third-party services by observing the health of their endpoints. Many third-party libraries are automatically incorporated into AppDynamics’ snapshots. Our belief is that it shouldn’t matter who wrote the code; business and application performance should be transparent across your networked environment.

How APM Can Support Service Assurance

I’m a New Zealand-based Senior Sales Engineer for AppDynamics with more than 12 years of experience in the IT sector. I’ve had a lot of interesting work-related experiences over the years—living in Thai hospitals, traveling by armed convoy to remote gold mining sites, and even getting down and dirty with calculating the yield of offal and tallow within the primary industries. And while the organizations I’ve worked with spanned a variety of industries, each shared a common objective: the need for service assurance.

The idea of assurance came to me from an AppDynamics customer, whose company was trying to move all of its IT expenditures from CapEx to OpEx, which meant a major shift to managed services. The company had a few key partnerships it relied upon heavily, partners that provided Level 1 support, infrastructure and, in some cases, both application-level support and applications-as-a-service (SaaS).

Rather than buy an application performance monitoring solution directly from an APM provider like AppDynamics, the company wanted to consume APM as a service from another vendor such as a service provider. Why take this approach? The customer—as is often the case with many organizations—was looking for service assurance that an MSP could provide. A laudable goal, certainly, but one with considerable risk as well.

Retaining Visibility

When an organization outsources numerous managed services, it starts to lose visibility into its operation. It grows unsure of what’s going on behind the scenes. And while this organization may be very happy with the managed services it’s receiving, it has no visibility into what’s coming down the line. If, for instance, it’s 10 minutes away from a critical failure, it may not be able to see the event coming. In short, it lacks visibility into its own operations.

In this scenario, the company becomes uncomfortably reliant on its partners, who may not be willing to readily accept blame for a problem, largely because the admission of fault may reflect badly upon them and trigger contractual obligations. And if the company uses multiple MSPs, the situation will be worse, with each MSP pointing the finger away from themselves and saying, “It’s that guy’s fault.”

How APM Can Help

By providing advanced visibility into an application, APM enables the end customer to quickly find the fault, work with the appropriate MSP, and not waste support hours diagnosing a problem with the wrong vendors.

APM offers comfort—or surety—that the end customer can very quickly identify the area at fault, thereby reducing its mean time to identify (MTTI). And because the customer is working with a single APM provider, it can reduce its mean time to recovery (MTTR) as well.

MSPs benefit, too. Providing service providers with access and visibility into faults can help with reductions in MTTR. And when multiple vendors are involved, the single-pane-of-glass view provided by an APM dashboard can be utilised by all parties for diagnosis and repair.

Keeping Them ‘Honest’

As an end customer, what level of assurance do you have that your MSP is operating as expected? And how do you know if an application, which you invested heavily in, is behaving properly? If your MSP is only telling you that the app is “available,” that’s not good enough.

It’s not always easy to determine whether an application is performing well, though, and which metrics best gauge its overall business impact. In addition to technical metrics, you’ll also need to identify the business transactions (BTs) that are most valuable to you, and whether they’re delivering their expected value.

You should encourage your MSP to have a service-level agreement (SLA) that covers not only technical metrics and BTs, but also valuable business metrics such as sales conversion, digital service adoption, and so on.

APM delivers these insights by providing real-time visibility into the performance of your business. Instead of relying on daily or monthly business intelligence reports to see if you’re meeting your SLAs from a technical and business standpoint, you get this information in real time.

In addition to helping you get the best return from your MSP, these insights enable you to make informed, moment-by-moment decisions, such as driving users to a particular channel as part of a digital transformation by steering customers away from an overloaded call center and toward online support.

The MSP also benefits from APM, which becomes a value-added service it can offer, one that provides complete transparency and forges a true strategic partnership with its customers.

A True Strategic Partnership

Outsourcing has its advantages but can also lead to a loss of control, creating risk for organizations and individuals alike. Service assurance provides a level of comfort, giving an organization control and insight into the performance of its application, as well as confidence that its MSP is providing value.

From the perspective of the MSP, this also demonstrates a willingness and openness for a true strategic partnership with its customer. This strong and trusted partnership is critical to ensure success for the customer, the MSP and, most importantly, service for the end user.

CGI, a leading IT and business process services firm, had a major infrastructure contract that required end-to-end service delivery, but the nature of the environment made this difficult to achieve. To comply with its SLA, CGI needed an efficient way to measure business transactions end-to-end. CGI integrated AppDynamics into its existing platform and immediately began getting insights into system performance, enabling it to demonstrate SLA compliance, get complete end-to-end visibility of business transactions, and build a more robust process between its development, testing and production environments.

Schedule a demo to learn how AppDynamics can help assure your own service success!

A non-technical guy’s take on Business Transaction Monitoring

I began my journey in the Application Performance Management (APM) space a little over two years ago. Transitioning from a security background, the biggest thing I was concerned about was picking up the technology quickly. Somewhere between JVMs, CLRs, JMX, IIS, and APIs I was a little overwhelmed.

The thing that caught my eye most in the early stages of learning about the APM space (outside of AppDynamics’ growth & the growing IT Ops market) was the term Business Transaction Monitoring.

Gartner defines one of its key criteria as “User Defined Transaction Profiling – The tracing of user-grouped events, which comprise a transaction as they occur within the application as they interact with components discovered in the second dimension; this is generated in response to a user’s request to the application.”

I know what you’re thinking… “That is what caught your eye about APM?!” Actually, yes! Despite Gartner’s Merriam-Webster-esque response, I was able to understand what the term is attempting to communicate: end-to-end visibility is critical. Or, in other words, pockets of visibility (silos) are bad. Unfortunately, so many organizations have silos that keep them in the dark, unable to understand and manage their customers’ digital experience. That’s often because as companies grow, they become plagued by structure: individual toolsets purposed with proving innocence rather than achieving a resolution, and a lack of decentralization.

In talking with hundreds of customers over the past two years, it’s amazing how many mature application teams still struggle with end-to-end visibility. The market only makes the problem worse, since every solution provider out there claims to provide true end-to-end visibility. So, when you’re researching potential application monitoring & analytics solutions, make sure to ask the vendor what information they’re collecting & how they’re presenting that data to the end user. For example, if you look under the hood of a potential app monitoring solution and find out they take a “machine-centric” approach by displaying JMX metrics & CPU usage & render that data to you in a fat client UI, then you’re probably looking at the wrong solution. Compare that against a purpose-built APM solution that counts, measures, and scores every single user’s interaction with your mobile / web app and presents that data in the vehicle of a business transaction. It’ll be like finally getting your eyes checked and getting prescription glasses!

Alright, let’s get to the core message of this post. A solid APM strategy must be based on business transactions. Why? The end user’s experience (the BT) is the only constant unit of measurement in today’s applications. Architectures, languages, cloud platforms, and frameworks all come and go, but the user experience stays the same. With such dynamic change occurring in today’s app landscape, it’s also important that your APM solution can dynamically instrument those environments. If you have to tell your monitoring solution what to monitor every time there’s a new code release, then you’re wasting time. The reason so many companies have chosen AppDynamics is that AppDynamics has architected every feature of its platform around the concept of a BT. AppDynamics provides Unified Monitoring through business transactions. One can pivot & view data across all tiers of an app including code, end user browser experience, machine data, and even database queries in just a few clicks.

I wish I had a helpful analogy for a Business Transaction, but I’ll have to settle for encouraging you to view a demo of our solution. Or, download our free trial, install the app agent, generate load, and watch how we automatically categorize similar requests into BTs. If you have any questions, then feel free to reach out to me or anyone else in our sales org.

The title of this blog was a non-technical guy’s take on BTs, so I’ll simplify my ranting with the following conclusion: Your business is likely being disrupted and now defined by software. If so, having a tool like AppDynamics is key to being able to manage your app-first approach. Your APM solution (hopefully you’re already using AppDynamics) must have a proven approach to handling complex apps by focusing on unit groupings, called “Business Transactions” or BTs, that provide end-to-end visibility. A proper approach to monitoring with BTs is critical because it perfectly marries the 10,000-foot view of the end user’s digital interaction with the business (what the business wants) and the 1-foot view of class/method visibility (what your app teams need). A successful BT monitoring strategy will enable your business to effectively monitor, manage, and scale your critical apps, and provide rich context for making intelligent, data-driven decisions.

2015 APM Predictions from AppDynamics

This article originally appeared on APMDigest.com

In addition to our predictions on APMdigest’s list of 15 Predictions for 2015, AppDynamics offers some additional predictions:

1. In 2014, we continued to see the rapid ascent of a new generation of APM companies that are leveraging big data, cloud, SOA, analytics and Agile technologies to out-innovate the solutions provided by legacy APM providers. We will see more of this in 2015, with many older companies going private due to their inability to compete with younger, more innovative companies, and due to the way these legacy providers are structured internally.

2. Mobile APM as a standalone application will no longer exist in 2015. Businesses will realize how heavily mobile apps rely on backend infrastructure, which will necessitate end-to-end visibility across all of their applications from one comprehensive APM solution.

3. The gap between business analytics and IT analytics is quickly narrowing. In 2015, software analytics and business analytics will be viewed as one and the same and as a critical piece of business intelligence from stakeholders on both sides of the equation.

4. APM is not a new space, but is changing rapidly with the ever-changing design and delivery of software. Old APM solutions cannot keep up with upgraded applications and the amount of distributed infrastructure required by new applications. In 2015, companies that do not apply a modern APM solution by year’s end will be severely hindered and fall behind their competitors.

5. Shop Direct’s CEO commented this year that 50% of the company’s consumers viewed its site through a mobile device, but that in 2015, 100% of its customers will test content through its mobile app. 2015 will be do or die in terms of mobile. Mobile channels are exploding and businesses need to get the mobile experience right this year — not just one time, but on an ongoing basis with the right kind of APM tools.

If you can’t see it, you can’t manage it – ITOA use case #1

“There were 5 exabytes of information created between the dawn of civilization through 2003, but that much information is now created every 2 days, and the pace is increasing…” – Eric Schmidt, former CEO, Google.

If IT leaders hadn’t already heard Schmidt’s famous quotation, today they are definitely facing the challenge he describes. Gone are the days when IT leaders were tasked with just keeping an organization running; now IT teams are charged with driving innovation. As businesses become defined by the software that runs them, IT leaders must not only collect and try to make sense of the increasing amount of information these systems generate, but leverage this data as a competitive advantage in the marketplace. This type of competitive advantage may come in many forms, but generally speaking, the more IT leaders know about their environments and the ways end users interact with them, the better off they (and the business) will be.

Gleaning this type of insight from IT environments is what analysts refer to as IT Operations Analytics (ITOA). ITOA solutions collect the structured and unstructured data generated by IT environments, process that data, and display the information in an actionable way so operations teams can make better-informed decisions in real time. In this series, I’d like to discuss five common ITOA use cases we see across our customer base, starting with visualizing your environment. In the rest of the series I’ll examine each of the other use cases and describe how a solution like the Application Intelligence Platform can address each one and in turn provide value for operations teams.

The five common ITOA use cases I’ll delve into are:

  • Visualize the environment
  • Rapid troubleshooting
  • Prioritize issues and opportunities
  • Analyze business impact
  • Create action plans

Visualizing the environment

The first use case refers to the ability of an ITOA system to model the infrastructure and/or application stack being monitored. These models vary in nature but are often topological representations of the environment. Being able to visualize the application environment and see its dependencies is an important foundation for the rest of the use cases on this list.

In the Summer ‘14 release announcement blog, we highlighted the enhancements we’ve made to our flow maps, the visual representation of the application environment, including application servers, databases, web services, and more.

What’s great about the AppDynamics approach is that this flow map is discovered automatically out of the box, unlike legacy monitoring solutions that require significant manual configuration to get the same kind of view. We also automatically adjust this flow map on the fly when your application changes (re-architected app, code release, etc.). Because we know all the common entry and exit points of each node, we simply tag and trace the paths the different user requests take to paint a picture of the flow of requests and all the interactions between different components inside the application. Most customers see something like the flow map below within minutes of installing AppDynamics in their environment.
[Screenshot: an automatically discovered application flow map]
Now a flow map like this is obviously very valuable, but what happens when the application environment is very large and complex? How does this kind of view scale for the kinds of enterprise applications many AppDynamics customers have deployed? Environments with thousands of nodes and potentially hundreds of tiers? Luckily for our customers, the Application Intelligence Platform was built from the ground up to handle these kinds of environments with ease. Two characteristics of our flow maps enable operations teams to manage the flow maps of large-scale application performance management deployments: self-aggregation and self-organizing layouts.

Self-aggregation refers to our powerful algorithms that make these complex environments more manageable by condensing and expanding the visualization to enable intelligent zooming in and zooming out of the topology of the application. This allows us to automatically deliver the right level of application health indicators to match the zoom level.

For example, this is what a complex application could look like when zoomed all the way out:
[Screenshot: a complex application flow map, fully zoomed out]
As one zooms in, relevant metrics information becomes visible:
[Screenshot: the flow map at an intermediate zoom level, with metrics visible]
Until you are zoomed all the way in on a particular tier and can see all of the associated metrics you’d care about:
[Screenshot: the flow map zoomed in on a single tier, with its associated metrics]
The ability to iterate back and forth between a macro-level view of the application and a close-up of a particular part of the environment gives operations teams the visibility they need to understand exactly how an application functions and how the different components interact with each other.

Self-organizing layouts relates to our ability to automatically format the service and tier dependencies, using auto-grouping heuristics to dynamically determine tier and node weightings. By leveraging static data (like application tier profiles) and dynamic KPIs (like transaction response times), we organize the business-critical tiers in a way that brings the most important parts of the application to the forefront, depending on the type of layout you prefer.

You can automatically group the flow map into a circular view:
[Screenshot: the flow map auto-grouped into a circular layout]
You can let AppDynamics suggest a layout:
[Screenshot: a layout suggested by AppDynamics]
You can create a custom layout just by dragging and dropping individual components:
[Screenshot: a custom drag-and-drop layout]
And you can auto-fit your layout to the screen for efficient zooming in / out:
[Screenshot: the layout auto-fit to the screen]
You’ve seen how AppDynamics can visualize individual applications, but what if, like many of our large enterprise customers, you have many different complex applications with dependencies on one or more other applications? How do you get a data-center view to understand, at a high level, what application health looks like across all applications?

With the cross-app business flow feature, customers can do just that. AppDynamics even supports role-based access control (RBAC) so administrators can limit user access to a particular application. We allow customers to group, define, and limit access to applications in whatever way makes the most sense for their individual environments and for their business.

[Screenshot: the cross-app business flow view]

As you can see, AppDynamics provides a great way for IT Operations teams to discover and visualize their application environment. We automatically map the application out of the box, we provide flexible layout options so customers can customize the view to their liking, and offer a way for Ops teams to understand how different applications interact with each other.

In the next post in this series, we’ll discuss how the Application Intelligence platform can address the second common ITOA use case, rapid troubleshooting. In the meantime, I encourage you to sign up for free and try AppDynamics for yourself.

5 Secrets to Better PHP Performance

Wait! Do you really need to profile that PHP code? Are you sure you want to start down that time-consuming, tedious path? If you’re looking to squeeze some more performance out of your PHP web application, there are a few relatively quick and easy checks to perform that can give your performance a boost before you dive into refactoring the code. And even if you’re intent on profiling your PHP code, you should still look at these areas to make sure you’re getting maximum performance.

Cache In On OPCache

One of PHP’s strengths is that your source is compiled on the fly into executable instructions called opcodes, so you can develop rapidly and test without pausing to compile your code with every change you make. However, it’s inefficient and slow to recompile identical code each time that code runs on your website.

For many years, opcode cache has been a go-to solution for this particular slowdown. These caches are PHP extensions that create hooks into the compilation system and save the output of compiled code into memory. Then in future runs, PHP checks to make sure that the source file has not changed — via timestamps and file size checking — and if it hasn’t, it runs the cached copy of the code.

The most famous of these caches was APC, or Alternative PHP Cache. APC not only provided an opcode cache, but also permitted making user data persistent in shared memory.

Given the importance of having an opcode cache configured to get optimal performance out of a PHP application, the PHP core team decided to include one by default with all versions of PHP since version 5.5. They chose OPCache, formerly Zend Optimizer+. Once part of the commercial Zend Server offering, Zend Optimizer+ has now been open-sourced back to the community.

(To learn more about the importance of OpCache to PHP application performance, see this excellent article by my colleague Rob Bolton.)

[Screenshot: OPCache statistics]
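If you want to confirm OPCache is enabled and actually earning its keep, PHP can report on the cache itself. Here’s a minimal sketch using opcache_get_status(), assuming PHP 5.5+ with the bundled extension:

```php
<?php
// A minimal sketch: confirm OPCache is loaded and report its hit rate.
// Requires PHP 5.5+ with the bundled OPCache extension enabled
// (opcache.enable=1 in php.ini).
if (!function_exists('opcache_get_status')) {
    exit("OPCache is not loaded; check zend_extension in php.ini.\n");
}

$status = opcache_get_status(false); // false = skip per-script details
if ($status === false) {
    exit("OPCache is loaded but not active for this SAPI.\n");
}

$stats = $status['opcache_statistics'];
printf("Cached scripts: %d\n", $stats['num_cached_scripts']);
printf("Hits / misses:  %d / %d\n", $stats['hits'], $stats['misses']);
printf("Hit rate:       %.2f%%\n", $stats['opcache_hit_rate']);
```

A hit rate well below 100% on a steady-state site is a sign the cache is undersized or being invalidated more often than it should be.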

Look Outside Your Application

Maybe it’s not your PHP at all that’s slowing your application down, especially if you’ve implemented an opcode cache. It’s more than likely that at least several of your application bottlenecks happen when accessing external resources. Let’s look at a couple of suspects.

Database Delays

It’s not atypical for the database layer to account for 90 percent of measured execution time for a PHP application. So it makes sense to spend the necessary time to review your codebase for all instances of database access.

First and most obvious, turn on the slow SQL logs, then find and fix the slow SQL queries. Then proceed to question the queries themselves. Are they efficient? Do you make the same queries multiple times in one execution of your code? (Even with a query cache, that’s still inefficient.) Are you making too many queries? Do you have queries hitting a table without an appropriate index?

Investing a little time to fix your queries can noticeably reduce your database access time and noticeably increase your application performance.
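To make the “same query multiple times” point concrete, here’s a minimal sketch of a per-request memoization wrapper around PDO; the helper name, connection details, and table are hypothetical:

```php
<?php
// A minimal sketch (hypothetical helper): avoid running the identical query
// several times in one request by memoizing results in-process.
function cached_query(PDO $db, $sql, array $params = [])
{
    static $cache = [];
    $key = $sql . '|' . serialize($params);

    if (!isset($cache[$key])) {
        $stmt = $db->prepare($sql);
        $stmt->execute($params);
        $cache[$key] = $stmt->fetchAll(PDO::FETCH_ASSOC);
    }
    return $cache[$key];
}

// Both calls below hit the database only once per request.
$db = new PDO('mysql:host=localhost;dbname=shop', 'user', 'secret');
$books      = cached_query($db, 'SELECT * FROM products WHERE category = ?', ['books']);
$booksAgain = cached_query($db, 'SELECT * FROM products WHERE category = ?', ['books']);
```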

Filesystem Snafus

I/O, I/O, who knows where the time goes? Some of it goes to all the in-and-out of your file system. So study your filesystem for the same kinds of inefficiencies you looked for in your database queries. Some likely time-consuming culprits: reading in local files, processing XML, image processing, or using the filesystem for session storage.

Specifically, look for code that would cause a file stat to happen — a read of a file’s metadata, such as the date it was last modified. Functions such as file_exists(), filesize(), or filemtime() cause file stats to happen, and are easy to leave accidentally in a loop. Never do something twice that only needs to be done once. That’s the worst kind of wasted time.
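Here’s a minimal sketch of what that looks like in practice, along with a leaner alternative; the uploads directory is a hypothetical stand-in:

```php
<?php
// A minimal sketch: metadata lookups left inside a hot loop. file_exists(),
// filesize(), and filemtime() all perform stat-style lookups (PHP does keep
// a per-path stat cache, but the pattern is still wasteful at scale).
$paths = glob('/var/www/uploads/*.jpg') ?: [];

// Wasteful: separate existence and size checks per file.
$total = 0;
foreach ($paths as $path) {
    if (file_exists($path)) {
        $total += filesize($path);
    }
}

// Leaner: one stat() call per file returns all the metadata at once.
$total = 0;
foreach ($paths as $path) {
    $info = @stat($path);
    if ($info !== false) {
        $total += $info['size'];
    }
}
```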

Keep Your Eye On APIs

What other external resources do you rely on? It’s a rare application that doesn’t leverage APIs. Unfortunately, in many or most cases, you don’t have control over the remote APIs you’re using, so you can’t do anything directly about their performance. You can, however, mitigate the effect of API performance in your code through techniques such as caching API output or making API calls in the background.

Your main goal is to protect your user from a failing or misbehaving API. Make sure you have reasonable timeouts in place for any API requests and, to the best of your application’s ability, be ready to display your application’s output without the API’s response.
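As a minimal sketch, here’s one way to wrap an API call with strict timeouts and a graceful fallback; the endpoint URL and function name are hypothetical:

```php
<?php
// A minimal sketch: call a remote API with strict timeouts and degrade
// gracefully if it fails or misbehaves. The endpoint is hypothetical.
function fetch_recommendations($userId)
{
    $ch = curl_init('https://api.example.com/recommendations/' . urlencode($userId));
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => 2, // seconds to establish the connection
        CURLOPT_TIMEOUT        => 3, // total seconds for the whole request
    ]);

    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    if ($body === false || $code !== 200) {
        return []; // render the page without recommendations
    }
    $decoded = json_decode($body, true);
    return is_array($decoded) ? $decoded : [];
}
```

The empty-array fallback is the key design choice: the page still renders, just without the API-backed feature, instead of hanging on a slow remote call.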

Now Profile Your PHP

If you’re lucky, just enabling an opcode cache and optimizing external resource usage is enough to get the performance gains you need at the moment. But eventually, as your application needs increase, you’ll need or want to go deeper to get better performance to maintain or boost user experience and conserve hosting costs.

A number of open source tools can help you profile your PHP code and discover where the most time is being spent. One of the most common is Xdebug. Xdebug does require compiling and running a special extension on your server, and it was originally intended, as its name implies, to be a debugging tool; the profiling aspects were added later.
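As a sketch of a profiling setup, assuming the Xdebug 2.x directives that were current at the time of writing (later Xdebug versions renamed them), your php.ini might include:

```ini
; A minimal php.ini sketch for Xdebug 2.x profiling. Trigger mode keeps the
; overhead off ordinary requests; add XDEBUG_PROFILE to a request to profile it.
zend_extension=xdebug.so
xdebug.profiler_enable=0
xdebug.profiler_enable_trigger=1
xdebug.profiler_output_dir=/tmp/profiles
```

The resulting cachegrind output files can then be explored in a viewer such as KCachegrind or QCachegrind.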

It’s not easy to keep application performance in step with ever-increasing user expectations. But looking into and optimizing the functionality described above can help make sure your PHP application is performing at its best.

To gain better visibility and ensure optimal PHP application performance, try AppDynamics for FREE today!

Before PHP Performance, Looking at Your Software Process

In an effort to optimize your application performance, you benchmark and profile your code, build a solid testing environment, collect key metrics, the whole nine yards. Yet a growing realization eventually dawns on you: your team isn’t pushing out new features as fast as it used to without placing the performance of your app in jeopardy. You’ve hit a tipping point: you’re neck-deep in a backlog pile-up and can’t evolve your software as fast as you once did. You miss those good ol’ days when new features were pushed out almost every iteration, bugs were few and far between, the company couldn’t keep up with how fast your team was developing, and life was good.

The answer may be simpler than you think, as the solution to reaching maximum development velocity probably lies in your development process itself. Your team’s velocity should evolve to include a combination of new features, bug resolution and application optimizations. Striking the right balance among all these variables will help you reach your maximum effectiveness.

There are a number of things that you need to look at and evaluate, such as:

Project Management Methodology

Software lifecycle management patterns have been observed and summarized into best practices, but, as with software design patterns, there is no perfect solution that applies to every situation. The implementation of a methodology can (and probably should) be tailored to best fit the personality of your team.

An agile shop may implement an Agile process, keep track of velocity, estimate story points, and have the art of estimating down to a science. But where do the lines of effectiveness and efficiency cross? You may be extremely efficient at implementing your process, but so caught up with following the “rules” that you’re losing effectiveness.

That’s right, effectiveness and efficiency are not the same thing.

Of course certain rules and checkpoints are necessary: having code review, ensuring proper builds are run before deployment, etc. But ask yourself whether each checkpoint is necessary or whether it’s hindering progress. You should also explore whether you’re lacking certain processes that might bring some control to the chaos. At some point you’ll need to determine for yourself whether you have too many checkpoints or not enough.

Building Your Team

You need to ensure that your development team is right for your project. That means having both the right skills and the right size.

Understaffing a project leaves you struggling to get everything done in time, with everyone pulling long hours and getting burned out. Too many people can be just as bad. Not only will people feel left out, as if they can’t contribute, but they may find themselves poking at pieces of the code in their spare time that really didn’t need touching. The team will lose focus, morale will drop, and the entire team dynamic will suffer.

It’s also just as important to make sure you have the right mix of skills to go along with this. Do you need PHP experts? Dedicated JavaScript developers? HTML/CSS wizards? DevOps people to integrate tightly with the systems? Or do a few general-purpose jack-of-all-trades types fit your project best?

There really is no right answer here; there is only a right answer per project. That answer will constantly change as the project itself evolves as well, and it takes a good manager to be on top of this.

Speaking of building teams, check out AppDynamics openings here!

Budget

Going hand-in-hand with building out your team, you (unfortunately) always have to think of your budget as well. You need to make sure from the very beginning that you have enough budget allocated. There is nothing worse than getting halfway through a project and suddenly having to shelve it because there’s no money left.

Beware of feature creep! When scoping software you’ll need to stay disciplined and keep true to the original MVP (minimum viable product). Once the minimum feature set has been designed and approved, keep true to that roadmap. Of course every feature can always be improved! Of course you’ll think of new ways to do things! But in order to get a product out the door, you need to learn when to draw the line and call for a feature freeze. Otherwise, you’ll burn right through your budget, miss critical deadlines and turn your project into a speeding train without brakes. Remember, done is better than perfect.

Don’t wildly over-budget either. Having padding is a smart idea, but your company probably could have used that extra budget in better ways. Over-budgeting leads you down a path of over-staffing and into the same issues we mentioned above.

Using Version Control

Let’s refocus on the application itself. At the very top of the list is the need to use version control. To many of us it seems like such an obvious thing, yet it’s amazing how many people still haven’t seen the need for it.

Whether you are a team of one or a team of 1,000, you really need to be using some form of version control. Beyond the obvious need to “find that code that was blown away two days ago,” the ability of a version control system to let multiple developers work together on the same project, committing code while the software ensures there are no conflicts, is simply invaluable.

There are plenty of VCS options out there, with Git probably being the most popular at the moment. But there are a slew of others such as CVS, Mercurial, Bazaar, SourceSafe and more.

You need to find a version-control workflow that best matches how your team is (or will be) working together. For example, Git is designed around the idea of an extremely distributed workflow, with lots of developers working independently of each other, throwing away lots of local code they only kept temporarily, and committing back just the final pieces they are ready to share.

Tracking Issues/Bugs

It’s a fact of life: your software is going to have bugs. When they are found, you are going to need a good solution that lets you store the details of the bug, assign it to a developer to fix, and then track the bug as it’s fixed and verified.

At the same time, most bug tracking software is designed to let you track your new product features and enhancements as well. If you are the sole developer on a project, using software such as this can be crucial to remembering all the moving parts and where you are trying to go. If you are running a large team, then it’s extremely important to make sure that issues are captured, shared with the team, and clearly assigned. There is nothing worse than finding out that the bug you just spent the last two days on was already solved by another developer.

There are lots of issue/ticket tracking software solutions out there. These range from software packages such as Trac or JIRA, to hosted solutions such as those provided by GitHub or Unfuddle. There are simply too many options to list. The best option for your team is one that they will use. Check out the options that are available and find what matches your development process best.

Avoiding Code Debt

Hopefully you are familiar with the concept of code debt: code in your system that is subpar and that you need to fix before you can actually add new features (or that you have to work around in order to do anything). Code debt can be created by bad architecture designs (even if they were good decisions at the time), or simply by a programmer choosing to write sub-par code in the first place, saying, “I’ll come back and fix this later,” just to get a feature out the door more quickly. An easy way to estimate your code debt is to search your entire code base for the string “// todo”. You’d be surprised how many times programmers know their code will need to be refactored before they’re even finished writing it.
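As a minimal sketch of that estimate, here’s a little script that tallies TODO markers across a source tree; the path is a hypothetical stand-in for your own:

```php
<?php
// A minimal sketch: estimate code debt by counting "// todo" markers
// across a codebase. The path below is a hypothetical placeholder.
$todos = 0;
$files = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('/var/www/myapp/src', FilesystemIterator::SKIP_DOTS)
);
foreach ($files as $file) {
    if ($file->isFile() && $file->getExtension() === 'php') {
        $todos += substr_count(
            strtolower(file_get_contents($file->getPathname())),
            '// todo'
        );
    }
}
echo "TODO markers found: $todos\n";
```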

The more code debt you build up, the longer it takes to complete any task, be that adding a new feature or fixing an old bug. Not only that, but there’s a good chance that any existing code debt is also a potential source of performance issues in your software product.

One common solution for reducing code debt (though there is no magic bullet) is the use of Agile development processes. In Agile development you aren’t locked into a huge architecture decision at the beginning; you build the software in small pieces, one at a time, modifying the product as you go. In a proper Agile environment, you shouldn’t ever push off bad code to “get a feature out quickly,” nor realize too late that a massive architecture mistake was made. The whole time you are developing, it’s a constant iterative process in which the entire system is always getting just slightly better.

Deployment Process & Rollbacks

Of course you also need a solid system for deploying your code out to your server(s). This is a highly “personal” thing. Solutions range from a small script that checks out code, zips it, and uploads it, to full-blown and very complicated systems like Capistrano.

There are two important things here for you to make sure that your team has covered. First of all, you want deploying code to be as easy as possible. Optimally, you want to be deploying as often as possible, with deployments as small as possible, so that there is less code movement per release. A deployment shouldn’t involve having the entire team on call for an hour-long process. A single script should be able to run and make all the magic happen.

Secondly, the opposite of deployment is just as (if not more) important. You need to be able to roll back a deployment as well. We mentioned before that bugs are inevitable. Well, that applies not only to small bugs in existing code, but to show-stopping bugs upon deployment. Hopefully this won’t happen to you, but you need to be prepared when it does. Make rolling back a deployment just as simple as it was to push it live in the first place.

That one capability will save you countless hours and countless issues, and keep the hair on your head.

Wrapping it Up

Well, that’s a tour of some of the biggest process items for your team to think about. As we continue this series of posts we will cover some of these things in more detail, as well as really start getting into various performance issues that can crop up in your code itself.

Start solving your PHP performance problems: start a FREE trial of AppDynamics today!

Focusing on Business Transactions is a Proven Best Practice

In today’s software-defined business era, uptime and availability are key to business survival. The application is the business. Yet ensuring proper application performance remains a daunting task in production environments. Where do you start?

Enter Business Transactions.

By focusing on the end-user experience and measuring application performance based on users’ interactions, we can correctly gauge how the entire environment is performing. We follow each individual user request as it flows through the application architecture, comparing the response time to its optimal performance. This inside-out strategy allows AppDynamics to instantly identify performance bottlenecks and allows application owners to get to the root cause of issues that much faster.

By becoming business transaction-centric, application owners can ensure uptime and availability even within a challenging application environment. Business transactions give them the insight required to react to quickly changing conditions and respond accordingly.

So, what exactly is a Business Transaction?

Simply: any and every online user request.

Consider a business transaction to be a user-generated action within your system. The best practice for determining the performance of your application isn’t to measure CPU usage, but to track the flow of a transaction that your customer, the end user, has requested.

It can be requests such as:

  • logging in
  • adding an item to a cart
  • checking out
  • searching for an item
  • navigating different tabs

Shifting your focus to business transactions completely changes the game in terms of your ability to support application performance.

Business Transactions and APM

Business Transactions equip application owners with three important advantages.

Knowledge of User Experience

If a business transaction is a “user-generated action,” then it’s pretty clear how monitoring business transactions can have a tremendous effect on your ability to understand the experience of your end user.

If your end user adds a book to a shopping cart, is the transaction performing as expected or is it taking 3 seconds longer? (And what kind of impact will that have on end users? Will they decide to surf away and buy books somewhere else, depriving your business not just of the immediate purchase but of potential lifetime customer revenue?)

Monitoring business transactions gives you powerful insight into the experience of your end user.

Service Assurance – the ability to track baseline performance metrics

AppDynamics hears from our clients all the time that it’s difficult to know what “normal” actually is. This is particularly true in an ever-changing application environment. If you try to determine normal performance by correlating code-level metrics – while at the same time reacting to frequent code drops – you will never get there.

Business transactions offer a Service Assurance constant that you can use for ongoing monitoring. The size of your environment may change and the number of nodes may come and go, but by focusing on business transactions as your ongoing metric, you can begin to establish a performance baseline for your application. Understanding this baseline is exactly what you need in order to know whether your application is running as expected and desired, or whether it’s gone completely off the rails.

For example, you may have a sense of how your application is supposed to perform. But do you really know how it performs every Sunday at 6 p.m.? Or the last week of December? And if you don’t, how will you know when the application is deviating from acceptable performance? It’s by figuring out “normal” in terms of days, weeks, and even seasons that you come to truly understand your application’s baseline performance.

Triage & Diagnosis – always knowing where to look to solve problems

Finally, when problems occur, business transactions prevent you from hunting through logs and swimming through lines of code. The transaction’s poor performance immediately shines a spotlight on the problem – and your ability to get to root cause quickly is dramatically improved.

If you’re tracking code-level metrics in a large environment instead of monitoring business transactions, the chances are that the fire you’re troubleshooting is going to roar out of hand before you’re able to douse it.

Summary

Application owners are under extraordinary pressure to incorporate frequent code changes while still being held responsible for 100% application uptime and performance. In a distributed and rapidly changing environment, meeting these high expectations becomes tremendously challenging.

A strong focus on business transactions becomes absolutely essential for maintaining application performance. Transaction-centric monitoring provides the basis for a stable performance assurance metric, it delivers powerful insights into user experience, and it ensures you always know where to hunt during troubleshooting.

The right APM solution can automate much of this work. It can help application owners identify and bucket their business transactions, as well as assist with triage, troubleshooting, and root cause diagnosis when transactions violate their performance baselines. In this way, business transactions are essential to ensuring the success of Developers, Operations, and Architects – anyone with a stake in application performance.

The Incredible Extensible Machine Agent

Our users tell us all the time: The AppDynamics platform is amazing right out of the box. But everybody has something special they want to do, whether it’s adding some functionality, setting up a unique monitoring scenario, or whatever. That’s what makes AppDynamics’ emphasis on open architecture so important and useful. The functionality of the AppDynamics machine agent can be customized and extended to perform specific tasks to meet specific user needs, either through existing extensions from the AppDynamics Exchange or through user customizations.

It helps to understand what the machine agent is and how it works. The machine agent is a stand-alone Java application that can be run in conjunction with application agents or separately from them. This means monitoring can be extended to environments outside the realm of the application being monitored. It can be deployed to application servers, database servers, web servers — really anything running Linux, UNIX, Windows, or Mac.

[Screenshot: machine agent deployment architecture]

The real elegance of the machine agent is its tremendous extensibility. For non-Windows environments, there are three ways to extend the machine agent: through a script, with Java, or by sending metrics to the agent’s HTTP listener. If you have a .NET environment, you also have the capability of adding additional hardware metrics, over and above these three ways.
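As one concrete sketch of the HTTP listener approach, a metric push could look like the following; the port, path, and parameter names are assumptions based on the listener’s GET interface, so verify them against your agent version’s documentation:

```php
<?php
// A minimal sketch: send a custom metric to the machine agent's HTTP
// listener. Assumes the listener is enabled and reachable on localhost:8293
// with a GET interface taking name/value/type parameters -- verify the
// endpoint and port against your agent version's documentation.
$query = http_build_query([
    'name'  => 'Custom Metrics|MyApp|Queue Depth',
    'value' => 42,
    'type'  => 'average',
]);

$result = @file_get_contents('http://localhost:8293/machineagent/metrics?' . $query);
if ($result === false) {
    fwrite(STDERR, "Could not reach the machine agent listener.\n");
}
```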

Let’s look at a real-life example. Say I want to create an extension using cURL that would report the HTTP status of certain websites. My first step is to look for one in the AppDynamics Exchange, our library of all the extensions and integrations currently available. It’s also the place where you can request extensions you need or submit extensions you have built.

Sure enough, there’s one already available (community.appdynamics.com/t5/AppDynamics-eXchange/idbp/extensions) called Site Monitor, written by Kunal Gupta. I decided to use it, and followed these steps to create my HTTP status collection functionality:

1. Download the extension to the machine agent on a test machine.
2. Edit the Site Monitor configuration file (site-config.xml) to ping the sites that I wanted (in this case www.appdynamics.com). The sites can also be HTTPS sites if needed.
3. Restart the machine agent.

That’s it. It started pulling in the status code right away and, as a bonus, also the response time for requesting the status code of the URL that I wanted.

[Screenshot: the Site Monitor status-code metric in the metric browser]

It’s great that I can now see the status code (200 in this case), but now I can truly put that data to work. I can quickly build dashboards displaying the information.

[Screenshot: a custom dashboard built on the status-code metric]

There is also the ability to hook the status code into custom health rules, which provide alerts when performance becomes unacceptable.

[Screenshot: configuring a health rule on the status-code metric]
[Screenshot: the resulting health rule alert]

So there it is. In just a matter of minutes, the extension was up and running, giving me valuable data about the ongoing status of my application. If the extension I wanted didn’t exist, it would have been just as easy to use the cURL command directly (curl -sL -w "%{http_code}\n" www.appdynamics.com -o /dev/null).

Either way, the machine agent can be extended to support your specific needs and solve specific challenges. Check out the AppDynamics Exchange to see what kinds of extensions are already available, and experiment with the machine agent to see how easily you can expand its capabilities.

If you’d like to try AppDynamics, check out our free trial and start monitoring your apps today!