Improve Your UX and You’re Bound to See eCommerce Success

Commerce has become both digital and global: online sales are expected to exceed $1.6 trillion by 2020. As customers’ preferred way of doing business, ecommerce offers increased selection, value, and convenience. Online shopping also gives merchants greater access to customer data and more opportunities to capitalize on that information.

If your business isn’t keeping pace with best practices in ecommerce UX, not to mention leveraging mobile to capture even more opportunities, you’ll miss out on a continuously growing share of consumer online spending. In 2016, for instance, shoppers made 51 percent of their purchases online (compared to 48 percent in 2015 and 47 percent in 2014). Whatever path of innovation you choose in the ecommerce space, your online presence should be optimized for user experience. This article explores the top features and UX best practices at the heart of a compelling ecommerce experience. While multichannel commerce complicates the buyer’s journey, it also offers new opportunities for conversions. Whether your shoppers flock to phones or desktops when they view your site, the key features emphasized here apply to all platforms. Let’s start with some of the important moments in a shopper’s journey, and then take a deep dive into optimal UX for each.

There are four critical areas where your ecommerce presence should demonstrate great UX: navigation, product pages, checkout, and optimization. Each of these areas has corresponding UX best practices, which we explore in-depth.

Navigation

Invest in an intuitive IA

Don’t organize your products based solely on how you think about them. Your users have their own intuitive ways of thinking about and grouping your products, and it’s possible to cater to that using research methodologies such as card sorting. Card sorting is a lab exercise that helps develop your site taxonomy by collecting patterns in the way customers sort your products. Research participants are given representative sets of items and asked to group and name them intuitively in their own categories. Across multiple participants, patterns emerge that help guide the creation of intuitive navigation categories and product groupings.

Have a content governance strategy

A search bar is crucial. And if you’re going to offer product search, you need a strong approach to content governance. This is fundamental for two reasons: your ecommerce platform must be fully accessible through search and filtering, and as your goods and services evolve, they must remain just as accessible as the products indexed before them. Strong content governance equates to quality metadata on a per-product basis. A sound strategy answers the following questions:

  • What search terms should be associated with each product?
  • What filtering facets do I make available to customers during the browsing process?
  • In the absence of specific user input, how should search results and other content be ranked, sorted, and organized by default?
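To make the metadata point concrete, here is a minimal sketch of per-product metadata and facet filtering; the product names, tags, and facet fields are all hypothetical.

```python
# Hypothetical per-product metadata: search terms ("tags") plus structured
# facets used for filtering and default sorting.
products = [
    {"name": "Trail Runner 2", "tags": ["shoes", "running", "trail"],
     "facets": {"color": "blue", "size": 10, "price": 89.99}},
    {"name": "City Sneaker", "tags": ["shoes", "casual"],
     "facets": {"color": "white", "size": 9, "price": 59.99}},
]

def filter_by_facets(items, **criteria):
    """Keep only products whose facets match every requested criterion."""
    return [p for p in items
            if all(p["facets"].get(k) == v for k, v in criteria.items())]

blue_tens = filter_by_facets(products, color="blue", size=10)
```

With quality metadata in place, the same records power search, filtering, and default sort order without per-feature rework.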

Avoid dead-end search results

Nothing is more disheartening to shoppers than entering a search term and landing on a “0 results” page. If your site can perform a fuzzy search (thanks to good tagging and content governance), surface those near-match products in place of a dead end. Also consider implementing autocomplete within your site search. This exposes shoppers to new combinations of search terms, which may yield more results than they would think of on their own.
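As a sketch of that fallback behavior, Python’s standard difflib can stand in for a real search engine’s fuzzy matching; the catalog here is hypothetical.

```python
import difflib

catalog = ["running shoes", "trail shoes", "rain jacket", "wool socks"]

def fuzzy_results(query, items, cutoff=0.6):
    # Instead of a "0 results" page, return the closest-matching products
    return difflib.get_close_matches(query.lower(), items, n=5, cutoff=cutoff)

fuzzy_results("runing shoes", catalog)  # misspelling still finds "running shoes"
```

A production search engine would use proper tokenization and synonym tagging, but the principle is the same: a near miss should never dead-end.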

Use breadcrumbs

This easily overlooked navigational element helps shoppers locate themselves on your site and, in time, get a feel for how your products are organized. As a best practice, breadcrumbs should appear on category-level and detail pages, shortly after the header and navigation content. Breadcrumbs usually appear as an unobtrusive line of text that maps to the customer’s location and depth in the site.
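At its simplest, that line of text is just the category path rendered in order; a trivial sketch (the category names are illustrative):

```python
def breadcrumb(category_path):
    """Render the shopper's location as an unobtrusive line of text."""
    return " > ".join(["Home"] + category_path)

breadcrumb(["Men", "Shoes", "Trail Running"])
```

Each segment would normally link back to its category page, so the breadcrumb doubles as one-click navigation up the hierarchy.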

Product Pages

Offer high-quality, informative imagery

Customers are evaluating the smallest of details when comparing your product to competitors. Make the decision clear for them with images sized to highlight details such as stitching, seams, colors, and functionality. Many shoppers enjoy 360-degree views and even videos of products in action. Sweeten the deal by including carousels and large hero banners on category and homepages, which rotate through quality images of featured products.

Help customers browse efficiently with quick views

When shoppers view your goods—whether on a category page or a search results page—they’ve likely invested a fair amount of effort navigating to that specific page. This is where quick view comes in handy. A properly implemented quick view lets shoppers evaluate the most salient points of a product without being led away from the list they’ve assembled. A good quick-view feature provides a shadow-box preview: a temporary content container above the page body that showcases a summary of key details and larger product views. Well-implemented quick views have a prominent entry point, for instance hover states over item thumbnails displaying “Preview.” Without quick views, users are often resentful of and hesitant about perusing large numbers of product detail pages individually, because they generally lack confidence in their ability to return to the original product list. Additionally, page-loading delays can add up to a significant speed bump in the shopping journey.

Leverage user-generated content

The smaller your brand name relative to your competitors’, the more skeptical shoppers will be of the quality of your products and services. Customers on most ecommerce sites take each other’s reviews to heart and write their own, so there is no excuse for omitting this capability. If you don’t permit user content, you’re not helping your shoppers become comfortable with your products, and they will leave your storefront and convert with a competitor that provides this functionality.

Clarify pricing and discounts

It is imperative that pricing for each product is not only displayed prominently and clearly, but also broken down in relation to discounts or applicable sales. In one survey of American shoppers, 71 percent said they had abandoned a retailer at some point because of pricing concerns; they further reported finding better deals online. In the eyes of many potential customers, your only differentiator may be a seasonal sale or the discounts available by combining certain goods. The best ecommerce storefronts clearly delineate applicable discounts for every item in grid, list, and detail views.

Provide a prominent path to assistance

Customers prefer immediate resolution to their questions and issues. Providing a live chat feature as well as clearly visible contact information throughout your site can go a long way in turning shoppers into buyers. Most shoppers are accustomed to finding help and support information in two areas: in the top right part of the navigation, adjacent to login functionality, and in the site footer.

Checkout

Always provide a guest checkout option

When customers embark on the checkout process, speed bumps such as mandatory account creation can be costly to your business. Guest checkouts empower customers to quickly place an order if they value the product more than membership. If you insist on guiding shoppers toward site membership, do so after an order has been placed with an invitation to create an account.

Map out the checkout

Customers appreciate seeing their progression across checkout phases. Provide graphical feedback of the movement between forms for shipping information, billing information, and order completion. Be sure to indicate which step the user is currently on. This helps customers gauge how quickly they can move through the process. In addition, providing this feedback bolsters confidence that changes can easily be made if a shopper decides to revisit one of the checkout stages.
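A minimal sketch of such a progress indicator, with illustrative step names:

```python
STEPS = ["Shipping", "Billing", "Review", "Confirmation"]

def checkout_progress(current):
    """Highlight the shopper's current step in the checkout flow."""
    return " > ".join(f"[{s}]" if s == current else s for s in STEPS)

checkout_progress("Billing")
```

In a real storefront this would be a graphical stepper rather than text, but the contract is the same: name every stage, and make the current one unmistakable.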

Prioritize security awareness

With information breaches increasingly common, shoppers are more security conscious. They look for specific iconography on your site, such as locks and checkmarks, to determine whether their transaction will be secure. Better still if the graphics correspond to recognized security brands, such as Verisign and McAfee. Make sure these cues appear during the checkout process, where the need for trust is greatest.

Optimization

Optimize for quick load times

Internet access speeds vary significantly based on your customer’s browsing platform and available bandwidth. Because there’s a good chance your shoppers are on a mobile device, it’s more important than ever to ensure your content loads before your customer calls it quits. Depending on the type of storefront you run, don’t forget to assess the extent to which API calls and scripts affect and compound overall load times.

Design with high scannability in mind

You want to do everything possible to facilitate the shopping process. To do this, you can’t ignore the scannability and prominence of important information on your site. Generous use of white space, minimal use of large text blocks, and easily distinguishable hyperlink styles are all examples of ways to improve your site’s scannability. When customers are able to process the information on your site more swiftly, it’s more likely they’ll complete a purchase and bring repeat business.

Conclusion

Providing a frictionless shopping experience should be the goal of your ecommerce presence. As improved UX takes center stage in your organization, your customers will have more memorable and positive experiences as your business grows. By prioritizing an intuitive navigation, rich and intelligible product details, and a no-nonsense checkout process, you can greatly increase the odds that your shoppers will become buyers.


How to Trim Page Weight to Boost Page Speed and Increase Conversions [Infographic]

One of the best ways to increase website conversions and build customer trust in your online brand is to boost your page speed. However, many businesses fail to make the connection between site performance and increased revenue. More than half of mobile web visitors will abandon sites that take more than three seconds to load, and one in five users who abandon your website will never return. Increasing page speed leads to more visitors and higher conversion rates, and boosts your bottom line. Website performance has many facets, but today we will focus on page weight (sometimes called page size).

What is page weight?

Page weight is the combined size of all elements required to load a web page such as HTML, CSS, JavaScript, images, and custom fonts. Reducing page weight can significantly improve your page speed, especially for mobile visitors.

  • 4G LTE wireless speeds can reach up to 12 Mbps.
  • According to the HTTP Archive, the average mobile page weighs 2.2 MB (a nearly 70 percent increase from a year ago) and makes 92 individual HTTP requests on average.

If we assume an average 4G LTE speed of 8.5 Mbps, every additional megabyte of page weight slows your site down by roughly another second. And while modern 4G networks offer plenty of bandwidth, the latency of mobile networks is the silent performance killer: even under favorable conditions, it can easily add 300 ms or more to your TCP round-trip time (RTT).

For example, the average mobile page makes 92 HTTP requests. If 20 percent of those requests must establish a new TCP connection at roughly 300 ms each, connection setup alone adds about 5.4 seconds to your site’s download time. For the average 2.2 MB mobile page, that brings the total to roughly 7.6 seconds, more than double the 3-second benchmark.
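The arithmetic above can be sketched as a back-of-the-envelope calculation; the 8.5 Mbps speed, 20 percent new-connection share, and 300 ms RTT are the assumptions stated above, and rounding lands the total near the figure quoted.

```python
AVG_SPEED_MBPS = 8.5   # assumed average 4G LTE throughput
PAGE_WEIGHT_MB = 2.2   # average mobile page weight (HTTP Archive)
REQUESTS = 92          # average HTTP requests per mobile page
NEW_CONN_SHARE = 0.20  # share of requests opening a new TCP connection
RTT_S = 0.3            # added latency per new connection

transfer_s = PAGE_WEIGHT_MB * 8 / AVG_SPEED_MBPS   # ~2.1 s of raw download
tcp_s = round(REQUESTS * NEW_CONN_SHARE) * RTT_S   # ~5.4 s of connection setup
total_s = transfer_s + tcp_s                       # ~7.5 s, well past 3 s
```

Note how connection setup, not raw transfer, dominates the total: that is why latency is called the silent performance killer.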

Measuring page speed and page weight

Before you can optimize page speed, you must first establish a performance baseline. Without one, you’re flying blind: any optimization efforts are just guesses, and you won’t know whether you’ve improved speed at all. The fastest way to start is with the tools already included in your browser, or with a free online speed test.

Reducing page weight

Now that we understand the need to reduce page weight, let’s look at some best practices to do that. Each technique deserves its own blog post, but for now we will divide them into three high-level optimization categories.

Reduce file size

  • Optimize images for web and mobile devices.
  • Remove unnecessary characters from custom fonts.
  • Enable gzip compression in your web server configuration.
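To illustrate why gzip matters, here is a quick sketch using Python’s standard gzip module on a repetitive HTML fragment; real savings depend on your content, but markup typically compresses dramatically.

```python
import gzip

# Repetitive markup, like a long product listing, compresses very well
html = b"<ul>" + b"".join(b"<li>Product %d</li>" % i for i in range(500)) + b"</ul>"
compressed = gzip.compress(html)

savings = 1 - len(compressed) / len(html)  # fraction of bytes saved on the wire
```

In practice the web server does this transparently once compression is enabled, and the browser decompresses on arrival.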

Reduce HTTP requests

  • Avoid redirects, especially for landing pages.
  • Concatenate CSS and JavaScript files.
  • Use image sprites, especially for icon sets.

Network optimizations

  • Use a content delivery network to reduce latency and RTT.
  • Use responsive and adaptive design techniques to further trim page weight for mobile visitors.
  • Leverage browser caching to avoid downloading resources more than once.
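As one sketch of the browser-caching point, a helper that picks a Cache-Control header per asset type might look like this; the file extensions and max-age values are illustrative assumptions.

```python
LONG_LIVED = (".css", ".js", ".png", ".woff2")  # fingerprinted static assets

def cache_control_for(path: str) -> str:
    """Choose a Cache-Control header so repeat visitors skip re-downloading."""
    if path.endswith(LONG_LIVED):
        # Safe to cache for a year when filenames change on every deploy
        return "public, max-age=31536000, immutable"
    return "no-cache"  # HTML should be revalidated on each visit

cache_control_for("/static/app.9f3c2a.js")
```

The long max-age is safe only when asset filenames are fingerprinted, so a new deploy produces new URLs rather than stale cache hits.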

Continuously monitoring your page speed

Improving page speed isn’t a one-time activity. With new content, new products, and new features, your site is always changing. Each change could add weight and ruin your previous optimization efforts. That’s why it’s critical to routinely monitor all aspects of your site’s performance. With a good End-User Monitoring solution, you can:

  • Analyze site performance trends over time
  • Alert when page load times deviate from established baselines
  • Provide in-depth visibility into real user sessions
  • Correlate performance regressions and improvements with business outcomes in real time

Next steps

The results are in: trimming page weight boosts page speed, strengthens your company’s reputation, and increases conversion rates, especially on mobile. Armed with tools to measure website performance, techniques to reduce page weight, and solutions to continuously monitor your end-user experience, it’s time to take action. Slim down your pages, turbocharge your site’s performance, and leave your competitors in the dust!

The Smoke and Mirrors of UX vs. Application Performance

Can a better UX simultaneously deliver a worse user experience? It sounds like a paradox, but it may be more common than you think. It describes a category of UX design practices that have little to do with improving the actual experience and everything to do with suggesting that the experience is a good one.

The difference can be subtle, but true user experience improvement begins with the precision application performance optimization that only an APM solution can provide. APM diagnoses the cause of a slowdown, allowing developers to address the root cause rather than the surface symptoms. Ignoring the underlying performance bottlenecks and tricking the user with a UI band-aid is akin to placing tape over a crack in your drywall: you’re only covering up the symptom of an underlying problem. That’s the difference between improving the quality of the software and generating economic waste for short-term gains.

Defining Perceived Performance

UX designers commonly deploy a set of techniques related to “perceived performance”: the principle that users’ perceptions of an app’s performance need to be managed. One real-world example involves a test of loading-screen animations for the mobile version of Facebook. The test demonstrated that users reported an improved perception of the app’s loading speed when developers changed the design of the loading animation.

As iOS developer Rusty Mitchell reported, “Facebook tests indicated that when users were presented with a custom loading animation in the Facebook iOS app, they blamed the app for the delay. But when users were shown the iOS system spinner, they were more likely to blame the system itself.”

Perceived performance techniques are part of a larger trend in design, extending beyond the world of software, known as “benevolent deception.”

A Study of Benevolent Deception

In a report by Microsoft and university researchers, author Eytan Adar said his team found increasing use of “benevolent deception” in a range of software applications. These deceptions, Adar said, arise from the stress of opposing market forces: while enterprise and consumer apps grow more complex, users increasingly prefer apps with simpler, more intuitive interfaces. The complexities must be simplified on the front end, and deception is the most direct route.

Adar wrote, “We’re seeing the underlying systems become more complex and automated to the point where very few people understand how they work, but at the same time we want these invisible public-facing interfaces. The result is a growing gulf between the mental model (how the person thinks the thing works) and the system model (how it actually works). A bigger gulf means bigger tensions, and more and more situations where deception is used to resolve these gaps.”

Beyond Benevolent Deception

One step beyond benevolent deception is “volitional theater,” which refers to functions and displays that don’t correspond to the underlying processes. Outside the world of software-defined businesses, you can see these techniques at work in the fact that many elevator “Close Doors” buttons don’t do anything at all. They just give impatient riders something to do until the doors close on their own.

Surprised? You’d better believe it.

The New York Times cited several other examples of volitional theater, such as the “Press to Walk” button on street corners. Remember all those times you pressed the crosswalk button to cross the street? You were most likely pushing a button designed to calm your nerves, with no actual impact on the timing of the traffic lights. Designers refer to such non-operational controls as “placebo buttons”: they shorten the perceived wait time by distracting people with the imitation of control.

Three Types of Volitional Theater

Adar’s research identified three major trends in the way that volitional theater is being used in modern UX application design:

System Images Deception: Designed to reframe what the system is doing.

This category of deceptive UX includes “sandboxing,” where developers create a secondary system that operates differently from the full application. The example Adar gave is the Windows Vista Speech Tutorial, which runs as a separate program from the main speech recognition system. In reality, the tutorial is a “safe” version of the environment, built to be less sensitive to mistakes in order to simplify the learning process.

Behavioral Deceptions: Designed to offer users the appearance of change.

This category goes back to the idea of placebo buttons. By giving users an “OK” or “Next” button, the system can smooth over performance delays. UX designers also use a host of optical illusions and cinematographic techniques, like blurring, to suggest motion when nothing is happening on screen. Examples include pattern emergence, reification (fill-in-the-gaps), image multi-stability, and object invariance. These often paper over issues related to slow-performing software.

Mental Model Deceptions: Designed to give users a specific mental model about how the system operates.

Explainer videos may be the worst offenders in this category, since they apply dramatic and distracting metaphors in an attempt to engage distracted prospects. Often, the more outlandish the model, the more memorable it is. These mental models help sales, but support teams then have to explain to frustrated customers how things really work. The category also covers popular skeuomorphs, like the sound of non-existent static on Skype calls.

Better Practices in UX

Despite these trends, there are many developers who remain strongly opposed to benevolent deception and volitional theater. Some of their insights and arguments are presented on the Dark Patterns website.

These developers don’t feel it’s ethical to trick users or take away their ability to know what’s going on with the software. Instead of masking issues, they believe slowdowns and clunky design can and should be eliminated. To achieve that, developers need a comprehensive monitoring solution that pinpoints the causes of latency. Engineering teams can then make the code and infrastructure optimizations necessary for the software to actually perform better. These developers consider volitional theater sloppy design, and one way to start correcting those flaws is by evaluating the software and its corresponding infrastructure dependencies.

UX and Business Transactions

The first step in approaching application performance monitoring is identifying an entirely new unit of measuring the success of your applications. At AppDynamics, we’ve solved this by introducing the concept of a Business Transaction.

Imagine all the requests invoked across an entire distributed application ecosystem to fulfill a single user action. For example, when a customer wants to check out their shopping cart, they click the “Checkout” button. Upon that single click, a request travels from the browser through the internet to a web server, probably a front-end Node.js application, which then calls an internal website, a database, and a caching layer. Each component invoked along the way is responsible for helping fulfill that click, so we call the lifeline of that request a Business Transaction.
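As a toy sketch of that lifeline, imagine timing each component the “Checkout” click touches; the component names here are hypothetical, and a real APM agent instruments this automatically.

```python
import time
from contextlib import contextmanager

spans = []  # (component, seconds) recorded for one business transaction

@contextmanager
def span(component):
    """Time one component's share of the 'Checkout' business transaction."""
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((component, time.perf_counter() - start))

with span("web-server"):          # the click enters the front end
    with span("database"):        # ...which calls a database
        time.sleep(0.01)          # simulate a slow query
    with span("cache"):           # ...and a caching layer
        pass
```

Because the spans nest, the slow database query is visible inside the web server’s total, which is exactly the drill-down an APM tool provides across process boundaries.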

The Business Transaction perspective prioritizes the goals of the end user. What are your customers really trying to achieve? In the past, developers argued over whether application monitoring or network monitoring were more important. Here’s how an AppDynamics engineer reframed the problem:

“For me, users experience ‘Business Transactions’ — they don’t experience applications, infrastructure, or networks. When a user complains, they normally say something like, ‘I can’t log in,’ or, ‘My checkout timed out.’ I can honestly say I’ve never heard them say, ‘The CPU utilization on your machine is too high,’ or, ‘I don’t think you have enough memory allocated.’ Now think about that from a monitoring perspective. Do most organizations today monitor business transactions, or do they monitor application infrastructure and networks? The truth is the latter, normally with several toolsets. So the question ‘Monitor the application or the network?’ is really the wrong question. Unless you monitor business transactions, you’re never going to understand what your end users actually experience.”

Starting from business transactions helps DevOps teams view the system as a function of business processes rather than individual requests firing off everywhere. This is how you start solving the problems that matter most to customers. It’s always better to diagnose and solve performance issues than to cover them up or distract the user with UX techniques.

Isolating Causes

This approach is much closer to approximating the true UX of an average user. Starting from the business transaction, you can use APM solutions to drill down from end user clients to application code-level details. That’s how you isolate the root cause of performance problems that matter most to specific users.

Of course, isolating the problem doesn’t matter unless you resolve it. We’ve discussed this with application teams who have isolated problems related to runtime exceptions for Java-based applications in production, but they tended to gloss over those that didn’t break the application.

That’s a mistake we addressed in a series about Top Application Performance Challenges. Bhaskar Sunkar, AppDynamics co-founder and CTO, concluded: “Runtime Exceptions happen. When they occur frequently, they do appreciably slow down your application. The slowness becomes contagious to all transactions being served by the application. Don’t mute them. Don’t ignore them. Don’t dismiss them. Don’t convince yourself they are harmless. If you want a simple way to improve your application’s performance, start by fixing up these regularly occurring exceptions. Who knows, you just might help everyone’s code run a little faster.”

This approach is becoming even more critical as smaller devices attempt to crunch higher volumes of data. A good example is how applications built for the Apple Watch are expected to provide the same level of performance as those built for a tablet or smartphone. Users don’t lower their expectations to compensate for processing power. In the end, users care about the benefits of the application, not the limitations of the device.

The Age of Experience

Gartner reported that 89 percent of companies expected to compete based on customer experience by 2017. However, customers want more from their software than just great UX. Beautiful design and clever tricks can’t distract these time-sensitive users from being very aware of application performance issues.

As data volumes accelerate and devices shrink, it will grow harder to maintain optimal performance and continuous improvement schedules. Applications teams need speed and precision tools to pinpoint areas for improvement. At the same time, more businesses are undergoing their own digital transformations and discovering the importance of performance management for the first time.

Developments in user hand movement recognition and motion control mapping have been accelerating along multiple fronts, such as VR best practices by Leap Motion and Google’s Project Soli, which uses micro-radar to precisely translate user intent by the most minute finger gestures. These advancements likely represent what’s coming next in terms of UX, but they will demand IT infrastructures with access to a great deal more data-processing power.

Drilling Down to Maximum Impact

Excellence in UX for the next generation of applications has to start by troubleshooting business transaction performance from the user’s point of view. From there, you’ll be able to drill down to the code level and intelligently capture the root cause that impacts the user.

Applications teams should be responsible for end-to-end analysis of the total UX, beyond just the relative health of the application and infrastructure nodes. Get started with a free trial of AppDynamics to see what next-generation application performance monitoring can reveal about your application and your business.

Why User Experience Is Critical to Your Business Outcomes [Infographic]

There are customers who love engaging with your business, and those who don’t. Now more than ever, this dichotomy has significant competitive implications. Social media enables the user experience to go viral, which gives a megaphone to your business’s most dissatisfied customers. This—among less dramatic reasons—is why modern organizations have placed the discipline of User Experience, or UX, at the core of what they do.

UX encompasses all aspects of the end user’s interaction with a business’s products and services. The UX mindset is rooted in how your firm approaches design, research, and strategy. And when design and research are elevated by a strategy that is directly accountable to user needs, the results can be transformative. A company that is UX-driven better sets itself up for growth and repeat business—not just because their application features do the job, but because customers love doing the job with their application features. Read on to learn how to promote the culture of UX in your organization.

Start With a Strategy

Think of a positive experience you had while shopping online. You might remember completing a successful search, using informative tools for comparison shopping, and cruising through a frictionless checkout. Now bring to mind a negative interaction: zero search results, puzzling navigation and taxonomy, unclear item stock levels, no guest checkout. Companies notorious for negative customer experiences neglect to meaningfully connect UX with their strategic vision. This applies to brick-and-mortar stores, websites, and mobile apps alike.

The user experience is dynamic, and to optimize for it requires ongoing dialogue. A UX-conscious firm creates processes to actively listen to, understand, and give voice to current and potential customers. A company that values the user experience hears this voice at every step of production and responds accordingly.

It follows, then, that a firm’s design and research activities should carry the lion’s share of a user-centered vision. Let’s turn now to the hallmarks of health in both design and research.

Design

  • Design is iterative, and evolves from sketches to high fidelity through a user-conscious process. If you’re not agile, make a plan to get there.
  • Visual and interaction design should receive equal attention. Don’t focus on one at the other’s expense.
  • Establish standards and best practices that guide most day-to-day design decisions.
  • A qualified design practitioner is comfortable in agile environments and cognizant of strategy and research when they’re on the job. In this sense, the best designers are creative business solutions specialists who are sympathetic toward the end user experience.

Research

  • Most research falls into two categories: generative (or discovery) and evaluative. Discover what should be built and evaluate how it should be improved.
  • Generative research includes ethnography, needs analysis, surveys, and focus groups.
  • Evaluative research typically employs usability tests, heuristic evaluations, A/B testing, and competitive best-of-breed tests. It’s not uncommon for evaluative research to occasionally yield generative insights.
  • Quantitative and qualitative research methodologies are typically best suited to determining the what and the why of user pain points, respectively. Try not to mix the two disciplines in the same study.
  • A qualified research practitioner has a strong command of qualitative and quantitative methodologies and understands when each is called for. Often, they are the first to hear a user’s pain and will naturally advocate for them.
  • Today, the speed of research is as important as its quality: those who research faster can make UX improvements faster. To this end, end-user measurement and analytics provide valuable insights into the user’s journey, the smartphone or tablet they use, and where they are located.
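For the evaluative side, here is a minimal sketch of how an A/B test result might be checked for significance using a two-proportion z-test; the conversion counts are made up.

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate really higher?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = ab_z_score(200, 5000, 260, 5000)  # 4.0% vs 5.2% conversion
significant = z > 1.96                # ~95% confidence threshold
```

In practice an experimentation platform runs this math for you, but knowing the mechanics keeps teams honest about sample sizes and stopping rules.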

Next Steps

Investing in the following areas will help your company bring the user experience into focus. Even modern organizations will benefit from these actions, as customer needs are always in flux.

  • Know your users
    All too often, products are built on myths about users, not on an accurate assessment of their needs. If you haven’t invested in quality persona development, do it, and the earlier the better.
  • Benchmark
    To help justify UX ROI, you need to know where things stand today. What happens when users encounter your service? Why do they abandon it? What are the top three pain points? Click-stream analysis, usability studies, and heuristic evaluations are tried-and-true ways of getting your bearings. Evaluate as many user touchpoints as feasible.
  • Strengthen your data awareness
    Invest in measurement and analytics architecture. Structure your organization so the people who first receive analytics are not incentivized to skew the data. Research, product, and design groups should enjoy equal access to quality metrics; otherwise, an “us versus them” mentality can develop. Invest in end-user measurement and analytics solutions to get closer to your users and understand how every image, function, and feature impacts UX.
  • Define your vision and your customer’s place within it
    Strategic conversations frequently exclude user experience. To foster a true UX culture, your organization should proactively give customer needs a voice in most critical decisions. This means establishing goals and accountability pursuant to those needs.

How to Measure the Results

Firms with lower UX awareness or limited availability of analytics often focus their attention on metrics gleaned within or between research exercises (such as time to task completion or benchmark ratings). While this field of view can prove useful in evaluating customer interactions in isolation, it often misses the larger context of satisfaction across the full spectrum of brand touchpoints.

A mature approach to measuring the impact of user experience taps into diverse data sources. Taken as a whole, measurements including surveys, click stream analytics, call center metrics, and conversion rates can tell a compelling story. Add to that the insight provided by qualitative research methodologies, and a panoramic view of the user journey begins to emerge. Today’s most UX-forward organizations consider all these factors, along with more conventional business KPIs, in order to justify their focus on user needs.

Conclusion

Drawing up a strategic UX roadmap is a great way to link tactics and methods with a user-centered vision for success. This helps promote holistic thinking about how your customer interacts with your brand. What should result is an organization that is both tuned in to its users’ needs and experiences and ready to be changed by what it learns.

Proactively manage your user experience with EUM

As companies embark on digital transformation to better serve their customers, managing the performance of, and each user’s satisfaction with, their applications becomes ever more critical to the success of the business. When we look at today’s breakout companies like Uber, Airbnb, and Slack, it’s evident that software is at the core of their success in each industry. Consumers have gone from visiting a bank branch to make a transfer to fulfilling it instantaneously from a desktop or mobile device. If the web application is not responding, it is the equivalent of walking into a long line at the branch and walking out, creating a negative impression of the brand. In a digital business, making each customer’s interaction with your digital storefront successful should be a core business objective.

So, how do we ensure that every digital interaction is successful and performs quickly? The answer lies in using multiple approaches to manage the end-user experience. In our tool arsenal we have real-user monitoring and synthetic monitoring tools that, when combined with APM capabilities, help us quickly identify poorly performing transactions and triage the root cause to minimize end-user impact. Each tool covers a core area of the web application, and together they give visibility into the whole experience. For real-user monitoring, understanding the performance and sequence of every step in the customer path is critical to identifying areas of opportunity in the funnel and increasing conversions. Real-user monitoring provides a measurement breadth that is impossible to achieve with synthetic tools alone, and puts back-end performance into a user context. On the synthetic side, a repeatable, reproducible set of measurements focused on visual-based timings is valuable for baselining the user experience between releases, proactively capturing errors before users are impacted, benchmarking against competitors, and holding 3rd party content providers accountable for the performance of their content. Synthetic measurements also allow for script assertions that validate the expected content on a page is delivered in a timely way, and alert accurately when there are deviations from the baseline or errors occur.
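The synthetic pattern described above, a scripted measurement plus a content assertion plus a baseline comparison, can be sketched in a few lines. This is an illustrative sketch, not any vendor’s API: the function name, the baseline figure, and the stubbed fetch are all invented for the example.

```python
import time

def run_synthetic_check(fetch, expected_text, baseline_ms, tolerance=1.5):
    """Minimal synthetic probe: time a scripted fetch, assert the expected
    content is present, and flag deviations from a performance baseline."""
    start = time.monotonic()
    body = fetch()  # scripted page retrieval (stubbed here)
    elapsed_ms = (time.monotonic() - start) * 1000

    result = {
        "elapsed_ms": elapsed_ms,
        # Script assertion: verify the expected content actually arrived.
        "content_ok": expected_text in body,
        # Alert when the run is slower than baseline * tolerance.
        "within_baseline": elapsed_ms <= baseline_ms * tolerance,
    }
    result["passed"] = result["content_ok"] and result["within_baseline"]
    return result

# A stubbed fetch stands in for a real HTTP request in this sketch.
check = run_synthetic_check(lambda: "<h1>Checkout</h1>", "Checkout", baseline_ms=800)
```

A production probe would replace the stub with a real browser-driven page load and run the same check on a schedule from multiple locations.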

In a recent survey sponsored by AppDynamics, over 50% of people who manage web and mobile web applications identified 3rd party content as a factor in delivering a good end-user experience. Modern web and mobile sites almost always contain some kind of 3rd party resource, from analytics tracking to social media integration to authentication functionality. Understanding how each of these components affects the end-user experience is critical to maintaining a healthy, well-performing site. Using a real-user monitoring tool like the AppDynamics Browser EUM solution, you can visualize slow content that may be affecting a page load and identify its provider. The challenge then becomes: how do you verify that the provider is living up to its performance claims, and how do you hold it accountable?

Third-party benchmarking is a capability best provided by a synthetic monitoring solution. With a synthetic transaction you can control many variables that are impossible to control in a real-user measurement: a synthetic measurement always uses the same browser and browser version, connectivity profile, and hardware configuration, and is free from spyware, viruses, and adware. Using this clean-room environment, you can see the consistent performance of a page along with every resource, and manage and track each element downloaded, from multiple synthetic locations worldwide. When your monitoring system picks up an unusually high number of slow transactions, you can drill down and isolate the cause to either a core site slowdown or a 3rd party slowdown, and compare performance across synthetic locations to determine whether it’s a user- or geography-specific issue or something happening across the board.
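Isolating a 3rd party slowdown from a core-site slowdown ultimately comes down to grouping per-resource timings by the provider that served them. Here is a rough sketch of that triage step; the URLs, threshold, and timing values are invented for illustration and do not reflect any real provider.

```python
from collections import defaultdict
from urllib.parse import urlparse

def slow_providers(resource_timings, threshold_ms):
    """Group per-resource load times by provider domain and flag any
    provider whose average exceeds the threshold -- a rough way to
    separate a core-site slowdown from a 3rd party one."""
    by_provider = defaultdict(list)
    for url, elapsed_ms in resource_timings:
        by_provider[urlparse(url).netloc].append(elapsed_ms)
    return {
        domain: sum(times) / len(times)
        for domain, times in by_provider.items()
        if sum(times) / len(times) > threshold_ms
    }

# Timings as a synthetic run might record them (illustrative values).
timings = [
    ("https://www.example-shop.com/index.html", 120),
    ("https://www.example-shop.com/app.js", 95),
    ("https://cdn.analytics-vendor.com/tracker.js", 1450),
    ("https://cdn.analytics-vendor.com/beacon.gif", 12),
]
offenders = slow_providers(timings, threshold_ms=500)
# Only the 3rd party CDN exceeds the threshold here, pointing the triage
# away from the core site and toward the content provider.
```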

In managing the user experience, having all pertinent data in real time on a consolidated system can be the difference between a five-minute performance degradation and a five-hour site outage while multiple sources of discordant information are compiled and rationalized. The intersection of data from real-user monitoring and synthetic monitoring brings context to performance events by correlating user session information, such as engagement and conversion, with changes in the performance of 3rd party content or in end-user error rates. A 360-degree view of the customer experience will help ensure a positive experience for your customers.

Interested in learning more? Make sure to watch our Winter ’16 Release webinar.

 

UX – Monitor the Application or the Network?

Last week I flew into Las Vegas for #Interop fully suited and booted in my big blue costume (no joke). I’d been invited to speak in a vendor debate on User eXperience (UX): Monitor the Application or the Network? NetScout represented the Network, AppDynamics (and me) represented the Application, and “Compuware dynaTrace Gomez” sat on the fence representing both. Moderating was Jim Frey from EMA, who did a great job introducing the subject, asking the questions and keeping the debate flowing.

At the start, each vendor gave their usual intro and company pitch, followed by their own definition of what user experience is.

Defining User Experience

So at this point you’d probably expect me to blabber on about how application code and agents are critical for monitoring the UX? Wrong. For me, users experience “Business Transactions”; they don’t experience applications, infrastructure, or networks. When a user complains, they normally say something like “I can’t log in” or “My checkout timed out.” I can honestly say I’ve never heard one say “The CPU utilization on your machine is too high” or “I don’t think you have enough memory allocated.”

Now think about that from a monitoring perspective. Do most organizations today monitor business transactions? Or do they monitor application infrastructure and networks? The truth is the latter, normally with several toolsets. So the question “Monitor the Application or the Network?” is really the wrong question for me. Unless you monitor business transactions, you are never going to understand what your end users actually experience.

Monitoring Business Transactions

So how do you monitor business transactions? The reality is that both application and network monitoring tools are capable of it, but most solutions have been designed not to, instead providing a more technical view for application developers and network engineers. This is wrong, very wrong, and a primary reason why IT never sees what the end user sees or complains about. Today, SOA means applications are more complex and distributed: a single business transaction can traverse multiple applications that potentially share services and infrastructure. If your monitoring solution doesn’t have business transaction context, you’re basically blind to how application infrastructure is impacting your UX.

The debate then switched to how monitoring the UX differs from an application and a network perspective. Simply put, application monitoring relies on agents, while network monitoring relies on passively sniffing network traffic. My point here was that you can monitor user experience with the network, but you can only manage it with the application. With network monitoring you see only business transactions and the application infrastructure, because you’re monitoring at the network layer. In contrast, with application monitoring you see business transactions, application infrastructure, and the application logic (hence the name).

Monitor or Manage the UX?

Both application and network monitoring can identify and isolate UX degradation, because both see how a business transaction executes across the application infrastructure. However, you can only manage UX if you understand what’s causing the degradation, and for that you need deep visibility into the application runtime and logic (code). Operations telling a development team that their JVM is responsible for a user experience issue is a bit like FedEx telling a customer their package is lost somewhere in Alaska. Identifying and isolating pain is useful, but one could argue it’s pointless without being able to manage and resolve that pain by finding the root cause.

NetScout made the point that with network monitoring you can identify common bottlenecks in the network that are responsible for degrading the UX. I have no doubt you could, but if you look at the most common reason for UX issues, it’s change, and what changes most is application logic. Why? Because development and operations teams want to be agile so that their applications and business remain competitive in the marketplace. Agile release cycles mean application logic (code) changes constantly. It’s not unusual for an application to change several times a week, and that’s before you count hotfixes and patches. So if applications change more than the network does, one could argue application monitoring is the more effective way to monitor and manage the end user experience.

UX and Web Applications

We then debated which monitoring approach was better for web-based applications. Obviously, network monitoring can monitor the UX by passively sniffing HTTP packets, so it’s possible to get granular visibility on QoS in the network and application. However, the adoption of Web 2.0 technologies (Ajax, GWT, Dojo) means application logic is moving from the application server to the user’s browser, so browser processing time becomes a critical part of the UX. Unfortunately, network monitoring solutions can’t measure browser processing latency (because they monitor the network), unlike application monitoring solutions, which can use techniques like client-side instrumentation or web-page injection to obtain browser latency for the UX.
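To make the browser-latency point concrete: client-side instrumentation typically reads timestamps like those defined by the W3C Navigation Timing model, and a simple split of those timestamps shows what a network sniffer can and cannot see. This is a sketch under that assumption; the field names follow the Navigation Timing attribute names, and the sample values are invented.

```python
def browser_processing_ms(t):
    """Split page load into network time vs. in-browser processing time,
    given Navigation Timing-style timestamps in milliseconds."""
    # Time on the wire: request sent until the last response byte arrives.
    network_ms = t["responseEnd"] - t["fetchStart"]
    # Time spent parsing, executing scripts, and rendering in the browser,
    # which is invisible to a monitor sniffing packets on the network.
    processing_ms = t["loadEventEnd"] - t["responseEnd"]
    return {"network_ms": network_ms, "processing_ms": processing_ms}

# Invented sample: 300 ms on the network but 900 ms of client-side work.
sample = {"fetchStart": 1000, "responseEnd": 1300, "loadEventEnd": 2200}
split = browser_processing_ms(sample)
```

In a case like this sample, a network-only view would report a fast page while the user waits three times as long for the browser to finish rendering.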

The C Word

We then got to the cloud, and which approach made more sense for monitoring UX there. Network monitoring solutions are normally hardware appliances that plug directly into a network tap or span port. I’ve never asked, but I’d imagine the guys in Seattle (Amazon) and Redmond (Windows Azure) probably wouldn’t let you wheel a network monitoring appliance into their data-centre. More importantly, why would you need to if you’re already paying someone else to manage your infrastructure and network for you? Moving to the cloud is about agility, and letting someone else deal with the hardware and pipes so you can focus on making your application and business competitive. It’s actually very easy for application monitoring solutions to monitor UX in the cloud: agents can piggyback on application code libraries when they’re deployed to the cloud, or cloud providers can embed and provision vendor agents as part of their server builds and provisioning process.

It’s also interesting that the cloud is highlighting a trend towards DevOps (or NoOps for a few organizations), where operations become more focused on applications than on infrastructure. As the network and infrastructure become abstracted in the public cloud, the focus naturally shifts to the application and the deployment of code. For private clouds you’ll still have network ops and engineering teams that build and support the cloud platform, but they won’t be the people who care about user experience. Those people will be the line-of-business or application owners whom the UX impacts.

In reality, most organizations today already monitor the application infrastructure and network. However, if you want to start monitoring the true UX, you should monitor what your users actually experience, and that is business transactions. If you can’t see your users’ business transactions, you can’t manage their experience.

What are your thoughts on this?

AppDynamics is an application monitoring solution that helps you monitor business transactions and manage the true user experience. To get started, sign up for a 30-day free trial here.

I did have an hour spare at #Interop after my debate to meet and greet our competitors, before flying back to AppDynamics HQ. It was nice to see many of them meet and greet the APM Caped Crusader.

App Man.