8 Steps to Migrating from JavaScript to TypeScript

Recently, we’ve been moving our Browser RUM agent from JavaScript to TypeScript. Though it’s been a challenge, we’ve enjoyed seeing how the change benefits us, and it’s been fun learning a new language in the process. Let me share a little of how we migrated to TypeScript, some of the difficulties that arose, and how we tackled them.

Why TypeScript

Before moving to TypeScript, our Browser RUM agent had thousands of lines of code, but they were squeezed into just two JavaScript files.

We felt obligated to refactor it before doing any real work, to make our lives easier when adding new features. Having experienced the pain of developing a large-scale app in JavaScript, we decided to take a look at its sibling languages, which have better support for large-scale development.

After looking into languages such as TypeScript, CoffeeScript, and PureScript, we decided to go with TypeScript for a few reasons:

  1. Static Typing
  2. Modules and Classes
  3. A superset of JavaScript, so it’s easier for JavaScript developers to learn
  4. A success story from our front-end team

8 Steps to Migrating to TypeScript

  1. Prepare Yourself

  2. Rename Files

We renamed all the .js files to .ts files. Since TypeScript is just a superset of JavaScript, you can start compiling the renamed files with the TypeScript compiler right away.

  3. Fix Compiling Errors

There were quite a few compiling errors due to the static type checking performed by the compiler. For instance, the compiler complains about the JavaScript code below:
Example One

var xdr = window.XDomainRequest;

Solution

// declare the specific property on our own
interface Window {
    XDomainRequest?: any;
}

Since “XDomainRequest” is an IE-only property for sending cross-domain requests, it’s not declared in the “lib.d.ts” file (the file that declares the types of all the common JavaScript objects and browser APIs, and which is referenced by the TypeScript compiler by default).
You will get “error TS2339: Property ‘XDomainRequest’ does not exist on type ‘Window’.”.
The solution is to extend the Window interface in “lib.d.ts” with an optional “XDomainRequest” property.
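
If the property is only accessed in one or two places, another option (a quick sketch of an alternative, not what we ultimately did) is a type assertion, which sidesteps the check without extending the interface:

// cast window to any so the compiler skips the property check
var xdr = (<any>window).XDomainRequest;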

Example Two

function foo(a: number, b: number) {
    return;
}

foo(1);

Solution

// explicitly mark the optional argument with a question mark
function foo(a: number, b?: number) {
    return;
}

Optional function arguments need to be marked explicitly in TypeScript, or the compiler gives “error TS2346: Supplied parameters do not match any signature of call target.”.
The solution is to explicitly use “?” to mark the parameter as optional.

Example Three

var myObj = {};
myObj.name = "myObj";

Solution

// use bracket notation to create the new property
myObj['name'] = 'myObj';
// or define an interface for the myObj
interface MyObj {
    name?: string
}

var myObj: MyObj = {};
myObj.name = 'myObj';

When you assign an empty object “{}” to a variable, the TypeScript compiler infers the variable’s type to be an empty object without any properties.
So accessing the “name” property gives “error TS2339: Property ‘name’ does not exist on type ‘{}’”.
The solution is to declare an interface with an optional “name” property for it.

It’s kind of fun to fix these errors, and along the way you learn about the language and how the compiler can help.

  4. Fix Test Cases

After successfully generating JavaScript from the new ts files, we ran the tests against the generated JavaScript and fixed all the failures.

One example of the test failures caused by moving to TypeScript is the difference between these two ways of exporting a function:

export function foo() {}
export var foo = function() {}

Assuming your original JavaScript code is:

var A = {
    foo: function() {},
    bar: function() {foo();}
}

The test case shows:

var origFoo = A.foo;
var fooCalled = false;
A.foo = function(){fooCalled = true;};
A.bar();
assertTrue(fooCalled);
A.foo = origFoo;

If the TypeScript rewrite for the JavaScript is:

module A {
    export function foo() {}
    export function bar() {foo();}
}

The test case will fail. Can you tell why?

If you look at the generated JavaScript code, you will be able to see why.

// generated from export function foo() {}
var A;
(function (A) {
    function foo() { }
    A.foo = foo;
    function bar() { foo(); }
    A.bar = bar;
})(A || (A = {}));

In the test case, when A.foo is replaced, you are only replacing the “foo” property of A, not the foo function itself, so the bar function still calls the original foo.

export var foo = function(){}

can help here.

TypeScript

module A {
    export var foo = function () { };
    export var bar = function () { foo(); };
}

generates

// generated from export var foo = function() {}
var A;
(function (A) {
    A.foo = function () { };
    A.bar = function () { A.foo(); };
})(A || (A = {}));

Now we can replace the foo function called by A.bar.

  5. Refactor Code

TypeScript Modules and Classes help organize the code in a modular, object-oriented way. Dependencies are referenced in the file header:

///<reference path="moduleA.ts" />
///<reference path="moduleB.ts" />
module ADRUM.moduleC.moduleD {
    ...
}

One thing I like when compiling a ts file is the “--out” option, which concatenates all the directly or indirectly referenced ts files, so I don’t need to use requirejs or browserify for the same purpose.
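
For example, assuming a hypothetical entry file named agent.ts that references the other modules, a single compiler invocation produces one concatenated output file (the file names here are placeholders):

tsc --out adrum.js agent.ts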

With TypeScript, we can define classes in the classical-inheritance style that is more familiar to Java and C++ programmers, rather than the prototypal-inheritance way. However, you lose some of the flexibility JavaScript provides, too.

For example, if you are looking for a way to hide a function in the class scope, save your time: it isn’t supported. The workaround is to define the function in the module and use it in the class, as sketched below.
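
Here is a minimal sketch of that workaround (the module, class, and function names are made up for illustration): the helper sits at module scope, so it is invisible outside the module, while the class keeps a clean public surface.

module ADRUM.reporting {
    // module-scoped helper: not exported, so callers outside the module cannot reach it
    function encodePayload(data: string): string {
        return encodeURIComponent(data);
    }

    export class BeaconSender {
        send(data: string): void {
            var payload = encodePayload(data);
            // ... hand the encoded payload to the transport layer
        }
    }
}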

TypeScript allows you to define modules and classes in an easy way and generates the idiomatic JavaScript for you. As a result, I feel you may have fewer opportunities to learn the more advanced parts of JavaScript than you would programming in pure JavaScript.

But just like moving from assembly to C/C++, by and large, it’s still a good thing.

We did not bother adding all the type information to the existing code, but we’ll need to do it when changing or adding code.

It is also worth moving the test cases to TypeScript, as the test cases can then be updated automatically when refactoring the code in the IDE.

  6. Fix Minification

Don’t be surprised if minification breaks, especially when you use the Google Closure Compiler with advanced optimizations.

Problem 1: Dead Code Mistakenly Removed

Advanced optimization includes a “dead code removal” feature that removes code the compiler recognizes as unused.

Some early versions of the Closure Compiler (e.g. version 20121212) mistakenly recognize some code in TypeScript modules as unused and remove it. Fortunately, this has been fixed in the latest version of the compiler.

Problem 2: Export Symbols in Modules

To tell the compiler not to rename the symbols in your code, you need to export the symbols using quote notation. That means you need to export an API as shown below to keep the API name intact even in the minified js file.

module A {
    export function fooAPI() { }
    A["fooAPI"] = fooAPI;
}

transpiled to:

var A;
(function (A) {
    function fooAPI() { }
    A.fooAPI = fooAPI;
    A["fooAPI"] = fooAPI;
})(A || (A = {}));

It’s a little bit tedious. Another option is to use the deprecated @expose annotation.

module A {
    /**
    * @expose
    */
    export function fooAPI() { }
}

It looks like @expose will be removed in the future, and hopefully you’ll be able to use @export once it is. (Refer to the discussion at “@expose annotation causes JSC_UNSAFE_NAMESPACE warning”.)

Problem 3: Export Symbols in Interfaces

If you define a BeaconData interface to be passed to other libraries, you’ll want to keep the key names.

interface BeaconData {
    url: string,
    metrics?: any
}

@expose does not help, as the interface definition transpiles to nothing:

interface BeaconData {
    /**
    * @expose
    */
    url: string,
    /**
    * @expose
    */
    metrics?: any
}

You can preserve the key names with quote notation:

var beaconData: BeaconData = {
    'url': "www.example.com",
    'metrics': {}
};

But what if you want to assign the optional key later?

var beaconData: BeaconData = {
    'url': "www.example.com"
};

// ‘metrics’ will not be renamed but you lose the type checking by ts compiler
// because you can create any new properties with quote notation
beaconData["metrics"] = {…};
beaconData["metricsTypo"] = {}; // no compiling error

// ‘metrics’ will be renamed but dot notation is protected by type checking
beaconData.metrics = {…};
beaconData.metricsTypo = {…}; // compiling error

What we did was to expose the key name as
/** @expose */ export var metrics;
in the interface file to prevent the Closure Compiler from renaming it.

  7. Auto-Generate Google Closure Compiler Externs Files

For the Closure Compiler, if your js code calls an external js library’s APIs, you need to declare those APIs in an externs file to tell the compiler not to rename their symbols. Refer to “Do Not Use Externs Instead of Exports!”.

We used to create the externs files manually, and any time we used a new API we had to update its externs file by hand. After adopting TypeScript, we noticed that the .d.ts files and the externs files contain similar information.

They both contain the external API declarations (.d.ts files just carry more type information), so we decided to try to get rid of one of them.

The first idea that came to mind was to check whether the TypeScript compiler supports minification. Since the ts compiler understands .d.ts files, it wouldn’t need the externs files at all. Unfortunately, it doesn’t support minification, so we have to stay with the Google Closure Compiler.

Then we decided the right thing to do was to generate the externs files from the .d.ts files. Thanks to the open-source ts compiler, we use it to parse the .d.ts files and convert them to an externs file (see my solution at https://goo.gl/l0o6qX).

Now, each time we add a new external API declaration to our .d.ts file, the API symbols automatically appear in the externs file when we build our project.
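
To give a feel for the translation (the API name below is hypothetical), a .d.ts declaration and the externs entry generated from it carry essentially the same information:

// externalLib.d.ts: tells the TypeScript compiler about the external API
declare function thirdPartyTrack(eventName: string, payload?: any): void;

// externs.js: tells the Closure Compiler not to rename the same symbol
/**
 * @param {string} eventName
 * @param {*=} payload
 */
function thirdPartyTrack(eventName, payload) {}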

  8. Wrap the TypeScript Code in One Function

The TypeScript compiler generates code for modules as shown below:

// typescript
module A {
    export var a: number;
}

module A.B {
    export var b: number;
}

// transpiled to javascript
var A;
(function (A) {
    A.a;
})(A || (A = {}));

var A;
(function (A) {
    var B;
    (function (B) {
        B.b;
    })(B = A.B || (A.B = {}));
})(A || (A = {}));

For each module, a variable is created and a function is called; the function creates properties on the module variable for the exported symbols. However, sometimes you want to stop execution under certain conditions, for example when your library has already been defined or has been disabled. In that case you need to wrap all the generated js code in a function yourself, like:

(function (global) {
    if (global.ADRUM || global.ADRUM_DISABLED) {
        return;
    }

    // typescript generated javascript goes here

}(window));

Why web analytics aren’t enough!

Every production web application should use web analytics. There are many great free tools for web analytics, the most popular of which is Google Analytics. Google Analytics helps you analyze visitor traffic and paint a complete picture of your audience and their needs. Web analytics solutions provide insight into how people discover your site, what content is most popular, and who your users are. Modern web analytics also provide insight into user behavior, social engagement, client-side page speed, and the effectiveness of ad campaigns. Any responsible business owner is data-driven and should leverage web analytics solutions to get more information about their end users.

Web Analytics Landscape


While Google Analytics is the most popular and the de facto standard in the industry, there are quite a few other quality web analytics solutions available in the marketplace.

The Forrester Wave Report provides a good guide to choosing an analytics solution.


There are also many solutions focused on specialized web analytics that I think are worth mentioning; they are either geared towards mobile applications or towards getting better analytics on your customers’ interactions.

Once you understand your user demographics, it’s great to be able to get additional information about how performance affects your users. Web analytics only tells you one side of the story, the client-side. If you are integrating web analytics, check out Segment.io which provides analytics.js for easy integration of multiple analytics providers.

It’s all good – until it isn’t

Using Google Analytics on its own is fine and dandy – until you’re having performance problems in production you need visibility into what’s going on. This is where application performance management solutions come in. APM tools like AppDynamics provide the added benefit of understanding both the server-side and the client-side. Not only can you understand application performance and user demographics in real time, but when you have problems you can use the code-level visibility to understand the root cause of your performance problems. Application performance management is the perfect complement to web analytics. Not only do you understand your user demographics, but you also understand how performance affects your customers and business. It’s important to be able to see from a business perspective how well your application is performing in production:


Since AppDynamics is built on an extensible platform, it’s easy to track custom metrics directly from Google Analytics via the machine agent.

The end user experience dashboard in AppDynamics Pro gives you real-time visibility into where your users are suffering the most.


Capturing web analytics is a good start, but it’s not enough to get an end-to-end perspective on the performance of your web and mobile applications. The reality is that understanding user demographics and application experience are two completely separate problems that require two complementary solutions. O’Reilly has a stellar article on why real user monitoring is essential for production applications.

Get started with AppDynamics Pro today for in-depth application performance management.

As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.

Scaling our End User Monitoring Cloud

Why End User Monitoring?

In a previous post, my colleague Tom Levey explained the value of Monitoring the Real End User Experience. In this post, we will dive into how we built a service to scale to billions of users.

The “new normal” for enterprise web applications includes multiple application tiers communicating via a service-oriented architecture that interacts with several databases and third-party web services. The modern application has multiple clients, from browser-based desktops to native applications on mobile. At AppDynamics, we believe that application performance monitoring should cover all aspects of your application from the client-side to the server-side all the way back to the database. The goal of end user monitoring is to provide insight into client-side performance and capture errors from modern javascript-intensive applications. The challenge of building an end user monitoring service is that every single request needs to be instrumented. This means that for every request your application processes, we will process a beacon. With clients like FamilySearch, Fox News, BackCountry, ManPower, and Wowcher, we have to handle millions of concurrent requests.


AppDynamics End User Monitoring enables application owners to:

  • Monitor Their Global Audience and track End User Experience across the World to pinpoint which geo-locations may be impacted by poor Application Performance
  • Capture end-to-end performance metrics for all business transactions – including page rendering time in the Browser, Network time, and processing time in the Application Infrastructure
  • Identify bottlenecks anywhere in the end-to-end business transaction flow to help Operations and Development teams triage problems and troubleshoot quickly
  • Compare performance across all browser types – such as Internet Explorer, Firefox, Google Chrome, Safari, iOS and Android
  • Track javascript errors

“Fox News already depends upon AppDynamics for ease-of-use and rapid troubleshooting capability in our production environment,” said Ryan Jairam, Internet Operations Lead at Fox News. “What we’ve seen with AppDynamics’ End-User Monitoring release is an even greater ability to understand application performance, from what’s happening on the browser level to the network all the way down to the code in the application. Getting this level of insight and visibility for an application as complex and agile as ours has been a tremendous benefit, and we’re extremely happy with this powerful new addition to the AppDynamics Pro solution.”

EUM Cloud Service

The End User Monitoring cloud is our super-scalable platform for data analysis and processing end user requests. In this post we will discuss some of the design challenges of building a cloud service capable of supporting billions of requests and the underlying architecture. Once End User Experience monitoring is enabled in the controller, your application’s requests are automatically instrumented with a very small piece of javascript that allows AppDynamics to capture critical performance metrics.


The javascript agent leverages the Web Episodes javascript timing library and the W3C Navigation Timing Specification to capture end user experience metrics. Once the metrics are collected, they are pushed to the End User Monitoring cloud via a beacon for processing.
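
As a rough sketch of the kind of data involved (this is not the actual agent code), the Navigation Timing API exposes timestamps from which page-load metrics can be derived:

window.addEventListener('load', function () {
    // defer one tick so loadEventEnd has been populated
    setTimeout(function () {
        var t = window.performance.timing;
        var networkAndServerTime = t.responseEnd - t.navigationStart; // until the last response byte
        var domProcessingTime = t.loadEventEnd - t.responseStart;     // parsing and rendering
        var pageLoadTime = t.loadEventEnd - t.navigationStart;        // total wait for the end user
        // a real agent would package these values into a beacon and send it to the collector
    }, 0);
});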

The EUM (End User Monitoring) Cloud Service is our on-demand, cloud-based, multi-tenant SaaS infrastructure that acts as an aggregator for all EUM metrics traffic. EUM metrics from end user browsers across different customers are reported to the EUM Cloud Service, where the raw browser information is verified, aggregated, and rolled up. All the AppDynamics Controllers (SaaS or on-premise) connect to the EUM Cloud Service to download metrics every minute, for each application.

Design Challenges

On-Demand, Highly Available

End users access customer web applications anywhere in the world, at any time of day and in any time zone, and EUM metrics are reported from the browser to the EUM Cloud Service whenever an AppDynamics-instrumented web page is accessed. This requires a highly available, on-demand system that can be reached from different geo-locations and time zones.

Extremely Concurrent Usage

All end users of all AppDynamics customers using the EUM solution continuously report browser information to the same EUM Cloud Service. The EUM Cloud Service processes all the reported browser information concurrently, generating metrics and collecting snapshot samples continuously.

High Scalability

The usage pattern of different applications varies throughout the day, so the number of records to be processed by the EUM Cloud varies by application and by time. The EUM Cloud Service automatically scales up to handle any surge in incoming records and scales back down under lower load.

Multi-Tenancy Support

The EUM Cloud Service processes EUM metrics reported from different applications for different customers, so the cloud service provides multi-tenancy. The reported browser information is partitioned by customer and application. The EUM Cloud Service provides a mechanism for each customer’s controller to download aggregated metrics and snapshots based on customer and application identification.

Cost

The EUM Cloud Service needs to be able to dynamically scale based on demand. The problem with supporting massive scale is that we would otherwise have to pay for hardware upfront and over-provision to handle huge spikes. One of the motivating factors in choosing Amazon Web Services is that costs scale linearly with demand.

Architecture

The EUM Cloud Service is hosted on Amazon Web Services infrastructure for horizontal scaling. The service has two functional components, a collector and an aggregator. Multiple instances of these components work in parallel to collect and aggregate the EUM metrics received from end user browsers and devices. The transient metric data is stored in Amazon S3 buckets. All the metadata related to applications and other configuration is stored in Amazon DynamoDB tables.

A single page load will send one or more beacons: one for the base page, one for every iframe onload, and one per Ajax request. JavaScript errors occurring after page load are also sent as error beacons.

The functionality of the nodes is to receive the metric data from the browser and process it for the controller:

  • Resolve the GEO information (the country/region/city the request comes from) and add it to the metric using an in-process MaxMind geo-resolver.
  • Parse the User-Agent information and add browser information, device information and OS information to the metrics.
  • Validate the incoming browser reported metrics and discard invalid metrics
  • Mark the metrics/snapshots as SLOW or VERY SLOW based on a dynamic standard-deviation algorithm or a static threshold (a rough sketch of the idea follows below)
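
As a rough illustration only (the real thresholds and algorithm used by the service are not described here), a standard-deviation-based categorization looks something like this:

// hypothetical thresholds: values far above the running mean get flagged as slow
function categorize(value: number, mean: number, stdDev: number): string {
    if (value > mean + 4 * stdDev) {
        return 'VERY SLOW';
    }
    if (value > mean + 3 * stdDev) {
        return 'SLOW';
    }
    return 'NORMAL';
}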

Load Testing

For maximum scalability, we leverage Amazon Web Services’ global presence for optimal performance in every region (Virginia, Oregon, Ireland, Tokyo, Singapore, Sao Paulo). In our most recent load test, we tested the system as a whole at about 6.5 billion requests per day. The system is designed to scale up easily as load grows, and we’ve tested it running at many billions of requests per day without breaking a sweat.

Check out your end user experience data in AppDynamics


Find out more about AppDynamics Pro and get started monitoring your application with a free 15 day trial.

As always, please feel free to comment if you think I have missed something or if you have a request for content in an upcoming post.

Manpower Group Sees Real Results from End User Monitoring

Some companies talk about monitoring their end user experience and other companies take the bull by the horns and get it done. For those who have successfully implemented EUM (RUM, EUEM, or whatever your favorite acronym is) the technology is rewarding for both the company and the end user alike. I recently had the opportunity to discuss AppDynamics EUM with one of our customers and the information shared with me was exciting and gratifying.

The Environment

ManpowerGroup monitors their intranet and internet applications with AppDynamics. These applications are used for internal operations as well as for customer-facing websites, in support of their global business, and are accessed from around the world, 24×7. We’re talking about business-critical, revenue-generating applications!

I asked Fred Graichen, Manager of Enterprise Application Support, why he thought ManpowerGroup needed EUM.

“One of the key components for EUM is to shed light on what is happening in the “last mile”. Our business involves supporting branch locations. Having an EUM tool allows us to compare performance across all of our branches. This also helps us determine whether any performance issues are localized. Having the insight into the difference in performance by location allows us to make more targeted investments in local hardware and network infrastructure.”

Meaningful Results

Turning on a monitoring tool doesn’t mean you’ll automagically get the results you want. You also need to make sure your tool is integrated with your people, processes, and technologies. That’s exactly what ManpowerGroup has done with AppDynamics EUM. They have alerts based upon EUM metrics that get routed to the proper people. They are then able to correlate the EUM information with data from other (Network) monitoring tools in their root cause analysis. Below is an EUM screen shot from ManpowerGroup’s environment.

MPG EUM

By implementing AppDynamics EUM, ManpowerGroup has been able to:

  • Identify locations that are experiencing the worst performance.
  • Successfully illustrate the difference in performance globally as well. (This is key when studying the impact of latency, etc., on an application that is accessed from other countries but hosted in a central datacenter.)
  • Quickly identify when a certain location is seeing performance issues and correlate that with data from other monitoring solutions.

But what does all of this mean to the business? It means that ManpowerGroup has been able to find and resolve problems faster for their customers and employees. Faster application response time combined with happier customers and more productive employees all contribute to a healthier bottom line for ManpowerGroup.

ManpowerGroup is using AppDynamics EUM to bring a higher level of performance to its employees, customers, and shareholders. Sign up for a free trial today and begin your journey to a healthier bottom line.

Synthetic vs Real-User Monitoring: A Response to Gartner

Recently Jonah Kowall of Gartner released a research note titled “Use Synthetic Monitoring to Measure Availability and Real-User Monitoring for Performance”. After reading this paper I had some thoughts that I wanted to share based upon my experience as a Monitoring Architect (and certifiable performance geek) working within large enterprise organizations. I highly recommend reading the research note as the information and findings contained within are spot on and highlight important differences between Synthetic and Real-User Monitoring as applied to availability and performance.

My Apps Are Not All 24×7

During my time working at a top 10 Investment Bank I came across many different applications with varying service level requirements. I say they were requirements because there were rarely ever any agreements or contracts in place, usually just an organizational understanding of how important each application was to the business and the expected service level. Many of the applications in the Investment Bank portfolio were only used during trading hours of the exchanges that they interfaced with. These applications also had to be available right as the exchanges opened and performing well for the entire duration of trading activity. Having no real user activity meant that the only way to gain any insight into availability and performance of these applications was by using synthetically generated transactions.

Was this an ideal situation? No, but it was all we had to work with in the absence of real user activity. If the synthetic transactions were slow or throwing errors at least we could attempt to repair the platform before the opening bell. Once the trading day got started we measured real user activity to see the true picture of performance and made adjustments based upon that information.


Can’t Script It All

Having to rely upon synthetic transactions as a measure of availability and performance is definitely suboptimal. The problem gets amplified in environments where you shouldn’t be testing certain application functionality due to regulatory and other restrictions. Do you really want to be trading securities, derivatives, currencies, etc… with your synthetic transaction monitoring tool? Me thinks not!

So now there is a gaping hole in your monitoring strategy if you are relying upon synthetic transactions alone. You can’t test all of your business critical functionality even if you wanted to spend the long hours scripting and testing your synthetics. The scripting/testing time investment gets amplified when there are changes to your application code. If those code updates change the application response you will need to re-script for the new response. It’s an evil cycle that doesn’t happen when you use the right kind of real user monitoring.

Real User Monitoring: Accurate and Meaningful

When you monitor real user transactions you will get more accurate and relevant information. Here is a list (what would a good blog post be without a list?) of some of the benefits:

  • Understand exactly how your application is being used.
  • See the performance of each application function as the end user does, not just within your data center.
  • No scripting required (scripting can take a significant amount of time and resources)
  • Ensure full visibility of application usage and performance, not just what was scripted.
  • Understand the real geographic distribution of your users and the impact of that distribution on end user experience.
  • Ability to track performance of your most important users (particularly useful in trading environments)

Conclusion

Synthetic transaction monitoring and real user monitoring can definitely co-exist within the same application environment. Every business is different and has its own unique requirements that can impact the type of monitoring you choose to implement. If you’ve not yet read the Gartner research note I suggest you go check it out now. It provides a solid analysis of synthetic and real-user monitoring tools, companies, and usage scenarios, which is quite different from what I have covered here.

Has synthetic or real transaction monitoring saved the day for your company? I’d love to hear about it in the comments below.