The History and Future of Java Programming Language

As one of the internet’s most renowned programming languages, Java has had a profound impact on how people navigate the digital world. Much of what users expect from internet-connected devices has been shaped by Java. You don’t have to be a developer, however, to recognize its influence.

The story of Java goes back more than two decades and has evolved alongside the digital transformation of the world. As consumer and business demands on scalability increase, Java is forced to grow and adapt to stay relevant. Stakeholders approach that work better armed with a primer on Java’s history, current use, and future direction.

The History of Java: A Timeline

Early Development

Java is the brainchild of James Gosling, who traces Java’s core idea of “Write Once, Run Anywhere” back to work he did in graduate school.

After spending time at IBM, Gosling joined Sun Microsystems in 1984. In 1991, Gosling partnered with Sun colleagues Michael Sheridan and Patrick Naughton on Project Green, an effort to develop new technology for programming next-generation smart appliances.

Gosling, Naughton, and Sheridan set out to develop the project according to a set of rules tied specifically to performance, security, and functionality. Those rules were that Java must be:

  1. Secure and robust
  2. High performance
  3. Portable and architecture-neutral, which means it can run on any combination of software and hardware
  4. Threaded, interpreted, and dynamic
  5. Object-oriented

Over time, the team added features and refinements that extended the heritage of C and C++, resulting in a new language called Oak, named after a tree outside Gosling’s office.

After efforts to use Oak for interactive television failed to materialize, the technology was retargeted at the World Wide Web. The team also began work on a web browser as a demonstration platform.

Because of a trademark conflict, Oak was renamed Java, and in 1995 Java 1.0a2, along with the HotJava browser, was released.

Developer Reception

Java was well received by developers in the software community, in particular because it was built on the “Write Once, Run Anywhere” (WORA) philosophy. This flexibility is rooted in Java’s compilation to bytecode, which runs on any platform with a Java virtual machine and so bypasses the barrier of differing system infrastructure. Java was a unique programming language because it essentially solved the portability problem for the first time in the industry.

For a period, Java was available under an open source license. Sun Microsystems made the switch in 2006 in an effort to prevent fragmentation in the market and to appeal to developers who worked primarily with open source platforms. Oracle scaled back that effort and emphasized commercial licensing after it acquired Sun Microsystems in 2010.

Java’s age and pervasiveness mean most programmers have encountered it at one time or another, if not built a full-time career on it. Given this large user base, there are inevitable differences of opinion about whether Java is still relevant.

Developers do seem to be exploring other options besides Java. According to the September 2016 TIOBE Index, the popularity of Java as a programming language is in decline. However, it still reigns as the most widely used language, surpassing the .NET languages and maintaining its top-ranked position from previous years.

Strengths of Java

As a developer, you may already realize the advantages of using Java, which help explain why Java is one of the leading programming languages used in enterprise today:

  • Garbage Collection – Languages such as C and C++ require you to free allocated memory manually, a stark contrast to Java’s built-in garbage collection.
  • Verbose, Static Language – Thanks to Java’s robust static typing, code is easy to read and maintain, and it scales well to a variety of enterprise-level applications.
  • Rich Tooling – Collaborative automation tools such as Apache Maven and a wide open source ecosystem are all Java-friendly. AppDynamics is no exception: it lets you understand the health of your JVM with key Java tuning and profiling metrics, including response times, throughput, exception rate, garbage collection time, code deadlocks, and more.
  • Easy to Run, Easy to Write – Write Java once, and you can run it almost anywhere, at any time. This is the cornerstone strength of Java: you can use it to create mobile applications or desktop applications that run on different operating systems and servers, such as Linux or Windows.
  • Adaptability – The JVM is the basis for several other languages, which is why you can use languages such as Groovy, Jython, and Scala with ease.

Weaknesses of Java

Even though Java has an array of strengths, this eminent programming language still has its challenges:

  • Not a Web Language – The number of layers and tools needed to create web applications, such as Struts, JPA, or JSP, takes away from Java’s intended ease of use. These additional frameworks have their own issues and can be difficult to work with.
  • Release Frequency – With each change in the runtime, developers must get up to speed, causing internal delays. This is a nuisance for businesses concerned with security, since Java updates may cause temporary disruption and instability.

The Next Evolution of Java

Java is not a legacy programming language, despite its long history. The robust use of Maven, the build tool for Java-based projects, debunks the theory that Java is outdated. Although there are a variety of deployment tools on the market, Apache Maven remains one of the most widely used automation tools for building and deploying software applications.

With Oracle committed to Java for the long haul, it’s not hard to see why Java will remain a mainstay among programming languages for years to come. 2017 will also see the release of the eighth version of the enterprise platform, Java EE 8.

Despite its areas for improvement, and competition from rival platforms like .NET, Java is here to stay. Oracle plans a new version release in early 2017, with new features that should strongly appeal to developers. Java’s many strengths as a programming language mean its place in the digital world will only solidify. A language designed from the start for ease of use has proved itself functional and secure over more than two decades. Developers who appreciate technological change can rest assured that the tried-and-true Java language will likely always have a significant place in their toolset.

Learn More

Read more about Java Application Performance Monitoring.

What’s exciting about Java 9 and Application Performance Monitoring

In today’s modern computing age, constant enhancements in software are driving us toward an era of software revolution; perhaps that is how the 21st century will be remembered best. Among popular software languages, Java continues to have the largest industry footprint, running applications around the globe that produce combined annual revenue in the trillions. That’s why keeping up with the JDK is a high priority. Beyond a massive API that improves programming productivity, Java has also grown thanks to its high-performance, scalable JVM runtime, which powers some of the fastest modern computing applications. As Java’s footprint expands, JDK innovations continue to impact billions of lines of code. As AppDynamics grows, our focus on supporting Java is reinforced by our customers’ use and the industry’s adoption of the JVM.


Since the release of Java 8 in March 2014, discussion around what’s next for Java 9 has steadily grown. Although various JDK enhancements were originally targeted for Java 9, the scope of committed work has gradually narrowed, with a proposed release date of spring 2017. Of the more than 30 enhancements presently targeted, those with the broadest potential impact are covered below.

Project Jigsaw:

Among the highest-impact JDK 9 enhancements are those from Project Jigsaw. Jigsaw’s primary goal is to make the JDK more modular while also enhancing the build system. It is motivated by the need to make Java more scalable for smaller computing devices, more secure, and more performant, and to improve developer productivity. With the advent of the Internet of Things (IoT), enabling Java to run on smaller devices is instrumental for continued growth. However, as Java’s footprint expands, it becomes more prone to security targeting and performance issues, by nature of running on a vast permutation of computing services. With a more modular JDK, developers can significantly reduce the set of libraries needed to build features, reducing security risk while making applications smaller and faster (i.e., improving code cache and class loader footprint).
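To make this concrete, a module under Jigsaw declares its dependencies and exported packages in a module descriptor. The sketch below is purely illustrative (the module and package names are invented, and the syntax was still subject to change before release):

```java
// module-info.java -- hypothetical module and package names, for illustration
module com.example.orders {
    requires java.sql;              // depend only on the platform modules you need
    exports com.example.orders.api; // expose only your public API packages
}
```

Everything not exported stays internal to the module, which is what shrinks both the attack surface and the runtime footprint.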

HTTP/2 Client: 

Among the most popular web protocols, HTTP has been getting its own upgrade to HTTP/2 (with inspiration from Google’s SPDY/2), boasting significant network performance gains. Java 9 will therefore get its own HTTP client API implementing HTTP/2 and WebSocket to replace the legacy HttpURLConnection, which predates HTTP/1.1 and has various limitations, such as its one-thread-per-request/response behavior. Using HTTP/2 in Java 9, applications will see better performance and scalability, with memory usage on par with or lower than HttpURLConnection, Apache HttpClient, and Netty.
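As a sketch of the new client: note that in the JDK 9 early-access builds the API lives in the incubator package jdk.incubator.http; the example below uses the java.net.http names under which the API was ultimately standardized, and the URL is a placeholder.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class Http2Sketch {
    // Build a client that prefers HTTP/2 (it negotiates down to HTTP/1.1
    // when the server does not support the newer protocol).
    public static HttpClient newHttp2Client() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
    }

    // Build (but do not send) a simple GET request.
    public static HttpRequest buildRequest(String url) {
        return HttpRequest.newBuilder(URI.create(url)).GET().build();
    }

    // Sending would look like:
    //   HttpResponse<String> resp =
    //       client.send(request, HttpResponse.BodyHandlers.ofString());
}
```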


JShell:

Often referred to as Java’s REPL (Read-Eval-Print Loop), JShell will give JDK 9 users a shell-like interface to interactively evaluate declarations, statements, and expressions in Java. Similar to the Python or Ruby interpreters, or to other JVM languages like Scala and Groovy, Java users will be able to run Java code without wrapping it in classes or methods, allowing much easier and faster learning and experimentation. Furthermore, as Java moves toward becoming a less syntactically verbose language, with features like lambdas introduced in JDK 8 (covered in our Java 8 blog last year), a shell-like interface becomes ever more practical for ad-hoc testing.
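For a taste, a session in the new shell might look like the following (prompt and output formatting are from the early-access builds and may change):

```
$ jshell
jshell> int square(int x) { return x * x; }
|  created method square(int)

jshell> square(7)
$1 ==> 49
```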


Segmented Code Cache:

The JVM code cache is critical to application performance and can be sized at startup using the following flag: -XX:InitialCodeCacheSize=32m. When code cache memory is exhausted, the JVM loses JIT compilation and falls back to interpreted mode, significantly degrading application runtime performance. In Java 9, this region of memory is divided into three distinct segments in order to improve performance and enable future extensions: JVM internal (non-method) code, profiled code, and non-profiled code.
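For reference, the cache can already be sized as a whole, and the JDK 9 work adds per-segment sizing flags (flag names as in the segmented code cache proposal, subject to change before release; app.jar is a placeholder):

```shell
# Size the code cache as a whole (existing behavior)
java -XX:InitialCodeCacheSize=32m -XX:ReservedCodeCacheSize=240m -jar app.jar

# JDK 9: size the three segments individually
java -XX:NonNMethodCodeHeapSize=8m \
     -XX:ProfiledCodeHeapSize=116m \
     -XX:NonProfiledCodeHeapSize=116m \
     -jar app.jar
```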

Notable mentions:

Some other notable changes in JDK 9 include making the G1 collector the default, adding a microbenchmarking suite extending the current JMH, and removing some aging, unsupported performance tools such as the JVM TI hprof agent and jhat.

Although most programming languages tend to come and go, Java is one that’s here to stay (at least for the foreseeable decade). As one of the most popular and widely adopted languages, with a high-performance, scalable runtime, innovations to the JDK have a large impact on the world’s computing infrastructure. By staying current with what’s in the next JDK, firms running JVM services can intelligently plan and prioritize innovation initiatives that are complementary to language improvements. For those excited, impatient, and looking to get hands-on, the latest JDK 9 early-access builds are available for download.


Why Now is the Perfect Time to Upgrade to Java 8

This past March, Oracle released its most anticipated version in almost a decade: Java 8. The release had generated growing buzz since it was announced, and companies of all sizes were eager to upgrade. Our partner Typesafe conducted a Java 8 adoption survey of 2,800 developers and found that 65% of companies had committed to adopting within the first 24 months of the release date.

Typesafe’s survey corroborated InfoQ’s survey of developers, in which 61% said they were committed to adopting Java 8. Their handy heatmap below displays how excited developers were to get started with Java 8 and use new features such as lambda expressions, the date and time API, and the Nashorn JavaScript engine. In my opinion, lambda expressions are by far the most exciting new Java 8 feature.


So, why are folks so excited for Java 8?

Lambda Expressions and Stream Processing

What are they?

Lambda expressions are arguably the most exciting and interesting new feature of the Java 8 release. Not only is the feature itself exciting for engineers; its implications will have resounding effects on flexibility and productivity.

A lambda expression is essentially an anonymous function which can be invoked as a named function normally would, or passed as an argument to higher-order functions. The introduction of lambdas opens up aspects of functional programming to the predominantly object-oriented programming environment, enabling your code to be more concise and flexible.
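As a minimal sketch of the idea (the class and method names here are our own, not from the original post):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LambdaIntro {
    // A lambda is an anonymous function. Here one is bound to the
    // Function interface and then passed to a higher-order function (map).
    public static List<Integer> squares(List<Integer> xs) {
        Function<Integer, Integer> square = x -> x * x;
        return xs.stream().map(square).collect(Collectors.toList());
    }
}
```

Calling `LambdaIntro.squares(Arrays.asList(1, 2, 3))` returns `[1, 4, 9]`.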

Why is it useful?

Consider the task of parsing Twitter data from a given user’s home stream. Specifically, we’ll build a map from word length to the list of words of that length drawn from the user’s home stream.

For instance, a status along the lines of:

  “sometimes misbehaving programs wont uninstall, so here are some tips for an application makeover”

Should yield:

  2=[so, an], 
  3=[are, for], 
  4=[wont, here, some, tips], 
  8=[programs, makeover], 
  9=[sometimes, uninstall], 
  11=[misbehaving, application]

And of course, for many tweets this data is aggregated.

Using traditional Java loop constructs, this could be solved as follows:

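The original code screenshot has not survived, so here is an illustrative reconstruction. The timeline is stubbed out as plain strings (a real implementation would fetch Status objects from a Twitter client), and the class and variable names are our own:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WordLengthLoops {
    // Stand-in for a fetched home timeline.
    public static final List<String> TIMELINE = Arrays.asList(
            "sometimes misbehaving programs wont uninstall",
            "so here are some tips for an application makeover");

    public static Map<Integer, List<String>> wordsByLength(List<String> statuses) {
        // Extract text, remove punctuation, gather one big list of words
        List<String> words = new ArrayList<>();
        for (String status : statuses) {
            String cleaned = status.toLowerCase().replaceAll("[^a-z0-9 ]", "");
            words.addAll(Arrays.asList(cleaned.split("\\s+")));
        }
        // Filter links/empty words and bucket each word by its length
        Map<Integer, List<String>> byLength = new TreeMap<>();
        for (String word : words) {
            if (word.isEmpty() || word.startsWith("http")) {
                continue;
            }
            List<String> bucket = byLength.get(word.length());
            if (bucket == null) {
                bucket = new ArrayList<>();
                byLength.put(word.length(), bucket);
            }
            bucket.add(word);
        }
        return byLength;
    }
}
```

For the two sample statuses above, `wordsByLength(TIMELINE)` yields `{2=[so, an], 3=[are, for], 4=[wont, here, some, tips], 8=[programs, makeover], 9=[sometimes, uninstall], 11=[misbehaving, application]}`.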

Let’s break down what’s happening step-by-step:

  • Fetch the Twitter home timeline
  • For each status
    • Extract text
    • Remove punctuation
    • Gather in one big list of words
  • For each word
    • Filter HTTP links and empty words
    • Add word to mapping of length to list of words of same length

Now, let’s consider the solution using stream processing and lambdas:

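Again reconstructed for illustration, since the screenshot is gone (same stubbed-string timeline and invented names as before, kept self-contained here):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class WordLengthStreams {
    public static Map<Integer, List<String>> wordsByLength(List<String> statuses) {
        return statuses.stream()
                .map(s -> s.toLowerCase().replaceAll("[^a-z0-9 ]", "")) // extract text, strip punctuation
                .flatMap(s -> Arrays.stream(s.split("\\s+")))           // one big stream of words
                .filter(w -> !w.isEmpty() && !w.startsWith("http"))     // drop links and empty words
                .collect(Collectors.groupingBy(String::length, TreeMap::new, Collectors.toList()));
    }
}
```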

The lambda solution follows the same logic and is significantly shorter. Better still, it can be parallelized very easily. The next version performs the same processing in parallel:

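A reconstructed sketch of the parallel variant; the only change from the sequential pipeline is the stream source:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class WordLengthParallel {
    public static Map<Integer, List<String>> wordsByLength(List<String> statuses) {
        return statuses.parallelStream()                                // <- the only change
                .map(s -> s.toLowerCase().replaceAll("[^a-z0-9 ]", ""))
                .flatMap(s -> Arrays.stream(s.split("\\s+")))
                .filter(w -> !w.isEmpty() && !w.startsWith("http"))
                .collect(Collectors.groupingBy(String::length, TreeMap::new, Collectors.toList()));
    }
}
```

Because the source is an ordered stream, `groupingBy` still produces the same per-length word ordering as the sequential version.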

Though a contrived example for purposes of illustration, the implications here are profound.

By adding lambda expressions, code can be developed faster, read more clearly, and remain more flexible overall.

Flexible Code

As mentioned earlier, the implications of adding lambda expressions are huge. Flexible code is one of the biggest advantages of this feature. In today’s Agile and rapid-release engineering environment, it’s imperative for your code to be amenable to change. Java has finally begun to close the gap on other more nimble programming languages.

As another example, let’s consider an enhancement request for our Twitter processor. In the abstract, we wish to procure a list of Twitter timeline statuses deemed “interesting”. Concretely: the retweet count is greater than 1 and the status text contains the word “awesome”. This is rather straightforward to implement, as outlined below:

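Reconstructed for illustration, with a minimal stand-in for the Twitter client’s Status type (the real type would come from the client library):

```java
import java.util.List;
import java.util.stream.Collectors;

public class InterestingTweets {
    // Minimal stand-in for a Twitter client's Status type.
    public static class Status {
        private final String text;
        private final int retweetCount;

        public Status(String text, int retweetCount) {
            this.text = text;
            this.retweetCount = retweetCount;
        }

        public String getText() { return text; }
        public int getRetweetCount() { return retweetCount; }
    }

    // "Interesting": retweeted more than once AND mentions "awesome".
    public static List<Status> interesting(List<Status> timeline) {
        return timeline.stream()
                .filter(s -> s.getRetweetCount() > 1
                          && s.getText().contains("awesome"))
                .collect(Collectors.toList());
    }
}
```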

Now, at some later point in time, suppose product management decides to change what it means for a tweet to be interesting. Specifically, we’ll need to provide a user interface where a user can indicate based on an available set of criteria how a Tweet is deemed interesting.

This poses an interesting set of challenges. First, the user interface should provide some representation of the available set of filter criteria. More importantly, that representation should manifest in the Twitter processor as a formal set of filter criteria applied in code. One approach is to parameterize the filter so that calling code specifies the criteria. This strategy is illustrated as follows:

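A reconstructed sketch of that strategy: the filtering mechanism stays fixed, while the criteria arrive as a java.util.function.Predicate supplied by the caller (for example, assembled from selections made in the UI); names here are invented:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class TimelineFilter {
    // HOW the timeline is filtered is fixed here; WHAT criteria are
    // imposed is decided entirely by the caller-supplied predicate.
    public static <T> List<T> filterTimeline(List<T> timeline, Predicate<T> criteria) {
        return timeline.stream()
                .filter(criteria)
                .collect(Collectors.toList());
    }
}
```

A caller could then express the earlier “interesting” rule as `filterTimeline(timeline, s -> s.getRetweetCount() > 1 && s.getText().contains("awesome"))`, or swap in any other predicate without touching the filtering code.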

This grants calling code the ability to specify arbitrary filter criteria, realized by the UI component.

By separating how the timeline is filtered from what criteria are imposed, the code is now flexible enough to accept arbitrary filter criteria.

Full code details can be found at the following Github repository.


In short, lambda expressions in Java 8 enable development of clear, concise code while maximizing flexibility to remain responsive to future changes.

Engineers, and entire companies, work better when they can spend more time innovating on new products and features rather than spending the majority of their time firefighting existing problems and squashing bugs. With AppDynamics Java 8 support, you can finally gain some of that time back, become more efficient, and start innovating again.

After implementing AppDynamics throughout their Java environment, Giri Nathan, VP of Engineering, said: “The AppDynamics APM solution increases our agility by letting us instrument any new code on the fly. We can monitor everything from servlets and Enterprise JavaBeans entry points to JDBC exit points, which gives us an end-to-end view of our transactions.”

Interested in seeing how you can get the most out of the new Java 8 features with AppDynamics? Start a FREE trial now!

Diving Into What’s New in Java & .NET Monitoring

In the AppDynamics Spring 2014 release, we added quite a few features to our Java and .NET APM solutions. With the addition of service endpoints, an improved JMX console, JVM crash detection and crash reports, additional support for many popular frameworks, and async support, we have the best APM solution in the marketplace for Java and .NET applications.

Added support for frameworks:

  • TypeSafe Play/Akka

  • Google Web Toolkit

  • JAX-RS 2.0

  • Apache Synapse

  • Apple WebObjects

Service Endpoints

With the addition of service endpoints, customers with large SOA environments can define specific service points at which to track metrics and get associated business transaction information. Service endpoints help service owners monitor and troubleshoot their own specific services within a large set of services:

JMX Console

The JMX console has been greatly improved, adding the ability to manage complex attributes, execute MBean methods, and update MBean attributes:

JVM Crash Detector

The JVM crash detector has been improved to provide crash reports with dump files that allow tracing the root cause of JVM crashes:

Async Support

We added improved support for asynchronous calls, along with a waterfall timeline that provides better clarity into where time is spent during requests:


AppDynamics for .NET applications has been greatly improved with better integration and support for Windows Azure, ASP.NET MVC 5, improved Windows Communication Foundation support, and RabbitMQ support:



Take five minutes to get complete visibility into the performance of your production applications with AppDynamics today.

FamilySearch Saves $4.8 Million with AppDynamics [Infographic]

Everyone and their mother is talking about big data these days – how to manage it, how to analyze it, how to gain insight from it – but very few organizations actually have big data they need to worry about managing or analyzing. That’s not the case for FamilySearch, the world’s largest genealogy organization. FamilySearch has 10 petabytes of census records, photographs, immigration records, and more in its database, and its data grows every day as volunteers upload more documents. Ironically, this organization tasked with cataloging our past is now at the forefront of the big data trend, as it is forced to find new and innovative ways to manage and scale its data.

From 2011 to 2012, FamilySearch scaled almost every aspect of their application, from data to throughput to user concurrency. According to Bob Hartley, Principal Engineer and Development Manager at FamilySearch, AppDynamics was instrumental in this project. Hartley estimates that FamilySearch saved $4.8 million over two years by using AppDynamics to optimize the application instead of scaling infrastructure. That’s a pretty big number, so we broke it down for you in this infographic:



How FamilySearch Scaled

  • From 11,500 tpm to 122,000 tpm
  • From 6,000 users per minute to 12,000 users per minute
  • From 12 application releases per year to 20 application releases per year
  • From 10 PB of data to approaching 20 PB of data
  • No additional infrastructure
  • Response time reduced from minutes to seconds

Before AppDynamics

  • 227 Severity-1 incidents/year took 33 hours each to troubleshoot
  • 300 pre-production defects per year took 49 hours each to troubleshoot
  • This amounts to a total of 36,891 man-hours spent on troubleshooting every year

After AppDynamics

FamilySearch estimates that they saved $4.8 million with AppDynamics in two years. That’s a huge number, so let’s break it down:

Infrastructure Savings:

  • FamilySearch would have had to purchase 1,200 servers at approx. $1,000 each, amounting to $1,200,215 in savings
  • Those 1,200 servers would cost $2,064,370 in power and air conditioning
  • Those 1,200 servers would cost $200,000 in administrative costs over two years

Productivity Savings:

FamilySearch estimates that they’ve reduced troubleshooting time for both pre-production defects and production incidents by 45%, amounting to $885,170 in savings for pre-production and $460,836 in savings for production incidents (based on average salaries for those positions).

To learn more about what FamilySearch accomplished and how they use AppDynamics, check out their case study and Bob Hartley’s video interview on the FamilySearch ROI page.

Code Deadlock – A Usual Suspect

Imagine you’re an operations guy and you’ve just received a phone call or alert notifying you that the application you’re responsible for is running slow. You bring up your console, check all related processes, and notice one java.exe process isn’t using any CPU while the other Java processes are. The average sys admin at this point would just kill and restart the Java process, cross their fingers, and hope everything returns to normal (this actually does work most of the time). An experienced sys admin might perform a kill -3 on the Java process, capture a thread dump, and pass it back to dev for analysis. Now suppose your application returns to normal: end users stop complaining, you pat yourself on the back and beat your chest, and basically resume what you were doing before you were rudely interrupted.

The story I’ve just told may seem contrived, but I’ve witnessed it several times with customers over the years. The stark reality is that no one in operations has the time or visibility to figure out the real business impact behind issues like this. As a result, little pressure is applied to development to investigate data like thread dumps, so root causes go unfound and production slowdowns recur. It’s true that restarting a JVM or CLR will resolve a fair few issues in production, but it’s only a temporary fix papering over the real problems in the application logic and configuration.

Now imagine for one minute that operations could actually figure out the business impact of production issues, along with identifying the root cause, and communicate this information to Dev so problems could be fixed rapidly. Sounds too good to be true, right? Well, a few weeks ago an AppDynamics customer did just that and the story they told was quite compelling.

Code Deadlock in a distributed E-Commerce Application

The customer application in question was a busy e-commerce retail website in the US. The architecture was heavily distributed with several hundred application tiers that included JVMs, LDAP servers, CMS server, message queues, databases and 3rd party web services. Here is a quick glimpse of what that architecture looked like from a high level:

Detecting Code Deadlock

If we look at the AppDynamics problem pane (right) as the customer saw things, it shows the severity of their issues. During peak hours the application was processing just over 4,000 business transactions per minute, which, across the daily traffic curve, worked out to just under 1 million transactions a day. Approximately 2.5% of these transactions were impacted by the slowdown, which was the result of the 92 code deadlocks shown here, all occurring during peak hours.

AppDynamics dynamically baselines the performance of every business transaction type, then classifies each execution as normal, slow, very slow, or stalled depending on its deviation from that unique performance baseline. This is critical for understanding the true business impact of every issue or slowdown, because operations can immediately see how many user requests were impacted relative to the total requests being processed by the application.

From this pane, operations were able to drill down into the 92 code deadlocks and see the events that took place as each one occurred. As you can see from the screenshot (below left), the sys admins kept restarting the JVMs during the slowdown (as shown) to try to make the issues go away. Unfortunately, this didn’t work, given that the application was experiencing high concurrency under peak load.

By drilling into each code deadlock event, operations were able to analyze the various thread contentions and locate the root cause of the issue: an application cache that wasn’t thread-safe. If you look at the screenshot below, showing the final execution of the deadlocked threads accessing the cache, you can see that one thread was trying to remove an item, another was trying to get an item, and a third was trying to put an item. Three threads were attempting a put, a get, and a remove at the same time! This caused a deadlock on cache access, hanging the related JVM until those threads were released via a restart.
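The customer’s cache code isn’t reproduced here, but the failure mode is easy to sketch: two threads taking the same pair of locks in opposite order. The example below (invented lock names, not the customer’s code) provokes exactly that, then detects it programmatically via ThreadMXBean, the same JMX facility that monitoring tools and thread-dump analysis rely on:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class DeadlockDemo {
    private static final Object READ_LOCK = new Object();
    private static final Object WRITE_LOCK = new Object();

    private static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }

    // Starts two daemon threads that acquire the same two locks in
    // opposite order (like a get racing a put on a non-thread-safe
    // cache), then asks the JVM which threads are deadlocked.
    public static long[] provokeAndDetect() throws InterruptedException {
        Thread getter = new Thread(() -> {
            synchronized (READ_LOCK) {
                pause(200);
                synchronized (WRITE_LOCK) { /* cache.get(...) */ }
            }
        });
        Thread putter = new Thread(() -> {
            synchronized (WRITE_LOCK) {
                pause(200);
                synchronized (READ_LOCK) { /* cache.put(...) */ }
            }
        });
        getter.setDaemon(true); // daemon threads let the JVM exit despite the hang
        putter.setDaemon(true);
        getter.start();
        putter.start();
        Thread.sleep(1000);     // give both threads time to block on each other
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        return mx.findDeadlockedThreads(); // null when no deadlock exists
    }
}
```

Run under a monitor, those two blocked threads are exactly what shows up in the collected thread dumps.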

Analyzing Thread Dumps

Below you can see the thread dump that AppDynamics collected for one of the code deadlocks, which clearly shows where each thread was deadlocked. By copying the full thread dumps to clipboard, operations were able to see the full stack trace of each thread, thus identifying which business transactions, classes, and methods were responsible for cache access.

The root cause of this production slowdown may have been identified and passed to dev for resolution, but the most compelling conclusion of this customer story concerned the real business impact the customer was able to identify. The application was clearly running slow, but what did end users experience during the slowdown, and what impact would that have had on the business?

What was the Actual Business Impact?

The screenshot below shows all business transactions that were executing on the e-commerce web application during the five-hour window before, during, and after the slowdown.

Here are some hard hitting facts for the two most important business transactions inside this e-commerce application:

  • 46,463 Checkouts processed
    • 482 returned an error, 1,325 were slow, 576 were very slow, and 111 stalled.
  • 3,956 Payments processed
    • 12 returned an error, 242 were slow, 96 were very slow, and 79 stalled.

  • Error – the transaction failed with an exception.
  • Slow – the transaction deviated from its baseline by more than 3 standard deviations.
  • Very Slow – the transaction deviated from its baseline by more than 4 standard deviations.
  • Stalled – the transaction timed out.

If you take these raw facts and assume an average revenue per order of $100, the potential revenue at risk from this slowdown easily ran into six figures once you consider the end-user experience for checkout and payment. Even taking only the 482 errors and 111 stalls on the Checkout business transaction, that still equates to roughly $60,000 of revenue at risk ((482 + 111) × $100 = $59,300). And that’s a fairly conservative estimate!

If you add up all the errors and the slow, very slow, and stalled transactions in the screenshot above, you start to picture how serious this issue was in production. The harsh reality is that incidents like this happen every day in production environments, but no one has visibility into their true business impact, meaning little pressure is applied to development to fix “glitches.”

Agile isn’t about Change, It’s about Results

If development teams want to be truly agile, they need to think beyond constant change and focus on the impact their releases have on the business. The next time your application slows down or crashes in production, ask yourself one question: “What impact did that just have on the business?” I guarantee that just thinking about the answer will make you feel cold. If development teams found out the real business impact of their work more often, they’d learn pretty quickly how fast, reliable, and robust their application code really is.

I’m pleased to say no developers were injured or fired during the making of this real-life customer story; they were simply educated on what impact their non-thread safe cache had on the business. Failure is OK–that’s how we learn and build better applications.

App Man.

Application Monitoring with JConsole, VisualVM and AppDynamics Lite

What happens when mission-critical Java applications slow down or keep crashing in production? The vast majority of IT Operations (Ops) teams today bury their heads in log files. Why? Because that’s what they’ve been doing since IBM invented the mainframe. Diving into the weeds feels good; everyone feels productive looking at log entries, hoping that one will eventually explain the unexplainable. IT Ops may also look at system and network metrics, which tell them how server resources and network bandwidth are being consumed. Again, looking at lots of metrics feels good, but what is causing those server and network metrics to change in the first place? Answer: the application.

IT Ops monitor the infrastructure that applications run on, but they lack visibility of how applications actually work and utilize the infrastructure. To get this visibility, Ops must monitor the application run-time. A quick way to get started is to use the free tools that come with the application run-time. In the case of Java applications, both JConsole and VisualVM ship with the standard SDK and have proved popular choices for monitoring Java applications. When we built AppDynamics Lite we felt their was a void of free application monitoring solutions for IT Ops, the market had plenty of tools aimed at developers but many were just too verbose and intrusive for IT Ops to use in production. If we take a look at how JConsole, VisualVM and AppDynamics Lite compare, we’ll see just how different free application monitoring solutions can be.