Go for Performance with AppDynamics for Go

When Google gets frustrated with the complexities and slow speed of Internet technology, they do what Google does best — they solve for it. They set out to create a new language that was fast, worked well on large server systems, scaled easily, and handled concurrency smoothly. The result is Go. And when our customers speak, AppDynamics does what it does best — we listen!

AppDynamics announces its official support for Go

We’re excited to officially announce our support for Go! In our Spring ‘17 release, application teams may take advantage of the AppDynamics platform now available for their Go apps. Monitor your Go applications in real time, correlate transactions across your distributed environment, and diagnose performance bottlenecks while running in a live production or development environment.

What Is Go?

Go was created in 2007 by Ken Thompson, Rob Pike, and Robert Griesemer, along with a number of contributors. One of the most prominent is Thompson, who also wrote the B programming language and was instrumental in the design and introduction of the Unix operating system.

Google formally introduced Go in November 2009 and began using it in some of their production systems. Other companies soon began deploying it as well. The primary open-source compiler, “gc,” works on a number of platforms including Linux, Windows, macOS, and BSD, and since the beginning of 2015 it has also targeted smartphones and mobile devices. A second compiler, “gccgo,” is a Go frontend for the GNU Compiler Collection (GCC).

The language began as an experiment among several Google programmers. They wanted to create a new language that took care of some of the problems of other languages while keeping their positive attributes. They wanted it to be statically typed and highly scalable. It also had to be legible and productive, foregoing many of the keywords and repetition of other languages. They also wanted to handle networking and multiprocessing quickly and easily. The engineers who worked on the project had a deep-seated disdain for the complexity of C++. Go shares some of the characteristics of C, but is built to be simpler, safer, and more concise.

Go uses unique approaches to handle the problems common in other languages. It has built-in concurrency primitives: channels and lightweight processes called goroutines. The toolchain creates statically linked binaries with no external dependencies. It uses interfaces instead of virtual inheritance and type embedding instead of non-virtual inheritance. Go is statically typed, open-source, and compiled. Published under a BSD-style license, it also has (among other features):

  • Memory safety components
  • Garbage collection
  • Concurrent programming
  • Structural typing
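The interface and type-embedding approach mentioned above can be sketched in a few lines of Go. The type names here are illustrative, not from any particular codebase:

```go
package main

import "fmt"

// Speaker is satisfied implicitly by any type with a Speak method —
// no "implements" keyword and no inheritance hierarchy (structural typing).
type Speaker interface {
	Speak() string
}

// Animal provides shared state and behavior.
type Animal struct {
	Name string
}

func (a Animal) Speak() string {
	return a.Name + " makes a sound"
}

// Dog embeds Animal: it gains Animal's fields and methods by
// composition (type embedding) rather than by inheriting from it.
type Dog struct {
	Animal
}

func (d Dog) Speak() string {
	return d.Name + " barks"
}

func main() {
	var s Speaker = Dog{Animal{Name: "Rex"}}
	fmt.Println(s.Speak()) // the Dog method shadows the embedded one
}
```

Note that `Dog` never declares that it implements `Speaker`; having the right method set is enough.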

Built for Concurrency

Go is made for concurrency. While older languages were designed with only a single processor in mind, Go runs efficiently on today’s multicore processors with parallel processing.

Concurrent does not necessarily mean two processes are running at the same time (i.e., parallel processing). Concurrency refers to two tasks being able to begin, run, and end in overlapping periods of time; they may never actually run at the same instant. Parallelism, in contrast, refers to two tasks executing simultaneously, as on a multicore machine. A single-core computer multitasking is an example of concurrency without parallelism. This efficient handling of many in-flight tasks is what makes Go ideal for high-load systems such as an API handling massive numbers of requests from mobile devices or browsers.
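As a rough sketch of how this looks in practice (the request paths are invented for illustration), each task below gets its own goroutine and reports back over a channel:

```go
package main

import (
	"fmt"
	"sync"
)

// handleAll fans each request out to its own goroutine and collects
// the results over a channel — concurrency with Go's built-in
// primitives, no thread pool required.
func handleAll(paths []string) []string {
	results := make(chan string, len(paths))
	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(path string) {
			defer wg.Done()
			results <- "handled " + path // stand-in for real request work
		}(p)
	}
	wg.Wait()
	close(results)

	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	for _, r := range handleAll([]string{"/home", "/api/users", "/api/orders"}) {
		fmt.Println(r)
	}
}
```

On a single core the goroutines interleave (concurrency); on a multicore machine the runtime spreads them across cores (parallelism) with no change to the code.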

Extensive Libraries

Go can be verbose; simple tasks sometimes take more lines of code than they would in a dynamic language. In exchange, it ships with an extensive standard library covering, to name a few:

  • http
  • regular expressions
  • json
  • file CRUD operations

On the flip side, you must explicitly import each of these libraries, and the compiler rejects both missing and unused imports. This design keeps Go binaries as lean as possible. The language is fast, and built-in concurrency lets you run many processes simultaneously.
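A quick sketch of two of those standard libraries in use; the JSON payload and email pattern are invented for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// User maps JSON keys to Go fields via struct tags (encoding/json).
type User struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// parseUser decodes a JSON payload into a User.
func parseUser(data []byte) (User, error) {
	var u User
	err := json.Unmarshal(data, &u)
	return u, err
}

// emailRe is a deliberately simplistic validation pattern (regexp package).
var emailRe = regexp.MustCompile(`^[^@\s]+@[^@\s]+$`)

func main() {
	u, err := parseUser([]byte(`{"name":"Ada","email":"ada@example.com"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Name, emailRe.MatchString(u.Email))
}
```

Remove either import and the program no longer compiles — unused imports are a hard error, which is part of how Go stays lean.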

Companies Using Go

Thousands of companies around the world currently use Go, including:

  • Google (of course)
  • Twitter
  • BBC Worldwide
  • Comcast
  • eBay
  • IBM

Smaller companies are also on board, including:

  • MalwareBytes
  • Shutterfly
  • Square
  • Zynga

Here are some real-world examples:

Dropbox

After using Python in the early years of its operation, Dropbox realized that its success and growing customer base required a language that could scale better and handle bigger loads. In 2013, they began to move important backend operations from Python to Go, aiming to improve concurrency and execution speed, and successfully deployed roughly 200,000 lines of Go code.

At the time, they were somewhat hindered by Go’s lack of deep libraries, a characteristic of its youth and newness. The Dropbox developers took on the tasks themselves and began to create libraries for Memcache, connection management, and other purposes. They contributed to the Go open-source effort by making the libraries available to other programmers and companies interested in building production systems that can scale quickly and effectively.

SendGrid

Around the same time Dropbox switched to Go, cloud-based email service SendGrid decided to do the same. In their first years of operation, their backend consisted mainly of Perl/AnyEvent, later changing to Python/Twisted. They considered the Gevent/Python framework but realized it wouldn’t do what they needed, and narrowed the final choice to Go, Java, and Scala.

Because they handle more than 500 million email messages every day, one of their biggest challenges is concurrency. Go’s ability to handle concurrent, asynchronous programming was a major factor in their decision. At the same time, it was a language their developers were genuinely interested in using, unlike others they felt they had to fight every day. In fact, several developers were so excited that they began teaching it to themselves and experimenting on their own. This turned out to be a decisive factor: the company realized their team was already using Go in their off hours and would be enthusiastic about using it every day.

StatHat

Numerotron is a small firm that developed a program called StatHat to allow developers and engineers to track events and statistics right in their code. StatHat can be used in twelve different computer languages including Go and HTML, and it can be easily deployed by a wide variety of professionals including backend engineers and designers.

Patrick Crosby, founder of Numerotron, said they chose Go for StatHat because it met many of their criteria, including great performance, support for many connections on one machine, fast HTML templating, quick startup and recompilation, extensive libraries, and open-source availability.

Go vs. Other Languages

Python

Part of the appeal of Python is that it is so versatile (e.g., scripting vs. object-oriented and functional programming). It can run on any platform that includes a Python interpreter. It is used worldwide for both large and small application development.

Python is concise and flexible, made to solve problems without rigid constraints. It lets you build programs rapidly and then modify them into powerful solutions. It’s simple to learn but takes time to master. Where Python’s global interpreter lock limits true parallelism, one of Go’s major strengths is its ability to handle concurrency efficiently using channels and goroutines.

C

In many ways, Go is a modern reworking of C. Its creators attempted to clean up C’s problem areas and remove features that created bottlenecks. Go is simpler than C, although some of its primitives are more detailed than those found in Java and C++.

Java

Go is a young language, so its contributors were able to incorporate features that make it extremely effective at meeting the demands of modern web-scale traffic. While Java continues to be one of the major languages used for application development, older languages like it carry some limitations at scale.

The Future Is Bright

Go is an effective solution for projects that need scale to handle massive amounts of traffic. As the amount of data continues to expand, Go is a modern solution to building applications that can handle the demand. Expect to see Go adopted more and more as the world moves forward to meet the demands of modern web-scale traffic.

Learn more about all popular programming languages and frameworks that AppDynamics covers, including Java, .NET, Node.js, PHP, Python, and C/C++ and now, Go!

 

Code Compiled: A Short History of Programming — Part III

Programming for Commerce, Banking, and FinTech

Look at how far we’ve come. Just seven decades ago, the word “computer” referred to a person in an office with a pile of charts and a slide rule. Then ENIAC, the first general-purpose electronic computer, appeared. Decades later came the Commodore 64 personal computer and, later still, the iPhone. Today, the Sunway TaihuLight crunches data by combining 10,649,600 cores and performing 93 quadrillion floating-point operations every second. Where will we be in another seven decades?

In part one of this series, we covered how the evolution of hardware shaped the development of programming languages. We then followed up in part two with the impact of the software created by those languages. In this final blog in the series, we’ll look at how programming has taken the leap beyond computers into devices, with an emphasis on how programming is rewriting the rules of commerce, banking, and finance.

Technology in Society Over the Past Decade

Society as a whole has adopted new technology with great enthusiasm, and the pace of that adoption has accelerated due to a few key developments over the past ten years or so. The Pew Internet Project has been keeping a close watch on the demographics of the internet. They reported that at the turn of the millennium, only 70 percent of young adults and 14 percent of seniors were online. That’s still the general perception of internet users, but it’s no longer true. In 2016, nearly all young adults (96 percent) and the majority of those over 65 years old (58 percent) are online.

The single biggest driver of internet access growth has been mobile devices. Simple cell phone ownership went from around 70 percent by American adults in 2006 to 92 percent a decade later. Smartphones, as well as the vast data-crunching resources made available by the app ecosystem, went from an ownership rate of 35 percent just five years ago to 68 percent today. Tablets have taken a similar explosive trajectory, going from 3 percent ownership in 2010 to 45 percent today.

This growing hunger for mobile devices required exponentially more data processing power and a vast leap in traffic across wireless networks. The growth rate can only be described in terms of zettabytes (trillions of gigabytes). In 2009, the world had three quarters of one zettabyte under management. One year later, data generated across networks had nearly doubled to 1.2 ZB, most of it enterprise traffic. By the end of last year there were 7.9 ZB of data generated and 6.3 ZB were under management by enterprises. In the next four years, you can expect there to be 35 ZB of data created by devices, with 28 ZB managed by enterprises.

Developers have had to work furiously to restructure their approach to software, databases, and network management tools just to avoid being swamped in all this data.

A Brief History of Financial Record Keeping

In the world of commerce, the data that matters most all relates to financial records. These define the health of the business today and the potential for growing the customer base in the future. That’s why financial data has become ground zero in the war between cybercriminals and data-security experts.

In the 1980s, the financial industry was still dominated by mainframes. The personal computer revolution that was sweeping the rest of the business world didn’t impact finance. Huge servers and clients were the only way to manage the vast amount of data (compared to processor speeds at the time) that had to be crunched. You might have to use COBOL or write SQL queries against a DB2 database to get the financial answers you needed to make the right business decisions (versus what is possible today). Mainframes were normally closed systems with applications written specifically for them by outside consultants.

All that changed dramatically in the 1990s, with the growth of faster servers, open systems, and the connectivity of internet protocols. Mid-sized computers for business gained immense processing power at lower costs. Mainframes began to be repurposed for back end processing of transaction data as the finance industry consolidated batch-processing projects like billing.

Computers like the IBM AS/400, which had run on IBM proprietary software in the past, gained the facility for running financial software like SAP, PeopleSoft, and JD Edwards. By the late 1990s, the appearance of Linux and virtual machines running inside mainframes opened up the entire finance sector to a flurry of new open-source development projects.

Simultaneously, network connectivity to the internet and then the web opened up financial data providers to a new threat from outside: hackers. Before, password management and inside jobs were the biggest threat to financial data security. Connectivity opened a window to a new generation of cybercriminals.

Programming Challenges for Data Security

In the weeks after Black Friday in 2013, as the holiday shopping rush was in full swing, a data-security journalist reported that the retail chain Target had been breached. His report warned that the breach was serious, “potentially involving millions of customer credit and debit card records.” There had been attacks on companies before, but this one netted financial data on 40 million shoppers during the busiest retail period of the year.

The biggest problem is that attacks like these are increasing in their intensity and sophistication. In fact, 2016 saw a 458 percent jump in attacks that searched IoT connections for vulnerabilities. Meanwhile, another front has opened up on employee mobile devices for enterprises. Last year alone, there were over 8 billion malware attacks, twice the number of the year before, most of which went after weaknesses in the Android ecosystem. In terms of the data most sought after by hackers, healthcare businesses registered slightly more attacks than even those in the financial industry.

Data-security experts have to stay ahead of risks from both the outside and the inside, whether they are malicious or accidental. Both can be equally devastating, regardless of intent.

Ian McGuinness recommends six steps for security experts to help them concentrate on covering as many vulnerabilities as possible early on, before moving on to custom development:

  1. Protect the physical servers by tightly limiting access to a shortlist of only employees who must access them for work. Make sure the list is updated regularly.
  2. Create a network vulnerability profile. Assess any weak points, update antivirus software, test firewalls, and change TCP/IP ports from default settings.
  3. Protect every file and folder that captures database information like log files. Maintain access levels and permissions dynamically.
  4. Review which server upgrade software is necessary. All features and services not in common use provide an attack vector for cybercriminals.
  5. Find and apply the latest patches, service packs, and security updates.
  6. Identify and encrypt your most sensitive data, even if it resides on the back end with no interface to end users.
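Step 6 can be sketched with Go’s standard crypto packages. The key handling below is deliberately simplified (a real system would load the key from a key-management service, never generate it inline), and AES-GCM is one reasonable choice of authenticated encryption, not the only one:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// encrypt seals plaintext with AES-GCM, prepending the random nonce
// to the ciphertext so decrypt can recover it.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt splits off the nonce and opens the ciphertext, failing if
// the data was tampered with (GCM authenticates as well as encrypts).
func decrypt(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32) // AES-256; in practice, load from a KMS
	if _, err := io.ReadFull(rand.Reader, key); err != nil {
		panic(err)
	}
	ct, err := encrypt(key, []byte("card ending 4111"))
	if err != nil {
		panic(err)
	}
	pt, err := decrypt(key, ct)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pt))
}
```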

These are really just the basics, though. Monitoring network traffic, recognizing malicious code, and responding in time to make a difference represent the biggest challenges for the future.

What’s Next for FinTech

Programming for financial technology (FinTech) is among the most exciting, fastest changing areas of IT right now. Startups involved in online or peer-to-peer payments, wealth management, equity crowdfunding, and related innovations were able to bring in $19 billion in investment in the past year alone. The White House signaled its support for the contribution of financial software to the greater economy in saying, “Technology has always been an integral part of financial services — from ATMs to securities trading platforms. But, increasingly, technology isn’t just changing the financial services industry, it’s changing the way consumers and business owners relate to their finances, and the way institutions function in our financial system.”

On the road ahead, among the top challenges for FinTech developers will be:

  • The creation of original processes for secure access to financial data through mobile platforms
  • The integration of blockchain capabilities into enterprise financial systems
  • A secure, real-time global transaction system that can automatically adjust to currency fluctuations

The FinTech panel at one recent AppDynamics event concluded that:

“All banks have the same problems, but the capabilities to solve these problems have changed. Banks are taking different approaches, but the endgame is the same, making sure the customers can access their money when they want and how they want.”

Imagining the Future

The technologies developed to handle the above difficulties have much wider applications for programmers in 2020 and beyond. The bigger picture is that the coming age of ubiquitous connectivity will require much greater enterprise commitment to infrastructure maintenance and software performance. As the IoT brings machine-to-machine (M2M) networking to our homes and cars, that will also demand a vastly higher bar in terms of experience. Fortunately, those embracing the latest in DevOps best-practices are uniquely qualified to approach these problems with one eye on what customers expect and another on what will keep the business flourishing.

The Most Popular Programming Languages for 2017 [Infographic]

It’s hard to believe that it’s already 2017. But with the new year comes new challenges, new opportunities—and, of course—new software projects. One of the most important questions beginner, intermediate, and advanced coders all have to answer before they begin their next project is which programming language to use. Instead of reaching for an old favorite, pause for a moment to consider the options.

There are no perfect languages, so it’s important to take the time to understand the tradeoffs. When you decide on a language, you also determine which libraries and tools you have at your disposal, the pool of candidates you can hire, the availability of documentation, and much more. In this article, we examine the top programming languages from leading industry sources to help you make an informed decision that best suits your needs.

Familiar faces

There are languages, like Java and the C family (C, C++, and C#), that have dominated the programming language charts for years and won’t be obsolete anytime soon. (Check out GitHub’s 2016 programming language rankings to see where the top 21 languages fell last year.) They may not be trendy, but they are battle-tested, well understood, have active communities, and continue to evolve in response to new contenders, with features such as lambda expressions in Java 8 and the coroutines proposal for C++.

JavaScript

If you are a developer in 2017, JavaScript is a fact of life. It ships with every major browser, powers server-side applications via Node.js, and even drives the development of desktop and mobile applications (with the help of frameworks like Electron and React Native). What started out as a simple scripting language to add dynamic elements to websites now powers full-blown applications in nearly every domain.

But it’s difficult to find anyone still using plain JavaScript. The language, which is officially standardized as ECMAScript, has evolved so rapidly that developers often use tools like Babel to transform modern JavaScript into cross-browser JavaScript. Languages that compile to JavaScript will also continue to gain traction in the coming year. For example, TypeScript adds classes and interfaces to plain JavaScript, and Elm brings the functional paradigm to the JavaScript ecosystem. While you may find it impossible to avoid JavaScript, you must exercise discipline when selecting your JavaScript toolset for projects with multi-year lifespans.

Dynamic languages still going strong

Python, PHP, and Ruby continue to rank among the most popular programming languages due to their newcomer friendliness, suitability for rapid prototyping, libraries for almost any problem, and vibrant developer communities. While less performant than their compiled and statically typed predecessors, dynamic languages remain an excellent choice for business applications where time to market is critical.

The rapid rise of Go

Open sourced in 2009, Go (or golang) has quickly become one of the most popular programming languages. Designed by Google engineers as a practical alternative for large-scale systems development, a space where traditional languages such as Java and C++ still reign supreme, Go has found a strong, growing following among all kinds of developers.

Most notable for its simple syntax, built-in concurrency support, and feature-rich standard library (which includes a production-ready HTTP server), Go stirred up controversy over its deliberate omission of features, especially inheritance and generics. Despite its relative simplicity, people already use Go to ship popular, cutting-edge technologies such as Docker and Kubernetes.
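That standard-library HTTP server really is only a few lines. This minimal sketch (route and greeting invented for illustration) serves on port 8080 with no framework at all:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// greet builds the response body; split out so the logic is testable
// separately from the HTTP plumbing.
func greet(name string) string {
	if name == "" {
		name = "world"
	}
	return "Hello, " + name + "!"
}

func main() {
	// net/http ships a production-ready server in the standard library.
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, greet(r.URL.Query().Get("name")))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Each incoming request is already handled in its own goroutine by the server, so the concurrency described earlier comes for free.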

Developing for mobile

Apple introduced Swift in 2014, and the language is already climbing the popularity charts. Objective-C still ranks higher, but Swift is rapidly replacing it as the preferred language for both beginners and pros to build iOS apps. The streamlined syntax, gentle learning curve, and powerful abstractions all contribute to Swift’s popularity. While Swift is open source and theoretically could be ported to other platforms, developers still need to rewrite mobile applications in Java or C# in order to run on Android or Windows phones.

Functional programming languages entering the mainstream

Functional programming languages such as Scala, Clojure, and Haskell are quietly growing in popularity. These languages offer expressive and concise syntax, exceptional compile-time error checking (meaning fewer bugs in production), and strong support for parallel operations. These benefits come at the cost of a comparatively steep learning curve and small hiring pool. However, as more developers explore functional programming in response to the unique demands of modern computing, functional languages will become more common for real-world projects.

The Most Popular Programming Languages for 2017


Learn more

With so many great programming languages at your disposal, it can be difficult to decide on one. Fortunately, you do not have to decide on your own. Talk to your developers—they have well-informed opinions about the best language for your next project. Go to local tech meetups to discover what other companies choose and for what types of projects. Hit online job boards to see which languages and skills are in high demand. Ultimately, it’s up to you to choose which language features take top priority for each project you work on.


 

Code Compiled: A Short History of Programming – Part I

There are more than 2,500 documented programming languages, with customizations, dialects, branches, and forks that expand that number by an order of magnitude. In comparison, the Ethnologue: Languages of the World research recognizes 7,097 living languages that humans use to communicate all around the world.

It can be hard to grasp what’s happening in the world of programming today without a solid grounding in how we got here. There are endless fascinating rabbit holes to disappear down when you look back over the past 173 years of programming. This overview can only give you a high-level review, with a strong encouragement to follow any thread that engages you.

The Prehistory of Programming

Ada Lovelace, daughter of the poet Lord Byron, is generally recognized as the world’s first programmer, though she never wrote a single line of code as we understand it today. What she did in 1843 was very carefully describe, step by step, how to use Charles Babbage’s theoretical Analytical Engine to generate the Bernoulli numbers. Her idea was to take a device for calculating large numbers and use it to generate new concepts.

Take a moment to consider how monumental that was. The Bernoulli numbers are essential for analysis, number theory, and differential topology — fields of knowledge that most people in the Victorian era couldn’t even comprehend. Babbage was never able to build his Analytical Engine, so she had to do all of this in her head. Nevertheless, her schematic for machine instructions became the default framework for programming when technology caught up to her a hundred years later.

ENIAC: The Digital Analytical Engine

After the Great War, as World War I was known before there was a second, the U.S. military realized that their bullets and bombs had not been accurate enough. Inefficient ballistics had been a colossal waste of resources, and another war was imminent. Generals agreed they needed a faster way to crunch vast numbers and get the results to artillery gunners in the field.

As World War II began, six women known as “computers” sat in a room at Army HQ with artillery charts and mechanical calculating machines, computing ideal trajectories. They were the world’s first programming team. The need for faster computing spurred the U.S. Army to fund the creation of the Electronic Numerical Integrator and Computer (ENIAC), a machine that realized electronically much of what Babbage had only theorized. Instead of mechanical cogs, ENIAC performed its calculations with electronic circuits, holding ten-digit decimal numbers in its accumulators. Ironically, ENIAC wasn’t fully operational until the fall of 1945, just in time to see the end of World War II.

The biggest problem with ENIAC was that the team of human computers had to rewire the machine’s switches and cables for each new program. That failing was addressed by John von Neumann’s stored-program proposal, realized in the Electronic Discrete Variable Automatic Computer (EDVAC). Starting with the construction of EDVAC, completed in 1949, programming languages began to proliferate.

From The Garden of Languages to The Apple

For the next three decades, electronic computers were monstrous machines. UNIVAC, the first commercially available computer, was the size of a room and ran on giant vacuum tubes. Programmers wrote commands in machine code and assembly language, which were then translated onto punch cards, as in Babbage’s original design, or paper tape. Among the first higher-level languages was FORTRAN, released by IBM in 1957 complete with its own compiler. It was followed in 1959 by COBOL, which grew out of Grace Hopper’s earlier compiler work and is still used today in traditional industries like banking and insurance. The programming training publisher O’Reilly has created a language timeline showing how fifty of the most popular languages grew from there.

The next big shockwave, still being felt today, was the introduction of personal computers in the late 1970s and early 1980s. The first wave was characterized by a hobbyist/DIY aesthetic, exemplified by the Tandy TRS-80 and the Commodore 64 (some of which remarkably remain in operation today). These ran simple programs written in BASIC. During this period, language wars began to heat up as a rising tide of amateur programmers developed their own logic systems. Some of the top languages developed during this time included Pascal, Scheme, Perl, and Ada (named for Lovelace).

Perhaps the most influential development at this time was a variation on C called C With Classes, by Bjarne Stroustrup. This would grow into C++ and anchor a growing catalog of object-oriented (OO) languages. The 1980s brought the rapid growth of two hardware groups that dominated the personal computer industry and virtually locked down the operating system (OS) market for many years: IBM and Apple.

Programming for Mac vs. PC

Apple made computing visual with the introduction of the Macintosh in 1984, and the IBM PC world soon followed suit with Microsoft Windows. The Mac popularized the mouse, the on-screen desktop, and program icons. The average user no longer associated computing with typing text at a command line on a black screen with a blinking cursor. This changed programming in two fundamental ways.

First, it led to visual development tools like Visual C++ and Visual J++, where developers could manipulate coding elements graphically. Second, it forced developers to think about the graphical user interface (GUI). In many ways, this was the beginning of the DevOps split between concern for the user experience and concern for operational efficiency.

Although programming languages themselves were normally OS agnostic, the Mac vs. PC camps tended to support different types of software development. In the 1990s, the PC favored software for business, developed from languages like C++, Visual Basic, and R. Apple was better known as a home for graphics and communications software using new languages like Ruby, Python, and AppleScript. In the mid-1990s, the explosive popularity of the World Wide Web and gaming systems changed everything.

Gaming and The Web

The web moved HTML, Java, JavaScript, and PHP to the top of every developer’s list. ColdFusion, GameMaker, and UnrealScript are a few of the languages built expressly for the web and gaming. More recently, game developers often rely on rich ecosystems around JavaScript, C++, C# (developed under the codename “Cool”), Ruby, and Python. These have been the workhorses for both web applications and game development. High-end graphics often call for supplemental support from specialized APIs like OpenGL or DirectX.

Languages in Demand Now

Here’s an outline of the languages most in demand in 2016, according to the TIOBE index and Redmonk:

TIOBE (September 2016)

The TIOBE rankings combine the number of skilled engineers, courses, and third-party vendors associated with each language, measured by hits across major search engines as well as Amazon, Baidu, Wikipedia, and YouTube. The goal is a monthly indicator of each language’s popularity, not a measure of the best language or of where the most lines of code are being written.

  1. Java
  2. C
  3. C++
  4. C#
  5. Python
  6. JavaScript
  7. PHP
  8. Assembly language
  9. Visual Basic .NET
  10. Perl

Redmonk’s Top 10 (Mid-year 2016)

Redmonk’s methodology is to compare the popularity and performance of specific languages against each other on GitHub and Stack Overflow. The amount of discussion on Stack Overflow and the number of working projects on GitHub indicate where development and software-defined processes are trending.

  1. JavaScript
  2. Java
  3. PHP
  4. Python
  5. C#
  6. C++
  7. Ruby (tie)
  8. CSS
  9. C
  10. Objective-C

It’s easy to see why many developers say that Java and C run the world. A foundation in these two languages and their related branches will prepare you for the widest range of coding work. Of all the languages on these two lists, the one that stands out immediately is Assembly. Its presence is an indication that the IoT has arrived and that the need is intensifying for engineers who can code for small, resource-constrained devices.

The Next Wave

Looking to the future, programming for enterprise business or individual apps both offer substantial financial possibilities. The Bureau of Labor Statistics (BLS) estimates the median pay for programmers to be approximately $79,530 annually, and it projects an 8% decline in programming jobs through 2024 due to growing competition from lower-priced coders around the world. However, the BLS also shows that software developers have a median income of $100,690 annually with a projected 17% growth rate, much faster than the average across industries.

The difference is that low-level programming will be increasingly outsourced and automated in the years ahead. On the other hand, there is already a shortage of people who know how to do the higher-level thinking of engineers and DevOps professionals.

In fact, there are many developers who are now at work trying to converge programming languages with natural spoken languages. That’s the goal of the Attempto Controlled English experiment at the University of Zurich. The hope is to open up the power of programming to as many people as possible before the IoT surrounds us with machines we don’t know how to control. We may all be programmers in the future, but DevOps skills will be critical to keep the business world running.

Learn More

Stay tuned for ‘Code Compiled: A Short History of Programming – Part II.’

5 Up-and-Coming Programming Languages to Know About

Staying current in the programming field can sometimes make you feel like the Red Queen in “Alice Through the Looking-Glass.” She said, “It takes all the running you can do to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”

You’re a master at Ruby on Rails? Great, but how are you with statistical analysis in R? Want to work at Google? Forget Python and start building channels in Go.

Introduction to R

You may be surprised to learn that R has been around since 1994. It was built by Ross Ihaka and Robert Gentleman at the University of Auckland and was based on the Bell Labs language S. The turning point for this statistical analysis tool came when it zoomed up to first place as the highest paying tech skill in the 2014 Dice Tech Salary Survey. That can be considered the year that Big Data arrived in mainstream business and R was clearly the best way to handle it.

Though it was preferred by academic data scientists originally, R has proven instrumental for enormous business applications like large-scale financial reporting for Bank of America and Facebook’s Social Graph, which analyzes interactions among 500 million people. Today, companies like Microsoft are using R as a server platform to go further with predictive modeling and machine learning. For many of these companies, R is replacing SQL, which can become extremely resource-intensive for advanced analytics.

Here are three reasons why R has taken off recently:

  • Because R supports missing values (NA) as a first-class data value, it can easily deal with the incomplete data sources that are common in real-world projects.

  • A popular data visualization package for R called ggplot2 builds graphics from composable components such as scales and layers. It’s now the most used R extension package.

  • The top-level language shell in R is customizable, and coders have used that facility to build integrated development environments like RStudio. This made R much easier to learn and more widely available for business use cases.

Key Stat

In the Redmonk Programming Language Rankings of 2016, which compares the popularity of code on GitHub vs. Stack Overflow, R ranked 13th.

Introduction to Go

While some of the other languages here are getting a new life, Go (a.k.a. golang) is experiencing its first one. In 2012, Google released the open-source Go 1.0, after giving the world a glimpse of the experimental language three years earlier. Google had its eye on the future, saying, “People who write Go 1 programs can be confident that those programs will continue to compile and run without change, in many environments, on a timescale of years. Similarly, authors who write books about Go 1 can be sure that their examples and explanations will be helpful to readers today and into the future.”

Go was made for web services that need to handle thousands of concurrent requests. Popular app-building languages like Python struggle with that volume of parallel requests, especially from mobile clients; for the same reason, Go excels as an HTTP server. Like C++, Go gives web services precise control, but without the sharp learning curve of that aging object-oriented standby. Go has also been compared favorably with the Algol family of languages.

As you would expect from a language with the backing of Google, Go has been used for major projects like Docker’s large-scale, distributed software. Evidence of Go’s range came in the form of Revel, a high-productivity, full-stack web framework written in Go. Revel handles web essentials like routing, caching, parameter parsing, templating, and more.

Go’s power to handle concurrent requests resides in its goroutines and channels. Goroutines function like lightweight threads, while channels are the communication ports between goroutines.
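The goroutine/channel pattern can be sketched as follows (the fan-out shape and function names are my own, not from the article): each goroutine computes one result and sends it over a shared channel, and the launching goroutine receives exactly as many results as it started workers.

```go
package main

import "fmt"

// square is an illustrative worker: it computes one result and
// sends it back through the channel that links it to the caller.
func square(n int, out chan<- int) {
	out <- n * n
}

func main() {
	nums := []int{1, 2, 3, 4}
	out := make(chan int)

	// Fan out: one goroutine per input value.
	for _, n := range nums {
		go square(n, out)
	}

	// Fan in: receive exactly len(nums) results. Arrival order
	// depends on scheduling, but the sum is deterministic.
	sum := 0
	for range nums {
		sum += <-out
	}
	fmt.Println(sum) // 30
}
```

Because the channel is the only point of contact between the goroutines, there is no shared mutable state to lock, which is what makes this style safe at web-request volumes.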

Key Stat

Go currently ranks as the 15th most popular language in the Redmonk Programming Language Rankings of 2016.

Introduction to Hack

Hack was an internal Facebook project that was released as open source in 2014. As a replacement for Facebook’s PHP, Hack combines the short dev cycles of a dynamically typed language with the discipline of a statically typed one. It also borrows features found in other modern languages, such as type annotations and generics, which let you set type parameters for classes and methods.

Facebook reported that many common tasks were becoming problematic in PHP, such as accidentally calling a method on a null object, which would generate an error you couldn’t catch until runtime. Another problem was complex APIs that required developers to look up mundane method names in documentation. Those are merely annoying unless you work at a company like Facebook, where developers are expected to ship code twice a day. Hack has since replaced nearly all of Facebook’s PHP codebase.

While just about any site that uses PHP today could be using Hack, it has not yet gained widespread acceptance (a major barrier to adoption is that you must be running the HHVM runtime). One example of a creative use of Hack is Vindinium, the basis of an AI-driven game system.

Key Stat

Wikipedia’s MediaWiki runs on the HipHop Virtual Machine (HHVM), powered by Hack.

Introduction to Rust

Mozilla’s Rust has been in development for years, but Rust 1.0, the first stable release, appeared in 2015. Mozilla’s David Herman detailed what makes Rust valuable: “Rust has something unique to offer that languages in that space have never had before, and that is a degree of safety that languages like C and C++ have never had. … [T]here are some things that make doing systems programming terrifying that are gone in Rust.” Specifically, he’s talking about security vulnerabilities in C++.

Rust was made to give programmers complete control through extensive compile-time checking. For example, the kinds of memory errors behind browser exploits and instability in Firefox’s C++ code would surface as compile-time errors in Rust, closing the hole before the code ever ran.

The 2016 State of Rust Survey reported that one-fifth of the language’s users are deploying it for commercial purposes, either full or part-time.

Key Stat

Rust won first place for Most Loved Programming Language of 2016 in the Stack Overflow Developer Survey.

Introduction to Swift

Apple has many new hardware platforms, like the Apple Watch and Apple TV, and Swift was built for them. Introduced in 2014 as a better alternative to Objective-C, it makes it easier to build native apps for iOS. It’s also made for building apps on OS X, watchOS, tvOS, and Linux.

At the end of last year it became open source. Many coders with a great deal of JavaScript experience say Swift is easier than Ruby or Python for developing apps. Lyft rewrote their iOS app in Swift, not due to any problems with the existing app, but because Swift made it easy to improve the base code they wrote in a hurry before launch.

Many of the biggest app developers on iOS have already incorporated Swift into their ecosystems, including:

  • Airbnb

  • CNN

  • Eventbrite

  • Imgur

  • KAYAK

  • LinkedIn

  • Medium

  • Pandora

  • Tumblr

  • The Weather Channel

Key Stat

Swift has moved up to 17th place in the Redmonk Programming Language Rankings of 2016.

Summary

If you just want a quick summary of which new language to use where, refer to this guide:

  • R is better than SQL for statistical analysis and managing big data.

  • Go is better than Python for building apps that have to deal with multiple, parallel requests.

  • Hack is an advanced version of PHP that speeds up development cycles.

  • Rust is an easier, more security-conscious alternative to C++ for various use cases.

  • Swift replaces Objective-C for native app development on iOS.