How Continuous Integration Works, and The Big Benefit No One Talks About

In a digital world that moves as fast as ours, programmers are applying new, creative ways of thinking to the software development process in a non-stop push for ever-faster turnaround times. In DevOps, Continuous Integration (CI) is increasingly the integration method of choice, in large part because of the speed at which it enables the release of new features, bug fixes, and product updates.

CI dictates that every time a developer pushes code to an application, an automated process grabs the current code from a shared repository, integrates the new changes, builds the software, and tests it for problems. This approach leads to faster results and ensures software is tested on a regular basis, which enables further DevOps automation such as continuous delivery, deployment, and experimentation.

How CI Works

With CI, the software code is contained in a shared repository, accessible by developers so they can “check out” code to individual workstations. When ready, they push the code back into the repository to commit changes. From there, the automated CI server takes over, automatically building the system and running unit and integration tests to ensure the code did not break another part of the software. If the build is unsuccessful, the server pinpoints where in the testing process the code failed, letting the team address it at the earliest point of failure.
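
To make this concrete, here is a minimal sketch (in Python) of the kind of pipeline a CI server runs on every push. The repository URL and the build and test commands are hypothetical placeholders; a real CI tool such as Jenkins or Travis CI expresses the same steps in its own configuration format.

    import subprocess
    import sys

    # Hypothetical placeholders -- substitute your project's repository and commands.
    REPO_URL = "https://example.com/team/app.git"
    WORKSPACE = "ci-workspace"

    STEPS = [
        ("checkout", ["git", "clone", REPO_URL, WORKSPACE]),
        ("build", ["make", "-C", WORKSPACE, "build"]),    # or mvn, gradle, npm, etc.
        ("test", ["make", "-C", WORKSPACE, "test"]),      # unit and integration tests
    ]

    def run_pipeline() -> int:
        for name, cmd in STEPS:
            print(f"== {name} ==")
            if subprocess.run(cmd).returncode != 0:
                # Stop at the earliest point of failure and report the stage,
                # so the team knows exactly what broke.
                print(f"Pipeline failed at stage: {name}")
                return 1
        print("Build passed -- safe to merge and release.")
        return 0

    if __name__ == "__main__":
        sys.exit(run_pipeline())

The point is simply that the sequence (check out, build, test, stop at the first failure) is fully automated and repeatable.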

This CI process occurs many times per day, meaning the system constantly builds and tests new code. The updated software can then be released manually, or DevOps teams can further automate the project by electing to have the system deploy and deliver the product.

Since developers don’t have to backtrack to find where code breaks, DevOps teams save big in terms of time and resources. And because the process is continuous, programmers never work on out-of-date products, or try to hurriedly push changes to keep up with demand or internal deadlines.

CI allows developers to automate long-running processes and use parallel containers or virtual machines to run tests and builds. Because these processes are automated, programmers can work on other tasks while everything runs in the background. And since the code is only merged once the build passes testing, the chances of breaking master code are greatly reduced.

The (not so) Secret Benefit of CI

Sure, CI saves time and reduces costs, but so does every other noteworthy innovation in technology or business processes these days. There’s another major reason CI is so successful that isn’t talked about as much because it’s more difficult to quantify than productivity and cost: team morale.

If you talk to any development team, they’ll tell you that almost nothing in the world is as frustrating as building a process, integrating it with the code, and then watching the software break. Not only have hours of work been wasted, but team members know that more hours lie in front of them trying to comb back through the process to pinpoint where it failed. As any business leader knows, an unhappy team results in an inferior and/or more costly product. As frustration mounts, work slows down. Then as a deadline approaches, changes are frantically pushed through, increasing the probability of a flaw in the master branch or a bug being deployed with the product.

The transparency of CI can be a big boost to the confidence level within DevOps. Suddenly, as developers work, they can see exactly where problems arise, which allows for a much faster response and repair. And if the build passes, team members can feel good about a job well done.

Takeaways for CIOs

The continuous integration approach to DevOps increases transparency, builds automation into processes, decreases costs by maximizing developers’ time, and creates repeatable processes upon proven successes. On top of all that, it relieves pressure from programmers and helps teams gain confidence.

Though there are variations among details of different platforms and approaches, the key tenets of CI hold true among development teams:

  • Maintain a single source repository with easy access for developers
  • Automate the build and testing processes
  • Make sure every build occurs on an integration container or VM
  • Utilize a production-like environment for testing
  • Make the testing results and processes transparent and visible to teams

Conclusion

If you want more speed and more smiles out of your development team, consider applying a continuous integration approach to your DevOps processes. Make sure to consult both with your team and a CI service provider to determine what makes sense for your organization and ensures a smooth implementation. Then sit back and watch the code fly.

 

Improve Your UX and You’re Bound to See eCommerce Success

Commerce has become both digital and global: Online sales are expected to exceed $1.6 trillion by 2020. As a customer-preferred way of doing business, ecommerce offers increased selection, value, and convenience. Online shopping also offers merchants increased access to customer data and opportunities to capitalize on that information.

If your business isn’t keeping pace with best practices in ecommerce UX—not to mention leveraging mobile to capture even more opportunities—you’ll miss out on the continuously growing percentage of consumer online spending. In 2016, for instance, shoppers made 51 percent of their purchases online (compared to 48 percent in 2015 and 47 percent in 2014). Regardless of the path of innovation you choose in the ecommerce space, your online presence should be optimized for user experience. This article explores the top features and UX best practices that are at the heart of a compelling ecommerce experience. While it complicates the evolution of the buyer’s journey, multichannel commerce offers new opportunities for conversions as well. Whether your shoppers flock to phones or desktops when they view your site, the key features emphasized here apply to all platforms. Let’s start with some of the important moments in a shopper’s journey, and then take a deep-dive into optimal UX in each.

There are four critical areas where your ecommerce presence should demonstrate great UX: navigation, product pages, checkout, and optimization. Each of these areas has corresponding UX best practices, which we explore in-depth.

Navigation

Invest in an intuitive information architecture (IA)

Don’t organize your products based solely on how you think about them. Your users have their own intuitive ways of thinking about and grouping your products, and it’s possible, using research methodologies such as card sorting, to cater to that. Card sorting is an exercise in a lab setting that helps develop your site taxonomy by collecting patterns in the way customers sort your products. Research participants are typically given representative sets of items, and asked to group and name them intuitively into their own categories. Across multiple users, patterns emerge that help guide the creation of intuitive navigation categories and product groupings.

Have a content governance strategy

A search bar is crucial. And if you’re going to offer a product search, you should have a strong approach to content governance. This is fundamental for two reasons: You need to make your ecommerce platform fully accessible using search and filtering, and as your goods and services evolve they need to be accessible just as consistently as the products indexed before them. Strong content governance equates to quality metadata on a per-product basis. You will have addressed the following questions in developing your strategy: What search terms should be associated with my product(s)? What filtering facets do I make available to my customers during the browsing process? In the absence of specific user input, how should content in search results and elsewhere be ranked, sorted, and organized by default?
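
As a rough illustration (not any particular platform's schema), per-product metadata that answers those questions might look something like this:

    # Illustrative only -- field names are hypothetical, not a specific platform's schema.
    product = {
        "sku": "HB-1042",
        "title": "Leather Crossbody Handbag",
        # Search terms: the words and synonyms shoppers actually type.
        "search_terms": ["handbag", "crossbody", "purse", "leather bag"],
        # Facets: the filters exposed to customers while browsing.
        "facets": {"color": "tan", "material": "leather", "price_band": "100-150"},
        # Default ranking weight, used to order results absent other signals.
        "default_rank_weight": 0.8,
    }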

Avoid dead-end search results

Nothing is more disheartening to shoppers than entering a search term and pulling up a “0 results” page. If your site can perform a fuzzy search (thanks to good tagging and content governance), surface those close matches in the results list in place of a dead end. Consider implementing an autocomplete feature within your site search. This way, shoppers are exposed to new combinations of search terms, which may yield more results than they would think of on their own.
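
Here is a minimal sketch of that fallback using only Python's standard library. A real storefront would rely on a dedicated search engine, but the principle is the same: never return an empty page when a close match exists.

    import difflib

    CATALOG = ["leather handbag", "canvas tote", "crossbody bag", "wool scarf"]

    def search(query, catalog=CATALOG):
        exact = [item for item in catalog if query.lower() in item.lower()]
        if exact:
            return exact
        # No exact hits: return close matches instead of a "0 results" page.
        return difflib.get_close_matches(query.lower(), catalog, n=3, cutoff=0.5)

    print(search("handbags"))  # no exact hit, so the fuzzy fallback suggests ['leather handbag']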

Use breadcrumbs

This easily overlooked navigational element helps shoppers locate themselves on your site and, in time, get a feel for how your products are organized. As a best practice, breadcrumbs should appear on category-level and detail pages shortly after header and navigation content. Usually breadcrumbs appear as an unobtrusive line of text, which maps to the customer’s location and depth in a site.

Product Pages

Offer high-quality, informative imagery

Customers are evaluating the smallest of details when comparing your product to competitors. Make the decision clear for them with images sized to highlight details such as stitching, seams, colors, and functionality. Many shoppers enjoy 360-degree views and even videos of products in action. Sweeten the deal by including carousels and large hero banners on category and homepages, which rotate through quality images of featured products.

Help customers browse efficiently with quick views

When shoppers view your goods—whether on a category page or a search results page—they’ve likely invested a fair amount of effort navigating to that specific page. This is where quick view comes in handy. A properly implemented quick view allows shoppers to evaluate the most salient points of a product without being led away from the product list they’ve built. A good quick-view feature provides a shadow-box preview, creating a temporary content container above the page body that showcases a summary of key details and larger product views. Well-implemented quick views have a prominent entry point, for instance hover states above item thumbnails displaying “Preview.” Without quick views, users are often reluctant to peruse large numbers of product detail pages individually, because they generally won’t have confidence in their ability to return to the original product list. Additionally, page-loading delays can add up to a significant speed bump in the shopping journey.

Leverage user-generated content

The smaller your brand name relative to your competitors, the more shoppers will be skeptical of the quality of your products and services. Customers on most ecommerce sites take each other’s reviews to heart and write their own. There is no excuse for not allowing this capability. If you don’t permit user content, you’re not helping your shoppers become comfortable with your products, and they will leave your storefront and convert to a competitor providing this functionality.

Clarify pricing and discounts

It is imperative that pricing for each product is not only displayed prominently and clearly, but also broken down in relation to discounts or applicable sales. Based on a survey of American shoppers, 71 percent had abandoned a retailer at some point because of pricing concerns; they further reported finding better deals online. Your only differentiator, in the eyes of many potential customers, may be a seasonal sale or the discounts available by combining certain goods. For every item in grid, list, and detail views, the best ecommerce storefronts clearly delineate applicable discounts.

Provide a prominent path to assistance

Customers prefer immediate resolution to their questions and issues. Providing a live chat feature as well as clearly visible contact information throughout your site can go a long way in turning shoppers into buyers. Most shoppers are accustomed to finding help and support information in two areas: in the top right part of the navigation, adjacent to login functionality, and in the site footer.

Checkout

Always provide a guest checkout option

When customers embark on the checkout process, speed bumps such as mandatory account creation can be costly to your business. Guest checkouts empower customers to quickly place an order if they value the product more than membership. If you insist on guiding shoppers toward site membership, do so after an order has been placed with an invitation to create an account.

Map out the checkout

Customers appreciate seeing their progression across checkout phases. Provide graphical feedback of the movement between forms for shipping information, billing information, and order completion. Be sure to indicate which step the user is currently on. This helps customers gauge how quickly they can move through the process. In addition, providing this feedback bolsters confidence that changes can easily be made if a shopper decides to revisit one of the checkout stages.

Prioritize security awareness

With information breaches increasingly common, shoppers are more security conscious. They look for specific iconography on your site—locks, checkmarks, and the like—to determine whether their transaction will be secure. It’s better still if the graphics correspond to recognized brands in security, such as Verisign and McAfee. Make sure these cues appear during the checkout process, where the need for trust is greatest.

Optimization

Optimize for quick load times

Internet access speeds vary significantly based on your customer’s browsing platform and available bandwidth. Because there’s a good chance your shoppers are on a mobile device, it’s more important than ever to ensure your content loads before your customer calls it quits. Depending on the type of storefront you run, don’t forget to assess the extent to which API calls and scripts affect and compound overall load times.

Design with high scannability in mind

You want to do everything possible to facilitate the shopping process. To do this, you can’t ignore the scannability and prominence of important information on your site. Generous use of white space, minimal use of large text blocks, and easily distinguishable hyperlink styles are all examples of ways to improve your site’s scannability. When customers are able to process the information on your site more swiftly, it’s more likely they’ll complete a purchase and bring repeat business.

Conclusion

Providing a frictionless shopping experience should be the goal of your ecommerce presence. As improved UX takes center stage in your organization, your customers will have more memorable and positive experiences as your business grows. By prioritizing an intuitive navigation, rich and intelligible product details, and a no-nonsense checkout process, you can greatly increase the odds that your shoppers will become buyers.


Is NoOps the End of DevOps? Think Again [Infographic]

Automation, a key pillar of the DevOps movement, frees IT operations to focus on higher-level work and collaborate with cross-functional teams. But what if your automation is so good that developers don’t need you anymore?

Mike Gualtieri of Forrester Research coined the term NoOps in his controversial blog post “I don’t want DevOps. I want NoOps.” In the post, Gualtieri says, “NoOps means that application developers will never have to speak with an operations professional again.”

During his time as a Cloud Architect at Netflix, Adrian Cockcroft expanded on the definition of NoOps in his blog post “Ops, DevOps, and PaaS (NoOps) at Netflix.” “There is no ops organization involved in running our cloud, no need for the developers to interact with ops people to get things done, and less time spent actually doing ops tasks than developers would spend explaining what needed to be done to someone else,” Cockcroft says. In short, NoOps means automation of deployment, monitoring, and management of applications.

Five years later, the debate surrounding the NoOps movement continues. If you’re a DevOps professional, this may scare you. Is your job still relevant? Does business still need you? Don’t worry.

DevOps isn’t dying. It is evolving.

Before we look to the future, let’s take a look at the origins of DevOps to give us a better foundation for debate.

A brief history of IT Operations, DevOps, and NoOps

In traditional IT organizations, developers and system administrators have opposing goals. Developers’ primary focus is to build features, while operations’ focus is to ensure the availability, reliability, performance, and security of the features in production. The Wall of Confusion, an iconic image in DevOps folklore, illustrates the barrier between development and operations, which leads to an unending series of outages, fire fighting, blame shifting, internal tension, customer frustration, and business failure.

In the late 1980s, the IT Infrastructure Library (ITIL)—a set of standards and best practices shared by the highest-performing IT organizations—emerged. While practicing ITIL promised high change success rates and prevented the typical disasters associated with software deployment, it did so at the expense of speed. With a reliance on manual controls and bureaucratic procedures, implementing successful changes meant slowing down the workflow.

Meanwhile, the software development community was busy forming their own best practices for the rapid development of applications. In 2001, a summit of prominent software craftsmen drafted the Agile Manifesto, kickstarting the agile development movement into full gear. The agile principles empowered small, cross-functional teams to build high-quality software faster than ever before.

The rise of the internet during the 1990s was the catalyst that fueled the demand for better, faster, more sophisticated software. In addition to these process advancements, many technology advancements, such as version control, continuous integration, configuration management, and virtualization, gained traction during this period.

In 2006, the need for better processes and tools reached a critical mass with the public launch of Amazon Web Services. With the advent of cloud computing, software teams could now outsource their physical infrastructure entirely to cloud providers, and instead manage virtual infrastructure resources via APIs. This infrastructure as a service (IaaS) model allowed development teams to move faster, no longer having to wait on IT to order and provision new hardware.

One year later, platform as a service (PaaS) solutions, such as Heroku and Cloud Foundry, made it possible for a single developer with no operations experience to launch a scalable web application over a weekend, because the platform automated everything from commit to deploy.

The day the earth stood still

The pivotal moment in DevOps history was the groundbreaking presentation from John Allspaw and Paul Hammond at the 2009 Velocity Conference—10+ Deploys Per Day: Dev and Ops Cooperation at Flickr. Seeing how a large company with complex software could successfully deploy a product multiple times per day was both a shock and a call to action for the IT community.

Riding the momentum of the ’09 Velocity conference, Patrick Debois organized the first Devopsdays conference in Ghent, Belgium. Devopsdays is a worldwide tour of locally organized conferences for developers, sysadmins, and other software professionals to meet and share their stories, ideas, and challenges. Some common themes discussed at Devopsdays include fostering a culture of community and collaboration, blameless post mortems, and applying agile practices and lean manufacturing principles to IT operations.

Since the first Devopsdays, the movement continues to accumulate success stories and widespread adoption including:

  • Docker
    An open-source container management platform that brought DIY PaaS solutions to the masses
  • Kubernetes
    A popular container orchestration framework
  • AWS Lambda
    The first widespread example of Serverless computing in which functions run on demand without the need for a server to run continuously

7 Reasons DevOps Is Not Dying

Now that we know the origins of DevOps, let’s return to our original question: Is NoOps the end of DevOps? Of course not!

1. DevOps is a journey

Attend any Devopsdays conference, and you will certainly hear the phrase, “DevOps is a journey, not a destination.” What you should see from our brief history of DevOps is that many of the techniques and practices were in use or being developed long before DevOps arrived on the scene. In fact, the first NoOps solutions were in use years before DevOps even had a name. And yet more than seven years later, the DevOps movement is still growing stronger.

2. DevOps adoption is growing

According to RightScale’s 2016 State of the Cloud Survey, more than 80 percent of enterprise companies and 70 percent of small businesses are in the process of adopting DevOps practices. Companies are investing in DevOps more heavily than ever before, and as Puppet’s 2016 State of DevOps Report demonstrates, the investment is paying off. High-performing IT organizations practicing DevOps have 2,555 times faster lead times (the time it takes to go from idea to working software in production), three times lower change failure rates, 24 times faster mean time to recovery (MTTR), and 10 percent less rework than their underperforming counterparts. Needless to say, your DevOps job is likely safe for the foreseeable future.

3. NoOps is not one-size-fits-all

If businesses see those kinds of gains from DevOps, why not skip DevOps and go directly to NoOps? For starters, NoOps is limited to applications that fit into existing PaaS solutions. Many enterprises still have monolithic legacy applications that would require massive upgrades or total rewrites to work in a PaaS environment. Furthermore, new technologies that have no suitable NoOps solution will emerge. As some claim, NoOps is really the next level of DevOps, and we should use DevOps principles and techniques to build NoOps nirvana into all new products.

4. NoOps fits within the three ways of DevOps

In The DevOps Handbook, Gene Kim and his co-authors describe the Three Ways, the principles from which all DevOps patterns can be derived. The first way is flow: the movement of features from left to right through the CI pipeline. NoOps solutions remove friction and increase the flow of valuable features through the pipeline. In other words, NoOps is successful DevOps.

The second way is fast feedback from right to left as features progress through the pipeline. Because NoOps allows us to ship defects as quickly as features, automated controls are necessary at every stage of the pipeline to ensure defects are caught and remediated early. At the scale of modern software applications, even a small defect could have damaging results for a business.

The third way is continuous learning and improvement. NoOps is exemplary for focused learning and improvement over many years to achieve an ideal of frictionless software deployment. NoOps is a culmination of new tools, techniques, and ideas developed through open and continuous collaboration. To say NoOps is the end of DevOps is to say we have nothing left to learn and nothing to improve.

5. Operations happens before production

With DevOps, much of the traditional IT operations work happens before code reaches production. Every release includes monitoring, logging, and A/B testing. CI/CD pipelines automatically run unit tests, security scanners, and policy checks on every commit. Deployments are automatic. Controls, tasks, and non-functional requirements are now implemented before release instead of during the frenzy and aftermath of a critical outage. Having sysadmins, auditors, security personnel, QA testers, and even key business stakeholders involved from the beginning ensures these automated tasks and controls are in place, correct, and effective before it’s too late.

6. DevOps is people

More important than any particular tool or technique is the culture of DevOps. You can’t practice DevOps in a vacuum and expect to succeed. DevOps began when developers and system administrators met at a conference to share ideas and experiences and work toward a common goal. Over time, the community realized it could build better software faster by including members from all areas of business, including QA, security, marketing, and finance. And the community continues to learn and evolve by sharing ideas at meetups, in online forums, on blogs, and through open source software. No matter how many advancements we make, the law of entropy will take over and erode them all if we forget the importance of DevOps culture.

7. DevOps requires continuous learning and improvement

A major pillar of DevOps is the spirit of continuous learning and improvement. When failures happen, a strong safety culture that practices blameless post mortems is vital for learning from mistakes. Rather than punish and assign blame, which destroys team morale and doesn’t solve the underlying issue, we must understand and improve the systems and processes that allowed the failure to happen in the first place.

Learning and improvement shouldn’t happen only when things go wrong. Everyone should strive to improve their daily work, and organizations should provide incentives for individuals to share their discoveries with the wider organization. With modern IT being a key driver of business success, companies must recognize that turning 10 1x developers into 10 2x developers is twice as effective, and much more realistic, than finding the elusive 10x engineer.

In conclusion, DevOps is forever

Despite cries of DevOps’ demise, NoOps is not the end of DevOps. In fact, NoOps is only the beginning of the innovations we can achieve together with DevOps. The movement started long before DevOps had a name, and the core principles will live on as long as businesses require software to succeed in a fast-paced, rapidly changing technological landscape. In a few years, the name may fade away in favor of a new buzzword, but the culture and the contributions of the DevOps community will live on.


Does Your DevOps Department Need More Attention? [Infographic]

There are some big red flags that signify your DevOps department needs an overhaul. Your deployment process seems to take forever. It only works from a few developers’ computers. It’s different for each server you deploy to.

Sound familiar?

Luckily the warning signs of a DevOps department in need of help are pretty easy to recognize. Read on to learn how to identify if and when your infrastructure team needs more attention—plus a few suggestions to implement those changes as smoothly and seamlessly as possible.

All your servers are slightly different

Automation is the byword of DevOps. With automation, you remove manual intervention (and possible human error), which means you can deploy new services and recover from critical events faster. If a server were to go down, a new one should be automatically created.

That’s a good ideal, but don’t worry if you’re not there yet. Just having a process to create servers is an enormous step in the right direction. Investigate tools such as Chef, Puppet, Ansible, or Salt to standardize your provisioning process. You should be able to take a server from a bare image to a full-fledged member of your cluster in one command. If you can’t, you’re in danger of losing important infrastructure knowledge when a server inevitably dies. And recreating server configuration after it’s been destroyed is not a fun experience.
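
The “one command” goal can be as simple as a thin wrapper around your configuration management tool. The sketch below assumes a hypothetical Ansible playbook named site.yml that describes the full server role; Chef, Puppet, and Salt support the same pattern.

    import subprocess
    import sys

    def provision(host):
        # site.yml is a hypothetical playbook describing the full server role.
        cmd = ["ansible-playbook", "site.yml", "--limit", host]
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"Provisioning failed for {host} -- fix the playbook, not the server.")

    if __name__ == "__main__":
        provision(sys.argv[1] if len(sys.argv) > 1 else "new-node-01")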

A huge bonus of a standardized stack is liberation from correcting strange, difficult-to-trace server problems. Sorting through header files and C source trying to track down an error, only to discover that a disk experienced a freakish one-time mishap, will become a thing of the past. The next time your OS acts up in an unexpected way, just destroy the entire server and let your provisioning system bring it back, fresh and new.

You may be surprised how, through no fault of your application or your own, entropy can infest a system and gradually introduce errors, bloat, and bugs. Fighting server divergence is one of the hardest tasks in operations, but configuration management tools and a standardized server creation process are the most important steps to ensure conformity among all members of your cluster. The surest way to improve your DevOps game is to establish a streamlined, automated provisioning process you know works on all your servers—and don’t be afraid to use it!

Change is hard

Another sign you need to reinvest in your DevOps stack is if you spend a lot of time trying to manually change parts of your infrastructure. If a simple version upgrade takes weeks of manual work by your system administrators, there’s definitely something wrong. No piece of software should be manually installed on a server (except maybe to test how it functions). Administrators should largely write and correct software in repositories, not fix it on servers.

On the provider side, if creating new load balancers, databases, or other provider-mediated resources takes a while and requires you to use your provider’s management console, consider a tool like Terraform or CloudFormation to automate and manage your infrastructure backend. Changes you make to any part of your infrastructure should be tracked, managed, and understood through your version control system. This includes both the software running on servers and the commands used to provision those servers and all associated resources.
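
If declarative tools like Terraform aren't an option yet, even a short, version-controlled script against your provider's SDK beats console clicks. The sketch below uses AWS's boto3 library with hypothetical names and subnet IDs; the important part is that the file lives in your repository, where changes are tracked and reviewed.

    import boto3

    elb = boto3.client("elbv2")
    # Hypothetical names and subnet IDs; the point is that this file is reviewed
    # and versioned like any other code change.
    response = elb.create_load_balancer(
        Name="web-tier-lb",
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        Type="application",
    )
    print(response["LoadBalancers"][0]["DNSName"])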

And similarly, changes to the infrastructure should be quick and transparent. A new version of your application should be delivered via a continuous deployment process that occurs automatically after a merge or new version. Needing a developer or administrator to manually perform deployments is a serious problem; waiting for deployments is an artificial bottleneck that takes time and saps focus. You can be sure someone will forget how it works, which can lead to a breakdown of the process, unless it’s incredibly well-documented.

And if you’re documenting it that well, why not just write code that performs the documented steps for you?

Developer environments are inconsistent

When a new developer joins your company—or an existing engineer buys a new computer—hours of time must be devoted to installing proper tooling, ensuring versions of local software are correct, and debugging any application-specific problems that crop up. This may seem like a small issue but it can rear its ugly head at unexpected times. Even six months after an engineer joins, the code he or she developed locally may work differently once deployed. Figuring out the problem can turn into a days-long slog that craters productivity.

A developer should be able to work in an environment identical to your production stack. Tools including Vagrant and Docker allow you to bring the same provisioning and containerization processes that your servers use to your developers’ workstations, which helps ensure versioning problems are a headache of the past.

But even if you can’t introduce Vagrant and Docker, having an automated install process and a standardized development environment can alleviate a lot of the pain caused by inconsistencies. Your Windows or Linux developers may chafe when required to use Macs, but if you can ensure Macs always install the correct version of your software tools, it may be worth asking them to make that sacrifice.

Of course, developing with virtual machines means a developer could use whatever platform they’re most comfortable with and still be guaranteed to receive the same software. But getting there takes a lot more work than having an automated install script.

Conclusion

If your DevOps initiative is suffering from some or all of these issues, it’s clear your organization is experiencing drag caused by bad tooling or a lack of processes. Thankfully, most of these issues are easy to fix. Streamlining your DevOps flow will save your engineers and administrators countless hours of manual management and debugging. Paying a little more attention to your DevOps can make formerly intractable, difficult-to-debug problems easy to fix through automation and standardization.


 

Beyond Bitcoin: How Enterprises Can Integrate Blockchain into Business [Infographic]

In 2016, blockchain technology came close to hitting its peak on Gartner’s annual Hype Cycle, signaling an imminent shift from an emerging, theoretical technology to widespread adoption. Like cloud, big data, and the Internet of Things (IoT) before it, blockchain is the tech industry’s latest Next Big Thing. Analysts and industry experts say it holds immense potential for organizations, but many business leaders don’t yet see a practical application for their operations. While a lot of people know blockchain is the technology behind Bitcoin, Ethereum, and other cryptocurrencies, what about enterprise applications in other industries?

What exactly is blockchain?

The code that makes up a blockchain is complex, and relatively few people understand it in depth. At a high level, it’s a data structure supported by peer-to-peer (P2P) protocols that form a distributed, decentralized transaction ledger. Every member of the network uses the same “consensus mechanism” to verify every transaction made through the network, creating a unique, permanent audit trail. Thanks to the distributed nature of the blockchain, there’s no single point of failure and no practical way to modify the transaction record.
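
A toy example helps make the “chain” part concrete. The sketch below (plain Python, with no network or consensus layer) shows how each block commits to its predecessor through a hash, which is what makes the record effectively tamper-evident.

    import hashlib
    import json
    import time

    def make_block(transactions, prev_hash):
        block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    genesis = make_block(["genesis"], prev_hash="0" * 64)
    block_1 = make_block(["Alice pays Bob 5"], prev_hash=genesis["hash"])

    # Tamper with history: the recomputed hash no longer matches what block_1
    # recorded, so every honest node would reject the altered chain.
    genesis["transactions"].append("Mallory pays Mallory 1000")
    fields = {k: genesis[k] for k in ("timestamp", "transactions", "prev_hash")}
    recomputed = hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()
    print(recomputed == block_1["prev_hash"])  # False -- the tampering is detectable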

You may start to hear a new acronym—DLT, or distributed ledger technology. Many experts think 2017 is the year blockchain technology will gain traction in the enterprise, as a more mainstream understanding of the technology leads to high-value applications and simpler processes, and as general adoption progresses to line-of-business solutions.

It’s important to understand that blockchain isn’t a technology layered on top of existing infrastructures in order to tweak the way business is done. It requires a different way of thinking and an entirely new approach to business. There are no middlemen involved in a blockchain transaction to facilitate and verify the exchange; it’s all done instantly on the distributed network, with the record of transaction logged permanently in the database. It’s a machine-to-machine process that has no human touch points once the transaction is entered.

Enterprise use cases

Blockchain is the next iteration of the connected enterprise. It’s the next logical step in a data sharing ecosystem in which every process is now—or will soon be—digital. The amount of data we have to process is overwhelming. As IoT takes hold, connected smart machines and devices will exchange information and execute automated tasks.

Blockchain is already used in a handful of applications including cryptocurrency. This year, as more people set their minds to understanding the technology and finding creative applications, we will see a sharp increase in new use cases in industries as diverse as agriculture, finance, healthcare, energy, and manufacturing.

  • Supply chain
    The immutability of DLT makes blockchain suited to banking and finance, which is why the first mature applications, such as Bitcoin and Ethereum, are in this space. The responsibility for verifying and securing the transactions shifts from a central authority to the distributed network, and the ledger is public, which increases transparency. But you don’t have to deal in cryptocurrency in order to find a fintech application for blockchain. The real promise for enterprises lies within the supply chain. Because transactions happen in real time, sales order processing, inventory management, and accounting work more closely together and at significantly increased speeds. Picture this: a women’s apparel retail store does a booming business in a particular line of handbags. The store’s inventory management system sends a purchase alert when it’s time to order, and through the blockchain, the application sees available inventory and triggers a sales order. There’s no need to verify the inventory is in stock; because of the transparency, the system knew that before it placed the order. Payment is automatically generated and transferred in real time, bypassing the bank entirely. With blockchain, a process that takes weeks could be completed in a matter of minutes.
  • Food and beverage
    When you deal with perishables, keeping an exact chain of custody is imperative. Walmart has partnered with IBM to test a blockchain application that tracks the pork it sells in China. Every step the product takes between farm and cash register is documented on the blockchain. The record shows where it originated, how and when it was processed, which truck transported it, at what temperature it was stored, when it will expire, and which store bought it. Right now, this application is only in the testing stage in a very small market, but the clear benefits mean it won’t be long before using a blockchain in the food and beverage industry becomes best practice.
  • “Limited blockchains”
    Blockchains aren’t only helpful when exchanging valuable assets with entities outside the enterprise trust circle. Some of the applications with the most creative potential are in private use cases where the data is shared with a smaller, pre-defined value chain. In many instances, these blockchains will be used in tandem with off-blockchain solutions and brokers because enterprises won’t have the advantage of a vast P2P network of distributed entities. In this sense, they’re limited blockchains. For example, the handbag manufacturer we outlined above may use a limited blockchain to manage its production schedule. From the sales team to the shop floor to the warehouse, every touchpoint in the product lifecycle is recorded on the blockchain, while more granular business processes are controlled off-blockchain. Or in healthcare, a consortium of medical care providers and insurance companies could use the blockchain to decentralize patient health records, creating one source of patient data. However, because of HIPAA regulations, there may need to be an off-blockchain application matching each blockchain node with the corresponding patient’s sensitive identity information.

Challenges to overcome

The promise of blockchain is exciting, but there are some significant challenges to work out before widespread adoption is realistic. One of the biggest is security and privacy. In June of last year, hackers exploited a vulnerability in Ethereum’s blockchain code to steal $60 million. Because of the distributed nature of the blockchain, a majority of the Ethereum network’s participants had to agree to rewrite the rules in order to recover the money. Some refused to do so, citing the sanctity of blockchain’s immutability.

It was eventually resolved, but the incident left the banking industry cold. A few months later, Accenture developed a way to edit the blockchain, which caused no small amount of controversy among blockchain purists. But the edit feature was welcomed by the financial industry. Before enterprises feel safe entrusting their financial records and other sensitive information to a blockchain, they’ll need to see a lot of security advances.

The other big hurdle is the lack of regulation surrounding blockchain. Remember the FBI vs. Apple debacle where the feds wanted the manufacturer to unlock an iPhone used by terrorists in the San Bernardino shooting? The FBI used a 227-year-old statute as the basis of their argument. That’s how slowly governments move on new technology.

However important the information on that iPhone, that situation was a lullaby compared to the alarm bells blockchain has set off. Remember, there’s no intermediary for blockchain transactions. Banks are subject to federal regulations, but Bitcoin and other cryptocurrencies are not. Further, new advances in blockchain encryption show promise for the ability to hide the origin or destination of the assets exchanged. This all spells trouble for governments trying to collect taxes or institute consumer protection laws.

On the practical side, the infrastructure needed to give a large blockchain such as Bitcoin integrity is massive. Some experts estimate that you’d need more than five megawatts of data center capacity just to track users’ currency. Blockchain applications require immense cloud infrastructures, with each transaction conducted as a virtual session within a data center. As blockchain grows, the demand on servers will increase dramatically, necessitating investments in advanced storage solutions such as hyper-converged infrastructure.

The World Economic Forum predicts that by 2025, one-tenth of our GDP will have made its way onto the blockchain. For a nascent technology, that’s a bold prediction. But we’re living through a period of technological transformation that moves at unprecedented speeds. If you’re not already thinking about blockchain, you may already be behind.


Java vs. Python: Which One Is Best for You? [Infographic]

Few questions in software development are more divisive or tribal than choice of programming language. Software developers often identify strongly with their tools of choice, freely mixing objective facts with subjective preference.

The last decade, however, has seen an explosion both in the number of languages used in production and the number of languages an individual developer is likely to employ day to day. That means that language affiliations are sometimes spread more loosely and broadly across different codebases, frameworks, and platforms. Modern projects and modern developers are increasingly polyglot—able to draw on more languages and libraries than ever before. Informed choice still has a part to play.

From that bustling bazaar of programming languages, let’s narrow our focus to two survivors of the 1990s that have very different origin stories: Java and Python.

Python’s Story

Python is the older of the two languages, first released in 1991 by its inventor, Guido van Rossum. It has been open source since its inception. The Python Software Foundation manages the design and standardization of the language and its libraries. The Python Enhancement Proposal (PEP) process guides its development.

In programming language evolution, it is common to maintain backward compatibility indefinitely. This is not the case with Python. Python 2 arrived in 2000 and Python 3 hit the scene in 2008. They are largely compatible, but have enough functionality- and syntax-breaking differences that they can be treated as different languages. Rather than retrofit newer trends and ideas into Python 2 (complicating and compromising the language), Python 3 was conceived as a new language that had learned from Python 2’s experience. Python 3—version 3.6 at the time of writing—is where current evolution and emphasis in the Python world exists. Python 2 development has continued separately, but its final incarnation is version 2.7, which will no longer be maintained after 2020.

Python’s syntax embodies a philosophy of readability, with a simple and regular style that encourages brevity and consistent code layout. It originated as a scripting language, embodying the Unix philosophy of being able to compose new programs from old, as well as using existing code directly. This simplicity and composability is helped by Python’s dynamic type system. It is an interpreted language available on many platforms, making it a portable option for general development.

Python’s reference implementation, written in C and known as CPython, is available on many platforms and is the most commonly used. Other groups have created their own implementations, such as IronPython, which is written in C# and offers close integration with the .NET runtime.

Python is a general-purpose language built around an extensible object model. Its object-oriented core does not necessarily mean object orientation is the most common style developers use when programming in Python. It has support for procedural programming, modular programming, and some aspects of functional programming.
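
A small example of that flexibility: the same task written first procedurally and then in a functional style, with no classes in sight.

    prices = [19.99, 5.00, 42.50]

    # Procedural style: a plain loop, no classes required.
    discounted = []
    for p in prices:
        if p > 10:
            discounted.append(round(p * 0.9, 2))

    # Functional style: the same result with filter and map
    # (a list comprehension would be the more idiomatic middle ground).
    discounted_fp = list(map(lambda p: round(p * 0.9, 2), filter(lambda p: p > 10, prices)))

    assert discounted == discounted_fp == [17.99, 38.25]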

The language’s name—and no small amount of humor to be found peppered through its documentation and libraries—comes from British surrealist comedy group Monty Python.

Java’s Story

Although it was not released until 1995, Java’s story begins in 1991. James Gosling and others at Sun Microsystems conceived a language for programming interactive TV systems. It was released with the fanfare of being a portable internet language, particularly in the browser. It is now a long way from this starting point and the original name: Oak.

Just as it was too heavyweight at the time for its original TV target market, it lost the browser space to dynamic HTML and JavaScript (which, in spite of its name, is unrelated as a language). However, Java rapidly found itself on the server and in the classroom, helping ensure its ranking as the dominant language at the turn of the millennium.

Part of its attraction and value is its portability and relative efficiency. Although it does not compile to native machine code the way C and C++ do, Java is a compiled language. Its execution model is more machine-centered than that of purely interpreted languages, such as Python and Perl. Java is more than just a language and libraries: It is also a virtual machine and, therefore, an ecosystem. The Java Virtual Machine (JVM) is an idealized and portable platform for running Java code. Rather than worrying about hardware specifics and having to port code to new platforms, the promise of Java has been Write Once, Run Anywhere (WORA): as long as a JVM is present, anything compiled into its bytecode can run and interact easily with anything else written for the JVM. There are many JVM languages, including the more script-like Groovy, the functional Clojure, the object–functional hybrid Scala, and even a Python variant, Jython.

Java is an object-oriented language with a C/C++-like syntax that is familiar to many programmers. It is dynamically linked, allowing new code to be downloaded and run, but not dynamically typed. As a language, Java’s evolution has been relatively slow, only recently incorporating features that support functional programming. On the other hand, the philosophy of both the language and the VM has been to treat backward compatibility as a prime directive.

After Oracle bought Sun, the language and its compiler were eventually open-sourced. The language’s evolution is guided by the Java Community Process (JCP), which includes companies and individuals outside Oracle.

So how do these two languages stack up? Let’s break it down by category.

Speed

Although performance is not always a problem in software, it should always be a consideration. Where network I/O costs or database access dominate, the specific efficiency of a language is less significant than other aspects of technology choice and design when it comes to overall efficiency.

Although neither Java nor Python is especially suited to high-performance computing, when performance matters, Java has the edge by platform and by design. Although some Python implementations, such as PyPy, are fine-tuned for performance, raw portable performance is not where Python shines.

A lot of Java efficiency comes from optimizations to virtual machine execution. A JVM can translate bytecode into native machine code as a program executes. This Just-In-Time (JIT) compilation is why Java’s performance can often rival that of native languages. Relying on JIT is a reasonably portable assumption as HotSpot, the default Oracle JVM, offers it.

Java has had support for concurrency from its first public version, whereas Python is more resolutely a sequential language. This has implications for taking advantage of current multi-core processor trends, with Java code more readily able to do so. The Global Interpreter Lock (GIL) in the dominant implementation of Python, CPython, stands in the way of such scaling. Python implementations without this restriction exist, but relying on them can interfere with some of the portability assumptions underpinning Python code.
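
The effect is easy to demonstrate. In the sketch below, the same CPU-bound work is handed to a thread pool and then to a process pool; on CPython, only the process pool escapes the GIL and uses multiple cores (exact timings will vary by machine).

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def burn(n):
        # Pure-Python, CPU-bound work that never releases the GIL.
        return sum(i * i for i in range(n))

    def timed(executor_cls):
        start = time.perf_counter()
        with executor_cls(max_workers=4) as pool:
            list(pool.map(burn, [2_000_000] * 4))
        return time.perf_counter() - start

    if __name__ == "__main__":  # guard required for process pools on some platforms
        print("threads:  ", timed(ThreadPoolExecutor))   # serialized by the GIL
        print("processes:", timed(ProcessPoolExecutor))  # one interpreter per core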

Legacy

Often language choice is not about the design and intrinsic qualities of the language itself. Languages exist to create code, and that code has a context in business, economics, history, software architecture, skills, and development culture.

Legacy systems have inertia around their incumbent technologies. Changes will more easily follow the path already laid down, shifting gradually and incrementally rather than by rewrite and revolution. For example, an existing Python 2 codebase is more likely to find a new lease on life in Python 3 than in a rewrite. The back-end of an existing Java enterprise project is likely to grow its functionality with more Java code, perhaps migrating to a more current version of the language, or by adding new features in other JVM languages such as Scala and Groovy.

Java’s history in the enterprise and its slightly more verbose coding style mean that Java legacy systems are typically larger and more numerous than Python legacy. On the other hand, organizations may be surprised to find how many of the scripts and glue code that hold their IT infrastructure together are made up of Python. Both languages have a legacy problem, but it typically presents differently.

Practical Agility

Development culture and trends have benefited both Java and Python. By virtue of publications that have used Java as their lingua franca and tools that focused on working with Java, Java is often seen to have the closer association with agile development and its community. But no community is static and so easily defined. Python has always had a presence in the agile space and has grown in popularity for many reasons, including the rise of the DevOps movement.

Java enjoys more consistent refactoring support than Python, thanks on one hand to its static type system, which makes automated refactoring more predictable and reliable, and on the other to the prevalence of IDEs in Java development (IntelliJ, Eclipse, and NetBeans, for example). Python’s more dynamic type system encourages a different kind of agility in code, focusing on brevity, fluidity, and experimentation, where Java is perhaps seen as a more rigid option. That very same type system, however, can be an obstacle to automated refactoring in Python. Pythonic culture favors a diverse range of editors rather than being grounded in IDEs, which means there is less expectation of strong automated refactoring support.

The early popularity of JUnit and its association with test-driven development (TDD) has meant that Java enjoys perhaps the most consistent developer enthusiasm for unit testing of any language. The automatic inclusion of JUnit in IDEs has, in no small part, helped.

That said, Python’s origins in scripting and the inclusion of test features in its standard library mean that Python is no stranger to the emphasis on automated testing found in modern development, although it is more often likely to be integration rather than unit testing.

Human Resources

Sometimes language choice is more about the application of skills than it is about the software applications themselves. Staffing may count for more than language design and tooling. If the ideal language for the job is one that no one has skills in—and no one wants skills in—then it is probably not the ideal language for the job after all. On the other hand, if developers are keen to embrace a new technology then all other things being equal, this can be a good enough reason to go with that technology. In the Java world, the pill of a legacy Java codebase can often be sweetened by embracing another JVM language, such as using Groovy or Clojure for automated testing, or stepping outside the Java universe altogether, such as using Python to handle the operations side of the system.

Another side to the staffing question is the skills market. Both Java and Python are stalwarts of the TIOBE Index programming language popularity top 10 list. Java has consistently been more popular than Python, but Python has experienced the greater growth of the two languages, picking up where Perl and Ruby are falling.

Following the idea that one of the greatest influences on both personal choice and employment interest is going with what you know, both languages have a strong foothold in education, with Java more typically used on university courses and Python used in high school. Current IT graduates have one or both of these languages on their résumé almost by default.

Architecture

Skills and existing software systems and choices inform the programming languages used in any given software architecture. Software architecture is also a matter of frameworks and libraries, reuse, and integration. In many cases, it is the technologies people want to take advantage of that dictate language choice rather than the other way around. A software architecture conceived around a Python web framework will not get far with a Java-only development team.

Both Java and Python enjoy a seemingly endless supply of open-source libraries populated by code from individuals and companies who have solved common and uncommon problems, and who are happy to share so others can take advantage of their solutions. Indeed, both languages have benefited from—and been shaped by—online forums and open-source development.

When questions of legacy, reuse, performance, and development skills have all been accounted for, some architectural decisions can still leave the choice of language open. For example, the rise of microservice architectures (where internet-facing systems are partitioned into small, cooperating processes) makes the choice of language more of a localized detail than a dominant consideration across a project.

For all the diversity present in the modern programming landscape and its software architectures, some teams and businesses prefer to reduce some of their technology choices rather than live with a jumble of past decisions and personal whim. But consolidation can reduce options, so this is not a decision to be taken lightly. It is worth keeping an eye on trends in languages and frameworks to avoid taking the wrong fork in the road.

Conclusion

Java and Python are both in it for the long haul. Along with their development communities, they’ve evolved and adapted since the 1990s, finding new niches and replacing other languages—sometimes competing in the same space. Both languages are associated with openness, so companies, teams, and developers are best keeping an open mind when it comes to making a decision.

 

How to Trim Page Weight to Boost Page Speed and Increase Conversions [Infographic]

One of the best ways to increase website conversions and build customer trust in your online brand is to boost your page speed. However, many businesses fail to make the connection between site performance and increased revenue. More than half of mobile web visitors will abandon sites that take more than three seconds to load, and one in five users who abandon your website will never return. Increasing page speed leads to more visitors and higher conversion rates, and boosts your bottom line. Website performance has many facets, but today we will focus on page weight (sometimes called page size).

What is page weight?

Page weight is the combined size of all elements required to load a web page such as HTML, CSS, JavaScript, images, and custom fonts. Reducing page weight can significantly improve your page speed, especially for mobile visitors.

  • Wireless speeds for 4G LTE can reach up to 12 Mbps.
  • According to the HTTP archive, the average size of a mobile page is 2.2 MB (a nearly 70 percent increase from a year ago) and consists of 92 individual HTTP requests on average.

If we assume an average 4G LTE speed of 8.5 Mbps, every megabyte increase in page weight slows your site down by another second. While modern 4G networks offer plenty of bandwidth, latency of mobile networks is the silent performance killer. Even under favorable conditions, latency can easily add 300ms or more to your TCP round trip time (RTT).

For example, the average mobile page makes 92 HTTP requests. Assuming 20 percent of those requests establish a new TCP connection, your site’s download time will increase by 5.4 seconds. For the average mobile page (2.2 MB), that’s 7.6 seconds total—more than double the 3-second benchmark.
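
Here is the back-of-the-envelope math behind those figures (small differences from the numbers quoted above come down to rounding):

    page_weight_mb = 2.2    # average mobile page (HTTP Archive)
    bandwidth_mbps = 8.5    # assumed 4G LTE throughput
    requests = 92           # average number of HTTP requests
    new_conn_share = 0.20   # share of requests opening a new TCP connection
    rtt_seconds = 0.30      # added latency per new connection

    transfer_time = page_weight_mb * 8 / bandwidth_mbps      # about 2.1 s
    latency_time = requests * new_conn_share * rtt_seconds   # about 5.5 s
    print(round(transfer_time + latency_time, 1))            # roughly 7.6 s in total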

Measuring page speed and page weight

Before you can optimize page speed, you must first establish a performance baseline. Without one, you’re flying blind: any optimization effort is just a guess, and you won’t know whether you’ve actually improved speed. The fastest way to start is with the tools already included in your browser, or with a free online speed test.
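
If you want a quick, scriptable first look before reaching for browser tools, the sketch below (Python, with the third-party requests library assumed to be installed, and a hypothetical URL) fetches a single page and reports its transferred size and elapsed time. It ignores subresources, rendering, and caching, so treat it as a rough sanity check rather than a substitute for a real speed test.

    # Very rough baseline: weigh and time a single URL (HTML document only).
    import time

    import requests  # assumed installed: pip install requests

    def quick_baseline(url):
        start = time.perf_counter()
        response = requests.get(url)
        elapsed = time.perf_counter() - start
        weight_kb = len(response.content) / 1024
        print(f"{url}: {weight_kb:.0f} KB transferred in {elapsed:.2f} s")

    quick_baseline("https://www.example.com/")  # hypothetical URL; use your own page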

Page speed tools

Reducing page weight

Now that we understand the need to reduce page weight, let’s look at some best practices to do that. Each technique deserves its own blog post, but for now we will divide them into three high-level optimization categories.

Reduce file size

  • Optimize images for web and mobile devices.
  • Remove unnecessary characters from custom fonts.
  • Enable gzip compression in your web server configuration (see the sketch below for how much difference compression can make).
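
As a quick illustration of the gzip item above, the following Python sketch compresses a block of repetitive, HTML-like markup and compares sizes. Real web servers such as nginx or Apache apply the same compression transparently once it is enabled in their configuration.

    # Why gzip matters: text-based assets (HTML, CSS, JavaScript) compress very well.
    import gzip

    html = ("<div class='product'><h2>Product name</h2>"
            "<p>Product description goes here.</p></div>\n") * 500

    compressed = gzip.compress(html.encode("utf-8"))
    print(f"Original:   {len(html) / 1024:.1f} KB")
    print(f"Compressed: {len(compressed) / 1024:.1f} KB")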

Reduce HTTP requests

  • Avoid redirects, especially for landing pages.
  • Concatenate CSS and JavaScript files.
  • Use image sprites, especially for icon sets.

Network optimizations

  • Use a content delivery network to reduce latency and RTT.
  • Use responsive and adaptive design techniques to further trim page weight for mobile visitors.
  • Leverage browser caching to avoid downloading resources more than once (see the sketch below).
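
To illustrate the browser-caching item above, here is a minimal sketch (using Flask, which is assumed to be installed; the route and values are placeholders) of the kind of Cache-Control response header that tells browsers to reuse a resource instead of re-downloading it on every visit. In production this header is usually set in the web server or CDN configuration rather than in application code.

    # Serve a resource with a Cache-Control header so browsers keep a local copy.
    from flask import Flask, Response  # assumed installed: pip install flask

    app = Flask(__name__)

    @app.route("/static-demo.css")
    def static_demo():
        resp = Response("body { margin: 0; }", mimetype="text/css")
        resp.headers["Cache-Control"] = "public, max-age=86400"  # cache for one day
        return resp

    if __name__ == "__main__":
        app.run()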


Continuously monitoring your page speed

Improving page speed isn’t a one-time activity. With new content, new products, and new features, your site is always changing. Each change could add weight and ruin your previous optimization efforts. That’s why it’s critical to routinely monitor all aspects of your site’s performance. With a good End-User Monitoring solution, you can:

  • Analyze site performance trends over time
  • Receive alerts when page load times deviate from established baselines
  • Gain in-depth visibility into real user sessions
  • Correlate performance regressions and improvements with business outcomes in real time

Next steps

The results are in: trimming page weight boosts page speed, and faster pages mean higher conversion rates and a stronger reputation for your company, especially on mobile. Armed with tools to measure website performance, techniques to reduce page weight, and solutions to continuously monitor your end-user experience, it’s time to take action. Slim down your pages, turbocharge your site’s performance, and leave your competitors in the dust!

This Is How Amazon’s Servers Rarely Go Down [Infographic]

Amazon Web Services (AWS), Amazon’s best-in-class cloud services offering, had downtime of only 2.5 hours in 2015. You may think their uptime of 99.9997 percent had something to do with an engineering team of hundreds, a budget of billions, or dozens of data centers across the globe—but you’d be wrong. Amazon’s website, video, and music offerings, and even AWS itself, all leverage multiple AWS products to get five nines of availability, and those are the same products we get to use as consumers. With some clever engineering and good service decisions, anyone can get uptime numbers close to Amazon’s for only a fraction of the cost.

But before we discuss specific techniques to keep your site constantly available, we need to accept a difficult reality: downtime is inevitable. Even Google went offline in 2015, and if the world’s largest website can’t achieve 100 percent uptime, you can be sure your company won’t either. Instead of trying to prevent downtime entirely, reframe your thinking: do everything you can to keep your service as usable as possible while failure occurs, and then recover from it as quickly as possible.

Here’s how to architect an application to isolate failure, recover rapidly from downtime, and scale in the face of heavy load. (This is only a brief overview; there are plenty of great resources online with more detailed descriptions. For example, don’t be afraid to dive into your cloud provider’s documentation. It’s the single best source for discovering all the amazing things they can do for you.)

Architecture and Failure Mitigation

Let’s begin by considering your current web application. If your primary database were to go down, how many services would be affected? Would your site be usable at all? How quickly would customers notice?

If your answers are “everything,” “not at all,” and “immediately,” you may want to consider a more distributed, failure-resistant application architecture. Microservices—that is, many small applications that work together to act like a larger app—have become an extremely popular engineering paradigm, in part because the failure of an individual service is far less noticeable to clients.

For example, consider a basic shop application. If it were one big monolithic service, a database failure would take the entire site offline; no one could use it at all, even just to browse products or plan purchases. Now suppose you have microservices instead of a monolith: rather than a single shop application, perhaps you have an authentication service to log in users, a product service to browse the shop, and an order fulfillment service to charge customers and ship goods. A failure in the order fulfillment database then means that only customers trying to place an order see errors.
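
As a toy illustration of that isolation, the sketch below (Python, with invented names standing in for real networked services) shows a storefront that keeps serving the catalog while the order fulfillment “service” is down; only the checkout path degrades.

    # Toy illustration of failure isolation between microservices. The "services"
    # here are stand-in functions with invented names; in a real system they would
    # be separate processes reached over the network.

    class ServiceUnavailable(Exception):
        pass

    def product_service_list_products():
        # The product service is healthy, so browsing keeps working.
        return ["t-shirt", "mug", "sticker pack"]

    def fulfillment_service_checkout(cart):
        # Simulate the order fulfillment database being down.
        raise ServiceUnavailable("order fulfillment database is unreachable")

    def render_storefront():
        # Browsing depends only on the product service.
        print("Products:", ", ".join(product_service_list_products()))

    def place_order(cart):
        # Only the checkout path depends on the fulfillment service; its failure
        # is contained here instead of taking the whole site down.
        try:
            fulfillment_service_checkout(cart)
            print("Order placed!")
        except ServiceUnavailable:
            print("Checkout is temporarily unavailable; your cart has been saved.")

    render_storefront()        # still works during the outage
    place_order(["t-shirt"])   # degrades gracefully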

Losing an element of your operation isn’t ideal, but it’s not anywhere near as bad as having your entire site unavailable. Only a small fraction of customers will be affected, while everyone else can happily browse your store as if nothing was going wrong. And with proper logging, you can note the prospects that had failed requests and reach out to them personally afterward, apologizing for the downtime and hopefully still converting them into paying customers.

This is all possible with a monolithic app, but microservices distribute failure and isolate it to specific parts of the system. You won’t prevent downtime; instead, you’ll make it affect fewer people, which is a much more achievable goal.

Databases, Automatic Failover, and Preventing Data Loss

It’s 2 a.m. and a database stops working. What happens to your website? What happens to the data in your database? How long will you be offline?

This used to be the sysadmin nightmare scenario: pray that the last backup was usable and recent, that downtime would last only a few hours, and that no more than a day’s worth of data was lost. But nowadays the story is very different, thanks in part to Amazon but also to the power and flexibility of most database software.

If you use the AWS Relational Database Service (RDS), you get daily backups for free, and restoration of a backup is just a click away. Better yet, with a multi-availability zone database, you’re likely to have no downtime at all and the entire database failure will be invisible.

With a multi-AZ database, Amazon keeps an up-to-date copy of your database in another availability zone: a logically separate datacenter from wherever your primary database is. An internet outage, a power blip, or even a comet can take out the primary availability zone, and Amazon will detect the downtime and automatically promote the database copy to be your main database. The process is seamless and happens immediately—chances are, you won’t even experience any data loss.

But availability zones are geographically close together. All of Amazon’s us-east-1 datacenters are in Virginia, only a few miles from each other. Let’s say you also want to protect against the complete failure of all systems in the United States and keep a current copy of your data in Europe or Asia. Here, RDS offers cross-region read replicas that leverage the underlying database technology to create consistent database copies that can be promoted to full-fledged primaries at the touch of a button.
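
As a rough sketch of what this looks like in practice, the following Python snippet uses boto3 (the AWS SDK for Python, assumed to be installed and configured with credentials) to create a Multi-AZ database and a cross-region read replica. All identifiers, regions, and sizes are placeholders; check the RDS documentation for the full set of options before running anything like this.

    import boto3  # assumed installed: pip install boto3

    # Primary database with an automatically managed standby in a second AZ.
    rds_primary = boto3.client("rds", region_name="us-east-1")
    rds_primary.create_db_instance(
        DBInstanceIdentifier="shop-db",                # placeholder name
        Engine="postgres",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=100,
        MasterUsername="shop_admin",
        MasterUserPassword="change-me",                # use a secrets manager in practice
        MultiAZ=True,                                  # automatic failover to another AZ
        BackupRetentionPeriod=7,                       # keep daily backups for a week
    )

    # Cross-region read replica that can be promoted if the whole region fails.
    rds_europe = boto3.client("rds", region_name="eu-west-1")
    rds_europe.create_db_instance_read_replica(
        DBInstanceIdentifier="shop-db-replica",
        SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:shop-db",  # placeholder ARN
    )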

Both MySQL and PostgreSQL, the two most popular relational database systems on the market and both available as RDS database engines, offer native capabilities to ship database events to external follower databases as they occur. Here, RDS takes advantage of a feature anyone can use, though with Amazon’s strong consumer focus, it’s significantly easier to set up in RDS than to do manually. Typically, data is shipped to followers at the same time it is committed to the primary. Unfortunately, across a continent you’re looking at a data loss window of about 200 to 500 milliseconds, because an event must be sent from your primary database and read by the follower.

Still, for a consistent, cross-continent backup, a data loss window of 500 milliseconds is far better than hours. So the next time your database fails in the middle of the night, your monitoring service won’t even need to wake you. Instead, you can read about it in the morning—if you can even tell that it occurred. And that means no downtime and no unhappy customers.

Auto Scaling, Repeatability, and Consistency

Amazon’s software-as-a-service (SaaS) offerings, such as RDS, are extremely convenient and very powerful. But they’re far from perfect. Generally, AWS products are much slower to provision compared to running the software directly yourself. Plus, they tend to be several software versions behind the most recent releases.

In databases, this is a fine tradeoff. You almost never create databases, so slow startup doesn’t matter, and you want extremely stable, well-tested, slightly older software. If you try to stay on the bleeding edge, you’ll just end up bloody. But for other services, being locked into Amazon’s product offerings makes less sense.

Once you have an RDS instance, you need some way for customers to get their data into it and for you to interact with that data once it’s there. Specifically, you need web servers. And while Amazon’s Elastic Beanstalk (AWS’ platform to deploy and scale web applications) is conceptually appealing, in practice it is extremely slow, its software support is middling, and debugging problems on it can be painful.

But AWS’ primary offering has always been the Elastic Compute Cloud (EC2). Running EC2 nodes is fast and easy, and supports any kind of software your application needs. And, unsurprisingly, EC2 offers exceptional tools to mitigate downtime and failure, including auto scaling groups (ASGs). With an ASG, Amazon keeps as many servers up as you specify, even across availability zones. If a server becomes unresponsive or passes other thresholds defined by you (such as amount of incoming traffic or CPU usage), new nodes will automatically spin up.

New servers by themselves do you no good. You need a process to make sure new nodes are provisioned correctly and consistently, so a new server joining your auto scaling group also has your web software and credentials to access your database. Here, you can take advantage of another Amazon tool, the Amazon Machine Image (or AMI). An AMI is a saved copy of an EC2 instance. Using an AMI, AWS can spin up a new node that is an exact copy of the machine that generated the AMI.
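
Putting those two ideas together, here is a hedged boto3 sketch (the AMI ID, names, and sizes are placeholders) that registers a launch configuration built from an AMI and asks EC2 Auto Scaling to keep a minimum number of instances running across two availability zones.

    import boto3  # assumed installed and configured with AWS credentials

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # The launch configuration tells the ASG how to build each new server.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc-v1",
        ImageId="ami-0123456789abcdef0",   # placeholder AMI baked with your application
        InstanceType="t3.small",
    )

    # The ASG keeps at least MinSize healthy instances running across both AZs.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc-v1",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
        HealthCheckType="EC2",             # replace instances that fail health checks
    )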

Packer, by HashiCorp, is a free, open-source tool that makes it easy to create and save AMIs, and it is only one of many tools that simplify AMI creation. AMIs are the fundamental building blocks of EC2; with clever AMI use you’ll be able to create new, functional servers in less than five minutes.

It’s common to need additional provisioning and configuration even after an AMI is started—perhaps you want to make sure the latest version of your application is downloaded onto your servers from GitHub, or that the most recent security patches have been applied to your installed packages. In cases such as these, a provisioning system is a necessity. Chef and Puppet are the two biggest players in this space, and both offer excellent integrations with AWS. The ideal setup is an AMI with credentials to automatically connect to your Chef or Puppet provisioning system, which then ensures the newly created node is as up to date as possible.

 

Final Thoughts

By relying on auto scaling groups, AMIs, and a sensible provisioning system, you can create a system that is completely repeatable and consistent. Any server could go down and be replaced, or 10 more servers could enter your load balancer, and the process would be seamless, automatic, and almost invisible to you.

And that’s the secret of why Amazon’s services rarely go down. It’s not the hundreds of engineers, or dozens of datacenters, or even the clever products: it’s the automation. Failure happens, but if you detect it early, isolate it as much as possible, and recover from it seamlessly—all without requiring human intervention—you’ll be back on your feet before you even know a problem occurred.

There are plenty of potential concerns with powerful automated systems like this. How do you ensure new servers are ones provisioned by you, and not an attacker trying to join nodes to your cluster? How do you make sure transmitted copies of your databases aren’t compromised? How do you prevent a thousand nodes from accidentally starting up and dropping a massive AWS bill into your lap? This overview of the techniques AWS leverages to prevent downtime and isolate failure should serve as a good jumping-off point to those more complicated concepts. Ultimately, downtime is impossible to prevent, but you can keep it from broadly affecting your customers. Working to keep failure contained and recovery as rapid as possible leads to a better experience both for you and your users.


How Java’s Built-In Garbage Collection Will Make Your Life Better (Most of the Time) [Infographic]

“No provision need be made for the user to program the return of registers to the free-storage list.”

This line (along with the dozen or so that followed it) is buried in the middle of John McCarthy’s landmark paper, “Recursive Functions of Symbolic Expressions and Their Computation by Machine,” published in 1960. It is the first known description of automated memory management.

In specifying how memory should be managed in Lisp, McCarthy was able to exclude explicit memory management, relieving developers of that tedious chore. What makes this story truly amazing is that these few words inspired others to incorporate some form of automated memory management—otherwise known as garbage collection (GC)—into more than three quarters of the more widely used languages and runtimes developed since then. This list includes the two most popular platforms, Java’s Virtual Machine (JVM) and .NET’s Common Language Runtime (CLR), as well as the up-and-coming Go language from Google. GC exists not just on big iron but on mobile platforms such as Android’s Dalvik, Android Runtime, and Apple’s Swift. You can even find GC running in your web browser as well as on hardware devices such as SSDs. Let’s explore some of the reasons why the industry prefers automated over manual memory management.

Automatic Memory Management’s Humble Beginnings

So, how did McCarthy devise automated memory management? First, the Lisp engine decomposed Lisp expressions into sub-expressions, and each S-expression was stored in a single-word node in a linked list. Nodes were allocated from a free list, but they did not have to be returned to the free list until it ran empty.

Once the free list was empty, the runtime traced through the linked list and marked all reachable nodes. Next, it scanned through the buffer containing all nodes, and returned unmarked nodes to the free list. With the free-list refilled, the application would continue on.

Today, this is known as a single-space, in-place, tracing garbage collection. The implementation was quite rudimentary: it only had to deal with an acyclic directed graph where all nodes were exactly the same size, and only a single thread ran, executing either application code or the garbage collector. In contrast, today’s collectors in the JVM must cope with a directed graph containing cycles and nodes that are not uniformly sized, and the JVM is multi-threaded, running on multi-core CPUs and possibly multi-socket motherboards. Consequently, today’s implementations are far more complex—to the point that GC experts struggle to predict performance in any given situation.
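
The jump from an acyclic graph to one with cycles is easy to demonstrate in any language with a tracing collector. The short sketch below uses Python’s gc module purely as an analogy (it is Python’s collector, not the JVM’s): reference counting alone cannot reclaim two objects that point at each other, so a tracing pass has to find and free the cycle.

    # Why cycles need a tracing collector, illustrated with Python's gc module.
    import gc

    class Node:
        def __init__(self, name):
            self.name = name
            self.other = None

    a, b = Node("a"), Node("b")
    a.other, b.other = b, a   # a reference cycle: a -> b -> a

    del a, b                  # no external references remain, but the cycle
                              # keeps each object's reference count above zero

    unreachable = gc.collect()  # the tracing pass finds and frees the cycle
    print(f"Collector reclaimed {unreachable} unreachable objects")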

Slow Going: Garbage Collection Pause Time

When the Lisp garbage collector ran, the application stalled. In the initial versions of Lisp it was common for the collector to consume 30 to 40 percent of the CPU cycles; on 1960s hardware this could cause the application to stall, in what is known as a stop-the-world pause, for several minutes. The benefit was that allocation had barely any impact on application throughput (the amount of useful work done). This implementation highlighted the constant tension between pause time and impact on application throughput that persists to this day.

In general, the better the pause time characteristic of the collector, the more impact it has on application throughput. The current implementations in Java all come with pause time/overhead costs. The parallel collections come with long pause times and low overheads, while the mostly concurrent collectors have shorter pause times and consume more computing resources (both memory and CPU).

The goal of any GC implementer is to maximize the minimum amount of processor time that mutator threads are guaranteed to receive, a concept known as minimum mutator utilization (MMU). Even so, current GC overheads can run well under 5 percent, versus the 15 to 20 percent overhead you will experience in a typical C++ application.

So why don’t you feel this overhead in C/C++ the way you do in a Java application? Because the overhead is spread evenly throughout the C/C++ runtime, it is effectively invisible to end users. In fact, the biggest complaint about managed memory is that it pauses your application at unpredictable times for an unpredictable amount of time.

Garbage Collection Advancements

Sun’s initial Java garbage collector did nothing to improve the image of garbage collection. Its single-threaded, single-spaced implementation stalled applications for long periods of time and created a significant drag on allocation rates. It wasn’t until Java 2 that a generational memory pool scheme—along with parallel, mostly concurrent, and incremental collectors—was introduced. While these collectors offered improved pause time characteristics, pause times continue to be problematic. Moreover, these implementations are so complex that it’s unlikely most developers have the experience necessary to tune them. To further complicate the picture, IBM, Azul, and Red Hat each have one or more garbage collectors of their own, each with its own history, advantages, and quirks. In addition, a number of companies, including SAP, Twitter, Google, and Alibaba, have internal JVM teams maintaining modified versions of these garbage collectors.

Costs and Benefits of Modern-Day Garbage Collection

Over time, the addition of alternative, more complex allocation paths led to huge improvements in allocation overhead. For example, a fast-path allocation in the JVM is now approximately 30 times faster than a typical allocation in C/C++. The complication: only data that passes an escape analysis test is eligible for fast-path allocation. (Fortunately, the vast majority of our data passes this test and benefits from this alternate allocation path.)

Another advantage is the reduced cost and simplified cost model that come with evacuating collectors. In this scheme, the collector copies live data to another memory pool, so there is no cost to recover short-lived data. This isn’t an invitation to allocate ad nauseam, though: every allocation carries a cost, and high allocation rates trigger more frequent GC activity and accumulate extra copy costs. While evacuating collectors make GC more efficient and predictable, significant resource costs remain.

That leads us to memory. Managed memory demands that you retain at least five times more memory than manual memory management needs. There are also times when the developer knows for certain that data should be freed; in those cases it is cheaper to free it explicitly than to have a collector reason through the decision. It was these costs that originally led Apple to choose manual memory management for Objective-C. In Swift, Apple chose reference counting, adding annotations for weak and unowned references to help the collector cope with circular references.

There are other intangible or difficult-to-measure costs that can be attributed to design decisions in the runtime. For example, the loss of control over memory layouts can result in application performance being dominated by L2 cache misses and cache line densities. The performance hit in these cases can easily exceed a factor of 10:1. One of the challenges for future implementers is to allow for better control of memory layouts.

Looking back at how poorly GC performed when first introduced into Lisp and the long and often frustrating road to its current state, it’s hard to imagine why anyone building a runtime would want to use managed memory. But consider that if you manually manage memory, you need access to the underlying reference system—and that means the language needs added syntax to manipulate memory pointers.

Languages that rely on managed memory consistently lack the syntax needed to manage pointers because of the memory consistency guarantee. That guarantee states that all pointers will point where they should without dangling (null) pointers waiting to blow up the runtime, if you should happen to step on them. The runtime can’t make this guarantee if developers are allowed to directly create and manipulate pointers. As an added bonus, removing them from the language removes indirection, one of the more difficult concepts for developers to master. Quite often bugs are a result of a developer engaged in the mental gymnastics required to juggle a multitude of competing concerns and getting it wrong. If this mix contains reasoning through application logic, along with manual memory management and different memory access modes, bugs likely appear in the code. In fact, bugs in systems that rely on manual memory management are among the most serious and largest source of security holes in our systems today.

To prevent these types of bugs the developer always has to ask, “Do I still have a viable reference to this data that prevents me from freeing it?” Often the answer to this question is, “I don’t know.” If a reference to that data was passed to another component in the system, it’s almost impossible to know if memory can safely be freed. As we all know too well, pointer bugs will lead to data corruption or, in the best case, a SIGSEGV.

Removing pointers from the picture tends to yield code that is more readable and easier to reason about and maintain. GC knows when it can safely reclaim memory, and this attribute allows projects to safely consume third-party components, something that rarely happens in languages with manual memory management.

Conclusion

At its best, memory management can be described as a tedious bookkeeping task. If memory management can be crossed off the to-do list, then developers tend to be more productive and produce far fewer bugs. We have also seen that GC is not a panacea as it comes with its own set of problems. But thankfully the march toward better implementations continues.

Go’s collector now uses a concurrent tracing design to reduce overheads and minimize pause times. Azul has driven pause times down, while Oracle and IBM continue to build collectors that are better suited to very large heaps containing significant volumes of data. Red Hat has entered the fray with Shenandoah, a collector that aims to all but eliminate pause times from the runtime. Meanwhile, Twitter and Google continue to improve the existing collectors so they remain competitive with the newer ones.

 


Why User Experience Is Critical to Your Business Outcomes [Infographic]

There are customers who love engaging with your business, and those who don’t. Now more than ever, this dichotomy has significant competitive implications. Social media enables the user experience to go viral, which gives a megaphone to your business’s most dissatisfied customers. This—among less dramatic reasons—is why modern organizations have placed the discipline of User Experience, or UX, at the core of what they do.

UX encompasses all aspects of the end user’s interaction with a business’s products and services. The UX mindset is rooted in how your firm approaches design, research, and strategy. And when design and research are elevated by a strategy that is directly accountable to user needs, the results can be transformative. A company that is UX-driven better sets itself up for growth and repeat business—not just because their application features do the job, but because customers love doing the job with their application features. Read on to learn how to promote the culture of UX in your organization.

 

Start With a Strategy

Think of a positive experience you had while shopping online. You might remember completing a successful search, using informative tools for comparison shopping, and cruising through a frictionless checkout process. Now bring to mind a negative interaction: zero search results, puzzling navigation and taxonomy, unclear item stock levels, no guest checkout. Companies notorious for negative customer experiences neglect to meaningfully connect UX with their strategic vision. This applies to brick-and-mortar stores, websites, and mobile apps alike.

The user experience is dynamic, and to optimize for it requires ongoing dialogue. A UX-conscious firm creates processes to actively listen to, understand, and give voice to current and potential customers. A company that values the user experience hears this voice at every step of production and responds accordingly.

It follows, then, that a firm’s design and research activities should carry the lion’s share of a user-centered vision. Let’s turn now to the hallmarks of health in both design and research.

Design

  • Design is iterative, and evolves from sketches to high fidelity through a user-conscious process. If you’re not agile, make a plan to get there.
  • Both visual and interactive design should receive equal attention. Don’t focus on one at the other’s expense.
  • Establish standards and best practices that guide most day-to-day design decisions.
  • A qualified design practitioner is comfortable in agile environments and cognizant of strategy and research when they’re on the job. In this sense, the best designers are creative business solutions specialists who are sympathetic toward the end user experience.

Research

  • Most research falls into two categories: generative (or discovery) and evaluative. Discover what should be built and evaluate how it should be improved.
  • Generative research includes ethnography, needs analysis, surveys, and focus groups.
  • Evaluative research typically employs usability tests, heuristic evaluations, A-B testing, and competitive best-of-breed tests. It’s not uncommon for evaluative research to occasionally yield generative results.
  • Quantitative and qualitative research methodologies are typically best suited to determine the what and the why of user pain points, respectively. Try not to mix the two disciplines in the same study.
  • A qualified research practitioner has a strong command of qualitative and quantitative methodologies and understands when each is called for. Often, they are the first to hear a user’s pain and will naturally advocate for them.
  • Today, the speed of research is just as important as its quality: those who research faster can make UX improvements faster. To this end, end-user measurement and analytics provide valuable insights into the user’s journey, the smartphone or tablet they use, and where they are located.

Next Steps

Investing in the following areas will help your company bring the user experience into focus. Even modern organizations will benefit from these actions, as customer needs are always in flux.

  • Know your users
    All too often, products are built on myths of users, not on an accurate assessment of their needs. If you haven’t invested in quality persona development, do it. The earlier the better.
  • Benchmark
    To assist in justifying UX ROI, you need to know where things stand today. What happens when users encounter your service? Why are they abandoning it? What are the top three pain points? Click-stream analysis, usability studies, and heuristic evaluations are tried-and-true ways of getting your bearings. Evaluate as many user touchpoints as feasible.
  • Strengthen your data awareness
    Invest in measurement and analytics architecture. Structure your organization so that the people who first receive analytics are not incentivized to skew the data in any way. Research, product, and design groups should enjoy equal access to quality metrics; otherwise, an “us versus them” mentality can develop. Invest in end-user measurement and analytics solutions to get closer to your users and understand how every image, function, and feature impacts UX.
  • Define your vision and your customer’s place within it
    Strategic conversations frequently exclude user experience. To foster a true UX culture, your organization should proactively give customer needs a voice in most critical decisions. This means establishing goals and accountability pursuant to those needs.

How to Measure the Results

Firms with lower UX awareness or limited availability of analytics often focus their attention on metrics gleaned within or between research exercises (such as time to task completion or benchmark ratings). While this field of view can prove useful in evaluating customer interactions in isolation, it often misses the larger context of satisfaction across the full spectrum of brand touchpoints.

A mature approach to measuring the impact of user experience taps into diverse data sources. Taken as a whole, measurements including surveys, click stream analytics, call center metrics, and conversion rates can tell a compelling story. Add to that the insight provided by qualitative research methodologies, and a panoramic view of the user journey begins to emerge. Today’s most UX-forward organizations consider all these factors, along with more conventional business KPIs, in order to justify their focus on user needs.

Conclusion

Drawing up a strategic UX roadmap is a great way to link tactics and methods with a user-centered vision for success. This helps promote holistic thinking about how your customers interact with your brand. What should result is an organization that is both tuned in to its users’ needs and experiences and ready to be changed by what it learns.
