CGI uses AppDynamics unified monitoring to gain performance insight

This blog post is a summary of a case study that I conducted with CGI. You can see the full case study HERE.

The vast majority of the AppDynamics customers I interview maintain only their own in-house software applications. Speaking to CGI about its performance demands gave me an interesting perspective on the unique position occupied by its client-services consulting model.

CGI specializes in IT and consulting services for clients across the globe. Not only does it have its own internal environments to manage, but it also serves requests from the upstream and downstream tiers it connects with. Technically, it operates as one tier in a continuous chain of upstream and downstream services that together make up an entire business transaction.

CGI’s environment

I sat down with Steve Perkins, the Service Delivery Manager responsible for infrastructure delivery and end-to-end service management. We spoke a bit about the environment, challenges, and the selection process CGI underwent in choosing an APM solution. CGI is powered by:

  • Oracle technologies, including MySQL
  • Various open-source solutions
  • Java
  • .NET application stacks

Steve’s job is to measure the end-to-end performance across the entire lifecycle of the requests that make up CGI’s business transactions. It came as no surprise that their demands eventually led to the need for a more robust, complete, and unified APM platform. Steve and his team began to qualify AppDynamics, and he discussed with me their challenges and the selection criteria that eventually led to their adoption of AppDynamics.

Challenges & Selection Process

Steve explained that their biggest challenge was understanding a full business transaction from end to end. The difficulty is that a single transaction spans various tiers, including requests to both upstream and downstream services. Correlation and code visibility in a highly distributed transaction such as theirs are imperative for diagnosing application problems and optimizing performance.

The selection process began when a partner in the UK recommended AppDynamics. This eventually led Steve and his team to run a trial case study to discover what AppDynamics could do for them. Specifically, they were looking for a system that could:

  • Help manage their infrastructure
  • Manage the SLAs with their existing customers
  • Demonstrate the full end-to-end performance of their system

Implementation & Benefits

After this case study, CGI concluded that AppDynamics was the APM platform of choice for its needs. Over the course of about 9-12 months, it expanded the use of AppDynamics across all its environments. With the aid of AppDynamics, it was able to build a more robust integration between the following environments:

  • Development
  • Testing
  • Production

Steve explained that this integration between development, testing, and production environments provides an accurate feedback loop that allows them to iterate more effectively and efficiently. Based on his past experience at multiple companies, Steve said he has seen his share of tools, whether built in-house or by a third party, and admitted that he has never seen anything as good as what AppDynamics provides.

Ten Minutes with Steve Harrick of Institutional Venture Partners

I recently had the opportunity to sit down with Steve Harrick of Institutional Venture Partners (IVP) to discuss current trends and the future outlook within the technology industry. Along with leading IVP’s investment in AppDynamics and serving as a Board Observer, Steve also led investments in notable IT companies such as Pure Storage, Sumo Logic, MySQL, Spiceworks, and several others.

Here’s a quick insight into our chat…

In your opinion, what category of technology will be most impactful to businesses in the next three to five years?

Three areas are top of mind. Security is foremost. The IT landscape has changed dramatically over the last several years. Global smartphone adoption happened even faster than we expected, and file sharing via Dropbox, iCloud, and other services is common practice. Accordingly, the boundaries between your personal and professional life are extremely porous. Huge volumes of confidential information move among these spheres without adequate security, or checks and balances. It’s become increasingly difficult to ensure your data is safe or that your company’s data policies are enforced. All of your company’s intellectual property and hard work could be compromised without the right security tools and appropriate emergency response plans.

Next is Data Analytics. IDC recently reported that the digital universe is doubling in size every two years, and by 2020 the amount of data we create and copy annually will reach 44 zettabytes, or 44 trillion gigabytes. It’s essential that companies are able to store, analyze, and make sense of all this information in order to make better decisions. Businesses that learn how to efficiently leverage all this information will enjoy a distinct competitive advantage; those that don’t will quickly become dinosaurs.

And finally, Application Performance Management is becoming essential as we navigate a highly distributed and rapidly changing IT landscape. We’ve already seen how AppDynamics’ software can test, measure and monitor app performance in heterogeneous environments. The next phase involves a unified solution that continually monitors an enterprise’s entire infrastructure, including not only applications but databases, servers and network performance.

What is the next category to be ‘Ubered’?

A lot of startups come to us claiming to be the “Uber of this,” or the “Uber of that.” Uber has a remarkably compelling business model and is global in its ambitions, but it is a unique company. People shouldn’t confuse Uber’s on-demand product with a simple marketplace model that connects buyers and sellers. Marketplace models can only provide value at scale, so you have to be very cautious when evaluating small marketplace startups. That said, I believe the marketplace model makes sense for healthcare and health insurance because in that market, you have an odd confluence of fragmentation and regulation. Zenefits’ free software allows customers to navigate the highly distributed HR and benefits provider landscape. The genius of Zenefits’ business model is that the company uses software to create a marketplace and in doing so, it disrupts traditional brokers – similar to the disruption that Uber has brought to the taxi industry.

How would you describe the future role of the CIO?

CIOs are responsible for each of the areas I outlined in my first answer. But perhaps more importantly, a successful CIO must efficiently manage distributed operations, whether we’re talking about people or processors. It is becoming increasingly expensive to scale operations in the Bay Area, and that’s forcing companies to establish second and third sites to cut costs. They might locate their sales team in another state, or their manufacturing in another country. And the CIO must manage the company’s information systems, on-premises or in the cloud, in a manner that is not only distributed but consistent, efficient, and cost effective.

If you were not an investor, what would you be doing right now?

If I wasn’t working and my kids were suddenly in college, maybe I’d go fly fishing around the world with my wife.  If I continued to work and wasn’t in VC, I would love to convince someone that I was qualified to become the president of a major research university like Stanford, Harvard or Yale. Great universities teach students responsibility and put them on a path not to memorize and repeat, but to think effectively and contribute their gifts to society. I believe it would be exciting to fashion an environment that would have a positive impact upon the leaders of tomorrow. This is an age of advancement and technology is leading the way.

Thanks again to Steve for his time and insights!

Docker: The Secret Sauce to Fuel Innovation

Much has already been written about the virtues of Docker, and of containers in general, along with related projects like CoreOS and Kubernetes: how life-changing Docker is, how innovative, and so on. However, the real secret to Docker’s success in the marketplace is how quietly it fuels innovation. Innovation and R&D are the lifeblood of today’s technology success. Companies, no matter how large, must iterate constantly to stay ahead of their legacy competitors and the new upstarts threatening disruption. The rise of Agile methodologies and DevOps teams comes with the expectation of more releases, more features, and ultimately a better product.

How can you maintain this pace of innovation? Allow your developers to develop, instead of focusing on tedious — and time consuming — tasks dealing with distributed application upkeep and maintenance.

Pre-Docker Life

At AppDynamics, we primarily use Docker for our field enablement resources, such as demo environments. Before Docker, we would have to spin up a virtual machine and create some simulated load inside the environment to show the benefits of AppDynamics’ monitoring. There was no quick or easy way to make an update to the VM, even a small one. Any minor change (which, as an Agile company, happened often) required some heavy lifting from our developers. There was no version control.

Productivity Gain

Removing redundant work such as updating a demo environment VM (which, let’s face it, devs don’t want to do in the first place) frees up vital time for the developers to get back to doing what they do best. Setting up machines becomes obsolete, and devs gonna dev.

At any company, you’re likely paying a substantial wage for quality engineers. With that expense, you should expect innovation.

Docker, in our case, also removes project abandonment risk. If a project owner is sick or leaves the company, there is typically an audit process of analyzing the code. More often than not, a good chunk would have to be rebuilt in a more consistent manner. With Docker, you package your code into a standardized container, allowing a seamless handoff to the next project owner.

Fostering DevOps

Along with the handoff to the next project owner, the handoff between dev, QA, and Ops becomes seamless as well, which is a main foundation of DevOps. The way we use Docker, and I assume others do as well, allows us to maintain repeatable processes and enable our field teams.

This shareability allows us to incorporate best practices across the entire team and present a consistent front in our engagements.

Interested to see how AppDynamics and Docker work together? Check out this blog!

Monitoring Amazon SQS with AppDynamics

AppDynamics recently announced support for applications running an expanded suite of services from Amazon Web Services (AWS). As many enterprises are migrating or deploying their new applications in the AWS Cloud, it is important to have deeper insight into, and control over, the applications and the underlying infrastructure in order to ensure they can deliver an exceptional end-user experience.

AppDynamics offers the same performance monitoring, management, automated processes, and analytics for applications running on AWS that are available for applications running on-premises. With the AppDynamics Summer ’15 Release, applications deployed on AWS are now easily instrumented to provide complete visibility and control into an expanded set of AWS services, including Amazon Simple Queue Service (Amazon SQS), Amazon Simple Storage Service (Amazon S3), and Amazon DynamoDB.

In this blog, I will focus on monitoring of applications using Amazon SQS. As per the AWS web page, “Amazon SQS is a fast, reliable, scalable, fully managed message queuing service. SQS makes it simple and cost-effective to decouple the components of a cloud application. You can use SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available.”

The Amazon SQS Java Messaging Library, which is a Java Messaging Service (JMS) interface to Amazon SQS, enables you to use Amazon SQS in the applications that already use JMS.

Message queues in SQS can be created either manually, or via the SQS Java Messaging Library and AWS Java SDK, and messages can be sent or received to the queues in various ways for different use cases.

Here is an application flow map in AppDynamics for a sample application using Amazon SQS for the following three use cases:

  • Basic send/receive

  • Batched send/receive

  • Async send/receive

 

[Screenshot: AppDynamics application flow map for the sample Amazon SQS application]

 

AppDynamics supports all exit points for Amazon SQS out of the box. Each exit point is treated exactly like JMS, .NET messaging, etc. for all of the use cases outlined above.

At this time, the entry point for Amazon SQS is supported only as part of a continuing transaction. For example, if a transaction originates at some tier "foo" and continues via an exit through an SQS queue to a downstream tier "bar", the transaction may continue on "bar" given the appropriate configuration. The user must specify a custom-interceptors.xml configuration file to apply the special SQS entry point interceptor to a given method and to configure where to obtain the correlation header.

My colleague Anthony Kilman shared the following example for the case where a user’s downstream application processes messages received from an SQS queue:

public abstract class ASQSConsumer extends ASQSActor {

    …

    protected void processMessage(Message message) {
        log.info("  Message");
        log.info("  MessageId:     " + message.getMessageId());
        log.info("  ReceiptHandle: " + message.getReceiptHandle());
        log.info("  MD5OfBody:     " + message.getMD5OfBody());
        log.info("  Body:          " + message.getBody());
        for (Map.Entry<String, String> entry : message.getAttributes().entrySet()) {
            log.info("  Attribute");
            log.info("    Name:  " + entry.getKey());
            log.info("    Value: " + entry.getValue());
        }
        Map<String, MessageAttributeValue> messageAttributes = message.getMessageAttributes();
        log.info("Message attributes: " + messageAttributes);
    }

    …

}

Then, the configuration to continue the transaction would be as follows:

<custom-interceptors>
    <custom-interceptor>
        <interceptor-class-name>com.singularity.SQSEntryPoint</interceptor-class-name>
        <match-class type="matches-class">
            <name filter-type="equals">aws.sqs.test.ASQSConsumer</name>
        </match-class>
        <match-method>
            <name>processMessage</name>
        </match-method>
        <configuration type="param" param-index="0" operation="getter-chain" operation-config="this"/>
    </custom-interceptor>
</custom-interceptors>

This configuration will result in a snapshot like the following:

 

[Screenshot: transaction snapshot flow map showing the transaction continuing through the SQS queue]

 

To learn more about cloud application performance monitoring and AWS cloud, please go to http://www.appdynamics.com/aws/.

Read our complimentary white paper, Managing the Performance of Cloud-Based Applications.

 

How Kraft is able to scale using AppDynamics

This blog post is a summary of a case study that I conducted with Kraft. You can see the full case study HERE.

Have you ever come across KraftRecipes.com? KraftRecipes.com is no stranger to large amounts of traffic. A collection of excellent recipes for all sorts of meal ideas, KraftRecipes.com receives over 40,000,000 hits in a single month during its peak season! That is a lot of traffic, and with that much demand the team has to make sure every item on its performance checklist is marked off; you can be sure that part of that list includes AppDynamics.

The Challenges with rebuilding KraftRecipes.com

In planning the rebuild of the website, the team at Kraft already knew how much traffic to expect. The hurdles of their website redesign included:

  • Choosing the right tool for performance monitoring
  • Meeting the traffic demands of their website
  • Reaching their targeted site performance KPIs
  • Serving 2,600 requests per second

What I also found interesting is that the website team included several vendors. This isn’t unheard of, especially with larger organizations that contract out various pieces of their applications to specialized contractors. The challenge this poses, however, is a potential disconnect with regard to performance monitoring. Luckily, one of the vendors was an existing user of AppDynamics and highly recommended us to the team at Kraft.

How did AppDynamics immediately help?

Once the agents were installed, auto-discovery took effect immediately. In Kraft’s case, the AppDynamics agents discovered:

  • Code issues exposed by snapshots with full call graphs
  • Database queries, specifically stored procedures
  • Latency and bottlenecks within the infrastructure itself

Once Kraft logged in for the first time, they were immediately presented with their application topology map, also known as the flowmap. The flowmap is one of the key visuals within AppDynamics: you are immediately presented with a visual map of the architecture of your entire application environment. All requests and traffic flows are auto-discovered and displayed in the same view.

Scalability concerns were confirmed by AppDynamics

The team at Kraft spent an entire weekend performing scalability and performance load testing. They had a target of successfully serving 2,600 requests per second. What they discovered by the end of the weekend was that their existing infrastructure was not able to meet that demand. How did they confirm that? They used AppDynamics to monitor their app while conducting the load tests. More importantly, AppDynamics helped the performance teams pinpoint the exact causes of their performance slowdowns.

Once AppDynamics pointed out the bottlenecks, Kraft was able to fix the problem code and also work with Rackspace to scale out their environment to handle the traffic that was hitting the site. The fixes were in place by the end of the day, so Kraft was able to handle the traffic on Thanksgiving day and maintain high availability through the Christmas season.

How AppDynamics became a part of KraftRecipes.com

Today, KraftRecipes.com benefits from the visibility provided by AppDynamics to ensure it is running at optimal performance. The team at Kraft has integrated AppDynamics into their deployment workflow, including using the Virtual War Room feature to collaborate with external vendors by:

  • Chatting among their multi-vendor networks
  • Making system changes in real time
  • Viewing and annotating events on the second interval charts
  • Consolidating workflow into a single collaborative view

This is why customers such as Kraft depend on AppDynamics: so they can focus on what they do best, building amazing consumer applications for their visitors.

So You Want to Buy Some APM – You Need Intelligent Contingencies

This post originally appeared on LinkedIn.

Consider that the benefits derived from proactive monitoring do not necessarily come from just one tool; tool selection is a key element, but it is not the only one. As Grady Booch put it, “A fool with a tool is still a fool.”

Listening to what your business partners need from an application perspective and then delivering on that within a tight timeline and a modest budget can create challenging opportunities for any IT Manager. 

Think about how extensible your APM solution should be.  If it is flexible enough to integrate ubiquitously, and dynamic enough to be configured rapidly, then you will be poised for expansion and ready to begin monitoring anything that comes your way. 

Intelligent Contingencies

Focusing on the integration touch points with existing ITSM processes will help anchor a thoughtful APM solution into the IT culture.  A look into ITIL’s Continual Service Improvement (CSI) model and the Application Performance Management (APM) framework indicates they are both focused on improvement. I see them as being two sides of the same coin.

APM defines the approach and tool-sets that CSI uses while leveraging specific processes in Service Design, Service Transition, and Service Operation.  Within the CSI model there are certain ITIL processes that weave themselves in and through the APM methodology that create a fabric of continuous improvement for application performance.

The Incident Management Process is one of these threads and is germane to a successful APM strategy. This process is focused on going from red to green and has an immediate benefit when APM event flow is integrated directly into it.

Here, I’ve listed three elements to consider when outlining an Enterprise Monitoring Strategy that includes integration of an APM Framework.

Build an Automation Center

Integrate, Correlate, and Automate — Consider that it is the correlation of events and the amalgamation of metrics that bring value to the business by way of real-time reporting, and it’s the way the business interprets the accuracy of those metrics that determines your success.

It is also the automation of alerts into actionable incidents, which support engineers can use to quickly troubleshoot issues, that provides value to IT.

If an event occurs and no one sees it, believes it, or takes action on it, APM’s value can be severely diminished and you run the risk of owning “shelfware.” – L.Dragich, APM Digest June 2012

Look to customize how you use the APM framework for the business needs you are trying to support, and then integrate that output into your existing ITSM /ITIL processes where you have the biggest need.

Get Your Ops Team Involved

Development and Operations view APM in a slightly different light, largely because it is a concept that consists of multiple complementary approaches for addressing issues surrounding application performance. Understanding the different requirements for Dev and Ops is one of the key elements needed for APM adoption to take off in both areas.

One important distinction to make is that in our situation, the Incident Management, Problem Management, and Change Management processes were already established in the culture for a year prior to implementing an APM solution. This allowed us to integrate right into the Incident Management process, allowing for some quick wins once we got the automation and event correlation in place.

Create a Feedback Loop

It is not necessarily the number of features or technical stamina of each monitoring tool to process large volumes of data that will make an APM implementation successful; it’s the choices you make in putting them together. 

When an event occurs, be prepared to answer the question: what does normal look like? Operationally, there are things you may not want to think about all of the time (e.g. standard deviations, averages, percentiles, etc.), but you have to think about them long enough to create the most accurate picture possible.

Creating an amplified feedback loop between Development and Operations (one of the core tenets of DevOps) to communicate the subtleties in each environment will help facilitate quick wins and build trust across the silos.

Once you build awareness in the organization that you have a bird’s eye view of the technical landscape and the ability to monitor the ecosystem of each application (as an ecologist), people become more meticulous when introducing new elements into the environment. They know you are watching, taking samples, and keeping a scorecard on successful deployments and operational stability.

Conclusion

It’s important to consider, however, that with the abundance of monitoring tools available in the market, you don’t buy APM; you develop it as a strategy and then acquire the tools you need to realize the vision. Using APM as a cornerstone of a well-thought-out Enterprise Monitoring strategy will allow organizations to support the speed of Development without compromising the stability of Operations, improving the Customer Experience.

China bucking the trends of IT growth

I’m writing this blog while flying over China on my way to Asia. I always enjoy spending time on this side of the world, since there is so much change and innovation happening at massive scale. At the same time, this pace of change creates a lot of chaos as well as opportunity. When analyzing the market opportunities within IT, the changes are drastic.

Within China, IT is being taken far more seriously than it ever has been, and it’s earning increased spending. According to Gartner (http://www.gartner.com/document/2974450 2015 CIO Agenda: A China Perspective, 30 January 2015, Analysts: Owen Chen, Tina T. Tang, Changhua Li), China’s growth in IT spending is far higher than the global figures collected in the same CIO survey. This means that enterprises must appeal to and focus on opportunities in these emerging markets, including handling the differences between cultures, user perception, and the understanding of brands.


During my time as an analyst, I spent a good deal of time speaking with clients in China, both via inquiry calls and in my travels, and heard a resounding demand to change and improve service levels. Improving monitoring and resiliency, and adopting technologies such as public cloud, created new opportunities and challenges. APM technologies provide visibility into the end user, beyond the traditional focus on infrastructure component service levels. Clearly, internal users and external customers are demanding that IT deliver better services, and IT is responding and should re-focus on the user and application perspective.


Finally, due to the growth currently happening, cost issues are far less prevalent within the data Gartner has collected for China than what we see globally. We have a growing set of customers in China, and we expect to push much harder in this area to help customers solve new problems with our highly intelligent and easy-to-implement software.

Sorry for the delay between blog posts; I’ve been a bit tied up the last month. Feel free to reach out on Twitter @jkowall. I appreciate your readership.

Introduction to PHP Security – Part 2

 

In Part 1 of this series, we were introduced to some of the fundamental security flaws that may afflict your PHP applications. In Part 2, let’s dive a bit deeper into some of the more advanced security flaws that may afflict your environment. Truth be told, there are potentially an infinite number of ways in which a software product can be compromised and have its security breached. Gene Spafford, computer security expert at Purdue University, said, “the only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards.” Spafford has a point: any system that is exposed to the public may have a flaw, and it is only a matter of time before the flaws are discovered and (hopefully) quickly patched.

“The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards.” – Gene Spafford

Security is, without a doubt, a moving target

New security flaws are regularly found, and routine patches are immediately released for most of the major software applications you utilize in your application stack. Whether it is your web or database server, your operating system, your PHP runtime, or even the MVC framework your team adopted, your point(s) of exposure may exist anywhere within the various components that make up your application ecosystem.

While the professional community has evolved and best practices are becoming standard practice, there is still reason for concern, and you should never take your security lightly.

Security Best Practices

The term best practice suggests a broad gray area of generally accepted knowledge that most developers and systems administrators adhere to. For the purposes of this article, I am using it as a catch-all for the general techniques you should be aware of. Each of these topics could, of course, expand into an extensive conversation of its own, but I’ll cover them from a high-level perspective.

Turn SSL on at all times

When your application is accessed over the regular port 80, you are serving content to your users over an insecure connection. This may not be an issue if you are not collecting private user data and only serving public-friendly content. For a 5-page information website, port 80 may not be a problem. However, what if you allow your users to log in and store extremely sensitive financial and personal information? Not only should your login page itself be protected via SSL over port 443, but you should enforce SSL over every connection with your web application (private and public information). SSL has become the de facto standard for most of the common web applications, including Facebook and Google.

If you are unsure how your application is being accessed, take a look at the $_SERVER superglobal array. You can add a check at the beginning of your application to redirect to HTTPS if a user attempts to access your site via HTTP, as shown below.
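
As a rough illustration (not from the original post), here is a minimal sketch of such a check. It assumes a typical setup where the web server populates $_SERVER['HTTPS']; behind a proxy or load balancer the detail you inspect may differ.

<?php
// Minimal sketch: force HTTPS for every request.
// Assumes $_SERVER['HTTPS'] is populated by the web server; behind a load
// balancer you may need to inspect the X-Forwarded-Proto header instead.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    $secureUrl = 'https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
    header('Location: ' . $secureUrl, true, 301); // permanent redirect to HTTPS
    exit;
}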

Hide all version numbers

For a hacker to compromise your system, part of the effort involves understanding what your system consists of. What good is knowing the security exploits of a particular version of the application you are running when the hacker does not know the version you are running? Unfortunately for the rest of us, this information is publicly accessible and extremely easy for hackers to obtain.

A quick look through your HTML can reveal information about your infrastructure and application stack that you never imagined would be exposed. From the JavaScript libraries to the type of third-party monitoring tools you use, you are revealing far more information than you think you are. Even the information you pass along in your HTTP headers can help someone understand the web server, operating system, MVC frameworks, hosting provider, analytics tools, and other vital information necessary to build a plan to attack your product. You can see for yourself by navigating to builtwith.com and querying your favorite sites to see what technology stack they are running. Look into your particular application stack to determine how to turn off the signatures being sent back to the client so that you can hide this sensitive data.
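
As a small, hedged example of the PHP-level piece of this: the expose_php directive (set in php.ini) controls PHP's X-Powered-By header, and header_remove() can strip it at runtime; web-server banners are configured in the server itself, not in PHP.

<?php
// Minimal sketch: hide PHP's version signature.
// In php.ini: expose_php = Off  (this one cannot be changed at runtime)
// As a runtime fallback, strip the header before any output is sent:
header_remove('X-Powered-By');
// Web-server banners (e.g. Apache ServerTokens/ServerSignature, nginx
// server_tokens) must be disabled in the server configuration.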

Keep your errors in logs and away from the public

A common mistake is making errors publicly viewable. A skilled PHP application craftsman understands that application errors are not for the public to view. What does the public need to see? Nothing more than a friendly, perhaps even funny, error page. That is all your visitors need to see. All other errors should go straight to your logs and/or your APM monitoring tool! The more information you make public, the more information you provide hackers to exploit. Your error messages may include information about the extensions you are using, the PHP runtime, server information, directories and paths, and other vital information necessary to conduct an attack on your application. Take a look at the display_errors, log_errors, and error_log directives.
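
A minimal sketch of that configuration, assuming a typical production setup (the log path below is only an example):

<?php
// Minimal sketch: keep errors out of the browser and in a log file.
// These are best set in php.ini, but ini_set() also works at runtime.
ini_set('display_errors', '0');                       // never render errors to visitors
ini_set('log_errors', '1');                           // send them to the error log instead
ini_set('error_log', '/var/log/php/app_errors.log');  // example path, adjust to your server
error_reporting(E_ALL);                               // still record everything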

Disable potentially harmful PHP functions

A sensible practice is to disable PHP functions that could potentially be hijacked and used for harm. I bet you did not know about this feature that PHP offers. As an additional layer of security, completely turn off certain functions that could execute harmful code if your application is ever compromised. The disable_functions directive takes a comma-separated list of functions you would like to make unavailable to PHP.
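
For example (a sketch, not a prescription), disable_functions can only be set in php.ini, and the list below is purely illustrative; you can inspect the current setting at runtime:

<?php
// Minimal sketch. In php.ini, for example:
//   disable_functions = exec,passthru,shell_exec,system,proc_open,popen
// (example list only; tune it to what your application really needs)
// At runtime you can at least verify what is currently disabled:
echo ini_get('disable_functions');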

More Injection Attacks

In Part 1 of this series, we covered SQL injection and XSS attacks. Here, we’ll cover a few more potentially harmful injection attacks that you may be susceptible to.

Shell Injection

Have you ever been introduced to the shell_exec() function? What about exec(), passthru(), and system()? Each of these functions has the potential to do serious damage to your system if it is ever compromised. Take shell_exec(), for example: you can pass it a string, and it will execute whatever shell command that string contains.

My first recommendation is to disable these functions altogether if you are not using them anywhere in your application. If you are using them, you absolutely must sanitize your data before passing it through for execution.
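
As a hedged sketch of what that sanitization can look like, PHP's built-in escapeshellarg() quotes a single argument so it cannot break out of the command (the 'file' parameter below is purely illustrative):

<?php
// Minimal sketch: quote user input before it ever reaches the shell.
$filename = $_GET['file'] ?? '';
// escapeshellarg() wraps the value in quotes and escapes embedded quotes,
// so it is passed to ls as a single, inert argument.
$output = shell_exec('ls -l ' . escapeshellarg($filename));
echo htmlspecialchars((string) $output);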

eval()

eval() is a function that deserves a section of its own. eval() will execute any PHP code that is passed to it as an argument. Generally speaking, there are very few scenarios in which you will ever need to use this function. The use of eval() is considered poor practice, and if you find yourself, or your application, using this function, then I strongly recommend that you revisit the need for it. Like other potentially harmful functions, my recommendation is to never use it and to disable it entirely.
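
If you are tempted to eval() a user-supplied string to decide what code runs, a safer pattern is a whitelist of callables. This is a generic sketch with placeholder actions, not code from the original post:

<?php
// Minimal sketch: dispatch to whitelisted callables instead of eval()'ing input.
$allowed = [
    'hello' => function () { echo 'Hello!'; },   // placeholder actions
    'time'  => function () { echo date('c'); },
];
$action = $_GET['action'] ?? '';
if (isset($allowed[$action])) {
    $allowed[$action]();        // only code you wrote can ever run
} else {
    http_response_code(400);    // unknown action: reject the request
}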

File Inclusion

include() and require() are standard constructs with which a PHP script pulls another file into its execution stream. While they serve an essential purpose, they are also potentially harmful if compromised. By simply passing a directory path, whether relative or absolute, a successful intruder can reach any file available to PHP and include it in the execution stream. These two constructs are used so commonly that disabling them is not an option. Thus, ensuring proper use and secure data sanitization (if passing variables) is the best practice when using require() and include().
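
One common way to do that sanitization is to whitelist the pages that may be included and keep the directory fixed. The page names and directory layout below are assumptions for illustration only:

<?php
// Minimal sketch: never build an include path directly from user input.
$pages = ['home', 'about', 'contact'];            // hypothetical page whitelist
$page  = $_GET['page'] ?? 'home';

if (in_array($page, $pages, true)) {
    include __DIR__ . '/pages/' . $page . '.php'; // fixed directory, whitelisted name
} else {
    http_response_code(404);
}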

Storing Secure Data

An entire book could probably be written on the practice of storing data securely. This topic, unfortunately, extends beyond the scope of this particular blog post. However, if I can stress the importance of at least understanding the sensitivity of storing critical data, then I may have partly accomplished my goal. Storing data such as the following should be given a security audit for both the sake of your customers and for regulatory compliance:

  • Social Security numbers
  • Credit card numbers
  • Dates of birth
  • Driver’s license numbers
  • Health and medical records
  • Bank account information
  • Tax records
  • Credit reports

Make no mistake, the list does not stop here; these are only examples of the type of information you must treat as highly sensitive. Best practices vary, as do the legal implications of not following the procedures and practices required by various legal and regulatory bodies.

Keep private apps behind a VPN

If you have admin applications that are used only by your internal employees, then there should be no need to expose them to the outside world. You can set up a private VPN so that even when your employees are remote, they are required to connect to your private network first before they are given the option of accessing different private applications.

Having a private VPN introduces an additional layer of security for your corporate assets and also provides secure access to your private internal applications. Even if you set an .htaccess password for your most private applications, make no mistake: the more exposure you give your software to the outside world, the more likely it is to be hacked and compromised.

Security is as much an art as it is a science. The more you invest in best practices and security precautions, the safer your customers and your company will be. As your PHP applications grow in scale to meet the demands of your business, you and your organization will be exposed to more vulnerabilities. Understanding the various critical points of weakness will help provide the framework necessary to put a solid security plan in place.

Top 10 Application Performance Problems

A transaction is defined as one or multiple threads running within or across server boundaries on multiple runtime environments. With today’s rapid enterprise growth, the demand for high-performance transactions through software-driven applications is higher than ever before, which means a greater need for these threads to run on their respective platforms efficiently and without delay.

Principal Sales Engineer Hugh Brien shares some of his findings on the common application problems discovered in transaction threads and the core concepts to think about while investigating issues, with the ultimate goal of optimizing the end user’s experience. Some of the main findings include thread pool misconfiguration on servers, correlating load and response times to find request overloads, I/O bottlenecks, and memory configurations that are improper for transaction volumes. Hugh also walks us through strategies and measures specific to Java, .NET, and more to further identify potential problem areas, and shows how APM exposes them beforehand.

Brien also spoke on what it takes to construct a defined process for understanding a customer’s pain points, and to clarify best practices on how to utilize application intelligence and avoid future troubleshooting.

Browse through the deck today!

Top 10 Application Problems from AppDynamics

What Every CIO Needs to Know about the Internet Of Things (IoT)

The Internet of Things. Cloud. Big Data. Real-Time Analytics. To those who do not quite understand what these phrases mean (and let’s be honest, that’s likely to be a large portion of the world), words like “IoT” and “Big Data” are just buzzwords. The truth is, the Internet of Things encompasses much more than jargon and predictions of connected devices. According to Parker Trewin, Senior Director of Content and Communications at Aria Systems, “IoT is big news because it ups the ante: Reach out and touch somebody is becoming reach out and touch everything.” In my previous blog, we talked about how absolutely everything, from cars and houses to your family members, will be connected to the internet. However, the revenue projections for those applications are left in the hands of consumer adopters, and if you want to keep up with IoT on an enterprise level then you have to step up your game. We’re not talking about your toaster tweeting that your toast is ready; we’re talking about billions of dollars on the table, and if you don’t take it then someone else will.

The Internet of Things is not just a buzzword anymore.

“Why now?” you might ask. It all comes down to costs, technology advancements, and the market size creating an opportunity for enterprise businesses. A little over a year ago, Chris Murphy, Editor of InformationWeek, explained in an article that “one of the myths about the Internet of Things is that companies have all the data they need, but their real challenge is making sense of it. In reality, the cost of collecting some kinds of data remains too high, the quality of the data isn’t always good enough, and it remains difficult to integrate multiple data sources.” The good news is that cost is no longer a challenge for IoT. In fact, in a recent report, Goldman Sachs pointed out that “key obstacles are gone.” This report lists several examples ranging from bandwidth to hardware. For instance, in the last 10 years, sensor prices have dropped to an average of 60 cents from $1.30. Similarly, processing costs have decreased almost 60 times. This means that even more devices are “not just connected, but smart enough to know what to do with all the new data they are generating or receiving.” Meanwhile, there are analytics programs out there ready to dissect the ever-growing mountain of data generated by enterprise IoT applications.

Recent headlines further demonstrate how momentum is growing around the IoT; acquisitions by large technology companies like Google and Apple are key indicators of opportunities in IoT. For example, Google’s acquisition of Nest Labs in early 2014 for $3.2 billion shows us that the wireless world will continue to become intertwined with other industries. Whether you planned on including IoT in your business plans or not, you might want to start rethinking what’s in the pipeline to make sure you’re at the top of your industry. There are several predictions for how many devices will be connected within the next 3-5 years, and then there’s the market value associated with those figures. Gartner’s assessment of the huge potential of the Internet of Things is reflected in its forecast of a cumulative 25 billion things shipped by the year 2020, in industries ranging from automotive to food and beverage services. This potential partially obscures the IoT’s contradictions, as it is both emergent and has been around a while. Eventually, the market will grow so large and technology will advance so far that nearly all markets will have no choice but to cut the cords, invest in analytics, and go IoT. Cisco’s Managing Director, Stuart Taylor, discusses how the Internet of Things is the next Dot-Com. “Like all technology revolutions,” he explains, “the path was not a straight line. We had sock puppets and business models based on the elusive quest for eyeballs, and outrageous promises of new businesses and social upheavals. But, in the end we got there. No one today would argue that the Internet has not added immeasurable value to the world and changed our lives forever. The Internet of Things is similar to where we were two decades ago, at the start of the Dot-Com era.” There’s no time like the present, as they say. If you haven’t engaged in IoT and taken advantage of the opportunity in front of you, then you might lose out to your competition.

Industry reports confirm: you need to get involved in IoT

If you’re not convinced to implement an IoT solution yet, then take a look at what technology analysts have to say. Bob Kraus, senior research analyst in IDC’s Global Technology and Industry Research Organization, found that “industry-specific solutions will comprise the bulk of the non-consumer market, and as the benefits to manufacturing processes, logistics, energy efficiency, and customer experience become evident, the IoT market will continue to experience rapid growth.” In fact, the enterprise market in particular will see rapid growth due to the ability to scale with thousands of connections at a time.

Let’s look at an enterprise that has succeeded in using IoT-scale data to make informed business decisions: Garmin. Internet Application Admin Doug Strick is responsible for the performance of several of Garmin’s web applications, including the online store and the applications that power the call center. Like most companies in the space, Garmin used several tools to monitor the performance of its web applications and troubleshoot issues; however, it found that these tools weren’t powerful enough to deliver true visibility into application performance. “We never had anything that would give us a deep dive into what was going on in the transactions inside our application. We had no historical data, and no insight into database calls, threads, etc.,” said Strick. Given the insight from AppDynamics, Strick explained that they “now understand how the application is growing over time. This data will prove invaluable in guiding future decisions in IT.”

According to research from International Data Corporation, “the worldwide IoT market will grow from $655.8 billion in 2014 to $1.7 trillion in 2020 with a compound annual growth rate of 16.9%. Devices, connectivity, and IT services will make up the majority of the IoT market in 2020.” Additionally, many analysts expect North America to be the largest geographical market to use analytics for IoT. There is empirical evidence that IoT is a growing market, but in which industries? As most people know, IoT has spread across several industries, from healthcare and the smart home to public defense and industrial manufacturing. Business Insider reports that “some of those markets will develop faster than others… there may be IoT market bubbles in certain industries, but other industries will likely pick up the slack for IoT adoption.” The figure below depicts estimates of market growth over the next few years per industry. As the experts have reported, IoT will continue to grow into other markets as technology advances.

Now what? What should the CIO do?

Given the thousands of devices getting connected via IoT every day, there’s a huge demand for management, analytics and insights, and automation. First, the CIO must manage IoT connectivity costs by choosing the right architectures and protocols to optimize network and system resource utilization. Second, the CIO must use a management platform that offers new functions and deep visibility into applications and user characteristics. By utilizing existing IoT components from best-of-breed technology providers that demonstrate system maturity, CIOs can make sure they’re keeping up with the Internet of Things and maximize the opportunity on the table.

As the CIO role shifts from the traditional cost center to the innovation catalyst of a company, IoT will play a major role in bringing about new, efficient processes and speeding up overall innovation. It will be the CIO’s duty to make sure the foundations are in place to support IoT in the enterprise, most notably managing the massive amounts of data and monitoring the interconnectedness of devices.