
Tips to Reduce Cloud Latency When Dealing with Data Gravity

We’ve been revisiting the concept of data gravity lately and how it affects where applications are placed. As a quick refresher, the concept of data gravity is fairly simple. As organizations adopt and migrate infrastructure to the cloud, data that remains outside of the cloud starts to gravitate toward the applications running in the cloud. As data is pulled closer to that infrastructure, it can reduce latency, improve efficiency and speed, and boost application performance, all of which positively impacts the end user’s experience.

However, some in the industry note that cloud latency can be a real concern. Keith Townsend of TechRepublic outlined some possible ways to counter the problem:

In multi-data center designs, data center managers place workloads closest to the data that is commonly accessed, minimizing the impact of latency. An application hosted in the cloud has the same considerations. The simplest technical solution is to host workloads requiring cloud-based data in the same cloud service.

Another simple solution is to co-locate your non-cloud workloads in a Cloud Exchange… Switch’s Cloud Exchange is a value-add Switch offers to its cloud provider and enterprise customers hosting equipment in their data center. Switch provides the capability of running cross-connects from customer equipment to cloud providers. The closer proximity eliminates the need for dedicated circuits between a cloud provider and a customer.

Another option is to purchase on-premises cloud services… Since the data is local to the customer’s data center, data gravity doesn’t factor into application performance.
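Townsend’s first tip, placing the workload in the same cloud as its data, is something you can sanity-check empirically before committing. Below is a minimal sketch in Python that times TCP connects to candidate data endpoints so you can compare placements side by side; the hostnames and ports are placeholders for illustration, not real services.

```python
# Hedged sketch: compare connect latency to candidate data endpoints to
# inform workload placement. Hostnames and ports below are hypothetical.
import socket
import time

CANDIDATES = {
    "same-cloud": ("db.internal.example", 5432),   # hypothetical in-cloud DB
    "on-premises": ("db.corp.example", 5432),      # hypothetical on-prem DB
}

def connect_latency_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds over a few samples."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # A plain TCP connect approximates one network round trip.
        with socket.create_connection((host, port), timeout=3):
            times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

if __name__ == "__main__":
    for label, (host, port) in CANDIDATES.items():
        try:
            print(f"{label}: {connect_latency_ms(host, port):.1f} ms")
        except OSError as exc:
            print(f"{label}: unreachable ({exc})")
```

If the same-cloud endpoint consistently measures an order of magnitude faster, that’s the data gravity effect in miniature, and a strong argument for hosting the workload alongside its data.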

Another tip: some cloud providers are evasive about disclosing the locations of their data centers, which can complicate latency planning. Wired notes that to really understand latency, you should know the answers to the following questions (a rough probing sketch follows the list):

  • Are your VMs stored on different SANs or different hypervisors, for example?
  • Do you have any say in decisions that will impact your own latency?
  • How many router hops are in your cloud provider’s internal network and what bandwidth is used in their own infrastructure?
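On the router-hop question in particular, a quick probe from your own network gives a baseline even when the provider won’t share topology details. Here’s a rough sketch that shells out to the standard traceroute utility (assumed to be installed on the machine) and counts the hops that answered; the endpoint hostname is a placeholder, not a real provider address.

```python
# Hedged sketch: estimate hop count toward a cloud endpoint by parsing
# traceroute output. The hostname is hypothetical; substitute your
# provider's published endpoint for your region.
import re
import subprocess

HOST = "example-endpoint.cloudprovider.example"  # hypothetical endpoint

def count_hops(host: str, max_hops: int = 30) -> int:
    """Run the system traceroute and count the hop lines in its output."""
    out = subprocess.run(
        ["traceroute", "-n", "-m", str(max_hops), host],
        capture_output=True, text=True, timeout=120,
    ).stdout
    # Each hop line begins with its hop number; the header line does not.
    return sum(1 for line in out.splitlines() if re.match(r"\s*\d+\s", line))

if __name__ == "__main__":
    print(f"{HOST}: ~{count_hops(HOST)} hops")
```

This only measures the path from your side of the connection, of course. What happens inside the provider’s network is exactly what the questions above are meant to surface.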

The idea of transforming one’s operations through the use and placement of data is a revolutionary one. And while some processes and workloads are physically restricted from the cloud, with the likes of AWS, Google, Microsoft, and others continuing to refine what it means to physically store data, it makes increasing sense for enterprises to move more workloads and applications to the cloud.


About Richard Dolan

As Datapipe’s Senior Vice President of Marketing, Rich is responsible for developing and driving Datapipe’s world-class marketing team and ensuring Datapipe stays ahead of the curve with product development and client support. Rich has been with Datapipe for more than 15 years and has seen the company evolve into a leading global MSP. Rich writes about Datapipe news, Datapipe clients, and business strategies, and provides insight into the company’s partnerships with AWS, Microsoft, Equinix, and others.
