How to Fix Poor Azure Latency

Companies don’t always get the most out of their cloud networks. When need outpaces capacity, networks suffer latency issues. If you’ve found this blog, you likely already know how problematic poor latency can be for a business, and how quickly its inefficiencies translate into lost revenue. According to a recent report from Gartner, downtime costs companies an average of $5,600 USD per minute – and unreliable latency is a key contributor. So, when network needs are outpacing network capacity, how can you keep up and avoid latency issues?

The good news is that for the 70 percent of businesses that rely on Microsoft Azure for their cloud networking, there are easy ways to reduce latency that won’t increase costs and, in most cases, will allow you to maximize existing assets. Here are three key methods to reduce your Azure latency and get better network performance.

Method 1: Accelerated Networking

Accelerated Networking, which is built on Single-Root Input/Output Virtualization (SR-IOV), is the most common method of reducing latency. In plain English, Accelerated Networking is about finding the shortest network path between A and B.

The problem is that many cloud setups rely too heavily on a host for administration. But with Accelerated Networking, traffic flows directly from the virtual machine’s (VM’s) network interface, no longer passing through the host and the virtual switch. The result is reduced latency, jitter, and CPU utilization for the most demanding workloads.

Accelerated Networking matters because it eliminates unnecessary network complexity (which we’ve taken the guesswork out of in our blog before). Without Accelerated Networking, traffic moving in and out of the VM must pass through the host and the virtual switch. Accelerated Networking reduces the number of hops traffic must make to reach its destination.

Done right, the NIC hardware applies all the policy enforcement previously provided by the host’s virtual switch, such as access control and network security.
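To make this concrete, here’s a minimal sketch of how you might flip the flag programmatically with the Azure SDK for Python (it assumes the azure-identity and azure-mgmt-network packages are installed). The subscription, resource group, and NIC names are placeholders, and note that the attached VM generally needs to be stopped (deallocated) before the setting can change:

```python
# Minimal sketch: enable Accelerated Networking on an existing NIC.
# All names below are placeholders for your own resources.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
RESOURCE_GROUP = "my-resource-group"        # placeholder
NIC_NAME = "my-vm-nic"                      # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Fetch the NIC, flip the Accelerated Networking flag, and push the update.
# The VM attached to this NIC typically must be stopped (deallocated) first.
nic = client.network_interfaces.get(RESOURCE_GROUP, NIC_NAME)
nic.enable_accelerated_networking = True
poller = client.network_interfaces.begin_create_or_update(
    RESOURCE_GROUP, NIC_NAME, nic
)
print(poller.result().enable_accelerated_networking)  # should print True
```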

A perfect example use case is disaster recovery – a networking professional’s nightmare and a costly event for companies. It’s in this situation that a method like Accelerated Networking shows what it can do, restoring network operations when the central host is no longer an option and its role must be assumed elsewhere.

Microsoft has extensive documentation on this use case, which anyone serious about not only latency but their overall cloud network’s health and wellbeing should absorb. For anyone who wants a technical deep dive, Microsoft’s Accelerated Networking explainer is essential.

Experiencing hefty egress fees using Azure? Learn four ways to lower them in our blog.

Method 2: Receive Side Scaling (RSS)

Accelerated Networking, of course, is not for everyone. For organizations whose VMs don’t meet the requirements for Accelerated Networking, Receive Side Scaling (RSS) is the best bet to improve overall network performance – including latency. With RSS, a key goal is still to find the shortest path between A and B. The difference, however, is that instead of bypassing the host, RSS finds the most efficient distribution of network processing across multiple CPUs.

RSS serves as a “traffic cop” for your network – certain traffic is waved through while other traffic is assigned lower priority. Without RSS, the network attempts to process all traffic on a single processor, and latency is the natural effect of everything trying to happen at the same time.

Azure offers an easy way to implement RSS. Administrators simply configure the network interface card (NIC) and the miniport driver to receive the deferred procedure calls (DPCs) on other processors. Also handy: RSS ensures that processing associated with a particular connection stays on its assigned CPU. Networkers just set the configuration once, then move on to other tasks as the NIC takes care of the rest.
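RSS itself is enabled inside the guest OS (on Windows, through the NIC driver’s advanced settings; on most Linux images it’s on by default), so a useful follow-up is verifying that receive processing really is spreading across CPUs. Below is an illustrative Python sketch for a Linux VM that tallies per-CPU interrupt counts for a NIC’s queues from /proc/interrupts; the interface name is a placeholder:

```python
# Illustrative sketch: check whether a Linux VM's NIC receive queues are
# spreading interrupts across CPUs (a sign that RSS is doing its job).
IFACE = "eth0"  # placeholder; adjust for your VM's interface name

with open("/proc/interrupts") as f:
    header = f.readline().split()        # ["CPU0", "CPU1", ...]
    num_cpus = len(header)
    per_cpu = [0] * num_cpus
    for line in f:
        if IFACE not in line:            # keep only this NIC's queues
            continue
        fields = line.split()
        counts = fields[1:1 + num_cpus]  # per-CPU counts follow "IRQ:"
        for i, count in enumerate(counts):
            if count.isdigit():
                per_cpu[i] += int(count)

for cpu, total in zip(header, per_cpu):
    print(f"{cpu}: {total} interrupts for {IFACE}")
# Counts heavily skewed toward one CPU suggest RSS isn't distributing load.
```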

RSS is a godsend for reducing processing delays by distributing receive processing from the NIC across multiple CPUs. The goal here isn’t simply to reduce latency; it’s also about building better network intelligence so that one CPU isn’t overwhelmed with requests while others sit idle.

This method of latency reduction is especially good news for companies that deal in high-volume networking – like broker-dealers, e-commerce providers, and organizations that require high network capacity at a moment’s notice. As with Accelerated Networking, Microsoft offers helpful documentation for organizations to roll out RSS quickly and efficiently.

Compare the private connectivity of Microsoft Azure against AWS and Google Cloud with our guide.

Method 3: Proximity Placement Groups

Proximity Placement Groups are the most straightforward latency-reduction method, essentially a form of colocation. But in this instance, colocation isn’t just hosting network assets at the same hosting facility – it’s a broader application of the term, referring to reducing the distance between Azure compute resources to cut overall network latency.

Proximity Placement Groups can reduce latency between stand-alone VMs, VMs in availability sets, or VMs in virtual machine scale sets. Of course, the further assets are from each other, the greater the chance of latency and other issues. The goal with Proximity Placement Groups is to route network tasks through assets separated by the least physical distance.

To build intelligence into this method, Microsoft recommends pairing Proximity Placement Groups with Accelerated Networking. The idea here is that when you request the first virtual machine in the proximity placement group, the data center is automatically selected – and subsequent resources in the group are placed alongside it.
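As a rough sketch of what this looks like in practice, the following Python example creates a Proximity Placement Group with the azure-mgmt-compute package; all names and the region are placeholders, and a VM would then reference the group’s ID (ideally on an Accelerated Networking-enabled NIC) at creation time:

```python
# Minimal sketch: create a Proximity Placement Group. All resource names
# and the region below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
RESOURCE_GROUP = "my-resource-group"        # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# The group itself is just a named constraint in a region; the first VM
# deployed into it pins the group to a specific data center.
ppg = client.proximity_placement_groups.create_or_update(
    RESOURCE_GROUP,
    "my-ppg",  # placeholder group name
    {"location": "eastus", "proximity_placement_group_type": "Standard"},
)
print(ppg.id)

# When creating a VM, pass this ID as its proximity_placement_group
# property so it lands in the same data center as the rest of the group.
```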

The only caution we offer: for elastic workloads, the more constraints you place on the placement group, the greater the chance of allocation errors and the higher the likelihood of latency. Our best advice is that even when Accelerated Networking is handling the traffic, network administrators must remain watchful. For instance, capacity is not kept when you stop (deallocate) a VM.

Proximity Placement Groups make the most sense for organizations looking to maximize network function in their home region, as well as ensure they aren’t overspending on capacity in a new market.

Microsoft’s Azure documentation offers additional best practices on Proximity Placement Groups, especially for networkers who are serious enough about latency and intelligent networking to augment their capabilities with Accelerated Networking.

Get faster Azure connectivity with Network as a Service

When it comes to speeding up your network, the Azure methods we have discussed can make a massive difference – but the best place to start is to examine your enterprise network as a whole.

Using the public internet leaves your network at the mercy of traffic fluctuations that can severely impact speed and performance – and ultimately damage your business. Traditional telecommunications providers can be a problematic alternative, forcing customers to lock into contracts where bandwidth can’t be scaled up to support peak demand periods. The result: slow, unreliable network performance and the need for constant disaster mitigation from your IT department.

By using Network as a Service (NaaS) as your Azure connectivity method, you can bypass the public internet and switch to a private network path. Plus, when you provision NaaS connectivity with Megaport, you’ll benefit from:

  • Better performance – avoid bottlenecks and downtime caused by internet traffic fluctuations. With your own connection to Azure on Megaport’s private backbone, your connectivity will be fast and consistent.
  • Scalable bandwidth – provision connections and turn up bandwidth (in some instances, up to 100 Gbps) on demand via the Megaport portal to deliver the performance you need in peak periods, then turn it down when no longer needed to start saving instantly.
  • “Always on” redundancy – with 700+ on-ramps globally, you’re protected from downtime with Megaport’s Service Availability target of 100 percent.
  • Megaport Cloud Router (MCR) – Megaport’s virtual routing service provides on-demand private connectivity at Layer 3 for high-performance cloud-to-cloud routing without hairpinning to and from your on-premises environments.
  • Megaport Virtual Edge (MVE) – to bring the benefits of a low-latency network right to the edge, use MVE, Megaport’s Network Function Virtualization (NFV) service, to get direct, private connectivity from branch to cloud, reducing latency and jitter on your mission-critical applications.

Use one, or use them all; with Azure, you can leverage any combination of these tools to reduce your network latency. When underpinned by a NaaS like Megaport’s, you’ll benefit from a faster network from end to end for a more productive business with a more profitable bottom line.
