
The Hidden Cost of Running Cloud-Hosted SD-WAN for IaaS
- Cloud networking
- October 3, 2025
By Matt Madawi, Solutions Architect
There are three common ways to connect your branch locations to the cloud. We break down the benefits and limitations of each.
Many enterprises are executing a cloud-first strategy, pairing application modernization with a network refresh that often includes SD-WAN. Moving toward a cloud-native architecture built on containerization, microservices, or serverless designs can lower costs over time, increase scalability, shorten development cycles, and speed up time to market.
The next challenge is making sure branch sites can reach these newly modernized, public cloud-based applications. But what if a number of legacy applications are still hosted in the private cloud, or other services sit with a second or third public cloud provider?
With the popularity of SD-WAN and an increasing number of enterprise WAN migrations from MPLS to broadband-based underlays, business-critical traffic now travels across best-effort, public internet-based connections. This shift to the “cloud-powered branch” requires a redesign from an enterprise architecture perspective.
One option is to bring cloud-bound traffic from the branch sites back to the data center and then out to the cloud, ideally over private connectivity. This can work well, but depending on the geographic location of the enterprise’s data center, it can also introduce hairpinning, which significantly increases latency and reduces application performance. The second option is to allow branch sites to communicate directly with the cloud.
There are various methods to achieve this, specifically:
- cloud provider VPN gateways (AWS Virtual Private Gateway, Azure VPN Gateway, Google Cloud VPN)
- cloud-hosted SD-WAN (built on AWS Transit VPC, Azure Virtual WAN, or Google Cloud NCC)
- edge connectivity via Network as a Service (Megaport Virtual Edge).
Standard branch to cloud with VPN tunnels
Let’s consider a medium-sized retail enterprise with 40 branch sites spread across the United States that require access to a point of sale/inventory management platform hosted in the cloud.
One quick and reliable solution would be to utilize the standard Site-to-Site VPN gateway offered by the cloud providers. In an AWS environment, this would mean building IPsec tunnels across the public internet to a virtual private gateway connected to a single virtual private cloud (VPC).
The drawback of this design is the one-to-one relationship between gateway and VPC, which also applies to Azure: the equivalent VPN Gateway can likewise only be attached to a single VNet. Because each gateway serves a single virtual network, building a second VPC or VNet requires a second VGW and a fresh set of IPsec tunnels to all 40 branch sites.
The cloud providers also cap the number of branch site tunnels that can attach to a single VPN gateway. For Azure, the highest-specification VpnGw SKUs support up to 100 tunnels; for AWS, the default is 10 (although an increase can be requested).
There is also a ceiling on the network speeds that can be achieved. A common misconception is that each IPsec tunnel to the cloud gets 1.25 Gbps; in fact, that figure is the total throughput of the gateway. So if you configure 40 branch sites, they all share that 1.25 Gbps, leaving roughly 30 Mbps per site, and the share shrinks further as you add more sites.
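To put that sharing into perspective, here is a minimal back-of-the-envelope sketch in Python using the figures from this scenario (40 sites, a 1.25 Gbps aggregate gateway, and the default AWS quota of 10 tunnels mentioned above). The numbers are illustrative only; actual quotas and throughput vary by provider and gateway SKU.

```python
# Back-of-the-envelope check of a single shared VPN gateway, using the figures
# from this scenario. Quotas and throughput vary by provider and gateway SKU,
# so treat these values as illustrative placeholders.

BRANCH_SITES = 40
GATEWAY_THROUGHPUT_GBPS = 1.25   # aggregate throughput of the gateway
DEFAULT_TUNNEL_QUOTA = 10        # default tunnels per gateway (increase can be requested)

# Every site shares the gateway's aggregate throughput.
per_site_mbps = GATEWAY_THROUGHPUT_GBPS * 1000 / BRANCH_SITES
print(f"Per-site share of gateway throughput: ~{per_site_mbps:.0f} Mbps")

# The default tunnel quota is exhausted long before all sites are connected.
if BRANCH_SITES > DEFAULT_TUNNEL_QUOTA:
    shortfall = BRANCH_SITES - DEFAULT_TUNNEL_QUOTA
    print(f"{shortfall} sites exceed the default quota of {DEFAULT_TUNNEL_QUOTA} tunnels; "
          "a quota increase (or additional gateways) would be needed.")
```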
It’s also worth considering that traffic from each branch site will be travelling 100% across the public internet and will likely face unpredictable levels of jitter, packet loss, and latency. And since the cloud-to-branch site traffic is travelling via the public internet, it will be subject to the higher standard cloud provider egress or data transfer out (DTO) fees of approximately 8 cents for every gigabyte (GB) that leaves your virtual network.
Cloud-hosted SD-WAN edge
To overcome the aforementioned challenges of a standard VPN tunnel-based design, the cloud providers and traditional network hardware vendors (such as Megaport partners Cisco, Palo Alto Networks, and Fortinet) have built solutions that can integrate into an enterprise SD-WAN.
SD-WAN is a virtualized WAN architecture that utilizes a mix of underlying transport methods such as MPLS, broadband, and LTE to connect branch sites to applications hosted in the private and public cloud. One of the key advantages of SD-WAN is the centralization of control and routing functions, which direct traffic paths across the overlay WAN.
While there are slight variations, this architecture extends the SD-WAN fabric into the public cloud by spinning up Infrastructure as a Service (IaaS) compute and loading SD-WAN vendor software from the cloud provider’s marketplace (typically BYOL, or bring your own license). This allows for very fast deployment, high levels of automation, and simplified operations using the SD-WAN vendor’s management console.
Let’s start by looking at an IaaS-based solution in AWS and its key components. The first is the Transit VPC, which acts as the central hub for all traffic to and from the branch sites, and as the conduit to the application virtual private clouds (VPCs).
Two virtual machines are spun up in this Transit (or “Connect”) VPC and an SD-WAN vendor’s image is loaded onto them to create two virtual routers, which can be managed by Cisco Catalyst SD-WAN Manager, FortiManager, or an equivalent. The hourly charge for the underlying AWS VMs will depend on the instance specifications selected during the build.
For redundancy, it is recommended that each branch site builds a tunnel to each virtual router. Using a BGP-style routing protocol, the private subnets in the application’s VPC will be advertised to the virtual routers, which in turn will re-advertise those subnets to the 40 branch sites (for clarity, the diagram above only shows five branch sites).
As the number of SD-WAN branch sites grows, additional virtual machines (routers) may be required as there could be CPU and bandwidth limitations due to the amount of IPsec tunnel encryption/decryption needed.
This SD-WAN solution still utilizes the public internet, so standard egress fees apply, but one additional hidden cost to be aware of is the potential for double egress fees. Since the northbound connections use internet IPsec tunnels, a fee is charged for each GB that leaves the application’s VPCs toward the Transit VPC. If the traffic is destined for a branch site, there is an additional per-GB charge as it leaves the Transit VPC southward. Effectively, the customer is charged egress twice for every GB that reaches a branch site.
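As a rough illustration of how this compounds, the sketch below applies the approximate 8 cents per GB internet egress rate mentioned earlier to a hypothetical monthly traffic volume; both figures are assumptions for illustration, not quoted pricing.

```python
# Rough estimate of the double egress effect for branch-bound traffic.
# The per-GB rate and monthly volume are assumptions for illustration only;
# check your provider's current data transfer out (DTO) pricing.

EGRESS_USD_PER_GB = 0.08              # approximate internet egress rate (assumption)
MONTHLY_BRANCH_TRAFFIC_GB = 20_000    # hypothetical traffic leaving the cloud for branches

single_hop = MONTHLY_BRANCH_TRAFFIC_GB * EGRESS_USD_PER_GB   # application VPC -> Transit VPC
double_hop = single_hop * 2                                  # plus Transit VPC -> branch sites

print(f"Single egress charge:     ${single_hop:,.0f}/month")
print(f"With the Transit VPC hop: ${double_hop:,.0f}/month")
```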
Depending on workloads and traffic flow, an IaaS-based SD-WAN solution can work extremely well for many enterprises.
Let’s consider what happens when a second cloud provider is introduced, whether to avoid vendor lock-in, to plan for business continuity, or to run a new internal application that works best in a particular provider, for example Microsoft Azure. The architecture gets significantly more complex, as all 40 branch sites now require access to both the AWS VPCs and the new Azure VNets.
In an Azure environment, a Virtual WAN would be deployed and a virtual hub VNet would be created (similar, in many ways, to an AWS Transit VPC). Again, dual VMs would be spun up in this hub with SD-WAN software loaded on top to create a pair of Network Virtual Appliances (NVAs).
Each branch site then connects via IPsec tunnels to both NVAs in the Azure virtual hub, in addition to its existing two tunnels to AWS. As you can infer from the diagram above, this now means there are hundreds of IPsec tunnels to build and manage.
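To see how quickly the tunnel count grows, here is a short sketch that assumes two redundant cloud-hosted routers (or NVAs) per cloud hub, in line with the redundancy recommendation above; the figures are illustrative only.

```python
# IPsec tunnel growth in the IaaS-based design as clouds are added.
# Assumes two redundant virtual routers/NVAs per cloud hub, as recommended above.

BRANCH_SITES = 40
ROUTERS_PER_CLOUD_HUB = 2

for clouds in (1, 2, 3):
    tunnels = BRANCH_SITES * ROUTERS_PER_CLOUD_HUB * clouds
    print(f"{clouds} cloud(s): {tunnels} branch-side IPsec tunnels to build and monitor")
```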
So far we have only considered multicloud in the branch-to-cloud flow, but there could also be a requirement for cloud-to-cloud connectivity; an example might be cross-cloud data replication.
The options here are either to hairpin the traffic back to the on-premises data center (which introduces higher latency) or to build new IPsec tunnels between the Azure virtual hub and the AWS Transit VPC, which, of course, will incur internet egress fees and carry the same tunnel speed limitations.
Cloud-hosted SD-WAN with Megaport Virtual Edge
Megaport Virtual Edge (MVE), combined with private Layer 2 cloud connectivity, addresses many of the pain points identified in the two solutions above.
MVE enhances your existing enterprise SD-WAN platform by giving you the ability to strategically build optimal pathways to critical applications wherever they reside. Essentially, MVE enables businesses to build their own Virtual Connectivity Hub within minutes and extend their WAN reach by leveraging Megaport’s global Software Defined Network (SDN).
Each of Megaport’s MVE metro locations contains at least two data centers and scalable Megaport Internet options, supporting MVE availability zones to provide end-to-end WAN resiliency. There are over 120 MVE locations available to choose from around the globe.
Let’s look under the hood of MVE to understand its core elements and capabilities.
By integrating Network Function Virtualization (NFV) with Megaport’s private software-defined network, MVE enables virtual network functions including Next Generation Firewall (NGFW), software-defined wide area network (SD-WAN) gateways, and virtual routing via a single and intuitive platform.
Megaport MVE supports a number of industry leading vendors including:
- 6WIND Virtual Service Router (VSR)
- Aruba EdgeConnect SD-WAN
- Aviatrix Secure Edge
- Check Point CloudGuard
- Cisco Catalyst SD-WAN or Autonomous mode
- Fortinet FortiGate
- Palo Alto Networks Prisma SD-WAN
- Palo Alto Networks VM-Series NGFW
- Peplink FusionHub
- Versa Secure SD-WAN
- VMware SD-WAN.
MVE offers users a choice of device sizes through the Megaport Portal, from 2 vCPU up to 32 vCPU, while RAM and storage are optimized for the vendor and vCPU size selected.
The MVE is effectively managed like a regular SD-WAN appliance within the enterprise WAN fabric and can be configured from Cisco Catalyst SD-WAN Manager, FortiManager, or an equivalent.
Branch traffic arrives at the MVE via first-mile local internet, typically adding less than 10 ms of latency depending on proximity. From there, the MVE has access to the global Megaport fabric, which includes over 333 public cloud on-ramps enabling private Layer 2 connections (such as AWS Direct Connect, Azure ExpressRoute, Google Cloud Interconnect, Oracle Cloud FastConnect, and others).
How it works
SD-WAN branch sites can build IPsec attachments to the MVE via redundant Megaport Internet, using Megaport-provided public IP addresses. Once the branch traffic arrives at the MVE, enterprises can aggregate it at a centralized MVE location, quickly transition it off public internet IPsec tunnels, and reroute it to the major cloud providers over the Megaport backbone. This delivers a number of benefits.
Benefits
- Application performance: Branches that previously accessed cloud-based applications via the public internet will see improved performance. Now only the “first mile” will traverse the internet while the majority of the path will be via the Megaport backbone, reducing jitter, latency, and packet loss.
- Access to the Megaport fabric: MVE provides access to Megaport’s ecosystem of 410+ service providers and 1,000+ enabled data centers globally, including 333+ cloud on-ramps, the most of any neutral Network as a Service (NaaS) provider.
- Opex reduction: By using the hyperscaler’s private connection instead of internet IPsec tunnels, you can significantly reduce egress fees. In North America, this means a reduction from an average of 8 cents per GB to around 2 cents per GB (see the sketch after this list for a rough comparison).
- Network simplification: A 40-site example of an IaaS solution would require at least four IPsec tunnels per site, which means a minimum of 160 total tunnels. Each tunnel requires its own /30 of IP addressing as well as link-state monitoring, and eats up valuable CPU resources both at the branch and in the cloud-based virtual routers. By moving to MVE, an enterprise can massively reduce the complexity of its network architecture.
- SD-WAN efficiency: Integrating your SD-WAN platform with MVE allows you to build strategic middle-mile transports throughout your WAN using Megaport’s Virtual Cross Connects (VXCs). SD-WAN traffic-shaping services will immediately leverage these more efficient pathways for end-to-end application flows.
- On-demand: There is no equipment to purchase or install in a data center, and no cross-connects to run. MVE (like the IaaS solutions) can be provisioned in real time and ready to use within minutes.
- Cloud to cloud: Lastly, since MVE provides private Layer 2 connectivity to both AWS and Azure, any cloud-to-cloud traffic can also utilize these private connections, avoiding costly IPsec tunnels or the need to hairpin traffic back to your on-premises router.
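To give the opex reduction and network simplification points above a rough side-by-side feel, here is a minimal sketch for the 40-site, two-cloud example; the per-GB rates and the monthly traffic volume are illustrative assumptions rather than quoted pricing.

```python
# Rough comparison of the cloud-hosted IaaS design and an MVE-based design
# for the 40-site, two-cloud example. All rates and volumes are assumptions.

BRANCH_SITES = 40
CLOUDS = 2
MONTHLY_EGRESS_GB = 20_000

INTERNET_EGRESS_USD_PER_GB = 0.08   # approximate internet egress rate (assumption)
PRIVATE_EGRESS_USD_PER_GB = 0.02    # approximate rate over private connectivity (assumption)

# Tunnels: two per cloud hub per site vs. two per site to a pair of redundant MVEs.
iaas_tunnels = BRANCH_SITES * 2 * CLOUDS
mve_tunnels = BRANCH_SITES * 2

iaas_egress = MONTHLY_EGRESS_GB * INTERNET_EGRESS_USD_PER_GB
mve_egress = MONTHLY_EGRESS_GB * PRIVATE_EGRESS_USD_PER_GB

print(f"Branch-side IPsec tunnels: IaaS design {iaas_tunnels}, MVE design {mve_tunnels}")
print(f"Monthly egress estimate:   IaaS design ${iaas_egress:,.0f}, MVE design ${mve_egress:,.0f}")
```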
The core benefit of SD-WAN is its ability to shape application traffic and steer it across multiple WAN transports. Inserting Megaport Virtual Edge into your enterprise WAN fabric gives you more flexibility and control over your WAN to optimize your applications (which is not always achievable with a cloud-hosted SD-WAN edge). MVE can also save you a huge amount of money in egress fees while simplifying your network.
The hidden costs of cloud-hosted SD-WAN for IaaS can quickly add up through performance trade-offs, architectural complexity, and inflated egress charges. By contrast, Megaport Virtual Edge (MVE) provides a simpler, more cost-effective, and higher-performing alternative that integrates seamlessly with leading SD-WAN vendors.
With direct access to Megaport’s global fabric, hundreds of cloud on-ramps, and over 1,000 enabled data centers, MVE gives businesses the flexibility to scale, reduce costs, and optimize application performance across multicloud environments.