New Bare-metal GPU Instance Now Available with NVIDIA RTX Pro 6000

By Victor Chiea, Director of Marketing, Compute

Meet your perfect new ML infrastructure, powered by NVIDIA’s universal GPU and now available with Latitude.sh.

Faster AI models, tighter deployment cycles, and rising GPU costs have all changed the way network managers build and run their AI infrastructure. The trade-off has been to either wait months and overpay for top-tier GPUs, or settle for older chips that can’t handle today’s compute demands – until now.

With Megaport’s recent acquisition of Latitude.sh, we’ve brought network and compute closer together. Today, we’re advancing that approach with Latitude’s launch of a new bare-metal instance, powered by the NVIDIA RTX Pro 6000 Server GPU, for universal AI workloads.

Latitude is among the first providers globally to offer this latest NVIDIA GPU in an on-demand virtual machine (VM) or bare-metal format, and it’s available to deploy right now.

What makes the RTX Pro 6000 different

The NVIDIA RTX Pro 6000 Server GPU is built on NVIDIA’s popular Blackwell architecture, but it isn’t positioned as a flagship part like the B200 or B300. Instead, NVIDIA launched the RTX Pro 6000 to make Blackwell accessible across a broader set of workloads rather than reserving it for hyperscale AI factories – placing it in the sweet spot for many network teams.

With next-gen tensor cores and generous VRAM, the RTX Pro 6000 also gives you access to Blackwell-era efficiency, memory capacity, and tensor core improvements without needing to justify the cost or wait time associated with top-tier GPUs. (With 96 GB of VRAM per GPU, you actually get more memory than the heavy-hitting NVIDIA H100 GPU.)

This makes Latitude’s g4.rtx6kpro.large instance a practical and affordable entry point into modern GPU infrastructure that’s still viable for real production work, not just experimentation.

Specifications

  • GPU: 8x NVIDIA RTX Pro 6000 Server Edition (768 GB of total VRAM)
  • CPU: Dual AMD EPYC 9355 (64 cores @ 3.55 GHz)
  • RAM: 1.5 TB of DDR5
  • Storage: 4x 3.8 TB NVMe
  • NIC: 2x 100 Gbps
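
To put those VRAM numbers in context, a quick back-of-envelope sketch shows how model size and precision determine whether weights fit on this instance. This is an illustrative approximation only: it counts weights alone and ignores KV cache, activations, and framework overhead, so real headroom is smaller.

```python
# Rough VRAM sizing sketch for the g4.rtx6kpro.large instance
# (8x 96 GB = 768 GB total VRAM). Weights-only estimate; KV cache,
# activations, and framework overhead are not counted.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

GPU_VRAM_GB = 96
NUM_GPUS = 8

def weights_gb(params_billions: float, precision: str) -> float:
    """Approximate size of model weights in GB."""
    return params_billions * BYTES_PER_PARAM[precision]

def fits(params_billions: float, precision: str) -> bool:
    """True if the weights alone fit within total instance VRAM."""
    return weights_gb(params_billions, precision) <= GPU_VRAM_GB * NUM_GPUS

# A 70B-parameter model in FP16 needs ~140 GB for weights:
print(weights_gb(70, "fp16"))  # 140.0
print(fits(70, "fp16"))        # True
```

In FP4 (which this card supports natively), the same 70B model shrinks to roughly 35 GB of weights, which is why a single instance can serve several models side by side.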

Features

  • 5th-gen Tensor Cores and 2nd-gen Transformer Engine
  • Native FP4 with 2x faster inference than previous-gen cards
  • Spectrum-X networking for 1.6x better performance for AI traffic
  • Parallel inference to serve multiple models without GPU switching
  • Intelligent batching from prototype to production without replatforming.

Who the RTX Pro 6000 is for

Flexible and ready to deploy, this instance is ideal for AI engineers running inference workloads as well as ML engineers training or fine-tuning smaller models. It’s perfectly suited for teams working with models up to roughly 70 billion parameters, especially when the focus is on predictable performance and fast deployment rather than maximum raw throughput.

The RTX Pro 6000 is perfect for:

  • AI research labs tired of queuing for compute time
  • Startups building or scaling AI platforms
  • Enterprises overpaying to deploy production AI workloads at scale
  • Creative studios working with AI-powered content generation on tight deadlines
  • ML teams that have outgrown their current setup but aren’t ready for an AI farm
  • Anyone who’s needed more memory for their models or had to wait for GPU resources.

The RTX Pro 6000 solves two common pain points:

  • Performance efficiency: Blackwell delivers meaningful gains over previous generations, and this instance allows teams to start using that architecture now rather than waiting for access to higher-end parts.
  • Time to market: Flagship Blackwell GPUs are costly, scarce, and often prioritized for big tech and large AI farms, leaving other customers waiting for months. This instance removes that bottleneck so teams can move forward quickly and at a lower cost.

How you can use the RTX Pro 6000

This GPU is perfect for general AI workloads, particularly inference on models that are already trained. With enough power to train/fine-tune models with up to 70 billion parameters, it has plenty of capacity for running multimodal models, handling batch inference, or supporting production pipelines that need consistent latency and throughput.
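The "up to 70 billion parameters" figure can be sanity-checked with a standard rule of thumb for full fine-tuning with the Adam optimizer: roughly 16 bytes per parameter (FP16 weights and gradients plus FP32 optimizer states), before activations. The sketch below is an approximation, not a benchmark; at the 70B end, the math shows why teams typically pair this class of hardware with memory-efficient techniques such as LoRA or optimizer offloading.

```python
# Back-of-envelope memory estimate for full fine-tuning with Adam.
# Rule of thumb: FP16 weights (2 B/param) + FP16 gradients (2 B/param)
# + FP32 Adam states (momentum, variance, master weights: 12 B/param)
# = ~16 bytes per parameter, excluding activations.

BYTES_PER_PARAM_TRAINING = 16  # weights + gradients + optimizer states

def finetune_gb(params_billions: float) -> float:
    """Approximate training-state memory in GB (activations excluded)."""
    return params_billions * BYTES_PER_PARAM_TRAINING

total_vram_gb = 8 * 96  # 768 GB on this instance

for size in (7, 13, 70):
    need = finetune_gb(size)
    verdict = "fits" if need <= total_vram_gb else "needs offload/LoRA"
    print(f"{size}B model: ~{need:.0f} GB ({verdict})")
```

A 7B or 13B model fits comfortably for full fine-tuning, while a 70B model (~1,120 GB of training state) calls for parameter-efficient fine-tuning or sharded/offloaded optimizer states.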

But there are also more specialized use cases. For example, some customers in the Ethereum ecosystem are using this GPU to validate proof-of-stake transactions on Ethereum’s Zero-Knowledge Layer 2 blockchains. The hardware is more than capable of handling this type of workload, allowing teams to virtualize the instance internally and increase validation capacity.

These two examples show how flexible this GPU is – powerful enough to support demanding workloads, and accessible enough to suit a variety of use cases.

You can also use the RTX Pro 6000 for:

  • LLM training and fine-tuning
  • inference at scale
  • rendering and VFX
  • gen-AI and diffusion models
  • scientific computing and HPC.


Get started

GPU workloads don’t operate in isolation; they rely on data pipelines, storage platforms, cloud services, and distributed systems that span different regions and providers.

By combining bare-metal GPU infrastructure with on-demand private connectivity, teams can place compute where it makes sense and connect it securely to the rest of their environment. GPUs can sit close to data sources, integrate into hybrid or multicloud architectures, and scale alongside network requirements rather than being constrained by them.

If you’re looking for a practical entry point into Blackwell-powered GPUs, backed by dedicated hardware and flexible connectivity, the NVIDIA RTX Pro 6000 Server GPU is ready to deploy today in Ashburn and Chicago, with additional locations available on request.

Speak to a Latitude expert to get started.

 

 
