Ultra-low latency for online games: where lag comes from — and what the network can actually fix

Some of the most frustrating incidents in online games are the ones that do not look like infrastructure incidents at all. Prime-time complaints rise, players report rubber-banding, micro-stutters, or delayed inputs, yet the server side appears healthy: CPU headroom is fine, match servers are up, and there is no obvious outage. In many cases, the root cause is not server load or game logic but the packet path itself becoming unstable — because queues build up, interconnects run hot, peering is weak, or BGP shifts traffic onto a worse route.

For that kind of problem, the main levers are infrastructure and connectivity: where you place servers, how close you are to the right ISPs and exchanges, how you reach them via IP transit, peering / IX, and DIA, and how you separate critical game traffic from heavy internal flows via private inter-site connectivity, cross-connects, or dedicated circuits / wavelengths. With presence in 72 data centers and 228 locations, including Asia options such as Hong Kong, we support that part of the stack.

The key point: games are not just about ping. They are about RTT, jitter, and loss. Jitter — delay variation — and packet loss are what turn a tolerable average RTT into a bad player experience.

Why “ping” rarely explains what players feel

A packet goes through the home router, the ISP last mile, inter-AS hops, backbone or transit, then your data center edge to the game server — and back. The most dangerous contributor is usually not distance but queueing: when there is not enough capacity somewhere, or buffering is poorly managed, packets sit in line before moving on. To players, that shows up as spikes, even if the average RTT barely changes.

That is why it helps to look beyond a single number and track tail behavior: p95 and p99 RTT, peak-hour jitter, and even small but steady loss. Those are exactly the things infrastructure design and placement can improve — if the path to your servers is short, stable, and not forced through the wrong interconnects.
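As a minimal sketch, tail metrics can be derived from raw per-packet RTT samples in a few lines. The percentile here is an approximate nearest-rank calculation, and jitter is the mean absolute difference between consecutive RTTs (RFC 3550 uses a smoothed estimator; this is a simplification for illustration):

```python
import statistics

def tail_metrics(rtt_ms):
    """Summarize per-packet RTT samples (in ms) into tail statistics.

    Jitter is approximated as the mean absolute difference between
    consecutive samples; percentiles use a simple nearest-rank index.
    """
    s = sorted(rtt_ms)

    def pct(p):
        # approximate nearest-rank percentile
        return s[min(len(s) - 1, int(p / 100 * len(s)))]

    jitter = statistics.mean(
        abs(b - a) for a, b in zip(rtt_ms, rtt_ms[1:])
    ) if len(rtt_ms) > 1 else 0.0

    return {
        "avg": statistics.mean(rtt_ms),
        "p95": pct(95),
        "p99": pct(99),
        "jitter": jitter,
    }
```

Feeding this a distribution where 98% of packets are fine and 2% spike shows exactly why the average hides the problem: the mean barely moves while p99 explodes.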

Where the network’s “magic” ends — and where the game’s work begins

Some things the network cannot fix: a player’s Wi-Fi, home bufferbloat, your netcode, tickrate, or lag compensation. But a lot is infrastructure-driven: the path to your servers, the quality of interconnects, proximity to an Internet Exchange, direct reach into major ISPs, and the ability to move traffic away from a degraded route.

The rest of this article focuses on that infrastructure side: what to do when the problem is routing and stability, not server load.

Three common problems — and the infrastructure levers that actually help

Most latency issues that show up as “lag” are not caused by a single broken link. They happen because the network behaves exactly as networks do under load: paths shift, interconnects heat up, and queues build in the wrong places. The good news is that these problems are usually repeatable and diagnosable.

1) You launched a new region, but for many players it is “farther” than it should be

This is a classic trap: the data center location is geographically right, but network-wise it is not. For some ISPs the path is short and clean; for others it hairpins through extra transit, hits a congested peering link, or takes a strange BGP detour. Same city, very different experience by ISP.

The fix is mostly about site selection and routing policy. You do not choose a region only by the map; you choose it by the connectivity ecosystem around it. You want a nearby IX, the ability to reach the right networks quickly, plus the right mix of IP Transit and Remote IX / Remote Peering. Sometimes the biggest lever is not “more servers” but better routing: the right transit mix and BGP policy reduce unnecessary hops and peak-hour surprises.

2) The platform grew, and internal traffic started hurting game traffic

As you scale, you add replication, backups, build delivery, big releases, analytics, and more east-west traffic between services. Network load becomes heavy and bursty. If all of that rides the same paths as match traffic, queues are inevitable. Players do not care why the pipe got clogged — they just see rubber-banding and spikes.

This is where architecture discipline matters: separate traffic classes and give critical inter-site flows controlled transport. In practice, that often means technologies such as EoMPLS Pseudowire and other private interconnect models that make paths more predictable and reduce the chance that sync traffic will interfere with gameplay traffic. The point is not the acronym; the point is predictable paths, QoS, and enough capacity headroom to keep queues from building in the wrong place.
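One concrete piece of that discipline is marking traffic classes at the host so QoS policy on your own interconnects can act on them. The sketch below sets the DSCP field on UDP sockets (EF for real-time, CS1 for bulk, per the RFC 4594 conventions); note that routers on the public Internet may ignore or rewrite these markings — they are only reliably honored on paths you control:

```python
import socket

# DSCP values per RFC 4594 conventions, shifted into the TOS byte:
# EF (Expedited Forwarding) for real-time game traffic,
# CS1 (lower effort) for bulk flows such as replication and backups.
DSCP_EF = 46 << 2    # 0xB8
DSCP_CS1 = 8 << 2    # 0x20

def make_marked_socket(dscp):
    """Create a UDP socket whose outgoing packets carry the given DSCP.

    Whether the marking survives depends on the path: public transit
    may strip it, but private interconnects under your own QoS policy
    can classify and queue on it.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp)
    return sock

game_sock = make_marked_socket(DSCP_EF)    # match traffic
sync_sock = make_marked_socket(DSCP_CS1)   # replication / backups
```

The marking itself does nothing without matching queueing policy on the devices along the path; it is the handle that lets that policy tell gameplay apart from bulk sync.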

3) “Average is fine,” but evenings get worse every day

If complaints show up like clockwork, it is often not a game bug but peak-hour congestion and queueing. Some ISPs overload a peering link, some transit paths heat up, and sometimes routing changes so the path shifts. The result is not always high RTT, but bursts of jitter and loss.

This is where data beats instinct. Identify which ISPs and metros degrade, where the traceroute changes, and whether RTT, jitter, or loss is the primary symptom. Then fix it surgically: change ingress, refine BGP policy, move to a cleaner Internet handoff with DIA, or use purpose-built Low Latency Routes where specific geographies or paths need tighter control. The goal is not to “optimize the network” in general, but to correct the exact place where instability appears.
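A simple triage heuristic, sketched below under the assumption that you already collect baseline and peak-hour stats per ISP or metro: compare the two windows and name the metric that degraded the most relative to its baseline. The keys and the 1.5x threshold are illustrative, not a standard:

```python
def primary_symptom(baseline, peak, threshold=1.5):
    """Name the metric that degraded most between baseline and peak hour.

    `baseline` and `peak` are dicts with 'p99_rtt', 'jitter' and
    'loss_pct' values for one ISP or metro. Returns the worst-degraded
    metric name, or None if nothing worsened past the threshold.
    This is a triage heuristic, not a diagnosis.
    """
    ratios = {
        k: (peak[k] / baseline[k]) if baseline[k] else float("inf")
        for k in ("p99_rtt", "jitter", "loss_pct")
    }
    worst = max(ratios, key=ratios.get)
    return worst if ratios[worst] >= threshold else None
```

Running this per ISP each evening turns “evenings feel worse” into a ranked list of where to look first: a jitter-dominated ISP points at a hot peering link, while a loss-dominated one points at a saturated queue.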

Platform nuances that matter for network design

Once the basics are under control, the next step is aligning the network with how your game actually behaves. Not every title is sensitive to the same failure mode, and the right architecture depends on what players notice first.

Different genres, different pain profiles

In shooters, jitter sensitivity is brutal: small delay variation and loss quickly degrade hit registration and control feel. In MOBAs, rare but sharp spikes during large fights are often worse than ten extra milliseconds on average — tail latency matters more than the mean. In MMOs, long-session stability and service-to-service reliability matter more because there are more components, more east-west traffic, and more ways for issues to surface outside combat — inventory, chat, and instances included.

The point is practical, not theoretical: different games need different regional layouts and different tolerances for path instability.

UDP vs TCP vs QUIC: why games want delivery control

Real-time simulation typically uses UDP because it does not stall the stream on a lost packet and lets the game decide what to do — resend, predict, or drop. TCP can feel worse in real-time play because loss can block subsequent delivery. QUIC is a modern transport over UDP and works well for service channels such as authentication, APIs, and telemetry, but core simulation usually stays where the application fully controls reliability.

Whatever the transport, unstable paths still hurt. If the network path is noisy, the protocol alone will not save the player experience.
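The “game decides what to do with loss” point can be made concrete with a minimal sketch of an application-level policy over UDP: each datagram carries a sequence number, the receiver keeps only the newest state snapshot, and late or duplicate packets are dropped instead of blocking delivery (the behavior head-of-line-blocked TCP cannot offer). This is illustrative, not any particular engine's netcode:

```python
class StateChannel:
    """Last-write-wins state channel over unreliable datagrams.

    Every datagram carries a monotonically increasing sequence number.
    The receiver keeps only the newest snapshot and silently drops
    stale or duplicate packets rather than stalling the stream.
    """
    def __init__(self):
        self.last_seq = -1
        self.state = None

    def on_datagram(self, seq, payload):
        if seq <= self.last_seq:
            return False          # stale or duplicate: drop, don't block
        self.last_seq = seq
        self.state = payload      # newest snapshot wins
        return True
```

If packet 2 arrives after packet 3, it is simply discarded; the simulation keeps moving on the freshest state instead of waiting for a retransmission of data that is already obsolete.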

Relay patterns: when the direct path does not work

CGNAT and messy home networks make some UDP paths unreliable or impossible. Many platforms use a fallback model: try direct first, then switch to a relay node. Relay only helps when it is placed correctly — close to players in a network sense, with good peering and exchange reach, and well-connected to the target region.

This is where edge presence matters. Relays placed in the right PoPs and data centers are more likely to reduce jitter and loss instead of becoming one more bad hop.
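The direct-first-then-relay pattern can be sketched as a small path-selection routine. The `probe` callback is an assumption here, standing in for whatever RTT measurement the platform uses (it returns a measured RTT in ms, or None on timeout):

```python
def pick_path(direct_addr, relay_addrs, probe, timeout=0.25):
    """Try the direct UDP path first; fall back to the best relay.

    `probe(addr, timeout)` is a caller-supplied function returning the
    measured RTT in milliseconds, or None if the probe timed out
    (e.g. because CGNAT blocks the direct path).
    """
    rtt = probe(direct_addr, timeout)
    if rtt is not None:
        return ("direct", direct_addr, rtt)

    # Direct path failed: probe relays and pick the lowest-RTT one.
    candidates = [(probe(a, timeout), a) for a in relay_addrs]
    candidates = [(r, a) for r, a in candidates if r is not None]
    if not candidates:
        raise ConnectionError("no usable path to region")
    best_rtt, best_addr = min(candidates)
    return ("relay", best_addr, best_rtt)
```

The logic is trivial; the outcome depends entirely on where the relay candidates physically sit. A relay in the wrong PoP wins this selection and still delivers a bad path.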

Anti-DDoS without adding latency

DDoS can create queues at ingress or saturate bottlenecks. Bad mitigation can be just as harmful: if legitimate traffic is hairpinned to a distant scrubbing center and back, the service may stay technically up while the game becomes unplayable because of extra RTT and variance.

A practical rule is to filter as close to the edge as possible and segment traffic domains so an attack on one layer does not drag everything else down. That is also why low-latency mitigation matters: the protection layer has to defend the service without stretching the legitimate path, which is exactly the tradeoff addressed by DDoS mitigation.

SLOs and SLIs: measure what players actually feel

“Low latency” means very little until you define the tails. In real incidents, the average RTT may stay flat while p95 and p99 blow up, jitter grows in prime time, or small steady loss appears.

A useful approach is to track RTT, jitter, and loss by region and ISP and correlate those metrics with disconnect rates and player complaints. That turns changes in peering, transit, or placement from guesswork into measurable engineering iterations — especially when supported by visibility tools such as Best Path.

What we can offer on the part the network truly controls

We are not claiming we can remove all lag. What we can do is improve the part of the stack the network actually controls: placement, path quality, Internet handoff, inter-site predictability, and operational responsiveness.

That includes colocation across a broad global footprint, regional deployment options with Asia coverage, and operational models that make expansion and maintenance easier — from Virtual PoP for lighter-footprint presence to Remote Hands when infrastructure needs to be changed quickly without sending people on-site.

For smaller latency-sensitive scenarios, LagBlaster can be positioned as a lightweight complement to that stack rather than a full infrastructure redesign. Framed correctly, it is not about “more bandwidth” but about improving the path for gaming traffic with a simple, plug-and-play approach centered on low-latency connectivity and port management for endpoints such as PlayStation, Xbox, PC, and Mac.

Takeaways

  1. Game QoE is driven by tail latency: p95 and p99 RTT, jitter, and loss in prime time — not just average ping.
  2. Rubber-banding and micro-stutters are often caused by queueing and degraded interconnects or routes, not distance alone.
  3. Placement is more than a city choice: IX proximity, peering quality, and real ISP paths matter.
  4. Private inter-site connectivity helps when internal traffic would otherwise compete with match traffic and you need predictability.
  5. Low-latency DDoS mitigation only works well when the edge is placed correctly and legitimate traffic keeps a short path.
  6. The right way to talk about low latency is through SLOs, SLIs, and measurements — not slogans.