I’ve been logging MapleRoyals’ player count since the 9th. Below are the charts and what I think is happening.

We see short, sudden step-downs in the number of players online, then a clean recovery a few minutes later.

- Typical length: 2–10 minutes.
- How big are they? Usually 5–15% of whoever is online at that moment (about 50–300 players), with the worst around ~20% (~470 players).
- When do they happen? Often between 13:00–17:30 UTC, with another smaller cluster near ~23:00 UTC. They also appear at quieter times, so it’s not only peak-load related.
- Not one channel: multiple channels report lag/disconnects at the same time. If it were a single channel/process restart, we’d expect the same slice of players to drop every time; that’s not what the data shows.
- Cross-server clue: a different server (MapleLegends), hosted with the same provider (OVH) in the same country, shows dents at similar minutes. That points to something outside the game software, most likely the network path, that is common to both servers.
  - Data borrowed from another player, as I do not track Legends with my data.

What this probably is

OVH’s DDoS protection (“VAC”) sometimes flips our traffic through a security checkpoint. Think of everyone’s connection as driving on a highway to the game. When OVH thinks there might be an attack (or normal traffic just looks spiky), it reroutes cars through a toll plaza (a “scrubbing center”) to check them. That sudden detour adds a brief stall and changes the road mid-trip. Most people just feel a lag spike. Some, usually on longer or fussier routes (APAC, some Verizon/South America paths), time out and disconnect during that stall. Later, when OVH thinks things are calm, it flips back to the old road… causing another brief stall.

This fits the charts because:

- Drops are short and clean (minutes, then back to normal).
- The percent hit varies each time (not a fixed “channel size”).
- Another OVH server shows dips at the same minutes.
- Reports skew by region/ISP, which is what you get when some Internet paths hiccup while others are fine.

This could also explain why certain ISPs/regions are more impacted: connections on closer/faster paths can “keep alive” through the stall and just feel a lag spike, while ISPs with a less direct route time out entirely.

Practical options

1. Permanent mitigation on the game IP (24/7). What it does: traffic is always routed through the VAC scrubbing path; no auto cutovers. Why it helps: eliminates the brief re-route that’s knocking active sessions off. Trade-offs: usually +5–20 ms and a bit more jitter; otherwise minimal if the scrubber is local and capacity is fine.
2. Tune VAC rules for the game profile (reduce false positives). Ensure the correct “Game/UDP/TCP” profile and exact ports are whitelisted, raise burst thresholds that are being mistaken for attacks, and disable signatures known to trip on game patterns. OVH can check hits against your timestamps and adjust per-IP policy.
3. Session tolerance on the game side. Increase client/server timeouts and keepalive frequency so a lag spike is not a disconnect. No clue if this is even plausible.

tl;dr

Across 9/09–9/13 (UTC) there are ~20 short step-drops in concurrent players, typically 2–10 minutes long and 5–20% of online users. These occur at multiple load levels, often in windows around 13:00–17:30 UTC (peak) and a smaller cluster near ~23:00 UTC (much quieter). MapleLegends shows dents in similar windows. The pattern fits OVH VAC (DDoS) mitigation cutovers or an upstream route change, not server CPU or a single channel restart.
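For anyone curious how I’m flagging these, the detection itself is nothing fancy. Here’s a rough Python sketch of the idea, not my exact setup: the CSV name, column names, the 5% threshold, and the 10-minute baseline are all placeholders, assuming a log with roughly one sample per minute.

```python
# Rough sketch of step-drop detection over a player-count log.
# Assumes a CSV with columns "timestamp_utc,players", one sample per minute.
# File name, column names, and thresholds are placeholders.
import csv
from datetime import datetime

DROP_THRESHOLD = 0.05   # flag anything >= 5% below the recent baseline
BASELINE_SAMPLES = 10   # how many recent samples form the pre-drop baseline

def load_samples(path):
    """Yield (timestamp, players) pairs from the log."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield datetime.fromisoformat(row["timestamp_utc"]), int(row["players"])

def find_drops(samples):
    """Yield (time, baseline, players, pct_drop) whenever the current count
    falls well below the max of the last BASELINE_SAMPLES samples."""
    window = []
    for ts, players in samples:
        if window:
            baseline = max(p for _, p in window)
            if baseline > 0:
                pct = (baseline - players) / baseline
                if pct >= DROP_THRESHOLD:
                    yield ts, baseline, players, pct
        window.append((ts, players))
        window = window[-BASELINE_SAMPLES:]

if __name__ == "__main__":
    for ts, base, now, pct in find_drops(load_samples("royals_players.csv")):
        print(f"{ts:%Y-%m-%d %H:%M} UTC  {base} -> {now}  ({pct:.1%} drop)")
```

During a dip this will flag every minute until the pre-drop level ages out of the baseline window, which is actually handy for eyeballing how long each event lasted.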
Do you also only DC on one client when multiple are open? That’s the strangest thing about the recent DCs for me and the other players I’ve asked. Monoclienting is not safe either; most recent VLs have had DCs, and they were monoclienting...
Hey, excellent report. I’ve been a victim of the DCing for the past few weeks, and here are some things that may support your thesis:

- No lag until about 10 seconds right before the DC (mobs walk in place, etc.), then DC to the login screen.
- Can log right back in after the DC.
- Happens multiple times over the span of around half an hour.
- Tried connecting a few clients through a VPN on a different PC, and those didn’t DC.
Okay, so good news: in this last d/c I was able to have an NTR going to keep an eye on things and give more information for the team.

Here is during the server disconnects

And here is during the restored time

What this means (and what you can use if you need more direct evidence to discuss with OVH) is that we are actively seeing their VAC scrubbing in action. At the Zayo/OVH handoff we see that during the d/c events traffic is being routed off the usual path and into OVH’s DDoS scrubber (VAC).

In the before shot, the first place packets start dropping is a Zayo hop (148.113.188.57), and that loss continues straight onto the first OVH edge hops (145.239.* / 147.135.* / sometimes 198.27.*) until the server stops replying. In the after shot, the server is reachable again and the route clearly shows the scrubbing path: OVH POPs like 147.135.*, 198.27.*, 54.36.*, 213.186.* appear on the way. A “POP” (Point of Presence) is just OVH’s local entry site in a city/region; think of it as their doorway onto the wider Internet.

So during the drop we’re catching the flip into mitigation (packets die at the Zayo→OVH on-ramp and the first OVH hops); once the flip settles, everything flows again through the scrubber.

Ask them to:

- confirm VAC enter/exit around the windows from my timestamps above,
- keep mitigation steady instead of flapping (either permanent mitigation, or auto with a hold-down so it stays on once triggered),
- tune the UDP/game profile so normal bursts don’t trip it, and
- review the Zayo↔OVH handoff near 148.113.188.57 during those same windows.

I have included a .csv of all the drops if that’s easier for documentation than a bunch of pictures. It’s been changed to a .txt because of forum restrictions, but should work if you change it back to a CSV. Hope this helps!
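If anyone else wants to contribute numbers without setting up a full trace, even a dumb timestamped TCP connect probe from your own connection would help line things up against the drop windows. Rough Python sketch below; the IP is the game IP from the traces, while the port, output file name, and intervals are placeholders (use whatever port your client actually connects on).

```python
# Crude reachability logger: attempts a TCP connect to the game host every
# INTERVAL seconds and records success/failure with a UTC timestamp, so
# connect failures can be lined up against the player-count drop windows.
# PORT, file name, and timings are placeholders.
import csv
import socket
import time
from datetime import datetime, timezone

HOST = "57.128.187.242"   # game IP from the traces
PORT = 8484               # placeholder port; use the port your client connects on
TIMEOUT = 3.0             # seconds before an attempt counts as a failure
INTERVAL = 10             # seconds between probes

with open("reachability_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        ts = datetime.now(timezone.utc).isoformat(timespec="seconds")
        start = time.monotonic()
        try:
            with socket.create_connection((HOST, PORT), timeout=TIMEOUT):
                ok, ms = 1, round((time.monotonic() - start) * 1000)
        except OSError:
            ok, ms = 0, ""
        writer.writerow([ts, ok, ms])   # timestamp, reachable (1/0), connect ms
        f.flush()
        time.sleep(INTERVAL)
```

It won’t tell you where on the path the loss is, but a cluster of failures at the same minutes as the chart dips is easy, shareable evidence.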
9/25–9/27 data

For the record, the 9/25 @ 18:47:30 event has been our worst one yet: a 30.8% drop (486 players dropped).
Got another NTR in on this 17:00ish d/c cycle. This one was our worst ever; we lost 1/3 of the server in this d/c spike.

Plugging my data into ChatGPT + what I know:

During the most recent disconnect window (UTC time in the screenshots), I captured a live trace to the game IP 57.128.187.242. While players were dropping, the end-to-end path showed ~46% packet loss. The first device where loss turned on and then stayed on was a Zayo (AS6461) router: 148.113.188.57 (preceded by 128.177.169.81). From there, the loss continued across the OVH edge (addresses like 145.239.* / 147.135.* / 198.27.73.206) and through OVH scrubbing nodes (54.36.50.240 / 213.186.32.253) down to the destination. A few minutes later the destination recovered to 0% loss; the route still showed those scrubbing nodes, which is what you see when OVH’s DDoS protection (VAC) is in place and traffic is being cleaned before it reaches the server.

This lines up with the pattern we’ve been logging since 9/9: short drops affecting a slice of players, often around the same times each day, sometimes coinciding with another OVH-hosted server. The trace here adds the “where”: loss starts at or just before the Zayo → OVH handoff and persists inside the OVH mitigated path.

Suggested next steps for OVH (based on this evidence, not demands):

- Confirm the VAC state for this IP during the window in the screenshots (which POP handled it, whether mitigation was entered/active/exited).
- Check the health/capacity of the scrubbing POP used in that window (packet loss, queues, CPU).
- Check interface counters on the Zayo ↔ OVH peering around 148.113.188.57 for errors/drops and coordinate with Zayo if needed.
- Reduce path flapping: either keep this IP in permanent mitigation, or keep auto-mitigation but add a hold-down so once triggered it stays on for a period instead of bouncing.
- Review the VAC game/UDP profile for our ports so normal bursts don’t trigger sensitivity.

All disconnect windows (from 9/9 onward) are already listed in the logs I posted, so OVH can line up their internal timeline with those.

---

And an “after” for comparison, when I was back on my 6 stopper farmers, at 17:23.
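One more thing that might make the timeline matching easier on OVH’s side: the drop log can be turned into padded UTC windows for them to check against their VAC and peering logs. A minimal sketch, assuming a CSV with one row per drop and columns "start_utc,end_utc"; the file name, column names, and the ±5 minute padding are placeholders.

```python
# Turn logged drop events into padded UTC windows so OVH can match them
# against VAC enter/exit and peering counters. Assumes a CSV with columns
# "start_utc,end_utc" per drop; names and padding are placeholders.
import csv
from datetime import datetime, timedelta

PAD = timedelta(minutes=5)   # slack on each side of a drop window

def padded_windows(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            start = datetime.fromisoformat(row["start_utc"]) - PAD
            end = datetime.fromisoformat(row["end_utc"]) + PAD
            yield start, end

if __name__ == "__main__":
    for start, end in padded_windows("drop_windows.csv"):
        print(f"{start:%Y-%m-%d %H:%M}Z to {end:%Y-%m-%d %H:%M}Z")
```

Handing them a short list of time ranges instead of raw samples means they only have to pull VAC state and interface counters for a handful of windows.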
Been super busy IRL, so much so that my server was literally off from the 11th and I didn’t notice until some point on the 15th. Drops haven’t stopped, but they have calmed down, it seems.

EDIT: “Calmed down” just means they’ve been at 1–200 players tops and we haven’t seen any major 200+ player drops! Before anyone complains that it’s just fewer players, the player count has been mostly consistent for the last month.

Most and Least Players (the data showed an inaccurate drop due to missing data, so I scribbled over the dates my server was off)

Average Players (the data showed an inaccurate drop due to missing data, so I scribbled over the dates my server was off)