Data for D/C's

Discussion in 'Technical Help' started by Apoc_Ellipsis, Sep 14, 2025.

  1. Apoc_Ellipsis
    Offline

    Apoc_Ellipsis Donator

    Joined:
    Jul 18, 2023
    Messages:
    352
    Likes Received:
    924
    Gender:
    Male
    Country Flag:
    IGN:
    ApocEllipsis
    I’ve been logging MapleRoyals’ player count since the 9th. Below are the charts and what I think is happening.
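If anyone wants to replicate this logging, here's a minimal sketch of what I mean. The endpoint URL and JSON shape are placeholders (point it at whatever source you're reading counts from); it just appends (timestamp, count) rows to a CSV you can chart later.

```python
import csv
import json
import time
import urllib.request

def log_player_count(url, out_path, interval_s=60, iterations=None):
    """Poll `url` (assumed to return JSON like {"online": 1234}) and
    append (unix_time, count) rows to a CSV at `out_path`.

    The "online" key is a placeholder -- adjust it to match the real
    endpoint's response shape."""
    n = 0
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        while iterations is None or n < iterations:
            with urllib.request.urlopen(url, timeout=10) as resp:
                count = json.load(resp)["online"]
            writer.writerow([int(time.time()), count])
            f.flush()  # keep rows on disk even if the poller dies mid-run
            n += 1
            if iterations is None or n < iterations:
                time.sleep(interval_s)
```

A 30–60 second interval is plenty of resolution for 2–10 minute drops; run it under a scheduler or leave it going in a terminal.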

    upload_2025-9-13_16-47-52.png

    upload_2025-9-13_16-48-18.png

    We see short, sudden step-downs in the number of players online, then a clean recovery a few minutes later.
    Typical length: 2–10 minutes

    How big are they? Usually 5–15% of whoever is online at that moment (about 50–300 players), with the worst around ~20% (~470 players).

    When do they happen? Often between 13:00–17:30 UTC, with another smaller cluster near ~23:00 UTC.
    They also appear at quieter times, so it’s not only peak-load related.
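The step-drop stats above can be pulled out of a count log automatically. A rough sketch, where the thresholds mirror what I'm describing (a ≥5% step down with recovery within ~10 minutes) and the 30-second sample spacing is an assumption:

```python
def find_drops(counts, interval_s=30, min_frac=0.05, max_len_s=600):
    """Scan a player-count series for short step-drops.

    counts: samples at a fixed `interval_s` spacing.
    Flags a drop when the count falls >= `min_frac` versus the previous
    sample and recovers to ~baseline within `max_len_s` seconds.
    """
    drops = []
    i = 1
    while i < len(counts):
        base = counts[i - 1]
        if base > 0 and (base - counts[i]) / base >= min_frac:
            j = i
            # walk forward until the count recovers to ~97% of baseline
            while j < len(counts) and counts[j] < 0.97 * base:
                j += 1
            duration_s = (j - i) * interval_s
            if duration_s <= max_len_s:
                drops.append({"start_idx": i,
                              "duration_s": duration_s,
                              "pct": round(100 * (base - counts[i]) / base, 1)})
            i = j + 1
        else:
            i += 1
    return drops
```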

    Not one channel: multiple channels report lag/disconnects at the same time. If it were a single channel/process restart, we’d expect the same slice of players to drop every time; that’s not what the data shows.

    Cross-server clue: A different server (MapleLegends), hosted with the same provider (OVH) in the same country, shows dents at similar minutes. That points to something outside the game software affecting both servers, most likely the network path.

    upload_2025-9-13_16-58-28.png

    - Data borrowed from another player as I do not track Legends with my data

    What this probably is
    OVH’s DDoS protection (“VAC”) sometimes flips our traffic through a security checkpoint.
    Think of everyone’s connection as driving on a highway to the game. When OVH thinks there might be an attack—or normal traffic looks spiky—it reroutes cars through a toll plaza (a “scrubbing center”) to check them. That sudden detour adds a brief stall and changes the road mid-trip. Most people just feel a lag spike. Some—usually on longer or fussier routes (APAC, some Verizon/South America paths)—time out and disconnect during that stall. Later, when OVH thinks things are calm, it flips back to the old road… causing another brief stall.

    This fits the charts because:
    • Drops are short and clean (minutes, then back to normal).
    • The percent hit varies each time (not a fixed “channel size”).
    • Another OVH server shows dips at the same minutes.
    • Reports skew by region/ISP, which is what you get when some Internet paths hiccup while others are fine.
    • This would also explain why certain ISPs/regions are hit harder while others only see a lag spike: connections on closer/faster paths can keep the session alive through the stall, while ISPs with less direct routes time out.
    Practical options
    1. Permanent mitigation on the game IP (24/7).
      • What it does: traffic is always routed through the VAC scrubbing path; no auto cutovers.
      • Why it helps: eliminates the brief re-route that’s knocking active sessions off.
      • Trade-offs: usually +5–20 ms and a bit more jitter; otherwise minimal if the scrubber is local and capacity is fine.
    2. Tune VAC rules for the game profile (reduce false positives).
      • Ensure the correct “Game/UDP/TCP” profile and exact ports are whitelisted; raise burst thresholds that are being mistaken for attacks; disable signatures known to trip on game patterns.

      • OVH can check hits against your timestamps and adjust per-IP policy.
    3. Session-tolerance on the game side
      • Increase client/server timeouts and keepalive frequency so a lag spike is not a disconnect.
      • No clue if this is even plausible
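    For what it's worth, here's what "session tolerance" looks like at the socket level. Purely illustrative (Linux-specific constants), since I have no idea how the actual client/server is built; the idea is that more frequent keepalive probes and a higher miss allowance let a connection ride out a multi-second stall instead of resetting.

```python
import socket

def tolerant_socket():
    """Create a TCP socket tuned to survive brief path stalls.

    Uses Linux-specific keepalive knobs: probe after 30 s of silence,
    every 10 s thereafter, and only give up after 6 missed probes
    (~90 s of dead air) instead of resetting at the first hiccup.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)   # idle before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # gap between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 6)     # misses before reset
    return s
```

    (Keepalive only covers idle connections; the server-side equivalent is raising its own read timeout so a few seconds of silence isn't treated as a dead client.)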

    tl;dr
    Across 9/09–9/13 (UTC) there are ~20 short step-drops in concurrent players, typically 2–10 minutes long and 5–20% of online users. They occur at multiple load levels, often in windows around 13:00–17:30 UTC (peak) with a smaller cluster near ~23:00 UTC (much quieter). MapleLegends shows dents in similar windows. The pattern fits OVH VAC (DDoS) mitigation cutovers or an upstream route change, not server CPU or a single channel restart.
     
    Last edited: Sep 14, 2025
  2. Apoc_Ellipsis
    Including 9/14 data

    upload_2025-9-14_18-5-4.png
     
  3. Green Mind
    Online

    Green Mind Donator

    Joined:
    Jan 11, 2022
    Messages:
    666
    Likes Received:
    965
    Gender:
    Male
    Do you also only DC on one client when multiple are open? That's the strangest thing about the recent DCs for myself and other players I've asked.

    Monoclienting is not safe either; the most recent VLs have had DCs, and they were monoclienting...
     
    Last edited: Sep 15, 2025
  4. 1Minh
    Offline

    1Minh Donator

    Joined:
    Sep 22, 2024
    Messages:
    82
    Likes Received:
    69
    Country Flag:
    IGN:
    Vu
    Guild:
    New Haven
    Hey, excellent report.
    I've been a victim of the DCing for the past few weeks, and here are some things that may support your thesis:
    • No lag until right before the DC, then about 10 seconds of lag (mobs walk in place, etc.) and a DC to the login screen. I can log right back in after the DC.
    • Happens multiple times over the span of around half an hour.
    • Tried connecting a few clients to VPN on a different PC, and those didn't DC
     
  5. Apoc_Ellipsis
    9/15-9/17 data
    (And some early morning 9/18 data)

    upload_2025-9-17_22-11-8.png
     
  6. Apoc_Ellipsis
    9/18-9/20 data upload_2025-9-20_20-56-58.png
     
  7. Apoc_Ellipsis
    9/21-9/22 data.
    upload_2025-9-22_18-32-42.png
     
  8. Apoc_Ellipsis
    9/23 & 9/24 data.

    upload_2025-9-25_14-50-24.png
     
  9. Apoc_Ellipsis
    Okay, so good news: during this last d/c I was able to have an NTR running to keep an eye on things and gather more information for the team.

    Here is during the server disconnects

    upload_2025-9-25_14-51-44.png

    And here is during the restored time

    upload_2025-9-25_14-51-59.png

    What this means, if you need more direct evidence to discuss with OVH, is that we are actively seeing their VAC scrubbing in action: at the Zayo/OVH handoff, traffic during the d/c events is being routed off the usual path and into OVH's DDoS scrubber (VAC). In the "before" shot, the first place packets start dropping is a Zayo hop (148.113.188.57), and that loss continues straight onto the first OVH edge hops (145.239.* / 147.135.* / sometimes 198.27.*) until the server stops replying. In the "after" shot, the server is reachable again and the route clearly shows the scrubbing path: OVH POPs like 147.135.*, 198.27.*, 54.36.*, and 213.186.* appear along the way. A "POP" (Point of Presence) is just OVH's local entry site in a city/region; think of it as their doorway onto the wider Internet. So during the drop we're catching the flip into mitigation (packets die at the Zayo→OVH on-ramp and the first OVH hops); once the flip settles, everything flows again through the scrubber.


    Ask them to confirm VAC enter/exit around the windows from my timestamps above; keep mitigation steady instead of flapping (either permanent mitigation, or auto-mitigation with a hold-down so it stays on once triggered); tune the UDP/game profile so normal bursts don't trip it; and review the Zayo↔OVH handoff near 148.113.188.57 during those same windows.

    I have included a .csv of all the drops if that's easier for documentation than a bunch of pictures. It's been changed to a .txt because of forum restrictions, but should work if you change it back to a CSV.

    Hope this helps! :)
     

    Attached Files:

    Last edited: Sep 25, 2025
  10. Apoc_Ellipsis
    9/25-9/27 data
    For the record, the 9/25 @ 18:47:30 event has been our worst one yet:
    a 30.8% drop (486 players dropped)

    upload_2025-9-27_18-1-56.png
     
  11. Apoc_Ellipsis
    Got another NTR in on this 17:00ish d/c cycle

    upload_2025-9-28_10-10-15.png

    This one was our worst ever: we lost 1/3 of the server in this d/c spike.

    upload_2025-9-28_10-13-28.png


    upload_2025-9-28_10-9-4.png

    Plugging my data into ChatGPT, plus what I know:


    During the most recent disconnect window (UTC times in the screenshots), I captured a live trace to the game IP 57.128.187.242. While players were dropping, the end-to-end path showed ~46% packet loss. The first device where loss turned on and then stayed on was a Zayo (AS6461) router: 148.113.188.57 (preceded by 128.177.169.81). From there, the loss continued across the OVH edge (addresses like 145.239.*, 147.135.*, and 198.27.73.206) and through OVH scrubbing nodes (54.36.50.240 / 213.186.32.253) down to the destination. A few minutes later the destination recovered to 0% loss; the route still showed those scrubbing nodes, which is what you see when OVH's DDoS protection (VAC) is in place and traffic is being cleaned before it reaches the server.

    This lines up with the pattern we’ve been logging since 9/9: short drops affecting a slice of players, often around the same times each day, sometimes coinciding with another OVH-hosted server. The trace here adds the “where”: loss starts at or just before the Zayo → OVH handoff and persists inside the OVH mitigated path.
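    To make the "where loss turns on and stays on" claim repeatable, here's a sketch that parses mtr-style report lines and picks the first hop whose loss persists all the way to the destination. The regex and the "persists to the end" heuristic are my assumptions; mid-path hops showing 100% loss followed by clean hops are usually routers deprioritizing probe replies, not real loss, which is why the heuristic skips them.

```python
import re

# hop number, AS field, hostname, then the trailing "loss" figure
_HOP = re.compile(r"\s*(\d+)\.\s+\S+\s+(\S+).*?([\d.]+)%?\s+\d+")

def parse_hops(report):
    """Return [(hop_no, host, loss_pct), ...] from an mtr-style report."""
    hops = []
    for line in report.splitlines():
        m = _HOP.match(line)
        if m:
            hops.append((int(m.group(1)), m.group(2), float(m.group(3))))
    return hops

def first_lossy_hop(report, threshold=10.0):
    """First hop whose loss meets `threshold` AND stays lossy on every
    later hop; isolated 100%-loss hops followed by clean hops are
    treated as routers ignoring probes rather than actual loss."""
    hops = parse_hops(report)
    for i, (no, host, loss) in enumerate(hops):
        if loss >= threshold and all(l >= threshold for _, _, l in hops[i:]):
            return no, host, loss
    return None
```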

    Suggested next steps for OVH (based on this evidence, not demands):

    1. Confirm the VAC state for this IP during the window in the screenshots (which POP handled it, whether mitigation was entered/active/exited).

    2. Check the health/capacity of the scrubbing POP used in that window (packet-loss, queues, CPU).

    3. Check interface counters on the Zayo ↔ OVH peering around 148.113.188.57 for errors/drops and coordinate with Zayo if needed.

    4. Reduce path flapping: either keep this IP in permanent mitigation or keep auto-mitigation but add a hold-down so once triggered it stays on for a period instead of bouncing.

    5. Review the VAC game/UDP profile for our ports so normal bursts don’t trigger sensitivity.
    All disconnect windows (from 9/9 onward) are already listed in the logs I posted, so OVH can line up their internal timeline with those.

    ---
    And an after for comparison when I was back on my 6 stopper farmers.

    at 17:23

    upload_2025-9-28_10-25-41.png
     
    Last edited: Sep 28, 2025
  12. Apoc_Ellipsis
    9/28-9/30 data
    upload_2025-9-30_23-13-2.png
     
  13. Apoc_Ellipsis
    Switching to weekly updates because I forgot to post a whole bunch.
    Data from 10/1 through 10/7
    upload_2025-10-7_23-7-15.png
     
  14. Apoc_Ellipsis
    Been super busy IRL, so much so that my server was literally off from the 11th and I didn't notice until some point on the 15th.
    Drops haven't stopped, but they have calmed down it seems.

    EDIT: "Calmed down" just means drops have been 100–200 players tops, and we haven't seen any major 200+ player drops!
    upload_2025-10-28_23-15-42.png

    Before anyone complains that it's just fewer players: player count has been mostly consistent for the last month.

    Most and Least Players (The data showed an inaccurate drop due to missing data so I scribbled the dates my server was off)
    upload_2025-10-28_23-21-57.png

    Average Players (The data showed an inaccurate drop due to missing data so I scribbled the dates my server was off)
    upload_2025-10-28_23-18-43.png
     
    Last edited: Oct 29, 2025
  15. Kamuna
    Offline

    Kamuna Member

    Joined:
    Sunday
    Messages:
    5
    Likes Received:
    35
    Location:
    AFK IRLstory
    IGN:
    Kamuna
    Guild:
    Kamuna
    I am the "another player" mentioned here. I've been monitoring player count for several years now, and showed @Apoc_Ellipsis how to use the API endpoint that Royals provides so they can do their own logging in Splunk. Proof:

    Screenshot 2025-11-23 at 21.56.59.png

    Here's player count data over the last day or so. I find that rates of change worse than -0.15 to -0.10, integrated over 5-minute intervals, coincide with meaningful player DCs.

    Screenshot 2025-11-23 at 22.38.42.png
    Screenshot 2025-11-23 at 22.38.53.png
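    A rough Python rendering of that idea (assuming 1-minute samples and treating "rate of change" as fractional change per minute over the 5-minute window; the cutoff units are an interpretation, not the actual Splunk search):

```python
def flag_dc_windows(samples, window_min=5, cutoff=-0.10):
    """Flag likely-DC windows in a player-count series.

    samples: list of (minute, count) at 1-minute spacing.
    Computes fractional change per minute across each 5-minute window
    and flags windows at or below `cutoff`.
    """
    flagged = []
    for i in range(len(samples) - window_min):
        t0, c0 = samples[i]
        t1, c1 = samples[i + window_min]
        if c0 <= 0:
            continue
        rate = (c1 - c0) / c0 / (t1 - t0)
        if rate <= cutoff:
            flagged.append(t0)
    return flagged
```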

    There are some minor errors in the original post, as well as fatal misconceptions about network engineering that I'd like to correct.

    There's no pattern of incident (nor recovery) here. You're dealing with a black box for DDoS mitigation, which can drop packets however it feels like: on a single IP address, an IP range/prefix, an ASN, a regional basis, or whatever else they want to do. You're also dealing with human players, who may either decide to log back in or not bother.

    Different country: MR is in OVH Erith (IIRC), while ML is in OVH Beauharnois near Montreal, and it's questionable whether they're actually running game servers there or just tunneling. But yes, the only thing you should take away from that comparison is that player counts dropping on both servers at the same time is a common thread pointing to OVH.

    This ChatGPT "explanation" is pretty bad. The highway analogy just doesn't work either.
    • This can vary on an address-by-address or network-by-network basis, but if shields are up, then everyone is going through security checkpoints at the borders of OVH's network, usually the nearest OVH point of presence to the user by network distance.
    • There's no one singular "checkpoint"; OVH's network is vast, and they ingress traffic all over the world. For example, in your traces you enter OVH's network through Portland, since that's the closest OVH POP to you; your traffic is filtered there before it moves on. On my way to one of MR's game servers, I enter OVH's network via Montreal, since that's the nearest POP to me. My traffic is filtered at that border before I transit OVH's network to the UK, where the MR game server is.
    Code:
      7. AS16276  ymq-mtl3-pb1-8k.qc.ca (192.99.146.109)                   0.0%    10   21.0  19.3  17.4  23.2   1.8
      8. AS???    ???                                                     100.0    10    0.0   0.0   0.0   0.0   0.0
      9. AS???    ???                                                     100.0    10    0.0   0.0   0.0   0.0   0.0
     10. AS16276  ymq1-bhs1-vac1-a75-1-firewall.qc.ca (192.99.146.152)     0.0%    10   20.8  19.8  17.9  21.7   1.4
     11. AS16276  ymq1-bhs1-vac1-a75-2.qc.ca (192.99.146.150)              0.0%    10   16.3  18.9  16.0  36.6   6.3
     12. AS16276  ymq1-bhs1-vac1-a75-3.qc.ca (192.99.146.151)              0.0%    10   18.9  16.7  14.9  18.9   1.2
     13. AS???    ???                                                     100.0    10    0.0   0.0   0.0   0.0   0.0
     14. AS???    ???                                                     100.0    10    0.0   0.0   0.0   0.0   0.0
     15. AS16276  nyc-ny1-sbb2-8k.nj.us (192.99.146.139)                  60.0%    10   29.0  28.8  27.2  30.5   1.4
     16. AS???    ???                                                     100.0    10    0.0   0.0   0.0   0.0   0.0
     17. AS???    ???                                                     100.0    10    0.0   0.0   0.0   0.0   0.0
     18. AS16276  be101.lon1-eri1-g1-nc5.uk.eu (213.186.32.253)           90.0%    10   95.6  95.6  95.6  95.6   0.0
     19. AS16276  be101.lon1-eri1-g2-nc5.uk.eu (91.121.215.119)           50.0%    10   88.7 489.2  87.3 2088. 894.3
     20. AS???    ???                                                     100.0    10    0.0   0.0   0.0   0.0   0.0
     21. AS16276  ns3238208.ip-57-128-187.eu (57.128.187.242)             60.0%    10   93.3  92.9  86.8 100.0   5.4
    • You can trace to see if your traffic is being filtered or not. If you see hops that involve "VAC" then your traffic is being filtered, and your packets have a chance of being dropped if OVH doesn't like you in that moment.
    • Longer routes don't mean fussier ones. You just happen to see a lot of complaints from countries where there are more players and a higher chance of someone being vocal about it.
    None of this makes sense; that's not how this works; the ENF is probably configured correctly; etc. The options here are:
    • MR options: open ticket with OVH support to see if there's any thresholds they can adjust, probably max pps detection threshold before VAC kicks in. Consider a different server host.
    • player options: if your traffic is being filtered, use a VPN to either tunnel all the way to the UK and hopefully OVH POP doesn't filter traffic from your VPN's network there, or tunnel somewhere near you and hope the same there.
    Did you also know that OVH doesn't firewall internal connections? This is a common complaint I still hear about them: services getting attacked internally, because OVH still hasn't addressed it on their technical roadmap. So as long as you end up within OVH's network, you should be able to connect to the game just fine. You can either find a VPN provider who has servers on OVH's network, or you can get an OVH VPS and run one yourself.
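    As a concrete sketch of the "run one yourself" option, here's the shape of a WireGuard client config that tunnels only the game server's traffic through a VPS inside OVH's network. Keys, addresses, and the VPS IP are all placeholders, and the VPS side still needs WireGuard, IP forwarding, and NAT set up:

```ini
# Hypothetical WireGuard client config; every key/address here is a placeholder.
[Interface]
PrivateKey = <client-private-key>
Address = 10.66.0.2/32

[Peer]
PublicKey = <vps-public-key>
Endpoint = <your-ovh-vps-ip>:51820
# Route only the MR game server through the tunnel, not all of your traffic
AllowedIPs = 57.128.187.242/32
PersistentKeepalive = 25
```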
     
    Last edited: Nov 25, 2025 at 2:29 AM
