FFXIV Latency issues

A few hours ago my ping became noticeably higher, and it has been a problem ever since. I've done everything on my end to try to combat it, and my ISP (Dodo) says everything on their end looks fine. I've since run several tracerts to several servers, and the problem seems to lie with a 'telstraglobal' hop.
Here is a typical tracert to the JP data centre (I've run several tracerts, and every time the problem lies with hop 10):

Tracing route to 202.67.53.202 over a maximum of 30 hops

1 <1 ms <1 ms <1 ms home.gateway.home.gateway [192.168.1.254]
2 27 ms 26 ms 26 ms 191.3.148.122.network.m2core.net.au [122.148.3.191]
3 27 ms 28 ms 29 ms be2-v547-bsr02-sydnmtc.syd.nsw.m2core.net.au [122.148.4.81]
4 28 ms 27 ms 28 ms TenGigE0-0-0-3.chw-edge902.sydney.telstra.net [139.130.197.141]
5 30 ms 29 ms 29 ms bundle-ether14.chw-core10.sydney.telstra.net [203.50.11.100]
6 28 ms 28 ms 27 ms Bundle-ether17.oxf-gw2.sydney.telstra.net [203.50.13.70]
7 28 ms 31 ms 28 ms bundle-ether1.sydo-core01.sydney.reach.com [203.50.13.38]
8 147 ms 147 ms 151 ms 202.84.140.138
9 150 ms 150 ms 149 ms i-0-1-1-0-peer.siko02.pr.telstraglobal.net [134.159.160.177]
10 235 ms 206 ms 640 ms kddi-peer.siko02.pr.telstraglobal.net [134.159.160.178]
11 147 ms 147 ms 147 ms otejbb205.int-gw.kddi.ne.jp [106.187.6.81]
12 153 ms 147 ms 147 ms jc-ote301.int-gw.kddi.ne.jp [118.155.197.178]
13 148 ms 147 ms 147 ms 106.187.28.198
14 147 ms 205 ms 147 ms 61.195.56.133
15 148 ms 148 ms 148 ms 219.117.144.78
16 148 ms 148 ms 148 ms 219.117.144.49
17 148 ms 149 ms 148 ms 219.117.146.129
18 148 ms 150 ms 150 ms 219.117.146.102
19 147 ms 149 ms 148 ms 202.67.53.202

Trace complete.
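When comparing several tracerts like the one above, a short script can make the problem hop easier to spot than eyeballing the columns. This is a rough sketch (assuming the standard Windows tracert line format shown above) that parses the hop lines and flags the hop with the widest spread between its best and worst RTT samples:

```python
import re

# Sketch: parse Windows tracert output (the format shown above) and flag
# the hop with the widest spread between its best and worst RTT samples.
# "*" timeouts are simply skipped; "<1 ms" counts as 1 ms.

HOP_RE = re.compile(r"^\s*(\d+)\s+(.*)$")
RTT_RE = re.compile(r"(\d+)\s*ms")

def parse_hops(trace_text):
    """Return a list of (hop_number, [rtt_ms, ...]) tuples."""
    hops = []
    for line in trace_text.splitlines():
        m = HOP_RE.match(line)
        if not m:
            continue
        rtts = [int(x) for x in RTT_RE.findall(m.group(2))]
        if rtts:
            hops.append((int(m.group(1)), rtts))
    return hops

def jitteriest_hop(hops):
    """Return (hop_number, spread_ms) for the hop with the widest RTT spread."""
    return max(
        ((hop, max(rtts) - min(rtts)) for hop, rtts in hops),
        key=lambda pair: pair[1],
    )
```

Feeding the trace above into `parse_hops` and `jitteriest_hop` singles out hop 10 (235/206/640 ms), matching the eyeball diagnosis.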

Now, I've done a few tracerts to this telstraglobal server itself, and this is a typical result:

Tracing route to i-0-1-1-0-peer.siko02.pr.telstraglobal.net [134.159.160.177]
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms home.gateway.home.gateway [192.168.1.254]
2 27 ms 27 ms 27 ms 191.3.148.122.network.m2core.net.au [122.148.3.191]
3 41 ms 28 ms 28 ms be2-v547-bsr02-sydnmtc.syd.nsw.m2core.net.au [122.148.4.81]
4 28 ms 27 ms 27 ms TenGigE0-0-0-3.chw-edge902.sydney.telstra.net [139.130.197.141]
5 28 ms 29 ms 29 ms bundle-ether14.chw-core10.sydney.telstra.net [203.50.11.100]
6 28 ms 27 ms 31 ms Bundle-ether17.oxf-gw2.sydney.telstra.net [203.50.13.70]
7 * 32 ms 31 ms bundle-ether2.oxf-gw1.sydney.telstra.net [203.50.6.85]
8 28 ms 28 ms 28 ms tengigabitethernet2-0.syd-core03.sydney.reach.com [203.50.13.54]
9 31 ms 31 ms 31 ms 202.84.223.41
10 150 ms 147 ms 147 ms i-0-1-0-7.siko-core04.bx.telstraglobal.net [202.84.141.193]
11 182 ms 148 ms 148 ms i-0-1-1-0-peer.siko02.pr.telstraglobal.net [134.159.160.177]

Trace complete.
There is at least one timeout per tracert, and it can be on any hop. This leads me to believe the problem lies with this server hop, which being Telstra, I assume is not to do with the game but an intermediate hop, owned by Telstra, on the way to the server. In which case, who would I speak to about this, if it can even be fixed? Telstra will simply tell me to call my ISP, who will tell me everything is fine, as any test I run for them won't include this server. Has anyone had this problem before? Is there anything I can do?

ANSWER: This is the root of most of our routing woes, and it is basically the result of your ISP's routing policies. They have to enter into peering/transit agreements with other ISPs to get you to other people's networks, so directing you back to your ISP is actually the correct path for you to take. It sounds like you may be dealing with a lower tier of support, most likely Tier 1, or maybe Tier 2 if someone came on site to check your lines and such. Typically you need to be dealing with Tier 3, as they are usually the ones with the resources to conduct more thorough investigations and escalate things further upstream if necessary.

In most cases, the fix ultimately amounts to altering the routing path to bypass a problem segment when things can't be improved through the existing agreements. The root cause is often not enough bandwidth at an exchange, which requires purchasing more hardware/bandwidth to properly address; if they can get away with tweaking the routing metrics so you get redirected, that is a much cheaper band-aid sort of fix. The point is, there typically is no "easy" fix for these problems. Unfortunately, you are dealing with undersea cabling out of Australia, which may severely limit the options depending on how the peering/transit agreements are set up (here in the States, pretty much everyone has at least three ISPs they can use, so it is much easier for them to flip us onto an alternate route).

In the short term, you may get some relief using a VPN or proxy service to put yourself on a slightly different path, or at the very least onto a higher priority slot by encrypting your data. Shaping rules are often what trigger the delays: they hold back packets for lower priority traffic in an attempt to stave off congestion. Encrypted packets typically get around most of that, but if the nodes are just too borked, even that doesn't avoid the slowdowns much. The real beauty of a VPN service is that you can often pick different locations to tunnel to, which effectively alters the path you take and can get you around the troubled segment altogether. Overall latency may increase slightly in the process, since you may wind up doing the "around the elbow to get to the thumb" thing, but it will stabilize things enough that you can adjust your play style to compensate.

Many just go to the more popular Battleping and Pingzapper type services, but they can get a bit pricey for some folks. There are a lot of other services out there, though. Most services will offer at least a free trial period so you can see whether the service is worth subscribing to. Others may offer an ongoing free-use policy but limit how much you can use them: perhaps a data cap, or so many hours a day/week, or they may be ad-supported (you get pop-up ads periodically, which can disrupt gameplay). So you may want to shop around. Here are some notes on a few I tinkered with here in the US back during the 2.0 launch fiasco. They helped me demonstrate the problem to a technician (another great thing about these services: you can sort of flip the switch on your routing and show that the issue lies in their upstream routing to the server, not at your localized segments).
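To put numbers behind that "flip the switch" comparison (run once with the VPN off, then once with it on), here is a rough sketch. The host and port below are placeholders you'd swap for a target on the far side of the suspect hop; ICMP ping needs admin rights in most setups, so this times TCP connects instead, which still traverse the same route:

```python
import socket
import statistics
import time

# Sketch: measure loss and RTT jitter along the current route by timing
# repeated TCP connects. The host/port passed in are placeholders --
# substitute a target beyond the suspect hop that accepts connections.

def probe(host, port, attempts=20, timeout=2.0):
    """Return (loss_fraction, [rtt_ms, ...]) over `attempts` TCP connects."""
    rtts, failures = [], 0
    for _ in range(attempts):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.monotonic() - start) * 1000.0)
        except OSError:
            failures += 1
        time.sleep(0.2)  # pace the probes; don't hammer the target
    return failures / attempts, rtts

if __name__ == "__main__":
    loss, rtts = probe("example.com", 443)  # placeholder target
    if rtts:
        print(f"loss {loss:.0%}, median {statistics.median(rtts):.0f} ms, "
              f"worst {max(rtts):.0f} ms")
```

Run it on your normal connection and again through the VPN; a clear drop in loss or worst-case RTT is exactly the kind of evidence a Tier 3 tech can escalate.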
