Why a website can be down in one country but working everywhere else
The same URL can return 200 OK in Frankfurt and time out in Mumbai. Four mechanisms explain almost every regional outage — GeoDNS, CDN POP failures, BGP routing problems, and government-level filtering. Here's how to tell which one is in play.
A friend in Berlin can use a service fine. You're in Mumbai and it's been broken for an hour. The status page says everything is operational. The vendor's customer support says they "can't reproduce." You're not imagining it — the same URL can genuinely return a 200 OK in one country and time out in another, and the explanation isn't conspiracy or incompetence. It's the way the modern internet routes traffic.
Websites haven't been single physical machines for twenty years. A request to chatgpt.com from Berlin and a request to chatgpt.com from Mumbai will resolve to different IP addresses, traverse different transit networks, and terminate at different physical data centres. Any of those layers can fail for one region while the others stay healthy. This post is the field guide to the four mechanisms that produce country-scale outages, and how to tell which one is in play.
The shape of a regional outage
Before the mechanisms: the symptom pattern.
A regional outage feels different from a normal outage. The vendor's status page says operational. Friends in other cities or other countries say the site works fine. Social media has scattered complaints — not the thousands-per-minute spike of a global outage, but a steady trickle from a recognisable cluster of locations. The site eventually recovers, often without a public incident report.
This shape is almost always one of four causes. The four sit at different layers of the network stack, but they all produce the same surface symptom: the site is up for some people and down for others, with the boundary drawn along geographic lines.
1. GeoDNS — different countries, different IPs
Large services don't return the same IP address to every user. The DNS infrastructure deliberately returns different answers based on the geographic location of the resolver asking. A request to example.com from a German resolver might return 203.0.113.45; the same query from a Japanese resolver might return 198.51.100.22. Both IPs are real; both belong to the service; both serve the same product. They're physically located in different data centres on different continents.
The mechanism is called GeoDNS (or GSLB — Global Server Load Balancing). It's used by every large consumer service and most enterprise ones. The reasons are good: routing users to the nearest data centre reduces latency dramatically, balances load across regions, and provides failover when one region goes down.
The failure mode: if the IP that a region is routed to has a problem — software bug, network issue, capacity exhaustion — users in that region get errors. Users in other regions, routed to other IPs, see nothing.
How to confirm
The fastest test is to query DNS from multiple resolvers in different countries and compare the answers. For a true geographic comparison, use a multi-region online DNS lookup tool — anycast resolvers obscure the geographic dimension because the query lands at whichever instance is closest to you. The DNS Lookup and DNS Propagation tools show resolver-specific results, including known geographic resolvers.
If different countries are seeing different IPs, that's GeoDNS at work. If one of those IPs is unreachable, that's a regional outage caused by a problem at the GeoDNS-routed endpoint.
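As a sketch of that check, suppose you've already collected the per-region answers from a multi-region lookup tool and tested each IP's reachability from a third location. The function name is hypothetical, and the IPs are documentation-range placeholders (the same ranges used above), not real addresses:

```python
# Sketch: decide whether GeoDNS is in play and, if so, which regional
# endpoint looks broken. Inputs are gathered manually as described above.

def diagnose_geodns(answers: dict[str, str], reachable: dict[str, bool]) -> str:
    """answers: region code -> IP returned by that region's resolver.
    reachable: IP -> whether a third location can reach it."""
    ips = set(answers.values())
    if len(ips) == 1:
        return "same IP everywhere: GeoDNS is not the differentiator"
    broken = [ip for ip in sorted(ips) if not reachable.get(ip, True)]
    if broken:
        regions = [r for r, ip in answers.items() if ip in broken]
        return f"GeoDNS in play; endpoint(s) {broken} down, affecting {regions}"
    return "GeoDNS in play, but every regional endpoint is reachable"

# Example: India is routed to an endpoint that a third location cannot reach.
answers = {"DE": "203.0.113.45", "JP": "198.51.100.22", "IN": "203.0.113.99"}
reachable = {"203.0.113.45": True, "198.51.100.22": True, "203.0.113.99": False}
print(diagnose_geodns(answers, reachable))
```

The interesting case is the middle branch: different IPs per region, one of them dark. That is the "regional outage caused by a problem at the GeoDNS-routed endpoint" described above.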
2. CDN POP failures — same IP, different physical data centres
A subtler version of the same problem: the IP address is the same in every country, but the physical server that answers it depends on where you are. This is anycast — the IP is announced from dozens of different physical locations, and your packets get routed to whichever location's network is closest.
Cloudflare, Fastly, Akamai, Amazon CloudFront, and every major CDN works this way. The IP 104.16.123.45 (for example) is simultaneously "live" in Frankfurt, Mumbai, São Paulo, Singapore, and 200+ other locations. Each of those locations is a POP (point of presence) — an edge data centre with cached content and a connection back to the origin.
When a single POP has a problem — a routing misconfiguration, a software bug in the cache layer, a power event — users routed to that POP see errors. The IP is still up everywhere else; the global health metrics are still green; the vendor's status page says all systems operational. But if your packets land at the broken POP, you see a 521, a 522, a "this site can't be reached," or a long hang.
CDN POP failures are the single most common cause of regional outages on the modern internet. They typically last 15–90 minutes and resolve when the POP operator notices the elevated error rate and either fails the POP out of rotation or fixes the underlying issue.
How to confirm
The error page itself is often the giveaway. CDN error pages identify which POP served them:
- Cloudflare error pages include a line like Cloudflare Ray ID: 8a7e2c0d3f-FRA, where the suffix after the dash is the IATA airport code naming the POP (FRA = Frankfurt, BOM = Mumbai, LHR = London Heathrow). See Cloudflare error codes decoded.
- Fastly error pages name the POP in the response headers (x-served-by: cache-fra19193-FRA).
- AWS CloudFront includes x-amz-cf-pop: FRA50-C3 in the response headers.
If you're seeing a CDN-branded error page with a POP identifier, and a friend in another country sees the site working, you've found a POP-specific failure. The fix is purely on the CDN's side — wait, or use a VPN to route through a different POP as a workaround.
3. BGP routing — the path between you and the site
A layer deeper. Even when DNS resolves correctly and the destination is up, your packets have to find their way to it. The internet is not a flat network; it's a graph of tens of thousands of independent networks (ISPs, cloud providers, transit carriers, university networks) that exchange routes via BGP — Border Gateway Protocol.
When a transit provider in your region loses its BGP path to the destination's network — through a router misconfiguration, a fibre cut, a routing-table corruption — your traffic literally has no route to the destination. The destination is up, your network is up, but there's no path between them. Packets get dropped at the broken intermediate.
BGP issues are how regional outages can be caused by a third party you've never heard of. A transit provider in Indonesia loses its peering with a tier-1 carrier; suddenly every site that uses that carrier's network is unreachable from Indonesia, even though the sites are fine and Indonesia's internet is fine.
The most famous BGP incidents — the 2008 Pakistan Telecom YouTube hijack, the 2021 Facebook outage (technically a withdrawn BGP route), the regular accidental hijacks logged by BGPmon — show how a routing-table change at one provider can ripple outward and break connectivity for users worldwide. The smaller, less-newsworthy version of the same thing happens regularly at regional scale.
How to confirm
A traceroute from your network to the destination will show packets travelling a few hops and then disappearing. If the trace dies at a specific hop and never reaches the destination, that hop is where your packets are getting lost. If a friend in another country can complete the same trace successfully, the route from their location bypasses the broken hop — confirming the issue is in your region's routing.
BGP-level fixes are entirely the transit providers' responsibility. End users cannot do anything except wait (or switch to a different ISP, which might use different transit, as a temporary workaround).
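To make the "dies at a specific hop" check concrete, here is a minimal sketch that scans saved traceroute output for the last hop that answered. The sample trace uses placeholder addresses, and real traceroute formatting varies by OS, so treat this as an illustration rather than a general parser:

```python
# Sample traceroute output where hops 4+ never answer: packets are
# being dropped somewhere past hop 3.
SAMPLE = """\
 1  192.168.1.1      1.2 ms
 2  10.50.0.1        4.8 ms
 3  203.0.113.1     22.4 ms
 4  * * *
 5  * * *
"""

def last_responding_hop(trace: str):
    """Return (hop_number, address) of the last hop that replied."""
    last = None
    for line in trace.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] != "*":
            last = (int(parts[0]), parts[1])
    return last

print(last_responding_hop(SAMPLE))  # the hop after this one is where packets vanish
```

If your friend's trace to the same destination completes while yours stops at that hop, the network past it is reachable in general, just not from your region's transit path.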
4. Government-level blocking
Some regional outages aren't outages at all — they're deliberate. A government has blocked the site from being reached within its territory. The site is fine; the network is fine; the firewall at the border is doing exactly what it was configured to do.
The pattern is well-documented:
- China: the Great Firewall blocks a long list of foreign services (Google, Facebook, Twitter, most news outlets). VPN traffic is increasingly detected and dropped.
- Russia: Roskomnadzor publishes a register of blocked URLs; ISPs are legally required to enforce blocks. The list has included LinkedIn (since 2016), Telegram (2018–2020), and increasingly large parts of the Western media after 2022.
- Iran: nation-wide blocks of many social platforms; periodic full internet blackouts during civil unrest.
- Turkey, Pakistan, India, Bangladesh, Saudi Arabia, and many other countries: tactical blocks on specific platforms during politically sensitive periods.
- The UK and Germany: court-ordered DNS blocks of specific URLs, narrower in scope but real.
The fingerprint of state-level blocking, as opposed to a normal outage:
- The block is stable over time — the site stays broken for days or weeks, not minutes.
- The block is specific — one site or domain is unreachable while related sites work fine.
- The error pattern is identical across ISPs in the country — every consumer ISP enforces the same block list.
- VPN traffic to outside the country bypasses the block (which is why the affected governments invest heavily in VPN detection).
How to confirm
The blunt test: connect to a VPN that exits in a different country. If the site loads via VPN and fails without it, and the failure is identical on every ISP you can test, the block is at the network-policy layer. The OONI Probe project publishes detailed country-by-country censorship measurements.
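As a sketch, the fingerprint reduces to a simple rule over observations you gather by hand: per-ISP results without a VPN, plus one test through a foreign VPN exit. The function and argument names are hypothetical:

```python
def classify_block(results_by_isp: dict[str, bool], via_vpn: bool) -> str:
    """results_by_isp: ISP name -> did the site load without a VPN?
    via_vpn: did the site load through a VPN exiting in another country?"""
    if any(results_by_isp.values()):
        return "not a country-wide block: at least one ISP reaches the site"
    if via_vpn:
        return "fingerprint matches network-policy blocking (state-level)"
    return "unreachable even via VPN: likely a real outage, not a block"

print(classify_block({"ISP A": False, "ISP B": False}, via_vpn=True))
```

The first branch matters: a state-level block is enforced identically across every consumer ISP, so a single ISP that can reach the site rules it out.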
Putting it together — the diagnostic flow
For a suspected regional outage, the order of investigation is the order this post is in:
1. Does DNS return the same IP from different geographic resolvers? If no, GeoDNS is involved. Check whether the IP for the affected region is reachable from a third location.
2. If yes, is the error page identifying a specific CDN POP? A POP-named error page (Cloudflare ray ID, Fastly cache header, CloudFront POP ID) names the specific edge data centre that failed.
3. If neither, can you traceroute to the destination? A trace that dies at a specific hop, plus a friend in another country who can complete the trace, points to a BGP-level routing issue.
4. If the trace completes for you, but the site times out at the application layer and a VPN bypass works, that's government-level filtering: the site is reachable from outside the country and isn't from inside.
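The flow above can be sketched as a single decision function, where each argument is the answer to one of the numbered checks (names are hypothetical; gathering the answers still requires the manual tests described):

```python
def diagnose(same_ip_everywhere: bool,
             cdn_pop_on_error_page: bool,
             trace_dies_midway: bool,
             vpn_bypass_works: bool) -> str:
    """Map the four diagnostic answers to the most likely cause."""
    if not same_ip_everywhere:
        return "GeoDNS: check whether the affected region's IP is reachable"
    if cdn_pop_on_error_page:
        return "CDN POP failure at the named edge data centre"
    if trace_dies_midway:
        return "BGP/transit: routing issue between you and the destination"
    if vpn_bypass_works:
        return "government-level filtering"
    return "pattern unclear: may be a global outage or an application issue"

print(diagnose(True, False, True, False))
```

The ordering mirrors the network stack: DNS first, then the CDN edge, then the routing path, then policy. Each check only makes sense once the earlier layers have been ruled out.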
What you can do as a user
For each of the four causes, your options as a user are narrow:
- GeoDNS endpoint failure: wait for the operator to fail the affected region over, or use a VPN that resolves from a different region.
- CDN POP failure: wait, or use a VPN to route through a different POP.
- BGP routing issue: wait, or try a different ISP (or a mobile connection) that may use different transit.
- Government-level block: a VPN with an exit outside the country, where that is legal.
In each case, the diagnostic value of confirming the cause is mostly that you can stop trying to fix something on your end. None of these are caused by your device, browser, or local network. Restarting your router will not change a BGP routing issue or a CDN POP failure.
A note on the StatusDetector approach
Because regional outages don't show up on global aggregate metrics, we treat them as a first-class signal: the Shutdown Radar merges third-party user reports (which are inherently geographic — they come from a real user in a real place) with our own probes (which run from a fixed location, so we see what users in that location see) and the vendor's status page (which is the global aggregate). When the three diverge in a way that suggests one region is affected, we flag it.
The honest limit of this approach: a single-probe service can never see a regional outage in a region we don't probe from. That's why for the most critical services we either run multi-region probes or correlate with public user-report data from communities like Down Detector and Reddit. No single tool can diagnose every regional outage; the cross-reference is what catches them.
Frequently asked
If I use a VPN, will it always bypass regional outages?
For GeoDNS, CDN POP failures, and government blocks: usually yes — the VPN's exit IP appears to be in a different geographic region, so the upstream routes traffic accordingly. For BGP routing issues: depends on the VPN provider's transit path. A VPN that uses the same broken transit segment to reach the destination won't help. Try a VPN with exits on a different continent before concluding the issue is global.
How long do CDN POP outages typically last?
Most resolve within 30–90 minutes. The CDN operator's monitoring sees the elevated error rate, fails the POP out of rotation, and traffic routes to neighbouring POPs while engineers investigate. Truly extended POP outages (4+ hours) are rare and usually involve power or fibre issues at the data centre itself.
Why doesn't my ISP's customer service know about these?
Because their support staff use the same global-aggregate dashboards as the vendor. Front-line ISP support has visibility into their own network's health, not into the BGP relationships their upstream transit providers have with the destination's network. Escalating past first-level support sometimes helps; usually the practical move is just waiting.
The Cloudflare ray ID includes 'LHR' — what does that mean?
LHR is the IATA airport code for London Heathrow. Cloudflare uses IATA codes to identify their POPs, so a ray ID ending in -LHR was served from the London data centre. Other common codes: FRA (Frankfurt), AMS (Amsterdam), SIN (Singapore), NRT (Tokyo Narita), DFW (Dallas-Fort Worth), IAD (Washington DC Dulles), ORD (Chicago O'Hare). If the ray ID's POP doesn't match your geographic location, you're being routed long-distance, which often correlates with the nearer POP being degraded.