You’re getting 403 errors. Let’s talk about what’s actually happening.

You wrote a scraper. It worked for a while. Then every request started returning 403 Forbidden. You added headers. You rotated User-Agents. You slowed down. Nothing worked.

So you signed up for Bright Data or ScraperAPI, thinking proxy rotation would fix everything. Now you’re paying $25 per 1,000 requests — and still getting 403s.

Here’s the truth: a 403 error is not a simple access denial. It’s an anti-bot system telling you it knows you’re a bot. And rotating your IP address doesn’t change what you are.

The three causes of 403 errors in web scraping

1. Anti-bot systems (the hardest to fix)

This is the cause in 80%+ of scraping 403 errors today. Modern anti-bot platforms — Akamai Bot Manager, Cloudflare, DataDome, PerimeterX (HUMAN), Kasada — are deployed on millions of websites. They don’t just check your IP address. They fingerprint your entire browser environment.

When Akamai returns a 403, it has already:

  • Analyzed your TLS handshake fingerprint
  • Evaluated your HTTP/2 settings and header order
  • Checked JavaScript execution environment markers
  • Compared your browser fingerprint against known bot patterns
  • Correlated your session with previous flagged sessions

No amount of proxy rotation fixes this. Bright Data rotates your IP. Akamai doesn’t care about your IP.
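The logic can be illustrated with a toy model. The fingerprint an anti-bot system computes is derived from client-side properties — TLS parameters, HTTP/2 settings, header order — and the source IP is not one of its inputs, so the fingerprint survives any amount of proxy rotation. The field names below are illustrative, not any vendor's actual schema:

```python
import hashlib
import json

def bot_fingerprint(client: dict) -> str:
    """Toy fingerprint: a hash over the client-side properties an
    anti-bot system observes. The source IP is deliberately absent."""
    stable = {k: client[k] for k in ("tls_ciphers", "http2_settings", "header_order")}
    return hashlib.sha256(json.dumps(stable, sort_keys=True).encode()).hexdigest()[:16]

scraper = {
    "tls_ciphers": ["TLS_AES_128_GCM_SHA256", "TLS_CHACHA20_POLY1305_SHA256"],
    "http2_settings": {"HEADER_TABLE_SIZE": 4096, "INITIAL_WINDOW_SIZE": 65535},
    "header_order": ["host", "user-agent", "accept"],
}

# Rotate the IP twice; the fingerprint the anti-bot system sees never changes.
fp_ip1 = bot_fingerprint({**scraper, "ip": "203.0.113.5"})
fp_ip2 = bot_fingerprint({**scraper, "ip": "198.51.100.9"})
assert fp_ip1 == fp_ip2
```

This is exactly why a proxy service can burn through a thousand residential IPs and never produce a different verdict.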

2. WAF rules (Web Application Firewall)

Some 403s come from WAF rules — simpler blocking based on request patterns, geographic restrictions, or known bad IP ranges. AWS WAF, Cloudflare WAF, and Imperva WAF can block requests based on:

  • Request rate from a single IP
  • Missing or suspicious headers
  • Known datacenter IP ranges
  • Geographic restrictions
  • Specific URL patterns

WAF 403s are the easy ones. They can usually be fixed with proper headers and residential proxies. If your 403s are from WAF rules, Bright Data might actually work. The problem is that most people assume their 403s are WAF blocks when they’re actually anti-bot blocks.
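For the WAF case, "proper headers" means a header set that looks like a real browser's. Here's a minimal sketch; the exact values are assumptions based on a recent desktop Chrome, so capture your own browser's headers from DevTools for an exact match:

```python
def build_browser_headers(host: str) -> dict:
    """Return a Chrome-like header set for a plain GET request.
    Values are illustrative; copy real ones from your browser's DevTools."""
    return {
        "Host": host,
        "User-Agent": (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/124.0.0.0 Safari/537.36"
        ),
        "Accept": (
            "text/html,application/xhtml+xml,application/xml;q=0.9,"
            "image/avif,image/webp,*/*;q=0.8"
        ),
        "Accept-Language": "en-US,en;q=0.9",
        "Accept-Encoding": "gzip, deflate, br",
        "Connection": "keep-alive",
        "Upgrade-Insecure-Requests": "1",
    }

headers = build_browser_headers("example.com")
```

If sending these through a residential proxy still returns 403, you're almost certainly past the WAF tier and into anti-bot territory.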

3. Rate limiting

True rate limiting 403s are increasingly rare. Most modern sites use anti-bot systems instead of simple rate limits. But some APIs and smaller sites still use basic rate limiting that returns 403 after N requests per minute.

Rate limiting 403s have a distinctive pattern: your first N requests succeed, then everything fails for a cooldown period. If this is your pattern, slowing down might actually help. But read our post on rate limits vs anti-bot before assuming that’s your problem.
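That distinctive pattern is easy to check for in your logs. A rough heuristic, assuming you've recorded the status code of each request in order:

```python
def looks_like_rate_limit(status_history: list) -> bool:
    """Heuristic: an unbroken run of successes followed only by 403s
    suggests a simple rate limit (cooldown), not an anti-bot block.
    Anti-bot blocks tend to fail from the first request or intermittently."""
    if 403 not in status_history:
        return False
    first_block = status_history.index(403)
    # All requests before the first 403 succeeded...
    leading_ok = all(s == 200 for s in status_history[:first_block])
    # ...and everything after it is blocked (cooldown in effect).
    trailing_blocked = all(s == 403 for s in status_history[first_block:])
    return first_block > 0 and leading_ok and trailing_blocked

looks_like_rate_limit([200] * 50 + [403] * 10)   # rate-limit shaped
looks_like_rate_limit([403, 403, 403])            # blocked from request one: anti-bot shaped
```

Interleaved successes and failures, or 403s from the very first request, point away from rate limiting.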

How to diagnose which system is blocking you

Stop guessing. The 403 response itself tells you what’s blocking you if you know where to look.

Check the response headers

# Akamai Bot Manager
Look for: Akamai-GRN header, _abck cookie, ak_bmsc cookie
Response often includes: "Reference #" error ID

# Cloudflare
Look for: cf-ray header, cf-cache-status header
Response page: "Checking your browser..." or Turnstile challenge

# DataDome
Look for: x-datadome header, datadome cookie
Response: JSON with url field pointing to captcha page

# PerimeterX (HUMAN)
Look for: _px cookie, _pxhd cookie
Response: Block page with "Press & Hold" or puzzle challenge

# Kasada
Look for: x-kpsdk-ct header, _kpsdk cookie
Response: Proof-of-work challenge page

Check the response body

Anti-bot 403 pages have distinctive HTML patterns:

  • Akamai: References to akam in inline scripts, _abck cookie setting
  • Cloudflare: /cdn-cgi/challenge-platform/ paths, Turnstile widget
  • DataDome: Redirect to geo.captcha-delivery.com
  • PerimeterX: References to perimeterx or human.com in scripts

Check the cookies

If the response sets cookies like _abck, __cf_bm, datadome, or _px — you’re dealing with anti-bot protection. These cookies are challenge tokens that your scraper failed to generate correctly.
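Those signatures can be rolled into a quick triage helper. This is a sketch based on the markers listed above — real deployments vary, and a site can sit behind Cloudflare's CDN while a different vendor does the blocking, so the Cloudflare check comes last:

```python
def identify_blocker(headers: dict, cookies: dict) -> str:
    """Map telltale 403-response headers and cookies to the anti-bot
    vendor most likely behind the block. Signatures are heuristic."""
    h = {k.lower() for k in headers}
    c = set(cookies)
    if "akamai-grn" in h or {"_abck", "ak_bmsc"} & c:
        return "Akamai Bot Manager"
    if "x-datadome" in h or "datadome" in c:
        return "DataDome"
    if "x-kpsdk-ct" in h or "_kpsdk" in c:
        return "Kasada"
    if {"_px", "_pxhd"} & c:
        return "PerimeterX (HUMAN)"
    # cf-ray appears on every Cloudflare-fronted site, even when another
    # vendor issues the block, so check it only after the others.
    if "cf-ray" in h or "__cf_bm" in c:
        return "Cloudflare"
    return "unknown (possibly a plain WAF rule)"
```

Feed it the headers and cookies from any 403 response — for example, `identify_blocker(dict(resp.headers), resp.cookies.get_dict())` with a `requests` response — and you have your diagnosis before spending a cent on proxies.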

Why Bright Data and ScraperAPI make 403s worse, not better

Here’s what happens when you send your 403’d request through Bright Data:

  1. Bright Data routes it through a residential proxy
  2. The request hits the anti-bot system with a new IP — but the same bot fingerprint
  3. The anti-bot system blocks it again: 403
  4. Bright Data retries with another IP
  5. Same bot fingerprint, same 403
  6. After several retries, Bright Data returns the 403 to you
  7. You get charged for every attempt

Bright Data charges $25.10 per 1,000 requests for their Web Unlocker. If the success rate on your target site is 10%, you’re paying $251 for 1,000 successful pages. And that’s optimistic — on Akamai-protected sites, Bright Data’s success rate is often under 5%.
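That arithmetic generalizes: the effective price of data is the sticker price divided by the success rate, because every attempt is billed but only successes return pages. Using the figures from the text:

```python
def cost_per_1k_successful(price_per_1k: float, success_rate: float) -> float:
    """Effective cost of 1,000 *successful* pages: every attempt is
    billed, but only success_rate of attempts return data."""
    return price_per_1k / success_rate

# Bright Data Web Unlocker at a 10% success rate (figures from the text):
print(round(cost_per_1k_successful(25.10, 0.10)))  # → 251
```

Run the same formula at a 5% success rate and the effective price doubles to roughly $502 per 1,000 pages — the sticker price tells you almost nothing.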

ScraperAPI is worse. At $4.90 per 1,000 requests it sounds cheap, but their success rate on anti-bot protected sites is effectively zero. Unlimited requests times zero success rate equals zero data. You could run 100,000 requests through ScraperAPI on an Akamai-protected site and get nothing back.

Oxylabs, ZenRows, Apify — same story. They’re all proxy rotation services with headless browser farms. The headless browsers have bot fingerprints. Anti-bot systems detect bot fingerprints. Rotating the IP doesn’t change the fingerprint.

The actual fix: per-site anti-bot analysis

The 403 error exists because the anti-bot system detected your scraper as a bot. The fix is to not look like a bot. That requires understanding exactly what the anti-bot system is checking and passing every check.

This is what UltraWebScrapingAPI does differently:

  1. We identify the exact anti-bot system protecting your target site — not just “Cloudflare” but the specific configuration, challenge types, and detection rules.

  2. We reverse-engineer the detection logic for each site. Akamai on Site A might check canvas fingerprinting aggressively while Akamai on Site B focuses on behavioral signals. One-size-fits-all bypass doesn’t work because one-size-fits-all detection doesn’t exist.

  3. We use real Chrome browsers with genuine fingerprints. Not Puppeteer. Not Playwright. Not headless Chrome with --headless=new. Real Chrome instances with real GPU rendering, real canvas output, real WebGL fingerprints.

  4. We solve the challenges the anti-bot system expects — JavaScript challenges, cookie generation, proof-of-work computations — exactly the way a real browser would.

The result: 99%+ success rate on the same sites where Bright Data gets 403’d.

The cost difference is staggering

Let’s say you need 10,000 pages from an Akamai-protected site per month.

Service                     Cost per 1K requests   Success rate   Cost for 10K successful pages
UltraWebScrapingAPI         $50                    99%+           ~$500
Bright Data Web Unlocker    $25.10                 5-10%          $2,500-$5,000
ScraperAPI                  $4.90                  ~0%            Impossible
Oxylabs                     $15                    10-20%         $750-$1,500

You’re not just getting fewer 403s with us. You’re paying 5-10x less for the data you actually need.

Stop paying for 403 errors

Every 403 you get from Bright Data is money you paid for nothing. Every retry is another charge. Every failed request is wasted budget that could have gone toward actual data.

Try UltraWebScrapingAPI in our free playground — paste your 403-producing URL and watch it return clean HTML on the first try. No credit card required. No setup. Just results.

If you’ve been getting 403s, you don’t need more proxies. You need better bypass technology. That’s exactly what we built.