You rotated your IP. Your cookies told them it’s still you.

IP rotation is the foundation of every major scraping service. Bright Data has 72M+ residential IPs. Oxylabs has 100M+. ScraperAPI, ZenRows, Apify — they all sell you fresh IPs on every request.

And anti-bot systems don’t care. Because they’re not tracking you by IP anymore. They’re tracking you by cookies.

Every time your scraper touches an anti-bot protected site, the server drops cookies that contain encrypted session identifiers, browser fingerprint hashes, and behavioral scores. When your next request arrives from a “new” IP but carries the same cookie — or suspiciously lacks one — the anti-bot system knows exactly what’s happening.

This is cookie-based bot detection, and it’s the reason IP rotation has been a dead strategy since 2023.

How anti-bot cookies work

Anti-bot cookies are not simple session identifiers. They’re encrypted data packages that bind a session to a specific browser fingerprint. Here’s what’s inside:

Session fingerprint binding

When you first visit a DataDome-protected site, the anti-bot JavaScript collects your browser fingerprint — canvas hash, WebGL renderer, screen dimensions, timezone, fonts, everything. This fingerprint is hashed and stored in a cookie.

On subsequent requests, DataDome checks: does the cookie’s stored fingerprint match the current browser’s fingerprint?

If you’re a real user, it always matches. Your browser hasn’t changed between pages.

If you’re a scraper rotating IPs, one of three things happens:

  1. You send no cookie (new session each request) — Suspicious. Real users carry cookies between pages. Every request being a “first visit” is a massive bot signal.

  2. You send the same cookie from a different IP — The cookie’s fingerprint matches, which is good. But the IP changed. The anti-bot system checks if the IP change is plausible (same ISP/region) or suspicious (jumped from Brazil to Japan in 200ms).

  3. You send a cookie from one browser with a different browser’s fingerprint — Instant block. The fingerprint stored in the cookie doesn’t match the fingerprint collected from the current browser. This happens when scrapers share cookies between different headless Chrome instances.
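The three scenarios above reduce to one server-side comparison. A minimal sketch in Python — the hashing scheme, signal names, and return labels are illustrative, not DataDome’s actual format:

```python
import hashlib
import json
from typing import Optional

def fingerprint_hash(signals: dict) -> str:
    """Hash the collected browser signals into a stable fingerprint ID."""
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def check_session(cookie_fp: Optional[str], current_signals: dict) -> str:
    """Classify a request according to the three scenarios above."""
    if cookie_fp is None:
        # Every request looking like a "first visit" is itself a bot signal.
        return "suspicious"
    if cookie_fp != fingerprint_hash(current_signals):
        # The cookie was minted by a different browser: instant block.
        return "block"
    # Fingerprint matches; move on to IP-plausibility and behavior checks.
    return "ok"
```

Note that scenario 2 (same cookie, new IP) lands in the `"ok"` branch here precisely because the fingerprint check passes — it gets caught later, by the IP-plausibility check.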

Behavioral scoring cookies

Anti-bot systems don’t just store fingerprints in cookies — they store behavioral scores. Each interaction (page load, API call, mouse movement, scroll pattern) adjusts a score that’s encrypted in the cookie.

A real user’s cookie accumulates a history of normal behavior: browsed a few pages, scrolled naturally, clicked links at human speeds. The behavioral score is high.

A scraper’s cookie (if it maintains one) shows: loaded a page, immediately made 50 API requests, no mouse movement, no scrolling, requests at machine speed. The score drops to zero. Blocked.
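The scoring logic can be sketched as a running counter that rewards human-paced interaction and penalizes machine-speed bursts. Every event name, weight, and threshold below is invented for illustration:

```python
class BehaviorScore:
    """Toy behavioral score: starts neutral, updated on every event."""

    def __init__(self):
        self.score = 50
        self.last_ts = None

    def record(self, event: str, ts: float):
        # Arrivals faster than any human could interact get penalized hard.
        if self.last_ts is not None and ts - self.last_ts < 0.2:
            self.score -= 15
        if event in ("mouse_move", "scroll", "click"):
            self.score = min(100, self.score + 5)
        elif event == "api_call":
            self.score -= 2
        self.last_ts = ts
        self.score = max(0, self.score)
```

A burst of back-to-back API calls drives the score to zero within a handful of events, while a few human-paced scrolls and clicks keep it comfortably high — exactly the asymmetry the paragraph above describes.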

The cookie ecosystem check

Anti-bot systems don’t just check their own cookies: they also look at which other cookies are present. It’s a subtle but powerful technique.

A real browser visiting a website has:

  • Google Analytics cookies (_ga, _gid)
  • Marketing cookies (Facebook pixel, ad tracking)
  • Consent management cookies
  • Third-party cookies from embedded content

A headless Chrome scraper visiting the same website has the anti-bot cookie and nothing else, or only first-party cookies because third-party cookies were blocked. The absence of a normal cookie ecosystem is itself a detection vector.
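A crude version of this check is a set intersection against cookie names a real browser tends to accumulate. The names beyond `_ga`/`_gid` below (`_fbp` for the Facebook pixel, `OptanonConsent` for a common consent-management platform) are illustrative examples, not an exhaustive or authoritative list:

```python
# Illustrative cookie names a real browser tends to carry; real detection
# lists are far larger and site-specific.
EXPECTED_ECOSYSTEM = {"_ga", "_gid", "_fbp", "OptanonConsent"}

def ecosystem_missing(cookie_names: set) -> bool:
    """Flag sessions that carry no normal ecosystem cookies at all."""
    return not (cookie_names & EXPECTED_ECOSYSTEM)
```

A session that presents only the anti-bot vendor’s own cookie trips this flag immediately.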

DataDome: session correlation at the edge

DataDome is arguably the most sophisticated cookie-based detection system. Here’s how their session tracking works:

  1. First request — The browser hits the site. DataDome’s JavaScript tag loads and collects 200+ browser signals. These are sent to DataDome’s edge server. A session cookie (datadome) is set with an encrypted payload containing the fingerprint hash and initial behavioral score.

  2. Subsequent requests — Every request includes the datadome cookie. DataDome’s edge server decrypts it, checks the fingerprint against the current request’s signals, and updates the behavioral score. This happens in under 2 milliseconds at the edge — before the request even reaches the origin server.

  3. Cross-session correlation — Even if a scraper starts a fresh session with no cookies, DataDome correlates the new session’s fingerprint against known bot fingerprints from previous sessions. Same headless Chrome fingerprint + new IP + no cookie history = known bot with a new IP. Blocked.

  4. Cookie tampering detection — The datadome cookie is encrypted and signed. Attempts to modify it, replay it from a different browser, or generate it synthetically are detected through cryptographic validation.
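Step 4 is standard authenticated-cookie hygiene: the payload is signed so that tampering, edited replays, or synthetically generated cookies fail validation. A minimal HMAC-based sketch — DataDome’s real format, encryption layer, and key management are of course different:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"edge-signing-key"  # hypothetical; real systems rotate real keys

def issue_cookie(fp_hash: str, score: int) -> str:
    """Encode and sign the session payload (encryption omitted for brevity)."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"fp": fp_hash, "score": score}).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def validate_cookie(cookie: str):
    """Return the payload if the signature checks out, else None (block)."""
    payload_b64, _, sig_b64 = cookie.encode().partition(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload_b64, hashlib.sha256).digest())
    if not hmac.compare_digest(sig_b64, expected):
        return None  # tampered, edited on replay, or synthetic
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Flipping a single character anywhere in the cookie invalidates the signature, which is why hand-crafting or mutating these cookies is a dead end for scrapers.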

This is why IP rotation fails against DataDome. You can rotate through 10,000 IPs. If you’re using headless Chrome, DataDome has already cataloged your browser fingerprint from previous sessions. New IP, same fingerprint, no legitimate cookie history. The correlation is instant.

Bright Data’s customers discover this the hard way. They enable residential proxy rotation, see 403 after 403, and can’t understand why “clean” IPs are being blocked. The IPs are clean. The cookie signals aren’t.

Akamai Bot Manager: the session lifecycle

Akamai Bot Manager uses cookies to manage what they call the session lifecycle:

Phase 1: Challenge

First visit with no Akamai cookie triggers a JavaScript challenge. The browser must execute Akamai’s sensor data collection script and return valid results. A session cookie (_abck) is set.

Phase 2: Validation

The _abck cookie contains an encrypted token that Akamai validates on each request. The token includes:

  • Sensor data hash (browser fingerprint)
  • Challenge completion timestamp
  • Behavioral score from JavaScript execution
  • Request pattern analysis

Phase 3: Ongoing monitoring

Even after passing the initial challenge, Akamai continues monitoring the session. The cookie is updated with each request. Anomalous patterns — too many requests, unusual navigation paths, missing referrer chains — degrade the session score until it’s terminated.

Phase 4: Termination

When the session score drops below Akamai’s threshold, the cookie is invalidated. The next request triggers a new challenge. If the same fingerprint repeatedly fails to maintain healthy sessions, it gets blacklisted at the fingerprint level — not the IP level.
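The four phases can be condensed into a toy state machine: each request degrades or maintains a session score, terminated sessions are counted per fingerprint, and repeat offenders are blacklisted at the fingerprint level. Every number and signal below is invented for illustration — this is not Akamai’s actual scoring:

```python
class SessionLifecycle:
    """Toy model of challenge, validation, monitoring, and termination."""

    TERMINATE_BELOW = 20
    BLACKLIST_AFTER = 3

    def __init__(self):
        self.score = 100           # fresh session after passing the challenge
        self.failed_by_fp = {}     # fingerprint -> count of terminated sessions

    def on_request(self, fp: str, has_referrer: bool, reqs_per_min: int) -> str:
        if self.failed_by_fp.get(fp, 0) >= self.BLACKLIST_AFTER:
            return "blacklisted"   # fingerprint-level, not IP-level
        if reqs_per_min > 30:
            self.score -= 25       # too many requests
        if not has_referrer:
            self.score -= 10       # missing referrer chain
        if self.score < self.TERMINATE_BELOW:
            self.failed_by_fp[fp] = self.failed_by_fp.get(fp, 0) + 1
            self.score = 100       # cookie invalidated; next visit re-challenged
            return "terminated"
        return "ok"
```

A scraper hammering requests with no referrer chain burns through sessions in a few requests each; after a handful of terminations, the fingerprint itself is blacklisted and no IP change helps.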

This is why Bright Data fails on Akamai. Their headless Chrome might occasionally pass the initial challenge (Phase 1). But the session degrades rapidly because:

  • The behavioral score from a scraping pattern doesn’t match human browsing
  • The cookie lifecycle is abnormal (too short, no natural browsing progression)
  • The same fingerprint appears from too many IPs (Bright Data’s infrastructure is shared)

ScraperAPI and Oxylabs face the same problem. Their headless browsers create sessions that Akamai’s cookie analysis identifies as bots within a few requests.

Why IP rotation makes you easier to detect

This is counterintuitive, but IP rotation actually helps anti-bot systems detect you. Here’s why:

A real user has a stable IP for the duration of a browsing session. They might change IPs when they switch from WiFi to cellular, or when their ISP assigns a new dynamic IP. These changes are infrequent and follow predictable patterns (same ISP, same region).

A scraper using Bright Data’s residential proxy rotation changes IP every request. The cookie stays the same (if they’re smart enough to maintain cookies), but the IP changes from Comcast in Chicago to Verizon in Miami to AT&T in Seattle — all within 30 seconds.

Anti-bot systems detect this pattern trivially:

Request 1: IP=Chicago, Cookie=ABC123, Fingerprint=XYZ
Request 2: IP=Miami, Cookie=ABC123, Fingerprint=XYZ (3 seconds later)
Request 3: IP=Seattle, Cookie=ABC123, Fingerprint=XYZ (2 seconds later)

No human travels from Chicago to Miami to Seattle in 5 seconds. The cookie’s consistency actually proves it’s the same client despite different IPs. IP rotation, combined with cookie tracking, makes you more detectable, not less.
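The Chicago-to-Miami-in-seconds pattern is a classic impossible-travel test: compute the great-circle distance between the two IP geolocations and flag any implied speed faster than an airliner. A sketch with illustrative coordinates and threshold:

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def impossible_travel(prev, curr, seconds, max_kmh=1000.0):
    """Same cookie, new IP: flag if the implied speed beats an airliner."""
    if seconds <= 0:
        return True
    return haversine_km(prev, curr) / (seconds / 3600.0) > max_kmh
```

Chicago to Miami is roughly 1,900 km; covering it in three seconds implies a speed six orders of magnitude past the threshold, so the flag is trivial to raise.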

Without cookies, the pattern is equally damning:

Request 1: IP=Chicago, No cookie, Fingerprint=XYZ → New session
Request 2: IP=Miami, No cookie, Fingerprint=XYZ → New session (same fingerprint!)
Request 3: IP=Seattle, No cookie, Fingerprint=XYZ → New session (same fingerprint!)

Three “first visits” from three different cities, all with the same browser fingerprint, all within seconds. The anti-bot system doesn’t need sophisticated ML to flag this. It’s obvious.
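The cookieless variant is just as mechanical: count how many “first visits” the same fingerprint makes inside a short window. A sketch — the window and threshold are illustrative:

```python
from collections import defaultdict

class FirstVisitCorrelator:
    """Count cookieless 'first visits' per fingerprint in a sliding window."""

    def __init__(self, window_s=60.0, max_first_visits=2):
        self.window_s = window_s
        self.max_first_visits = max_first_visits
        self.visits = defaultdict(list)  # fingerprint -> recent timestamps

    def is_bot(self, fingerprint: str, ts: float) -> bool:
        recent = self.visits[fingerprint]
        # Keep only visits inside the window, then record this one.
        recent[:] = [t for t in recent if ts - t <= self.window_s]
        recent.append(ts)
        return len(recent) > self.max_first_visits
```

Three cookieless arrivals from one fingerprint within seconds trips the flag on the third request, with no ML involved.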

Persistent sessions with real browsers: the solution

The answer isn’t better IP rotation. The answer isn’t smarter cookie handling. The answer is persistent sessions with real browsers that naturally maintain cookie state.

UltraWebScrapingAPI takes a fundamentally different approach:

Our real Chrome browsers maintain proper cookie state throughout a session. When an anti-bot system sets a session cookie, our browser stores it and sends it on subsequent requests — exactly like a real user’s browser.

The cookie lifecycle is natural:

  1. First visit: cookie is set after challenge completion
  2. Browsing: cookie is maintained and updated normally
  3. Session end: cookie expires naturally or session is cleanly closed

Because we use real Chrome browsers (not headless), the fingerprint stored in the anti-bot cookie matches the fingerprint collected on every subsequent request. There’s no mismatch. There’s no inconsistency. The anti-bot system sees a continuous, legitimate session.
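Maintaining cookie state the way a browser does is conceptually simple: parse every Set-Cookie response header and replay the stored values on the next request. A stripped-down jar using only the standard library — a real browser also honors Path, Domain, Secure, and expiry attributes, which this sketch ignores:

```python
from http.cookies import SimpleCookie

class SessionJar:
    """Store Set-Cookie values and replay them like a browser would."""

    def __init__(self):
        self.cookies = {}

    def store(self, set_cookie_header: str):
        # Parse one Set-Cookie header and keep the name/value pair.
        for name, morsel in SimpleCookie(set_cookie_header).items():
            self.cookies[name] = morsel.value

    def cookie_header(self) -> str:
        # The value to send back in the Cookie: request header next time.
        return "; ".join(f"{k}={v}" for k, v in sorted(self.cookies.items()))
```

Naive scrapers that skip this step send every request as a cookieless “first visit”, which is exactly the pattern described earlier.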

Natural request patterns

Real browsers don’t make 100 requests per second. Our system manages request pacing, maintains referrer chains, and follows natural navigation patterns. The behavioral score in the cookie stays healthy because the behavior is genuinely browser-like.

Geographic consistency

Our sessions maintain geographic consistency between the IP and the browser’s locale/timezone signals. The cookie-IP correlation that catches proxy rotators doesn’t apply because our sessions don’t randomly jump between cities.

The real cost of failed requests

Every failed request on Bright Data costs $0.025+. Every failed request on ScraperAPI costs money. Every failed request on any proxy-based service costs you time and budget.

And a significant portion of those failures — especially on DataDome and Akamai protected sites — are caused by cookie-based detection that IP rotation cannot solve and actually makes worse.

You’re paying premium prices for a service that actively works against you. Bright Data’s IP rotation creates the exact cookie-IP mismatch patterns that anti-bot systems are designed to catch. You’re funding your own detection.

Stop paying for IP rotation that makes you easier to detect. Stop pretending that ScraperAPI’s “render=true” mode handles cookie-based detection (it doesn’t). Stop believing that Oxylabs’ session management is “sticky” enough to fool DataDome (it isn’t). Stop hoping that ZenRows or Apify have cracked this problem (they haven’t).

Use a scraping API that maintains real browser sessions with real cookies, real fingerprints, and real behavioral patterns. Use UltraWebScrapingAPI.


See persistent sessions in action. Test any DataDome or Akamai protected site in our playground and watch real cookie management beat every anti-bot system — Try the Playground.