“Just slow down your scraper” is terrible advice
Every web scraping tutorial says the same thing: “Add delays between requests to avoid getting blocked.” Stack Overflow answers repeat it. Reddit threads repeat it. Even Bright Data’s documentation repeats it.
It’s wrong. Not just incomplete — actively wrong for the majority of scraping scenarios in 2026.
Slowing down your scraper helps with rate limiting. It does absolutely nothing against anti-bot detection. And in 2026, anti-bot detection is the reason you’re getting blocked, not rate limits.
If you’re adding time.sleep(5) between requests and still getting 403s, this post explains why — and what to do instead.
Rate limits and anti-bot detection are completely different systems
Rate limiting (the old problem)
Rate limiting is simple: “This IP address made more than N requests in T seconds. Block it.”
Rate limiting characteristics:
- First N requests succeed
- Blocks happen after a threshold is exceeded
- Different IPs have independent counters
- Slowing down directly reduces the rate → fewer blocks
- Rotating IPs multiplies your effective rate limit
- No analysis of request content or browser environment
Rate limiting was the primary anti-scraping defense in 2015. It’s still used by some APIs and smaller sites. But for any site worth scraping at scale, rate limiting has been replaced or supplemented by anti-bot systems.
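The threshold logic above fits in a few lines. Here is an illustrative sketch of a sliding-window limiter; the window length, threshold, and data structure are assumptions for the example, not any vendor's implementation:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60   # T: length of the counting window (assumed)
MAX_REQUESTS = 100    # N: requests allowed per window, per IP (assumed)

# per-IP list of recent request timestamps
_requests = defaultdict(list)

def is_rate_limited(ip, now=None):
    """Classic sliding-window rate limit: block an IP once it exceeds
    N requests in the last T seconds. Note what is *not* examined:
    headers, TLS, JavaScript -- only a count and a clock."""
    now = time.time() if now is None else now
    window = [t for t in _requests[ip] if now - t < WINDOW_SECONDS]
    window.append(now)
    _requests[ip] = window
    return len(window) > MAX_REQUESTS

# 101 quick requests from one IP: only the last one trips the limit
hits = [is_rate_limited("203.0.113.7") for _ in range(101)]
print(hits[-2], hits[-1])  # False True
```

Because the counter is keyed by IP and looks only at timestamps, both classic evasions fall out directly: slowing down keeps the window under N, and rotating IPs gives each address its own fresh counter.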
Anti-bot detection (the real problem)
Anti-bot detection is fundamentally different: “This request comes from a bot. Block it.”
Anti-bot detection characteristics:
- The first request can be blocked — no warm-up period
- Detection is based on fingerprinting, not request frequency
- IP rotation doesn’t help if the fingerprint is bot-like
- Slowing down doesn’t change your fingerprint
- Analysis happens in real-time on every single request
- Behavioral analysis, TLS fingerprinting, JavaScript challenges
When Akamai Bot Manager evaluates your first request to a protected site, it has already determined whether you’re a bot before your page even loads. Speed is irrelevant. You could make one request per hour and still get blocked on every single one.
How each major anti-bot system detects bots (hint: not by rate)
Akamai Bot Manager: fingerprinting on first request
Akamai’s detection happens in layers, and the first layer triggers before your page content loads:
- TLS fingerprint analysis — your client’s TLS handshake is fingerprinted and compared against a database of known browsers and bots. This happens at the connection level, before any HTTP request is sent.
- HTTP/2 settings fingerprint — the SETTINGS frame, WINDOW_UPDATE values, header order, and pseudo-header arrangement create a unique fingerprint. Most HTTP libraries have distinctive fingerprints.
- JavaScript challenge — a script runs in the browser, collecting canvas fingerprints, WebGL data, plugin lists, font lists, screen properties, and hundreds of other signals. The script generates the _abck cookie.
- Sensor data collection — Akamai’s script continuously collects mouse movements, keyboard events, touch events, and scroll behavior. This data is sent to Akamai’s servers for ML analysis.
None of these checks care about your request rate. You could send one request per day, and if your TLS fingerprint matches Python’s requests library, you’re blocked instantly.
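You can inspect one ingredient of a TLS fingerprint from Python itself. The sketch below hashes the ordered cipher-suite list that Python's default ssl context would offer in its handshake. Real JA3/JA4 fingerprints also fold in the TLS version, extensions, and elliptic curves, and use a different encoding, so treat this as an illustration of the idea rather than the actual algorithm:

```python
import hashlib
import ssl

def cipher_fingerprint():
    """Hash the ordered cipher-suite names this client would offer.
    The order is part of the signal: every process built on the same
    Python/OpenSSL combination produces the same value, and it differs
    from what Chrome offers."""
    ctx = ssl.create_default_context()
    names = [c["name"] for c in ctx.get_ciphers()]  # order matters
    return hashlib.md5(",".join(names).encode()).hexdigest()

print(cipher_fingerprint())
```

A server that logs this handshake-level signal can separate Python clients from real browsers before a single HTTP byte arrives, which is why no amount of sleeping between requests changes the outcome.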
DataDome: ML classification in 2ms
DataDome doesn’t use rate limits as a primary detection mechanism. Their system:
- Evaluates every request through an ML model that classifies bot vs. human
- Runs the classification server-side in under 2 milliseconds
- Draws on features including IP reputation, TLS fingerprint, HTTP header patterns, and the device fingerprint from JavaScript
- Trains the model on billions of labeled requests
- Classifies per request, not per time window — there’s no rate to limit
DataDome’s own marketing leads with it: “first-request detection.” They’re proud that they can detect and block a bot on its very first request. Adding delays between requests doesn’t matter when every individual request is independently classified.
Slowing down against DataDome doesn’t reduce your block rate. It just means you get blocked slower.
Cloudflare Bot Management
Cloudflare uses a scoring system that evaluates each request:
- Bot score (1-99): Every request gets a score based on ML analysis, where low scores mean likely bot
- JA3/JA4 fingerprinting: TLS fingerprint matched against known bot patterns
- Challenge pages: JavaScript challenges for borderline scores
- Turnstile: Invisible CAPTCHA that validates the browser environment
Cloudflare’s documentation explicitly states their system works on individual requests. They score each request independently. Your request rate affects the score only marginally compared to fingerprinting signals.
PerimeterX (HUMAN)
PerimeterX combines:
- Real-time behavioral analysis via JavaScript
- Canvas and WebGL fingerprinting
- Mouse movement trajectory analysis
- Device and browser environment validation
Their detection model builds a behavioral profile from the first interaction. A single page load with no mouse movement and no scroll events is enough to flag the session. Speed of requests is not a meaningful signal compared to these behavioral markers.
The math that proves slowing down doesn’t work
Let’s say you’re scraping a DataDome-protected site. Your scraper has a bot-like TLS fingerprint.
Scenario A: 100 requests per minute
- DataDome classifies each request via ML
- Bot fingerprint detected on every request
- Block rate: ~100%
- Successful pages: 0
Scenario B: 1 request per minute
- DataDome classifies each request via ML
- Bot fingerprint detected on every request
- Block rate: ~100%
- Successful pages: 0
Scenario C: 1 request per hour
- DataDome classifies each request via ML
- Bot fingerprint detected on every request
- Block rate: ~100%
- Successful pages: 0
The block rate is the same in all three scenarios because DataDome isn’t counting requests per minute — it’s evaluating the fingerprint of each individual request. A bot fingerprint at any speed is still a bot fingerprint.
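The contrast can be written as two formulas in a toy model (the probabilities and limits here are invented for illustration): a rate limiter rewards slowing down, while a per-request classifier ignores the sending rate entirely.

```python
def limiter_successes(n, rate_per_min, limit_per_min):
    """Rate limiter: passes at most `limit_per_min` requests per minute,
    so lowering the sending rate raises the success fraction."""
    return n * min(1.0, limit_per_min / rate_per_min)

def classifier_successes(n, p_block):
    """Per-request ML classifier: each request is independently blocked
    with probability p_block. The sending rate never appears."""
    return n * (1.0 - p_block)

# Same 100 requests at three speeds, against a 10-req/min limiter
# vs. a classifier that flags this fingerprint 99% of the time:
for rate in (100, 1, 1 / 60):  # req/min: fast, slow, glacial
    print(f"{rate:8.3f} req/min  "
          f"limiter: {limiter_successes(100, rate, 10):5.1f}  "
          f"classifier: {classifier_successes(100, 0.99):5.1f}")
```

Against the limiter, slowing from 100 req/min to 1 req/min takes you from 10 successes to 100. Against the classifier, the column never moves: the only lever is p_block, which is a function of the fingerprint.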
Now add Bright Data to the equation:
Scenario D: 100 requests per minute through Bright Data
- Each request goes through a different residential proxy
- DataDome classifies each request independently
- Bright Data’s headless browser fingerprint is still detected
- Block rate: 85-95% (residential IPs help marginally)
- Successful pages: 5-15 out of 100
- Cost: $2.51 for 100 requests, got 5-15 pages → $0.17-$0.50 per page
Scenario E: 100 requests per minute through UltraWebScrapingAPI
- Each request uses a real Chrome browser with genuine fingerprint
- DataDome classifies each request: looks human
- Block rate: <1%
- Successful pages: 99+
- Cost: $5.00 for 100 requests, got 99 pages → $0.05 per page
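The per-page figures in scenarios D and E are just total cost divided by successful pages. The arithmetic is worth making explicit, because the headline per-request prices point the other way:

```python
def cost_per_page(total_cost, successful_pages):
    """Effective cost of data actually delivered."""
    return total_cost / successful_pages

# Scenario D: $2.51 for 100 requests, 5-15 successful pages
print(round(cost_per_page(2.51, 15), 2), round(cost_per_page(2.51, 5), 2))  # 0.17 0.5
# Scenario E: $5.00 for 100 requests, 99 successful pages
print(round(cost_per_page(5.00, 99), 2))  # 0.05
```

The cheaper-looking option per request is three to ten times more expensive per delivered page once the block rate is priced in.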
The variable that matters is fingerprint quality, not request speed.
When slowing down actually helps (rare cases)
To be fair, there are scenarios where request rate matters:
- Simple rate-limited APIs — REST APIs with explicit rate-limit headers (X-RateLimit-Remaining). These are becoming rare as scraping targets, because APIs that care about abuse tie quotas to API keys rather than to raw per-IP request rates.
- Sites with only WAF protection — basic WAF rules that block based on requests-per-IP thresholds. These sites typically don’t have advanced anti-bot protection.
- Sites with soft behavioral scoring — some anti-bot systems give request frequency a small weight in the overall bot score. Slowing down might reduce your score from 95/100 (definitely a bot) to 92/100 (still definitely a bot). The fingerprinting signals dominate.
- After you’ve already achieved bypass — once your browser fingerprint passes all checks, maintaining human-like request rates helps avoid secondary behavioral flags. This is “don’t be greedy” optimization, not a primary bypass strategy.
If your first request gets blocked, slowing down won’t help. Period.
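For the genuinely rate-limited case (the first scenario above), the right tool is not a fixed sleep but the server's own headers. A sketch, assuming the target sends the common X-RateLimit-Remaining / X-RateLimit-Reset / Retry-After headers; the exact names and semantics vary by API:

```python
import time

def backoff_seconds(headers):
    """Decide how long to wait before the next request from rate-limit
    headers, instead of sleeping blindly. Returns 0.0 while the server
    reports spare capacity."""
    if "Retry-After" in headers:  # the server said exactly how long
        return float(headers["Retry-After"])
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0.0
    # quota exhausted: wait until the reset timestamp, if one is given
    reset = float(headers.get("X-RateLimit-Reset", time.time() + 60))
    return max(0.0, reset - time.time())
```

Sleeping only when the server says so keeps throughput at the maximum the site actually permits, which is the one situation where timing logic earns its keep.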
The “just add random delays” cargo cult
There’s a common pattern in scraping code:
```python
import random
import time

for url in urls:
    response = scraper.get(url)
    time.sleep(random.uniform(2, 5))  # "human-like" delays
```
This is cargo cult programming. The random delay makes the developer feel like they’re being stealthy, but anti-bot systems aren’t fooled by request timing. They’re looking at:
- Is this a real browser? (TLS fingerprint)
- Does it execute JavaScript correctly? (JS challenges)
- Does the browser environment look genuine? (canvas, WebGL, plugins)
- Are there automation framework artifacts? (WebDriver, CDP)
- Is the behavioral pattern human-like? (mouse, keyboard, scroll)
Adding time.sleep(random.uniform(2, 5)) changes none of these signals. You’re waiting 2-5 seconds between requests for absolutely no benefit on anti-bot protected sites.
That random delay costs you something real: time. If you need 10,000 pages and you’re sleeping 3.5 seconds on average between requests, that’s 9.7 hours of unnecessary waiting. Hours where your scraper is idle, your infrastructure is running, and your data is getting stale.
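That 9.7-hour figure is straightforward arithmetic:

```python
pages = 10_000
avg_delay_s = 3.5                   # mean of uniform(2, 5)
wasted_hours = pages * avg_delay_s / 3600
print(f"{wasted_hours:.1f} hours")  # 9.7 hours
```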
The actual solution: better bypass, not slower requests
The reason your scraper gets blocked isn’t that it’s too fast. It’s that it looks like a bot. The fix is to stop looking like a bot.
UltraWebScrapingAPI solves this at the fingerprint level:
- Real Chrome TLS fingerprints — not spoofed, not emulated; generated by actual Chrome browser instances.
- Genuine JavaScript execution environments — every browser API returns real values. Canvas, WebGL, AudioContext, plugins — all genuine.
- Per-site anti-bot analysis — we identify exactly what each target site checks and ensure our browser passes every check.
- No automation artifacts — no WebDriver flag, no CDP traces, no Selenium markers.
The result: anti-bot systems classify our requests as human on the first request. No delays needed. No rate limit worries. No cargo cult time.sleep().
Speed comparison
| Approach | Time for 10,000 pages | Success rate | Total cost |
|---|---|---|---|
| Python + random delays (3.5s avg) | 9.7 hours | ~0% (blocked) | $0 (no data) |
| Bright Data + delays | 9.7 hours | 10-20% | $250+ |
| Bright Data no delays | 30 minutes | 10-20% | $250+ |
| UltraWebScrapingAPI | 15-30 minutes | 99%+ | $500 |
Notice that Bright Data’s success rate is the same with or without delays. That’s because the blocks come from fingerprinting, not rate limiting. The delays just waste time.
Stop sleeping. Start bypassing.
Every time.sleep() in your scraper is time you’re wasting on a strategy that doesn’t work against modern anti-bot systems. The Internet moved past rate limiting as a primary defense years ago. Your scraping strategy should move past it too.
The sites you’re scraping use Akamai, DataDome, Cloudflare, and PerimeterX. These systems detect bots by what they are, not how fast they move. The solution is to not be a bot — or at least, to be indistinguishable from a real browser.
Try UltraWebScrapingAPI in our free playground — no delays, no rate limit tricks, no random sleep. Just paste a URL and get the data back. See what happens when your requests actually look human.
Your scraper doesn’t need to slow down. It needs to level up.