# Headless Chrome is the most detectable browser on the internet
Here’s the uncomfortable truth that the scraping industry doesn’t want to acknowledge: headless Chrome is easier to detect than Python requests. At least with Python requests, you fail immediately and don’t waste time. With headless Chrome, you spin up a browser, load a page, wait for JavaScript, and then get blocked — after wasting 5-10 seconds and significant compute resources.
In 2026, every major anti-bot system — Akamai, Cloudflare, DataDome, PerimeterX, Kasada, Imperva — detects headless Chrome within milliseconds. Puppeteer, Playwright, Selenium — it doesn’t matter which framework you use. They all get caught.
And yet, Bright Data, ScraperAPI, Oxylabs, ZenRows, and Apify all build their scraping infrastructure on headless Chrome. They know it gets detected. They charge you for the failed requests anyway.
Let’s catalog every way headless Chrome gets caught.
## The detection surface: 30+ signals
### 1. `navigator.webdriver`
The most famous detection signal. In headless/automated Chrome, navigator.webdriver returns true. Puppeteer, Playwright, and Selenium all set this flag.
“But I set it to false!” — Anti-bot systems know you did. They check how it was set. If it was overridden via Object.defineProperty or CDP (Page.addScriptToEvaluateOnNewDocument), the override is detectable through:
- Property descriptor inspection (`Object.getOwnPropertyDescriptor(navigator, 'webdriver')`)
- Prototype chain checking (`Navigator.prototype` still has the original descriptor)
- iframe leaks (new iframes get the original value before your patches apply)
- Worker thread checks (Web Workers inherit the unpatched value)
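The descriptor check in the first bullet reduces to a few lines. Here's a sketch over a plain object standing in for `navigator` (the real check reads the live object, but the logic is the same):

```javascript
// Sketch: in stock Chrome, `webdriver` is a getter on Navigator.prototype,
// so the navigator *instance* has no own property descriptor for it.
// An override applied with Object.defineProperty leaves one behind.
function webdriverLooksPatched(nav) {
  const ownDesc = Object.getOwnPropertyDescriptor(nav, 'webdriver');
  return ownDesc !== undefined; // own descriptor => instance-level override
}
```

A clean browser keeps the property on the prototype; a stealth patch moves it onto the instance, and that relocation is itself the signal.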
### 2. Chrome DevTools Protocol artifacts
Puppeteer and Playwright communicate with Chrome via CDP (Chrome DevTools Protocol). This leaves traces:
- `window.cdc_adoQpoasnfa76pfcZLmcfl_Array` and similar CDP-injected variables (Selenium)
- `Runtime.enable` side effects that modify `Error.prepareStackTrace` behavior
- `Page.addScriptToEvaluateOnNewDocument` scripts that execute in a detectable order relative to page scripts
- DevTools protocol endpoints that respond to probing from within the page
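The `cdc_` scan is the simplest of these to illustrate. A sketch, with a plain object standing in for `window` (ChromeDriver's exact suffix varies per build, so detection keys on the prefix):

```javascript
// Sketch: scan a window-like object for ChromeDriver-injected globals.
// Historically these appear as `cdc_...` on window and `$cdc_...` on document.
function hasCdcArtifacts(win) {
  return Object.getOwnPropertyNames(win).some(k => /^\$?cdc_/.test(k));
}
```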
### 3. Missing browser chrome
Headless Chrome has no browser UI — no address bar, no bookmarks bar, no extensions toolbar. This manifests as:
- `window.outerHeight === window.innerHeight` (no toolbar height difference)
- `window.outerWidth === window.innerWidth` (no scrollbar offset in some modes)
- `window.chrome.app` missing or incomplete
- `window.chrome.runtime` behaving differently than in extension-capable Chrome
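The dimension test is one comparison. A sketch over a window-shaped object (real detectors read the live `window`; a maximized headful Chrome still reports a taller `outerHeight` because the toolbar sits inside the outer bounds):

```javascript
// Sketch: with no toolbar, address bar, or window frame, headless Chrome
// reports identical outer and inner dimensions.
function missingBrowserChrome(win) {
  return win.outerHeight === win.innerHeight &&
         win.outerWidth === win.innerWidth;
}
```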
### 4. WebGL: the SwiftShader problem
Headless Chrome on servers uses SwiftShader, Google’s software-based GPU implementation. When anti-bot scripts call:
```js
const gl = document.createElement('canvas').getContext('webgl');
const renderer = gl.getParameter(gl.RENDERER);
const vendor = gl.getParameter(gl.VENDOR);
```
They get: "Google SwiftShader" and "Google Inc." — a combination that exists on exactly zero consumer devices. It’s the single most obvious headless Chrome signal.
Even if you somehow provide a real GPU (running Chrome on a machine with a GPU), the WebGL rendering hash differs from consumer hardware because server GPUs (Tesla, A100) produce different rendering output than consumer GPUs (RTX 4090, M2).
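Classifying the renderer string is a lookup problem. A sketch (the regexes below are an illustrative subset; production systems match against large databases, and depending on the Chrome version the unmasked string may require the `WEBGL_debug_renderer_info` extension):

```javascript
// Sketch: bucket a WebGL renderer string the way an anti-bot rule might.
function classifyRenderer(renderer) {
  if (/swiftshader|llvmpipe/i.test(renderer)) return 'headless';   // software rasterizers
  if (/tesla|a100|quadro|grid/i.test(renderer)) return 'server-gpu'; // datacenter hardware
  return 'consumer';
}
```

Either non-consumer bucket is enough to block: no real shopper renders pages on SwiftShader or a Tesla V100.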
### 5. `navigator.plugins` is empty or fake
Real Chrome has a populated navigator.plugins array: PDF Viewer, Chrome PDF Plugin, Native Client, etc. Headless Chrome has an empty or minimal array.
Puppeteer stealth plugins inject fake plugin entries, but:
- The injected plugins fail `instanceof PluginArray` checks
- The individual items fail `instanceof Plugin` checks
- The `namedItem()` and `item()` methods don't match native behavior
- MimeType associations are inconsistent
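The "native behavior" check in the third bullet has a classic form: built-in functions stringify to `[native code]` in V8, while a stealth plugin's JavaScript shim stringifies to its actual source.

```javascript
// Sketch: a genuine built-in like plugins.item stringifies to
// "function item() { [native code] }"; a JS shim exposes its source.
function looksNative(fn) {
  return typeof fn === 'function' &&
         Function.prototype.toString.call(fn).includes('[native code]');
}
```

Stealth plugins counter-patch `Function.prototype.toString` to lie about this, but that patch is detectable by the same trick, one level up.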
### 6. Permissions API inconsistencies
Real Chrome has a Permissions API that returns meaningful results:

```js
navigator.permissions.query({ name: 'notifications' })
```
A real browser returns "prompt", "granted", or "denied". Headless Chrome returns "prompt" for everything or behaves inconsistently. Anti-bot systems query multiple permissions and check for impossible combinations.
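One of those impossible combinations: `Notification.permission` and the Permissions API describe the same underlying state, so they must agree. A sketch taking both values as parameters (the real check reads them from the live APIs):

```javascript
// Sketch: cross-check Notification.permission against permissions.query().
// 'default' on the Notification API corresponds to 'prompt' on the Permissions API.
function notificationStatesConsistent(notificationPermission, queriedState) {
  const map = { default: 'prompt', granted: 'granted', denied: 'denied' };
  return map[notificationPermission] === queriedState;
}
```

Headless Chrome has historically reported `denied` on one API and `prompt` on the other, a combination no real browser produces.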
### 7. User media and devices
navigator.mediaDevices.enumerateDevices() returns audio and video devices on real machines. Headless Chrome on a server returns an empty array or a single default device. Real users have webcams, microphones, and speakers. Bot farms don’t.
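A sketch of the device-profile check, over the array-of-`{kind}` shape the real API resolves to (hedged: a desktop with no webcam can trip this too, so it's one signal among many, not a verdict on its own):

```javascript
// Sketch: flag a device list with no capture inputs, as enumerateDevices()
// typically returns on a server.
function deviceProfileLooksHeadless(devices) {
  const kinds = new Set(devices.map(d => d.kind));
  return devices.length === 0 ||
         (!kinds.has('audioinput') && !kinds.has('videoinput'));
}
```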
### 8. Battery API
navigator.getBattery() on a real laptop returns real battery data with a meaningful level and charging state. On a server, it returns full charge, always plugged in — or fails entirely.
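A sketch of the scoring logic, over the shape of the resolved `BatteryManager` object (hedged: a plugged-in desktop legitimately looks like this, so real systems treat it as a weak signal to combine with others, never a standalone block reason):

```javascript
// Sketch: the "full, always plugged in, never discharging" profile that
// servers report.
function batteryLooksServerLike(battery) {
  return battery.charging === true &&
         battery.level === 1 &&
         battery.dischargingTime === Infinity;
}
```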
### 9. Screen and display detection
Multiple overlapping checks:
- `screen.width`/`screen.height` don't match any standard display resolution
- `screen.availWidth` doesn't account for OS taskbar/dock
- `window.devicePixelRatio` is 1 (servers have no HiDPI display)
- `screen.colorDepth` is 24 on a machine that claims to have a 10-bit display
- `matchMedia('(color-gamut: p3)')` returns false despite claiming a modern display
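The first two checks can be sketched together over a screen-shaped object (the resolution list here is an illustrative subset; real systems keep much larger tables, and some OS configurations legitimately reserve no taskbar space):

```javascript
// Sketch: compare reported screen metrics against known consumer displays.
const COMMON_RESOLUTIONS = new Set([
  '1920x1080', '2560x1440', '1366x768', '3840x2160', '2880x1800',
]);

function screenLooksHeadless(s) {
  const key = `${s.width}x${s.height}`;
  if (!COMMON_RESOLUTIONS.has(key)) return true; // e.g. headless default 800x600
  // No taskbar or dock reserved: avail* identical to full dimensions.
  if (s.availHeight === s.height && s.availWidth === s.width) return true;
  return false;
}
```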
### 10. Notification API behavior
Notification.permission in a real browser reflects the user’s actual notification preference. In headless Chrome, notifications are disabled entirely, and the API behavior subtly differs from a real browser where the user has explicitly denied notifications.
### 11. Headless-specific flags and user agents
Despite Chrome removing “Headless” from the user agent string in newer versions, there are still ways to detect headless mode:
- `navigator.userAgent` may contain telltale version strings that match known headless builds
- The `--headless` flag changes launch-time defaults (window size, permissions, GPU setup) whose side effects are observable from page scripts
- Chrome's `--disable-gpu` flag (commonly used with headless) affects WebGL behavior
### 12. Font rendering discrepancies
Headless Chrome on Linux servers renders fonts differently. Even if you install the same fonts:
- Subpixel antialiasing is different without a real display
- Font hinting behaves differently
- Text measurement (`measureText()`) returns different widths
- Canvas text rendering produces different pixel data (affecting canvas fingerprint)
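The `measureText()` signal boils down to comparing measured widths against per-platform baselines. A sketch (the font keys and width numbers are made up for illustration; real systems keep tables of known-good values per OS and browser version):

```javascript
// Sketch: `measured` maps a font spec to the width measureText() returned for
// a probe string; `baseline` holds the widths a genuine platform produces.
function fontMetricsMatch(measured, baseline, tolerance = 0.02) {
  return Object.keys(baseline).every(font =>
    font in measured &&
    Math.abs(measured[font] - baseline[font]) / baseline[font] <= tolerance
  );
}
```

Server-side font stacks (FreeType hinting, no subpixel smoothing) push widths outside the tolerance band even with identical font files installed.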
### 13. Audio fingerprinting
AudioContext and OfflineAudioContext produce hardware-dependent audio processing results. The oscillator output, dynamics compressor behavior, and analyser node data are different on server hardware vs. consumer hardware. Anti-bot systems hash this output and compare against known profiles.
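The hashing step can be sketched over a plain sample buffer. In the real check the input is the `Float32Array` rendered by an `OfflineAudioContext` oscillator-plus-compressor graph; the sum-of-magnitudes reduction below mirrors what common fingerprinting scripts do with it:

```javascript
// Sketch: reduce rendered audio samples to one comparable value. Tiny
// hardware/DSP differences shift the rendered samples, so different audio
// stacks produce different sums.
function audioFingerprint(samples) {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += Math.abs(samples[i]);
  return sum.toFixed(6);
}
```

The value is stable per machine but varies across hardware, which is exactly what makes it useful for sorting server farms from laptops.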
## Puppeteer, Playwright, Selenium — none of them hide
Puppeteer — Google’s own automation library. Puppeteer-extra with puppeteer-extra-plugin-stealth patches approximately 15 known detection vectors. Anti-bot systems check 200+. The stealth plugin hasn’t been meaningfully updated to keep pace with detection advances. It was useful in 2022. In 2026, it’s security theater.
Playwright — Microsoft’s automation framework. Slightly different fingerprint than Puppeteer but equally detectable. Playwright injects its own automation markers and has unique CDP usage patterns. Anti-bot systems have specific Playwright detection rules.
Selenium — The oldest automation framework. Injects webdriver properties, cdc_ variables, and has the most well-documented detection surface. If Puppeteer detection is a solved problem, Selenium detection was solved five years ago.
Each framework has a community of developers building “undetectable” configurations. Each configuration gets detected within weeks of gaining popularity. The anti-bot vendors monitor GitHub repos, npm packages, and scraping forums. Every new bypass technique gets reverse-engineered and countered.
## Bright Data and ScraperAPI use headless Chrome
This is the part that should make you angry.
Bright Data charges $25+/1K requests for their Web Unlocker. Under the hood, it’s headless Chrome running on their server infrastructure. The same headless Chrome that gets detected by Akamai. The same headless Chrome that fails Cloudflare Turnstile. They’ve applied patches and mitigations, but the fundamental architecture is detectable.
ScraperAPI charges for “render=true” requests that launch headless Chrome. Same detection problems. Same failures on protected sites. Same wasted money.
Oxylabs — Headless Chrome behind proxy rotation. The proxy changes the IP. The headless Chrome fingerprint stays the same. Blocked.
ZenRows — Markets themselves as an anti-bot bypass solution. Uses headless Chrome. Gets detected by the anti-bot systems they claim to bypass.
Apify — Excellent platform for building scrapers. Their infrastructure runs headless Chrome. On protected sites, it gets caught like every other headless browser.
These companies know headless Chrome is detected. They invest in patching detection vectors. But patching is a losing game — the detection surface is too large, the anti-bot vendors move too fast, and the fundamental architectural problem (server hardware pretending to be consumer hardware) cannot be patched away.
## Real Chrome vs. headless Chrome: the complete comparison
| Signal | Headless Chrome | Real Chrome |
|---|---|---|
| navigator.webdriver | true (or suspiciously overridden) | false (natively) |
| WebGL renderer | SwiftShader / server GPU | Real consumer GPU |
| Plugins | Empty or fake | Real plugin list |
| Screen dimensions | Default / inconsistent | Matches real display |
| Font rendering | Server-side rendering | Native OS rendering |
| Audio fingerprint | Server hardware profile | Consumer hardware profile |
| Outer vs. inner dimensions | Equal (no UI chrome) | Different (has toolbars) |
| Media devices | None / minimal | Real devices present |
| Permissions | All “prompt” | Mixed real states |
| Canvas fingerprint | Server rendering hash | Consumer rendering hash |
| TLS fingerprint | Potentially differs from real Chrome | Native Chrome TLS |
| HTTP/2 fingerprint | May differ | Native Chrome HTTP/2 |
Every row is a detection opportunity. Real Chrome passes every check because every answer is genuine. Headless Chrome fails multiple rows, and anti-bot systems only need one failure to block you.
## The only path forward
Stop trying to make headless Chrome undetectable. You’ve been trying for years. The anti-bot vendors have been countering you for years. They’re winning. They’ll keep winning. The detection surface grows with every Chrome release, and headless Chrome cannot fake what it isn’t.
UltraWebScrapingAPI uses real Chrome. Not headless. Not patched. Not stealth-plugined. Real Chrome with real rendering, real hardware access, real APIs, and zero automation markers.
When an anti-bot system inspects our browser, every signal is genuine. There’s nothing to detect because there’s nothing to hide.
Stop paying for headless Chrome that gets detected. Test any anti-bot protected URL in our playground and see the difference real Chrome makes — Launch the Playground.