What is UltraWebScrapingAPI?
UltraWebScrapingAPI is a web scraping API that defeats advanced anti-bot protections like Akamai Bot Manager, Cloudflare Turnstile, and PerimeterX. Unlike generic scraping services such as Bright Data or ScraperAPI, we manually analyze each target site and build custom bypass strategies, guaranteeing 90%+ success rates.
Step 1: Create an Account
Sign up at ultrawebscrapingapi.com/signup using Google or email. If you register with an email address, you'll need to verify it.
Step 2: Get Your API Key
After signing in, go to your dashboard to find your API key. Keep this key secret — it’s used to authenticate all API requests.
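The examples in this guide hard-code the key for readability. In your own code, it's usually better to load it from an environment variable or a secrets manager than to commit it to source control. A minimal sketch (the variable name ULTRAWEBSCRAPINGAPI_KEY is just an example):

import os

API_KEY = os.environ["ULTRAWEBSCRAPINGAPI_KEY"]  # pick whatever variable name fits your setup
HEADERS = {"X-API-Key": API_KEY}                 # pass this dict as headers= in the requests below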
Step 3: Make Your First Request
Python
import requests

response = requests.post(
    "https://api.ultrawebscrapingapi.com/v1/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [{"url": "https://example.com"}],
        "mode": "sync"
    }
)
data = response.json()
print(f"Title: {data['title']}")
print(f"HTML length: {len(data['html'])} characters")
Node.js
const response = await fetch(
  "https://api.ultrawebscrapingapi.com/v1/scrape",
  {
    method: "POST",
    headers: {
      "X-API-Key": "your_api_key",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      urls: [{ url: "https://example.com" }],
      mode: "sync",
    }),
  }
);
const data = await response.json();
console.log(`Title: ${data.title}`);
console.log(`HTML length: ${data.html.length} characters`);
cURL
curl -X POST https://api.ultrawebscrapingapi.com/v1/scrape \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"urls": [{"url": "https://example.com"}], "mode": "sync"}'
Step 4: Try Async Mode for Batch Scraping
For scraping multiple URLs at once, use async mode:
import requests
import time

# Submit batch job
response = requests.post(
    "https://api.ultrawebscrapingapi.com/v1/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [
            {"url": "https://site-a.com/page1"},
            {"url": "https://site-b.com/page2"},
            {"url": "https://site-c.com/page3"},
        ],
        "mode": "async"
    }
)
subscription_id = response.json()["subscriptionId"]
print(f"Subscription ID: {subscription_id}")

# Poll for results
while True:
    status = requests.get(
        f"https://api.ultrawebscrapingapi.com/v1/subscription/{subscription_id}",
        headers={"X-API-Key": "your_api_key"},
    ).json()
    print(f"Completed: {status['completed']}/{status['total']}")
    if status["processing"] == 0 and status["queued"] == 0:
        break
    time.sleep(5)

# Fetch each result
for job in status["jobs"]:
    if job["status"] == "completed":
        result = requests.get(
            f"https://api.ultrawebscrapingapi.com/v1/result/{subscription_id}/{job['index']}",
            headers={"X-API-Key": "your_api_key"},
        ).json()
        print(f"Got {len(result['html'])} chars from {result['url']}")
Step 5: Set Up Webhooks (Optional)
Instead of polling, register a webhook endpoint to get notified when scraping completes:
import requests

# Register webhook
endpoint = requests.post(
    "https://api.ultrawebscrapingapi.com/v1/endpoint",
    headers={"X-API-Key": "your_api_key"},
    json={"url": "https://your-server.com/webhook"}
).json()

# Submit job with webhook
requests.post(
    "https://api.ultrawebscrapingapi.com/v1/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [{"url": "https://example.com"}],
        "mode": "async",
        "endpointId": endpoint["endpointId"]
    }
)
# Your webhook URL will receive a POST when scraping completes
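Your server needs an endpoint that accepts that POST. The payload format isn't shown in this guide, so the sketch below (using Flask, our choice, with a /webhook path matching the registration above) simply logs whatever JSON arrives; check the API documentation for the exact fields before relying on them.

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_scrape_notification():
    # The notification body isn't documented here; inspect the payload to see
    # which fields (for example a subscription ID) it actually carries.
    payload = request.get_json(force=True)
    print("Scrape finished, webhook payload:", payload)
    return "", 204  # acknowledge receipt with an empty success response

if __name__ == "__main__":
    app.run(port=8000)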
Understanding the response
Every scrape response includes several useful fields:
{
  "url": "https://example.com",
  "status": "completed",
  "html": "<!DOCTYPE html>...",
  "title": "Example Domain",
  "statusCode": 200,
  "credits": 1
}
- html: The fully rendered HTML after JavaScript execution. This is the same HTML you’d see in Chrome DevTools.
- title: The page’s <title> tag, extracted for convenience.
- statusCode: The HTTP status code from the target site.
- credits: How many credits this request consumed. Standard pages cost 1 credit. Pages with heavy anti-bot protection like Akamai or Cloudflare Turnstile may cost more.
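As a quick sanity check, you can use these fields to confirm the target page actually loaded and to keep an eye on credit spend. A small sketch reusing the response object from Step 3:

data = response.json()

if data["status"] == "completed" and data["statusCode"] == 200:
    print(f"Scraped {data['url']} using {data['credits']} credit(s)")
else:
    print(f"Check this one: status={data['status']}, HTTP {data['statusCode']}")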
Handling anti-bot protected sites
The real power of UltraWebScrapingAPI is scraping sites that block other services. When you submit a URL protected by DataDome, PerimeterX, or Kasada, our system automatically:
- Identifies the anti-bot system on the target site
- Loads the page in a real Chrome browser with GPU rendering
- Solves JavaScript challenges, CAPTCHAs, and proof-of-work puzzles
- Returns the fully rendered HTML
No extra configuration needed — just submit the URL like any other request. If you’ve been struggling with sites that block Bright Data or ScraperAPI, this is why customers switch to us.
Common patterns
Extracting data with Python
Once you have the HTML, use BeautifulSoup or lxml to extract data:
import requests
from bs4 import BeautifulSoup

response = requests.post(
    "https://api.ultrawebscrapingapi.com/v1/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={"urls": [{"url": "https://example.com/products"}], "mode": "sync"}
)
soup = BeautifulSoup(response.json()["html"], "html.parser")
prices = [el.text for el in soup.select(".product-price")]
Scheduling regular scrapes
For monitoring use cases like e-commerce price tracking or travel fare monitoring, combine async mode with a cron job:
# Run every hour via cron
import requests

urls = load_urls_from_database()  # your own function that returns the list of URLs to track

response = requests.post(
    "https://api.ultrawebscrapingapi.com/v1/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [{"url": u} for u in urls],
        "mode": "async",
        "endpointId": "your_endpoint_id"
    }
)
# Webhook handles results automatically
Next Steps
- Read the full API documentation
- Try the playground to test URLs before purchasing
- View pricing and buy credits
- Learn sync vs async scraping to choose the right mode
- Check our FAQ for common questions
Need help with a specific anti-bot system? The FAQ covers the systems we see most often. You can also see how we stack up against other services in our Bright Data comparison and ScraperAPI comparison.