# API Documentation
UltraWebScrapingAPI is a REST API that scrapes websites protected by Akamai, Cloudflare, and other advanced anti-bot systems. All you need is an API key and a target URL.
## Authentication

All API requests require an `X-API-Key` header. Get your key from the dashboard after signing up.

```
X-API-Key: your_api_key
```

## Base URL

```
https://api.ultrawebscrapingapi.com
```

## POST /api/scrape
Submit one or more URLs for scraping. Supports sync and async modes.
### Request Body

| Field | Type | Required | Description |
|---|---|---|---|
| `urls` | array | Yes | Array of URL objects. Max 100 per request. |
| `urls[].url` | string | Yes | Target URL to scrape. |
| `urls[].waitFor` | number | No | Wait time in ms after page load (max 30000). |
| `urls[].waitForSelector` | string | No | CSS selector to wait for before capturing HTML. |
| `mode` | string | No | `"sync"` (default) or `"async"`. Sync supports 1 URL only. |
| `endpointId` | string | No | Webhook endpoint ID (async mode only). |
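The optional wait parameters can be combined in a single URL object. A sketch of such a request body follows; the target URL and the `.product-card` selector are placeholder values, not part of the API:

```python
# Sketch of a request body combining the optional wait parameters.
# The URL and CSS selector below are illustrative placeholders.
payload = {
    "urls": [
        {
            "url": "https://example.com/products",
            "waitFor": 2000,                     # pause 2 s after page load
            "waitForSelector": ".product-card",  # then wait for this element
        }
    ],
    "mode": "sync",
}

# Client-side sanity checks mirroring the documented limits.
assert len(payload["urls"]) <= 100
assert all(u.get("waitFor", 0) <= 30000 for u in payload["urls"])
```

Send the payload with `requests.post` exactly as in the sync-mode example below.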
### Sync Mode
Returns the scraped HTML directly. Limited to 1 URL per request.
```python
import requests

response = requests.post(
    "https://api.ultrawebscrapingapi.com/api/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [{"url": "https://example.com"}],
        "mode": "sync"
    }
)
result = response.json()
print(result["html"])
```

```javascript
const response = await fetch(
  "https://api.ultrawebscrapingapi.com/api/scrape",
  {
    method: "POST",
    headers: {
      "X-API-Key": "your_api_key",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      urls: [{ url: "https://example.com" }],
      mode: "sync",
    }),
  }
);
const result = await response.json();
console.log(result.html);
```

```bash
curl -X POST https://api.ultrawebscrapingapi.com/api/scrape \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"urls": [{"url": "https://example.com"}], "mode": "sync"}'
```

### Async Mode
Returns a `subscriptionId` immediately. Poll the subscription endpoint or use webhooks to get notified when scraping is complete.
```python
import requests
import time

# 1. Submit scrape job
response = requests.post(
    "https://api.ultrawebscrapingapi.com/api/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [
            {"url": "https://site-a.com/page1"},
            {"url": "https://site-b.com/page2"},
        ],
        "mode": "async"
    }
)
subscription_id = response.json()["subscriptionId"]

# 2. Poll for completion
while True:
    status = requests.get(
        f"https://api.ultrawebscrapingapi.com/api/subscription/{subscription_id}",
        headers={"X-API-Key": "your_api_key"},
    ).json()
    if status["processing"] == 0 and status["queued"] == 0:
        break
    time.sleep(5)

# 3. Fetch results
for job in status["jobs"]:
    if job["status"] == "completed":
        result = requests.get(
            f"https://api.ultrawebscrapingapi.com/api/result/{subscription_id}/{job['index']}",
            headers={"X-API-Key": "your_api_key"},
        ).json()
        print(result["html"])
```

```javascript
// 1. Submit scrape job
const submitRes = await fetch(
  "https://api.ultrawebscrapingapi.com/api/scrape",
  {
    method: "POST",
    headers: {
      "X-API-Key": "your_api_key",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      urls: [
        { url: "https://site-a.com/page1" },
        { url: "https://site-b.com/page2" },
      ],
      mode: "async",
    }),
  }
);
const { subscriptionId } = await submitRes.json();

// 2. Poll for completion
const poll = async () => {
  const res = await fetch(
    `https://api.ultrawebscrapingapi.com/api/subscription/${subscriptionId}`,
    { headers: { "X-API-Key": "your_api_key" } }
  );
  return res.json();
};

let status = await poll();
while (status.processing > 0 || status.queued > 0) {
  await new Promise((r) => setTimeout(r, 5000));
  status = await poll();
}

// 3. Fetch results
for (const job of status.jobs) {
  if (job.status === "completed") {
    const result = await fetch(
      `https://api.ultrawebscrapingapi.com/api/result/${subscriptionId}/${job.index}`,
      { headers: { "X-API-Key": "your_api_key" } }
    ).then((r) => r.json());
    console.log(result.html);
  }
}
```

## GET /api/subscription/:id
Check the status of an async scrape job.
### Response

| Field | Type | Description |
|---|---|---|
| `subscriptionId` | string | The subscription ID. |
| `total` | number | Total number of URLs. |
| `completed` | number | Number of successfully scraped URLs. |
| `failed` | number | Number of failed URLs. |
| `processing` | number | Number of URLs currently being scraped. |
| `queued` | number | Number of URLs waiting to be scraped. |
| `jobs` | array | Per-URL status (`index`, `url`, `status`, `error`). |
## GET /api/result/:subscriptionId/:index
Retrieve the scraped HTML for a completed URL.
### Response

| Field | Type | Description |
|---|---|---|
| `url` | string | Original requested URL. |
| `finalUrl` | string | Final URL after redirects. |
| `title` | string | Page title. |
| `html` | string | Full rendered HTML. |
| `capturedAt` | string | ISO timestamp of capture. |
| `remainingQueries` | number | Remaining query attempts for this result. |
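A small helper for unpacking a result payload might look like the sketch below; the sample values are illustrative, not real API output:

```python
from datetime import datetime

def summarize_result(result: dict) -> str:
    """Build a one-line summary from an /api/result response body."""
    captured = datetime.fromisoformat(result["capturedAt"].replace("Z", "+00:00"))
    return (
        f"{result['title']!r} ({result['finalUrl']}) "
        f"captured {captured:%Y-%m-%d %H:%M} UTC, "
        f"{result['remainingQueries']} queries left"
    )

# Illustrative payload in the documented shape.
sample = {
    "url": "https://example.com",
    "finalUrl": "https://example.com/",
    "title": "Example Domain",
    "html": "<html>...</html>",
    "capturedAt": "2024-01-15T10:30:00Z",
    "remainingQueries": 4,
}
print(summarize_result(sample))
```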
You can also retrieve results by URL:

```
GET /api/result?url=https://example.com
```

## Webhooks
Register a webhook endpoint to receive notifications when async scrape jobs complete, instead of polling.
### POST /api/endpoint
Register a new webhook endpoint.
```python
import requests

# 1. Register webhook endpoint
response = requests.post(
    "https://api.ultrawebscrapingapi.com/api/endpoint",
    headers={"X-API-Key": "your_api_key"},
    json={"url": "https://your-server.com/webhook"}
)
endpoint = response.json()
# Save endpoint["endpointId"] and endpoint["signingKey"]

# 2. Submit async job with webhook
response = requests.post(
    "https://api.ultrawebscrapingapi.com/api/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [{"url": "https://example.com"}],
        "mode": "async",
        "endpointId": endpoint["endpointId"]
    }
)
# The webhook will POST to your URL when the job completes
```

### Webhook Payload
```json
{
  "type": "completed",
  "subscriptionId": "sub_abc123",
  "completed": 5,
  "failed": 0,
  "total": 5
}
```
Webhooks include `X-Signature` (HMAC-SHA256) and `X-Timestamp` headers for verification.
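A verification helper might look like the sketch below. Note the exact message the signature covers is not specified here; this sketch assumes it is HMAC-SHA256 over `"<timestamp>.<raw body>"` keyed with your endpoint's `signingKey` and hex-encoded, so check your dashboard for the actual scheme before relying on it:

```python
import hashlib
import hmac

def verify_webhook(signing_key: str, timestamp: str, body: bytes, signature: str) -> bool:
    """Recompute the HMAC and compare it to X-Signature in constant time.

    Assumed scheme (verify against the dashboard): hex-encoded
    HMAC-SHA256 over b"<X-Timestamp>.<raw request body>".
    """
    message = timestamp.encode() + b"." + body
    expected = hmac.new(signing_key.encode(), message, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature)
```

Reject payloads whose `X-Timestamp` is too old as well, to guard against replayed deliveries.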
## GET /api/health
Check the API status. No authentication required.
```json
{
  "status": "ok",
  "onlineDesktops": 3
}
```

## Error Codes
| Code | Error | Description |
|---|---|---|
| 400 | BAD_REQUEST | Invalid request body or parameters. |
| 401 | UNAUTHORIZED | Missing or invalid API key. |
| 402 | INSUFFICIENT_CREDITS | Not enough credits. |
| 404 | NOT_FOUND | Resource not found. |
| 429 | RATE_LIMITED | Too many requests. Slow down. |
| 503 | SERVICE_UNAVAILABLE | No scraping capacity available. |
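Transient errors like 429 and 503 are worth retrying with backoff. A sketch, not an official client; the exponential schedule (1 s, 2 s, 4 s, ...) is a common default, not something the API mandates:

```python
import time

import requests

def post_with_retry(url, *, headers, json, max_retries=5, base_delay=1.0):
    """POST with exponential backoff on 429 and 503 responses.

    Returns the first non-retryable response, or the last response
    if every attempt was rate-limited or capacity-limited.
    """
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=json)
        if response.status_code not in (429, 503):
            return response
        # Back off exponentially: base_delay, 2x, 4x, ...
        time.sleep(base_delay * 2 ** attempt)
    return response
```

Usage mirrors `requests.post`: `post_with_retry("https://api.ultrawebscrapingapi.com/api/scrape", headers={"X-API-Key": "your_api_key"}, json={...})`.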
## Rate Limits
| Endpoint | Limit |
|---|---|
| POST /api/scrape | 60 requests/min per API key |
| GET /api/subscription, /api/result | 120 requests/min per API key |
| Global | 1,000 requests/min |
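To stay under the 60 requests/min limit on `POST /api/scrape`, you can space calls client-side instead of reacting to 429s. A minimal sketch, not part of the API:

```python
import time

class Throttle:
    """Enforce a minimum interval between calls (1 s keeps you under 60/min)."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self) -> None:
        """Sleep just long enough to honor the interval, then record the call."""
        remaining = self.min_interval - (time.monotonic() - self._last)
        if remaining > 0:
            time.sleep(remaining)
        self._last = time.monotonic()

# Call throttle.wait() before each POST /api/scrape request.
throttle = Throttle(min_interval=1.0)
```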