What is UltraWebScrapingAPI?

UltraWebScrapingAPI is a web scraping API that defeats advanced anti-bot protections such as Akamai Bot Manager, Cloudflare Turnstile, and PerimeterX. Unlike generic scraping services such as Bright Data or ScraperAPI, we manually analyze each target site and build a custom bypass strategy for it, which is how we achieve success rates above 90%.

Step 1: Create an Account

Sign up at ultrawebscrapingapi.com/signup using Google or email. If you sign up with email, you'll need to verify your address.

Step 2: Get Your API Key

After signing in, go to your dashboard to find your API key. Keep this key secret — it’s used to authenticate all API requests.
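One common way to keep the key out of your source code is to read it from an environment variable. This is a convention, not an API requirement; the variable name below is our own choice:

```python
import os

# ULTRAWEB_API_KEY is an illustrative variable name, not one the API mandates.
API_KEY = os.environ.get("ULTRAWEB_API_KEY", "")
if not API_KEY:
    print("Warning: ULTRAWEB_API_KEY is not set")

# Reuse this header dict in every request below.
headers = {"X-API-Key": API_KEY}
```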

Step 3: Make Your First Request

Python

import requests

response = requests.post(
    "https://api.ultrawebscrapingapi.com/api/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [{"url": "https://example.com"}],
        "mode": "sync"
    }
)

data = response.json()
print(f"Title: {data['title']}")
print(f"HTML length: {len(data['html'])} characters")

Node.js

const response = await fetch(
  "https://api.ultrawebscrapingapi.com/api/scrape",
  {
    method: "POST",
    headers: {
      "X-API-Key": "your_api_key",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      urls: [{ url: "https://example.com" }],
      mode: "sync",
    }),
  }
);

const data = await response.json();
console.log(`Title: ${data.title}`);
console.log(`HTML length: ${data.html.length} characters`);

cURL

curl -X POST https://api.ultrawebscrapingapi.com/api/scrape \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"urls": [{"url": "https://example.com"}], "mode": "sync"}'
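All three examples send the same JSON body. If you construct it in several places, a small helper keeps the shape in one spot. This is a sketch; the function name is ours, not part of the API:

```python
def build_scrape_payload(urls, mode="sync"):
    """Build the request body for /api/scrape: a list of {"url": ...}
    objects plus a mode flag ("sync" or "async")."""
    return {"urls": [{"url": u} for u in urls], "mode": mode}

# Example: the payload used in the sync request above.
payload = build_scrape_payload(["https://example.com"])
```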

Step 4: Try Async Mode for Batch Scraping

For scraping multiple URLs at once, use async mode:

import requests
import time

# Submit batch job
response = requests.post(
    "https://api.ultrawebscrapingapi.com/api/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [
            {"url": "https://site-a.com/page1"},
            {"url": "https://site-b.com/page2"},
            {"url": "https://site-c.com/page3"},
        ],
        "mode": "async"
    }
)
subscription_id = response.json()["subscriptionId"]
print(f"Subscription ID: {subscription_id}")

# Poll for results
while True:
    status = requests.get(
        f"https://api.ultrawebscrapingapi.com/api/subscription/{subscription_id}",
        headers={"X-API-Key": "your_api_key"},
    ).json()

    print(f"Completed: {status['completed']}/{status['total']}")

    if status["processing"] == 0 and status["queued"] == 0:
        break
    time.sleep(5)

# Fetch each result
for job in status["jobs"]:
    if job["status"] == "completed":
        result = requests.get(
            f"https://api.ultrawebscrapingapi.com/api/result/{subscription_id}/{job['index']}",
            headers={"X-API-Key": "your_api_key"},
        ).json()
        print(f"Got {len(result['html'])} chars from {result['url']}")

Step 5: Set Up Webhooks (Optional)

Instead of polling, register a webhook endpoint to get notified when scraping completes:

# Register webhook
endpoint = requests.post(
    "https://api.ultrawebscrapingapi.com/api/endpoint",
    headers={"X-API-Key": "your_api_key"},
    json={"url": "https://your-server.com/webhook"}
).json()

# Submit job with webhook
requests.post(
    "https://api.ultrawebscrapingapi.com/api/scrape",
    headers={"X-API-Key": "your_api_key"},
    json={
        "urls": [{"url": "https://example.com"}],
        "mode": "async",
        "endpointId": endpoint["endpointId"]
    }
)
# Your webhook URL will receive a POST when scraping completes
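A minimal receiver built on Python's standard library might look like the sketch below. The payload shape here is an assumption on our part: we guess it mirrors the subscription status object from Step 4 (a "jobs" list whose entries carry "status" and "index"); check the webhook documentation for the exact format.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def completed_indices(payload):
    # Assumed payload shape: mirrors the subscription status object
    # from Step 4 (a "jobs" list with "status" and "index" fields).
    return [j["index"] for j in payload.get("jobs", [])
            if j.get("status") == "completed"]

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("Completed job indices:", completed_indices(payload))
        self.send_response(200)  # acknowledge so the sender doesn't retry
        self.end_headers()

# To run locally:
# HTTPServer(("", 8000), WebhookHandler).serve_forever()
```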

Next Steps

Need help with a specific anti-bot system? Ask in our Q&A community.