cURL Integration Guide
Call the API straight from your terminal. Useful for quick checks and bash automation scripts.
Prerequisites: cURL (pre-installed on macOS and Linux). Optional: jq for JSON parsing.
Sync Scraping
The simplest API call — one URL, one response.
curl -X POST https://api.ultrawebscrapingapi.com/v1/scrape \
-H "X-API-Key: your_api_key" \
-H "Content-Type: application/json" \
-d '{"urls": [{"url": "https://example.com"}], "mode": "sync"}' Extract HTML with jq
Extract HTML with jq
Use jq to extract just the HTML from the JSON response.
# Scrape and extract just the HTML using jq
curl -s -X POST https://api.ultrawebscrapingapi.com/v1/scrape \
-H "X-API-Key: your_api_key" \
-H "Content-Type: application/json" \
-d '{"urls": [{"url": "https://example.com"}], "mode": "sync"}' \
| jq -r '.html'
Async Workflow
For multiple URLs: submit the job, poll for status, then fetch results.
Step 1: Submit
# 1. Submit async scrape job
curl -s -X POST https://api.ultrawebscrapingapi.com/v1/scrape \
-H "X-API-Key: your_api_key" \
-H "Content-Type: application/json" \
-d '{
"urls": [
{"url": "https://site-a.com/page1"},
{"url": "https://site-b.com/page2"},
{"url": "https://site-c.com/page3"}
],
"mode": "async"
}'
# Response: {"subscriptionId": "sub_abc123", ...} Step 2: Poll Status
Step 2: Poll Status
# 2. Check job status
curl -s https://api.ultrawebscrapingapi.com/v1/subscription/sub_abc123 \
-H "X-API-Key: your_api_key" \
| jq '{total, completed, processing, queued}'
# Response: {"total": 3, "completed": 2, "processing": 1, "queued": 0} Step 3: Fetch Results
Step 3: Fetch Results
# 3. Fetch result for index 0
curl -s https://api.ultrawebscrapingapi.com/v1/result/sub_abc123/0 \
-H "X-API-Key: your_api_key" \
| jq '{url, title, html: (.html | length | tostring) + " chars"}'
# Response: {"url": "https://site-a.com/page1", "title": "Page Title", "html": "45231 chars"} Wait for Elements
Wait for Elements
Wait for dynamic content to load before capturing the HTML.
# Wait for a CSS selector before capturing
curl -s -X POST https://api.ultrawebscrapingapi.com/v1/scrape \
-H "X-API-Key: your_api_key" \
-H "Content-Type: application/json" \
-d '{
"urls": [{
"url": "https://example.com/products",
"waitForSelector": ".product-list",
"waitFor": 3000
}],
"mode": "sync"
}' | jq -r '.html' > page.html
Save to File
Save the scraped HTML directly to a file.
# Save scraped HTML to a file
curl -s -X POST https://api.ultrawebscrapingapi.com/v1/scrape \
-H "X-API-Key: your_api_key" \
-H "Content-Type: application/json" \
-d '{"urls": [{"url": "https://example.com"}], "mode": "sync"}' \
| jq -r '.html' > scraped_page.html
echo "Saved to scraped_page.html" Health Check
Health Check
Verify the API is running. No authentication required.
# Check API status (no auth required)
curl -s https://api.ultrawebscrapingapi.com/v1/health | jq .
# Response: {"status": "ok", "onlineDesktops": 3} Batch Scraping with Bash
Batch Scraping with Bash
A complete bash script that reads URLs from a file, submits them as an async batch, polls for completion, and saves all results.
#!/bin/bash
# scrape.sh — Batch scrape URLs from a file
API_KEY="your_api_key"
BASE="https://api.ultrawebscrapingapi.com/v1"
# Build JSON payload from urls.txt (one URL per line), skipping blank lines
URLS=$(jq -R 'select(length > 0) | {url: .}' urls.txt | jq -s '.')
# Submit async job
SUB_ID=$(curl -s -X POST "$BASE/scrape" \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"urls\": $URLS, \"mode\": \"async\"}" \
  | jq -r '.subscriptionId')
echo "Submitted: $SUB_ID"
# Poll until done
while true; do
  STATUS=$(curl -s "$BASE/subscription/$SUB_ID" \
    -H "X-API-Key: $API_KEY")
  COMPLETED=$(echo "$STATUS" | jq '.completed')
  TOTAL=$(echo "$STATUS" | jq '.total')
  PROCESSING=$(echo "$STATUS" | jq '.processing')
  QUEUED=$(echo "$STATUS" | jq '.queued')
  echo "Progress: $COMPLETED/$TOTAL (processing: $PROCESSING, queued: $QUEUED)"
  if [ "$PROCESSING" -eq 0 ] && [ "$QUEUED" -eq 0 ]; then
    break
  fi
  sleep 5
done
# Fetch results
mkdir -p results
for i in $(seq 0 $((TOTAL - 1))); do
  curl -s "$BASE/result/$SUB_ID/$i" \
    -H "X-API-Key: $API_KEY" \
    | jq -r '.html' > "results/page_$i.html"
  echo "Saved results/page_$i.html"
done
echo "Done! $TOTAL pages saved to results/"
Create a urls.txt file with one URL per line, then run bash scrape.sh.
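For example, urls.txt could contain the placeholder URLs from the async example above:
https://site-a.com/page1
https://site-b.com/page2
https://site-c.com/page3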