The simple formula
- standard: 1
- stealth: 3
- auto: 1 or 3 depending on whether Reader escalates (see below)
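Expressed as a quick estimator (illustrative code, not a client library; the escalation rate only matters for auto and is covered below):

```python
def estimate_credits(pages: int, mode: str, escalation_rate: float = 0.0) -> float:
    """Estimated credits for a batch of pages.

    escalation_rate applies only to auto: the fraction of pages
    expected to fall back to stealth.
    """
    if mode == "standard":
        return pages * 1
    if mode == "stealth":
        return pages * 3
    if mode == "auto":
        # Escalated pages cost 3 credits, the rest cost 1.
        return pages * (1 * (1 - escalation_rate) + 3 * escalation_rate)
    raise ValueError(f"unknown proxyMode: {mode}")

print(estimate_credits(1000, "standard"))    # 1000
print(estimate_credits(1000, "stealth"))     # 3000
print(estimate_credits(1000, "auto", 0.25))  # 1500.0
```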
Estimating for standard
Trivial. If you know the page count, you know the cost.
Set `proxyMode: "standard"` in your request if you want the cost guarantee. Reader will error on a blocked page instead of silently escalating.
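For example (the target URL and any field besides `proxyMode` are placeholders, not the real API surface):

```python
# Request body sketch: pinning proxyMode makes the cost predictable.
payload = {
    "url": "https://example.com/article",  # placeholder target
    "proxyMode": "standard",               # 1 credit per page, or an error if blocked
}
print(payload["proxyMode"])  # standard
```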
Estimating for stealth
Also trivial: multiply the page count by 3.
Estimating for auto
The hard case. auto only costs 3x for the subset of pages that actually escalate. You need an escalation rate: the fraction of requests that fall back to stealth.
| Target type | Expected escalation | Effective cost per page |
|---|---|---|
| Blogs, docs, news sites | 0–5% | ~1.0–1.1 |
| Marketing pages, public APIs | 0–10% | ~1.0–1.2 |
| E-commerce (general) | 20–50% | ~1.4–2.0 |
| E-commerce (Amazon, Walmart, etc.) | 80–100% | ~2.6–3.0 |
| LinkedIn, booking sites, aggressive anti-bot | 95–100% | ~2.9–3.0 |
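The effective-cost column is a weighted average: a page costs 1 credit with probability (1 − e) and 3 credits with probability e, i.e. 1 + 2e. A quick spot-check against the table's ranges (band names abbreviated):

```python
# Effective credits per page in auto mode: 1·(1 − e) + 3·e = 1 + 2e.
def effective_cost(e: float) -> float:
    return 1 + 2 * e

# Escalation-rate ranges from the table above.
bands = {
    "Blogs, docs, news": (0.00, 0.05),
    "E-commerce (general)": (0.20, 0.50),
    "Aggressive anti-bot": (0.95, 1.00),
}
for name, (lo, hi) in bands.items():
    print(f"{name}: ~{effective_cost(lo):.1f}-{effective_cost(hi):.1f}")
```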
Pilot first
For any batch above a few hundred URLs, run a pilot on a representative subset of 50–100 URLs, then measure the real escalation rate.
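A pilot sketch, assuming each result reports which proxy mode was actually used (the `proxyModeUsed` field name here is a guess; check the response schema for the real one):

```python
def pilot_projection(pilot_results: list, total_urls: int) -> float:
    """Project total auto-mode credits for the full batch from a pilot sample."""
    escalated = sum(1 for r in pilot_results if r.get("proxyModeUsed") == "stealth")
    rate = escalated / len(pilot_results)  # measured escalation rate e
    return total_urls * (1 + 2 * rate)     # effective cost per page: 1 + 2e

# e.g. a 100-URL pilot in which 30 pages escalated, projected over 10,000 URLs:
sample = [{"proxyModeUsed": "stealth"}] * 30 + [{"proxyModeUsed": "standard"}] * 70
print(round(pilot_projection(sample, 10_000)))  # 16000
```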
Crawl cost
Crawls charge `crawlPerPage` (a flat 1 credit per page discovered) plus the scrape cost per page.
The number of pages discovered is hard to predict without running the crawl; set `maxPages` as a hard cap so worst-case spend is bounded.
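Combining the two charges gives a bound on worst-case spend (a sketch; it assumes the discovery credit applies to every page up to the `maxPages` cap, and that auto escalates on every page in the worst case):

```python
def worst_case_crawl_cost(max_pages: int, mode: str, escalation_rate: float = 1.0) -> float:
    """Upper bound on crawl spend: 1 discovery credit plus scrape cost per page."""
    scrape = {
        "standard": 1.0,
        "stealth": 3.0,
        "auto": 1.0 + 2.0 * escalation_rate,  # defaults to full escalation
    }[mode]
    return max_pages * (1.0 + scrape)

print(worst_case_crawl_cost(500, "standard"))  # 1000.0
print(worst_case_crawl_cost(500, "auto"))      # 2000.0 (auto fully escalated)
```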
Reading the bill afterwards
After the run, `/v1/usage/history` shows per-request cost broken down by `proxyMode`. The sum of the credits column is your actual spend.
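A reconciliation sketch; the record shape here (`{"proxyMode": ..., "credits": ...}`) is an assumption about what `/v1/usage/history` returns, so adjust the field names to the actual schema:

```python
from collections import defaultdict

def spend_by_mode(records: list) -> dict:
    """Sum actual credit spend per proxy mode from usage-history records."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["proxyMode"]] += rec["credits"]
    return dict(totals)

history = [
    {"proxyMode": "standard", "credits": 1},
    {"proxyMode": "stealth", "credits": 3},
    {"proxyMode": "standard", "credits": 1},
]
print(spend_by_mode(history))  # {'standard': 2, 'stealth': 3}
```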

