## Everything is typed

All errors extend a base `ReaderError` class and carry:
- `code` - a stable string like `TIMEOUT` or `NETWORK_ERROR`
- `message` - a human-readable description
- `retryable` - a boolean telling you whether this is a transient failure worth retrying
- `url` - the URL that failed (when applicable)
- `toJSON()` - structured output for logging
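As a sketch only, the shape described above might look like the following in TypeScript. The class name `ReaderError` comes from the doc; the constructor signature and field ordering are assumptions, not Reader's actual implementation:

```typescript
// Hypothetical sketch of the base error class described above.
class ReaderError extends Error {
  constructor(
    public readonly code: string,        // stable string like "TIMEOUT"
    message: string,                     // human-readable description
    public readonly retryable: boolean,  // transient failure worth retrying?
    public readonly url?: string,        // the URL that failed, when applicable
  ) {
    super(message);
    this.name = "ReaderError";
  }

  // Structured output for logging; also picked up by JSON.stringify().
  toJSON() {
    return {
      name: this.name,
      code: this.code,
      message: this.message,
      retryable: this.retryable,
      url: this.url,
    };
  }
}
```

Because `toJSON()` is defined, passing an error straight to `JSON.stringify()` yields the structured form, which is convenient for log pipelines.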
## The retryable flag
`retryable: true` means the error is transient and a retry with the same input might succeed. `retryable: false` means the error is terminal - retrying won't help.
Examples of retryable errors:
- `NETWORK_ERROR` - connection reset, socket error
- `TIMEOUT` - page took too long to load
- `PROXY_CONNECTION_ERROR` - proxy unreachable
- `BOT_DETECTED` - might pass on retry with a different proxy
- `EMPTY_CONTENT` - the page might have been rate-limited
Examples of non-retryable errors:

- `INVALID_URL` - malformed URL, not going to improve
- `DNS_ERROR` - hostname doesn't exist
- `ROBOTS_BLOCKED` - robots.txt forbids it
- `ACCESS_DENIED` - 401/403 from the origin
- `PROXY_EXHAUSTED` - all proxy tiers tried and failed
- `VALIDATION_ERROR` - you passed bad options
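For illustration, the two code lists above can be collapsed into a small classifier. The codes are from this page; the `isRetryable` helper itself is hypothetical, not part of Reader's API:

```typescript
// Hypothetical helper: derive the retryable flag from an error code.
// Codes are taken from the lists above.
const RETRYABLE_CODES = new Set([
  "NETWORK_ERROR",
  "TIMEOUT",
  "PROXY_CONNECTION_ERROR",
  "BOT_DETECTED",
  "EMPTY_CONTENT",
]);

function isRetryable(code: string): boolean {
  return RETRYABLE_CODES.has(code);
}
```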
## Built-in proxy escalation
Reader has a two-step retry per URL built in:

1. **Datacenter attempt** (default 10s timeout) - try the URL with a fast, cheap datacenter proxy
2. **Residential attempt** (remaining time, up to 30s total) - if the first attempt fails for any reason, escalate to a residential proxy and try again
3. **Done** - if both attempts fail, the URL is reported as failed
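The three steps above can be sketched as follows. This is a minimal illustration of the escalation logic, assuming a caller-supplied `fetchViaProxy` function; none of these names are Reader's actual internals:

```typescript
// Sketch of two-step proxy escalation with a shared time budget.
type ProxyTier = "datacenter" | "residential";

async function fetchWithEscalation(
  url: string,
  fetchViaProxy: (url: string, tier: ProxyTier, timeoutMs: number) => Promise<string>,
  totalBudgetMs = 30_000,
): Promise<string> {
  const start = Date.now();
  try {
    // 1. Fast, cheap datacenter attempt with the default 10s timeout.
    return await fetchViaProxy(url, "datacenter", 10_000);
  } catch {
    // 2. Any failure escalates to a residential proxy with the remaining budget.
    // 3. If this second attempt also throws, the URL is reported as failed.
    const remaining = totalBudgetMs - (Date.now() - start);
    return await fetchViaProxy(url, "residential", remaining);
  }
}
```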
## Error handling patterns
### Simple try/catch
For one-off scripts, wrap the call in a try/catch and log the error.
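A minimal sketch of that pattern, assuming a caller-supplied `scrape(url)` function that rejects with the error shape described earlier (`code`, `retryable`, `toJSON()`) - the names here are illustrative, not Reader's exact API:

```typescript
// Hypothetical error shape, matching the fields documented above.
interface ScrapeError extends Error {
  code: string;
  retryable: boolean;
  toJSON(): object;
}

async function run(scrape: (url: string) => Promise<string>) {
  try {
    const html = await scrape("https://example.com");
    console.log(`got ${html.length} bytes`);
  } catch (err) {
    // toJSON() gives structured output suitable for log aggregation.
    const e = err as ScrapeError;
    console.error("scrape failed:", JSON.stringify(e.toJSON?.() ?? { message: e.message }));
  }
}
```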
### Retry on retryable errors

For production code, check the `retryable` flag before retrying.
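One way to sketch that check is a generic retry loop keyed on the flag. The loop, attempt count, and backoff values here are arbitrary illustrations, not Reader defaults:

```typescript
// Sketch: retry only when the error advertises retryable: true.
async function scrapeWithRetry<T>(
  attempt: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < maxAttempts; i++) {
    try {
      return await attempt();
    } catch (err) {
      lastErr = err;
      const retryable = (err as { retryable?: boolean }).retryable === true;
      if (!retryable) throw err; // terminal: retrying won't help
      // Simple exponential backoff between transient failures.
      await new Promise((r) => setTimeout(r, 2 ** i * 100));
    }
  }
  throw lastErr;
}
```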
### Batch with partial failures

When scraping many URLs, a batch can partially succeed. The result's `batchMetadata.errors` array tells you which URLs failed:
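A sketch of inspecting that array. The result shape below is assumed from the fields this page mentions (`data`, `batchMetadata.errors`); the exact entry types may differ in Reader:

```typescript
// Assumed batch result shape, based on the fields described above.
interface BatchResult {
  data: { url: string; html: string }[];
  batchMetadata: {
    errors: { url: string; code: string; retryable: boolean }[];
  };
}

function splitBatch(result: BatchResult) {
  const failed = result.batchMetadata.errors.map((e) => e.url);
  // Retryable failures are worth queueing for another pass.
  const retryQueue = result.batchMetadata.errors
    .filter((e) => e.retryable)
    .map((e) => e.url);
  return { succeeded: result.data.map((d) => d.url), failed, retryQueue };
}
```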
Successful URLs still appear in `result.data`. Batch scraping never throws on individual URL failures - only on framework-level errors (browser pool exhausted, invalid options, etc.).
## Where to go next
- **Errors reference** - full table of every error class and its code.
- **Scraping Engine** - how the Hero engine and proxy escalation work.

