The results array contains the markdown for every page the crawler found, the same way a batch scrape would.
What you get
results[] has the same shape as a sync scrape result: url, markdown, html (if requested), metadata with title, statusCode, duration, scrapedAt, and so on. You can hand the same handler function to batch and crawl results; they’re interchangeable.
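A minimal sketch of that interchangeability, assuming a result shape built from the fields listed above (the interface and handler names here are illustrative, not the client library's real types):

```typescript
// Hypothetical result shape assembled from the fields named above;
// the real type comes from your client library.
interface ScrapeResult {
  url: string;
  markdown: string;
  html?: string; // only present if requested
  metadata: {
    title: string;
    statusCode: number;
    duration: number; // ms
    scrapedAt: string; // ISO timestamp
  };
}

// One handler serves both batch-scrape and crawl output,
// since both return the same shape.
function indexPage(result: ScrapeResult): string {
  return `${result.metadata.title} (${result.metadata.statusCode}): ${result.url}`;
}

const sample: ScrapeResult = {
  url: "https://example.com/docs",
  markdown: "# Docs\n\nHello.",
  metadata: {
    title: "Docs",
    statusCode: 200,
    duration: 412,
    scrapedAt: "2024-01-01T00:00:00Z",
  },
};
```

Whether `sample` came from a crawl's results[] or a batch scrape's, `indexPage` treats it identically.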
Extraction options apply to crawled pages
The same extraction knobs work on crawl jobs.

One credit per discovered page
Crawl bills a flat 1 credit per page, regardless of proxy mode. A 100-page crawl costs 100 credits. See Credits and billing.

Feeding a downstream pipeline
A crawl + scrape result is a ready-made input for an LLM pipeline, a search index, or a static backup.

Debugging an unexpected result
A crawl result can surprise you:

- Too few pages. maxDepth too shallow; links Reader can't find (JavaScript-rendered); the same-host constraint excluded the pages you wanted.
- Too many pages. maxPages too loose; the site has an unexpected link graph (e.g., calendar archives).
- Missing content on specific pages. Extraction heuristics dropped something; use include/exclude selectors to pin it.
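For the last case, a selector pair might look like the sketch below. The option names are illustrative only; check your client's scrape options for the real parameter names.

```typescript
// Hypothetical option names, shown only to illustrate the shape of
// an include/exclude selector fix; consult the client docs for the
// actual parameters.
const extractionOptions = {
  includeSelectors: ["article", "main .content"], // keep only these subtrees
  excludeSelectors: ["nav", ".sidebar", "footer"], // drop chrome that leaks into markdown
};
```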
Start with a small crawl (e.g., maxPages: 20) to sanity-check the crawler's output before running a larger crawl.
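That habit can be encoded as a tiny pre-flight helper. This is a sketch: maxPages and maxDepth are the option names used above, but the sanityCheck function itself is hypothetical, not part of any client library.

```typescript
// Sketch of a pre-flight clamp: run any new crawl config with a small
// page cap first, then lift the cap once the output looks right.
// maxPages/maxDepth match the option names used above; sanityCheck is ours.
interface CrawlOptions {
  maxPages: number;
  maxDepth: number;
}

function sanityCheck(opts: CrawlOptions, cap: number = 20): CrawlOptions {
  return { ...opts, maxPages: Math.min(opts.maxPages, cap) };
}

// A 500-page config gets clamped to a 20-page trial run;
// maxDepth is left untouched.
const trial = sanityCheck({ maxPages: 500, maxDepth: 3 });
```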

