After `npm install @vakra-dev/reader`, you can run `reader` directly if you have it in your `PATH`, or invoke it via `npx reader` / `node node_modules/.bin/reader`.
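For example, the three invocation styles mentioned above are equivalent (the positional-URL syntax is an assumption; check `reader --help` for the exact usage):

```shell
# Binary on PATH (e.g. global install): call it directly
reader https://example.com

# Local install: let npx resolve node_modules/.bin for you
npx reader https://example.com

# Or point node at the binary shim explicitly
node node_modules/.bin/reader https://example.com
```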
## Scrape a single URL

The result includes the page content (as markdown) and all metadata.
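A minimal invocation might look like this (the positional-URL syntax is an assumption based on the install notes above):

```shell
# Scrape one page; the JSON result is printed to stdout
npx reader https://example.com/blog/post
```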
## Write output to a file
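One way to do this, assuming the CLI follows the common `-o`/`--output` convention (the flag name is not confirmed by this page), is:

```shell
# Hypothetical -o flag; check `reader --help` for the real spelling
npx reader https://example.com -o result.json

# Portable alternative: since results go to stdout, shell redirection always works
npx reader https://example.com > result.json
```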
## Multiple URLs in parallel

`-c 3` sets the concurrency. The output is a single JSON file with all results in an array.
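A sketch using the documented `-c` flag (the example URLs are placeholders):

```shell
# Scrape three URLs with a concurrency of 3;
# all results land in one JSON array
npx reader \
  https://example.com/a \
  https://example.com/b \
  https://example.com/c \
  -c 3 > results.json
```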
## Choose formats
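The flag name below is an assumption (many scraping CLIs use `--formats` with a comma-separated list); consult `reader --help` for the actual option:

```shell
# Hypothetical: request only markdown and raw HTML in the output
npx reader https://example.com --formats markdown,html
```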
## Content cleaning flags
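As an illustration only (these flag names are invented, not taken from this page), cleaning options on CLIs of this kind typically look like:

```shell
# Hypothetical cleaning flags; see `reader --help` for the real names
npx reader https://example.com --remove-images --remove-links
```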
## Force an engine
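Hypothetically, assuming an `--engine` flag (neither the flag nor the engine names are confirmed by this page):

```shell
# Hypothetical: skip auto-detection and always render with a headless browser
npx reader https://example.com --engine browser
```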
## Use a proxy
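A sketch assuming a conventional `--proxy` flag taking a standard proxy URL (the flag name is an assumption):

```shell
# Hypothetical --proxy flag; credentials go in the proxy URL
npx reader https://example.com --proxy http://user:pass@proxy.example.com:8080
```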
## Crawl
| Flag | Purpose |
|---|---|
| `-d, --depth <n>` | Max crawl depth (default: 1) |
| `-m, --max-pages <n>` | Max pages (default: 20) |
| `-s, --scrape` | Also scrape discovered pages |
| `--delay <ms>` | Delay between requests |
| `--include <pattern>` | URL regex to include |
| `--exclude <pattern>` | URL regex to exclude |
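Putting the documented flags together (the `crawl` subcommand syntax is an assumption; only the flags themselves come from the table above):

```shell
# Crawl two levels deep, up to 50 pages, scraping each discovered page,
# waiting 500 ms between requests and staying inside /docs/ URLs
npx reader crawl https://example.com \
  -d 2 -m 50 -s \
  --delay 500 \
  --include "/docs/"
```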
## Verbose output
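Assuming the common `--verbose` convention (not confirmed by this page):

```shell
# Hypothetical: print progress logging, typically on stderr so the
# JSON result on stdout stays clean for piping
npx reader https://example.com --verbose
```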
## Piping to other tools

The CLI always outputs JSON. Pipe it to `jq` for further processing:
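For instance (the `.content` field name is illustrative; only the `errors` field is documented on this page):

```shell
# Extract just the page content from the JSON result
npx reader https://example.com | jq -r '.content'

# Inspect which URLs failed in a multi-URL run
npx reader https://example.com/a https://example.com/b -c 2 | jq '.errors'
```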
## Exit codes
- `0` - all URLs scraped successfully
- `1` - one or more URLs failed (details in the JSON `errors` field)
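The exit code makes the CLI easy to use in scripts:

```shell
# Fail a CI step when any URL could not be scraped
if ! npx reader https://example.com > result.json; then
  echo "scrape failed; inspect the errors field in result.json" >&2
  exit 1
fi
```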
## Where to go next

**Daemon Mode**: keep a warm browser pool for faster repeat CLI calls.

