Each time you run `reader scrape` or `reader crawl`, Reader normally spins up a fresh browser pool, runs your command, and tears it down. That's fine for one-off scripts but wasteful when you're making many calls: you pay the 1-2 second pool startup cost every time.
Daemon mode fixes this. Start a background daemon once, and all subsequent CLI commands auto-attach to it, reusing the warm pool.
## Start the daemon
| Flag | Purpose |
|---|---|
| `--pool-size <n>` | Browser pool size (default: 5) |
| `--port <n>` | Listen port (default: 4000) |
| `--show-chrome` | Show browser windows (debugging) |
| `-v, --verbose` | Enable logging |
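The exact start command isn't shown above. Assuming the daemon is exposed as a `reader daemon` subcommand (a hypothetical name; only the flags come from the table), starting it with a larger pool and logging enabled might look like:

```shell
# 'reader daemon' is an assumed subcommand name.
# The flags (--pool-size, --port, -v) are documented in the table above.
reader daemon --pool-size 10 --port 4000 -v
```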
## Check status
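Assuming a `status` subcommand exists alongside the daemon command (hypothetical naming, not confirmed by this page), checking whether a daemon is already running might look like:

```shell
# Hypothetical subcommand; reports whether a daemon is listening.
reader daemon status
```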
## Use the daemon transparently

Once the daemon is running, just use `reader scrape` and `reader crawl` as normal. They detect the daemon and attach to it automatically.
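For example, an ordinary scrape call (the URL is a placeholder) will reuse the daemon's warm pool instead of starting its own:

```shell
# Same command as without the daemon; it auto-attaches if one is running.
reader scrape https://example.com
```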
## Bypass the daemon
For one-off runs where you don't want to touch the daemon (e.g., you want to test without affecting the shared pool), pass `--standalone`.

## Stop the daemon
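Assuming daemon management follows the same hypothetical `reader daemon` pattern as starting it (the exact syntax isn't shown on this page), stopping the background daemon might look like:

```shell
# Hypothetical subcommand; shuts down the background daemon and its pool.
reader daemon stop
```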
## When to use daemon mode
- Shell scripts that run Reader many times in sequence
- Development where you’re iterating on CLI commands
- Cron jobs that fire frequently (keep the daemon warm rather than cold-start on every run)
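As a sketch of the first case: with a daemon running, a loop like this pays the pool startup cost once (in the daemon) rather than on every iteration. The URLs are placeholders.

```shell
#!/bin/sh
# Each call attaches to the warm daemon pool instead of
# cold-starting a fresh browser pool per invocation.
for url in https://example.com/a https://example.com/b https://example.com/c; do
  reader scrape "$url"
done
```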
## When not to use it
- Single one-off commands - the daemon startup cost exceeds the savings
- CI/CD - each build should be stateless; just use `--standalone` or skip the daemon entirely
- Server applications - embed `ReaderClient` directly in your Node app instead; the CLI daemon is for terminal workflows
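For the CI/CD case, a standalone invocation keeps the build stateless. This is a sketch using the `--standalone` flag mentioned above; the URL is a placeholder:

```shell
# No daemon involved: the pool is created and torn down within this one call.
reader scrape --standalone https://example.com
```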
## Where to go next
- CLI guide - commands and flags for `scrape` and `crawl`.
- Deployment - running Reader as a production service.

