Documentation Index

Fetch the complete documentation index at: https://docs.reader.dev/llms.txt

Use this file to discover all available pages before exploring further.

By the end of this page you’ll have scraped a real web page with Reader and seen clean markdown come back. Takes about 60 seconds.

Step 1 - Get your API key

Sign up free at app.reader.dev. You get 1,000 credits every month on the free tier - no credit card required.
Once you’re signed in:
  1. Click API Keys in the sidebar
  2. Click Create API Key and give it a name (like “Playground”)
  3. Copy the key - it starts with rdr_ and you’ll only see it once
Treat your API key like a password. Never commit it to version control or expose it in client-side code.
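One common way to keep the key out of version control is to read it from an environment variable at runtime. A minimal Python sketch — the `READER_API_KEY` variable name is our own convention, not something Reader requires:

```python
import os

def get_api_key() -> str:
    """Read the Reader API key from the environment instead of hard-coding it."""
    key = os.environ.get("READER_API_KEY")
    if not key:
        raise RuntimeError("Set the READER_API_KEY environment variable")
    return key
```

Pair this with a `.env` file that is listed in `.gitignore`, and the key never touches your repository.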

Step 2 - Make your first request

Reader has one endpoint for everything: POST /v1/read. Send a URL, get back clean markdown.
```bash
curl -X POST https://api.reader.dev/v1/read \
  -H "Content-Type: application/json" \
  -H "x-api-key: rdr_your_api_key" \
  -d '{
    "url": "https://example.com"
  }'
```
You should get back something like:
```json
{
  "success": true,
  "data": {
    "markdown": "# Example Domain\n\nThis domain is for use in...",
    "metadata": {
      "title": "Example Domain",
      "description": "...",
      "statusCode": 200
    }
  }
}
```
That’s it. You just scraped a page.
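The same request works from any HTTP client. Here is a sketch in Python using only the standard library — the endpoint and header names are taken from the curl example above, while the function names and error handling are our own:

```python
import json
import os
import urllib.request

API_URL = "https://api.reader.dev/v1/read"

def build_request(url: str, api_key: str) -> urllib.request.Request:
    """Build the POST /v1/read request shown in the curl example."""
    body = json.dumps({"url": url}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

def extract_markdown(response_body: str) -> str:
    """Pull the markdown field out of a successful response."""
    payload = json.loads(response_body)
    if not payload.get("success"):
        raise RuntimeError(f"Request failed: {payload}")
    return payload["data"]["markdown"]

if __name__ == "__main__":
    req = build_request("https://example.com", os.environ["READER_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        print(extract_markdown(resp.read().decode("utf-8")))
```

Keeping `build_request` and `extract_markdown` as pure functions makes them easy to unit-test without hitting the network.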

Step 3 - Try something more interesting

Scrape a real article with main content extraction:
```bash
curl -X POST https://api.reader.dev/v1/read \
  -H "Content-Type: application/json" \
  -H "x-api-key: rdr_your_api_key" \
  -d '{
    "url": "https://en.wikipedia.org/wiki/Web_scraping",
    "formats": ["markdown"],
    "onlyMainContent": true
  }'
```
Reader strips navigation, sidebars, and footers - you get just the article body as markdown. Perfect for feeding to an LLM.

What’s next

Scrape vs Crawl

Learn when to use a single URL vs an array vs a crawl.

Batch scraping

Scrape hundreds of URLs in parallel.

Crawling a website

Discover and scrape every page on a domain.

SDKs

Official JavaScript and Python clients.