## Metrics to track
| Metric | Source | Why |
|---|---|---|
| Success rate (%) | Your own logs / Reader dashboard | Health signal; drops mean something is wrong |
| p50 / p95 / p99 latency | Your own timings on each /v1/read | User experience in interactive paths |
| Credit spend per hour | /v1/usage/history or local counter | Catches runaway spend before your limit |
| Credits remaining | /v1/usage/credits | Exhaustion early warning |
| `rate_limited` error count | Your logs | Capacity planning; upgrade signal |
| `upstream_unavailable` rate | Your logs | Target-site health signal |
| Webhook delivery failures | Webhook deliveryStats | Detect broken listener endpoints |
| Escalation rate (auto → stealth) | metadata.proxyEscalated on each response | Helps you decide when to force a mode |
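Most of these metrics can be derived from a handful of in-process counters. A minimal sketch (the `metadata.proxyEscalated` field name comes from the table above; the class and method names here are hypothetical, and a real setup would export to your metrics backend instead):

```python
from dataclasses import dataclass, field


@dataclass
class ReadMetrics:
    """Rolling counters for the metrics table above; reset on each scrape."""
    total: int = 0
    errors: int = 0
    escalated: int = 0
    durations_ms: list = field(default_factory=list)

    def record(self, ok: bool, duration_ms: float, proxy_escalated: bool = False):
        self.total += 1
        self.errors += 0 if ok else 1
        self.escalated += 1 if proxy_escalated else 0
        self.durations_ms.append(duration_ms)

    @property
    def success_rate(self) -> float:
        return 1.0 if self.total == 0 else 1 - self.errors / self.total

    @property
    def escalation_rate(self) -> float:
        return 0.0 if self.total == 0 else self.escalated / self.total

    def percentile(self, p: float) -> float:
        """Nearest-rank percentile over recorded durations (p in 0..100)."""
        s = sorted(self.durations_ms)
        if not s:
            return 0.0
        k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
        return s[k]
```

Scrape `success_rate`, `escalation_rate`, and `percentile(95)` on a timer and reset between scrapes.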
## Pulling data from Reader
### Usage history
GET /v1/usage/history returns recent requests with per-row `proxyMode`, `credits`, `status`, and `duration`. Paginate through it and feed the rows into whatever observability stack you run (Datadog, Grafana, a local Postgres, a spreadsheet).
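A pagination loop might look like the following. The endpoint path comes from this doc, but the page shape (a `data` array plus a `nextCursor` field) is an assumption; the transport is injected so you can back it with `requests`, `httpx`, or a test stub:

```python
from typing import Callable, Iterator


def iter_usage_history(fetch_page: Callable[[dict], dict]) -> Iterator[dict]:
    """Yield every row from GET /v1/usage/history, following pagination.

    `fetch_page(params)` performs the HTTP GET and returns the decoded JSON
    body for one page. Assumed shape: {"data": [...], "nextCursor": str|None}.
    """
    params = {"limit": 100}
    while True:
        page = fetch_page(params)
        yield from page.get("data", [])
        cursor = page.get("nextCursor")
        if not cursor:
            return  # last page reached
        params = {"limit": 100, "cursor": cursor}
```

Each yielded row carries the `proxyMode`, `credits`, `status`, and `duration` fields to forward to your sink.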
### Credits balance
A one-call poll to `GET /v1/usage/credits` returns your remaining balance:
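A sketch of that poll, with the HTTP call injected so it is easy to test. The `/v1/usage/credits` path is from this doc, but the response field names (`remaining`, `limit`) are assumptions about the body shape:

```python
def credits_low(get_json, threshold=0.2):
    """Poll GET /v1/usage/credits; return (remaining, low_flag).

    `get_json(path)` performs the HTTP GET and returns the decoded JSON body.
    Field names `remaining` and `limit` are assumed, not documented here.
    The default threshold matches the 20%-of-limit alert suggested below.
    """
    body = get_json("/v1/usage/credits")
    remaining, limit = body["remaining"], body["limit"]
    return remaining, remaining < threshold * limit
```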
### Instrumenting client calls

Wrap your `client.read` calls in a metrics helper:
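A minimal wrapper, assuming a `client.read(url)` call that raises on failure and returns a response exposing a `metadata` mapping with the `proxyEscalated` flag mentioned above — the client object's shape here is hypothetical:

```python
import time


def timed_read(client, url, record):
    """Call client.read(url), reporting (ok, duration_ms, escalated) to `record`."""
    start = time.monotonic()
    try:
        resp = client.read(url)
    except Exception:
        # Record the failure before re-raising so error counts stay accurate.
        record(ok=False, duration_ms=(time.monotonic() - start) * 1000,
               escalated=False)
        raise
    meta = getattr(resp, "metadata", None) or {}
    record(ok=True, duration_ms=(time.monotonic() - start) * 1000,
           escalated=bool(meta.get("proxyEscalated", False)))
    return resp
```

The `record` callback can feed the counters or your metrics library of choice.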
## Alerts worth having
- Error rate > 5% over a 5-minute window → investigate
- Credit balance < 20% of limit → notify ops
- p95 latency doubles → slowdown or escalation spike
- `rate_limited` count > 10 per minute → need to upgrade or throttle
- Webhook `failedDeliveries` rising → your listener is broken
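The first alert (error rate > 5% over a 5-minute window) can be evaluated from timestamped outcomes. A sketch, with thresholds taken from the list above and everything else (names, the minimum-sample guard) hypothetical:

```python
import time
from collections import deque


class ErrorRateAlarm:
    """Fires when error rate exceeds `threshold` over a sliding time window."""

    def __init__(self, window_s=300, threshold=0.05, min_samples=20):
        self.window_s = window_s
        self.threshold = threshold
        self.min_samples = min_samples  # avoid firing on tiny sample sizes
        self.events = deque()           # (timestamp, ok) pairs

    def record(self, ok, now=None):
        now = time.monotonic() if now is None else now
        self.events.append((now, ok))
        cutoff = now - self.window_s
        while self.events and self.events[0][0] < cutoff:
            self.events.popleft()       # drop events older than the window

    def firing(self):
        if len(self.events) < self.min_samples:
            return False
        errors = sum(1 for _, ok in self.events if not ok)
        return errors / len(self.events) > self.threshold
```

Call `record` from the same wrapper that times your reads, and check `firing` on your alert-evaluation tick.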
## Request ID correlation
Every response (success or error) carries an `x-request-id` header. Log it on every call. When something goes wrong, include the request ID in your bug report: Reader’s server-side logs are keyed off that ID, so we can reconstruct exactly what happened.
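A logging sketch for that habit. The `x-request-id` header name comes from this doc; the assumption is that your HTTP client exposes response headers as a mapping:

```python
import logging

logger = logging.getLogger("reader")


def log_request_id(url, headers, status):
    """Log the correlation ID from every call, success or error alike."""
    rid = headers.get("x-request-id", "<missing>")
    logger.info("read url=%s status=%s request_id=%s", url, status, rid)
    return rid
```

Grepping your own logs for the request ID then gives you the exact line to paste into a bug report.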

