# API overview
Base URL, authentication, response conventions, and the endpoint index for the ShopSniffer REST API.
## Overview
The ShopSniffer API is a REST-over-HTTPS API served from a single base URL. Most read endpoints are public; write endpoints and user-scoped reads require one of three auth methods. All responses are JSON except file downloads.
Base URL: `https://shopsniffer.com/api`
The machine-readable OpenAPI 3.1 spec is published at `/api/openapi.json`. The agent-discovery manifest is at `/.well-known/agents.json`.
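Note that both discovery documents resolve against the site root rather than the `/api` base path. A minimal sketch of building the full URLs, assuming the host shown above:

```python
from urllib.parse import urljoin

BASE_URL = "https://shopsniffer.com/api"

# Root-relative paths (leading slash) replace the /api base path entirely.
spec_url = urljoin(BASE_URL, "/api/openapi.json")
manifest_url = urljoin(BASE_URL, "/.well-known/agents.json")
```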
## Authentication at a glance
| Method | Header | Use case |
|---|---|---|
| None | — | Read-only: job status, reports, free export, public directory |
| API key | `X-API-Key: ss_…` | Scripts, CI, long-running AI agents |
| Better Auth session | Cookie (automatic) | Browser UIs on shopsniffer.com |
| x402 payment | `X-PAYMENT: …` | Account-free per-request payment |
See the authentication docs for the full method matrix and which endpoints accept which method.
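For API-key auth, the key travels in the `X-API-Key` header on every request. A minimal sketch of building the headers in Python; the `/api/jobs` path and the key value are illustrative assumptions, not documented endpoints:

```python
def auth_headers(api_key: str) -> dict[str, str]:
    """Headers for an API-key-authenticated request to the ShopSniffer API."""
    return {
        "X-API-Key": api_key,        # scripts, CI, long-running agents
        "Accept": "application/json" # all responses are JSON except downloads
    }

# Hypothetical usage with any HTTP client:
#   client.get("https://shopsniffer.com/api/jobs", headers=auth_headers("ss_..."))
headers = auth_headers("ss_example_key")
```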
## Conventions
- **JSON everywhere.** All request bodies and responses are `application/json`, except `GET /api/downloads`, which streams raw file content.
- **UUIDs for job IDs.** Every job ID is a v4 UUID. Slugs are lowercase domain strings with dots replaced by hyphens (`allbirds.com` → `allbirds-com`).
- **Timestamps are ISO 8601 UTC.** All `created_at`/`updated_at`/`completed_at` fields use the `2025-01-15T10:30:00Z` format.
- **Pagination uses `page` and `limit` query params.** Default `page=1`; limits vary per endpoint (see individual endpoint docs). Responses include a `pagination` object with `page`, `limit`, and `total`.
- **Errors are JSON with an `error` string field.** HTTP status codes follow REST conventions: 400 client error, 401 missing auth, 402 payment required (x402), 403 forbidden, 404 not found, 409 conflict, 429 rate-limited, 5xx server error.
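Given those conventions, a client can extract the message from any error body and decide whether a retry makes sense from the status code alone. A sketch under those assumptions; the retryable set shown here is a common choice, not something the API mandates:

```python
import json

# Statuses where backing off and retrying is reasonable (assumption:
# rate limits and transient server errors; 4xx client errors are not retried).
RETRYABLE = {429, 500, 502, 503, 504}

def parse_error(status: int, body: str) -> tuple[str, bool]:
    """Return (error message, whether a retry is reasonable)."""
    message = json.loads(body).get("error", "unknown error")
    return message, status in RETRYABLE
```

For example, `parse_error(429, '{"error": "rate limit exceeded"}')` flags the request as retryable, while a 404 does not.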
## Pagination pattern
Endpoints that return lists accept `page` and `limit` and return a `pagination` object:
```bash
curl "https://shopsniffer.com/api/stores?page=2&limit=24&search=sustainable"
```
```json
{
  "stores": [],
  "pagination": { "page": 2, "limit": 24, "total": 1847 }
}
```
To iterate all pages: fetch `page=1`, then keep incrementing `page` until `page * limit >= total`.
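The iteration rule above can be sketched as a generator. Here `fetch_page` stands in for an HTTP call returning the JSON shape shown above (the `stores` key matches the example endpoint; other list endpoints would use their own key):

```python
def iterate_pages(fetch_page, limit=24, items_key="stores"):
    """Yield every item across all pages, stopping when page * limit >= total."""
    page = 1
    while True:
        data = fetch_page(page=page, limit=limit)
        yield from data[items_key]
        p = data["pagination"]
        if p["page"] * p["limit"] >= p["total"]:
            break
        page += 1
```

The stop condition comes straight from the `pagination` object, so the client never needs a separate count request.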