AI agent integration

How AI agents discover, authenticate, and use the ShopSniffer API — with working Python and JavaScript examples.

Overview

ShopSniffer is built for AI agent discovery and automated usage. Agents can find capability manifests, pick an authentication method (API key or x402), create jobs, stream progress, and fetch results — all without human intervention. This page is the integration guide; for a step-by-step tutorial using a real agent framework, see the Python agent guide.

Discovery files

Three files at well-known locations describe the API to agents and LLM crawlers:

| Path | Purpose |
| --- | --- |
| /.well-known/agents.json | Agent capability manifest: endpoints, pricing, supported auth methods |
| /.well-known/ai-plugin.json | OpenAI plugin manifest (legacy format, still supported) |
| /api/openapi.json | Full OpenAPI 3.1 spec for automated client generation |

Agents should fetch agents.json first for a high-level capability overview, then openapi.json when they need endpoint-level detail.
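As a sketch, that discovery pass needs nothing beyond the standard library (the base URL matches the working example below; error handling is omitted):

```python
import json
import urllib.request

BASE = "https://shopsniffer.com"

# agents.json first for the high-level overview; openapi.json only
# when the agent needs endpoint-level detail.
DISCOVERY_PATHS = [
    "/.well-known/agents.json",
    "/api/openapi.json",
]

def fetch_discovery(path: str, base: str = BASE) -> dict:
    """Fetch and parse one well-known discovery file."""
    with urllib.request.urlopen(base + path) as resp:
        return json.load(resp)

# Usage (live network call):
#   manifest = fetch_discovery(DISCOVERY_PATHS[0])
#   print(manifest.get("endpoints"))
```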

1. Authenticate. Pick one: API key (if you have persistent credentials), or x402 payment (if you want account-free per-request payment). See authentication.

2. Create a job. POST /api/jobs with a target domain. Include webhook_url for async notification — avoid polling if you can.

3. Wait for completion. Subscribe to the WebSocket at /api/ws/:jobId, wait for the webhook, or poll GET /jobs/:id/status every 10 seconds as a fallback.

4. Fetch the report. Once status is completed, fetch GET /api/reports/:id for the full dataset.

5. Download artifacts. GET /api/downloads?jobId=…&key=products.csv returns raw file contents. Store locally.

6. (Optional) Ask questions. For analysis over a report, stream POST /api/chat with the job ID and user questions. See chat API.
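If the chat endpoint streams its answer as server-sent events (an assumption here; check the chat API reference for the actual wire format), the client side can be sketched as:

```python
def parse_sse_data(line: bytes):
    """Return the payload of one `data: ...` SSE line, or None for
    blank, comment, or other framing lines."""
    if line.startswith(b"data: "):
        return line[len(b"data: "):].decode("utf-8")
    return None

# Usage against the live endpoint (requests assumed installed; BASE and
# headers as in the working example below):
#   with requests.post(f"{BASE}/chat", headers=headers, stream=True,
#                      json={"jobId": job_id,
#                            "message": "Which products are bestsellers?"},
#                      ) as resp:
#       for raw in resp.iter_lines():
#           chunk = parse_sse_data(raw)
#           if chunk is not None:
#               print(chunk, end="")
```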

Working examples

```python
import requests, time

API_KEY = "ss_your_key_here"
BASE = "https://shopsniffer.com/api"
headers = {"X-API-Key": API_KEY, "Content-Type": "application/json"}

# 1. Create job
resp = requests.post(
    f"{BASE}/jobs",
    json={
        "domain": "allbirds.com",
        "webhook_url": "https://your-server.com/hook",
    },
    headers=headers,
)
job = resp.json()["job"]
print(f"Job {job['id']} created for {job['domain']}")

# 2. Poll for completion (or wait on your webhook)
while True:
    status = requests.get(f"{BASE}/jobs/{job['id']}/status").json()
    print(f"Status: {status['status']}")
    if status["status"] == "completed":
        break
    if status["status"] == "errored":
        raise RuntimeError(f"Job failed: {status}")
    time.sleep(10)

# 3. Get report data
report = requests.get(f"{BASE}/reports/{job['id']}").json()
print(f"Found {len(report.get('products', []))} products")

# 4. Download CSV
csv_url = f"{BASE}/downloads?jobId={job['id']}&key=products.csv"
csv_data = requests.get(csv_url).text
with open("products.csv", "w") as f:
    f.write(csv_data)
```

Webhooks vs polling vs WebSocket

| Method | Latency | Cost | Best for |
| --- | --- | --- | --- |
| Webhook (webhook_url at creation) | Seconds | 1 request per callback | Server-side agents |
| WebSocket (/api/ws/:jobId) | Sub-second | 1 long-lived connection | Interactive UIs, streaming progress |
| Polling (GET /jobs/:id/status) | 10s+ | N requests until done | Fallback when others unavailable |

Prefer webhooks for autonomous agents — they're push-based and don't consume rate budget.

Error handling

Agents should handle these cases:

  • 401/402 on job creation — retry with auth (API key or x402 payment).
  • 400 "Not a Shopify store" — the target domain isn't a Shopify store. Fail permanently, don't retry.
  • 404 on report fetch — the job is still processing, or the job ID is wrong. Re-check status first.
  • Job errored status — the scrape failed. Use POST /jobs/:id/retry to create a new attempt; if it fails twice, escalate to a human.
  • 429 on any endpoint — back off using retry_after. See rate limits.
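The 429 case from the list above can be wrapped in a small retry helper (this sketch assumes retry_after arrives as a seconds value in the JSON body; adjust if your responses carry it elsewhere):

```python
import time

def backoff_delay(body: dict, default: float = 5.0) -> float:
    """Seconds to wait before retrying, from the retry_after field."""
    try:
        return max(0.0, float(body.get("retry_after", default)))
    except (TypeError, ValueError):
        return default

def call_with_retry(call, max_attempts: int = 5):
    """Run `call` (which returns a (status_code, body) tuple); sleep
    and retry whenever the API answers 429."""
    for _ in range(max_attempts):
        code, body = call()
        if code != 429:
            return code, body
        time.sleep(backoff_delay(body))
    raise RuntimeError("still rate limited after repeated retries")
```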

Next steps

  • Python agent tutorial: full end-to-end tutorial with a real agent framework.
  • x402 tutorial: account-free per-request payment for fully autonomous agents.
  • Chat API: stream AI analysis of any report.
  • API overview: full endpoint reference.