Integrate with an AI agent (Python)

End-to-end tutorial wiring ShopSniffer into an autonomous agent loop — discovery, authentication, job creation, and tool-call analysis.

Overview

This tutorial walks through building a minimal but realistic AI agent that uses ShopSniffer as a tool. The agent reads a user's question about a Shopify store, discovers ShopSniffer's capabilities, creates a job, fetches the report, and uses the results to answer. We'll use Anthropic's Claude with tool use, but the pattern translates to any tool-calling LLM.

Prerequisites:

  • Python 3.11+
  • pip install anthropic requests
  • An Anthropic API key (ANTHROPIC_API_KEY)
  • A ShopSniffer API key (SHOPSNIFFER_API_KEY) — create one if you don't have one

Step 1 — discover capabilities

A good agent starts by reading the capability manifest, not by hardcoding endpoints. This means your agent keeps working if ShopSniffer adds new endpoints or changes pricing.

```python
import requests

def discover_shopsniffer():
    """Read the agent manifest and OpenAPI spec."""
    agents = requests.get(
        "https://shopsniffer.com/.well-known/agents.json"
    ).json()
    openapi = requests.get(
        "https://shopsniffer.com/api/openapi.json"
    ).json()
    return agents, openapi
```

In production, cache these responses for an hour or two rather than fetching them every run.
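A minimal sketch of that caching layer, assuming a simple in-process dict is acceptable (swap in Redis or similar if your agent runs across processes); the `CACHE_TTL` value and `fetch_cached` helper name are illustrative:

```python
import time

import requests

_CACHE: dict[str, tuple[float, dict]] = {}
CACHE_TTL = 3600  # seconds -- refresh the manifest roughly hourly

def fetch_cached(url: str) -> dict:
    """Fetch a JSON document, reusing a cached copy while it is still fresh."""
    now = time.time()
    hit = _CACHE.get(url)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]  # cache hit -- skip the network round trip
    data = requests.get(url, timeout=10).json()
    _CACHE[url] = (now, data)
    return data
```

With this in place, `discover_shopsniffer` can call `fetch_cached` for both URLs and a long-running agent only re-reads the manifest once per hour.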

Step 2 — define the tool

Expose ShopSniffer as a single coarse-grained tool the LLM can call. Coarse-grained beats fine-grained because it keeps the LLM's decision tree simple ("do I need store data? yes → call this one tool").

```python
TOOLS = [
    {
        "name": "analyze_shopify_store",
        "description": (
            "Analyze any public Shopify store. Returns the full product "
            "catalog, installed apps, theme, PageSpeed scores, and insights. "
            "Use this when the user asks about products, pricing, apps, "
            "or performance of a specific Shopify store."
        ),
        "input_schema": {
            "type": "object",
            "properties": {
                "domain": {
                    "type": "string",
                    "description": "Shopify store domain, e.g. 'allbirds.com'",
                },
            },
            "required": ["domain"],
        },
    },
]
```

Step 3 — implement the tool

The tool body creates a job, polls until complete, and returns the report data trimmed to what's useful for an LLM (full product arrays are too big for context — summarize).

```python
import os
import time

import requests

API = "https://shopsniffer.com/api"
KEY = os.environ["SHOPSNIFFER_API_KEY"]
HEADERS = {"X-API-Key": KEY, "Content-Type": "application/json"}

def analyze_shopify_store(domain: str) -> dict:
    # 1. Create the job
    r = requests.post(f"{API}/jobs", json={"domain": domain}, headers=HEADERS)
    r.raise_for_status()
    job_id = r.json()["job"]["id"]

    # 2. Poll until complete
    while True:
        status = requests.get(
            f"{API}/jobs/{job_id}/status", headers=HEADERS
        ).json()
        if status["status"] == "completed":
            break
        if status["status"] == "errored":
            return {"error": f"Scrape failed for {domain}"}
        time.sleep(5)

    # 3. Fetch the enriched job record (metadata + insights)
    job = requests.get(f"{API}/jobs/{job_id}", headers=HEADERS).json()["job"]

    # 4. Return a compact, LLM-friendly summary
    return {
        "domain": job["domain"],
        "shop_name": job.get("shop_meta", {}).get("shop_name"),
        "product_count": job["product_count"],
        "collection_count": job["collection_count"],
        "theme": job.get("shop_meta", {}).get("theme_name"),
        "detected_apps": job.get("shop_meta", {}).get("detected_apps", []),
        "insights": job.get("insights"),
        "downloads": job.get("downloads"),
        "report_url": f"https://shopsniffer.com/report/{job_id}",
    }
```

Don't dump the full products array into the LLM context — a store with 500 products can blow past token budgets. Summarize (top vendors, price stats, categories) and let the LLM ask follow-up questions via the chat API if it needs to drill in.
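One way to build that summary is a small helper over the downloaded product array. This is a sketch under assumptions: the field names `vendor` and `price` are illustrative and should be adjusted to match the actual download schema.

```python
from collections import Counter

def summarize_products(products: list[dict]) -> dict:
    """Compress a full product array into a few LLM-sized statistics.

    Assumes each product dict carries a 'vendor' string and a numeric
    'price' -- rename the keys to match the real download format.
    """
    if not products:
        return {"product_count": 0}
    prices = sorted(p["price"] for p in products if p.get("price") is not None)
    vendors = Counter(p.get("vendor", "unknown") for p in products)
    return {
        "product_count": len(products),
        "top_vendors": vendors.most_common(5),   # (vendor, count) pairs
        "price_min": prices[0],
        "price_max": prices[-1],
        "price_median": prices[len(prices) // 2],
    }
```

A summary like this stays a few hundred tokens regardless of catalog size, which keeps the tool result safe to inject into any context window.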

Step 4 — run the agent loop

Standard Claude tool-use pattern: LLM decides, tool runs, LLM sees results, loop until no more tool calls.

```python
from anthropic import Anthropic

client = Anthropic()

def run_agent(user_question: str) -> str:
    messages = [{"role": "user", "content": user_question}]
    while True:
        response = client.messages.create(
            model="claude-opus-4-6",
            max_tokens=2048,
            tools=TOOLS,
            messages=messages,
        )

        # No more tool calls -- return the final answer
        if response.stop_reason != "tool_use":
            return "".join(
                block.text for block in response.content if block.type == "text"
            )

        # Execute every tool_use block
        tool_results = []
        for block in response.content:
            if block.type != "tool_use":
                continue
            if block.name == "analyze_shopify_store":
                result = analyze_shopify_store(**block.input)
            else:
                result = {"error": f"Unknown tool {block.name}"}
            tool_results.append({
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": str(result),
            })

        # Append assistant + tool_result turns and loop
        messages.append({"role": "assistant", "content": response.content})
        messages.append({"role": "user", "content": tool_results})
```

Step 5 — try it

```python
if __name__ == "__main__":
    answer = run_agent(
        "What Shopify apps does Allbirds use and how fast is their homepage?"
    )
    print(answer)
```

The agent will call analyze_shopify_store(domain="allbirds.com"), wait for the job, get the summary back, and produce a natural-language answer referencing the detected apps and PageSpeed scores.

Adding webhook-driven async mode

Polling in-loop is fine for interactive use, but wasteful for batch jobs. For production, make the tool fire-and-forget with a webhook and let the LLM come back to the result later:

1. Register a webhook at job creation: pass webhook_url pointing at your agent's job-completion handler.
2. Return immediately from the tool: the tool returns {"job_id": "…", "status": "pending"} without waiting. The LLM can either continue with other work or respond "I'll come back when the report is ready."
3. Persist the pending state: store (job_id → conversation_id) so your webhook handler knows which conversation to resume.
4. Resume on callback: when the webhook fires, fetch the report, inject it into the conversation as a new user message ("Here are the results for job X: …"), and re-run the agent loop.

This pattern scales to hundreds of concurrent store analyses without burning tokens on polling loops.
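The fire-and-forget side of this pattern can be sketched as follows. The webhook URL and the in-memory `PENDING` map are placeholders (use a durable store in production), and `on_webhook` simply returns the `(conversation_id, message)` pair that your own resume logic would feed back into the agent loop:

```python
import os

import requests

API = "https://shopsniffer.com/api"
HEADERS = {
    "X-API-Key": os.environ.get("SHOPSNIFFER_API_KEY", ""),
    "Content-Type": "application/json",
}

# job_id -> conversation_id; swap for Redis or a DB in production
PENDING: dict[str, str] = {}

WEBHOOK_URL = "https://your-agent.example.com/hooks/shopsniffer"  # hypothetical

def analyze_store_async(domain: str, conversation_id: str) -> dict:
    """Fire-and-forget tool body: create the job and return immediately."""
    r = requests.post(
        f"{API}/jobs",
        json={"domain": domain, "webhook_url": WEBHOOK_URL},
        headers=HEADERS,
    )
    r.raise_for_status()
    job_id = r.json()["job"]["id"]
    PENDING[job_id] = conversation_id  # remember which conversation to resume
    return {"job_id": job_id, "status": "pending"}

def on_webhook(payload: dict):
    """Webhook handler: fetch the report and say which conversation resumes."""
    job_id = payload["job_id"]
    conversation_id = PENDING.pop(job_id, None)
    if conversation_id is None:
        return None  # unknown or already-handled job
    job = requests.get(f"{API}/jobs/{job_id}", headers=HEADERS).json()["job"]
    message = f"Here are the results for job {job_id}: {job}"
    return conversation_id, message
```

The LLM-facing tool stays cheap (one POST, no polling), and the webhook handler carries the result back to the right conversation when the job finishes.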

Going further

  • Use the chat API if the user asks follow-up questions about the same report. It's cheaper than re-feeding the full product list into your own LLM.
  • Use x402 payment if you want a fully autonomous agent with no pre-provisioned API key — the agent pays per request.
  • Combine with monitoring for agents that track stores continuously, not just on-demand.

Next steps

  • Pay with x402: make the agent fully account-free with per-request crypto payments.
  • AI agent integration: reference for discovery files and recommended patterns.
  • Chat API: offload follow-up questions to ShopSniffer's hosted chat.
  • Monitor prices: continuous monitoring for agents that watch stores over time.