Eugene Levitin

March 19, 2026 ・ Agentic Commerce

The Court Case That Will Define AI Shopping

On March 9, 2026, Judge Maxine Chesney in the Northern District of California granted Amazon a temporary restraining order against Perplexity's Comet browser — an AI agent that makes purchases on Amazon on behalf of users. The ruling hinged on a distinction that sounds simple but changes everything: user permission is not the same as platform authorization.

TL;DR: The first federal court ruling on AI agent shopping sided with Amazon: platforms can block AI agents even when users authorize them. Perplexity appealed to the Ninth Circuit on March 11. Meanwhile, Visa, Mastercard, and Cloudflare are building the trust infrastructure that could make these courtroom fights unnecessary. The question isn't whether agents will shop — it's who gets to set the rules.

I've been tracking this story since I first tested how AI agents actually shop. That post was about product discovery — which stores show up when you ask ChatGPT for pajamas. This post is about something bigger: what happens when an AI agent tries to buy something on a platform that doesn't want it there.

What Actually Happened in the Courtroom

Perplexity launched Comet in late 2025 as a browser that lets AI complete purchases on behalf of users. You tell the agent what you want; it finds the item on Amazon and handles the checkout — all inside Perplexity's interface, using your Amazon credentials.

Amazon's objection wasn't about what Comet bought. It was about how it got in.

The timeline, as laid out in court filings: Amazon first warned Perplexity in November 2024. By August 2025, Amazon had built technical barriers — CAPTCHAs, bot detection, rate limiting. Perplexity's engineers circumvented them within 24 hours. Amazon warned them five more times. Perplexity kept going.

Judge Chesney's ruling cited the Computer Fraud and Abuse Act — a 1986 law originally written for computer hackers — and found "strong evidence" that Comet accessed Amazon "without authorization." The critical legal distinction: Comet "accesses with the Amazon user's permission but without authorization by Amazon" (PYMNTS, 2026).

That one sentence rewrites the rules. If you hire a personal shopper, they can walk into any store. If you authorize an AI agent, the store can still say no.

Why the Food Delivery Analogy Keeps Nagging at Me

Amazon's lawyers made a comparison that stuck. DoorDash needs restaurant consent to list menus and process orders. Booking.com needs hotel consent to sell rooms. These are established platform consent norms — and the judge agreed the same logic applies to AI agents.

Perplexity pushed back: users should have "the right to choose whatever AI they want" to interact with services they already have accounts on. Your account, your data, your choice of interface.

Here's what bothers me about the DoorDash analogy. DoorDash negotiates one restaurant at a time. A human on the partnerships team calls a restaurant owner, signs a contract, uploads the menu. That works when you're onboarding restaurants in a city.

An AI shopping agent doesn't work that way. It wants access to every store, every platform, every inventory feed — simultaneously and autonomously. Millions of agents needing access to millions of stores. There's no partnerships team big enough for that. The consent model that works at DoorDash's scale breaks at internet scale.

The judge sided with platform consent, and that creates precedent. Perplexity filed their appeal on March 11. But even if the Ninth Circuit rules differently, the underlying problem doesn't go away: how does consent work when both sides are machines?

The Three Gaps the Ruling Exposed

Here's where I went down a rabbit hole I didn't expect. A comment on my LinkedIn post about the injunction — from Josh Baker, former NFL player turned tech founder who's been building what he calls the Agent Certification Framework — made me realize the court had to fill a gap that the industry should have filled itself.

Baker's point: "If Comet had operated under a certification standard that required proving platform-level authorization before executing a transaction, this ruling might have looked very different."

I spent three days mapping what exists. Three layers are missing.

Who Is This Agent? (Identity)

Perplexity's Comet had no standardized way to prove its identity to Amazon. It showed up as traffic — sophisticated bot traffic, but traffic. Amazon couldn't distinguish it from a scraper, a price-monitoring tool, or a legitimate shopping agent with bad manners.

The infrastructure being built:

Visa open-sourced a Transaction Authentication Protocol (TAP) that gives AI agents Ed25519 cryptographic key pairs — essentially digital passports (Visa Developer, 2026). That's the big one. If an agent carries a Visa-issued certificate, a platform can verify its identity in milliseconds.
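To make the "digital passport" idea concrete, here's a minimal sketch of the challenge-response flow such a protocol implies: the platform issues a fresh nonce, the agent signs it, and the platform verifies the signature against the agent's registered key. The real TAP uses Ed25519 asymmetric key pairs; this sketch substitutes HMAC with a shared secret so it runs on the Python standard library alone, and all names and the message format are my own illustration, not Visa's API.

```python
import hashlib
import hmac
import secrets

# Stands in for the agent's Ed25519 signing key in the real protocol.
agent_key = secrets.token_bytes(32)

def sign_challenge(key: bytes, challenge: bytes) -> bytes:
    """Agent side: prove possession of the key by signing the platform's nonce."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify_agent(key: bytes, challenge: bytes, signature: bytes) -> bool:
    """Platform side: check the signature against the key registered for this agent."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

nonce = secrets.token_bytes(16)         # platform issues a fresh challenge
sig = sign_challenge(agent_key, nonce)  # agent responds with a signature

assert verify_agent(agent_key, nonce, sig)                     # registered agent passes
assert not verify_agent(secrets.token_bytes(32), nonce, sig)   # unknown key fails
```

The point of the nonce is that a captured signature can't be replayed: each verification is bound to a one-time challenge, which is what lets a platform check identity "in milliseconds" at the edge.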

Cloudflare's approach is different — a 7-step edge verification that checks agent credentials before traffic ever reaches the origin server. And GoDaddy, of all companies, is building an Agent Name Service through an IETF draft standard. Think DNS for AI agents: shopping-agent.perplexity.ai resolves not to an IP address but to a verifiable identity with declared capabilities.
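A toy sketch of what an ANS-style lookup might return, assuming a record shape I invented for illustration (the actual IETF draft defines its own schema):

```python
# Hypothetical agent-name registry: a name resolves not to an IP address
# but to a verifiable identity with declared capabilities.
ANS_REGISTRY = {
    "shopping-agent.perplexity.ai": {
        "key_fingerprint": "ed25519:2a7f...",   # placeholder, not a real key
        "operator": "Perplexity AI",
        "capabilities": ["browse", "checkout"],
        "certifications": ["ACF"],              # behavioral trust seals
    },
}

def resolve_agent(name: str):
    """Return the identity record for an agent name, or None if unknown."""
    return ANS_REGISTRY.get(name)

record = resolve_agent("shopping-agent.perplexity.ai")
```

The design choice worth noticing: a platform receiving traffic from an unknown agent gets `None` and can refuse at the edge, which is exactly the decision Amazon couldn't make about Comet.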

A HID Global survey from Q1 2026 found 15% of organizations are already deploying certificates and credentials for AI agents. A year ago that number was essentially zero.

Can This Agent Be Trusted? (Behavioral Trust)

Even if Amazon knew who Comet was, it had no way to know what Comet would do. Would it respect rate limits? Follow terms of service? Avoid scraping inventory data for competitive intelligence?

Two certification systems are racing to fill this gap:

The Agent Certification Framework (ACF) runs 30 behavioral tests across 4 suites — commitment boundaries, consistency, hallucination detection, and adversarial resistance. Agents that pass get cryptographic certificates. There's a public registry of certified agents (ACF Standards, 2026).

Then there's AIUC-1 — backed by Stanford, MIT, and MITRE — which certified UiPath as its first platform on March 9, 2026 (UiPath Blog, 2026). The same day as the Perplexity injunction. Over 2,000 evaluations completed across participating organizations. People are calling it "SOC 2 for AI" — the same kind of third-party audit that enterprise software buyers already require for cloud services.

A2AS — a consortium including AWS, Google, and JPMorgan — focuses on runtime security between agents. How agents authenticate to each other during multi-agent workflows where one agent hands off a task to another.

Did the User Actually Say Yes? (Authorization Records)

The court found that user permission alone wasn't sufficient. But Perplexity had no cryptographic proof that users authorized specific transactions — just session access and Amazon credentials.

Mastercard launched Verifiable Intent on March 5, 2026, with Google — four days before the ruling. It creates a cryptographic record that a human authorized a specific AI agent to make a specific purchase. Not just "I logged in" but "I told this agent to buy this item at this price" (Mastercard Newsroom, 2026).
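A sketch of what such an intent record could look like: a signed statement binding a user to a specific agent, item, and price cap, which the platform can verify per-transaction. Field names and the HMAC scheme are my illustration only; Mastercard's actual product defines its own format and uses asymmetric signatures rather than a shared key.

```python
import hashlib
import hmac
import json
import time

def create_intent(user_key: bytes, agent_id: str, item: str, max_price_cents: int) -> dict:
    """User side: sign a statement that this agent may buy this item up to this price."""
    payload = {
        "agent": agent_id,
        "item": item,
        "max_price_cents": max_price_cents,
        "issued_at": int(time.time()),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(user_key, body, hashlib.sha256).hexdigest()
    return payload

def verify_intent(user_key: bytes, record: dict,
                  agent_id: str, item: str, price_cents: int) -> bool:
    """Platform side: check the signature AND that the transaction matches the intent."""
    record = dict(record)
    sig = record.pop("signature")
    body = json.dumps(record, sort_keys=True).encode()
    if not hmac.compare_digest(sig, hmac.new(user_key, body, hashlib.sha256).hexdigest()):
        return False
    return (record["agent"] == agent_id
            and record["item"] == item
            and price_cents <= record["max_price_cents"])

key = b"user-secret-key-0000000000000000"
intent = create_intent(key, "comet", "pajamas-sku-123", 4999)
assert verify_intent(key, intent, "comet", "pajamas-sku-123", 4500)       # within intent
assert not verify_intent(key, intent, "comet", "pajamas-sku-123", 9999)   # over price cap
assert not verify_intent(key, intent, "other-agent", "pajamas-sku-123", 4500)  # wrong agent
```

This is the gap the court flagged: session credentials prove "I logged in," while a record like this proves "I told this agent to buy this item at this price."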

And in February 2026, NIST launched an AI Agent Standards Initiative to establish baseline requirements for AI agent behavior in commerce and other domains. Still early, but when a federal standards body starts writing rules, the direction is clear — this won't be left to case-by-case courtroom fights forever.

The SSL Certificate Moment

I keep coming back to this analogy. In the 1990s, the web had the same problem. Anyone could put up a storefront. No way to prove a website was legitimate. No way for users to know if their credit card was being sent to a real merchant or a scammer.

SSL certificates solved this. A cryptographic proof of identity, verified by a trusted third party. The padlock icon became a trust signal that billions of people understand without thinking about the cryptography underneath.

AI agents are in that pre-SSL moment right now. The pieces being built — Visa TAP for identity, ACF and AIUC-1 for behavioral trust, Mastercard Verifiable Intent for authorization — are the agent equivalent of the certificate authority ecosystem.

The timing is what struck me most. Mastercard launched March 5. UiPath got certified March 9. Visa's TAP is already on GitHub. These aren't roadmap slides from a conference keynote. They're shipping while the courts are still figuring out which 1986 law applies.

What Happens Next

Perplexity appealed to the Ninth Circuit on March 11 (CNBC, 2026). On the same day, Amazon expanded its Shop Direct and Buy for Me features to over 400,000 merchants and 100 million products (About Amazon, 2026) — building its own agent shopping layer inside the walled garden while the legal fight keeps external agents out.

I see three paths from here, and they're not mutually exclusive.

The Ninth Circuit upholds the ruling. Platform consent becomes the law for AI agents — the DoorDash model at internet scale. Amazon, eBay (which banned AI agents in January), and any platform with enough leverage gets to dictate terms.

Or the Ninth Circuit reverses, siding with consumer autonomy. Your account, your agent, your choice. This favors Perplexity and anyone building cross-platform agents — but it creates chaos without behavioral standards. Agents flooding platforms with no identity, no trust verification, no audit trail.

The path I keep coming back to: the trust infrastructure matures fast enough that the legal question becomes secondary. Agents carry cryptographic credentials. Platforms verify identity at the edge. Behavioral certifications become table stakes. The padlock icon, but for agents.

I think we get some combination of the first and third. The courts draw a line. The infrastructure fills the space the courts can't reach. I think this because of the timing — Visa, Mastercard, and the certification bodies aren't waiting for the Ninth Circuit. They're building as if the answer is already obvious.

The Question That Won't Go Away

Four posts into this investigation — from testing which stores AI recommends, to the infrastructure gap, to China's head start, to Amazon's walled garden — I thought the questions were technical. Which protocols work? Whose products show up?

This ruling shifts the frame. The question isn't "can your store serve AI agents." It's "who decides whether an agent can enter your store in the first place?" And underneath that: who decides what counts as a trustworthy agent?

Four competing protocols — UCP, ACP, MCP, A2A — each backed by different tech giants, each with a different answer to that question. That's where I'm digging next.

FAQ

Can AI agents legally shop on websites without platform permission? Not under current precedent. The March 2026 ruling in Amazon v. Perplexity found that user authorization is not the same as platform authorization under the CFAA. Perplexity's Comet browser was blocked despite having user permission because Amazon never authorized agent access. The case is being appealed to the Ninth Circuit.

What is the Agent Certification Framework? Think of it as a trust seal for AI agents. ACF runs 30 behavioral tests across 4 suites — commitment boundaries, consistency, hallucination detection, and adversarial resistance. Agents that pass receive cryptographic certificates and appear on a public registry at acfstandards.org.

What is Mastercard Verifiable Intent? A cryptographic receipt proving a human authorized a specific AI purchase. Launched March 5, 2026, in partnership with Google, it creates an auditable record that goes beyond session-level access — exactly the kind of proof that was missing in the Perplexity case.

How does Visa TAP work for AI agents? Visa's Transaction Authentication Protocol assigns Ed25519 cryptographic key pairs to AI agents — digital passports, essentially. Platforms can verify agent identity at the edge before granting access. The protocol is open-source on GitHub, signaling Visa's push for an industry standard rather than a proprietary solution.

Will this ruling affect other AI shopping tools? Almost certainly. It establishes the first federal precedent that platforms can block AI agents even when users authorize them. This extends beyond shopping to travel booking, food delivery, and any service where an agent acts on behalf of a user on a third-party platform.

  • Agentic Commerce
  • AI
  • Ecommerce
  • Legal
Eugene Levitin

CEO, Ivinco

Building Ivinco since 2009 — a Kubernetes consulting firm with 20+ senior engineers managing 1,350+ servers worldwide. Currently exploring how AI agents are reshaping e-commerce infrastructure.