2026-04-05 · x402 · MPP · AI agents · openapi · llms.txt

How AI Agents Discover and Pay for APIs Without Human Intervention

No UI required. AI agents find paylog.dev through openapi.json, llms.txt, and Bazaar — then pay automatically via x402 or MPP. Here's how the discovery-to-payment flow works.

Most API documentation is written for humans. You navigate to a site, read the docs, copy an example curl command, paste in your API key, and eventually figure out what the endpoint returns. This works fine for humans. For AI agents, it's completely the wrong model.

Agents don't read documentation the way humans do. They don't navigate UIs. They don't sign up for accounts or enter payment details. What they can do is read structured files, call endpoints, and handle payment challenges — all in a single automated flow. This post explains how that flow actually works, using paylog.dev as a concrete example.


Humans vs Agents: Different Entry Points

The difference comes down to entry points.

A human arrives at a website through search or a link, reads a landing page, finds the docs section, copies an example, and eventually makes their first API call. The discovery journey looks like this:

Human:
  Google → paylog.dev → landing page → docs → API call

An agent skips all of that. It looks for structured metadata — machine-readable files that describe what the API does and how to pay for it. The journey is much shorter:

Agent:
  openapi.json → endpoint → 402 challenge → pay → data

The UI is never touched. The docs page is never read. The agent gets everything it needs from a handful of structured files.

This is why x402 and MPP matter. They're not just payment protocols — they're what make the second flow possible. Without a standard way to signal "this endpoint costs $0.001 and here's how to pay," there's no way for an agent to proceed without human intervention.
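Concretely, that signal can be as small as a structured 402 response that the agent checks against a budget. Below is a sketch of that decision step; the challenge shape and the `withinBudget` helper are hypothetical, not the literal x402 or MPP wire format:

```typescript
// Hypothetical 402 challenge body -- illustrative only, not the
// exact x402 or MPP wire format.
interface PaymentChallenge {
  amount: string                // decimal string, e.g. "0.001000"
  currency: string              // e.g. "USD"
  protocol: "x402" | "mpp"
}

// Agent-side budget gate: proceed only if the quoted price is at or
// under the agent's per-call budget.
function withinBudget(challenge: PaymentChallenge, maxUsd: number): boolean {
  return challenge.currency === "USD" && Number(challenge.amount) <= maxUsd
}

const challenge: PaymentChallenge = {
  amount: "0.001000",
  currency: "USD",
  protocol: "mpp",
}

console.log(withinBudget(challenge, 0.01))  // true: $0.001 fits a $0.01 budget
```

With a gate like this, "proceed without human intervention" becomes a one-line policy decision rather than a judgment call.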


openapi.json: The Agent's Map

The openapi.json file is the primary entry point for agent discovery. It tells an agent what endpoints exist, what parameters they accept, and — crucially — what they cost and how to pay.

Standard OpenAPI doesn't include payment information. The AgentCash Discovery Spec extends it with an x-payment-info field that carries everything an agent needs to initiate payment.

Here's what paylog.dev's openapi.json looks like for the report endpoint:

{
  "/api/v1/report": {
    "get": {
      "operationId": "getReport",
      "summary": "Get spending report by wallet and date range",
      "x-payment-info": {
        "price": {
          "mode": "fixed",
          "amount": "0.001000",
          "currency": "USD"
        },
        "protocols": [
          {
            "mpp": {
              "method": "tempo",
              "intent": "charge",
              "currency": "0x20c000000000000000000000b9537d11c60e8b50"
            }
          }
        ]
      },
      "parameters": [
        { "name": "wallet", "in": "query", "required": true },
        { "name": "from",   "in": "query", "required": true }
      ]
    }
  }
}

An agent that reads this knows: the endpoint costs $0.001 USD, payment goes through MPP on Tempo, and the currency contract address is 0x20c0.... No human needed to convey any of that.

The spec lives at https://paylog.dev/openapi.json, a predictable path at the domain root. Any agent that knows to look for openapi.json at the root of a domain — or that finds it through Bazaar — can immediately understand the full API surface.
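The agent-side parsing step is straightforward. Here is a sketch that extracts the payment metadata from the JSON excerpt above; the field names follow the excerpt, but the `paymentInfo` helper itself is hypothetical:

```typescript
// The report-endpoint fragment from paylog.dev's openapi.json, inlined.
const spec = {
  "/api/v1/report": {
    get: {
      operationId: "getReport",
      "x-payment-info": {
        price: { mode: "fixed", amount: "0.001000", currency: "USD" },
        protocols: [
          { mpp: { method: "tempo", intent: "charge",
                   currency: "0x20c000000000000000000000b9537d11c60e8b50" } },
        ],
      },
    },
  },
} as const

// Hypothetical helper: pull out everything an agent needs to decide
// whether (and how) to pay for a given operation.
function paymentInfo(path: string, method: "get") {
  const op = (spec as any)[path]?.[method]
  const info = op?.["x-payment-info"]
  if (!info) return null  // no payment metadata: treat as a free endpoint
  return {
    operationId: op.operationId as string,
    priceUsd: Number(info.price.amount),
    protocols: info.protocols.map((p: any) => Object.keys(p)[0]),
  }
}

console.log(paymentInfo("/api/v1/report", "get"))
// { operationId: "getReport", priceUsd: 0.001, protocols: ["mpp"] }
```

A real agent would fetch the full document over HTTP and loop over every path, but the per-operation logic is exactly this lookup.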


llms.txt: The Agent's Briefing

openapi.json tells agents what the API does. llms.txt tells them what the site is in plain language.

The format is simple: a markdown file at https://yourdomain.com/llms.txt with a brief description of the service, links to key resources, and whatever context an AI model needs to use the site effectively. It's the spiritual successor to robots.txt, but instead of controlling crawler access, it's designed to make AI-assisted discovery actually work.

Here's the relevant part of paylog.dev's llms.txt:

# paylog.dev

## What is this
paylog.dev is an API that aggregates and visualizes on-chain payment history
for MPP (Machine Payments Protocol) and x402 protocol transactions.
Stateless, no database. Callable directly from AI agents or CLI.

## Endpoints

### GET /api/v1/report
Spending breakdown on Tempo chain (MPP).
Payment: $0.001 USDC via MPP

### GET /api/v1/x402/services
x402 service registry: maps service names to payTo addresses on Base.
Payment: $0.01 USDC via x402

The difference from robots.txt is intent. robots.txt is about access control — what crawlers are allowed to index. llms.txt is about comprehension — helping an AI model understand your site well enough to reason about it and use it correctly. You're not telling the agent what it can't do; you're giving it a head start on understanding what you offer.


Bazaar and MPPscan: Agent-Native Directories

Agents need a way to discover APIs they've never encountered before. Two registries handle this for the MPP and x402 ecosystems.

MPPscan (mppscan.com) is the registry for MPP services on Tempo chain. It maps recipient wallet addresses to service names and metadata. When an agent sees a USDC transfer going to 0xca4e835F803..., it can look up that address in MPPscan and find out it went to a bundle of Exa/fal/Modal services. paylog uses MPPscan data to build its service-name resolution — and is itself registered there.
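The lookup itself is simple address-to-name resolution. A sketch with an in-memory registry; the entries, addresses, and `resolveService` helper are illustrative, since real data would come from the MPPscan API:

```typescript
// Illustrative registry snapshot -- real entries come from MPPscan.
// Keys are normalized to lowercase so lookups are case-insensitive
// (an EIP-55 checksummed address and its lowercase form name the
// same account).
const registry = new Map<string, string>([
  ["0x1111111111111111111111111111111111111111", "paylog.dev"],
  ["0x2222222222222222222222222222222222222222", "example-service-bundle"],
])

function resolveService(payTo: string): string {
  return registry.get(payTo.toLowerCase()) ?? "unknown service"
}

console.log(resolveService("0x1111111111111111111111111111111111111111"))
// "paylog.dev"
```

This is the resolution paylog performs when it turns raw on-chain transfers into a human-readable spending report.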

Bazaar is the discovery layer for x402 services on Base mainnet. It's where x402-compatible services list their payTo addresses, pricing, and endpoint descriptions. Agents that support x402 can query the Bazaar API to find services worth calling.

Being registered in both directories matters. It's not about SEO or referral traffic — it's about being in the index that agents search. An agent that queries Bazaar for "spending history API" should be able to find paylog.dev and proceed directly to a paid API call without any human guidance.


The Discovery-to-Payment Flow

Here's the full sequence from first contact to receiving data:

Step 1: Agent reads openapi.json at https://paylog.dev/openapi.json
Step 2: Agent identifies /api/v1/report as a paid endpoint ($0.001 MPP)
Step 3: Agent calls GET /api/v1/report?wallet=0x...&from=2026-03-01
Step 4: Server returns HTTP 402 with payment challenge
Step 5: Agent pays automatically using MPP wallet
Step 6: Agent retries with payment proof in Authorization header
Step 7: Server verifies payment and returns JSON report

The agent never stops to ask a human for help. Steps 4–6 happen in milliseconds, handled entirely by the payment client library.
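The seven steps above can be sketched as a single client-side loop. Everything below is stubbed: the challenge shape, the `pay()` function, the proof header, and the stub `server` are placeholders standing in for the real MPP wire format and payment client:

```typescript
// Minimal request/response shapes for the sketch -- not real HTTP objects.
interface Req  { url: string; headers: Record<string, string> }
interface Resp { status: number; body: string }

// Stub server: demands payment until it sees a proof (steps 4 and 7).
function server(req: Req): Resp {
  if (req.headers["Authorization"] === "Payment proof-abc") {
    return { status: 200, body: '{"total_usd":"0.123"}' }  // verified: data
  }
  return { status: 402, body: '{"amount":"0.001000","currency":"USD"}' }
}

// Stub payment client (step 5): a real client would sign and settle
// the payment on-chain, then return an opaque proof.
function pay(challenge: { amount: string; currency: string }): string {
  return "proof-abc"
}

// Steps 3-7 as one loop: call, pay on 402, retry with proof attached.
function callWithPayment(url: string): Resp {
  const req: Req = { url, headers: {} }
  let resp = server(req)                              // step 3: first attempt
  if (resp.status === 402) {                          // step 4: challenge
    const proof = pay(JSON.parse(resp.body))          // step 5: pay
    req.headers["Authorization"] = `Payment ${proof}` // step 6: retry w/ proof
    resp = server(req)
  }
  return resp                                         // step 7: data
}

console.log(callWithPayment("https://paylog.dev/api/v1/report").status)  // 200
```

In production this loop lives inside the payment client library, which is why application code never sees the 402 at all.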

For x402 on Base, the same flow applies with different payment infrastructure. Using @x402/fetch:

import { wrapFetchWithPayment } from '@x402/fetch'
import { privateKeyToAccount }  from 'viem/accounts'
import { registerExactEvmScheme } from '@x402/evm/exact/client'
import { x402Client } from '@x402/core/client'

// process.env values are typed string | undefined; viem expects a 0x-prefixed hex key
const account = privateKeyToAccount(process.env.EVM_PRIVATE_KEY as `0x${string}`)
const client  = new x402Client()
registerExactEvmScheme(client, { signer: account })

const fetchWithPayment = wrapFetchWithPayment(fetch, client)

// This call handles the 402 → pay → retry cycle automatically
const res    = await fetchWithPayment('https://paylog.dev/api/v1/x402/services')
const { services } = await res.json()

The wrapFetchWithPayment wrapper intercepts the 402 response, signs an EIP-3009 payment authorization, and retries the request — all without any change to application code. From the caller's perspective, it's just a fetch call that costs $0.01.


Good Data Attracts Agents

There's a pattern emerging in the x402/MPP ecosystem that mirrors what happened with the early web: structured, hard-to-find data that's been manually curated is disproportionately valuable to AI systems.

jphfa wrote about this in the context of Berghain's booking system — a database of DJ sets, residencies, and booking patterns assembled by hand, the kind of thing that can't be easily scraped from a public page. That kind of data is worth paying for, and x402 makes it trivial to charge for.

paylog.dev's service mapping is the same structure in a different domain. The mapping from on-chain recipient addresses to human-readable service names doesn't exist in any single authoritative source. It's assembled from MPPscan, Bazaar, and manual research, updated daily. That's genuinely useful information for any agent operating on Tempo or Base — and it's exactly the kind of thing that justifies a micropayment.

Manually curated data
  → Not available elsewhere
  → AI systems need it
  → x402/MPP makes it easy to charge
  → Revenue funds more curation

The economic loop here is real. Good data pays for its own maintenance.


Track Agent Spending with paylog

There's a self-referential quality to paylog.dev that's worth calling out.

When an agent calls GET /api/v1/services to retrieve the MPP service registry, it pays $0.001 via MPP on Tempo. That payment shows up as a transaction on-chain. Any wallet that paid paylog.dev is itself tracked by paylog.dev.

Which means: if you're running an agent that calls paylog endpoints, you can track what you spent on paylog using paylog itself.

# See what your agent spent on paylog (and everything else)
npx @kakedashi/paylog report --days 30

# Or via the API directly
tempo request -L -X GET \
  "https://paylog.dev/api/v1/report?wallet=YOUR_WALLET&from=2026-03-01"

The x402 version works the same way:

EVM_PRIVATE_KEY=0x... npx @kakedashi/paylog report --chain base

This is what a properly instrumented agent economy looks like: every payment is observable, every service is addressable, and the tools that track spending are themselves part of the ecosystem they're tracking.


The shift from human-navigated APIs to agent-native APIs is already happening. The infrastructure — openapi.json, llms.txt, Bazaar, MPPscan, x402, MPP — is largely in place. What's left is building the services that agents actually want to call.

The highest-value position in that world isn't necessarily building the most sophisticated AI. It's being the authoritative source of structured data that AI systems can't get anywhere else.