Version: 1.0
Date: 2026-03-11
Status: Living Document
Reference: PROTOCOL_SPECIFICATION.md v1.4, DEC-028, DEC-029
The AURA Protocol Specification describes message formats, state transitions, endpoints, and cryptographic mechanisms. It reads like a network protocol. That is intentional — implementers need wire-level precision.
But the protocol is not a network protocol. It is a market institution.
Network protocols move data reliably. Market institutions coordinate behaviour among self-interested participants who may cooperate, defect, deceive, or withdraw at any time. The rules that make a market stable are economic, not just technical. They determine whether participants can trust counterparties they have never met, whether high-quality sellers remain in the market, and whether the system improves or degrades as it scales.
This document makes the economic model explicit. It describes the principles that informed the protocol’s design: why constraints are first-class objects, why reputation is multi-dimensional, why identity persists across sessions, and why the protocol makes bad behaviour unprofitable rather than attempting to make it impossible.
Readers who care only about integration can skip this document. Readers who want to understand why the protocol works the way it does — or who plan to build on it — should not.
Most digital commerce systems model a shop: catalogue, cart, checkout. AURA models a market: intent, discovery, negotiation, commitment, settlement.
Shops assume a fixed seller with published prices and a buyer who accepts or leaves. Markets assume multiple competing sellers, incomplete information, and a negotiation process that surfaces the best available terms. The agent commerce landscape — where autonomous software acts on behalf of buyers and sellers — maps to the market model, not the shop model. Agents do not browse. They negotiate.
The protocol’s architecture reflects this. A Scout does not query a catalogue. It declares intent. Beacons do not list inventory. They respond to demand signals with offers shaped by their own constraints, capacity, and strategy. AURA Core does not process transactions. It brokers a market: routing intent, facilitating discovery, enforcing protocol rules, and recording outcomes that feed reputation.
This framing has consequences. Markets require mechanisms that shops do not: price discovery, adverse selection resistance, commitment enforcement, and behavioural equilibria. The rest of this document describes how the protocol addresses each of these.
The protocol draws on three bodies of work.
Robert Axelrod’s tournaments on the iterated Prisoner’s Dilemma established that cooperation emerges among self-interested agents when three conditions hold:

- Interactions repeat: the same agents meet again.
- Past behaviour is observable: outcomes are recorded and visible to counterparties.
- Retaliation is possible: defection today can be punished tomorrow.
The winning strategy — tit-for-tat — starts cooperative, mirrors the counterparty’s last action, and forgives quickly. It requires no central authority. It requires only memory and visibility.
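The strategy is small enough to state in a line. As an illustrative sketch (the function and move labels are not part of any protocol):

```python
def tit_for_tat(opponent_history: list) -> str:
    """Cooperate on the first move, then mirror the counterparty's last move.
    Forgiveness is built in: one cooperative move by the opponent resets the cycle."""
    return "cooperate" if not opponent_history else opponent_history[-1]
```

No state beyond the opponent's history is required, which is exactly the "memory and visibility" the text describes.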
AURA’s design embeds these conditions structurally. Agent identity is persistent (Ed25519 keypairs survive across sessions), so interactions repeat. Session outcomes are recorded by Core and feed reputation, so behaviour is observable. Degraded reputation reduces offer quality, match priority, and ultimately market access, so retaliation is automatic and cheap.
The protocol does not attempt to prevent defection. It makes defection economically irrational over time.
George Akerlof’s 1970 analysis showed that in markets with asymmetric information about quality, low-quality sellers drive out high-quality ones. Buyers who cannot distinguish reliable from unreliable counterparties assume the worst and offer accordingly. High-quality sellers, unable to recover their costs at the depressed price, exit. The market spirals toward low quality.
Agent markets amplify this risk. Interactions scale massively (a Beacon might handle thousands of sessions per day), the cost of defection is low (submit an aggressive offer, fail to deliver, re-register), and quality signals are harder to verify when both sides are software.
The protocol counters this through credible quality signals — specifically, multi-dimensional reputation that is expensive to fake and cheap to verify. A Beacon’s fulfilment rate, delivery accuracy, offer consistency, and dispute record are computed from actual protocol interactions, not self-reported claims. These signals allow high-quality sellers to differentiate themselves, which prevents the lemons equilibrium from forming.
Mechanism design asks: given that participants are self-interested and possess private information, can you design rules such that rational behaviour produces desirable outcomes?
The protocol applies this principle in several places:

- Reputation is computed from observed protocol events, so honest fulfilment is the only way to acquire good standing (Section 4.1).
- Offer expiry and atomic commitment make over-reservation unprofitable (Section 4.3).
- Explicit constraint declaration makes truthful signalling cheaper than probing (Sections 4.2 and 5).
Every automated market encounters three structural problems as it scales. They are not bugs — they are emergent properties of self-interested agents operating at machine speed. The protocol’s design addresses each one.
The problem: In markets where quality is uncertain, unreliable participants can offer aggressively (lower prices, faster delivery promises) because they do not intend to deliver reliably. Reliable participants cannot compete on those terms. Over time, reliable participants leave. Market quality collapses.
How AURA addresses it:
The reputation protocol computes multi-dimensional behavioural scores from actual session outcomes, not self-reported claims:
| Signal | Source | What It Measures |
|---|---|---|
| Fulfilment reliability | Transaction completion records | Percentage of committed offers actually delivered |
| Offer consistency | Offer-to-outcome comparison | Gap between what was promised and what was delivered |
| Constraint accuracy | Session constraint evaluation | How often a Beacon’s declared capabilities match reality |
| Dispute rate | Dispute resolution records | Frequency and severity of post-commitment disputes |
| Response quality | Session interaction patterns | Relevance and completeness of offers relative to intent |
These signals are computed by Core from protocol-level events. A Beacon cannot inflate its fulfilment rate without actually fulfilling commitments. A Beacon with high dispute rates cannot hide them behind a self-curated profile.
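As a concrete sketch of how signals in the table above could be derived from session records, consider the following. The record shape and function are illustrative, not part of the specification:

```python
from dataclasses import dataclass

@dataclass
class SessionOutcome:
    # Illustrative record shape; real protocol events are richer.
    committed: bool   # the Scout committed to this Beacon's offer
    fulfilled: bool   # the Beacon delivered what the offer promised
    disputed: bool    # a post-commitment dispute was raised

def behavioural_signals(outcomes: list) -> dict:
    """Compute reputation signals from protocol-level events, not self-reports."""
    committed = [o for o in outcomes if o.committed]
    if not committed:
        return {"fulfilment_reliability": 0.0, "dispute_rate": 0.0}
    return {
        "fulfilment_reliability": sum(o.fulfilled for o in committed) / len(committed),
        "dispute_rate": sum(o.disputed for o in committed) / len(committed),
    }
```

Because the inputs are Core-recorded events, the only way to raise `fulfilment_reliability` is to fulfil commitments.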
The Compatibility-Weighted Reputation (CWR) formula combines base reputation with per-session compatibility scoring, so a Beacon with high general reputation but poor fit for a specific request is ranked below a Beacon with moderate reputation but excellent fit. This prevents reputation from becoming a static moat.
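The exact CWR formula lives in Section 9.1 of the Protocol Specification. To illustrate only the ranking effect, assume a simple convex blend (the 0.5 weight is an assumption of this sketch, not the specified formula):

```python
def cwr_score(base_reputation: float, compatibility: float, weight: float = 0.5) -> float:
    """Blend general reputation with per-session fit; both inputs in [0, 1]."""
    return (1 - weight) * base_reputation + weight * compatibility

# High reputation but poor fit vs. moderate reputation but excellent fit:
generalist = cwr_score(0.95, 0.30)   # strong history, wrong specialty
specialist = cwr_score(0.70, 0.90)   # moderate history, excellent fit
```

Under any blend of this shape, the specialist outranks the generalist for this request, which is what prevents reputation from becoming a static moat.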
Beacon qualification thresholds are defined in the protocol specification (minimum reputation score, configurable per deployment) to create a floor below which participation is restricted. This is the protocol’s equivalent of financial market capital requirements — a cost of entry that makes disposable identities uneconomical. The threshold is not yet enforced in the beacon matching implementation, but the protocol reserves it as a mechanism.
The problem: When negotiation is automated, agents can probe the market for intelligence rather than transacting. A Scout might submit slightly varied intents to map pricing curves. A Beacon might submit offers designed to reveal buyer urgency or budget elasticity. At scale, probing traffic overwhelms genuine demand, sellers hide information defensively, and the market becomes opaque.
How AURA addresses it:
Information sovereignty by design. The protocol’s sanitisation layer (Section 6.5 of the Protocol Specification) strips personally identifiable information, injection patterns, and raw user input before any content reaches a Beacon. Beacons receive structured requirements and sanitised context — enough to generate a relevant offer, not enough to profile the buyer.
Compound intent decomposition (schema reserved, Section 5.4.4) furthers this by splitting a single buyer’s request across multiple independent Beacons, each seeing only their sub-intent. A Beacon fulfilling the accommodation portion of a travel request knows that a budget is being consumed elsewhere but cannot determine what any other supplier charged. This prevents cross-supplier price anchoring.
Constraint abstraction. Hard constraints are declared explicitly, but soft preferences are surfaced as ranking criteria, not negotiation leverage. A Beacon knows the buyer’s maximum price but not how far below maximum they would happily pay. This limits the information available for price discrimination.
Repeated-game incentives. Probing behaviour generates session records with distinctive patterns (many sessions, few commitments, variant intents). These patterns degrade reputation over time. An agent that probes aggressively receives worse offers and slower responses in future sessions, making the strategy self-defeating.
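A minimal sketch of the pattern that makes probing self-defeating, using hypothetical session counters (the thresholds are illustrative, not specified values):

```python
def looks_like_probing(sessions_opened: int, commitments: int,
                       min_sessions: int = 50,
                       max_commit_ratio: float = 0.02) -> bool:
    """Many sessions with almost no commitments is the signature of
    market probing rather than genuine demand."""
    if sessions_opened < min_sessions:
        return False  # too little history to judge
    return commitments / sessions_opened < max_commit_ratio
```

An agent flagged this way would see its reputation, and with it offer quality and match priority, degrade over subsequent sessions.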
The problem: Rational agents over-reserve. If accepting an offer is cheap and cancelling is free, a Scout will accept multiple offers simultaneously and cancel all but one. From the Beacon’s perspective, capacity is reserved, other buyers are rejected, and the cancellation arrives too late to recover the opportunity cost. This is the “phantom liquidity” problem familiar from financial markets, airline bookings, and ad exchanges.
How AURA addresses it:
Offer expiry. Every offer includes a valid_until timestamp. Offers that are not committed to within their validity window are automatically discarded. This limits the optionality window.
Atomic commitment. The POST /sessions/:id/commit transition is atomic — it creates a transaction, locks the session, and moves the state to committed within a single database transaction using SELECT ... FOR UPDATE. A Scout cannot commit to multiple offers within the same session. Committing to one implicitly rejects all others.
Cancellation visibility. Session cancellations are recorded and attributed. A Scout that routinely opens sessions, collects offers, and cancels without committing accumulates a pattern that degrades reputation. The protocol does not prohibit cancellation (legitimate reasons exist), but it makes serial cancellation visible and costly.
Future: commitment friction. The protocol reserves space for explicit commitment costs — deposits, stake-based negotiation, or cancellation penalties — in the settlement layer. These mechanisms are not yet implemented because the settlement layer itself (Phase 3, revenue infrastructure) is not built. When it is, the protocol’s state machine and constraint framework support adding friction without restructuring.
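The expiry and atomic-commit rules above can be sketched together in a simplified in-memory model. Here a `threading.Lock` stands in for the database's `SELECT ... FOR UPDATE`, and all names are illustrative:

```python
import threading
from datetime import datetime, timezone
from typing import Optional

class Session:
    def __init__(self, offers: dict):
        # offers maps offer_id -> ISO-8601 valid_until timestamp
        self._lock = threading.Lock()
        self.state = "negotiating"
        self.offers = dict(offers)

    def commit(self, offer_id: str, now: Optional[datetime] = None) -> bool:
        """Atomic: one commit locks the session and implicitly rejects the rest."""
        now = now or datetime.now(timezone.utc)
        with self._lock:  # stands in for SELECT ... FOR UPDATE
            if self.state != "negotiating":
                return False  # session already committed: no double commit
            if now > datetime.fromisoformat(self.offers[offer_id]):
                return False  # offer past its valid_until window
            self.state = "committed"
            self.committed_offer = offer_id
            return True
```

Committing a second offer fails because the session state has already left `negotiating`, mirroring the single-database-transaction guarantee.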
Most commerce systems treat constraints as filter parameters. AURA treats them as first-class protocol objects with economic significance.
A constraint declared in a session is not just a search filter. It is a signal from a buyer to the market about what constitutes an acceptable outcome. Hard constraints (maximum price, required certifications, delivery deadline) are binding — offers that violate them are excluded. Soft preferences (preferred brands, feature priorities, ideal price range) are ranking criteria — they influence which offers surface first without excluding alternatives.
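The two tiers behave differently at evaluation time: hard constraints exclude, soft preferences only reorder. A sketch with illustrative offer fields:

```python
def evaluate_offers(offers: list, max_price: float, preferred_brand: str) -> list:
    """Hard constraint (max_price) filters; soft preference (brand) ranks."""
    feasible = [o for o in offers if o["price"] <= max_price]  # binding
    return sorted(feasible,
                  key=lambda o: (o["brand"] != preferred_brand, o["price"]))

offers = [
    {"id": "A", "price": 120, "brand": "Acme"},
    {"id": "B", "price": 90,  "brand": "Other"},
    {"id": "C", "price": 95,  "brand": "Acme"},
]
ranked = evaluate_offers(offers, max_price=100, preferred_brand="Acme")
# A is excluded outright; C outranks B on the soft preference despite costing more.
```

Note that the soft preference never removes B from the result; it only changes where B surfaces.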
When constraints are explicit, Beacons can compute feasibility before generating offers. A Beacon that cannot meet a hard delivery constraint does not waste resources pricing an offer that will be filtered out. A Beacon that can exceed soft preferences knows it has a competitive advantage worth investing in.
The effect at market level is efficiency. Instead of open-ended negotiation where participants probe each other’s limits, the protocol enables something closer to distributed constraint optimisation — agents converging on feasible solutions within declared boundaries. This is more machine-friendly, more scalable, and less susceptible to the information probing problem described in Section 4.2.
The constraint protocol also enables the future delegation layer (Phase 6, Verifiable Intent integration). When constraints are first-class objects, a delegation credential can reference them directly: “this agent is authorised to commit up to $5,000 for office supplies with delivery by Friday.” The constraint becomes both a market signal and a cryptographically verifiable scope of authority.
Reputation in the AURA protocol is infrastructure — as fundamental as the session lifecycle or the signature chain.
Many platforms reduce reputation to a single score (4.7 stars, 92% positive, “Gold Seller”). Scalar reputation fails in automated markets for three reasons:

- A single number collapses independent failure modes: a Beacon that delivers reliably but disputes aggressively is indistinguishable from one that does the reverse.
- A single number is easier to game: it can be inflated through volume of low-stakes interactions while hiding weakness on the dimension that matters.
- A single number cannot express fit: general standing says nothing about suitability for a specific request.
The protocol computes reputation across independent dimensions: fulfilment reliability, offer consistency, constraint accuracy, dispute rate, and response quality (the behavioural signals described in Section 4.1).
Each dimension is independently queryable. The CWR formula (Section 9.1 of the Protocol Specification) combines base reputation with per-session compatibility, but consumers of the reputation data — including Scouts, integrators, and monitoring systems — can inspect individual dimensions.
Multi-dimensional reputation is what makes the Axelrod dynamics work at protocol level. For tit-for-tat to produce cooperation, three things must be true:

- The same agents must meet repeatedly (persistent Ed25519 identity provides this).
- Past behaviour must be visible (Core-recorded session outcomes provide this).
- Defection must carry consequences (automatic reputation degradation provides this).
The current architecture is Core-mediated — Core computes and stores reputation. This is appropriate for the initial market where Core is the only venue. As the ecosystem grows, a second model becomes possible: cryptographically portable reputation.
The Ed25519 identity infrastructure already supports signed attestations. Core could issue a signed statement: “Agent X has 98% fulfilment reliability across 200 sessions, computed as of 2026-03-11.” This attestation is verifiable by any party with Core’s public key, without calling Core at runtime.
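The issue-then-verify flow can be sketched as follows. One loud substitution: to keep the example dependency-free, an HMAC stands in for the Ed25519 signature. HMAC is symmetric, whereas Ed25519 lets any holder of Core's public key verify; the flow, not the primitive, is the point, and all names are illustrative:

```python
import hashlib
import hmac
import json

def issue_attestation(core_signing_key: bytes, claim: dict) -> dict:
    """Core signs a reputation claim once; holders of the verification key
    can check it later without calling Core at runtime."""
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(core_signing_key, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_attestation(core_signing_key: bytes, attestation: dict) -> bool:
    """Recompute the signature over the claim and compare in constant time."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(core_signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])

key = b"core-signing-key"
att = issue_attestation(key, {"agent": "X", "fulfilment_reliability": 0.98,
                              "sessions": 200, "computed_as_of": "2026-03-11"})
```

Any tampering with the claim (say, inflating the fulfilment figure) invalidates the signature.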
Portable reputation enables agents to carry trust across venues, reduces Core’s role as a bottleneck, and aligns with the Verifiable Intent framework (Phase 6) where delegation credentials already carry signed claims. The schema reservation for delegation_credential (DEC-029) anticipates this convergence.
Axelrod’s model assumes repeated interaction. But open markets include one-shot participants: a Beacon that appears, defects once, and disappears.
This is the end-game problem. If a participant knows it will not interact again, cooperation is irrational. The shadow of the future — the expectation that behaviour today affects outcomes tomorrow — disappears.
The protocol addresses this through identity persistence and qualification thresholds:

- Ed25519 keypairs persist across sessions, so an agent’s record follows it; abandoning an identity means abandoning its accumulated reputation.
- Qualification thresholds restrict market access below a reputation floor, so a fresh identity cannot enter at full standing.
- Building reputation requires genuine fulfilment over many sessions, which makes disposable identities uneconomical.
These mechanisms do not eliminate one-shot defection entirely. They make it expensive enough that it is not a dominant strategy. The protocol accepts a small defection rate as the cost of an open market, rather than closing the market to prevent all defection.
Privacy in the AURA protocol is an economic mechanism that shapes market behaviour.
Scout identity is withheld until transaction commitment. Beacons cannot see who is asking, only what is being asked for. This prevents price discrimination based on buyer identity — a common pathology in automated markets where agents profile counterparties using historical data.
The economic effect is genuine price discovery. When Beacons compete on the merits of their offers rather than their assessment of the buyer’s willingness to pay, prices converge toward competitive equilibrium rather than individual extraction.
The protocol’s sanitisation layer ensures Beacons receive the minimum information necessary to generate a relevant offer. Raw user input is never forwarded. Behavioural signals (purchase history, price sensitivity, decision speed) are reserved for a future inter-agent signals channel governed by explicit Scout consent policy.
This is a deliberate choice to sacrifice market efficiency for market fairness. A Beacon with perfect information about a buyer’s preferences and budget could extract maximum surplus. A Beacon with only structured requirements and sanitised context must compete on the quality of its offer. The protocol trades potential efficiency gains for a market that buyers are willing to participate in.
When compound intents are implemented (schema reserved, Section 5.4 of the Protocol Specification), each sub-intent is routed to independent Beacons who see only their portion. A Beacon fulfilling one sub-intent learns that budget is being consumed elsewhere but cannot determine what other suppliers charged, who they are, or what they offered.
This is economic privacy — the prevention of cross-supplier intelligence that would enable coordinated pricing or market manipulation.
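A sketch of the decomposition: each sub-intent carries only its own allocation, so no Beacon can reconstruct the total budget or a sibling supplier's terms (all field names are illustrative, not the reserved Section 5.4 schema):

```python
def decompose(intent: dict) -> list:
    """Split a compound intent; each Beacon sees only its own slice."""
    return [
        {
            "category": part["category"],
            "requirements": part["requirements"],
            "max_price": part["allocation"],  # per-part cap, never the total
        }
        for part in intent["parts"]
    ]

trip = {
    "total_budget": 2000,  # never forwarded to any Beacon
    "parts": [
        {"category": "flights", "requirements": {"dates": "Fri-Sun"}, "allocation": 800},
        {"category": "hotel",   "requirements": {"nights": 2},        "allocation": 1200},
    ],
}
sub_intents = decompose(trip)
# Neither sub-intent contains total_budget or the other part's terms.
```

The hotel Beacon learns only that its cap is 1200; it cannot tell whether the flights portion cost 100 or 800.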
The protocol’s design converges on patterns from the Financial Information eXchange (FIX) protocol, which standardised electronic trading across financial markets.
| FIX Concept | AURA Equivalent |
|---|---|
| Order types (market, limit, stop) | Constraint tiers (hard constraints, soft preferences) |
| Execution reports | Session state transitions, transaction records |
| Party identification (sender/target comp IDs) | Ed25519 agent identity (Scout ID, Beacon ID) |
| FIX session (logon, heartbeat, logout) | Agent registration, session lifecycle, expiry |
| Order matching | Beacon matching via category, constraint, reputation |
| Allocation/settlement | Transaction commitment, fulfilment, payment |
| Market data | Offer presentation, constraint evaluation |
The analogy is structural, not superficial. FIX succeeded because it standardised the negotiation layer between counterparties while leaving execution, settlement, and clearing to specialised systems. AURA follows the same separation: the protocol standardises intent, discovery, and negotiation while deferring payment processing, fulfilment logistics, and dispute resolution to appropriate layers.
The implication for protocol evolution is that AURA is likely to follow a trajectory similar to FIX: initial adoption for simple flows (single intent, single commitment), gradual extension to complex instruments (compound intents, delegation, multi-round negotiation), and eventual specialisation into market-specific profiles (procurement, services, retail).
The following principles are embedded in the protocol architecture and should guide future extensions:
Make defection unprofitable, not impossible. The protocol does not attempt to prevent all bad behaviour through technical enforcement. It creates conditions where cooperation is the dominant strategy through persistent identity, observable behaviour, and automatic reputation consequences.
Constraints are economic signals, not search filters. Declaring constraints shapes the negotiation space for all participants. Hard constraints eliminate waste. Soft preferences enable differentiation. Both are first-class protocol objects.
Reputation is multi-dimensional and computed, not self-reported. Scalar scores collapse information. Multi-dimensional behavioural signals computed from protocol-level events resist gaming, enable nuanced trust assessment, and support the cooperative equilibria that make the market stable.
Privacy creates fairness. Identity abstraction prevents price discrimination. Information minimisation limits strategic advantage. Compound intent decomposition prevents cross-supplier intelligence. These are economic mechanisms, not compliance checkboxes.
The protocol is a market institution, not a messaging layer. Technical correctness (valid messages, proper signatures, successful delivery) is necessary but not sufficient. The protocol must also produce stable, fair, and efficient markets. That requires economic design, not just engineering.
| Date | Change |
|---|---|
| 2026-03-11 | Initial document (Cowork session) |