A federated compute network powered by customers' freed capacity of any kind: CPU cycles, GPU cycles, storage, bandwidth, verification work. The Validiti stack reduces customer infrastructure demand 30–95% depending on the workload — storage compression, faster statistical compute, AI cache substrate, signature verification, and more. Covenant federates the freed capacity back into a coordinated pool that absorbs every category of work data centers were built to handle — scientific simulation, financial backtesting, bioinformatics, media encoding, verification, AI inference, and statistical batch. Workers earn Covenant Coins for verified contribution — 1 coin = $0.01 service credit, redeemable against any Validiti SKU.
A protocol, not a service. Validiti publishes the spec; customers run the network. Workers contribute idle capacity of any kind — CPU cycles, GPU cycles, storage, bandwidth, verification work — earn coins for verified work, spend coins on Validiti services and license fees.
The Validiti stack frees up customer capacity across multiple resource dimensions — CPU, GPU, RAM, disk, bandwidth, verification cycles — depending on which SKUs the customer adopts. Covenant federates a fraction of that freed capacity into a coordinated pool. Many workers, multiple coordinators, no central operator required. The network functions if Validiti is offline.
No mining. No staking. No consensus round-trips. No exchange listings. Covenant Coins are minted in proportion to measured units of verified work (compute-hours, GB-hours hosted, GB-transferred, signatures-verified). 1 coin = $0.01 fixed service credit, redeemable against any Validiti SKU.
Every job, every result, every coin mint is Pacta42-signed and hash-chained. Workers verify the coordinator's job authenticity. Coordinators verify the worker's signed result. The chain verifies end-to-end with one command. Tamper-evident at every step.
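The tamper-evidence property above can be illustrated with a minimal hash-chain sketch. This is an assumption-laden stand-in, not the Pacta42 format: the record fields (`prev_hash`, `hash`, `event`, `job_id`) and the SHA-256 choice are illustrative, and signature verification is omitted.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Hash the record body (everything except its own hash field)."""
    body = json.dumps(
        {k: record[k] for k in sorted(record) if k != "hash"},
        sort_keys=True,
    )
    return hashlib.sha256(body.encode()).hexdigest()

def mint(prev_hash: str, payload: dict) -> dict:
    """Append-style constructor: link to the previous entry, then seal."""
    entry = {"prev_hash": prev_hash, **payload}
    entry["hash"] = record_hash(entry)
    return entry

def verify_chain(chain: list[dict]) -> bool:
    """End-to-end check: every entry hashes correctly and links back."""
    prev = "0" * 64  # genesis sentinel
    for entry in chain:
        if entry["prev_hash"] != prev or entry["hash"] != record_hash(entry):
            return False
        prev = entry["hash"]
    return True

e1 = mint("0" * 64, {"event": "job_submitted", "job_id": "j-1"})
e2 = mint(e1["hash"], {"event": "result_accepted", "job_id": "j-1"})
assert verify_chain([e1, e2])
```

Changing any field of any entry, or reordering entries, breaks the chain at that point, which is what makes a single end-to-end verification command sufficient.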
From datacenter-class servers down to phones. The coordinator routes jobs by capability across every workload type — scientific simulation, financial backtesting, statistical batch, bioinformatics, media encoding, signature verification, AI inference. Sovereign tier handles heavy parallel work; power tier handles single-machine compute; verifier tier handles signing and audit-chain validation. Every device contributes to the work it's capable of.
Aggregate compute, storage, and bandwidth scale linearly with worker count. More participants means more available capacity per workload class and shorter wait times in the coordinator queue. Bundle reuse compounds: corpus and signed-corpus distribution costs fall as more workers mirror them.
The worker daemon runs only when local utilization (CPU, GPU, RAM, disk I/O, network) is below threshold. Customer's primary workload always preempts federated work. No customer is asked to give up performance for participation — only to lend the cycles, storage, or bandwidth they aren't using.
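The admission rule above can be sketched as a simple threshold gate. The metric names and threshold values are assumptions for illustration; the real daemon's sampling and configuration are not specified here.

```python
# Hypothetical utilization ceilings, expressed as fractions of capacity.
THRESHOLDS = {"cpu": 0.30, "gpu": 0.30, "ram": 0.60, "disk_io": 0.40, "net": 0.40}

def may_run_federated_work(utilization: dict[str, float]) -> bool:
    """Admit federated work only while every local metric is below its
    threshold; a missing sample counts as busy, so the customer's primary
    workload always wins."""
    return all(utilization.get(k, 1.0) < limit for k, limit in THRESHOLDS.items())

# Idle box: admit work. Busy CPU: back off immediately.
idle = {"cpu": 0.05, "gpu": 0.0, "ram": 0.35, "disk_io": 0.10, "net": 0.05}
busy = dict(idle, cpu=0.85)
print(may_run_federated_work(idle))   # True
print(may_run_federated_work(busy))   # False
```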
The same five-step flow regardless of hardware tier or coordinator choice. Brain Key signs worker identity; Succession Ledger stores balances; coordinator routes jobs by tier and workload.
From datacenter-class servers down to phones. Capability spans multiple resource dimensions — CPU, GPU, RAM, disk, bandwidth, verification cycles — not just GPU TFLOPS. The coordinator routes work to whatever the worker can actually do. Workers self-classify on boot.
| Tier | Class | Hardware examples | What they offer | Workload examples |
|---|---|---|---|---|
| T0 | Sovereign | H200/H100/A100 GPU servers; 96-core EPYC + 512 GB RAM CPU servers; MI300X clusters | 200–700 TFLOPS GPU or 50–200 CPU cores + 256 GB–1 TB RAM | Scientific simulation chunks, large fine-tunes, financial Monte Carlo, drug screening, bundle building |
| T1 | Enterprise | RTX 6000 Ada / multi-RTX 4090·5090 / M3 Ultra cluster / dual-socket Xeon / A40/A6000 | 80–250 TFLOPS GPU or 32–128 CPU cores + 128–512 GB RAM | Bioinformatics alignment, financial backtesting, drift detection at scale, distributed sorting |
| T2 | Power | RTX 5090, RTX 4090, M3 Ultra single, single-socket workstation, A10/A30 | 40–120 TFLOPS GPU or 16–32 CPU cores + 64–128 GB RAM | Statistical batch, AI inference, perplexity probes, image / 3D rendering, signal processing |
| T3 | Standard | RTX 4080/4070 Ti, RTX 3090/Ti, M2/M3 Max, mid-range workstation | 15–50 TFLOPS GPU or 8–16 CPU cores + 32–64 GB RAM | Small inference, audit verify, A/B test analysis, lightweight stats batch, video encoding |
| T4 | Light | RTX 4060/Ti, RTX 3070/3080, M2 Pro, T4, Jetson Orin AGX, mid-range desktop | 5–20 TFLOPS GPU or 4–8 CPU cores + 16–32 GB RAM | Micro-inference, ShiftCAPTCHA verify, Maths CPU batch, drift sample processing, signature aggregation |
| T5 | Edge | M1/M2 base, GTX 16xx, Jetson Orin Nano, RX 6600, laptops, integrated GPUs | 0.5–5 TFLOPS GPU or 2–4 CPU cores + 8–16 GB RAM + disk for hosting | Maths CPU, signature verify, audit chain validation, bundle hosting (disk-rich), drift channel mirroring |
| T6 | Verifier | Phones, ARM SBCs, Raspberry Pi 5, integrated graphics | <0.5 TFLOPS or 1–2 CPU cores + <8 GB RAM | Pacta42 signature verify, audit chain validation, ShiftCAPTCHA verify, integrity checks |
A worker doesn't have to be a GPU box. A 96-core EPYC server with 512 GB RAM and no GPU is a perfectly valid T0 worker for statistical, scientific, or bioinformatics workloads. A storage-rich DMS customer with terabytes of disk is a valid T1 or T2 worker for bundle hosting and drift mirroring. A phone is a perfectly valid T6 verifier. Every Validiti license is a worker license.
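Boot-time self-classification against the tier table could look something like the sketch below. The tier boundaries are taken from the table; the probe fields (`tflops`, `cpu_cores`, `ram_gb`) and the either-GPU-or-CPU rule are assumptions, since the real capability probe is not specified in this document.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    tflops: float     # GPU throughput; 0.0 for a CPU-only box
    cpu_cores: int
    ram_gb: int

def classify(cap: Capability) -> str:
    """Map a probed capability onto a tier via the GPU or the CPU path."""
    if cap.tflops >= 200 or (cap.cpu_cores >= 50 and cap.ram_gb >= 256):
        return "T0"
    if cap.tflops >= 80 or (cap.cpu_cores >= 32 and cap.ram_gb >= 128):
        return "T1"
    if cap.tflops >= 40 or (cap.cpu_cores >= 16 and cap.ram_gb >= 64):
        return "T2"
    if cap.tflops >= 15 or (cap.cpu_cores >= 8 and cap.ram_gb >= 32):
        return "T3"
    if cap.tflops >= 5 or (cap.cpu_cores >= 4 and cap.ram_gb >= 16):
        return "T4"
    if cap.tflops >= 0.5 or (cap.cpu_cores >= 2 and cap.ram_gb >= 8):
        return "T5"
    return "T6"

print(classify(Capability(0.0, 96, 512)))  # GPU-less EPYC server → T0
print(classify(Capability(0.0, 1, 4)))     # phone-class device → T6
```

Note how the CPU path lets the GPU-less EPYC box land in T0, matching the claim that a worker doesn't have to be a GPU box.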
Customers contribute 10–20% of freed capacity — whatever type their stack adoption frees up. CPU, GPU, RAM, disk, bandwidth, verification cycles. The aggregate pool grows with the network, not with new construction. Numbers below assume a realistic mixed customer base across all seven tiers and all workload types.
Per-resource impact at facility scale — if Covenant absorbs 95% of a 100 MW data-center build-out:
Service credits, not cryptocurrency. Not a token sale. Not pre-mined. Minted in proportion to verified contribution. Spent against the Validiti SKU price list. 1 coin = $0.01 fixed peg.
```
coins_minted = work_units_verified
             × tier_multiplier
             × uptime_ratio
             × workload_weight
             × scarcity_factor
```
- work_units_verified — measured work, signed by the worker, accepted by the coordinator after verification or consensus. Units depend on workload class: TFLOPS-hours for compute, GB-hours for storage hosting, GB-transferred for bandwidth, signatures-verified for verification work
- tier_multiplier — T0=1.0 down to T6=0.02; proportional to capability
- uptime_ratio — rolling 30-day uptime (0.0–1.0)
- workload_weight — scarce workloads (large training, complex simulation) weighted higher than abundant workloads (signature verification, ShiftCAPTCHA verify)
- scarcity_factor — rises when the network has a shortage of a workload type, falls when it has a surplus

Customers spend coins on stack license fees, bundle subscriptions, premium coordinator routing, and federated job priority. 1 coin = $0.01 service credit. Closed economy: no fiat conversion, no exchange listings, no secondary market.
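A direct transcription of the mint formula, for concreteness. Only T0=1.0 and T6=0.02 multipliers appear in this document; the intermediate tier values, and the weight and scarcity inputs in the example, are illustrative assumptions.

```python
# T0 and T6 come from the text; T1–T5 are assumed intermediate values.
TIER_MULTIPLIER = {"T0": 1.0, "T1": 0.6, "T2": 0.3, "T3": 0.15,
                   "T4": 0.08, "T5": 0.04, "T6": 0.02}

def coins_minted(work_units: float, tier: str, uptime: float,
                 workload_weight: float, scarcity: float) -> float:
    """coins = work_units × tier_multiplier × uptime × weight × scarcity."""
    return (work_units * TIER_MULTIPLIER[tier] * uptime
            * workload_weight * scarcity)

# A T0 worker with 10 TFLOPS-hours of verified compute, 99% uptime,
# running a scarce workload (weight 1.5) during a shortage (scarcity 1.2):
print(round(coins_minted(10.0, "T0", 0.99, 1.5, 1.2), 2))  # 17.82 coins
```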
Three pricing tiers, mirroring AWS spot / on-demand / reserved. Customer picks per job. Coordinator quotes a price; customer signs the quote; coins escrow until the job completes. A 1-coin per-job floor covers the audit-chain entry on every job, regardless of size.
```
job_price_coins = max(
    base_units × tier_premium × scarcity_factor × deadline_factor × class_weight,
    1 coin (per-job floor)
)
```
- base_units — catalog-published rate per unit (CPU-hour, GB-hour, signature, tile-second, etc.)
- tier_premium — T0=1.0 down to T6=0.05; mirrors the mint multiplier
- scarcity_factor — 0.5–2.0×; real-time pool utilization, published in the availability feed
- deadline_factor — 1.0× (24h+) up to 4.0× (sub-hour); tighter deadlines cost more
- class_weight — 1.0–3.0×; constrained workloads (GPU-only, T0-only) cost more

| Tier | Multiplier vs base | SLA | Use case | Coordinator take |
|---|---|---|---|---|
| Spot | ~1.0× | Best effort, no SLA, runs when capacity is free | Batch work, deadline-flexible jobs, R&D | 1% |
| On-demand | ~3.0× | Coordinator commits to capacity within 60 sec or refunds | Interactive work, time-sensitive jobs | 3% |
| Reserved | ~5.0× | Pre-booked T0/T1 window paid up front | Recurring batches, "every Tuesday 02:00–06:00 UTC" | 5% |
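Putting the quote formula and the pricing-tier multipliers together, a customer-side cost estimate might look like this sketch. The spot/on-demand/reserved multipliers come from the table above; treating them as one more factor on the base formula, and all the example inputs, are assumptions.

```python
# ~1.0× / ~3.0× / ~5.0× from the pricing-tier table.
PRICING_TIER = {"spot": 1.0, "on_demand": 3.0, "reserved": 5.0}

def job_price_coins(base_units: float, tier_premium: float, scarcity: float,
                    deadline: float, class_weight: float,
                    pricing_tier: str = "spot") -> float:
    """Quote a job in coins, applying the 1-coin per-job floor."""
    price = base_units * tier_premium * scarcity * deadline * class_weight
    price *= PRICING_TIER[pricing_tier]
    return max(price, 1.0)  # floor covers the audit-chain entry on every job

# Tiny T6 verification job in a surplus: the floor applies.
print(job_price_coins(0.2, 0.05, 0.5, 1.0, 1.0))  # 1.0
# T0 GPU job, sub-hour deadline, on-demand: every factor compounds.
print(job_price_coins(50, 1.0, 1.5, 4.0, 2.0, "on_demand"))  # 1800.0
```

Because the catalog, availability feed, and multipliers are all published, this is exactly the computation a customer can run locally before signing a quote.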
Reservation rules: use-or-lose by default — unused slots burn coins, no refund. Customer can swap a reservation to a different window with 24-hour notice at zero cost. If the coordinator fails to deliver the booked capacity, customer is refunded 2× (network penalty for missed commitment).
Public visibility: the catalog (workload types and base rates), real-time availability (pool state and current multipliers), and reservation calendar are all open feeds, signed by the coordinator. Customers can compute exact cost before submitting and shop coordinators on price. AWS Spot pricing pattern.
Phase A take rate is 0%. Validiti's reference coordinator subsidizes onboarding through Phase A. Standard rates (1% / 3% / 5%) flip on at Phase B when partner coordinators are live and competing.
No hard cap on parallelism. A single MapReduceJob meta-protocol envelope can shard across thousands of workers — distributed training, parallel Monte Carlo, genome-wide studies, and distributed scientific simulation are all supported as first-class workloads. The customer pays linearly for what they consume, all workers earn proportionally, and a single audit-chain entry covers the whole run.
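The shard-then-merge shape of such a run can be sketched in miniature. The document only names the MapReduceJob meta-protocol; the envelope fields, the round-robin sharding, and the stand-in worker task below are all illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def map_shard(shard: list[int]) -> int:
    """Stand-in worker task: each worker processes one shard independently."""
    return sum(x * x for x in shard)

def run_mapreduce(data: list[int], n_shards: int) -> int:
    """Shard the input, fan out to workers, merge the partial results."""
    shards = [data[i::n_shards] for i in range(n_shards)]  # round-robin split
    with ThreadPoolExecutor(max_workers=n_shards) as pool:
        partials = list(pool.map(map_shard, shards))
    # Reduce step: one merged result, one audit-chain entry for the run.
    return sum(partials)

print(run_mapreduce(list(range(100)), 8))  # 328350
```

Cost and earnings both track the work linearly: each worker is paid for the shards it actually processed, and the customer pays for the sum.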
Five categories. Every one denominated in coins, every coin $0.01 fixed.
| Category | What customer is buying |
|---|---|
| License offset | Pay Validiti SKU license fees with earned credits instead of fiat (DMS, Maths, Accelerate, Titus, Pacta42, etc.) |
| Bundle subscriptions | Medical, legal, scientific, financial, bioinformatics, media bundles — pay in coins |
| Federated workload submission | Submit a job to the network; pool of workers runs it; signed result returned. The offload-to-servers piece — what makes the network real to a customer |
| Premium routing | Priority queue at the coordinator; jump ahead of best-effort jobs |
| Capacity reservation | Pre-book T0/T1 worker capacity for a specific window (use-or-lose, free swap with 24h notice) |
The Phase A workload catalog covers verification (signature, audit, ShiftCAPTCHA), statistical batch (Maths CPU), drift aggregation, bundle build, AI inference on public corpora, and (with mandatory consensus verification at Phase A.3) scientific simulation, financial Monte Carlo, bioinformatics alignment, engineering simulation, media tile rendering, and cryptographic batch. Phase B opens storage hosting and bandwidth mirroring as independent worker tiers; Phase C opens federated fine-tuning and full MapReduceJob-driven distributed training. Full workload-to-tier matrix in the build plan §8.
Worker, coordinator, and customer all inherit the same Validiti substrate protections. Every job, every coin, every state transition is signed.
Plus everything in Validiti Titus — the security engine that ships with every SKU. See network-scale protection (entropy that scales with the network, not against it) and data isolation & malice resistance (eight attack shapes, why each one fails structurally) for the depth case. Always included; not a paid tier.