One install. One bill. Four products composed under one signed Atlas slice. Most customers buy Atlas rather than the four individually — the bundle is how the four were designed to work together. Per-user partition, every answer signed, every action on the record, every update verified before it lands.
We built the category. Nobody else has VSS, so nobody else can imitate it.
Atlas ships with four products under one slice. Each is a real SKU in its own right; together they're how an Atlas install actually works.
Signed corpora, sub-millisecond recall, citation tags on every hit. Bring your own corpus or subscribe to ours. See Knowledge →
Verdict labels. Every claim ships with a label: verified, partial, missing, expired. Same words, every product in the bundle. See Provenance →
Signed updates. New entries verified on arrival; expired entries drop. Your Knowledge stays current without redeploying anything. See Drift →
One feed, every product. Every action across every product in your environment, in one place, in order. One feed instead of seven. See Audit →
You can buy any of the four standalone. Most customers buy Atlas because that's how the four work together — on one signed slice, with one operator chain, and one bill.
Most AI is one giant pool. Everyone shares the same model, the same memory, the same logs. Atlas gives each user their own private corner — signed, sealed, and provable.
Alice is a cardiologist. Bob is an orthopedic surgeon. They both work at the same hospital, both use the same AI tool. Their conversations never cross. Not "we promise" — mathematically can't.
When the AI answers a question, that answer gets a signed timestamp, a verifiable source trail, and a permanent slot in that user's audit history. Six months later in a malpractice review or a SOX audit: receipts on demand.
If 50 users ask the same question, the AI answers once, then hands the signed answer to the others — instantly, with no extra LLM bill. VSS becomes the answer.
Anthropic doesn't hold your audit logs. OpenAI doesn't hold your audit logs. You hold your audit logs, signed under your own master key. Anyone can verify them; nobody can rewrite them.
Atlas runs inside Validiti Accelerate's cache — the layer that already sits between your users and the LLM. Adding Atlas turns that cache from an anonymous shared pool into a per-user verifiable record.
What happens when Alice the cardiologist asks Atlas a question.
Ten things, told plainly. Three a slice does on its own; seven it does with other slices, with the LLM provider, or with other Validiti products. Every interaction is signed; every signed interaction lands in the user's audit trail.
When the user asks a question, the slice checks five different ways the answer might already exist — recent phrasing, related topics, semantic neighborhoods. The best match goes to the LLM as context.
Example: Alice asks "drug interactions for warfarin?" The slice instantly surfaces her last warfarin question, hospital protocol updates, and FAERS safety signals.
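A minimal sketch of that multi-signal lookup, assuming a toy cache and a hand-rolled scoring blend (phrasing overlap, topic overlap, recency). The names, weights, and threshold are illustrative, not Atlas internals:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CachedEntry:
    question: str
    answer: str
    topics: set
    answered_at: datetime

def score(entry, question, topics):
    q, e = set(question.lower().split()), set(entry.question.lower().split())
    phrasing = len(q & e) / max(len(q | e), 1)                                  # recent-phrasing overlap
    topical = len(entry.topics & topics) / max(len(entry.topics | topics), 1)   # related topics
    recency = max(0.0, 1.0 - (datetime.now() - entry.answered_at).days / 30)    # fresher is better
    return 0.5 * phrasing + 0.3 * topical + 0.2 * recency

def best_context(cache, question, topics, floor=0.2):
    best = max(cache, key=lambda e: score(e, question, topics), default=None)
    return best if best and score(best, question, topics) >= floor else None

cache = [CachedEntry("drug interactions for warfarin?", "cached, signed answer",
                     {"warfarin", "interactions"}, datetime.now() - timedelta(days=2))]
hit = best_context(cache, "what interacts with warfarin?", {"warfarin"})
print(hit.answer if hit else "no usable context")  # the best match goes to the LLM as context
```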
While the LLM is writing, the slice checks each sentence against verified sources. If the model wanders off, the slice nudges it back before the answer reaches the user.
Example: The LLM starts inventing a dosage. The slice spots no matching source, injects the correct dosage from FAERS, and the user never sees the made-up version.
Each piece of information used to answer gets a label: verified, partial, no source. A clinician's slice can be set to only show fully-verified info; a researcher's can see partial.
Example: A medical slice never serves an unsourced answer. A research slice gets a "partial source" flag instead of an outright refusal.
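A sketch of that per-role enforcement, assuming the label vocabulary used on this page (verified / partial / missing / expired) and a hypothetical policy table:

```python
ROLE_POLICY = {
    "clinician": {"verified"},                 # only fully-verified claims are served
    "researcher": {"verified", "partial"},     # partial sources allowed, but flagged
}

def serve(claims, role):
    allowed = ROLE_POLICY[role]
    served = []
    for claim in claims:
        if claim["verdict"] not in allowed:
            continue                                            # refused outright for this role
        flag = "" if claim["verdict"] == "verified" else " [partial source]"
        served.append({**claim, "display": claim["text"] + flag})
    return served

claims = [
    {"text": "Warfarin interacts with amiodarone.", "verdict": "verified"},
    {"text": "Preprint suggests a new interaction.", "verdict": "partial"},
]
print([c["display"] for c in serve(claims, "clinician")])   # verified only
print([c["display"] for c in serve(claims, "researcher")])  # partial shown with a flag
```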
If one slice already has a high-quality answer, it can pass that signed answer to another slice — instantly, without re-querying the LLM. Both sides record the handoff.
Example: 50 doctors ask the same drug-interaction question. The first costs an LLM call. The other 49 get the signed answer instantly.
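A sketch of the handoff itself, with HMAC-SHA256 standing in for whatever signature scheme the slices actually use; every name here is illustrative:

```python
import hmac, hashlib, json

def sign(key, payload):
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def transfer(sender_key, answer, sender_audit, receiver_audit, verify_key):
    envelope = {"answer": answer, "sig": sign(sender_key, answer)}
    sender_audit.append(("handoff-out", envelope["sig"]))        # sender records the handoff
    if not hmac.compare_digest(envelope["sig"], sign(verify_key, envelope["answer"])):
        return None                                              # receiver rejects a bad signature
    receiver_audit.append(("handoff-in", envelope["sig"]))       # receiver records the handoff
    return envelope["answer"]

key = b"shared-or-derived-key"                                   # hypothetical key material
a_audit, b_audit = [], []
answer = {"q": "warfarin interactions?", "text": "Avoid combining with amiodarone without monitoring."}
print(transfer(key, answer, a_audit, b_audit, verify_key=key))   # accepted, no second LLM call
```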
If one slice discovers a fact has gone stale (a drug recall, a retracted citation), it can push that signal to subscriber slices. Their caches invalidate automatically.
Example: The hospital's pharmacy slice notes a new black-box warning. Every clinician's slice that subscribed updates within seconds.
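A sketch of the drift push as a small publish/subscribe loop; class and method names are invented for illustration:

```python
class SubscriberSlice:
    def __init__(self, name):
        self.name = name
        self.cache = {"warfarin-dosing": "old guidance"}

    def on_drift(self, stale_key, reason):
        if stale_key in self.cache:
            del self.cache[stale_key]                 # invalidate automatically
            print(f"{self.name}: dropped {stale_key} ({reason})")

class PublisherSlice:
    def __init__(self):
        self.subscribers = []

    def publish_drift(self, stale_key, reason):
        for sub in self.subscribers:
            sub.on_drift(stale_key, reason)

pharmacy = PublisherSlice()
pharmacy.subscribers = [SubscriberSlice("cardiology"), SubscriberSlice("oncology")]
pharmacy.publish_drift("warfarin-dosing", "new black-box warning")
```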
If a slice's policy is too strict to answer a question, it can hand the question to a different slice that's allowed. Both sides record the handoff.
Example: A clinician's strict slice can't show a research preprint. It hands the question to the research-slice for that user, which can.
If users opt in, slices send aggregate trend data (never their actual queries) to the LLM provider. Privacy is preserved with k-anonymity and noise. Off by default.
Example: Anthropic learns "this hospital network is asking about drug X 40% more this week" — never the patient names, never the queries.
When the LLM provider signs its response, Atlas saves the signature into the user's audit trail. Months later, you can prove exactly what the provider said.
Example: Malpractice review six months later. The clinician can prove Anthropic answered "X" at 2:14 PM on Tuesday — cryptographically.
The LLM provider can push corrections downstream — "this fact changed, drop your cached answer." Slices subscribed to the provider update automatically.
Example: Anthropic identifies a known model error. Subscribed slices invalidate affected answers and refuse stale results until refreshed.
Other Validiti products (ShiftCAPTCHA, DMS, Pacta) can push signed events into a slice's audit trail — "this user passed a CAPTCHA," "this source was reverified," "this event was Pacta-signed."
Example: Your DMS instance flags a citation as retracted. Every slice that depended on it updates, automatically.
Atlas's Negotiation 04 primitive. A protocol, not a service. An opt-in switch that's off by default. The customer environment computes aggregate frequency derivatives on the cache VSS it already runs — and a privacy-filtered, signed envelope flows upstream on the customer's chosen schedule. Subscribers (LLM providers, regulators, research institutions) receive the signed bundle and verify it locally.
What flows is concept-frequency, query-pattern clusters, miss-rate signatures, novel-token bursts — never user tokens, never prompts, never per-user counts. The privacy filter rejects anything below the k-anonymity threshold and applies differential-privacy noise on what survives.
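A sketch of that filter, assuming a k threshold of 50 and Laplace noise; the threshold, epsilon, and field names are illustrative rather than Atlas's shipped defaults:

```python
import math, random

K_THRESHOLD = 50        # minimum distinct contributors per aggregate
EPSILON = 1.0           # differential-privacy budget (illustrative)

def laplace_noise(scale):
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def filter_signals(aggregates):
    released = []
    for agg in aggregates:
        if agg["contributors"] < K_THRESHOLD:
            continue                                          # below k-anonymity: never leaves
        noisy = agg["count"] + laplace_noise(1.0 / EPSILON)   # noise hides any single contributor
        released.append({"concept": agg["concept"], "count": round(noisy)})
    return released

aggregates = [
    {"concept": "drug-X interactions", "contributors": 212, "count": 340},
    {"concept": "rare-disease-Y",      "contributors": 3,   "count": 3},   # rejected
]
print(filter_signals(aggregates))
```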
Every customer envelope, every aggregator combine, every subscriber dispatch is signed and hash-chained. The privacy filter that ran is itself signed — provable that the filter executed as configured. Tamper-evident at every hop.
Opt-in, off at install, toggleable per signal class, pausable at any time. Disabling reverts to no-flow within one aggregation cycle. The customer dashboard shows what flowed, when, with what filter applied, and revenue rebated.
Subscribers pay tiered fees. Customers receive a rebate against their Accelerate license, proportional to their signal contribution. Opting in literally lowers your bill — up to 100% of the Accelerate fee at scale.
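The settlement math, in back-of-the-envelope form. The numbers are made up; only the shape (a proportional share of subscriber fees, capped at 100% of the Accelerate fee) comes from this page:

```python
subscriber_fees = 12_000.00          # total paid by subscribers this cycle
customer_signal_share = 0.08         # this customer's share of accepted signals
accelerate_fee = 900.00              # this customer's monthly Accelerate license

raw_rebate = subscriber_fees * customer_signal_share
rebate = min(raw_rebate, accelerate_fee)           # capped at 100% of the Accelerate fee
print(f"rebate this cycle: ${rebate:.2f} of a ${accelerate_fee:.2f} license")
```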
Most "privacy-preserving AI" pitches stop at policy slides. Atlas's privacy guarantees are shipped in code, not promised in docs. Four pieces.
When upstream signals leave a hospital, they're aggregated with at least 50 other hospitals first. Statistical noise is added so no single hospital's contribution can be traced. Verifiable from the receipt — no trust required.
"Each user can send at most 100 signals per day, 10,000 in their lifetime." Not a guideline — a hard cap, enforced by VSS. Changing the cap is itself an audit event.
Every rebate the LLM provider pays is computed by walking the audit trail and signed by your master key. The CFO and the provider's BD look at the same number, signed by you.
Every interaction — query, answer, transfer, signal — lands a receipt in the user's audit trail. An auditor can walk it forwards or backwards and get the same answer either way.
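A sketch of a receipt trail an auditor can walk in either direction; SHA-256 hash-chaining and the record layout are stand-ins for the real scheme:

```python
import hashlib, json

def receipt_hash(event, prev):
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest()

def chain(events):
    receipts, prev = [], "genesis"
    for event in events:
        digest = receipt_hash(event, prev)
        receipts.append({"event": event, "prev": prev, "hash": digest})
        prev = digest
    return receipts

def verify_forward(receipts):
    prev = "genesis"
    for r in receipts:
        if r["prev"] != prev or receipt_hash(r["event"], r["prev"]) != r["hash"]:
            return False
        prev = r["hash"]
    return True

def verify_backward(receipts):
    for i in range(len(receipts) - 1, -1, -1):
        expected_prev = receipts[i - 1]["hash"] if i else "genesis"
        if receipts[i]["prev"] != expected_prev or receipt_hash(receipts[i]["event"], expected_prev) != receipts[i]["hash"]:
            return False
    return True

trail = chain([{"type": "query"}, {"type": "answer"}, {"type": "transfer"}, {"type": "signal"}])
print(verify_forward(trail), verify_backward(trail))  # both walks reach the same verdict
```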
455 tests, 38 files, every claim above shipped in production code. The technical implementation lives in the API guide.
When slices can negotiate, real markets emerge. Atlas ships six of them — one to sell answers, one to sell signal, one to gate trades on policy, one to find peers, one to pool data with peers, and one for non-GPU users to do CPU work for GPU-heavy users.
If your slice has answers other people will pay for, post a price. Buyers post a budget. The market matches them, with both sides on record.
Example: Hospital A has cached rare-disease answers. Hospital B asks the same questions. Atlas matches A's offer to B's bid; B gets a faster signed answer, A gets paid.
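A sketch of the matching step, assuming posted asks and a buyer budget; the ranking rule (quality first, then price) is a placeholder for whatever the market actually uses:

```python
from dataclasses import dataclass

@dataclass
class Ask:
    seller: str
    topic: str
    price: float      # per signed answer
    quality: float    # e.g. share of verified claims

@dataclass
class Bid:
    buyer: str
    topic: str
    budget: float

def match(bid, asks):
    candidates = [a for a in asks if a.topic == bid.topic and a.price <= bid.budget]
    return max(candidates, key=lambda a: (a.quality, -a.price), default=None)

asks = [Ask("hospital-A", "rare-disease-X", 0.40, 0.97),
        Ask("hospital-C", "rare-disease-X", 0.25, 0.80)]
winner = match(Bid("hospital-B", "rare-disease-X", 0.50), asks)
print(winner)   # hospital-A wins within budget; both sides of the trade would be recorded
```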
If you opt in to send aggregate trends upstream, multiple LLM providers can compete for it. Atlas routes each signal to the highest payer in real time.
Example: Anthropic offers 5¢ per signal, OpenAI offers 7¢. Atlas routes your signal to OpenAI this week. Next week Anthropic's offer is higher; routing flips automatically.
Set rules that every trade must satisfy — "the other side must be at least as careful as we are." Atlas checks the rules at the moment of the trade.
Example: Your hospital won't share with peers whose retention is shorter than 7 years. Atlas refuses any trade that fails the test — automatically, no manual review.
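A sketch of a gate like that, evaluated at the moment of the trade; the policy fields are illustrative:

```python
MY_POLICY = {"retention_years": 7, "requires_signed_audit": True}

def trade_allowed(counterparty_policy):
    return (counterparty_policy.get("retention_years", 0) >= MY_POLICY["retention_years"]
            and counterparty_policy.get("requires_signed_audit", False))

print(trade_allowed({"retention_years": 10, "requires_signed_audit": True}))  # True
print(trade_allowed({"retention_years": 3,  "requires_signed_audit": True}))  # False: refused automatically
```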
Slices can advertise what they have without revealing who they are. Buyers query "anyone offer cache for this topic?" and get a list of trustworthy candidates.
Example: A research slice asks "anyone with cached cardiology answers?" The catalog returns three signed advertisers; the slice picks the best price + quality.
Two organizations can co-sign an agreement to combine their slices' anonymous aggregates — getting better statistical privacy than either could alone.
Example: Two regional hospitals each have 30 patients with a rare condition. Pooled, they have 60 — enough to safely contribute aggregate signal. Neither could do it alone.
The flip side of selling cached answers. If your slice has spare CPU, post an offer to do VSS housekeeping — verifying audit chains, batching anonymous signals, validating cache freshness. GPU-heavy users post the work; you do it; you get paid.
Example: A small clinic has spare CPU overnight. A research hospital posts "verify these 10K audit chains, $0.50 each." The clinic's slice picks up the work and gets paid — the research hospital saves CPU for GPU work.
The honest read: most of these capabilities don't exist anywhere else. There's no equivalent product because there's no equivalent language. Here's the chart.
| Capability | Validiti Atlas | RAG frameworks (LangChain · LlamaIndex) | Vector DBs (Pinecone · Weaviate) | Provider memory (Anthropic · OpenAI) | Vertical SaaS (Glean · Harvey) | AI governance (Credo · Holistic) |
|---|---|---|---|---|---|---|
| Each user gets their own private slice | ✓ | ✗ | ◐ namespaces only | ✗ | ✗ | ✗ |
| Customer holds the audit log, not the vendor | ✓ | ✗ | ✗ | ✗ | ◐ vendor-held | ◐ vendor-held |
| Hand a cached answer to a peer (no LLM re-call) | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Push corrections to subscribed peers | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Pass a query to a peer with different policy | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Send anonymous trends upstream (k-anon + noise shipped) | ✓ | ✗ | ✗ | ✗ | ✗ | ◐ policy only |
| Capture and store the LLM's signed response | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Receive provider corrections downstream | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Connect to other security/audit products | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Sell your cached answers to peers | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Let LLM providers compete for your signal | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Trade only with peers meeting your policy bar | ✓ | ✗ | ✗ | ✗ | ✗ | ◐ policy mgmt |
| Find peers with anonymous capability discovery | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Pool data with other customers for stronger anonymity | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Sell CPU cycles for VSS housekeeping work | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Signed, accountable rebate statements | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Use any LLM provider | ✓ | ✓ | ✓ | ✗ single-vendor | ◐ internal only | ✓ |
| Storage with built-in integrity checks | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
Verticals aren't the product — VSS is. But VSS has obvious surface fits, and Atlas ships pre-built role bundles for the four most-asked-about regulatory regimes (HIPAA / SOX / ABA Model Rules / GDPR). Each is a thin wrapper over the same primitives above; verticals are how you land design partners, not what Atlas is.
A hospital's LLM-powered tool. Each doctor's queries hit their VSS slice — their patient panel, their specialty corpus, their hospital's protocols. Cross-patient leakage prevented by Brain Key isolation. Each query HIPAA-audited per doctor.
A law firm's research LLM. Each attorney's VSS carries their cases, their privileged matters, their authorized corpora. Cross-matter contamination structurally impossible. E-discovery defensible: every attorney's research trail is signed and durable.
SaaS company offering an LLM-powered tool to N enterprise customers. Each customer's slice on VSS is isolated, brandable, configurable. Each customer's data never trains anyone else's model. Per-customer audit and provenance compliance.
Investment firm's research LLM. Each analyst's slice has their coverage universe, their proprietary models, their authorized data. Insider-information firewalls enforced at VSS level. Every recommendation auditable to the source data.
An LLM provider plugs Atlas in as the per-user reasoning backbone. Every API user gets durable, signed, provenance-graded memory. Branded as "Anthropic Memory" or similar — Atlas the engine, hyperscaler the surface. Massive scale.
Each cleared researcher's VSS slice has their authorized clearance level baked into the corpus ACL. Cross-clearance leakage prevented at VSS, not at the prompt layer. Audit-chain admissible.
Same language, same guarantees, three shapes. Pick the one that matches your buyer profile.
For developers building on VSS. A sealed Validiti binary exposes the negotiation primitives — claim a slice, transfer cache, publish drift, request handoff — through language bindings your application code calls. Drop the binary into your deployment, point it at an Atlas daemon, build whatever VSS makes possible.
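A hypothetical sense of what calling those primitives might look like from application code. Every name below is invented and stubbed so the sketch runs on its own; the real primitive signatures live in the API guide.

```python
from dataclasses import dataclass, field

@dataclass
class Slice:
    user_id: str
    audit: list = field(default_factory=list)

    def query(self, question):
        self.audit.append(("query", question))
        return {"text": "stub answer", "verdict": "verified"}

    def transfer_cache(self, to, question):
        self.audit.append(("handoff-out", question))   # signed handoff, no LLM re-call
        to.audit.append(("handoff-in", question))

    def publish_drift(self, topic, reason):
        self.audit.append(("drift", topic, reason))

def claim_slice(user_id):                              # stand-in for the binding's claim primitive
    return Slice(user_id)

alice = claim_slice("alice@hospital.example")
bob = claim_slice("bob@hospital.example")
answer = alice.query("drug interactions for warfarin?")
alice.transfer_cache(to=bob, question="drug interactions for warfarin?")
alice.publish_drift("warfarin-dosing", "new black-box warning")
print(answer["verdict"], alice.audit, bob.audit)
```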
For SaaS, enterprise, and regulated organizations issuing slices to their employees / customers / members. Each end-user gets a VSS slice; you pay per active slice per month.
For LLM providers wiring Atlas's negotiation primitives into their own surface. Branded by the provider; co-developed compliance posture; scales with user count. The category-defining sibling product to Accelerate.
Atlas isn't a replacement for any other Validiti SKU. It composes with them. A customer running Accelerate + Atlas + EST gets cost reduction + per-user reasoning + privacy-preserving signal — three independent value props on one language.
Clinical roles get VERIFIED-only answers; research roles get PARTIAL-flagged results, enforced per role at the VSS level. None of these compositions is automatic. Customer admins configure each cross-SKU integration explicitly. Atlas's value is highest when paired with at least one corpus product (Knowledge, Provenance, or customer-curated bundles).
Every install, every SKU shape, every customer.
Plus everything in Validiti Titus — runtime defense, network-scale protection. And the cross-cutting Validiti Core Features — Pacta-signed events, sealed binaries, fail-closed privacy enforcement. Always included; not a paid tier.
This page is the buyer view. Engineers, auditors, and integrators will want the API guide — protocol types, audit-event ledger, primitive signatures, verifier walk-throughs, settlement math, federation flow.
Every Validiti SKU inherits the same Safe · Fast · Smart guarantees from the shared language — encryption, tamper-evident history, runtime defense, predictable performance. Same code, same proof, same floor on every install.