Projection Without Mutation: How to Personalize Without Destroying Truth

Part 3 of the PDKS series. Every personalization system faces a tension: adapt to the user or preserve the truth. The projection operator formalism resolves it.

Personalization and authority are in tension. Personalization wants to adapt — show different content to different users based on context, history, and inferred intent. Authority wants stability. Present consistent, verifiable, citable content that search engines and AI systems can trust over time.

Most systems resolve this tension by choosing a side. Static sites choose authority over personalization. Dynamically generated pages choose personalization over authority. Neither choice is necessary. The projection operator formalism shows how to get both.

The Projection Operator

Define the projection operator Π as a function that takes a canonical substrate object s and a context c, and produces a rendered projection:

Π : S × C → P

The substrate S is the knowledge layer. Stable, structured, versionable. The context C is everything about the current interaction. User history, referral source, device, inferred intent, session behavior. The projection P is what the user actually sees.

Two constraints make this work.

Constraint 1, Substrate Invariance: for all substrates s and contexts c, the projection Π(s, c) does not modify s. The substrate is read-only during projection. No matter how many projections are generated, for how many users, in how many contexts, the canonical knowledge objects remain unchanged.

Constraint 2, Semantic Consistency: the projection must remain within a bounded semantic distance of the substrate. Formally, if φ(s) is the semantic representation of the substrate and ψ(Π(s,c)) is the semantic representation of the projection, then d(ψ(Π(s,c)), φ(s)) ≤ δ for some small δ.

This second constraint is what separates projection from fabrication. The projection can adapt emphasis, ordering, framing, and presentation. It cannot change meaning. A projection that tells a gift-buyer “this ornament is perfect for grandparents” is a valid contextual adaptation of a substrate that says “personalized family ornament.” A projection that says “this ornament is dishwasher safe” when the substrate says nothing about dishwasher safety violates semantic consistency.
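Semantic consistency can be enforced as a gate on the rendering pipeline. The distance function below is a deliberately crude stand-in (word-set overlap); a real system would use an embedding model for ψ and φ, and the δ value is illustrative:

```python
def semantic_distance(substrate_text: str, projection_text: str) -> float:
    """Toy stand-in for d(psi(Pi(s,c)), phi(s)): Jaccard distance
    over word sets. A production system would embed both texts."""
    a = set(substrate_text.lower().split())
    b = set(projection_text.lower().split())
    return 1.0 - len(a & b) / len(a | b)

DELTA = 0.8  # governance budget; value is illustrative

def check_projection(substrate_text: str, projection_text: str) -> bool:
    """Reject renderings that drift beyond the semantic budget."""
    return semantic_distance(substrate_text, projection_text) <= DELTA
```

The point is architectural, not the metric: the check runs before a projection ships, so a rendering that drifts past δ is rejected rather than served.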

Context Inference

The context vector c is inferred from the historical trajectory H_t, the sequence of interactions that led to the current moment. The inference function g maps history to context:

g : H_t → C

This function must be Lipschitz continuous, meaning small changes in history produce proportionally small changes in inferred context. This prevents the context engine from overreacting to individual signals. A single unusual click shouldn’t radically change the projection.
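One simple way to get this smoothness (a sketch, not the article's prescribed implementation) is to fold each new signal into the context with an exponential moving average; the rate below is illustrative:

```python
ALPHA = 0.2  # smoothing rate; smaller values damp individual signals more

def update_context(context: list[float], signal: list[float]) -> list[float]:
    """Fold one new signal into the context vector.
    A single signal moves the context by at most ALPHA times its
    distance from the current context, so one unusual click cannot
    radically change the inferred context or the projection."""
    return [(1 - ALPHA) * c + ALPHA * x for c, x in zip(context, signal)]
```

An outlier signal of magnitude 1 shifts the context by only ALPHA, and its influence decays geometrically as normal signals follow.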

In practice, context inference combines several signal categories. Referral context captures where the user came from. Search query, social link, direct navigation, email campaign. Session behavior captures what they’ve done since arriving. Pages viewed, products examined, time spent. Historical data captures what we know from prior sessions. Purchase history, preference signals, return visit patterns.

Each signal contributes to the context vector, but no single signal dominates. The Lipschitz constraint ensures that the context, and therefore the projection, changes smoothly as new signals arrive rather than jumping between radically different presentations.

What Projection Looks Like in Practice

On an ecommerce category page, projection might adjust the following elements based on context.

Product sorting: a user with gift-buying intent (inferred from search query containing “gift for”) sees products sorted by popularity and gift-appropriateness. A user with collector intent (inferred from previous visits to detailed product specification pages) sees products sorted by newness and exclusivity.

Above-the-fold messaging: a first-time visitor from organic search sees trust signals. Reviews, shipping guarantees, company story. A returning customer sees what’s new since their last visit and any items related to their previous purchases.

Guide content prominence: a user who arrived from an informational search sees the category guide prominently. A user who arrived from a product-specific search sees the guide minimized in favor of product listings.

In every case, the underlying data is identical. The same products exist with the same attributes, the same guide content exists with the same information, the same reviews exist with the same ratings. The projection selects, arranges, and emphasizes; it doesn’t create or modify.

Why Search Engines Need the Canonical Layer

The projection layer serves users. The canonical layer serves search engines and AI systems.

When a search crawler visits a category page, it doesn’t carry user context. There’s no referral source, no session history, no intent signal. The projection operator receives a null context and returns the canonical projection: the default, uncontextualized rendering of the substrate.

This canonical projection is what gets indexed. It’s stable across crawls, consistent across time, and semantically identical to the substrate. Search engines build their model of your domain’s authority based on this canonical layer, and that model remains reliable because the layer doesn’t change in response to user behavior.
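The stability property is easy to sketch: with a null context, the rendering is a deterministic function of the substrate alone, so repeated crawls see identical bytes. The serialization below is a stand-in for a real rendering pipeline:

```python
import hashlib
import json

def canonical_projection(substrate: dict) -> str:
    """Null context: render the substrate deterministically.
    sort_keys makes the output byte-stable across runs."""
    return json.dumps(substrate, sort_keys=True)

substrate = {"title": "Family Ornaments",
             "claims": ["personalized family ornament"]}

# Two "crawls" of the same substrate produce identical bytes.
crawl_1 = hashlib.sha256(canonical_projection(substrate).encode()).hexdigest()
crawl_2 = hashlib.sha256(canonical_projection(substrate).encode()).hexdigest()
```

Because the hash depends only on the substrate, it changes only when the substrate changes through governance, never in response to user behavior.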

When a user visits the same page, they get a contextual projection that may look quite different from what the crawler saw. But the semantic content, the actual knowledge, facts, and claims, remains within δ of the canonical version. The user gets a better experience without the search engine losing its stable referent.

This is the duality that makes the whole architecture work: users see projections, search engines see substrates, and the bounded distance between them ensures they’re both seeing representations of the same truth.

The Semantic Distance Budget

The parameter δ, the maximum semantic distance between projection and substrate, is the key governance parameter. Set it too tight and projections are effectively identical to the canonical version, eliminating personalization value. Set it too loose and projections diverge from substrate truth, undermining authority convergence.

In practice, δ is managed through a combination of technical constraints and editorial policy. The projection engine can adjust layout, ordering, emphasis, and supplementary context (reviews, related products, trust signals). It cannot modify product claims, category descriptions, guide content, or any text that contributes to semantic authority. The editorial policy defines which content elements are projectable (presentation) and which are substrate-locked (truth).

This creates a clear boundary for engineering and content teams: anything that changes what the page means requires a substrate update through the governance process. Anything that changes how the page presents can be projected freely within the δ budget.
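That boundary can be expressed as data and enforced mechanically. The field names below are illustrative of the policy, not a fixed schema:

```python
# Editorial policy as data: presentation fields are projectable,
# truth fields are substrate-locked. Field names are illustrative.
PROJECTABLE = {"layout", "ordering", "emphasis", "related_products"}
SUBSTRATE_LOCKED = {"product_claims", "category_description", "guide_content"}

def validate_projection_diff(changed_fields: set[str]) -> None:
    """Reject any projection that touches substrate-locked fields;
    those changes must go through the governance process instead."""
    locked = changed_fields & SUBSTRATE_LOCKED
    if locked:
        raise ValueError(f"substrate update required for: {sorted(locked)}")
```

Running this check in CI or at render time turns the editorial policy into a hard invariant rather than a guideline.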

In Part 4, we’ll examine what happens when adversaries try to game the system: perturbation analysis and the formal bounds on how much damage a hostile actor can do to authority convergence.

Adam Bishop

Veteran, entrepreneur, and independent researcher. Writing about formal methods, AI governance, production systems, and the operational discipline that connects them. Every project here demonstrates hard thinking on simple infrastructure.