---
title: "What Duchamp Figured Out About Your Context Window"
id: "92640"
type: "post"
slug: "what-duchamp-figured-out-about-your-context-window"
published_at: "2026-05-11T18:14:56+00:00"
modified_at: "2026-05-11T18:14:56+00:00"
url: "https://tealium.com/blog/artificial-intelligence-ai/what-duchamp-figured-out-about-your-context-window/"
markdown_url: "https://tealium.com/blog/artificial-intelligence-ai/what-duchamp-figured-out-about-your-context-window.md"
excerpt: "The conversation about AI context is actually a very old conversation. We just forgot who started it. In 1917, Marcel Duchamp purchased a porcelain urinal from a plumbing supplier, rotated it ninety degrees, signed it with a pseudonym, and submitted..."
taxonomy_category:
  - "Artificial Intelligence (AI)"
---


# What Duchamp Figured Out About Your Context Window

Nick Albertini · May 11, 2026

The conversation about AI context is actually a very old conversation. We just forgot who started it.

In 1917, Marcel Duchamp purchased a porcelain urinal from a plumbing supplier, rotated it ninety degrees, signed it with a pseudonym, and submitted it to an art exhibition under the title *Fountain*. The object did not change. The plumbing did not improve. What changed was everything surrounding it — the frame, the institution, the implied question. Duchamp understood, perhaps better than anyone in the twentieth century, that meaning is not inside an object. It is constructed around it.

I think about this often when I’m in rooms full of engineers, data scientists, and executives debating the future of AI. We talk about context constantly. Context windows. Contextual data. In-context learning. We debate how much of it to pass, how fresh it needs to be, how to structure it so a model can actually use it. It is the defining technical frontier of applied AI right now. And yet we almost never acknowledge that artists have been solving this problem for centuries. Not metaphorically. Literally. The challenges of context in AI and context in art are structurally the same — and once you see it, the solutions artists have built into their practice offer a surprisingly useful map for what comes next.

Consider what Duchamp actually demonstrated. The urinal, removed from its plumbing context and placed in a gallery, became a question: what makes something art? The answer is the context around it — the institution that frames it, the audience that interprets it, the historical moment that receives it. Raw data has exactly the same property. A customer who visited your pricing page three times in two days is, in isolation, a urinal. The signal is neither meaningful nor actionable on its own. It becomes meaningful only when surrounded by context: who this person is, what they have purchased before, what they looked at before hitting that pricing page, whether they are a new prospect or a churning customer. The same behavioral signal produces entirely different meanings — and should trigger entirely different responses — depending on what surrounds it. This is not a technical insight. It is an interpretive one. And artists have known it far longer than engineers have.
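To make the point concrete, here is a minimal sketch in Python. The field names and categories are entirely hypothetical, but the shape of the idea holds: the same raw event yields a different interpretation depending on the profile assembled around it.

```python
# Hypothetical sketch: the same behavioral signal means different
# things depending on the context assembled around it.

def interpret_pricing_visits(event: dict, profile: dict) -> str:
    """Classify one raw signal differently based on surrounding context."""
    if event["page"] != "pricing" or event["visits"] < 3:
        return "no strong signal"
    if profile.get("is_customer") and profile.get("renewal_due_soon"):
        return "possible churn risk: reviewing costs before renewal"
    if not profile.get("is_customer"):
        return "warm prospect: evaluating purchase"
    return "existing customer researching an upgrade"

event = {"page": "pricing", "visits": 3}
# Identical event, two different frames, two different meanings:
print(interpret_pricing_visits(event, {"is_customer": False}))
print(interpret_pricing_visits(event, {"is_customer": True, "renewal_due_soon": True}))
```

The event object never changes between the two calls; only the frame around it does.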

Meaning, though, is only half of what a frame provides. The other half is trust. And the art world has more to teach us about that too.

In the art market, provenance is everything. It is the documented chain of custody that traces an artwork from the artist’s studio to its current location, through every hand that held it. A Vermeer with full provenance commands a different price — and carries a different meaning — than the same painting with gaps in its history. The object may be the same. The context around it is not. We have rebuilt this concept from scratch in data, and we call it first-party data. The principle is identical. Signals with a clear, consented, directly-observed chain of custody are worth more than signals of uncertain origin. When a customer interacts with your brand directly, you have provenance. You know where the signal came from, what it means, and what you are permitted to do with it. Third-party data is art of unknown origin: it might be genuine, it might be a reproduction, and the day the market decides it cares about provenance is the day its value evaporates. We are living through that reckoning now, as third-party cookies collapse and privacy regulations tighten. The art market has always known that provenance is not a compliance formality. It is the foundation of value. We are only beginning to internalize this in data.
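The idea translates directly into data design. A small illustrative sketch, with made-up field names: each signal carries its own provenance record, the way an artwork carries a chain of custody, and only signals with a documented, consented origin are kept.

```python
from dataclasses import dataclass

# Hypothetical schema: every signal travels with its provenance,
# so origin and permission can be checked before the signal is used.

@dataclass
class Signal:
    name: str
    source: str       # "first_party" or "third_party"
    consented: bool   # explicit permission recorded at collection time

def with_provenance(signals: list) -> list:
    """Keep only signals whose chain of custody is fully documented."""
    return [s for s in signals if s.source == "first_party" and s.consented]

signals = [
    Signal("pricing_page_visits", "first_party", True),
    Signal("inferred_interest_segment", "third_party", False),  # unknown origin
]
print([s.name for s in with_provenance(signals)])  # only the first-party signal survives
```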

If provenance governs what belongs in the frame, composition governs what does not. There is a concept in visual art called negative space — the area around and between the subjects of an image. The empty space. A trained artist knows what an untrained one does not: negative space is not absence. It is composition. It is as deliberate and as important as anything that appears in the frame.

**In AI, what you choose not to include in a context window is as consequential as what you do.**

Anyone who has worked with large language models has encountered context poisoning — the phenomenon by which an overloaded, noisy, poorly structured context window produces degraded or hallucinated outputs. The model is not failing. The composition is. You have filled the frame with clutter and asked the model to find meaning in the noise. The best prompt engineers and context architects I know work the way painters work: through subtraction as much as addition. They ask what to remove, where negative space gives the signal room to breathe. This is not a technical instinct. It is an aesthetic one. And it is one of the rarest and most valuable skills in applied AI today.
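The discipline of subtraction can be sketched in a few lines. The relevance scores, token counts, and threshold below are illustrative rather than taken from any real system: candidate context items are ranked, weak signals are omitted entirely, and only what fits a deliberate budget makes it into the frame.

```python
# Illustrative sketch of composition by subtraction: rank candidate
# context items by relevance, drop weak signals outright, and keep
# only what fits a deliberate token budget. The rest is negative space.

def compose_context(items, budget_tokens, min_score=0.5):
    """items: list of (text, relevance_score, token_count) tuples."""
    ranked = sorted(items, key=lambda it: it[1], reverse=True)
    chosen, used = [], 0
    for text, score, tokens in ranked:
        if score > min_score and used + tokens <= budget_tokens:
            chosen.append(text)
            used += tokens
    return chosen

items = [
    ("3 pricing-page visits this week", 0.9, 12),
    ("renewal due in 30 days", 0.8, 8),
    ("clicked a footer link in 2019", 0.1, 10),  # noise, deliberately left out
]
print(compose_context(items, budget_tokens=25))
```

Note that the low-relevance item is excluded even when the budget could accommodate it; leaving it out is a compositional choice, not a capacity constraint.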

Composition in space has a cousin: composition across time. Artists faced this one first too. In 1994, restorers cleaning Leonardo da Vinci’s *The Last Supper* in Milan faced a decision that has haunted restoration for decades: which version of the painting do you restore toward? The version Leonardo painted in 1498? The version it had become by 1700, after years of overpainting? The version it was in its most famous cultural moment? Each version is true. Each is a different context in time.

This is precisely the problem of temporal context in AI, and it is underappreciated. A customer who purchased from you eighteen months ago is not the same signal as a customer who purchased last week. An interest expressed two years ago may have decayed entirely, or it may represent a foundational preference that predicts everything that comes after. A behavior that was anomalous last quarter may be a new normal today. Which version of the truth are you feeding your model? As with restoration, there is no single correct answer. There is only the discipline of asking the question explicitly and making a principled choice about temporal context rather than defaulting to whatever data is cheapest to collect. Recency is not always truth. History is not always noise. The skill is in knowing which is which.
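One common way to make that choice explicit is to weight each signal by its age. The half-life below is an illustrative parameter, not a recommendation; the point is that "which version of the truth" becomes a tunable decision rather than an accident of collection.

```python
import math  # noqa: F401  (kept for readers extending this sketch)

# Illustrative sketch: exponential decay halves a signal's weight
# every `half_life_days`. A long half-life privileges history; a
# short one privileges recency. Neither is "correct" by default.

def recency_weight(age_days: float, half_life_days: float = 90.0) -> float:
    return 0.5 ** (age_days / half_life_days)

print(round(recency_weight(7), 2))    # last week: near full weight
print(round(recency_weight(540), 2))  # eighteen months ago: mostly decayed
```

A foundational preference might deserve a half-life of years; a flash-sale intent signal, days. Choosing the parameter is exactly the restorer's question asked in code.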

Even once you have chosen which version of the truth to work with, another problem remains: how do you describe it? The ancients had a name for this. Ekphrasis — the literary description of a visual artwork. Homer described Achilles’ shield in the *Iliad*. Keats immortalized a Grecian urn in fifty lines of verse. The purpose was to bring a visual experience into language, to translate across a sensory gap. Every AI system that attempts to reason about customer behavior is engaged in an act of ekphrasis. The model never sees the customer. It sees a translation: a structured, tokenized, schema-constrained representation of a person’s actions, preferences, and history. The customer browses, clicks, purchases, churns, returns. All of that becomes a description. And something is always gained and lost in translation. What is gained is structure, scale, and comparability. What is lost is texture, ambiguity, and the kind of contextual nuance a human observer would capture instinctively. The question is not whether the translation is perfect — it never will be. The question is how faithful and how complete it is. How much of the original experience survives the conversion into data. This is a data quality question. But it is also a question writers and translators have wrestled with since antiquity.
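What that act of translation looks like in practice can be sketched in a few lines. The schema and template here are hypothetical; notice what survives the conversion (structure, timestamps, comparability) and what does not (tone, hesitation, everything that falls outside the schema).

```python
# Hypothetical sketch of the ekphrasis step: structured events become
# the prose the model actually "sees". Anything off-schema is lost.

def describe_customer(events: list) -> str:
    """Render a list of event dicts as a model-readable description."""
    lines = [f'- {e["ts"]}: {e["action"]} ({e["detail"]})' for e in events]
    return "Recent customer activity:\n" + "\n".join(lines)

events = [
    {"ts": "2026-05-09", "action": "viewed", "detail": "pricing page, 3rd visit"},
    {"ts": "2026-05-10", "action": "downloaded", "detail": "integration whitepaper"},
]
print(describe_customer(events))
```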

And once the translation is made, where it lives matters as much as what it says. Imagine the Mona Lisa hanging in a dental waiting room. The painting would be identical, stroke for stroke. The experience would be entirely different — diminished, strange, disconnected from the accumulated cultural context that makes it what it is. The meaning of a great artwork is not only in the object. It is in the institution that frames it, the room that holds it, the audience that approaches it with a certain kind of attention. AI models have the same property, and we underestimate it. The same model, deployed in different products, different user bases, different system prompts and surrounding data environments, will behave in meaningfully different ways. Deployment context is not separate from model performance. It is constitutive of it. A model tuned on e-commerce data and deployed in a healthcare workflow will produce results that are, at best, strange and, at worst, harmful — not because the model is bad, but because the institutional context does not match what the object was made for. Responsible AI deployment is as much a curatorial problem as a technical one. You are not just choosing a model. You are choosing where to hang it, what to hang beside it, and what kind of attention you expect the audience to bring.

I spend a lot of time talking to enterprises about their AI strategies, and the pattern I see, over and over, is organizations deeply focused on model selection — which foundation model, which fine-tuning approach, which inference optimization — and comparatively under-invested in the context layer that feeds those models. This is the equivalent of commissioning a masterpiece and hanging it in a broom cupboard.

I should be direct about why this argument matters to me beyond the metaphor. This is the work we do at Tealium. For more than a decade, we have built the infrastructure that turns raw customer signals into something a model can actually use — a unified, real-time customer profile, assembled from every touchpoint a person has with a brand, governed by explicit consent, and made available to whatever downstream system needs it. And every concept I have described corresponds to a discipline within that work.

Identity resolution — stitching thousands of fragmented observations across devices, sessions, and channels into a faithful representation of one human being — is the ekphrasis problem. It is the act of translation, and the fidelity of that translation determines what every model downstream is capable of understanding. Consent and data governance is the provenance problem — knowing where every signal came from, under what permissions, and what you are allowed to do with it. Real-time data quality and filtering is the negative space problem — deciding, continuously, what belongs in the frame and what is noise that should stay out. The ability to serve historical depth or recent behavior on demand is the restoration problem — letting each use case choose which version of the truth it needs. And routing the right context to the right model at the right moment, in the right deployment, is the museum effect — making sure the artwork is hung in a room where it can be seen for what it is.

The companies winning with AI today are not, in most cases, the ones with the most sophisticated models. They are the ones who have done this work first. They know who their customers are across every touchpoint. They can construct a faithful, temporally coherent representation of customer behavior in real time. They curate what goes into their context windows with the intentionality a gallery director brings to a hanging. They understand that what they don’t send to the model is as important as what they do. This is the work that happens before the model ever runs. It is contextual infrastructure. And it is the differentiating capability of this era of AI — not because the models don’t matter, but because once models reach a certain level of capability, context becomes the primary variable.

Duchamp’s urinal is still provoking conversations more than a hundred years after he submitted it. Not because of the object, but because of the frame he put around it, and the questions that frame forced into the world. The AI systems we build today will be judged not just by the models we chose, but by the contextual frames we constructed around them. By whether we fed them faithful or distorted translations of the world. By whether we curated the negative space with intention. By whether we respected the provenance of the signals we passed. By whether we understood that deploying a model in the wrong institutional context produces not just bad outputs, but a fundamentally different — and potentially damaging — thing.

Artists have always known this. The frame is not decoration. The frame is the argument. It is time we built our AI systems accordingly.
