Context Pollution: Why Your AI Keeps Bringing Back the Past
And how starting a new chat can save you hours of confusion and cleanup
If you’ve worked with AI for a while, you’ve probably noticed this:
You’re working on one thing, then switch topics entirely—but somehow, the old stuff starts to creep back in. Forgotten terms, long-abandoned ideas, elements from earlier interactions suddenly reappear, uninvited.
That’s what we call context pollution.
Here’s how it plays out in practice:
You start a project with an AI—maybe you’re writing a story, coding, or generating images. You stay in the same chat, working step by step. Everything feels coherent. But over time, subtle artifacts of your earlier prompts start resurfacing. A character behaves as they used to, even though you’ve explicitly changed them. Old bugs come back into your code. Or you get cats in a landscape image—even though cats were never mentioned in the current request.
Why?
Because the AI still has access to everything that’s been said in that ongoing chat. Not as “knowledge,” but as active context—the statistical fog that surrounds the current prompt. Even if your instruction is crystal clear, the past is still present.
And here’s the key point:
You can say “ignore everything before”—but it doesn’t help.
That fog can’t just be cleared. Not fully.
Large language models don’t forget like humans. They don’t forget at all. They just suppress—with varying success. If you really want a clean break, there’s only one way:
Start a new chat.
A cold reboot. A fresh context.
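To see why, here is a minimal sketch of what a chat session looks like under the hood. The names (history, send_to_model, ask) are placeholders invented for illustration, not any particular vendor's API; the point is simply that the entire conversation history travels with every new message.

def send_to_model(messages):
    # Hypothetical stand-in for a real chat-completion call.
    # In a real client, the whole messages list is serialized and sent.
    return f"(reply shaped by {len(messages)} earlier messages)"

history = []

def ask(user_message):
    # Every turn is appended to the same list...
    history.append({"role": "user", "content": user_message})
    # ...and the full list rides along with every single request.
    reply = send_to_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

ask("Write a scene with Marek, the quiet one.")
ask("Ignore everything before.")  # this just becomes one more message in the list

history = []  # "New Chat": the only move that actually empties the context

Notice that "Ignore everything before" doesn't delete anything; it is just one more line in a list the model still reads from top to bottom. Emptying the list is a different operation entirely, and only the "New Chat" button performs it.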
I. The Voice That Won’t Speak
A literary example of contextual inertia
Let’s say you’re writing a story. There’s a character—Marek—who’s been established over many paragraphs as the silent type. He rarely talks. Other characters comment on it. His silence becomes part of the atmosphere.
“Marek said nothing, as usual. He stared out the window while the others spoke.”
That pattern repeats. He becomes a fixed presence in the narrative: a quiet observer.
Then, at some point, you decide to change direction. You tell the AI:
“Now make Marek the one who speaks up. Give him a voice of leadership.”
And the AI responds—sort of:
Marek looked at the group. Hesitantly, then with a steady voice, he said, “We need to stay united.” Then he fell silent again.
Technically, the instruction has been followed: he speaks.
But almost immediately, he’s pulled back into silence. The model can’t fully commit to the change. It still sees Marek as “the quiet one.”
And yet, what you were expecting might have sounded more like this:
Marek stepped forward, voice clear, gaze unflinching. “Enough. We’ve waited, we’ve watched, we’ve done nothing. That ends now.”
There was no trace of the silent man he once was. The group listened—stunned.
That’s not just a new line of dialogue.
It’s a complete shift in character presence. A moment of transformation.
But the model doesn’t break the pattern—it modifies it, cautiously. Too cautiously.
Why?
Because the model doesn’t think in terms of characters; it thinks in statistical patterns. The weight of Marek’s previous portrayal is so strong that it overrides the new directive. The character doesn’t evolve. He reverts.
Humans might read a narrative cue and infer transformation. A machine doesn’t. Unless the change is overwhelming and explicit, the old pattern lingers.
The past doesn’t die. It just waits for you to stop paying attention.
II. The Perfect Landscape—with Two Cats Too Many
When visual models cling to what came before
Now let’s shift to image generation.
You’ve previously created several cat-themed prompts:
“Cat on a bookshelf,” “Two cats in a greenhouse,” “Cat on a worn-out carpet.” All successful.
Then, later, you change gears completely. You now want a clean, photorealistic landscape—no animals, no distractions, just aesthetic beauty.
Prompt:
“A stunning photorealistic alpine valley at sunset, with clear lake reflections and atmospheric lighting.”
Result?
You get your stunning landscape—mountains, golden light, depth, everything.
But in the foreground: two cats. Sitting in the grass. As if they never left.
This isn’t a “bug.”
It’s not hallucination in the classic sense. It’s contextual drift—a latent association resurfacing because your previous interactions shaped the statistical likelihood of cats appearing again.
The model “remembers” that you often prompt scenes with cats.
So now, even when you don’t mention them, it adds them—quietly, subconsciously, and off-topic.
A human would say: “Wait—cats? That makes no sense here.”
The model simply says: “Statistically... seems likely.”
What you get is a technically flawless image—polluted by semantic ghosts.
III. The Bug That Came Back
Why you can’t just say “fix the code”
Now the most concrete case: code.
You’re working with the AI to write a number parser in Python. At first, it's very simple:
def parse_number(s):
    return int(s.strip())
Then you notice it breaks on "3.14", a float. So you tell the AI:
“Make it accept floats too.”
And it updates accordingly:
def parse_number(s):
    return float(s.strip())
So far, so good.
Later, you ask for an improvement:
If the input isn’t a valid number (e.g., "abc"), return None instead of throwing an error.
Here’s what the AI gives you:
def parse_number(s):
    try:
        return int(s.strip())
    except ValueError:
        return None
Wait: int() again?
What happened to the float fix?
Simple: the model regressed. It focused so hard on your most recent request (“handle invalid input”) that it forgot your previous one (“use float”). The old default, int, crept back in because it was still statistically dominant in the earlier conversation.
Unless you remind the AI of every previous change, this kind of regression is common.
The correct solution would be:
def parse_number(s):
    try:
        return float(s.strip())
    except ValueError:
        return None
But you only get that if you either (a) explicitly restate all constraints, or (b) start from scratch.
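If you go with option (a), it helps to write the constraint list down somewhere the conversation can’t erode it. Here is a minimal sketch, assuming the corrected parse_number above is in scope; each assert encodes one requirement gathered so far, so a silent regression back to int() fails the moment you run it.

# Pin every constraint accumulated so far as an executable check.
assert parse_number("42") == 42.0        # plain integers still parse
assert parse_number(" 3.14 ") == 3.14    # the float fix stays fixed
assert parse_number("abc") is None       # invalid input returns None, no exception
print("all constraints still hold")

Run it after every AI edit, or paste the same list back into the prompt each time you ask for a change; either way, the model can no longer quietly undo the float fix without you noticing.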
Conclusion
Large language models don’t remember like we do.
In fact, they don’t really remember at all.
They don’t store knowledge the way a person would.
They carry forward statistical weight—not clear, structured intent.
What was said earlier is still “there,” subtly influencing what comes next, unless you break the chain completely.
You can say: “Ignore that.”
You can say: “Forget the old version.”
You can even say: “Let’s change direction.”
And still, the past returns.
Not because the AI misunderstands you.
But because it still interprets you through the lens of everything that came before.
The most important lesson about context pollution isn’t how to write better prompts.
It’s this:
Sometimes you don’t need better instructions. You just need a clean slate.
And that’s not abstract advice. It’s literal:
👉 Click “New Chat.”
That’s it. That’s the fix.
It may feel trivial, even stupid. But it works.
Because a fresh prompt window means a fresh model state—no lingering bias, no semantic inertia, no ghosts from old tokens.
Start over.
You’ll waste less time undoing the past—and spend more time actually building something that works.