Back in December of 2024, Stanford communications professor and misinformation expert Jeff Hancock admitted that a court declaration he submitted (on the subject of deepfakes and AI-driven misinformation, of all things) contained fabricated citations.
Apparently he’d used ChatGPT to help draft the document, forgot to double-check the placeholders, and the AI invented its own citations.
Which would be, like, the most hilarious irony if it weren't so unsettling.
But what really gets me isn't just that a professor who teaches Truth, Trust, and Technology had a fast one pulled on him by AI. It's how the whole situation reminds me of House of Leaves.
The House
Mark Z. Danielewski’s House of Leaves is a cult classic of literary horror, a nesting-doll novel that traps you inside itself. It includes footnotes within footnotes, narrators wrapped inside other narrators, fabrications entangled with facts until your own sense of reality begins to warp too.
And it’s one of my favorite novels of all time!
Also Read: What is Real in the House of Leaves?
Authority of a Footnote
In House of Leaves, an essayist named Zampanò writes an academic analysis of a documentary film called The Navidson Record. His pages are crammed with references to famous texts, people, and interviews.
Many are real.
Many are not.
He cites The New York Times. Mary Shelley. Harvey Weinstein in Interview magazine. The Oxford Dictionary. A Pulitzer Prize-winning photojournalist named Will Navidson… who, by the way, sounds eerily similar to Kevin Carter, the real-life photographer of The Struggling Girl.
(Yeah, you know the photo. 😬 The one of the obviously starving and anguished child crawling on the ground while a vulture waits patiently behind her, and with no intervention by the photographer. The image won him the Pulitzer Prize but the controversy surrounding the photograph tormented him the rest of his life.)
These citations all feel credible, and some even sound familiar. But if you take the time to fact-check Zampanò, you find yourself in a rabbit hole of pseudo-facts and moments where reality has been mimicked just closely enough to pass.
Hmm, kinda like AI? 🤖
When AI tools generate text, especially when tasked with sounding academic, they create footnotes and attributions that look totally convincing. Their citations are complete with authors, publication dates, and specific article titles.
The problem is that they’re not real.
And these incorrect results often get past us! Not because we’re lazy, but because they appear so authoritative.
So, RIP works cited pages, I guess…
But What If the Story’s Real Enough?
Danielewski writes:
“Some insist it must be true, others believe it is a trick… Others could care less, admitting that either way The Navidson Record is a pretty good tale.”
So does it matter?
That question haunted me during my most recent reading of HOL. It comes back to me again now, as we are surrounded by tools that can provide credible analysis and cite fake research with complete confidence.
In 2025, when media is fluid and sources are algorithmically generated, do we even have the time or capacity to verify every single claim? Or are we more likely to accept the tone of credibility and move on?
That’s why Jeff Hancock’s AI-assisted declaration (submitted, again, in a case about fake stuff on the internet) is quite the cautionary tale. It’s not that no one noticed the hallucinated citations… It’s that no one thought to look.
That’s probably the part that’s 2spoopy4me.
The Anxiety of (Un)Reality
Both the physical novel and our digital AI platforms raise the same existential questions:
Who is speaking? And why?
Can we trust what we’re reading?
What is real?
AI outputs can feel uncanny in the same way House of Leaves does: they sound human, they mimic logic and meaning, but occasionally they carry a faint, subliminal wrongness.
It’s a feeling that creeps in when we’re not sure what — or who — we’re interacting with.
Let me break down and compare the two further…
In House of Leaves:
Who is the true narrator? Who can we trust — Zampanò? Johnny? The editors? Any of them?
Is the house real? Is the Navidson Record real? Is any of this real?
The reader is soon tangled in layers of unreliable storytelling.
In AI:
Is this text “speaking to me”, or just mimicking?
Does this system know anything? Can it lie?
Who authored this answer? Was it pulled from a real person, or is the whole thing fabricated by AI?
But most importantly: If it sounds true and feels meaningful, does it matter if it’s fake?
The uncanny parallels between the two abound. AI doesn’t have beliefs or intent, but it sounds like it does. That tension is exactly what unsettles.
Like the house, AI has no center. You keep peeling back layers expecting an origin, and find only repetition.
Reader, Beware
House of Leaves warns its readers again and again: beware. Not of gore or ghosts, but of perception itself. What happens when the boundary between fiction and fact breaks down—not because of a lie, but because of structure?
AI doesn’t need to intend to mislead. It just needs to mimic.
So yes, hear me out:
AI is the House of Leaves.
It cites what isn’t real.
It mimics authority with elegance.
It builds infinite interiors.
And it dares you to go deeper.
Like Navidson says — if you pass by that house, don’t slow down. Just keep going.
Advice that I, for one, am apparently not taking… as both a reader of House of Leaves and a daily user of AI. Like, I literally use ChatGPT all day every day at my real-life job.
But did I use AI in writing this piece? A lady never tells…
Stay tuned! More ramblings on House of Leaves and AI to come.