The Future Self · 2026-05-04 · 6 min read

The Stanford 2011 study, retold

What the most-cited future-self experiment actually showed — and what it didn't. A 12-minute walk through the research that Precog runs every Sunday.

The 2011 study has been cited everywhere — in retirement planning blogs, in productivity podcasts, in TED talks. Almost always in the same compact form:

"Researchers at Stanford showed that people who saw an aged version of themselves saved twice as much for retirement."

That sentence is true. It is also small enough that the actual finding tends to get flattened into something easier to repeat. So before we go further, let's slow down and look at what was really tested.

The setup

The paper — Increasing Saving Behavior Through Age-Progressed Renderings of the Future Self — was published in the Journal of Marketing Research. The lead author was Hal Hershfield, then at NYU, with co-authors at Stanford. The team built immersive virtual reality environments. Participants entered them, saw a virtual mirror, and were shown a digital avatar — either of their current self, or of a digitally aged version of themselves.

The aging was not a cartoon filter. It was a custom rendering built from each participant's own photograph, age-progressed with realistic skin texture and structural changes. When you walked up to the mirror, the older person on the other side was recognizably you.

After the VR session, participants were asked to make hypothetical financial allocation decisions: how much of a future paycheck to put into a long-term savings account.

What they found

The group that saw their aged self allocated, on average, about 6.2% of their hypothetical paycheck to retirement saving. The group that saw their current self allocated about 4.4%.

A 1.8 percentage-point difference may sound small. Relative to the baseline, it is roughly a 41% increase. Some popular accounts round this up to "twice as much" — which is approximately accurate for some specific subgroups in the data, less so for the headline figure.
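
To make the relative-versus-absolute distinction concrete, here is a minimal sanity check on the arithmetic, using the allocation figures quoted above (the variable names are ours, for illustration only):

```python
# Sanity check on the figures quoted in this post (not re-derived from the paper itself).
aged_self = 6.2      # % of hypothetical paycheck, aged-self condition
current_self = 4.4   # % of hypothetical paycheck, current-self condition

point_difference = aged_self - current_self            # absolute gap in percentage points
relative_lift = point_difference / current_self * 100  # gap relative to the baseline

print(f"{point_difference:.1f} pp absolute, {relative_lift:.0f}% relative")
# -> 1.8 pp absolute, 41% relative
```

"Twice as much" would have required the aged-self group to allocate around 8.8%, which is why the doubling claim only fits certain subgroups, not the headline numbers.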

Either way, the direction is clean: one brief encounter with the aged self produced more long-term-friendly choices.

The effect was replicated in a second experiment using non-VR images, suggesting the mechanism didn't depend on the VR apparatus itself.

What it actually demonstrates

The clean version of the headline is: the aged face changes the saving math.

The careful version is more interesting:

  1. The change happened after a single session — minutes, not weeks. Visual contact is fast-acting.
  2. The change was on a hypothetical allocation, not an actual one. Field replications since 2011 have generally — but not uniformly — supported the effect on real allocations.
  3. The aging had to be of that person specifically. Showing participants a generic older person did not work. Identity was the operative variable.
  4. The effect did not require the aged self to look frail or grim. It worked just as well with healthy aged renderings. What mattered was familiarity, not fear.

That fourth point matters for Precog. Our portraits are not about scaring people. They are about familiarity. The "worst" pole exists, but the dominant message is presence — you exist in this future, here is what you look like.

Why VR, originally

Hershfield's team chose VR because it solved a methodological problem: how do you get someone to meet their future self in a way that feels real, not abstract? Imagining the future self produces weak responses. Reading about the future self produces weak responses. Seeing a static photograph produces moderate responses. Standing in a virtual mirror with your aged self looking back at you was the strongest manipulation they could engineer at the time.

The strength of the manipulation was the point. They were testing the upper bound of how much a single visual moment could shift a behavioral intention.

What 2026 makes possible

VR studies are expensive and rare. The 2011 paper required a team, a lab, custom rendering software, and weeks of recruitment.

Fifteen years later, the same age-progressed rendering can be produced by a phone in seconds, using a generative image model trained on hundreds of millions of faces.

This is the underappreciated technological shift Precog operates on. The 2011 study established that a single visual encounter with the aged self changes downstream behavior. A consumer-grade product running this intervention every Sunday — using each user's actual selfie, with each user's actual habit data influencing the projection — was not buildable in 2011. It is buildable now.

What Precog inherits, and what it adds

We inherit:

  • The mechanism — visual encounter with a self-specific aged rendering shifts long-term-friendly behavior
  • The dosage finding — even brief, single-session contact has measurable effect
  • The identity requirement — the aged person must be recognizably you

We add:

  • Cadence — once-a-week, indefinitely, not a one-time intervention
  • Behavioral input — the projection is shaped by what you actually did this week, not just by aging math
  • Three poles — best, prediction, worst, so the future is a space of choices, not a single fate

The 2011 paper showed a roughly 41% lift from one VR session. We don't know what the lift is from 52 Sunday reveals a year. But we know which direction it points, and we know Precog is the only product running the experiment in long form on real users.

A note on what the study did *not* show

The 2011 study was not a longitudinal behavior study. It did not show that VR future-self contact changed actual saving behavior over years. It showed it changed immediate hypothetical allocations. Field replications since have been more mixed — some supportive, some not finding the effect at scale.

We are honest about this with ourselves: the academic case for Precog's intervention is strong but not conclusive. The strongest version of the claim is that regular visual contact with the aged self plausibly improves long-term decision-making, at low risk. That's the bet we built the product on. We are running a real-world experiment in slow motion, with each user as a participant in their own weekly trial.

Whether it works is the second-most interesting question. The most interesting question is: what does it feel like to live with your future self in the room every Sunday?

That, the 2011 paper could not measure. We hope the next decade of users will.

— Codeful
