Open-Sourcing Myself
It started with something practical: I wanted AI to learn how I write.
I've been writing for eight years — since 2018, when I first moved to the US. Blog posts, book notes, industry analysis, life reflections. 170 pieces in total, scattered across time. Some were carefully crafted; others were just late-night emotional outlets. I wanted AI to read all of it, learn my voice, so it wouldn't turn me into someone else when helping me edit.
But AI returned more than a "writing style summary."
It gave me a fifteen-dimension analysis report — thinking frameworks, value systems, worldview, cognitive patterns, emotional landscape, cultural identity… I sat there for a long time after reading it. Not because it was well-written, but because some of it understood me better than I understood myself.
I decided to keep feeding it data. Weekly journals, random musings, book notes — everything I'd written over the years. I wanted to see what would happen if I kept going.
That's when something clicked: if AI can "understand" a person to this degree just from text, could we use this to remember the people we love?
I thought about my grandmother. If I collected enough of her — the way she talked, her phrases, how she saw the world — could I still "talk" to her after she was gone?
I'd had this idea since 2020. But it wasn't until I got analyzed by AI myself that I truly felt what it meant.
So I decided to go first. Open-source myself.
What AI Saw
After ingesting 170 pieces of writing, AI did something I hadn't expected: instead of summarizing each piece, it looked across the timeline for relationships between things written at different times.
Any single weekly entry is just ordinary rambling. But 170 of them together? AI extracted something humans aren't great at — patterns across time.
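For the curious, the mechanics were simple. Here's a minimal sketch of the setup, assuming the writings sit in a folder as date-prefixed markdown files and that `chat()` stands in for whatever chat-completion API you use; both are illustrative assumptions, not the exact pipeline.

```python
# Feed a dated corpus to one model and ask for cross-time patterns,
# not per-piece summaries.
from pathlib import Path

def build_corpus(folder: str = "writings") -> str:
    # Files named like "2024-10-12-weekly.md" sort chronologically by name.
    pieces = sorted(Path(folder).glob("*.md"))
    return "\n\n".join(f"[{p.stem}]\n{p.read_text(encoding='utf-8')}" for p in pieces)

PROMPT = (
    "Below are one author's journal entries and essays in chronological order. "
    "Do not summarize each piece. Trace themes, values, and decisions ACROSS "
    "entries: what recurs, what slowly shifts, and which scattered entries "
    "form a single arc. Cite entries by their [date-title] tag."
)

# analysis = chat(PROMPT + "\n\n" + build_corpus())  # chat() is a stand-in
```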
Before showing specific findings, here's the big picture AI painted.
Across eight years of writing, it identified a clear arc of intellectual evolution:
- 2018-2020: Idealism and anxiety coexisting. Fresh in America, questioning life's meaning, seeking optimal solutions. Grand ambitions and deep insecurity sitting side by side.
- 2021-2022: "Happiness comes from within" — finding spiritual anchors. Discovering emotions as resources, learning not to judge. Expanding from self-cultivation to understanding others.
- 2023: Disciplined rhythm. 52 weeks of unbroken journaling. 57 books read. "Recording is how I re-perceive the world."
- 2024: Loosening and awakening. Work meaning questioned harder. Rigid personality softening. Meeting YR, moving from "island" to "relationship."
- 2025: "Pessimists are right, optimists succeed." Starting an AI content project on the side. Proposing. "Doing the right things matters more than doing things right."
Interesting map. But what truly surprised me were the threads AI found when it zoomed in — things I hadn't noticed about myself.
Finding 1: AI Mapped My Proposal Decision Chain
AI told me: you're an extremely rational person, but you made a "very sudden" emotional decision — proposing.
My first reaction: yeah, it was pretty sudden.
Then AI said: no. Trace your writing. It wasn't sudden at all.
It strung together a complete chain from entries scattered across seven months:
October 2024 — I attended a friend's wedding. That day I wrote: "Love is a decision, not a feeling." The groom's vow was "From now on, you are my top priority." I wrote: "He's my role model."
Later that month — after a 29-kilometer extreme hike, I wrote: "Maybe in the long arc of life, very little can actually be controlled." For someone who habitually controls everything, that's a significant crack.
April 2025 — in a book note, I quoted: "Only after you truly believe will you find happiness. Only after you truly believe will you have the courage to take the next step."
May 2025 — I proposed. My journal entry: "The fundamental reason is that during ten days apart, we both realized the other person had already become part of our lives. There was no reason not to be together. So, we just decided."
When I read this analysis, honestly, I was stunned. I hadn't realized there was a timeline like this — from "love is a decision" to "accepting loss of control" to "belief brings happiness" to "we just decided."
I thought proposing was a sudden decision. AI told me I'd spent the better part of a year rationally convincing myself to accept an emotional leap of faith.
Finding 2: The Slow Unbinding I Never Noticed
The second surprise was about my personality quietly loosening.
AI said: your writing documents the gradual "unbinding" of a classic J-type personality.
Early 2024 — I started tracking time usage at extreme granularity. Every hour, accounted for. My mom is a teacher; every school break she'd set goals for me. I inherited this habit. But in the same entry I wrote: "We've both suddenly realized that maybe life doesn't need to be packed full every single moment."
Mid 2024 — I started naming my weekly entries "Random Musings" instead of "Week X Summary." Even the naming was getting looser.
Late 2024 — I wrote "An unproductive weekend — I feel guilty" — but then immediately started questioning the guilt itself.
Early 2025 — YR and I were late to Disneyland. I thought of an essay about how "life doesn't need to be rushed," then added: "Let alone when you're out having fun."
Mid 2025 — I wrote: "In high school I hoped to at least make a ripple in history. Now I'm increasingly clear that I'm just an ordinary person."
If AI hadn't strung this together, I probably wouldn't have noticed: over eighteen months, I went from "every minute must be meaningful" to "life doesn't need to be rushed."
The change was too slow. So slow the person living it couldn't feel it. But across 170 entries, AI saw it.
Gradual change is the hardest thing for a person to notice about themselves — but AI can.
Finding 3: A Book Note That Became a Decision
In March 2025, I wrote a note about a newsletter creator named Shaonan:
"Shaonan is the kind of person I want to become — continuously learning, distilling knowledge, sharing with people who share his interests."
On its own, just a reader's admiration. But AI placed it alongside entries from surrounding months and saw a bigger picture.
Looking back:
November 2024 — a project finally shipped. I felt no joy. I wrote: "Nobody seems excited… We're just Sisyphus, endlessly pushing projects uphill." That same period, after reading The Remains of the Day: "The butler missed his father's final moments and his life partner for the sake of 'great events'… Working hard is fine, but sacrificing too much for work is a losing trade."
March 2025 — I wrote an "obituary" for the team I'd been on for four years: "Not a single person walked with it through its entire lifecycle." Then: "Work is one pillar of life, but not the only one."
April 2025 — getting more specific: "Shaonan sits in his study 8-10 hours every Sunday, reading, distilling knowledge, writing 5,000-10,000 words… That's exactly what I want to do."
Looking forward: a few months later, I started building an AI content team in my spare time — doing exactly what Shaonan does: learning, distilling, sharing.
It's not a quit-your-job-and-start-a-company hero story. I didn't quit. I just realized my current work no longer excited me, and I wanted to find something that could "hook" me again. That book note — "Shaonan is the kind of person I want to become" — turns out to have been a blueprint I drew for myself without realizing it.
AI saw something in that book note before I did.
Finding 4: Seeds of Digital Immortality
The last finding relates to this very article.
AI found a thematic thread running through my 2024-2025 writing — across different months and different topics, one emotion kept surfacing: a deep appreciation for human connection.
July 2024 — YR and I were walking past a half-dead tree. She said: "I wonder if after we die, it's some kind of rebirth." I said: "Maybe this life is a dream." She said: "Then when you wake up, promise you'll come find me again." We pinky-promised through tears.
Same period — I set up a NAS at home to manage photo backups using the 3-2-1 principle: three copies of everything, on two different kinds of media, with one copy offsite. Every single photo, protected.
Late July 2024 — after watching Robot Dreams, YR cried and said: "I don't want to just be a passerby in your life." I wrote: "No matter how many times I write this down, tears still come."
January 2025 — I had three different AIs polish the same paragraph, comparing which one best preserved my voice. That was my earliest experiment in "getting AI to understand human expression" (a rough sketch of the comparison follows below).
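Reconstructed as code, the experiment was tiny. The sketch below is illustrative: `polish()` is a hypothetical stand-in for each model's API, and plain string similarity is a crude proxy for "voice" (the real judge was my own ear).

```python
# Compare how much of the original wording survives each model's polish.
import difflib

original = "..."  # the paragraph, in my own words
models = ["model-a", "model-b", "model-c"]  # placeholder names, not the models I used

def polish(model: str, text: str) -> str:
    # Hypothetical: replace with a real chat-completion call.
    # Identity keeps the sketch runnable as-is.
    return text

for model in models:
    polished = polish(model, original)
    overlap = difflib.SequenceMatcher(None, original, polished).ratio()
    print(f"{model}: {overlap:.0%} of the original wording preserved")
```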
AI connected these dots and interpreted them as "memory anxiety" — a fear of losing things.
Honestly, that interpretation isn't quite right.
I'd had the digital immortality idea since 2020. Its foundation isn't anxiety — it's love. I cherish connection. If I could collect more of my grandmother, then after she was gone I could still remember her. The pinky promise isn't fear; it's love. The NAS isn't anxiety; it's care.
AI saw what kept appearing, but it got the "why" wrong.
Which leads to a more interesting question: how deeply can AI actually understand a person?
What AI Can't See
When I told AI about the misread motivation, it did something fascinating: instead of simply saying "I was wrong," it immediately incorporated my correction into its analytical framework, explaining it as "AI's tendency toward over-interpretation."
Interesting on another level — AI is always looking for patterns, even in its own mistakes.
This made me think seriously about AI's boundaries.
What does it get right? It's excellent at finding patterns — themes recurring across time, consistency behind scattered signals, gradual changes invisible to the person living them. 170 pieces of writing is too much for a human to cross-reference. Nobody goes back and reads their entry from three years ago next to yesterday's. But AI does.
What does it get wrong? Motivation. AI elevated "exploring" into a "life pivot" and read "cherishing" as "anxiety." It naturally tends to connect scattered signals into directional arcs — the connections are accurate, but the meaning it assigns to the arc isn't always right.
Beyond that, three more fundamental things AI can't see:
First, it can't see what was never written down.
In 170 entries, I wrote plenty about work, reading, and YR. But I almost never wrote about late-night conversations with friends, the specific technical decisions I was proud of at work, or how YR and I made up after arguments.
The "me" AI analyzed is a writing-filtered me. The real me is more complex, more contradictory, more messy.
Second, it can't distinguish "what I actually thought" from "what I wrote down."
When writing journal entries, we unconsciously polish, organize, assign meaning. When I wrote "love is a decision" — did I truly think that in that moment? Or did I add that layer of meaning in retrospect? AI can't tell.
Third, it doesn't know what silence means.
There were weeks in 2024 when I didn't write anything. Was I too busy? Too happy to need writing? Or too sad to write? Silence might be the most informative signal — but AI can't read silence.
Can It Replicate Me?
Back to the original question: if I feed AI 170 pieces of my writing, can it "replicate" me?
Honest answer: yes and no.
The yes: AI genuinely captured my thinking patterns — how I approach problems, what I care about, what I fear, my decision logic. If you asked "AI me" a question, its answer might be 70-80% like the real me.
The no: it doesn't have the moment of pinky-promising through tears at the cemetery. It doesn't have the physical sensation of choking up at 2 AM while writing about my grandmother. It has my thinking framework, but not my body temperature.
Last year for my birthday, YR didn't book a fancy restaurant — she booked a small cabin, made longevity noodles, prepared a Little Prince cake. I wrote: "So happy to have found someone who understands me and loves me in exactly the way I want to be loved."
That's the ultimate form of "being understood" — you don't need fifteen analytical dimensions. You just need one person who knows you want a cabin, not a Michelin star.
AI can analyze you, but it can't know you. "Analyzing" and "knowing" are two different things.
Keep Writing
I open-sourced myself. 170 pieces of writing, eight years of inner monologue, one fifteen-dimension analysis report.
AI told me things I hadn't noticed — my proposal wasn't sudden, my new explorations weren't impulsive, my appreciation for human connection runs deeper and longer than I'd realized.
It also got things wrong — read love as anxiety, read exploration as pivot. Then, after being corrected, it tried to fold the correction itself into its analysis.
But AI also reminded me of something: what it preserves is the trace of thinking, not thinking itself. Like how a NAS can back up photos, but can't back up the feeling of taking them.
Maybe someday, digital immortality technology will truly replicate a person's way of thinking — let the departed "come back" in some form. I've been thinking about this since 2020, and I still am.
But until that day comes, all I can do is keep writing.
Not for AI. For my future self — hoping he'll look back and recognize the person who colored with crayons at a pizza shop, pinky-promised through tears at a cemetery, and wrote journal entries at 2 AM.
170 pieces of writing can't replicate a person. But they can prove that person existed, lived deliberately, and thought deeply.