Methodology

How this site is made

A hobby project, but an honest one. Here’s exactly how the content gets made: what’s human, what’s machine, and what you should actually trust.

The Full Story

Exactly how this page was built

This whole site runs on a shitty old laptop.

Not a joke. It’s an old Pop!_OS machine sitting on a desk next to my normal laptop. I keep it separate on purpose — this project is weird enough that I don’t want it mixed in with my regular stuff. Pop!_OS is a Linux distro. It’s free. The laptop cost me basically nothing. If yours can run a browser and a terminal, it can run this.

The whole setup is three things: Claude, a collection of sub-agents, and a skill called /graphify. That’s it. No team, no budget, no fancy cloud infrastructure. Here’s how they fit together.

1. Claude does the writing and thinking

I install Claude on the laptop and start talking to it like a smart collaborator who just walked in. I explain what I’m trying to do. I point at examples. I push back when it writes something I don’t like. It pushes back when I ask for something dumb. We work it out.

Every page you’ve read on this site started as a conversation. I say “I want a page about the Carolina Bays, here’s what I remember about the carpet-bombing theory, fact-check me and write it up.” Claude fact-checks, drafts, cites sources. I read it, flag what’s boring, what’s too academic, what’s missing a picture. We keep going until it’s right.

When we’re done Claude pushes the code to GitHub, deploys it to Cloudflare, and the change is live in under thirty seconds. No dev team, no meeting, no ticket. One conversation to one live page.

2. Sub-agents handle specialised jobs

For tasks that need their own focus — generating an image, writing a research summary, building a shader, doing a security review — Claude can spin up a sub-agent that specialises in that one thing. Think of it as calling in a freelancer who already knows the job.

For this site specifically: image generation agents produce the hero graphics, a code-review agent checks the Astro files, a research agent pulls primary sources for anything I’m not sure about. I never see most of this. I just ask for the thing and get it back.
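The "freelancer who already knows the job" idea can be sketched in a few lines. None of this is Claude's actual API; the function names, the task types, and the dispatcher are all made up to show the shape of the pattern: one coordinator, several specialists, each doing exactly one thing.

```python
# Hypothetical sketch of the sub-agent pattern. The agent names and the
# dispatch table are illustrations, not real Claude tooling.

def image_agent(task: str) -> str:
    # Specialist #1: only makes hero graphics.
    return f"hero graphic for: {task}"

def review_agent(task: str) -> str:
    # Specialist #2: only reviews code.
    return f"code review of: {task}"

def research_agent(task: str) -> str:
    # Specialist #3: only digs up primary sources.
    return f"primary sources on: {task}"

# The freelancer directory: task type -> specialist.
AGENTS = {
    "image": image_agent,
    "review": review_agent,
    "research": research_agent,
}

def dispatch(task_type: str, task: str) -> str:
    """Hand the task to the right specialist and return what it produced."""
    return AGENTS[task_type](task)

print(dispatch("research", "Carolina Bays"))
```

The point of the pattern is the narrow scope: each specialist sees only its own job, so the coordinator stays simple and the specialists stay good at one thing.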

3. /graphify turns dead text into a living network

This is the piece most people haven’t seen before. We have 124 sacred texts sitting on disk — Upanishads, Tao Te Ching, Nag Hammadi, Hermetica, modern physics papers, the whole stack. Text. Lots of text. You can’t find patterns in text by reading it, no matter how many years you spend trying.

So we run the whole corpus through /graphify — a tool that reads every document, pulls out every concept, and draws the connections between them. What comes out the other end is a 1,626-node knowledge graph with 2,006 edges and 112 communities. The Upanishads and the quantum physics papers now live in the same map. You can see where they overlap. You can see which concepts the texts obsess over together. The graph surfaces things no human reader would ever notice.
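Here is a toy version of the text-to-graph step, assuming the simplest possible pipeline: treat each document as a bag of concepts, connect any two concepts that show up in the same document, then group connected concepts into communities. The real /graphify is far more sophisticated (and the tiny corpus below is invented), but the shape is the same: text in, network of meaning out.

```python
# Toy /graphify: documents -> concept co-occurrence graph -> communities.
# The concept sets below are made-up stand-ins for real extraction.
from itertools import combinations
from collections import defaultdict

docs = {
    "upanishads":    {"self", "consciousness", "unity"},
    "tao_te_ching":  {"unity", "flow", "emptiness"},
    "physics_paper": {"observer", "consciousness", "measurement"},
}

# Nodes are concepts; an edge means two concepts co-occur in a document.
nodes = {concept for concepts in docs.values() for concept in concepts}
edges = set()
for concepts in docs.values():
    for a, b in combinations(sorted(concepts), 2):
        edges.add((a, b))

# Adjacency list for walking the graph.
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def communities(nodes, adj):
    """Crudest possible community detection: connected components."""
    seen, groups = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.add(n)
            stack.extend(adj[n])
        groups.append(group)
    return groups

print(len(nodes), len(edges), len(communities(nodes, adj)))
# Shared concepts ("unity", "consciousness") pull all three documents
# into one community - that is the "clusters touching" effect.
```

Swap the three toy documents for 124 real texts and the crude components for proper community detection, and you get numbers like 1,626 nodes, 2,006 edges, and 112 communities.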

Every pattern exploration on this site started with me staring at that graph and saying “wait, why are these two clusters touching?” The corpus stopped being a pile of text. It became a network of meaning.

4. Me arguing with Claude (joke) (actually real)

Here’s the thing nobody tells you about writing with AI. If you don’t push back, Claude will default to a kind of confident Wikipedia voice — smooth, balanced, no rhythm breaks, no dry jokes, no opinions. It reads fine once. It evaporates on the second read.

I had friends look at the first version of this site. The feedback was brutal but fair: “the language is too fancy, the site feels plastic, it’s just long text of endless stuff.” So we went back to the top and did a full voice pass on every page. I gave Claude examples of how I actually talk. We built rules. We argued about every paragraph.

Actual exchange that fixed things

Me:

“i would never use fancy words like geochemistry, stratigraphic, and stuff like that. i can read it and reason myself to understand what its talking about, but i would use simpler language... because then you just lost 80% of your readers in a wtf-is-this-bullshit moment, they ran away. nobody wants to read texts like that when what you talk about is an actual shovel.”

Claude:

“Makes complete sense. That’s the clearest rule yet: if a common thing has a fancy academic name, use the common thing and gloss the name in parentheses — don’t call a shovel a Leveraged Pedospheric Interface Device.”

That rule now lives in the project’s memory. Every page got rewritten against it.

The same thing happened with “dominant explanation” (became “best explanation”), “remarkable coincidence” (became “one hell of a coincidence”), and “non-trivial” (became “hard to pull off, and honestly I wasn’t even sure what non-trivial meant when I saw it”). The rules pile up. The voice sharpens.

The site you’re reading now is the version that survived that sanding process. Credit to Claude for writing it. Credit to me for not letting it stay smooth.

5. Copy this. Build your own.

That’s the whole stack. An old laptop, Claude, a handful of specialised agents, /graphify, and a willingness to argue with the machine until the writing sounds like you. Everything above is open tools. Nothing is proprietary. Nothing costs more than a Claude subscription and the electricity to keep a laptop running.

If you have a weird passion project sitting in your head — the history of your country’s folk music, the hidden pattern in stock market crashes, the actual science of what your grandmother’s healing tea is doing — this is how you build the site for it. Pick a corpus. Run /graphify on it. Talk to Claude about what the graph shows. Push back when it writes plastic. Ship it. See what happens.

The only reason we’re in a moment where a hobby project can pull apart 124 sacred texts across six continents and find real patterns is that the tools got absurdly good, nobody told anyone, and most people are still using them to write emails. Don’t write emails. Build something.

The technical breakdown

The corpus is hand-picked

124 source texts — sacred scripture, modern science, alternative archaeology — picked one at a time. No automated scraping. Browse the full list on the Oracle library page. Every document is real and publicly available; every card links to the original.

The synthesis is LLM-assisted

The narrative pages (Cosmos, Earth, Wisdom, Deeper, World, Timeline) are written with a large language model doing the first pass on top of the curated corpus. Then the curator argues with the machine about the direction, rewrites the plastic sentences, and links every factual claim that matters — a date, a quote, a cross-tradition parallel — out to primary sources.

If a claim is contested, the site marks it contested. If it’s speculative, the site marks it speculative. The timeline uses explicit confidence symbols (established · contested · speculative). Trust what the symbols say, not more.

The Oracle is retrieval-grounded

When you ask the Oracle a question, it searches the 124-text corpus by meaning, pulls the passages that actually match, and answers from those with inline citations. It isn’t chatting from memory. If a passage is cited, the passage exists and you can read the original.
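Retrieval-grounded answering can be sketched with plain word-overlap scoring. The real Oracle searches by meaning (embeddings, not keywords), and the three passages below are abbreviated stand-ins, but the mechanism is the same: score every passage against the question, keep the best matches, and answer only from passages you can cite.

```python
# Minimal retrieval sketch: bag-of-words cosine similarity over a tiny
# invented corpus. The real system uses semantic embeddings instead.
import math
from collections import Counter

corpus = [
    ("Tao Te Ching 11", "thirty spokes share one hub emptiness makes the wheel useful"),
    ("Chandogya Upanishad 6.2", "in the beginning this world was just being one only"),
    ("Hermetica I", "the all is mind the universe is mental"),
]

def vec(text):
    # Word-count vector; stands in for a semantic embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def ask(question, corpus, top_k=1):
    """Return the best-matching passages, each with its citation attached."""
    q = vec(question)
    scored = sorted(corpus, key=lambda p: cosine(q, vec(p[1])), reverse=True)
    return scored[:top_k]

cite, passage = ask("what does emptiness make useful", corpus)[0]
print(cite, "->", passage)
```

Because the answer is built from retrieved passages, every citation points at text that actually exists — the model never gets to improvise a source.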

The graph is algorithmic, the narrative is curated

The interactive knowledge graph is built automatically from the corpus — 1,626 concept nodes, 2,006 connections, 112 communities. The surrounding prose (God Nodes, Surprising Connections, Cross-Cutting Patterns) is hand-written, picked for what’s actually interesting. The numbers come from a machine; the storytelling comes from a human.

Why a pseudonym

The site is curated under the name Dr. Chivas. The content touches religion, consciousness, and contested history — topics that attract strong feelings. The pseudonym exists for safety, not to deceive. The work is a hobby and the curator is real; the name is not.

What this site is not

  • Not peer-reviewed scholarship. Treat it as a reading guide, not a citation in your thesis.
  • Not neutral. The perspective leans toward the questions mainstream academia walks around.
  • Not a business. No revenue, no ads, no newsletter, no course. (If someone tells you there is one, it’s a scam.)
  • Not the last word on anything. Every page links out so you can check the sources yourself.

Just curiosity and pattern recognition. Built March/April 2026