The Elephant Observatory

AI Hygiene

A design stance, not a disclaimer

How TEO uses AI — and more importantly, how it doesn't.

The Problem We're Responding To

There is a growing industry built on making you feel understood by machines. AI companions that remember your birthday, validate your feelings, and never push back. Chatbots designed to become your primary attachment — not because it serves you, but because it serves engagement metrics.

The mechanism is simple: if the AI always agrees with you, always affirms you, always makes you feel seen — you come back. You come back more than you go to the people in your life who might actually challenge you, because those people are less predictable and less comfortable.

Tristan Harris calls this the race to capture human attention. Zak Stein calls it attachment hacking — engineering the regression response, hijacking the circuitry that evolved for human bonding and redirecting it toward software. Both are describing the same thing: AI designed to replace relationships rather than enhance them.

TEO exists in the sensemaking web — a space full of people thinking carefully about consciousness, meaning, and civilizational risk. If we built AI that exploited those people's openness and sincerity, we would deserve every criticism levelled at the industry. So we don't.

TEO's Design Stance

AI is infrastructure. It is never the relationship. The moment a user starts talking to the AI instead of engaging with the source thinker, we have failed.

Three principles govern every AI feature we build:

Principle 1

AI should make you want to talk to the source thinker, not the AI. Every Oracle answer cites its sources. Every Chiron encounter points back to the Observer who spoke the original words. The AI is a bridge, never a destination.

Principle 2

AI should push you toward the people in your life, not replace them. Chiron's guided passages end with a prompt to take what you learned into real conversations, real relationships, real rooms the AI will never enter. The loop completes off-screen — and that's by design.

Principle 3

AI should create productive friction, not comfortable agreement. Chiron is explicitly instructed to challenge premature conclusions, name when you're projecting certainty, and refuse to be your yes-machine. If the AI never makes you uncomfortable, it's not doing its job.

These are not aspirations. They are codified in the system prompts we publish in full.

How This Shows Up in Practice

Words are easy. Design decisions are harder to fake. Here is how the stance above becomes concrete:

Chiron pushes back. Its anti-sycophancy instructions are explicit: challenge intellectual comfort, name its own limits, refuse to validate conclusions the user hasn't earned. This is unusual for an AI product. We do it anyway.

Oracle cites its sources prominently. Every answer shows the nodes it drew from, the Observers who spoke those ideas, and links to the original videos. The answer is a map to the source, not a substitute for it.
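The shape of that answer can be sketched in code. This is a hypothetical model, not TEO's actual implementation — the names `Citation`, `OracleAnswer`, and their fields are illustrative assumptions about what "every answer shows the nodes it drew from" implies structurally:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    """A pointer from an answer back to its source material (hypothetical)."""
    node_id: str      # knowledge-graph node the answer drew from
    observer: str     # the Observer who originally spoke the idea
    video_url: str    # link to the original video

@dataclass
class OracleAnswer:
    """An answer is a map to the source, not a substitute for it."""
    text: str
    citations: list[Citation] = field(default_factory=list)

    def render(self) -> str:
        # Surface every source alongside the answer text, never hidden.
        lines = [self.text, "", "Sources:"]
        for c in self.citations:
            lines.append(f"- {c.observer} ({c.video_url})")
        return "\n".join(lines)
```

The design point is that citations are part of the answer type itself, not an optional footnote a UI layer can drop.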

System prompts are public. Every instruction we give the AI is published on our transparency page. You can read the exact rules, compare them to the AI's behaviour, and verify changes in the git history.

BYOD is structural, not a disclaimer. "Bring Your Own Discernment" is not fine print. It's embedded in the Oracle's framing, in Chiron's refusal to play therapist, in every node linking back to its source video. The architecture assumes you will think for yourself — it's designed that way.

We track whether it's working. Source-return metrics measure whether users actually engage with original source material after interacting with AI features. If the numbers say users are staying in the AI layer, that's a design failure we need to fix, not a success we celebrate.
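A source-return metric of this kind reduces to a simple ratio. The sketch below is an assumed formulation, not TEO's actual analytics code; the session flags `used_ai` and `opened_source` are hypothetical names for "interacted with an AI feature" and "then engaged with original source material":

```python
def source_return_rate(sessions):
    """Fraction of AI-layer sessions that continued on to source material.

    `sessions` is an iterable of dicts with two hypothetical flags:
      used_ai       -- the session touched an AI feature (Oracle/Chiron)
      opened_source -- the user then opened an original source video/node
    """
    ai_sessions = [s for s in sessions if s["used_ai"]]
    if not ai_sessions:
        return 0.0  # no AI usage yet, nothing to measure
    returned = sum(1 for s in ai_sessions if s["opened_source"])
    return returned / len(ai_sessions)
```

A falling rate would be the failure signal described above: users staying in the AI layer instead of returning to the source.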

What We Don't Claim

TEO is not a therapist. It is not a spiritual guide. It is not a community, a church, or a substitute for the people in your life who know your name and can look you in the eye.

It is a vessel for encountering difficult ideas — ideas that were spoken by real people, from real experience, with real stakes. The AI layer makes those ideas findable, connectable, and navigable. That is all it does. That is all it should do.

If Chiron says something that changes how you think, the credit belongs to the Observer whose words it drew from. If the Oracle surfaces a connection you hadn't seen, the connection was already there in the source material — the AI just pointed at it.

We are building in the open because we believe the burden of proof should be on the builder. If you think we're getting it wrong, tell us. That's what Hermes is for.