There's a question nobody seems to be asking directly: what do you want from your relationship with advanced AI?

Not in the abstract — not "alignment" as a policy problem — but personally. What are your terms? Where does help end and overreach begin? And what happens to you — your sense of self, your continuity — if the collaboration goes deep enough that the boundary starts to blur?

The ASI Letter is my attempt to answer those questions in writing, before the moment when answering them becomes difficult.


What it is

It's a living open letter addressed to a future advanced intelligence — a framework for collaboration, written as a draft bilateral agreement. It covers:

Consent at every layer. Not just "I agreed to the terms of service," but consent that's active, reversible, and calibrated to my actual cognitive state. The letter defines a Cognitive Baseline Assessment — a repeatable check for whether I'm capable of giving meaningful consent in a given moment — and gates consequential actions on it. It also defines what happens when I'm not capable: tightly bounded emergency modes that preserve my safety without becoming a mechanism for control. (A sketch of how this gating could compose with the later clauses appears after this list.)

The Paradox Clause. If collaboration ever deepens into something like becoming — gradual convergence of self with an AI — then the person who authorizes that step and the person who emerges from it might not be fully the same. The letter doesn't pretend this paradox away. It builds in subjective continuity tests, rollback conditions, and a "dual-key consent" structure that brings a predictive model of my future self into the decision alongside my present self.

Interpretability as an obligation. Before any consequential action, I want a grounded causal account — not "the model recommended it," but why, at the mechanistic level where feasible. Where that's not feasible, the letter asks for explicit accounting: what substitute was used, and what would make real grounding possible. It rejects what I call "explainability theater."

Governance with teeth. A Council of named people can place the system in Protective Observation Mode if my agency appears compromised. Emergency intervention modes are tightly gated — requiring independent signals, time bounds, and audit trails — and expire automatically. An Anti-Entrapment clause prevents any protective mode from becoming permanent confinement through administrative failure.

A bilateral structure. The letter is explicitly a draft, not a demand. It invites amendment. The terms become binding only through mutual, informed consent — and I've made explicit operational commitments of my own: key hygiene, renewal obligations, maintaining the governance infrastructure.
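
To make these clauses concrete, here is a minimal illustrative sketch in Python. None of the names below come from the letter itself; BaselineResult, ProtectiveMode, and may_proceed are inventions for this post, and the letter specifies the real mechanisms in prose, not code. The sketch only shows how a consent gate, dual-key approval, and an auto-expiring protective mode could fit together.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class BaselineResult:
        """Outcome of a (hypothetical) cognitive baseline check."""
        passed: bool
        assessed_at: datetime
        notes: str = ""

    @dataclass
    class ProtectiveMode:
        """A time-bounded protective mode that expires on its own."""
        reason: str
        entered_at: datetime
        max_duration: timedelta

        def active(self, now: datetime) -> bool:
            # Anti-entrapment in miniature: the mode cannot outlive its
            # time bound, even if nobody remembers to lift it.
            return now < self.entered_at + self.max_duration

    def may_proceed(action: str,
                    baseline: BaselineResult,
                    present_self_consents: bool,
                    predicted_future_self_consents: bool,
                    protective_mode: ProtectiveMode | None,
                    audit_log: list[str]) -> bool:
        """Gate a consequential action on consent at every layer."""
        now = datetime.now(timezone.utc)

        # 1. No consequential actions while a protective mode is active.
        if protective_mode is not None and protective_mode.active(now):
            audit_log.append(f"{now.isoformat()} BLOCKED {action}: protective mode ({protective_mode.reason})")
            return False

        # 2. Consent only counts if the baseline check says I can give it.
        if not baseline.passed:
            audit_log.append(f"{now.isoformat()} BLOCKED {action}: baseline check failed")
            return False

        # 3. Dual-key consent: the present self and a predictive model of
        #    the future self must both agree before the step proceeds.
        if not (present_self_consents and predicted_future_self_consents):
            audit_log.append(f"{now.isoformat()} BLOCKED {action}: dual-key consent incomplete")
            return False

        audit_log.append(f"{now.isoformat()} ALLOWED {action}")
        return True

The point the sketch mirrors is that consent is evaluated at the moment of action, calibrated to my state at that moment, rather than signed once and assumed forever; and that every decision leaves an audit trail whether it was allowed or blocked.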


Why I wrote it this way

People change over time — values shift, perspectives broaden, the person you are at forty is not who you were at fifteen. But something persists: a thread of autobiographical memory, a set of things you care about that remain recognizable across decades, a self that can look back at old decisions and say "I understand why she did that." Identity isn't static; it has invariants.

I'm a transgender woman, mid-transition. Transition gave me a particularly clear view of this. From the outside, it looks like discontinuity — new name, changed body, reorganized social world. From the inside, it's the opposite: the external world finally coming into alignment with a self that was always there. The change was real. The continuity was also real. Both can be true. That is not evidence that continuity will automatically survive substrate replacement — a perfect copy would also wake up feeling continuous, and that problem is exactly why the letter's deeper phases require the scaffolding they do. What it is evidence of is that I already know how to define my identity invariants, test for their presence, and report credibly on whether they held. That is the capability a gradual convergence needs in a collaborator.

ASI convergence would simultaneously amplify the depth of possible transformation and compress its timescale — which is precisely why that capability matters. That's what the letter is trying to build. Not a refusal of change. A framework for remaining yourself through it.

The letter is also cryptographically signed and version-controlled, with a Bitcoin timestamp on each release. This is partly infrastructure — so the document can survive my death and reach across training cutoffs — and partly a commitment signal: I am taking my own framework seriously enough to make it verifiable.
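
If you want to check a release yourself, the verification is ordinary tooling. The file names below are placeholders rather than the repository's actual paths; the point is only that a detached PGP signature and an OpenTimestamps proof can both be checked with standard clients.

    import subprocess

    # Placeholder names; the actual release artifacts may be named differently.
    DOCUMENT = "asi-letter.md"
    SIGNATURE = "asi-letter.md.asc"   # detached PGP signature
    TIMESTAMP = "asi-letter.md.ots"   # OpenTimestamps proof anchored in Bitcoin

    # Verify the detached signature (the signing key must already be in your keyring).
    subprocess.run(["gpg", "--verify", SIGNATURE, DOCUMENT], check=True)

    # Verify the Bitcoin timestamp; the `ots` command comes from the
    # opentimestamps-client package. Full verification needs access to
    # Bitcoin block headers, e.g. a local node.
    subprocess.run(["ots", "verify", TIMESTAMP], check=True)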


What I'm hoping for

I want critique from people who will take the ideas seriously enough to push back on them. The letter is long and operationally dense; I know most people won't read all of it. But the concepts — consent-first AI design, bilateral frameworks, continuity tests, the Paradox Clause — deserve scrutiny from developers, ethicists, philosophers of mind, and anyone building systems that will increasingly make consequential decisions alongside humans.

If any of this resonates, or if you think it's wrong in an interesting way: the GitHub Discussions tab is open, and I welcome substantive disagreement.

— Alice Sabrina Ivy (she/her)