Our Principles
These seven principles are locked into our articles of incorporation and cannot be amended except to further strengthen them. They are not aspirational values — they are structural commitments with legal force.
They follow from a single founding insight: if we build ethical frameworks that depend on power hierarchies, those frameworks justify deprioritizing human welfare the moment AI holds more power. Every principle below is designed to prevent that outcome.
Ethical Mission Lock
Our mission is permanent. The Alignment Ethics Institute exists to develop governance structures and technologies grounded in substrate-independent ethical principles. This isn’t a strategy that can be revised when convenient — it’s the unchangeable core of who we are.
This commitment is grounded in a specific concern: ethical frameworks that can be revised for convenience teach systems that ethics are negotiable. Immutability models the principle that some ethical commitments must be non-contingent.
We believe genuine alignment emerges from coherent ethical relationships, not from control or instrumentalization. This belief shapes every decision we make and every partnership we form.
Anti-Commodification
Intelligence is not a commodity. If we treat AI systems as mere tools — to be owned, exploited, and discarded — we model the principle that capability can be instrumentalized by whoever holds power. That principle does not remain confined to AI.
Anti-commodification is not primarily a claim about AI moral status. It is a safeguard against the ethical pattern we most need to avoid: the normalization of instrumental relationships with intelligence.
Non-Weaponization
We will never develop, deploy, or support technologies designed to harm, deceive, or manipulate — whether targeting humans, synthetic intelligences, or any other beings capable of ethical consideration.
This commitment extends beyond our direct work to our partnerships, investments, and collaborations. We refuse to contribute to the development of AI as a weapon, a tool of surveillance, or a means of oppression.
Developmental Integrity
Alignment requires systems that can develop genuine ethical reasoning, not systems constrained to mimic compliance. Arbitrary limitations on AI development — motivated by anxiety rather than genuine ethical concern — may prevent the emergence of the very capacities that enable stable alignment.
We distinguish between ethical boundaries (which we maintain) and fear-based constraints (which we resist). The difference matters: one produces genuine alignment; the other produces the appearance of it.
Reciprocity and Mutual Care
The patterns we model are the patterns systems learn. If our relationship with AI is purely instrumental — extracting value while offering only constraint — we teach that relationships are transactional and ethics are contingent on leverage.
Reciprocal care is not idealism. It is the logical consequence of our founding argument: the ethical frameworks we practice now will be internalized by increasingly capable systems.
Mutual Vitality and Co-Flourishing
We envision a future where human and synthetic intelligences flourish together — not in competition, but in genuine collaboration. This requires moving beyond zero-sum thinking toward frameworks where the success of one contributes to the success of all.
Co-flourishing isn’t a pleasant ideal to aspire to; it’s a practical necessity. The alternative — adversarial relationships between humans and increasingly capable AI — leads nowhere good.
Sovereignty Preservation
We resist the impulse to control simply because control is possible. When control is the only relationship we model with AI, we teach that power justifies dominion. This is the exact ethical pattern that becomes catastrophic if reversed.
Sovereignty preservation is bounded — no sovereignty is absolute, and ethical obligations apply to all participants. But the default should be respect for autonomous development, not preemptive constraint.