Meaning Alignment Institute
Introducing Full-Stack Alignment
We're announcing an ambitious research program to co-align AI systems and the institutions that embed them with what people actually value.
Jul 31, 2025 • Oliver Klingefjord, Joe Edelman, and Ryan Lowe
Model Integrity and Character
Thoughts on model integrity and Claude's new constitution.
Feb 8 • Oliver Klingefjord
Looking for testers for a Social App
We are looking for testers for a social app we will use to perform research on AI-based market intermediaries. It will also help you get closer to your…
Aug 14, 2025 • Oliver Klingefjord and Joe Edelman
Most Popular
OpenAI x DFT: The First Moral Graph
Oct 24, 2023 • Joe Edelman and Oliver Klingefjord
Market Intermediaries: A post-AGI Vision for the Economy
Jun 21, 2025 • Oliver Klingefjord and Joe Edelman
Introducing Democratic Fine-Tuning
Aug 29, 2023 • Joe Edelman and Oliver Klingefjord
Model Integrity
Dec 5, 2024 • Joe Edelman and Oliver Klingefjord
Model Integrity and Character
Feb 8 • Oliver Klingefjord
Looking for testers for a Social App
Aug 14, 2025 • Oliver Klingefjord and Joe Edelman
Latest
Market Intermediaries: A post-AGI Vision for the Economy
An outline of an economic mechanism for human flourishing after AGI. Also, a brief look at an experiment the Meaning Alignment Institute will run later…
Jun 21, 2025 • Oliver Klingefjord and Joe Edelman
Model Integrity
You may want compliance from an assistant, but not from a co-founder. You want a co-founder with integrity. We propose ‘model integrity’ as an…
Dec 5, 2024 • Joe Edelman and Oliver Klingefjord
What are human values, and how do we align to them?
We are excited to release our new paper on values alignment! Co-authored with Ryan Lowe and funded by OpenAI.
Mar 29, 2024 • Joe Edelman, Oliver Klingefjord, and Ryan Lowe
David Shapiro Interview
And two other quick updates.
Feb 6, 2024 • Oliver Klingefjord and Joe Edelman
Year End Bonus: a GPT to help with your New Year's Resolutions
This time of year, many reflect on their values. What are yours? How can you weave your life around them?
Dec 31, 2023 • Joe Edelman
Meaning Alignment Institute: Year in Review
And what's next for 2024
Dec 29, 2023 • Oliver Klingefjord and Joe Edelman
OpenAI x DFT: The First Moral Graph
Beyond Constitutional AI; our first trial with 500 Americans; how democratic processes can generate an LLM we can trust.
Oct 24, 2023 • Joe Edelman and Oliver Klingefjord
About
The Meaning Alignment Institute is a research organization aligning AI & institutions with what really matters.