AI Objectives Institute
If you are able to support us as a sponsor, please click here: Donate to AOI
Project: Talk to the City
Our greatest challenge is also our greatest opportunity.
The emergence of artificial general intelligence and advanced AI tools offer an unprecedented opportunity to produce widespread human flourishing, but our current systems do NOT place us on that path.
- Today, AI is more likely to end up harming humanity than to lead to human flourishing
- Malicious state actors, misaligned corporations, and conflicting structural and human interests can all abuse AI products
- It is hard to build safe & aligned AI systems in a world where profit is the main motivator
- Furthermore, there is massive existential risk before AGI is even reached, and AGI will not automatically solve this
Welcome to the AI Objectives Institute
AOI is a non-profit research lab with two components:
- Think Tank
- Alignment, policy and individual & collective flourishing research
- Open-source ethics and alignment review framework
- Product Lab
- Existence proofs & IP aligned with human flourishing
We need to build systems that can take human objectives into account at the individual, collective, and organizational levels.
- What are those objectives?
- The moral and ethical goals and aspirations we hold, as individuals and as a collective, over the long term
- Machines are developing the capability to process these objectives but are currently optimizing towards short-term, exploitative goals
- This is currently an unsolved problem, and the resolution will need human input on top of purely technical AI alignment research.
- It is not enough merely to possess good objectives; one must also be able to pursue good objectives.
- To achieve this:
- We conduct research and build tools that clarify true human objectives, even (and especially) when they're inconsistent or in conflict
- AOI is creating systems that can aggregate and leverage objectives on an individual, societal, and organizational level
- AOI is building tools to enhance human sovereignty, both as an intrinsic good and because we believe it can complement efforts to mitigate misalignment at higher levels of societal organization.
AOI’s working group is composed of 60 people publishing open-source prototypes, research, and papers
- 3 professors and 19 PhDs: academics in AI & safety
- Board members from OpenAI, Anthropic, and DeepMind
- 15 AI/ML researchers, both independent and working at AGI contenders (OpenAI, Anthropic, DeepMind)
- Entrepreneurs, company builders
- Journalists, media
- Psychologists, neuroscientists, cognitive scientists
Our Mission: We want to enter into the AI era with methods to identify objectives supporting humanity’s best interests, and a tactical path to integrate those objectives into AI & other systems.
Current AOI Projects
Talk to the City
- How can AI help us hear the voices of the many?
- Talk to the City is an LLM interface for collective intent alignment that helps governments, companies, and institutions engage with individual voices across a large group.
- Opinions can be sourced in an AI-enabled manner that is efficient, time-smart, qualitatively rich, and comprehensive
- Understand the nuances of consensus and of dissent on any queried issue, instead of settling for a lowest-common-denominator consensus
- Source opinions and qualitative perspectives beyond the standard surveying capabilities currently available to institutional decision-makers
- An open and robust process for distilling many opinions and perspectives into a unified data set
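The distillation step above can be sketched in miniature. This is a hypothetical illustration, not Talk to the City's actual pipeline: a real system would cluster LLM embeddings of responses and summarize each cluster with a language model, but simple word-overlap clustering shows the shape of the process — group similar free-text opinions, then surface one representative voice per cluster along with how many people it speaks for.

```python
# Hypothetical sketch of an opinion-distillation step: group free-text
# responses by similarity and surface one representative per cluster.
# Jaccard word overlap stands in for LLM embedding similarity.
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set, ignoring very short words."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

def jaccard(a: set[str], b: set[str]) -> float:
    """Similarity of two word sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_opinions(opinions: list[str], threshold: float = 0.2) -> list[list[str]]:
    """Greedy single-pass clustering: each opinion joins the first
    cluster whose seed opinion it resembles, else starts a new one."""
    clusters: list[list[str]] = []
    for op in opinions:
        for cl in clusters:
            if jaccard(tokens(op), tokens(cl[0])) >= threshold:
                cl.append(op)
                break
        else:
            clusters.append([op])
    return clusters

opinions = [
    "The city should add more protected bike lanes downtown",
    "We need protected bike lanes so cycling downtown is safe",
    "Property taxes are already too high for homeowners",
    "Homeowners cannot afford another property tax increase",
]
for cl in cluster_opinions(opinions):
    print(f"{len(cl)} voice(s): {cl[0]}")
```

On the four sample responses this yields two clusters of two voices each — one about bike lanes, one about property taxes — preserving both strands of opinion rather than collapsing them into a single average.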
- How do we trust the AI technology entering the markets and our world?
- Launch Observatory is a space where we evaluate AI models and launches based on the objective of human flourishing.
- Ethics review: should it be done?
- Alignment review: is it aligned with human flourishing?
- Public risk review: should it be public?
- Launch Observatory considers all forms of misalignment in the objective function
- Museum of Misalignments: documentation of past failures of systems and AI
- The Eckersley Pledge
- “I will develop AI systems aligned with the objective of human flourishing.”
- Humans are suboptimal at consistently pursuing their own long-term goals, because of myopia, manipulation, and bounded rationality
- These lead to negative outcomes at the individual level, as many fail to achieve their long-term goals
- It also leads to negative outcomes at the social, market, and institutional levels, driving an ecosystem that preys on and reinforces this myopia, manipulation, and bounded rationality
- Our proposal is to approach this both at a theoretical and applied level:
- At the theoretical level, we are exploring formalizations of a “reward function” for humans that distinguish between short-term and long-term objectives, and the trade-offs between them
- At the practical level, we are exploring applications that help individuals clarify, communicate and better pursue their long-term goals:
- On clarification and communication of goals, we are exploring how recent advances in dialog-tuned language models can help run the kind of structured self-discovery process often employed by psychologists and personal coaches.
- On pursuing goals, we are exploring a wide range of possible applications that interface with users across aspects of their daily lives and help them consistently orient toward their long-term goals: educational content, active defense systems, and apps inspired by developmental psychology or psychotherapy.
- Defense of human autonomy from AI manipulation
- Reach out to email@example.com if you are interested.
- Scalable self-development techniques through conversational AI
- Force-multiplier for therapists, leveraging ML to go beyond the “one-hour-a-week” paradigm
- Recommender systems that can take human aspiration and meaning into account
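One textbook way to make the myopia problem above concrete — offered as an illustration, not as AOI's actual formalization — is hyperbolic discounting, a standard model of human time preference. Unlike exponential (time-consistent) discounting, it produces preference reversals: a distant large reward is preferred over a nearer small one, until the small one draws close enough to win. The parameter values below are arbitrary, chosen only to exhibit the reversal.

```python
# Illustrative sketch (not AOI's formalization) of how myopia undermines
# long-term goals: under hyperbolic discounting, preferences reverse as
# rewards draw near; under exponential discounting they never do.

def exponential(value: float, delay: float, rate: float = 0.05) -> float:
    """Time-consistent discounting: V = value * (1 - rate)**delay."""
    return value * (1 - rate) ** delay

def hyperbolic(value: float, delay: float, k: float = 0.5) -> float:
    """Present-biased discounting: V = value / (1 + k * delay)."""
    return value / (1 + k * delay)

SMALL, SOON = 10.0, 1.0    # a small reward arriving soon
LARGE, LATE = 30.0, 10.0   # a larger reward arriving later

# Seen from 20 steps away, even a hyperbolic agent prefers the
# larger, later reward ...
assert hyperbolic(LARGE, LATE + 20) > hyperbolic(SMALL, SOON + 20)
# ... but up close it flips to the small immediate reward (myopia):
assert hyperbolic(SMALL, SOON) > hyperbolic(LARGE, LATE)
# An exponential discounter keeps the same ranking at any distance:
assert exponential(LARGE, LATE) > exponential(SMALL, SOON)
```

A "reward function" formalization that separates short-term from long-term objectives is, in this toy picture, one that detects and corrects for exactly this kind of reversal.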
How can you help?
Financially support AOI and our mission
If you can support us through a donation, please click here: Donate to AOI
- Why fund AOI?
- AOI puts the mission of AI-enabled human flourishing center stage. We are not driven by profit or competitive dynamics.
- We don’t have investors expecting us to monetize fast
- We prioritize privacy concerns without pressure from profit motives or cost cutting
- We don’t tie ourselves to any single AGI player's stack for the sake of investor alignment
- Philanthropic donors can make a huge impact for the future of AI-enabled human flourishing
- Why now?
- We must act with urgency to safeguard our future: we are at the cusp of LLM and AGI development that will fundamentally change our world.
- If only a similar organization had existed in 2008 to proactively address the world that Web 2.0 and social media would create. This time, we are building a strong organization that safeguards the interests of human flourishing in advance of the technological impact of this new era of AI development.
- AOI needs your help now: we have an urgent need to focus on building. Our teammates are currently working unpaid to support AOI’s vision, and we need to keep their talent.
Work with AOI on our projects
- We are looking for product leaders
- With experience and intimate understanding of incentive alignments, misalignments and their consequences
- Who can switch contexts: technical to non-technical, academic to consumer-facing, and across disciplines
- We are looking for domain experts for applications and prototype iterations (especially as safety constraints evolve with the development of more powerful AI)
- Researchers, both AI-focused and economists