The EA Road to AI
October 10, 2025
Let’s start with the caricature: Effective Altruism (EA) is a movement of cold-hearted rationalists trying to boil down compassion into a spreadsheet.
This picture isn’t entirely wrong, but it confuses the tools with the mission. The movement’s origin lies not in a spreadsheet, but in a simple, powerful moral question.
Philosopher Peter Singer posed a now-famous thought experiment: you see a child drowning in a shallow pond. Saving them is easy, but you’ll ruin your expensive new suit. Do you do it? The answer is obvious. A child’s life is worth more than a suit.
Singer’s point is that we are in this exact situation every day. Children in distant countries die from preventable diseases, and we could save them at a relatively small cost to ourselves. This is the moral heart of Effective Altruism: if you can help others at little cost to yourself, you ought to.[1]
Singer’s question gained serious traction in the 2000s, driving a focus on global health charities where the impact of a dollar could be rigorously measured. This is the “effective” part: a commitment to using evidence and reason to guide our altruistic impulses.
From Philosophy to Practice: Earning to Give
So, how do you act on this? The first practical answer was a strategy called “Earning to Give.” Popularized by organizations like 80,000 Hours, the idea is straightforward: to fund life-saving work, pursue a high-paying career and donate a significant portion of your income to the most effective charities.[2] It became one of the movement’s most prominent early paths: a direct, measurable way to turn your career into saved lives.
But this is only half the story. The focus on measurable outcomes leads to a deeper, more complex question that cracks the caricature wide open: What about the unquantifiable? What was the value of discovering calculus or inventing the transistor? Their long-term impact on human well-being has been astronomical. This question reveals the true intellectual heart of modern EA—a rigorous attempt to do good in the face of profound uncertainty.
The Myth of Precise Calculation
The first thing a modern EA practitioner should admit is that calculating a precise value for long-term, speculative projects is impossible. We are all susceptible to motivated reasoning and wishful thinking. Anyone who claims to have a precise dollar value for foundational mathematics research is fooling themselves.
So why bother with an optimization framework? Because its real value isn’t providing a perfect answer; it’s structuring our thinking and challenging our biases. It forces us to make our assumptions, beliefs, and goals explicit.
Let’s take a simplified example. Imagine choosing between two paths:
- Path A: Becoming a doctor in a developed country.
- Path B: Researching a novel malaria vaccine.
A rigid, calculative approach would fail. The EA framework, however, forces you to ask the right questions to determine your marginal impact—the net positive contribution you make beyond what would have happened anyway. The key concept here is replaceability. If you don’t take that spot at medical school, it will almost certainly be filled by someone nearly as qualified. The net difference you make is your performance above that replacement-level doctor. Key questions include:
- Scale: How big is the problem? As a doctor, you will help hundreds of people. A successful vaccine could help millions. The scale is immense.
- Tractability: Can you make progress? The path to becoming a doctor is well-defined. The path to a new vaccine is fraught with failure.
- Neglectedness: Is anyone else working on it? If the vaccine research is underfunded and overlooked, your contribution could be pivotal, because the work might not happen at all without you; nearly all of your impact is then counterfactual rather than replaceable. Neglectedness is often the strongest signal of high marginal impact.[3]
However, these factors are not a simple formula. A problem of astronomical scale, like the development of safe AGI, can be one of the most impactful areas to work on even if it is not strictly neglected. The sheer size of the potential outcome means even a small contribution can be decisive.
This process doesn’t spit out a number. It illuminates the trade-offs and forces an honest comparison, steering you away from choices based on prestige or familiarity.
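To see how the framework structures thinking without pretending to precision, here is a minimal sketch of such a comparison in code. It is not how any EA organization actually models careers; every number in it (the replaceability figures, the success probability, the scale) is an assumption invented for this example.

```python
# Purely illustrative sketch: every number below is an invented assumption,
# written down only so that it can be inspected and challenged.

def counterfactual_impact(lives_per_year, years, replaceability):
    """Impact beyond what would have happened anyway.

    replaceability: probability (0 to 1) that someone nearly as capable
    would have filled the role if you hadn't.
    """
    return lives_per_year * years * (1 - replaceability)

# Path A: doctor in a developed country (assumed to be a highly replaceable role).
doctor = counterfactual_impact(lives_per_year=3, years=40, replaceability=0.95)

# Path B: malaria vaccine research (assumed low replaceability, low odds, huge scale).
p_success = 0.01            # assumed chance your line of research pans out
lives_if_success = 200_000  # assumed lives attributable to your contribution
replaceability = 0.30       # assumed chance someone else makes the discovery anyway
vaccine = p_success * lives_if_success * (1 - replaceability)

print(f"Path A, counterfactual lives saved: {doctor:.0f}")
print(f"Path B, expected counterfactual lives saved: {vaccine:.0f}")
```

The printed numbers are not the conclusion. The value of the exercise is that each assumption is now an explicit, challengeable claim rather than a hidden intuition.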
From Total to Partial Ordering
The spreadsheet caricature suggests we can rank all possible good deeds from best to worst—a total ordering. But it’s impossible to definitively say if curing cancer is more valuable than creating a unified model of physics.
A more realistic goal is a partial ordering. We can’t rank everything, but we can identify clear relationships. This is where we separate the value of an outcome from its probability. EAs often think in terms of expected value, which is calculated as:
Expected Value = (Value of Outcome) × (Probability of Outcome)
For example, a 10% chance of winning $1,000 has an expected value of $100 (0.10 * $1000). In an altruistic context, we might compare funding a project that will definitely save 10 lives (Value = 10, Probability = 1.0) with one that has a 1% chance of saving 2,000 lives (Value = 2000, Probability = 0.01). Their expected values are 10 and 20 lives saved, respectively.
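As a quick check of that arithmetic, here is a minimal sketch; the two projects and their numbers are the hypothetical ones from the example above, not real interventions.

```python
def expected_value(value, probability):
    """Expected value of an uncertain outcome: value times probability."""
    return value * probability

print(expected_value(1_000, 0.10))  # 100.0 dollars
print(expected_value(10, 1.0))      # 10.0 lives saved (certain project)
print(expected_value(2_000, 0.01))  # 20.0 lives saved (long-shot project)
```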
We might not be able to compare curing cancer and advancing physics, but we are pretty sure that neither is particularly relevant if a nuclear war breaks out tomorrow. The challenge, of course, is that we have massive uncertainty about the probabilities of these novel, long-term risks. This uncertainty doesn’t invalidate the framework; it leads us to the necessity of a diversified portfolio.
This mindset led the community to diversify. It began by exploiting known, high-return opportunities, like funding insecticide-treated bed nets. But a smart strategy must also explore high-risk, high-reward frontiers. Just as a financial advisor recommends a portfolio because no one can predict which stock will soar, this approach manages profound uncertainty about the future. Many of these projects will fail, but the few that succeed could change the world.
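A rough way to see why the portfolio logic holds is a small simulation. In the sketch below, every success probability is an invented assumption; the comparison is between backing only the single most promising long shot and backing several at once.

```python
import random

random.seed(0)

# Assumed success probabilities for four hypothetical high-risk projects.
success_probs = [0.02, 0.01, 0.05, 0.03]

def chance_of_any_success(funded, trials=100_000):
    """Monte Carlo estimate of the chance that at least one funded project succeeds."""
    hits = sum(
        1
        for _ in range(trials)
        if any(random.random() < success_probs[i] for i in funded)
    )
    return hits / trials

print(f"Back only the single best bet: {chance_of_any_success([2]):.1%}")
print(f"Back all four long shots:      {chance_of_any_success([0, 1, 2, 3]):.1%}")
```

Each individual bet is still likely to fail, but spreading across several roughly doubles the odds that at least one of them pays off.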
The Logical Path to AI
This search for upstream, high-leverage opportunities leads directly to the movement’s focus on artificial intelligence. It’s not an arbitrary obsession. It’s the logical conclusion of a powerful heuristic: the ultimate problem-solving tool is intelligence itself.
Consider this chain of reasoning:
- A new drug could cure one disease.
- A simulation tool could help design drugs for many diseases.
- A breakthrough in general intelligence could accelerate all scientific discovery, including the creation of those simulation tools.
This is why AI safety is not just another cause for EA; it is potentially the most upstream cause. A beneficial, aligned AGI could help solve nearly every other problem we face. A misaligned AGI could render all other progress meaningless. It is the ultimate high-stakes, high-leverage, and—for now—critically important problem.
The ‘Lost Einstein’ and the Portfolio Solution
But this relentless focus on the ‘upstream’ runs into a powerful counter-argument. What if the child drowning in the pond is the next Einstein? By focusing all our efforts on abstract future risks, are we neglecting the very people who have the potential to solve them?
This is the ‘Lost Einstein’ argument, and it represents a crucial debate within longtermist thinking. It reminds us that saving a life today has both immediate value and immense potential value. A world where millions die from preventable disease is a world poorer in talent, innovation, and moral leadership.
This argument doesn’t invalidate the longtermist framework; it enriches it by proposing a different path to a better future. It suggests that one of the best ways to solve future problems is to increase the number of healthy, educated, and empowered people in the world today. This is a longtermist strategy rooted in broad human potential, and it provides a powerful justification for continuing to fund global health.
This reveals a key strategic disagreement: is it better to directly tackle specific future risks like AI, or to broadly invest in humanity’s capacity to solve problems? The ‘Lost Einstein’ argument doesn’t lead us away from the future. It leads us to a balanced portfolio approach.
Conclusion
So, does the road of EA lead to a world where we ignore the drowning child to focus on robots? No. It leads to a more sophisticated, and far more difficult, conclusion.
Imagine the real scenario isn’t just one child in one pond. Imagine you are standing on a shore. An unknown number of children are drowning just offshore, crying for help. At the same time, you can see a bus careening down a nearby highway, clearly on a path to crash into a crowd, though you don’t know exactly when. And in the distance, a volcano that has been dormant for centuries is beginning to rumble—a threat that could destroy the entire region, but whose eruption is deeply uncertain.
What do you do?
This is the impossible juggling act of modern altruism. The drowning children represent the immediate, undeniable suffering we can alleviate today through global health. The bus is a preventable, medium-term catastrophe like a pandemic or great power conflict. The volcano is the uncertain, existential threat of misaligned AI.
There is no simple answer. You can’t be in all places at once. The modern EA movement’s conclusion is that we must build a diversified portfolio of good. Some must continue to pull the children from the water—the moral blue-chips of our time. Others must dedicate themselves to stopping the bus. And some must work to understand and mitigate the volcano. It’s an admission that we must walk and chew gum at the same time, because the future of all the children, not just the ones in the pond, depends on it.
1. Singer, Peter. “Famine, Affluence, and Morality.” Philosophy & Public Affairs, vol. 1, no. 3, 1972, pp. 229–43.
2. MacAskill, William. “Earning to Give.” 80,000 Hours, 2012, https://80000hours.org/articles/earning-to-give/.
3. MacAskill, William. Doing Good Better: How Effective Altruism Can Help You Make a Difference. Guardian Books, 2015.