Longtermism
The ethical view that positively influencing the long-term future is a key moral priority of our time, given the vast number of future lives at stake.
Related term: Strong Longtermism (a stronger variant holding that effects on the far future are the most important consideration)
Category: Philosophy & Wisdom
Tags: ethics, philosophy, decision-making
Explanation
Longtermism is an ethical perspective and research program that argues we should give significant moral weight to the well-being of future generations and that positively shaping the long-term trajectory of civilization is among the most important things we can do. The view has emerged as a major strand within the effective altruism movement, championed by philosophers like William MacAskill, Toby Ord, and Hilary Greaves.
The core argument rests on several premises:
**The future is vast**: If humanity survives, there could be trillions of future people. Even small improvements to their prospects represent enormous value in aggregate.
**We can influence it**: Our actions today, particularly regarding existential risks and trajectory changes, can meaningfully affect what happens centuries or millennia from now.
**Future people matter morally**: There is no principled reason to discount the well-being of future people simply because they don't yet exist, just as we don't discount the interests of distant strangers.
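The scale premise above is often motivated with simple expected-value arithmetic. The sketch below is a toy illustration only; every number in it is a hypothetical placeholder, not an estimate from the longtermist literature:

```python
# Toy expected-value arithmetic behind the "vast future" premise.
# All figures are hypothetical placeholders chosen for illustration.

future_people = 10**12    # assume one trillion future lives if humanity survives
risk_reduction = 0.0001   # an action that cuts extinction risk by 0.01 percentage points

# In expectation, the action preserves this many future lives:
expected_lives = future_people * risk_reduction
print(f"{expected_lives:,.0f} expected future lives")  # 100,000,000 expected future lives
```

Even a minuscule probability shift yields an enormous expected value at this scale, which is also what drives the "fanaticism" criticism discussed later in this entry.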
Longtermism leads to distinctive practical priorities:
**Existential risk reduction**: Preventing human extinction or permanent civilizational collapse is paramount because these events would eliminate all future potential. Key risks include advanced AI misalignment, engineered pandemics, nuclear war, and extreme climate change.
**Trajectory changes**: Even if civilization survives, its long-term trajectory could be vastly better or worse. Lock-in effects from early decisions about AI governance, values, and institutions could persist indefinitely.
**Moral and institutional progress**: Building institutions, knowledge, and values that help future generations make better decisions.
Critics of longtermism raise several concerns:
**Neglecting present suffering**: The view can provide a convenient justification for deprioritizing present-day suffering.
**Unreliable predictions**: The deep uncertainty of far-future forecasts makes longtermist reasoning unreliable.
**Fanaticism**: Expected-value reasoning about tiny probabilities of enormous outcomes can lead to fanatical conclusions.
**Parochial bias**: The view may reflect the biases of its predominantly affluent Western proponents rather than universal moral truths.
Supporters counter that longtermism doesn't require abandoning near-term priorities, since many interventions (such as pandemic preparedness) benefit both present and future generations, and that our inability to predict the future precisely doesn't mean we can't meaningfully reduce risks to it.