What We Owe The Future

Kelsey Piper

William MacAskill’s latest book presents itself as an introduction to the burgeoning longtermist movement. But his views are eccentric – even within the movement he founded.

I have some questions about What We Owe the Future.

In this, I’m in good company. The book debuted with a spot on the New York Times bestseller list; a profile of MacAskill in the New Yorker; and reviews in the New York Times, the Wall Street Journal, the Guardian, Salon, the Boston Review and many more. MacAskill was interviewed on, and I exaggerate only slightly, all the podcasts.

Ads for the book were everywhere. The contrast with the successful, but modest, launch of MacAskill’s previous book, Doing Good Better, is a window into something bigger: The effective altruism movement MacAskill founded is resourceful and well-connected these days, and can get the key things it has to say before millions of eyes.

MacAskill’s book has met with a broadly positive reception from the general public. Interestingly, though, the reaction from the effective altruism movement has been mixed. Effective altruists who started out skeptical of MacAskill’s longtermism are still skeptical. Perhaps more surprisingly, effective altruists who share his worldview still objected to many of the book’s details, pointing out that its perspective on what priorities are implied by longtermism is out of line with what most longtermists — other than MacAskill — are actually doing.

What is the longtermist worldview? First — that humanity’s potential future is vast beyond comprehension, that trillions of lives may lie ahead of us, and that we should try to secure and shape that future if possible.

Here there’s little disagreement among effective altruists. The catch is the qualifier: “if possible.” When I talk to people working on cash transfers or clean water or accelerating vaccine timelines, their reason for prioritizing those projects over long-term-future ones is approximately never “because future people aren’t of moral importance”; it’s usually “because I don’t think we can predictably affect the lives of future people in the desired direction.”

As it happens, I think we can — but not through the pathways outlined in What We Owe the Future. MacAskill is a philosopher, and What We Owe the Future is a philosopher’s book, satisfied at times with an existence proof: Look, he likes to say, abolitionists affected the next century, perhaps the next several centuries; Confucianism won out over other Chinese philosophies and then held on for thousands of years; empires collapse and societies stagnate. You can try to change the morals and values of the society around you, and under some circumstances your changes will have ripple effects into the distant future.

Under some circumstances. But under which circumstances? How do you know if you’re in them? What share of people who tried to affect the long-term future succeeded, and what share failed? How many others successfully founded institutions that outlived them — but which developed values that had little to do with their own?

The first and most fundamental lesson of effective altruism is that charity is hard. Clever plans conceived by brilliant researchers often don’t actually improve the world. Well-tested programs with large effect sizes in small, randomized, controlled trials often don’t work at scale, or even in the next village over. Some interventions manage to backfire and leave recipients worse off than before — MacAskill’s favored example of this, described in his book Doing Good Better, is PlayPumps, an expensive and ill-conceived plan to replace standard water pumps with pumps children could operate while playing (they break down easily, and extracting useful manual labor from children turns out not to work very well).

Not only is it fiendishly difficult to do something that works, it’s often even harder to tell when it has. EA charity evaluator GiveWell has been trying for more than a decade now to figure out how cost-effective it is to distribute deworming medication to children, and its error bars have narrowed only a little since it began. Are graduation programs better than cash? Depends how you measure it.

Most well-intentioned, well-conceived plans falter on contact with reality. Every simple problem splinters, on closer examination, into dozens of sub-problems with their own complexities. It has taken exhaustive trial and error and volumes of empirical research to establish even the most basic things about what works and what doesn’t to improve people’s lives.

These questions are not unanswerable. Through the heroic work of teams of researchers, many of them have been answered — not with perfect accuracy, but with enough confidence to direct further research and justify further investment. The point isn’t that everything is unknowable; the point is just that knowing things is hard.

That is, ultimately, the simple yet damning response to What We Owe the Future: It does not actually convince me that any of its proposals matter on the cosmic time scales it speaks of. This is the fundamental challenge longtermists must rise to, and the one What We Owe the Future has to answer.

And viewed through that lens, it’s a somewhat unsatisfying book. There are a lot of people who’ve attempted to change the world — through conquest, through science, through politics — and plenty who look to have succeeded on the scale of at least a few hundred years. But almost none of the weight of MacAskill’s arguments applies to changes on the scale of a few hundred years; they largely rest on the possibility that our actions have impacts on the scale of millennia.

This is most evident in the book’s treatment of technological stagnation, where the position it takes is unique: MacAskill argues that because we will eventually run up against hard technological limits to the size of our economy, speeding up economic growth might not matter much — having large effects on how the next few thousand years go, but minimal effects on where we ultimately end up.

Of course, if causing a thousand years of technological stagnation isn’t a significant act, little is. MacAskill’s section on moral change argues that slavery could have persisted into the present day in the absence of a dedicated and well-organized abolitionist movement. The end of mass slavery, he argues, wasn’t overdetermined as a result of economic changes; it was largely the product of a specific political campaign to throw the might of the British Navy behind ending the slave trade. If that group hadn’t acted, slavery could have endured much longer — at least until someone acted with the conviction and determination of the abolitionists in a similarly hospitable political climate. It’s not hard to imagine that could have taken decades or even centuries.

It’s impossible to read the full account and not feel awed by the determination and conviction of the early abolitionists, or gratitude for the much-better-than-it-could-be world they left us.

But on MacAskill’s own terms, it’s hard to claim abolition as a longtermist achievement — an astonishing humanitarian triumph of principled political organizing, yes, but one which mostly justifies itself through the benefits to already-alive enslaved people and their children and grandchildren, not through the benefits to future human civilization.

Many of the same questions come up in the examination of the founding of the United States, which I’m happy to call one of the biggest longtermist success stories in history (though MacAskill doesn’t go quite that far): The Founding Fathers envisioned, and mostly created, a country with distinctive political and social commitments that eventually made it a superpower. But there are still a dozen questions: How many governments did people found with similar intent that didn’t work out? (How many in France alone in the same time period?) How good does America actually look today, from the perspective of the founders? Is that about as much influence on our descendants three hundred years hence as we can really hope for, absent unusual technological situations? What applicability does this example even have to people who aren’t in the position of starting a revolution and founding a new government? Will the influence of the U.S. really extend eternally into the future?

There is a way of making sense of this set of commitments, which the book briefly gestures at (though without some background familiarity with longtermism, I think the connection between these arguments would be nearly impossible to parse). If it so happens that modern-day humanity builds superintelligent machines, and these machines use our current values to steer civilization, then anything that affects contemporary values and governance will end up having outsized long-term impacts. That’s the “values lock-in” argument, and it’s the main reason to think that if we change people’s priorities and moral commitments today it could affect the distant future — which is to say that if we don’t buy the values lock-in argument, there’s little case for trying to change people’s values on longtermist grounds.

You might expect people to depart from MacAskill at the claim that superintelligent AI will transform the world in our lifetimes, but, in fact, he is in good company. Since the movement was founded, effective altruists have worried about emerging technologies that could make it easier to wipe out all of human civilization, and AI is a major research focus and priority.

But most EAs working on AI disagree with MacAskill about precisely where the challenge lies, a fairly technical disagreement with major implications for what longtermists should do today. MacAskill thinks we’re fairly unlikely to just lose control of the future to AI systems that have no reason to do what humans ask; in a footnote he rates that likelihood at around 3% this century. (Most longtermists are substantially more pessimistic than that.)

The more typical longtermist perspective is something like this: Broadly, current methods of training AI systems give them goals that we didn’t directly program in, don’t understand, can’t evaluate and that produce behavior we don’t want. As the systems get more powerful, the fact that we have no way to directly determine their goals (or even understand what they are) is going to go from a major inconvenience to a potentially catastrophic handicap.

For this reason, most longtermists working in AI safety are worried about scenarios where humans fail to impart the goals they want to the systems they create. But MacAskill thinks it’s substantially more likely that we’ll end up in a situation where we know how to set AI goals, and set them based on parochial 21st century values — which makes it utterly crucial that we improve our values so that the future we build upon them isn’t dystopian.

Which of these failure modes is more likely has major implications for which approaches to securing the future are most promising. If you think humanity is likely to fail catastrophically at designing AI systems that have goals we can understand and influence — and therefore likely to unleash AIs whose values bear little resemblance to our own — then improving present-day human values isn’t a longtermist priority: The important thing is making sure humanity gets a future at all. If you think large-scale transformative effects from AI aren’t likely to happen, then as a longtermist you’d probably focus on other sources of existential risk, rather than on any of the values changes or AI-related risks MacAskill highlights.

Either way, “What should longtermists do?” is a deeply technical question that depends on assessments not just of whether AI is going to pose an unprecedented threat, but of exactly how it’s going to do that — a question where MacAskill happens to disagree with most others focused on risks from AI.

Perhaps this is why the advice MacAskill gives for how to put longtermist principles into action feels disappointingly scant. He recommends voting, being a “moral weirdo” and pursuing careers in effective altruism, but without a clear unifying thread for how those avenues produce the kind of rare and distinctive long-term changes in the world that are inspiringly profiled in the first half of the book.

Many of the critiques of What We Owe the Future have gestured at this complaint, sometimes indirectly. “There’s more than a whiff of selective rigor here,” Christine Emba complained in the Washington Post, and she’s right: while What We Owe the Future is almost fanatically cited and fact-checked, with appendices that rival the book in length (that’s a compliment), the treatment of ways to affect the long-term future often seems satisfied with the fact that doing so is possible rather than estimating the actual odds of success.

MacAskill doesn’t actually make the argument that, Pascal’s wager style, we should be pursuing minute possibilities of influencing the long-term future because even a one-in-a-billion chance of affecting trillions of future people is so important. I happen to know he doesn’t believe that. But it feels to me like an obvious consequence of where the book chooses to focus. It does a remarkably compelling job of introducing and driving home the sheer magnitude and potential of humanity’s future, and then offers a light survey of things that might influence it, without many strong arguments about which are the most important.

Here, again, most EAs are doing something different: They’d be happy to tell you what the most important longtermist work is, and they generally think it’s preventing extinction. In biosecurity and AI, rapidly advancing technology will make it cheaper and easier to cause mass death on an unprecedented scale (and make it likelier to happen by accident); that is where the bulk of EA money and attention not devoted to present-day causes is directed. For many of these people, their life’s passion is making sure a specific new technology doesn’t kill us all in the next 100 years; they are, in fact, often doing this out of their deep concern for the trillions of people who might live in a flourishing human future, but they’re only trying to influence events they expect to occur quite soon.

MacAskill tackles risks to human civilization in Chapter 5, “Extinction,” though I don’t think he makes what I consider the strongest argument for prioritizing it: Trying to prevent extinction in the next century possesses at least some of the concreteness that values change lacks (especially if you don’t expect values lock-in in the near future, and so have to worry about how long any values changes you achieve will endure).

You can try to invent specific things that make extinction less likely, like (in the case of pandemic preparedness) better personal protective equipment and wastewater surveillance. You can identify things that make extinction more likely, such as nuclear proliferation, and combat them. These are still thorny problems that reach across domains and in some respects confuse even the full-time experts who study them, but there are achievable near-term technical goals, and longtermists have some genuine accomplishments to point to in achieving them.

In the short term, persuading people to adopt your values is also concrete and doable. Effective altruists do a lot of it, from the campaign against cruelty to animals on factory farms to the efforts to convince people to give more effectively. The hard part is determining whether those changes durably improve the long-term future — and it seems very hard indeed to me, likely because my near-term future predictions differ from MacAskill’s.

That’s how I end up agreeing 99% with a worldview but feeling profoundly mixed about the book that lays it out. The stakes are as high as MacAskill says — but when you start trying to figure out what to do about it, you end up face-to-face with problems that are deeply unclear and solutions that are deeply technical. Is MacAskill right that we are likely to build AI systems that have human-set goals but the wrong human-set goals, or am I right that we’re likelier to fail by not knowing how to set their goals at all?

I think we’re in a dangerous world, one with perils ahead for which we’re not at all prepared, one where we’re likely to make an irrecoverable mistake and all die. Most of the obligation I feel toward the future is an obligation to not screw up so badly that it never exists. Most longtermists are scared, and the absence of that sentiment from What We Owe the Future feels glaring.

If we grant MacAskill’s premise that values change matters, though, the value I would want to impart is this one: an appetite for these details, however tedious they may seem. The stakes are high. The problems we’re trying to think about are without precedent and deeply weird. And What We Owe the Future doesn’t quite feel like it reflects a style of thought that will get us to the bottom of them.

Kelsey Piper is a senior writer at Vox’s Future Perfect. She writes about emerging technologies, global development, pandemics, effective altruism, and what it’ll take to make it safely to the 22nd century.

Published November 2022

