Since ~every major media outlet has recently run some sort of commentary on Effective Altruism in general and “longtermism” in particular, I feel that as the author of a Substack called Future More Perfect, I am morally obligated (ha) to put in my two cents. I’m going to assume that you’re familiar with basic longtermist and EA premises and terms of art, because if you aren’t, you probably don’t care about any of the stuff below anyway.
The claims I want to defend are:
Even if there are enormously more far-future humans than present/near-future humans, it does minimal good to aim specifically at far-future human welfare rather than “just” the welfare of the next few generations.
This is largely because, for most x-risk problems, it’s too hard to know whether an action you take now will increase or decrease x-risk. Not only can you not know the magnitude, you can’t even know the sign.
However, caring about the next few generations should still lead to different world-improvement priorities than most people have, and some of these will likely decrease x-risk as a bonus. One more “conventional” EA-ish example is investment in disaster resilience.
Another less “conventional” example is increasing the collective intelligence of humanity; it’s probably worth putting a lot of energy toward making future people smarter and better coordinated. EA has done some work here but probably not enough.
(also read Scott Alexander on this because always read Scott Alexander on everything)
How long is long?
The pedant in me wants to say that focusing on improving human flourishing over a 20-100 year time horizon, rather than thousands or millions of years in the future, should still be called "longtermism," because by any normie standard that horizon totally counts as long. Robert Wright, in his solid post critiquing longtermist premises, takes the opposite tack and calls worrying about the next century "shorttermism," which I think concedes too much. I'm using "mediumtermism" because I agree with the EA belief that most people, when prioritizing things to worry about, think way too short-term. Even though this belief has led to some really out-there stuff under the banner of "longtermism," and even though I think we need a different term to distinguish more sensible things, it is still a useful corrective to baseline mainstream myopia, and so this post should be taken as a friendly critique.
Why try to shape the far future?
The EA premise that there could well be trillions of people living in the far future is, I think, correct. Galaxy-spanning Dyson-sphere-building fully-automated-luxury-space-communism-enjoying future humanity could be a thing! And far be it from me to dissuade anyone from daydreaming about that wondrous possible future.
The premise that taking action to improve far-future human welfare could, in principle, be very morally important is also broadly correct. The stronger version, which says it is the Most Important Thing Ever (tm), is probably not correct, because naive trolley-problem arithmetic is unlikely to hold in such applications (insert standard tedious digression about utilitarianism and the Repugnant Conclusion here). But still, if there are actions which we can *know* will meaningfully move the needle on either the likely number of trillions of future humans there will be, or the likely quality of their lives, those actions do intuitively seem worth caring about!
But note the italics. The EA argument relies on the assumption that we actually can estimate the impact of some present-day actions on far-future human flourishing. If we just can’t tell what that impact is, we should stop worrying about it and work on things we understand better. And I claim that for a lot of “longtermist” causes that is the case: we only think we understand far-future impacts when we don’t think carefully enough about the spectrum of possibilities.
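To make the point concrete, here's a toy expected-value calculation (my own illustration, not anything from the EA literature; the population and effect-size numbers are made up): when we genuinely can't tell the sign of an intervention's effect on extinction risk, the astronomical stakes simply cancel out.

```python
# Toy model: the expected value of an x-risk intervention collapses when the
# sign of its effect is genuinely unknown, however astronomical the stakes.
# Both constants below are made-up stand-ins, not estimates from anywhere.
FUTURE_LIVES = 1e15   # assumed number of far-future people at stake
EFFECT_SIZE = 1e-6    # assumed size of the shift in extinction probability

def expected_lives_saved(p_helps: float) -> float:
    """Expected lives saved if the intervention cuts extinction risk with
    probability p_helps and raises it by the same amount otherwise."""
    return (p_helps - (1 - p_helps)) * EFFECT_SIZE * FUTURE_LIVES

print(expected_lives_saved(0.9))  # sign mostly known: ~8e8 lives, enormous
print(expected_lives_saved(0.5))  # pure sign uncertainty: 0.0, nothing
```

The stakes only matter when multiplied by some actual knowledge of the sign.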
It’s too hard to kill the right baby
A lot of x-risk avoidance thought experiments have a "what if you could go back in time and kill baby Hitler" flavor. The problem is that in many common x-risk avoidance areas, you can't tell whether you're actually killing baby Churchill instead.
Take AI safety, for example. A common critique of many recent lines of research into AI safety is that they end up advancing AI capability as a side effect. And if they don't actually succeed in advancing safety but do advance capability, they plausibly increase x-risk. On the other hand, the "Luddite" argument for trying to slow AI capability progress deliberately— say by regulation, or by morally persuading capability researchers to abandon the field— has a related problem: we might well need a powerful aligned AI to defend us from a powerful unaligned one, so slowing the research that would give us that aligned AI could, again, increase x-risk.
Biosecurity has a similar tradeoff: take too little care, or impose too little regulation, and you could increase the risk of a genocidal engineered (or even accidental) pandemic; take too much care, put too many roadblocks in front of relevant research, and you could increase the risk that we fail to develop the tools we’ll need to stop such a pandemic. Even efforts to reduce the risk of nuclear war have this problem, because the game-theoretic nature of nuclear escalation makes it so easy to inadvertently increase risks you think you’re mitigating.
The good kind of prepping
Still, there is a lot of stuff worth doing that plausibly reduces x-risk and doesn't have these sorts of potential failure modes. Squirrelling away troves of knowledge, seeds, etc. that could help civilization come back faster from a collapse; preemptive preparation of better treatments, vaccines, protective equipment, etc. for future pandemics; backup plans to feed people in case of global harvest failure; planetary defense against asteroid impacts— lots of "doomsday prepper, but prosocial and global-scale" efforts are like this.
But these kinds of things are all justifiable in terms of enhancing expected human welfare over the next century. This is partly because they help mitigate the x-risks listed in the last section, the ones for which it's so much harder to game out good direct responses. It's also because they help keep people safer and more comfortable even in less extreme scenarios that don't threaten the survival of the species. And many of them will likely have good capability spinoffs just because innovative resilience technologies so often do: the classic example is the Internet, whose underlying packet-switching technology was originally motivated by post-catastrophe communications resilience, not by giving me an easy way to broadcast these thoughts to the world.
The same is even more true of efforts to ameliorate climate change. EAs like to point out that climate change isn’t a “proper” x-risk because it is very unlikely to cause human extinction, and that’s true. But “the worst it could plausibly do is displace billions, kill millions, and make humanity much worse off for centuries to come” doesn’t sound, to most normies, like a reason not to worry. On the other hand, an effective green energy transition won’t just mitigate that tail risk: it will come with a host of great positive side effects that our children and grandchildren will enjoy, from energy superabundance to healthier air to extremely fast cars.
It would be smart to make humanity smarter
What else can we do as mediumtermists to make the next century go better? I think an underrated factor in determining the next few generations’ welfare is our collective applied intelligence: how effective groups of humans are at translating available knowledge and resources into good outcomes. There are at least two important contributors to this:
general levels (and maybe also maximum levels) of individual cognitive capacity
availability of effective coordinating institutions, i.e. mechanisms for negotiating differences and making decisions to unlock cooperative accomplishments with effective large-scale division of labor
Improvements in both have already helped produce (and have in part been produced by) the Great Enrichment of the past 200 years: the Flynn Effect probably means the median modern person is quite a bit smarter than the median preindustrial person, and as institutions go, liberal democracy and corporate capitalism, for all their many flaws, are far better than what came before.
Could we keep going? Are there ways to produce a future population as much smarter as ours is compared to the preindustrial population, or coordinating institutions as much superior to our present ones as liberal democracy and corporate capitalism are superior to monarchy and feudalism? If so, that would be a huge positive multiplier for basically every good future outcome we care about, including economic growth; the ability to address huge problems like climate change; and mitigation of x-risks. “Smarter people working together better are going to be more effective at doing everything good” is almost a tautology.
If we wanted to improve the raw material of human cognitive capacity, we might do things like:
reducing environmental contaminants that hurt cognition (these include lead, airborne particulates, excessive indoor CO2, and I’m sure a bunch of others)
reducing the incidence of childhood traumas that can hurt cognition
improving childhood nutrition, not just alleviating known malnutrition but experimenting with potentially cognitive-development-enhancing supplements
experimenting, very carefully, with gene editing and embryo selection techniques that might improve the genetic contributors to intelligence
experimenting with novel educational techniques too— or maybe not so novel ones, like aristocratic tutoring
And if we wanted to improve coordinating institutions, we might consider:
reforming voting systems
exploring alternatives to elected representation like sortition and citizen juries
experimenting with new methods of political decision making like futarchy (see the toy sketch after this list)
experimenting with improved corporate governance and management methods
piloting more radical institutional incentive structure changes, like the ones described in Radical Markets (I’m by no means endorsing that book’s specific proposals, but they are at least trying to solve the right kinds of problems at the right level of radicalism)
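Since futarchy gets name-dropped far more often than it gets explained, here's a minimal sketch of its core decision rule, Robin Hanson's "vote on values, bet on beliefs." The prices and policy names are made up for illustration; a real implementation would aggregate many traders' bets in conditional prediction markets.

```python
# Minimal sketch of a futarchy-style decision rule: voters choose a welfare
# metric, conditional prediction markets estimate that metric under "adopt"
# vs. "reject," and whichever side the markets expect to do better wins.
# All concrete values below are made-up stand-ins for real market prices.
from dataclasses import dataclass

@dataclass
class ConditionalMarket:
    outcome: str          # what happens if this side wins
    welfare_price: float  # market's estimate of the agreed welfare metric

def futarchy_decide(adopt: ConditionalMarket, reject: ConditionalMarket) -> str:
    """Enact the policy iff traders expect higher welfare with it enacted."""
    return (adopt if adopt.welfare_price > reject.welfare_price else reject).outcome

# Hypothetical example: traders expect higher welfare under the reform.
print(futarchy_decide(
    ConditionalMarket("enact the reform", welfare_price=103.2),
    ConditionalMarket("keep the status quo", welfare_price=101.7),
))  # -> enact the reform
```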
Those few hardy souls who have read my two previous Future More Perfect posts may have noticed that both of them also gesture in this direction.
As far as I can tell, this is an area of future-improvement that tends to be somewhat neglected by EA and longtermist types. To be fair, there is some EA support for the excellent cause of lead pollution reduction, and the EA career site 80,000 Hours has problem profiles on governance and institutional decision making. But cognitive improvement as a general cause seems very underemphasized when there are so many potential avenues for progress; and the 80,000 Hours approaches to institutional improvement seem to me like tinkering with details rather than a real fundamental push for better coordination methods.
Admittedly I am biased. I've been privileged to spend much of my life participating in unusually effective institutions composed of unusually intelligent people, and that has been a significant contributor to my personal sense of fulfillment and meaning in life, as well as a source of frustration when I've compared other institutions unfavorably to the best I've known. A future with smarter people and better institutions just seems like a satisfying and fun world to live in, and I realize that could distort my judgment about how materially superior it would be. Still, I would expect most EA types to share that same bias, which makes the relative neglectedness of "smarter future" work more surprising.
tl;dr: be 10% more normie
I want to emphasize that this is a friendly critique; I don't mean it at all as a sneer at EA or longtermism, and I'd urge folks to read Scott Alexander on why a lot of the common criticisms of EA and longtermist folks are hypocritical and question-begging at best. But the key differences I think I have with "EA orthodoxy" (to the extent there is such a thing) are:
Focusing more on the next few generations would actually give better-future-minded types better priorities for doing the most good.
Mediumtermism is also much more sympathetic to normies, which will further improve impact by engaging people in better-future-minded projects who would otherwise have been turned off.
Getting more sympathy from normies is a good sign of successful sanity checking because it’s easy to convince yourself of dumb but rational-sounding things when you don’t stress-test them against normie intuitions.
The position I am arguing for is still much closer to the EA/longtermist view than the normie view; I’m saying that going just a small amount in the normie direction will have big dividends.
Again there’s a personal experience bias here. I used to be a pretty doctrinaire libertarian, and if you search hard enough you can probably find lots of embarrassing examples of me saying cringey doctrinaire-libertarian things on the early Internet. I am still much more libertarian than most people, but have become much less orthodox than I was. One main reason for this is that I have found that going a bit in the normie direction makes me much more persuasive and sympathetic. Another reason is that engaging with the best normie criticisms has made it easier to see where orthodox libertarianism goes awry.
I feel like there’s a lot of sociocultural overlap between the libertarian subculture of 20-30 years ago and the EA/longtermist subculture of today. I suspect that in both cases their/our brains are wired differently than most people in some way, and also that they/we take some instinctive delight in being contrarian and feeling smarter and more reasonable than everybody else. And again in both cases I think the truth is much closer to the subcultural position than to the normie position: but failing to take on an adequate measure of normie-ness leads a person down cultish roads that produce much more self-righteousness than positive impact. So let’s not take those roads.