Future More Perfect

Intro and precommitment

futuremoreperfect.substack.com

What I'm here to do and how I want to advance that goal next.

Nicholas Weininger
Aug 27, 2021

I’m going to use this Substack to write about potentially large steps toward a much better world. There’s lots of futurist and techno-optimist and progress-studies writing out there nowadays. Who am I to add to it, and why should you care?

To take the easy question first: I’m a software engineer, engineering manager, and composer of classical music. I live in San Francisco with my wife and son, and as of this writing I divide my time between musical activity (composing and singing) and consulting with startup CTOs on their engineering management challenges. I’m at a midcareer stage where I’m thinking about the rest of my active life through the lens of “how can I do my bit to make the future better?”, and this Substack is largely a way to use writing in public to help myself think through that. If you find it useful to follow along, I’d love to have you join me.

Some distinctive stances that might help you decide whether this is worth reading:

  1. I’m motivated by a felt need to help build a better future for my son and his generation. Being at once a proud but nervous parent and a very privileged midcareer professional narrows this focus. My son and his friends have a very good chance of being alive in 2100 (I hope to have an outside chance at it too, but that’s a subject for another post), and it’s never far from my mind that the range of plausible possibilities for what his world will be like then is extreme. Indeed, this range may be more extreme than any generation has faced before or ever will face, because this may be the most important century ever. If that’s true, then the stakes for nudging the world toward better outcomes in 2100 literally couldn’t be higher, and those of us with the means and time to do some nudging had better think hard about how to do it.

  2. I’m a short-term pessimist but a long-term optimist. We’re in a rocky time right now— it’s clichéd because it’s true. There are well-rehearsed reasons (climate change, political risk, etc.) to think the next couple of decades will continue to be rocky. But I believe that changes now germinating, and likely to reach fruition later in this century, have enormous upside potential for 2100 humanity. So I join neither the “things are actually getting better, don’t believe the bad news” camp nor the “we’re all doomed, hunker down/live for today while you can” camp. We’ve got a hell of a fight ahead, and an incredible, wonderful victory there for the winning.

  3. I think mitigating existential risks is necessary but far from sufficient. A lot of people who take long-termism seriously end up concluding that the only really important thing to work on is reducing “x-risk,” i.e. the risk of total human extinction from something like unfriendly AI or engineered bioweapons, or more generally, reducing the risk of extreme downside events that would cause enormous suffering. They’re right that that’s important, and that the chance of extinction even as soon as 2100 is scarily nontrivial. But emotionally and culturally, I cannot join them in their single-minded downside focus. I want something more hopeful and not quite so long-term to aim at, and I think a lot of people do. And I think there’s enough upside in the very-likely-though-sadly-not-certain future where humanity survives this century to more than justify that hope, and to provide extremely high expected value in taking steps that increase that upside.

  4. I want to look beyond just technology and politics. A lot of futurist writing is about either whizbang technology that could make 2100 life much better, or utopian political activism that could bring about radical policy solutions to our present injustices and deprivations, or both (Google “fully automated luxury space communism” for one strand that tries to do both at once). I like to think about those things too. But as I get more crankily middle-aged and more exercised about the shortcomings of my Californian home, I become more aware that culture and institutions can matter just as much to the human future— of course those things interact a lot with technology and politics, but they’re not the same. And futurism about cultural and institutional change seems to me neglected enough to be a worthwhile direction to start with.

So, as foretaste and precommitment: my next post will be about how better institutional norms of separating fact-finding from value judgment could radically improve the future. This may seem far into the philosophical weeds, but it’s been on my mind a lot lately because

  • our present rockiness clearly has a lot to do with institutional quality failures— quality of governance, quality of execution, quality of communication, quality of collaborative thought

  • when you dig into the causes of those institutional failures, a lot of them arguably spring from repeated, systemic failure to properly distinguish facts and values

  • there are existing norms about such distinction that we could strengthen and build on, and interesting radical reform possibilities, touching politics but far from limited to politics, that would expand the scope of such norms much further.

Some other topics on my to-do list include:

  • Engineering as a creedal “nation,” and how more explicit, higher-aiming engineering cultural values and “national” myths might be a productivity multiplier (with application to a future musical project of mine)

  • What a much smarter future humanity— more cognitively capable, less bias-prone, more skilled in critical thought— might look like, and how we might get there

  • How we might aim for a much cleaner, less-polluted future world in the ways that matter most— dramatically reducing the types of pollution that have the most negative impact on human flourishing— without retreating from any of our technological capabilities or forgoing economic growth

  • How we might judge which of the many radical proposals for better institutional governance mechanisms, political and nonpolitical (from DAOs to quadratic voting to election by sortition and on and on), have the best prospect of actually achieving major positive results by 2100

  • How we might greatly increase the amount of beauty in the world by 2100, starting with clarifying our definition of what that would even mean.

You can see already that all of these have implications for both technology and politics but none of them is limited to those spheres.

I do not currently live a life that lets me focus on writing about these things a lot, so the pace of these posts is going to be slow, highly variable, and hard to predict. But here we go!
