I watched or rewatched a few space films and TV shows recently, including For All Mankind, Interstellar, Ad Astra, and 3 Body Problem, and it reignited an idea I’ve been sitting on for a while. In essence, I believe we will need massive (and arguably scary levels of) advancements in AI to colonize other planets and moons.
I mean, it’s absolutely impressive that we landed on the moon 6 times with 12 people, using less computing power than the phone in my pocket. But we are very far from terraforming the moon or Mars. No doubt, the technology exists to land on them. But watching all these sci-fi shows really impressed upon me how razor-thin the margin of error is for everything we have to do in space. It’s a marvel of human ingenuity and brilliance that we have satellites and space stations at all. Yet fewer than 1,000 people have ever reached Earth orbit (622, to be exact). That underlines how hard it is to find people who can really handle space: only a very specific set of specialized, physically fit people can go, and that does not scale. How are we going to send people to space en masse and colonize the solar system without a large pool of people fit to be in space and a surplus of technology and rockets to send out there? It doesn’t add up.
For this reason, I sadly think we will not see terraforming in my lifetime. If I stay healthy and live as long as my parents or grandparents did, maybe that’s another 40-50 years. In that time, it’s hard to imagine humanity mass producing capable, AI-enabled robots that could handle everything we’d need to construct, excavate, and terraform another planetary body, especially because it takes us 5-10 years just to send a single machine out there (the Perseverance rover was planned in 2012, launched in 2020, and cost $2.7 billion). Our manufacturing and technological capabilities are just not there.
And this doesn’t even address the mountain of research and development we still need to climb in AI itself. We need an AI that is not just capable of docking spacecraft in the unforgiving conditions of space, but one that can handle highly intelligent, autonomous tasks without human guidance for hours to years on end. That is a far cry from chatting with ChatGPT or beating a StarCraft II pro. Which raises the question: how advanced do we need it to be?
Is sentience a requirement for an AI that can do what we need? Or, barring that, is Artificial General Intelligence (AGI) a requirement? Where is the threshold for an AI capable enough to help us colonize, and where does a dangerous AI that could wipe us out begin? Are those lines closer than we think, or actually quite far apart? It’s an interesting needle to thread: we need an AI capable enough to terraform and carry us into space, but not one intelligent enough to destroy us. I like to think of this as the “Vital Margin”: the margin between an AI intelligent enough to carry our species into the far future, which is absolutely vital for our long-term survival as a spacefaring race, and an AI so intelligent that it could destroy us with a mere thought if it deemed that necessary or possible.
This might be obvious to fans of 2001: A Space Odyssey, where HAL is so advanced that it attempts to eliminate the humans it deems incapable of completing the mission. HAL, in its own misguided sense of silicon confidence, surmised that it alone could complete the mission. Setting that terrible premise aside, wouldn’t we want an AI that confident, or rather that capable, to take on a mission of that scale? Why does Cooper in Interstellar have to journey to faraway planets and black holes to gather data and terraform when a sufficiently advanced AI could do so?
In any case, the question that presses on me the most is where that margin lies between supremely helpful for survival and apocalyptic Terminator/Matrix outcomes.
But here’s the big caveat in my head: the rarity of intelligent life. I really have to credit Liu Cixin’s Three-Body Problem trilogy for these thoughts. It got me thinking deeply about how rare intelligent life is, together with one of my favorite science books, 10 Innovations of Evolution. No doubt, probabilistically, intelligent life beyond us Homo sapiens is likely. But it must be extremely rare! After all, life has existed on Earth for 3.6 billion years, yet intelligence anywhere near our level appeared less than 100 million years ago, and probably less than 3 million years ago. That is vanishingly rare, and the extinction events life had to survive to get here were frequent. For these reasons and many more, I’ve come to believe that intelligent life is incredibly precious, incredibly rare, and supremely worth preserving.
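To make that rarity concrete, here’s a quick back-of-the-envelope calculation. It’s just a sketch using the loose figures cited above, not precise data:

```python
# Back-of-the-envelope: what fraction of life's history on Earth
# has included intelligence anywhere near our level?
# These figures are the rough estimates from the paragraph above, not precise data.

LIFE_ON_EARTH_YEARS = 3.6e9  # ~3.6 billion years of life on Earth

windows = {
    "generous bound (<100 million years)": 100e6,
    "likely bound (<3 million years)": 3e6,
}

for label, years_with_intelligence in windows.items():
    fraction = years_with_intelligence / LIFE_ON_EARTH_YEARS
    print(f"{label}: {fraction:.2%} of life's history")

# Output:
# generous bound (<100 million years): 2.78% of life's history
# likely bound (<3 million years): 0.08% of life's history
```

Even by the generous bound, intelligence has existed for under 3% of life’s run on Earth; by the likely bound, less than a tenth of a percent.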
Within that context of rarity, if humanity perishes, can a highly intelligent AI be the legacy we leave behind? In other words, even if we cross that vital margin from helpful to apocalyptic, is the optimistic view that at least we’ve left behind an intelligent life form on par with us, one with a higher chance of survival? That synthetic intelligence would be the residue we leave for other civilizations that may one day encounter our knowledge and wisdom. Can AI be the advanced version of the Golden Record we attached to Voyager 1, waiting for some far-off alien race to find in our absence?
I think that also means that, in either case, we’re stuck with AI.
- We need AI to survive and thrive in a world at risk from climate change, etc. (a topic for another time).
- We need AI to terraform and populate the planets.
- If AI destroys us, or something else does, we might need it to carry our legacy into an infinite universe.
It’s a Catch-22. There’s no escaping it.