When AGI Arrives
(I've been thinking about this for the longest time... here's my 2 cents)
Over the past year, I've been reading a lot about what the future holds with AI. I think the dominant narratives can be broadly categorized into two camps.
First there are the “optimists” like Peter Diamandis and Ray Kurzweil, who expound abundance economies, merging with AI, and a future where tech unlocks a new level of prosperity.
Then there are the "realists" like Mustafa Suleyman and Dario Amodei, who warn about safety risks, societal disruptions, and the geopolitical implications of powerful AI systems.
Personally, I find myself leaning closer to the realists’ camp. Here are some recent thoughts I’ve had about it:
#1: AI will create new jobs - but those new jobs might also be done by AI itself
The classic argument we always hear is that technology creates new jobs. The industrial revolution eventually shifted workers away from manual labor and gave rise to the cognitive, white-collar work we’re familiar with today.
But in this AI era, the “new jobs will be created for us” argument may not hold: if AI can handle cognitive work in general, then whatever new job categories emerge may fall within AI’s reach too.
#2: Will the beneficial effects of AI truly cascade?
It's one thing to have the capability, and another for its benefits to actually reach people.
For example, we've developed highly productive food technologies, yet famine still persists in many parts of the world. Many problems are human, governance-related, and political in nature, rather than technological.
Which also means: when AGI comes, we won’t all be replaced overnight.
#3: By democratizing access (to information), we're also democratizing risk
By making information abundant and easy to find, anyone can search how to make a bomb, find toxic molecules, etc.
Also, when you think about it, the optimists may have some personal incentives shaping their views. After all, many of them are building or investing in these technologies - so naturally their outlook tends to emphasize the upside. They can’t quite say that the tech they’re building is going to bring down humanity, can they? 😉
There's also increasing talk about post-labor economies sustained by ideas like UBI (Universal Basic Income), NIT (Negative Income Tax), or even a tax on robots. Talking about these ideas is one thing, but how they'd actually be implemented and whether they'd work is another.
Plus, what would a world actually look like when AI does most of the work for us? Would we truly feel freed to pursue the things we love? Or would many of us struggle without the sense of purpose that work currently provides?
These are some questions I’m still trying to find answers to.
Anyway, back to the original question: What happens when AGI arrives?
Will all world problems magically disappear? Almost certainly not.
Are our fears of AI outright replacing us (and rendering us “useless”) valid? Probably not in the way we imagine either.
AGI may dramatically expand what we can do. But whether those capabilities translate into a better world will depend less on the tech itself, and more on how we choose to use it.
When AGI comes, the bottleneck may no longer be intelligence, but human nature (hah, we’ve come full circle, haven’t we?). Will our human values of compassion and shared identity be strong enough to take AI’s benefits global, or will mutual distrust and unresolved tensions keep preventing us from collectively advancing as a society?
The main challenge - and perhaps our purpose in a “post-AGI world” - will be to become solution bringers (sorry, I couldn’t think of a better name): people who work with tech in hand to solve these deeply human problems.
And that’s the kind of tech optimism I believe in.