When AGI Arrives
Over the past year, I've been reading a lot about what the future holds with AI. I think the dominant narratives fall broadly into two camps.
First, the “optimists” like Peter Diamandis and Ray Kurzweil, who espouse abundance economies, merging with AI, and a future where technology unlocks a new level of prosperity.
Then there are the "realists" like Mustafa Suleyman and Dario Amodei, who warn about safety risks, societal disruptions, and the geopolitical implications of powerful AI systems.
Personally, I find myself leaning closer to the realists' camp. Here are some recent thoughts I've had:
#1: AI will create new jobs - but those new jobs might also be done by AI itself
The classic argument we always hear is that technology creates new jobs. The industrial revolution automated much of our manual labor and eventually gave rise to the cognitive, white-collar work we're familiar with today.
But in this AI era, the “new jobs will be created for us” argument may not hold: if AI can do cognitive work as well as we can, the new jobs it creates may also fall within its reach.
#2: Will the beneficial effects of AI truly cascade?
It's one thing to have the capability, and another for its benefits to actually reach people.
For example, we've developed highly productive food technologies, yet famine still persists in many parts of the world. Many longstanding problems are human in nature: failures of governance and politics, rather than technology.
By the same logic, when AGI arrives, it doesn't mean we'll all be replaced overnight: diffusion takes time, too.
#3: By democratizing access (to information), we're also democratizing risk
By making information abundant and easy to find, we also make it possible for anyone to look up how to build a bomb or synthesize toxic molecules.
Also, when you think about it, the optimists may have personal incentives shaping their views. After all, many of them are building or investing in these technologies, so naturally their outlook tends to emphasize the upside 😉.
There's also increasing talk about post-labor economies sustained by ideas like UBI (Universal Basic Income), NIT (Negative Income Tax), or even a robot tax. Proposing these is one thing; actually implementing them, and whether they'll work, is another.
Plus, what would a world actually look like when AI does most of the work for us? Would we truly feel freed to pursue the things we love? Or would many of us struggle without the sense of purpose that work currently provides?
These are some questions I’m still trying to find answers to.
So, what happens when AGI arrives?
Will all (or even most) world problems magically disappear overnight? Almost certainly not.
Are our fears of AI outright replacing us (and rendering us “useless”) valid? Probably not in the way we imagine. It will certainly make many human roles obsolete, as recent tech layoffs suggest. But losing a job isn't the same as losing purpose. Still, that purpose is becoming harder to define, especially as abilities we once considered uniquely human are no longer exclusive to us.
AGI will dramatically expand what we can do. But whether those capabilities translate into a better world depends less on the tech itself, and more on how we choose to use it.
When AGI comes, the bottleneck will no longer be intelligence, but human nature (back to square one!). Will our shared values - like compassion and a sense of common identity - be strong enough to spread AI's benefits globally? Or will mutual distrust and unresolved tensions keep holding us back?