David Meerman Scott published a fascinating article a few days ago comparing modern AI companies to Enron and the financial scandal that broke there in 2001.
One paragraph in particular stood out to me and warrants quoting in full:
Altman says there’s a chance that so-called Artificial General Intelligence (which is still years or decades away) has the possibility of turning against humans. “I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Altman says. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.” (Source)
Terrifying, right?
I would argue that if you are creating something that has anything other than a 0% chance of wiping out humanity, you probably shouldn’t do it.
For example: marketing Pepsi to be consumed in massive quantities, while definitely bad for humans, doesn’t run the risk of causing mass extinction.
On the other hand, bringing Tyrannosaurus rex back to life definitely has a greater than 0% chance of doing just that.
Now, I’m not a doomsday prepper by any stretch of the imagination… But when someone tells me there’s even a small chance that what they’re making could turn out like The Matrix, I start to worry.
It’s as if they never watched I, Robot or read Jurassic Park (which is actually about runaway technology, not dinosaurs).
These companies have a responsibility to guarantee that this doesn’t happen. We already made this mistake with nuclear weapons, and that threat still looms large over our heads, especially right now during the Russo-Ukrainian War.
We have enough threats to deal with. Let’s not willingly create new ones.
I’ll leave you with my favorite quote from Jurassic Park:
“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
For more daily musings like this, subscribe below: