Depending on who you ask, artificial intelligence (AI) is either about to save or destroy civilization. In WIRED, Gideon Lichfield discusses the different perceptions of AI, writing:
In the midst of this frenzy, I’ve now twice seen the birth of generative AI compared to the creation of the atom bomb. What’s striking is that the comparison was made by people with diametrically opposed views about what it means.
One of them is the closest person the generative AI revolution has to a chief architect: Sam Altman, the CEO of OpenAI, who in a recent interview with The New York Times called the Manhattan Project “the level of ambition we aspire to.” The others are Tristan Harris and Aza Raskin of the Center for Humane Technology, who became somewhat famous for warning that social media was destroying democracy. They are now going around warning that generative AI could destroy nothing less than civilization itself, by putting tools of awesome and unpredictable power in the hands of just about anyone.
Altman, to be clear, doesn’t disagree with Harris and Raskin that AI could destroy civilization. He just claims that he’s better-intentioned than other people, so he can try to ensure the tools are developed with guardrails—and besides, he has no choice but to push ahead because the technology is unstoppable anyway. It’s a mind-boggling mix of faith and fatalism.
For the record, I agree that the tech is unstoppable. But I think the guardrails being put in place at the moment—like filtering out hate speech or criminal advice from ChatGPT’s answers—are laughably weak. It would be a fairly trivial matter, for example, for companies like OpenAI or Midjourney to embed hard-to-remove digital watermarks in all their AI-generated images to make deepfakes like the Pope pictures easier to detect. A coalition called the Content Authenticity Initiative is doing a limited form of this; its protocol lets artists voluntarily attach metadata to AI-generated pictures. But I don’t see any of the major generative AI companies joining such efforts.
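To make the metadata approach concrete, here is a minimal sketch of signed provenance metadata of the kind the Content Authenticity Initiative’s protocol enables—a record binding a tool name to an image’s content hash, so tampering is detectable. This is an illustration only, not the actual C2PA format: the function names and the use of a shared HMAC key are assumptions for brevity (real systems use public-key signatures), and note that metadata like this can simply be stripped, which is exactly why it is a weaker guardrail than a hard-to-remove watermark.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the image generator.
# Assumption: real provenance systems sign with asymmetric keys.
SIGNING_KEY = b"generator-secret-key"


def attach_provenance(image_bytes: bytes, tool_name: str) -> dict:
    """Build a provenance record binding metadata to the image content."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    record = {"tool": tool_name, "sha256": digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(image_bytes: bytes, record: dict) -> bool:
    """Check the record matches the image and was signed by the holder of the key."""
    if hashlib.sha256(image_bytes).hexdigest() != record["sha256"]:
        return False  # image bytes were altered after signing
    payload = json.dumps(
        {k: record[k] for k in ("tool", "sha256")}, sort_keys=True
    ).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


# A stand-in for real image bytes.
img = b"\x89PNG...fake image bytes"
rec = attach_provenance(img, "ExampleImageModel")
print(verify_provenance(img, rec))                   # intact image verifies
print(verify_provenance(img + b"tampered", rec))     # edited image fails
```

The weakness is visible in the design: anyone who deletes the record—or re-saves the image without it—leaves a clean-looking picture, which is why voluntary metadata is only a limited form of authentication.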
However, regardless of whether you see it as positive or negative, I think the parallel between generative AI and nuclear weapons is more misleading than useful. Nukes could literally wipe out most of humanity in minutes, but relatively few people can get their hands on one. With generative AI, on the other hand, pretty much everyone will be able to use it, but it cannot wipe out most of humanity at a stroke.
Sure, maybe you could ask a (guardrail-free) GPT-4 or its successors to “design a superbug that is more contagious than Covid-19 and kills 20 percent of the people it infects.” But humanity is still here even though the formulas for deadly toxins and the genetic code of virulent diseases have been freely available online for years.
What makes AI frightening, rather, is that nobody can predict most of the uses people will dream up for it. Some of these uses could be the equivalent of a nuke for very specific things—like college essays, which may rapidly become obsolete. In other cases, the pernicious effects will be slower and harder to foresee. (For example, while ChatGPT has proven an incredibly powerful tool for writing code, some fear it will make redundant the communities where humans share coding knowledge, and thus destroy the very basis on which future AI and human coders are trained.)
Read more here.
If you’re willing to fight for Main Street America, click here to sign up for the Richardcyoung.com free weekly email.