OpenAI CEO Sam Altman has dropped a bombshell showing just how powerful AI is becoming—and how that could impact everyone’s jobs.
The story comes from The Atlantic, and it’s a comprehensive deep dive into OpenAI’s current work, complete with in-depth interviews with CEO Sam Altman.
Titled “Does Sam Altman Know What He’s Creating?,” the article shows just how far OpenAI has advanced towards highly powerful, perhaps even superintelligent, AI systems. And it details how worried Altman is about AI’s impact on employment.
Why it matters: The article reveals previously unknown details about OpenAI and further connects the dots between the company’s current work and the possible development of artificial general intelligence (AGI), or superhuman machines.
“I definitely look at AI in the future differently after reading this,” says Marketing AI Institute founder/CEO Paul Roetzer.
Connecting the dots: In Episode 57 of the Marketing AI Show, Roetzer broke down for me the implications of Altman’s conversation with The Atlantic.
- OpenAI has far more dangerous AI than we see today. In its opening paragraph, the article describes “a dangerous artificial intelligence that his company had built but would never release.” This clearly indicates OpenAI is experimenting with—and choosing not to release—AI with different (and more dangerous) capabilities than we see today.
- That means far more dangerous open-source AI likely already exists. OpenAI’s “red-teaming” efforts (human-led testing to make its models safer) discovered that GPT-4 could offer “nefarious advice” on how to, for instance, build bombs and plan terror attacks. In response, OpenAI put guardrails in place so users couldn’t access this information through GPT-4. However, open-source models from other parties often don’t have these types of guardrails, says Roetzer.
- OpenAI is already planning for autonomous AI agents. Ilya Sutskever, an OpenAI co-founder who also spoke with The Atlantic, mused about how we’ll eventually have AI agents that essentially collaborate on their own to form “autonomous corporations” that may be outside full human control. He indicated that a single one of these organizations could be as powerful as 50 Apples or Googles.
- This sounds like science fiction, but it’s not. It’s clear from the interview that Altman and OpenAI believe artificial general intelligence and highly advanced autonomous systems are not only possible, but coming relatively soon. This helps us better understand why some major voices in AI, like “godfather of AI” Geoffrey Hinton, are warning of existential dangers from AI, says Roetzer.
- And none of this is slowing down. Multiple times, Altman expressed the need to keep building this technology. “More than anything, I took away that they aren’t slowing down,” says Roetzer. “They’re speeding up.”
- This will have an impact on jobs. Altman was blunt about AI’s impact on employment: “Jobs are definitely going to go away, full stop.” He does note that we’ll often replace these jobs with new, better ones. But, says Roetzer: “We’re being unrealistic if we pretend this isn’t going to affect things.” He says the probability is greater than ever that we’ll see a massive impact on jobs in the next 2-3 years.
What to do about it: It’s time to stop ignoring AI’s possible impact on employment. We’re already concerned about AI’s potential power, yet it’s still very early. To date, AI models have been largely trained on text. They are now quickly learning from other data types—code, images, video, audio, and more. This will very rapidly result in significantly more powerful and capable AI systems.
“If you actually understand what they’re building and what these things are capable of, it’s almost irresponsible not to plan for [a major impact on jobs],” says Roetzer.