Editor’s Note: Jill Filipovic is a journalist based in New York and author of the book “OK Boomer, Let’s Talk: How My Generation Got Left Behind.” Follow her on Twitter. The opinions expressed in this commentary are solely her own. View more opinion on CNN.
The biggest tech news this week is the ouster of Sam Altman from his role as CEO of OpenAI, a move that has shaken the company and the industry. Hundreds of OpenAI employees have threatened to resign. Altman has already moved on to a role at Microsoft. And OpenAI, the company behind ChatGPT, is on its third CEO in as many days.
It’s all very juicy. But this drama should also be raising larger questions, far beyond one company’s internal hirings and firings, including: Who are the people making the decisions that will determine so much of our technological future? What guiding principles are they using to make those decisions? And how should other institutions – governments, non-tech industries, global alliances, regulatory bodies – rein in the worst excesses of potentially dangerous AI innovators?
OpenAI was founded as a nonprofit, with an explicit mission to harness what may soon be superhuman intelligence “to benefit humanity as a whole.” But that sensibility hasn’t lasted. The company now has a multi-billion-dollar for-profit arm. It has been developing new technologies at lightning speed, sometimes releasing them to the public before some employees believed they were ready. The company has already reportedly invented an AI technology so dangerous that it says it will never release it – but it also won’t tell reporters or the public exactly what it is.
This dynamic – a potentially dangerous technology developed at extreme speed, largely behind closed doors – is partly to blame for Altman’s firing. The OpenAI board, according to CNN’s David Goldman, worried that “the company was making the technological equivalent of a nuclear bomb, and its caretaker, Sam Altman, was moving so fast that he risked a global catastrophe.” At particular issue seemed to be Altman’s efforts to make the tools behind ChatGPT available to anyone who wanted to make their own version of the chatbot. This could be wildly disastrous, some board members worried.
But then they fired him without warning, and apparently without involving Microsoft, the company’s largest shareholder. Now Altman is heading up a new AI group at Microsoft, and one has to wonder whether the oversight and caution there will be on par with that at OpenAI, or whether he’ll be handed carte blanche to push as fast and hard as he wants. And for all the justified caution of the OpenAI board, the company has carried out much of its work in secrecy – without the public really understanding what a handful of unaccountable technologists are building, or how it is nearly guaranteed to indelibly change their lives.
AI is broadly understood to have the potential to reshape vast swaths of human existence. At the very least, it seems nearly guaranteed to change how we process information, how we communicate, how we learn and how we work (and whether we work). And the ramifications could be much more extreme. AI technologies have already demonstrated the ability to lie and to cover their tracks. They have already been able to suggest designs that could make a virus spread more quickly. Many researchers acutely understand just how quickly these machines could develop the capacity to annihilate us, including Altman: He has a prepper’s refuge in Big Sur, complete with guns and “gas masks from the Israeli Defense Force” in case AI goes off the rails and the robots go to war against humans, according to reporting in the New Yorker.
But don’t worry, he told an Atlantic reporter: If AI is determined to wipe us out, “no gas mask is helping anyone.” (If you want an excellent and terrifying rundown of AI’s risks – at least those we understand right now, which are almost certainly a mere sliver of the looming perils – the Atlantic profile of Altman and his technology is worth a read.)
AI is a very exciting technology. But it is also a potentially very dangerous one, and not in the social media sense of “it may give us bad self-esteem and make us lonelier” but in the sense of “it could break down human societies and kill us all.”
Given the life-altering potential of AI – that even if it doesn’t kill us all, it will almost certainly change human existence in unprecedented ways at unprecedented speed – we all have a stake in how it’s being developed. And yet that development is being left to a handful of people (who seem to be largely men) in Silicon Valley and other tech pockets around the globe. And we all have a stake in whose interests AI will serve – and right now, its development is being funded with billions of dollars by people expecting to make a huge profit.
Do the interests of the public align with the interests of the shareholders to whom profit-driven, potentially tremendously lucrative-for-a-few companies are beholden? Or with the interests of tech entrepreneurs who are primarily excited about being at the forefront of the AI revolution, regardless of the potential human costs?
One thing is clear: AI is coming. And how it is built and unleashed on the public matters more than with perhaps any technology of the past century. It is, indeed, up there with the atom bomb in its destructive potential – except likely more difficult to regulate and control.
“Regulation” does not begin to scratch the surface of what’s needed to make sure that the AI future is not a catastrophic one, especially since the development of AI is now a massive international arms race, with particularly horrific implications if bad actors develop this technology first. But regulation is, at minimum, a necessary step.
So is transparency: In the US, companies have wide latitude to operate behind a veil of secrecy, and much of what AI companies do is kept secret to stymie competition. But the public certainly has a right to understand what life-altering technologies are set to be inflicted upon us, and what their creators are doing to protect humanity – our jobs, our communities, our families, our connections, our education and our ability to build a life of purpose, but also our lives and our safety.
The Altman story is fascinating because Altman is the most powerful figure in AI technology, which in effect makes him one of the most powerful men in the world. But that should give us pause: Who is he, what power does he hold, what is he doing with it, who does he answer to, and are we comfortable with this much life-altering potential being held by a few unaccountable people?