Former OpenAI board members are calling for greater government regulation of the company as CEO Sam Altman's leadership comes under fire.
Helen Toner and Tasha McCauley — two of the former board members who ousted Altman in November — say their decision to push the leader out and "salvage" OpenAI's regulatory structure was spurred by "long-standing patterns of behavior exhibited by Mr Altman," which "undermined the board's oversight of key decisions and internal safety protocols."
Writing in an op-ed published by The Economist on May 26, Toner and McCauley allege that Altman's pattern of behavior, combined with a reliance on self-governance, is a recipe for AGI disaster.
While the two say they joined the company "cautiously optimistic" about the future of OpenAI, bolstered by the seemingly altruistic motivations of the then-exclusively nonprofit company, they have since come to question the actions of Altman and the company. "Multiple senior leaders had privately shared grave concerns with the board," they write, "saying they believed that Mr Altman cultivated a 'toxic culture of lying' and engaged in 'behavior [that] can be characterized as psychological abuse.'"
"Developments since he returned to the company — including his reinstatement to the board and the departure of senior safety-focused talent — bode ill for the OpenAI experiment in self-governance," they continue. "Even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role."
In hindsight, Toner and McCauley write, "If any company could have successfully governed itself while safely and ethically developing advanced AI systems, it would have been OpenAI."
The former board members argue in opposition to the current push for self-reporting and fairly minimal external regulation of AI companies as federal laws stall. Abroad, AI task forces are already finding flaws in relying on tech giants to spearhead safety efforts. Last week, the EU issued a billion-dollar warning to Microsoft after it failed to disclose potential risks of its AI-powered Copilot and Image Creator. A recent UK AI Safety Institute report found that the safeguards of several of the biggest public large language models (LLMs) were easily jailbroken by malicious prompts.
In recent weeks, OpenAI has been at the center of the AI regulation conversation following a series of high-profile resignations by high-ranking employees who cited differing views on its future. After co-founder Ilya Sutskever and Jan Leike, who together led its superalignment team, left the company, OpenAI disbanded the in-house safety team.
Leike said that he was concerned about OpenAI's future, as "safety culture and processes have taken a backseat to shiny products."
Altman also came under fire for a then-revealed off-boarding policy that required departing employees to sign NDAs barring them from saying anything negative about OpenAI or risk losing any equity they held in the business.
Shortly after, Altman and president and co-founder Greg Brockman responded to the controversy, writing on X: "The future is going to be harder than the past. We need to keep elevating our safety work to match the stakes of each new model... We are also continuing to collaborate with governments and many stakeholders on safety. There's no proven playbook for how to navigate the path to AGI."
In the eyes of many of OpenAI's former employees, the historically "light-touch" philosophy of internet regulation isn't going to cut it.
Topics: Artificial Intelligence, OpenAI