Journalists Ronan Farrow and Andrew Marantz have published an investigation into Sam Altman, the AI kingpin behind OpenAI, revealing a troubling history of deception and sociopathic tendencies. One former OpenAI board member explains, “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
The New Yorker has published a major investigation of OpenAI CEO Sam Altman written by Ronan Farrow and Andrew Marantz. The article provides a fascinating and deeply researched view into the life of Altman, including blow-by-blow details of his short-lived ouster from the company.
The piece explains that prominent figures in the AI world hold a deep distrust of Altman, with many using the word “sociopathic” to describe his personality. Altman’s enemies list extends beyond OpenAI cofounder Ilya Sutskever, who left the company after failing to give Altman the boot, and Anthropic CEO Dario Amodei, a bitter rival of Altman’s. As Farrow and Marantz write, even former OpenAI board members see Altman as “unconstrained by truth”:
Farrow and Marantz explain in their article that Altman’s sociopathic tendencies don’t just leave fellow executives with bruised egos. His approach to business has caused real-world problems, like ChatGPT launching without the proper safety guardrails in place:
By then, internal messages show, executives and board members had come to believe that Altman’s omissions and deceptions might have ramifications for the safety of OpenAI’s products. In a meeting in December, 2022, Altman assured board members that a variety of features in a forthcoming model, GPT-4, had been approved by a safety panel. Toner, the board member and A.I.-policy expert, requested documentation. She learned that the most controversial features—one that allowed users to “fine-tune” the model for specific tasks, and another that deployed it as a personal assistant—had not been approved. As McCauley, the board member and entrepreneur, left the meeting, an employee pulled her aside and asked if she knew about “the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India without completing a required safety review. “It just was kind of completely ignored,” Jacob Hilton, an OpenAI researcher at the time, said.
Breitbart News social media director and author Wynton Hall explains in his instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, that conservatives must develop a plan to deal with the bias baked into AI by leftists in Silicon Valley. Especially when the personalities running AI companies are as troubling as Sam Altman’s, it takes an effective framework to gain the benefits of AI without the bias and downsides.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.
Read the full article here
