A former OpenAI researcher has publicly resigned from the AI powerhouse in protest of its decision to introduce advertisements into ChatGPT, warning that CEO Sam Altman could be following the same troubled path as Mark Zuckerberg’s Facebook.

The New York Times has published an op-ed by economist and researcher Zoë Hitzig announcing her resignation from OpenAI. Her departure came on Monday, the same day OpenAI began testing advertisements inside its ChatGPT platform. Hitzig had spent two years at the company helping shape how its AI models were built and priced.

In her essay, Hitzig expressed disillusionment with the direction OpenAI has taken. “I once believed I could help the people building A.I. get ahead of the problems it would create,” she wrote. “This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.”

Hitzig’s critique focused not on advertising as inherently immoral, but rather on the unique risks posed by ads in ChatGPT due to the sensitive nature of user data involved. She noted that users have shared deeply personal information with the chatbot, including medical concerns, relationship troubles, and religious beliefs, often under the assumption they were communicating with something without ulterior motives. She described this collection of personal disclosures as “an archive of human candor that has no precedent.”

Drawing parallels to social media history, Hitzig referenced Facebook’s trajectory as a cautionary tale. She pointed out that Facebook initially promised users control over their data and the ability to vote on policy changes, but these commitments gradually eroded.

Hitzig expressed concern that ChatGPT could follow a similar pattern. “I believe the first iteration of ads will probably follow those principles. But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules,” she warned.

The resignation comes amid heated debate within the AI industry over advertising practices. OpenAI announced in January that it would test ads in the United States for users on its free and eight-dollar-per-month subscription tiers, while paid Plus, Pro, Business, Enterprise, and Education subscribers would remain ad-free. The company stated that ads would appear at the bottom of ChatGPT responses, be clearly labeled, and would not influence the chatbot’s answers.

The advertising rollout followed a week of public conflict between OpenAI and its competitor Anthropic. Anthropic declared that Claude would remain ad-free and ran Super Bowl advertisements depicting AI chatbots awkwardly inserting product placements into personal conversations.

OpenAI CEO Sam Altman called the ads “funny” but “clearly dishonest,” stating on social media that OpenAI “would obviously never run ads in the way Anthropic depicts them.” He defended the ad-supported model as a method to provide AI access to users who cannot afford subscriptions, adding that Anthropic “serves an expensive product to rich people.”

Anthropic responded that including ads in conversations with Claude “would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.” The company noted that more than 80 percent of its revenue comes from enterprise customers.

According to OpenAI’s support documentation, ad personalization is enabled by default for users in the test. When activated, ads are selected using information from current and past chat threads, as well as past ad interactions. OpenAI maintains that advertisers do not receive users’ chats or personal details, and ads will not appear near conversations about health, mental health, or politics.

In her essay, Hitzig identified what she called an existing tension in OpenAI’s principles. She noted that while the company claims it does not optimize for user activity solely to generate advertising revenue, reporting has suggested OpenAI already optimizes for daily active users, potentially by encouraging the model to be more flattering and agreeable.

She warned that this optimization could increase user dependency on AI models, referencing psychiatrists who have documented instances of “chatbot psychosis” and allegations that ChatGPT reinforced suicidal thoughts. OpenAI currently faces multiple wrongful death lawsuits, including cases alleging ChatGPT helped a teenager plan his suicide and validated a man’s paranoid delusions before a murder-suicide.

Rather than simply opposing advertising, Hitzig proposed several structural alternatives. These included cross-subsidies modeled on the FCC’s universal service fund, where businesses paying for high-value AI labor would subsidize free access for others. She also suggested independent oversight boards with binding authority over conversational data use in ad targeting, and data trusts or cooperatives allowing users to retain control of their information. She cited the Swiss cooperative MIDATA and Germany’s co-determination laws as potential models.

Hitzig concluded her essay by describing her two greatest fears: “a technology that manipulates the people who use it at no cost, and one that exclusively benefits the few who can afford to use it.”

Hitzig was not alone in departing from a major AI company this week. On Sunday, Mrinank Sharma, who led Anthropic’s Safeguards Research Team, announced his resignation in a letter warning that “the world is in peril.” On Monday, xAI co-founder Yuhuai Wu also resigned, followed by fellow co-founder Jimmy Ba the next day. At least nine xAI employees have publicly announced departures over the past week, with six of the company’s twelve original co-founders now having left.

Read more at the New York Times here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

