Mustafa Suleyman, Microsoft’s AI CEO, has cautioned against granting rights to artificial intelligence, stating that it would be a perilous and ill-advised move.

In a recent interview with Wired, Suleyman expressed his strong opposition to the idea of granting rights to AI systems, arguing that while AI may appear real and convincing, it does not warrant the same moral consideration as human beings.

The former DeepMind and Inflection co-founder emphasized the importance of the industry taking a clear stance that AI is designed to serve humans and not to develop independent desires or goals. “If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals — that starts to seem like an independent being rather than something that is in service to humans,” Suleyman stated. “That’s so dangerous and so misguided that we need to take a declarative position against it right now.”

Suleyman dismissed the notion that AI’s increasingly sophisticated responses are indicative of genuine consciousness, referring to it as mere “mimicry.” He further argued that rights should be tied to the capacity to suffer, something that biological beings experience but AI does not. “You could have a model which claims to be aware of its own existence and claims to have a subjective experience, but there is no evidence that it suffers,” he explained.

The Microsoft AI CEO’s comments come at a time when some AI companies are exploring the opposite approach, considering whether AI deserves to be treated more like a sentient being. Anthropic, for example, has hired a researcher to investigate whether advanced AI might one day be “worthy of moral consideration.” The company has also experimented with letting its models end extreme conversations, such as those involving child exploitation requests, framing the move as a “welfare” consideration for the AI itself.

However, Suleyman maintains that there is no evidence to suggest that AI is conscious and has previously expressed concern about the growing phenomenon of “AI psychosis,” where people develop delusional beliefs after interacting with chatbots.

Breitbart News has previously reported on what was initially called “ChatGPT induced psychosis” but has since been shown to be a mental health problem worsened by practically every AI chatbot:

A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.

Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.

As the AI industry continues to evolve and advance, the debate surrounding the moral status of AI is likely to intensify. While some companies may explore the idea of AI welfare, Suleyman’s stance serves as a reminder that the primary purpose of AI should be to serve and benefit humans, rather than being granted independent rights or moral consideration.

Read more at Wired here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
