Google’s YouTube has been using AI to “enhance” users’ videos without their knowledge or consent, fueling a growing sense of unease among content creators.
BBC reports that YouTube, the world’s largest video-sharing platform, has been employing AI to make subtle alterations to creators’ videos without their knowledge or permission. The AI-driven changes, which include sharpening wrinkles in clothing, smoothing skin textures, and even warping ears, have left many content creators feeling unsettled and misrepresented.
The issue first came to prominence when popular music YouTubers Rick Beato and Rhett Shull noticed something amiss in their recent uploads. Beato, who runs a channel with over five million subscribers featuring interviews with music legends, initially thought he was imagining things when he spotted strange artifacts in his video. “The closer I looked, it almost seemed like I was wearing makeup,” he recalls. Shull, a friend of Beato’s, investigated his own posts and found similar AI-generated distortions, prompting him to create a video on the subject that has garnered over 500,000 views.
After months of speculation and complaints from users on social media, YouTube finally acknowledged that it has been running an experiment on select YouTube Shorts, the app’s short-form video feature. The company stated that it uses “traditional machine learning technology” to unblur, denoise, and improve clarity in videos during processing, comparing the process to the enhancements made by modern smartphones when recording video.
However, experts argue that there is a significant difference between users having control over AI features on their personal devices and a company manipulating content without the consent of the creators. Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh, suggests that YouTube’s choice of words feels like misdirection. “Machine learning is in fact a subfield of artificial intelligence,” he explains, dismissing the company’s attempt to draw a line between “traditional machine learning” and generative AI.
The incident has raised concerns about the growing role of AI in mediating the information and media we consume, often in imperceptible ways. As Jill Walker Rettberg, a professor at the Center for Digital Narrative at the University of Bergen in Norway, points out, “With algorithms and AI, what does this do to our relationship with reality?”
The use of AI to enhance or alter images and videos is not new, with companies like Samsung and Google implementing similar features in their smartphones. However, the lack of transparency and user control in YouTube’s case has led to a sense of unease among content creators, who fear that it could erode the trust they have built with their audiences.
Woolley argues that YouTube’s actions risk blurring the lines of what people can trust online. “This case with YouTube reveals the ways in which AI is increasingly a medium that defines our lives and realities,” he says. “What happens if people know that companies are editing content from the top down, without even telling the content creators themselves?”
Read more at BBC here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.