Workers are largely following mandates to embrace AI in the office, but few are seeing it create real value. According to the Harvard Business Review, the culprit is “workslop,” AI-generated documents that look sharp but are filled with low-quality information.
Harvard Business Review reports that despite a surge in generative AI use across workplaces, most companies are seeing little measurable return on investment (ROI). According to a recent report from the MIT Media Lab, 95 percent of organizations see no measurable return on their investment in these technologies, even as the number of companies with fully AI-led processes nearly doubled last year and workplace AI use has doubled since 2023.
One possible reason for this puzzling lack of ROI is that AI tools are being used to produce what some experts are calling “workslop” — content that appears polished on the surface but lacks real substance, insight, or value underneath. While generative AI can quickly churn out documents, presentations, emails, and other content that seem professional and well-written at first glance, upon closer inspection much of this material is generic, shallow, obvious, and lacking in original ideas or meaningful contributions.
Rather than augmenting and enhancing human knowledge work, generative AI in its current form may be having the opposite effect — enabling the mass-production of mediocre, low-value content that clogs up workflows, communications channels, and decision-making processes. Employees end up spending more and more time sifting through piles of AI-generated workslop to find the few gems of genuine insight.
Some of the issues with AI-produced content stem from the limitations of the technology itself. Today’s generative AI models are very good at identifying patterns and stitching together bits and pieces of existing content into new compositions. But they struggle with analysis, imagination, and reasoning about entirely novel concepts. The result is often content that is factually accurate and grammatically correct but conceptually unoriginal.
However, the workslop problem also stems from how generative AI tools are being deployed and used in organizations. In the rush to take advantage of the technology’s efficiency benefits, many companies are applying it indiscriminately to all sorts of knowledge work without carefully considering whether it is truly fit for purpose. Generative AI is being used as a blunt instrument rather than a precise tool.
There’s also a risk that the technology could deskill and demotivate knowledge workers over time. If “good enough” AI-generated content is seen as acceptable, employees may feel less incentive to apply rigorous human thought and creativity to their work. Reliance on AI could lead to an erosion of critical thinking and problem-solving capabilities.
As generative AI continues its rapid advancement and diffusion, the ability to produce substance, not just slop, will become a key competitive differentiator. Companies that learn to wield the technology judiciously and combine it with capable, creative human minds will thrive. Those that succumb to the temptations of workslop will drown in a sea of their own mediocrity.
Read more at the Harvard Business Review here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.