A new study from Duke University suggests that employees who use AI tools at work face negative judgments from their colleagues and managers, potentially damaging their professional reputation.
Ars Technica reports that the use of AI in the workplace has become increasingly prevalent, with tools like ChatGPT, Claude, and Google Gemini offering the potential to boost productivity. However, a recent study published in the Proceedings of the National Academy of Sciences (PNAS) by researchers from Duke University’s Fuqua School of Business reveals that employees who use these AI tools may face a hidden social cost.
The study, titled “Evidence of a social evaluation penalty for using AI,” conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. The findings consistently showed that employees who received help from AI were perceived as lazier, less competent, less diligent, less independent, and less self-assured compared to those who used conventional technology or no help at all.
Surprisingly, the social stigma against AI use was not limited to specific demographic groups, suggesting that the bias is a general one. This widespread perception may present a significant barrier to AI adoption in the workplace, as employees may resist using these tools out of concern about how colleagues and superiors will perceive them.
The study also revealed that employees who used AI tools were less willing to disclose their AI use to colleagues and managers, fearing the negative consequences. This finding aligns with anecdotal evidence of “secret cyborgs” – workers who use AI without telling their bosses due to company bans on AI outputs.
The bias against AI use was found to affect real business decisions as well. In a hiring simulation, managers who did not use AI themselves were less likely to hire candidates who regularly used AI tools. Conversely, managers who frequently used AI showed a preference for AI-using candidates, highlighting the importance of personal experience in shaping perceptions.
The researchers found that perceptions of laziness directly explained the evaluation penalty associated with AI use. However, the penalty could be offset when AI was clearly useful for the assigned task: negative perceptions diminished significantly when using AI made sense for the job.
The study’s findings present a dilemma for organizations pushing for AI implementation. While AI tools have the potential to save time and increase productivity, the social stigma surrounding their use may hinder adoption and create additional work for both users and non-users, such as colleagues tasked with checking AI output quality or teachers trying to detect AI use in student assignments.
Read more at Ars Technica here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.