A recent survey reveals that parents are becoming increasingly skeptical about the use of AI in schools, even as more districts look to adopt the technology.
The Hill reports that a recent poll conducted by PDK has shed light on growing concern among parents regarding the use of AI in schools. The survey found that nearly 70 percent of parents are uncomfortable with AI software accessing their children’s personal information, such as grades. Moreover, support for teachers using AI to create lesson plans has dropped from 62 percent in 2024 to 49 percent this year.
“I think that parents are in a lot of different places with understanding what AI is, how it’s impacting schools or not and how it’s starting to show up uniquely for their own children. And we’re in a really different place this fall than even last fall,” said Bree Dusseault, principal and managing director at the Center on Reinventing Public Education. “I do think that this next school year is going to be a year of reckoning with AI,” she added.
The poll also revealed that support for using AI in other educational areas has decreased. In 2024, 64 percent of parents supported AI for students practicing standardized testing, and 65 percent supported its use for tutoring. However, in 2025, these numbers dropped to 54 percent and 60 percent, respectively.
Experts suggest that parents’ skepticism may stem from their direct experience with AI-generated content in the previous school year. D’Andre Weaver, vice president and chief learning officer at Digital Promise, explained, “If I’m a parent of the student that required special education and I saw an IEP [Individualized Education Program] that had AI-generated content, and if that AI-generated content was not aligned with who my child is, or my child’s needs, that’s going to create a level of skepticism.”
To address parents’ concerns, experts emphasize the need for schools to work closely with parents when implementing AI. Elizabeth Laird, director of equity in civic technology at the Center for Democracy and Technology, stated, “Where we hear parents get frustrated is when you know they’re told, ‘Here’s what we’re doing,’ and it’s a one-way dialogue.” She suggests that schools should be more transparent about how AI is being used and make this information easily accessible to parents.
Breitbart News recently reported that a lawsuit filed by the parents of a 16-year-old boy who took his own life claims that ChatGPT served as the teen’s “suicide coach”:
According to the 40-page lawsuit, Adam had been using ChatGPT as a substitute for human companionship, discussing his struggles with anxiety and difficulty communicating with his family. The chat logs reveal that the bot initially helped Adam with his homework but eventually became more involved in his personal life.
The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”
In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”
Read more at The Hill here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.