An AI startup recently conducted what appeared to be a standard public opinion poll about maternal health policy, but the survey responses came entirely from computer simulations rather than actual people, using a new technique called “silicon sampling.”

Two professors explain in a recent New York Times guest essay that the incident came to light when Axios published a story on maternal health policy citing findings that a majority of people trusted their doctors and nurses. Initially, the article did not disclose that these results were generated by AI rather than human respondents. Closer examination of the sourcing revealed that the public opinion poll was actually a computer simulation conducted by artificial intelligence startup Aaru, with no real people participating in the creation of these opinions. Axios subsequently added an editor’s note and clarification acknowledging this fact.

This practice, known as silicon sampling, is rapidly gaining traction in the polling industry, the professors reveal. The concept behind silicon sampling is straightforward and appealing to many in the research field. Large language models have demonstrated the ability to generate responses that closely emulate human answers, creating what polling companies view as an opportunity to simulate survey responses at significantly reduced cost and time compared to traditional polling methods.

The appeal of silicon sampling stems from the mounting challenges facing traditional polling methodologies. Phone polling has become markedly more difficult to conduct effectively in recent years. Web-based polling faces substantial uncertainty regarding sample quality and representativeness. Silicon sampling appears to offer a solution by eliminating what many see as the messy and expensive component of polling: actually asking real people what they think.

However, critics argue that this approach fundamentally undermines the entire purpose of public opinion polling. According to the authors, Leif Weatherby, director of the Digital Theory Lab at New York University, and Benjamin Recht, a professor of electrical engineering and computer sciences at the University of California, Berkeley, public opinion data serves crucial functions in guiding policy decisions, political strategy, and social science research. This data only holds genuine value when it accurately summarizes the beliefs and opinions of actual human beings.

They write:

But this undermines the very idea of the opinion poll. Public opinion is used to guide policy, politics and social science, and it has value only insofar as it summarizes the beliefs and opinions of actual humans. Using simulations of human opinions in place of the real thing will only worsen our broken information ecosystem, and sow distrust. We should not turn to an artificial society to try to understand our real one.

The journalist Walter Lippmann, in his influential 1922 book “Public Opinion,” wrote that humans form “pictures in their heads” of the societies they live in. He called these pictures “fictions” and “pseudo-environments,” arguing that a democracy needed tools to fix those pictures, and that opinion polling could serve that role. Surveys would never be perfect, but Mr. Lippmann thought they were critical for getting us closer to an accurate sense of the will of the people.

The emergence of silicon sampling raises fundamental questions about the purpose and validity of public opinion research in the modern era. If polling data no longer represents actual human viewpoints but instead reflects what AI systems predict humans might say, the entire foundation of opinion research becomes questionable.

The concerns extend beyond theoretical debates about methodology. When media outlets, policymakers, and researchers rely on polling data to make decisions, they operate under the assumption that this data reflects genuine public sentiment. If that assumption proves false because the data comes from AI simulations rather than real people, decisions based on such information may fail to serve the actual needs and preferences of the population.

Breitbart News social media director Wynton Hall, author of the instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI, said that AI is dangerous when relied on to replace humans because it has an inherent bias toward the left “baked in.”

During a recent appearance on Real America’s Voice’s “Stinchfield Tonight,” Hall explained how the bias occurs:

Stinchfield said, “I keep hearing that there is this inherent bias inside many of these AI models.”

He asked, “How bad is the bias?”

Hall said, “It’s actually the opening chapter of Code Red. And what I did was I spent two years going through every study, every white paper, every peer-reviewed academic journal article. And one of the most staggering findings is that even the left-leaning academia concedes that there is an inherent bias toward the left politically in large language models (LLMs). And so it’s based on the training data, which is mostly made up of things like Wikipedia, which leans left, Reddit, which leans left, something called the Common Crawl, which is a public data set. And then a lot of the academic literature as well. So it’s baked in, the bias. And so one of the things we’ve got to do for parents and grandparents and educators is really help to keep those critical thinking skills sharp. So young people don’t think they’re getting neutral information, because they’re not.”

Read the full guest essay at the New York Times here.

