As AI tools like ChatGPT become increasingly popular for travel planning, some vacationers are being led astray by the technology’s hallucinations and misinformation, sometimes endangering themselves by setting out for nonexistent destinations in hazardous terrain.

BBC News reports that the rise of AI trip planning assistants has ushered in a new era of convenience for travelers looking to craft the perfect itinerary. However, this emerging technology is not without its pitfalls, as a growing number of users report being sent to nonexistent locales and given inaccurate travel information by AI tools like ChatGPT.

One cautionary tale comes from Peru, where trekking guide Miguel Angel Gongora Meza overheard two tourists discussing their plans to hike alone to the “Sacred Canyon of Humantay” based on an itinerary generated by an AI program. The problem? No such place exists. “The name is a combination of two places that have no relation to the description,” explained Gongora Meza. He warned that following such misinformation in the challenging Peruvian Andes, with its high elevations and remote paths, could put unprepared hikers in grave danger.

In another instance, a couple using ChatGPT to plan a sunset hike on Japan’s Mount Misen found themselves stranded at the summit after the AI gave them an incorrect time for the last cable car down. “ChatGPT said the last ropeway down was at 17:30, but in reality, the ropeway had already closed,” recounted Dana Yao, one of the unlucky hikers.

AI trip planning tools are also guilty of promoting destinations that are entirely fictional. The BBC reported on a 2024 incident in which an AI told users there was an Eiffel Tower in Beijing. In another case, a British traveler received an AI-generated marathon route across northern Italy that was completely unfeasible. A Fast Company article even detailed how a Malaysian couple trekked to a scenic cable car they had seen on TikTok, only to discover the structure was an AI fabrication.

These blunders stem from how large language models like ChatGPT generate responses: by analyzing massive text datasets and stringing together statistically appropriate words and phrases. “It doesn’t know the difference between travel advice, directions or recipes,” explained Rayid Ghani, a machine learning professor at Carnegie Mellon University. “It just knows words.” This can lead to convincing but erroneous “hallucinations” that are difficult for users to distinguish from factual information.
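To make the mechanism concrete, below is a minimal Python sketch of next-word prediction: a toy bigram model, vastly simpler than ChatGPT’s neural network, that chains statistically likely words with no check against reality. The corpus and output are invented for illustration.

```python
# A minimal sketch (not ChatGPT's actual architecture) of next-word
# prediction: a toy bigram model that picks statistically likely
# continuations with no notion of whether the result is true.
import random
from collections import defaultdict, Counter

corpus = (
    "the sacred valley of peru is a popular trek . "
    "the humantay lake trail is a popular trek . "
    "the sacred canyon of colca is a remote trek ."
).split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Chain statistically plausible words together, one at a time."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        counts = followers.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# The output reads fluently, but nothing anchors it to facts: the model
# can splice "sacred" and "humantay" into a place that does not exist,
# much like the hallucinated itinerary described above.
print(generate("the"))
```

Even this toy model shows the failure mode: it can only report which words tend to follow which, so a fluent but fictional “Sacred Canyon of Humantay” is as statistically natural to it as a real destination.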

Breitbart News recently reported on another result of AI hallucinations — libraries flooded with requests for nonexistent books after major newspapers published a reading list generated by AI:

This trend is not limited to physical libraries. Alison Macrina, executive director of Library Freedom Project, reports that early results from a survey on AI’s impact on libraries indicate a growing trust among patrons in their preferred generative AI tools and the outputs they receive. Librarians are being treated like robots over library reference chat, and patrons are becoming defensive when the veracity of AI-powered recommendations is questioned. It appears that more people are trusting their preferred language model over their human librarian.

To combat the issue of AI-hallucinated content, Kristan has developed a system. He searches for the presumed title in the library catalog and, if not found, checks the global library catalog WorldCat. If the title is still missing and presents itself as a traditional book, it raises suspicions of AI involvement. Connecting the title to platforms like Kindle Direct Publishing or learning that the patron’s source is an AI-powered chatbot confirms the likelihood of a hallucinated title.
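The escalation Kristan describes can be summarized in a short sketch. The Python below is illustrative only: the catalogs are toy data and the function is a hypothetical stand-in for manual searches; WorldCat and Kindle Direct Publishing do not expose these exact lookups.

```python
# Illustrative sketch of the title-verification workflow described
# above, using toy data in place of the real catalogs.

LOCAL_CATALOG = {"the hobbit"}                      # hypothetical local holdings
WORLDCAT = {"the hobbit", "an obscure real title"}  # hypothetical global catalog

def assess_title(title: str, patron_used_ai_chatbot: bool) -> str:
    """Classify a requested title following the escalation above."""
    key = title.lower()
    if key in LOCAL_CATALOG:
        return "held locally"
    if key in WORLDCAT:
        return "real title; try interlibrary loan"
    # Absent from both catalogs despite presenting as a traditionally
    # published book: suspicion of AI involvement rises. An AI source
    # (or a Kindle Direct Publishing-only listing) makes a
    # hallucinated title likely.
    if patron_used_ai_chatbot:
        return "likely AI-hallucinated title"
    return "unverified; investigate further"

print(assess_title("The Hobbit", False))              # held locally
print(assess_title("Summer Reads Nobody Wrote", True))  # likely AI-hallucinated title
```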

Experts worry that the proliferation of AI misinformation could undermine the very benefits travel offers, such as fostering cross-cultural understanding and empathy through authentic interactions. Clinical psychotherapist Javier Labourt cautions that AI’s false narratives about destinations could color travelers’ perceptions before they even arrive.

Read more at BBC News here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
