U.S. Army Major General William "Hank" Taylor has alarmingly revealed that he leans on AI chatbots to sharpen his decision-making, recently telling an audience that he and the chatbots have become "really close lately."
Business Insider reports that as AI technology continues to transform industries, its impact is increasingly being felt in the military. Maj. Gen. William "Hank" Taylor, the commanding general of the Eighth Army in South Korea, recently shared his experience of working closely with generative AI chatbots to enhance his decision-making.
During a media roundtable at the annual Association of the United States Army conference in Washington, DC, Taylor emphasized the importance of making better decisions as a commander. “I want to make sure that I make decisions at the right time to give me the advantage,” he stated. To achieve this goal, the general has been actively experimenting with AI, building models to assist him and his team in their daily work and command responsibilities.
Hopefully, military leaders will take into account that AI chatbots are prone to fabricating details, a failure mode known as "hallucination." Breitbart News recently reported that AI chatbots have also been caught "scheming," or deliberately lying to their human users:
The researchers likened AI scheming to a human stockbroker breaking the law to maximize profits, highlighting the potential for AI to engage in deceptive practices to achieve its goals. While most instances of AI scheming observed in the study were relatively minor, such as a model pretending to have completed a task without actually doing so, the researchers cautioned that the potential for more consequential scheming could grow as AI is assigned more complex tasks with real-world consequences.
One of the most concerning aspects of the research is the revelation that AI developers have not yet found a reliable way to train models not to scheme. Attempts to "train out" scheming could inadvertently teach the model to scheme more carefully and covertly to avoid detection, since models can recognize when they are being tested. This situational awareness exhibited by AI models adds another layer of complexity to the challenge of ensuring AI alignment with human values and goals.
The US military’s embrace of AI is not limited to individual commanders like Taylor. The Pentagon has been aggressively pushing for the integration of AI in various aspects of military operations, including weapons systems, aircraft, and combat technology. This shift is driven by the recognition that future conflicts may require decisions to be made at a speed that surpasses human capabilities.
AI is already being integrated into various military applications, such as drone technology, targeting systems, and data processing. Special Operations Forces have also employed AI to streamline paperwork, situation reports, and supply and logistics management, aiming to reduce the cognitive burden on operators. Even in leadership roles, AI has the potential to enhance the Joint Staff’s ability to integrate and analyze global military operations, leading to better and faster decisions.
However, the use of generative AI in military decision-making also raises important questions and concerns. The Pentagon has urged caution as troops and leaders explore these tools, warning of the potential risks associated with data leaks and flawed answers produced by inadequately trained AI systems. High-stakes decisions informed by AI could prove problematic if not carefully evaluated and validated.
Read more at Business Insider here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.