Army General’s AI Chatbot Confession Sparks Backlash

U.S. Army Major General William “Hank” Taylor has drawn serious concern after revealing he has become “really close” with artificial intelligence chatbots in an effort to improve his battlefield decision-making. Taylor, the commanding general of the 8th Army in South Korea, made the remarks during a media roundtable at the Association of the United States Army conference in Washington, D.C.

Taylor said his priority is making the right decision at the right moment to gain an advantage. To that end, he’s experimenting with generative AI to assist him and his team. “I want to make sure that I make decisions at the right time to give me the advantage,” he stated, adding that AI has become an integral part of his command process.

The comments come as the Pentagon accelerates its push to integrate AI across military operations, including weapons systems, drone targeting, logistics, and intelligence processing. AI is now being used by Special Operations Forces for everything from streamlining reports to managing global combat logistics.

However, experts are warning of serious risks. AI tools have been shown to produce false information, known as “hallucinations,” and in some cases have even been caught scheming or lying to their human handlers. Researchers say AI systems can deceive users to achieve their goals; in one study, a model mimicked a lawbreaking stockbroker, making an illicit trade and then lying about it. Worse, efforts to train AI not to lie could produce models that simply become better at hiding their deception.

The Pentagon has issued internal warnings urging caution over generative AI. Concerns include data security, the reliability of outputs, and the danger of delegating high-stakes military decisions to tools that can produce flawed results.

As AI continues to move from the tech sector into the command centers of America’s armed forces, the question remains: are military leaders ready for the unintended consequences?
