A Baltimore County high school student was held at gunpoint by police after an artificial intelligence detection system misidentified a bag of Doritos as a firearm.
Sixteen-year-old Taki Allen had just finished football practice and was waiting outside his school when he stuffed the empty Doritos bag into his pocket. Within 20 minutes, multiple police vehicles swarmed the scene. Officers exited their vehicles with weapons drawn, ordering the teen to the ground.
“They told me to get on the ground. I was putting my hands up, confused, like, ‘what’s going on?’” Allen recounted. “Then they cuffed me.”
The false alarm was triggered by the school's AI-powered surveillance system, which flagged the crumpled snack bag as a possible weapon. A human reviewer canceled the alert shortly after it fired. The school principal, however, was unaware of the cancellation and reported the incident to the school resource officer, who then contacted local law enforcement.
Despite public outcry, Baltimore County Superintendent Dr. Myriam Rogers defended the controversial system. “The program is designed for human oversight,” she explained. “It performed as expected—flagging a possible threat so humans could assess it.”
Critics argue the situation illustrates the dangers of overreliance on flawed AI, particularly in school environments where false positives can lead to dangerous escalations. Others have raised concerns about the psychological trauma such incidents may inflict on students—especially when police response involves drawn weapons and public detainment.
While no charges were filed against Allen, his family is demanding accountability. Community members are now calling for a thorough review of the AI system’s protocols and the district’s handling of safety alerts.