Tech-Driven Terror: AI Enables Bombing Attempt


A New York man’s alleged use of artificial intelligence to help build bombs he planted across Manhattan exposes a chilling new threat, raising urgent questions about how technology is empowering criminals while ordinary Americans are left to wonder who’s really being protected in our cities.

At a Glance

  • Michael Gann, 55, used AI and online resources to construct improvised explosive devices in Manhattan
  • Federal authorities credit swift interagency action for averting a potential mass casualty event
  • The case spotlights how AI is lowering barriers for would-be domestic terrorists to access dangerous knowledge
  • Debate intensifies over tech companies’ responsibilities and potential new regulations on AI and chemical sales

AI-Driven Bomb Plot Unmasked in Manhattan

Federal prosecutors have charged Michael Gann, a 55-year-old from Inwood, Long Island, with building and attempting to deploy bombs throughout Manhattan. According to law enforcement, Gann harnessed artificial intelligence and online tutorials to research, design, and assemble at least seven improvised explosive devices. He ordered chemicals and parts online, using AI-assisted searches to zero in on formulas for flash powder and chlorine-based explosives. This is reportedly the first high-profile domestic terror plot where AI’s role in enabling bomb-making is at the center of the investigation.

Authorities say Gann’s campaign began in early 2025. He had bomb-making materials shipped to a Nassau County address before planting devices on rooftops, leaving others on subway tracks, and tossing some off the Williamsburg Bridge. The NYPD and FBI, acting on a tip, swept the areas and found the undetonated devices. Gann was arrested June 5 in SoHo with another explosive in his possession. He now faces federal charges for manufacturing and transporting explosives, and for unlawful possession of destructive devices. If convicted, he could spend up to 40 years behind bars.

Law Enforcement Scrambles While Tech Industry Watches

FBI Assistant Director Christopher Raia credited aggressive intelligence sharing and rapid interagency coordination for stopping what could have been a mass tragedy. U.S. Attorney Jay Clayton, leading the prosecution, declared the plot a grave threat and praised the teamwork that thwarted it. NYPD Commissioner Jessica Tisch managed the front-line police response and reassured jittery New Yorkers that the immediate danger had passed. Law enforcement is still combing through Gann’s online activity to determine the full extent of AI’s role and whether others may have helped him or copied his methods.

This isn’t the first time criminals have used the internet to hunt down bomb-making instructions. But the use of modern AI tools has authorities sounding the alarm. These platforms don’t just regurgitate old anarchist cookbooks—they can synthesize instructions, flag missing steps, and make even the most unstable individual a more effective criminal. Security experts warn that generative AI slashes the technical barriers that once prevented garden-variety troublemakers from becoming real threats. The tech industry, for its part, is watching nervously as calls grow for stricter controls and monitoring of AI-generated content.

Communities on Edge and Policy Debates Ignite

The immediate threat to Manhattan was neutralized, but the aftermath continues to ripple. Residents and commuters, especially near the Williamsburg Bridge and SoHo, are left shaken, with an increased police presence reminding everyone of how close disaster came. Subway service was disrupted, and local businesses in the affected neighborhoods took an economic hit during the investigation. Social anxiety is up, and so is public mistrust, which is justified given that a single, unstable individual nearly turned AI-assisted research into an urban weapon before anyone caught on.

On the political front, this case has lawmakers and bureaucrats falling over themselves to promise new regulations. Some want tighter controls on AI platforms, while others are eyeing online sales of chemicals. The AI industry stands to face even heavier scrutiny, and law enforcement agencies are already lobbying for more power to monitor and intervene in online activity. The only thing moving faster than technology these days is the government’s appetite to clamp down and expand its reach. Meanwhile, the rest of us are left to wonder: if our leaders spent half as much time protecting citizens as they do micromanaging every aspect of our lives, maybe a guy like Gann wouldn’t have gotten as far as he did.

Law Enforcement Success, But Unanswered Questions Remain

FBI Director Kash Patel and NYPD Deputy Commissioner Rebecca Weiner both lauded the fact that more lives weren’t lost, pointing to the sophistication of the devices and the clear, AI-assisted research trail Gann left online. The case is being prosecuted by the U.S. Attorney’s National Security and International Narcotics Unit, underscoring the seriousness with which the feds view this new breed of tech-enabled domestic terrorism. Yet, for all the headlines about how AI helped Gann, the exact nature of the tools he used remains murky. Law enforcement statements and media reports reference “AI-assisted queries,” but the public is still in the dark about how much smarter—or more dangerous—these tools have truly become in the wrong hands.

No significant contradictions have emerged among the major news outlets or official law enforcement sources about the facts of the case. But one thing is clear: as technology gallops ahead and criminals exploit every new loophole, Americans are left to pay the price—while politicians and tech giants scramble to cover their tracks. The Gann case should be a wakeup call. If this is what one man with a laptop and a grudge can do, what’s next? And how much longer will our leaders put ideological agendas, government overreach, or the rights of criminals ahead of basic public safety?

Sources:

Fox News

ABC7NY

The National Desk

CBS News