How exactly does artificial intelligence function in modern warfare? Contrary to sci-fi fantasies of killer robots roaming battlefields, AI's role in conflicts like Gaza is more mundane but no less deadly. Systems with catchy names like "The Gospel," "Lavender," and the creepily titled "Where's Daddy?" help Israel identify targets (buildings, militants, and their locations) with unprecedented speed.
These aren't independent thinking machines. They're sophisticated tools that process data far faster than humans ever could: target identification that once took months now takes weeks. Pretty efficient, right? Except when that efficiency translates to over 44,000 Palestinian deaths, according to Gaza health officials.
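To make the speed claim concrete, here is a minimal sketch of the general pattern, a model scoring a stream of records and keeping the high scorers. It assumes nothing about how the actual systems work; every name, field, and threshold below is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Candidate:
    record_id: str   # hypothetical identifier for a surveillance record
    score: float     # model-estimated relevance in [0, 1]

def rank_candidates(records: Iterable[dict],
                    model: Callable[[dict], float],
                    threshold: float = 0.9) -> list[Candidate]:
    """Score every record with `model` and return the high scorers, best first.

    This is the whole trick: a classifier plus a sort compresses months of
    manual analysis into a loop that runs in minutes. The 0.9 threshold is
    an arbitrary illustration, not a value from any real system.
    """
    scored = (Candidate(r["id"], model(r)) for r in records)
    kept = [c for c in scored if c.score >= threshold]
    return sorted(kept, key=lambda c: c.score, reverse=True)
```

The danger isn't in any one line; it's that the loop never gets tired, so the queue of "high scorers" refills as fast as anyone can empty it.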
Microsoft and OpenAI have found themselves in hot water as their commercial AI technologies, which were never designed for warfare, have been repurposed for military operations, with usage spiking dramatically after Hamas' October 2023 attack. These tools draw on data collected through ongoing surveillance of Gaza residents, raising serious concerns about personal privacy and ethical data use. Both companies now face uncomfortable questions about their role in life-and-death decisions happening half a world away.
The reality isn't quite "Terminator" territory. Humans still review AI-generated targets before strikes occur. But when AI accelerates the pace of bombing campaigns and points to family homes where suspected militants might be present, civilian casualties mount rapidly. The machine doesn't pull the trigger, but it sure loads the gun faster. Meanwhile, the environmental impact of the massive data centers powering these systems adds yet another layer of concern to their deployment in conflict zones.
Legal and ethical nightmares abound. How do you maintain proportionality under international humanitarian law when algorithms help choose targets? Who's accountable when faulty data leads to wrongful deaths? The opacity of these systems doesn't help.
U.S. tech giants didn't set out to build war machines. Yet here we are: commercial AI has become a force multiplier in deadly conflicts. The Gaza war also marks a significant shift in how automation bias plays out, with military personnel increasingly trusting and deferring to AI recommendations even when human judgment should prevail. The partnership between Silicon Valley and military operations raises thorny questions about responsibility.
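Here is what automation bias looks like in code terms: a review gate that technically keeps a human in the loop but measures nothing about the quality of that review. This is a hypothetical sketch, not any deployed system; the reviewer callable, the 30-second floor, and the audit log are all invented for illustration.

```python
import time
from typing import Callable, Iterable

def review_queue(candidates: Iterable,
                 reviewer: Callable[[object], bool],
                 min_review_seconds: float = 30.0) -> tuple[list, list]:
    """Hypothetical human-in-the-loop gate over machine-generated candidates.

    `reviewer` returns True to approve a candidate. If approvals routinely
    come back in a few seconds, the "human in the loop" is a rubber stamp:
    the automation-bias failure mode. This sketch at least flags fast
    decisions for audit instead of pretending they were considered.
    """
    approved, audit_log = [], []
    for candidate in candidates:
        start = time.monotonic()
        decision = reviewer(candidate)
        elapsed = time.monotonic() - start
        if elapsed < min_review_seconds:
            audit_log.append((candidate, decision, elapsed))
        if decision:
            approved.append(candidate)
    return approved, audit_log
```

Nothing in a gate like this can make a reviewer think harder; the most it can do is record that they didn't, which is why accountability keeps landing back on the organizations that deployed the pipeline.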

