Israel's military has unleashed a new kind of war in Gaza, one where algorithms help decide who lives and who dies. The IDF's adoption of AI tools like Gospel and Lavender isn't just some tech upgrade. It's a fundamental shift in how wars are fought. These systems scan surveillance data, identify targets, and recommend bombings. All of it faster than any human could manage.
Gospel identifies enemy buildings and equipment. Lavender links Palestinian men to militant groups like Hamas. Together, they create death by database. The human element? Supposedly still there in the final review. But when machines serve up thousands of targets daily, how thorough can that review really be? If, say, a single reviewer faces 1,000 flagged targets across an eight-hour shift, that works out to under 30 seconds apiece.
AI weapons systems don't just find targets: they manufacture death at industrial scale, with humans reduced to rubber-stamping the machine's output.
This isn't science fiction. The IDF's Targeting Directorate, built in 2019 by Unit 8200, employs hundreds of people to operate these systems. After Hamas attacked on October 7, 2023, Israel ramped up deployment dramatically. Push button, get targets. Simple as that. Algorithmic bias remains a significant, unresolved concern in systems like these, yet the military continues to expand its automated targeting capabilities.
Critics aren't buying the precision warfare sales pitch. They point to massive civilian casualties and destroyed infrastructure. When algorithms blur accountability, who's responsible? The coder? The officer? The machine? Nobody, apparently.
Israel's broader military transformation includes network-enabled combat systems that share intelligence between units in real time. Targeting has been further accelerated by tools like Fire Factory, an AI system that has drastically shortened preparation times for attacks. Israel's National Cybersecurity Strategy 2025-2028 builds on this foundation, focusing on AI-driven cyber tools and quantum-resistant encryption.
Defensive tech like Iron Dome gets the headlines, but the offensive applications are what's changing the game. These systems have reportedly operated under a disturbing threshold: up to 20 civilian deaths deemed acceptable per strike, often without individualized threat assessment. The scary part? Other nations are watching. Taking notes. This model, where AI systems generate target lists and humans become mere reviewers, could spread globally.
Welcome to the future of conflict. Algorithms don't feel moral qualms. They don't hesitate. They just execute their code. And while Israel claims these tools help distinguish civilians from combatants, the rubble in Gaza tells a different story.
Death by algorithm is still death. Just faster, more efficient. And somehow, even more impersonal.

