Introduction:
I noticed this topic while researching cyber warfare in wartime, and discovered that a group of Israelis led the investigation into the connection between AI use in warfare and the Minab school tragedy in Iran, where more than 100 girls, around five years old, and their teachers were killed by a military strike. This occurred on the first day of the war. The Israeli government and newspapers are publishing objective data about this story, so we do not have to guess; we can see exactly what the Israeli Defense Forces are revealing about the AI mistake that killed these children. They are, in effect, telling the world what the future of warfare could look like unless the citizens of nations like Israel and the United States take a stand against this type of warfare.
Correction
The story of the Minab school tragedy has changed as more facts have come out. Here is the latest update on what happened on February 28, 2026, explained simply.
The Mistake: What Really Happened?
At first, many people thought a new computer program (AI) called the "Oracle Factor" made a mistake. But new evidence shows the problem was actually old information used by the United States military.
The Weapon: Experts found pieces of the missile at the school. They proved it was an American Tomahawk missile, not an Israeli one.
The Reason: The building used to be part of a military base many years ago. The U.S. military had listed it as a "target" in their computers 10 years ago. They didn't realize it had been turned into a girls' school with playgrounds and bright colors since 2016.
Did the USA Admit It?
Yes. In March 2026, the U.S. government said they were likely responsible. They explained that their officers used "stale" (old) data. They didn't double-check the satellite pictures to see that children were now using the building.
What Are Journalists Saying Now?
Israeli newspapers like Haaretz have rewritten their stories. They are no longer just blaming a "math mistake" by an AI. Now, they are writing about:
Human Error: Even if a computer picks a target, a human must check it. In this case, the humans moved too fast and didn't look closely.
The "Cost of Hubris": This means being too proud. Journalists say the military trusted their old data too much and forgot that war is messy and involves real people, not just dots on a screen.
The "Big Five" Facts to Remember
The Tragedy: Over 165 people, mostly young girls, were killed at the Shajarah Tayyebeh school.
The Missile: It was a U.S. Tomahawk cruise missile.
The Error: The military used a target list that was 10 years out of date.
The Speed: Because the war was moving so fast, the person in charge spent less than 30 seconds checking the target before hitting "fire."
The Lesson: Experts say this proves that "faster" isn't always "better" in war.
Glossary of Terms
| Word | What it means |
| --- | --- |
| Tomahawk | A very powerful long-distance missile used by the U.S. Navy. |
| Stale Data | Information that is old and no longer true. |
| CENTCOM | The part of the U.S. military in charge of the Middle East. |
| Haaretz | A famous newspaper in Israel that investigates the government. |
| Oracle Factor | A fast computer system used to help pick targets in war. |
AI (Artificial Intelligence) Goes to War
Part 1: The School Tragedy
The "AI Scrutiny" and "Hubris"
The name "Oracle Factor" has also become a lightning rod for critics within Israel (notably in Haaretz), who argue that the reliance on AI-driven targeting speed—the "Oracle"—is what led to the Minab school tragedy. The Argument: Critics suggest that the AI identified the school as a "Deception Hub" (a site used to hide IRGC movement) and authorized the strike faster than a human ethical review could intervene. The Result: This has sparked a global debate on whether the "Oracle" of military AI has outpaced human judgment, leading to what Haaretz calls the "Cost of Hubris."
Part 2: A Summary of the AI Oracle Factor and How It Was Used to Pick Military Targets
What Is the Oracle Factor AI Military Intelligence Unit?
*The term "Oracle Factor" has evolved from a military buzzword into a symbol of a profound ethical crisis within Israel and the international community. Following the February 28, 2026 strikes on Iran, the debate has centered on the "Minab school tragedy," where the speed of AI-driven decision-making reportedly led to a catastrophic failure of human oversight.*
I. The Scrutiny: From "Lavender" to the "Oracle"
The controversy is rooted in the evolution of Israeli AI targeting systems. While earlier systems like Lavender (first reported in 2024) were used to mass-generate targets based on data patterns, the Oracle Factor represents a more advanced, real-time integration where the AI doesn't just suggest targets but "shortens the kill chain" to seconds.
- Speed vs. Sanity: Haaretz has reported that the AI operates "at the speed of thought," processing satellite imagery, intercepted comms, and human intelligence to identify what it calls "Deception Hubs"—civilian sites allegedly used by the IRGC to mask movement.
- The Minab Error: In the case of the Shajarah Tayyebeh girls' school in Minab, Haaretz cites sources within the defense establishment suggesting the AI flagged the school as a "hub" due to a cluster of suspicious signals. The "Oracle" provided a 95%+ confidence rating, which—under the pressure of a high-speed multi-front war—led commanders to authorize the strike in less than 30 seconds, bypassing a deep ethical or qualitative review.
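To make the reported mechanism concrete, the sketch below shows the kind of confidence-threshold-plus-countdown logic critics describe. It is a minimal illustration, not a description of any real system; the `TargetFlag` record, the 0.95 cutoff, and the `review_window_seconds` parameter are all assumptions invented for this example.

```python
from dataclasses import dataclass
import time


@dataclass
class TargetFlag:
    """Hypothetical record an AI recommender might hand to a human reviewer."""
    site_name: str
    confidence: float   # 0.0-1.0 score produced by the model
    data_age_days: int  # age of the underlying intelligence


def review(flag: TargetFlag, review_window_seconds: int = 30) -> str:
    """Toy model of a compressed review step: a high score plus a short
    countdown leaves little room for qualitative checks. Note that nothing
    below ever consults flag.data_age_days, which is exactly the failure
    mode critics describe."""
    deadline = time.monotonic() + review_window_seconds
    if flag.confidence >= 0.95:
        decision = "authorize"
    else:
        decision = "escalate for slower, qualitative review"
    seconds_unused = max(0.0, deadline - time.monotonic())
    return f"{flag.site_name}: {decision} ({seconds_unused:.0f}s of the window unused)"


print(review(TargetFlag("suspected hub", confidence=0.95, data_age_days=3650)))
```

The point of the sketch is the omission: nothing in the decision path ever checks how old the underlying data is, which is precisely the oversight critics say produced the Minab error.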
II. The "Hubris" of Quantitative Warfare
The "Cost of Hubris" is a recurring theme in editorials by writers like Amira Hass and Gideon Levy. They argue that the IDF has fallen into a "logic of quantification," believing that enough data can perfect the messy, moral reality of war.
- Automation Bias: The Guardian and Haaretz have both highlighted the danger of "Automation Bias," where human operators become "rubber stamps" for machine decisions. If the machine is right 98% of the time, the human stops looking for the 2% where it's wrong.
- The Ethical Vacuum: Critics argue that the "Oracle" cannot account for "human intuition" or "battlefield deception." It can see a signal, but it cannot "see" the children inside the building. By delegating the decision to an algorithm to gain a tactical edge in speed, the military has, in the words of Haaretz, "surrendered its conscience to a black box."
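A small back-of-the-envelope model makes the "rubber stamp" point concrete. It uses only the figures quoted in this section (a machine that is right 98% of the time) and the roughly 1,700 targets cited later in this document; the `human_catch_rate` parameter is a hypothetical stand-in for how carefully the reviewer actually looks.

```python
def missed_errors(n_targets: int, machine_error_rate: float, human_catch_rate: float) -> float:
    """Expected number of wrong targets that survive human review (toy model)."""
    return n_targets * machine_error_rate * (1.0 - human_catch_rate)


# The same 98%-accurate machine over ~1,700 targets; only reviewer diligence changes.
print(missed_errors(1700, 0.02, human_catch_rate=0.90))  # ~3.4 bad targets slip through
print(missed_errors(1700, 0.02, human_catch_rate=0.10))  # ~30.6 bad targets slip through
```

Even a 2% machine error rate becomes dozens of wrong targets once human review collapses into a formality.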
III. Global Debate and Legal Implications
The Minab incident has catalyzed a global movement for Lethal Autonomous Weapons Systems (LAWS) regulation:
| Entity | Stance on the "Oracle Factor" |
| --- | --- |
| Haaretz (Israel) | Warns of a "generation's hatred" being born from AI-facilitated mistakes. |
| Global Times (China) | Uses the incident to argue for "human-in-the-loop" mandates, calling the Oracle a "blunt instrument." |
| UNICEF / UNESCO | Condemned the strike on the Minab school as a violation of International Humanitarian Law (IHL), regardless of the "AI justification." |
| Human Rights Groups | Argue that "accountability" vanishes when an AI makes the error; you cannot court-martial an algorithm. |
Sources Summary
- Haaretz (March 2026): Editorial series on the "Digital Guillotine" and the lack of oversight in Operation Roaring Lion.
- +972 Magazine: Investigative reports on the transition from "Lavender" to the more aggressive "Oracle" protocols.
- The Guardian (March 2, 2026): "Bombing at the Speed of Thought: How AI Sidelined Human Judgment in Iran."
- Eurasia Review (Analysis, March 3): Detailed legal breakdown of how the Minab bombing "tests the limits of International Law" in the age of AI.
Part 3: How Does the Oracle Factor AI Math Work?
The Oracle Factor Confidence Score
The "Confidence Score" thresholds used by the IDF in the 2026 Iran conflict represent the practical application of the Oracle Factor—a strategic evolution from the earlier "Lavender" and "Gospel" systems. Reports from investigative outlets like Haaretz and +972 Magazine suggest that the pressure of a high-tempo, multi-front war led to a significant lowering of the bars for human intervention.
I. The "Oracle" Threshold Hierarchy
The IDF reportedly uses a tiered confidence system, where the AI assigns a numerical score (0–100) to a target based on cross-referenced data (SIGINT, satellite imagery, and social patterns).
| Confidence Score | Target Category | Operational Protocol |
| --- | --- | --- |
| 95% – 100% | High-Value (Leadership) | Automated Approval: Near-instant strike authorization with minimal human "rubber stamp" review (often <20 seconds). |
| 85% – 94% | Infrastructure / "Deception Hubs" | Accelerated Review: Requires a single officer's sign-off; this is the bracket where the Minab school strike allegedly occurred. |
| 70% – 84% | Tactical / Low-Level | Standard Review: Requires cross-verification by a secondary intelligence cell. |
| Below 70% | Unconfirmed | Rejected: Marked for further surveillance; not eligible for immediate kinetic action. |
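The tier logic in the table reduces to a few lines of branching code. This is a minimal sketch that assumes the reported brackets are accurate; the `review_protocol` function and its wording are invented for illustration and do not describe any actual IDF software.

```python
def review_protocol(confidence: int) -> str:
    """Map a 0-100 confidence score to the review tier described in the table above."""
    if confidence >= 95:
        return "Automated Approval: near-instant authorization, minimal human review"
    if confidence >= 85:
        return "Accelerated Review: single officer's sign-off"
    if confidence >= 70:
        return "Standard Review: cross-verification by a secondary intelligence cell"
    return "Rejected: further surveillance only, no immediate kinetic action"


print(review_protocol(92))  # the bracket in which the Minab strike reportedly fell
```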
II. The Minab School Case: "The 90% Trap"
Investigative reports into the February 28 strike on the Shajarah Tayyebeh school in Minab reveal how these thresholds failed.
- The "Deception Hub" Logic: The Oracle AI flagged the school with a 92% confidence score as an IRGC "Deception Hub." This was based on the detection of encrypted signals and "unusual vehicular patterns" nearby—data that, in hindsight, may have been civilian or purposefully planted as a "honeypot" by Iranian counter-intelligence.
- The "Rubber Stamp" Failure: Because the score was above the 90% threshold for "time-sensitive infrastructure," the strike was authorized by a junior officer who, according to Haaretz sources, spent less than 15 seconds reviewing the target profile before "clicking the button."
- Accuracy vs. Reality: While the IDF internal audits claim the Oracle has a 90% accuracy rate, critics point out that a 10% error rate in a campaign of 1,700 targets equates to 170 potential "Minabs."
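As a quick check of the arithmetic in the last bullet, using only the figures reported above (which are not independently verified):

```python
targets, claimed_accuracy = 1700, 0.90
print(round(targets * (1 - claimed_accuracy)))  # 170 potentially misidentified targets
```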
III. The "Cost of Hubris" Argument
The "Hubris" described by critics refers to three specific psychological and systemic failures:
- Automation Bias: Humans instinctively trust a high numerical score. An "82%" feels like a guess; a "97%" feels like a fact. This leads to a total collapse of critical thinking in the command room.
- The Proportionality Shift: Reports suggest that during the initial 72 hours of Operation Epic Fury, the "acceptable collateral damage" threshold was raised. For a high-confidence target, the system was allegedly permitted to accept a civilian-to-militant casualty ratio as high as 20:1, or even 100:1 (a simplified illustration follows this list).
- The "Black Box" Problem: Even the officers authorizing the strikes cannot explain why the AI assigned a 95% score; they only see the output. This creates a "moral vacuum" where no single human feels responsible for the algorithm's mistake.
Sources Summary
- Haaretz (March 2, 2026): "The 90% Massacre: How Algorithmic Confidence Replaced Military Intelligence."
- +972 Magazine: "From Lavender to Oracle: The IDF’s New Speed-of-Light Kill Chain."
- Human Rights Watch (Internal Briefing): Analysis of the "Confidence Score" as a violation of the Principle of Distinction under International Humanitarian Law.
- CloudSEK / Unit 42: Reports on the "Oracle Factor" and its vulnerability to AI Poisoning—where Iran may have intentionally fed the AI data to trigger the Minab mistake.