A recent report revealed that the United States was responsible for a deadly missile strike on the Shajareh Tayyebeh Primary School in Minab, Iran, which killed at least 175 people, including children. The attack, which occurred on February 28, 2026, resulted from outdated targeting data provided by the U.S. Defense Intelligence Agency. Although the strike was initially attributed to a targeting error, Human Rights Watch stresses that the U.S. military failed to take all feasible precautions to avoid civilian harm, a violation of the laws of war.
The U.S. military’s failure to verify that the school was not a military objective before striking it raises serious legal and ethical concerns. Under international law, military forces must ensure that targets are specific military objectives, and must cancel or suspend an attack when such verification is not possible. In this case, there was no evidence of a military objective near the school, and the harm to civilians was disproportionate to any potential military gain. If committed recklessly or deliberately, these actions may constitute war crimes.
The report emphasizes that governments, including the U.S., are legally obligated to make full reparations for violations of the laws of war, including compensating victims and prosecuting responsible individuals. Given the apparent breakdown in the U.S. military’s targeting processes, the report calls for a detailed investigation to determine whether the attack was the result of negligence or recklessness. The use of artificial intelligence in military targeting decisions also raises concerns about accountability and the potential for further civilian harm.
Human Rights Watch urges the U.S. government to make the investigation’s findings public, hold responsible individuals accountable, and carry out the reforms needed to minimize civilian harm in future conflicts. The organization also calls for a Congressional hearing to scrutinize current military targeting processes and to ensure that AI does not solely drive targeting decisions, which could further undermine accountability in conflict.