Afghan Civilian Casualties Database Appears in Unexpected Place: Science

BY Jenny Marder  March 11, 2011 at 5:13 PM EST

We close this week with an unusual collaboration in the world of science. In January 2011, the military released an entire database of civilian casualties to the journal Science, a first for a science magazine. The data includes all recorded deaths and injuries of Afghan civilians since 2008, broken down by what happened, where it happened and who was responsible. Science published the material on Thursday.

The numbers show that 2,537 Afghan civilians were killed and 5,594 were wounded in the past two years. Most of the deaths – 80 percent – are attributed to insurgents, with 12 percent caused by coalition forces, a 26 percent drop in that category.
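As a rough back-of-the-envelope illustration of how those percentages translate into counts, here is a small Python sketch; the per-party numbers it prints are simply the article’s percentages applied to the reported death toll, not figures drawn from the CIVCAS database itself.

```python
# Back-of-the-envelope breakdown of the death toll reported above.
# The totals and percentages come from the article; the per-party counts
# are those percentages applied to the total, rounded for illustration,
# not figures taken from the CIVCAS database itself.
total_killed = 2537
total_wounded = 5594

attribution = {"insurgents": 0.80, "coalition forces": 0.12}

for party, share in attribution.items():
    print(f"{party}: ~{round(total_killed * share)} deaths ({share:.0%})")

other = 1 - sum(attribution.values())
print(f"other/undetermined: ~{round(total_killed * other)} deaths ({other:.0%})")
```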

Weeks after the military provided the data to Science, the United Nations and the Afghan human rights organization Afghanistan Rights Monitor followed suit, sending the journal their own numbers, which show twice as many civilians killed over the same two years. The United Nations estimates were released on Wednesday.

Together, these data sets provide the clearest picture yet of the civilian cost of a war, says John Bohannon, a molecular biologist and the reporter behind the investigative piece published in the magazine alongside the data.

This report comes at a time of mounting anger and unrest over civilian casualties in Afghanistan. Last week, NATO forces accidentally killed nine Afghan boys. And President Karzai’s brother said on Sunday that U.S.-led coalition forces accidentally killed one of the president’s cousins during a raid in southern Afghanistan.

Patience, politeness and a long list of science credentials were Bohannon’s passport in, he said. Had he not been a science journalist, he believes he would never have secured such high-level military access.

While embedded in Kandahar, he became acquainted with the military officials from the International Security Assistance Force (ISAF) who had designed a new system to track civilian casualties.

“I wasn’t there to go out on combat patrols; I was there to get data, and to find out how data are collected,” Bohannon said, adding, “The amazing thing is they treated me like a peer. They really just opened all their doors. They talked to me frankly. They showed me classified data and analysis, and just asked me, ‘Please don’t record this, don’t report on that.’ They wanted to show me everything, because the people who were in charge of the civilian tracking system are really proud of it.”

But what they wouldn’t do, for national security reasons, was let him take the data home. So his return to the United States marked the beginning of what he described as “a long courtship with the military.”

He started by asking for a small sample of data, which he analyzed and sent back. Then he requested more. This went on for a while. “I think I conquered them with politeness,” he said. Finally, in January, the military provided Science with its internal database of civilian casualties, which they call CIVCAS.

The following month, the U.N. and Afghanistan Rights Monitor agreed to release versions of their own data to Science. Bohannon sent the data to physicists, economists and statisticians to analyze.

Disaggregated data is important for understanding the scope of the war, said Hamit Dardagan, co-director of Iraq Body Count. “When the individual incidents that produce a particular figure are openly and accessibly published, using a transparent and clear set of reasonably standardized terms, it becomes almost trivially easy to see why some figures differ from each other.”
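As an illustration of Dardagan’s point, here is a minimal Python sketch of what incident-level records with a few standardized fields might look like, and how they let analysts trace a gap between two totals back to specific events. The schema and the sample incidents are hypothetical, not the actual CIVCAS, U.N. or Afghanistan Rights Monitor formats.

```python
# Hypothetical incident-level records with a few standardized fields
# (date, place, responsible party, casualty counts). The schema and the
# sample events are illustrative only, not the real CIVCAS, U.N. or
# Afghanistan Rights Monitor data.
from dataclasses import dataclass

@dataclass(frozen=True)
class Incident:
    date: str       # ISO date, e.g. "2010-06-14"
    district: str   # standardized place name
    party: str      # "insurgent", "coalition" or "undetermined"
    killed: int
    wounded: int

dataset_a = {
    Incident("2010-06-14", "Kandahar", "insurgent", 3, 5),
}
dataset_b = {
    Incident("2010-06-14", "Kandahar", "insurgent", 3, 5),
    Incident("2010-06-15", "Helmand", "undetermined", 2, 0),  # recorded by one source only
}

# With shared fields, a gap between two totals can be traced to the
# specific incidents that one source recorded and the other missed.
only_in_b = dataset_b - dataset_a
print(f"Incidents in dataset B but not A: {len(only_in_b)}")
print(f"Deaths accounting for the gap: {sum(i.killed for i in only_in_b)}")
```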

The disparity between ISAF data and that of both the U.N. and the human rights organization, Bohannon says, is a result of differing methodologies. “I’ve never heard that the reason for that gap is that the military is purposely hiding deaths nor that the U.N. is inflating deaths because of some political motivation,” he said. “What I have heard, which I think is a pretty good explanation, is that these data sets are being produced by different methods.”

Often one data set will miss some events that another does not, or sources will differ, Dardagan said, adding that transparent sets help analysts make sense of these disparities.

And as to whom to believe?

“They’re all equally unreliable,” Bohannon said. “There’s a proverb in statistics which is that all statistics are wrong. Taken in isolation, none of these are any more reliable than the others. They all have their problems. But if you take them altogether, it is way better than any one of them in isolation.”

Bohannon explains more about his reporting here: