Jousting with the Lancet: More Data, More Debate over Iraqi Deaths

[Photo: U.S. soldier carries a wounded Iraqi child (Marine Corps photo)]

It's one of the most controversial questions today: How many Iraqis have died since the 2003 U.S.-led invasion?

That there is no definitive answer should not come as a surprise, given the chaotic situation in Iraq. Still, it's an important question to ask, for obvious humanitarian, moral and political reasons.

Theoretically, the public health surveys and polls that have been conducted in Iraq -- at great risk to the people involved -- should help inform and further the debate. But the data is complicated by different research approaches and their attendant caveats. The matter has been further confused by anemic reporting, with news articles usually framed as "he said / she said" stories, instead of explorations and interpretations of research findings.

These are the conditions under which spin thrives: complex issues, political interests and weak reporting. So it's not too surprising that last month saw a spate of what international health researcher Dr. Richard Garfield calls "Swift Boat editorials."

Attack: Iraq Research

Garfield co-authored a 2004 study, published in the British medical journal The Lancet, that estimated that 98,000 more Iraqis died in the 18 months following the U.S. invasion than would have died otherwise. The recent editorials skewered a 2006 follow-up study that estimated more than 650,000 Iraqi "excess deaths" in the 40 months following the invasion. (Garfield was not involved with the 2006 study; in fact, he co-wrote a critique of it to which the study authors have responded.)

"The truth was irrelevant," fumed the Wall Street Journal's January 9 editorial. The newspaper added that the 2006 Lancet study "could hardly be more unreliable," yet its 650,000 figure "was trumpeted by the political left because it fit a narrative that they wanted to believe. And it wasn't challenged by much of the press because it told them what they wanted to hear."

In a more measured column published the previous day, the Washington Times also rejected the Lancet study's 650,000 figure, in favor of the Iraq Body Count project's tally of up to 87,000 "documented civilian war deaths." The two figures represent "the difference between epochal human tragedy and genocidal madness," opined the newspaper. A similar editorial by conservative columnist Jeff Jacoby ran in the Boston Globe and International Herald Tribune the following week. Other editorials and news articles questioning the Lancet study appeared throughout January.

What fueled renewed criticism of 15-month-old research? Two things: a National Journal article that described what it called "potential problems" with the Lancet study, and a new survey from the Iraqi health ministry and World Health Organization (WHO) that estimated 151,000 "violent deaths ... from March 2003 through June 2006," the same period covered by the Lancet paper.

The recent newspaper editorials were prompted by, and quoted extensively from, the National Journal's January 5 cover story, "Data Bomb." That article (and the editorials it inspired) bemoaned a lack of skepticism towards the 2006 Lancet study, especially among reporters. "Within a week, the study had been featured in 25 news shows and 188 articles in U.S. newspapers and magazines," wrote co-authors Neil Munro and Carl M. Cannon.

However, this characterization neglects the fact that much of the initial coverage of the Lancet study was skeptical bordering on critical. A review of October 2006 U.S. newspaper and wire stories containing the words "Lancet," "Iraq," and "dead" or "death" found that most news reports presented the study as "controversial" (Associated Press, Los Angeles Times, San Francisco Chronicle and Christian Science Monitor, among others), "discredited" (Boston Herald), "politically motivated" (Baltimore Sun), or even an "October surprise" (Washington Post) designed to hurt Republicans in the November 2006 midterm elections. (In contrast, letters to the editor that cited the Lancet study that month unanimously accepted its conclusions, as did the vast majority of editorial columns.)

[Photo: Iraqi woman and child with U.S. soldier (Navy photo)]

Perhaps a better measure of the Lancet study's impact is whether it led reporters to revise their Iraqi casualty estimates. In March 2007, many news outlets marked the fourth anniversary of the U.S. invasion by assessing the Iraq War to date. In its coverage, ABC News repeatedly asserted that 60,000 Iraqis had died, as the media watchdog group Fairness and Accuracy in Reporting (FAIR) noted in an action alert. NBC News and the Los Angeles Times also used the 60,000 figure, which was the number of Iraqi civilian deaths from violence given by Iraq Body Count at the time.

"Given the difficulties inherent in gathering precise data on Iraqi deaths, journalists should cite a plausible range of casualty estimates, rather than using the lowest estimate available," argued FAIR. Some major outlets -- including the Washington Post, CNN and CBS -- did just that. On her March 19, 2007 show, CBS's Katie Couric explained, "Estimates of the [Iraqi] dead range from 30,000 to as high as 600,000."

Throwing Data Bombs

The mixed impact of the Lancet study on Iraq reporting aside, Munro and Cannon's article "Data Bomb" raised serious questions about the Lancet researchers and their work. Munro and Cannon categorized their critiques as: "possible flaws in the design and execution of the study," "a lack of transparency in the data," and "political preferences held by the authors and funders."

Two of the authors of the Lancet study, Drs. Gilbert Burnham and Les Roberts, have responded directly to the National Journal article. Asked whether he accepted or rejected their explanations, Neil Munro told me that he didn't "want to get into a back and forth" argument.

To give a sense of the debate, the following summarizes what seem to be the most serious allegations in "Data Bomb," along with responses from the Lancet study authors and others.

Questions about Iraqi medical researcher Dr. Riyadh Lafta:

In "Data Bomb," Munro and Cannon wrote that the Lancet study rests "on the data provided by Lafta, who operated with little American supervision and has rarely appeared in public or been interviewed about his role." Moreover, "Lafta had been a child-health official in Saddam Hussein's ministry of health when the ministry was trying to end international sanctions against Iraq by asserting that many Iraqis were dying from hunger, disease, or cancer caused by spent U.S. depleted-uranium shells remaining from the 1991 Persian Gulf War."

Munro told me that Lafta "declined to speak to [National Journal] under any conditions." He added, "I got copies of articles that Lafta had prepared under Saddam's rule. ... Roberts hadn't read them and Burnham didn't have copies of them." Munro declined to tell me whether he found Lafta's previous research to be questionable. On CNN's Glenn Beck show, Munro wasn't so reticent, calling Lafta's earlier work "crummy scientific papers" that were "part of Saddam's effort to lift economic sanctions."

Burnham and Roberts responded that Lafta "has a long record as a solid partner for international research studies," including having worked with the United Nations on polio surveillance. They pointed out that the Iraqi mortality data generated in 2004 and 2006 under Lafta has "multiple points of internal consistency, which point to the solidity of the data." With regard to Lafta's silence, they said that he "has asked that the media do not contact him in Iraq, because of concerns for his safety and that of his family."

Burnham explained to science blogger Tim Lambert, "Riyadh has worked with a number of international researchers, and we checked his work out with them first. All found him to be a diligent and responsible researcher. ... As far as the papers go, I did look at the 1997 [Lafta study] ... and this is a perfectly respectable nutrition survey." In response to Munro and Cannon's questioning of Lafta's political leanings, Burnham added, "I have tried to point out that Riyadh Lafta is part of the university system ... not the Ministry of Health. He was one of the very few doctors who refused to join the Baath Party under Saddam."

Richard Garfield also vouched for Lafta, telling Lambert, "I knew Riyadh's boss some years before the invasion. ... I got to know Riyadh in the days following the invasion, when I worked closely with his department chair." Garfield added that Lafta's sanctions research, in the context of "the [papers] that I read in Iraq prior to 2003, would stand out as an apolitical report, one that might even get the author in trouble for its lack of repetitive politicized language commonly used then in Iraq."

Lambert posted two of Lafta's sanctions-era studies (both PDFs) on his blog. Both clearly explain their methods, use WHO standards to define malnutrition and contain little editorializing. "These results could be attributed to the effect of embargo," the 1997 Lafta paper cautiously stated. The most colorful part of his 2000 paper was the ending sentence: "So we can conclude from results that the most important and widespread underlying causes of the deterioration of child health standards in Iraq is the long term impact of the non-humanized economic sanction imposed through united nation [sic] resolution."

Munro also criticized Lafta's earlier research for lacking data on conditions prior to the sanctions. It's true that the papers don't contain pre-sanctions numbers, but Lafta's 1997 paper cited a UNICEF study to support his assertion that "before the embargo severe clinical malnutrition was rarely seen in Iraq."

Regarding the oversight of Lafta's work, Burnham told Tim Lambert, "We have all the original field survey forms. Immediately following the study we met up with Riyadh ... and Shannon [Doocy], Riyadh and I went through the data his team had computer entered, and verified each entry line-by-line against the original paper forms from the field. We rechecked each data item, and went through the whole survey process cluster-by-cluster. We considered each death, and what the circumstances were and how to classify it. Back in Baltimore as we were in the analysis we checked with Riyadh over any questions that came up subsequently. We have the details on the surveys carried at each of the clusters. We do not have the unique identifiers as we made it clear this information was not to be part of the database for ethical reasons to protect the participants and the interviewers."

Questions about the Lancet data and its availability:

[Photo: U.S. PsyOps soldier talks with men in Baghdad (Army photo)]

In "Data Bomb," Munro and Cannon wrote, "The [Lancet study] authors have declined to provide the surveyors' reports and forms that might bolster confidence in their findings. ... Under pressure from critics, the authors did release a disk of the surveyors' collated data, including tables showing how often the survey teams said they requested to see, and saw, the death certificates. But those tables are suspicious, in part, because they show data-heaping." Data heaping is when surveys contain fabricated or inaccurate data that clusters together, or heaps, towards "clean" inputs -- for example, multiple entries of numbers like 10 or 20 instead of a range of "messier" numbers like 13 and 17.
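
For readers who want to see what heaping looks like in practice, here is a minimal Python sketch of one common screen, terminal-digit preference. Every number in it is simulated for illustration; none comes from either study's data.

```python
import random
from collections import Counter

def terminal_digit_share(values, digits=(0, 5)):
    """Fraction of values ending in the given digits. With no digit
    preference, each last digit appears ~10% of the time, so 0 and 5
    together should account for roughly 20%."""
    counts = Counter(v % 10 for v in values)
    return sum(counts[d] for d in digits) / len(values)

random.seed(1)
# Honest data: ages reported without digit preference.
honest = [random.randint(18, 79) for _ in range(1000)]
# Heaped data: half of all reports rounded to the nearest 5.
heaped = [5 * round(a / 5) if random.random() < 0.5 else a for a in honest]

print(f"honest sample: {terminal_digit_share(honest):.2f}")  # ~0.20
print(f"heaped sample: {terminal_digit_share(heaped):.2f}")  # well above 0.20
```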

Neil Munro wasn't happy with only having access to collated data. "Collated data is not the same as data," he told me. "It's not even the same as raw data, and it's not even the [survey] forms, et cetera."

Burnham and Roberts responded to these criticisms by pointing out that their Iraqi mortality data was "made available to academic and scientific groups in April 2007 as was planned from the inception of the study." The release announcement stated that, due to "major ethical, as well as personal safety concerns, the data we are making available will have no identifiers below Governorate level."

The release announcement also stated that data would only "be provided to organizations or groups without publicly stated views that would cause doubt about their objectivity." Les Roberts told me that condition was added "as a result of mistakes I made with the 2004 study. ... I gave the data out to more or less anyone who asked, and two groups included ... the neighborhood in Fallujah," which the study authors had excluded from their calculations, due to the area's extremely high mortality rates. "As a result, they came up with an estimate twice as high as ours, and it repeatedly got cited in the press as 'Les Roberts reports more than 200,000 Iraqis have died,' and that just wasn't true," he said. "So, to prevent that from happening again, we thought, if a few academic groups who want to check our analysis and re-run their own models want to look, that would be OK, but we're not just going to pass it out [to anyone]."

There seems to be no question that other researchers have had access to the 2006 Lancet study data. So, Munro and Cannon's criticism is essentially that reporters like them have had limited access. Roberts confirmed that the Lancet study authors treat data requests from non-researchers differently. "What we wanted was to release [the data] to people that had the statistical modeling experience and a little bit of experience in this field," he explained. When I asked Neil Munro whether he accepted that security concerns kept the Lancet researchers from collecting personal data or releasing non-collated data, he said, "That's a perfectly coherent response. At the same time, others can judge whether it's a sufficient response."

Still, the information that the Lancet study authors gave Munro and Cannon was detailed enough to reveal what the journalists called examples of data heaping. One example involves when death certificates were reported missing. Burnham and Roberts refuted Munro and Cannon's claim that "all 22 missing certificates for violent deaths were inexplicably heaped in the single province of Nineveh." They pointed out that there were three regions in which "survey interviewers either forgot or chose not to ask for death certificates out of concern for their personal safety."

Roberts told me that the pattern of missing death certificates "does not, in any way, suggest a problem with the data. ... If we went across the United States and could somehow interview people about having lost a loved one, and we identified ... deaths that didn't have a death certificate, those would clump." In particular, they would clump in "Amish communities or Indian communities in Alaska where they don't bother with death certificates," and in areas where "no one really worries about death certificates in their little town." In Iraq, areas with fewer death certificates might also indicate where local institutions are not functioning well. Roberts criticized as "deceptive logic" the assertion made in "Data Bomb" that "the odds against such perfection" in patterns of death certificates "are at least 10,000 to 1." He countered, "That's making the assumption that when one house doesn't have a death certificate, that that's completely independent of the next house."
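
Roberts's independence point can be demonstrated with a toy simulation. The sketch below is purely illustrative -- the province count, team caseload and missingness model are my assumptions, not the survey's actual design -- but it shows how behavior at the interviewer-team level produces exactly the kind of geographic clumping that looks impossible when each missing certificate is treated as an independent coin flip.

```python
import random
from statistics import mean

random.seed(0)
N_PROVINCES = 16   # illustrative, not the survey's actual design
CLUSTER_SIZE = 8   # illustrative: deaths recorded by one skipping team
N_MISSING = 22

def provinces_touched(independent):
    """Count distinct provinces containing the 22 missing certificates."""
    touched = set()
    remaining = N_MISSING
    while remaining > 0:
        touched.add(random.randrange(N_PROVINCES))
        # Independent model: certificates go missing one household at a
        # time. Clustered model: a whole team's caseload goes missing at
        # once, and a team works within a single province.
        remaining -= 1 if independent else CLUSTER_SIZE
    return len(touched)

for label, independent in (("independent", True), ("clustered", False)):
    avg = mean(provinces_touched(independent) for _ in range(10_000))
    print(f"{label}: missing certificates span ~{avg:.1f} provinces on average")
```

Under the independent model the 22 missing certificates scatter across roughly a dozen provinces; under the clustered model they land in two or three, with no fraud required.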

Munro and Cannon's other "data heaping" example concerns the pattern of violent deaths in a particular area. In "Data Bomb," they wrote that in one area, 24 violent deaths were "neatly divided among 18 houses -- 12 houses reported one death, and six houses reported two deaths." Les Roberts told me, "That's what the data is. ... We could envision lots of means by which houses would essentially have lost one or a couple people." As a theoretical example, he said, "If there was a bombing in a line of men who were queuing up for some reason, you actually would not expect to have more than one or two killed in any house. ... There are lots of reasons why such patterns can exist."

When I asked Neil Munro what was not present in the collated data that might allay or confirm his concerns, he gave the example of D3 Systems, a Virginia-based firm that carries out polling and other research in "difficult environments" around the world. "Locals are hired," Munro said, explaining D3's approach. "They're trained for weeks -- a week or more. They get fired when they violate the rules ... because local Iraqis violate the rules. ... When they send out people to interview in a town, they send them with a camera, and they say, 'Take a ... time-stamped picture of the town and bring it back to show us you were there.' There's no privacy violation there. There's no particular security concern there. ... They gather the ages of the people they interview. Birth dates, for example. And then they look at those birth dates, and if they see data heaping, they'll call the employee back in to explain himself."

Asked whether their approach was less rigorous than D3's method, Les Roberts said the opposite was true. "We only had physicians as interviewers, and all of them were former students of" Riyadh Lafta, who Roberts called "one of the most famous public health scientists in the entire country." Lafta also accompanied some of the survey teams in the field. "We had a link of accountability far, far stronger than is the norm in polling firms," Roberts told me. "This is more intense supervision than almost ever occurs in surveys of this sort." In addition, all interviewers had "previous survey and community medicine experience," and received two days of training, according to the Lancet study. The survey teams did take some pictures in the field, Roberts told me, but they can't be released, due to security concerns.

Questions about political bias:

[Photo: Sign in the Green Zone, Baghdad (photo by Peter Rimar)]

Perhaps the most frequently repeated "Data Bomb" critiques are the charges of political bias. "Virtually everyone connected with the study has been an outspoken opponent of U.S. actions in Iraq," wrote Munro and Cannon. "The funding came from the Open Society Institute created by [George] Soros, a top Democratic donor, and from three other foundations." Munro and Cannon also reported that Burnham had "admitted" that the Lancet study was timed to appear "before the [November 2006] election."

"At no time did either Roberts or Burnham say that [the] study's release was timed to affect the outcome of the election," stated the Lancet study authors's response. "Roberts indicated that he wanted to promote discussion of the results, and Burnham told Munro specifically that he was anxious that the 2006 study be released well before the election to dispel any notion of trying to influence outcomes."

The Lancet study authors have consistently said that "planning for the second survey began in October 2005 with the intention of completing and releasing the findings in the spring. However, the violence in Iraq was so great that it prohibited the field teams from beginning the survey until late spring." Since some people dismissed their earlier study specifically because it was published a few weeks before the 2004 election, it's difficult to understand why the authors would want their follow-up research to be published near another election.

It's true that the Lancet study authors and Lancet publisher Richard Horton have voiced opposition to the Iraq War. Garfield acknowledged this, telling the National Journal, "You can have an opinion and still do good science." An exasperated Les Roberts asked one critic, "Do people who publish about malaria death need to be neutral about malaria?"

It's not too surprising that public health researchers would have negative attitudes about war, especially one opposed by the majority of their fellow U.S. or British citizens. Still, it's appropriate to take the authors' views into consideration when evaluating their work. The important question is whether their views compromised their work. As this article suggests, the Lancet study has held up well under intense scrutiny. Moreover, applying Munro's "objectivity" standard would call his own article into question, as he advocated for the invasion of Iraq back in 2001. (For the record, I oppose the Iraq War. I also feel that its negative impact on Iraq, the United States and the wider world is more than apparent, whether 80,000 or one million Iraqis have died.)

The question of funding is more serious. Numerous analyses have found that funding sources do impact research outcomes. For example, research funded by a pharmaceutical company is more likely to produce results favorable to its drugs. There are three major ways in which funding sources may skew research results: funders may only support research that's structured to maximize the likelihood of obtaining desired results; researchers may consciously or unconsciously adjust their protocols or analyses to please funders; or funders may insist that only desired results be published. Did any of these dynamics influence the Lancet study?

According to Burnham and Roberts, "the fact that some ... financial support in 2006 came from the Open Society Institute had no effect," because "the researchers knew nothing of funding origins." Richard Garfield seconded their account. "I had pressed the Lancet team ... about who are the people at MIT and where does their money come from," he told me.

The Massachusetts Institute of Technology accepted four grants for the Lancet research: $46,000 from the Open Society Institute; $5,000 from the Samuel Rubin Foundation, a liberal funder; and two small grants from unnamed sources. MIT's John Tirman, who oversaw the funding, explained that "more than six months after the [Lancet] survey was commissioned, the Open Society Institute ... provided a grant to support public education efforts on the issue. We used that to pay for some travel for lectures, a web site, and so on."

The Lancet study authors have consistently stated that they were only in touch with MIT and were not aware of the funding sources. "They said they didn't know" about the Open Society Institute funding, "and I think that's right," Garfield told me. "But it doesn't matter. The research is either right or it's wrong."

If the funders were not in touch with the Lancet researchers and the researchers didn't know the identity of the funders, then two of the three possible ways in which funding can bias science could not have been factors. It is possible that the Open Society Institute decided to fund the 2006 Lancet study because the earlier study's results bolstered criticism of the Iraq War. However, this could not have affected the Lancet researchers' protocols or analyses, since they weren't aware of the funding source. Lastly, the Open Society Institute funding could have skewed the public debate on Iraq if, without it, the Lancet study would not have been done at all. Tirman's statement that the funding was provided after the study had been commissioned rules out this possibility.

Overall, few of the many charges made in the National Journal article "Data Bomb" stick convincingly upon further examination. That is, unless you assume that the Lancet study authors and their colleagues have consistently lied without leaving a paper trail to the contrary. You would also have to assume that the independent health researchers and statisticians who have reviewed the study -- including the chief scientific adviser to the British Defense Ministry -- are either in on the plot, or are too naive or incompetent to notice major problems.

That doesn't mean that the Lancet study is without flaws. It's curious that Munro and Cannon didn't mention the errors that the Lancet study authors themselves have acknowledged. In response to critiques from fellow researchers, the authors published an addendum in which they admitted that one graph in their study -- though labeled accurately -- was "confusing," since it "mixe[d] rates and counts" of Iraqi deaths. They also acknowledged that they had mislabeled U.S. Defense Department numbers of Iraqi casualties, which include both injuries and deaths, as Iraqi deaths.

Why didn't Munro and Cannon mention these errors in "Data Bomb"? Neil Munro told me that "we ignored many criticisms of the 2006 Lancet paper so we could focus on the core scientific issues." Perhaps he and Cannon considered the acknowledged errors to be minor details, but the fact that they were identified and corrected suggests that the Lancet study authors are more diligent and the scientific debate more robust than "Data Bomb" portrayed.

Six Figure Monte

[Photo: Car bomb in Baghdad (Navy photo)]

The other impetus for renewed criticism of the 2006 Lancet study was "Violence-Related Mortality in Iraq from 2002 to 2006," a paper published in the New England Journal of Medicine last month. The paper, co-authored by the Iraqi Health Ministry and the World Health Organization (WHO), estimated that 151,000 Iraqis died from violence between March 2003 and June 2006.

The WHO paper didn't receive much attention, but the coverage it did get was positive, with some news stories presenting it as yet another reason to discount the Lancet study. The Associated Press referred to the WHO paper as "the best effort yet to count [Iraqi] deaths," and noted that its estimate was "far lower than the 600,000 deaths reported in an earlier study." The New York Times contrasted the Iraqi death estimates in the WHO and Lancet studies, adding that the Lancet study had "come under criticism for its methodology." NPR described the WHO paper's projection as "about one-fourth of the number of deaths estimated in an earlier controversial study."

A major reason why the WHO paper was seen as more authoritative is that its survey teams interviewed more people: 9,345 households, compared to 1,849 households in the Lancet study. Interviewing more households reduces the risk that deaths are over- or under-represented in the data, since violent deaths tend to be concentrated in certain areas, instead of being spread out evenly across the country. But, as Tim Lambert pointed out, "the larger sample size just reduces the sampling error," along with narrowing the confidence interval, or range of other possible "correct" answers suggested by the data. Burnham has explained that the sample size used for the Lancet study "is nearly 3 times larger than the average U.S. political survey that reports a margin of error of +/- 3%."
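
Burnham's comparison is easy to verify with the standard formula for a proportion's 95 percent margin of error. The sketch below is a simplification: it first treats each survey as a simple random sample, then applies an illustrative design effect (a value I've assumed for demonstration, not one taken from either study) to show how cluster sampling widens the interval.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, design_effect=1.0):
    """95% margin of error for a proportion estimated from n interviews.
    design_effect > 1 widens the interval to reflect cluster sampling,
    where respondents within a cluster tend to resemble one another."""
    return z * math.sqrt(design_effect * p * (1 - p) / n)

# n = 1,000 (typical U.S. poll), 1,849 (Lancet), 9,345 (WHO/IFHS)
for n in (1_000, 1_849, 9_345):
    print(f"n={n:>5}: +/-{margin_of_error(n):.1%} (simple random sample), "
          f"+/-{margin_of_error(n, design_effect=2.0):.1%} (design effect 2)")
```

A bigger sample shrinks these intervals, but it does nothing about bias in who gets interviewed -- which is the next issue.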

In addition, the interviews that the WHO paper is based on may have been less representative of conditions across Iraq than those conducted for the Lancet study. Violence kept the WHO survey teams out of more than 10 percent of their planned interview areas. To adjust for the missing data from these high-mortality areas, the WHO paper relied on ratios derived from Iraq Body Count data. However, Iraq Body Count only includes Iraqi deaths that have been reported by two or more news sources, compiling a minimum number of confirmed civilian deaths due to violence. By using Iraq Body Count numbers to fill in their missing data points, the WHO paper authors assumed that the likelihood of a violent death being reported is equal across different regions of Iraq, which seems unlikely.
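
To see where that assumption enters the arithmetic, here is a hypothetical sketch of ratio-based imputation. Every figure in it is invented, and the WHO paper's actual adjustment was more elaborate, but the structure makes the vulnerability plain: if IBC undercounts deaths more severely in areas too dangerous for reporters, the imputed rate comes out too low.

```python
# All numbers below are invented for illustration; the WHO paper's
# actual adjustment was more elaborate.
visited_rate = 1.0            # survey rate in reached areas (violent deaths
                              # per 1,000 person-years, illustrative)
ibc_deaths_visited = 900      # hypothetical IBC tally in reached areas
ibc_deaths_missed = 300       # hypothetical IBC tally in unreached areas
pop_visited, pop_missed = 24e6, 3e6   # illustrative populations

# Key assumption: IBC's reporting completeness is identical everywhere.
# If deaths in unreached areas are *less* likely to make the news, this
# ratio -- and therefore the imputed rate -- is biased downward.
ratio = (ibc_deaths_missed / pop_missed) / (ibc_deaths_visited / pop_visited)
imputed_rate = visited_rate * ratio
print(f"imputed rate for unreached areas: {imputed_rate:.2f} per 1,000/yr")
```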

Other factors undermine the "bigger is better" argument. One is that the interviews for the WHO paper were conducted months later than those for the Lancet study. The delay made the WHO teams more likely to miss deaths, due to household movement or disintegration prior to the survey; and more likely to miss or miscategorize deaths, since interviewees were being asked about more distant events. In addition, the WHO survey was much longer than the one used in the Lancet study. Longer surveys tend to result in fewer deaths being reported. A follow-up to a long survey on living conditions in Iraq "revealed twice as many child deaths when researchers revisited the same households asking just about deaths in children," according to the Lancet study authors. Lastly, the WHO survey teams did not ask for death certificates, as the Lancet teams did.

These caveats don't mean that the WHO paper is inferior to the Lancet study -- just that it's difficult to compare them directly. That's especially true since the WHO paper didn't give a number for total deaths. Its 151,000 figure is for violent deaths only. The Lancet study's 650,000 figure represents invasion-related violent and non-violent deaths. Since decisions about what is a violent and what is a non-violent cause of death may vary between studies, the most accurate comparison would be between the number of total deaths.

In an accompanying document (PDF), the authors of the WHO paper explained that "further analysis would be needed to calculate an estimate of the number of [total] deaths and to assess how large the mortality increase due to non-violent causes is, after taking into account that reporting of deaths longer ago is less complete." However, based on the rates of violent and total deaths given in the WHO paper, one can make a rough estimate of the total deaths implied by the WHO data: around 400,000. (Tim Lambert was kind enough to walk me through the calculation, which predicts just over 433,000 Iraqi deaths from violent and non-violent causes.)
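
I won't reproduce Lambert's arithmetic here, but a cruder consistency check can be run using only figures quoted elsewhere in this article:

```python
# Back-of-envelope check using only figures quoted in this article.
# Lambert's actual calculation, built on the WHO paper's published
# death rates, is more careful and arrives at just over 433,000.
violent_excess = 151_000             # WHO/IFHS violent-death estimate
violent_share_of_increase = 1 / 3    # Roberts (quoted below): violence was
                                     # about a third of the mortality increase
total_excess = violent_excess / violent_share_of_increase
print(f"implied total excess deaths: ~{total_excess:,.0f}")   # ~453,000
```

Both routes land in the same six-figure neighborhood.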

"There's a consistent picture if you take into account the limitations of the different studies," Richard Garfield told me. "Everyone in their right minds realizes that there are six figures' worth of excess deaths." As Garfield pointed out to the Chronicle of Higher Education, "Once you've got six figures, a higher six figures and a lower six figures are both describing an extraordinary level of civilian mortality, one of the highest in the world."

The pre- and post-invasion death rates are also similar between the WHO and Lancet studies, according to Les Roberts. "I can't ever remember two studies having such similar results and having it being painted as so controversial," he told NPR. "We found a death rate, after the invasion, 2.4 times higher. That is, mortality a little more than doubled. And this new survey found ... a death rate twice as high after [the invasion]. ... The huge contrast between these two studies is that we think virtually all of that increase was from violence, and they believe that only a third [of] the increase in mortality was from violence."

Why Do You Ask?

The WHO and Lancet studies are just two of many Iraqi casualty estimates, though their use of the "cluster sampling" approach preferred in volatile regions, and the fact that both went through peer review, make them particularly compelling. The same week that the WHO paper was published, the British polling firm ORB released a revised estimate of 1.03 million Iraqi deaths from all causes, "as a result of the conflict," from March 2003 to August 2007. Iraq Body Count, which relies on media sources as described above, reported up to 89,000 Iraqi civilian deaths due to violence from March 2003 to late February 2008.

[Photo: U.S. soldier in Kirkuk (Air Force photo)]

While Les Roberts cautioned that he does not have the information needed to evaluate the ORB poll, he noted its similarity to other estimates. "There was a BBC poll (PDF) that was done at the end of four years of occupation," he told me. "In that poll, 17 percent of households said someone in their household had been killed or injured from the violence of the war." The ORB poll covered an additional six months and found that 20 percent of Iraqi households reported at least one death. Since "every data set ... suggests more people have been killed in this war than injured," Roberts feels that the BBC and ORB polls are "quite consistent." He added that attempts to update the 2006 Lancet data -- extrapolations that are not scientific and that make major assumptions -- have calculated that there may have been "a million [total Iraqi] deaths by August of 2007."

In contrast, Richard Garfield is skeptical of the ORB poll. "I wouldn't be surprised if there's an upward bias there," he told me. He's also critical of Iraq Body Count (IBC). "Every death is a true death, but there are an enormous number of deaths that don't go into it," he said, comparing IBC's approach to rigorously documenting the tip of an iceberg. Even though IBC clearly states that it doesn't count many civilian deaths, its numbers are often reported as total Iraqi casualties. That makes the project "misleading," in Garfield's eyes. Reporters and government officials think, "The numbers are there, they're convenient and they're low, so we can trust it," he said. (For their part, IBC authored a scathing attack on the 2006 Lancet study.)

Sarah Sewall, the director of the Carr Center for Human Rights Policy, has seen this dynamic first-hand. "I remember very well, a couple different conferences with military officials where everyone was questioning the method and the motive of the IBC's approach," she told NPR. "And it wasn't until the first Lancet survey came out everyone said, 'Oh, well, goodness, the Iraq Body Count is so much more reliable.'"

Underlying the various estimates of Iraqi casualties are not only different research approaches and limitations, but different assumptions of which deaths should be counted and why. Is it important to include Iraqi deaths from non-violent causes? Should only Iraqi civilian deaths be monitored? Who should collect the data, and how should it be used?

"The only important reason to do this is to reduce casualties," Garfield told me. On that count, the various studies of Iraqi mortality paint a more tragic picture than many people realize. "What's been lost in the political noise is a consistent finding ... of a small to moderate rise in the non-violent, regular causes of death," Garfield explained. "It's very worrisome. It means that conditions of life and medical care are not improving or not improving very much. ... Iraqi hospitals were a very big mess before, so it shouldn't have been hard to have a [positive] impact."

Everyone -- war opponents and supporters alike -- presumably wants the United States to have a more positive impact on the lives of Iraqis. The data coming out of Iraq should help us figure out how to do that, if we examine it carefully and critically.

Better media coverage would certainly help, but Les Roberts isn't very optimistic on that count. Roberts contrasted the coverage that CNN and ABC gave his Congo mortality surveys with the difficulty he and others have had even responding to charges made against their Iraq work. Major newspapers declined to publish a response to what Roberts called a "very deceptive" October 2006 Wall Street Journal op-ed by Steven Moore, who served as Paul Bremer's pollster in Iraq. "When you look at the attacks that have been made on ORB and the attacks that have been made on us as a result of putting this information out, suddenly you realize why CNN and the Washington Post don't want to be the entity saying that more than twice as many people have died in Iraq as have died in Darfur," Roberts told me. "That's just not going to win you many friends."

Of course, U.S. policy towards Iraq isn't winning many friends, either. Eventually, that policy will change and a consensus on how many Iraqis died due to the invasion will emerge. That number will be important not for its inevitable use in domestic political debates, but for its use in guiding reconstruction and medical aid to Iraq, and in building a more humane U.S. foreign policy.


Diane Farsetta is the Center for Media and Democracy's senior researcher.