A word of caution on the new Corruption Risk Forecast

Last month, the Center for International Private Enterprise (CIPE) and the European Research Centre for Anti-Corruption and State-Building (ERCAS) released their new Corruption Risk Forecast. The Corruption Risk Forecast aspires to provide a set of “[o]bjective corruption and transparency indicators for over 120 countries to diagnose the present, forecast the future, and set policy targets.” As the lead researcher for TRACE’s Bribery Risk Matrix, which measures business bribery risk, I was interested in taking a close look at the Corruption Risk Forecast’s methodology and underlying data. Having done so, I am concerned that its results may not be reliable.

The Corruption Risk Forecast aims to identify which countries are improving in their handling of corruption, which are stationary, and which are declining. It does this initially by looking at how each country has improved or declined over the past decade in each of five substantive areas: budget transparency, administrative burden, judicial independence, e-citizenship, and press freedom. A country showing “significant” improvement in at least two of these areas is deemed to be improving, while “significant” decline in at least two areas yields the opposite judgment. A combination of significant improvement and significant decline may render the country stationary or trigger further analysis, as I’ll discuss below.
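As I read the published methodology, this classification step amounts to a simple counting rule. The sketch below is my paraphrase, not the Forecast's actual code; in particular, how it resolves mixed results (significant improvement and significant decline in the same country) is my assumption.

```python
def classify_country(improved: int, declined: int) -> str:
    """Paraphrase of the Corruption Risk Forecast's trend rule.

    improved / declined: how many of the five indicators (budget
    transparency, administrative burden, judicial independence,
    e-citizenship, press freedom) showed a "significant" change
    over the decade. The handling of mixed results is my guess.
    """
    if improved >= 2 and declined < 2:
        return "improving"
    if declined >= 2 and improved < 2:
        return "declining"
    if improved >= 2 and declined >= 2:
        return "mixed: stationary or further analysis"
    return "stationary"

print(classify_country(improved=3, declined=0))  # improving
print(classify_country(improved=1, declined=1))  # stationary
```

Everything, of course, hinges on what counts as a “significant” change in each indicator, which is where the trouble begins.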

The specific problem arises in how the makers of the Corruption Risk Forecast determine whether a trend is “significant.” Statisticians typically consider a difference significant if it places the data point more than one standard deviation away from the mean. But while the Corruption Risk Forecast uses standard deviations as a reference point, it deems a change significant if it lies more than one standard deviation away from zero, rather than more than one standard deviation away from the average change.
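The difference between the two criteria is easy to see in code. The sketch below uses hypothetical change scores (not the Forecast's actual data) to show how a zero-anchored test flags nearly everything when most countries move in the same direction, while a mean-anchored test flags only genuine outliers:

```python
import statistics

def significant_vs_zero(changes):
    """Flag changes more than one standard deviation away from zero
    (the Corruption Risk Forecast's apparent approach)."""
    sd = statistics.stdev(changes)
    return [abs(c) > sd for c in changes]

def significant_vs_mean(changes):
    """Flag changes more than one standard deviation away from the
    mean change (the conventional approach)."""
    mean = statistics.mean(changes)
    sd = statistics.stdev(changes)
    return [abs(c - mean) > sd for c in changes]

# Hypothetical decade-over-decade changes: almost every country
# improves by a similar amount; one barely moves.
changes = [8.0, 9.0, 10.0, 7.5, 8.5, 0.3, 9.5]

print(sum(significant_vs_zero(changes)))  # 6 of 7 flagged as significant
print(sum(significant_vs_mean(changes)))  # only 1 of 7 flagged
```

When the typical change is large relative to its spread, measuring from zero makes the ordinary look exceptional, and the laggard look unremarkable.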

The consequences are stark. Concerning e-citizenship, for example, evaluating each country’s change against zero yields a finding that 99 out of 120 countries have improved significantly and that no country has declined significantly. The picture changes dramatically if each country’s change is instead compared to the mean.

But is this wrong? The Corruption Risk Forecast calculates its e-citizenship scores from one to 10 based on the percentage of the population with fixed broadband subscriptions, as reported by the United Nations’ International Telecommunication Union. Hasn’t every country in the world made advances over the past decade in expanding its citizens’ access to the internet?

Mostly, yes: On average, countries increased their broadband coverage by 8.8 percentage points between 2008 and 2020. But there are exceptions. Bermuda, for example, dropped from 51.5 percent coverage in 2008 to 36.9 percent in 2020, while five other countries showed less-dramatic reductions. Among these is Nigeria, which slipped from 0.05 percent to 0.03 percent. This is notable given the Corruption Risk Forecast’s assessment that “Nigeria has not managed to evolve significantly over the past 12 years on any indicator except e-citizens [emphasis added].”

The Corruption Risk Forecast deems several other countries to have “significantly” improved in e-citizenship with no actual change (Burkina Faso) or only the mildest improvement (Burundi increasing broadband access by 0.04 percentage points, Malawi by 0.05). Liberia—which the Corruption Risk Forecast finds to have improved significantly in three of the five areas of assessment, including e-citizenship—had only a marginally better showing, going from no broadband at all in 2008 to 0.26 percent coverage in 2020. (Again, the average improvement across all countries was 8.8 percentage points.)

Among the Corruption Risk Forecast’s five inputs, the e-citizenship indicator suffers the greatest distortion from the questionable methodological decision to gauge the significance of change from zero rather than from the mean. But the issue pervades the Corruption Risk Forecast’s calculations, with greater or lesser impact. If this were a deliberate choice, it was a bad one.

But there’s a more fundamental flaw in the Corruption Risk Forecast’s construction. There is no reason to expect that a comparison across two arbitrary moments in time (2008 and 2020) will yield a reliable forecast of a country’s handling of corruption in the future. This may be why the Corruption Risk Forecast includes additional steps beyond the quantitative calculations: a “political change check” in which the editors evaluate the effect of “radical political events” within the previous four years, and a “societal demand check” that aims to capture the popular pressure for good governance.

Although both checks are underdefined and obscure in their application, they have an outsized effect on the final conclusions: By my count, they altered the forecast for about 18 out of 120 countries—a full 15 percent.

To be clear, there is nothing wrong with using expert evaluation to help build this sort of index. But it does undermine any claim that the Corruption Risk Forecast is fundamentally based on hard facts rather than opinion. More importantly, such evaluation should not be used as a backstop for a statistically flawed methodology. Until these errors and shortcomings are addressed and fixed, I cannot recommend relying on the Corruption Risk Forecast.
