How Do We Know a Good Election When We See One? Toward Common Indicators
“An important feature of the current world is that almost all countries...now hold national-level elections. They’re not all equally democratic, but there’s only a tiny handful of countries that don’t hold [them]. I think most of us can agree on the very best elections and the very worst elections..., but we’re here to talk about all of these more interesting middle categories in which it is actually quite difficult to come to any consensus about...electoral quality.”
Susan Hyde, professor of political science and international affairs, Yale University
Since the mid-1960s, the number of countries holding elections has increased dramatically, and since the end of the Cold War, this number has more than doubled. Today, more than 90 percent of the world’s countries hold elections to choose officials at the local, regional, and national levels of government. Although elections are now a regular feature of political life for billions of people, the quality of these elections varies greatly.
In an effort to improve electoral quality, international and local election observers are becoming a global norm, monitoring and assessing the procedures and perceptions of the entire electoral cycle, from the legal framework for elections to the mechanisms available for disputing results. Practitioners have used data collected during these election-observation missions to assess electoral integrity, offer recommendations for future elections, and improve election-monitoring tools. At the same time, there has been a surge of research interest in election quality, the role of elections in the democratization process, and the impact of election observation. With these shared interests and technological advances in data collection—such as The Carter Center’s ELMO tool, which increases the amount and quality of data—the door is opening to new areas for practitioner/scholar collaboration.
In April, IDN, The Carter Center’s Democracy Program, and the International Foundation for Electoral Systems (IFES) organized a two-day workshop to explore areas for potential collaboration. The workshop, titled “Toward Common Indicators for Democratic Elections,” brought together researchers and practitioners for dialogue about existing election-monitoring indicators and specific areas for cooperation. The workshop ended with a public panel, “How Do We Know a Good Election When We See One? Challenges in Measuring Electoral Quality and the Potential for Academic/Practitioner Partnerships.” Three key themes emerged from the workshop and panel: the importance of methods and context in assessing the quality of elections; the role of perceptions throughout the electoral cycle; and the different environments within which practitioners and researchers operate.
Participants raised questions and suggested best practices for measuring the quality of an election, which included discussions of how election experts use a public international law framework to assess electoral integrity. They also focused on innovative qualitative and quantitative methods of election data collection and analysis, such as the use of randomized controlled trials to measure the impact of election observers on polling stations’ compliance with electoral laws. Even with these new methods, most participants agreed that electoral quality is a difficult concept to measure. Some suggested that rather than focusing on an exhaustive list of indicators, election experts should identify a set of principles (e.g., equal opportunity to vote and run for office, transparency, rule of law) that elections must uphold, such as the set of 21 electoral and democratic obligations derived from international human rights law that The Carter Center employs in assessing elections. Experts then can work backward from this set of principles to assess the quality of an election.
Other workshop participants pointed to the difficulty of measuring electoral quality, especially across different countries. As Peter Erben, IFES senior global electoral adviser and chief of party-Indonesia, explained during the public panel:
I’ve spent the last 20 years trying to work on some of the really difficult elections around the world. I was asking myself, after the discussion of the last two days, whether I was able to rank the elections I’ve been involved in through some matrix of my own. And whether I could come to a personal index of how these elections were, and I don’t think I can because I think it is so context-dependent.
Susan Hyde, another panelist, argued that comparing the quality of one election to another is not only possible but also important for a broader understanding of the role of elections in democratic processes. She suggested that meaningful comparisons of elections might be made over time within the same country and/or among countries with similar contexts (e.g., post-conflict settings).
Another key theme of the workshop and panel was the impact of citizens’, election officials’, and observers’ perceptions on measures of electoral quality. Did citizens in different polling stations believe officials followed the rules? Do election-monitoring reports affect citizens’ perceptions of election credibility? Was the outcome of the election the expected one? David Carroll, director of The Carter Center’s Democracy Program and another panelist, highlighted the impact of election outcomes on perceptions of quality by using Nigeria’s recent presidential election as an example:
We can look at it in terms of electoral quality, but our judgment is impacted by the outcome: the fact that the incumbent lost. . . . [I]f you imagine Nigeria having been an election that was a lot closer, and Jonathan, the incumbent, had won, but all of the other metrics that we may have applied to electoral quality were the exact same, . . . we would be thinking about this differently.
Conversations during the workshop and panel also highlighted the different environments in which practitioners and researchers work, and how these differences impact collaborations. One key difference is the timeline for decision making in election-observation missions versus scholarly research on elections and democracy. Whereas practitioners often must make quick decisions that will be implemented immediately, researchers often have much longer time horizons, given that the average length of time between research and publication can be measured in years. Another important difference is the potential consequences of an assessment of the quality of a given election. As panelist Karen Ferree of the University of California–San Diego pointed out:
There’s a lot riding on the evaluation of the quality. I think as an academic, this is something I don’t appreciate as much, but my colleagues who are practitioners, it’s something I’m learning they always have to think about. What are the consequences of saying that a particular election is good or not good? There’s a lot riding on that, from international development aid to local political conflict depending on those evaluations. It’s a very consequential measurement, and that adds to the difficulty of measuring something that’s already very difficult to measure.
Despite this and other challenges, workshop participants were able to identify indicators at different stages of the electoral cycle that potentially could serve as the basis for collaboration. Going forward, workshop organizers and participants have agreed to keep the dialogue moving by selecting a small group of indicators from the list developed in the workshop and collaborating on best practices for collecting and analyzing data for these indicators. Two future workshops in 2015 are already in the planning phase.
Common Indicator workshop participants during the public panel on measuring electoral quality, April 8, 2015, in the Carlos Museum Reception Hall at Emory University.