Sabato's Crystal Ball

Two Ways of Thinking About Election Predictions and What They Tell Us About 2018

There are differences in method, accuracy, and probability between quantitative forecasting and ratings-based handicapping

G. Elliott Morris, Guest Columnist June 14th, 2018



— Two approaches to forecasting — one formally statistical, one rigorous yet flexible handicapping — produce different tools that we can use to evaluate the battle for control of the U.S. House in the 2018 midterms.

— The Crystal Ball and other political handicappers use a “qualitative” method to generate ratings of individual seats using election news, candidate evaluation, and some hard data. Others use quantitative modeling to produce probabilities of how likely it is for one party or the other to win each House seat.

— The quantitative model described below is more bullish on the Democrats’ House prospects than the Crystal Ball’s race ratings, but both indicate considerable uncertainty about which party will win a House majority this November.

— Those following this year’s House elections would be wise to take into account both qualitative race ratings, like those done by the Crystal Ball, as well as quantitative models, like the model described below, when assessing the race for the House.


To understand the differences between quantitative, data-driven predictions and those made from traditional, data-influenced handicapping, one should direct their attention to the names of two websites: Sabato’s Crystal Ball at the University of Virginia Center for Politics, and my blog, The Crosstab. One is a reference to the soothsayer, a fortune-teller who stares into their glass ball and derives the fate of an event by evaluating some known and unknown factors. The other is a reference to the contingency table, a common tool in survey research that breaks down responses to one question by subsets of responses to another. The names of my website and this one are coincidentally descriptive of the ways in which our predictive methods differ.

Each of these two approaches to forecasting outcomes has both benefits and drawbacks. This piece evaluates those differences in the context of the 2018 midterm elections and delivers some much-needed attention to what the two methods have in common, not just how they contrast.

Before I begin, allow me to highlight a guiding principle of this article. The contrast between my quantitative forecasting and the Center for Politics’ “qualitative” handicapping of the 2018 House elections is mostly the difference between continuous predictions (those that assign outcomes a chance anywhere between 100% win and 100% loss) and binary predictions (those that assign either “win” or “lose” to a party). Whereas the method I employ uses data to generate a probability of victory for Democrats in all 435 U.S. congressional districts and their chance of winning the majority of seats, the Crystal Ball’s method tells you that either Democrats or Republicans are favored to win a particular race (technically, the Crystal Ball’s method is a discrete one, with set degrees of certainty on both sides, but it is closer to binary than to continuous prediction). Keep this difference between continuous (even better: distributional) and binary/discrete projections in mind.

This piece is broken into three sections. In the first, I go through what the two methods take into account when projecting race outcomes. I then detail the differences between what they tell us, and in the final section I break down the differences in the models’ past and current forecasts.


Both my and the Crystal Ball’s methods of predicting the 2018 midterm elections to the United States House of Representatives are processes that (1) take in information, called inputs; (2) do something with that information; and (3) spit out other information, called outputs. If you remember ninth grade mathematics, these are both called functions (Dr. Seuss had a youthful explanation of functions that I remember from high school pre-calculus). However, after this rough categorization, the two functions diverge considerably.

My model to forecast the 2018 U.S. House midterms is a probabilistic statistical model that takes in a variety of inputs and, through four stages, produces outputs. The overall approach was developed by political scientists Joseph Bafumi of Dartmouth College, Robert Erikson of Columbia University, and Christopher Wlezien of the University of Texas at Austin. You can read their paper here. The model performs its estimation in four stages (note that the technical details of my model differ slightly from Bafumi et al.’s methodology):

  1. Calculate an estimate of the national environment today:
    1. Compute a weighted average of all congressional generic ballot polls taken for the 2018 cycle so far.
    2. Compute the average change in post-2016 special elections from the previous Democratic margin in a seat to the margin in the special election.
    3. Repeat this for every day of every year going back to 1992.
  2. Predict the national environment on Nov. 6, 2018:
    1. Use generic ballot polling averages at this point in past cycles…
    2. …combined with the average special election swing, again at this point in past cycles …
    3. …to generate a prediction of the national vote on election day. The final projection has around a six-point margin of error today.
  3. Use a variety of inputs to predict results at the district level:
    1. Create a baseline projection for every district by combining:
      1. The partisan lean of a district (a method developed at FiveThirtyEight that averages a seat’s 2016 Democratic presidential win/loss margin with its 2012 presidential margin, weighted 75%/25% to put more emphasis on the more recent cycle);
      2. The previous candidate’s margin in the district;
      3. Candidate-specific variables, like whether an incumbent is running or if one candidate is significantly qualitatively “worse” than the other.
    2. Swing this baseline projection of the district the appropriate amount left/right, determined by the projected Democratic margin in the national vote from step 2.3.
  4. Simulate 50,000 election outcomes:
    1. For each trial, vary the estimated national popular vote randomly according to the margin of error of past predictions of the national vote. Add that error to each seat uniformly (NY-15, the seat where Hillary Clinton did the best in 2016, gets swung just as much as TX-13, the seat where Donald Trump did the best).
    2. Vary the forecast Democratic margin in each seat according to error that is correlated between districts. This accounts for the chance that our forecasts have more error in red than blue districts, white than minority districts, educated than uneducated districts, etc.
    3. Add up the number of seats Democrats win.
    4. Repeat this 50,000 times. The percentage chance that Democrats have of winning the election is simply the number of times they win 218 seats or more (a bare majority) divided by the total number of trials. Each seat has its own win probability generated the exact same way (by keeping a list of seats won/lost in each trial).
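The simulation stage (step 4) can be sketched in a few lines. Everything below is illustrative: the baseline margins and error sizes are placeholder assumptions, not the model’s actual estimates, and the seat-level error is drawn independently here rather than correlated across similar districts as step 4.2 describes.

```python
import numpy as np

rng = np.random.default_rng(2018)

N_TRIALS = 50_000
MAJORITY = 218
N_SEATS = 435

# Placeholder inputs: a forecast Democratic margin for each seat (step 3)
# and made-up error sizes -- not the model's actual estimates.
district_margins = rng.normal(0.0, 15.0, size=N_SEATS)  # pct-point margins
NATIONAL_SD = 3.0   # error in the national popular-vote forecast (step 4.1)
DISTRICT_SD = 6.0   # seat-level error (independent here; correlated in 4.2)

dem_seat_wins = np.zeros(N_SEATS)
dem_majorities = 0

for _ in range(N_TRIALS):
    # 4.1: one national error term, applied uniformly to every seat
    national_error = rng.normal(0.0, NATIONAL_SD)
    # 4.2: seat-level error around the shifted baseline
    seat_error = rng.normal(0.0, DISTRICT_SD, size=N_SEATS)
    simulated = district_margins + national_error + seat_error
    dem_won = simulated > 0
    dem_seat_wins += dem_won                 # running per-seat win tallies
    if dem_won.sum() >= MAJORITY:            # 4.3: count seats won
        dem_majorities += 1

seat_probs = dem_seat_wins / N_TRIALS        # per-seat win probabilities
majority_prob = dem_majorities / N_TRIALS    # 4.4: chance of a Dem majority
```

The per-seat probabilities fall out of the same trials as the majority probability, exactly as step 4.4 describes: each is just a count of favorable trials divided by the total.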

On any given day, the numbers generated by my forecasting model represent the best predictions we have of Democratic win margins at the national level and in each House district, and the chance that those projections will err. Remember, these projections are continuous, with outcomes occurring along a distribution of possibilities and each seat having a specific probability of victory. In the end, I produce a dataset of continuous vote shares and win probabilities, ranging from 0% to 100%, for every House seat in the nation and the nation itself.

The process by which the UVA Center for Politics generates its race ratings is different, however, and does not follow such a strict, formal statistical methodology.

The analysts at UVA consider myriad data, some quantitative and some not — often on different scales (e.g., how do you compare a previous Democratic win margin to a headline like “A gay Republican, the child abuse he sanctioned, and the homophobia used to defend him”?) — to come up with their projections. They weigh a number of factors, including electoral history, polling, candidate quality, modeling, and district news. The ratings ultimately reflect their judgment about the likelihood of one side or the other prevailing in a given contest.


As discussed, these two approaches to forecasting — one formally statistical, one rigorous yet flexible handicapping — produce different tools that we can use to understand upcoming elections. Whereas I rate each seat on a scale from 0% to 100% for the likelihood that it is won by Democrats, the team at UVA produces ratings that lie on a discrete scale: from Safe, Likely, and Lean Republican, to Toss-up, to Lean, Likely, and Safe Democratic. To evaluate what these differences might mean in November 2018, it is useful to explore what they meant last time around.

The Past: Accuracy of seat ratings and forecasting models

The big question everyone wants answered is: How likely is it, say, for a “Lean Republican” seat to be won by a Democrat? What about a Likely, or better yet, Safe, Republican seat? One would hope that races rated differently would convey different win probabilities for Democrats and Republicans. Indeed, they do.

To determine how well the UVA Center for Politics election ratings matched election outcomes over time, I combined their historical ratings going back to 2004 with actual results in House districts (made available by the MIT Election Data and Science Lab). The results are shown below.

Figure 1: Accuracy of race ratings by category

Notes: This figure stacks each district over Democrats’ actual November vote margin in the seat depending on its race rating from Sabato’s Crystal Ball. Ratings for all elections since 2004 are included where available.

You can see that there are certainly differences between UVA’s Safe, Likely, and Lean categories on both sides of the aisle; safer districts are less likely to see large upsets, and seats that lean toward either party are sometimes, though not frequently, won by the opposition. Overall, the ratings are relatively well calibrated, and 89% of rated seats not categorized as Toss-ups end up being won by the party that is favored to win.

If I want to compare the Center for Politics ratings with my own, however, I need to put them on the same continuous scale. I do so by simply taking the average Democratic win margin and raw probability of victory for every category of race rating. The figure below shows the results of this analysis.

Figure 2: Converting House race ratings to probabilities of victory

Notes: This figure graphs the implied Democratic win margin and win probability for each race rating category. To get an implied forecast of Democratic win margin and win probability, I calculated the average win margin, standard deviation, and percent of the time Democrats win for each race-rating category over all House elections since 2004. Points on the graph are sized by the number of contests in that category.

In the left panel of the graphic, I show that each category has an identifiable point estimate and band of uncertainty (or confidence interval) surrounding it. Lean Democratic seats are won, on average, with a six-point Democratic margin, for example; Lean Republican seats give GOP candidates a seven-point average margin; Likely Democratic seats give Democratic candidates a 14-point average margin, and so forth.

Each of these categories also has a corresponding Democratic probability of victory for the seats placed within. In Lean Democratic seats, Democrats win the elections 78% of the time; Lean Republican: 18%; Likely D: 95%; Likely R: 3%; Safe D: 99%; Safe R: 0%; and Toss-up districts: 59%. These values are plotted on the right of the preceding figure, with the size of each point showing the number of seats earning that designation over the years.
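The conversion itself is simple enough to sketch: group past races by rating and take the average margin and win rate within each group. The ratings and margins below are invented placeholders, not the actual historical data the figures above are built from.

```python
from collections import defaultdict

# Hypothetical history of (race rating, actual Dem win margin in pct points);
# the real analysis uses every rated House race since 2004.
history = [
    ("Lean D", 6.5), ("Lean D", -2.0), ("Lean D", 8.0),
    ("Lean R", -7.5), ("Lean R", -4.0), ("Lean R", 3.0),
    ("Toss-up", 1.0), ("Toss-up", -0.5),
]

by_rating = defaultdict(list)
for rating, margin in history:
    by_rating[rating].append(margin)

implied = {}
for rating, margins in by_rating.items():
    avg_margin = sum(margins) / len(margins)               # implied Dem margin
    win_prob = sum(m > 0 for m in margins) / len(margins)  # implied Dem win prob
    implied[rating] = (avg_margin, win_prob)
```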

It is apparent that qualitative seat ratings have provided good forecasts of Democratic win margins and probabilities in the past, but how do they compare to the predictions generated by my formal statistical model? Below, I recreate the probability-by-ratings figures for seat ratings generated by re-running my 2018 U.S. House midterms model for the 2016 House elections. Specifically, each seat is assigned a rating according to its forecast win probability: if both parties have a win probability below 60%, the seat is considered a Toss-up; Lean Democratic/Republican seats are those where the favorite’s win probability is below 80%; Likely seats are those with win probabilities below 95%; and seats rated as greater than 95% likely for either party are considered Safe R/D.
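Those thresholds can be written as a small function; this is a sketch of the assignment rule just described, with a hypothetical function name:

```python
def rating_from_probability(dem_win_prob: float) -> str:
    """Map a Democratic win probability to a discrete seat rating,
    using the 60% / 80% / 95% thresholds described above."""
    favorite = "D" if dem_win_prob >= 0.5 else "R"
    p = max(dem_win_prob, 1 - dem_win_prob)  # the favorite's win probability
    if p < 0.60:
        return "Toss-up"
    if p < 0.80:
        return f"Lean {favorite}"
    if p < 0.95:
        return f"Likely {favorite}"
    return f"Safe {favorite}"
```

For example, a seat where Democrats have a 30% chance (so Republicans have 70%) lands in Lean R, while a 99% Democratic seat lands in Safe D.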

Figure 3: Probabilities of Democratic victory based on House race ratings

Notes: This figure shows the actual Democratic probability of winning for seats rated as Safe R/D, Likely R/D, Lean R/D, or Toss-up, with the rating derived from its forecast win probability. Points sized by the number of contests in that category.

What first stands out is how pro-Democratic the Toss-up category is. However, as there are only six seats in this category, this error is caused by the Democrats winning one more seat than they ought to (four out of six instead of three out of six) — a likely insignificant difference in the long term.

What is more important is the much higher proportion of Safe to Lean/Likely seats in the quantitative forecast than in the qualitative ratings. It should be noted that this could be partly due to the Center for Politics’ omission of ratings for some lopsided seats.

Of the 384 House elections that took place in states that did not redraw their congressional boundaries prior to the 2016 election, my forecast predicted 98.7% correctly, getting just five non-Toss-up seats wrong, and projected Democrats to win three fewer seats in aggregate than they actually did in November. Two of 28 Likely Republican seats were won by Democrats, one of eight Lean Republican seats was won by Democrats, and two of 12 Lean Democratic seats were won by Republican candidates. The predictions for all 435 seats erred 10 times, making the total error rate about 2%.

It’s worth noting that the seat with the biggest error (10 percentage points) in my re-run 2016 forecast was AZ-1, which the Center for Politics correctly predicted would be won by now-Rep. Tom O’Halleran (D) instead of Paul Babeu (R) — he’s the candidate referenced in the “A gay Republican…” headline cited above. The UVA projections picked other Republican seats as Democratic pickups that I did not, and ended up over-shooting the Democrats’ number of seats by seven last cycle, while I low-balled them by three seats. I pick the AZ-1 example because it displays the biggest weakness of quantitative forecasting: the difficulty of accounting for deficits in candidate quality in a data-driven fashion. However, this is not as large an issue as one might think, given the overall record of the modified Bafumi et al. method.

But what if neither method alone is the correct answer? For the sake of completeness: if you had combined the two forecasts with a method that accounts for the uncertainty in both projections (using a Bayesian update to the normal distribution of outcomes — certainly a good, though not the most sophisticated, way to do so), you would have predicted the 2016 elections spot on, with Democrats projected to win 194 seats in the U.S. House, though six individual projections were wrong and canceled each other out.
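Assuming both forecasts are summarized as normal distributions over the Democratic margin, one simple way to implement such a blend is the standard conjugate normal update, in which each estimate is weighted by its precision (one over its variance). The numbers in the usage line are illustrative, not actual forecasts:

```python
def blend_normal(prior_mean, prior_sd, like_mean, like_sd):
    """Posterior of a normal prior updated with a normal likelihood:
    the means are averaged with weights proportional to precision."""
    w_prior = 1.0 / prior_sd**2
    w_like = 1.0 / like_sd**2
    post_var = 1.0 / (w_prior + w_like)
    post_mean = post_var * (w_prior * prior_mean + w_like * like_mean)
    return post_mean, post_var**0.5

# e.g. a D+4 model forecast (sd 6) blended with an R+2 rating-implied
# forecast (sd 8) lands between the two, closer to the tighter estimate:
post_mean, post_sd = blend_normal(4.0, 6.0, -2.0, 8.0)  # -> (1.84, 4.8)
```

Note that the blended standard deviation (4.8) is smaller than either input’s, which is why combining the two forecasts can be more accurate than either alone.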

The figure and table below depict the quantitative forecast, the ratings-based forecast, the blend of the two, and the final result in each of the top 20 closest districts in the 2016 U.S. House elections.

Figure 4 and Table 1: Blended House forecasts in 20 closest 2016 House races

Notes: This figure shows estimates for the 2016 House elections according to different methods. The “blended” forecast is a Bayesian update to the normal distribution with the quantitative forecast being used as the prior, the seat rating being used as the likelihood, and the resulting posterior estimate and credible interval being used as the final prediction and margin of error.

Above, seats with lines that cross zero are the ones where the blended prediction “missed” the result, though it should be noted that all the outcomes fell within the margin of error.

It should be noted that although the combination of the ratings and the forecasting model is a useful tool for understanding U.S. House elections, the blended predictions are less accurate earlier in the cycle. This is because seat ratings move less predictably than other indicators (like national congressional polling) and therefore produce more noise in the estimates in, say, June of the election year than in late October or November. In other words, this method only works better than the forecast alone once the final race ratings for House seats are made available.

So, what do we know at this point in the piece — and in the 2018 midterms cycle?

First, the data are clear that discrete seat ratings perform ever-so-slightly worse than the data-driven quantitative forecasts, though both did correctly predict the outcome of the House majority in 2016. Second, it’s evident that probabilities are slightly more certain in the quantitative forecast than in the qualitative ratings and that the seat ratings give less room for flexible probability within categories. Third, I find that errors in the quantitative analysis are sometimes controlled for in the discrete ratings, though errors exist elsewhere to cancel out some of these gains. Finally, a blend of both measurements provides the best seat-by-seat understanding of the midterm elections.

Given the track record of both approaches, it is worthwhile to consider the following: what are the differences in the outputs of both models today?

The Present: Different forecasts for different seats

Given the different methods employed by me and the team at the University of Virginia Center for Politics, and the differences in our 2016 forecasts, one should expect variations in the predictions for November 2018 as well.

Indeed, there are (some) large differences in our forecasts. Though a portion of the discrepancies can be explained by the inability of qualitative ratings today to adjust for movement in the national environment — which my forecast does, and which pushes expectations toward the party out of power — and some can be explained by my quantitative method not taking good account of the quality of some districts’ nominees, other differences reflect real disagreement between the methodologies.

The table below details the 15 districts where my forecasts and the UVA qualitative forecasts disagree — in other words, where one of us says the Democrats/Republicans are more likely to pick up a seat, and the other says Republicans/Democrats are the more probable victors.

Table 2: Differences between Crystal Ball and Crosstab House forecast Democratic win probabilities

However, many of these differences arise in seats that either of us rates as Toss-ups, accentuating differences between forecasts for contests where we’re actually quite uncertain about the outcome. Here’s what that table looks like without Toss-up seats.

Table 3: Differences between Crystal Ball and Crosstab House forecast Democratic win probabilities, excluding Toss-ups

As you can see, where it matters most (in calling districts for either party), the methods are arriving at roughly the same conclusions. It is in Toss-up seats where the biggest discrepancies arise.

However, what really counts is the probability of victory assigned to either party — in each seat and in the nation as a whole — and the range of outcomes in the upcoming midterm elections.

Next, I answer the question of how many seats Democrats are likely to win according to the seat probabilities assigned by both methods. What’s going to happen in November?

The Future: Who’s going to win the majority?

Above all else, the main advantage of the quantitative model is the ability to simulate thousands of possible elections — some where Democrats do better, some where Republicans do, within the statistical range of error we’ve observed in the past — and generate a final probability for the chance that Democrats win a majority of seats. In its current form, the House ratings here at Sabato’s Crystal Ball (and elsewhere) are not able to compete with that (and perhaps, neither should they!).

However, the cool thing about the quantitative abstraction of the UVA seat ratings detailed above is that it comes with everything we need to be able to plug it into the simulation phase of the formal model. This way, we can account for the inherent error in the ratings while producing a nationwide probability that Democrats may win the majority of seats in the U.S. House of Representatives.

Better yet, instead of doing exactly what my model does (which produced some strange-looking seat outcomes due to inflated margins of error transitioning from the seat rating to average win margin), I can skip that step and work directly with the win probabilities derived from the past accuracy of ratings. For every trial, I model the expected change in probability from varying a district’s forecast vote margin according to (1) national error and (2) correlated seat error. I use the inverse normal distribution to make sure that probabilities are adjusted properly (seats rated as Toss-ups will see larger shifts in probability than Safe seats, for example).
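That inverse-normal adjustment can be sketched as below. The 6-point error scale is a placeholder assumption, and input probabilities must lie strictly between 0 and 1 (a 0% or 100% rating would need to be nudged before conversion):

```python
from statistics import NormalDist

_std = NormalDist()

def shift_probability(win_prob, margin_shift, sd=6.0):
    """Perturb a win probability through z-score space: map the probability
    to a z-score, add the simulated margin swing (scaled by an assumed
    forecast-error sd), and map back. Near-50% probabilities move much more
    than near-certain ones, as described for Toss-up vs. Safe seats."""
    z = _std.inv_cdf(win_prob)           # probability -> z-score
    return _std.cdf(z + margin_shift / sd)
```

For instance, a 3-point pro-Democratic swing moves a 50% seat to about 69%, but moves a 99% seat only to about 99.8%.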

Akin to the visualizations on my forecasting homepage, below I graph the range of possible seat outcomes for both models. The taller the line, the more likely it is that the Democrats win that number of seats.

Figure 5: Differing House forecasts

Notes: This figure shows the distribution of the possible number of Democratic seats after the 2018 U.S. House midterms according to the same simulation method applied to two sets of seat forecasts. Forecasts according to ratings and probabilities generated on June 7, 2018.

After simulating 50,000 trial elections with the seat-level win probabilities assigned by the UVA Center for Politics, we have our answer: Democrats are much more favored in the quantitative forecast (close to a 60% chance of winning the majority of seats) than in the discrete ratings (about a 40% chance). The expected number of districts won by Democrats is 10 seats larger in my own forecast than in the ratings at this website.

Why the difference? The discrepancy is explained by two major factors:

First, the quantitative method has the advantage of being able to look into the future, estimating where the national environment is likely to be in November this year by tracing movement in past election cycles from June until election day.

Second, there is a higher number of Lean and Likely Republican seats in my continuous, probabilistic data than in the UVA ratings. This explains the large right tail of possible Democratic seats graphed in blue above; there are more seats that Democrats can pick up in a large blue “wave” in my data, pushing expectations to the right.

You can see these differences in the cross tabulation (see what I did there?) below. Although my ratings and the UVA ratings agree on 118 Safe Republican seats, I rate 38 GOP-held districts as Likely or Lean that are rated as Safe here. Nine of their Lean/Likely ratings I give a Toss-up (between a 40% and 60% chance of Democratic victory) designation.

Table 4: Comparing forecasts

This again shows the main limitation of the ratings-based approach, at least when plugging it into a purely quantitative forecasting method: the discrete scale of probabilistic ratings. Since each seat is put into a category that has a historical probability of voting for a Democrat or Republican, seats are not allowed to vary in the probabilities assigned to them. Even if one Lean Republican seat looks more competitive than another, they are both placed in a bucket that elects Republicans 18% of the time.

On the other hand, my forecast generates a specific win margin and win probability for each individual seat, so NC-02 and PA-10 can have their own specific probabilities assigned to them (36% and 22%, respectively) on a continuous scale. All of these differences add up to produce a forecast that is more optimistic about Democrats’ future in the U.S. House of Representatives.

Closing thoughts

This piece has reviewed the methods, history, and current projections of two different prediction methods — a crystal ball and a statistical model — for this fall’s midterm elections to the U.S. House of Representatives. I have reviewed differences in probabilities between the measures, accuracy in the 2016 elections, and even prognosticated about the future of the House according to the two varying processes. What is clear is that the two vary considerably in some parts, and are similar in others.

No one approach is God’s gift to election handicapping, however. Both my forecasts and the UVA Center for Politics’ have erred in the past (some errors cancel each other out, some do not), and a combination of both projections performs best in predicting the final partisan breakdown of seats. Indeed, even within methods there is variation; the Cook Political Report and Inside Elections race ratings, as well as those published by media outlets like CNN, all have disagreements — sometimes large ones — about ratings in some key districts. A new statistical forecasting model published by my soon-to-be colleagues at The Economist also has differences with my own method. While these differences can individually mislead, the truth frequently lies between them all. Apart from the technical details, there are also important differences in how we conceptually utilize continuous and discrete forecasts (some scientific, some journalistic) that this article does not discuss.

As we head into the heat of the summer of this 2018 midterm cycle, pundits, politicos, and voters alike should take note of the past, present, and future differences between quantitative forecasting methods and typical ratings-based handicapping. If the past holds true, the former will do well at producing precise probabilities for each U.S. House seat based on its individual characteristics, and the latter will do well at reducing the large deviations from the forecast that typically arise from issues with candidate quality and rapidly changing districts.

Whatever method you pick (if you’ve learned anything from this piece, you ought to pick both), rest assured that the two are well-tested methods that will get us 90-98% of the way to foreseeing what will happen on November 6, 2018. It’s the remaining 2-10% that will make or break House forecasting this fall.

G. Elliott Morris is a data journalist who specializes in elections, political science, and predictive analytics. Elliott has previously crunched numbers for Decision Desk HQ and the Pew Research Center. Elliott graduated from the University of Texas at Austin in May and joins The Economist in July. Follow him on Twitter and at his blog, The Crosstab.

What Happened in the June 12 Primary

Maine experiments with ranked-choice voting, the Virginia GOP backs Stewart for Senate, and Sanford loses renomination in South Carolina

Geoffrey Skelley and Kyle Kondik, Sabato's Crystal Ball June 14th, 2018



— Maine became the first state in modern U.S. history to use ranked-choice voting (also known as instant-runoff voting) in a statewide election. But this was not the first time that a state used a form of ranked voting or preferential voting. In the early 1900s, a number of states tried out ranked-voting methods, including in statewide contests for offices such as U.S. Senate and governor.

— In Virginia, Prince William County Board of Supervisors Chairman Corey Stewart (R) narrowly defeated state Del. Nick Freitas (R) 45%-43% to win the GOP nomination for U.S. Senate. Anti-Stewart forces rallied late to boost Freitas, but came up just short, much to the chagrin of many GOP leaders. Women won five of the six Democratic primaries for the U.S. House, including in all of the competitive House seats.

— In other primaries, the most notable result was Rep. Mark Sanford (R, SC-1) losing his primary to state Rep. Katie Arrington (R). Arrington likely will be fine in November but we’re moving the district from Safe Republican to Likely Republican.

Table 1: Crystal Ball House ratings change

Member/District Old Rating New Rating
SC-1 Open (Sanford, R) Safe Republican Likely Republican

Maine’s ranked-choice voting experiment

In one way, Maine offered the most interesting results of the night, and not only because of who appears to have won some of the party nominations for governor and Congress. The Pine Tree State became the first state in modern U.S. history to use ranked-choice voting (also known as instant-runoff voting) in a statewide election. Under RCV or IRV, voters rank the candidates on the ballot rather than selecting just one; if no candidate wins a majority of first-choice votes, the last-place candidate is eliminated in each counting round and those ballots are transferred to their next-ranked surviving choice, until the eventual winner earns majority support.

Because the counting process for this system can take time to sort out, some Maine races remain uncalled. Election Night tabulations only accounted for first-choice votes, so outcomes remain up in the air unless a candidate won a majority. The seven-way Democratic primary for governor will not be decided for a few days: Maine Attorney General Janet Mills (D) led with about 33% of the first-choice votes, ahead of attorney and former 2008 U.S. House candidate Adam Cote at 28.5%, with activist Betsy Sweet (16%) and former Maine Speaker of the House Mark Eves (14.5%) in third and fourth. The ME-2 Democratic primary also remains uncalled, though state Rep. Jared Golden (D) had 49% of the first-choice votes in a three-way contest. Assuming he wins just a few second-choice votes from voters who backed the third-place candidate, Golden should win. In the Republican gubernatorial primary, businessman Shawn Moody (R) won outright, capturing 56% of the first-choice votes. (All results are as of Wednesday afternoon.)
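The counting procedure just described can be sketched as follows. The candidates in the usage example are hypothetical, and real RCV rules include tie-breaking and ballot-exhaustion details omitted here:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff count: each ballot is a list of candidates in
    preference order. Eliminate the last-place candidate each round,
    transferring those ballots to their next surviving choice, until
    someone holds a majority of the remaining active ballots."""
    ballots = [list(b) for b in ballots]
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(b[0] for b in ballots if b)  # current first choices
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        loser = min(remaining, key=lambda c: tally.get(c, 0))
        remaining.discard(loser)
        ballots = [[c for c in b if c != loser] for b in ballots]

# 9 hypothetical ballots: A leads 4-3-2 on first choices with no majority,
# so C is eliminated and C's ballots transfer to B, who then wins.
winner = instant_runoff([["A", "C"]] * 4 + [["B", "C"]] * 3 + [["C", "B"]] * 2)
```

Note how the transfer flips the outcome: the first-choice leader (A) loses once the eliminated candidate’s supporters are redistributed, which is exactly why RCV counts can take days to resolve.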

This was not the first time that a state used a form of ranked voting or preferential voting. In the early 1900s, a number of states tried out ranked-voting methods, including in statewide contests for offices such as U.S. Senate and governor. James Bucklin, an election reformer and later a Colorado state senator, proposed one such system that found use in around a half-dozen states. The system generally worked like this: In multicandidate races, voters were supposed to cast ballots with their first and second choices for an office. If no candidate won a majority of first-choice votes, then the second-choice votes were added to the totals of the top-two first-choice vote getters, and the candidate with the most votes would then win. However, voters realized that if both their choices finished in the top two, their votes would actually cancel out in the second-choice round. Moreover, if a voter’s second choice did not finish in the top two, that voter’s second-choice vote was wasted. These problems led to a preponderance of “bullet voting” where voters voted for only their first choice to avoid potentially canceling out their vote.
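The Bucklin procedure, as described above, can be sketched with two-choice ballots. The ballots below are hypothetical, and the example shows why second choices mattered: the first-choice leader can be caught once seconds are added to the top two.

```python
from collections import Counter

def bucklin_winner(ballots):
    """Bucklin count: each ballot is a (first_choice, second_choice) pair.
    If no one has a first-choice majority, second-choice votes are added
    to the totals of the top-two first-choice finishers; highest total wins."""
    firsts = Counter(first for first, _ in ballots)
    total = len(ballots)
    leader, votes = firsts.most_common(1)[0]
    if votes * 2 > total:
        return leader                      # outright first-choice majority
    top_two = {c for c, _ in firsts.most_common(2)}
    combined = Counter({c: firsts[c] for c in top_two})
    for _, second in ballots:
        if second in top_two:              # seconds for anyone else are wasted
            combined[second] += 1
    return combined.most_common(1)[0][0]

# A leads 4-3-2 on first choices but lacks a majority; B and C voters'
# second choices lift A over B in the second-choice round.
winner = bucklin_winner([("A", "B")] * 4 + [("B", "A")] * 3 + [("C", "A")] * 2)
```

The sketch also reproduces the flaw described above: A’s own supporters boost B in the second round, so a voter whose two choices both reach the top two effectively cancels their own ballot.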

In total, 10 states used a form (not necessarily Bucklin’s system) of ranked voting for primary elections for statewide offices at least once in the early 20th century: Alabama, Florida, Idaho, Indiana, Louisiana, Maryland, Minnesota, North Dakota, Washington, and Wisconsin. In 1925, Oklahoma passed but never used a ranked-voting scheme. The Sooner State primary law required voters to rank their top two choices in three- or four-candidate races and their top three choices in races with more than four candidates. If voters failed to rank a sufficient number, their votes would not count. The state court threw that law out prior to the 1926 election, which operated under traditional plurality-wins rules. Washington utilized a system somewhat similar to the proposed Oklahoma system, in that the Evergreen State required voters to cast a ballot marking both a first and second choice in any race with four or more candidates; otherwise the ballots were discounted.

The short-lived ranked-voting systems in Florida, Maryland, Minnesota, and Wisconsin operated somewhat similarly to the ranked-choice voting system that Maine used on Tuesday. In the cases of Maryland, Minnesota, and Wisconsin, if no candidate won a majority, their formats dropped the lowest-ranking candidate based on first-choice votes, and then assigned the second-choice votes of those who cast a ballot for the dropped candidate to the remaining candidates. This process repeated itself until someone won a majority of the vote. In Maryland, the ranked-vote primary assigned county delegates to candidates for a binding vote at the state convention, while the other states had a direct primary by popular vote count. Florida operated for a time under the Bryan primary law, which worked somewhat differently from the formats used in Minnesota and Wisconsin. Instead of sequentially eliminating the lowest-ranking candidate and assigning second-choice votes, the Bryan law dropped all but the two highest-ranking candidates by first-choice vote and then assigned the second-choice votes cast by voters who backed one of the eliminated candidates.
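The sequential-elimination procedure used in Maryland, Minnesota, and Wisconsin (and, with unlimited rankings, in Maine's modern ranked-choice system) can be sketched as follows. This Python is an illustrative simplification that assumes each ballot is a tuple of candidate names in preference order and ignores tie-breaking rules, which varied by state:

```python
from collections import Counter

def sequential_elimination_winner(ballots):
    """Drop the last-place candidate by current first-preference count and
    transfer those ballots to their next surviving choice, repeating until
    someone holds a majority of the votes still counting."""
    candidates = {c for b in ballots for c in b}
    while True:
        tallies = Counter()
        for b in ballots:
            # Each ballot counts for its highest-ranked surviving candidate.
            for choice in b:
                if choice in candidates:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes > total / 2 or len(candidates) <= 2:
            return leader
        candidates.discard(min(tallies, key=tallies.get))
```

The Bryan-law variant Florida used differs only in the elimination step: instead of dropping one candidate per round, it eliminated everyone outside the top two at once and transferred second-choice votes in a single pass.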

However, none of these systems worked the same as the ranked-choice system used in Maine, where voters could rank every candidate in, for instance, the seven-candidate Democratic field for governor; those old systems generally called for only a first- and second-choice vote. The use of ranked-choice voting (RCV), also known as instant-runoff voting (IRV), has spread in recent times, mostly in cities around the United States, but Maine became the first state in modern times to use it for statewide elections. It appears that RCV will remain a part of Maine elections for some time to come: A referendum to stop the state legislature’s attempt to quash RCV passed 54%-46% on June 12. Legal efforts to block the use of RCV will continue, but it may be used in congressional general elections in November. However, because of the state constitution’s wording about gubernatorial elections, traditional plurality-wins voting will still apply to the general election for governor.

Many articles about RCV in Maine have noted that the election and reelection of controversial Gov. Paul LePage (R) helped precipitate the movement to change the voting system to RCV instead of the traditional first-past-the-post system. LePage won in 2010 with 38% while center-left independent Eliot Cutler took 36% and state Senate President Libby Mitchell (D) garnered 19%. Four years later, LePage won reelection with 48% while then-Rep. Mike Michaud (D, ME-2) captured 43% and Cutler — running again — won 8%. But Maine’s tendency to elect governors with pluralities rather than majorities long predates LePage’s 2010 win. Going back to the 1958 cycle — so starting around the time of Alaska and Hawaii’s first elections as U.S. states — Maine has the second-highest share (56%) of plurality winners in gubernatorial contests, trailing only Alaska (60%). The Last Frontier and the Pine Tree State have the same number of total plurality winners — nine — from 1958 to 2017, but Maine has held one more gubernatorial election than Alaska in that time frame (a 1960 special election). Table 2 lays out the data on plurality winners in that time span for the 50 states.

Table 2: Number of plurality winners in gubernatorial elections by state, 1958 to 2017

Notes: This table includes only winning gubernatorial candidates who won with less than a majority of the vote. Arkansas (22 elections), Delaware (15), Iowa (19), Kentucky (15), Louisiana (15), South Carolina (15), South Dakota (19), and Tennessee (15) had no plurality winners in gubernatorial elections from 1958 to 2017. In October 1987, Rep. Buddy Roemer (D, LA-4) won a plurality with 33% in the initial election for Louisiana’s governorship, finishing ahead of second-place Gov. Edwin Edwards (D), who won 28%. Because no candidate won a majority, Roemer and Edwards were set to face each other in a runoff under Louisiana’s rules. However, Edwards withdrew from the runoff, a move that elected Roemer by default. While Roemer did win the governorship without a majority, he presumably would have done so in the runoff had Edwards stayed in the race, so Louisiana is credited with no plurality wins.

Given Maine’s propensity to elect governors with less than 50% of the vote — it has done so in nine of the past 11 elections, including four victors who won with less than 40% — it is understandable that many Mainers would want to try out a different voting system. RCV results are supposed to provide broader support for the eventual winners by ensuring that a majority supported the victor in at least some fashion. However, to apply this system to general elections for governor, the state constitution will probably have to be amended. Still, in our federal system states get to decide many aspects of their electoral systems, and Maine’s use of RCV offers us a chance to see how the system works in state and congressional elections.

Virginia’s vote

In the Old Dominion, voters picked congressional candidates in the busiest federal primary day in the commonwealth’s modern history. The GOP primary for U.S. Senate probably received the most attention on Election Night because of the close margin. Prince William County Board of Supervisors Chairman Corey Stewart (R) narrowly defeated state Del. Nick Freitas (R) 45%-43%, with minister E.W. Jackson winning 12%. Freitas led throughout much of the night, but in the end Northern Virginia’s vote helped put Stewart just over the top. Anti-Stewart forces rallied late to boost Freitas, but came up just short, much to the chagrin of many GOP leaders. Stewart has promised to run a “vicious and ruthless” campaign against Sen. Tim Kaine (D) in the general election, but begins the race as a huge underdog. The fundamentals are on Kaine’s side: Virginia voted for Clinton by five percentage points and 2018 is a midterm election with a relatively unpopular Republican president in the White House. The polls are on Kaine’s side: He has led Stewart by double digits in general election horserace polls and has a decent approval rating among Virginia voters. The state of play is on Kaine’s side: The incumbent had 67 times more money in his campaign war chest than Stewart as of May 23, so Stewart will need help from outside Republican and conservative groups. However, he will likely receive little outside assistance because GOP money will mostly flow to much better Republican targets in the 10 seats Democrats are defending in states that Trump carried in 2016, as well as to the three or so seats that Republicans are going to have to seriously defend (Arizona, Nevada, and Tennessee). The Crystal Ball continues to rate the Senate race in Virginia as Safe Democratic.

Down-ballot, primaries for Congress continued the 2018 trend of nominating women: In five of the six Democratic primaries, women won the party’s nomination, including in all of the competitive House seats. In VA-2, only women were on the Democratic ballot, and retired Navy commander Elaine Luria (D) emerged to face incumbent Rep. Scott Taylor (R) — who easily won renomination in his own primary — in November. In VA-7, former CIA officer Abigail Spanberger (D) won by a crushing 46-point margin over Marine veteran Dan Ward (D), a larger margin than most expected. Spanberger will meet incumbent Rep. Dave Brat (R) in the fall general election. In the most watched Virginia primary on the Democratic side, state Sen. Jennifer Wexton (D) won a large plurality (about 42%) of the vote in a six-way primary. She will face incumbent Rep. Barbara Comstock (R), who won renomination in her primary. Comstock’s primary did signal that she may have some trouble with her base: Her opponent, conservative Air Force veteran and 2014 Senate candidate Shak Hill (R), won 39% of the primary vote. Comstock remains one of the most vulnerable Republican House incumbents in a Toss-up race. VA-5, Virginia’s other competitive House seat, did not hold a primary because neither party opted to use that method to nominate, but in that race a woman will also be the Democratic nominee (journalist and filmmaker Leslie Cockburn).

Other June 12 races

There generally were few surprises across the primary landscape on Tuesday night. That extends to the primary loss by Rep. Mark Sanford (R, SC-1), who was defeated by state Rep. Katie Arrington (R). Sanford, a sometime critic of President Trump, had other liabilities, like lingering weakness from an infamous extramarital affair during his tenure as South Carolina’s governor. Sanford won his 2016 primary with just 56% of the vote, and it was clear that Arrington was pushing him. He is now the second House member to lose renomination this cycle, joining Rep. Robert Pittenger (R, NC-9). SC-1 is now an open seat, and Democrats hope their nominee, lawyer Joe Cunningham (D), can push Arrington in a district that Trump won by 14 points, down from Mitt Romney’s 18-point win in 2012. We’re moving SC-1 from Safe Republican to Likely Republican: This was and is a fringe Democratic target no matter who won the Republican primary.

Elsewhere in South Carolina, Gov. Henry McMaster (R) faces a runoff against businessman John Warren (R), who surged late to take the second-place spot from Catherine Templeton (R), long seen as McMaster’s main rival. McMaster got about 42% to Warren’s 28%, so Warren has more ground to make up in the short two-week runoff period, but no one would be shocked if an outsider businessman won a GOP primary. The winner will face state Rep. James Smith (D) in a state where Democrats face an uphill battle no matter the political environment.

Voters in Nevada and North Dakota formalized Senate battles between Sen. Dean Heller (R-NV) and Rep. Jacky Rosen (D, NV-3) as well as Sen. Heidi Heitkamp (D-ND) and Rep. Kevin Cramer (R, ND-AL). One could argue that Heller and Heitkamp are, respectively, each party’s most vulnerable Senate incumbent (Heller definitely is, Heitkamp may or may not be). Nevada Democrats also picked Clark County Commissioner Steve Sisolak (D) as their gubernatorial nominee; most observers seemed to believe Sisolak was the strongest opponent for state Attorney General Adam Laxalt (R) in a Toss-up race.

Familiar faces won primaries in two competitive open-seat House races in the Silver State: former Reps. Steven Horsford (D) and Cresent Hardy (R) will face off in NV-4, while frequent candidate Danny Tarkanian (R) again won the nomination in NV-3, where he will face philanthropist Susie Lee (D), who unsuccessfully sought the NV-4 Democratic nomination last cycle. We rate both districts, each of which was close in the last presidential election, as Leans Democratic.

Ratings Upgrades for Democrats in Ohio

Kyle Kondik, Managing Editor, Sabato's Crystal Ball June 13th, 2018


Table 1: Crystal Ball Senate and gubernatorial ratings changes

Senator Old Rating New Rating
Sherrod Brown (D-OH) Leans Democratic Likely Democratic

Governor Old Rating New Rating
OH Open (Kasich, R) Leans Republican Toss-up

A confluence of recent polls and reporting suggests that Sen. Sherrod Brown (D-OH) is not really among the top Republican Senate targets this year. Those indicators include:

— Recent nonpartisan polls from the Cincinnati Enquirer/Suffolk University, Fallon Research for the 1984 Society (an Ohio political group connected to Republican lobbyist Neil Clark), and Quinnipiac University showing Brown leading Rep. Jim Renacci (R, OH-16) in the Senate contest by 16 points (53%-37%), 14 points (48%-34%), and 17 points (51%-34%) respectively.

— Senate Majority PAC, the major Democratic outside Senate spending group, leaving Ohio off its initial $80 million round of television reservations, perhaps indicating confidence about Brown’s position (although reservations can of course be added later).

— Senate Majority Leader Mitch McConnell (R-KY) not listing Ohio among the nine most competitive Senate races in a conversation with the Washington Post before Memorial Day. He instead named the exact same nine states where Senate Majority PAC would book television time, although he later argued to The Hill that Ohio was indeed in play and “very competitive,” citing otherwise unspecified internal polling. We personally have not heard much optimism about the Ohio Senate race from our GOP sources throughout this cycle.

Taken together, these indicators suggest to us that Brown is in a relatively strong position in a state that seems to be trending Republican but where Brown appears to retain good numbers. That Brown is at or close to 50% in polling is also a positive sign for the incumbent: In what is shaping up to be at least a modestly strong Democratic year, he may be insulated from losing even if undecideds break disproportionately to the lesser-known Renacci. Moreover, one probably would not expect a massive, late break to Republican candidates nationally unless the national environment changes dramatically.

We’re moving Ohio’s Senate race from Leans Democratic to Likely Democratic. That leaves open the possibility of a GOP upset, but for now Brown appears to be in decent shape. He joins Democratic Senate incumbents in bluer states in that category: Sens. Debbie Stabenow (D-MI), Tina Smith (D-MN), Robert Menendez (D-NJ), and Bob Casey (D-PA).

The Senate action is increasingly focused on the nine states alluded to above: Republicans playing defense in Arizona, Nevada, and Tennessee, and Democrats playing defense in Florida, Indiana, Missouri, Montana, North Dakota, and West Virginia.

Upgrading Brown’s reelection odds also prompts us to reexamine the open race for Ohio’s governorship, where most view state Attorney General Mike DeWine (R) as a small favorite over former state Attorney General Richard Cordray (D),[1] whom DeWine unseated in a close 2010 contest.

The same surveys that show Brown comfortably leading Renacci offer conflicting views of the gubernatorial race, but taken together they point to a close race. Fallon has DeWine up 40%-34%, while Quinnipiac (42%-40%) and Suffolk (43%-36%) show Cordray ahead. DeWine had generally been up by more, and was closer to the magic 50% number, in some earlier looks at the race. The polls suggest Toss-up, our new rating, is a better reflection of the current reality in Ohio.

So too does this basic fact: The Ohio governorship is an open seat in what, again, should be a somewhat pro-Democratic (or anti-Republican) environment. The last three times Democrats took over the Ohio governorship from Republicans (1970, 1982, and 2006, all midterms under Republican presidents), they did so in similar kinds of environments when the governorship was open. We had been giving DeWine the benefit of the doubt because of his better name identification and likely resource advantage in the fall (he had substantially more cash on hand as of the most recent campaign finance reports and can self-fund). The money advantage remains even though Cordray is also a good fundraiser, but whatever benefit DeWine accrued from his name ID seems to have evaporated after a convincing but expensive and nasty primary victory over Lt. Gov. Mary Taylor (R).

DeWine remains a formidable GOP candidate in a state that more often than not prefers GOP governance, but he has his work cut out for him in what may be a challenging environment.

In an Ohio primary preview we published in early May that also looked at the likely DeWine-Cordray and Brown-Renacci matchups, we hinted at the possibility of these ratings changes in the state’s top two races. Subsequent polling and other developments prompted us to go ahead with them.

We’ll get a sense of how that environment may be playing out in Ohio in early August, when Republicans will be defending the open OH-12 U.S. House seat in a special election. We have the traditionally Republican district that Trump carried by 11 points rated as Leans Republican, and a Monmouth University poll suggested an early GOP edge: state Sen. Troy Balderson (R) led Franklin County Recorder Danny O’Connor (D) by a high single-digit margin according to the poll’s various turnout models. Still, this should end up being a close and competitive race, and outside Republican groups — already accustomed to playing defense on red turf this cycle — are beginning to invest in the district.


1. Kondik worked for Cordray when he was state attorney general from 2009-2010. His book on Ohio presidential elections, The Bellwether, was released in 2016.