Friday, December 9, 2011

Maybe the Republican primary is going just as we should expect

I don't mean that in a snarky way. This is a completely non-snide post. I was just thinking about how even a quick little model with a few fairly intuitive assumptions can fit seemingly chaotic data surprisingly well. This probably won't look much like the models political scientists use (they have expertise and real data and reputations to protect). I'm just playing around.

But it can be a useful thought experiment, trying to explain all of the major data points with one fairly simple theory. Compare that to this bit of analysis from Amity Shlaes:
The answer is that this election cycle is different. Voters want someone for president who is ready to sit down and rewrite Social Security in January 2013. And move on to Medicare repair the next month. A policy technician already familiar with the difference between defined benefits and premium supports before he gets to Washington. What voters remember about Newt was that some of his work laid the ground for balancing the budget. He was leaving the speaker's job by the time that happened, but that experience was key.
This theory might explain Gingrich's recent rise, but it does a poor job with Bachmann and Perry and an absolutely terrible job with Cain. It's an explanation that covers only a fraction of the data. Unfortunately, it's no worse than much of the analysis we've been seeing from professional political reporters and commentators.

Surely we can do better than that.

Let's say that voters assign their support based on which candidate gets the highest score on a formula that looks something like this (assume each term has a coefficient and that those coefficients vary from voter to voter):

Score = Desirability + (Electability × Desirability)

Where desirability is how much you would like to see that candidate as president and electability is roughly analogous to the candidate's perceived likelihood of making it through the primary and the general election.
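
Here's a minimal sketch of that scoring rule in Python. Every rating and coefficient below is invented purely for illustration; none of it comes from any poll:

```python
CANDIDATES = ["Romney", "Paul", "Gingrich", "Perry", "Cain"]

# Invented 0-1 ratings, as a single hypothetical voter might see the field.
desirability = {"Romney": 0.5, "Paul": 0.4, "Gingrich": 0.7, "Perry": 0.6, "Cain": 0.6}
electability = {"Romney": 0.35, "Paul": 0.05, "Gingrich": 0.12, "Perry": 0.11, "Cain": 0.10}

def score(candidate, a, b):
    """Score = a*Desirability + b*(Electability*Desirability);
    a and b are the voter-specific coefficients."""
    d, e = desirability[candidate], electability[candidate]
    return a * d + b * e * d

# A voter who weighs winning heavily (large b) lands on Romney ...
pragmatist = {c: score(c, a=1.0, b=5.0) for c in CANDIDATES}
# ... while one who mostly votes their preferences (small b) does not.
purist = {c: score(c, a=1.0, b=0.5) for c in CANDIDATES}
print(max(pragmatist, key=pragmatist.get), max(purist, key=purist.get))
```

Same formula, different coefficients, different first choices. That's all the machinery we need.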

Now let's make a few relatively defensible assumptions about electability:

electability is more or less a zero-sum game;

it is also something like Keynes' beauty contest, an iterative process with everyone trying to figure out who everyone else is going to pick and throwing their support to the leading acceptable candidate;

desirability tends to be more stable than electability.

I almost added another assumption, that electability has momentum, but I think that follows from the iterative aspect.
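
Here's a toy sketch of those assumptions in action. The shares, acceptability numbers, and the 0.5 update rate are all invented; the point is the mechanism, under which a candidate who leads on raw electability but is unacceptable to half the party loses the iterated beauty contest:

```python
# Perceived electability as a zero-sum share, updated each round toward
# the support pattern it produced the round before (Keynes' beauty contest).
candidates = ["A", "B", "C"]
electability = {"A": 0.40, "B": 0.31, "C": 0.29}  # shares sum to 1 (zero-sum)
acceptable = {"A": 0.5, "B": 0.9, "C": 0.9}       # fraction of voters who accept each

for _ in range(10):
    # Voters throw support to the leading candidate they find acceptable;
    # approximate that as acceptability-weighted electability, renormalized.
    support = {c: electability[c] * acceptable[c] for c in candidates}
    total = sum(support.values())
    support = {c: s / total for c, s in support.items()}
    # Perceptions drift halfway toward observed support (the iterative step).
    electability = {c: 0.5 * electability[c] + 0.5 * support[c] for c in candidates}

print({c: round(e, 2) for c, e in electability.items()})
# A's early lead evaporates; B and C, acceptable to more voters, absorb it.
```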

What can we expect given these assumptions?

For starters, there are two candidates who should post very stable poll numbers, though for very different reasons: Romney and Paul. Romney has consistently been seen as number one in general electability, so GOP voters who find him acceptable will tend strongly to list him as their first choice even if they don't consider him the most desirable. While Romney's support comes mostly from the second term of the formula, Paul's comes almost entirely from the first. Virtually no one sees Paul as the most electable candidate in the field, but his supporters really, really like him.
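
To put rough numbers on that split, here's a tiny sketch. The ratings are invented, and the interaction coefficient b = 3 is just an assumption for illustration:

```python
# Invented ratings: how one voter's score splits between the two terms.
def score_terms(d, e, b=3.0):
    """Return (Desirability term, b * Electability * Desirability term)."""
    return d, b * e * d

print("Paul voter:  ", score_terms(d=0.9, e=0.05))  # (0.9, 0.135): first term dominates
print("Romney voter:", score_terms(d=0.5, e=0.35))  # (0.5, 0.525): second term carries it
```

Shocks to electability barely move the Paul voter's score; they move the Romney voter's a lot.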

It's with the rest, though, that the properties of the model start to do some interesting things. Since the most electable candidate is not acceptable to a large segment of the party faithful, perhaps even a majority, a great deal of support is going to go to the number two slot. If there were a clear ranking with a strong second place, this would not be a big deal, but this is a weak field with a relatively small spread in general electability. The result is a primary that's unstable and susceptible to noise.

Think about it this way: let's say the top non-Romney has a twelve percent perceived chance of getting to the White House, the second has eleven, and the third has ten. Any number of trivial things can cause a three-point shift, which can easily cause first and third to exchange places. Suddenly the candidate who was polling at seven is breaking thirty, and the pundits are scrambling to come up with an explanation that doesn't sound quite so much like guessing.
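
Here's that arithmetic as a sketch, with the feedback step left as a comment since the size of the cascade depends on coefficients we don't have:

```python
# The three-point shift from the paragraph above. The numbers are the
# illustrative ones in the text, not estimates from any poll.
perceived = {"first non-Romney": 12, "second": 11, "third": 10}  # % chance

# Some trivial event moves three points from first to third ...
perceived["first non-Romney"] -= 3
perceived["third"] += 3

print(sorted(perceived.items(), key=lambda kv: -kv[1]))
# [('third', 13), ('second', 11), ('first non-Romney', 9)]
# Once voters converge on the new leading acceptable non-Romney, the
# polling swing is far larger than the three points that triggered it.
```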

What the zero-sum property and convergence can't explain, momentum does a pretty good job with. Take Perry: he came in at the last minute, seemingly had the nomination sewn up, then dropped like a stone. Conventional wisdom usually ascribes this to bad debate performances and an unpopular stand on immigration, but primary voters are traditionally pretty forgiving of bad debates (remember Bush's Dean Acheson moment?), and most of the people who strongly disagreed with Perry's immigration stand already knew about it.

How about this for another explanation? Like most late entries, Perry was a Rorschach candidate, and like most late entries, as the blanks were filled in, his standing dropped. The result was downward momentum, which Perry accelerated with a series of small but badly timed missteps. Viewed in this context, the immigration statement takes on an entirely different significance: it didn't have to lower Perry's desirability in order to hurt him in the polls. Instead, it could have hurt his perceived electability by reminding people who weren't following immigration that closely that Perry had taken positions other Republicans would object to.
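
A toy version of the momentum story, with an invented carry-over rate and invented misstep sizes, shows how the decline keeps rolling even in quiet rounds:

```python
# A toy momentum model: each round's revision to perceived electability
# partly carries over into the next round. The 0.6 carry-over and the
# misstep sizes are made up; the point is the shape, not the numbers.
electability = 0.30  # perceived chance at entry (invented)
velocity = 0.0       # current direction of revision

missteps = [-0.04, -0.03, 0.0, -0.02, 0.0, 0.0]  # blanks filling in, badly timed slips
for shock in missteps:
    velocity = 0.6 * velocity + shock      # revisions persist (momentum)
    electability = max(0.0, electability + velocity)
    print(round(electability, 3))
# 0.26, 0.206, 0.174, 0.134, 0.11, 0.096 - still falling in rounds with no new misstep
```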

Of course, showing how a model might possibly explain something doesn't prove anything, but it can make for an interesting thought experiment and it does, I hope, at least make a few points, like:

1. Sometimes a simple model can account for some complex and chaotic behavior;

2. Model structure matters: D + ED gives completely different results from D + E (see the sketch after this list);

3. Things like momentum, zero-sum constraints, convergence, and shifting to and from ordinal data can have some surprising implications, particularly when your data hits some new extreme.
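
A two-candidate toy example makes point 2 concrete. With the same invented inputs, the multiplicative and additive forms rank the candidates in opposite orders:

```python
# (desirability, electability) for two made-up candidates:
# X is loved but seen as unelectable; Y is tolerated but seen as a winner.
cands = {"X": (0.9, 0.1), "Y": (0.5, 0.6)}

for name, (d, e) in cands.items():
    print(name, " D+E*D =", round(d + e * d, 2), " D+E =", round(d + e, 2))
# X  D+E*D = 0.99  D+E = 1.0
# Y  D+E*D = 0.8   D+E = 1.1
# The multiplicative form picks X; the additive form flips it to Y.
```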

[For a look at a real analysis of what's driving the poll numbers, you know where to go.]
