Unexpectedly Intriguing!
11 November 2016

When we launched this series back in July 2016, we never expected that we would find ourselves crossing into U.S. election analysis, and yet, thanks to the political polls of 2016, here we are!

In today's example of junk science, we're looking at several factors that tick different boxes on our checklist for detecting junk science, which include, but at this early date are not limited to, the following items. (If you're reading this article on a site that republishes our RSS news feed and doesn't neatly render the following table, please click here to access the version of this article that appears on our site.)

How to Distinguish "Good" Science from "Junk" or "Pseudo" Science

Aspect: Inconsistencies
Science: Observations or data that are not consistent with current scientific understanding generate intense interest for additional study among scientists. Original observations and data are made accessible to all interested parties to support this effort.
Pseudoscience: Observations or data that are not consistent with established beliefs tend to be ignored or actively suppressed. Original observations and data are often difficult to obtain from pseudoscience practitioners, and are often merely anecdotal.
Comments: Providing access to all available data allows others to independently reproduce and confirm findings. Failing to make all collected data and analysis available for independent review undermines the validity of any claimed finding. Here's a recent example of the misuse of statistics, in which contradictory data that would have prevented a pseudoscientific conclusion was improperly screened out, a problem discovered only after all the data was made available for independent review.

Aspect: Models
Science: Using observations backed by experimental results, scientists create models that may be used to anticipate outcomes in the real world. The success of these models is continually challenged with new observations, and their effectiveness in anticipating outcomes is thoroughly documented.
Pseudoscience: Pseudosciences create models to anticipate real-world outcomes, but place little emphasis on documenting the forecasting performance of their models, or even on making the methodology used in the models accessible to others.
Comments: Have you ever noticed how pseudoscience practitioners always seem eager to announce their new predictions or findings, but never like to talk about how many of their previous predictions or findings were confirmed or found to be valid?

Aspect: Falsifiability
Science: Science is a process in which each principle must be tested in the crucible of experience and remains subject to being questioned or rejected at any time. In other words, the principles of a true science are always open to challenge and can logically be shown to be false if not backed by observation and experience.
Pseudoscience: The major principles and tenets of a pseudoscience cannot be tested or challenged in a similar manner and are therefore unlikely to ever be altered or shown to be wrong.
Comments: Pseudoscience enthusiasts incorrectly take the logical impossibility of disproving a pseudoscientific principle as evidence of its validity. By the same token, the fact that scientific findings may be challenged and rejected based upon new evidence is taken by pseudoscientists as "proof" that real sciences are fundamentally flawed.

Given what happened in the U.S. on Election Day 2016, and what happened earlier in the year with the Brexit vote, one clear message that voters sent in 2016 is that political polling is badly broken.

One example of how badly comes from FiveThirtyEight's Nate Silver, who presented the following analysis of how the 2016 Presidential election in the United States was expected to turn out, based on the aggregation of numerous state polls across the country, which peaked in favor of candidate Hillary Clinton on the eve of Election Day:


[Chart: FiveThirtyEight, "Who will win the presidency?", 7 November 2016, 22:08]

Based on such analysis and its propagation throughout the media, many Americans went into and through Election Day with the firm expectation that Hillary Clinton would soon be officially elected to be the next President of the United States.

As we now know, however, that expectation was wildly off the mark. And the reason that so many Americans were caught flat-footed when reality arrived late on 8 November 2016 is that they placed far too much importance on the results of polling and analysis that was fundamentally flawed and would never pass scientific muster.

Alex Berezow of the American Council on Science and Health argues that's because political poll analyses like this one lack even the most basic scientific foundation, because the models behind them cannot be falsified:

Earlier, we published an article explaining why there is no such thing as a scientific poll. In a nutshell, because polling relies on good but sometimes inaccurate assumptions, it is far more art than science. As we noted, "Tweaking [voter] turnout models is more akin to refining a cake recipe than doing a science experiment." Still, since American pollsters are good at their jobs, polls tend to be correct more often than not.

Recently, pollsters and pundits have tried to up their game. No longer content with providing polling data, they now want to try their hand at gambling, as well. It has become fashionable to report a candidate's "chance of winning." (ESPN does this, too. Last week, the network predicted that the Seattle Sounders had a 94% chance to advance to the semi-finals of the MLS Cup. I am grateful this prediction ended up being correct.)

However, these predictions are thoroughly unscientific. Why? Because it is impossible to test the model.

Let's use the soccer match as an example. The only way to know if ESPN's prediction that Seattle had a 94% chance of advancing to the semi-finals is accurate is to have Seattle and its opponent play the match 100 (or more) times. If Seattle advances 94 or so times, then the model has been demonstrated to be reasonably accurate. Of course, soccer doesn't work like that. There was only one game. Yes, the Sounders advanced, so the prediction was technically correct, but a sample size of one cannot test the model.

The exact same logic applies to elections. As of the writing of this article, Nate Silver gives Hillary Clinton an absurdly precise 70.3% chance of winning. (No, not 70.2% or 70.4%, but exactly 70.3%.) If she does indeed win on Election Day, that does not prove the model is correct. For Mr Silver's model to be proven correct, the election would need to be repeated at least 1,000 times, and Mrs Clinton would need to win about 703 times.

Even worse, Mr Silver's model can never be proven wrong. Even if he were to give Mrs Clinton a 99.9% chance of winning, and if she loses, Mr Silver can reply, "We didn't say she had a 100% chance of winning."

Any model that can never be proven right or wrong is, by definition, unscientific. Just like conversations with the late Miss Cleo, such political punditry should come with the disclaimer, "For entertainment purposes only."
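
To make the falsifiability problem concrete, here's a minimal JavaScript sketch, using invented numbers rather than anything from FiveThirtyEight's actual model, of why a single Election Day can neither confirm nor refute a 70.3% forecast, while the hypothetical 1,000 re-run elections Berezow describes could at least put the claim to a test:

```javascript
// A minimal sketch, not FiveThirtyEight's actual model: we pretend the same
// election could be re-run many times with some "true" win probability and
// ask what it would take to test a claimed 70.3% forecast.

// Simulate `trials` independent elections, each won with probability p.
function simulateElections(p, trials) {
  let wins = 0;
  for (let i = 0; i < trials; i++) {
    if (Math.random() < p) {
      wins++;
    }
  }
  return wins;
}

const claimedProbability = 0.703; // the figure quoted above

// One real election is a single draw. Whatever it returns is consistent with
// a 70.3% claim, a 50% claim, or a 99% claim, so on its own it tests nothing.
const singleOutcome = simulateElections(claimedProbability, 1);
console.log(`Single election: ${singleOutcome === 1 ? "win" : "loss"}`);

// The hypothetical test Berezow describes: 1,000 re-runs of the same election.
// Only a long series like this lets an observed win frequency be compared
// against the claimed probability.
const reruns = 1000;
const wins = simulateElections(claimedProbability, reruns);
console.log(`Wins in ${reruns} hypothetical re-runs: ${wins} ` +
            `(the claim implies roughly ${Math.round(claimedProbability * reruns)})`);
```

Since real elections are only ever run once, that second test can never actually be performed, which is precisely Berezow's point.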

Starts With a Bang's Ethan Siegel points his finger at a different problem that renders the conclusions of such poll-based analysis invalid: the inherent inconsistencies that arise from systematic errors in data collection.

A systematic error is an uncertainty or inaccuracy that doesn't improve or go away as you take more data, but a flaw inherent in the way you collect your data.

  • Maybe the people that you polled aren't reflective of the larger voting population. If you ask a sample of people from Staten Island how they’ll vote, that’s different from how people in Manhattan — or Syracuse — are going to vote.
  • Maybe the people that you polled aren't going to turn out to vote in the proportions you expect. If you poll a sample with 40% white people, 20% black people, 30% Hispanic/Latino and 10% Asian-Americans, but your actual voter turnout is 50% white, your poll results will be inherently inaccurate. [This source-of-error applies to any demographic, like age, income or environment (e.g., urban/suburban/rural.)]
  • Or maybe the polling method is inherently unreliable. If 95% of the people who say they’ll vote for Clinton actually do, but 4% vote third-party and 1% vote for Trump, while 100% of those who say they’ll vote for Trump actually do it, that translates into a pro-Trump swing of +3%.

None of this is to say that there’s anything wrong with the polls that were conducted, or with the idea of polling in general. If you want to know what people are thinking, it’s still true that the best way to find out is to ask them. But doing that doesn't guarantee that the responses you get aren't biased or flawed....

I wouldn't go quite as far as Alex Berezow of the American Council on Science and Health does, saying election forecasts and odds of winning are complete nonsense, although he makes some good points. But I will say that it is nonsense to pretend that these systematic errors aren't real. Indeed, this election has demonstrated, quite emphatically, that none of the polling models out there have adequately controlled for them. Unless you understand and quantify your systematic errors — and you can't do that if you don't understand how your polling might be biased — election forecasts will suffer from the GIGO problem: garbage in, garbage out.
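
Siegel's third bullet point is easy to work through with a little arithmetic. Here's a short JavaScript sketch of that calculation, pairing a hypothetical dead-even poll of our own invention with the follow-through rates quoted above:

```javascript
// A back-of-the-envelope sketch of the third source of systematic error
// described above. The dead-even 50/50 poll is our own hypothetical; the
// follow-through rates (95%, 4%, 1% and 100%) are the ones quoted above.

const statedSupport = { clinton: 50, trump: 50 }; // hypothetical poll result (%)

// Of those who say they'll vote Clinton: 95% do, 4% go third-party, 1% vote Trump.
// Of those who say they'll vote Trump: 100% follow through.
const actualClinton = statedSupport.clinton * 0.95;
const actualTrump   = statedSupport.trump * 1.00 + statedSupport.clinton * 0.01;

const polledMargin = statedSupport.clinton - statedSupport.trump; // 0 points
const actualMargin = actualClinton - actualTrump;                 // -3 points

console.log(`Polled margin (Clinton minus Trump): ${polledMargin} points`);
console.log(`Realized margin: ${actualMargin} points, ` +
            `a pro-Trump swing of ${polledMargin - actualMargin} points`);
```

Neither candidate's stated support changes in this example, yet the realized margin moves by three points, and that is exactly the kind of error that taking more polls will not average away.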

In economics, these are problems that can affect the contingent valuation method (CVM), which is often used to determine how people value things for which no market exists to trade them, such as the preservation of environmental features like biodiversity. In CVM, surveys (polls) ask people how much they would be willing to pay for that feature, and the collected responses are then used to indicate how people value it. All of the problems of polling exist in contingent valuation, where there can be very big differences between what people say a thing is worth to them (their stated preference) and the actual choices they make with respect to it (their revealed preference), much as there can be between the results of a pre-election poll and the results of an actual election.
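
To see that stated-versus-revealed gap in miniature, here's a toy JavaScript example with dollar figures invented purely for illustration, comparing what survey respondents might say they would pay to preserve an environmental feature against what comparable households are later observed to actually contribute:

```javascript
// A toy illustration of the stated-versus-revealed-preference gap. All of
// these dollar figures are invented for illustration only.

const statedWillingnessToPay = [25, 40, 10, 30, 50, 20, 35, 15]; // survey answers ($)
const revealedContributions  = [5, 0, 10, 0, 20, 5, 0, 10];      // observed behavior ($)

const mean = values => values.reduce((sum, v) => sum + v, 0) / values.length;

console.log(`Mean stated value:   $${mean(statedWillingnessToPay).toFixed(2)}`);
console.log(`Mean revealed value: $${mean(revealedContributions).toFixed(2)}`);

// The gap between the two numbers is the same kind of gap that can separate
// a pre-election poll (stated preference) from the vote itself (revealed preference).
```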

Economist John Whitehead, who knows his way around the problems of the contingent valuation method, weighs in on the factors that may very well have skewed the results of 2016's political polling:

I can think of one technical reason the polls were wrong. The low-response-rate polls were subject to sample selection bias. Let's say that only 13% of the population responds to the survey (13% is the response rate in the Elon University Poll). If the 87% that doesn't respond is similar except for observed characteristics (e.g., gender, age, race, political party) then you can weight the data to better reflect the population. But, if the 87% that doesn't respond is different on some unobservable characteristic (e.g., "lock her up") then weighting won't fix the problem. The researcher would need other information about nonrespondents to correct it (Whitehead, Groothuis and Blomquist, 1993). If you don't have the other information then the problem won't be understood until actual behavior is revealed.
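
Here's a minimal simulation, with made-up response rates and vote propensities, of the sample-selection problem Whitehead describes: an unobservable trait both suppresses survey response and predicts the vote, so reweighting on an observed characteristic (party registration, in this sketch) cannot repair the estimate:

```javascript
// A minimal sketch of nonresponse bias driven by an unobservable trait.
// All of the rates below are invented for illustration.

function simulatePopulation(size) {
  const people = [];
  for (let i = 0; i < size; i++) {
    const party = Math.random() < 0.5 ? "D" : "R";  // observed characteristic
    const reluctant = Math.random() < 0.4;          // unobserved trait
    // Reluctant voters lean toward candidate B regardless of registration.
    const votesForB = Math.random() < (reluctant ? 0.8 : (party === "R" ? 0.6 : 0.3));
    // Reluctant voters rarely answer the poll (overall response rate works out to ~13%).
    const responds = Math.random() < (reluctant ? 0.03 : 0.2);
    people.push({ party, votesForB, responds });
  }
  return people;
}

const population = simulatePopulation(200000);
const respondents = population.filter(p => p.responds);

const shareForB = group => group.filter(p => p.votesForB).length / group.length;

// Reweight respondents so their D/R mix matches the full population.
function weightedShareForB(sample, populationShareD) {
  const d = sample.filter(p => p.party === "D");
  const r = sample.filter(p => p.party === "R");
  return populationShareD * shareForB(d) + (1 - populationShareD) * shareForB(r);
}

const popShareD = population.filter(p => p.party === "D").length / population.length;

console.log(`Actual support for B:            ${(shareForB(population) * 100).toFixed(1)}%`);
console.log(`Raw poll estimate:               ${(shareForB(respondents) * 100).toFixed(1)}%`);
console.log(`Poll reweighted by registration: ${(weightedShareForB(respondents, popShareD) * 100).toFixed(1)}%`);
```

Run it a few times and the reweighted estimate keeps landing roughly ten points away from the actual result, no matter how large the simulated population is, because the nonrespondents differ on something the pollster never observes.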

Which is to say that a lot of people who obsess over the reports of pre-election polling, and who may have banked on them in setting their expectations for the future, will ultimately have their hopes dashed when reality turns out to be very different from those expectations, all because the polls and reporting upon which they relied were so inherently flawed that they had no idea how disconnected from reality their expectations had become.

In many cities around the United States, and particularly within those regions where people counted on a Clinton victory to retain the benefits of their political party's power over the rest of the nation, that disappointment has sometimes turned into protests, discriminatory threats and outright rioting.

Much of that could have been avoided if Americans had been given trustworthy political polling results and analysis to properly ground their expectations. Instead, we're discovering that junk science in political polling and punditry, and its role in setting irrational expectations, has a real cost in physical injuries and property damage within their own communities.

