Unexpectedly Intriguing!
03 November 2016

In many of the examples of junk science that we've previously presented, we've focused on cases where junk science results were obtained through what might be called the "Garbage In, Garbage Out", or GIGO, principle, where either inappropriate data was used or relevant data was suppressed in order to guarantee results that would advance the author's preferred narrative.

But in today's example, we'll focus on situations where the KIBO principle applies, whose family-friendly translation is "Knowledge In, Baloney Out". In short, the deficiencies are found not in the data, but in the author's choice of analytical methods, which can be all too easily manipulated to achieve a predetermined outcome, enabling pseudoscientific results to be advanced with a low probability of detection.

Here's the relevant item from our checklist for detecting junk science as it applies to today's example.

How to Distinguish "Good" Science from "Junk" or "Pseudo" Science
Aspect: Models

Science: Using observations backed by experimental results, scientists create models that may be used to anticipate outcomes in the real world. The success of these models is continually challenged with new observations, and their effectiveness in anticipating outcomes is thoroughly documented.

Pseudoscience: Pseudosciences create models to anticipate real-world outcomes, but place little emphasis on documenting the forecasting performance of their models, or even on making the methodology used in the models accessible to others.

Comments: Have you ever noticed how pseudoscience practitioners always seem eager to announce their new predictions or findings, but never like to talk about how many of their previous predictions or findings were confirmed or found to be valid?

Today's example also applies specifically to the field of economics, where we'll be discussing Dynamic Stochastic General Equilibrium (DSGE) models, which lend themselves to generating pseudoscientific results that can be difficult to detect. DSGE models feature prominently in analyses produced by advocates of the "Freshwater" Real Business Cycle (RBC) and New Keynesian schools of thought within the discipline, whose core assumptions about how the economy behaves are directly incorporated into the models.

Our discussion starts with an observation that explains why the debate between the freshwater and saltwater schools continues, even though the private sector (or marketplace) appears to have largely rejected the DSGE models produced according to the freshwater school's assumptions.

One curiosity that economists seem too polite to note is that one important school of macroeconomic thought—"freshwater" macroeconomics that focuses heavily on the idea of a "real" business cycle and disparages the notion of either fiscal or monetary stimulus—has completely flopped in the marketplace. It lives, instead, sheltered from market forces at a variety of Midwestern nonprofit universities and sundry regional Federal Reserve banks.

Stephen Williamson, a proponent of freshwater views, reminded me of this recently when he contended that macroeconomics is divided into schools of thought primarily because there's no money at stake. In financial economics, according to Williamson, "All the Wall Street people care about is making money, so good science gets rewarded." But in macroeconomics you have all kinds of political entrepreneurs looking for hucksters who'll back their theory....

"Political entrepreneurs" may be far too polite a term. These kinds of econometricians might be better described as pseudoscience peddlers, and we should recognize that they are by no means limited to a single school within the economics discipline. Regardless of where they fall on the ideological spectrum, pseudoscience peddlers are people who are on the prowl for marks that they can trick into buying into their deficient output.

In such hands, a tool like Dynamic Stochastic General Equilibrium modeling can represent a means to distract attention away from severe defects in their analyses, where the presentation of the mathematical model is really little more than window dressing, specifically aimed at giving their work a "scientific" veneer to help sell it while also obscuring the means by which they ensured their predetermined analytical results.

How does that work? NYU economist Paul Romer explains how an econometrician can use confounding variables in a DSGE model to achieve a predetermined result.

As I will show later, when the number of variables in a model increases, the identification problem gets much worse. In practice, this means that the econometrician has more flexibility in determining the results that emerge when she estimates the model.

The identification problem means that to get results, an econometrician has to feed in something other than data on the variables in the simultaneous system. I will refer to things that get fed in as facts with unknown truth value (FWUTV) to emphasize that although the estimation process treats the FWUTV's as if they were facts known to be true, the process of estimating the model reveals nothing about the actual truth value. The current practice in DSGE econometrics is to feed in some FWUTV's by "calibrating" the values of some parameters and to feed in others with tight Bayesian priors. As Olivier Blanchard (2016) observes with his typical understatement, "in many cases, the justification for the tight prior is weak at best, and what is estimated reflects more the prior of the researcher than the likelihood function."

This is more problematic than it sounds. The prior specified for one parameter can have a decisive influence on the results for others. This means that the econometrician can search for priors on seemingly unimportant parameters to find ones that yield the expected result for the parameters of interest.
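To make Romer's point concrete, here is a minimal sketch of our own construction, not anything drawn from an actual DSGE model: two regressors are so nearly collinear that the data only pin down their sum, so a tight prior on the seemingly unimportant parameter drags the estimate of the parameter of interest to wherever the researcher centers that prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nearly collinear regressors: the data pin down (b1 + b2) well,
# but b1 and b2 individually are only weakly identified.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)        # almost a copy of x1
X = np.column_stack([x1, x2])
sigma = 1.0
y = 1.0 * x1 + 1.0 * x2 + sigma * rng.normal(size=n)  # true b1 = b2 = 1

def posterior_mean(prior_mean, prior_precision):
    """Closed-form posterior mean for a Gaussian linear model with
    known noise variance and independent Gaussian priors."""
    Lambda0 = np.diag(prior_precision)
    A = X.T @ X / sigma**2 + Lambda0
    b = X.T @ y / sigma**2 + Lambda0 @ prior_mean
    return np.linalg.solve(A, b)

# A loose prior on b1 (the parameter of interest) and a tight prior on
# b2 (the "nuisance" parameter): recentering the tight prior moves the
# estimate of b1 to essentially any value the researcher prefers.
for b2_center in (0.0, 1.0, 2.0):
    m = posterior_mean(np.array([0.0, b2_center]), np.array([1e-6, 1e6]))
    print(f"tight prior b2 = {b2_center:.1f} -> posterior mean b1 = {m[0]:+.2f}")
```

Because the data cannot distinguish b1 from b2, the reported estimate of b1 simply mirrors whatever was assumed about b2, exactly the "facts with unknown truth value" dynamic that Romer describes.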

A potential red flag that such defective analysis is present might be found in cases where DSGE models are inappropriately or inexplicably put to use in applications that would never pass muster in the private sector, which demands greater transparency in analytical methods and requires that modeled results be validated against real-world observations, both within and outside the period for which the model was developed. In a sense, applying DSGE models in these cases may be considered a form of junk-science p-hacking, where the model is specifically tuned to produce its author's preferred outcome, which we can't mention without sharing John Oliver's priceless commentary on the topic!
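For readers unfamiliar with p-hacking, here is a hypothetical simulation of our own devising that shows the basic trick: test enough pure-noise specifications against an outcome, keep the best one, and a "statistically significant" finding is all but guaranteed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Pure noise: none of the candidate variables has any true
# relationship with the outcome being "studied".
n_obs, n_specs = 100, 40
y = rng.normal(size=n_obs)

# Specification search: try 40 candidate explanatory variables and
# keep whichever one yields the smallest p-value.
p_values = []
for _ in range(n_specs):
    x = rng.normal(size=n_obs)
    r, p = stats.pearsonr(x, y)
    p_values.append(p)

print(f"best p-value across {n_specs} specifications: {min(p_values):.4f}")
# With 40 independent tries, the chance of at least one spurious
# "significant" result at the 5% level is 1 - 0.95**40, roughly 87%.
print(f"expected false-positive rate: {1 - 0.95 ** n_specs:.0%}")
```

The same logic applies when the things being searched over are priors, calibration targets, or error distributions rather than candidate regressors.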

Getting back to the topic at hand, after presenting a similar example, Romer describes how facts with unknown truth value (FWUTV) can survive challenges from their audience thanks to a scientific veneer and a lack of transparency:

With enough math, an author can be confident that most readers will never figure out where a FWUTV is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask....

Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, "because I say so" seems like a pretty convincing answer to any question about its properties.
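To see how much can ride on an error-term assumption, consider this hypothetical example of our own construction: the same data and the same regression equation, estimated by maximum likelihood under two different assumed error distributions, deliver very different headline slope estimates.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)

# Simulated data with a true slope of 0.5, contaminated by a few
# large shocks that happen to hit the high-x observations.
n = 100
x = rng.normal(size=n)
e = rng.normal(size=n)
e[x > 1.2] += 6.0                  # a handful of extreme observations
y = 0.5 * x + e

def neg_loglik(params, dist):
    """Negative log-likelihood of a linear model under a chosen error
    distribution; the scale is parameterized on the log scale so the
    optimizer cannot drive it negative."""
    a, b, log_s = params
    s = np.exp(log_s)
    resid = (y - a - b * x) / s
    return -(dist.logpdf(resid).sum() - n * log_s)

# Same data, same equation; only the assumed error distribution changes.
for name, dist in [("normal errors", stats.norm),
                   ("t(3) errors", stats.t(df=3))]:
    res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0],
                            args=(dist,), method="Nelder-Mead")
    print(f"{name:>13}: estimated slope = {res.x[1]:+.3f}")
```

Neither distributional assumption looks obviously wrong on its face, which is precisely what makes the error term such a convenient place to bury an identifying assumption.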

Throw in a number of additional confounding variables, and the number of places where a pseudoscientific practitioner might cook their analysis in a DSGE or error-correction model to produce a desired, predetermined result increases exponentially, as the simple arithmetic below illustrates, making its deficiencies difficult to detect without significant effort. These kinds of models might therefore be considered an ideal tool for pseudoscience practitioners intent on plying their deceptive trade, which is why they would choose them over more accepted or demonstrably better analytical methods.
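The arithmetic behind that exponential growth is straightforward. Using purely hypothetical numbers for illustration: if a model involves a dozen free modeling decisions (priors, calibration targets, error distributions, detrending filters, and so on), each with just three defensible options, the specification space runs to over half a million candidates.

```python
# Hypothetical numbers, purely for illustration: each free modeling
# decision multiplies the number of specifications a researcher can
# quietly search over before reporting a single "result".
choices_per_decision = 3
decisions = 12
print(f"candidate specifications: {choices_per_decision ** decisions:,}")
# -> candidate specifications: 531,441
```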

Meanwhile, the core assumptions that underlie such models are perhaps why they don't pass the smell test of at least one mainstream economist:

I am generally a quite traditional mainstream economist. I think that the body of economic analysis that we have piled up and teach to our students is pretty good; there is no need to overturn it in any wholesale way, and no acceptable suggestion for doing so. It goes without saying that there are important gaps in our understanding of the economy, and there are plenty of things we think we know that aren't true. That is almost inevitable. The national – not to mention the world – economy is unbelievably complicated, and its nature is usually changing underneath us. So there is no chance that anyone will ever get it quite right, once and for all. Economic theory is always and inevitably too simple; that can not be helped.

But it is all the more important to keep pointing out foolishness wherever it appears. Especially when it comes to matters as important as macroeconomics, a mainstream economist like me insists that every proposition must pass the smell test: does this really make sense? I do not think that the currently popular DSGE models pass the smell test.

They take it for granted that the whole economy can be thought about as if it were a single, consistent person or dynasty carrying out a rationally designed, long-term plan, occasionally disturbed by unexpected shocks, but adapting to them in a rational, consistent way. I do not think that this picture passes the smell test. The protagonists of this idea make a claim to respectability by asserting that it is founded on what we know about microeconomic behavior, but I think that this claim is generally phony. The advocates no doubt believe what they say, but they seem to have stopped sniffing or to have lost their sense of smell altogether.

"Generally phony" is perhaps the best description for DSGE modeling results altogether. Unless their creators provide full transparency of all the the factors and assumptions that they have incorporated into their model to obtain their analytical results, it may be considered to be a safe policy to reject any findings based on those modeled results. Even with such transparency, should any of those factors and assumptions be considered to be too unrealistic by those independently assessing them, the results obtained from DSGE modeling might still be candidates for automatic rejection.

There are, after all, very valid reasons why DSGE modeling has been all but completely rejected for use in the private sector, where the market these models purport to describe has itself found them to be neither useful nor relevant to the real world.

References

Blanchard, Olivier. Do DSGE Models Have a Future? Peterson Institute for International Economics, PB 16-11. [PDF Document]. August 2016.

Gürkaynak, Refet and Edge, Rochelle. Dynamic stochastic general equilibrium models and their forecasts. VoxEU. 28 February 2011.

Keen, Steve. Olivier Blanchard, Equilibrium, Complexity, And the Future of Macroeconomics. Forbes. 6 October 2016.

Romer, David. Advanced Macroeconomics, 4/e. Chapter 7: Dynamic Stochastic General-Equilibrium Models of Fluctuations. [PDF Document]. McGraw Hill. 2012.

Romer, Paul. The Trouble With Macroeconomics. Commons Memorial Lecture of the Omicron Delta Epsilon Society, 5 January 2016. [PDF Document]. 14 September 2016.

Romer, Paul. The Trouble With Macroeconomics, Update. Paul Romer (blog). 21 September 2016.

Smith, Noah. "Freshwater vs. Saltwater" divides macro, but not finance. Noahpinion. 12 December 2013.

Smith, Noah. The most damning critique of DSGE. Noahpinion. 10 January 2014.

Smith, Noah. What Can You Do With a DSGE Model? Noahpinion. 27 May 2013.

Solow, Robert. Building a Science of Economics for the Real World. Prepared Statement for Congressional Testimony before the House Committee on Science and Technology's Subcommittee on Investigations and Oversight. [PDF Document]. 20 July 2010.

Yglesias, Matthew. Freshwater Economics Has Failed the Market Test. Slate. 18 December 2013.

