Wednesday, December 18, 2013

Microfoundations: PLEASE tell a better story!

A scuffle has broken out among some economists over the touchy topic of microfoundations, i.e. the usual formal requirement that macroeconomic models, to be considered legitimate, must be based on things like households and firms optimizing their intertemporal utility, having rational expectations, and so on. Apparently, things got kicked off by economist Tony Yates, who offered a spirited defense of microfoundations of this kind; he's really irritated that people keep criticizing them and worries that, My God!, this might cause DSGE macroeconomists to lose some credibility! In response, Simon Wren-Lewis came back with an equally spirited argument about why modelling should be more flexible and "eclectic." Noah Smith's summary of the whole disagreement puts everything in context.

Noah makes the most important point right at the end. Simply put, no one (I think) is against the authentic spirit of microfoundations, i.e. the idea that macroeconomic models ought to be based on plausible stories of how the real actors in an economy behave. If you get that right, then obviously your model might stand a chance of getting larger aggregate things right too. The problem we have today is that the microfoundations you find in DSGE models aren't like this in the least. So macromodels are actually based on things we know to be wrong. It's very strange indeed. As Noah puts it:

Yates says I just want to get rid of all the microfoundations. But that is precisely, exactly, 180 degrees wrong! I think microfoundations are a great idea! I think they're the dog's bollocks! I think that macro time-series data is so uninformative that microfoundations are our only hope for really figuring out the macroeconomy. I think Robert Lucas was 100% on the right track when he called for us to use microfounded models.

But that's precisely why I want us to get the microfoundations right. Many of the microfoundations we use now (not all, but many) are just wrong. Obviously, clearly wrong. Lots of microeconomists I talk to agree with me about that. And lately I've been talking to some pretty prominent macroeconomists who agree as well.

So I applaud the macroeconomists who are working on trying to develop models with better microfoundations (here is a good example). Hopefully the humble stuff I'm doing in finance can lead to some better microfoundations too. And in the meantime I'm also happy to sit here and toss bombs at people who think the microfoundations we have are good enough!

I couldn't agree more.

In fact, before coming across this debate this morning, I had intended to make a short post linking to the very informative lecture (below, courtesy of Mark Thoma) by macroeconomist George Evans. Lord knows I spend enough time criticizing economists -- and this recent post discussed the limitations of the learning literature, in which Evans has been a key player -- so I want to make clear that I do admire the things he does.

He tells an interesting story about an economic model (a standard New Keynesian model) that -- when the agents in the model learn in a particular constrained way -- has two different equilibria. One is locally stable and the economy has inflation right around a targeted value. Start out with inflation and consumption and expectations close to that equilibrium and you'll move toward that point over time. The second equilibrium is, however, unstable. If you start out sufficiently far away from the stable equilibrium, you won't go there at all, but will wander down into a deflationary zone (and what happens then I don't know).

This model for aggregate behaviour is based on some fairly simple low-dimensional equations for how current consumption and inflation feed, via expectations, into future values and a trajectory for the economy. I don't know how plausible these equations are. I'm guessing that someone can make a good argument about why they should have the form they do (or a similar form). That story would involve references to how things happening now in the economy would influence people's behaviour and their expectations, and then how these would cause certain kinds of changes. To really believe this you'd want to see some evidence that this story is correct, i.e. that people, firms, etc., really do tend to behave like this.
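For readers who like to see the mechanics, here is a toy one-dimensional caricature of this kind of two-equilibria story -- emphatically not Evans's actual model, and with all parameter values invented for illustration. Expected inflation adjusts adaptively toward realized inflation, and the map from expectations to outcomes has one fixed point with slope less than one (the stable, targeted equilibrium) and one with slope greater than one (the unstable one, below which the economy spirals into deflation):

```python
# Toy caricature (NOT Evans's actual model) of an economy with two
# inflation steady states: a locally stable one at the target PI_STAR,
# and an unstable one at PI_LOW below which expectations spiral down.
# All numbers here are hypothetical, chosen purely for illustration.

PI_STAR = 2.0   # targeted inflation (stable equilibrium)
PI_LOW = -1.0   # low steady state (unstable)
A = 0.1         # curvature of the toy inflation map
GAMMA = 0.5     # adaptive-learning gain

def actual_inflation(expected):
    """Toy map from expected to realized inflation, with fixed points
    at PI_LOW (slope > 1, unstable) and PI_STAR (slope < 1, stable)."""
    return expected + A * (expected - PI_LOW) * (PI_STAR - expected)

def simulate(pi_e0, steps=200):
    pi_e = pi_e0
    for _ in range(steps):
        pi = actual_inflation(pi_e)
        pi_e += GAMMA * (pi - pi_e)   # constant-gain adaptive learning
    return pi_e

# Expectations starting above the unstable point converge to the target...
print(round(simulate(0.0), 3))           # -> 2.0
# ...but starting below it, the economy spirals off into deflation.
print(simulate(-1.5, steps=20) < -10)    # -> True
```

The qualitative picture -- a basin of attraction around the target and a deflationary trap below the unstable point -- is what matters here, not the particular functional form.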

The point I want to make is that -- for someone like myself who has not been socialized to accept the necessity of what currently counts as "microfoundations" -- nothing about the story becomes more plausible when I wade into the equations of the New Keynesian model and see how households and firms independently optimize their intertemporal utilities subject to certain budget constraints. If anything, seeing all this dubious stuff makes me less likely to believe in the plausibility of the low dimensional equations for aggregate variables. And this is precisely the problem with microfoundations of this kind. They don't give a good argument for why the aggregate variables should satisfy these equations. They give a very bad, unconvincing argument. At least for me.

Tuesday, December 17, 2013

Can you spell FRAUD?

A great article in the New York Review of Books by US District Court Judge Jed Rakoff, asking pointed questions about why no executives of major financial institutions have been convicted of fraud for their actions in the lead up to the financial crisis. As he argues, the Commission set up to explore the crisis found plenty of evidence of widespread fraud; the Justice Department has simply failed to act. Why?

...the stated opinion of those government entities asked to examine the financial crisis overall is not that no fraud was committed. Quite the contrary. For example, the Financial Crisis Inquiry Commission, in its final report, uses variants of the word “fraud” no fewer than 157 times in describing what led to the crisis, concluding that there was a “systemic breakdown,” not just in accountability, but also in ethical behavior.

As the commission found, the signs of fraud were everywhere to be seen, with the number of reports of suspected mortgage fraud rising twenty-fold between 1996 and 2005 and then doubling again in the next four years. As early as 2004, FBI Assistant Director Chris Swecker was publicly warning of the “pervasive problem” of mortgage fraud, driven by the voracious demand for mortgage-backed securities. Similar warnings, many from within the financial community, were disregarded, not because they were viewed as inaccurate, but because, as one high-level banker put it, “A decision was made that ‘We’re going to have to hold our nose and start buying the stated product if we want to stay in business.’”

Without giving further examples, the point is that, in the aftermath of the financial crisis, the prevailing view of many government officials (as well as others) was that the crisis was in material respects the product of intentional fraud. In a nutshell, the fraud, they argued, was a simple one. Subprime mortgages, i.e., mortgages of dubious creditworthiness, increasingly provided the chief collateral for highly leveraged securities that were marketed as AAA, i.e., securities of very low risk. How could this transformation of a sow’s ear into a silk purse be accomplished unless someone dissembled along the way?

While officials of the Department of Justice have been more circumspect in describing the roots of the financial crisis than have the various commissions of inquiry and other government agencies, I have seen nothing to indicate their disagreement with the widespread conclusion that fraud at every level permeated the bubble in mortgage-backed securities. Rather, their position has been to excuse their failure to prosecute high-level individuals for fraud in connection with the financial crisis on one or more of three grounds:

Rakoff goes on to examine these grounds, and finds none of them convincing. So, then, why no prosecutions? He discounts the revolving door theory -- that prosecutors have avoided action because of former links to financial firms, or hopes of future employment there -- because, in his experience, prosecutors are well motivated to get convictions. He suggests there are instead a host of reasons: prosecutors simply had other priorities in the years after 9/11/2001; in many cases government regulators acquiesced early on to changing practices, whereby increasingly lax demands on mortgage documentation became the norm. And, finally and most importantly, he points to changes in prosecuting practices over the past few decades:

The final factor I would mention is both the most subtle and the most systemic of the three, and arguably the most important. It is the shift that has occurred, over the past thirty years or more, from focusing on prosecuting high-level individuals to focusing on prosecuting companies and other institutions. It is true that prosecutors have brought criminal charges against companies for well over a hundred years, but until relatively recently, such prosecutions were the exception, and prosecutions of companies without simultaneous prosecutions of their managerial agents were even rarer.

The reasons were obvious. Companies do not commit crimes; only their agents do. And while a company might get the benefit of some such crimes, prosecuting the company would inevitably punish, directly or indirectly, the many employees and shareholders who were totally innocent. Moreover, under the law of most US jurisdictions, a company cannot be criminally liable unless at least one managerial agent has committed the crime in question; so why not prosecute the agent who actually committed the crime?

In recent decades, however, prosecutors have been increasingly attracted to prosecuting companies, often even without indicting a single person. This shift has often been rationalized as part of an attempt to transform “corporate cultures,” so as to prevent future such crimes; and as a result, government policy has taken the form of “deferred prosecution agreements” or even “nonprosecution agreements,” in which the company, under threat of criminal prosecution, agrees to take various prophylactic measures to prevent future wrongdoing. Such agreements have become, in the words of Lanny Breuer, the former head of the Department of Justice’s Criminal Division, “a mainstay of white-collar criminal law enforcement,” with the department entering into 233 such agreements over the last decade. But in practice, I suggest, this approach has led to some lax and dubious behavior on the part of prosecutors, with deleterious results.

If you are a prosecutor attempting to discover the individuals responsible for an apparent financial fraud, you go about your business in much the same way you go after mobsters or drug kingpins: you start at the bottom and, over many months or years, slowly work your way up. Specifically, you start by “flipping” some lower- or mid-level participant in the fraud who you can show was directly responsible for making one or more false material misrepresentations but who is willing to cooperate, and maybe even “wear a wire”—i.e., secretly record his colleagues—in order to reduce his sentence. With his help, and aided by the substantial prison penalties now available in white-collar cases, you go up the ladder.

But if your priority is prosecuting the company, a different scenario takes place. Early in the investigation, you invite in counsel to the company and explain to him or her why you suspect fraud. He or she responds by assuring you that the company wants to cooperate and do the right thing, and to that end the company has hired a former assistant US attorney, now a partner at a respected law firm, to do an internal investigation. The company’s counsel asks you to defer your investigation until the company’s own internal investigation is completed, on the condition that the company will share its results with you. In order to save time and resources, you agree.

Six months later the company’s counsel returns, with a detailed report showing that mistakes were made but that the company is now intent on correcting them. You and the company then agree that the company will enter into a deferred prosecution agreement that couples some immediate fines with the imposition of expensive but internal prophylactic measures. For all practical purposes the case is now over. You are happy because you believe that you have helped prevent future crimes; the company is happy because it has avoided a devastating indictment; and perhaps the happiest of all are the executives, or former executives, who actually committed the underlying misconduct, for they are left untouched.

I suggest that this is not the best way to proceed. Although it is supposedly justified because it prevents future crimes, I suggest that the future deterrent value of successfully prosecuting individuals far outweighs the prophylactic benefits of imposing internal compliance measures that are often little more than window-dressing. Just going after the company is also both technically and morally suspect. It is technically suspect because, under the law, you should not indict or threaten to indict a company unless you can prove beyond a reasonable doubt that some managerial agent of the company committed the alleged crime; and if you can prove that, why not indict the manager? And from a moral standpoint, punishing a company and its many innocent employees and shareholders for the crimes committed by some unprosecuted individuals seems contrary to elementary notions of moral responsibility.

These criticisms take on special relevance, however, in the instance of investigations growing out of the financial crisis, because, as noted, the Department of Justice’s position, until at least recently, is that going after the suspect institutions poses too great a risk to the nation’s economic recovery. So you don’t go after the companies, at least not criminally, because they are too big to jail; and you don’t go after the individuals, because that would involve the kind of years-long investigations that you no longer have the experience or the resources to pursue.

In conclusion, I want to stress again that I do not claim that the financial crisis that is still causing so many of us so much pain and despondency was the product, in whole or in part, of fraudulent misconduct. But if it was—as various governmental authorities have asserted it was—then the failure of the government to bring to justice those responsible for such colossal fraud bespeaks weaknesses in our prosecutorial system that need to be addressed.

Friday, December 13, 2013

Macroeconomics: The illusion of the "learning literature"

For many economists, prevailing theories of macroeconomics based on the idea of rational expectations (RE) are things of elegance and beauty, somewhat akin to the pretty face you see below. They hold this view especially in light of two decades of research looking at learning as a foundation for macroeconomics -- something economists refer to as the "learning literature." You'll hear it said that these studies have shown that most of the RE conclusions also follow from much more plausible assumptions about how people form expectations and adjust them over time by learning and adapting. Sounds really impressive.

As with this apparently pretty face, however, things aren't actually so beautiful and elegant if you take the time to read some of the papers in the learning literature and see what has been done. For example, take this nice review article by Evans and Honkapohja from a few years ago. It is a nice article and reports on interesting research. If you study it, however, you'll find that the "learning" studied in this line of work is not at all what most of us would think of as learning as we know it in the real world. I've written about this before and might as well just quote something I said there:

What the paper does is explore what happens in some of the common rational expectations models if you suppose that agents' expectations aren't formed rationally but rather on the basis of some learning algorithm. The paper shows that learning algorithms of a certain kind lead to the same equilibrium outcome as the rational expectations viewpoint. This IS interesting and seems very impressive. However, I'm not sure it's as interesting as it seems at first.

The reason is that the learning algorithm is indeed of a rather special kind. Most of the models studied in the paper, if I understand correctly, suppose that agents in the market already know the right mathematical form they should use to form expectations about prices in the future. All they lack is knowledge of the values of some parameters in the equation. This is a little like assuming that people who start out trying to learn the equations for, say, electricity and magnetism, already know the right form of Maxwell's equations, with all the right space and time derivatives, though they are ignorant of the correct coefficients. The paper shows that, given this assumption in which the form of the expectations equation is already known, agents soon evolve to the correct rational expectations solution. In this sense, rational expectations emerges from adaptive behaviour.

I don't find this very convincing as it makes the problem far too easy. More plausible, it seems to me, would be to assume that people start out with not much knowledge at all of how future prices will most likely be linked by inflation to current prices, make guesses with all kinds of crazy ideas, and learn by trial and error. Given the difficulty of this problem, and the lack even among economists themselves of great predictive success, this would seem more reasonable. However, it is also likely to lead to far more complexity in the economy itself, because a broader class of expectations will lead to a broader class of dynamics for future prices. In this sense, the models in this paper assume away any kind of complexity from a diversity of views.

To be fair to the authors of the paper, they do spell out their assumptions clearly. They state in fact that they assume that people in their economy form views on likely future prices in the same way modern econometricians do (i.e. using the very same mathematical models). So the gist seems to be that in a world in which all people think like economists and use the equations of modern econometrics to form their expectations, then, even if they start out with some of the coefficients "mis-specified," their ability to learn to use the right coefficients can drive the economy to a rational expectations equilibrium. Does this tell us much?

My view is that NO, it doesn't tell us much. It's as if the point of the learning literature hasn't really been to explore what might happen in macroeconomics if people form expectations in psychologically realistic ways, but to see how far one can go in relaxing the assumptions of RE while STILL getting the same conclusions. Of course there's nothing wrong with that as an intellectual exercise, but it is hardly a full-bore effort to understand economic reality. It's more an exercise in theory preservation, examining the kinds of rhetoric RE theorists might be able to use to defend the continued use of their favorite ideas. "Yes, if we use the word 'learning' in a very special way, we can say that RE theories are fully consistent with human learning!"
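To make concrete just how easy this kind of "learning" problem is, here's a minimal sketch in the spirit of those models -- a toy cobweb-style setup with made-up parameter values, not any specific published model. The agents already know the correct functional form of the price process (a constant plus noise); all they have to "learn" is the constant, by averaging past observations, and sure enough they converge to the rational expectations value:

```python
import random

# A minimal sketch of the kind of "learning" in this literature: a toy
# cobweb-style model with hypothetical parameters. Agents know the form
# of the price process -- a constant plus noise -- and only learn the
# constant, via a decreasing-gain (sample-mean) update.

MU, ALPHA = 2.0, 0.5          # actual law: p_t = MU + ALPHA * E[p_t] + noise
RE_PRICE = MU / (1 - ALPHA)   # rational expectations equilibrium = 4.0

def learn(periods=20000, seed=0):
    rng = random.Random(seed)
    belief = 0.0                        # agents' estimate of the mean price
    for t in range(1, periods + 1):
        price = MU + ALPHA * belief + rng.gauss(0, 0.1)
        belief += (price - belief) / t  # average in the latest observation
    return belief

print(round(learn(), 1))   # -> 4.0  (converges to the RE equilibrium)
```

Hand the agents the right equation and a statistical updating rule, and convergence to rational expectations is almost baked in; the hard part -- discovering the form of the equation in the first place -- has been assumed away.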

Anyway, I've revisited this idea in my most recent Bloomberg column, which should appear this weekend. I find it quite irritating that lots of economists go on repeating this idea that the learning literature shows that it's OK to use RE when in fact it does nothing of the sort. You often find this kind of argument when economists smack down their critics, implying that those critics "just don't know the literature." In an interview from a few years ago, for example, Thomas Sargent suggested that criticism of excessive reliance on rationality in macroeconomics reflects "... either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished."

On a more thoughtful note, Oxford economist Simon Wren-Lewis also recently defended the RE assumption (responding to a criticism by Lars Syll), mainly arguing that he hasn't seen any useful alternatives. He also refers to this allegedly deep learning literature as a source of wisdom, although he does acknowledge that its aim has been fairly limited:

...If I really wanted to focus in detail on how expectations were formed and adjusted, I would look to the large mainstream literature on learning, to which Professor Syll does not refer. (Key figures in developing this literature included Tom Sargent, Albert Marcet, George Evans and Seppo Honkapohja: here is a nice interview involving three of them.) Macroeconomic ideas derived from rational expectations models should always be re-examined within realistic learning environments, as in this paper by Benhabib, Evans and Honkapohja for example. No doubt that literature may benefit from additional insights that behavioural economics and others can bring. However it is worth noting that a key organising device for much of the learning literature is the extent to which learning converges towards rational expectations.

However most of the time macroeconomists want to focus on something else, and so we need a simpler framework. In practice that seems to me to involve a binary choice. Either we assume that agents are very naive, and adopt something very simple like adaptive expectations (inflation tomorrow will be based on current and past inflation), or we assume rational expectations. My suspicion is that heterodox economists, when they do practical macroeconomics, adopt the assumption that expectations are naive, if they exist at all (e.g. here). So I want to explain why, most of the time, this is the wrong choice. My argument here is similar but complementary to a recent piece by Mark Thoma on rational expectations.

As I said above, Wren-Lewis is one of the more thoughtful defenders of RE, but he too here lapses into the use of "learning" without any qualification.

More importantly, however, I'm not sure why Wren-Lewis thinks there is only a binary choice. After all, there is a huge range of alternatives between simple naive expectations and RE and this is precisely the range inhabited by real people. So why not look there? Why not actually look to the psychology literature on how people learn and use some ideas from that? Or, why not do some experiments and see how people form expectations in plausible economic environments, then build theories in that way?

This kind of work can be and is being done. My Bloomberg column touches briefly on this really fascinating paper from earlier this year by economist Tiziana Assenza and colleagues. What they did, briefly, is to run experiments with volunteers who had to make predictions of inflation (and sometimes also the output gap) in a laboratory economy. The economy was simple: the volunteers' expectations fed into a simple low-dimensional set of equations that determined future outcomes for inflation and other variables -- equations known perfectly to the experimenters, but NOT to the volunteers. So the dynamics of the economy were made artificially simple, and hence easier to learn than they would be in a real economy; but the volunteers weren't given any crutch to help them learn, such as full knowledge of the form of the equations. They had to, gasp, learn on their own! In a series of experiments, Assenza and colleagues then measured what happened in the economy -- did it settle into an equilibrium, did it oscillate, etc. -- and could also closely study how people formed expectations and whether their expectations converged to some homogeneous form or stayed heterogeneous. Did they eventually converge to rational expectations? Umm, NO.

From their conclusions:
In this paper we use laboratory experiments with human subjects to study individual expectations, their interactions and the aggregate behavior they co-create within a New Keynesian macroeconomic setup and we fit a heterogeneous expectations switching model to the experimental data. A novel feature of our experimental design is that realizations of aggregate variables depend on individual forecasts of two different variables, the output gap and inflation. We find that individuals tend to base their predictions on past observations, following simple forecasting heuristics, and individual learning takes the form of switching from one heuristic to another. We propose a simple model of evolutionary selection among forecasting rules based on past performance in order to explain individual forecasting behavior as well as the different aggregate outcomes observed in the laboratory experiments, namely convergence to some equilibrium level, persistent oscillatory behavior and oscillatory convergence. Our model is the first to describe aggregate behavior in a stylized macro economy as well as individual micro behavior of heterogeneous expectations about two different variables. A distinguishing feature of our heterogeneous expectations model is that evolutionary selection may lead to different dominating forecasting rules for different variables within the same economy, for example a weak trend following rule dominates inflation forecasting while adaptive expectations dominate output forecasting (see Figs. 9(c) and 9(d)).

We also perform an exercise of empirical validation on the experimental data to test the model’s performance in terms of in-sample forecasting as well as out-of-sample predicting power. Our results show that the heterogeneous expectations model outperforms models with homogeneous expectations, including the rational expectations benchmark. [MB: In the paper they actually found that the RE benchmark provided the WORST fit of any of several possibilities considered.]

In the experiments, real learning behavior led to a range of interesting outcomes in this economy, including persistent oscillations in inflation and economic output without any equilibrium, or extended periods of recession driven by several distinct groups clinging to very different expectations of the future. Relaxing the assumption of rational expectations turns out not to be a minor thing at all. Include realistic learning behavior in your models, and you get a realistically complex economy that is very hard to predict and control, and subject to many kinds of natural instability.

One of the most important things here is that the best way to generate behavior like that observed in this experimental economy was in simulations in which agents formed their expectations through an evolutionary process, selecting from a set of heuristics and choosing whichever one happened to be working well in the recent past. This builds on earlier work of William Brock and Cars Hommes, the latter being one of the authors of the current paper. It also builds, of course, on the early work from the Santa Fe Institute on adaptive models of financial markets, which uses a similar modelling approach.
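For the curious, the flavor of such a heuristic-switching scheme can be sketched in a few lines. This is a stripped-down caricature with hypothetical parameter values, not the model Assenza and colleagues actually fit: agents weight two simple forecasting rules -- adaptive expectations and a weak trend-follower -- by a discrete-choice formula over each rule's recent squared forecast errors, so whichever rule has been working lately attracts more agents:

```python
import math

# Sketch of the heuristic-switching idea behind Brock-Hommes-style models.
# Parameter values are hypothetical, not those fitted by Assenza et al.

BETA = 2.0   # intensity of choice: how aggressively agents chase performance

def adaptive(history, forecast_prev):
    # adaptive expectations: adjust the last forecast partway
    # toward the most recent observation
    return forecast_prev + 0.65 * (history[-1] - forecast_prev)

def trend(history, _forecast_prev):
    # weak trend-following: extrapolate the last observed change
    return history[-1] + 0.4 * (history[-1] - history[-2])

def switching_weights(errors):
    """Discrete-choice weights from each rule's recent squared errors:
    lower error -> higher fitness -> larger share of agents."""
    fitness = [math.exp(-BETA * e) for e in errors]
    total = sum(fitness)
    return [f / total for f in fitness]

# One step of the scheme: forecast with each rule, weight by performance.
history = [2.0, 2.5]                      # past inflation observations
forecasts = [adaptive(history, 2.0), trend(history, 2.0)]
weights = switching_weights([0.4, 0.1])   # trend rule erred less recently
consensus = sum(w * f for w, f in zip(weights, forecasts))
print(weights[1] > weights[0])   # -> True: agents drift to the trend rule
```

Iterated over time, with realized inflation feeding back into the next round of errors, this kind of evolutionary competition among rules is what generates the oscillations and persistent heterogeneity seen in the experiments.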

So, here we do have an alternative to rational expectations, one that is both far more realistic in psychological terms, and also more realistic in generating the kinds of outcomes one sees in real experiments and real economies. Wonderful. Economists are no longer stuck with their RE straitjacket, but can readily begin exploring the kinds of things we should expect to see in economies where people act like real people (of course, a few economists are doing this, and Assenza and colleagues give some recent references in their paper).

I think this kind of thing is much more deserving of being called a "learning literature." I don't know why it doesn't get more attention.

Wednesday, December 11, 2013

Secular stagnation.... just a poor excuse?

Larry Summers recently made some waves with his proposal that maybe we're in a new era of "secular stagnation," in which low growth is the norm, and much of it comes through temporary and artificial bubbles. Paul Krugman backed the idea here. At face value, it all seems somewhat plausible, but also sounds a lot like a "just so" story to cover up and explain why economic policy and low interest rates haven't been enough to encourage new growth, yet also haven't caused inflation. It also fits together quite well with the usual stories told about the failure of ordinary policy at the zero lower bound, which turns ordinary economics on its head.

Now, I don't want to say that story is completely wrong, but remember that it comes out of quite standard macro analyses based on representative agent models with individuals and firms optimizing over time, and where -- perhaps most importantly -- things like debt overhang do not enter in any way into explaining how people are behaving (and why they may be hugely risk averse). That should be enough to raise some major questions about the plausibility of the story, especially in the aftermath of the biggest financial crisis in a century. For a lot more on such doubts, see the illuminating recent paper Stable Growth in an Era of Crises by Joseph Stiglitz.

But also see this convincing counterargument by some analysts at Independent Strategy, as discussed in the Financial Times. From Izabella Kaminska's discussion:
From the note, their main points are:
• There is no shortage of high return investment projects in the world. And the dearth of global corporate investment, which drove the great recession, means that productive potential is shrinking despite corporate profitability, leverage and cash balances being sound.

• The three ingredients for growth are a) a stable macro environment; b) a sound banking system; c) economic reforms that encourage entrepreneurship. What is missing right now is private sector confidence in the ability of governments and central bankers to provide all three.

• Credit bubbles can boost growth only temporarily and incur heavy costs in terms of subsequent deleveraging and misallocation of resources.
And expanding a bit further, they add:
Secular stagnation is a myopic and short-term view for two reasons. First, it is based on the experience of the Anglo-Saxon economies and parts of Europe currently as well as Japan since the bursting of the bubble at the start of the 1990s. Krugman muses that interest rates should be set at the growth rate of populations, because they would then be equal to a society’s potential capital productivity (and the long-term return on it). But the change in population growth is less relevant than the rise in productivity of an expanding workforce.

Take Germany: its population is ageing and its net population growth is slowing to a trickle (although that may be improved by increased net immigration from southern and eastern Europe). But Germany’s productivity level and growth is high (as is total factor productivity, expressing the gains from technology). Italy has a similar stagnation in its working population, but its real GDP growth has disappeared because of the fall in total factor productivity — Figure 1.
 Read Kaminska's discussion here.

Actually, this pushback isn't really surprising. It's what you get if you take the longer historical view, rather than trying to make excuses for why economic theory still can't make sense of things (the theory is poor, that's why!). As a couple of economists from Goldman Sachs noted just after Summers' speech:
"Our view of the recent weakness is more cyclical than secular... The slow rate of recovery in recent years is roughly in line with the performance of other economies following major financial crises, as shown by Reinhart and Rogoff, and the reasons for the weakness in aggregate demand over the last few years have now begun to diminish."
This refers to the great book by Reinhart and Rogoff, by the way, not their other discredited paper.

Saturday, December 7, 2013

Quantum dots from... coal?

For those not following along with recent physics and materials science, the newest wonder material is graphene, made of two-dimensional sheets of carbon just one atom thick. The 2010 Nobel Prize in physics was awarded for its discovery. Here's a short primer on why it's cool: amazing strength, conductivity and flexibility. To be honest, there's a long road ahead in making practical devices from the stuff -- this article in Nature from a couple of weeks ago surveyed the promise and obstacles -- but the potential is huge.

Now, how about this for irony: a paper just out in Nature Communications shows that graphene quantum dots can be made in a very easy one-step process from ordinary coal. A quantum dot is like an artificial atom, and can be engineered to absorb/emit light at precise frequencies. Isn't it ironic that the cheap stuff we're burning all over the globe just for crude energy may be a great source for one of the most amazing materials we've ever discovered? Here's the abstract of the paper:
Coal is the most abundant and readily combustible energy resource being used worldwide. However, its structural characteristic creates a perception that coal is only useful for producing energy via burning. Here we report a facile approach to synthesize tunable graphene quantum dots from various types of coal, and establish that the unique coal structure has an advantage over pure sp2-carbon allotropes for producing quantum dots. The crystalline carbon within the coal structure is easier to oxidatively displace than when pure sp2-carbon structures are used, resulting in nanometre-sized graphene quantum dots with amorphous carbon addends on the edges. The synthesized graphene quantum dots, produced in up to 20% isolated yield from coal, are soluble and fluorescent in aqueous solution, providing promise for applications in areas such as bioimaging, biomedicine, photovoltaics and optoelectronics, in addition to being inexpensive additives for structural composites.
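To see why dot size translates into emission color, a crude particle-in-a-box estimate already shows the 1/L² scaling of the confinement energy. This is my own back-of-the-envelope sketch, not the paper's analysis; the free-electron mass and the box sizes are illustrative assumptions only:

```python
# Back-of-the-envelope: particle-in-a-box estimate of the confinement
# energy of a "box" the size of a quantum dot. This is a toy model using
# the free-electron mass, NOT the analysis in the Nature Communications
# paper; it only illustrates the 1/L^2 size-tuning of the optical gap.

H = 6.626e-34         # Planck constant, J*s
M_E = 9.109e-31       # free-electron mass, kg (a crude stand-in)
J_PER_EV = 1.602e-19  # joules per electron-volt

def confinement_shift_ev(size_nm):
    """Ground-state energy of a 1D infinite well of width size_nm, in eV."""
    L = size_nm * 1e-9
    return H**2 / (8 * M_E * L**2) / J_PER_EV

# Smaller dots -> larger confinement energy -> bluer absorption/emission.
for d in (2.0, 5.0, 10.0):
    print(f"{d:4.1f} nm: ~{confinement_shift_ev(d):.3f} eV")
```

The point is just the scaling: halving the dot size quadruples the energy shift, which is why the paper can report "tunable" quantum dots simply by controlling size.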

Friday, December 6, 2013

More on Obamacare... Oh Man...

Something a little weird happened with my latest Bloomberg column, which appeared last Monday only to disappear pretty much instantaneously and for mysterious reasons. After some investigation, it seems that a "code" was missing in the html or in who knows what other language. Anyway, it's there now.

The topic is Obamacare. Briefly, the theme is that healthcare isn't something we should expect markets to handle well (think of Akerlof's "The Market for Lemons" and its lesson about market failure in similar situations). Economists, I think, mostly know this; some things can be better organized by government. Why doesn't that message get out? Or do economists not really believe it? I can't tell.

Anyway, everyone should have a look at two things that go much deeper than my piddling little column:

1. A great recent article by Michael Sandel that examines how and why markets are often anything but value-free; creating a market for something often changes how we think about and value that thing, with huge implications for whether markets are beneficial or not. An important point he makes is that whether something should be left to the market IS NOT a question of economics; it always (or almost always) involves values far broader than economic efficiency, and so lies well outside economists' claimed area of expertise.

I think Sandel is right. And I think lots of people are starting to realise this. Even the Pope!!

2. A second thing worth reading is a fascinating book from 1976 by Fred Hirsch, Social Limits to Growth. I'd never heard of it before reading Sandel's article, which is a little embarrassing, as I'm sure every graduate student in economics has read it as part of their ordinary training. I have a lot to learn. It's a real classic, and it suggests that some of our basic psychological and social behaviors must have long-term effects on our economic well-being that the usual theories of markets completely miss. Kind of obvious when you say it like that, but this is economics.... people have tried very hard to deny the obvious.... Hirsch tried hard not to...

And, for a short excellent primer on Hirsch, see this by someone in the philosophy department of the University of Manitoba, or connected to that department, or a dog of someone in the department.... I have no idea who. But it's written very clearly and I admire it.

Thursday, December 5, 2013

Brad DeLong -- sensible thoughts on "microfoundations"

The one thing about modern macroeconomics that I find really hard to comprehend is that theorists jump through hoops to get models with a certain kind of mathematical consistency, even though this guarantees that the models make very little contact with reality (and real data). In fact, it guarantees that these theories rest on sweeping assumptions that we know are not true of the real world. I've written about this mystery before. A theory having "microfoundations" -- which macro theories are supposed to have to be considered respectable -- is thereby a theory we know is inconsistent with what we know about real human behavior and economic reality. Economists, it seems, only allow themselves to believe in things they know NOT to be true!! How wonderful and counter-intuitive! (I can see why the field must be appealing.)

Imagine if the engineers building and managing modern GPS and other global navigation systems were, for bizarre historical reasons, constrained to go on using a map of the globe essentially like that pictured above: The Square and Stationary Earth of Professor Orlando Ferguson.

Brad DeLong has an interesting comment on this topic today.

Monday, November 18, 2013

Cleaning up finance? That WOULD be a surprise

Several things worth reading that give tiny rays of hope that one day we might get some regulations with real teeth to reform the financial system.

First, some remarks by Kenneth C. Griffin, the founder and chief executive of Citadel, giving his views that the big banks should be broken up, or laws changed to encourage the flowering of smaller banks with competitive advantages on the local level (he mentions in particular putting caps on the size of deposits).

Second, an essay by law professor Peter Henning, a specialist in financial fraud and enforcement. He looks at how developments such as derivatives and high-frequency trading have created a host of opportunities for sophisticated market manipulation, but also at how some recent legal changes have given regulators more power to pursue actions even if they cannot prove "intention to manipulate."

Third, an extended article looking at how things are developing with the so-called Volcker Rule that would, ostensibly, prohibit investment banks from trading on their own proprietary accounts. This gives a mixed message, actually, and it even seems that some of the regulators are part of the problem. Look at the second sentence below:
Gary Gensler, head of the Commodity Futures Trading Commission, also wants to make it harder for banks to disguise speculative wagers as permissible trading done for customers, according to the officials briefed on the discussions. Underscoring the tension, other regulators privately groused that Mr. Gensler’s agency — which spent most of the last few years completing dozens of other new rules under Dodd-Frank — was too slow to raise concerns about the Volcker Rule.
Finally, a great speech by Elizabeth Warren on why Too Big To Fail remains a very serious problem even five years after the worst moments of the crisis. Some excerpts below:
Thank you, Americans for Financial Reform and the Roosevelt Institute for inviting me to speak today. I’ve been working very closely with both AFR and Roosevelt for years now, and I’m really delighted to be here.

It has been five years since the financial crisis, but we all remember its darkest days. Credit dried up. The stock market cratered. Historic institutions like Lehman Brothers and Merrill Lynch were wiped out. There were legitimate fears that our economy was tumbling over a cliff and that we were heading into another Great Depression. We averted that grim outcome, but the damage was staggering. A recent report by the Federal Reserve Bank of Dallas estimated that the financial crisis cost us upward of 14 trillion dollars — trillion, with a t. That’s $120,000 for every American household — more than two years’ worth of income for the average family. Billions of dollars in retirement savings disappeared. Millions of workers lost their jobs and their sense of financial security. Entire communities were devastated. And a Census Bureau study that came out just a couple months ago shows that home ownership rates declined by 15 percent for families with young children. The Crash of 2008 changed lives forever.

In April 2011, after a two-year bipartisan inquiry, the Senate Permanent Subcommittee on Investigations released a 635-page report that identified the primary factors that led to the crisis. The list included high-risk mortgage lending, inaccurate credit ratings, exotic financial products, and, to top it all off, the repeated failure of regulators to stop the madness. As Senator Tom Coburn, the Subcommittee’s ranking member, said: “Blame for this mess lies everywhere from federal regulators who cast a blind eye, Wall Street bankers who let greed run wild, and members of Congress who failed to provide oversight.” Even Jamie Dimon, the CEO of JPMorgan Chase, has emphasized inadequate regulation as a source of the crisis. He wrote this to his shareholders: “had there been stronger standards in the mortgage markets, one huge cause of the recent crisis might have been avoided.”

The crash happened quickly and dramatically, and it caught our nation and apparently even our regulators by surprise. But don’t let that fool you. The causes of the crisis were years in the making, and the warning signs were everywhere. As many of you know, I spent most of my career studying the growing economic pressures on middle class families — families that worked hard and played by the rules but still can’t get ahead. And I’ve also studied the financial services industry and how it has developed over time. A generation ago, the price of financial services — credit cards, checking accounts, mortgages, and signature loans — was pretty easy to see. Both borrowers and lenders understood the basic terms of the deal. But by the time the financial crisis hit, a different form of pricing had emerged. Lenders began to use a low advertised price on the front end to entice customers, and then made their real money with fees and charges and penalties and re-pricing in the fine print. Buyers became less and less able to evaluate the risks of a financial product, comparison shopping became almost impossible, and the market became less efficient. Credit card companies took the lead, with their contracts ballooning from a page and a half back in 1980 to more than 30 pages by the beginning of the 2000s. And teaser-rate credit cards — which advertised deceptively low interest rates — paved the way for teaser-rate mortgages. When I worked to set up the Consumer Financial Protection Bureau, I pushed hard for steps that would increase transparency in the marketplace. The crisis began one lousy mortgage at a time, and there is a lot we must do to make sure there are never again so many lousy mortgages. CFPB made some important steps in the right direction, and I think we’re a lot safer than we were.

But what about the other causes of the crisis? … Where are we now, five years after the crisis hit and three years after Dodd-Frank? I know there has been much discussion today about a variety of issues, but I’d like to focus on one in particular. Where are we now on the “Too Big to Fail” problem? Where are we on making sure that the behemoth institutions on Wall Street can’t bring down the economy with a wild gamble? Where are we in ending a system that lets investors and CEOs scoop up all the profits in good times, but forces taxpayers to cover the losses in bad times? After the crisis, there was a lot of discussion about how Too Big to Fail distorted the marketplace, creating lower borrowing costs for the largest institutions and competitive disadvantages for smaller ones. There was talk about moral hazard and the dangers of big banks getting a free, unwritten, government-guaranteed insurance policy. Sure, there was talk, but look at what happened: Today, the four biggest banks are 30% larger than they were five years ago. And the five largest banks now hold more than half of the total banking assets in the country. One study earlier this year showed that the Too Big to Fail status is giving the 10 biggest US banks an annual taxpayer subsidy of $83 billion. Wow. Who would have thought five years ago, after we witnessed firsthand the dangers of an overly concentrated financial system, that the Too Big to Fail problem would only have gotten worse?

We should not accept a financial system that allows the biggest banks to emerge from a crisis in record-setting shape while working Americans continue to struggle. And we should not accept a regulatory system that is so besieged by lobbyists for the big banks that it takes years to deliver rules and then the rules that are delivered are often watered-down and ineffective. What we need is a system that puts an end to the boom and bust cycle. A system that recognizes we don’t grow this country from the financial sector; we grow this country from the middle class. Powerful interests will fight to hang on to every benefit and subsidy they now enjoy. Even after exploiting consumers, larding their books with excessive risk, and making bad bets that brought down the economy and forced taxpayer bailouts, the big Wall Street banks are not chastened. They have fought to delay and hamstring the implementation of financial reform, and they will continue to fight every inch of the way. That’s the battlefield. That’s what we’re up against. But David beat Goliath with the establishment of CFPB and, just a few months ago, with the confirmation of Rich Cordray. David beat Goliath with the passage of Dodd-Frank. We did that together – Americans for Financial Reform, the Roosevelt Institute, and so many of you in this room. I am confident David can beat Goliath on Too Big to Fail. We just have to pick up the slingshot again.


Speech by Elizabeth Warren

Friday, November 15, 2013

This guy has some issues with Rational Expectations

I just happened across this interesting panel discussion from a couple years ago featuring a number of economists involved with the Rational Expectations movement, either as key proponents (Robert Lucas) or critics (Bob Shiller). A fascinating exchange comes late on when they discuss Jack Muth -- ostensibly the inventor of the idea, although others trace it back to an early paper of Herb Simon -- and Muth's later attitude on this assumption. It seems that Muth came to doubt the usefulness of the idea after he looked at the behaviour of some business firms and found that they didn't seem to follow the Rational Expectations paradigm at all. He thought, therefore, that it would make sense to employ some more plausible and realistic ideas about how people form expectations, and he pointed, even in the early 1980s, to the work of Kahneman and Tversky.

I'm just going to quote the extended exchange below, including a comment from Shiller who makes the fairly obvious point that if economics is about human behavior and how it influences economic outcomes, then there clearly ought to be a progressive interchange between psychology and economics, and from Lucas who, amazingly enough, seems to find this idea utterly abhorrent, apparently because it may spoil economics as a pure mathematical playground. That's my reading at least:
I wish Jack Muth could be here to answer that question, but obviously he can’t because he died just as Hurricane Wilma was zeroing in on his home on the Florida Keys. But he did send me a letter in 1984. This was a letter in response to an earlier draft of that paper you are referring to. I sent Jack my paper with some trepidation because it was not encouraging to his theory. And much to my surprise, he wrote back. This was in October 1984. And he said, I came up with some conclusions similar to some of yours on the basis of forecasts of business activity compiled by the Bureau of Business Research at Pitt. [Letter Muth to Lovell (2 October 1984)] He had got hold of the data from five business firms, including expectations data, analyzed it, and found that the rational expectations model did not pass the empirical test.

He went on to say, “It is a little surprising that serious alternatives to rational expectations have never really been proposed. My original paper was largely a reaction against very naïve expectations hypotheses juxtaposed with highly rational decision-making behavior and seems to have been rather widely misinterpreted. Two directions seem to be worth exploring: (1) explaining why smoothing rules work and their limitations and (2) incorporating well known cognitive biases into expectations theory (Kahneman and Tversky). It was really incredible that so little has been done along these lines.”

Muth also said that his results showed that expectations were not in accordance with the facts about forecasts of demand and production. He then advanced an alternative to rational expectations. That alternative he called an “errors-in-the-variables” model. That is to say, it allowed the expectation error to be correlated with both the realization and the prediction. Muth found that his errors-in-variables model worked better than rational expectations or Mills’ implicit expectations, but it did not entirely pass the tests. In a shortened version of his paper published in the Eastern Economic Journal he reported,

“The results of the analysis do not support the hypotheses of the naive, exponential, extrapolative, regressive, or rational models. Only the expectations revision model used by Meiselman is consistently supported by the statistical results. . . . These conclusions should be regarded as highly tentative and only suggestive, however, because of the small number of firms studied.” [Muth (1985, p. 200)]

Muth thought that we should not only have rational expectations, but if we’re going to have rational behavioral equations, then consistency requires that our model include rational expectations. But he was also interested in the results of people who do behavioral economics, which at that time was a very undeveloped area.

Does anyone else want to comment on issue of testing rational expectations against alternatives and if it matters whether rational expectations stands up to empirical tests or whether it is not the sort of thing for which testing would be relevant?

What comes to my mind is that rational expectations models have to assume away the problem of regime change, and that makes them hard to apply. It’s the same criticism they make of Kahneman and Tversky, that the model isn’t clear and crisp about exactly how you should apply it. Well, the same is true for rational expectations models. And there’s a new strand of thought that’s getting impetus lately, that the failure to predict this crisis was a failure to understand regime changes. The title of a recent book by Carmen Reinhart and Ken Rogoff—the title of the book is This Time Is Different—to me invokes this problem of regime change, that people don’t know when there’s a regime change, and they may assume regime changes too often—that’s a behavioral bias [Carmen Reinhart and Kenneth Rogoff (2009)]. I don’t know how we’re going to model that. Reinhart and Rogoff haven’t come forth with any new answers, but that’s what comes to my mind now, at this point in history. And I don’t know whether you can comment on it: how do we handle the regime change problem? If you don’t have data on subprime mortgages then you build a model that doesn’t have subprime mortgages in it. Also, it doesn’t have the shadow banking sector in it either. Omitting key variables because we don’t have the data history on them creates a fundamental problem. That’s why many nice concepts don’t find their way into empirical models and are not used more. They remain just a conceptual model.

Bob, do you want to . . . or Dale. . . .

More as a theorist, I am sensitive to that problem. That is the issue. If the world were stable, then rational expectations means simply agents learning about their environment and applying what they learned to their decisions. If the environment’s simple, then how else would you structure the model? It’s precisely—if you like, call it “regime change”—what do you do with unanticipated events? More generally—regime changes is only one of them—you were talking about institutional change that was or wasn’t anticipated. As a theorist, I don’t know how to handle that.

Bob, did you want to comment on that? You’re looking unhappy, I thought.

No. I mean, you can’t read Muth’s paper as some recipe for cranking out true theories about everything under the sun—we don’t have a recipe like that. My paper on expectations and the neutrality of money was an attempt to get a positive theory about what observations we call a Phillips curve. Basically it didn’t work. After several years, trying to push that model in a direction of being more operational, it didn’t seem to explain it. So we had what we call price stickiness, which seems to be central to the way the system works. I thought my model was going to explain price stickiness, and it didn’t. So we’re still working on it; somebody’s working on it. I don’t think we have a satisfactory solution to that problem, but I don’t think that’s a cloud over Muth’s work. If Jack thinks it is, I don’t agree with him. Mike cites some data that Jack couldn’t make sense out of using rational expectations. . . . There’re a lot of bad models out there. I authored my share, and I don’t see how that affects a lot of things we’ve been talking about earlier on about the value of Muth’s contribution.

Just to wrap up the issue of possible alternatives to rational expectations or complements to rational expectations. Does behavioral economics or psychology in general provide a useful and viable alternative to rational expectations, with the emphasis on “useful”?

Well, that’s the criticism of behavioral economics, that it doesn’t provide elegant models. If you read Kahneman and Tversky, they say that preferences have a kink in them, and that kink moves around depending on framing. But framing is hard to pin down. So we don’t have any elegant behavioral economics models. The job isn’t done, and economists have to read widely and think about these issues. I am sorry, I don’t have a good answer. My opinion is that behavioral economics has to be on the reading list. Ultimately, the whole rationality assumption is another thing; it’s interesting to look back on the history of it. Back at the turn of the century—around 1900—when utility-maximizing economic theory was being discovered, it was described as a psychological theory—did you know that, that utility maximization was a psychological theory? There was a philosopher in 1916—I remember reading, in the Quarterly Journal of Economics—who said that the economics profession is getting steadily more psychological. {laughter} And what did he mean? He said that economists are putting people at the center of the economy, and they’re realizing that people have purposes and they have objectives and they have trade-offs. It is not just that I want something, I’ll consider different combinations and I’ll tell you what I like about that. And he’s saying that before this happened, economists weren’t psychological; they believed in such things as gold or venerable institutions, and they didn’t talk about people. Now the whole economics profession is focused on people. And he said that this is a long-term trend in economics. And it is a long-term trend, so the expected utility theory is a psychological theory, and it reflects some important insights about people. In a sense, that’s all we have, behavioral economics; and it’s just that we are continuing to develop and to pursue it.
The idea about rational expectations, again, reflects insights about people—that if you show people recurring patterns in the data, they can actually process it—a little bit like an ARIMA model—and they can start using some kind of brain faculties that we do not fully comprehend. They can forecast—it’s an intuitive thing that evolved and it’s in our psychology. So, I don’t think that there’s a conflict between behavioral economics and classical economics. It’s all something that will evolve responding to each other—psychology and economics.

I totally disagree.

I think that we’ve come back around the circle—back to Carnegie again. I was a student of Simon and [James] March and [Richard] Cyert—in fact, I was even a research assistant on A Behavioral Theory of the Firm [Cyert and March (1963)]. So we talked about that in those days too. I am much less up on modern behavioral economics. However, I think what you are referring to are those aspects of psychology that illustrate the limits, if you like, of perception and, say, cognitive ability. Well, Simon did talk about that too—he didn’t use those precise words. What I do see on the question of expectations—right down the hall from me—is my colleague Chuck Manski [Charles Manski] and a group of people that he’s associated with. They’re trying to deal with expectations of ordinary people. For a lot of what we are talking about in macroeconomics, we’re thinking of decision-makers sure that they have all the appropriate data and have a sophisticated view about that data. You can’t carry that model of the decision-maker over to many household decisions. And what’s coming out of this new empirical research on expectations is precisely that: how do people think about the uncertainties that go into deciding about what their pension plan is going to look like. I think that those are real issues, where behavioral economics, in that sense, can make a very big contribution to what the rest of us do.

One thing economics tries to do is to make predictions about the way large groups of people, say, 280 million people are going to respond if you change something in the tax structure, something in the inflation rate, or whatever. Now, human beings are hugely interesting creatures; so neurophysiology is exciting, cognitive psychology is interesting—I’m still into Freudian psychology—there are lots of different ways to look at individual people and lots of aspects of individual people that are going to be subject to scientific study. Kahneman and Tversky haven’t even gotten to two people; they can’t even tell us anything interesting about how a couple that’s been married for ten years splits or makes decisions about what city to live in—let alone 250 million. This is like saying that we ought to build it up from knowledge of molecules or—no, that won’t do either, because there are a lot of subatomic particles—we’re not going to build up useful economics in the sense of things that help us think about the policy issues that we should be thinking about starting from individuals and, somehow, building it up from there. Behavioral economics should be on the reading list. I agree with Shiller about that. A well-trained economist or a well-educated person should know something about different ways of looking at human beings. If you are going to go back and look at Herb Simon today, go back and read Models of Man. But to think of it as an alternative to what macroeconomics or public finance people are doing or trying to do . . . there’s a lot of stuff that we’d like to improve—it’s not going to come from behavioral economics. . . at least in my lifetime. {laughter}

We have a couple of questions to wrap up the session. Let me give you the next to last one: The Great Recession and the recent financial crisis have been widely viewed in both popular and professional commentary as a challenge to rational expectations and to efficient markets. I really just want to get your comments on that strain of the popular debate that’s been active over the last couple years.

If you’re asking me did I predict the failure of Lehman Brothers or any of the other stuff that happened in 2008, the answer is no.

No, I’m not asking you that. I’m asking you whether you accept any of the blame. {laughter} The serious point here is that, if you read the newspapers and political commentary and even if you read commentary among economists, there’s been a lot of talk about whether rational expectations and the efficient-markets hypotheses is where we should locate the analytical problems that made us blind. All I’m asking is what do you think of that?

Is that what you get out of Rogoff and Reinhart? You know, people had no trouble having financial meltdowns in their economies before all this stuff we’ve been talking about came on board. We didn’t help, though; there’s no question about that. We may have focused attention on the wrong things; I don’t know.

Well, I’ve written several books on that. {laughter} My latest, with George Akerlof, is called Animal Spirits [2009]. And we presented an idea that Bob Lucas probably won’t like. It was something about the Keynesian concept. Another name that’s not been mentioned is John Maynard Keynes. I suspect that he’s not popular with everyone on this panel. Animal Spirits is based on Keynes. He said that animal spirits is a major driver of the economy. To understand Keynes, you have to go back to his 1921 book, Treatise on Probability [Keynes (1921)]. He said—he’s really into almost this regime-change thing that we brought up before—that people don’t have probabilities, except in very narrow, special circumstances. You can think of a coin-toss experiment, and then you know what the probabilities are. But in macroeconomics, it’s always fuzzy. What Keynes said in The General Theory [1936] is that, if people are really thoroughly rational, they would be paralyzed into inaction, because they just don’t know. They don’t know the kind of things that you would need to put into a decision-theory framework. But they do act, and so there is something that drives people—it’s animal spirits. You’re lying in bed in the morning and you could be thinking, “I don’t know what’s going to happen to me today; I could get hit by a truck; I just will stay in bed all day.” But you don’t. So animal spirits is the core of—maybe I’m telling this too bluntly—but it fluctuates. Sometimes it is represented as confidence, but it is not necessarily confidence. It is trust in each other, our sense of whether other people think that we’re moving ahead or . . . something like that. I believe that’s part of what drives the economy. It’s in our book, and it’s not very well modeled yet. But Keynes never wrote his theory down as a model either. He couldn’t do it; he wasn’t ready. These are ideas that, even to this day, are fuzzy. But they have a hold on people. 
I’m sure that Ben Bernanke and Austan Goolsbee are influenced by John Maynard Keynes, who was absolutely not a rational-expectations theorist. And that’s another strand of thought. In my mind, the strands are not resolved, and they are both important ways of looking at the world.

Foreseeing the next financial crisis...

My latest column in Bloomberg (out today, 15/11/2013) looks at the silly pronouncements of some economists (Nobel Prize winners Robert Lucas and Eugene Fama among them) to the effect that "financial crises are by their very nature unpredictable." These are, in my opinion, essentially meaningless statements when examined at all; they amount to an excuse: "don't blame us economists for not having a clue the whole system was about to explode!" The column argues -- and this is the important point -- that lots of economists haven't taken this easy and shameful way out, but have instead taken on the hard work of developing ways to measure systemic risks and, we hope, give us a better chance to detect important instabilities and imbalances in financial markets as they emerge. Many have even begun collaborating with physicists, engineers and other such types. If they have a little success, we might just possibly do something to head off the worst trouble in future crises (IF the regulatory system doesn't suffer massive political interference at just the crucial moment, which I recognize is a big IF).

As part of another project, I've been perusing recent efforts to develop various measures of systemic risk, and thought some readers might be interested. I've only started, so this is a very incomplete list, but there's some interesting stuff here:

First, a few very much from within the mainstream econ literature:

Acemoglu, D., Ozdaglar, A., & Tahbaz-Salehi, A. (2013). Systemic Risk and Stability in Financial Networks.
Adrian, T., Covitz, D., & Liang, J. (2013). Financial stability monitoring.
Billio, M., Getmansky, M., Lo, A. W., & Pelizzon, L. (2012). Econometric measures of connectedness and systemic risk in the finance and insurance sectors. Journal of Financial Economics, 104(3), 535–559.
Bisias, D., Flood, M., Lo, A. W., & Valavanis, S. (2012). A Survey of Systemic Risk Analytics. Annual Review of Financial Economics, 4(1), 255–296. doi:10.1146/annurev-financial-110311-101754
Brunnermeier, M. K., & Oehmke, M. (2012). Bubbles, Financial Crises, and Systemic Risk.

Then, some others by people bringing a wider spectrum of ideas and methods to bear:
Battiston, S., Puliga, M., Kaushik, R., Tasca, P., & Caldarelli, G. (2012). DebtRank: too central to fail? Financial networks, the FED and systemic risk. Scientific reports, 2, 541.
Beale, N., Rand, D. G., Battey, H., Croxson, K., May, R. M., & Nowak, M. A. (2011). Individual versus systemic risk and the Regulator’s Dilemma. Proceedings of the National Academy of Sciences of the United States of America, 108(31), 12647–52. doi:10.1073/pnas.1105882108
Bookstaber, R. (2012). Using Agent-Based Models for Analyzing Threats to Financial Stability. Office of Financial Research.
Farmer, J. D., Gallegati, M., Hommes, C., Kirman, A., Ormerod, P., Cincotti, S., … Helbing, D. (2012). A complex systems approach to constructing better models for managing financial markets and the economy. The European Physical Journal Special Topics, 214(1), 295–324. doi:10.1140/epjst/e2012-01696-9
Haldane, A. G., & May, R. M. (2011). Systemic risk in banking ecosystems. Nature, 469(7330), 351–5. doi:10.1038/nature09659
Markose, S. M. (2013). Systemic risk analytics: A data-driven multi-agent financial network (MAFN) approach. Journal of Banking Regulation, 14(3-4), 285–305. doi:10.1057/jbr.2013.10
Frank, A., et al. Security in the Age of Systemic Risk: Strategies, Tactics and Options for Dealing with Femtorisks and Beyond. International Institute for Applied Systems Analysis.
Thurner, S., & Poledna, S. (2013). DebtRank-transparency: controlling systemic risk in financial networks. Scientific reports, 3, 1888. doi:10.1038/srep01888
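Just to give a flavor of what these measures look like in practice, here's a sketch of the DebtRank idea from the Battiston et al. paper above. To be clear: this is my own stripped-down illustration, not the authors' code, and the two-bank example numbers at the bottom are invented.

```python
import numpy as np

def debtrank(A, E, v, shocked, psi=1.0):
    """A stripped-down DebtRank in the spirit of Battiston et al. (2012).

    A[i, j] : exposure making bank j vulnerable to bank i (zero diagonal)
    E       : equity of each bank
    v       : relative economic value of each bank (sums to 1)
    shocked : indices of the initially distressed banks
    psi     : initial distress level of the shocked banks, in [0, 1]
    """
    n = len(E)
    W = np.minimum(1.0, A / E)      # impact of i's distress on j
    h = np.zeros(n)                 # distress levels, in [0, 1]
    h[list(shocked)] = psi
    state = np.array(['U'] * n)     # U: undistressed, D: distressed, I: inactive
    state[list(shocked)] = 'D'
    h_initial = h @ v               # value endangered by the shock alone

    while (state == 'D').any():
        active = np.where(state == 'D')[0]
        h_old = h.copy()
        for j in range(n):
            if state[j] != 'I':     # inactive nodes no longer absorb distress
                h[j] = min(1.0, h_old[j] + sum(W[i, j] * h_old[i] for i in active))
        for j in range(n):
            if state[j] == 'U' and h[j] > 0:
                state[j] = 'D'
        state[active] = 'I'         # each node propagates distress only once

    return h @ v - h_initial        # extra value endangered by contagion

# Toy example: bank 1 holds an exposure of 10 to bank 0; equity 20 each.
A = np.array([[0.0, 10.0], [0.0, 0.0]])
loss = debtrank(A, E=np.array([20.0, 20.0]), v=np.array([0.5, 0.5]),
                shocked={0}, psi=0.5)
print(loss)  # 0.125: half of bank 0's distress spills over to bank 1
```

The "propagates only once" rule is the trick that stops distress from reverberating forever around cycles in the network, which is what distinguishes DebtRank from naive eigenvector-style centrality measures.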

Isn't it curious that all these people are wasting their time and effort, since we know from just a few seconds' thought that crises are impossible to predict? In fact, doesn't the EMH tell us that? And isn't that sacred principle enough to stop all further thought? It's all a mystery to me why these people persist in what they're doing.

Thursday, November 14, 2013

Beating the dead horse of rational expectations...

The above award should be given to University of Oxford economist Simon Wren-Lewis for concocting yet another defense of the indefensible idea of Rational Expectations (RE). Gotta admire his determination. I've written about this idea many times (here and here, for example), and I thought it had died a death, but obviously not. I won't say too much, except to note that the strategy of the Wren-Lewis argument is essentially to ask "what are the alternatives to Rational Expectations?", to offer just one possible alternative, which he calls "naive adaptive expectations," and then to criticize this silly alternative as unrealistic, which it indeed is. But that's no defense of RE.

He never addresses the question of why economists don't use more realistic ways of modelling how people form their expectations, for example by looking to psychology and experimental studies of how people learn (especially through social interactions and copying behavior). The only defense he offers on that score is that economists don't want too many details because they seek a "simple" way to model expectations so they can solve their favorite macroeconomic models. The fact that this renders such models possibly quite useless and misleading as guides to the real world doesn't seem to give him (or others) pause.

Lars Syll was a target in the Wren-Lewis post and has a nice rejoinder here. In comments, others raised concerns about why Lars didn't mention specific alternatives to RE. I added a comment there, which I'll reproduce here:
It seems to me that there are clear alternatives to rational expectations, and I'm not sure why economists seem loath to use them. Simon Wren-Lewis gives one alternative as naive "adaptive expectations," but this seems like a straw man: under this rule, people believe more or less that current trends will simply continue. That is truly naive. Expectations are important, and the psychological literature on learning suggests that people form them in many ways, using heuristic theories and rules of thumb, and then adjusting their use of these heuristics through experience. This is the kind of adaptive expectations that ought to be used in macro models.

From what I have read, however, the vast "learning literature" in macroeconomics that defenders of RE often refer to really doesn't go very far in exploring learning. A review I read as recently as 2009 used learning algorithms which ASSUMED that people already know the right model of the economy and only need to learn the values of some parameters. I suspect this is done on purpose so that the learning process converges to RE -- and an apparent defense of RE is therefore achieved. But this is only a trick. Use more realistic learning behavior to model expectations and you find little convergence at all -- just ongoing learning as the economy itself keeps doing new things in ways the participants never quite manage to predict.

As Simon Wren-Lewis himself notes, "it is worth noting that a key organising device for much of the learning literature is the extent to which learning converges towards rational expectations." So again, it seems as if the purpose of the model is to get the conclusion we want, not to explore the kinds of things we might actually expect to see in the world. This is what makes people angry -- rightfully, I think -- about the RE idea. I suspect the REAL reason is that, if one uses more plausible learning behavior (not the silly naive kind of adaptive expectations), one finds that the economy isn't guaranteed to settle down to any kind of equilibrium, one can't say anything honest about the welfare of any outcomes, and so most of what has been developed in economics turns out to be pretty useless. Most economists find that too much to stomach.
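The contrast I have in mind is easy to make concrete. Here's a toy simulation of my own devising -- not a model from the learning literature, and every number in it is invented. The outcome each period is x_t = a + b·x^e_t + noise, and I compare two expectations rules: decreasing-gain learning of the (conveniently assumed known) correct model, which duly converges toward the RE fixed point, and a crude trend-following heuristic, under which the economy keeps fluctuating around that point and forecast errors stay persistently larger.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma, T = 2.0, 0.9, 0.1, 5000
x_star = a / (1.0 - b)      # the rational-expectations fixed point

def simulate(forecast):
    """Run x_t = a + b * x^e_t + noise, for a given expectations rule."""
    x = [x_star, x_star]    # start at the RE point, to be generous
    errors = []
    for _ in range(T):
        xe = forecast(x)                       # expectation formed today
        xt = a + b * xe + sigma * rng.standard_normal()
        errors.append(abs(xt - xe))
        x.append(xt)
    return np.array(errors)

# Rule 1: the agent already knows the correct (constant-mean) model and
# just averages past outcomes -- decreasing-gain learning, which converges.
def averaging_forecast(x):
    return float(np.mean(x))

# Rule 2: a crude trend-extrapolating heuristic, which keeps the economy
# oscillating around, but never settling at, the RE point.
def trend_forecast(x, g=1.0):
    return x[-1] + g * (x[-1] - x[-2])

e_avg = simulate(averaging_forecast)
e_trend = simulate(trend_forecast)
print(e_avg[-500:].mean(), e_trend[-500:].mean())
```

Under the first rule, late-sample forecast errors shrink to the irreducible noise floor; under the second, they remain systematically larger and never settle. The point is only qualitative, but it shows how much of the "convergence to RE" result is baked in by handing agents the right model from the start.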
One day soon I hope this subject really will be a dead horse.

Tuesday, October 29, 2013

The "triviality" of the EMH

By way of Lars Syll, some comments from Robert Shiller on the difference between the obviously true (and not very interesting) version of the Efficient Markets Hypothesis -- "markets are hard to beat" -- and the obviously false (yet still widely believed) version that markets possess some kind of mysterious and magical wisdom. The latter is probably the most damaging perversion in finance; maybe in all of economics. Shiller's view is also my own, though this is not at all a coincidence, as I have been influenced by many things Shiller has written over the years:
Professor Fama is the father of the modern efficient-markets theory, which says financial prices efficiently incorporate all available information and are in that sense perfect. In contrast, I have argued that the theory makes little sense, except in fairly trivial ways. Of course, prices reflect available information. But they are far from perfect. Along with like-minded colleagues and former students, I emphasize the enormous role played in markets by human error, as documented in a now-established literature called behavioral finance …

Actually, I do not completely oppose the efficient-markets theory. I have been calling it a half-truth. If the theory said nothing more than that it is unlikely that the average amateur investor can get rich quickly by trading in the markets based on publicly available information, the theory would be spot on. I personally believe this, and in my own investing I have avoided trading too much, and have a high level of skepticism about investing tips.

But the theory is commonly thought, at least by enthusiasts, to imply much more. Notably, it has been argued that regular movements in the markets reflect a wisdom that transcends the best understanding of even the top professionals, and that it is hopeless for an ordinary mortal, even with a lifetime of work and preparation, to question pricing. Market prices are esteemed as if they were oracles.

This view grew to dominate much professional thinking in economics, and its implications are dangerous. It is a substantial reason for the economic crisis we have been stuck in for the past five years, for it led authorities in the United States and elsewhere to be complacent about asset mispricing, about growing leverage in financial markets and about the instability of the global system. In fact, markets are not perfect, and really need regulation, much more than Professor Fama’s theories would allow …

Friday, October 25, 2013

Economists begin to wonder -- are financial markets inherently unstable?

Justin Fox has a nice piece in the Harvard Business Review looking at how economics and finance have changed in the years since the onset of the crisis. He offers several conclusions, but one is that financial economists are now, much more than before, coming to accept the notion that financial markets are by their nature inherently unstable. Can you imagine that? From the article:

Before the late 1950s, research on finance at business schools was practical, anecdotal, and not all that influential. Then a few economists began trying to impose order on the field, and in the early 1960s computers arrived on college campuses, enabling an explosion of quantitative, systematic research. The efficient market hypothesis (EMH) was finance’s equivalent of rational expectations; it grew out of the commonsense observation that if you figured out how to reliably beat the market, eventually enough people would imitate you so as to change the market’s behavior and render your predictions invalid. This soon evolved into a conviction that financial market prices were in some fundamental sense correct. Coupled with the capital asset pricing model, which linked the riskiness of investments to their return, the EMH became a unified and quite powerful theory of how financial markets work.

From these origins sprang useful if imperfect tools, ranging from cost-of-capital formulas for businesses to the options-pricing models that came to dominate financial risk management. Finance scholars also helped spread the idea (initially unpopular but widely accepted by the 1990s) that more power for financial markets had to be good for the economy.

By the late 1970s, though, scholars began collecting evidence that didn’t fit this framework. Financial markets were far more volatile than economic events seemed to justify. The link between “beta”—the risk measure at the heart of the capital asset pricing model—and stock returns proved tenuous. Some reliable patterns in market behavior (the value stock effect and the momentum effect) did not disappear even after finance journals published paper after paper about them. After the stock market crash of 1987, serious questions were raised about both the information content of prices and the stability of the risk measures used in finance. Researchers studying individual investing behavior found systematic violations of the premise that humans make decisions in a rational, forward-looking way. Those studying professional investors found that incentives cause them to court tail risks (that is, to follow strategies that are likely to generate positive returns most years but occasionally blow up) and to herd with other professionals (because their performance is judged against the same benchmarks). Those looking at banks found that even well-run institutions could be wiped out by panics.

But all this ferment failed to produce a coherent new story about how financial markets work and how they affect the economy. In 2005 Raghuram Rajan came close, in a now-famous presentation at the Federal Reserve Bank of Kansas City’s annual Jackson Hole conference. Rajan, a longtime University of Chicago finance professor who was then serving a stint as director of research at the International Monetary Fund (he is now the head of India’s central bank), brought together several of the strands above in a warning that the world’s vastly expanded financial markets, though they brought many benefits, might be bringing huge risks as well.

Since the crisis, research has exploded along the lines Rajan tentatively explored. The dynamics of liquidity crises and “fire sales” of financial assets have been examined in depth, as have the links between such financial phenomena and economic trouble. In contrast to the situation in macroeconomics, where it’s mostly younger scholars pushing ahead, some of the most interesting work being published in finance journals is by well-established professors out to connect the dots they didn’t connect before the crisis. The most impressive example is probably Gary Gorton, of Yale, who used to have a sideline building risk models for AIG Financial Products, one of the institutions at the heart of the financial crisis, and has since 2009 written two acclaimed books and two dozen academic papers exploring financial crises. But he’s far from alone.

What is all this research teaching us? Mainly that financial markets are prone to instability. This instability is inherent in assessing an uncertain future, and isn’t necessarily a bad thing in itself. But when paired with lots of debt, it can lead to grave economic pain. That realization has generated many calls to reduce the amount of debt in the financial system. If financial institutions funded themselves with more equity and less debt, instead of the 30-to-1 debt-to-equity ratio that prevailed on Wall Street before the crisis and still does at some European banks, they would be far less sensitive to declines in asset values. For a variety of reasons, bank executives don’t like issuing stock; when faced with higher capital requirements, they tend to reduce debt, not increase equity. Therefore, to make banks safer without shrinking financial activity overall, regulators must force them to sell more shares. Anat Admati, of Stanford, and Martin Hellwig, of the Max Planck Institute for Research on Collective Goods, have made this case most publicly, with their book The Bankers’ New Clothes, but their views are widely shared among those who study finance. (Not unanimously, though: The Brunnermeier-Sannikov paper mentioned above concludes that leverage restrictions “may do more harm than good.”)

This is an example of what’s been called macroprudential regulation. Before the crisis, both Bernanke and his immediate predecessor, Alan Greenspan, argued that although financial bubbles can wreak economic havoc, reliably identifying them ahead of time is impossible—so the Fed shouldn’t try to prick them with monetary policy. The new reasoning, most closely identified with Jeremy Stein, a Harvard economist who joined the Federal Reserve Board last year, is that even without perfect foresight the Fed and other banking agencies can use their regulatory powers to restrain bubbles and mitigate their consequences. Other macroprudential policies include requiring banks to issue debt that automatically converts to equity in times of crisis; adjusting capital requirements to the credit cycle (demanding more capital when times are good and less when they’re tough); and subjecting highly leveraged nonbanks to the sort of scrutiny that banks receive. Also, when viewed through a macroprudential lens, past regulatory pressure on banks to reduce their exposure to local, idiosyncratic risks turns out to have increased systemic risk by causing banks all over the country and even the world to stock up on the same securities and enter into similar derivatives contracts.

A few finance scholars, most persistently Thomas Philippon, of New York University, have also been looking into whether there’s a point at which the financial sector is simply too big and too rich—when it stops fueling economic growth and starts weighing on it. Others are beginning to consider whether some limits on financial innovation might not actually leave markets healthier. New kinds of securities sometimes “owe their very existence to neglected risks,” Nicola Gennaioli, of Universitat Pompeu Fabra; Andrei Shleifer, of Harvard; and Robert Vishny, of the University of Chicago, concluded in one 2012 paper. Such “false substitutes...lead to financial instability and could reduce welfare, even without the effects of excessive leverage.”

I shouldn’t overstate the intellectual shift here. Most day-to-day work in academic finance continues to involve solving small puzzles and documenting small anomalies. And some finance scholars would put far more emphasis than I do on the role that government has played in unbalancing the financial sector with guarantees and bailouts through the years. But it is nonetheless striking how widely accepted in the field is the idea that financial markets have a tendency to become unhinged, and that this tendency has economic consequences. One simple indicator: The word “bubble” appeared in 33 articles in the flagship Journal of Finance from its founding, in 1946, through the end of 1987. It has made 36 appearances in the journal just since November 2012.
Too bad this shift didn't take place 20 years ago, or maybe 40 years ago.

Thursday, October 24, 2013

Shiller, Fama and all that...

I couldn't help but write a little in my latest Bloomberg column on the strange choice for this year's Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. For readers of the column, I wanted to give a few more links here to some things I've written before on the Efficient Markets Hypothesis, an idea that I think has caused an enormous waste of intellectual energy over the past 40 years or so. It has many versions, which are either 1) clearly false and so uninteresting, or 2) clearly and unsurprisingly true, and hence also uninteresting. That's my opinion in short; links to more extended discussions are below.

The aforementioned Prize certainly involves a weird juxtaposition of two names -- Robert Shiller and Eugene Fama -- that you wouldn't normally think of seeing together. The one, Shiller, is a great enthusiast for markets, but also a staunch realist who thinks markets can and do go awry in lots of ways, creating bubbles, wasting resources, etc. The other, Fama, is a great enthusiast for markets who thinks they never go wrong, ever, and have an almost magical capacity to steer investments wisely (I think, but it's pretty hard to know exactly what he believes). So wait -- I guess they are clearly linked after all by the label "market enthusiast." There the similarity ends.

I've written previously on a number of occasions about the dreadfully long and confused arguments over the Efficient Markets Hypothesis. See here for a general introduction, here for some very recent evidence that pretty much kills the idea in one fell swoop, and here for a discussion of the perversions of normal logic often used by defenders of the EMH to prop the idea up in the face of all evidence. If writing papers about the EMH were banned, I think finance would immediately take a step in the right direction. I just don't think it is interesting: saying that the "market is hard to beat" and is therefore "efficient" in some peculiar sense isn't saying much at all.
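Indeed, the "trivially true" version really is trivial, in the sense that you can see it in a few lines of simulation. Here's a throwaway sketch of my own (all parameters invented): on a price series that is genuinely unpredictable by construction -- a martingale -- a rule that trades on past prices earns nothing on average. That a market behaves this way tells you nothing about whether its prices are "right."

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, T, lookback = 2000, 250, 20

# A martingale price: log-price is a driftless random walk, so by
# construction nothing in its past predicts its next move.
steps = 0.01 * rng.standard_normal((n_paths, T))
logp = np.cumsum(steps, axis=1)

# A simple momentum rule: hold the asset tomorrow iff the trailing
# 20-day return is positive today.
signal = (logp[:, lookback:-1] - logp[:, :-lookback - 1]) > 0
next_step = steps[:, lookback + 1:]          # the return the rule captures
strategy_ret = np.where(signal, next_step, 0.0)

print(strategy_ret.mean())   # average edge: statistically zero
```

Swap the momentum rule for any other function of past prices and the answer is the same, by construction. "Hard to beat" is a property even of a coin-flip market; it implies nothing about prices embodying wisdom.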