Monday, April 29, 2013

How to misunderstand crises... with Rational Expectations

** UPDATE BELOW ** I've just about finished Gary Gorton's excellent book Misunderstanding Financial Crises. I think it's the most convincing book I've read so far linking the mechanisms of the recent crisis to crises in the past. In effect, he argues that the crisis was the direct result of the uncontrolled creation of money by the shadow banking sector, and ultimately played out as a classic bank run, no different from runs in the past, except that this one took place mostly out of public view because it didn't involve ordinary bank deposits. The new kind of money in this run consisted of instruments such as repo agreements and commercial paper, which played the role of money for financial institutions. In 2007-2008, when lenders lost confidence (for good reason) in the mortgage-backed collateral backing this money, they demanded that money back, and the financial system seized up.

The explanation is plausible and wholly natural. The argument is most convincing because Gorton does a masterful job of placing this bank run in the context of the long history of past runs, and also because Gorton, as an economist, places blame squarely on the economics profession (himself included) for being asleep at the wheel:
Think of economists and bank regulators looking out at the financial landscape prior to the financial crisis. What did they see? They did not see the possibility of a systemic crisis. Nor did they see how capital markets and the banking system had evolved in the last thirty years. They did not know of the existence of new financial instruments or the size of certain money markets. They did not know what "money" had become. They looked from a certain point of view, from a certain paradigm, and missed everything that was important... The blindness is astounding. That economists did not think such a crisis could happen in the United States was an intellectual failure.

It seems to me that there is a certain amount of denial among economists. I have noticed, in talking about the ideas in this book with my economist colleagues, that there is a fairly clear generational divide on this. To younger economists and graduate students, it is obvious that there was an intellectual failure. Some older economists are inclined to hem and haw, resorting to farfetched rebuttals. It is clear that this is a sensitive issue, as like banks no one wants to have to write down the value of their capital.
The book gets rather technical in places, delving into the details of day-to-day financing on Wall St., but always in a way that adds credibility to the main argument.

One other thing of interest. In a late chapter, discussing the spectacular failure of the rational expectations paradigm, Gorton quotes University of Chicago economist James Heckman, winner of the economics Nobel Prize (yes, I know that's not its actual name) in 2000, from an interview Heckman gave to John Cassidy in 2010. I hadn't come across the interview before. It's a fascinating read and gives some interesting perspective on the varied views held by economists within the Chicago department (Cassidy's words in italics):
What about the rational-expectations hypothesis, the other big theory associated with modern Chicago? How does that stack up now?

I could tell you a story about my friend and colleague Milton Friedman. In the nineteen-seventies, we were sitting in the Ph.D. oral examination of a Chicago economist who has gone on to make his mark in the world. His thesis was on rational expectations. After he’d left, Friedman turned to me and said, “Look, I think it is a good idea, but these guys have taken it way too far.”

It became a kind of tautology that had enormously powerful policy implications, in theory. But the fact is, it didn’t have any empirical content. When Tom Sargent, Lars Hansen, and others tried to test it using cross-equation restrictions, and so on, the data rejected the theories. There was a certain section of people who really got carried away. It became quite stifling.

What about Robert Lucas? He came up with a lot of these theories. Does he bear responsibility?

Well, Lucas is a very subtle person, and he is mainly concerned with theory. He doesn’t make a lot of empirical statements. I don’t think Bob got carried away, but some of his disciples did. It often happens. The further down the food chain you go, the more the zealots take over.

What about you? When rational expectations was sweeping economics, what was your reaction to it? I know you are primarily a micro guy, but what did you think?

What struck me was that we knew Keynesian theory was still alive in the banks and on Wall Street. Economists in those areas relied on Keynesian models to make short-run forecasts. It seemed strange to me that they would continue to do this if it had been theoretically proven that these models didn’t work.

What about the efficient-markets hypothesis? Did Chicago economists go too far in promoting that theory, too?

Some did. But there is a lot of diversity here. You can go office to office and get a different view.

[Heckman brought up the memoir of the late Fischer Black, one of the founders of the Black-Scholes option-pricing model, in which he says that financial markets tend to wander around, and don’t stick closely to economics fundamentals.]

[Black] was very close to the markets, and he had a feel for them, and he was very skeptical. And he was a Chicago economist. But there was an element of dogma in support of the efficient-market hypothesis. People like Raghu [Rajan] and Ned Gramlich [a former governor of the Federal Reserve, who died in 2007] were warning something was wrong, and they were ignored. There was sort of a culture of efficient markets—on Wall Street, in Washington, and in parts of academia, including Chicago.

What was the reaction here when the crisis struck?

Everybody was blindsided by the magnitude of what happened. But it wasn’t just here. The whole profession was blindsided. I don’t think Joe Stiglitz was forecasting a collapse in the mortgage market and large-scale banking collapses.

So, today, what survives of the Chicago School? What is left?

I think the tradition of incorporating theory into your economic thinking and confronting it with data—that is still very much alive. It might be in the study of wage inequality, or labor supply responses to taxes, or whatever. And the idea that people respond rationally to incentives is also still central. Nothing has invalidated that—on the contrary.

So, I think the underlying ideas of the Chicago School are still very powerful. The basis of the rocket is still intact. It is what I see as the booster stage—the rational-expectation hypothesis and the vulgar versions of the efficient-markets hypothesis that have run into trouble. They have taken a beating—no doubt about that. I think that what happened is that people got too far away from the data, and confronting ideas with data. That part of the Chicago tradition was neglected, and it was a strong part of the tradition.

When Bob Lucas was writing that the Great Depression was people taking extended vacations—refusing to take available jobs at low wages—there was another Chicago economist, Albert Rees, who was writing in the Chicago Journal saying, No, wait a minute. There is a lot of evidence that this is not true.

Milton Friedman—he was a macro theorist, but he was less driven by theory and by the desire to construct a single overarching theory than by attempting to answer empirical questions. Again, if you read his empirical books they are full of empirical data. That side of his legacy was neglected, I think.

When Friedman died, a couple of years ago, we had a symposium for the alumni devoted to the Friedman legacy. I was talking about the permanent income hypothesis; Lucas was talking about rational expectations. We have some bright alums. One woman got up and said, “Look at the evidence on 401k plans and how people misuse them, or don’t use them. Are you really saying that people look ahead and plan ahead rationally?” And Lucas said, “Yes, that’s what the theory of rational expectations says, and that’s part of Friedman’s legacy.” I said, “No, it isn’t. He was much more empirically minded than that.” People took one part of his legacy and forgot the rest. They moved too far away from the data.

** UPDATE **

On a closely related note, check out between 18:00 and about 20:25 of this video documentary on debt and its primary role in the crisis, link courtesy of Lars Syll. Robert Lucas asserts (around 19:40) that debt just doesn't matter because the level of debt and credit always "cancels out." He seems to find it strange that anyone could even think debt should matter, as if he's completely blind to the massive agony and social upheaval resulting from foreclosures and failed businesses around the US and the world. Lars suggests this is "unbelievable stupidity," and it is certainly unbelievable, but I think it is perhaps less stupidity than a kind of borderline autistic inability to distinguish between an extremely abstract mathematical model and actual economic reality. In Lucas's models, I suspect that debt and credit do always cancel out. That is one aspect of what makes those models quite useless for many purposes, and dangerous in the hands of anyone who takes them too seriously.

Friday, April 12, 2013

Rakoff and the SEC

The Economist has a short, interesting article looking at what has happened since federal judge Jed Rakoff, back in 2011, rejected a $285m settlement between the SEC and Citigroup. Rakoff was rightly irked that the SEC and Citigroup had reached a typical business-as-usual settlement in which the alleged offender pays a fine (part of the cost of doing business) yet admits no wrongdoing. How, Rakoff asked, does this serve the public interest, especially in deterring further crimes?

That ruling is still working its way through the appeals process, but as the article points out, several other judges have since taken inspiration from Rakoff's action and rejected similar cozy arrangements between the SEC and various alleged offenders. As for the ongoing saga of the Rakoff ruling itself, it seems that a final appeal decision may come within a month or so, and many parties have taken an interest. As the article notes,
No less than four amicus briefs (filings by someone not party to the case) have been received—and not just from the usual suspects. The authors were the Business Round Table (an organisation of chief executives); a coalition of 19 prominent law professors; the former head of the SEC, Harvey Pitt; and the Occupy Wall Street Movement. All the submissions ask searching questions about the agency’s performance in light of the financial crisis. It will be [new SEC head Mary Jo] White’s job to restore confidence in the SEC. The courts seem increasingly prepared to make that task harder.
It's only that last bit that I think the article gets completely wrong. If it is the job of Mary Jo White to restore confidence in the SEC, then Rakoff and the other judges are actually showing her the way. If anything, they are making it easier. But I'm not convinced this is really the primary purpose of her job....


Wednesday, April 10, 2013

Model abuse... stop it now!

I think we would benefit from better and more realistic models of systems in economics and finance, better models for imagining and assessing risks, and so on. But having a better model is one thing; using it well is another entirely. A commenter on my recent Bloomberg piece on agent-based models pointed me to this post, which looks at some examples of how risk models in finance (ones the author had helped develop) were repeatedly abused and distorted to suit the needs of higher-ups:
For a person like myself, who gets paid to “fix the model,” it’s tempting to do just that, to assume the role of the hero who is going to set everything right with a few brilliant ideas and some excellent training data.

Unfortunately, reality is staring me in the face, and it’s telling me that we don’t need more complicated models.

If I go to the trouble of fixing up a model, say by adding counterparty risk considerations, then I’m implicitly assuming the problem with the existing models is that they’re being used honestly but aren’t mathematically up to the task.

But this is far from the case – most of the really enormous failures of models are explained by people lying. Before I give three examples of “big models failing because someone is lying” phenomenon, let me add one more important thing.

Namely, if we replace okay models with more complicated models, as many people are suggesting we do, without first addressing the lying problem, it will only allow people to lie even more. This is because the complexity of a model itself is an obstacle to understanding its results, and more complex models allow more manipulation.

Tuesday, April 9, 2013

Poking the hive of DSGE (Distinctly Sensitive Group of Economists)



The other day when I wrote my recent post What you can learn from DSGE, I expected that maybe 6 or 8 people would read it. After all, only a fairly tiny fraction of people really want to read about the methodology of economic modelling, even if some of us insist on writing about it occasionally. So I was surprised that the post seems to have drawn considerable attention, especially from economists (apparently) writing on the forum econjobrumors. An economist I know told me about this site a while back, describing it as a hornet's nest of vicious criticism and name-calling.

Now I know this firsthand: the atmosphere there is truly dynamic and stochastic, choked with the smog of blogosphere-style vitriol (one commenter even suggested that I should be shot!). Some comments were amusing and rather telling. For example, writing anonymously, one reader commented:
I like how this blogger cites a GMU Ph.D. student as an example of someone considering alternatives to rational expectations. The author has no idea that such work has been going on for decades. He doesn't know s**t.
Actually, I never implied that alternatives had never been considered before. In any event, I guess the not-so-hidden message here is that grad students from GMU -- and not even in a Department of Economics, tsk! tsk! -- shouldn't be taken seriously. Maybe the writer was just irritated that the graduate student in question, Nathan Palmer, was a co-author of the paper, recently published in the American Economic Review, that I just wrote about in Bloomberg. The AER is a fairly prominent outlet, I believe, taken seriously in the profession. It seems that some real economists must agree with me that this work is pretty interesting.

Most of the other comments were typical of the blog-trashing genre, but one did hit on an interesting point that deserves some further comment:
...the implication that physicists or other natural scientists would never deploy the analytic equivalent of a representative agent when studying physical processes is not quite correct.
Mean Field Theory:
In physics and probability theory, mean field theory (MFT, also known as self-consistent field theory) studies the behavior of large and complex stochastic models by studying a simpler model. Such models consider a large number of small individuals who interact with each other. The effect of all the other individuals on any given individual is approximated by a single averaged effect, thus reducing a many-body problem to a one-body problem.
The ideas first appeared in physics in the work of Pierre Curie[1] and Pierre Weiss to describe phase transitions.[2] Approaches inspired by these ideas have seen applications in epidemic models,[3] queueing theory,[4] computer network performance and game theory.[5]
This is a good point, although I certainly never suggested that this technique is not used in physics. The mean field approach in physics is indeed the direct analogue of the representative agent technique. Theorists use it all the time, as it is simple and leads quickly to results that are sometimes reasonably correct (sometimes even exact). And sometimes not correct.

In the case of a ferromagnet such as iron, the method essentially assumes that each elementary magnetic unit in the material (for simplicity, think of it as the magnetic moment of a single atom that is itself like a tiny magnet) acts independently of every other. That is, each one responds to the overall mean field created by all the atoms throughout the entire material, rather than to, for example, its closest neighbors. In this approximation, the magnetic behavior of the whole is simply a scaled up version of that of the individual atoms. Interactions between nearby magnetic elements do not matter. All is very simple.

Build a model like this -- you'll find this in any introductory statistical mechanics book -- and you get a self-consistency condition for the bulk magnetization. Lo and behold, you find a sharp phase transition with temperature, much like what happens in real iron magnets. A piece of iron is non-magnetic above a certain critical temperature, and spontaneously becomes magnetic when cooled below that temperature. So, voila! The mean field method works, sometimes. But this is only the beginning of the story.
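For the curious, here is what that self-consistency calculation amounts to, as a minimal sketch of the standard textbook (Curie-Weiss) construction, in units where the transition temperature is Tc:

```python
# Mean-field (Curie-Weiss) magnetization: solve the self-consistency condition
# m = tanh(Tc * m / T) by simple iteration. A textbook sketch, not a model of
# any particular material; units chosen so the transition sits at Tc = 1.
import numpy as np

def mean_field_magnetization(T, Tc=1.0, m0=0.9, n_iter=5000):
    m = m0
    for _ in range(n_iter):
        m = np.tanh(Tc * m / T)
    return m

for T in [0.5, 0.8, 0.95, 1.05, 1.5]:
    print(f"T/Tc = {T:.2f}  ->  m = {mean_field_magnetization(T):.3f}")
# Below Tc the iteration settles on a nonzero m: spontaneous magnetization.
# Above Tc it decays to zero. That sharp change with temperature is the
# mean-field phase transition.
```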

Curie and Weiss wrote down theories like this in the early 1900s and this way of thinking remained in fashion into the 1950s. Famed Russian physicist Lev Landau developed a much more general theory of phase transitions based on the idea. But here's the kicker -- since the 1960s, i.e. for half a century now, we have known that this theory does not work in general, and that the mean field approximation often breaks down badly, because different parts of a material aren't statistically independent. Especially near the temperature of the phase transition, you get strong correlations between different magnetic moments in iron, so what one is doing strongly influences what others are likely to be doing. Assume statistical independence now and you get completely incorrect results. The mean field trick fails, and sometimes very dramatically. As a simple example, a string of magnetic elements in one dimension, held on a line, does not undergo any phase transition at all, in complete defiance of the mean field prediction.
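Here is a quick way to see that one-dimensional failure, using the exact zero-field result for a chain of Ising spins, whose spin-spin correlations fall off as tanh(J/T) raised to the power of the separation, at every temperature above zero:

```python
# 1D Ising chain in zero field: the exact spin-spin correlation is
# <s_i s_(i+r)> = tanh(J/T)**r, which decays exponentially for every T > 0.
# Mean-field theory (each spin feels the average of its z = 2 neighbours)
# predicts a transition at T = 2J; the exact chain has no transition at all.
import numpy as np

J = 1.0
print("mean-field prediction: transition at T =", 2.0 * J)

for T in [0.5, 1.0, 2.0, 3.0]:
    xi = -1.0 / np.log(np.tanh(J / T))     # correlation length, finite for all T > 0
    corr_20 = np.tanh(J / T) ** 20         # correlation across 20 lattice spacings
    print(f"T = {T:.1f}: correlation length = {xi:6.2f},  <s_0 s_20> = {corr_20:.3f}")
# Even at T = 0.5, well below the supposed mean-field transition temperature,
# correlations die off over a finite distance: the chain never orders spontaneously.
```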

An awful lot of the most interesting mathematical physics over the past half century has been devoted to overcoming this failure, and to learning how to go beyond the mean field approximation, to understand systems in which the correlations between parts are strong and important. I believe it will be crucial for economics to plunge into the same complex realm if any serious understanding is to be had of the most important events in finance and economics, which typically do involve strong influences acting between people. The very successful models that John Geanakoplos developed to predict mortgage prepayment rates only worked by including an important element of contagion -- people becoming more likely to prepay when many others prepay, presumably because they become more aware of the possibility and wisdom of doing so.

Unfortunately, I can't write more on this now as I am flying to Atlanta in a few minutes. But this is a topic that deserves a little further examination. For example, those power laws that econophysicists seem to find so fascinating? These also seem to really irritate those writing on econjobrumors. But what we know about power laws in physical systems is that they are often (though not always) the signature of strong correlations among the different elements of a system.... so they may indeed be trying to tell us something.

Sunday, April 7, 2013

Mortgage dynamics

My latest Bloomberg column should appear sometime Sunday night, 7 April. I've written about some fascinating work that explores the origins of the housing bubble and the financial crisis using lots of data on the buying/selling behaviour of more than 2 million people over the period in question. It essentially reconstructs the crisis in silico and tests which factors had the most influence as causes of the bubble -- leverage, interest rates and so on.

I think this is a hugely promising way of trying to answer such questions, and I wanted to point to one interesting angle in the history of this work: it came out of efforts on Wall St. to build better models of mortgage prepayments, using any technique that would work practically. The answer was detailed modelling of the actual actions of millions of individuals, backed up by lots of good data.

First, take a look at the figure below:



This figure shows the actual (solid line) rate of prepayment of a pool of mortgages originally issued in 1986. It also shows the predictions (dashed line) for this rate made by an agent-based model of mortgage prepayments developed by John Geanakoplos while working for two different Wall St. firms. There are two things to notice. First, obviously, the model works very well over the entire period up to 1999. The second, less obvious, is that the model works well even over a period to which it was not fitted. The sample of data used to build the model ran from 1986 through early 1996. The model continues to work well out of sample over the final three years shown, roughly 30% beyond the period of fitting. (The model did not work in subsequent years and had to be adjusted because of major changes in the market itself after 2000, especially new possibilities to refinance and take cash out of mortgages that were not there before.)

How was this model built? Almost all mortgages give the borrower the right in any month to repay the mortgage in its entirety. Traditionally, models aiming to predict how many borrowers would do so worked by trying to guess or develop some function describing the aggregate behavior of all the mortgage holders, reflecting ideas about individual behavior in some crude way in the aggregate. As Geanakoplos et al. put it:
The conventional model essentially reduced to estimating an equation with an assumed functional form for prepayment rate... Prepay(t) = F(age(t), seasonality(t), old rate – new rate(t), burnout(t), parameters), where old rate – new rate is meant to capture the benefit to refinancing at a given time t, and burnout is the summation of this incentive over past periods. Mortgage pools with large burnout tended to prepay more slowly, presumably because the most alert homeowners prepay first. ...

Note that the conventional prepayment model uses exogenously specified functional forms to describe aggregate behavior directly, even when the motivation for the functional forms, like burnout, is explicitly based on heterogeneous individuals.
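To make the contrast concrete, here is a rough sketch of what such an aggregate specification looks like; the functional form and numbers below are my own invention for illustration, not the actual model used at any firm:

```python
# A sketch of the conventional, aggregate-level approach: one exogenously chosen
# functional form for the pool's prepayment rate. Form and numbers are invented
# for illustration only.
import numpy as np

def prepay_rate(age_months, month_of_year, old_rate, new_rate, burnout,
                base=0.02, k_incentive=40.0, k_burnout=5.0):
    incentive = old_rate - new_rate                      # benefit of refinancing now
    seasonality = 1.0 + 0.2 * np.sin(2 * np.pi * (month_of_year - 3) / 12)
    seasoning = min(age_months / 30.0, 1.0)              # new pools ramp up slowly
    return base * seasoning * seasonality * np.exp(k_incentive * incentive
                                                   - k_burnout * burnout)

# Example: a 4-year-old pool in June, old rate 9%, current rate 7%, some burnout.
print(prepay_rate(age_months=48, month_of_year=6, old_rate=0.09, new_rate=0.07,
                  burnout=0.10))
# The modeller has to posit every one of these functional forms up front, and
# "burnout" (the accumulated past incentive) is tracked by hand, rather than
# emerging from the behavior of individual homeowners.
```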

There is of course nothing wrong with this. It's an attempt to do something practically useful with the data then available (which wasn't generally detailed at the level of individual loans). The contrasting approach seeks instead to start from the characteristics of individual homeowners and to model their behavior, as a population, as it evolves through time:
the new prepayment model... starts from the individual homeowner and in principle follows every single individual mortgage. It produces aggregate prepayment forecasts by simply adding up over all the individual agents. Each homeowner is assumed to be subject to a cost c of prepaying, which include some quantifiable costs such as closing costs, as well as less tangible costs like time, inconvenience, and psychological costs. Each homeowner is also subject to an alertness parameter a, which represents the probability the agent is paying attention each month. The agent is assumed aware of his cost and alertness, and subject to those limitations chooses his prepayment optimally to minimize the expected present value of his mortgage payments, given the expectations that are implied by the derivatives market about future interest rates.

Agent heterogeneity is a fact of nature. It shows up in the model as a distribution of costs and alertness, and turnover rates. Each agent is characterized by an ordered pair (c,a) of cost and alertness, and also a turnover rate t denoting the probability of selling the house. The distribution of these characteristics throughout the population is inferred by fitting the model to past prepayments. The effects of observable borrower characteristics can be incorporated in the model (when they become available) by allowing them to modify the cost, alertness, and turnover.
By way of analogy, this is essentially modelling the prepayment behavior of a population of homeowners as an ecologist might model, say, the biomass consumption of some population of insects. The idea would be to follow the density of insects as a function of their size, age and other features that influence how, when and how much they tend to consume. The more you model such features explicitly as a distribution of influential factors, the more likely your model is to take on aspects of the real population, and the more likely it is to make good predictions about the future, because it has captured real aspects of the causal factors at work in the past.

Models of this kind also capture in a more natural way, with no extra work, things that have to be put in by hand when working only at the aggregate level. In this mortgage example, this is true of the "burnout" -- the gradual lessening of prepayment rates over time (other things being equal):
... burnout is a natural consequence of the agent-based approach; there is no need to add it in afterwards. The agents with low costs and high alertness prepay faster, leaving the remaining pool with slower homeowners, automatically causing burnout. The same heterogeneity that explains why only part of the pool prepays in any month also explains why the rate of prepayment burns out over time.
One other thing worth noting is that those developing this model found that to fit the data well they had to include an effect of "contagion", i.e. the spread of behavior directly from one person to another. When prepayment rates go up, it appears they do so not solely because people have independently made optimal decisions to prepay. Fitting the data well demands an assumption that some people become aware of the benefit of prepaying because they have seen or heard about others who have done so.
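To see both of these features in action, here is a toy sketch of my own (invented parameters, and much simpler than the actual Geanakoplos model): burnout emerges purely from heterogeneity in cost and alertness, and a small contagion term makes borrowers more alert when many others have just prepaid.

```python
# Toy agent-based prepayment sketch. Parameters are invented for illustration;
# this is not the actual Geanakoplos et al. model (no turnover, no interest-rate
# dynamics), just the heterogeneity and contagion mechanisms in miniature.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
cost = rng.uniform(0.0, 0.04, N)       # heterogeneous cost of prepaying (fraction of balance)
alert = rng.uniform(0.05, 0.5, N)      # monthly probability of paying attention
active = np.ones(N, dtype=bool)        # loans not yet prepaid

incentive = 0.02                       # old rate minus new rate: fixed refinancing benefit
contagion = 0.5                        # extra alertness per unit of last month's prepay rate
last_rate = 0.0

for month in range(1, 25):
    # An active borrower prepays if she happens to be paying attention this month
    # and the benefit of refinancing exceeds her personal cost.
    attention = rng.random(N) < np.clip(alert + contagion * last_rate, 0.0, 1.0)
    prepay = active & attention & (cost < incentive)
    last_rate = prepay.sum() / active.sum()
    active &= ~prepay
    if month % 4 == 0:
        print(f"month {month:2d}: prepayment rate {100 * last_rate:5.2f}%, "
              f"{active.sum():6d} loans remaining")
# The prepayment rate falls over time ("burnout") with no extra assumption: the
# low-cost, alert borrowers leave first, and the remaining pool is slower. Raising
# the contagion parameter makes waves of prepayment bunch together in time.
```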

This is how it was possible, going back up to the figure above, to make accurate predictions of prepayment rates three years out of sample. In a sense, the lesson is that you do better if you really try to make contact with reality, modelling as many realistic details as you have access to. Mathematics alone won't perform miracles, but mathematics based on realistic dynamical factors, however crudely captured, can do some impressive things.

I suggest reading the original, fairly short paper, which was eventually published in the American Economic Review. That alone speaks to at least grudging respect on the part of the larger economics community for the promise of agent-based modelling. The paper takes this work on mortgage prepayments as a starting point and an inspiration, and tries to model the housing market in the Washington DC area in a similar way through the period of the housing bubble.

Friday, April 5, 2013

What you can learn from DSGE

                                       *** UPDATE BELOW ***

Anyone who has read much of this blog would expect my answer to the above question to be "NOTHING AT ALL!!!!!!!!!!!!!!!!!" It's true, I'm not a fan at all of Dynamic Stochastic General Equilibrium models, and I think they offer poor tools for exploring the behaviour of any economy. That said, I also think economists should be ready and willing to use any model whatsoever if they honestly believe it might give some real practical insight into how things work. I (grudgingly) suppose that DSGE models might sometimes fall into this category.

So that's what I want to explore here, and I do briefly below. But first a few words on what I find objectionable about DSGE models.

The first thing is that the agents in such models are generally assumed to be optimisers. They have a utility function and are assumed to maximize this utility by solving an optimization problem over a path in time. [I'm using as my reference the well-known Smets-Wouters model, as described in this European Central Bank document written, fittingly enough, by Smets and Wouters.] Personally, I find this a hugely implausible account of how any person or firm makes decisions when facing anything but the simplest problems. So it would seem like a miracle to me if the optimal behaviors predicted by the models turned out to resemble, even crudely, the behavior of real individuals or firms.
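To be concrete about what "solving an optimization problem over a path in time" means, here is a bare-bones sketch: a toy consumption-savings problem with log utility and made-up numbers, nothing like the full Smets-Wouters apparatus, solved numerically and checked against the textbook Euler equation.

```python
# Toy version of the intertemporal optimization a DSGE household is assumed to solve:
# choose consumption c_0..c_{T-1} to maximize sum_t beta**t * log(c_t) subject to a
# lifetime budget constraint. Purely illustrative; parameters are invented.
import numpy as np
from scipy.optimize import minimize

beta, r, W, T = 0.96, 0.03, 100.0, 10    # discount factor, interest rate, wealth, horizon
price = (1 + r) ** -np.arange(T)         # present value of a unit of consumption at date t

def neg_utility(c):
    return -np.sum(beta ** np.arange(T) * np.log(c))

budget = {"type": "eq", "fun": lambda c: W - price @ c}
res = minimize(neg_utility, x0=np.full(T, W / T),
               bounds=[(1e-6, None)] * T, constraints=[budget])

c = res.x
print("optimal consumption path:", np.round(c, 2))
print("growth factor c[t+1]/c[t]:", np.round(c[1:] / c[:-1], 4))
print("Euler equation beta*(1+r):", round(beta * (1 + r), 4))
# The numerical path reproduces the Euler condition c[t+1]/c[t] = beta*(1+r):
# the "agent" smooths consumption perfectly over the whole horizon.
```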

Having said that, if I try to be generous, I can suppose that maybe, just maybe, the actual behaviour of people, while it isn't optimizing anything, might in the aggregate come out to something that isn't too far away from the optimal behavior, at least in some cases. I would guess there must be armies of economists out there collecting data on just this question, comparing the actions of real individuals and firms to the optimal predictions of the models. Maybe it isn't always bad. If you twist my arm, I can accept that this way of treating decision making as optimization sometimes leads to interesting insights (for people facing very simple decisions, this is of course more likely).

The second thing I find bad about DSGE models is their use of the so-called representative agent. In the Smets-Wouters model, for example, there is essentially one representative consumer who makes decisions regarding labor and consumption, and then one representative firm which makes decisions on investment, etc. If you read the paper you will see it mention "a continuum of households" indexed by a continuous parameter, and this makes it seem at first like there is actually an infinite number of agents. Not really, as the index only refers to the kind of labor. Each agent makes decisions independently to optimize their utility; there are no interactions between the agents, no one can conduct a trade with another or influence their behavior, etc. So in essence there is really just one representative laborer and one representative firm, who interact with one another in the market. This I also find wholly unconvincing, as the real economy emerges out of the complex interactions of millions of agents doing widely different things. Modelling an economy like this seems like modelling the flow of a river by thinking about the behaviour of a single representative water molecule, bouncing along the river bed, rather than thinking about the interactions of the many molecules which create pressure, eddies, turbulence, waves and so on. It seems highly unlikely to be very instructive.

But again, let me be generous. Perhaps, in some amazing way, this unbelievably crude approximation might sometimes give you some shred of insight. Maybe you can get lucky and find that a collective outcome can be understood by simply averaging over the behaviors of the many individuals. In situations where people do make up their own minds, independently and by seeking their own information, this might work. Perhaps this is how people behave in response to their perceptions of the macroeconomy, although it seems to me that what they hear from others, what they read and see in the media, probably has a huge effect and so they don't act independently at all.

But maybe you can still learn something from this approximation, sometimes. Does anyone out there know if there is research exploring this matter of when or under what conditions the representative agent approximation is OK because people DO act independently? I'm sure this must exist and it would be interesting to know more about it. I guess the RBC crowd must have an extensive program studying the empirical limits to the applicability of this approximation? 

So, those are my two biggest reasons for finding it hard to believe the DSGE framework. To these I might add a disbelief that the agents in an economy do rapidly find their way to an equilibrium in which "production equals demand by households for consumption and investment and the government." We might stay well away from that point, and things might generally change so quickly that no equilibrium ever comes about. But let's ignore that. Maybe we're lucky and the equilibrium does come about.

So then, what can we learn from DSGE, and why this post? If I toss aside the worries I've voiced above, I'm willing to entertain the possibility that one might learn something from DSGE models. In particular, while browsing the web site of Nathan Palmer, a PhD student in the Department of Computational Social Science at George Mason University, I came across mention of two lines of work within the context of the DSGE formalism that I do think are interesting. I think more people should know about them.

First is work exploring the idea of "natural expectations." A nice example is this fairly recent paper by Andreas Fuster, David Laibson, and Brock Mendel. Most DSGE models, including the Smets-Wouters model, assume that the representative agents have rational expectations, i.e. they process information perfectly and have a wholly unbiased view of future possibilities. What this paper does is relax that assumption in a DSGE model, assuming instead that people have more realistic "natural" or "intuitive" expectations. Look at the empirical literature and you find lots of evidence that investors and people of all kinds tend to over-extrapolate recent trends in time series, expecting them to continue. This paper reviews some of this empirical literature, but then turns to its main purpose -- to build these trend-following expectations into a DSGE model.

As they note, a key failure of rational expectations DSGE models is that they struggle "to explain some of the most prominent facts we observe in macroeconomics, such as large swings in asset prices, in other words “bubbles”, as well as credit cycles, investment cycles, and other mechanisms that contribute to the length and severity of economic contractions." These kinds of things, in contrast, do emerge quite readily from a DSGE model once the expectations of the agents are made a little more realistic. From the paper:
.....we embed natural expectations in a simple dynamic macroeconomic model and compare the simulated properties of the model to the available empirical evidence. The model’s predictions match many patterns observed in macroeconomic and financial time series, such as high volatility of asset prices, predictable up‐and‐down cycles in equity returns, and a negative relationship between current consumption growth and future equity returns.   
That is interesting, and all from a DSGE model. Whether you believe it or not depends on what you think about the objections I voiced above about the components of DSGE models, but it is at least encouraging that this single step towards realism pays dividends in giving more plausible outcomes. This is a useful line of research.
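To get a feel for the mechanism stripped of the DSGE machinery, here is a toy sketch of my own (invented numbers, emphatically not the Fuster-Laibson-Mendel model): agents who overestimate the persistence of fundamentals generate excess price volatility and are predictably disappointed after booms.

```python
# Toy illustration of "natural expectations": agents who over-estimate the persistence
# of fundamentals produce excess price volatility and predictable forecast errors.
# My own sketch with made-up numbers, not the Fuster-Laibson-Mendel model.
import numpy as np

rng = np.random.default_rng(1)
T, rho, delta = 20_000, 0.5, 0.95        # sample length, true persistence, discount factor

# Fundamental (a dividend deviation from trend): mean-reverting AR(1).
f = np.zeros(T)
for t in range(1, T):
    f[t] = rho * f[t - 1] + rng.normal()

# Price of a claim on future fundamentals under two sets of beliefs about persistence.
p_rat = f / (1 - delta * rho)            # beliefs match the true process
p_nat = f / (1 - delta)                  # "natural" beliefs: shocks look permanent

print(f"price volatility: rational {p_rat.std():.1f} vs natural {p_nat.std():.1f}")

# Forecast errors: rational agents expect the price to mean-revert; natural agents
# expect it to stay put. Only the natural agents' errors are predictable from today's price.
err_rat = p_rat[1:] - rho * p_rat[:-1]
err_nat = p_nat[1:] - p_nat[:-1]
print("corr(price, next forecast error):",
      f"rational {np.corrcoef(p_rat[:-1], err_rat)[0, 1]:+.2f},",
      f"natural {np.corrcoef(p_nat[:-1], err_nat)[0, 1]:+.2f}")
# Roughly: natural-expectations prices swing about ten times more than fundamentals
# warrant, and booms are systematically followed by disappointment -- bubble-like
# overshooting from one small change in beliefs.
```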

Related work, equally interesting, is that of Paolo Gelain, Kevin J. Lansing and Caterina Mendicino, described in this working paper of the Federal Reserve Bank of San Francisco. This paper essentially does much the same thing as the one I just discussed, though in the context of the housing market. It uses a DSGE model with trend-following expectations for some of the agents to explore how a government might best try to keep housing bubbles in check: through changes in interest rates, through restrictions on leverage, i.e. how much a potential home buyer can borrow relative to the house value, or through restrictions on how much they can borrow relative to income. The latter seems to work best. As they summarize:
Standard DSGE models with fully-rational expectations have difficulty producing large swings in house prices and household debt that resemble the patterns observed in many industrial countries over the past decade. We show that the introduction of simple moving-average forecast rules for a subset of agents can significantly magnify the volatility and persistence of house prices and household debt relative to otherwise similar model with fully-rational expectations. We evaluate various policy actions that might be used to dampen the resulting excess volatility, including a direct response to house price growth or credit growth in the central bank’s interest rate rule, the imposition of a more restrictive loan-to-value ratio, and the use of a modified collateral constraint that takes into account the borrower’s wage income. Of these, we find that a debt-to-income type constraint is the most effective tool for dampening overall excess volatility in the model economy. 
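The intuition behind that last result can be seen with almost no machinery at all. Here is a toy comparison (hypothetical numbers throughout) of how the two kinds of borrowing limit respond to a house-price boom:

```python
# Why a debt-to-income style cap is less procyclical than a loan-to-value cap:
# when house prices rise, an LTV limit lets allowable debt rise in proportion,
# while an income-based limit does not. Toy numbers, purely illustrative.
house_price0, income = 200_000.0, 60_000.0
ltv_cap, dti_cap = 0.80, 4.5           # hypothetical regulatory limits

for price_change in [0.0, 0.25, 0.50]:         # a 0%, 25%, 50% house-price boom
    price = house_price0 * (1 + price_change)
    max_loan_ltv = ltv_cap * price             # scales one-for-one with the boom
    max_loan_dti = dti_cap * income            # anchored to income, not to prices
    print(f"price +{price_change:4.0%}:  LTV cap allows {max_loan_ltv:9,.0f}, "
          f"DTI cap allows {max_loan_dti:9,.0f}")
# Under the LTV rule, rising prices expand credit, which can feed back into prices;
# the income-based rule breaks that loop, which is roughly why it damps volatility
# in their model economy.
```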
Again, this is really interesting stuff, worthwhile research, economics that is moving, to my mind, in the right direction, showing us what we should expect to be possible in an economy once we take the realistic and highly heterogeneous behaviour of real people into account.

So there. I've said some not so nasty things about DSGE models! Now I think I need a stiff drink.

*** UPDATE ***

One other thing to mention. I'm happy to see this kind of work, and I applaud those doing it. But I do seriously doubt whether embedding the idea of trend-following inside a DSGE model does anything to teach us about why markets often undergo bubble-like phenomena and have quite strong fluctuations in general. Does the theoretical framework add anything?

Imagine someone said the following to you:
 "Lots of people, especially in financial markets and the housing market, are prone to speculating and buying in the hope of making a profit when prices go up. This becomes more likely if people have recently seen prices rising, and their friends making profits. This situation  can lead to herding type behavior where many people act similarly and create positive feedbacks and asset bubbles, which eventually crash back to reality. The problem is generally made worse, for obvious reasons, if people can borrow very easily to leverage their investment..." 
I think most people would say "yes, of course." I suspect that many economists would also. This explanation, couched in words, is for me every bit as convincing as the similar dynamic wrapped up in the framework of DSGE. Indeed, it is even more convincing as it doesn't try to jump awkwardly through a series of bizarre methodological hoops along the way. In this sense, DSGE seems more like a straitjacket than anything else. I can't see how it adds anything to the plausibility of a story.

So, I guess, sorry for the title of this post. Should have been "What you can learn from DSGE: things you would be much better off learning elsewhere."