Friday, July 29, 2011

Democracy Crisis

I've long felt that while we need a better science of markets and economics in general, one that embraces ideas from other areas of modern science and brings economics up to date, this will never be enough. The recent crisis wasn't just a puzzling episode, a strange and unpredictable financial hurricane; it emerged directly out of the deep influence of financial industry money on governance. There's no solution to financial stability without good governance.

It's a depressing read, but this long essay by Numerian paints a rather bleak -- and all too realistic -- picture of US democracy. Plus some perspective on the debt-ceiling crisis (which I don't pretend to understand in any detail):
"Financial Armageddon may not ensue from this downgrade – the market may just have to get used to the benchmark “risk free rate” being less than stellar, because there is no alternative in market size and liquidity to US Treasuries. Still, it will be a landmark event – an exclamation point to the closing out of the American Century."  (h/t The Agonist)

Leverage Control -- A Subtle Story

I mentioned recently some work (in progress) by Stefan Thurner and colleagues exploring how leverage influences stability (price volatility) in a competitive, speculative market. Thurner spoke about this at a meeting on Tipping Points in Durham, UK. What I find most appealing about this work is that it explores this question with a model that is rich enough to exhibit many of the basic features we see in speculative markets -- competition between hedge funds and other investment firms to attract investors' funds, the use of leverage to amplify potential gains, the monitoring of leverage by banks who lend to the investment firms, occasional abrupt crashes and bankruptcies, etc.

Is it a perfect model? Of course not, there is no such thing; models are tools for thinking. But it is arguably better than anything else we currently have for running "policy experiments" to test what might happen in such a market if regulators take this or that step -- establishing tight limits to allowed leverage, for example. 

Stefan kindly sent me the slides from his talk, a few of which I'd like to mention here. As I said, this is work in progress, so these are preliminary results. They're interesting because they suggest that avoiding dangerous market instability through leverage limits comes with costs, and that our intuition isn't at all a reliable guide -- we need these kinds of models in which we can discover surprising outcomes (before we discover them in reality).

I won't give a detailed description of the model; it can be found in an early draft of the paper available here. Thurner and colleagues have been working to improve the model over several years, and it now reproduces a number of realistic market behaviors quite naturally. Thurner summarized these as follows:

 
In other words, the hedge funds act to eliminate mis-pricings (taking volatility out of the market), and profit by doing so. Funds have to be aggressive to survive in the face of stiff competition, but suffer if they get too large. Risks shorten the lifetime of a fund. Overall, the model also reproduces the right statistical fluctuations in the market.
As I discussed in my earlier post on this work, competition between hedge funds leads naturally to increasing leverage and drives the market to have a fat-tailed distribution of returns; it becomes subject (like real markets) to large price fluctuations as a matter of course, driven by its own internal dynamics (no external impacts required). In this condition, the market is highly prone to catastrophic crashes triggered by nothing but small price fluctuations linked to noise traders (unsophisticated investors buying and selling more or less at random). The figure below shows a typical example, plotting the wealth of various funds versus time, with a dramatic crash that affects all funds at once (different colors for different funds):


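To make the mechanics concrete, here is a heavily stripped-down, single-fund sketch in Python of this general kind of model. Everything in it -- the parameter values, the single value investor, the crude market-clearing rule -- is my own simplification for illustration, not the actual model of Thurner and colleagues: a fund takes a dollar position proportional to the perceived mispricing, capped by a leverage limit, while noise traders supply randomly fluctuating demand.

```python
import numpy as np

rng = np.random.default_rng(0)

V, N = 1.0, 1000.0        # perceived fundamental value, total shares outstanding
lam_max = 5.0             # maximum allowed leverage -- the policy knob
beta, W = 10.0, 50.0      # fund aggressiveness and (fixed) fund wealth
rho, sigma = 0.99, 0.035  # persistence and noise of noise-trader demand

log_d = np.log(N * V)     # noise traders' log dollar demand
p = V
prices = []
for t in range(5000):
    # noise traders: mean-reverting random walk in log dollar demand
    log_d = rho * log_d + (1 - rho) * np.log(N * V) + rng.normal(0, sigma)
    m = V - p                                    # current mispricing signal
    # fund's dollar position: proportional to mispricing, capped at lam_max * W
    fund_dollars = min(beta * max(m, 0.0), lam_max) * W
    p = (np.exp(log_d) + fund_dollars) / N       # crude market clearing
    prices.append(p)

r = np.diff(np.log(prices))
kurt = np.mean((r - r.mean())**4) / r.std()**4 - 3.0
print("excess kurtosis of returns:", kurt)       # positive values indicate fat tails
```

Even a toy like this lets you turn the `lam_max` dial and watch how return statistics respond, which is the spirit of the policy experiments described below; the full model adds many competing funds, wealth dynamics, investor flows, lending banks and defaults.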
Now, a natural question is -- could these kinds of events be avoided with proper regulations? One idea would be to restrict the amount of leverage allowed with the aim of keeping the market returns in a more Gaussian regime, i.e. eliminating fat tails. People could probably argue for decades about whether this would work or not without coming to an answer; this model makes it possible to do an experiment to find out, which is what Thurner and colleagues have done.

Two figures (below) show some of the results, and require some explanation. The different colors correspond to different possible regulatory regimes, and show how behavior changes with the maximum allowed hedge fund leverage: BLUE (no other regulations), PALE GREEN (regulations akin to Basel I and II, in which banks loaning to hedge funds are restricted by capital requirements) and RED (a situation in which banks monitor hedge funds and reduce a hedge fund's allowed leverage below the maximum when the volatility in its assets grows; a kind of adaptive leverage control).

First, consider a figure showing how the action of hedge funds, and their use of leverage, actually benefits the market -- making it more efficient (in one sense).
The figure shows the mean square price volatility versus allowed leverage. Increasing leverage lets the hedge funds pounce on opportunities more aggressively and wipe out mis-pricings more effectively. Die-hard free market people should love this, as it shows that the effect is strongest in the absence of any regulation. The regulated markets require higher leverage to get the same reduction in volatility.

But this isn't the whole story. Now consider another figure for the probability (per unit time) of a failure of one of the hedge funds:
Here the pure free market solution isn't so good, as this probability rises rapidly with increasing leverage. There is a relatively low value of leverage (around 5 in the model's units) where the market benefits of leverage have already been realized, and more leverage only leads to more failures (because it takes the market into the regime of fat-tailed returns; this can happen even if the mean square volatility remains small).
The regulated markets in this case perform somewhat better -- the regulations reduce the number of failures, and the cost is marginally increased volatility.

A surprising outcome is that these same regulations, in the regime of very high leverage, actually do worse than no regulations at all -- they lead to higher market volatility AND more failures as well, a truly perverse regime.

All in all, then, this model offers a sobering perspective on how regulators might go about trying to avoid crashes linked to fat tails by limiting leverage. Some limitation clearly seems to be good. But too much can be bad, especially when coupled with other market regulations. You can't test out one idea in isolation, because they interact in surprising ways.
I'll probably have some further comments on this in the near future. It's a work in progress, as is my understanding of it -- and of what it means for the bigger picture.

Wednesday, July 27, 2011

Discounting Details

This post offers some further details in connection with an essay I've written for Bloomberg Views. It will be published tomorrow, 28 July 2011. The topic is economic discounting, which I've posted on before. I naturally didn't get into any mathematical details in the Bloomberg essay, but some readers may find that looking at a little of the mathematics may help to clarify the key point of the argument. So here goes (in a sketch; I encourage everyone to read the original paper):

Suppose that the true discount rate for next year is r1, for the year after is r2, and so on, the rate for the ith year being ri. No one knows what these will be; the rates will fluctuate from year to year. To calculate the total discount factor over a string of N years, you should multiply the individual factors associated with each year as follows,

D(N) = exp(-r1 δt) × exp(-r2 δt) × ... × exp(-rN δt) = exp[ -(r1 + r2 + ... + rN) δt ]
where δt = 1 year.

But because the future isn't known, Farmer and Geanakoplos point out, determining the correct discount factor to use over the coming N years means averaging over all possible future paths, i.e. all possible sequences of values r1, r2, ... up through rN. Hence, we need to calculate the value of an "effective" discount factor given by the formula,

Deff(T) = Σ (over all possible rate paths) exp[ -(r1 + r2 + ... + rN) δt ]
[Note: This sum, of course, should be divided by the total number of paths to give the average, effective discount.]

Now, it is tempting to think that when you go through the details of calculating this average, summing up the contributions for all possible paths, and dividing by the number of paths, you will find some kind of simple result in which Deff(T) will be equal to a single exponential factor with an average discount rate for the N years, ravg. In other words, you might think -- and most people's intuition would tend this way -- that you would find an equation such as

Deff(T) = exp( -ravg T ), where T = N δt
That is, the effective discount rate over N years takes an exponential form with some constant ravg.

Seems sensible, but turns out to be totally wrong. If you demand the equality reflected in the previous equation, then, to make it work as T gets large, it turns out that in many cases ravg will not be a constant in time, but will take on smaller and smaller values as T gets larger. This is what Farmer and Geanakoplos have shown using computer simulations to do the calculation. They used a so-called geometric random walk for the fluctuating rate r, this being the most common mathematical process used in finance to model interest rate fluctuations (i.e. this isn't a crazy or weird model, but a highly plausible one).
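A small Monte Carlo in Python makes the effect easy to see. This is my own illustrative sketch, not the authors' code, and the parameter values (a 4% starting rate, a 0.15 step volatility) are made up: the log of the rate follows a random walk, and the effective discount factor is the average of exp(-cumulative discounting) across many simulated futures.

```python
import numpy as np

rng = np.random.default_rng(1)

n_paths, horizon = 20000, 500    # number of simulated futures, years ahead
r0, sigma = 0.04, 0.15           # starting rate; volatility of log-rate steps

log_r = np.full(n_paths, np.log(r0))
cum = np.zeros(n_paths)          # cumulative r1 + r2 + ... + rt on each path
D_eff = np.empty(horizon)
for t in range(horizon):
    log_r += rng.normal(0.0, sigma, n_paths)   # geometric random walk in r
    cum += np.exp(log_r)                       # delta-t = 1 year
    D_eff[t] = np.exp(-cum).mean()             # average discount over paths

# if D_eff were a pure exponential, this implied rate would be constant;
# instead it falls steadily as the horizon grows
r_implied = -np.log(D_eff) / np.arange(1, horizon + 1)
for T in (10, 100, 500):
    print(f"implied rate at {T:3d} years: {r_implied[T - 1]:.4f}")
```

The intuition: the average is dominated by the lucky paths on which rates stay low, and those paths matter more and more at long horizons, dragging the implied rate down.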

Their simulations show that, as a result, the effective discount factor Deff(T) doesn't have an exponential form at all, but rather a very different "power law" form,

Deff(T) ≈ α / T^β
where α and β are constants. This falls off with increasing T much more slowly than an exponential. In other words, it makes discounting much weaker than the incorrect exponential form would suggest it should be.

In the earlier post I discussed some of the implications of this result, including a table showing just how rapidly, after 200 years or so, the exponential and power law forms come to give wildly different results, with the exponential discounting the value of the future millions or billions of times too strongly.
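To get a rough feel for the size of that divergence, here is a back-of-envelope comparison with purely illustrative numbers of my own choosing (a 4% exponential rate against a simple 1/T power law; neither is fitted to anything in the paper):

```python
import math

r = 0.04                        # illustrative exponential discount rate
for T in (50, 200, 500):        # horizon in years
    expo = math.exp(-r * T)     # standard exponential discounting
    power = 1.0 / T             # an illustrative power-law weight
    print(f"T={T:3d}: exponential={expo:.2e}  power-law={power:.2e}  "
          f"ratio={power / expo:.1e}")
```

Even with these toy numbers, the ratio between the two weights grows without bound as the horizon lengthens, which is why the choice of functional form matters so much for long-term cost-benefit analysis.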

It's rather frightening that a subtle error could make us mis-value the future so profoundly, but this indeed seems to be what we are currently doing. The incorrect exponential form is in wide and standard use by economists doing cost-benefit analyses of everything.

Tuesday, July 26, 2011

Banking Corruption Update

Just in case you're not sufficiently demoralized by the failure of authorities to punish almost anyone in the financial industry for their actions leading up to the crisis, read Joe Nocera on the recent fine handed down by the Federal Reserve to Wells Fargo Bank. The Fed fined the bank $85 million (much less than 1% of its revenues for the last quarter alone). No one will be prosecuted, this despite apparent evidence that multiple bank employees "falsified income information on mortgage applications."

Nocera's comments hit the nail on the head:
What’s more, this practice appears to have been quite widespread — “fostered,” as the Fed puts it, “by Wells Fargo Financial’s incentive compensation and sales quota programs.” Matthew R. Lee, the executive director of Inner City Press/Community on the Move and Fair Finance Watch, spent years bringing Wells’ subprime abuses to the attention of the Federal Reserve. “The way the compensation was designed ensured that abuses would take place,” he says. “It was a predatory system.”

These are exactly the kind of loans — built on illegal practices — that gave us the financial crisis. Brokers working for subprime mortgage companies routinely doctored incomes to hand out subprime loans they knew the borrowers could never repay — and then, after taking their fat fees, shoveled the loans to Wall Street, which bundled them into subprime securities. This was the kindling that lit the inferno of September 2008. So again, I ask: Why is there no criminal investigation into what went on at Wells Fargo Financial?

I'm not sure whether Justice Department officials have capitulated entirely to economists' arguments about financial incentives and the inefficiencies of good old fashioned legal punishments (i.e. jail time), but I can't see the culture on Wall St. changing one tiny bit until the law shows some real teeth. The dearth of prosecutions suggests that few people in power really want it to change. 

Various organizations such as the World Audit Organization and the Internet Center for Corruption Research try to estimate the level of corruption in different nations, and have been doing so since 1995. It's hardly an exact science and depends a lot on perceptions. But I'm not surprised that in these estimates the United States has fallen from 14th globally in the year 2000 to 22nd in 2010.

Monday, July 25, 2011

Tax Codes (Yawn!) for Financial Stability?

No one (I hope) enjoys reading about tax codes, but Simon Johnson makes a very good point: they may be very useful in helping to stabilize markets.

His reasoning is simple. Any number of studies show that, other things being equal, the use of more leverage by banks, hedge funds and other investors creates more instability -- it can amplify small market fluctuations into far larger market upheavals. So stability would be improved by limiting leverage (although how much to limit it is a matter of some subtlety). You can limit leverage with laws, or with incentives. Johnson is thinking about incentives, particularly through the tax code (in the US). Currently, if a hedge fund seeks leverage by borrowing money, it pays interest on that loan, and that interest can be deducted from its taxes. In contrast, if the same fund raises money by selling shares of its stock, it pays dividends on those shares, and those dividend payments are NOT deductible under US tax law. Hence, investing firms have every incentive to raise money for leverage by borrowing, rather than by selling stock.
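The asymmetry is easy to see in a toy calculation. All the numbers below are made up for illustration (they come from me, not from Johnson's testimony): a fund raising $100 million at a 7% interest rate or 7% dividend yield, under a 35% corporate tax rate.

```python
raised = 100e6   # capital raised, in dollars (illustrative)
rate = 0.07      # interest rate on debt, or dividend yield on new shares
tax = 0.35      # corporate tax rate (illustrative)

interest = raised * rate
debt_cost = interest * (1 - tax)   # interest is deductible, so taxes offset 35% of it

dividends = raised * rate
equity_cost = dividends            # dividends are NOT deductible

print(f"after-tax cost of borrowing:      ${debt_cost:,.0f} per year")
print(f"after-tax cost of issuing shares: ${equity_cost:,.0f} per year")
```

With these numbers, borrowing costs $4.55 million a year after tax against $7 million for equity -- the tax code alone makes leverage the cheaper way to raise the same capital, which is exactly the incentive Johnson wants to remove.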

An elimination of this tax difference may be one way to attempt to rein in the use of leverage and keep it within the bounds of safety. For those interested in the gritty details, see Johnson's testimony at a recent meeting on (double Yawn!) Tax Reform and the Tax Treatment of Debt and Equity.

The Frederic Mishkin Prize

UPDATE BELOW

Anyone who enjoyed the film Inside Job about the financial crisis will surely enjoy a new prize instituted by the blog Naked Capitalism: The Frederic Mishkin Iceland Prize for Intellectual Integrity.

For those who haven't seen the film, it is very much worth watching, especially as it raises serious questions about prominent economists (Mishkin, Glenn Hubbard, Martin Feldstein and others) who have been writing supposedly objective economic papers attesting to the wonders of financial deregulation without ever disclosing that they had been paid quite handsomely to write these reports by banks and other interested parties.

Mishkin? In 2006, he co-authored a report entitled Financial Stability in Iceland which concluded that Iceland's banks were "essentially sound" despite growing concerns that deregulation there had triggered a huge and unsustainable housing bubble. Nowhere in the report does Mishkin disclose how much the Icelandic Chamber of Commerce paid him to write it: $124,000.

Mishkin is a professor of economics at Columbia University. I'm not sure what the university thinks about this glowing example of intellectual integrity, but I can't see how it is anything other than academic and scientific fraud.

UPDATE: I just came across this article, which is encouraging, from the Columbia University news service (I believe). As a direct result of the film The Inside Job, Columbia University has now initiated a review of its policy on financial disclosures and conflicts of interest, aiming to come up with some stronger code of conduct.

See also this online debate arranged by The Economist on whether the profession needs a formal code of conduct. Most participants seem in favor of more or less full disclosure of potential financial conflicts of interest, although many of the contributions follow the irritating habit of conceptualizing the world of ideas as "the marketplace of ideas." Please, we're not buying and selling ideas; we're thinking about them, debating them, and, we hope, changing them.

One highlight from the category of "Impossible that anyone but an economist could have said it" comes from Lant Pritchett of Harvard University, speaking of policies in place before the crisis and the economists who pushed them:
While some people were right and some people wrong about the consequences of policies, I have yet to see any evidence any economist acted on anything other than their best reading of the evidence, much less that their views were biased by particular financial ties, much less that a "code of conduct" would have altered this behaviour which would have in turn affected the course of events. The only evaluation I have seen suggests those who got it "wrong" were no more likely to have had "ties" to the "financial industry" than those who got it right.

Friday, July 22, 2011

The Wisdom (???) of Crowds

The notion that markets aggregate the opinions of many and thereby make superior estimations of value has a very long history. It's certainly at the root of the infamous Efficient Markets Hypothesis, which claims that markets gather and process information so efficiently that price movements have no predictable patterns and prices of financial instruments always reflect something very close to the true fundamental value of the assets in question. More recently, the wisdom of crowds has been the driving force behind prediction markets. In one way or another, this notion lurks behind the slippery and insidious idea that "markets know best" and that pretty much everything from water distribution to higher education should be organized as a market.

But in his bestselling book on the topic, James Surowiecki was somewhat careful at the outset to acknowledge that the idea only works in some rather special situations (not that readers paid much attention). A crowd estimating the number of marbles in a jar or the correct price of a stock will only get superior results -- superior in accuracy to the guess of any one individual, and even of experts -- if the people are on average unbiased in their estimates; it won't work if they tend systematically to estimate too high or low. Moreover, the people have to make their estimates independently of one another. Any kind of social influence, one person copying or even being slightly swayed by the actions of another, also spoils the result. Wise crowds very quickly become dumb herds.

For an idea of such broad influence, it's surprising how few experiments have been done to probe in detail around the boundaries where wise crowds become unwise, how it happens and which are the key effects. This has now been rectified by an impressive set of experiments carried out by Jan Lorenz and colleagues from ETH-Zurich, and published recently in PNAS. Their idea was to use a crowd of 144 student volunteers and have them perform estimation experiments in a range of conditions. They gave the participants monetary incentives to estimate accurately, and chose questions (on things like geography and crime statistics) for which the true answers are known. Then, in some trials, participants made their estimates on their own, without having any idea about the estimations of others, and in other trials, they were either informed in complete detail of what others had estimated, or given only the average of the others' estimates. The idea was to compare how well the crowd made estimates in the absence and presence of social influence.

What the results show is that social influence totally undermines the wisdom of crowds effect, and does so in three specific ways. It's interesting to consider these in some detail to see just how this whole "wise crowd" illusion falls apart in the face of a little social influence:

1. In what the researchers call the “social influence effect,” the mere act of listening to the judgements of others led to a marked decrease in the diversity of the participants' estimates. That is, the estimates of the various people become more like one another -- people adjust their views to fit more closely with others -- but this does very little to improve the collective accuracy of the crowd. In effect, people think they are sharing information, but little information actually gets shared. The figure below illustrates what happens: in successive trials, a measure of the group's opinion diversity decreases dramatically if people hear either full or average information on the estimates of others, while the collective error decreases only marginally.


2. A second and even more interesting effect is what the researchers call the “range reduction effect.” Imagine that a government tries to use the wisdom of crowds, assembling a group and surveying their opinions, hoping to get a range of views and some idea of how much consensus there is on some topic. You would hope that, if the crowd's estimate was NOT accurate, this lack of accuracy would be reflected in a wide range of estimates from the individuals -- the wide range would signal a lack of unanimity and confidence. A truly bad outcome would be a crowd that at once gives a very inaccurate estimate and does so with a narrow range of opinion differences, signalling apparent strong certainty in the result. But this is precisely what the research found -- in the social influence conditions, the individuals' estimates didn't "bracket" the true answer, with some being higher and others lower. Rather, the group narrowed the range of their views so strongly that the truth tended to reside outside of the group's range -- they were both inaccurate and apparently confident at the same time.

3. Finally, and worse still, is the “confidence effect”. The researchers interviewed the participants in the different conditions, asking them how confident they were in the accuracy of the group's final consensus estimate. Social influence, while it didn't make the crowd's estimate any more accurate, did fill the participants with strong confidence and belief in improved accuracy. Think 2005: the housing bubble, mortgages handed out with no income and no assets, etc. Hard as it is now to imagine, most people believed the market could not fail to go up further -- and they believed it in large part because they saw others apparently believing the same thing.
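The first of these effects is simple enough to sketch in a few lines of Python. The toy model below is my own, not the authors' experimental setup: each of 144 simulated "participants" starts with a noisy private guess at a true value, then repeatedly nudges their estimate toward the group mean (the social-influence step). Watch how the diversity of opinion collapses round after round while the collective error barely moves.

```python
import numpy as np

rng = np.random.default_rng(2)

truth, n = 1000.0, 144        # true value; 144 participants, as in the study
# private guesses: multiplicative noise, roughly unbiased in the median
est = truth * rng.lognormal(0.0, 0.5, n)
init_div = est.std()

alpha = 0.4                   # how far people move toward the group mean
for rnd in range(6):
    div, err = est.std(), abs(np.median(est) - truth)
    print(f"round {rnd}: diversity = {div:7.1f}   collective error = {err:6.1f}")
    est = (1 - alpha) * est + alpha * est.mean()   # social-influence step
```

Because each step is just an affine pull toward the (unchanging) group mean, diversity shrinks by a factor (1 - alpha) every round while the consensus stays anchored to wherever the initial noisy mean happened to land -- convergence without any gain in accuracy.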

Altogether, this careful study points more toward the idiocy of crowds than their wisdom. Social influence is hard to eradicate. Even in markets, supposedly driven by anonymous individuals making their own estimates, lots of people are reading the newspapers and news feeds and listening to analysts, and, even when not, looking to price movements and using them to infer whether someone else may know something they don't. In these experiments, social influence makes everyone think and do much the same thing, makes it likely that the consensus view aims well wide of the actual truth, and, perversely, makes everyone involved increasingly confident that the group knows what it's doing. Some kind of Wisdom.