Tuesday, August 30, 2011

Algorithms are smarter than people

On the topic of algorithmic trading, I recently posted on some evidence documenting the benefits it brings to markets -- more liquidity, lower spreads and trading costs, etc. On a related topic, Ole Rogeberg at Freakynomics has a nice post reviewing some of the evidence that automated decision tools actually make better decisions than real people when confronting many different kinds of problems. As he notes,
There’s a host of studies showing that human judgment is poor at synthesizing and weighting a large number of different types of evidence, and that simple, statistical models can outperform humans on tasks such as predicting recidivism, making clinical judgments (psychiatry and medicine), predicting divorce, predicting future academic success, etc. (for an entrypoint to this literature, see here for a blogpost I found that has some good quotes from J.D. Trout and Michael Bishop).

I guess the point is that algorithmic trading can be good or bad depending on the algorithm – and that the danger it brings is more if the ecology of trading algorithms active in a market is of a kind that could create cascading ripples destabilizing the market: One set of algorithms lowering the price of a set of stocks, triggering another set of algorithms to sell these stocks to avoid loss, triggering another set of… and so on.
This is precisely the point I've made before about the dangers of algorithms -- it's not one algorithm that might blow things up, but potentially explosive webs of feedback running between many.
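To make the idea concrete, here is a minimal toy sketch (in Python) of how such a cascade of feedback can work: a crowd of hypothetical stop-loss algorithms, each selling when the price falls below its own threshold, with every forced sale pushing the price down a little further and triggering others. All the numbers are invented for illustration; this is not a model of any real market or strategy.

import random

# Toy cascade: threshold-based "stop-loss algorithms" that each sell once the
# price falls below their trigger level, with each sale pushing the price lower.
random.seed(1)
price = 100.0
thresholds = [random.uniform(90, 99.5) for _ in range(50)]  # one per algorithm
impact_per_sale = 0.4   # assumed price impact of a single forced sale

price -= 1.0            # a small initial shock
triggered = set()
while True:
    newly = [i for i, t in enumerate(thresholds) if i not in triggered and price < t]
    if not newly:
        break
    triggered.update(newly)
    price -= impact_per_sale * len(newly)   # forced selling deepens the fall

print(f"final price {price:.1f}; {len(triggered)} of {len(thresholds)} algorithms forced to sell")

With the thresholds packed densely enough, a one-point shock is enough to topple the whole population; thin out the thresholds or shrink the price impact and the same shock fizzles out after a round or two.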

But I think the superior performance of algorithms at making decisions is itself quite striking and not generally recognized. The article to which Rogeberg links makes the following all-too-plausible remark:
Training of large numbers of experts by universities has probably had the perverse effect of increasing the number of people running around making highly confident but wrong judgements. But the tendency to not notice our errors and to place excessive confidence in our subjective judgements is something that all humans suffer from to varying degrees.
One final interesting read -- again thanks to Rogeberg for pointing this out -- is a profile in The Atlantic of Cliff Asness of the quant hedge fund Applied Quantitative Research. AQR was one of the hedge funds involved in the infamous "quant meltdown" of August 2007, which was driven precisely by a positive feedback loop, in this case one that caused a violent de-leveraging among a number of hedge funds using similar strategies and invested in similar assets. This is one of the few cases in which we have a pretty good quantitative model explaining how these kinds of feedback loops emerge essentially in the same way violent storms (or hurricanes) do in the atmosphere -- through ordinary processes which create the conditions in which explosive events become virtually certain. In the profile, Asness describes the dynamics behind the quant meltdown, which weren't as complex, mysterious or irrational as many people seem to think:
He told the New York Post that he blamed the sudden losses not on AQR's computer models but on "a strategy getting too crowded ... and then suffering when too many try to get out the same door" at the same time.

Wednesday, August 24, 2011

Efficiency versus stability

UPDATED BELOW

I had an opinion piece published today in Bloomberg View looking at the relationship between market efficiency and stability, a topic which hasn't received much attention in the economics literature until recently. The point of the essay was to explore two distinct recent studies which suggest that adding more derivative instruments to markets tends to make them less stable, even if they do push markets toward the ideal of market completeness and efficiency.

I wanted to make available here some further technical information on the two studies I mentioned, but as the piece was published very quickly and I've been pressed with other deadlines, I haven't yet managed to write that post as I wanted. However, I can at least offer some information now, with the idea of updating it very shortly (by Thursday 25 August).

I've given some extensive discussion of the first study I mentioned, by economists William Brock, Cars Hommes and Florian Wagener, in an earlier post.

The second study, by Matteo Marsili, is quite technical and relies for parts of its analysis on ideas and techniques imported from physics. Tomorrow I will try to give a simplified account of the gist of the argument. What makes it particularly fascinating is that it works fully within the confines of standard general equilibrium models, and examines how market stability should evolve as the market approaches the ideal of market completeness. Agents are assumed to be fully rational, there are no problems with asymmetric information, and so on. Even here, however, Marsili finds that the equilibrium becomes more and more unstable as the ideal is approached. Efficient markets are also unstable markets.

UPDATE

Marsili's argument is one he has been developing in a series of papers (with various co-authors) over several years. This paper from last year offers what is perhaps the most concise argument. It looks at a market with informed (fundamentalist) traders and non-informed (noise) traders, and shows, first, that the market becomes efficient as the number of informed traders grows. They are assumed in the model to have different kinds of private information about market outcomes, and the market becomes efficient, roughly speaking, once there are enough traders to cover the space of outcomes so all private information gets aggregated into market prices. The paper then introduces a non-informed trader -- a chartist or trend follower -- and shows that this trader has a maximum impact on the market precisely at the point at which it becomes efficient. The conclusion is very much against standard economic thinking:
[The results suggest] that information efficiency might be a necessary condition for bubble phenomena - induced by the behavior of non-informed traders...
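To get a feel for the first half of that result, here is a stylized illustration of my own (a caricature, not Marsili's actual model): suppose the fundamental value is the sum of many independent pieces of private information, and the price simply aggregates the signals of whichever informed traders happen to be present. The mispricing then shrinks to zero as the traders cover more and more of the space of outcomes.

import random

# Caricature of information aggregation: each informed trader contributes one
# distinct private signal to the price; the fundamental is the sum of all signals.
random.seed(0)
n_signals = 100
signals = [random.gauss(0, 1) for _ in range(n_signals)]
fundamental = sum(signals)

for n_informed in (10, 50, 100):
    price = sum(signals[:n_informed])   # only the signals of traders actually present
    print(n_informed, "informed traders, mispricing:", round(abs(fundamental - price), 2))

What this little exercise cannot show, of course, is Marsili's more surprising second result -- that a trend follower has its maximum impact exactly at the point where coverage becomes complete.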
Another paper from two years ago approaches the problem from a slightly different angle. This study looks explicitly at how the proliferation of financial instruments (derivatives) provides more means for diversifying and sharing risks and so takes the market toward an efficient state. However, it finds that this state is what physicists refer to as a "critical state" -- a state characterized by extreme (essentially infinite) susceptibility to small disturbances. Any small noise stirs up huge fluctuations. Again, efficiency brings instability in its wake. As the paper asserts:
This suggests that the hypothesis of Arbitrage Pricing Theory (the notion that arbitrage works to keep market in an efficient state) may not be compatible with a stable market dynamics.
This paper also makes the important point that market stability really ought to be thought of as a public good, because well-functioning markets help everyone. But as with most public goods, private individuals acting in their own interests are not likely to provide it.

Finally, the paper I discussed in the Bloomberg article is from last year and analyses a model set up specifically so as to include the finance sector. It is very much akin to standard general equilibrium models, and includes essentially two components:

1. There are investors who aim to take their current wealth and preserve it (or make it grow) into the future. They do this by investing in various instruments provided by a sector of financial firms. These investors are assumed to be rational and have full information and they invest their wealth optimally over the set of possible investments.

2. There are financial firms who create the investment instruments and take on risks in supplying them. They also act optimally, and they hedge their risks by trading between themselves. Again, the firms are rational and have full information.

Marsili then studies what happens to this world of investors and financial firms optimally making decisions as the number of different financial instruments grows. The first result confirms expectations -- the financial firms are ever more successful in hedging their risks and they can provide the financial instruments more cheaply. Investors can therefore invest more effectively. The market becomes efficient.

But there are also two unexpected consequences. As Marsili describes them,
As markets approach completeness, however, two "unintended consequences" also arise: equilibrium portfolios develop a marked susceptibility to idiosyncratic shocks and/or parameter uncertainty and hedging engenders divergent trading volumes in the interbank market. Combining these, suggests an inverse relation between financial stability and the size of the financial sector...
In other words, the character of the optimum portfolios for both the investors and the financial firms becomes hugely sensitive to tiny shocks to the economy. As the efficient state is approached, these agents have to work ever harder to adjust their holdings to remain in the optimal condition. The market only remains efficient through an ever faster and more vigorous churning of investment positions. This shows up in the hedging done by the financial firms, where the volume of trading required to remain optimally hedged actually becomes infinite as the market reaches efficiency.
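A crude caricature of that kind of divergence (mine, not taken from the paper): if some susceptibility-like quantity grows as 1/(1 - n/n_c), where n is the number of instruments and n_c the number needed for completeness, then the response to small shocks blows up as the market nears the ideal.

# Hypothetical numbers, purely to illustrate divergence near a critical point.
n_c = 100.0   # instruments needed for a (hypothetically) complete market
for n in (50, 90, 99, 99.9):
    susceptibility = 1.0 / (1.0 - n / n_c)
    print(f"n = {n:>5}: response to a small shock ~ {susceptibility:.0f}x")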

All three of these papers show much the same thing -- efficiency bringing instability along with it. But this latter paper may be the most interesting as it shows directly how the size of the financial sector also naturally explodes as this efficient-unstable regime is approached. The effect sounds suspiciously like what has happened in the past 30 years or so with massive growth in the financial industries in most developed nations.

What I find really remarkable, however, is that all of this comes from the very models that economists have been using for a long time to make arguments about market efficiency. Why did it take a physicist to look at what happens to stability at the same point? This seems bizarre indeed.

Monday, August 22, 2011

The next credit crisis -- in education?

From The Atlantic comes a chart showing an incredible rise in the level of student debt over the past decade or so. The total outstanding debt among US students has grown by a factor of more than five over this period.


Daniel Indiviglio brings out the crucial point for appreciating just how explosive this rise has been. The figure shows two curves: red for student loan debt, blue for overall household debt. The latter itself went through rather explosive growth from 1999 to 2008, yet doesn't come close to matching the rate of growth in student loans:
See that blue line for all other debt but student loans? This wasn't just any average period in history for household debt. This period included the inflation of a housing bubble so gigantic that it caused the financial sector to collapse and led to the worst recession since the Great Depression. But that other debt growth? It's dwarfed by student loan growth.

How does the housing bubble debt compare? If you add together mortgages and revolving home equity, then from the first quarter of 1999 to when housing-related debt peaked in the third quarter of 2008, the sum increased from $3.28 trillion to $9.98 trillion. Over this period, housing-related debt had increased threefold. Meanwhile, over the entire period shown on the chart, the balance of student loans grew by more than 6x. The growth of student loans has been twice as steep.
The number of students has remained more or less constant over the same period. Indiviglio goes on to ponder what happens when the bubble bursts, but there isn't an obvious endgame.
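For the record, the growth factors quoted above are easy to check using only the figures cited in the post:

housing_1999 = 3.28e12   # mortgages + revolving home equity, Q1 1999
housing_2008 = 9.98e12   # the same, at the Q3 2008 peak
print(round(housing_2008 / housing_1999, 1))   # ~3.0 -- the "threefold" increase
# Student loan balances are said to have grown by more than 6x over the full
# period shown in the chart -- roughly twice the housing-debt growth factor.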

The disturbing thing is what lies behind this sudden expansion; it isn't the high-minded aim of making education possible for ever more people, but the chance to make an easy profit on loans guaranteed by the US government. An earlier article in The Atlantic documented fast rises in tuition as universities aim to suck up their share of the easy credit, and of course there's been an explosion in for-profit college companies such as the Education Management Corporation. That company appears to be to the education bubble what Countrywide was to the housing bubble -- a facilitator pushing clients into loans regardless of need, solely to make a profit. As the New York Times recently reported, the Justice Department has joined in a suit against Education Management Corporation, charging it with defrauding the government "...by illegally paying recruiters based on the number of students they enrolled." Get 'em signed up regardless of need or ability to pay. Sound familiar?

As the NYT article noted,
For-profit schools enroll about 12 percent of the nation’s higher-education students yet receive about a quarter of all federal student aid; their students account for almost half of all defaults. In general, these institutions get more than 80 percent of their revenues from federal student aid. 
Good money to be made here, apparently. So you may not be surprised to hear who's behind the Education Management Corporation. According to the NYT, it is 40% owned by Goldman Sachs.

Thursday, August 18, 2011

Coping with chaos -- with false certainty

I was looking today for a paper -- allegedly published in Science earlier this year -- reporting the results of a re-run of the famous Robert Axelrod open competition for algorithms playing the Prisoner's Dilemma. In that competition, the simple TIT-FOR-TAT strategy -- cooperate in the first round, and thereafter do whatever your opponent did in the preceding round -- won out easily over much more complex strategies. The twist in the new competition (as I've been told) is to allow copying strategies, so players can explicitly mimic the behaviour of others they see doing well. Apparently, some very simple (mindless) copying strategies won this new competition, showing how blind copying can be a very effective strategy in competitive games.
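For anyone who hasn't seen it written down, here is a minimal sketch of TIT-FOR-TAT in an iterated prisoner's dilemma. The payoff numbers are the standard textbook values, not those of Axelrod's tournament, and the rival strategy is just the simplest possible foil.

# Iterated prisoner's dilemma: payoffs for (my move, their move), C = cooperate, D = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # cooperate first, then copy the opponent's previous move
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (600, 600): sustained mutual cooperation
print(play(tit_for_tat, always_defect))   # (199, 204): defection gains almost nothing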

But I must have had the wrong reference, because I didn't find the paper in the 8 April 2011 issue of Science. I'll track it down and post on it soon -- this kind of thing obviously has huge implications for strategies used in financial markets, where copying may well out-perform allegedly more sophisticated techniques. But I stumbled over something else in that issue of Science that is worth mentioning, even if briefly (as I don't have access to Science and haven't yet been able to read the full paper).

The paper is entitled Coping with Chaos: How Disordered Contexts Promote Stereotyping and Discrimination. It reports the results of experiments in which two psychologists, Diederik Stapel and Siegwart Lindenberg, tested how the level of environmental uncertainty might influence the tendency of volunteers to make judgments on the basis of simple stereotypes. They took advantage of a rail strike in Utrecht -- during which train stations became much more littered and disordered. Here's the abstract (at least):
Being the victim of discrimination can have serious negative health- and quality-of-life–related consequences. Yet, could being discriminated against depend on such seemingly trivial matters as garbage on the streets? In this study, we show, in two field experiments, that disordered contexts (such as litter or a broken-up sidewalk and an abandoned bicycle) indeed promote stereotyping and discrimination in real-world situations and, in three lab experiments, that it is a heightened need for structure that mediates these effects (number of subjects: between 40 and 70 per experiment). These findings considerably advance our knowledge of the impact of the physical environment on stereotyping and discrimination and have clear policy implications: Diagnose environmental disorder early and intervene immediately.
This is interesting in this limited context of discrimination and how the orderliness of physical environments might influence it, but the effect described seems in fact to be far more general -- it reflects a human longing for order and simplicity whenever faced with too much uncertainty. On the same point, another notable study from 2008 (also in Science) by Jennifer Whitson and Adam Galinsky showed how uncertainty and loss of control make people more likely to see fictitious patterns in random data. In brief, they had a set of volunteers play some competitive games in which they could influence how much the volunteers felt in control. For example, they could induce feelings of loss of control and uncertainty by eradicating any link between the players' actions and the outcomes. Then they tested these people on totally random data sets (some looking like stock market time series) to see how much they would perceive fictitious patterns in the random data. Those primed more strongly with the "loss of control and uncertainty" feelings were significantly more likely to see patterns where there were none -- grasping, apparently, for some kind of order in a perplexing world. The paper is available in full at the link I gave above, but here's the abstract:
We present six experiments that tested whether lacking control increases illusory pattern perception, which we define as the identification of a coherent and meaningful interrelationship among a set of random or unrelated stimuli. Participants who lacked control were more likely to perceive a variety of illusory patterns, including seeing images in noise, forming illusory correlations in stock market information, perceiving conspiracies, and developing superstitions. Additionally, we demonstrated that increased pattern perception has a motivational basis by measuring the need for structure directly and showing that the causal link between lack of control and illusory pattern perception is reduced by affirming the self. Although these many disparate forms of pattern perception are typically discussed as separate phenomena, the current results suggest that there is a common motive underlying them.
This seems to fit in very well with the experiments of Stapel and Lindenberg. It reminds me of what Nietzsche said long ago:
"Danger, disquiet, anxiety attend the unknown — the first instinct is to eliminate these distressing states. First principle: any explanation is better than none.”
Indeed, this seems to be a very general topic on which a great deal is known from psychology. Galinsky's web site lists a forthcoming article which appears to be a review of sorts. I look forward to reading that.

Also worth a read is this feature in Wired, which discusses some related experiments. Curiously, the article quotes Christina Romer, the former chairwoman of President Obama’s Council of Economic Advisers, on economic uncertainty and its influence on the opinions of "respected analysts" about where things are going. This was from December 2010:
One sign of heightened macroeconomic uncertainty is that the forecasts of respected analysts are all over the map. According to the Survey of Professional Forecasters conducted by the Federal Reserve Bank of Philadelphia, the difference between the highest and the lowest forecasts of unemployment a year from now is about twice as large as it was before the crisis. And forecasters’ reported uncertainty about their longer-run forecasts has shown no sign of improving over the last year. If professional forecasters are unsure of the future, businesses and consumers certainly are as well.
Then again, perhaps this spread in forecasts is a healthy thing. Things were by no means "more certain" before the crisis; it only seemed that way because an army of "respected forecasters" found comfort in saying pretty much the same thing as everyone else.

Friday, August 12, 2011

VIX to September 11 levels

By way of Moneyscience, Nicholas Bloom notes that the VIX -- the so-called fear index -- has spiked to the same level it reached just after 9/11 (not quite as high as during late 2008):





What this means for the future is uncertain -- it is a measure of uncertainty, after all -- but Bloom suggests, based on an analysis of 16 previous episodes of similar spikes, that a short recession is very likely, since economic growth generally requires some minimal level of confidence, and that is obviously lacking:
I have studied 16 previous uncertainty shocks – events like 9/11, the Cuban Missile Crisis, the Assassination of JFK – and the only certain thing about these is they lead to large short-run recessions (Bloom 2009).

When people are uncertain about the future they wait and do nothing.
  • Firms do not hire new employees, or invest in new equipment, if they are uncertain about future demand.
  • Consumers do not buy a new car, a new TV, or refurnish their house if they are uncertain about their next pay-check.
The economy grinds to a halt while everyone waits.

I cannot attest to the reliability of the statistical analysis (16 events is quite few, after all), but the conclusion would hardly be surprising.

Thursday, August 11, 2011

Looting -- history does repeat itself

Writing at Salon.com, Yves Smith of Naked Capitalism offers a rather depressing but illuminating wrap-up of the utter failure of the SEC and the US Justice Department to do much of anything to punish the perpetrators of massive fraud in the run-up to (and after) the financial crisis. It's a sobering analysis of the world we live in, which isn't (for most of us) the world we thought we lived in until a few years ago:
For most citizens, one of the mysteries of life after the crisis is why such a massive act of looting has gone unpunished. We've had hearings, investigations, and numerous journalistic and academic post mortems. We've also had promises to put people in jail by prosecutors like Iowa's attorney general Tom Miller walked back virtually as soon as they were made.

Yet there is undeniable evidence of institutionalized fraud, such as widespread document fabrication in foreclosures (mentioned in the motion filed by New York state attorney general Eric Schneiderman opposing the $8.5 billion Bank of America settlement with investors) and the embedding of impermissible charges (known as junk fees and pyramiding fees) in servicing software, so that someone who misses a mortgage payment or two is almost certain to see it escalate into a foreclosure. And these come on top of a long list of runup-to-the-crisis abuses, including mortgage bonds having more dodgy loans in them than they were supposed to, banks selling synthetic or largely synthetic collateralized debt obligations as being just the same as ones made of real bonds when the synthetics were created for the purpose of making bets against the subprime market and selling BBB risk at largely AAA prices, and of course, phony accounting at the banks themselves.
The article goes on to document how what is happening now isn't actually too different from what happened following the Crash of 1929, and how much of the problem has been engineered by the increasing influence of economics in law, especially through efforts to limit regulators' powers and the potential liabilities of corporate managers.

This is an old, familiar story. I think the best analysis is still in the brilliant 1993 paper (stimulated by the Savings and Loan Crisis in the US) by George Akerlof and colleagues entitled Looting: The Economic Underworld of Bankruptcy for Profit. Below, enjoy the final two paragraphs:
The S&L fiasco in the United States leaves us with the question, why did the government leave itself so exposed to abuse? Part of the answer, of course, is that actions taken by the government are the result of the political process. When regulators hid the extent of the true problem with artificial accounting devices, when congressmen pressured regulators to go easy on favored constituents and political donors, when the largest brokerage firms lobbied to protect their ability to funnel brokered deposits to any thrift in the country, when the lobbyists for the savings and loan industry adopted the strategy of postponing action until industry difficulties were so large that general tax revenue would have to be used to address problems instead of revenue raised from taxes on successful firms in the industry -- when these and many other actions were taken, people responded rationally to the incentives they faced within the political process.

The S&L crisis, however, was also caused by misunderstanding. Neither the public nor economists foresaw that the regulations of the 1980s were bound to produce looting. Nor, unaware of the concept, could they have known how serious it would be. Thus the regulators in the field who understood what was happening from the beginning found lukewarm support, at best, for their cause. Now we know better. If we learn from experience, history need not repeat itself.
That was 1993. Alas, history has repeated itself and didn't take too long to do so.

Wednesday, August 10, 2011

Algorithmic trading -- the positive side

In researching a forthcoming article, I happened upon this recent empirical study in the Journal of Finance looking at some of the benefits of algorithmic trading. I've written before about natural instabilities inherent to high-frequency trading, and I think we still know very little about the hazards presented by dynamical time-bombs linked to positive feedbacks in the ecology of algorithmic traders. Still, it's important not to neglect some of the benefits algorithms and computer trading do bring; this study highlights them quite well.

This paper asks the question: "Overall, does AT (algorithmic trading) have salutary effects on market quality, and should it be encouraged?" The authors claim to give "the first empirical analysis of this question." The ultimate message coming out is that "algorithmic trading improves liquidity and enhances the informativeness of quotes." In what follows I've given a few highlights -- some points being obvious, others less obvious:
From a starting point near zero in the mid-1990’s, AT (algorithmic trading) is thought to be responsible for as much as 73% of trading volume in the U.S in 2009.
That's no longer news, of course. By now, in mid-2011, I expect that percentage has risen closer to 80%.

Generally, when I think of automated trading, I think of two kinds of activity: market making (by firms such as GETCO) and statistical arbitrage at high frequency, pursued by many (several hundred) firms. But this article rightly emphasizes that automated trading now runs through the markets at every level:

There are many different algorithms, used by many different types of market participants. Some hedge funds and broker-dealers supply liquidity using algorithms, competing with designated market-makers and other liquidity suppliers. For assets that trade on multiple venues, liquidity demanders often use smart order routers to determine where to send an order (e.g., Foucault and Menkveld (2008)). Statistical arbitrage funds use computers to quickly process large amounts of information contained in the order flow and price moves in various securities, trading at high frequency based on patterns in the data. Last but not least, algorithms are used by institutional investors to trade large quantities of stock gradually over time.
One very important point the authors make is that it is not at all obvious that algorithmic trading should improve market liquidity. Many people seem to think this is obvious, but there are many routes by which algorithms can influence market behaviour, and they work in different directions:
... it is not at all obvious a priori that AT and liquidity should be positively related. If algorithms are cheaper and/or better at supplying liquidity, then AT may result in more competition in liquidity provision, thereby lowering the cost of immediacy. However, the effects could go the other way if algorithms are used mainly to demand liquidity. Limit order submitters grant a trading option to others, and if algorithms make liquidity demanders better able to identify and pick off an in-the-money trading option, then the cost of providing the trading option increases, and spreads must widen to compensate. In fact, AT could actually lead to an unproductive arms race, where liquidity suppliers and liquidity demanders both invest in better algorithms to try to take advantage of the other side, with measured liquidity the unintended victim.
This is the kind of thing most participants in algorithmic trading do not emphasize when raving about the obvious benefits it brings to markets.

However, the most important part of the paper comes in an effort to track the rise of algorithmic trading (over roughly a five year period, 2001-2006) and to compare this to changes in liquidity. This isn't quite as easy as it might seem because algorithmic trading is just trading and not obviously distinct in market records from other trading:
We cannot directly observe whether a particular order is generated by a computer algorithm. For cost and speed reasons, most algorithms do not rely on human intermediaries but instead generate orders that are sent electronically to a trading venue. Thus, we use the rate of electronic message traffic as a proxy for the amount of algorithmic trading taking place.
The figure below shows this data, recorded for stocks with differing market capitalization (sorted into quintiles, Q1 being the largest fifth). Clearly, the amount of electronic traffic in the trading system has increased by a factor of at least five over a period of five years:


The paper then compares this to data on the effective bid-ask spread for this same set of stocks, again organized by quintile, over the same period. The resulting figure indeed shows a more or less steady decrease in the spread, a measure of improving liquidity:
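For anyone unfamiliar with the jargon: the effective spread is typically measured as twice the distance between a trade's price and the prevailing quote midpoint, often expressed relative to that midpoint. A quick sketch with invented numbers:

# Effective spread = 2 * |trade price - quote midpoint|; the quotes below are made up.
trades = [
    # (trade price, best bid, best ask)
    (100.03, 99.98, 100.04),
    (99.97,  99.96, 100.02),
    (100.01, 99.97, 100.03),
]

for price, bid, ask in trades:
    mid = (bid + ask) / 2.0
    effective = 2.0 * abs(price - mid)
    print(f"quoted spread {ask - bid:.2f}, effective spread {effective:.2f} "
          f"({10000 * effective / mid:.1f} bps)")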


So, there is a clear correlation. The next question, of course, is whether this correlation reflects a causal process or not. I won't get into details, but what perhaps sets this study apart from others (see, for example, any number of reports by the Tabb Group, which monitors high-frequency markets) is an effort to get at this causal link. The authors do this by studying a particular historical event that increased the amount of algorithmic trading in some stocks but not others. The results suggest that there is a causal link.

The conclusion, then, is that algorithmic trading (at least in the time period studied, in which stocks were generally rising) does improve market efficiency in the sense of higher liquidity and better price discovery. But the paper also rightly ends with a further caveat:

While we do control for share price levels and volatility in our empirical work, it remains an open question whether algorithmic trading and algorithmic liquidity supply are equally beneficial in more turbulent or declining markets. Like Nasdaq market makers refusing to answer their phones during the 1987 stock market crash, algorithmic liquidity suppliers may simply turn off their machines when markets spike downward.

This resonates with a general theme across all finance and economics. When markets are behaving "normally", they seem to be more or less efficient and stable. When they go haywire, all the standard theories and accepted truths go out the window. Unfortunately, "haywire" isn't as unusual as many theorists would like it to be.

** UPDATE **

Someone left an interesting comment on this post, which for some reason hasn't shown up below. I had an email from Puzzler183 saying:

"I am an electronic market maker -- a high frequency trader. I ask you: why should I have to catch the falling knife? If I see that it isn't not a profitable time to run my business, why should I be forced to, while no one else is?

You wouldn't force a factory owner to run their plant when they couldn't sell the end product for a profit. Why am I asked to do the same?

During normal times, bid-ask spreads are smaller than ever. This is directly a product of automation improving the efficiency of trading."

This is a good point and I want to clarify that I don't think the solution is to force anyone to take positions they don't want to take. No one should be forced to "catch the falling knife." My point is simply that in talking about market efficiency, we shouldn't ignore the non-normal times. An automobile engine which uses half the fuel of any other when working normally wouldn't be considered efficient if it exploded every few hours. Judgments of the efficiency of the markets ought to include consideration of the non-normal times as well as the normal.

An important issue is to explore if there is a trade-off between efficiency in "normal times" as reflected in low spreads, and episodes of explosive volatility (the mini flash crashes which seem ever more frequent). Avoiding the latter (if we want to) may demand throwing some sand into the gears of the market (with trading speed limits or similar measures).

But I certainly agree with Puzzler183: no one should be forced to take on individual risks against their wishes.

Friday, August 5, 2011

Two interesting links...

I'm traveling today and will probably have little posting time for several days, but here are two interesting links:

1. John Kay has an illuminating essay identifying a common pattern in many financial and economic problems and linking them (loosely) to the structure of one simple, if diabolical, auction-type game:
The game theorist Martin Shubik invented an unpleasant economists’ party game called the dollar bill auction. The players agree to auction a dollar bill with one-cent increments to the bids. As usual, the dollar goes to the highest bidder. The twist is that both the highest bidder and the second-highest bidder must pay.

You might start with a low bid – but offers will quickly rise towards a dollar. Soon the highest bid will be 99 cents with the underbidder at 98 cents. At that point, it pays the underbidder to offer a dollar. He will not now gain from the transaction, but that outcome is better than the loss of 98 cents. And now there is a sting in the tail. There is no reason why the bidding should stop at a dollar. The new underbidder stands to lose 99 cents. But if a bid of $1.01 is successful, he can reduce his loss to a single cent.

The underbidder always comes back. So the auction can continue until the resources of the players are exhausted. The game must end, but never well. There are reports that over $200 has been paid for a dollar in Shubik’s game.
Kay goes on to describe how the structure of this game -- drawing participants to keep wagering a little more so as to avoid a greater loss -- bears a striking similarity to the recent "solution" agreed to for Greece ("It is plainly better to write down Greece’s debt, even to agree a permanent underwriting of the Greek economy, than to risk the breakdown of European economic integration.")
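The escalation logic is easy to see in a toy walk-through (my own sketch; the budget cap is arbitrary and exists only so the loop terminates). At every stage the underbidder prefers raising by one cent to conceding and eating the loss they are already committed to, so the bidding sails straight past the value of the prize.

PRIZE = 100    # the dollar, in cents
BUDGET = 300   # assumed limit on what either bidder is able to pay

high, low = 1, 0   # current highest and second-highest bids, in cents
while True:
    new_bid = high + 1
    # the underbidder currently stands to lose `low`; by raising to `new_bid`
    # they would instead net PRIZE - new_bid if they end up winning
    if new_bid > BUDGET or (PRIZE - new_bid) <= -low:
        break
    low, high = high, new_bid

print(f"bidding stops at {high} cents -- well past the {PRIZE}-cent prize")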

2. Money is a fascinating thing, clearly essential to the functioning of modern economic systems, but also often stirring up instability. Banks have historically played a special and privileged role in the creation of money by having the legal right to take in deposits and lend out against them, with only a fraction kept in reserve. This effectively multiplies the amount of money flowing in an economy and makes it possible for many more beneficial (and non-beneficial) activities to be undertaken than would otherwise be the case. This lending is also a source of instability through bank runs -- the sudden and often cascading withdrawal of funds as depositors rush to get their cash back out for whatever reason, real or imagined. For this very reason, the privilege banks have to take in deposits and lend against them also comes with strict regulation (capital requirements, etc.).
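The money-multiplication point is the standard textbook story, and a few lines of arithmetic make it concrete (the reserve ratio here is just an illustrative choice):

# Fractional-reserve lending: each bank keeps a fraction in reserve and lends the
# rest, which is redeposited elsewhere and lent again, and so on.
initial_deposit = 1000.0
reserve_ratio = 0.10

total_deposits, deposit = 0.0, initial_deposit
for _ in range(200):                  # iterate until further additions are negligible
    total_deposits += deposit
    deposit *= (1 - reserve_ratio)    # the lent-out portion returns as a new deposit

print(round(total_deposits))               # ~10000
print(initial_deposit / reserve_ratio)     # geometric-series limit: deposit / reserve ratio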

By way of Economist's View, I came across this very important essay by Morgan Ricks in the Harvard Business Law Review (not something I would ever be likely to peruse at random, I can tell you). Ricks points out that the so-called "shadow banking" system which has arisen in the past two decades has come to play a "money creation" role much like traditional banks, yet is not generally subject to the same regulations. In essence, an entire industry associated with the term "money markets" now takes in money (which can be taken out on demand almost instantaneously) and invests this money in longer-term speculative projects. He estimates that this shadow banking system now accounts for more than half the money creation in the US.

Of course, it was just this banking system outside of the banking system that was at the core of the financial crisis. Ricks makes a powerful argument, it seems to me, that the regulations originally devised for banks should be applied to any firm that plays a money-creation role, regardless of what it might be called. (Warning: the paper uses some fairly dense banking jargon at times.)

Wednesday, August 3, 2011

Dudley and Hubbard: Some Greatest Hits

Entering August 2011 we're still suffering through the aftermath of the financial crisis of 2007-2008. Indeed, we may not yet have seen the worst of it. Economies around the globe are suffering, millions are unemployed. Europe, the US, Japan seem to be competing to see who has the biggest problems.

So I thought it might be interesting -- or at least perversely entertaining -- to look back to the rosy days, before the crisis, when our wise academic economists and bankers were telling us how great things were going, mostly because of the wonders of modern financial engineering. Good examples can be found in thousands of reports and academic papers, but I chose a report issued in November 2004 and co-authored by economists R. Glenn Hubbard of Columbia University (formerly an economic advisor to the president) and William Dudley, then at Goldman Sachs and now, post-crisis, president (wouldn't you know!) of the Federal Reserve Bank of New York.

The report was happily entitled "How Capital Markets Enhance Economic Performance and Facilitate Job Creation." It is overflowing with wisdom and comforting messages about the nature of global capitalism and the manifold benefits that necessarily accrue from vibrant capital markets.

Some highlights follow (anything in bold is my own emphasis). First, in an overview section:
"The ascendancy of the US capital markets — including increasing depth of US stock, bond, and derivative markets — has improved the allocation of capital and of risk throughout the US economy. ... The same conclusions apply to the United Kingdom, where the capital markets are also well-developed."

The consequence has been improved macroeconomic performance. ... Because market prices adjust instantaneously to new information, the development of the capital markets has introduced new discipline into policymaking.

The development of the capital markets has provided significant benefits to the average citizen. Most importantly, it has led to more jobs and higher wages.

The capital markets have also acted to reduce the volatility of the economy. Recessions are less frequent and milder when they occur. As a result, upward spikes in the unemployment rate have occurred less frequently and have become less severe.

The development of the capital markets has also facilitated a revolution in housing finance. As a result, the proportion of households in the US that own their homes has risen substantially over the past decade."
The two economists go on to argue for each of these points in some detail. To begin with, they point out that in the US and UK the financial markets have grown to take over much of the lending previously done by banks, especially when compared to other still bank-centric nations such as Germany or Japan. They then ask (and answer) the question: "Why are the UK and US ahead?":
"The shift from depository institution intermediation to capital markets intermediation appears to be driven mostly by technological developments. Computational costs have fallen rapidly. As technology has improved, information has become much more broadly available. This has improved transparency. As this has occurred, depository institutions have lost some of their ability to charge a premium for their intermediary services. Often, borrowers and lenders interact directly, as they find that the lender can earn more and the borrower can pay less by cutting out the depository intermediary as a middleman."
In short, it seems that the UK and US have simply let the free market work, and used technology to set it free. As a result, they've gained the benefits of more efficient allocation of capital from savers to borrowers, driving the beneficial advance of business and technology. At the same time, the authors note, the increased role of financial markets - especially through the derivatives markets -- has reduced risks:
...the development of the capital markets has helped distribute risk more efficiently. Part of the efficient allocation of capital is the transfer of risk to those best able to bear it — either because they are less risk averse or because the new risk is uncorrelated or even negatively correlated with other risks in a portfolio. This ability to transfer risk facilitates greater risk-taking, but this increased risk-taking does not destabilize the economy. The development of the derivatives market has played a particularly important role in this risk-transfer process.
Dudley and Hubbard cite several sources of data to support these claims -- such as higher returns in US and UK markets in recent years in comparison with Japanese or European markets. Also, they point to the apparently improved stability of the US banking system:
The rapid development of the capital markets over the past decade also appears to have made the US banking system more stable. ... As shown in [the figure below, Exhibit 6 from the paper], only 16 US commercial banks failed during the 2001-2003 period. Moreover, these banks were small, accounting for less than $3 billion in total assets. In contrast, at a comparable point in the business cycle in 1990-1992, 412 commercial banks failed, with assets totaling over $120 billion.


As mentioned above, the authors attribute this dramatic improvement in banking stability to the increased use of derivatives. In particular, they point out, the use of credit derivatives such as credit default swaps (CDS) has had marked beneficial effects on stability:
Credit derivative obligations have become an important element that has helped protect bank lending portfolios against loss. These instruments allow a bank to obtain protection from a third party against the risk of a corporate bankruptcy. This protection allows the bank to continue to lend. At the same time, the bank can limit its credit exposure to individual counterparties and diversify its credit exposure across industries and geographically. The decline in banking failures is evidence that derivatives have helped to distribute risk more broadly throughout the economy.
Finally, all this together has led to overall better macroeconomic performance and stability. The authors argue how this has played out in three significant ways:
First, because the capital markets use mark-to-market accounting, it is more difficult for problems to be deferred. As a result, pain is borne in real time, which means that the ultimate shock to the economy tends to be smaller. In contrast, when depository institutions get into trouble as a group, the pressure for regulatory forbearance increases. Deferral causes the magnitude of the problem to increase. Usually — as can be seen with the US saving and loan crisis and in the case of Japan’s decade-long banking crisis — this forbearance just creates a much bigger problem that poses a greater threat to macroeconomic stability.
In other words, the capital markets have made it much more unlikely to encounter large economic or financial crises, because they act rapidly to keep things in balance. They go on:
Second, by providing immediate feedback to policymakers, the capital markets have increased the benefits of following good policies and increased the cost of following bad ones. Good policies result in lower risk premia and higher financial asset prices. Investors are supportive. Bad policies lead to bad financial market performance, which increases investor pressure on policymakers to amend their policy choices. As a result, the quality of economic policymaking has improved over the past two decades, which has helped improve economic performance and macroeconomic stability.

Third, in the United States, the capital markets have helped make the housing market less volatile. With the development of a secondary mortgage market and the elimination of interest rate ceilings on bank deposits, “credit crunches” of the sort that periodically shut off the supply of funds to home buyers, and crushed the homebuilding industry between 1966 and 1982, are a thing of the past. Today, the supply of credit to qualified home buyers is virtually assured. The result has been to cut the volatility of activity in the economy’s most interest-sensitive sector virtually in half. This change is a truly significant improvement, because it means that the economy’s most credit-sensitive sector is now more stable.
One sentence in that last paragraph deserves repeating as it rises almost to the level of poetry:
...“credit crunches” of the sort that periodically shut off the supply of funds to home buyers, and crushed the homebuilding industry between 1966 and 1982, are a thing of the past.
To be fair, I should mention that Dudley and Hubbard did acknowledge Warren Buffett's famous warning that derivatives were "financial weapons of mass destruction," although they set it off against Alan Greenspan's infamous reassurances that the market could be trusted to eliminate any real dangers. They sided with Greenspan.

To be fair also, I should say that everyone makes mistakes. I assume Dudley and Hubbard wrote everything they did in good faith, based on their true belief that capital markets really are automatically efficient and stable. Just because the paper was a publication of the Goldman Sachs "Global Markets Institute" doesn't necessarily mean it was intentionally manufactured as an advertisement for all the things Goldman Sachs and other financial firms do.

Monday, August 1, 2011

Discounting -- why psychology matters too

A reader of my Bloomberg essay from last week (and related post here) on economic discounting emailed me to make a point that deserves some brief discussion. He asked me not to "ignore the psychology of discounting," i.e. the realistic aspects of human behaviour that really determine how we discount the future, whether or not that behaviour conforms to some "rational" paradigm.

My view is that I couldn't agree more. I didn't mention psychology in the Bloomberg piece for two reasons. First, for lack of space (900 words), and second, so as to give greater emphasis to the mathematical angle, this being where traditional economists feel they are on the firmest ground. Point to the fact that real people (and many other animals as well) in experiments don't seem to follow exponential discounting, but something weaker, and the hard-nosed economist will simply respond by saying this only shows that "people are irrational, and we can learn to act more rationally with logic."
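For concreteness, here is what the difference looks like numerically. The exponential form is the standard one in economic cost-benefit analysis; the hyperbolic form is the kind of pattern experiments tend to find. The 4% rate and the k = 0.04 parameter are arbitrary choices of mine, purely for illustration.

import math

r, k = 0.04, 0.04
for years in (1, 10, 50, 100, 500):
    exponential = math.exp(-r * years)      # constant-rate discounting crushes the far future
    hyperbolic = 1.0 / (1.0 + k * years)    # discounts the far future far less severely
    print(f"{years:>4} years: exponential {exponential:.4f}, hyperbolic {hyperbolic:.4f}")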

Hence, whatever the importance of psychology, I thought it was important to bring out this one argument against exponential discounting, as it rests on the same dry logic that economists thought supported their position. It doesn't. What economists are currently doing is irrational. That should raise some serious questions about how cost-benefit analyses may be leading us to undervalue the future in a hundred settings.

But that's not at all to say that psychology isn't important. My reader pointed to the work of psychologist Shane Frederick of Yale University. I happen to know of Frederick as I use one of his past experiments on framing effects in some writing seminars I give (along with my colleague Justin Mullins) to Ph.D. students. The puzzle is:
A bat and a ball together cost $1.10. The bat costs one dollar more than the ball. How much is the ball?
Most of us feel an initial inclination to say 10 cents, even though the correct answer is 5 cents. The immediate, intuitive part of our brain pulls us toward the 10-cent response, and we have to use the slow deliberate part of our brain to get the right answer. If I recall the numbers correctly, Frederick did this experiment with students at Princeton University and University of Michigan, giving them something like 15 minutes to respond, and roughly half still gave the wrong answer. I use this to instill in students how important the structure of their writing is -- it's not only what you say, but how you say it, and saying it the wrong way creates a puzzle for your readers.
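The arithmetic, for the record: if the ball costs b, then the bat costs b + 1.00, so b + (b + 1.00) = 1.10 and b = 0.05.

total, difference = 1.10, 1.00
ball = (total - difference) / 2
bat = ball + difference
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")   # $0.05 and $1.05, not the intuitive 10 cents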

But back to discounting -- my reader (thanks!) pointed me to this paper by Frederick, which looks at how similar framing effects influence how people respond to discounting questions. Very briefly: some earlier experiments suggested that people, when asked how they value a life now versus one in the future, give far more value to the present life. The numbers implied that somewhere between 45 and more than 200 lives saved 100 years from now were judged to be worth the same as a single life saved today. This was taken as evidence of strong discounting in the psychological make-up of people.
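A rough way to gauge how strong that discounting would be (my own back-of-the-envelope translation of those numbers): if one life today is judged equal to N lives a century from now, the implied constant annual discount rate is N**(1/100) - 1.

for n_future_lives in (45, 200):
    rate = n_future_lives ** (1 / 100) - 1
    print(f"{n_future_lives} future lives ~ 1 life now  ->  roughly {100 * rate:.1f}% per year")

Those answers correspond to roughly 3.9% and 5.4% per year respectively -- seemingly modest rates that nonetheless make a life a century away worth a small fraction of one today.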

In contrast, Frederick showed in further experiments that the results you get depend very strongly on how you ask the question. Frame it in one way and you get evidence of strong discounting, frame it in another and people weigh lives 100 years in the future equal to those today. As he notes, the results are all down to the "elicitation procedures" used by the experimenter:
... different elicitation procedures yield widely varying results because they evoke (or suppress) several distinct considerations or criteria relevant to the evaluation of such life saving programs (e.g., uncertainty, efficiency, and distributional equity), and because they produce, to different extents, experimental demand effects: cues about what a reasonable answer should be.
Perhaps the most important observation made in this paper is that the previous experiments purporting to find evidence of strong discounting actually don't show such evidence. Referring to one of the most prominent such papers, Frederick notes that the experimenters afterward asked participants to explain their responses. Their answers followed the pattern below:

• Technological progress provides means to save people in the future 31.3%
• One should live day by day 31.7%
• Future is uncertain 15.4%
• The life I save may be my own 6.5%
• Present-oriented program saves more lives 1.6%
• Saving lives now means more lives in the future 2.8%
• Other 7.7%
• Do not know 2.9%

Frederick's comments on this I think are quite important:
Notably, there is no category labeled “I care less about future generations than this generation” or anything that suggests “ethical values” or “kinship” or a diminished concern for future people. In the study presented here, respondents were not requested to explain their answers, but were invited to comment on the questions or their answers if they wished. These comments suggest reasons similar to those listed above. Many respondents refused to believe that the future deaths would actually occur (e.g., “We’ll figure out a way to save lives in the future,” “Technology will change and guarantee higher survival rates,” “In 100 years, a solution might be found to save the life.”). Others were dubious of the long term commitment by the government needed to ensure that the future programs would be instituted (e.g., “I don’t trust long term projects in the hands of government agencies which are subject to political whims.”). None of the 29 people who offered written justifications for their choice indicated that they felt less concern for, or empathy toward, or kinship with future people.

I received a good number of very similar comments in the more critical responses to my Bloomberg essay. As Frederick sums up:
The results of this study cast doubt on previous claims that the public values future lives much less than present lives.