Showing posts with label behaviour. Show all posts

Friday, December 2, 2011

Interview with Dave Cliff

Dave Cliff of the University of Bristol is someone whose work I've been meaning to look at much more closely for a long time. Essentially he's an artificial intelligence expert, but he has devoted some of his work to developing trading algorithms. He suggests that many of these algorithms, even ones working on extremely simple rules, consistently outperform human beings, which rather undermines the common economic view that people are highly sophisticated rational agents.

I just noticed that Moneyscience is beginning a several-part interview with Cliff, the first part having just appeared. I'm looking forward to the rest. Some highlights from Part I, beginning with Cliff's early work, in the mid 1990s, on writing algorithms for trading:
I wrote this piece of software called ZIP, Zero Intelligence Plus. The intention was for it to be as minimal as possible, so it is a ridiculously simple algorithm, almost embarrassingly so. It’s essentially some nested if-then rules, the kind of thing that you might type into an Excel spreadsheet macro. And this set of decisions determines whether the trader should increase or decrease a margin. For each unit it trades, it has some notion of the price below which it shouldn’t sell or above which it shouldn’t buy, and that is its limit price. However, the price that it actually quotes into the market as a bid or an offer is different from the limit price because obviously, if you’ve been told you can buy something and spend no more than ten quid, you want to start low and you might be bidding just one or two pounds. Then gradually, you’ll approach the ten quid point in order to get the deal, so with each quote you’re reducing the margin on the trade. The key innovation I introduced in my ZIP algorithm was that it learned from its experience. So if it made a mistake, it would recognize that mistake and be better the next time it was in the same situation.
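
The flavour of those nested if-then rules is easy to sketch. The toy below is my own illustration, not Cliff's published ZIP specification: the class name, the margin-update rule and the learning rate are all invented for the example.

```python
# Toy sketch of a ZIP-style seller. The class name, update rule and
# learning rate are invented for illustration; this is not Cliff's
# published ZIP specification.

class ZipSeller:
    def __init__(self, limit_price, margin=0.5, learning_rate=0.1):
        self.limit = limit_price   # never sell below this price
        self.margin = margin       # profit margin on top of the limit
        self.beta = learning_rate  # how fast the margin adapts

    def quote(self):
        # The price actually shouted into the market.
        return self.limit * (1 + self.margin)

    def observe_trade(self, trade_price):
        # Nested if-then rules: if the last deal went through above our
        # quote, we were too cheap, so raise the margin toward it; if it
        # went through below, we are being undercut, so shave the margin.
        target = max(trade_price / self.limit - 1, 0.0)
        if trade_price > self.quote():
            self.margin += self.beta * (target - self.margin)
        elif trade_price < self.quote():
            self.margin = max(0.0, self.margin - self.beta * (self.margin - target))

seller = ZipSeller(limit_price=10.0)
print(seller.quote())               # 15.0
seller.observe_trade(12.0)          # the market traded below our quote
print(round(seller.quote(), 2))     # 14.7 -- the margin was shaved
```

The one feature this shares with ZIP is the essential one: the margin adapts in response to what the market just did, so a trader that quoted badly last time quotes a little better the next.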

HFTR: When was this exactly?

DC: I did the research in 1996 and HP published the results, and the ZIP program code, in 1997. I then went on to do some other things, like DJ-ing and producing algorithmic dance music (but that’s another story!)

Fast-forward to 2001, when I started to get a bunch of calls because a team at IBM’s Research Labs in the US had just completed the first ever systematic experimental tests of human traders competing against automated, adaptive trading systems. Although IBM had developed their own algorithm called MGD, (Modified Gjerstad Dickhaut), it did the same kind of thing as my ZIP algorithm, using different methods. They had tested out both their MGD and my ZIP against human traders under rigorous experimental conditions and found that both algorithms consistently beat humans, regardless of whether the humans or robots were buyers or sellers. The robots always out-performed the humans.

IBM published their findings at the 2001 IJCAI conference (the International Joint Conference on AI) and although IBM are a pretty conservative company, in the opening paragraphs of this paper they said that this was a result that could have financial implications measured in billions of dollars. I think that implicitly what they were saying was there will always be financial markets and there will always be the institutions (i.e. hedge funds, pension management funds, banks, etc). But the traders that do the business on behalf of those institutions would cease to be human at some point in the future and start to be machines. 
Personally, I think there are two important things here. One is that, yes, trading will probably soon become almost all algorithmic. This may tend to make you think the markets will become more mechanical, their collective behaviour emerging out of the very simple actions of so many crude programs.

But the second thing is what this tells us about people -- that traders and investors and people in general aren't so clever or rational, and most of them have probably been following fairly simple rules all along, rules that machines can easily beat. So there's really no reason to think the markets should become more mechanical as they become more algorithmic. They've probably been quite mechanical all along, and algorithmic too -- it's just that non-rational zero intelligence automatons running the algorithms were called people. 

Tuesday, October 18, 2011

Markets are rational even if they're irrational

I promise very soon to stop beating on the dead carcass of the efficient markets hypothesis (EMH). It's a generally discredited and ill-defined idea which has done a great deal, in my opinion, to prevent clear thinking in finance. But I happened recently on a defense of the EMH by a prominent finance theorist that is simply a wonder to behold -- its logic a true testament to the powers of human rationalization. It also illustrates the borderline Orwellian techniques to which diehard EMH-ers will resort in order to cling to their favourite idea.

The paper was written in 2000 by Mark Rubinstein, a finance professor at the University of California, Berkeley, and is entitled "Rational Markets: Yes or No? The Affirmative Case." It is Rubinstein's attempt to explain away all the evidence against the EMH, from excess volatility to anomalous predictable patterns in price movements and the existence of massive crashes such as the crash of 1987. I'm not going to get into too much detail, but will limit myself to three rather remarkable arguments put forth in the paper. They reveal, it seems to me, the mind of the true believer at work:

1. Rubinstein asserts that his thinking follows from what he calls The Prime Directive. This commitment is itself interesting:
When I went to financial economist training school, I was taught The Prime Directive. That is, as a trained financial economist, with the special knowledge about financial markets and statistics that I had learned, enhanced with the new high-tech computers, databases and software, I would have to be careful how I used this power. Whatever else I would do, I should follow The Prime Directive:

Explain asset prices by rational models. Only if all attempts fail, resort to irrational investor behavior.

One has the feeling from the burgeoning behavioralist literature that it has lost all the constraints of this directive – that whatever anomalies are discovered, illusory or not, behavioralists will come up with an explanation grounded in systematic irrational investor behavior.
Rubinstein here is at least being very honest. He's going to jump through intellectual hoops to preserve his prior belief that people are rational, even though (as he readily admits elsewhere in the text) we know that people are not rational. Hence, he's going to approach reality by assuming something that is definitely not true and seeing what its consequences are. Only if all his effort and imagination fails to come up with a suitable scheme will he actually consider paying attention to the messy details of real human behaviour.

What's amazing is that, having made this admission, he then goes on to criticize behavioural economists for having found out that human behaviour is indeed messy and complicated:
The behavioral cure may be worse than the disease. Here is a litany of cures drawn from the burgeoning and clearly undisciplined and unparsimonious behavioral literature:

Reference points and loss aversion (not necessarily inconsistent with rationality):
Endowment effect: what you start with matters
Status quo bias: more to lose than to gain by departing from current situation
House money effect: nouveau riche are not very risk averse

Overconfidence:
Overconfidence about the precision of private information
Biased self-attribution (perhaps leading to overconfidence)
Illusion of knowledge: overconfidence arising from being given partial information
Disposition effect: want to hold losers but sell winners
Illusion of control: unfounded belief of being able to influence events

Statistical errors:
Gambler’s fallacy: need to see patterns when in fact there are none
Very rare events assigned probabilities much too high or too low
Ellsberg Paradox: perceiving differences between risk and uncertainty
Extrapolation bias: failure to correct for regression to the mean and sample size
Excessive weight given to personal or anecdotal experiences over large sample statistics
Overreaction: excessive weight placed on recent over historical evidence
Failure to adjust probabilities for hindsight and selection bias

Miscellaneous errors in reasoning:
Violations of basic Savage axioms: sure-thing principle, dominance, transitivity
Sunk costs influence decisions
Preferences not independent of elicitation methods
Compartmentalization and mental accounting
“Magical” thinking: believing you can influence the outcome when you can’t
Dynamic inconsistency: negative discount rates, “debt aversion”
Tendency to gamble and take on unnecessary risks
Overpricing long-shots
Selective attention and herding (as evidenced by fads and fashions)
Poor self-control
Selective recall
Anchoring and framing biases
Cognitive dissonance and minimizing regret (“confirmation trap”)
Disjunction effect: wait for information even if not important to decision
Time-diversification
Tendency of experts to overweight the results of models and theories
Conjunction fallacy: probability of two co-occurring more probable than a single one

Many of these errors in human reasoning are no doubt systematic across individuals and time, just as behavioralists argue. But, for many reasons, as I shall argue, they are unlikely to aggregate up to affect market prices. It is too soon to fall back to what should be the last line of defense, market irrationality, to explain asset prices. With patience, the anomalies that appear puzzling today will either be shown to be empirical illusions or explained by further model generalization in the context of rationality.
Now, there's sense in the idea that, for various reasons, individual behavioural patterns might not be reflected at the aggregate level. Rubinstein's further arguments on this point aren't very convincing, but at least it's a fair argument. What I find more remarkable is the a priori decision that an explanation based on rational behaviour is taken to be inherently superior to any other kind of explanation, even though we know that people are not empirically rational. Surely an explanation based on a realistic view of human behaviour is more convincing and more likely to be correct than one based on unrealistic assumptions (Milton Friedman's fantasies notwithstanding). Even if you could somehow show that market outcomes are what you would expect if people acted as if they were rational (a dubious proposition), I fail to see why that would be superior to an explanation which assumes that people act as if they were real human beings with realistic behavioural quirks, which they are.

But that's not how Rubinstein sees it. Explanations based on a commitment to taking real human behaviour into account, in his view, have "too much of a flavor of being concocted to explain ex-post observations – much like the medievalists used to suppose there were a different angel providing the motive power for each planet." The people making a commitment to realism in their theories, in other words, are like the medievalists adding epicycles to epicycles. The comparison would seem more plausibly applied to Rubinstein's own rational approach.

2. Rubinstein also relies on the wisdom of crowds idea, but doesn't at all consider the many paths by which a crowd's average assessment of something can go very much awry because individuals are often strongly influenced in their decisions and views by what they see others doing. We've known this at least since the famous 1950s experiments of Solomon Asch on group conformity. Rubinstein pays no attention to that, and simply asserts that we can trust that the market will aggregate information effectively and get at the truth, because this is what group behaviour does in lots of cases:
The securities market is not the only example for which the aggregation of information across different individuals leads to the truth. At 3:15 p.m. on May 27, 1968, the submarine USS Scorpion was officially declared missing with all 99 men aboard. She was somewhere within a 20-mile-wide circle in the Atlantic, far below implosion depth. Five months later, after extensive search efforts, her location within that circle was still undetermined. John Craven, the Navy’s top deep-water scientist, had all but given up. As a last gasp, he asked a group of submarine and salvage experts to bet on the probabilities of different scenarios that could have occurred. Averaging their responses, he pinpointed the exact location (within 220 yards) where the missing sub was found. 

Now I don't doubt the veracity of this account, or that crowds, when people make decisions independently and without bias, can be a source of wisdom. But it's hardly fair to cite one example where the wisdom of the crowd worked out, without acknowledging the at least equally numerous examples where crowd behaviour leads to very poor outcomes. It's highly ironic that Rubinstein wrote this paper just as the dot-com bubble was collapsing. How could the rational markets have made such mistaken valuations of Internet companies? It's clear that many people judge values at least in part by looking to see how others value them, and when that happens you can forget the wisdom of crowds.
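
Both halves of the argument show up in a toy calculation: averaging washes out independent, zero-mean errors, but once guesses anchor on one another the washing-out stops. All the numbers here are invented for illustration.

```python
import random

random.seed(1)
true_location = 100.0  # hypothetical position along a search line, in yards

# 50 experts guess independently, each off by a zero-mean random error.
guesses = [true_location + random.uniform(-40, 40) for _ in range(50)]
crowd_estimate = sum(guesses) / len(guesses)

# Now a herd: everyone anchors on the first expert and adjusts only a little.
herd = [guesses[0] + random.uniform(-2, 2) for _ in range(50)]
herd_estimate = sum(herd) / len(herd)

crowd_error = abs(crowd_estimate - true_location)
herd_error = abs(herd_estimate - true_location)
print(crowd_error < herd_error)  # True: independence is doing all the work
```

The herd version changes only one thing, the independence of the errors, and that alone wrecks the estimate -- which is exactly what imitation does to a market's supposed collective wisdom.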

Obviously I can't fault Rubinstein for not citing these experiments from earlier this year, which illustrate just how fragile the conditions are under which crowds make collectively wise decisions, but such experiments only document more carefully what has been obvious for decades. You can't appeal to the wisdom of crowds to proclaim the wisdom of markets without also acknowledging the frequent stupidity of crowds and hence the associated stupidity of markets.

3. Just one further point. I've pointed out before that defenders of the EMH often switch in their arguments between two meanings of the idea. One is that the markets are unpredictable and hard to beat; the other is that markets do a good job of valuing assets and therefore lead to efficient resource allocations. The trick often employed is to present evidence for the first meaning -- markets are hard to predict -- and then take this in support of the second meaning, that markets do a great job valuing assets. Rubinstein follows this pattern as well, although in a slightly modified way. At the outset, he begins by setting out various definitions of the "rational market":
I will say markets are maximally rational if all investors are rational.
This, he readily admits, isn't true:
Although most academic models in finance are based on this assumption, I don’t think financial economists really take it seriously. Indeed, they need only talk to their spouses or to their brokers.
But he then offers a weaker version:
... what is in contention is whether or not markets are simply rational, that is, asset prices are set as if all investors are rational.
In such a market, investors may not be rational, they may trade too much or fail to diversify properly, but still the market overall may reflect fairly rational behaviour:
In these cases, I would like to say that although markets are not perfectly rational, they are at least minimally rational: although prices are not set as if all investors are rational, there are still no abnormal profit opportunities for the investors that are rational.
This is the version of "rational markets" he then tries to defend throughout the paper. Note what has happened: the definition of the rational market has now been weakened to only say that markets move unpredictably and give no easy way to make a profit. This really has nothing whatsoever to do with the market being rational, and the definition would be improved if the word "rational" were removed entirely. But I suppose readers would wonder why he was bothering if he said "I'm going to defend the hypothesis that markets are very hard to predict and hard to beat" -- does anyone not believe that? Indeed, this idea of a "minimally rational" market is equally consistent with a "maximally irrational" market. If investors simply flipped coins to make their decisions, then there would also be no easy profit opportunities, as you'd have a truly random market.
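
The coin-flip point is easy to check numerically. In the sketch below (parameters invented), a price driven by pure coin flips defeats a simple momentum rule: knowing the last move tells you nothing about the next one, so the "maximally irrational" market is hard to beat too.

```python
import random

random.seed(42)

# A price driven purely by coin flips: +1 on heads, -1 on tails.
steps = [1 if random.random() < 0.5 else -1 for _ in range(100_000)]

# A naive momentum rule: bet that each move repeats the previous one.
hits = sum(1 for prev, nxt in zip(steps, steps[1:]) if prev == nxt)
hit_rate = hits / (len(steps) - 1)
print(round(hit_rate, 3))  # hovers around 0.5: no exploitable pattern
```

Any other rule based only on past moves does equally badly here, by construction -- unpredictability alone says nothing about whether the flips are "rational".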

Why not just say "the markets are hard to predict" hypothesis? The reason, I suspect, is that this idea isn't very surprising and, more importantly, doesn't imply anything about markets being good or accurate or efficient. And that's really what EMH people want to conclude -- leave the markets alone because they are wonderful information processors and allocate resources efficiently. Trouble is, you can't conclude that just from the fact that markets are hard to beat. Trying to do so with various redefinitions of the hypothesis is like trying to prove that 2 = 1. Watching the effort, to quote physicist John Bell in another context, "...is like watching a snake trying to eat itself from the tail. It becomes embarrassing for the spectator long before it becomes painful for the snake."

Monday, October 10, 2011

Creating desires with advertising...

Vance Packard, American journalist of some 50 years ago, quoted in Satyajit Das' book Extreme Money:
A toothbrush does little but clean teeth. Alcohol is important mostly for making people more or less drunk. An automobile can take one reliably to a destination and back...  There being so little to be said, much must be invented. Social distinction must be associated with a house... sexual fulfillment with a particular... automobile, social acceptance with... a mouthwash, etc. We live surrounded by a systematic appeal to a dream world which all mature, scientific reality would reject. We, quite literally, advertise our commitment to immaturity, mendacity and profound gullibility. It is the hallmark of our culture.
And this was before color television.

Friday, September 30, 2011

The Fetish of Rationality

I'm currently reading Jonathan Aldred's book The Skeptical Economist. It's a brilliant exploration of how economic theory is run through at every level with hidden value judgments which often go a long way to determining its character. For example, the theory generally assumes that more choice always has to be better. This follows more or less automatically from the view that people are rational "utility maximizers" (a phrase that should really be banned for ugliness alone). After all, more available choices can only give a "consumer" the ability to meet their desires more effectively, and can never have negative consequences. Add extra choices and the consumer can always simply ignore them.

As Aldred points out, however, this just isn't how people work. One of the problems is that more choice means more thinking and struggling to decide what to do. As a result, adding more options often has the effect of inhibiting people from choosing anything. In one study he cites, doctors were presented with the case history of a man suffering from osteoarthritis and asked if they would A. refer him to a specialist or B. prescribe a new experimental medicine. Other doctors were presented with the same choice, except they could choose between two experimental medicines. Doctors in the second group made twice as many referrals to a specialist, apparently shying away from the psychological burden of having to deal with the extra choice between medicines.

I'm sure everyone can think of similar examples from their own lives in which too much choice becomes annihilating. Several years ago my wife and I were traveling in Nevada and stopped in for an ice cream at a place offering 200+ flavours and a variety of extra toppings, etc. There were an astronomical number of potential combinations. After thinking for ten minutes, and letting lots of people pass by us in the line, I finally just ordered a mint chocolate chip cone -- to end the suffering, as it were. My wife decided it was all too overwhelming and in the end didn't want anything! If there had only been vanilla and chocolate we'd have ordered in 5 seconds and been very happy with the result.

In discussing this problem of choice, Aldred refers to a beautiful paper I read a few years ago by economist John Conlisk entitled Why Bounded Rationality? The paper gives many reasons why economic theory would be greatly improved if it modeled individuals as having finite rather than infinite mental capacities. But one of the things he considers is a paradoxical contradiction at the very heart of the notion of rational behaviour. A rational person facing any problem will work out the optimal way to solve that problem. However, there are costs associated with deliberation and calculation. The optimal solution to the ice cream choice problem isn't to stand in the shop for 6 years while calculating how to maximize expected utility over all the possible choices. Faced with a difficult problem, therefore, a rational person first has to solve another problem -- for how long should I deliberate before it becomes advantageous to just take a guess?

This is a preliminary problem -- call it P1 -- which has to be solved before the real deliberation over the choice can begin. But, Conlisk pointed out, P1 is itself a difficult problem and a rational individual doesn't want to waste lots of resources thinking about that one too long either. Hence, before working on P1, the rational person first has to decide what is the optimal amount of time to spend on solving P1. This is another problem, P2, which is also hard. Of course, it never ends. Take rationality to its logical conclusion and it ends up destroying itself -- it's simply an inconsistent idea.
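
The structure of the regress can be made concrete with a throwaway calculation (all numbers invented). Suppose deliberating for time t yields a benefit with diminishing returns but costs time linearly; finding the optimal stopping point is then itself a computation with its own cost:

```python
import math

# Payoff from deliberating for time t: diminishing-returns benefit minus a
# linear cost of thinking. All numbers are invented for illustration.
def net_payoff(t, benefit=10.0, cost_rate=1.0):
    return benefit * (1 - math.exp(-t)) - cost_rate * t

# P1: find the optimal deliberation time by brute-force search over a grid.
best_t = max((t / 100 for t in range(1, 1000)), key=net_payoff)
print(round(best_t, 2))  # near ln(10), about 2.3

# But the search itself evaluated 999 candidates, which takes effort. A
# fully rational agent must first decide how much effort P1 deserves (P2),
# then how much P2 deserves (P3), and so on: the regress never closes.
```

The brute-force search solves the original problem, but deciding whether that search was worth running is P2, and so on down Conlisk's regress; the only exit is a stopping heuristic that is not itself the product of optimization.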

Anyone who is not an economist might be quite amazed by Conlisk's paper. It's a great read, but it will dawn on the reader that in a sane world it simply wouldn't be necessary. It's arguing for the obvious and is only required because economic theory has made such a fetish of rationality. The assumption of rationality may in some cases have made it possible to prove theorems by turning the consideration of human behaviour into a mathematical problem. But it has tied the hands of economic theorists in a thousand ways.

Monday, September 26, 2011

Overconfidence is adaptive?

A fascinating paper in Nature from last week suggests that overconfidence may actually be an adaptive trait. This is interesting as it strikes at one of the most pervasive assumptions in all of economics -- the idea of human rationality, and the conviction that being rational must always be more adaptive than being irrational. Quite possibly not:

Humans show many psychological biases, but one of the most consistent, powerful and widespread is overconfidence. Most people show a bias towards exaggerated personal qualities and capabilities, an illusion of control over events, and invulnerability to risk (three phenomena collectively known as ‘positive illusions’) [2-4, 14]. Overconfidence amounts to an ‘error’ of judgement or decision-making, because it leads to overestimating one’s capabilities and/or underestimating an opponent, the difficulty of a task, or possible risks. It is therefore no surprise that overconfidence has been blamed throughout history for high-profile disasters such as the First World War, the Vietnam war, the war in Iraq, the 2008 financial crisis and the ill-preparedness for environmental phenomena such as Hurricane Katrina and climate change [9, 12, 13, 15, 16].

If overconfidence is both a widespread feature of human psychology and causes costly mistakes, we are faced with an evolutionary puzzle as to why humans should have evolved or maintained such an apparently damaging bias. One possible solution is that overconfidence can actually be advantageous on average (even if costly at times), because it boosts ambition, morale, resolve, persistence or the credibility of bluffing. If such features increased net payoffs in competition or conflict over the course of human evolutionary history, then overconfidence may have been favoured by natural selection [5-8].

However, it is unclear whether such a bias can evolve in realistic competition with alternative strategies. The null hypothesis is that biases would die out, because they lead to faulty assessments and suboptimal behaviour. In fact, a large class of economic models depend on the assumption that biases in beliefs do not exist [17]. Underlying this assumption is the idea that there must be some evolutionary or learning process that causes individuals with correct beliefs to be rewarded (and thus to spread at the expense of individuals with incorrect beliefs). However, unbiased decisions are not necessarily the best strategy for maximizing benefits over costs, especially under conditions of competition, uncertainty and asymmetric costs of different types of error [8, 18-21]. Whereas economists tend to posit the notion of human brains as general-purpose utility maximizing machines that evaluate the costs, benefits and probabilities of different options on a case-by-case basis, natural selection may have favoured the development of simple heuristic biases (such as overconfidence) in a given domain because they were more economical, available or faster.
The paper studies this question in a simple analytical model of an evolutionary environment in which individuals compete for resources. If the resources are sufficiently valuable, the authors find, overconfidence can indeed be adaptive:
Here we present a model showing that, under plausible conditions for the value of rewards, the cost of conflict, and uncertainty about the capability of competitors, there can be material rewards for holding incorrect beliefs about one’s own capability. These adaptive advantages of overconfidence may explain its emergence and spread in humans, other animals or indeed any interacting entities, whether by a process of trial and error, imitation, learning or selection. The situation we model—a competition for resources—is simple but general, thereby capturing the essence of a broad range of competitive interactions including animal conflict, strategic decision-making, market competition, litigation, finance and war.
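
A bare-bones simulation in the spirit of this setup shows the effect. To be clear, the payoff structure and all parameter values below are my own invention, not the authors' model: when the prize is large relative to the cost of fighting, agents who overrate themselves claim contested resources more often and come out ahead on average.

```python
import random

random.seed(0)

def expected_payoff(bias, reward=10.0, conflict_cost=2.0, trials=20_000):
    """Mean payoff for an agent whose self-assessment is inflated by
    `bias`, competing against unbiased opponents for a resource."""
    total = 0.0
    for _ in range(trials):
        own = random.gauss(0, 1)   # true capability of our agent
        opp = random.gauss(0, 1)   # true capability of the opponent
        # Each side assesses the contest through noise; ours adds its bias.
        i_claim = own + random.gauss(0, 1) + bias > opp + random.gauss(0, 1)
        opp_claims = opp + random.gauss(0, 1) > own + random.gauss(0, 1)
        if i_claim and opp_claims:
            # Conflict: the genuinely stronger side takes the prize.
            total += (reward if own > opp else 0.0) - conflict_cost
        elif i_claim:
            total += reward        # uncontested claim
    return total / trials

# With a big prize and a modest conflict cost, overconfidence pays:
print(expected_payoff(bias=1.0) > expected_payoff(bias=0.0))  # True
```

Shrinking `reward` toward `conflict_cost` erodes or reverses the advantage, which mirrors the paper's condition that rewards must be sufficiently valuable relative to the cost of conflict.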
Very interesting. But I just had a thought -- perhaps this may also explain why many economists seem to exhibit such irrational exuberance over the value of neo-classical theory itself?

Tuesday, August 30, 2011

Algorithms are smarter than people

On the topic of algorithmic trading, I recently posted on some evidence documenting the benefits it brings to markets -- more liquidity, lower spreads and trading costs, etc. On a related topic, Ole Rogeberg at Freakynomics has a nice post reviewing some of the evidence that automated decision tools actually make better decisions than real people when confronting many different kinds of problems. As he notes,
There’s a host of studies showing that human judgment is poor at synthesizing and weighting a large number of different types of evidence, and that simple, statistical models can outperform humans on tasks such as predicting recidivism, making clinical judgments (psychiatry and medicine), predicting divorce, predicting future academic success, etc. (for an entrypoint to this literature, see here for a blogpost I found that has some good quotes from J.D. Trout and Michael Bishop).

I guess the point is that algorithmic trading can be good or bad depending on the algorithm – and that the danger it brings is more if the ecology of trading algorithms active in a market is of a kind that could create cascading ripples destabilizing the market: One set of algorithms lowering the price of a set of stocks, triggering another set of algorithms to sell these stocks to avoid loss, triggering another set of… and so on.
This is precisely the point I've made before about the dangers of algorithms -- it's not one algorithm that might blow things up, but potentially explosive webs of feedback running between many.
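
That web-of-feedback danger is simple to illustrate. In the toy below (every number invented), each "algorithm" is nothing but a stop-loss rule, yet chaining them turns one small dip into a rout:

```python
# Toy cascade of stop-loss algorithms. Every number here is invented;
# the point is only the feedback structure, not realism.

def run_cascade(stop_levels, price, impact_per_sale):
    """Final price after an initial shock propagates through a population
    of algorithms that each sell when the price falls through their stop."""
    remaining = sorted(stop_levels, reverse=True)
    while remaining and price < remaining[0]:
        remaining.pop(0)             # this algorithm's stop is hit: it sells
        price -= impact_per_sale     # its selling pushes the price lower
    return price

# A dip to 99 trips the stop at 99.5; that sale drops the price to 98,
# tripping the next stop, and so on down the chain.
stops = [99.5, 98.5, 97.5, 96.5]
print(run_cascade(stops, price=99.0, impact_per_sale=1.0))         # 95.0

# Same shock, but stops spaced wider than the price impact: no cascade.
print(run_cascade([99.5, 97.0], price=99.0, impact_per_sale=1.0))  # 98.0
```

No single rule here is dangerous in isolation; what matters is the ecology -- how tightly the stops are spaced relative to the price impact of each sale.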

But I think the superior performance of algorithms at making decisions is itself quite striking and not generally recognized. The article to which Rogeberg links makes the following all-too-plausible remark:
Training of large numbers of experts by universities has probably had the perverse effect of increasing the number of people running around making highly confident but wrong judgements. But the tendency to not notice our errors and to place excessive confidence in our subjective judgements is something that all humans suffer from to varying degrees.
One final interesting read -- again thanks to Rogeberg for pointing this out -- is a profile in The Atlantic of Cliff Asness of the quant hedge fund Applied Quantitative Research. AQR was one of the hedge funds involved in the infamous "quant meltdown" of August 2007, which was driven precisely by a positive feedback loop, in this case one which caused a violent de-leveraging among a number of hedge funds using similar strategies and invested in similar assets. This is one of the few cases in which we have a pretty good quantitative model explaining how these kinds of feedback loops emerge, essentially in the same way violent storms (or hurricanes) do in the atmosphere -- through ordinary processes which create the conditions in which explosive events become virtually certain. In the profile, Asness describes the dynamics behind the quant meltdown, which weren't as complex, mysterious or irrational as many people seem to think:
He told the New York Post that he blamed the sudden losses not on AQR's computer models but on "a strategy getting too crowded ... and then suffering when too many try to get out the same door" at the same time.