Inspiration from physics for thinking about economics, finance and social systems

Friday, December 14, 2012

For banks, nothing is illegal

This would be literally unbelievable, except that we've all become desensitized to the double standard of our justice system -- enforcement of laws against ordinary people, and systematic collusion with large banks and corporate offenders to keep anyone from going to jail. I think Matt Taibbi offers the most honest take on this shameful decision to slap HSBC with fines only, rather than pursuing what should have been slam-dunk prosecutions for money laundering and drug smuggling on a global scale:

Wow. So the executives who spent a decade laundering billions of dollars will have to partially defer their bonuses during the five-year deferred prosecution agreement? Are you fucking kidding me? That's the punishment? The government's negotiators couldn't hold firm on forcing HSBC officials to completely wait to receive their ill-gotten bonuses? They had to settle on making them "partially" wait? Every honest prosecutor in America has to be puking his guts out at such bargaining tactics. What was the Justice Department's opening offer -- asking executives to restrict their Caribbean vacation time to nine weeks a year? And people wonder why the US falls year after year a little further down the Corruption Perceptions Index? As of 2012, we're just slightly ahead of Chile, Uruguay and The Bahamas.

So you might ask, what's the appropriate financial penalty for a bank in HSBC's position? Exactly how much money should one extract from a firm that has been shamelessly profiting from business with criminals for years and years? Remember, we're talking about a company that has admitted to a smorgasbord of serious banking crimes. If you're the prosecutor, you've got this bank by the balls. So how much money should you take?

How about all of it? How about every last dollar the bank has made since it started its illegal activity? How about you dive into every bank account of every single executive involved in this mess and take every last bonus dollar they've ever earned? Then take their houses, their cars, the paintings they bought at Sotheby's auctions, the clothes in their closets, the loose change in the jars on their kitchen counters, every last freaking thing. Take it all and don't think twice. And then throw them in jail.

Sound harsh? It does, doesn't it? The only problem is, that's exactly what the government does just about every day to ordinary people involved in ordinary drug cases.
Wednesday, December 12, 2012
Elements of a stable financial system
It's hardly a hell-raising demand for revolution, but this speech by Michael Cohrs of the Bank of England is worth a quick read, and offers some pretty encouraging signs that authorities -- in the UK, at least -- are moving (slowly) toward financial regulations that seem pretty sensible and might really help avoid future crises, or at least make them less frequent. I read it as a kind of wish list, but of wishes that are fairly realistic.
On a theoretical level, perhaps the most important thing Cohrs calls for is greater awareness of economic and financial history, with the idea that we might prepare our minds better for the natural instabilities that seem to create crises so frequently:
At the heart of much of the current policy debate is how the FPC, PRA and FCA develop better processes for anticipating the next problem -- whether the problem is an asset bubble, poor risk management or a flawed or misunderstood financial product. And these are important steps to take. But it seems to me there is an inherent tendency for policymakers to re-fight the last war. As I said above, I am a believer that understanding the past provides a foundation on which to assess the future. But we shouldn’t pretend we can eliminate financial crises completely. Nor that the next crisis will necessarily be a carbon copy of the last one.
My anxiety about getting financial regulation to better mitigate future risks has its roots in the issues one sees in the financial crises of the past couple of hundred years or so. Virtually every type of financial institution has been the cause of a crisis at some point in history – country banks back in 1825, universal banks in 1931, small banks in the 1970s, savings and loan companies in the 1980s, international banks in the 1980s and 1990s (debt crises in Latin America and Asia respectively), and even a hedge fund in 1997.
Pretty much all types of financial institution got involved in the problems of 2007/2008. The roll call included insurance companies (although thankfully not those in the UK) alongside investment banks as well as some more traditional commercial and mortgage banks. I find it hard to see a common thread (other than high leverage ratios) amongst the types of institutions that struggled or the mistakes that they made. It is not clear that the reforms we are putting into place today would have, or could have, averted all the problems faced in these crises. Therefore, experience tells me its origins are unlikely to be in an institution and from a product that is obvious to us now. ... I realize this uncertainty is rather unhelpful.

Actually, I think it is very helpful. Nothing is more dangerous than the belief that now, as we know how things can go wrong, we can probably perform a few engineering tricks and hence avoid further problems in the future. This was the facile belief furthered in the decade prior to the past crisis, especially in basic textbooks of economics and finance and in research papers promoting belief in the inevitable "spiral to efficiency" of modern markets (infamously described in this rather embarrassing 2005 paper by Robert Merton and Zvi Bodie, which was published even as the markets were on the verge of collapse!).
Cohrs goes on to discuss a number of ideas, all pursued with the aim of making finance more "sustainable." These include establishing simple rules by which large institutions can be wound down and allowed to fail safely when they ought to (this might include using penalties or taxes to establish insurance funds beforehand to handle such events), making financial institutions LESS CONNECTED, and changing the culture of finance so that financial institutions themselves "ensure they can be regulated." OK, that final one may be a rather huge challenge.
The good thing is that people from the Bank of England are going around saying these things. Let's hope they can manage to put some of these principles in place, especially in some globally consistent way.
Saturday, December 8, 2012
The Leverage Cycle
My latest Bloomberg column will appear
sometime Sunday night, I expect. I wanted to give readers a few links here to various key papers of John Geanakoplos on the
leverage cycle, as well as a little further discussion of a few
points.
First, this is the most detailed paper Geanakoplos has published (as far as I know) describing the leverage cycle -- the natural feedback process that repeatedly drives economies through cycles in which leverage rises, driving increasing asset prices, and then falls as investors become uncertain and more cautious and demand more collateral; prices then crash down accordingly. His argument is that leverage (determined by collateral rates) is a key macroeconomic variable completely independent of interest rates, and often just as important to the economy at large. In particular, increasing (or decreasing) leverage is one key direct cause of increasing (or decreasing) prices. As evidence, look at the figure below for housing prices from 2000 through 2009. It shows how leverage in sub-prime mortgages -- the inverse of the average required down payment -- went up and then down, in both cases just in advance of housing prices. (Okay, this isn't proof of a causal link, I suppose, but it's enough to convince me.)
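To make the feedback loop concrete, here is a toy simulation -- my own sketch, not Geanakoplos's model, with made-up parameters -- in which lenders set collateral requirements from recent volatility, and the leverage they allow feeds back into demand and prices:

```python
import numpy as np

# Toy sketch of the leverage-price feedback (my own illustration, NOT
# Geanakoplos's model). Looser margins allow more leverage, which pushes
# prices up; rising volatility makes lenders demand more collateral,
# leverage falls, and prices fall with it.
rng = np.random.default_rng(0)
T = 400
price = np.ones(T)
margin = np.full(T, 0.20)               # required down payment

for t in range(1, T):
    leverage = 1.0 / margin[t - 1]      # e.g. 20% margin -> 5x leverage
    # optimists' demand pressure grows with available leverage;
    # a weak pull toward fundamental value keeps the toy model bounded
    ret = (0.002 * (leverage - 5.0)
           - 0.05 * np.log(price[t - 1])
           + 0.02 * rng.standard_normal())
    price[t] = price[t - 1] * np.exp(ret)
    # lenders tighten margins when recent volatility is high
    recent = np.diff(np.log(price[max(0, t - 20):t + 1]))
    margin[t] = np.clip(0.05 + 3.0 * recent.std(), 0.02, 0.50)

print(f"price range: {price.min():.2f} to {price.max():.2f}")
```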
But before reading the "serious" paper I recommend first reading the text of this talk that Geanakoplos gave two years ago in Italy. It's much less formal and makes all the main points in a clear way.
From a practical point of view, I think two things stand out to me in his arguments:
First, given the clear importance of leverage in driving economic outcomes, it is quite remarkable that the Federal Reserve has not in the past made any systematic effort to collect the kind of data it would need to monitor average collateral rates in the economy. Between 2000 and 2005, for example, no one at the Fed was going to banks and collecting information on how much collateral they were demanding when lending. This was just not considered a crucial macroeconomic variable. Judging from the tone of his talk, Geanakoplos finds this pretty amazing too. He mentions that the Fed contacted him in 2008 or so to get hold of data of this kind that he had collected. It seems that the Fed has now accepted the systemic importance of leverage and is seriously considering including leverage as a key variable in future macroeconomic monitoring. Whether banks and hedge funds will be required to report leverage levels, I don't know, but the idea is at least on the table. A good thing, I think.
A second interesting point is one that seems kind of obvious in retrospect. The leverage that drives the rise of prices in Geanakoplos's picture is leverage in long positions, which enables optimistic investors to buy more than they would otherwise be able to buy. Leverage in the opposing sense -- short leverage, allowing pessimists to speculate on a collapse in the market -- would act to depress prices. Hence, it is probably more than a little significant that credit default swaps (CDS) for mortgage-backed securities were created in 2005. These most likely acted as a key trigger for the beginning of the crash. As Geanakoplos writes,
In my view, an important trigger for the collapse of 2007–9 was the introduction of CDS contracts into the mortgage market in late 2005, at the height of the market. Credit default swaps on corporate bonds had been traded for years, but until 2005 there had been no standardized mortgage CDS contract. I do not know the impetus for this standardization; perhaps more people wanted to short the market once it got so high. But the implication was that afterward the pessimists, as well as the optimists, had an opportunity to leverage. This was bound to depress mortgage security prices. ... this, in turn, forced underwriters of mortgage securities to require mortgage loans with higher collateral so they would be more attractive, which, in turn, made it impossible for homeowners to refinance their mortgages, forcing many to default, which then began to depress home prices, which then made it even harder to sell new mortgages, and so on. I believe the introduction of CDS trading on a grand scale in mortgages is a critical, overlooked factor in the crisis. Until now people have assumed it all began when home prices started to fall in 2006. But why home prices should begin to fall then has remained a mystery....Of course, if CDS were introduced from the beginning, prices would never have gotten so high. But they were only introduced after the market was at its peak.
Not that the crisis wouldn't have happened in the absence of CDS contracts, but they probably hastened the collapse.
Finally, on a related matter, I think many people may find good value in this article by Ray Dalio, head of Bridgewater Associates. This is Dalio's attempt to give a simple explanation of "how the economy works," and he puts the expansion and contraction of credit at the very center. He is essentially making much the same argument as Geanakoplos, but in a less formal way. Fun to read and very instructive, in my opinion.
Friday, December 7, 2012
Transparency, no -- people might know they're being screwed
By way of Money Science, I had to read this twice just to believe I hadn't misread it. The argument is that transaction charges on credit cards should NOT be transparent to the customer because the customer might then feel cheated, become angry and upset, etc. Better if those charges were lumped into the bulk price of the purchase so the buyer won't know what the bank is charging. Don't upset people with things like this:
...the card operators and issuers are ripping off customers by taking percentage fees opaquely. These fees are applied throughout the process, and the lack of visibility of charging means that customers don't know they're being ripped off.

The solution: more transparency.

Now I can see the argument and solution rationale, but I fundamentally disagree with it.

The reason I disagree is that customers are not rational when it comes to money.

They will happily pay fees to ATM operators, currency exchanges, PayPal and more if it is convenient and supports instant gratification.

I should know, as I'm one of them.

Do I count the fees and the breakdown of costs for every transaction?

No.

Do I object when I see the cost of a transaction?

Yes.

Take the example of booking an airline ticket, where you see that there is a £4.50 ($6) charge for booking the ticket using a credit card.

Do we get upset with the airline?

No.

Are we pissed off with the card company and the bank?

Yes.

Or take the example of my own bank, which recently started itemising cross-border transactions with the charge per transaction.

Do I appreciate the transparency?

No.

Do I object to the fee per transaction?

Of course I do.

In other words, customers would far rather have everything bundled into one charge where the bank fees are hidden, rather than seeing the fees per transaction itemised explicitly.

Uh, no, actually. I'd rather see the fees made explicit. That way, I think, the company charging the fee might think twice about my reaction to it. What do others think?
Thursday, December 6, 2012
A new take on causality
It's not often that something fundamentally new comes along on the topic of causality. That notion is one of the most basic concepts in science and philosophy, indeed in all human thinking (non-human as well, I would guess). Finding causal links helps us interpret the world, make predictions, render the unpredictable environment around us a little less unpredictable. But we still have a lot to learn about causality, and especially how to infer causal links using data.
This is clear from a fascinating recent study that I think will ultimately have quite an impact on applied studies of causal links in fields ranging from economics and finance to ecology. This paper by George Sugihara and colleagues -- it's entitled "Detecting Causality in Complex Ecosystems" -- is well worth a few hours of study, as it explores the history of attempts to detect causal links from empirical data and then demonstrates a new technique that appears to be a significant advance over past techniques.

The key problem in inferring causal links from data, of course, is that mere correlation does not imply causation. The two things in question, A and B, might both be linked to some other causal factor C, yet have no causal links running from one to the other. In economics, Clive Granger became famous for proposing, in this 1969 paper, a way to go beyond correlation. He reasoned that if some thing X causally influences some other thing Y, then including X in a predictive scheme should make predictions of Y better. Conversely, excluding X should make predictions worse. Causal factors, in other words, can be identified as those that reduce predictive accuracy when excluded.

This notion of ‘Granger causality’ makes obvious intuitive sense, and has found many applications, especially in econometrics. However, read the original paper and you quickly see that the theory was developed explicitly for use with stochastic variables, especially in linear systems. As Granger noted, “The theory is, in fact, non-relevant for non-stochastic variables.” Which is unfortunate, as so much of the world seems to be more suitably described by nonlinear, deterministic systems.
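To make Granger's recipe concrete, here is a minimal sketch -- my own illustration, with made-up coefficients, not Granger's procedure in full econometric dress -- comparing a linear prediction of Y from its own past against one that also uses X's past:

```python
import numpy as np

# Minimal sketch of Granger's idea: X "Granger-causes" Y if past values of
# X improve linear predictions of Y beyond what Y's own past achieves.
rng = np.random.default_rng(1)
n, lag = 2000, 2
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(lag, n):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

def residual_var(target, predictors):
    """Least-squares fit; return the variance of the residuals."""
    A = np.column_stack(predictors + [np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ coef)

Y = y[lag:]
own_lags = [y[lag - k:n - k] for k in (1, 2)]
x_lags = [x[lag - k:n - k] for k in (1, 2)]

v_restricted = residual_var(Y, own_lags)         # Y's past only
v_full = residual_var(Y, own_lags + x_lags)      # Y's past plus X's past
print(f"restricted var {v_restricted:.4f}, full var {v_full:.4f}")
# v_full comes out clearly smaller, so X is flagged as Granger-causal for Y.
```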
I've just written for Nature Physics a short essay describing the Sugihara et al. work. I assume many people won't have access to that article (oddly enough, I don't either!) so I thought I'd include a few words here. One problem with Granger causality, the authors point out, is that intimate connections between the parts of any nonlinear system make ‘excluding’ a variable more or less impossible. They demonstrate this for a simple nonlinear system of two variables describing the direct interaction of, say, foxes and rabbits. Call the populations X and Y. Following Granger, you might exclude Y and see if you can still predict X. If exclusion of Y reduces your ability to predict, then you've found a causal link. But this recipe yields nothing in this case, because of the nonlinearity. The mathematical model they study has, by construction, causal links between the two variables. But the Granger method won't show it.

Why? A key result in dynamical systems theory -- known as the Takens embedding theorem -- implies that one can always reconstruct the dynamical attractor for a system from data in the form of lagged samples of just one variable. In effect, X(t) (fox numbers in time) is always predictable from enough of its earlier values. Hence, excluding Y doesn't make X any less predictable. The notion of Granger causality would erroneously conclude that Y is non-causal.

To get around this problem, Sugihara and colleagues use the embedding theorem to their advantage. The reconstruction trick can be done for both variables X and Y. I won't dwell on the technical details, which can be found in the paper, but this yields two mathematical "manifolds" -- essentially, subsets of the space of possible dynamics that describe the actual dynamics that occur. Both of these describe the dynamical attractor of the entire system, one using the variable X, the other the variable Y. Now, sensibly, if X has a causal influence on Y, one should expect this influence to show up as a direct link between the dynamics on these two manifolds: knowing states on one manifold (for Y) at a certain time should make it possible to know the states on the other (for X) at the same time.

That IS technical, but it's really not complicated. The original paper offers links to some beautiful simulations that aid understanding. The strength of the paper is to show how taking this small step into dynamical systems theory pays big dividends. To begin with, the method gives superior performance over the Granger approach on several test problems. More impressively, it appears to have already resolved an outstanding puzzle in contemporary ecology.
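Before getting to that puzzle, here is a bare-bones sketch of the cross-mapping step -- my own stripped-down version, not the authors' code -- using the kind of coupled logistic maps studied in the paper (the coupling constants are mine, with X driving Y and not the reverse):

```python
import numpy as np

# Bare-bones cross mapping (my simplification of convergent cross mapping).
# Build a lagged embedding of one series, find nearest neighbours on its
# "shadow manifold", and use them to estimate the other series. If X drives
# Y, then Y's manifold encodes X, and estimates of X from M_Y are good.
def embed(series, E=3, tau=1):
    n = len(series) - (E - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(E)])

def cross_map_skill(x, y, E=3, tau=1, k=4):
    My = embed(y, E, tau)                # shadow manifold built from y alone
    x_target = x[(E - 1) * tau:]         # contemporaneous x values
    estimates = np.empty(len(My))
    for i, point in enumerate(My):
        d = np.linalg.norm(My - point, axis=1)
        d[i] = np.inf                    # exclude the point itself
        nn = np.argsort(d)[:k]
        w = np.exp(-d[nn] / max(d[nn].min(), 1e-12))
        estimates[i] = np.sum(w * x_target[nn]) / np.sum(w)
    return np.corrcoef(estimates, x_target)[0, 1]

# Coupled logistic maps: x is autonomous, y is driven by x.
n = 500
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])

print("estimate X from M_Y (evidence X drives Y):", round(cross_map_skill(x, y), 3))
print("estimate Y from M_X (evidence Y drives X):", round(cross_map_skill(y, x), 3))
```

The asymmetry in the two printed skills is the whole trick: only the driven variable's manifold carries a good image of the driver.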
Ecologists have for decades debated what's going on with two fish species, the Pacific sardine and the northern anchovy, whose populations on a global scale alternate powerfully on a decadal timescale (see the figure below). These data, some suggest, imply that the species must have some direct competition or other interaction, as when the numbers of one go up, those of the other go down. Failing any direct observation of such interactions, however, others have proposed that the global synchrony betrays something else -- global forcing from changing sea surface temperatures, which just happen to affect the two species differently.

Strikingly, the results from the new method -- Sugihara and colleagues give it the memorable name "convergent cross mapping" -- seem to resolve the matter in one stroke. The analysis shows no evidence at all for a direct causal link between the two species, and clear evidence for a link from sea surface temperature to each species. In this case, the correlation is NOT reflecting causation, but a simultaneous response to a third factor, though a response in opposite directions.

So there you go -- following the basic ideas of dynamical systems theory and actually reconstructing attractors for nonlinear systems makes it possible to tease out causal links far more powerfully than correlation studies alone. This is a major advance in our understanding of causality, and I find it hard to believe this technique won't find immediate application in economics and finance, as well as in ecology, neuroscience and elsewhere. If you're involved in time series analysis, looking for correlations and causal relations, give it a read.
Saturday, December 1, 2012
Weird and puzzling browser phenomenon
As a curiosity, I wonder if anyone out there can explain a peculiar phenomenon. When I click on a hypertext link on a web page -- this one, for example -- I typically right-click and then choose "Open Link in New Tab." That way I keep the original page open while also opening the new resource. Do this on the link above and the new tab will display the excellent page of Simoleon Sense, a weekly collection of generally great articles from around the web related to human psychology and behavior, finance, biology, mathematics and other such things.

Now the mysterious phenomenon -- Simoleon Sense is the ONLY web site I have ever visited where right-clicking on links doesn't work for me in a reliable way. On that site, when I right-click on a link, typically nothing happens. At first. I have to right-click repeatedly, 2, 3, 4, 5, even 8 times, until finally the box opens to show the "Open Link in New Tab" option. It's a completely random phenomenon. (I just noticed that left-clicking gives odd behavior too.) My clicking just doesn't work in a reliable way on Simoleon Sense, though it does everywhere else. Why is that?
Friday, November 30, 2012
The infallible portfolio?
In this short essay, Ricardo Fernholz of Columbia University makes what seems (to me at least) a rather incredible claim: that it's relatively easy to construct a portfolio that is guaranteed to outperform the S&P 500 over one year (or any other interval you like), and that also has a limited downside during that year. The idea is to take the S&P 500 index and tweak it a little, creating a portfolio with less weight in stocks with higher capitalization and more weight in those with lower capitalization, and presto -- you have something guaranteed to outperform the S&P 500, he claims. Is that possible? That easy? Here's a little more detail:
To understand how this works, let’s consider the S&P 500 U.S. stock index. Suppose that we wish to invest some money in S&P 500 stocks for one year. Currently, Apple has a total market capitalization of roughly $500 billion, making it the largest stock in the S&P 500 and equal to approximately 4% of the total capitalization of the entire index. Suppose that we believe it is very unlikely or impossible that either Apple or any other corporation’s capitalization will be equal to more than 99% of the total S&P 500 capitalization for this entire year during which we plan to invest. As long as this turns out to be true, then it is actually pretty simple to construct a portfolio containing S&P 500 stocks that is guaranteed to outperform the S&P 500 index over the course of the year and that has a limited downside relative to this index. In essence, we can construct a portfolio that will never fall below the value of the S&P 500 index by more than, say, 5% and that is guaranteed to achieve a higher value than the S&P 500 index by the end of the year.[1]
This is not a trivial proposition. If we combine a long position in this outperforming portfolio together with a short position in the S&P 500 index, then we have a trading strategy that requires no initial investment, has a limited downside, and is guaranteed to produce positive wealth by the end of the year. According to standard financial theory, this should not be possible.[2] Furthermore, the assumptions that guarantee that our portfolio will outperform the S&P 500 index appear entirely reasonable. After all, not for one day in the more than 50-year history of the S&P 500 has one corporation’s market capitalization come anywhere close to equaling even 50% of the total capitalization of the market. A 99% share of total market capitalization would essentially amount to there being only one corporation in the entire U. S. for an entire year. This seems like neither a likely outcome nor one that investors should take seriously when constructing their portfolios.
What does a portfolio made up of S&P 500 stocks that is guaranteed to outperform the S&P 500 index look like? There are many different ways in which such a portfolio can be constructed, but one feature common to all such portfolios is that relative to the S&P 500 index itself, they place more weight on those stocks with small total market capitalizations and less weight on those stocks with large total market capitalizations. The weight that an index such as the S&P 500 places on each individual stock is equal to the ratio of that stock's total market capitalization relative to all stocks' total market capitalizations taken together. In the case of Apple, then, the S&P 500 index would place a weight of roughly 4% in this individual stock while those portfolios that use HFT to outperform this index would instead place a weight of less than 4% in Apple stock.

The only condition for this to work, he suggests, is the assumption that no stock in the market comes to dominate the market, in the sense that its market capitalization comes to be a high fraction of that of the entire market. This is, as he notes, a fairly weak assumption, although the weaker you make the assumption, the longer the time interval over which this idea apparently works.
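As a concrete illustration of that tilt, one standard construction in Fernholz's stochastic portfolio theory is the "diversity-weighted" portfolio, which raises each market weight to a power p < 1 and renormalizes. A small sketch (the capitalizations here are made up):

```python
import numpy as np

# Diversity-weighted portfolio (a standard construction in Fernholz's
# stochastic portfolio theory; these capitalizations are invented).
# Raising market weights to a power p < 1 and renormalizing shifts weight
# from large-cap to small-cap stocks relative to the cap-weighted index.
caps = np.array([500.0, 120.0, 60.0, 20.0, 5.0])   # market caps, $bn
market_weights = caps / caps.sum()

p = 0.5
diversity_weights = market_weights**p / np.sum(market_weights**p)

for mw, dw in zip(market_weights, diversity_weights):
    print(f"cap weight {mw:6.2%}  ->  diversity weight {dw:6.2%}")
# The largest stock ends up underweighted and the smallest overweighted,
# exactly the tilt described in the quoted passage.
```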
Now, I'm not doubting the veracity of this claim. I'm just stunned that such a simple recipe could work, and can't see the intuition behind it. What if the high cap stocks happen to perform brilliantly next year, relative to the lower cap stocks? Wouldn't this portfolio with underweighted high cap stocks then underperform the S&P Index? I've had a quick look at the paper Fernholz references as a detailed support of his claim, and there he explains the conditions for the theorem to hold in slightly different terms:
The conditions mandate, roughly, that the largest stock have a "strongly negative" rate of growth, resulting in a sufficiently strong repelling drift away from an appropriate boundary; and that all other stocks have "sufficiently high" rates of growth.

That sounds very different from the quite plausible assumption about no market dominance by a single stock. Indeed, this seems like saying that if one assumes large-cap stocks will perform poorly, and small-cap stocks better, then we can build a portfolio guaranteed to outperform the S&P 500 index by weighting small-cap stocks more heavily. Isn't that like assuming we know the future?
But maybe I'm wrong. I'd be interested in the thoughts of others. The paper is quite dense and light on intuitive discussion of the logic. Fernholz suggests that perhaps the existence of these superior portfolios -- which require continuous rebalancing through high-frequency buying and selling of many stocks -- explains some of the very high profits consistently earned by quantitative high-frequency hedge funds such as Renaissance Technologies' Medallion Fund. I find more convincing the analysis of Khandani and Lo, which seemed to suggest that much of the performance of quant hedge funds over the past decade or so can be accounted for by fairly vanilla long-short equity strategies, with increasing use of leverage in the mid 2000s (used to maintain high reported earnings even as raw earnings fell off due to competition).
Friday, November 16, 2012
Why time matters
I'm still thinking about the ideas of Ole Peters and the important difference between time and ensemble averages. A few comments suggest that some people think I have "lost the plot," but I'm convinced this issue is indeed extremely important and generally underappreciated. A few things to add for now:
1. This post by economist Lars Syll from earlier this year does an excellent job of laying out the main issues and linking them to the Kelly criterion: a practical criterion for playing risky gambles that is based explicitly on time averages. Lars couldn't have explained the basic ideas more clearly.
2. From some comments on other blogs, similar to some I've seen here, many people familiar with probability theory find it hard to accept that the time-average growth of a random multiplicative process is just not equal to the (usual) expected return of a single round. It isn't. Start with any number you like, multiply it by a long sequence of numbers, each either 0.9 or 1.1 drawn with equal probability, and you will find that the number tends to get smaller. In the limit of an infinite sequence, the result heads to 0. And the result quoted in the Towers Watson paper, a 1% decline on average per period, is correct. (A quick numerical check appears below, after item 3.)
3. I came across an interesting comment from Tim Johnson, writing on Rick Bookstaber's blog:
The model Peters develops appears to be remarkably similar to the one Durand proposed in 1957 (The Journal of Finance, 12, 348-363) and is discussed by Székely and Richards (The American Statistician, 2004, Vol. 58, No. 3). I do not disagree with your assessment that there has been an error in economics for the past 77 years (just one?), but mathematicians working in finance have generally ignored Samuelson's attacks on logarithmic utility. Poundstone's book on the Kelly criterion is a good description of the battle in the 1960s, and there is a rich contemporary literature that develops Kelly's ideas...

The paper by Székely and Richards is indeed worth a read, although I'm convinced that Peters has gone considerably further than Durand.
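Here is the quick numerical check promised in item 2 above -- a minimal simulation (the parameters are mine) of the 0.9/1.1 multiplicative gamble:

```python
import numpy as np

# Each step multiplies wealth by 0.9 or 1.1 with equal probability. The
# ensemble average per step is exactly 1.0, but every up/down pair
# multiplies wealth by 1.1 * 0.9 = 0.99 -- a 1% decline per pair -- so
# almost every individual trajectory decays toward zero.
rng = np.random.default_rng(0)
steps, trials = 10_000, 1_000
factors = rng.choice([0.9, 1.1], size=(trials, steps))
log_wealth = np.log(factors).sum(axis=1)

print("median final wealth:", np.exp(np.median(log_wealth)))
print("time-average growth per step:", log_wealth.mean() / steps)
print("theory: 0.5*ln(0.9*1.1) =", 0.5 * np.log(0.9 * 1.1))
```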
Wednesday, November 14, 2012
Ultimate Limits to Growth
My latest essay in Bloomberg touched on ultimate limits to energy growth (and quite possibly economic growth) due to the accumulation of waste energy in the environment. It's really just an exercise in taking basic physics into account while extrapolating trends in energy use into the future. The conclusion is that continued exponential growth in energy use -- which we've experienced over the past few centuries (and possibly much longer) -- cannot last for much more than a century or so. What about economic growth? We don't know. Economists theorize about a great "decoupling" of energy from economic productivity, but that hasn't happened so far in any country. My conclusion is that economic growth must also end, fairly soon (by which I mean, say, within 100 years) -- unless we transform our economic activity to involve far less energy, in a way we have never done before.
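To get a feel for the numbers, here is a minimal back-of-the-envelope calculation -- my own, using Murphy's 2.3% reference growth rate, not figures from the Bloomberg essay:

```python
import math

# Sustained exponential energy growth at 2.3% per year (Murphy's
# reference rate; round numbers, my own arithmetic).
r = 0.023
print("doubling time:", math.log(2) / r, "years")    # ~30 years
print("tenfold time:", math.log(10) / r, "years")    # ~100 years
# Each century of continued growth multiplies energy use by ~10x,
# so 400 years of growth means a factor of roughly 10,000.
print("growth factor over 400 years:", math.exp(r * 400))
```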
I noticed that Noah Smith has a post criticizing physicist Tom Murphy, whom I cited in my article. I love Noah's blog and read everything he writes, but I don't think this is his fairest criticism, although parts are fair. Certainly, I can see that economists might feel that their best arguments weren't put forward in the dialogue Murphy recalled between himself and an economist, the subject of Murphy's most widely read post. (Note: I didn't cite that particular post Noah refers to, but this one of Murphy's, which looks at trends in energy growth alone.) But reading Noah, I'm led to believe that my understanding of the prevailing view among modern economists on economic growth is hugely mistaken. Indeed, he makes it sound as if economists generally accept that growth must end, and fairly soon (if true, I'm very happy about that).
Murphy made the point that, if we extrapolate our current and past energy growth into the future, then we will actually boil the oceans in 400 years (with 2.3% energy growth; sooner with faster growth). To this Noah responds,
This is correct. And in fact, Murphy didn't even need to mention waste heat or anything like that to make his argument; he could have just said "Hey, eventually the Sun will explode, and then the whole Universe will degrade into heat, and where will your economy be then?" So what if that happens 500 million years in the future, or 10^100 years? What's the difference? One way or another, the human race is kaput!

Yes, of course. But the point is that 400 years is not very long. And we don't need the oceans boiling before we would see important temperature changes (and other associated environmental changes) that would make life rather uncomfortable. I think Murphy is right that most people do not appreciate how soon in the future (soon on a timescale of human history) continued growth of energy use brings problems. This isn't a problem set some 5 billion years in the future. This is one reason I think so many people have found Murphy's posts worth reading: this seems really surprising to them.
Noah goes on:
Are economists ignoring this basic fact? Do economists' models crucially hinge on the idea that economic growth will continue forever and ever and ever? No. The "long term trend growth" that is used in growth and business cycle models is only meant to represent a trend that lasts longer than the business cycle - so, longer than a decade or two. No economist - I hope - thinks that currently living humans are making economic decisions based on what they think is going to happen in 400 years, or 2500 years, or 500 million years.

Again, I think this is just the point Murphy is trying to make -- that if these effects are looming only 400 years (or significantly less) in the future, then perhaps contemporary humans ought to be taking them into account now in making their economic decisions. Certainly it would be appropriate for our leaders to be casting an eye on this long term, and to seek advice from our best economists, who might help them think clearly about how our society could manage to change in response. Does economics end with the business cycle? Nothing longer term than that? 400 years is perhaps only 20-30 business cycles away.
These are the main points I think Murphy was trying to make. I do agree with Noah that other parts of Murphy's original post are much less convincing. When he moves into proper economic territory, discussing prices, scarcity of future energy, etc., my own feeling was to take that all with a grain of salt, as quite a lot of speculation.
In any event, I'm glad Noah has brought the attention of more economists to the quite short timescale on which continued energy growth leads to problems. If this is already well understood in economics, and built into theories of growth, then great, I've learned something. In my experience a lot of people think we'll be fine and continue growth if we can only find some cheap, infinite and non-polluting energy source to power our future. That's not the case.
Tuesday, November 13, 2012
Why expected value is a mistake
I want to take a closer look at the very interesting work of Ole Peters I mentioned in my last post. He argues that the ensemble averages typically used in economics and finance to compute "expected" returns are, in many cases, inappropriate for making decisions in the real world; in particular, they severely underestimate risks. Peters begins with a simple gamble:
Let's say I offer you the following gamble: You roll a dice, and if you throw a six, I will give you one hundred times your total wealth. Anything else, and you have to give me all that you own, including your retirement savings and your favorite pair of socks. I should point out that I am fantastically rich, and you needn't worry about my ability to pay up, even in these challenging times. Should you do it? ... The rational answer seems to be "yes" -- the expected return on your investment is 1,583 1/3% in the time it takes to throw a dice. But what's your gut feeling?

As he notes, almost no real person would take this bet. You have 5 chances out of 6 of being left destitute, and one of being made very much wealthier. Somehow, most of us weight outcomes differently than the simple and supposedly "rational" perspective of maximizing expected return. Why is this? Are we making an error? Or is there some wisdom in this?
Peters' gamble is a variation on the famous St Petersburg "paradox" proposed originally by Nicolas Bernoulli, and later discussed by his cousin Daniel. There the question is to determine how much a rational individual should be willing to pay to play a lottery based on a coin flip. In the lottery, if the first flip is heads, you win $1. If the first is tails, you flip again. If the coin now comes up heads, you win $2; otherwise you flip again, and so on. The lottery pays out 2^(n-1) (^ meaning exponent) dollars if the first head comes up on the nth flip. An easy calculation shows that the expected payout of the lottery is infinite, given by a sum that does not converge: 1*(1/2) + 2*(1/2)^2 + 4*(1/2)^3 + ... = 1/2 + 1/2 + 1/2 + .... The "paradox," again, is why real people do not find this lottery infinitely appealing and generally offer less than $10 or so to play.
This is a paradox, of course, only if you have some reason to think that people should act according to the precept of maximizing expected return. Are there any such reasons? I don't know enough of the history of economics and decision theory to say; perhaps it can be shown that such behavior is rational in some specific sense, i.e. in accordance with some set of axioms? But if so, what the paradox really seems to establish is the limited relevance of such rules to living in the real world (that such rules capture an ineffective version of rationality). Peters' resolution of the paradox shows why (at least for my money!).
His basic idea is that we live in time, and act in time, and have absolutely no choice in the matter. Hence, the most natural way to consider the likely payoff coming from any gamble is to imagine playing the gamble many times in a row (rather than many times simultaneously, as in the ensemble average). Do this indefinitely and you should encounter all the possible outcomes, both good and bad. Mathematically, this way of thinking leads Peters to consider the time average of the growth rate (log return) of the wealth of a player who begins with wealth W and plays the gamble over N periods, in the limit as N goes to infinity. In his paper he goes through a simple calculation and finds the formula for this growth rate:
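The formula itself appeared as an image in the original post. Reconstructed from the surrounding definitions (and, as far as I can tell, matching Peters' paper; treat this as my reconstruction, not a quotation), it reads:

$$
\begin{aligned}
\bar{g} &= \lim_{N\to\infty}\frac{1}{N}\,\ln\frac{W_N}{W_0} \\
        &= \sum_i p_i \ln\frac{W - c + r_i}{W} \\
        &= \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} \ln\frac{W - c + 2^{\,n-1}}{W},
\end{aligned}
$$

where $W$ is the player's initial wealth, $c$ the cost of one round, and $r_i$ the payout of outcome $i$.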
The third line here is explicitly for the St Petersburg lottery, while the second line holds more generally for any gamble with probability p_i of giving a return r_i (with the sum extending over all possible outcomes).
This immediately gives more sensible guidance on the St Petersburg paradox, as this expected growth rate is positive for a cost c sufficiently low, and negative when c becomes too high. Most importantly, how much you ought to be willing to pay depends on your initial wealth W, as this determines how much you can afford to lose before going broke. Notice that this aspect doesn't figure in the ensemble average in any way. It's an initial condition that actually makes the gamble different for players of different wealth. Coincidentally, this result is identical to a solution of the paradox originally proposed by Daniel Bernoulli, who simply postulated a logarithmic utility and supposed that people try to maximize utility, not raw wealth. That idea reflects the fact that further riches tend to matter relatively less to people with more money. In contrast, Peters' result emerges without any such arbitrary utility assumptions (plausible though they may be). It is simply the realistic expected growth rate for a person playing this game many times, starting with wealth W. Putting numbers in shows that the payoff becomes positive for a millionaire for a cost c less than around $10. Someone with only $1,000 shouldn't be willing to pay more than about $6.
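A quick numerical check of those last figures -- my own calculation, using the reconstructed growth-rate formula above:

```python
import numpy as np
from scipy.optimize import brentq

# Find the cost c at which the time-average growth rate of the St
# Petersburg gamble crosses zero, for two initial wealths W.
def growth_rate(c, W, n_max=200):
    n = np.arange(1, n_max + 1)
    return np.sum(0.5**n * np.log((W - c + 2.0**(n - 1)) / W))

for W in (1_000_000, 1_000):
    c_star = brentq(lambda c: growth_rate(c, W), 1.0, 0.99 * W)
    print(f"W = ${W:,}: worth playing up to c = ${c_star:.2f}")
# Consistent with the text: roughly $10 for the millionaire, and
# about $6 for someone with $1,000.
```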
It's also useful to go back and work things out for the simpler dice game. One thing to note about the formula is that the average growth rate is NEGATIVE INFINITE for any gamble in which a person stands to lose their entire wealth in one go, no matter how unlikely the outcome. This is true of the dice gamble as laid out before. I was wondering whether this really made any sense, but after some further exploration I now think it does. The secret is to again consider that the person playing has wealth W and that the cost of "losing" isn't the entire wealth, but some cost c. A simple calculation then shows that the time-average growth rate for the dice game takes the form shown in the figure below, which plots the growth rate versus c/W, the cost as a fraction of the player's wealth.
Here you see that the payoff is positive, and the gamble worth taking, if the cost is less than about 60% of the player's wealth. If more than that, the time-average growth rate is negative. And it becomes strongly more negative as c/W approaches 1, with the original game recovered for c/W = 1. Again, everything makes more sense when a person's initial wealth is taken into account. This initial condition really matters, and the likely payoff of a gamble depends strongly on it, as lower wealth means a higher chance of going bankrupt sooner and then being out of the game entirely. The possibility of losing all your wealth on one turn, no matter how unlikely, becomes decisive because it becomes certain in the long run.
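That 60% threshold is easy to verify numerically (my check, assuming a win multiplies wealth by 101 while a loss costs the fraction c/W of wealth):

```python
import numpy as np

# Time-average growth rate of the modified dice game:
#   g(c/W) = (1/6) ln(101) + (5/6) ln(1 - c/W)
cw = np.linspace(0.0, 0.999, 1000)
g = (1 / 6) * np.log(101) + (5 / 6) * np.log(1 - cw)

threshold = cw[np.argmax(g < 0)]   # first c/W where growth turns negative
print(f"growth rate turns negative near c/W = {threshold:.3f}")
# Prints ~0.603: worth playing only if the stake is below ~60% of wealth,
# and g -> -infinity as c/W -> 1, recovering the original all-or-nothing game.
```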
Again, this way of thinking likely has significance far beyond this paradox. It's really pointing out that ensemble averages are very misleading as guides to decision making, especially when the quantities in question, potential gains and losses, become larger. If they remain small compared to the overall wealth of a person (or a portfolio), then the ensemble and time averages turn out to be the same, giving a formula in which initial wealth doesn't matter. But when potential gains/losses become large, then the initial condition really does matter and the ensemble average is dangerous. These points are made very well in this Towers Watson article I mentioned in an earlier post.
Which brings me to one final point. Ivan in comments suggested that perhaps Peters has changed the initial problem by looking at the time average rather than the ensemble average, and so has not actually resolved the St Petersburg paradox. I'm not yet entirely sure what I think about this. The paradox, if I'm right, is why people don't act in accordance with the precepts of expected return calculated using the ensemble average. To my mind, Peters' perspective resolves this entirely, as it shows that this ensemble average simply gives very poor advice on many occasions. In particular, it makes it seem that a person's initial wealth should have no bearing on the question. If you face gambles, and face them repeatedly as we all do throughout life in one form or another, then thinking of facing them sequentially, as we do, makes sense. But that's not, as I say, my final view... this is one of those things that gets deeper and deeper the more you mull it over...