Inspiration from physics for thinking about economics, finance and social systems
Monday, March 25, 2013
FORECAST: new book to be published tomorrow!
That's right, my new book, the cover of which you've seen off to the right of this blog for some time now, will FINALLY be in bookstores in the US tomorrow, March 26. Of course, it is also available at Amazon and other likely outlets on the web. Who knows when reviews and such will begin trickling in. The book was featured in Nature on Thursday in their "Books in brief" section (sorry, you'll need a subscription), but the poor writers of those reviews (I've been one) really have almost no space to say anything. The review does make very clear that the book exists and purports to have some new ideas about economics and finance, but it makes no judgement on the usefulness of the book at all.
Anyone in the US, if you happen to be in a physical bookstore in the next few days, please let me know 1) if you find the book and 2) where it was located. I've had the unfortunate experience in the past that my books, such as Ubiquity or The Social Atom, were placed by bookstore managers near the back of the store in sections with labels like Mathematical Sociology or Perspectives in the Philosophy of History, where perhaps only 1 or 2 people venture each day, and then probably only because they got lost while looking for the rest room. If you do find the book in an obscure location, feel completely free -- there's no law against this -- to take all the copies you find and move them up to occupy prominent positions in the bestsellers' section, or next to the check out with the diet books, etc. I would be very grateful!
And I would very much like to hear what readers of this blog think about the book.
Friday, March 22, 2013
Quantum Computing, Finally!! (or maybe not)
Today's New York Times has an article hailing the arrival of superfast practical quantum computers (weird thing pictured above), courtesy of Lockheed Martin who purchased one from a company called D-Wave Systems. As the article notes,
... a powerful new type of computer that is about to be commercially deployed by a major American military contractor is taking computing into the strange, subatomic realm of quantum mechanics. In that infinitesimal neighborhood, common sense logic no longer seems to apply. A one can be a one, or it can be a one and a zero and everything in between — all at the same time. ... Lockheed Martin — which bought an early version of such a computer from the Canadian company D-Wave Systems two years ago — is confident enough in the technology to upgrade it to commercial scale, becoming the first company to use quantum computing as part of its business.
The article does mention that there are some skeptics. So beware.
Ten to fifteen years ago, I used to write frequently, mostly for New Scientist magazine, about research progress towards quantum computing. For anyone who hasn't read something about this, quantum computing would exploit the peculiar properties of quantum physics to do computation in a totally new way. It could potentially solve some problems very quickly that computers running on classical physics, as today's computers do, would never be able to solve. Without getting into any detail, the essential thing about quantum processes is their ability to explore many paths in parallel, rather than just doing one specific thing, which would give a quantum computer unprecedented processing power. Here's an article giving some basic information about the idea.
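For readers who want the flavor of "many paths in parallel" in concrete terms, here is a minimal sketch (my own illustration in ordinary Python, nothing to do with D-Wave's hardware): the state of n qubits is described by 2^n complex amplitudes, so each added qubit doubles the size of the workspace that a single quantum operation acts on all at once.

```python
import numpy as np

def equal_superposition(n_qubits):
    """State of n qubits after a Hadamard gate on each, starting from |00...0>:
    an equal superposition over all 2^n classical bit strings."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    one_qubit = H @ np.array([1.0, 0.0])       # single qubit: |0> -> (|0> + |1>)/sqrt(2)
    state = np.array([1.0])
    for _ in range(n_qubits):
        state = np.kron(state, one_qubit)      # the number of amplitudes doubles with every qubit
    return state

for n in (1, 2, 3, 10):
    psi = equal_superposition(n)
    print(n, "qubit(s):", psi.size, "amplitudes, each =", round(float(psi[0]), 4))
```

Ten qubits already mean 1,024 amplitudes evolving together; a few hundred well-controlled qubits would outstrip any classical memory, which is exactly why the inability to control more than a handful has been so frustrating.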
I stopped writing about quantum computing because I got bored with it: not with the ideas, but with the achingly slow progress in bringing them into reality. To make a really useful quantum computer you need to harness quantum degrees of freedom, "qubits," in single ions, photons, the spins of atoms, etc., and have the ability to carry out controlled logic operations on them. You would need lots of them, say hundreds or more, to do really valuable calculations, but to date no one has managed to create and control more than about 2 or 3. I wrote several articles a year noting major advances in quantum information storage, in error correction, in ways to transmit quantum information (which is more delicate than classical information) from one place to another, and so on. Every article at some point had a weasel phrase like "... this could be a major step towards practical quantum computing." They weren't. All of this was perfectly good, valuable physics work, but the practical computer receded into the future just as quickly as people made advances towards it. That seems to be true today... except for one company: D-Wave Systems.
Around five years ago, this company started claiming that it had achieved quantum computing, having built functioning devices with 128 qubits based on superconducting technology. Everyone else in the field was aghast at such a claim, given this sudden, staggering advance over what anyone else in the world had achieved. Oh, and D-Wave didn't release sufficient information for the claim to be judged. Here is the skeptical judgement of IEEE Spectrum magazine as of 2010. More up to date, and not quite so negative, is this assessment by quantum information expert Scott Aaronson from just over a year ago. The most important point he makes concerns the failure of D-Wave to demonstrate that its computer is doing something essentially quantum, which is the whole reason it would be interesting. That would mean demonstrating so-called quantum entanglement in the machine, or carrying out some calculation so vastly superior to anything achievable by classical computers that one would have to infer quantum performance. Aaronson asks the obvious question:
... rather than constantly adding more qubits and issuing more hard-to-evaluate announcements, while leaving the scientific characterization of its devices in a state of limbo, why doesn’t D-Wave just focus all its efforts on demonstrating entanglement, or otherwise getting stronger evidence for a quantum role in the apparent speedup? When I put this question to Mohammad Amin, he said that, if D-Wave had followed my suggestion, it would have published some interesting research papers and then gone out of business—since the fundraising pressure is always for more qubits and more dramatic announcements, not for clearer understanding of its systems. So, let me try to get a message out to the pointy-haired bosses of the world: a single qubit that you understand is better than a thousand qubits that you don’t. There’s a reason why academic quantum computing groups focus on pushing down decoherence and demonstrating entanglement in 2, 3, or 4 qubits: because that way, at least you know that the qubits are qubits! Once you’ve shown that the foundation is solid, then you try to scale up.
So there's a finance and publicity angle here as well as the science. The NYT article doesn't really get into any of the specific claims of D-Wave, but I recommend Aaronson's comments as a good counterpoint to the hype.
Wednesday, March 20, 2013
Third (and final) excerpt...
The third (and, you'll all be pleased to hear, final!) excerpt of my book was published in Bloomberg today. The title is "Toward a National Weather Forecaster for Finance" and explores (briefly) the topic of what might be possible in economics and finance in creating national (and international) centers devoted to data intensive risk analysis and forecasting of socioeconomic "weather."
Before anyone thinks I'm crazy, let me make very clear that I'm using the term "forecasting" in its general sense, i.e. of making useful predictions of potential risks as they emerge in specific areas, rather than predictions such as "the stock market will collapse at noon on Thursday." I think we can all agree that the latter kind of prediction is probably impossible (although Didier Sornette wouldn't agree), and certainly would be self-defeating were it made widely known. Weather forecasters make much less specific predictions all the time, for example, of places and times where conditions will be ripe for powerful thunderstorms and tornadoes. These forecasts of potential risks are still valuable, and I see no reason similar kinds of predictions shouldn't be possible in finance and economics. Of course, people make such predictions all the time about financial events already. I'm merely suggesting that with effort and the devotion of considerable resources for collecting and sharing data, and building computational models, we could develop centers acting for the public good to make much better predictions on a more scientific basis.
As a couple of early examples, I'll point to the recent work on complex networks in finance which I've touched on here and here. These are computationally intensive studies demanding excellent data which make it possible to identify systemically important financial institutions (and links between them) more accurately than we have in the past. Much work remains to make this practically useful.
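To give a flavor of what these computations involve, here is a toy sketch (my own illustration: the exposure numbers are invented, and the simple eigenvector-centrality score is a stand-in for the much richer measures, such as DebtRank, used in the real studies):

```python
import numpy as np

# Toy interbank exposure matrix (made-up numbers): entry [i, j] is how much
# bank i is owed by bank j. The real data are confidential and far richer.
banks = ["Bank A", "Bank B", "Bank C", "Bank D"]
exposures = np.array([
    [0.0, 5.0, 1.0, 0.0],
    [2.0, 0.0, 4.0, 1.0],
    [0.5, 3.0, 0.0, 2.0],
    [1.0, 0.0, 1.5, 0.0],
])

# Eigenvector centrality: a bank matters more if it is strongly linked to
# other banks that themselves matter -- a crude proxy for systemic importance.
eigvals, eigvecs = np.linalg.eig(exposures)
lead = np.argmax(eigvals.real)                 # Perron eigenvalue of a non-negative matrix
centrality = np.abs(eigvecs[:, lead].real)
centrality /= centrality.sum()

for bank, score in sorted(zip(banks, centrality), key=lambda pair: -pair[1]):
    print(f"{bank}: {score:.2f}")
```

The papers I've linked to go far beyond this, of course: they look at how distress propagates along those links, not just at who is central.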
Another example is this recent and really impressive agent-based model of the US housing market, which has been used as a "post mortem" experimental tool to ask all kinds of "what if?" questions about the housing bubble and its causes, helping to tease out a better understanding of controversial questions. As the authors note, macroeconomists really didn't see the housing market as a likely source of large-scale macroeconomic trouble. This model has made it possible to ask and explore questions that cannot be explored with conventional economic models:
Not only were the Macroeconomists looking at the wrong markets, they might have been looking at the wrong variables. John Geanakoplos (2003, 2010a, 2010b) has argued that leverage and collateral, not interest rates, drove the economy in the crisis of 2007-2009, pushing housing prices and mortgage securities prices up in the bubble of 2000-2006, then precipitating the crash of 2007. Geanakoplos has also argued that the best way out of the crisis is to write down principal on housing loans that are underwater (see Geanakoplos-Koniak (2008, 2009) and Geanakoplos (2010b)), on the grounds that the loans will not be repaid anyway, and that taking into account foreclosure costs, lenders could get as much or almost as much money back by forgiving part of the loans, especially if stopping foreclosures were to lead to a rebound in housing prices.
This is precisely the kind of work I think can be geared up and extended far beyond the housing market, augmented with real time data, and used to make valuable forecasting analyses. It seems to me actually to be the obvious approach.
There is, however, no shortage of alternative hypotheses and views. Was the bubble caused by low interest rates, irrational exuberance, low lending standards, too much refinancing, people not imagining something, or too much leverage? Leverage is the main variable that went up and down along with housing prices. But how can one rule out the other explanations, or quantify which is more important? What effect would principal forgiveness have on housing prices? How much would that increase (or decrease) losses for investors? How does one quantify the answer to that question?
Conventional economic analysis attempts to answer these kinds of questions by building equilibrium models with a representative agent, or a very small number of representative agents. Regressions are run on aggregate data, like average interest rates or average leverage. The results so far seem mixed. Edward Glaeser, Joshua Gottlieb, and Joseph Gyourko (2010) argue that leverage did not play an important role in the run-up of housing prices from 2000-2006. John Duca, John Muellbauer, and Anthony Murphy (2011), on the other hand, argue that it did. Andrew Haughwout et al (2011) argue that leverage played a pivotal role.
In our view a definitive answer can only be given by an agent-based model, that is, a model in which we try to simulate the behavior of literally every household in the economy. The household sector consists of hundreds of millions of individuals, with tremendous heterogeneity, and a small number of transactions per month. Conventional models cannot accurately calibrate heterogeneity and the role played by the tail of the distribution. ... only after we know what the wealth and income is of each household, and how they make their housing decisions, can we be confident in answering questions like: How many people could afford one house who previously could afford none? Just how many people bought extra houses because they could leverage more easily? How many people spent more because interest rates became lower? Given transactions costs, what expectations could fuel such a demand? Once we answer questions like these, we can resolve the true cause of the housing boom and bust, and what would happen to housing prices if principal were forgiven.
... the agent-based approach brings a new kind of discipline because it uses so much more data. Aside from passing a basic plausibility test (which is crucial in any model), the agent-based approach allows for many more variables to be fit, like vacancy rates, time on market, number of renters versus owners, ownership rates by age, race, wealth, and income, as well as the average housing prices used in standard models. Most importantly, perhaps, one must be able to check that basically the same behavioral parameters work across dozens of different cities. And then at the end, one can do counterfactual reasoning: what would have happened had the Fed kept interest rates high, what would happen with this behavioral rule instead of that.
The real proof is in the doing. Agent-based models have succeeded before in simulating traffic and herding in the flight patterns of geese. But the most convincing evidence is that Wall Street has used agent-based models for over two decades to forecast prepayment rates for tens of millions of individual mortgages.
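To make the term "agent-based" a little more concrete, here is a deliberately tiny sketch of the general approach. It is emphatically not the Geanakoplos model, just an invented toy with heterogeneous households and a loan-to-value (leverage) cap, showing how such a model lets you run the counterfactual experiments described above:

```python
import random

def simulate(max_ltv, n_households=10_000, periods=40, seed=1):
    """Toy housing market: each period a random subset of heterogeneous
    households tries to buy; a bid is limited by savings plus a mortgage
    capped at max_ltv of the price, and the price index responds to excess
    demand. Purely illustrative, not calibrated to any data."""
    rng = random.Random(seed)
    incomes = [rng.lognormvariate(0, 0.5) for _ in range(n_households)]
    price = 1.0
    history = []
    for _ in range(periods):
        buyers = 0
        for income in rng.sample(incomes, k=1000):      # 1,000 would-be buyers
            downpayment = 0.5 * income                  # toy savings rule
            max_bid = downpayment / (1 - max_ltv)       # leverage multiplies buying power
            if max_bid >= price:
                buyers += 1
        supply = 500                                    # fixed toy supply of houses
        price *= 1 + 0.05 * (buyers - supply) / supply  # price moves with excess demand
        history.append(price)
    return history

loose = simulate(max_ltv=0.95)   # counterfactual: easy leverage
tight = simulate(max_ltv=0.80)   # counterfactual: tighter leverage cap
print("final price index, loose credit:", round(loose[-1], 2))
print("final price index, tight credit:", round(tight[-1], 2))
```

Even this toy produces very different price paths under loose and tight leverage caps, which is the spirit of the exercise; the real model does it with survey-calibrated households, dozens of cities and vastly more realistic behavioral rules.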
Tuesday, March 19, 2013
Second excerpt...
A second excerpt of my forthcoming book Forecast is now online at Bloomberg. It's a greatly condensed text assembled from various parts of the book. One interesting exchange in the comments from yesterday's excerpt:
Food For Thought commented....
Before concluding that economic theory does not include analysis of unstable equilibria check out the vast published findings on unstable equilibria in the field of International Economics. Once again we have someone touching on one tiny part of economic theory and drawing overreaching conclusions. I would expect a scientist would seek out more evidence before jumping to conclusions.
to which one Jack Harllee replied...
Sure, economists have studied unstable equilibria. But that's not where the profession's heart is. Krugman summarized rather nicely in 1996, and the situation hasn't changed much since then:
"Personally, I consider myself a proud neoclassicist. By this I clearly don't mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium. The reason I like that kind of model is not that I believe it to be literally true, but that I am intensely aware of the power of maximization-and-equilibrium to organize one's thinking - and I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy. ...That said, there are indeed economists who regard maximization and equilibrium as more than useful fictions. They regard them either as literal truths - which I find a bit hard to understand given the reality of daily experience - or as principles so central to economics that one dare not bend them even a little, no matter how useful it might seem to do so."
This response fairly well captures my own position. I argue in the book that the economics profession has been fixated far too strongly on equilibrium models, and much of the time simply assumes the stability of such equilibria without any justification. I certainly don't claim that economists have never considered unstable equilibria (or examined models with multiple equilibria). But any examination of the stability of an equilibrium demands some analysis of dynamics of the system away from equilibrium, and this has not (to say the least) been a strong focus of economic theory.
Monday, March 18, 2013
New territory for game theory...
This new paper in PLoS looks fascinating. I haven't had time yet to study it in detail, but it appears to make an important demonstration of how, when thinking about human behavior in strategic games, fixed point or mixed strategy Nash equilibria can be far too restrictive and misleading, ruling out much more complex dynamics, which in reality can occur even for rational people playing simple games:
Abstract
Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning–what you think I think you think–will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems.
...and from the conclusions, ...
Cycles in the belief space of learning agents have been predicted for many years, particularly in games with intransitive dominance relations, like Matching Pennies and Rock-Paper-Scissors, but experimentalists have only recently started looking to these dynamics for experimental predictions. This work should function to caution experimentalists of the dangers of treating dynamics as ephemeral deviations from a static solution concept. Periodic behavior in the Mod Game, which is stable and efficient, challenges the preconception that coordination mechanisms must converge on equilibria or other fixed-point solution concepts to be promising for social applications. This behavior also reveals that iterated reasoning and stable high-dimensional dynamics can coexist, challenging recent models whose implementation of sophisticated reasoning implies convergence to a fixed point [13]. Applied to real complex social systems, this work gives credence to recent predictions of chaos in financial market game dynamics [8]. Applied to game learning, our support for cyclic regimes vindicates the general presence of complex attractors, and should help motivate their adoption into the game theorist’s canon of solution concepts.
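To see how little is needed to generate this kind of cycling, here is a minimal simulation sketch (mine, not the authors' code): agents in a Mod Game-style setup who simply best-respond to the previous round never settle on a fixed point, but chase one another around the circle of choices.

```python
import random

N_CHOICES = 24    # choices live on a circle of 24 numbers, like a clock face
N_PLAYERS = 6
ROUNDS = 30

def payoff(my_choice, group):
    """Mod Game scoring: one point for every player whose choice sits
    exactly one step below yours on the circle."""
    return sum(1 for c in group if (my_choice - c) % N_CHOICES == 1)

rng = random.Random(42)
choices = [rng.randrange(N_CHOICES) for _ in range(N_PLAYERS)]

for t in range(ROUNDS):
    # One step of iterated reasoning: each player assumes the whole group will
    # repeat last round's choices and picks the best response to that.
    best_response = max(range(N_CHOICES), key=lambda c: payoff(c, choices))
    choices = [best_response] * N_PLAYERS
    print(t, choices[0])
```

After the first round the whole group marches around the circle one step at a time, forever: a stable cycle, not an equilibrium, which is the qualitative behavior reported in the paper (real subjects are noisier and hop by varying amounts).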
Book excerpt...
Bloomberg is publishing a series of excerpts from my forthcoming book, Forecast, which is now due out in only a few days. The first one was published today.
Secrets of Cyprus...
Just something to think about when scratching your head over the astonishing developments in Cyprus, which seem to be more or less intentionally designed to touch off bank runs in several European nations. Why? Courtesy of Zero Hedge:
...news is now coming out that the Cyprus parliament has postponed the decision and may in fact not be able to reach agreement. They may tinker with the percentages, to penalize smaller savers less (and larger savers more). However, the damage is already done. They have hit their savers with a grievous blow, and this will do irreparable harm to trust and confidence.
As well it should! In more civilized times, there was a long established precedent regarding the capital structure of a bank. Equity holders incur the first losses as they own the upside profits and capital gains. Next come unsecured creditors who are paid a higher interest rate, followed by secured bondholders who are paid a lower interest rate. Depositors are paid the lowest interest rate of all, but are assured to be made whole, even if it means every other class in the capital structure is utterly wiped out.
As caveat to the following paragraph, I acknowledge that I have not read anything definitive yet regarding bondholders. I present my assumptions (which I think are likely correct).
As with the bankruptcy of General Motors in the US, it looks like the rule of law and common sense has been recklessly set aside. The fruit from planting these bitter seeds will be harvested for many years hence. As with GM, political expediency drives pragmatic and ill-considered actions. In Cyprus, bondholders include politically connected banks and sovereign governments. Bureaucrats decided it would be acceptable to use depositors like sacrificial lambs. The only debate at the moment seems to be how to apportion the damage amongst “rich” and “non-rich” depositors.
Also, much more on the matter here, mostly expressing similar sentiments. And do read The War On Common Sense by Tim Duy:
This weekend, European policymakers opened up a new front in their ongoing war on common sense. The details of the Cyprus bailout included a bail-in of bank depositors, small and large alike. As should have been expected, chaos ensued as Cypriots rushed to ATMs in a desperate attempt to withdraw their savings, the initial stages of what is likely to become a run on the nation's banks. Shocking, I know. Who could have predicted that the populous would react poorly to an assault on depositors?
Everyone. Everyone would have predicted this. Everyone except, apparently, European policymakers....
Friday, March 15, 2013
Beginning of the end for big banks?
If the biggest banks are too big to fail, too connected to fail, too important to prosecute, and also too complex to manage, it would seem sensible to scale them down in size, and to reduce their centrality and the complexity of their positions. Simon Johnson has an encouraging article suggesting that at least some of this may actually be about to happen:
The largest banks in the United States face a serious political problem. There has been an outbreak of clear thinking among officials and politicians who increasingly agree that too-big-to-fail is not a good arrangement for the financial sector.
Six banks face the prospect of meaningful constraints on their size: JPMorgan Chase, Bank of America, Citigroup, Wells Fargo, Goldman Sachs and Morgan Stanley. They are fighting back with lobbying dollars in the usual fashion – but in the last electoral cycle they went heavily for Mitt Romney (not elected) and against Elizabeth Warren and Sherrod Brown for the Senate (both elected), so this element of their strategy is hardly prospering.
What the megabanks really need are some arguments that make sense. There are three positions that attract them: the Old Wall Street View, the New View and the New New View. But none of these holds water; the intellectual case for global megabanks at their current scale is crumbling.
Most encouraging is the emergence of a real discussion over the implicit taxpayer subsidy given to the largest banks. See also this editorial in Bloomberg from a few weeks ago:
On television, in interviews and in meetings with investors, executives of the biggest U.S. banks -- notably JPMorgan Chase & Co. Chief Executive Jamie Dimon -- make the case that size is a competitive advantage. It helps them lower costs and vie for customers on an international scale. Limiting it, they warn, would impair profitability and weaken the country’s position in global finance.
So what if we told you that, by our calculations, the largest U.S. banks aren’t really profitable at all? What if the billions of dollars they allegedly earn for their shareholders were almost entirely a gift from U.S. taxpayers?
... The top five banks -- JPMorgan, Bank of America Corp., Citigroup Inc., Wells Fargo & Co. and Goldman Sachs Group Inc. -- account for $64 billion of the total subsidy, an amount roughly equal to their typical annual profits (see tables for data on individual banks). In other words, the banks occupying the commanding heights of the U.S. financial industry -- with almost $9 trillion in assets, more than half the size of the U.S. economy -- would just about break even in the absence of corporate welfare. In large part, the profits they report are essentially transfers from taxpayers to their shareholders.
So much for the theory that the big banks need to pay big bonuses so they can attract that top financial talent on which their success depends. Their success seems to depend on a much simpler recipe.
This paper also offers some interesting analysis on different practical steps that might be taken to end this ridiculous situation.
Tuesday, March 12, 2013
Megabanks: too complex to manage
Having come across Chris Arnade, I'm currently reading everything I can find by him. On this blog I've touched on the matter of financial complexity many times, but mostly in the context of the network of linked institutions. I've never considered the possibility that the biggest financial institutions are themselves now too complex to be managed in any effective way. In this great article at Scientific American, Arnade (who has 20 years of experience working on Wall St.) makes a convincing case that the largest banks are now invested in so many diverse products of such immense complexity that they cannot possibly manage their risks:
This is far more common on Wall Street than most realize. Just last year JP Morgan revealed a $6 billion loss from a convoluted investment in credit derivatives. The post mortem revealed that few, including the actual trader, understood the assets or the trade. It was even found that an error in a spreadsheet was partly responsible.
Since the peso crisis, banks have become massive, bloated with new complex financial products unleashed by deregulation. The assets at US commercial banks have increased five times to $13 trillion, with the bulk clustered at a few major institutions. JP Morgan, the largest, has $2.5 trillion in assets.
Much has been written about banks being “too big to fail.” The equally important question is are they “too big to succeed?” Can anyone honestly risk manage $2 trillion in complex investments?
To answer that question it’s helpful to remember how banks traditionally make money: They take deposits from the public, which they lend out longer term to companies and individuals, capturing the spread between the two.
Managing this type of bank is straightforward and can be done on spreadsheets. The assets are assigned a possible loss, with the total kept well beneath the capital of the bank. This form of banking dominated for most of the last century, until the recent move towards deregulation.
Regulations of banks have ebbed and flowed over the years, played out as a fight between the banks’ desire to buy a larger array of assets and the government’s desire to ensure banks’ solvency.
Starting in the early 1980s the banks started to win these battles resulting in an explosion of financial products. It also resulted in mergers. My old firm, Salomon Brothers, was bought by Smith Barney, which was bought by Citibank.
Now banks no longer just borrow to lend to small businesses and home owners, they borrow to trade credit swaps with other banks and hedge funds, to buy real estate in Argentina, super senior synthetic CDOs, mezzanine tranches of bonds backed by the revenues of pop singers, and yes, investments in Mexico pesos. Everything and anything you can imagine.
Managing these banks is no longer simple. Most assets now owned have risks that can no longer be defined by one or two simple numbers. They often require whole spreadsheets. Mathematically they are vectors or matrices rather than scalars.
Before the advent of these financial products, the banks’ profits were proportional to the total size of their assets. The business model scaled up linearly. There were even cost savings associated with a larger business.
This is no longer true. The challenge of risk managing these new assets has broken that old model.
Not only are the assets themselves far harder to understand, but the interplay between the different assets creates another layer of complexity.
In addition, markets are prone to feedback loops. A bank owning enough of an asset can itself change the nature of the asset. JP Morgan’s $6 billion loss was partly due to this effect. Once they had begun to dismantle the trade, the markets moved against them. Put another way, other traders knew JP Morgan were in pain and proceeded to ‘shove it in their faces’.
Bureaucracy creates another layer, as does the much faster pace of trading brought about by computer programs. Many risk managers will privately tell you that knowing what they own is as much a problem as knowing the risk of what is owned.
Put mathematically, the complexity now grows non-linearly. This means, as banks get larger, the ability to risk-manage the assets grows much smaller and more uncertain, ultimately endangering the viability of the business.
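A rough way to put numbers on that non-linearity (my illustration, not Arnade's): the risk of a portfolio depends on the full matrix of co-movements between assets, so the number of interaction terms a risk manager has to estimate grows roughly with the square of the number of asset classes, while the risk of a single traditional loan book is close to a single number.

```python
import numpy as np

def portfolio_volatility(weights, cov):
    """Portfolio risk needs the whole covariance matrix, not one number per
    asset: the off-diagonal terms are the 'interplay' between assets."""
    return float(np.sqrt(weights @ cov @ weights))

for n_assets in (10, 100, 1_000, 10_000):
    pairwise_terms = n_assets * (n_assets - 1) // 2
    print(f"{n_assets:>6} asset classes -> {pairwise_terms:>11,} pairwise co-movements to estimate")

# Tiny worked example with three assets (made-up numbers):
w = np.array([0.5, 0.3, 0.2])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
print("portfolio volatility:", round(portfolio_volatility(w, cov), 3))
```

And this still treats each co-movement as a fixed, known number; once those relationships themselves shift with market conditions and with the bank's own positions, the estimation problem gets harder still.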
Strategic recklessness
Some poignant (and infuriating) insight from Chris Arnade on Why it's smart to be reckless on Wall St.:
... asymmetry in pay (money for profits, flat for losses) is the engine behind many of Wall Street’s mistakes. It rewards short-term gains without regard to long-term consequences. The results? The over-reliance on excessive leverage, banks that are loaded with opaque financial products, and trading models that are flawed. ... Regulation is largely toothless if banks and their employees have the financial incentive to be reckless.
Sunday, March 10, 2013
Networks in finance
Just over a week ago, the journal Nature Physics published an unusual issue. In addition to the standard papers on technical physics topics, this issue contained a section with a special focus on finance, especially on complex networks in finance. I'm sure most readers of this blog won't have access to the papers in this issue, so I thought I'd give a brief summary of the papers here.
It's notable that these aren't papers written just by physicists, but represent the outcome of collaborations between physicists and a number of prominent economists (Nobel Prize winner Joseph Stiglitz among them) and several regulators from important central banks. The value of insight coming out of physics-inspired research into the collective dynamics of financial markets is really starting to be recognized by people who matter (even if most academic economists probably won't wake up to it for several decades).
I've written about this work in my most recent column for Bloomberg, which will be published on Sunday night EST. I was also planning to give here some further technical detail on one very important paper to which I referred in the Bloomberg article, but due to various other demands in the past few days I haven't quite managed that yet. The paper in question, I suspect, is unknown to almost all financial economists, but will, I hope, gain wide attention soon. It essentially demonstrates that the theorists' ideal of complete, arbitrage free markets in equilibrium isn't a nirvana of market efficiency, as is generally assumed. Examination of the dynamics of such a market, even within the neo-classical framework, shows that any approach to this efficient ideal also brings growing instability and likely market collapse. The ideal of complete markets, in other words, isn't something we should be aiming for. Here's some detail on that work from something I wrote in the past (see the paragraphs referring to the work of Matteo Marsili and colleagues).
Now, the Nature Physics special issue.
The first key paper is "Complex derivatives," by Stefano Battiston, Guido Caldarelli, Co-Pierre Georg, Robert May and Joseph Stiglitz. It begins by noting that the volume of derivatives outstanding fell briefly following the crisis of 2008, but is now increasing again. According to usual thinking in economics and finance, this growth of the market should be a good thing. If people are entering into these contracts, it must be for a reason, i.e. to hedge their risks or to exploit opportunities, and these deals should lead to beneficial economic exchange. But, as Battiston and colleagues note, this may not actually be true:
By engaging in a speculative derivatives market, players can potentially amplify their gains, which is arguably the most plausible explanation for the proliferation of derivatives in recent years. Needless to say, losses are also amplified. Unlike bets on, say, dice — where the chances of the outcome are not affected by the bet itself — the more market players bet on the default of a country, the more likely the default becomes. Eventually the game becomes a self-fulfilling prophecy, as in a bank run, where if each party believes that others will withdraw their money from the bank, it pays each to do so. More perversely, in some cases parties have incentives (and opportunities) to precipitate these events, by spreading rumours or by manipulating the prices on which the derivatives are contingent — a situation seen most recently in the London Interbank Offered Rate (LIBOR) affair.
Proponents of derivatives have long argued that these instruments help to stabilize markets by distributing risk, but it has been shown recently that in many situations risk sharing can also lead to instabilities.
The bulk of this paper is devoted to supporting this idea, examining several recent independent lines of research which indicate that more derivatives can make markets less stable. This work shares some ideas with theoretical ecology, where it was once thought (40 years ago) that more complexity in an ecology should generally confer stability. Later work suggested instead that complexity (at least too much of it) tends to breed instability. According to a number of recent studies, the same seems to be true in finance:
It now seems that the proliferation of financial instruments induces strong fluctuations and instabilities for similar reasons. The basis for pricing complex derivatives makes several conventional assumptions that amount to the notion that trading activity does not feed back on the dynamical behaviour of markets. This idealized (and unrealistic) model can have the effect of masking potential instabilities in markets. A more detailed picture, taking into account the effects of individual trades on prices, reveals the onset of singularities as the number of financial instruments increases.
The remainder of the paper goes on to explore various means that may be taken, through regulations, to try to manage the complexity of the financial network and encourage its stability. Stability isn't something we should expect to occur on its own. It demands real attention to detail. Blind adherence to the idea that "more derivatives is good" is a recipe for trouble.
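The self-fulfilling mechanism described in the excerpt above is easy to caricature in a few lines of code. The following toy sketch (my own, with made-up numbers and a crude threshold rule) shows how identical fundamentals can support both a calm outcome and a locked-in panic once the volume of bets feeds back into the very risk being bet on.

```python
def iterate_beliefs(p0, feedback, initial_belief, threshold=0.2, steps=100):
    """Toy self-fulfilling dynamic: speculators bet against a borrower when
    they believe default is likely; the weight of those bets raises funding
    costs and hence the actual default risk, validating the belief."""
    belief = initial_belief
    for _ in range(steps):
        betting = 1.0 if belief > threshold else 0.0    # bet only if default looks likely
        belief = min(1.0, p0 + feedback * betting)      # bets feed back into actual risk
    return belief

p0, feedback = 0.10, 0.30    # same fundamentals in both runs
print(f"calm start : {iterate_beliefs(p0, feedback, initial_belief=0.10):.2f}")
print(f"panic start: {iterate_beliefs(p0, feedback, initial_belief=0.50):.2f}")
```

Two runs with identical fundamentals end up in different places purely because of what traders initially believed, which is the bank-run logic of the quotation in miniature.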
The second paper in the Nature Physics special issue is "Reconstructing a credit network," by Guido Caldarelli, Alessandro Chessa, Andrea Gabrielli, Fabio Pammolli and Michelangelo Puliga. This work addresses an issue that isn't quite as provocative as the value of the derivatives industry, but the topic may be of extreme importance in future efforts to devise effective financial regulations. The key insight coming from network science is that the architecture of a network -- its topology -- has a huge impact on how influences (such as financial distress) spread through the network. Hence, global network topology is intimately linked up with system stability; knowledge of global structure is absolutely essential to managing systemic risk. Unfortunately, the history of law and finance is such that much of the information that would be required to understand the real web of links between financial institutions remains private, hidden, unknown to the public or to regulators.
The best way to overcome this is certainly to make this information public. When financial institutions undertake transactions among themselves, the rest of us are also influenced and our economic well being potentially put at risk. This information should be public knowledge, because it impacts upon financial stability, which is a public good. However, in the absence of new legislation to make this happen, regulators can right now turn to more sophisticated methods to help reconstruct a more complete picture of global financial networks, filling in the missing details. This paper, written by several key experts in this technical area, reviews what is now possible and how these methods might be best put to use by regulators in the near future.
Finally, the third paper in the Nature Physics special issue is "The power to control," by Marco Galbiati, Danilo Delpini and Stefano Battiston. "Control" is a word you rarely hear in the context of financial markets, I suppose because the near religion of the "free market" has made "control" seem like an idea of "communists" or at least "socialists" (whatever that means). But regulation of any sort, laws, institutions, even social norms and accepted practices, all of these represent some kind of "control" placed on individuals and firms in the aim, for society at large, of better outcomes. We need sensible control. How to achieve it?
Of course, "control" has a long history in engineering science where it is the focus of an extensive and quite successful "control theory." This paper reviews some recent work which has extended control theory to complex networks. One of the key questions is if the dynamics of large complex networks might be controlled, or at least strongly steered, by influencing only a small subset of the elements making up the network, and perhaps not even those that seem to be the most significant. This is, I think, clearly a promising area for further work. Let's take the insight of a century and more of control theory and ask if we can't use that to help prevent, or give early warnings of, the kinds of disasters that have hit finance in the past decade.
Much of the work in this special issue has originated out of a European research project with the code name FOC, which stands for, well, I'm not exactly sure what it stands for (the project describes itself as "Forecasting Financial Crises" which seems more like FFC to me). In any event, I know some of these people and apart from the serious science they have a nice sense of humor. Perhaps the acronym FOC was even chosen for another reason. As I recall, one of their early meetings a few years ago was announced as "Meet the FOCers." Humor in no way gets in the way of good science.
Saturday, March 9, 2013
The intellectual equivalent of crack cocaine
That's what the British historian Geoffrey Elton once called Post-Modernist Philosophy, i.e. that branch of modern philosophy/literary criticism typically characterized by a, shall we say, less than wholehearted commitment to clarity and simplicity of expression. The genre is represented in the libraries by reams of apparently meaningless prose, the authors of which claim to get at truths that would otherwise be out of reach of ordinary language. Here's a nice example, the product of the subtle mind of one Felix Guattari:
“We can clearly see that there is no bi-univocal correspondence between linear signifying links or archi-writing, depending on the author, and this multireferential, multi-dimensional machinic catalysis. The symmetry of scale, the transversality, the pathic non-discursive character of their expansion: all these dimensions remove us from the logic of the excluded middle and reinforce us in our dismissal of the ontological binarism we criticised previously.”
I'm with Elton. This writer, it seems to me, is up to no good, trying to pull the wool over the reader's eyes, using confusion as a weapon to persuade the reader of his superior insight. You read it, you don't quite get it (or even come close to getting it), and it is then tempting to conclude that whatever he is saying, as it is beyond your vision, must be exceptionally deep or subtle or complex, too much for you to grasp.
You need some self confidence to come instead to the other logically possible conclusion -- that the text is actually purposeful nonsense, all glitter and no content, an affront against the normal, productive use of language for communication, "crack cocaine" as the writer gets the high that comes from appearing deep and earning accolades without putting in the hard work to actually write something that is insightful.
Having said that, let me also say that I am not in any way an expert in postmodernist philosophy and there may be more to the thinking of some of its representatives than this Guattari quote would suggest.
In any event, I think there's something deeply similar here to John Kay's point in this essay about Warren Buffett. As he notes, Buffett has been spectacularly successful and hence the subject of vast media attention, yet, paradoxically, he doesn't seem to have inspired an army of investors who copy his strategy:
... the most remarkable thing about Mr Buffett’s achievement is not that no one has rivalled his record. It is that almost no one has seriously tried to emulate his investment style. The herd instinct is powerful, even dominant, among asset managers. But the herd is not to be found at Mr Buffett’s annual jamborees in Omaha: that occasion is attended only by happy shareholders and admiring journalists.
Buffett's strategy, as Kay describes, is a decidedly old-fashioned one based on close examination of the fundamentals of the companies in which he invests:
If he is a genius, it is the genius of simplicity. No special or original insight is needed to reach his appreciation of the nature of business success. Nor is it difficult to recognise that companies such as American Express, Coca-Cola, IBM, Wells Fargo, and most recently Heinz – Berkshire’s largest holdings – meet his criteria. ... Which leads back to the question of why Berkshire has so few imitators. After all, another crucial insight of business economics is that profitable strategies that can be replicated are imitated until returns from them are driven down to normal levels. Why do the majority of investment managers hold many more stocks, roll them over far more often, engage in far more complex transactions – and derive less consistent and profitable results?
The explanation, Kay suggests, is that Buffett's strategy also demands an awful lot of hard work, and it's easier for many investment experts to follow the rather different strategy of Felix Guattari: not actually working to achieve superior insight, but working to make it seem as if they do, mostly by obscuring their actual strategies in a bewildering cloud of complexity. Sometimes, as in the case of Bernie Madoff, the obscuring complexity can even take the very simple form of essentially no information whatsoever. People who are willing to believe need very little help:
... the deeper issue is that complexity is intrinsic to the product many money managers sell. How can you justify high fees except by reference to frequent activity, unique insights and arcana? But Mr Buffett understands the limitations of his knowledge. That appreciation distinguishes people who are very clever from those who only think they are.
One final comment. I think finance is rife with this kind of psychological problem. But I do not at all believe that science is somehow immune from these effects. I've encountered plenty of works in physics and applied mathematics that couch their results in beautiful mathematics, demonstrate formidable skill in building a framework of theory, and yet seem utterly useless in actually solving or giving insight into any real problem. Science also has a weak spot for style over content.
Friday, March 8, 2013
Obscurity and simplicity
The British economist John Kay is one of my favorite sources of balanced and deeply insightful commentary on an extraordinary number of topics. I wish I could write as easily and productively as he does. He has a great post that is, in particular, about Warren Buffett, but more generally about an intellectual affliction in finance whereby purposeful obscurity often wins out at the expense of honesty and simplicity. Well worth a read. BUT... I think this actually goes way beyond finance. It's part of the human condition... more on that tomorrow....