Inspiration from physics for thinking about economics, finance and social systems
Saturday, September 29, 2012
Seeking "ivansml"
I'm finishing off my forthcoming book Forecast: What Extreme Weather Can Teach Us About Economics and have today been going over page proofs (extreme tedium...fixing typos etc). An important matter: I've referred in the book to some work on learning in macroeconomics (Evans and someone else) that was suggested to me by "ivansml", a graduate student at some European institution. Ivan: can you email me (buchanan.mark@gmail.com)? I would like to point out in a footnote that you directed me to this work. I can do this either using "ivansml" and referring to this site, or by using your real name (which I would rather do). THANKS!
Thursday, September 27, 2012
Bubbles
I just stumbled on this post from a few months back by Noah Smith. Like all his stuff it is a fun and informative read. Essentially, he looks back to the famous experimental paper of Vernon Smith and colleagues, which found clear evidence for strong and sustained bubbles in an artificial market in which students traded a fictitious asset for real money. The novelty of the experiment was that this asset had a clear and perfectly well-known fundamental value (unlike real financial instruments), and so it was easy to see that the market value at first soared way above the fundamental value, and then crashed down again.
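Just to make the mechanism vivid, here's a toy simulation -- my own construction, I should stress. It borrows only the fundamental-value schedule of the classic Smith-Suchanek-Williams design (15 trading periods, an uncertain dividend each period with expected value 24 cents, so the fundamental value is just the expected remaining dividend stream); the price dynamics and every parameter are invented purely for illustration:

```python
# Toy sketch of bubble-and-crash dynamics in an experimental-style market.
# The fundamental-value schedule follows the canonical Smith-Suchanek-
# Williams design; the price dynamic, with its trend-chasing term, and all
# parameter values are illustrative assumptions of mine, not the paper's.
import random

random.seed(2)

PERIODS = 15
EXP_DIVIDEND = 24  # expected per-period dividend, in cents

def fundamental(t):
    """Expected value of all remaining dividends at the start of period t."""
    return EXP_DIVIDEND * (PERIODS - t)

price, momentum = 300.0, 0.0  # traders start cautious, below f(0) = 360
for t in range(PERIODS):
    f = fundamental(t)
    pull = 0.15 * (f - price)  # fundamental traders push price toward value
    chase = 1.1 * momentum     # trend-chasers extrapolate the last move
    prev = price
    price = max(0.0, price + pull + chase + random.gauss(0, 3))
    momentum = price - prev
    flag = "  <-- above fundamental" if price > f else ""
    print(f"period {t:2d}  fundamental {f:3d}  price {price:6.1f}{flag}")
```

Run it and the price climbs well above the steadily declining fundamental through the middle periods, then crashes -- a cartoon, under made-up parameters, of the pattern the experiments reliably produced.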
Noah's post looks at why this result, for financial economists, didn't nail the proof that asset bubbles can exist and ought to be expected in real markets. Most of the arguments seem to be centered on the idea that the people acting in real markets are far more sophisticated than those students, and so would never pay more than the true fundamental value for anything. Suffice it to say this argument doesn't hold together at all well in the face of empirical evidence on real trading behavior, some of which Noah reviews.
One thing caught my eye, however, and is worth a short mention. As Noah writes...
If bubbles represent the best available estimate of fundamental values, then they aren't something we should try to stop. But many other people think that bubbles are something more sinister - large-scale departures of prices from the best available estimate of fundamentals. If bubbles really represent market inefficiencies on a vast scale, then there's a chance we could prevent or halt them, either through better design of financial markets, or by direct government intervention.

I am certainly someone in the latter camp -- convinced that markets often depart from fundamentals (if such values even exist) for long periods of time. But I think the third sentence, on what we might do about bubbles, needs to be refined a little from a logical point of view.
Bubbles aren't necessarily totally bad things. We may find that they are a useful and necessary part of the collective learning process. The foraging of a flock of birds is highly irregular; it moves this way and that, following the lead of different birds at different times, sometimes making large excursions in a single direction. A market might be somewhat similar, as a collective social process for searching and exploring. We shouldn't expect that what it has found at any one moment is optimal; it may often make huge mistakes. But the process of exploration may be useful anyway.
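The bird metaphor can be made concrete with a tiny toy search model -- again entirely my own construction, with the flock compressed into a single searcher for simplicity. It compares a purely greedy hill-climber with one that sometimes attempts big, wasteful-looking excursions across a rugged landscape of possibilities:

```python
# Toy contrast between purely greedy search and search with occasional big
# excursions, on a rugged landscape. An illustrative sketch only.
import math
import random

random.seed(3)

def landscape(x):
    """A rugged 'value' landscape: several local peaks, one best region."""
    return math.sin(x) + 0.6 * math.sin(3.1 * x) + 0.1 * x

def search(excursion_prob, steps=2000):
    """Hill-climb from x = 0, sometimes attempting a large excursion."""
    x = 0.0
    for _ in range(steps):
        if random.random() < excursion_prob:
            step = random.uniform(-3.0, 3.0)  # big, wasteful-looking move
        else:
            step = random.uniform(-0.1, 0.1)  # cautious local move
        candidate = min(8.0, max(-8.0, x + step))
        if landscape(candidate) > landscape(x):  # accept only improvements
            x = candidate
    return x, landscape(x)

gx, gv = search(excursion_prob=0.0)
ex, ev = search(excursion_prob=0.2)
print(f"greedy local search:  value {gv:.2f} at x = {gx:.2f}")
print(f"with big excursions:  value {ev:.2f} at x = {ex:.2f}")
```

The greedy searcher gets stuck on the first local peak it reaches; the one that wanders occasionally ends up somewhere considerably better. At every single moment the excursions look like mistakes, yet the process as a whole profits from them.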
In that case, we may find that we don't want to stamp out bubbles unless they get really big -- or unless a bubble is driven by a systematic increase in leverage among investors, which sets the stage for an all-but-certain explosive episode of de-leveraging, with long-term consequences to follow.
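The leverage case deserves a word on mechanism. Here's a minimal sketch of a de-leveraging spiral, with numbers and a price-impact rule that are pure invention on my part: a fund targets a fixed leverage ratio, a small price drop pushes it above target, and the forced sales required to get back depress the price further, forcing yet more sales:

```python
# Toy de-leveraging spiral; every number here is an illustrative assumption.
PRICE_IMPACT = 2e-9  # fractional price fall per dollar of forced sales

def deleverage(price, shares, debt, target, shock):
    """Apply a price shock, then iterate forced sales back to target leverage."""
    price *= (1 - shock)
    for step in range(1, 30):
        equity = shares * price - debt
        if equity <= 0:
            print(f"step {step}: equity wiped out at price {price:.2f}")
            return
        if shares * price / equity <= target + 1e-3:
            print(f"step {step}: back at target leverage, price {price:.2f}")
            return
        # Sell just enough at the current price to restore target leverage.
        # The sale reduces assets and debt equally, leaving equity unchanged
        # -- until the sale itself pushes the price down.
        sale = shares * price - target * equity
        shares -= sale / price
        debt -= sale
        price *= (1 - PRICE_IMPACT * sale)
        print(f"step {step}: forced sale of ${sale:,.0f}, price now {price:.2f}")

# A 5x-levered fund: $100M of assets on $20M of equity, hit by a 2% dip.
deleverage(price=100.0, shares=1_000_000, debt=80_000_000,
           target=5.0, shock=0.02)
```

With these toy numbers the spiral eventually stabilizes, but the total price decline ends up several times the initial 2% shock; nudge the price impact or the target leverage upward and the fund's equity is wiped out entirely. That is the sense in which a leverage-fueled bubble sets the stage for an explosive unwinding.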
What we do want to stamp out, however, is the dangerous idea (still supported by many economists) that bubbles don't exist. That's the one idea that can make our markets really prone to disasters.
Thursday, September 13, 2012
Optimism vs Pessimism
As I mentioned at the end of my post yesterday, many of the comments on my recent Bloomberg column chided me for being overly optimistic about the future of humanity, and especially about our capacity to create a sustainable future through the intelligent use of technology to help us control and manage a complex world. The criticism was elegantly put by David Johnson:
I just saw your piece on Bloomberg on augmenting our decision making skills artificially, and I am sorry to say, based on quite a bit of painful experience, that this doesn't actually work as one might hope.
I'm retired now, but I spent nearly thirty years as a computer software designer, and I can't tell you how many times I have seen people flatly refuse to believe counter-intuitive results coming from some sophisticated program.
Indeed, even simple instruments, such as pressure gauges, can present results that cause system operators to dismiss the data as the output of a defective sensor.
For example, the accident at Three Mile Island was the direct result of operators misjudging the meaning of two gauges that were apparently giving contradictory readings. One gauge implied that the level of cooling water was getting too high, while the other implied that it was dangerously low. The operators could not envision any scenario in which both could be correct, so they decided (arbitrarily and without ANY cross-checking!) that the low reading was invalid, and they shut down the emergency cooling water, which was precisely the wrong thing to do.
In that case, it turned out that there was a vapor lock in the plumbing connecting the two parts of the system that the two gauges were monitoring, so, indeed, the pressure in the cooling water supply was rising, even as the water level in the reactor vessel itself was dropping dangerously. However, as simple as this problem really was, it was totally outside the experience of the operators, so they never considered the possibility. Moreover, the system designers had not recognized the possibility either, or they would have designed the plumbing differently in the first place.
My point here is that a problem of this sort is stupidly simple compared to the complexities of systems like the global climate, yet even trained professionals cannot handle the level of weirdness that can result from one unanticipated discrepancy.
In other words, we are generally stupid enough that we cannot understand, much less accept, how stupid we really are, so there is no way that the average person will casually defer to the judgment of an artificial system like a computer program.

I received many comments making similar points, and I'd like to say that I agree completely and absolutely.
The way I look at the argument of Sander van der Leeuw is that he has identified a weak point in the nature of our relationship with the world. Our brains individually and collectively simply cannot match up to the complexity of the world in which we live (especially as our own technology has made it much more complex in recent decades). It's this mismatch that lies behind the pervasive tendency for our actions and innovations to have unanticipated consequences, many of which have led us to very big problems. Hence, he's suggesting that IF WE HAVE ANY HOPE of finding some solutions to our problems through further innovation it will be by finding ways to help our brains cope more effectively. He suggests information technology as the one kind of technology that might be useful in this regard, and which might help -- again, if used properly -- to heal the divide between the real complexity of the world and our pictures and models of it.
I think this makes a lot of sense, and it ought to inform our future use of technology and the way we use it to innovate. But I certainly wouldn't want to go any further and predict that we will actually be able to act in this way, or learn from this insight. If asked to bet on it, I would actually bet that humanity will have to suffer dearly and catastrophically before we ever change our ways.
Even more than stupid, we are stubborn. On this point I also could not agree more with David Johnson:
Seriously, the Arctic ice cap has been more or less stable, within a few percent, for about three million years, but now, in just thirty years, about 75% of the mass of ice has disappeared. Yet, millions of people simply ignore this massive and extremely dangerous change. Instead, they chalk up the reports as evidence of a conspiracy by climate scientists to frighten taxpayers into supporting more fictitious make-work for those self-same scientists. That is a lethal level of stupidity, but it still passes easily for "common-sense" among a very large fraction of the general population.
Wednesday, September 12, 2012
Archeology of Innovation
My last Bloomberg column appeared a few days ago. Anyone interested can read it here. I wanted to bring some attention to what I think is a truly profound argument being made by anthropologist Sander van der Leeuw of Arizona State University on the nature of innovation (technological, social, or otherwise).
Some readers of my Bloomberg column offered me some insightful criticisms by email, which I intend to share tomorrow.
Obviously, humans excel at innovation, and this is probably what accounts for our great (rampant) success as a species. This innovation has also brought us to the brink of catastrophe. A recent study published in Nature concluded that the next few generations should expect "a sharp reduction in biodiversity and severe impacts on much of what we depend on to sustain our quality of life, including fisheries, agriculture, forest products and clean water." This is also an outcome of our innovation, which is a double-edged sword. A deep question is WHY? Why is our innovation like this, always (it seems) leading to unintended consequences?
My Bloomberg column gives the basics of the argument, but I strongly recommend reading van der Leeuw's paper, "The Archeology of Innovation," for the full picture. It's a fun, mind-expanding paper in an informal style, and what really makes it unique is that it looks at human evolutionary history (over the past 50,000 years or so) through the lens of information and information processing. This is a novel idea, and especially novel given our current information revolution. The paper argues that most of the fundamental transitions in human history -- including the agricultural and industrial revolutions -- were essentially revolutions in which we learned to use information in a new way. As he notes,
... the current emphasis in certain quarters on our present-day society as the ‘information society’ is misguided—every society since the beginning of human evolution has been an ‘information society.’
Learning from the past is of course a good way to see what might happen in the future.
But the most interesting part of his argument, for me, concerns what past patterns might imply for the future of humanity and our ability to overcome our current global challenges. He essentially suggests that we need to think very carefully about how we innovate, rather than do it recklessly with more or less blind hope (as we do today, encouraged by short-sighted economic return). I'll just give a few short segments:
Human cognition, powerful as it may have become in dealing with the environment, is only one side of the (asymmetric) interaction between people and their environment, the one in which the perception of the multidimensional external world is reduced to a very limited number of dimensions. The other side of that interaction is human action on the environment, and the relationship between cognition and action is exactly what makes the gap between our needs and our capabilities so dramatic.
The crucial concept here is that of ‘unforeseen' or ‘unanticipated' consequences. It refers to the well-known and oft-observed fact that, no matter how careful one is in designing human interventions in the environment, the outcome is never what it was intended to be. It seems to me that this phenomenon is due to the fact that every human action upon the environment modifies the latter in many more ways than its human actors perceive, simply because the dimensionality of the environment is much higher than can be captured by the human mind. In practice, this may be seen to play out in every instance where humans have interacted in a particular way with their environment for a long time—in each such instance, ultimately the environment becomes so degraded from the perspective of the people involved that they either move to another place or change the way they are interacting with the environment.
Ultimately, this necessarily leads to ‘time-bombs' or ‘crises' in which so many unknowns emerge that the society risks being overwhelmed by the number of challenges it has to face simultaneously. It will initially deal with this by innovating faster and faster, as our society has done for the last two centuries or so, but as this only accelerates the risk spectrum shift, this ultimately is a battle that no society can win. There will inevitably come a time that the society drastically needs to change the way it interacts with the environment, or it will lose its coherence. In the latter case, after a time, the whole cycle begins anew—as one observes when looking at the rise and decline of firms, cities, nations, empires or civilizations.

Van der Leeuw's point is that it's rather simple-minded -- and not really consistent with a real knowledge of history -- to have blind faith in the ability of humanity to innovate its way out of the various global crises we're now confronting. Our innovation in the past is what has caused them. We need, therefore, to learn to innovate differently, and more predictably. Can we?
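As a closing illustration of the dimensionality argument -- again, a toy of my own construction, not anything from van der Leeuw's paper -- imagine an actor repeatedly intervening in an environment of fifty dimensions while perceiving only three. Every intervention is chosen to look best on the visible dimensions, but each also carries small random side effects in all the dimensions the actor cannot see:

```python
# Toy model of the dimensionality mismatch: an actor optimizes what it can
# see, while side effects accumulate in dimensions it cannot. Entirely an
# illustrative construction; all numbers are arbitrary.
import random

random.seed(0)

N_DIMS = 50           # true dimensionality of the environment
PERCEIVED = 3         # dimensions the actor can perceive and plan over
env = [1.0] * N_DIMS  # a healthy environment, everywhere, to begin with

for step in range(1, 201):
    # Each candidate intervention helps the visible dimensions a little and
    # perturbs every hidden dimension at random; the actor greedily picks
    # whichever candidate looks best on the dimensions it can see.
    candidates = []
    for _ in range(5):
        visible_gain = [random.uniform(0.0, 0.05) for _ in range(PERCEIVED)]
        hidden_effects = [random.uniform(-0.03, 0.02)
                          for _ in range(N_DIMS - PERCEIVED)]
        candidates.append((visible_gain, hidden_effects))
    best = max(candidates, key=lambda c: sum(c[0]))
    for i, gain in enumerate(best[0]):
        env[i] += gain
    for j, delta in enumerate(best[1]):
        env[PERCEIVED + j] += delta
    if step % 50 == 0:
        seen = sum(env[:PERCEIVED]) / PERCEIVED
        true = sum(env) / N_DIMS
        print(f"step {step:3d}: perceived health {seen:5.2f}, "
              f"true average {true:5.2f}")
```

The perceived health of the environment climbs steadily while the average over all fifty dimensions sinks; the damage accumulates precisely where the actor isn't looking, no matter how carefully the visible dimensions are optimized. That, in miniature, is the logic of unanticipated consequences.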