Wednesday, July 27, 2011

Discounting Details

This post offers some further details in connection with an essay I've written for Bloomberg View. It will be published tomorrow, 28 July 2011. The topic is economic discounting, which I've posted on before. I naturally didn't get into any mathematical details in the Bloomberg essay, but some readers may find that a little of the mathematics helps to clarify the key point of the argument. So here goes (in sketch form; I encourage everyone to read the original paper):

Suppose that the true discount rate for next year is r1, for the year after is r2, and so on, the rate for the ith year being ri. No one knows what these will be; the rates will fluctuate from year to year. To calculate the total discount factor over a string of N years, you should multiply the individual factors associated with each year as follows,

D(N) = exp(-r1 δt) × exp(-r2 δt) × ... × exp(-rN δt) = exp[-(r1 + r2 + ... + rN) δt],

where δt = 1 year.
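In code, multiplying the per-year factors is a one-liner. Here is a minimal sketch in Python; the rates are made-up numbers, purely for illustration:

```python
import numpy as np

# Hypothetical per-year discount rates r_1..r_4 (made-up values).
rates = np.array([0.04, 0.035, 0.05, 0.045])
dt = 1.0  # δt = 1 year

# Total discount factor over N years: the product of exp(-r_i * δt).
D = np.prod(np.exp(-rates * dt))

# Because the exponents add, this equals exp of minus the summed rates.
assert np.isclose(D, np.exp(-rates.sum() * dt))
```

The equivalence in the final line is just the rule that exponents add under multiplication, which is why the product collapses to a single exponential of the summed rates along any one path.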

But because the future isn't known, Farmer and Geanakoplos point out, determining the correct discount factor to use over the coming N years means averaging over all possible future paths, i.e. all possible sequences of values r1, r2, ... up through rN. Hence, we need to calculate the value of an "effective" discount factor given by the formula,

Deff(N) = < exp[-(r1 + r2 + ... + rN) δt] >,

where the angle brackets denote the average over paths: the sum of the discount factors for all possible paths, divided by the total number of paths.
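For a toy model small enough to enumerate, the averaging can be done exactly. The sketch below is my own illustration, not the authors' model: it assumes that each year's rate is either 3% or 5% with equal probability, independently of other years.

```python
import itertools
import math

# Toy model (assumed for illustration): each year's rate is either
# 3% or 5% with equal probability, independently.
N, dt = 10, 1.0
low, high = 0.03, 0.05

total, count = 0.0, 0
for path in itertools.product([low, high], repeat=N):  # all 2^N rate paths
    total += math.exp(-sum(path) * dt)                 # this path's discount factor
    count += 1

# Effective discount factor: sum over paths divided by the number of paths.
D_eff = total / count

# Averaging factors (not rates) makes D_eff LARGER than the exponential
# at the mean rate of 4% -- low-rate paths count for more.
```

Even in this tiny example, D_eff comes out slightly above exp(-0.04 × 10): averaging the discount factors is not the same as discounting at the average rate, and that asymmetry is the seed of the whole effect.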

Now, it is tempting to think that when you go through the details of calculating this average, summing up the contributions of all possible paths and dividing by the number of paths, you will find some kind of simple result in which Deff(T) equals a single exponential factor with an average discount rate ravg for those years. In other words, you might think -- and most people's intuition would tend this way -- that you would find an equation such as

Deff(T) = exp(-ravg T).

That is, the effective discount over T years takes an exponential form with some constant ravg.

Seems sensible, but it turns out to be totally wrong. If you demand the equality reflected in the previous equation, then, to make it hold as T gets large, ravg will in many cases not be a constant in time, but will take on smaller and smaller values as T gets larger. This is what Farmer and Geanakoplos have shown, using computer simulations to do the calculation. They used a so-called geometric random walk for the fluctuating rate r, this being one of the most common mathematical processes used in finance to model interest rate fluctuations (i.e. this isn't a crazy or weird model, but a highly plausible one).
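A rough version of such a simulation is easy to sketch for yourself. The snippet below is my own illustration, not the authors' code: it assumes a driftless geometric random walk for the rate (the 4% starting rate and the volatility are made-up parameters), and then backs out, at each horizon T, the constant rate an exponential form would need in order to match Deff(T).

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, T_max, dt = 5_000, 500, 1.0
r0, sigma = 0.04, 0.3  # assumed starting rate and log-volatility

# Geometric random walk: log r takes i.i.d. Gaussian steps, so the
# rate stays positive while fluctuating multiplicatively.
steps = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, T_max))
r = r0 * np.exp(np.cumsum(steps, axis=1))

# Effective discount factor at each horizon T: the path average of
# exp(-integral of r), approximated here as a sum over yearly steps.
D_eff = np.exp(-np.cumsum(r * dt, axis=1)).mean(axis=0)

# The constant rate that WOULD reproduce D_eff(T) if discounting
# really were exponential.
T = dt * np.arange(1, T_max + 1)
r_avg = -np.log(D_eff) / T
```

The implied rate r_avg starts out near 4% but sags at long horizons, because the average is increasingly dominated by the paths on which the rate happened to wander down toward zero.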

Their simulations show that, as a result, the effective discount factor Deff(T) doesn't have an exponential form at all, but rather a very different "power law" form,

Deff(T) = 1 / (1 + αT)^β,

where α and β are constants. For large T this falls off much more slowly than an exponential. In other words, it makes discounting much weaker than the incorrect exponential form would suggest it should be.
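To get a feel for how different the two forms are at long horizons, here is a quick comparison. The numbers are purely illustrative: I'm assuming the generalized hyperbolic form Deff(T) = (1 + αT)^(-β) with made-up constants alongside a constant 4% exponential rate -- these are not values fitted in the paper.

```python
import numpy as np

r = 0.04                  # constant rate for the exponential form
alpha, beta = 0.04, 0.9   # made-up constants for the power-law form

T = np.array([50.0, 200.0, 500.0])   # horizons in years
D_exp = np.exp(-r * T)               # exponential discount factor
D_pow = (1.0 + alpha * T) ** -beta   # power-law ("hyperbolic") discount factor

# How many times more present value the power-law form assigns to the future.
ratio = D_pow / D_exp
```

With these particular constants, the two forms already differ by a factor in the hundreds at a 200-year horizon, and by 500 years the exponential form discounts the future tens of millions of times more heavily.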

In the earlier post I discussed some of the implications of this result, along with a table showing just how quickly the exponential and power-law forms come to give wildly different results: after 200 years or so, exponential discounting undervalues the future by factors of millions or billions.

It's rather frightening that a subtle error could make us mis-value the future so profoundly, but this indeed seems to be what we are currently doing. The incorrect exponential form is in wide and standard use by economists doing cost-benefit analyses of all kinds.

24 comments:

  1. This comment has been removed by the author.

2. Don't discard exponential discounting just yet. Farmer and Geanakoplos base their 'hyperbolic discount function' on the subjective value test subjects place on events far into the future. Experience and common sense tell me that people are NOT good at making such estimates.

    But even if they were, there is still a more fundamental flaw in Farmer and Geanakoplos' argument. Exponential discounting is a way of expressing the value of a future cash flow as the amount of money one would have to deposit in an account earning continuously compounded interest, today. It is not a subjective valuation of the future cash flow! Money in a deposit account, on the other hand, does not accumulate according to a hyperbolic function!

    So if Farmer and Geanakoplos care so much about events 500 or 700 years from now, they should simply found an 'environment preservation fund' and deposit one dollar earning the risk-free interest rate. Then they can sit back and let the force of compound interest work its magic.

If you don't believe me, consider what will happen to $1 deposited in an account for 500 years at a (continuously compounded) rate of 4% per annum. Then compare this with Farmer and Geanakoplos' hyperbolic function using, for instance, alpha = 0.04 and beta = 0.9. Which would you prefer? :)

3. Jens, you seem to confuse the risk-free rate and the discount rate. It is common in financial models to discount using the risk-free rate, but the real discount rate also includes premiums for risk, opportunity cost, etc.

  4. Jens,
    This might be true if there was such a thing as a 4% annual return over 500 years - but because of rate fluctuation and risk, there is not. In fact, 30 years is as good as it gets - and that still has default risk.

5. This can't be right. It seems to completely ignore 30 years of research on consistent yield curve modeling. There are perfectly mathematically well-defined and plausible models of interest rates that will reproduce today's forward rate curve under almost any reasonable dynamics. Geometric Brownian motion does give somewhat stupid results in the very long term (because rates get concentrated close to zero), but that model wasn't handed down by god - there are other much more empirically reasonable models widely used in finance. (Almost certainly that includes models that Geanakoplos uses, since he's a partner in a mortgage hedge fund, and those types of investors drove a lot of the model development.)

    Gotta go find the paper...

  6. I am a big fan of New Scientist, but I do not understand why a two year old article is suddenly news-worthy. Did it finally get picked up by a journal?

    The main point seems to be that people may be irrational in ignoring the mean-reversion of discounting rates that has been apparent in the pricing of government bonds.

    Fine, then we are irrational. But is that a basis for policy?

Fair enough, the evidence for mean-reversion is only apparent for the next thirty-ish years (and has been for the past hundred) -- so maybe there is a basis for doubt when one speculates about inter-temporal preferences several centuries from now -- but it would seem a bit anti-science to ignore the data at hand just because it contradicts an (irrational) belief.

  7. The problem with such reasoning is that it doesn't scale properly with dt. Make dt much less than a year (say, 1/1000th of a year), and then run the simulations to price a 30 year bond (30000 time periods). You'll get a miniscule rate, approaching 0 as dt -> 0. Other models don't have this characteristic; they are robust to choice of dt.

  8. @ D Mac

I haven't read the paper. Is the scaling problem you highlight to do with how you calculate the variability of interest rate fluctuations? I.e., the probability an interest rate will move by more than 2 percentage points in a year is greater than the probability it will move by 2 percentage points in a day. That would mean that as you scale down the period and dt, you average over increasingly smaller ranges of interest rate realizations. In the limit, as dt goes to zero, there would be no variation in interest rates at all, and you would just have whatever rate you started with.

9. Isn't the result of the paper driven by assuming a non-stationary process for one-period discount rates? It seems to me that if r(t) were stationary, then its average over a large number of periods would converge to some mean discount rate E[r], and D(T) would converge to something like exp(-E[r]*T) -- a standard exponential discounting formula.

10. @ivansml - I agree. I think that's part of it. What's weird is that they picked up from the Ho-Lee paper of the mid-'80s and ignored all of the (much more realistic) models of interest rate dynamics developed since then. The realistic models all have (quasi) stationary processes for the short rate. ("Quasi" because there are always initial transients to match today's forward curve.) A simple example is the Hull-White model, which has Gaussian interest rates. That model has certainty-equivalent exponential discount factors.

  12. "They used a so-called geometric random walk"

    That assumption needs to be justified. What is a geometric random walk?

  13. "Geometric Random Walks: A Survey
    SANTOSH VEMPALA

Abstract. The developing theory of geometric random walks is outlined here. Three aspects -- general methods for estimating convergence (the "mixing" rate), isoperimetric inequalities in R^n and their intimate connection to random walks, and algorithms for fundamental problems (volume computation and convex optimization) that are based on sampling by random walks -- are discussed.

1. Introduction
A geometric random walk starts at some point in R^n and at each step moves to a "neighboring" point chosen according to some distribution that depends only on the current point, e.g., a uniform random point within a fixed distance.

The sequence of points visited is a random walk. The distribution of the current point, in particular its convergence to a steady state (or stationary) distribution, turns out to be a very interesting phenomenon. By choosing the one-step distribution appropriately, one can ensure that the steady state distribution is, for example, the uniform distribution over a convex body, or indeed any reasonable distribution in R^n."

To make this assumption is scientism, not economics. I'm afraid your commitment to physics is ruining your chances of becoming a good economist.

  14. It's called "adjusting the wooden ear phones" (from Feynman's talk on cargo cults)

    http://www.lhup.edu/~DSIMANEK/cargocul.htm
