picking a curve

John Quiggin blogs about his zombie economics book, specifically the chapter on the efficient market hypothesis. This can be summarised as the doctrine that the current price of a security reflects all the publicly available information about it. There is a debate about the degree of predictive power this has and the scope of its application, but at bottom, that's the claim. In a sense, anyone who says that “you can’t buck the market” is asserting some degree of the EMH, so it’s an idea that has considerable political importance.

The EMH was originally formulated with reference to the stock market, and it’s precisely this bit that I have always found unconvincing. The reason is all about forecasts. The notion of “all the publicly available information” is a very big one; my objection is that all the publicly available information about a company is necessarily historical. Anything else is a forecast or promise derived from it. However, using that information to buy or sell the company’s shares is a forward-looking act; doing so requires you to formulate a view about the future, based on the currently available information.

Now, you can either imagine that decisions are taken on the basis of a pile of information that includes forecasts, or you can assume that the forecasts are part of the decision process. It doesn’t matter. What does matter is that decisions are taken on the basis of forward-looking statements. This immediately raises the question of how forecasts are made, which means we’ve got to open the black box and dig into institutions.

Commercial forecasts vary hugely in methodology, thoroughness, information content, and rigour; but almost all of them beyond the very simplest work by coming up with an estimated growth rate and projecting it forward to the forecast horizon. A serious one will also model some limiting factors and do the sums for several alternative scenarios – for example, oil prices at $40, $80 or $120. This done, you plug in assumptions about margins and market share, and you have your estimate for profits in 2017, and therefore a net present value for the company. You can carry out a sensitivity analysis, changing factors and seeing which have the greatest impact on the model’s output, and you will if you’re serious about this.
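
To make the shape of this machinery concrete, here’s a minimal sketch in Python. Every number in it – base revenue, margin, discount rate, the scenario growth rates – is invented for illustration, not taken from any real forecast.

```python
# A minimal sketch of the forecast-to-NPV machinery described above.
# All inputs are illustrative assumptions, not data.

def npv_of_forecast(base_revenue, cagr, margin, years, discount_rate):
    """Project revenue forward at a constant CAGR, apply a margin to get
    profits, and discount each year's profit back to a present value."""
    npv = 0.0
    for t in range(1, years + 1):
        revenue = base_revenue * (1 + cagr) ** t
        profit = revenue * margin
        npv += profit / (1 + discount_rate) ** t
    return npv

# Three alternative scenarios, e.g. keyed to oil at $40, $80 or $120.
for label, cagr in [("low", 0.02), ("central", 0.05), ("high", 0.09)]:
    value = npv_of_forecast(base_revenue=100.0, cagr=cagr,
                            margin=0.15, years=7, discount_rate=0.08)
    print(f"{label}: NPV = {value:.1f}")
```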

But in practically all forecasts, the most powerful variable is the estimated compound annual growth rate (CAGR), which is just what it says on the tin – the average growth rate you’re projecting over the forecast term. The problem with this is that it’s not data, it’s a forecasting assumption; you could decide to assume that the current CAGR will continue to hold, but that all but guarantees your forecast will be wrong, as it’s precisely changes in the underlying CAGR that drive any really big shift in an industry. So you’ve got to pick one.
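
To see how much leverage that single pick has, it’s enough to compound a made-up base figure at a few different rates; a couple of percentage points of CAGR moves the terminal-year number, and everything downstream of it, a long way.

```python
# Illustrative only: the terminal-year figure under different assumed CAGRs.
base = 100.0   # arbitrary year-zero revenue
years = 7      # forecast horizon
for cagr in (0.03, 0.05, 0.07):
    terminal = base * (1 + cagr) ** years
    print(f"CAGR {cagr:.0%}: year-{years} revenue = {terminal:.1f}")
# 3% compounds to ~123, 7% to ~161: about a third more, from a
# four-point tweak to one assumption.
```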

Of course, there are many ways you could do this. You could compare similar phenomena, or fit a Bass diffusion curve, or ask everyone else in the office to estimate it and combine their answers in a Bayesian analysis… but it’s still a pick, so it’s fundamentally going to be an index of the subjective optimism of the forecaster. This may be more or less informed or rational; but eventually it’s a question of how good you think trade will be over the next x years.
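
The Bass diffusion curve, for instance, gives you a respectable-looking S-curve for adoption of a new product, but its innovation and imitation coefficients (p and q below) are still picks; the values here are illustrative, so the subjectivity has been relocated rather than removed.

```python
import math

# Sketch of a Bass diffusion curve. p (innovation) and q (imitation)
# are the forecaster's picks; the values below are illustrative.

def bass_cumulative(t, p, q):
    """Fraction of the eventual market that has adopted by time t."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

market_size = 1_000_000  # assumed total addressable market
for year in range(1, 11):
    adopters = market_size * bass_cumulative(year, p=0.03, q=0.4)
    print(f"year {year}: ~{adopters:,.0f} cumulative adopters")
```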

To put it another way, forecasts are always driven by Keynesian animal spirits, and a forward-looking decision process based on historical inputs is critically dependent on forecasting. This is characteristic of all such processes, going right back to the Kerrison gun predictor and the other systems based on it.

This might not be so bad in terms of the EMH if there were any reason to think that the rational expectations hypothesis held for forecasts – that their errors were as likely to fall on one side of the truth as the other. But it obviously doesn’t, because there is a fairly small range of possible options (it is unlikely that two forecasters will use a 0% and a 200% CAGR for the same market), many common factors influence all the forecasters, and the creation of consensus forecasts based on the average of other forecasts creates a psychological anchor. Making radically different predictions to the consensus involves taking a risk.
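
A toy simulation of the anchoring point, with invented parameters: give every forecaster a private, unbiased signal, let each shade it towards a shared consensus figure, and the errors stop cancelling out.

```python
import random

# Toy model: anchored forecasts have correlated errors that don't
# average away. The true rate, anchor and weight are all invented.
random.seed(1)
true_cagr = 0.04
consensus = 0.08        # a shared, over-optimistic anchor
anchor_weight = 0.7     # how strongly forecasters lean on it

trials = 10_000
mean_error = 0.0
for _ in range(trials):
    own_view = random.gauss(true_cagr, 0.02)  # private, unbiased signal
    forecast = anchor_weight * consensus + (1 - anchor_weight) * own_view
    mean_error += (forecast - true_cagr) / trials

print(f"mean forecast error: {mean_error:+.3f}")  # ~ +0.028, not zero
```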

The upshot, then, is that essentially subjective choices of forecasting assumptions are market-moving factors which affect the asset side of the economy, which, as John also points out, is traditionally a problem for economists. Further, Tobin’s q implies a transmission mechanism between this and the level of capital investment in the economy as a whole, the most sensitive determinant of economic activity.
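
For reference, the q in question is just a ratio, and the mechanism follows directly from it: when q rises above one, issuing shares to buy new capital goods pays, so forecast-driven optimism in asset prices feeds through into real investment.

```latex
q = \frac{\text{market value of a firm's capital}}{\text{replacement cost of that capital}}
```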

However, as Chris Dillow points out, supposed precision forecasting has strong institutional factors in its favour, which explains its survival in government. I would suspect that similar factors help the EMH survive elsewhere.
