We are going into the budget season, when economic forecasting becomes especially prominent. There is a huge gap between how professional forecasters think about the exercise and how the commentariat treats their forecasts. Here are some insights into the way those forecasters think, even if it is dull compared with the public rhetoric.
The usual forecast is a point estimate: say, ‘consumer prices are going to rise X percent in the next year’. However, the professional forecaster is thinking of a range with the X somewhere near its middle. The range is often wide.
I illustrate this with the fan chart of consumer price forecasts from the most recent Bank of England Monetary Policy Report (February 2023).
Let’s focus on the one-year-out forecast. It was widely reported that the Bank expected consumer prices to rise 3.9 percent in the year to December 2023. But that is not actually what they thought.
The chart shows what they thought. The 3.9 percent is only the mid-point of their expectations. The Bank’s Monetary Policy Committee’s best collective judgement was that if economic circumstances identical to today’s were to prevail on 100 occasions, then on 60 of those occasions consumer prices would rise between 2.0 and 5.0 percent; on the other 40 occasions they would rise outside that range.
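For readers who like to see the mechanics, here is a minimal sketch of how such a central 60 percent band is read off a spread of possible outcomes. The numbers are purely illustrative: a symmetric spread is assumed for simplicity, whereas the band quoted above (2.0 to 5.0 percent around 3.9) is not symmetric; the point is only that a point forecast is the middle of a band, not a promise.

```python
import numpy as np

# Illustrative only: simulate many possible inflation outcomes around a
# central forecast of 3.9 percent. The spread is an assumption chosen for
# demonstration, not the Bank of England's published fan.
rng = np.random.default_rng(42)
outcomes = rng.normal(loc=3.9, scale=1.8, size=100_000)

# The central 60 percent band runs from the 20th to the 80th percentile:
# 60 percent of simulated outcomes fall inside it, 40 percent outside.
low, high = np.percentile(outcomes, [20, 80])
print(f"Point (median) forecast: {np.median(outcomes):.1f} percent")
print(f"Central 60% band: {low:.1f} to {high:.1f} percent")
```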
‘Huh,’ you say, ‘that is not very accurate.’ The reality is that forecasting is not very accurate, especially a long way out. Say you want to make a family arrangement for next Christmas Day which depends on whether it rains. Ask a meteorologist for advice and they will give you a probability assessment. Ask them whether it will rain tomorrow and they will give a probability too. When their forecasts are reported on radio, television or in the newspapers, you get their best estimate. That is what you want. It is the same in economics.
But you may not always want this best estimate. Suppose you are a farmer who faces a major loss if you assume it won’t rain and it does. You are likely to make a decision which takes into consideration the (lower) probability of it raining. The same goes for the rest of us: you may still take a coat when told it probably won’t rain, but it just might.
Another piece of public commentary which appears quite strange to a statistician is the reporting of political opinion polls. Usually at the bottom of the report is a mention of the ‘margin of error’ of the survey. Read that before you read the main body of the commentary. Just about every change breathlessly discussed, every difference pontificated upon, is smaller than the margin of error. Read your political prejudices into the polls if you like, but what they are telling a statistician is that we don’t know what the outcome of the next election will be, even if it were held in circumstances identical to today’s.
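For the curious, the conventional 95 percent margin of error quoted at the bottom of a poll report is, for a simple random sample, just a function of the sample size. A quick sketch of the textbook calculation (a poll of roughly 1,000 respondents is assumed here for illustration):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Conventional 95% margin of error for a proportion p estimated from a
    simple random sample of n respondents (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical political poll of about 1,000 respondents.
print(f"{margin_of_error(1000) * 100:.1f} percentage points")  # about 3.1
```

So a party apparently moving from, say, 33 to 35 percent between polls is well inside that noise, which is precisely the statistician’s point.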
The same is true for the economy. When the CPI or some other statistic is published, much attention is given to the difference between the outcome and the current forecast. The professional is likely to judge the outcome by where it sits in the fan. Given the ‘margins of error’ of the forecasts – indicated by that 3-percentage-point gap between the top and bottom of the 60 percent range on the Bank of England’s price fan – the professionals are likely not so much to yawn as to make a minor adjustment when revising their fan.
Can’t we make the forecasts more precise? We try, but there are inherent limitations. One is that the database is never complete, up-to-date and accurate. A second is that the underlying models are not perfect, although we are always trying to improve them; significant forecasting failure is a major incentive to try and improve the models. If the models are not used to forecast, they do not get improved. How many people hang onto the view that inflation is solely a monetary phenomenon because they have never used their theory to forecast the economy systematically? In contrast, economists had to abandon the simplistic theory because it repeatedly failed to forecast the economy with any precision at all.
Remember too that all forecasts assume something like ‘current circumstances’. Shocks blow forecasts off course. No forecaster dreamed of the possibility of a pandemic like Covid even a few months before it happened. Always read carefully the assumptions that a forecaster sets out; perhaps one should dismiss any report of a forecast where they are not given.
Why then bother to forecast the economy (or the weather)? The Treasury has to, because its economic forecasts underpin the fiscal forecasts which are required to plan its borrowing program. Money markets would get very disturbed if the Treasury were changing its borrowing plans every month.
The Reserve Bank is charged with maintaining price stability. It does so by setting the floor interest rate (the Official Cash Rate), which impacts quickly on the monetary system. But it takes longer to impact on the production and expenditure system; the conventional wisdom is about two years. Look, though, at the Bank of England’s forecast fan for consumer prices. The 60 percent range two years out is between 0 and 3.5 percent. Perhaps they have been given an impossible mandate, but be that as it may, they cannot pursue it without forecasts. (To be fair to the Bank of England, observe that it has been predicting that consumer price increases will peak; as a general rule, the hardest issue an economic forecaster faces is tracking the economy at its peaks and troughs.)
Most importantly, a forecasting exercise forces the economists involved to think systematically about the economy. That makes it easier to identify slow structural change and to respond rapidly to a shock. (I was impressed by the speed with which both the Treasury and the Reserve Bank responded to the Covid shock, an indication that they had a good understanding of their forecasts.)
Anybody can announce a point forecast of the consumer price index at a particular time by randomly guessing. The commentariat often does; best not to look at the record of its past forecasts, though. Yet it is the point forecasts everyone looks at; it would be better if they thought about the fan.