Output, Interest and Prices: Part II
Revolting against the neoclassicals
Counterrevolution 1: The Monetarists
The flagbearer of monetarism was Friedman. Having already influenced macroeconomics via his permanent income hypothesis, he made three main contributions under the banner of monetarism: the stability of the money demand function, the role of money supply fluctuations in the business cycle and the expectations-augmented Phillips curve.
Friedman’s 1956 Studies in the Quantity Theory of Money set up “a theory of the demand for money” - in particular, he argued that the demand for real money balances was a function of the level of someone’s permanent income, the interest rate on a range of financial and physical assets as well as expected inflation. Compared to the original Keynesian view, this replaced current income with expected future income, incorporated a wider range of assets beyond just long-term bonds and included the role inflation would have in eroding the value of money.
The consequence is that money demand, and hence velocity, would be more stable, since permanent income is less volatile than current income and since changes in the money supply are unlikely to affect returns on all possible assets at once. The stability of money velocity implies “substantial changes in prices or nominal income are almost invariably the result of changes in the nominal supply of money”. This is buttressed by the portfolio adjustment mechanism, whereby a change in the money supply which creates excess money balances leads people to buy up not just bonds but all sorts of assets, allowing monetary impulses to have a much larger effect across a range of markets. Friedman also stressed, in his 1960 book A Program for Monetary Stability, that “monetary changes have their effect only after a considerable lag and over a long period and that the lag is rather variable”. In this way, Friedman brought the equation of exchange back to the forefront of macroeconomics, especially in relation to short-run business cycles. His 1963 book with Anna Schwartz, A Monetary History of the United States, applied this framework to the Great Depression. They argued that the Great Depression was caused by the failure of the Federal Reserve to prevent the banking panics of the early 1930s from collapsing the money supply.
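In terms of the equation of exchange (standard notation, with M the money supply, V velocity, P the price level and Y real output), Friedman's claim amounts to:

```latex
MV = PY
\quad\Longrightarrow\quad
\frac{\Delta M}{M} + \frac{\Delta V}{V} \approx \frac{\Delta P}{P} + \frac{\Delta Y}{Y}
```

With V stable and Y pinned down by real factors in the long run, sustained money growth translates into inflation; in the short run, it shows up in nominal income and output.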
The third contribution came in his analysis of the Phillips curve - by this point, many were treating the Phillips curve as a menu of inflation and output options that a central bank could choose from. Clearly this seems in tension with the classical dichotomy - how was it possible that a monetary choice could let central banks pick output beyond the productive potential of the economy? Alongside Edmund Phelps, Friedman provided an answer to this paradox. In Phelps’s 1967 “Phillips Curves, Expectations of Inflation and Optimal Unemployment over Time” and 1968 “Money Wage Dynamics and Labour Market Equilibrium”, an expectations-augmented Phillips curve was set up. The idea was that inflation depended upon the deviation of output from its natural level and upon expectations of inflation. If central banks attempted to exploit the Phillips curve to get higher output with higher inflation, people would soon adapt their inflation expectations - that is, there was no perpetual money illusion. Friedman modelled expectations as adaptive i.e. equal to inflation in the previous period - as such, with the change in inflation related to the output deviation, permanently pegging output above its natural level would lead to ever-accelerating inflation.
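In the notation later standard for this argument (the functional form and the coefficient α are illustrative, not Friedman's own), the expectations-augmented Phillips curve combined with adaptive expectations gives the accelerationist result:

```latex
\pi_t = \pi_t^e + \alpha\,(y_t - \bar{y}_t), \qquad \alpha > 0
```

```latex
\pi_t^e = \pi_{t-1}
\quad\Longrightarrow\quad
\pi_t - \pi_{t-1} = \alpha\,(y_t - \bar{y}_t)
```

Holding output above its natural level therefore raises inflation by α(y − ȳ) every period: a permanent output gap means permanently rising inflation, not a permanently higher but stable rate.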
Friedman summarised the implications of monetarism in his 1968 address “The Role of Monetary Policy”. He saw the Depression as a “tragic testimony to the power of monetary policy - not, as Keynes and so many of his contemporaries believed, evidence of impotence”. Coupled with his skepticism of central bank fine-tuning and discretion due to long and variable lags, he argued that the role of monetary policy should be to “prevent money itself from being a major source of economic disturbance” as well as to “provide a stable background for the economy”, with the idea of “offsetting major disturbances in the economic system” being “far more limited than is commonly believed”. Certainly, it was not possible for monetary policy to take advantage of the Phillips curve and peg real values like output at a level above the natural level. That is, “there is always a temporary trade-off between inflation and unemployment; there is no permanent trade-off”. This was because “the temporary trade-off comes not from inflation per se, but from unanticipated inflation, which generally means, from a rising rate of inflation”. As such, the best thing monetary policy could do was to follow a clear rule about the money growth rate which would keep the path of the price level stable and predictable.
The monetarist counterrevolution did successfully land a few powerful blows on the neoclassical synthesis. It highlighted the importance of monetary policy as well as its limitations. The revival of the quantity theory in Friedman’s 1970 The Counterrevolution in Monetary Theory reiterated that “inflation is always and everywhere a monetary phenomenon in the sense that it is and can be produced only by a more rapid increase in the quantity of money than in output”.
Friedman’s work with Schwartz provided an account of the Great Depression that contradicted the Keynesian story, one which is still taken as canon to this day. The Keynesian story invoked the idea that low interest rates implied expansionary monetary policy, and so the lack of a recovery meant fiscal policy was needed. What Friedman and Schwartz showed was that monetary policy was actually still quite contractionary - in Wicksellian terms, the natural rate had fallen faster than central banks had lowered the market rate. And therein lies one of the costs of forgetting the Wicksellian distinction.
As for the expectations-augmented Phillips curve, although again not entirely new given Wicksell’s observations regarding unanticipated inflation, it was a powerful rejoinder to the neoclassical consensus. According to Greg Mankiw and Ricardo Reis in their 2018 “Friedman’s Presidential Address in the Evolution of Macroeconomic Thought”, his use of this to predict the 1970s stagflation where inflation rose while output stagnated was “one of the greatest successes of out-of-sample forecasting by a macroeconomist”.
However, monetarism did not replace the neoclassical synthesis. For one, Friedman conceded in his 1970 “A Theoretical Framework for Monetary Analysis”, where he translated monetarism into IS-LM language, that “the basic differences among economists are empirical not theoretical”. The dramatic change in the velocity of money that soon followed would limit the validity of his views too. And most importantly, it takes a model to beat a model - Friedman’s unwillingness to do general equilibrium macroeconomic modelling means that monetarism did not provide a sufficient alternative to take over. Nonetheless, it was clear that the neoclassical synthesis faced flaws, spurring on the New Classicals to do the formal modelling that would replace it.
Counterrevolution 2: The New Classicals
The research program of the New Classicals represented the birth of modern macroeconomics as it is done today - in that sense, it was not just a counterrevolution so much as a Kuhnian revolution in its own right. Although its contributions were incredibly wide-ranging, the core message the New Classicals offered was methodological: rather than having aggregate relationships and expectations described in an ad hoc fashion, they argued that all of this ought to be derived from basic microeconomic principles in general equilibrium.
The groundwork for the New Classical revolution was laid by John Muth in 1961 with his “Rational Expectations and the Theory of Price Movements”. This paper introduced the idea of rational expectations i.e. the idea that consumers and firms formed expectations in a manner that would be “essentially the same as the predictions of the relevant economic theory”. In other words, people’s expectations were consistent with the model and they could not be systematically wrong. In the case of inflation, that meant actual inflation would equal expected inflation plus an unpredictable error term. By contrast, with adaptive expectations it was possible for people to be persistently fooled, since their expectations were entirely backwards-looking.
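Formally (in later textbook notation, not Muth's own), rational expectations say the subjective expectation coincides with the mathematical expectation conditional on the available information set Ω:

```latex
\pi_t^e = E\!\left[\pi_t \mid \Omega_{t-1}\right]
\quad\Longrightarrow\quad
\pi_t = \pi_t^e + \varepsilon_t, \qquad E\!\left[\varepsilon_t \mid \Omega_{t-1}\right] = 0
```

The forecast error ε is unpredictable given what agents knew, which is precisely what rules out their being systematically fooled.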
The importance of this was underlined by Robert Lucas’s 1976 “Econometric Policy Evaluation: A Critique”. In this canonical paper, he put forward what is now known as the Lucas critique. He noted that in the neoclassical synthesis, “theorists suggest forms for consumption, investment, price and wage setting functions separately; these suggestions, if useful, influence individual components … The aggregate behaviour of the system then is whatever it is”. The problem was that using these models to predict outcomes when policy changed involved assuming people’s decisions did not vary with policy choices. As Lucas put it, “everything we know about dynamic economic theory indicates that this presumption is unjustified … to assume stability under alternative policy rules is thus to assume that agents’ views about the behaviour of shocks to the system are invariant under changes in the true behavior of these shocks”. Insofar as “optimal decision rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models”.
Methodologically, this was a big change, since it implied that the only way to avoid the problem was to model from microfoundations - that is, from structural parameters which were invariant to policy, i.e. the tastes and technology available to consumers and firms, as well as the constraints which implied tradeoffs in their preferences. So instead of having a consumption function that related output to consumption mechanically, it was necessary to model how consumers optimised intertemporally. Instead of having a Phillips curve that related output to prices mechanically, it was necessary to model how firms might set prices based on their expectations of the future. In this way, Lucas set up the modern method of macroeconomic modelling: dynamic stochastic general equilibrium models. Dynamic meant that the model occurred over time with agents being forward-looking, stochastic meant there were random shocks, general meant considering the entire economy simultaneously and equilibrium meant thinking about how consumers and firms optimised their behaviour for their objectives subject to constraints.
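As a sketch of what modelling consumption from microfoundations looks like (a standard textbook derivation, not one from Lucas’s paper): a household choosing consumption c to maximise expected discounted utility subject to its budget constraint yields an Euler equation rather than a mechanical consumption function:

```latex
\max_{\{c_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t)
\quad\Longrightarrow\quad
u'(c_t) = \beta\,(1 + r_t)\, E_t\!\left[u'(c_{t+1})\right]
```

The discount factor β and the utility function u are structural parameters invariant to policy; how consumption responds to, say, a tax change then falls out of the optimisation, instead of being hard-wired into an aggregate equation.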
Lucas deployed the insights of rational expectations in his seminal 1972 paper, “Expectations and the Neutrality of Money”, providing a general equilibrium explanation that reconciled the Samuelson-Solow Phillips curve with the Friedman-Phelps version. He would go on to build the first business cycle model with these features in his 1973 “Some International Evidence on Output-Inflation Tradeoffs”. The idea in both was the Lucas islands model, which was inspired by the work of Phelps et al. in their 1970 Microeconomic Foundations of Employment and Inflation Theory. In short, there are producers on individual islands who can only see the nominal prices of their goods but not the general price level. As such, they have to go through a process of signal extraction, trying to figure out whether a rise in their nominal price is due to a nominal shock i.e. a rise in the general price level, or a real shock i.e. a rise in demand for their good. In the former case, they shouldn’t do anything, but in the latter case, they ought to raise real output. Even with rational expectations and no money illusion, this imperfect information meant that they would produce more when faced with a nominal shock, producing a relationship between output and inflation. However, their rational expectations meant that they could not be tricked by a systematic central bank policy of using nominal shocks to drive up real output. The consequence of making clear the difference between anticipated and unanticipated inflation is that there would be a temporary but not permanent Phillips curve: output as a function of its natural level, of the deviation of inflation from expectations and of random shocks.
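In standard notation, the resulting Lucas supply curve is:

```latex
y_t = \bar{y}_t + \alpha\,(\pi_t - \pi_t^e) + \varepsilon_t
```

Only the surprise component π − πᵉ moves output away from its natural level, so anticipated monetary policy is neutral while unanticipated policy is not.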
He would go on to build a more complete business cycle model in this vein with his 1975 “An Equilibrium Model of the Business Cycle” and expound upon it in his 1977 “Understanding Business Cycles”. This was a model of business cycles built on Walrasian general equilibrium, with microfoundations for the behaviour of rational, maximising agents. And crucially, Lucas described “the central problem in macroeconomics”, given that “business cycles are all alike”, as finding a framework for showing monetary non-neutrality without the existence of persistently unexploited opportunities for mutual gain. By producing business cycle fluctuations without the need to resort to non-clearing markets or disequilibrium, Lucas did exactly that.
This combination of rational expectations, the natural rate hypothesis and general equilibrium microeconomics spawned the rest of the New Classical literature, concentrated among “freshwater” universities near the Great Lakes i.e. the likes of Chicago, Northwestern, Pennsylvania, Rochester and Carnegie-Mellon. For one, Thomas Sargent and Neil Wallace wrote “Rational Expectations, the Optimal Monetary Instrument and the Optimal Money Supply Rule” in 1975, where they argued for the policy ineffectiveness proposition i.e. monetary policy could not systematically manage output and employment levels, since any systematic policy actions would be anticipated by rational agents. Another contribution would come in 1977 from Finn Kydland and Edward Prescott, who published “Rules Rather than Discretion”. In this, they noted that even if central banks had perfect information about economic shocks and how to deal with them, the problem of time inconsistency meant optimal policy might not be achievable. For example, consider a central bank which promises to lower inflation to a certain level - in the next period, once people’s inflationary expectations are lowered, it has an incentive to renege on that promise and exploit the Phillips curve relationship. As such, rational agents anticipating this will not believe that the promise to lower inflation is credible. The implication of this for central bank policy was explored by Robert Barro and David Gordon in two 1983 papers, “Rules, Discretion and Reputation in a Model of Monetary Policy” and “A Positive Theory of Monetary Policy in a Natural Rate Model”. They pushed for the use of systematic rules that would constrain central banks, eliminating the inflationary bias that arises when central banks attempt to exploit inflation in the short run.
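The inflationary bias can be seen in a stripped-down version of the Barro-Gordon setup (the linear loss function here is an illustrative simplification): the central bank dislikes inflation but values output above its natural level, subject to the expectations-augmented Phillips curve:

```latex
\min_{\pi} \; \tfrac{1}{2}\pi^2 - b\,(y - \bar{y}), \qquad
y = \bar{y} + \alpha\,(\pi - \pi^e)
\quad\Longrightarrow\quad \pi = \alpha b
```

Under discretion, rational agents set πᵉ = αb, so output stays at ȳ while inflation is positive: society bears the cost of inflation with no output gain, which is why a binding rule (π = 0) does strictly better.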
I think it is incredibly difficult to overstate how much of a change the New Classicals represented - if there was ever a father of modern macroeconomics, it is Lucas. In his 1991 “Macroeconomics in Disarray”, Greg Mankiw described the New Classicals as “young Turks intent on destroying the consensus”, who “led a revolution in macroeconomics that was as bloody as any intellectual revolution can be”.
As we have seen, many of these ideas weren’t entirely new: the natural rate hypothesis, the importance of expectations in the Phillips curve and even the policy ineffectiveness proposition follow naturally from the commentary of Wicksell and Friedman. But it was the way of doing macroeconomics that was revolutionary!
Firstly, the formalisation of general equilibrium macroeconomics was a rejection of the increasing tendency within the neoclassical synthesis to go equation by equation without treating the system as a whole, due to the focus on large macroeconometric models. Secondly, the use of microeconomic foundations represented a complete rejection of the more ad hoc neoclassical approach. Thirdly, the idea of rational expectations provided a formal basis for including expectations within macroeconomic models that was consistent across all of its components. Combined together, they represent an entirely new paradigm in terms of writing down models and producing theory.
And they weren’t quiet about it. In 1978, Lucas and Sargent turned up to a conference at the Federal Reserve Bank of Boston. It was there, in the heart of the neoclassical hegemony, that they gave their contribution, titled “After Keynesian Macroeconomics”. It remains one of the harshest and fiercest rebuttals of Keynesian macroeconomics to date. They declared that the propositions that Keynesian predictions were “wildly incorrect” and that “the doctrine on which they were based is fundamentally flawed” were “now simple matters of fact”. Their intent was to show that these flaws were “fatal” - in other words, that “modern macroeconomic models are of no value … and that this condition will not be remedied by modifications along any line”. They spoke of the “spectacular failure of the Keynesian models in the 1970s” as “econometric failure on a grand scale”. In that sense, “Keynesian policy recommendations have no sounder basis, in a scientific sense, than recommendations of non-Keynesian economists or, for that matter, non-economists”. The job now was one of “sorting through the wreckage”. As Lucas commented in a 1998 interview with Brian Snowdon and Howard Vane, they were “in the enemy camp and were trying to make a statement that we weren’t going to be assimilated”.
Unsurprisingly, the old guard didn’t take this sitting down. Even at the 1978 conference, Benjamin Friedman (no relation to Milton) responded by claiming that Lucas and Sargent had “declined to answer substantive questions raised about their equilibrium business cycle theory”, decrying their contribution as merely “unfocused rhetorical attack”. Solow joined in too, noting that they seemed “to regard the postulate of optimising behaviour as self-evident and the postulate of market-clearing behaviour as essentially meaningless … the one that they think is self-evident I regard as meaningless and the one that they think is meaningless, I regard as false”. He continued this rejection of New Classical economics by noting that “Lucas and Sargent say after all there is no evidence that labour markets do not clear, just the unemployment survey. That seems to me to be evidence”. And Tobin commented in his 1980 Asset Accumulation and Economic Activity that “obviously, we do not live in an Arrow-Debreu world”, so a “literal application of the market-clearing idea” as implied by the New Classicals was a “severe draft on credulity”.
Nonetheless, this polemical approach continued in Lucas’s 1980 “The Death of Keynesian Economics”. He posited that “Keynesian economics is dead” in a sociological sense. That is, “one cannot find a good, under-40 economist who identifies himself and his work as Keynesian. Indeed, people even take offence if referred to in this way. At research seminars, people do not take Keynesian theorising seriously any more – the audience starts to whisper and giggle to one another”. And he was in many ways correct for a while - but it wasn’t the New Classical economists who took over.
Counterrevolution 3: The Real Business Cycle Theorists
Although the methodological innovations of the New Classicals were important, it was soon clear that business cycle models of monetary misperceptions were unable to match the magnitudes of real world fluctuations. And it seemed absurd that rational individuals would not simply find ways to figure out the growth of the money supply, given that the information was publicly available. So the need to account for business cycle fluctuations remained, and other freshwater economists took on this challenge. Charles Nelson and Charles Plosser argued in their 1982 “Trends and Random Walks in Macroeconomic Time Series” that “macroeconomic models that focus on monetary disturbances as a source of purely transitory fluctuations may never be successful in explaining a large fraction of output variation and that stochastic variation due to real factors is an essential element of any model of macroeconomic fluctuations”. Their argument relied on an empirical analysis of macroeconomic time series - if it were the case that real shocks were unimportant, we ought to see output and other variables return to their trend level. Instead, they found that a lot of shocks were persistent and affected the trend level. Buttressed by the very real nature of the 1970s oil shocks, real disturbances were put back into the forefront in analysing business cycles. Kydland and Prescott would be the first to model this, publishing “Time to Build and Aggregate Fluctuations” in 1982. They were quickly followed by John Long and Charles Plosser, in their 1983 “Real Business Cycles”. And thus real business cycle theory kicked off.
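Nelson and Plosser's point can be illustrated with a toy simulation (the parameter values are mine, purely for illustration): feed a single one-off shock into a trend-stationary process and into a unit-root process, and compare how long its effect lasts.

```python
def simulate(n, rho, shock_at, shock_size):
    """Propagate one shock through the deviation-from-trend process y_t = rho * y_{t-1} + e_t."""
    y = 0.0
    path = []
    for t in range(n):
        e = shock_size if t == shock_at else 0.0
        y = rho * y + e
        path.append(y)
    return path

# Trend-stationary case (|rho| < 1): the deviation decays back towards trend.
stationary = simulate(n=50, rho=0.8, shock_at=10, shock_size=1.0)

# Unit-root case (rho = 1, a random walk): the shock shifts the level permanently.
unit_root = simulate(n=50, rho=1.0, shock_at=10, shock_size=1.0)

print(abs(stationary[-1]) < 0.01)  # True: the shock has essentially died out
print(unit_root[-1])               # 1.0: the shock is fully persistent
```

If monetary disturbances were merely transitory, output would behave like the first series; Nelson and Plosser found that macroeconomic time series look much more like the second, pointing to persistent real shocks.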
The basic idea behind Kydland and Prescott’s paper was that exogenous changes in technology could provide an impulse for fluctuations. These would be amplified by the lags in the investment process, the desire of workers to substitute their labour across time and the desire of consumers to smooth their consumption. The result was a model which matched the stylised facts and replicated the real world data in the US between 1950 and 1975. Without any sort of rigidities or frictions or the need for money, they had managed to produce a realistic-looking model of the business cycle. However, it is worth noting that they rejected traditional econometric tests as those “would have resulted in the model being rejected”, instead arguing for calibration, where the goal was simply to figure out which parameter values allowed the model to best fit the data. And when Prescott went on to integrate this into the Solow growth model in his 1986 “Theory Ahead of Business Cycle Measurement”, with the technological shocks being ones which affected the marginal product of labour, the title itself is a testament to their view of econometric measurement. The implications of RBC theories were enormous - if true, they meant that involuntary unemployment didn’t exist, that business cycle fluctuations were efficient and that the role of government was simply to improve technological progress, not to stabilise the business cycle.
It was soon clear that RBC theory, at least in its simplest form, could not be accurate. Some of this is common sense - the Great Depression just wasn’t everyone deciding to go on holiday, and the microeconomic evidence on labour supply just didn’t match the theory. Technological diffusion is slow, and it’s not really plausible that society suddenly loses some technology. Money is very much non-neutral, which just wasn’t accounted for. Inflation is often countercyclical to output, unlike the procyclical implications of RBC models. And the persistence of shocks could be accounted for by other mechanisms, such as hysteresis and learning-by-doing. The former is the notion that after a long negative shock, the productivity of workers may have worsened due to a long period of unemployment, while the latter reflects the fact that productivity often rises as a result of workers and firms producing a lot and getting better at it. But in spite of all these empirical difficulties, real business cycle theorists did leave lasting contributions. The most notable is the idea put forward by Thomas Cooley in his 1995 book Frontiers of Business Cycle Research, which aimed to summarise the RBC research program: “growth and fluctuations are not distinct phenomena to be studied with separate data and different analytical tools”.
As with the New Classicals, the RBC theorists provided a huge upgrade in methodological firepower. And the basic Kydland and Prescott model would be the core of macroeconomic modelling going forward - in that sense, it surpassed the New Classicals, whose imperfect information models of the business cycle would not be taken further. Indeed, Lucas conceded in his 2001 “Professional Memoir” that “the Bald Peak conference … marked the beginning of the end for my attempts to account for the business cycle in terms of monetary shocks”. This was the 1978 conference where Kydland and Prescott’s model was presented. And consequently, he confessed to John Cassidy in a 1996 New Yorker interview that “monetary shocks just aren’t that important”. So perhaps the best analogy for what RBC theory did for the New Classicals is what the neoclassical synthesis did for the Keynesians: it transformed a few revolutionary ideas into an entire field.