Salvaging Keynes: The New Keynesians
So given the implausibility of RBC theory, did Keynesians just go quietly into the night? No. While the methodological implications of the New Classicals and the RBC theorists were damning for Old Keynesian theory, a new generation of researchers kept up the criticism of RBC theory and looked for ways to revive Keynesian insights about recessions while retaining rational expectations and microeconomic foundations. They were located in coastal “saltwater” institutions, such as Harvard, Stanford, Yale and Berkeley. And whatever disagreements had been seen so far over the New Classical counterrevolution paled in comparison to those that arose in this period - this remains the most bitter and vicious episode in macroeconomics to date, with freshwater and saltwater economists duking it out from the early 1980s until the late 1990s.
One of the first salvos came from Larry Summers’s 1986 response to Prescott’s paper arguing for theory over measurement - in this piece titled “Some Skeptical Observations on Real Business Cycle Theory”, he gave his view that “real business cycle models of the type urged on us by Prescott have nothing to do with the business cycle phenomena observed in the United States or other capitalist economies”. The fact that it could be calibrated to fit the facts was unimportant, since “extremely bad theories can predict extremely well”. Indeed, “many theories can approximately mimic any given set of facts; that one theory can does not mean that it is even close to right”. Summers suggested that “the image of a big loose tent flapping in the wind comes to mind” - that is, RBC theory being tied to reality by a few calibrated parameters did not make it an accurate theory. More importantly, it simply “defies credulity” not to think about exchange failures in explaining large-scale recessions - in that sense, “nothing could be more counterproductive in this regard than a lengthy professional detour into the analysis of stochastic Robinson Crusoes”.
Referring to New Classicals and RBC theorists under the same broad umbrella, Alan Blinder followed suit, declaring in his 1988 “The Fall and Rise of Keynesian Economics” that “the ascendancy of new classicism in academia was instead a triumph of a priori theorising over empiricism, of intellectual aesthetics over observation and, in some measure, of conservative ideology over liberalism”. Solow rejoined the fray too with his 1987 Nobel Lecture/1988 paper titled “Growth Theory and After”, noting that the setup of RBC models means that “any kind of market failure is ruled out from the beginning by assumption … I find none of this convincing”.
Plosser duly responded, fending off these complaints that freshwater macroeconomics was too idealised with his 1989 “Understanding Real Business Cycles”. He said that “it is logically impossible to attribute an important portion of fluctuations to market failure without an understanding of the sorts of fluctuations that would be observed in the absence of the hypothesised market failure. Keynesian models started out asserting market failures (like unexplained and unexploited gains from trade) and thus could offer no such understanding”.
So the New Keynesians had their work cut out: they needed to find rigidities which could explain, rather than simply assert, market failures. In pursuing that, they focused on microfounding four types of rigidity: in nominal wages, nominal prices, real wages and real prices.
The first generation of New Keynesians looked at nominal wages, with Stanley Fischer’s 1977 “Long-Term Contracts, Rational Expectations and the Optimal Money Supply Rule”, Edmund Phelps and John Taylor’s “Stabilising Powers of Monetary Policy under Rational Expectations” in the same year, as well as Taylor’s 1980 “Aggregate Dynamics and Staggered Contracts”. The basic idea in all of these papers was that even with rational expectations, there could be rigidities if wages were determined via longer-term contracts rather than spot markets. While spot markets might be appropriate where buyers and sellers could be anonymous (e.g. financial assets) or where the product was homogeneous (e.g. agricultural produce), neither was true of labour. Instead, the cost of transactions and negotiations meant that it was easier to stick to a contract, creating nominal wage rigidity.
This was followed by explanations of nominal price rigidity. One important suggestion was that of menu costs, made by Mankiw in his 1985 “Small Menu Costs and Large Business Cycles”, by George Akerlof and Janet Yellen in two 1985 papers, “A Near Rational Model of the Business Cycle, with Wage and Price Inertia” and “Can Small Deviations from Rationality Make Significant Differences to Economic Equilibria?”, as well as by Julio Rotemberg in his 1987 “The New Keynesian Microfoundations”. The notion was that it was costly to reset prices - not only in the literal physical effort required to change them, but also because it was expensive to spend time renegotiating purchase contracts with suppliers and sales contracts with customers. Of course, price-setting only makes sense as an idea if firms have some sort of market power - as such, New Keynesians modelled this by assuming monopolistic competition, with the help of the technical work in Avinash Dixit and Joseph Stiglitz’s 1977 “Monopolistic Competition and Optimum Product Diversity”. What Olivier Blanchard and Nobuhiro Kiyotaki outlined in 1987 in “Monopolistic Competition and the Effects of Aggregate Demand” was that this provided a strong foundation for explaining how small nominal price rigidities could affect real output, since the private incentive for a firm to change its price was much smaller than the social cost of prices failing to change.
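To see the force of the menu cost argument, it helps to spell out the envelope-theorem logic that runs through these papers (the notation here is my own shorthand rather than any one paper’s). Around a firm’s optimal price $p^*$ the profit function $\Pi$ is flat, so the private loss from leaving the price at its old level $p_0$ is only second order:

$$ \Pi(p^*) - \Pi(p_0) \;\approx\; -\tfrac{1}{2}\,\Pi''(p^*)\,(p_0 - p^*)^2 \qquad \text{since } \Pi'(p^*) = 0 $$

A firm will therefore leave its price unchanged whenever this loss is smaller than the menu cost, even though the aggregate price stickiness created by many firms behaving this way has first-order effects on real output - which is how small menu costs can generate large business cycles.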
On their own, these nominal rigidities did not seem large enough to matter, but as Laurence Ball and David Romer noted in their 1990 “Real Rigidities and the Non-Neutrality of Money”, they could be amplified by real rigidities into quantitatively significant effects on output. In terms of real wage rigidity, there were three main theories. The first was the idea of implicit contracts, put forward independently in Martin Baily’s 1974 “Wages and Employment under Uncertain Demand”, Donald Gordon’s 1974 “A Neoclassical Theory of Keynesian Unemployment” and Costas Azariadis’s 1975 “Implicit Contracts and Underemployment Equilibria”. Because firms had access to capital and insurance markets, they were better able to weather economic fluctuations than workers - consequently, workers would be willing to accept a stable wage that was on average lower, as a way to insure against a more variable income stream. The second was the insider-outsider theory proposed by Assar Lindbeck and Dennis Snower in their 1988 The Insider-Outsider Theory of Employment and Unemployment, whereby the costs of hiring new employees ensured that existing employees had some insider power to extract rents in the form of higher wages. And the third was the efficiency wage hypothesis, as summarised by Akerlof and Yellen in their 1986 Efficiency Wage Models of the Labour Market, which posited that labour productivity depends on wages. This could be for a variety of reasons. One, suggested by Andrew Weiss’s 1980 “Job Queues and Layoffs in Labor Markets with Flexible Wages”, was adverse selection: firms are unwilling to hire workers willing to take lower wages, since that signals they aren’t very qualified. Another was shirking, a la Carl Shapiro and Joseph Stiglitz’s 1984 “Equilibrium Unemployment as a Worker Discipline Device”: the principal-agent problem of monitoring worker productivity could only be ameliorated if firing was a credible threat, i.e. if workers could not earn those wages elsewhere. And a third explanation, suggested by Akerlof in his 1982 “Labour Contracts as Partial Gift Exchange”, was the notion of fairness: workers who feel screwed over by low wages will put in less effort. All of these real wage rigidities meant that insofar as “in equilibrium an individual firm’s production costs are reduced if it pays a wage in excess of market clearing … there is equilibrium involuntary unemployment”.
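A compact way to state the common core of these efficiency wage stories - a standard textbook condition rather than anything quoted from Akerlof and Yellen - is that a firm choosing its wage to minimise the cost per unit of effective labour, with effort $e(w)$ increasing in the wage, sets the wage where the elasticity of effort with respect to the wage equals one:

$$ \min_{w} \; \frac{w}{e(w)} \quad \Longrightarrow \quad \frac{w\,e'(w)}{e(w)} = 1 $$

If the wage satisfying this condition lies above the market-clearing wage, the firm has no incentive to cut it even when unemployed workers offer to work for less - which is exactly the equilibrium involuntary unemployment described in the quotation above.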
And for real price rigidities, several mechanisms were suggested. One, outlined in “Markups and the Business Cycle” by Julio Rotemberg and Michael Woodford in 1991, was that monopolistically competitive firms faced more competition in booms - by contrast, they were able to engage in more implicit collusion during recessions. As such, the markup of price above marginal cost was countercyclical, resulting in a real friction. Another was the idea of thick markets, whereby search costs within markets are lower during booms because the market is more packed with buyers and sellers - again, there would be a countercyclical marginal cost acting as a real rigidity. A further option might be the fact that it is more expensive to seek external financing than internal financing, due to the information asymmetry between borrowers and lenders - in recessions, the availability of internal funds is limited, and so the costs of borrowing, as Ben Bernanke and Mark Gertler argued in their 1989 “Agency Costs, Net Worth and Business Fluctuations”, would rise as firms shifted to external financing. And finally, Stiglitz posited in his 1987 “The Causes and Consequences of the Dependence of Quality on Price” that people might judge quality based on price, resulting in another real rigidity against price flexibility.
In their 1991 book New Keynesian Economics, Mankiw and Romer collated all of these strands, arguing that “a distinguishing feature of the new Keynesian economics” was the interaction between real imperfections, where desired relative prices were not perfectly responsive to changes in demand, and nominal rigidities, where desired nominal prices were not perfectly responsive to desired relative prices.
So with their own research program blossoming, the New Keynesians doubled down on their attacks on the RBC theorists. Mankiw capitalised on his New Keynesian work by publishing “A Sticky Price Manifesto” with Ball in 1994, where they described “two kinds of macroeconomists … One kind believes that price stickiness plays a central role in short-run economic fluctuations … The other kind does not”. Those who did not conform to the traditional view that price rigidities mattered were “heretics”, and they claimed that “a macroeconomist faces no greater decision than whether to be a traditionalist or a heretic. This paper explains why we choose to be traditionalists”.
The RBC theorists were by no means quiet about their discontent. Lucas produced a furious and scathing commentary piece on Ball and Mankiw’s manifesto, asking “why do I have to read this? This paper contributes nothing - not even an opinion or belief - on any of the substantive questions of macroeconomics. What fraction of US real output variability in the postwar period can be attributed to monetary instability? … Ball and Mankiw have nothing to offer on this question, a very difficult one, beyond saying, trivially, that they believe the answer is a positive number and suggesting, falsely and dishonestly, that others have asserted it is zero. Yet monetary non-neutrality is the intended subject of their paper! One can speculate about the purposes for which this paper was written - a box in the Economist? - but obviously it is not an attempt to engage other macroeconomic researchers in debate over research strategies”.
Barro would also get involved, with a 1989 paper titled “New Classicals and Keynesians, or the Good Guys and the Bad Guys”. With the title itself seething with pettiness, it is no surprise that the rest of the piece is just as hard-hitting. With respect to New Keynesianism, he said it was “hard to see how these ideas constitute a well-defined area of research that will actually rehabilitate Keynesian analysis”. As such, “macroeconomic research seems to be evolving into two camps: could it be the good guys versus the bad guys?”
In light of these responses, Mankiw proceeded to keep up the critiques with his 1989 “Real Business Cycles: A New Keynesian Perspective”, where he said that “real business cycle theory does not provide an empirically plausible explanation of economic fluctuations”. This was because “if society suffered some important adverse technological shock, we would be aware of it” - by contrast, “it seems undeniable that the level of welfare is lower in a recession”, making RBC theory’s explanations and implications dubious. In the same year, Taylor published his “The Evolution of Ideas in Macroeconomics”, where he referred to “this extreme view” of RBC theory as “far from reality”. He reiterated this view in his 2007 “Thirty Five Years of Model Building for Monetary Policy Evaluation”, where he described this period of domination by RBC theorists as a “dark age”.
Tobin piled on too in 1996, describing RBC theory as the “elegant fantasies” of “Robinson Crusoe macroeconomics” in his book Full Employment and Growth. And econometrician Chris Sims, who was no fan of the neoclassical synthesis (saying it “was corrupt” and “deserved its fate” in his 2011 Nobel Lecture), would nevertheless posit that “it is fair to say that most RBC research has ignored most of the known facts about the business cycle”, in his 1996 “Macroeconomics and Methodology”.
The weight of the rebuttal was such that by 1994, even Lucas had accepted in his “Review of Milton Friedman and Anna Schwartz’s A Monetary History of the United States” that RBC theory should be seen not as “a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not”. This certainly did not satisfy everyone - Maurice Obstfeld and Kenneth Rogoff expressed their dissatisfaction in their 1996 textbook Foundations of International Macroeconomics: “a theory of business cycles that has nothing to say about the Great Depression is like a theory of earthquakes that explains only small tremors”.
Nonetheless, it was clear by the mid-1990s that the New Keynesian research program, despite the best efforts of RBC theorists, was here to stay. But with such a huge array of possible frictions, it was still unclear what the agenda going forward would be.
Resuscitating Growth: Endogenous Growth Theory
Meanwhile, what were growth theorists doing in all of this? The insights of the New Classicals and the Real Business Cycle theorists had shifted the focus in macroeconomics away from stabilisation policy and towards the long-run growth of an economy’s productive potential. As Lucas put it in his 1984 Marshall Lecture/1988 paper “On the Mechanics of Economic Development”, “the consequences for human welfare involved in questions like these are simply staggering: once one starts to think about them, it is hard to think about anything else”. Coupled with this was the inability of the neoclassical growth models to explain growth in the long run. In a theoretical sense, neoclassical models implied that output per capita would converge to a steady state, with the only driver of long-run growth being Total Factor Productivity, which was taken as exogenous. But crucially, there was also an empirical lacuna - it was increasingly clear that the neoclassical prediction of convergence between countries was not occurring, and that living standards in many developing countries were actively falling. As such, the field of growth was ripe for research.
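To be concrete about why exogenous TFP was doing all the work, recall the standard neoclassical setup in per-worker terms (my notation, following the textbook Solow model rather than any particular paper discussed here): capital per worker $k$ evolves as

$$ \dot{k} = s f(k) - (n + \delta)k $$

With diminishing returns to capital, $\dot{k} = 0$ at some $k^*$, so output per worker settles at $f(k^*)$ and keeps growing only if the production function itself shifts - that is, only if TFP grows at some exogenous rate $g$, which is precisely the part the model leaves unexplained.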
Paul Romer would fire the opening shot with his 1986 “Increasing Returns and Long Run Growth”. He posited that technological progress occurred endogenously as a result of learning-by-doing - that is, firms’ investment in capital had positive spillover effects on TFP. And since knowledge spills over to others, this would raise the TFP level in the economy as a whole, ensuring that returns did not diminish to the point where the economy settled into a steady state. Meanwhile, Lucas’s approach in his 1988 paper was to include human capital, which when combined with physical capital was not subject to diminishing returns.
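A stylised reduced form of these first-generation models - my own compression of the logic, not a formula from either paper - is the “AK” structure, in which spillovers or human capital stop the aggregate return to broadly defined capital $K$ from diminishing:

$$ Y = AK \quad \Longrightarrow \quad \dot{K} = sAK - \delta K \quad \Longrightarrow \quad \frac{\dot{Y}}{Y} = sA - \delta $$

Because the marginal product of capital never falls, there is no steady-state level of output per capita: the long-run growth rate itself depends on the accumulation decisions made inside the model.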
The second generation of endogenous growth models made the creation of TFP itself endogenous, rather than relying on the side benefits of broadly defined capital accumulation to deliver constant returns to scale. Romer’s 1990 “Endogenous Technological Change” reiterated that “once the cost of creating a new set of instructions has been incurred, the instructions can be used over and over again at no additional cost” - but unlike his first paper, this one emphasised the endogenous decision to engage in research and development as a way of creating new blueprints and ideas. Gene Grossman and Elhanan Helpman proposed in their 1991 Innovation and Growth in the Global Economy that R&D could allow firms to produce an expanding variety of products. And Philippe Aghion alongside Peter Howitt, in their 1992 “A Model of Growth Through Creative Destruction”, spoke of the notion of a quality ladder - that is, innovation that makes better and better versions of existing products. All three of these models of endogenous growth provided an explanation for observed world growth patterns, and they have broadly fared well against the facts.
Reunifying Macroeconomics: The New Neoclassical Synthesis
With the growth literature chugging along, let’s return to the question of business cycles. By the mid-1990s, we had the two parallel research programs of the RBC theorists and of the New Keynesians: could short-run macroeconomics be reunified, as it had once been by the neoclassical synthesis? This was the goal of the new neoclassical synthesis. As Jordi Galí put it in his 2008 textbook Monetary Policy, Inflation and the Business Cycle, this NNS had a “core structure that corresponds to an RBC model on which a number of elements characteristic of Keynesian models are superimposed”.
The basic premise of the model was built off of RBC theory: a utility-maximising representative agent optimising consumption-savings and labour-leisure decisions across time, subject to their budget constraint and the production technology. What made it New Keynesian was that firms were taken to be monopolistically competitive as per the Dixit-Stiglitz model, and only a fraction of firms could change their prices each period. This method of pricing was suggested by Guillermo Calvo in his 1983 “Staggered Prices in a Utility Maximising Framework” and was taken as a way to approximate the nominal price rigidity previously discussed. The fact of a utility-maximising representative agent meant that it was possible to talk about the welfare of the agent in relation to the optimal monetary policy. That’s exactly what Rotemberg and Woodford did in their 1997 “An Optimisation Based Econometric Framework for the Evaluation of Monetary Policy”. What followed was the widespread adoption of John Taylor’s rule as the canonical benchmark for monetary policy rules. In essence, what he suggested in his 1993 “Discretion Versus Policy Rules in Practice” was that central banks set their interest rate based on the natural rate, the deviation of output from its natural level and the deviation of inflation from its targeted level.
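For reference, the rule Taylor proposed in that 1993 paper took the concrete form

$$ i_t = \pi_t + 0.5\,y_t + 0.5\,(\pi_t - 2) + 2 $$

where $\pi_t$ is inflation over the previous four quarters, $y_t$ is the percentage deviation of real GDP from trend, and the equilibrium real rate and the inflation target are both set at 2 per cent. The particular coefficients of 0.5 were illustrative, but the structure - raise the nominal rate more than one-for-one with inflation and lean against the output gap - became the benchmark embedded in these models.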
This benchmark model of monopolistic competition, sticky prices and the Taylor rule would be summarised in Richard Clarida, Jordi Galí and Mark Gertler’s 1999 “The Science of Monetary Policy” as well as in Woodford’s 2003 Interest and Prices. The sine qua non of this approach was the idea of inflation targeting described by Woodford, since the “efficient level of output is the same as the level of output that eliminates any incentive for firms on average to either raise or lower prices … it may well be more convenient for a central bank to concern itself simply with monitoring the stability of prices”. It would reach its pre-2008 apex in the form of the large-scale New Keynesian DSGE model that Frank Smets and Raf Wouters built in their 2007 “Shocks and Frictions in US Business Cycles”.
So on the eve of the financial crisis, we had reached and were building on a new neoclassical synthesis. Compared to the neoclassical synthesis of the 1960s, the main change in the long run was the fact that endogenous growth theory meant we didn’t just take TFP growth g as given, but had an explanation for how it occurred.
In the short run, the IS-LM-PC model was replaced block by block. The determination of real output is now linked to potential output (as the RBC theorists had reminded us), expectations of future output (as the monetarists had noted with the permanent income hypothesis) and the Wicksellian difference. The real interest rate is set by central banks based on a Taylor rule (as proposed by the New Keynesians). And the price-setting equation takes on a New Classical flavour, being a function of expected inflation, the output gap and random shocks.
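In its most compact textbook form - the notation below is the standard one from treatments like Galí’s, not from this post - the resulting three-equation model reads

$$
\begin{aligned}
\tilde{y}_t &= \mathbb{E}_t \tilde{y}_{t+1} - \tfrac{1}{\sigma}\left(i_t - \mathbb{E}_t \pi_{t+1} - r^{n}_t\right) && \text{(dynamic IS curve)} \\
i_t &= r^* + \phi_{\pi} \pi_t + \phi_{y} \tilde{y}_t && \text{(Taylor rule)} \\
\pi_t &= \beta\, \mathbb{E}_t \pi_{t+1} + \kappa\, \tilde{y}_t + u_t && \text{(New Keynesian Phillips curve)}
\end{aligned}
$$

where $\tilde{y}_t$ is the output gap, $r^{n}_t$ the natural rate and $u_t$ a cost-push shock: the IS curve carries the expectations terms and the Wicksellian difference, the policy rule the New Keynesian contribution, and the Phillips curve the New Classical flavour just described.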
This Post Takes the Old Classicals Seriously
Ultimately, we can see how all five research programs that followed the neoclassical synthesis contributed to the new neoclassical synthesis which dominates macroeconomics today. According to Blanchard’s 2000 “What Do We Know about Macroeconomics that Fisher and Wicksell Did Not?”, there has been a “surprisingly steady accumulation of knowledge”. But Woodford, in his 1999 “Revolution and Evolution in Twentieth Century Macroeconomics”, noted that “the degree to which there has been progress over the course of the century is sufficiently far from transparent”.
And as Keynes put it in 1936, “the world is ruled by little else” other than “the ideas of economists”. Since we “are usually the slaves of some defunct economist”, I want to go back further than the neoclassical synthesis and reflect on the progress we’ve made since the Old Classicals, because I think at each stage we’ve overestimated the degree to which things are new.
Let’s remind ourselves of the pinnacle macroeconomics had reached before Keynes. By then, it was already understood that output depended on the factors of production and that output growth depended on TFP improvements. The role of monetary policy was to stabilise prices and, in doing so, output - it did so by affecting the Wicksellian difference. And it could not systematically trick people, because only unanticipated inflation drives real output. What have we added since? We have a better sense of how TFP growth actually occurs and a model to describe all of the long-run stuff. In a similar vein, we have actual models for the short run now, within which we can better model expectations and behaviour. From that, we have more theoretically robust reasons to stick to an inflation target, to use monetary rules over discretion, to follow the Taylor rule and to respect the natural rate rather than exploit the Phillips curve. All of which allows us to achieve a reduction in price and output variability.
Notice that substantively, the ideas are quite similar. Indeed, Woodford described his approach as neo-Wicksellian in his 2003 textbook, and “an attempt to resurrect a view that was influential among monetary economists prior to the Keynesian revolution”. So the main differences are methodological: by doing the rigorous legwork, we have a much more serious modelling approach which we can quantitatively check - and hopefully this means we won’t forget insights as we did between the Old Classicals and now.
Friedman once said at a 1975 AEA presentation that it is hard “to specify what we … have learned in the past two hundred years”, commenting that “we have advanced beyond Hume in two respects only: first, we have a more secure grasp on the quantitative magnitudes involved; second, we have gone one derivative beyond Hume”. But unlike Hume, Wicksell already knew that it wasn’t the derivative of the price level, i.e. inflation, that mattered, but the rate of change of inflation and whether it was anticipated. So to misquote Friedman, I contend that we have advanced beyond Wicksell in two respects only: first, we have a more secure grasp on the quantitative magnitudes involved; second, we have gone one layer of microfoundations beyond Wicksell.