Services

The rise of services and the fall of the market-clearing price

The triumph at the end of the 19th century of the idea of the market-clearing price over objective (intrinsic) and subjective value theories was followed by the rise of intangibles in what were then the world’s leading industrial nations.

With price accepted as the determinant of how things were made, nations with lower costs eroded the lead Britain then had in manufacturing and steadily overtook it as the 20th century progressed. Investors in manufacturing reduced their investment in the UK and located their factories elsewhere. When he published Principles of Economics in 1890, Alfred Marshall could never have foreseen the impact the concept of the market-clearing price would have in the following decades. If he foresaw any consequences at all, it would have been an increase in the production of tangible goods. Britain’s economy, then the world’s biggest, would be helped to get bigger and its people materially richer. At that time, probably more than two-thirds of British GDP was due to industries that produced tangibles like food, clothes, buildings and machines. The overwhelming majority of those in paid employment were men working on farms, in mines and in factories and other businesses making things, not providing services. Britain’s position as the world’s largest economy in 1890 depended upon its production of coal, steel, goods made of wool and cotton and machines used in the factories of practically every other country on earth.

Marshall would, therefore, have found the decline in tangible industries since his book was published to be perplexing and, probably, disturbing. The fact that the British economy nevertheless remains one of the world’s largest is due to the rise of production of and trade in intangibles as a source of output and employment.

The UK today depends upon services, a sector that Marshall largely dismissed. But its growth has been vital. In 2018, 81 per cent of British gross domestic product (GDP) was accounted for by service industries, organisations and businesses that produce intangibles1. The sector was the source of 84 per cent of all jobs in the UK at that time.

Historic data for employment in the UK shows that half the labour force was employed in the service sector in 1926. In 1978, the first year that separate data was provided for service employment, just over 70 per cent of the 25 million people working in the UK were employed in industries other than farming, mining and manufacturing (FM&M). Since then, the number working in FM&M industries has fallen by a further 5 million; these industries now account for less than 8 per cent of the total labour force. In parts of Britain, industries producing tangibles have practically disappeared. If the trends seen since 1978 continue, no one will be making tangibles in the UK within a generation.

Without the growth in service industries, the opportunities for women in the workplace would have been much smaller. In 1978, women accounted for 30 per cent of the total British labour force. Today, the figure is 47 per cent. Of all the jobs created in services since 1978, more than 60 per cent have gone to women. The service sector has not only saved the UK economy at a time of precipitate decline in industries producing tangibles. It has allowed women to be financially self-sufficient to an extent unparalleled in history. The pattern in the UK economy is being repeated in all of what the IMF calls advanced economies. In every OECD country, services account for the majority of output. And even in low-income countries, services can provide more than a quarter of total production and jobs.

The rise in services turned economic thinking on its head. Mercantilists believed a nation’s wealth depended upon the accumulation of money. The industrial revolution, which began in the UK in the mid-18th century, inspired the view that manufacturing was the new engine of economic growth. Classical economists believed the future lay in producing and trading tangible goods. The service sector was regarded as contributing nothing to national income. Those it employed were held to be living off transfers from the productive sectors of the economy, not creating value themselves. For many Marxists, who saw the industrial working class as the instrument for revolutionary change, the forecast that manufacturing would grow as a source of employment was axiomatic.

For neoclassical economists, in contrast, the qualitative character of output was of passing interest. In their analysis, there was no meaningful difference between tangibles and intangibles and no need for government to be concerned about the balance between tangible and intangible production. Whether it took the form of utility, wages, rents, interest or profits, the benefits flowing from production and consumption could be measured and stimulated best by the market-clearing price.

For most of the 20th century, nevertheless, there was an understandable bias in favour of industries that made tangibles. What they produced could be seen and their influence on the balance between imports and exports could be more easily measured. Exports from ports were counted and imports were registered to facilitate the application of customs duties. The balance of merchandise trade could be quantified. But who was to know how much money was flowing to and from an economy as a result of service activities like banking and insurance? And if it couldn’t be counted, perhaps it shouldn’t even be considered? This focus on tangible manufacturing was accepted by governments of developing nations which believed that poverty at home could only be combated by ending colonial economic arrangements that involved exporting raw materials and importing finished goods. This required promoting manufacturing industries, usually through plans and protectionism.

A more positive attitude to service industries emerged in the post-1945 era in the three-sector hypothesis of economic growth developed by the English economist Colin Clark2 and the French civil servant Jean Fourastie3. They argued that economic development started with a primary stage where extractive industries and farming were dominant and advanced to a secondary stage, where manufacturing became the principal source of growth. In the tertiary stage of development, services mattered most.

Fourastie was particularly positive about tertiary industries which he argued led to a higher quality of life and more humane conditions of employment. He argued that the level of economic development could be defined according to the proportion of the labour force employed in each sector. Traditional economies had at least 70 per cent in primary industries, mining and farming. Transitional economies had at least 50 per cent in manufacturing. Once employment in service industries rose above 70 per cent, an economy could be defined as having entered the tertiary stage. Modern revisions of the three-sector hypothesis argue that economies can enter a fourth and, then, a fifth phase where knowledge industries are the key source of production.

The UK, nevertheless, tried to halt, or reverse, the relative decline in primary and secondary industries that produced tangibles. In 1815, Parliament passed the Corn Laws. These imposed customs duties with the aim of increasing the relative price of imported grain and protecting profits, wages and jobs in the farm sector, then Britain’s principal source of employment. Deeply unpopular among low-income groups, and particularly those working in manufacturing and services in Britain’s new industrial towns, the Corn Laws were the source of the most divisive political debate since the English Civil War. They were repealed in 1846, though the final duties lapsed only in 1849. For more than half a century afterwards, Britain was a bastion of free trade.

Attitudes changed towards the end of the 19th century. Industrial and agricultural workers were by then electoral constituencies to be cultivated. There were votes in promising to protect farms and factories from unfair or malign foreign competition. After losing the 1906 general election to the Liberal Party, which had campaigned for free trade, the Conservative Party adopted the idea of tariff reform, a measure to protect UK, dominion and imperial producers from foreign imports.

To help win the First World War, UK governments introduced protectionist measures to lift the output of food and manufactured goods. The Conservative-Liberal coalition that won the first post-war election in December 1918 maintained some forms of protection without explicitly abandoning the ideal of free trade. But the rise of unemployment following the end of the war quickly prompted unprecedented peace-time government intervention.

The Safeguarding of Industries Act of 1921 imposed import duties of 33 per cent of the value of imported goods deemed to have been essential to Britain’s victory4. It was the first in a series of measures designed to protect manufacturing that were to be cornerstones of British government policy for more than 50 years. A comprehensive programme of protection was introduced by the coalition government dominated by the Conservative Party that was elected in 1931 to counter the impact of the world depression, which that year had raised British unemployment to more than 20 per cent.

The Second World War prompted the introduction of an even more extensive system of protectionism for primary and secondary sectors. Direct controls on imports and spending were dismantled after it ended, but British governments continued to act to protect agriculture and industry. In an effort to lift growth, the Conservatives, in power from 1951 to 1964, set up the National Economic Development Office (NEDO), a body designed to accelerate economic growth through indicative planning5. The Labour government of 1964-70 made further attempts to promote manufacturing output and displace imports of manufactured goods. After temporarily attempting to deregulate the economy, the Conservative government elected in 1970 restored controls and planning to stop increases in unemployment in tangible industries. These were expanded after Labour returned to office in 1974.

The election of a Conservative government under Margaret Thatcher in 1979 was a turning point in policy thinking. It declared attempts to prevent the decline of manufacturing industries to be ineffective and mainly counterproductive. If manufacturing was contracting, this was the judgment of the market expressed through price that should not be resisted. Whether it was replaced by services became a matter of indifference. The policies pursued since 1921 were deemed to have failed.

The rise of industries making intangibles, despite many years of government action to promote domestic tangible production, is a satisfying confirmation that you can’t buck the idea of the market-clearing price. The fact that the UK economy retained its place in the global economic pecking order in the past 30 years, despite the relative decline in manufacturing, has also disproved dire warnings that an economy without mines and factories couldn’t flourish.

Old worries about relying on tangible goods produced in distant places have found a new echo in the concerns that emerged after the British banking crisis of 2007-08. It is estimated that up to 8 per cent of UK GDP and 4 per cent of British jobs depended in 2008 upon finance, the most sophisticated of all service industries. There have been calls for fresh efforts to stimulate manufacturing, but little meaningful action. British banks, meanwhile, have been treated as industries too big and important to be allowed to fail. Similar measures have been taken elsewhere. Dozens of American financial institutions have failed or are surviving mainly because of government support.

The collapse of the creditworthiness of some of the world’s largest financial service businesses has been accompanied by soul-searching among bankers about what went wrong, promises of more regulation from governments and rueful satisfaction among practically everyone else that the bankers have only themselves to blame. There have been many calls for those responsible to be punished but no agreement about where the guilt lies. Former dean of the London Business School Laura Tyson6, a director of US investment bank Morgan Stanley, summed up the bemusement at the annual World Economic Forum in Davos in February 2009. Speaking at a debate about the causes of the credit crunch, she declared that it was the system, not individuals, that was responsible.

It was a sweeping charge, and an unfalsifiable one. The system had obviously failed.

A further blow to conventional economic wisdom came in 2020 with the explosive spread of the Covid-19 virus. Government action to contain and reverse it was often more intrusive than anything seen since the 1939-45 war. Shops and restaurants were closed and workers were instructed to stay at home. The result was one of the largest one-year falls in global output in history. Across the world, a programme of Covid vaccination was launched.

To offset its impact on employment and income, governments everywhere expanded public spending. President Trump dismissed the advice of most economists and in a year tripled the US budget deficit to 15 per cent of GDP.

President Biden, elected in November 2020, announced a $1.9trn increase in government expenditure. The justification is the need to insulate more Americans from the economic impact of Covid and contain its spread. The spending programme is also designed to address longstanding Democrat objectives, as Biden’s Treasury Secretary Janet Yellen made clear in a statement on the first day of her term in office:

“If you have listened to President Biden speak over the past few weeks, you have heard him talk about four historic crises. COVID-19 is one. But in addition to the pandemic, the country is also facing a climate crisis, a crisis of systemic racism, and an economic crisis that has been building for fifty years…long before COVID-19 infected a single individual, we were living in (an economy) where wealth built on wealth while certain segments of the population fell further and further behind.”

Policymakers are also facing demands for action to contain carbon dioxide emissions caused by burning hydrocarbons.

It’s said the neoliberal era which started with the election of Thatcher in 1979 is coming to an end. Government intervention and deficit financing are back in fashion. But only for now. The suspicion is that in due course, Covid measures will be reversed and the market once more freed to solve all human problems including climate change.

Conventional economics has been shaken. But its intellectual dominance remains in most university departments, finance ministries and central banks. Yet something is wrong with the theory that has ruled since Alfred Marshall distilled the idea of the market-clearing price in 1890.

Intangibility and the death of the demand curve

It’s time to get back to first principles.

What is economics?

The claim that it is a science like physics and chemistry is contentious. Marshall, a mathematician by training, argued that scientific methods were applicable to economics and useful in making economic analysis more rigorous. But he never claimed that economics was a science, and he relied on logical reasoning, not experiment, to make his essential points.

The most compelling arguments against viewing economics as a natural science have been made by Austrian School thinkers and by their successors. Their case is that economics can only be properly understood by recognising it is based on a priori reasoning, a category defined by Immanuel Kant7, a seminal Enlightenment thinker. An a priori principle is one that is so obvious that it requires no scientific proof. An alternative way of looking at the issue is that a priori reasoning is not shaped by experience, in contrast with a posteriori arguments, which are.

Ludwig von Mises8, the leading 20th-century exponent of Austrian School economics, argued that the idea that people act to improve their well-being can be accepted without the experimentation that theories of natural science require. According to von Mises, it is possible to derive knowledge of objective reality through introspection and logical deduction. This is a method closer to that used in geometry than in physics.

This approach is challenged by most conventional economists. They essentially base their thinking on the epistemological concept of falsification defined by Karl Popper, an Austrian philosopher who spent much of his working life at the LSE. The falsification principle states that there are no such things as facts, only theories that are yet to be falsified. So gravity isn’t a fact. It’s a theory that has not yet been shown to be wrong but might be, as Newtonian physics was by Albert Einstein’s theory of relativity. Applied to economics, falsification suggests that theories of demand, supply and value should be accepted only to the extent that they have not been proven to be false.

In 1935, Popper elaborated on his arguments in The Logic of Scientific Discovery which proposed that theory and experience constantly modify each other through criticism to such an extent that “the empirical basis of objective science has thus nothing ‘absolute’ about it.” Instead he famously proclaimed that science did not “rest upon solid bedrock” since “the bold structure of its theories rises, as it were, above a swamp.”

Popper applied this criticism to challenge Marxism which he argued was impossible to falsify and, consequently, not scientific. Logically, therefore, any a priori argument – one that is so obvious as to be beyond investigation – cannot be scientific. Popper and his followers dismissed Austrian School arguments as not worthy of serious debate for that reason.

The critique of conventional economics contained in this book is based on the application of logic or, at a certain level, common sense. It is consequently aligned with the methodology of Austrian economics and is based upon a logical investigation of human behaviour at the level of the individual. But the analysis starts with the question that might have been put by Karl Marx if he had been writing today about the dominance of services in advanced economies. What is a service? Or, to adapt the language used in Chapter 1 of the first volume of Capital, what is an intangible commodity?

Calling a commodity an intangible defines what it isn’t. It can’t be seen, touched, smelt, tasted or heard. If it can be, then it must have a physical or tangible characteristic, which would mean it’s not an intangible. An intangible is invisible and, consequently, not quantifiable in the way that a tangible is.

Put a cup in the palm of your hand and you can sense its weight and texture. You know its usefulness, or utility in the classical sense of the word, and how to maintain it. There are other tangibles to store it in. A consumer can discern the variations in the design of cups. Their colour, shape and size are obvious after a few seconds’ examination. Companies know how to make them, the cost of the materials used in their production and the amount of labour needed at every stage of manufacturing. Each tangible is differentiable in the eyes of buyers and, normally, replicable by producers after a bit of study. We know where we are with tangibles.

But can that be said about intangibles? It is arguable whether a service has any usefulness or utility in the sense that classical economists might accept the word. To what extent is knowledge useful to someone who doesn’t express it or share it? And yet intangibles share with tangibles the capacity to create a powerful impression and they are exchanged for money, just like things you can see. But the essential characteristics of intangibles are invariably mysterious. They are, by definition, difficult to describe. If one can be identified by a human sense, then it must be tangible or have a tangible origin.

Can the existence of an intangible be proven? Popperian epistemology suggests that it can be accepted only if it is open to falsification. But how can you test whether something that has no physical properties exists? It’s impossible. Popperian thinking, it would therefore seem, tends to dismiss the intangible as a meaningful category from a scientific point of view. Something can only be subject to coherent scientific analysis if it has at least some tangible characteristics. Conventional economics, based as it invariably is on the principle of falsifiability, has problems with intangibility from the outset. This may be the reason why services have attracted so little attention in microeconomic theory.

But even if this issue is ignored, conventional economics fails to deal coherently with intangibility. In economics, aggregate demand for a product is the sum of the demand for that product expressed by individual consumers. The theory of why individuals buy what they do is seminal and essential if economics as a body of thought is to have any meaning. What are the implications for this theory if it is applied coherently and consistently to choice among intangibles?

The answer is that all major microeconomic textbooks have failed to address this challenge. They invariably either treat intangibles (services) as identical to tangibles or they exclusively refer to tangibles and imply that the theoretical conclusions apply equally to intangibles. This is reflected in every book on microeconomics that a student might refer to.

Economics by Richard Lipsey and Alec Chrystal is a recommended text on most UK university economics reading lists. It devotes Chapter 5 to Consumer Choice: Indifference Theory, which sets out the conventional analysis of the behaviour of an individual with limited resources choosing among a range of goods. The following passages taken from that chapter refer almost exclusively to goods with tangible characteristics.

All the units of the same product are identical; for example, one tin of Heinz baked beans is the same as another tin of Heinz baked beans.

Economics, Chapter 5, page 87, paragraph 1

For example, the total utility of consuming 14 cups of coffee a week is the sum total satisfaction provided by all 14 cups of coffee. The marginal utility of the fourteenth cup of coffee consumed is the addition to total satisfaction provided by consuming that extra cup. Put another way, the marginal utility of the fourteenth cup is the addition to total utility gained from consuming 14 cups of coffee per week rather than 13.

Economics, Chapter 5, page 87, paragraph 3

Early thinkers about the economy struggled with the problem of what determines the relative prices of products. They encountered the paradox of value: essential products without which we could not live, such as water, have relatively low prices. On the other hand, some luxury products, such as diamonds, have relatively high prices, even though we could easily survive without them. Does it not seem odd that water, which is so important to us, has such a low market value, while diamonds, which are much less important, have a much higher market value?

Economics, Chapter 5, Box 5.1, page 89

This concept is important, and deserves further elaboration. The table gives hypothetical data for the weekly consumption of milk by one consumer, Ms Green.

Economics, Chapter 5, Box 5.2, pages 91-92

We start by deriving a single indifference curve. To do this, we give an imaginary consumer, Kevin, some quantity of each of two products, say 18 units of clothing (C) and 10 units of food (F).

Economics, Chapter 5, pages 92-95

Drawing pins that came in red packages of 100 would be perfect substitutes for identical pins that came in green packages of 100 for a colour-blind consumer.

Box 5.3, page 96. The box also refers to: left- and right-hand gloves; water; food and beverages; cars, TV sets, dishwashers, tennis rackets and green peas.

The table shows combinations of food and clothing available to Jane…

Economics, Chapter 5, pages 97-99

This line shows how Karen’s purchases (of food and clothing) react to changes in income with relative prices held constant.

Economics, Chapter 5, Figure 5.7 and pages 99-100.

In part (i) of Figure 5.9, a new type of indifference map is plotted in which the horizontal axis measures litres of petrol and the vertical axis measures the value of all other goods consumed.

Economics, Chapter 5, Paragraph 2, pages 101-02.

Chapter 5 in Economics does refer occasionally to intangibles (services). They are: going to football and going to the cinema (page 88); films, plays, cricket matches (Box 5.3, page 96) and hairdressing (footnote, page 104). Intangibles are, however, referred to much less frequently than tangibles and all examples involving geometric presentations focus on tangibles. Throughout the chapter, it is implied that the reader could substitute intangibles for tangibles without altering the conclusions reached.

The focus on tangibles continues. Chapter 6 of Economics, which deals with the cost structure of firms, almost exclusively refers to tangible production (cars, sheet steel, rubber, spark plugs and electricity). The chapter refers to units and unit costs. This is terminology drawn from the practices of tangible manufacture. Further examples of the focus on tangibles can be found throughout the text. There is no reference to the words “services” or “tangibles” in Economics’ 13-page index.

Other basic text books used by undergraduate and graduate students of economics maintain the focus on tangibles. Here are two examples among many:

Fundamental Methods of Mathematical Economics by Alpha C Chiang and Kevin Wainwright avoids referring to specific products and uses mathematical notation to explain the theories addressed by Lipsey and Chrystal. But whenever goods are referred to, no distinction is made between tangibles and intangibles:

…we shall still allow our hypothetical consumer the choice of only two goods, both of which have continuous, positive marginal-utility functions.

Chapter 12, page 374

Microeconomic Analysis by Hal R Varian, a basic text for graduate students of economics, also makes no distinction between tangibles and intangibles and treats them as interchangeable.

The focus of microeconomic analysis on tangibles has a pedigree. Alfred Marshall’s Principles of Economics (1890), regarded as the first economics text book, refers mainly to tangibles. They include tobacco and cigars (page 19); eggs (page 35); cotton (page 35) and match-boxes (page 35). In chapter 2, Marshall classifies goods in a way that includes intangibles (services) but makes them interchangeable as “economic goods”. The ground-breaking depiction of demand and supply curves intersecting at the market-clearing price is based on the demand and supply of knives and knife handles.

Marshall’s intellectual approach was explained in the book’s Chapter 2:

(Economists) deal with man as he is: but being concerned chiefly with those aspects of life in which the action of motive is so regular that it can be predicted, and the estimate of the motor-forces can be verified by the results, they have established their work on a scientific basis…For in the first place, they deal with facts which can be observed, and quantities which can be measured and recorded…

This definition seems to exclude intangibles: they are rarely regular; they can’t be observed by human senses and they, consequently, can’t be measured.

The final reference is to Price Theory, the classic microeconomic text by Milton Friedman. It refers to tennis rackets and balls; shoes, pianos, houses, cars, car tyres, butter, oleo, water, and diamonds. There is not a single example of a product that could be deemed to be intangible.

The theoretical focus on tangibles continues. It is as if the thought that there could be a difference in rational human behaviour when making a choice between things that can be seen, touched, tasted, smelt or heard and in circumstances where the thing being bought and sold can’t was either dismissed as irrelevant or never even occurred. But it is intuitively obvious that nothing could be more different from a tangible than an intangible, and this difference is substantive and not just semantic, as conventional economics seems to imply. Once that thought is taken seriously, the consequences for microeconomics are profound, as will now be explained.

The concept of a demand curve is based on the idea that an individual is able to make coherent judgments about distributing his or her income among a wide range of options. This entails envisaging rational consumers organising their consumption of goods in such a way that no other arrangement of what they consume will make them happier. This in turn is based on the idea that individuals are subject to diminishing marginal utility9: that they derive less additional enjoyment from consuming an extra unit of a particular product than they did from the previous one. Economics by Lipsey & Chrystal defines the concept concisely.

The marginal utility generated by additional units of any product diminishes as an individual consumes more of it, holding constant the consumption of all other products.

Economics, Page 87, Lipsey & Chrystal
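The definition can be stated formally (the notation here is mine, not the textbook’s): if U(q) is the total utility a consumer derives from q units of a product, holding all other consumption constant, then marginal utility is positive but declining:

```latex
MU(q) = \frac{dU}{dq} > 0, \qquad \frac{d\,MU}{dq} = \frac{d^{2}U}{dq^{2}} < 0
```

The second inequality is the whole of the diminishing marginal utility claim: each extra unit adds something to satisfaction, but less than the unit before it.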

No command of economics is needed to see that this argument makes sense. Owning a third pair of identical shoes increases your happiness by less than owning the first pair did. But how can that idea be translated into a concept that has scientific validity? Utility, as it is understood by economists, is a subjective category. It can’t be measured and you can’t compare the utility enjoyed by one person consuming a product with that of another person consuming an identical one. If economics were based on attempting to measure and compare utility in the sense the word is used here, it would stand accused of being an exercise in metaphysics or psychology. It escapes this charge by creating a bridge between a person’s subjective preferences and the real and material world through the concept of equimarginal choice, a brilliant insight distilled in the second half of the 19th century, as the previous chapter explained.

This idea, developed by the marginalists of the late 19th century, is so powerful it has survived everything that experience has thrown at economics since. The concept of equimarginal choice is embedded in all basic microeconomic texts. Again, you need refer to only one of them to see that it still lives and breathes.

To maximise utility, consumers allocate spending between products so that equal utility is derived from the last unit of money spent.

Economics, Page 88, Lipsey & Chrystal
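In symbols (again my notation, not Lipsey & Chrystal’s): for any two products x and y with prices p_x and p_y, the consumer’s allocation satisfies

```latex
\frac{MU_x}{p_x} = \frac{MU_y}{p_y}
\quad\Longleftrightarrow\quad
\frac{MU_x}{MU_y} = \frac{p_x}{p_y}
```

That is, the last unit of money yields the same utility whichever product it is spent on; if it did not, the consumer could become happier by shifting spending from one product to the other.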

This is most simply understood by examining how a rational consumer might allocate his or her consumption between two products: say, bananas and biscuits, both of which the consumer desires.

For example, if he or she had, say, 20 bananas and no biscuits, a consumer might be prepared to sacrifice three bananas for one biscuit without becoming more or less happy. Once 17 bananas and one biscuit are possessed, the idea of diminishing marginal utility suggests that the consumer might be prepared, for example, to give up only two bananas for one more biscuit. At the other extreme, if all the consumer had was 10 biscuits, the idea of diminishing marginal utility suggests that he or she might give up two of them, or more, to get one banana without any change in his or her happiness. So there is a trade-off, or preference curve, between biscuits and bananas, but this trade-off changes according to the relative quantity of the things possessed.

Economists originally expressed this concept by devising a two-dimensional relationship between two products which tracked the combinations where happiness was constant. This was finally named an indifference curve at the start of the 20th century. It is generally presented geometrically as being convex to the origin. Modern economics has abandoned the geometric approach and uses mathematics instead. But the idea remains intact though students of economics have to master algebra to grasp it.

Having plotted an individual’s indifference curve – the bridge between the subjective (desires) on the one hand and the objective and material (the thing desired) on the other – the next step is also entirely logical. This involves introducing a price ratio between the two products being studied. The price ratio is taken as given and cannot be influenced by the consumer. So if the price of one biscuit is twice the price of one banana, a straight line can be drawn between the axis showing the number of bananas and the axis showing the number of biscuits. It can be demonstrated logically and mathematically that a consumer will allocate his or her spending on bananas and biscuits in such a way as to equalise the price ratio with the ratio between the marginal utility of a biscuit and the marginal utility of a banana. With these assumptions, a rise in the relative price of something will often lead to a fall in its consumption, regardless of which pair of products is used.
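The equalisation described in this paragraph can be illustrated numerically. The sketch below is hypothetical and not drawn from any text cited here: it assumes a logarithmic utility function, u = ln(bananas) + ln(biscuits), a budget of 12 units of money, and a biscuit costing twice as much as a banana, then searches along the budget line for the happiest bundle.

```python
# Hypothetical illustration of equimarginal choice. The utility function,
# budget and prices are assumptions made for the example, not the book's.
import math

def best_bundle(budget, p_banana, p_biscuit, step=0.01):
    """Grid-search the budget line for the bundle maximising
    u = ln(bananas) + ln(biscuits)."""
    best_u, best = -math.inf, None
    x = step
    while x * p_banana < budget:
        y = (budget - x * p_banana) / p_biscuit  # spend the rest on biscuits
        u = math.log(x) + math.log(y)
        if u > best_u:
            best_u, best = u, (x, y)
        x += step
    return best

bananas, biscuits = best_bundle(budget=12.0, p_banana=1.0, p_biscuit=2.0)

# For u = ln(x) + ln(y), the marginal utilities are 1/x and 1/y, so the
# ratio MU_banana / MU_biscuit equals biscuits / bananas. At the optimum
# this should match the price ratio p_banana / p_biscuit = 0.5.
print(round(biscuits / bananas, 2))
```

The search lands on roughly 6 bananas and 3 biscuits: half the budget on each good, and the marginal-utility ratio equal to the price ratio, exactly as the tangency argument predicts.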

The logic underpinning this argument is powerful. But modern economists are uncomfortable with the idea that the demand curve, without which economics cannot exist, is based on the concepts of utility and marginal utility. Most basic economics textbooks start with a discussion of aggregate demand and supply and only examine the utility foundations of that idea later. Austrian economists accept the power of marginal analysis but generally reject indifference as an impossible concept.

Every action necessarily signifies a choice, and every choice signifies a definite preference. Action specifically implies the contrary of indifference… If a person is really indifferent between two alternatives, then he cannot and will not choose between them. Indifference is therefore never relevant for action and cannot be demonstrated in action.

Pages 225-226, Towards a Reconstruction of Utility and Welfare Economics by Murray Rothbard10 in The Logic of Action, Volume 1, Cheltenham, UK

This is an elaboration of the fable of Buridan’s Ass, which died of starvation because it couldn’t decide which of two equally desirable bales of hay to eat. Austrian thinkers also disagree with conventional neoclassical economists by arguing that value is utility (desirability). Conventional neoclassical economists regard value as a concept of limited interest. They focus on price, which they see as the product of subjective and objective factors. The Austrian School of economics does not, however, pursue the implications of a logical investigation of how people make choices among intangibles.

The logical defect in microeconomic theory

The logical defect of conventional theories of consumer choice when applied to intangibles is easily identified.

For a consumer to choose between bananas and biscuits, he or she must be capable of distinguishing between them. But how is this done? Obviously, by sight, touch, smell, taste and, if you drop one, sound. But for these to be discerned, the thing in question must have at least one physical characteristic. Intangibles, by definition, don’t even have that. Not only is it impossible for a consumer to define a coherent trade-off between intangibles in his or her mind; it is also impossible for him or her actually to distinguish between them in any objective way, or in a manner that might be expressed to another person. There is, consequently, no logical reason – as Austrian economists argue – to accept the theoretical existence, let alone the shape, of the utility curve when applying basic consumer theory to intangibles.

What distinguishes this line of argument from the Austrian exposition of the relationship between price and quantity is that without a utility curve, there can be no individual demand curve. This is because if the idea of diminishing marginal utility collapses when it comes to intangibles, then it can make no more sense at an aggregate level. There is no reason why the consumption of a particular service by a collection of rational individuals will take a predictable pattern when plotted against price. A demand curve showing what the quantity of a service might be at any particular price is an impossibility. Identical service transactions might involve wildly different prices. Varying the price of a particular service, consequently, has unpredictable consequences. People might buy the same amount, less or more, for no reason that others could recognise or that they themselves could explain.

With the demand curve invalidated, can analysis of supply-side factors help us in deciding what the right price of a service should be? Conventional microeconomists approach the relationship between price and supply in two ways: one is subjective and the other is objective. The objective cost tradition, which echoes the arguments of classical economic thinkers from Adam Smith to Karl Marx, defines the value that suppliers put on what they supply as being equal to its cost of production. Costs can be seen as either the price paid for an input or the value, measured objectively, of what has been foregone by buying that input. This is the opportunity cost. This approach is followed by most economic textbooks.

Three major determinants of the quantity supplied in a particular market are: 1. The price of the product. 2. The prices of inputs to production. 3. The state of technology.

Economics, Page 45, Chapter 3 Demand, Supply and Price, by Lipsey & Chrystal
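The three determinants quoted above are conventionally summarised as a supply function. The notation below is a generic textbook sketch, not Lipsey & Chrystal’s own:

```latex
Q^{s} = f\left(p,\; w_{1}, \dots, w_{n},\; T\right)
```

where p is the price of the product, w1 to wn are the prices of the n inputs to production, and T stands for the state of technology. The argument of this chapter is that for a service, neither the inputs nor their prices can be coherently enumerated.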

The subjective tradition regards supply as being an entirely subjective category. Regardless of what the costs of production might be, someone is only going to sell something if he or she will be made happier by doing so. Something won’t be offered to the market unless this action increases the utility at the margin of the person supplying it. The essence of this line of thinking was neatly expressed in an article in the Quarterly Journal of Austrian Economics.

The “opportunity cost” perspective, thankfully, finds a home in both perspectives. But even neoclassical economists who insist on the subjective nature of value often stick to the “classical” part of “neoclassical” by treating cost as objective… A consistently Austrian perspective interprets cost as a value, not a thing, foregone, as it has the same nature as value.

An Austrian Foundation for Microeconomic Principles by John Egger11, Quarterly Journal of Austrian Economics, 2008

The logic of both approaches is coherent when applied to tangibles. But it collapses when applied to intangibles. As has already been demonstrated, a marginal utility-based theory of choice can’t work when a thing is intangible. It is consequently impossible for a supplier – using the Austrian (or subjective) line of thinking – to make coherent choices about what he or she is supplying to a market.

Intangibility also invalidates the objective, cost-based approach to supply. How can you work out the cost of producing a particular service when there is no way of measuring its constituent parts as you can with tangibles?

A cup is made with clay, paint, ceramics, heat, machines and labour. But what is good service at a restaurant made of? Should the amount invested in a chef’s education be calculated? And what exactly was the opportunity cost of the waiter’s charming smile that helped you enjoy your meal so much? Was it the tiny amount of additional energy involved that could have been used for other purposes or the lengthy parenting that made the waiter such a nice person? Or is it in the waiter’s genes? How do you price that?

With an intangible, unlike for a tangible, there can be no scientific way of working out the cost of each unit of production. And if it is impossible to devise a coherent production function for an individual service producer, then it is impossible scientifically to prove a stable relationship exists between services and prices at an aggregate level. As with the subjective approach to supply, objective thinking cannot prevent the disappearance of the concept of the supply curve when it is applied to intangibles.

These observations may appear facile. But they place a bomb under the conceptual foundations supporting our idea of what price is and how the market works. In tangibles, price is the result of the interaction, at an aggregate level, between demand for a particular product and the costs, objective or subjective, involved in producing it. Consumers make choices between goods in a coherent manner. Producers compete with each other using price for a share of the market for those goods.

But with intangibles, there is no demand curve connecting the price of a service with the quantity of it consumed. There is no supply curve connecting price with the quantity supplied.

When it comes to intangibles, the market not only doesn’t work. It actually doesn’t exist.

It’s the reason why it is so hard for people supplying services to devise a coherent pricing strategy and for consumers to work out what they should in fact be paying for them12. This seems obvious. So why hasn’t conventional microeconomic analysis spotted it?

What’s gone wrong is that conventional microeconomics makes no distinction between tangibles and intangibles, even though the difference could not be greater. From Marshall to Friedman, the view is that if something can be traded and has a price, then economics will treat it as if it were, in effect, a tangible. Economists’ treatment of intangibles echoes a memorable phrase attributed, perhaps inaccurately, to Mark Twain who said: “A man with a hammer thinks everything is a nail”13.

An economist who focuses on price tends to think everything is a tangible.

This focus on tangibility is understandable. It is the way that economists have translated the subjective concept of utility, which cannot be measured and for which interpersonal comparisons are impossible, into a scientifically testable relationship between price, on the one hand, and supply and demand involving many people, on the other. But the analysis doesn’t work when applied to intangibles.

A new microeconomic theory is required and one that deals with the two most important questions economists address. How is value produced? And how is it distributed? This challenge will now be tackled.

______________________________________________________________

Notes to Chapter 2

1 There have been substantial changes in the way the ONS reports economic statistics. The figures for employment are taken from the Employee Jobs by Industry data series from 1978 published on 11 September 2013 on the ONS website. The figures for farming, mining and manufacturing cover a total of 32 separate sectors. In total the number employed in the UK in these sectors has fallen by 4.5 million since the end of 1978.

2 Colin Grant Clark (1905-89) pioneered the use of gross national product (GNP) for tracking trends in total economic output. After graduating from Oxford, he worked as a research assistant for William Beveridge at the London School of Economics (LSE). Clark twice stood unsuccessfully as a prospective Labour Party MP in the 1930s.

3 Jean Fourastie (1907-90) was a French economist and adviser to the French government and the EU.

4 The Safeguarding of Industries Act called for various measures including duties on imports sold at prices below the cost of production.

5 The National Economic Development Office (NEDO) and the National Economic Development Council (founded in 1961) were modelled on France’s Economic & Social Council. They worked on the basis of indicative planning, recommendations designed to promote production in specific sectors. Both were abolished in 1992.

6 Laura Tyson was president of the Council of Economic Advisers during the administration of President Bill Clinton. She was a professor at the Haas School of Business in the University of California, Berkeley and an adviser to US President Obama.

7 Immanuel Kant (1724 – 1804) is a central figure of modern philosophy. His major work is Critique of Pure Reason (Kritik der reinen Vernunft) which was published in 1781.

8 Ludwig Von Mises (1881-1973) was born in Lemberg, now Lviv in Ukraine. He studied at the University of Vienna and completed a PhD in law in 1906. Until 1934, Von Mises worked as a teacher, secretary of the Vienna Chamber of Commerce and adviser to Austrian governments. He left Austria for Switzerland in 1934 and, finally, arrived in New York in 1940. From 1945 until 1969, Von Mises taught at New York University.

9 Utility theory remains, despite the great advances in economics as a science since 1890, at the heart of the theory of demand. Chapter 5 of Economics by Richard Lipsey and Alex Chrystal, published by Oxford University Press in 2007 and a recommended first text for many economics students, devotes no fewer than 22 pages to the theory of indifference without examining how the theory would be affected by applying it to intangibles. Milton Friedman’s classic Price Theory, Aldine Transaction (2008) has 31 pages on the utility analysis of demand and supply and also does not distinguish between tangibles and intangibles. Microeconomic Analysis by Hal R Varian, published by WW Norton & Company (3rd Edition 1992), is considered to be the best textbook for masters students of microeconomics. It devotes an entire chapter to the theory of utility maximisation, which is based on the concept of possible consumption bundles which are implicitly of tangible goods.

10 Murray Rothbard (1926-95) was a libertarian writer and champion of the ideas of Ludwig Von Mises. He was briefly involved with the New York Objectivist group, an association of thinkers connected with Ayn Rand, author of the anti-collectivist novel The Fountainhead. Objectivism argued that objective knowledge, or the objective world, could only be secured through the proper deployment of subjective human competences, including the capacity for deductive and inductive logical thought. Objectivists argue that the only way the world can be changed for the better is through individual actions, an argument in line with Von Mises’ praxeology. The idea that value-creation depends on interaction does not appear to have been seriously considered. Members of Rand’s Objectivist group included the young Alan Greenspan.

11 John Egger, who taught at Towson University, Maryland, also provided a clear definition of the Austrian argument for supply to be subjectively determined. “…the supply of one good is determined by the demand for the best (highest-valued) other good that could be produced instead. Consistent with Say’s Law and a utility foundation of supply, this ties the supply of one good to the demand for other goods, emphasizing that it is really the consumer who determines a business’s costs.”

12 An example among many came in November 2009 when an executive at the Mayo Clinic in the US told the author that the clinic had extreme difficulty in setting a price for individual surgical procedures. Top healthcare service providers like the Mayo specialise in life-saving treatments for people who are rich enough, or well enough insured, to pay almost whatever is asked. A rich person with a life-threatening condition he or she believes only the Mayo Clinic can treat might be prepared to offer his or her entire wealth for that treatment. Conventional economic theory suggests that the clinic might legitimately accept such a payment since it reflects the subjective valuation by the patient of the service it offers and is the market-clearing price for the individual in question. Conventional microeconomists would tend to argue that this is a monopoly situation. It appears to be, but only in the mind of the patient. The challenge of pricing healthcare procedures raises profound issues for healthcare systems financed through insurance. If you can’t price a procedure, how can you insure against it?

13 The law of the instrument is also known as Maslow’s hammer after US psychologist Abraham Maslow, who wrote in his book The Psychology of Science (1966): “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” Maslow is better known for devising Maslow’s hierarchy of needs.