Giffen’s paradox is an economic term named after the British economist and statistician Robert Giffen. The law of demand states that when the price of a commodity falls, the demand for it rises. However, Giffen’s paradox is an exception to this law. That is,…
The Battle of the Bismarck Sea was a battle fought in early March 1943 in the Southwest Pacific during World War II, between the Japanese Navy and Allied air forces. In game theory, it was modelled by O. G. Haywood, Jr. in his article…
The Lagrange function is used to solve optimization problems in the field of economics. It is named after the Italian-French mathematician and astronomer Joseph Louis Lagrange. Lagrange’s method of multipliers is used to find the local maxima and minima of a function subject to equality constraints.
The existence of constraints…
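As a brief illustration of the method (the notation here is ours, not part of the entry): to maximise a utility function u(x, y) subject to a budget constraint px·x + py·y = m, the objective and the constraint are combined with a multiplier λ into the Lagrangian
L(x, y, λ) = u(x, y) + λ(m − px·x − py·y)
Setting the partial derivatives with respect to x, y and λ equal to zero yields the first-order conditions that characterise the constrained optimum.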
Economic science makes a distinction between normative and positive economics. Positive economics is the branch of economics that focuses on the description and explanation of economic phenomena, while normative economics is concerned with the application of positive economics with the purpose of giving advice on…
Economic science makes a distinction between positive and normative economics. While the former is the branch of economics that focuses on the description and explanation of economic phenomena, the latter is concerned with the application of positive economics with the purpose of giving advice…
Opportunity cost, in microeconomics, is defined as the value of the best possible economic alternative that you reject in order to dedicate your resources to another specific activity. Agents will have to face an opportunity cost in every decision made; therefore, the chosen activity will…
Elasticity is a concept introduced by British economist Alfred Marshall, and is used to measure the variation that one variable undergoes when another variable changes. We can distinguish between different types of elasticity depending on the variables we are using.
Probably the…
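For reference, the standard formula (not spelled out in this excerpt): the price elasticity of demand measures the percentage change in quantity demanded per percentage change in price,
ε = (dQ/Q) / (dP/P) = (dQ/dP)·(P/Q)
For example, if a 2% price increase reduces quantity demanded by 4%, the elasticity is −2, and demand is said to be elastic.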
Decision making under uncertainty is not only characterized by ignorance of the final outcome, as with risk, but also by the impossibility of assigning a probability distribution to the outcomes, as this is also unknown. Both subjective and objective…
Big Mac index
The Big Mac index was invented in 1986 by the magazine The Economist, and considers the Big Mac hamburger sold in McDonald’s as its basket of reference. This index is based on the purchasing power parity theory. When analysing…
Pigouvian tax
Pigouvian taxes are corrective taxes levied on each unit of output an externality-generating agent produces. They are named after economist Arthur C. Pigou, who developed the idea in his book “The Economics of Welfare”, 1920….
Externalities are the benefits or costs that arise when the decision to consume or to produce generates some positive or negative impact on its environment, affecting the welfare of others in a way that is not transmitted through prices or via market mechanisms. When…
A market system is in competitive equilibrium when prices are set in such a way that the market clears, or in other words, demand and supply are equalised. At this competitive equilibrium, firms’ profits will necessarily have to…
The Shaked-Sutton model derives from a series of papers written by Avner Shaked and John Sutton. This model is centred on studying vertical differentiation and its role in discriminating the market, in order for firms to absorb as many consumers’…
The Samuelson criterion, sometimes referred to as the Samuelson condition, was put forward by the economist Paul A. Samuelson in his paper “Evaluation of Real National Income”, 1950, and belongs to the theory of welfare economics, where it is used as a…
The Little criterion was developed by Ian M.D. Little in his paper “A Critique of Welfare Economics”, 1949, and it constitutes a further step for compensation principle theory. Little criticises the separation between efficiency and distribution and he demands as in
The Scitovsky criterion was developed by Tibor Scitovsky in his paper “A Note on Welfare Propositions in Economics”, 1941, in order to solve the inconsistencies (known as the Scitovsky paradox) that Nicholas Kaldor’s and John Richard Hicks’ criteria…
The Hicks criterion is a compensation criterion developed by John Richard Hicks in his paper “The Valuation of the Social Income”, 1940. It is similar to Kaldor’s criterion, although with different implications…
The Kaldor criterion is a compensation criterion developed by Nicholas Kaldor in his paper “Welfare Propositions of Economics and Interpersonal Comparisons of Utility”, 1939. This criterion is satisfied if state Y is preferred to state X and there is…
In welfare economics, compensation criteria, or the compensation principle, constitute a decision rule for selecting between two alternative states. Two states will be compared; if one state provides an improvement for one party but causes deterioration in the state of the…
Salop’s circular city model is a variant of the Hotelling’s linear city model. Developed by Steven C. Salop in his article “Monopolistic Competition with Outside Goods”, 1979, this locational model is similar to its predecessor’s, but introduces two main differences: firms are located…
Hotelling’s linear city model was developed by Harold Hotelling in his article “Stability in Competition”, in 1929. In this model he introduced the notions of locational equilibrium in a duopoly in which two firms have to choose their location taking…
Public goods are those that are non-rival and non-excludable in consumption. Being non-rival implies that one person’s consumption does not prevent someone else from consuming the good as well. Being non-excludable implies that no one will be prevented from consuming the good due to…
Chamberlin’s model analyses and explains the short and long run equilibria that occur under monopolistic competition, a market structure consisting of multiple producers acting as monopolists even though the market as a whole resembles…
Product differentiation is a marketing process that has the objective of making customers perceive the product of a specific firm as unique or superior to any other product belonging to the same group, and so creating a sense of value. Differentiation does not always imply changing the product, sometimes…
Microeconomics is a branch of economic theory that is centred on modelling the interactions amongst market agents, especially between consumers, who try to maximise their utility, and firms, which try to maximise their profits. It analyses the underlying logic of the individual behaviour…
Transaction costs are the associated costs that derive from the formalisation of complex relationships that necessarily exist for the production of goods and services. As an example, transaction costs can occur during the processes of negotiation, search, determination of prices, etc.
Market failures appear whenever a market is unable to work “successfully”, meaning it cannot achieve equilibrium with an efficient allocation of resources, which is known as Pareto efficiency. This imperfection in the price assignment…
Ceteris paribus is a Latin phrase that translates as “other things the same” and is a frequently used expression in economics. It refers to a phenomenon in which two or more variables intervene and for which it is assumed that, with the exception of the variable that is under…
Exit barriers (or barriers to exit) are obstacles that stop or prevent the exit of a firm from a specific market. They are associated with firms that are incurring some form of losses, but cannot exit the market as a result of exit barriers that would further increase…
Entry barriers (or barriers to entry) are obstacles that stop or prevent the entrance of a firm in a specific market. It is associated with the situation in which a firm wants to enter a market due to high profits or increasing demand but cannot…
Contestable markets are those in which the short-term threats from potential competitors exert such a degree of pressure over the incumbents, that their behaviour is conditioned. Contestable markets are therefore in a competitive equilibrium even though the market can be considered to have a relatively…
The Edgeworth duopoly model, also known as Edgeworth solution, was developed by Francis Y. Edgeworth in his work “The Pure Theory of Monopoly”, 1897. It is a duopoly model similar to the duopoly model…
A two-part tariff is a price discrimination technique that consists of charging consumers a lump-sum fee for the right to purchase the product and then a price per unit consumed. This practice is especially used in places such as golf clubs and…
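A minimal sketch of the pricing scheme (the symbols are ours): the total amount paid for q units is
T(q) = A + p·q
where A is the lump-sum entry fee and p the per-unit price. In the textbook case with identical consumers, the firm sets p equal to marginal cost and captures the resulting consumer surplus through A.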
First-degree price discrimination, or perfect discrimination, is the highest level of price discrimination, in which each unit of production is sold at the maximum price that the consumer is willing to pay for that specific unit. The firm…
Second-degree price discrimination, or nonlinear pricing, involves setting prices subject to the amount bought, in an attempt to capture part of the consumer surplus. Revenues collected by the firm in this manner will be a nonlinear function of the amount sold. A bulk sale strategy, such as quantity…
Third-degree price discrimination, also referred to as market segmentation, consists of varying prices depending on what segment of the market the consumer belongs to. Consumers in different segments will be charged different prices, but the price will remain constant whatever the amount…
Price discrimination, also referred to as price differentiation, occurs when a firm sells the same product at different prices, either to the same or different consumers. The study of this strategy comes naturally when dealing with monopolies as these seek to sell additional output to…
Natural monopolies occur in those industries in which the total costs of production are lower if a single firm produces the whole output instead of having production divided amongst more than one firm. Although this is the usual definition, which is attributed to William Baumol, who provided it in…
The Lerner index measures a firm’s level of market power by relating price to marginal cost. When either exact prices or information on the cost structure of the firm are hard to get, the Lerner index uses price
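For reference, the standard formula (assuming price P and marginal cost MC) is
L = (P − MC) / P
which equals 0 under perfect competition (where P = MC) and approaches 1 as market power grows; for a profit-maximising monopolist it also equals 1/|ε|, the inverse of the price elasticity of demand.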
Bilateral monopoly is a market structure in which there is only a single buyer (monopsony) and a single seller (monopoly). Game theory is frequently used when analysing this kind of market…
Multiproduct monopolies are monopolistic firms that sell more than one product. The firm will have to take into account how a change in the price of one of its products affects the demand for the rest of them, especially when they are…
A multiplant monopoly arises when a monopolistic firm has its production divided into more than one production plant, each one having its own cost structure. Different cost structures give rise to different marginal costs…
Cobweb models explain irregular fluctuations in prices and quantities that may appear in some markets. The key issue in these models is time, since the way in which expectations of prices adapt determines the fluctuations in prices and quantities. Cobweb models have been analysed…
Comparative statics is a method used to analyse the result of changes in a model’s exogenous parameters by comparing the resulting equilibrium to the original one. However, this method limits itself to comparing equilibria, not analysing the reasons for the new equilibrium or…
Governments will choose to levy taxes on either individuals or firms in order to increase their revenue. When considering taxes on firms, it must be noted that these taxes will increase the price of the goods being produced and sold, which translates into a…
Market clearing
Market clearing occurs in those market situations in which the amount demanded by consumers equals the amount supplied by firms. In market clearing the equilibrium point has its corresponding equilibrium quantity and an equilibrium price. Economic science has developed several adjustment models to…
Firms’ cost structures will change over time, even when the quantity produced is kept constant. The price of an essential input for the production or the cost of rent may inevitably change. New costs will modify a firm’s equilibrium quantity and price. Once the differences between
Cost analysis in the long run is quite different from short run cost analysis. Period analysis tells us that in the long run all factors are variable; this flexibility of factors will consequently be reflected in the long-run cost curves….
Short run cost analysis would not be properly taught without the inclusion of demand and supply curves and their correct understanding, especially how their shifts may affect firms’ cost functions. The total supply of the industry is…
Surplus in economics refers to the profits (in terms of money or welfare) an individual or group of individuals is capable of extracting from the correct functioning of markets. Welfare economics analyses these surpluses in order to determine whether a market structure is…
Demand and supply are possibly the two most fundamental concepts used in economics. The concept of market is usually defined as a number of buyers and sellers of a given good or service that are willing to negotiate in order to exchange those goods. We will first explain them…
Monopoly (from the Greek «mónos», single, and «polein», to sell) is a form of market structure of imperfect competition, mainly characterized by the existence of a sole seller and many buyers. This kind of market is normally associated with…
Bertrand duopoly
In some cases, competition in terms of price changes seems more logical than quantity competition, especially in the short run. Besides, one of the assumptions of Cournot’s duopoly model is that firms supply a homogeneous product. Considering this, Bertrand proposed an alternative to…
Monopolistic competition is a market structure defined by four main characteristics: large numbers of buyers and sellers; perfect information; low entry and exit barriers; similar but differentiated goods. This last one is key to distinguish…
Duopoly (from the Greek «duo», two, and «polein», to sell) is a type of oligopoly. This kind of imperfect competition is characterized by having only two firms in the market producing a homogeneous good. For simplicity purposes,…
The Allais paradox was developed by Maurice Allais in his paper “Le Comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école américaine”, 1953 and it describes the empirically demonstrated fact that individuals’ decisions can be inconsistent with expected…
The Ellsberg paradox was developed by Daniel Ellsberg in his paper “Risk, Ambiguity, and the Savage Axioms”, 1961. It concerns subjective probability theory, which fails to follow the expected utility theory, and confirms Keynes’ earlier formulation of 1921. This paradox is usually explained with…
Oligopoly (from the Greek «oligos», few, and «polein», to sell) is a form of market structure that is considered as half way between two extremes: perfect competition and monopolies. This kind of…
Oligopsony
Oligopsony (from the Greek «oligoi», few, and «opsõnía», purchase) is a market structure form of imperfect competition characterized by the existence of a relatively small number of buyers, and many sellers. It is a similar case to…
Monopsony
Monopsony (from the Greek «mónos», single, and «opsõnía», purchase) is a market structure form of imperfect competition characterized by the existence of a unique buyer and many sellers. It is a similar case to monopoly but…
Imperfect competition, or an imperfectly competitive market, is one in which some of the rules of perfect competition are not followed. Virtually all real-world markets follow this model, as in practice all markets have some form of imperfection. When dealing with imperfect competition…
Perfect competition or competitive markets -also referred to as pure, or free competition-, expresses the idea of the combination of a wide range of firms, which freely enter or leave the market and which treat prices as information, since each bidder only provides a relatively small share of the…
A market is a set of buyers and sellers, commonly referred to as agents, who through their interaction, both real and potential, determine the price of a good, or a set of goods. The concept of a market structure is therefore understood as those characteristics of…
After John von Neumann and Oskar Morgenstern developed the expected utility theory in their “Theory of Games and Economic Behaviour”, 1944, various different approaches were developed. Although the expected utility function helps us understand the…
Oskar Morgenstern and John von Neumann’s expected utility theory, which analyses individuals’ risk aversion, shows that different individuals have different perspectives towards risk. Risk averse individuals have, by definition,…
It is sometimes important to know how averse to risk a certain individual is. To this effect, there is a set of tools to measure risk aversion in a quantitative way. The most common and frequently used measures of risk aversion are…
Attitudes and behaviour towards risks have been, and still are, highly studied fields in psychology and their economic applications have been meaningful and of high importance. While some may be willing to assume risks in order to gain economic profits, others will prefer to avoid…
The expected utility theory deals with the analysis of situations where individuals must make a decision without knowing which outcomes may result from that decision, that is, decision making under uncertainty. These individuals will choose the act that results in the highest expected utility,…
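As a brief formal sketch (the notation is ours): if an act yields outcomes x1, …, xn with probabilities p1, …, pn, its expected utility is
EU = p1·u(x1) + p2·u(x2) + … + pn·u(xn)
and the decision maker chooses the act with the highest expected utility rather than the highest expected monetary value.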
The term expected utility was first introduced by Daniel Bernoulli who used it to solve the St. Petersburg paradox, as the expected value was not sufficient for its resolution. He introduced the term in his paper “Commentarii Academiae Scientiarum Imperialis…
The theory of consumer choice under situations of risk and uncertainty belongs to the field of microeconomics. Risk and uncertainty are sometimes interchangeable terms but their meaning is easily misunderstood. Frank Knight in his “Risk, Uncertainty and Profit” 1921, treated…
The Saint Petersburg paradox is a theoretical game used in economics to represent a classical example where, by taking the expected value as the only decision criterion, the decision maker will be misguided into an irrational decision. This paradox was presented and solved in…
Incentives
One option for mitigating problems derived from asymmetric information is designing your contract carefully so that whoever buys into it has less to gain from being a lemon. There are many practical implications for this, of which the clearest is probably in the…
Signalling
Signalling is similar to screening, except it is the agent with complete information who decides to move first to mark themselves out as a ‘good’ agent, as a cherry. The most cited example is generally in the job market. When we examine most…
Screening is one of the main strategies for combating adverse selection. It is often confused with signalling, but there is one main difference: in both, ‘good’ agents (the cherries of this world) are set apart from the ‘bad’ agents, or
Economics of information, or information economics, belongs to the field of microeconomics and it studies the importance of information in Economics. The neoclassical theory was developed around the assumptions of perfect information and the absence of uncertainty,…
Moral hazard is a case of asymmetric information. It occurs when both parties (usually an agent and a principal) assign, or are subject to, a different probability of the same (normally adverse) event occurring. The behaviour of the agent changes…
Adverse selection is a case of asymmetric information. It occurs when both parties assign, or are subject to, a different probability of the same (normally adverse) event occurring. In this case, the agent that has the best information is clearly at an advantage. We say that…
In their 1981 paper, “Credit Rationing in Markets with Imperfect Information”, Joseph E. Stiglitz and Andrew Weiss define a situation similar to the case of The Market for Lemons, an article by George Akerlof, except in the financial markets….
“The Market for ‘Lemons’” is a key article written by George Akerlof in 1970, which aims to explain some of the market failures derived from imperfect information, in this case asymmetry. The paper itself is available on…
In sequential games, a series of decisions are made, the outcome of each of which affects successive possibilities. In game theory, the analysis of sequential games is of great interest because they usually model reality better than simultaneous games: producers will usually observe…
Folk theorems are used in economics, especially in the field of game theory and specifically in repeated games. The theorem is said to be fulfilled when the equilibrium outcome of a game that is repeated an infinite number of times is the…
Collusion refers to cooperation between different firms. This cooperation leads to a restraint of market competition, in any of its forms, which translates into higher profits for the firms to the detriment of consumers’ welfare. A cartel is an example of firms belonging to the…
In game theory, repeated games, also known as supergames, are those that play out over and over for a period of time, and therefore are usually represented using the extensive form. As opposed to one-shot games, repeated games introduce a new series of…
In game theory, a subgame is a subset of any game that includes an initial node (which has to be the only node in its information set) and all its successor nodes. It’s quite easy to understand how subgames work using the extensive form when…
Stackelberg duopoly, also called Stackelberg competition, is a model of imperfect competition based on a non-cooperative game. It was developed in 1934 by Heinrich von Stackelberg in his “Market Structure and Equilibrium” and represented a breaking point in the study of market structure, particularly…
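A hedged worked example, under assumptions not stated in this excerpt (linear inverse demand P = a − b(q1 + q2) and a common constant marginal cost c): the follower’s best response is q2 = (a − c − b·q1) / (2b); substituting this into the leader’s profit and maximising gives
q1 = (a − c) / (2b),   q2 = (a − c) / (4b)
so the first mover produces twice as much as the follower and earns a higher profit.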
Simultaneous games are those where decisions are simultaneous: both we and the other ‘player’ choose at the same time. The simplest example of this is probably ‘rock, paper, scissors’. Complete information means that we know what we stand to win or lose:…
Cournot duopoly, also called Cournot competition, is a model of imperfect competition in which two firms with identical cost functions compete with homogeneous products in a static setting. It was developed by Antoine A. Cournot in his “Researches Into the Mathematical…
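A hedged worked example (assumptions ours: linear inverse demand P = a − b(q1 + q2), common constant marginal cost c): each firm’s best response is qi = (a − c − b·qj) / (2b), and solving the two best responses simultaneously gives the symmetric Cournot equilibrium
q1 = q2 = (a − c) / (3b),   P = (a + 2c) / 3
with total output lying between the monopoly and the perfectly competitive levels.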
In the battle of the sexes, a couple argues over what to do over the weekend. Both know that they want to spend the weekend together, but they cannot agree over what to do. The man prefers to go watch a boxing match, whereas the woman wants to go…
Mixed strategies need to be analysed in game theory when there are many possible equilibria, which is especially the case for coordination games. The battle of the sexes is a common example of a coordination game where two
Dominant strategies are better than other strategies, no matter what other players might do. In game theory, there are two kinds of strategic dominance:
-a strictly dominant strategy is that strategy that always provides greater utility to the player,…
The prisoner’s dilemma is probably the most widely used game in game theory. Its use has transcended Economics, being used in fields such as business management, psychology or biology, to name a few. Nicknamed in 1950 by Albert W. Tucker, who developed it from earlier works,…
In game theory, the extensive form is a way of describing a game using a game tree. It’s simply a diagram that shows the choices that are made at different points in time (corresponding to each node). The payoffs are represented at the end of each…
In game theory, the strategic form (or normal form) is a way of describing a game using a matrix. The game is defined by exhibiting on each side of the matrix the different players (here players 1 and 2), each strategy or choice they…
Game theory is the science of strategic reasoning, in such a way that it studies the behaviour of rational game players who are trying to maximise their utility, profits, gains, etc., in interaction with other players, and therefore in a context of strategic…
Agency theory is based on the relationship between principals and agents. In economics, this theory comes as a result of the separation between business ownership and its management.
The internalisation of a firm’s management instead of hiring external agents is a milestone in Oliver Williamson’s
Nash equilibria are defined as combinations of strategies in a game such that there is no incentive for players to deviate from their choice. This is the best option a player can make, taking into account the other players’ decisions, and where a change in…
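Formally (notation ours): a strategy profile (s1*, …, sn*) is a Nash equilibrium if, for every player i and every alternative strategy si,
ui(si*, s−i*) ≥ ui(si, s−i*)
that is, no player can gain by deviating unilaterally while the others keep their equilibrium strategies.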
Common knowledge is a condition usually required in game theory, so the model is completely specified and its analysis is coherently undertaken. It completes the notion of complete information, which requires all players to know the rules of the…
The perfection of information is an important notion in game theory when considering sequential and simultaneous games. It is a key concept when analysing the possibility of punishment strategies in collusion agreements.
Perfect information refers to the fact…
Complete information and incomplete information are terms widely used in economics, especially game theory and behavioural economics. We say that there is complete information when each agent knows the other agent’s utility function and the rules of the game. As Luce and Raiffa…
Prospect theory belongs to behavioural economics and stands out as an alternative model to expected utility theory, as the neoclassical assumption of the rational agent is put into question. This theory was developed by Nobel laureate Daniel Kahneman…
Economies of learning derive from the know-how picked up through experience. The main difference between this and economies of scale or economies of scope is the fact that it is not correlated to production levels in the same way: it does…
The experience curve (not to be confused with learning curve) is a graphical representation of the phenomenon explained in the mid-1960s by Bruce D. Henderson, founder of the Boston Consulting Group. It refers…
The learning curve (not to be confused with experience curve) is a graphical representation of the phenomenon explained by Theodore P. Wright in his “Factors Affecting the Cost of Airplanes”, 1936. It refers to the effect…
Laissez-faire, laissez-passer is a French expression that translates as “to let do, let pass”, that is letting things work on their own. In a sense it sums up the economic doctrine of physiocracy, expressing that there is a natural order of things, with its own laws,…
X-inefficiency refers to inputs failing to produce their maximum output as a consequence of an “X” factor. This translates into a failure of both cost minimisation and production maximisation and, hence, implies a loss of efficiency. This term was first introduced by…
As a contribution to industrial organization, George Stigler developed his “own” theory on cost analysis in his article “Production and Distribution in the Short Run”, 1939, which moved slightly away from
The Structure, Conduct and Performance (SCP) paradigm is used as an analytical framework to relate market structure, market conduct and market performance. It was developed in 1959 by Joe S. Bain Jr., who described it in his book “Industrial Organization”. The SCP…
The concept of economy of scope is very similar to that of economies of scale. When we talk about economies of scope, we mean that average costs are reduced by introducing another product into our portfolio that can share some of the infrastructure…
In the long run, no cost is fixed. We can determine our production level and adjust plant sizes, investment in capital and labour accordingly. As we can see in the diagrams below, this gives…
In the short run, fixed costs include capital, K, whereas labour, L, is considered variable. Fixed costs are represented as…
Average costs are those associated with one unit of production. As total costs change with the level of production, we find the average cost as the sum of costs divided by the amount produced:
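For reference, the standard definition (not quoted from this excerpt): with total cost TC(q) made up of fixed costs FC and variable costs VC(q),
AC(q) = TC(q) / q = (FC + VC(q)) / q
so, for instance, producing 100 units at a total cost of 500 gives an average cost of 5 per unit.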
When analysing costs, the first thing to know is that there are fixed and variable costs:
Subadditivity is an important concept because it is often used to justify imperfect competition, the classic nemesis of neo-classicists. The only real way to justify less than perfect competition is the kind…
Say’s Law is a principle of classical economics attributed to the French economist Jean-Baptiste Say, and it holds the apparently simple statement that “products are paid for with products”, as Say puts it in his “Traité d’économie politique”, 1803. One of the implications of this…
Period analysis shows the inter-temporal dimension of production theory. It was developed by Alfred Marshall in his “Principles of Economics”, 1890, and has remained practically unaltered since. It tries to explain how equilibrium is achieved and explains the adjustment processes to reach it,…
Production in the very long run differs from long run production in that there may be changes in technology. There are three main types of technological advances:
When dealing with long run production, the main change from short run production is that we can vary the levels of fixed inputs we use (capital, K), as well as variable inputs (labour, L). Our levels of production…
The short run is considered the period of time where fixed costs are still fixed, which basically means that, if you have a factory, you have to make do with it because you can neither sell it, nor make it bigger, nor rent…
Multi-product firms are firms that produce multiple goods, and therefore have to deal with allocating inputs more properly in order to attain higher production levels. This is a greater problem than the one single-product firms face, the production maximisation problem,…
New Keynesian economists argue that menu costs are the reason for price stickiness. Price stickiness, the suboptimal adjustment of prices in response to demand shocks, can result in business cycles.
Menu costs are costs that result from price changes. An easy way to understand menu costs is by means of a typical example: restaurants. When a restaurant manager wants to change prices, the cost of changing the menus (in order to show the new prices) must be taken into…
As in consumer’s theory (where consumption duality is analysed), the firm’s input decision has a dual nature. Finding the optimum levels of inputs can not only be seen as a question of choosing the lowest isocost line tangent to the…
Cost minimisation tries to answer the fundamental question of how to select inputs in order to produce a given output at a minimum cost.
A firm’s isocost line shows the cost of hiring factor…
Production maximisation must be seen as an optimisation problem regarding the production function, represented by isoquants, and a constraint regarding production costs, represented by an isocost…
The law of returns to scale explains how output behaves in response to a proportional and simultaneous variation of inputs. Increasing all inputs by equal proportions and at the same time, will increase the scale of production. Returns to scale differ from one case to…
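A compact way to state the three cases (notation ours): scaling all inputs in a production function f(K, L) by a factor t > 1,
f(tK, tL) > t·f(K, L)  (increasing returns to scale)
f(tK, tL) = t·f(K, L)  (constant returns to scale)
f(tK, tL) < t·f(K, L)  (decreasing returns to scale)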
Revealed preference theory is attributable to Paul Samuelson in his article “Consumption Theory in Terms of Revealed Preference”, 1948. Consumer theory depends on the existence of preferences which materialise into utility functions. These utility functions are
Characteristics demand theory states that consumers derive utility not from the actual contents of the basket but from the characteristics of the goods in it. This theory was developed by Kelvin Lancaster in 1966 in his working paper “A New…
A good is something that provides its holder with some kind of satisfaction, and therefore has utility. There are different kinds of goods, and different classifications can be arranged and identified. We can differentiate between consumption goods (durable or perishable) and capital goods. Classification depending…
Price indices are used to monitor changes in price levels over time. This is useful when separating real income from nominal income, as inflation is a drain on purchasing power. The two most basic indices are the Laspeyres index (named after Etienne Laspeyres) and the Paasche…
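For reference, the two formulas in standard notation (not quoted from this excerpt), comparing period t prices pt with base-period prices p0:
Laspeyres: PL = Σ pt·q0 / Σ p0·q0  (base-period quantities as weights)
Paasche: PP = Σ pt·qt / Σ p0·qt  (current-period quantities as weights)
The Laspeyres index tends to overstate the rise in the cost of living, while the Paasche index tends to understate it.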
A production function shows how much can be produced with a certain set of resources. Generally, when looking at production, we assume there are two factors involved in production: capital (K) and labour (L), as this…
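A commonly used illustration (our example, not taken from the entry) is the Cobb-Douglas form:
q = f(K, L) = A·K^α·L^β
where A captures technology and α and β are the output elasticities of capital and labour; α + β greater than, equal to or less than 1 gives increasing, constant or decreasing returns to scale respectively.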
This economic phenomenon occurs when increasing output translates into a decline in the firm’s average cost of production. Alfred Marshall was the first economist to distinguish economies of scale depending on…
The economic region of production shows the combinations of factors at a certain cost that make economic sense. Areas outside the economic region of production mean that at least one of the inputs has negative marginal productivity. This region is marked by what are called ridge…
An isoquant shows the different combinations of K and L that produce a certain amount of a good or service. Mathematically, an isoquant shows:
f (K,L) = q0
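As a hedged example (our choice of technology), for a Cobb-Douglas production function f(K, L) = K^(1/2)·L^(1/2), the isoquant for output level q0 is the curve K·L = q0², i.e. K = q0²/L, which has the familiar convex, downward-sloping shape.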
Isocost lines show combinations of productive inputs which cost the same amount. They are the same concept as budget restrictions when looking at consumer behaviour. Mathematically, they can be expressed as:
rK + wL = C
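Solving the isocost equation for K makes its shape explicit:
K = C/r − (w/r)·L
a straight line with intercept C/r and slope −w/r, the negative of the ratio of input prices.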
Marshallian and Hicksian demands stem from two ways of looking at the same problem: how to obtain the utility we crave with the budget we have. Consumption duality expresses this problem as two sides…
Utility is the ‘satisfaction’ we get from using, owning or doing something. It is what allows us to choose between options. This can be plotted on a chart.
A preference function therefore assigns values to the…
The foundation for Economics is rationality. Rationality implies that people will act in ways that best suit their particular set of circumstances, including, but not limited to, the choices they face. In order to choose, you must necessarily have a set of preferences over the options you are presented…
Asymmetric information refers to transactions in which one of the parties has better information than the other one. Adverse selection and moral hazard can result from the worst cases of asymmetric information in transactions between economic agents.
A key article on this subject is…
The marginal rate of transformation (MRT) can be defined as how many units of good x have to stop being produced in order to produce an extra unit of good y, while keeping constant the use of production factors and the technology being used. It involves the relation between…
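In symbols (notation ours), and keeping the entry’s convention of giving up units of x to gain an extra unit of y: along the production possibility frontier,
MRT = −dx/dy = MCy / MCx
the ratio of the marginal costs of the two goods.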
The marginal rate of technical substitution (MRTS) can be defined as how much of input 1 has to decrease if input 2 increases by one extra unit, keeping total output constant. In other words, it shows the relation between inputs, and the trade-offs amongst them, without changing the level…
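With the convention used in this entry (input 1 given up per extra unit of input 2), this can be written as
MRTS = −d(input 1)/d(input 2) = MP2 / MP1
the ratio of the marginal products of the two inputs, which is also the absolute value of the slope of the isoquant.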
The marginal rate of substitution (MRS) can be defined as how many units of good x have to be given up in order to gain an extra unit of good y, while keeping the same level of utility. Therefore, it involves the trade-offs of…
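With the convention used in this entry (units of x given up per extra unit of y), along an indifference curve
MRS = −dx/dy = MUy / MUx
the ratio of the marginal utilities of the two goods.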
Generally, if the price of something goes down, we buy more of it. This is down to two effects:
There are two ways to solve a consumer’s choice problem. That is, we can either fix a budget and obtain the maximum utility from it (primal demand) or set a level of utility we want to…
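A compact statement of the two problems (notation ours), for two goods x and y with prices px and py, income m and a target utility level ū:
Primal (utility maximisation): max u(x, y) subject to px·x + py·y = m, which yields the Marshallian demands.
Dual (expenditure minimisation): min px·x + py·y subject to u(x, y) = ū, which yields the Hicksian demands.
Under standard conditions both problems pick out the same bundle when m equals the minimised expenditure.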
Cost minimisation is a way of solving the consumer’s optimisation problem, in this case by minimising expenditure subject to a given level of utility, even though the most common way of doing this is by means of utility maximisation.
Utility maximisation must be seen as an optimisation problem regarding the utility function and the budget constraint. These two sides of the problem, define Marshallian demand curves.
The individual therefore has to solve the following problem: faced with a…
Consumer behaviour is a maximisation problem. It means making the most of our limited resources to maximise our utility. As consumers are insatiable, and utility functions grow with quantity, the only thing that…
This efficiency criterion was developed by Vilfredo Pareto in his book “Manual of Political Economy”, 1906. An allocation of goods is Pareto optimal when there is no possibility of redistribution in a way where at least one individual would…
The production possibility frontier (PPF) represents the quantity of output that can be obtained for a certain quantity of inputs using a given technology. Depending on the technology, the PPF will have a certain shape.
There are two fundamental theorems of welfare economics.
-First fundamental theorem of welfare economics (also known as the “Invisible Hand Theorem”):
any competitive equilibrium leads to a Pareto efficient allocation of resources.
The main idea here is that markets lead to…
Kelvin Lancaster and Richard G. Lipsey, in their article “The General Theory of Second Best”, 1956, following an earlier work by James E. Meade, treated the problem of what to do when certain optimality conditions (which must be considered in order to arrive at a
Welfare economics is a part of normative economics whose objective is to evaluate different situations of a given economic system, in order to choose the best one.
Its study can be traced back to Adam Smith, who related an increase of welfare with an…
Indifference curves are lines in a coordinate system for which each point expresses a particular combination of a number of goods, or bundles of goods, that leave the consumer indifferent. That is, the consumer will have no preference between two bundles located on the…
In 1881, Francis Y. Edgeworth came up with a way of representing, using the same axis, indifference curves and the corresponding contract curve in his book “Mathematical Psychics: an Essay on the Application of Mathematics to the Moral Sciences”. However, the representation given, using as an example the work…
The importance of David Ricardo‘s model is that it was one of the first models used in Economics, aimed at explaining how income is distributed in society.
-there is only one industry, agriculture; only one good, grain;
-there…