Archive | economics

The role of economists, and plumbing

We’re sometimes accused of sitting in an ivory tower, feet up and writing in abstract terms about all kinds of things that are not directly relevant to the real world.

Well, there are academics like that, and there is certainly value in doing fundamental research that serves as an important input to more applied work done by others. But there are also many academics of a different kind.

For instance, some do consulting, and I have the impression that companies and institutions attach great value to it. There is nothing wrong with that, I believe, as long as it stays within limits and they keep doing research and teaching. On the contrary, I see great potential for this to make them better teachers and researchers, because their teaching and research become more relevant to practice. Others are involved in policy design and in designing institutions within the scope of projects financed by third parties.

In a recent essay, Esther Duflo of MIT has argued that this kind of attention to detail is not only interesting but genuinely needed and useful. She suggests that economists should be more like plumbers. The essay is worth reading, especially for Ph.D. students who are making up their minds about the direction they want to take.

The role of empirical work

I just came across a nice article by Dan Hamermesh in a recent issue of the Journal of Economic Literature. It was discussed by Einav and Levin in an interesting piece in Science on big data.

Einav and Levin write:

Hamermesh recently reviewed publications from 1963 to 2011 in top economics journals. Until the mid-1980s, the majority of papers were theoretical; the remainder relied mainly on “ready-made” data from government statistics or surveys. Since then, the share of empirical papers in top journals has climbed to more than 70%.

Isn’t that remarkable? I was certainly under the wrong impression when I was a Ph.D. student at Berkeley and in Mannheim and thought that it was all about theory and methods. Where does this impression come from? Maybe from the fact that one sees so much theory in the first year of a full-blown Ph.D. program, which is packed with core courses in micro, macro and econometrics, covering the foundations of good economic research. In any case, my advice to Ph.D. students would be to strongly consider working with real data, as soon as possible. There is certainly room for theoretical and methodological contributions, but this should not mean that one never touches data. At least in theory 😉 everybody should be able to do an empirical analysis, and for this one has to practice early on, even if one wants to do econometric theory in the end. After all, one should know what one is talking about. Or would you trust somebody who talks about cooking but never cooks himself? OK, I admit, that goes a bit too far.

Having said this, let me speculate a bit. My personal feeling is that one of the next big things, and maybe a good topic for a Ph.D., could be to combine structural econometrics with some of the methods now used and developed in data science (see the Einav and Levin article along with Varian’s nice piece). In Tilburg, by the way, we have a field course in big data and another sequence in structural econometrics (empirical IO).

Ballet, van Gogh and behavioral economics

Picture taken from http://commons.wikimedia.org/wiki/File:Vincent_Willem_van_Gogh_128.jpg


At the recent Netspar Pension Workshop I talked with Susann Rohwedder from the RAND Corporation. We talked about van Gogh and how he spent his youth in Brabant, not far from Tilburg. His paintings from that period can be described as relatively dark and gloomy and, with the probable exception of The Potato Eaters, not nearly as amazing as what he produced later in his life in the south of France. What dominates in this early work, arguably, is good craftsmanship. What I find remarkable is that he learned painting from scratch before moving on and developing something new.

Likewise, Picasso first learned painting from scratch, producing works that were well done but far more realistic than what he is known for now. Susann remarked that in modern dance, too, people often say that one should first learn ballet, in order to get a good grip on the technical skills, before moving on. Interesting.

This discussion made me realize that there is a strong commonality with my thinking about behavioral economics. Many people do research in behavioral economics without ever learning classical economics from scratch, and I have always wondered why. Standard economic theory is the simplest possible model we can think of, and it works just fine for many questions we may want to answer. There is of course a lot to be gained by studying behavioral aspects of individual decision making, as recently demonstrated once more by Raj Chetty in his Ely lecture. But I think the best way to get there is to first fully understand classical economic theory and only then build on it. In passing, another thing that Chetty pointed out very nicely is that the best way to go about doing behavioral economics is probably not to point out where the classical theory is wrong (any model is wrong, because it abstracts from some aspects of economic behavior in order to focus on others), but to ask how we can use the insights from behavioral economics for policy making.

“You’ve Got Mail”, Amazon, Hachette, and four changes of an entire industry within just a few decades

Tom Hanks and Meg Ryan in “You’ve Got Mail”, taken from http://romanceeternal.org/REimages/Yougotmail1.jpg

Here we go: Amazon finally sealed a deal with Hachette, one of the big book publishers. What’s interesting about this deal is that special financial incentives were negotiated that may or may not be anti-competitive, and I would like to speculate below on what these may look like.

But first some background. I personally find the book industry a super interesting one, because so much has happened recently, and keeps happening. My dad still laments that book stores are dying and that this is a loss to society, and I keep arguing that the transformation of the book industry is one of the great things that have happened in the last ten years. Now we can finally buy e-books on our Kindles and the like, take them with us, and read them more conveniently than ever. We read newspapers and magazines on those devices too, and save loads of paper in the process.

Let’s have a look at the business side. The good old corner bookshop (think Meg Ryan in “You’ve Got Mail”) earned about 40% of the price of every book sold, and this is exactly why there were so many such shops around. Their assortment was relatively small, and coordination happened via bestseller lists. One could argue that this was some sort of collusive agreement between publishers and bookshops, which resulted in everybody buying the same books at high prices. Then came the bookstore chains (think Tom Hanks in “You’ve Got Mail”). They offered a broader assortment and discounts on bestsellers, making money on the remaining titles. Small bookshops began to disappear, which is why people think the chains were evil. But as a result, more people read different books, and the bestsellers were sold at lower prices. Overall, this sounds like a welfare improvement to me.

Then came Amazon. Amazon was, and still is, trying to offer its Kindle devices to customers at fairly low prices, such as 100 dollars. The hope seems to be that people move to Amazon and do some sort of one-stop shopping for books, magazines, music and movies, which would ultimately allow Amazon to earn money. This would be sustainable because Amazon would then be able to negotiate good deals with publishers and movie distributors. So Amazon would be a big player with lots of bargaining power, and customers may even benefit from it. But in order to make this attractive to customers already now, Amazon needs to make sure that content, such as e-books, is available at a low price not only in the future but today; otherwise even a cheap Kindle is nothing consumers would care about. This is why Amazon decided not to earn any money on e-books when it sold them for $9.99. At the same time, publishers were unhappy, because the e-book sales cannibalised their sales elsewhere, on which they earned more (for instance because they did not have to give other retailers as large a discount as they had to give Amazon). The underlying problem was that they no longer set the retail price themselves. Put differently, a higher price for e-books sold by Amazon would have been in their interest.

Meanwhile, Apple tried to counter Amazon’s strategy by negotiating a deal with the book publishers. Essentially, Apple came up with contractual arrangements under which the publishers set the prices of their e-books themselves but had to offer the same price on all e-book platforms, including Amazon. This led to prices that were about 30% higher, also on Amazon. Smart move, Steve Jobs! But the US Department of Justice then deemed this anti-competitive (rightly so!) and made the publishers negotiate new deals. This ultimately led to lower prices again last year.

After that, Amazon started to push for lower wholesale prices, that is, bigger discounts from the publishers, still with the aim of having lower retail prices, so that consumers would continue to find it attractive to buy e-readers and then content from Amazon. The negotiation phase was quite tough, with Amazon temporarily not shipping hard copies of Hachette titles and not taking pre-orders.

The new deal now means that Hachette will set the price, but at a level that Amazon finds favourable. How can that be? I can only speculate, but something like the following must be going on. Suppose Amazon wants a price x*, and Hachette prefers a higher price y* (note the stars). Then it must be that for each euro by which Hachette sets the price above x*, Amazon receives one euro per copy sold from Hachette. In practice, Amazon and Hachette could have a deal saying: Hachette is completely free to set its price, but Amazon compensates Hachette one-to-one for price cuts below a certain baseline price, for instance y* (other baselines would work too). In the end, it will be hard to argue that this is anti-competitive just like that. After all, it gives Hachette an incentive to set lower prices. And Hachette will like that too, because it will then sell more books.
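To see how such a compensation clause changes Hachette’s incentives, here is a minimal numerical sketch. Everything in it is assumed purely for illustration (linear demand, a 70% revenue share for Hachette, made-up values for x* and y*); the actual contract terms are not public.

```python
import numpy as np

# Illustrative sketch of the speculated compensation clause. All numbers
# and functional forms are my assumptions, not the actual contract terms:
#   - linear demand q(p) = a - b*p
#   - Hachette receives a share lam of the retail price per copy
#   - clause: Amazon compensates Hachette one-to-one for price cuts
#     below the baseline y_star, down to Amazon's preferred price x_star

a, b = 100.0, 4.0             # hypothetical demand parameters
lam = 0.7                     # hypothetical revenue share for Hachette
x_star, y_star = 8.0, 12.0    # Amazon's and Hachette's preferred prices

prices = np.linspace(5.0, 15.0, 201)
q = np.maximum(a - b * prices, 0.0)

profit_plain = lam * prices * q                        # no clause
comp = np.clip(y_star - prices, 0.0, y_star - x_star)  # per-copy compensation
profit_clause = (lam * prices + comp) * q              # with clause

print("optimal price without clause:", prices[np.argmax(profit_plain)])
print("optimal price with clause:   ", prices[np.argmax(profit_clause)])
```

With these made-up numbers, Hachette’s profit-maximizing price drops from 12.50 without the clause to exactly x* = 8 with it: cutting the price raises the number of copies sold, while the one-to-one compensation keeps Hachette’s per-copy receipts from falling.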

However, I would not jump to conclusions yet. I have a feeling that this will come back to us sooner or later, but by then Amazon will probably have gained market share and the industry will have changed further. The big question is whether authors will be better or worse off. For now, it seems that they are better off, but what will happen once Amazon has all the customers and pushes for higher prices and/or lower royalties? This could be seen as an abuse of a dominant position, and my hope is that other players, such as Apple and Netflix, will keep offering a competitive alternative to Amazon’s services. But this may become increasingly hard, because network effects give platforms that are already big a competitive advantage, and Amazon offers all the content on one platform. So it’s not clear what will happen. Exciting!

Correct and incorrect models

Today, somebody asked a question in my panel data econometrics class. The question concerned the assumption of strict exogeneity and whether it was violated in the example I had given before. I replied that yes, it could indeed be violated, but that most of the time a model will be mis-specified in one way or another, and assumptions will not hold in the strict sense. What I meant was that, in some vague sense, the assumption was a good enough approximation (without going into the details of my example, think of the correlation between the error term and the regressor as being almost zero).
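To illustrate, here is a small simulation sketch. It is my own stylized example, not the one from class: strict exogeneity fails because the regressor responds to past shocks, yet the fixed-effects estimate stays close to the truth as long as the feedback is weak.

```python
import numpy as np

# Stylized panel simulation (my own example, not the one from class).
# Strict exogeneity fails because the regressor reacts to past shocks:
# x_{i,t+1} = noise + rho * eps_{i,t}. The within (fixed-effects)
# estimator of beta is then biased, but only mildly for small rho.

rng = np.random.default_rng(0)
N, T, beta = 5000, 5, 1.0

def within_estimate(rho):
    alpha = rng.normal(size=(N, 1))           # individual effects
    eps = rng.normal(size=(N, T))             # idiosyncratic shocks
    x = rng.normal(size=(N, T))
    for t in range(T - 1):                    # feedback from past shocks
        x[:, t + 1] += rho * eps[:, t]
    y = alpha + beta * x + eps
    xd = x - x.mean(axis=1, keepdims=True)    # within transformation
    yd = y - y.mean(axis=1, keepdims=True)
    return (xd * yd).sum() / (xd ** 2).sum()

for rho in (0.0, 0.1, 0.5):
    print(f"rho = {rho:.1f}: beta_hat = {within_estimate(rho):.3f}")
```

In this setup the estimate is essentially correct for rho = 0 and deteriorates only gradually as rho grows, which is the sense in which a strictly violated assumption can still be a good enough approximation.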

That made me think again of Milton Friedman, who argues in a famous essay that a model should be judged by its ability to predict counterfactual outcomes or, in his own words, “to make correct predictions about the consequences of any change in circumstances”. Sometimes this is what we are after, and it is referred to as a positive approach (being able to make the right predictions), as opposed to a normative one (in which we can talk about welfare and how to maximize it).

That sounds reasonable at first. But can we really make such a clear distinction? Can’t we simply see welfare as the outcome we would like to predict? Of course, we always need a model to make statements about welfare, but it could also be that all models agree on the direction of the welfare effects of certain policy changes and differ only with respect to the exact magnitude. Therefore, I prefer to think of a model as a set of assumptions that are for sure wrong in the zero-one sense. The question is how wrong, and that depends on the purpose the model is meant to serve. So it’s a matter of degree. If the model allows me to make fairly accurate welfare statements, and I can be sure of that for whatever reason (this is the catch here), then not only is the model good in Friedman’s sense, but I can even use it for welfare comparisons, so it serves a normative purpose. In a way, all this brings me back to an earlier post, and in particular the part about Marschak.

PS on September 19, 2014: There are two interesting related articles in the most recent issue of the Journal of Economic Literature: discussions of the books by Chuck Manski and Kenneth Wolpin, written by John Geweke and John Rust, respectively. In these discussions, Geweke and Rust touch upon the risk of making mistakes when formulating a theoretical model, and how we should think about that.

How Apple’s business model just became even more beautiful

Last week we saw another one of Apple’s wonderfully crafted presentations. What a choreography! But besides learning how to present really well (just observe how a lot of information is conveyed in a way that makes it all look so simple and clear), there was something special going on.

First of all, what was it all about? New iPhone models, Apple’s move into payment services (aka Apple Pay), and the new Apple Watch. At least to me, it seems that the watch is dominating the press coverage. But let’s think for a moment about what may be going on in the background.

The iPhone needed an upgrade anyway: bigger screens (the competitors already had them, and customers were asking for them), a better camera, a faster processor, and new technology that allows one to use the phone for super convenient payments. Good move.

Then Apple Pay. How smart is that? Apple positions itself between the merchants and the customers: every time somebody wants to make a payment, Apple sends a request to the credit card company, and the credit card company then sends the money directly to the merchant. Apple is not involved in the actual transaction, has less trouble, and cashes in anyway. Customers benefit, and merchants will want to offer the service. Good move, with lots of potential.

Finally, the Apple Watch. When you read the coverage, you realize that the watch is actually not ready yet. Battery life is still an issue, and so is the interface. And maybe the design will still change. But there are four truly innovative features almost hiding in the background. First, it’s a fashion item, unlike all the other technical devices already on the market. Second, it has more technology packed into it. Third, it will have apps on it. Fourth, you can use it to pay, with Apple Pay.

So, what’s so special about this event? It’s all about network effects. I’ve worked on two-sided markets for a while, and three types of network effects play a role here. The first are direct network effects, the ones we know from Facebook: the more people are on Facebook, the more I like to be on Facebook. These play less of a role here. The second are indirect network effects. They arise because app developers find it the more worthwhile to develop apps the more users will potentially download them. This is why Apple presented the Apple Watch now: between the announcement and the moment the product is finally sold, developers can build apps, which will in turn make the watch more attractive to consumers and thereby raise demand. Developers anticipate this and will therefore produce even more apps. Very smart, and all Apple has to do is provide the platform, the App Store, and cash in every single time somebody buys an app. Finally, Apple Pay follows a similar model: the more people use Apple Pay, the more merchants will accept it, and that in turn makes people buy more Apple devices so that they can use the service, and so on.
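A toy feedback loop illustrates the indirect effect. The functional forms and numbers below are made up purely for illustration; the only point is that users and apps feed each other until the platform settles at a much larger scale than where it started.

```python
# Toy model of an indirect network effect. The functional forms and numbers
# are made up purely to illustrate the feedback loop between the two sides.

def users(n_apps):        # device demand, increasing in the number of apps
    return 100 * n_apps / (50 + n_apps)

def apps(n_users):        # developer entry, increasing in the user base
    return 2 * n_users

u = 1.0                   # tiny initial user base
for step in range(8):     # iterate the feedback loop
    a = apps(u)
    u = users(a)
    print(f"round {step + 1}: users = {u:5.1f}, apps = {a:6.1f}")
```

Announcing the watch months before it ships effectively lets the developer side of this loop start running before a single device is sold.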

So, if you ask me, taken together this is a huge step for Apple. Not because the Apple Watch or the iPhone is particularly great, but because Apple’s business model is incredibly smart. Beautifully smart. And I haven’t even mentioned that sometime soon all the Apple mobile devices will be much better integrated with the operating system on the company’s laptops and desktops. As they said in their own words, something “only Apple can do”.

Theory, data, demand and supply

The following is a slightly altered version of a column I wrote for the December 2011 issue of our student newspaper Nekst.

In 1925, the economist Henry Schultz wrote in the Journal of Political Economy that “The common method of fitting a straight line to data involves the arbitrary selection of one of the variables as the independent variable X and the assumption that an observed point fails to fall on the line because of an “error” or deviation in the dependent variable Y alone, the X variable being allowed no deviation.”

At first glance, one may wonder whether this can be right. Haven’t we all learned that we regress Y on X when we are interested in “the effect” of X on Y? In his article, Schultz was interested in estimating the demand for sugar. He faced the problem that both demand Y and price X were measured with error. In such a case, indeed, there is no reason to prefer one of the two regressions he describes to the other. Here, “errors” come about, as people realized later, not only because variables are measured incorrectly, but also because there were aspects of the relationship between prices and quantities sold in that market that were not explained by a simple model postulating a one-to-one relationship between prices and demand.
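A small simulation makes Schultz’s point concrete. All numbers are made up; the only feature that matters is that both variables carry measurement error.

```python
import numpy as np

# Sketch of Schultz's problem (illustrative numbers): the true relation is
# y = beta * x, but both demand and price are observed with error, so
# neither regression is obviously "the" right one.

rng = np.random.default_rng(1)
n, beta = 10_000, -2.0
x = rng.normal(size=n)                    # true price
y = beta * x                              # true demand, exact relation
X = x + 0.5 * rng.normal(size=n)          # price measured with error
Y = y + 0.5 * rng.normal(size=n)          # demand measured with error

b_yx = np.polyfit(X, Y, 1)[0]             # slope from regressing Y on X
b_xy = np.polyfit(Y, X, 1)[0]             # slope from regressing X on Y
print(f"true slope:         {beta:.2f}")
print(f"Y-on-X slope:       {b_yx:.2f}")       # attenuated toward zero
print(f"1 / (X-on-Y slope): {1 / b_xy:.2f}")   # too steep
```

The Y-on-X slope is attenuated toward zero, the inverted X-on-Y slope is too steep, and the true slope lies in between; nothing in the data alone tells us which of the two regressions to run.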

Here comes the role of theory, and it is fascinating to see in the literature how the following ideas developed. It all started with a book Henry Moore wrote in 1914, entitled Economic Cycles: Their Law and Cause, in which we find a regression of the quantity of pig iron sold on its price. The coefficient on price was positive, and Moore interpreted the result as a “new type” of dynamic demand curve. Philip Wright, a Harvard economist, reviewed the book the following year in the Quarterly Journal of Economics and explained that demand for pig iron was plausibly very volatile, whereas the production technology, and hence the supply curve, was not changing much over time. Therefore, the shifts in the demand curve trace out the supply curve, and that is why we estimate a supply curve when regressing quantities on prices.

A discussion followed, and then, more than ten years later, Appendix B of Philip Wright’s 1928 book The Tariff on Animal and Vegetable Oils contained two derivations of what we know today as the instrumental variables estimator. The idea is that when we regress quantities on prices and use factors shifting the supply curve as instruments for prices (e.g. weather conditions for corn production), we estimate a demand curve. Conversely, when we use factors shifting the demand curve as instruments (e.g. a change in value added taxes), we estimate a supply curve. Carl Christ provides more details on the history in his 1985 AER article.
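Here is a minimal simulation of Wright’s idea, with made-up parameters: price and quantity are determined jointly by a demand and a supply curve, and a supply shifter z (think weather) serves as the instrument for price.

```python
import numpy as np

# Minimal simulation of Wright's idea (all parameters made up):
# demand: q = -b_d * p + u,  supply: q = b_s * p + c * z + v,
# where z is an observable supply shifter such as the weather.

rng = np.random.default_rng(2)
n = 100_000
b_d, b_s, c = 1.0, 0.5, 1.0
u = rng.normal(size=n)                 # demand shocks
v = rng.normal(size=n)                 # supply shocks
z = rng.normal(size=n)                 # supply shifter (e.g. weather)

p = (u - v - c * z) / (b_d + b_s)      # market-clearing price
q = -b_d * p + u                       # quantity, read off the demand curve

ols = np.polyfit(p, q, 1)[0]                      # biased by simultaneity
iv = np.cov(q, z)[0, 1] / np.cov(p, z)[0, 1]      # IV, using z as instrument
print(f"true demand slope: {-b_d:.2f}, OLS: {ols:.2f}, IV: {iv:.2f}")
```

OLS lands between the demand slope of -1 and the supply slope of +0.5 because it mixes the two curves, while the instrumented regression recovers the demand slope, just as Wright’s Appendix B argued.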

One thing to take away from this is that theory matters. Once we see the world through the lens of theory, here a simple model of supply and demand, we can progress in our understanding of it. Theory also restrains us, because not everything that can be done should be done. The example above shows that we first need to understand what we are estimating when we regress one variable on another, and this is guided by theory. If we do not know this a priori, i.e. before we have run the regression, then there is probably no point in buying expensive data sets, collecting data, conducting experiments, studying the asymptotic properties of an estimator, or developing fancier estimation procedures. This is also what Marschak had in mind when he started his 1953 paper by saying that “Knowledge is useful if it helps to make the best decisions.” Highly recommended.

 

References

CHRIST, C. (1985): “Early progress in estimating quantitative economic relationships in America,” American Economic Review, 75(6), 39–52.

MARSCHAK, J. (1953): “Economic measurements for policy and prediction,” in Studies in Econometric Method, ed. by W. Hood, and T. C. Koopmans, pp. 1–26. Wiley, New York.

MOORE, H. (1914): Economic Cycles: Their Law and Cause. Macmillan, New York.

SCHULTZ, H. (1925): “The statistical law of demand as illustrated by the demand for sugar,” Journal of Political Economy, 33(6), 577–631.

WRIGHT, P. G. (1915): “Moore’s Economic Cycles,” Quarterly Journal of Economics, 29(3), 631–641.

WRIGHT, P. G. (1928): The Tariff on Animal and Vegetable Oils. Macmillan, New York.