“You’ve got mail”, Amazon, Hachette, and four transformations of an entire industry within just a few decades
Here we go: Amazon has finally sealed a deal with Hachette, one of the big book publishers. What’s interesting about this deal is that special financial incentives were negotiated that may or may not be anti-competitive, and I would like to speculate below about what these may look like.
But first some background. I personally find the book industry a super interesting one because so much has happened recently, and keeps happening. My dad still laments that book stores are dying and that this is a loss to society, while I keep arguing that the transformation of the book industry is one of the great things that have happened in the last ten years. Now we can finally buy ebooks on our Kindles and the like, take them with us, and read them more conveniently than ever. We read newspapers and magazines on these devices too, and in passing we also save loads of paper.
Let’s have a look at the business side. The good old corner bookshop (think Meg Ryan in You’ve Got Mail) earned about 40% of the price of every book sold. And this is exactly why there were so many such shops around. Their assortment was relatively small, and coordination happened via bestseller lists. One could argue that this was some sort of collusive agreement between publishers and bookshops, which resulted in everybody buying the same books at high prices. Then came the bookstore chains (think Tom Hanks in You’ve Got Mail). They offered a broader assortment and discounts on bestsellers, making money on the remaining titles. Small bookshops began to disappear (which is why people thought the chains were evil). Now more people read different books, and the bestsellers were sold at lower prices. Overall, this sounds like a welfare improvement to me.
Then came Amazon. Amazon was, and still is, trying to offer its Kindle devices to customers at fairly low prices, such as 100 dollars. The hope seems to be that people will move to Amazon and do some sort of one-stop shopping for books, magazines, music, and movies, which would ultimately allow Amazon to earn money. This would be sustainable because Amazon could then negotiate good deals with publishers and movie distributors. So Amazon would be a big player with lots of bargaining power, and customers might even benefit from it. But in order to make this attractive to customers today, Amazon needs to make sure that content, such as e-books, is available at a low price not only in the future but now; otherwise, even a cheap Kindle is nothing consumers would care about. This is why Amazon decided not to earn any money on e-books when it sold them for $9.99. At the same time, publishers were unhappy because these e-book sales cannibalised their sales elsewhere, on which they earned more (for instance because they did not have to give other retailers as large a discount as they had to give to Amazon). The underlying reason is that they no longer set the price themselves. Put differently, a higher price for e-books sold by Amazon would have been in their interest.
Meanwhile, Apple also tried to counter Amazon’s strategy by negotiating a deal with the book publishers. Essentially, Apple came up with contractual arrangements under which the publishers set the prices of their e-books themselves, but had to offer the same price on all e-book platforms, including Amazon. This led to prices that were about 30% higher, including on Amazon. Smart move, Steve Jobs! But the US Department of Justice then deemed this anti-competitive (rightly so!) and made the publishers negotiate new deals. This ultimately led to lower prices again last year.
After that, Amazon started to push for lower wholesale prices, that is, bigger discounts from the publishers, still with the aim of having lower retail prices, so that consumers would find it attractive to buy e-readers and then content from Amazon. The negotiation phase was quite tough, with Amazon temporarily not shipping hard copies of Hachette titles and not taking pre-orders.
The new deal now means that Hachette will set the price, but at a level that Amazon finds favourable. How can that be? I can only speculate, but something like the following must be true. Suppose Amazon wants a price x*, and Hachette prefers a higher price y* (note the stars). Then, it must be that for each euro by which Hachette sets the price above x*, Amazon receives one euro per copy sold from Hachette. In practice, Amazon and Hachette could have a deal saying: Hachette is completely free to set its price, but Amazon will compensate Hachette one-to-one for price cuts below a certain baseline price, for instance y* (other baselines would work too). And in the end, it will be hard to argue that this is anti-competitive just like that. After all, it gives Hachette an incentive to set lower prices. And Hachette will like that too, because it will then sell more books.
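One way to make this mechanism concrete (the notation below is entirely my own, purely illustrative, and not from the actual contract): let p be the price Hachette sets, α the revenue share Hachette ordinarily keeps, c its per-copy cost, q(p) the number of copies sold at price p, and suppose Hachette transfers p − x* per copy to Amazon (a negative transfer, i.e. a compensation, when p is below x*). Hachette’s profit is then

```latex
\pi_H(p) \;=\; \bigl(\alpha p - c - (p - x^*)\bigr)\, q(p)
        \;=\; \bigl(x^* - c - (1-\alpha)\,p\bigr)\, q(p).
```

Since α < 1, the per-copy payoff now falls as p rises, so raising the price no longer pays; Hachette does best by choosing a low price and selling more copies, which is exactly the outcome Amazon prefers.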
However, I would not jump to conclusions yet. I have a feeling that this will sooner or later come back to haunt us, but by then Amazon will probably have gained market share and the industry will have changed further. The big question is whether authors will be better or worse off. For now, it seems that they are better off, but what will happen once Amazon has all the customers and pushes for high prices and/or low royalties? This could be seen as abuse of a dominant position, and my hope is that other players, Apple and Netflix come to mind, will keep offering a competitive alternative to Amazon’s services. But this may become increasingly hard, because network effects give platforms that are already big a competitive advantage. And Amazon offers all the content on one platform, so it’s not clear what will happen. Exciting!
Today was the first meeting of the university council. As you may recall from my earlier post, my colleague Martin Salm and I have been elected to that council. Here is our introductory statement.
Introductory speech, held by Tobias Klein on October 3, 2014 in the University Council on behalf of TiU International
Thank you Mister Chairman,
Dear Rector Magnificus, dear President, dear Secretary General, dear other members of this council, dear guests on the podium, dear Thijs,
TiU International is a new initiative. We founded TiU International because we realized that one group of employees was not sitting at the table when the formal discussions about the new strategy took place in this council: international employees.
We would like to convince you over the upcoming two years that they have a lot to offer. They have moved to the Netherlands because of Tilburg University’s reputation. Some of them obtained their PhDs at universities that are better than ours. And in general they have seen how universities are organized outside of Holland. This makes them, to some extent, more independent thinkers.
My colleague Martin Salm and I will do our best to represent these international employees and their ideas in this council.
For us, it is important that documents are available in English and also that discussions take place in English. Otherwise, the many people on campus who do not speak Dutch—employees and students—will keep feeling excluded. We would like to change that.
But our initiative is not primarily about language and we do not only want to represent international employees. Our initiative is about a mindset that we feel is not represented enough in the discussions. Our initiative is about our university becoming a truly international place and we want to represent Dutch employees with an international mindset just as well. This includes many members of the supporting staff who do a great job every single day.
There was an interesting workshop last week (the Rector and Tjits were there as well), and we agree with the main conclusion. Let me put it like this: just as you can’t be half pregnant, a university can’t be half international. We believe that Tilburg University is in many ways close to becoming such a truly international place, and we believe that this offers a wealth of opportunities. This will not only be to the benefit of the employees, but also to the benefit of our students.
We believe that Tilburg University should be ambitious. Being number one in Holland is a nice goal, but we should aim to be one of the best universities worldwide. And some of our departments are actually already among the leading ones in the world. We should learn from them how we can improve, focus more on what we are good at, and also focus on this when telling prospective students why they should come here, so that they can make informed choices.
That is: We believe that true excellence in research and teaching is the way to go. The key players in each academic department should be editors of international journals, keynote speakers at international conferences, great teachers and truly respected senior academics.
We can only continue to be successful if we keep hiring outstanding academics on the international job market; and when those who go the extra mile keep getting rewarded.
We believe that good researchers are often also good teachers.
And we believe that it is the obligation and responsibility of the academics to foster our reputation, raise money, and put together attractive study programs. They need outstanding support for this. Therefore, well-functioning service departments with excellent staff are of vital importance.
This is the beginning of a new yearly cycle for the university council. I want to close by addressing our students: It is one of our key priorities to offer you the best possible education you can get in Holland. We believe in diversity. And we believe that everyone will benefit if an international student also joins us in this council in a year’s time. Please do your best to make this happen.
Thank you for your attention.
Yesterday, we had Mirko Draca over as a guest, who also presented in the economics seminar. Over dinner, he mentioned two main lecture series that he would recommend for learning more about time series analysis and statistics in general. They are:
- Ben Lambert: a large series of short undergraduate- and master’s-level videos, including time series: https://www.youtube.com/user/SpartacanUsuals/playlists
- Joseph Blitzstein: his probability course at Harvard, which starts with the basics and then goes on to cover a lot of useful distributions and stochastic processes: https://www.youtube.com/playlist?list=PLwSkUXSbQkFmuYHLw0dsL3yDlAoOFrkDG
This reminded me of my wish to use online resources more actively myself. And I would like to encourage Ph.D. students especially to actively look for interesting content on the web. It seems to me that such web lectures tend to be underused and underappreciated, and that we usually don’t take the time to watch them as if they were real seminar talks or real lectures. That may be a mistake: by making use of these resources ourselves, we may actually learn how to use the web more effectively when designing our own courses.
This is more broadly related to the challenges faced by universities, as described in a piece published by The Economist earlier this year.
This also concerns conference visits. For example, most people don’t know that the plenary talks of many conferences are freely available on the internet. See here for some nice examples. All of them are highly recommended.
Today, somebody asked a question in my panel data econometrics class. The question concerned the assumption of strict exogeneity and whether it was violated in an example I had given before. I replied that yes, it could indeed be violated, but most of the time, in one way or another, a model will be mis-specified and assumptions will not hold in the strict sense. What I meant was that, in some vague sense, the assumption was a good enough approximation (without going into the details of my example, think of the correlation between the error term and the regressor as being almost zero).
That made me think again of Milton Friedman, who argues in a famous essay that a model should be judged by its ability to predict counterfactual outcomes, or in his own words, “to make correct predictions about the consequences of any change in circumstances”. Sometimes, this is what we are after, and this is referred to as a positive approach (being able to make the right predictions)—as opposed to a normative one (where we can talk about welfare and how one can maximize it).
That sounds reasonable at first. But can we really make such a clear distinction? Can’t we simply see welfare as the outcome we would like to predict? Of course, we always need a model to make statements about welfare, but it could also be that all models agree on the direction of the welfare effects of certain policy changes and only differ with respect to the exact magnitude. Therefore, I prefer to think of a model as a set of assumptions that are for sure wrong in the zero-one sense. The question is how wrong, and that depends on the purpose the model is meant to serve. So it’s a matter of degree. If the model allows me to make fairly accurate welfare statements (and I can be sure of that for whatever reason; this is the catch here), then not only is it good in Friedman’s sense, but I can even use it for welfare comparisons, so it serves a normative purpose. In a way, all this brings me back to an earlier post, and in particular the part about Marschak.
PS on September 19, 2014: There are two interesting related articles in the most recent issue of the Journal of Economic Literature, in discussions of the books by Chuck Manski and Kenneth Wolpin, respectively. In these discussions, John Geweke and John Rust touch upon the risk of making mistakes when formulating a theoretical model, and how we should think about that.
This one is for those who already know what they want to do, and it has to do with structural modeling. It’s about how to do this in Stata (of all places).
There are many reasons why you may want to use Stata for your empirical analysis, from beginning to end. Usually, you will use Stata anyway to put together your data set and to do your descriptive analysis; it’s just so much easier than many other packages because so many useful tools come with it. Plus, it’s a quasi industry standard among economists, so using it and providing code will be most effective.
So, if your structural model is not all that complicated, you can just as well estimate it in Stata.
Today, I want to point you to two useful guides for that. The first one is the guide by Glenn Harrison. This is actually how I first learned to program a simulated maximum likelihood estimator. It focuses on experiments and the situation you usually have there, namely choices between two alternatives. It’s a structural estimation problem because each alternative generates utility, and the utility function depends on the parameters that we seek to estimate.
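To give you a flavour of what such a problem looks like in Stata, here is a minimal sketch (this is my own illustration, not taken from Harrison’s guide, and the variable names choice, x1, and x2 are hypothetical): if the utility difference between the two alternatives is linear in observables plus a standard normal error, the choice probability is a probit, and the log-likelihood can be coded by hand with Stata’s ml command.

```stata
* Minimal binary-choice estimation via maximum likelihood (hypothetical data).
* The utility difference is the linear index xb; with a standard normal error,
* the probability of choosing alternative 1 is normal(xb).
program define mychoice_ll
    args lnf xb
    quietly replace `lnf' = ln(normal( `xb')) if $ML_y1 == 1
    quietly replace `lnf' = ln(normal(-`xb')) if $ML_y1 == 0
end

ml model lf mychoice_ll (choice = x1 x2)
ml maximize
```

For a model this simple you could of course just type probit choice x1 x2, but the point of the ml setup is that it generalizes to likelihoods that have no built-in command, including simulated ones.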
Then, today I bumped into the lecture notes by Simon Quinn, which I found particularly insightful and useful if what you’re doing has components of a life-cycle model. What I particularly like about his guide is that it explains how you would make some of the choices related to the specification of your model and functional forms.
Of course, there are also many reasons why you may not want to use Stata for your analysis. But in any case, it may not hurt to give it a thought.
Last week we saw another one of Apple’s wonderfully crafted presentations. What a choreography! But besides learning how to present really well (just observe how a lot of information is conveyed in a way that makes it all look so simple and clear), there was something special going on.
First of all, what was it all about? New iPhone models, Apple’s move into payment services (aka Apple Pay), and the new Apple Watch. Now, at least to me, it seems that the watch is dominating the press coverage. But let’s think about what may be going on in the background for a moment.
The iPhone needed an upgrade anyway. Bigger screens (the competitors already had them, and customers were asking for them), a better camera, a faster processor, and new technology that allows one to use the phone for super convenient payments. Good move.
Then Apple Pay. How smart is that? Apple positions itself between the merchant and the customer: every time somebody wants to make a payment, Apple sends a request to the credit card company, and the credit card company then sends the money directly to the merchant. Apple is not involved in the actual transaction, has less trouble, and cashes in anyway. Customers benefit, and merchants will want to offer the service. Good move, with lots of potential.
Finally, the Apple Watch. When you read the coverage, you realize that the watch is actually not ready yet. Battery life is still an issue, and so is the interface. And maybe the design will still change. But there are four truly innovative features almost hiding in the background. First, it’s a fashion item, unlike all the other technical devices already on the market. Second, it has more technology packed into it than those devices, and third, it’ll have apps on it. Fourth, you can use it to pay, with Apple Pay.
So, what’s so special about this event? It’s all about network effects. I’ve worked on two-sided markets for a while, and there are three types of network effects that play a role here. The first are direct network effects. These are the ones we know from Facebook: the more people are on Facebook, the more I like to be on Facebook. They play less of a role here. The second are indirect network effects. They arise because app developers find it the more worthwhile to start developing apps the more users will potentially download them. This is why Apple presented the Apple Watch now: between now and when the product finally goes on sale, developers can build apps, which will in turn make the watch more attractive to consumers, so it will have positive effects on demand. Developers anticipate this and will therefore produce even more apps. Very smart, and all Apple has to do is provide the platform, the app store, and cash in every single time somebody buys an app. Finally, Apple Pay: similar model. The more people use Apple Pay, the more merchants will accept it, and this will make people buy more Apple devices so that they can use the service, and so on.
So, if you ask me, taken together this is a huge step for Apple. Not because the Apple Watch or the iPhone are particularly great, but because Apple’s business model is incredibly smart. Beautifully smart. And I haven’t even mentioned that sometime soon all the Apple mobile devices will be much better integrated with the operating system on their laptops and desktops. As they said in their own words, something “only Apple can do”.
Writing an empirical paper involves—next to the actual writing—reading in data, analyzing it, producing results, and finally presenting them using tables and figures.
When starting a Ph.D., one typically imagines producing tables by means of lots of copy-pasting. But actually, I strongly advise you not to do that, and instead to use built-in commands or add-ons that allow you to produce LaTeX (or LyX) tables. There are at least two good reasons for this. First, it’ll save you time fairly soon, maybe already when you put together the first draft of your paper, and at the latest when you do the first revision of that draft. The reason is that you will produce similar tables over and over again, because you will change your specification, the selection of your sample, or something else. And you will do robustness checks. The second reason to automate the creation of tables is that it will help you make fewer mistakes, which can come about when you paste results into the wrong cells or when you accidentally put too many or too few significance stars next to the coefficient estimates.
Here’s an example of one way to do it in Stata and LaTeX (I usually use Stata for organizing my data, matching data sets, producing summary statistics, figures, and so on). I think the way it’s done here is actually quite elegant. This post is also useful when you’re using LyX, by the way, because you can always put LaTeX code into a LyX document.
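As an illustration of the general idea (this is not necessarily the exact approach from the post linked above): the user-written estout package lets you store a few regressions and write them into a LaTeX table in one go. The variable and file names below are hypothetical.

```stata
* Install the estout package once (provides eststo and esttab).
ssc install estout

* Store two hypothetical specifications.
eststo clear
eststo: regress y x1 x2
eststo: regress y x1 x2 x3

* Export a booktabs-style LaTeX table with standard errors and stars.
esttab using results.tex, replace booktabs se ///
    star(* 0.10 ** 0.05 *** 0.01) label
```

Every time your sample or specification changes, you simply rerun the do-file, and an \input{results.tex} in your paper picks up the new numbers automatically.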
So far this is all about generating tables. But actually, the underlying idea is to organize everything so that you can press a button and the data set you will use for the analysis is built from the raw data; then you press a button and the analysis is run and the tables and figures are produced; and finally you press a button and the paper is typeset anew. This is described very nicely in Gentzkow and Shapiro’s Practitioner’s Guide, which I have already referred to in an earlier post. On the one hand, this is best practice because it ensures replicability of results; on the other hand, it will also save you time when you revise your paper, and believe me, you will likely have to do that many times.