Episode post here. This is the first in our series of interview transcripts, courtesy of the awesomely talented Caroline Wall. Enjoy!

Matt Teichman:
Hello, and welcome to Elucidations. I’m Matt Teichman, and with me today is Tyler Cowen, professor of economics at George Mason University and the author of numerous books in economics and numerous articles in philosophy. And we’re here to discuss his recently published moral philosophy book, Stubborn Attachments.

Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals came out in 2018 from Stripe Press. He’s also host of the podcast Conversations with Tyler, which I highly recommend. It’s a long-form interview podcast that features interviews with all kinds of people, academic and non-academic.

I guess one thing I’ll mention about your podcast that I really like is the combination of seriousness and levity. I don’t really hear that very often in a long-form interview podcast—lots of questions from left field.

Tyler Cowen:
Maybe the levity is unintended and I’m just strange.

[LAUGHTER]

Matt Teichman:
So I find your book really interesting. And one way I might describe your approach to moral philosophy is to start with economic growth. Economic growth ends up being something that gets heavily prioritized in the approach to philosophy that you recommend. And I have kind of a naïve question about it. I had the impression from the book that you were assuming a close connection between economic growth and economic health. And I was wondering—are those the same thing? And if so, why are they the same thing?

So maybe to illustrate, imagine you had a small island with, like, 20 people on it, and new people were born at exactly the same rate as people died, so that at any given moment there were exactly, I don’t know, 20 people on the island. Could that population be healthy economically, always doing the exact same amount of work and always producing the same amount of money, and not growing? Or does an economy need to always grow in order to be healthy?

Tyler Cowen:
The title of my book is Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals. And I think of this book as coming out of the tradition of social choice theory. How can we ever say that one outcome, socially speaking, is better than another? And one of the arguments is: in a world where sustained economic growth is possible, if one society grows at a higher sustainable rate than another, after decades, or even centuries, the faster-growing society will be much better off for virtually everyone. And that’s the best we can do to solve aggregation problems.

But to get to your island question, I also suggest there are a number of cases—for instance, much of the world before the industrial revolution—where growth simply wasn’t on the table. So whatever you might think about the benefits of growth, whether you agree or not, if you can’t grow, you can’t grow. And in those societies, I think the case for deontology is much stronger, because you cannot create very large practical benefits. There’s not that much value to be handed out, so the recipe of “simply do the right thing” seems a lot more compelling—though if I understand your island example, it seems to me like a prime case where deontology should be applied. Lifeboat examples—well, it’s just a few people. Maybe you’re even all going to die, or you’re faced with a high risk of death. There’s not going to be economic growth. So the practical benefits of being a utilitarian are capped pretty low. So maybe, just do the right thing.

Matt Teichman:
Hmm.

Tyler Cowen:
So it’s an oddly relativistic approach to deontology, which I expect most actual deontologists would hate.

Matt Teichman:
And maybe let’s spell out what deontology means. So I guess I would understand that term as the idea that there are certain rules that you should follow about what’s right and wrong, and those rules are, you know, fairly absolute. And doing something right versus doing something wrong is just a matter of whether or not you follow those rules.

Tyler Cowen:
Right. Like Kant or early Nozick. Respect people’s rights in some way, don’t violate them—it’s more or less an absolute command. And that becomes weaker the more a society has the ability to produce large amounts of pluralist value, including utility, but not only utility.

Matt Teichman:
So in other words, if we’re in kind of an end-of-times scenario, where the human race is not looking like it’s going to last very long, that’s maybe the kind of scenario in which a rules-centric approach to ethics shines.

Tyler Cowen:
But if lying to someone would create an additional $10 million in value, which would then be used to cure babies of polio, the case for lying is, all of a sudden, actually pretty good.

Matt Teichman:
Yeah. That’s interesting. So it seems like—this is another thing you’ve written about, but, you know, you’re not against moral rules. And some of them, anyway, you think are fairly absolute. But they’re, like, absolute-ish. There can still be some exceptions in extreme cases. And it seems like maybe what you just mentioned is one of those exception cases. If one snap decision very clearly will lead to massive benefit for a huge number of people, then that would be an exception case. If breaking a rule meant making that happen, that’s OK. But otherwise, basic rules of ethics are fairly absolute.

Tyler Cowen:
Yes. So “fairly” is carrying a lot of weight there. So if you think of ethics as making sense within some sphere, within some background, some set of suppositions, and one of those suppositions is simply that humans or other sentient beings exist, you can then think within that context, some rules are quite absolute. But if you needed to engage in a mass rights violation to save the entire world, without which there would, in essence, be no ethics, then it becomes permissible at those margins to do even the most horrible things if the alternative is, you know, complete extinction of all meaningful life.

Matt Teichman:
Right. And I guess you might think that’s kind of a safe backdoor, or a safe exception case—because how often are we really in that situation, where both the future of the human race is under immediate threat, and some simple, quick action will definitely lead to saving it? That doesn’t happen very often. So we can kind of feel comfortable that we’re going to follow the rules almost all the time.

Tyler Cowen:
Yes. But if you think of, say, drafting large numbers of young men to fight the Nazis in WWII, that’s a pretty coercive thing to do. But there, you have a case. The world was not literally, physically going to go “poof.” But the stakes were so, so high. I would say you’re in a position where a rights violation of that magnitude has a pretty good justification.

Matt Teichman:
Oh, that’s interesting. Mm-hmm.

Tyler Cowen:
Because the world as we know it was actually in danger. The Nazis, the Axis powers could have won.

Matt Teichman:
I wonder whether there’s a risk of future conflicts being kind of, like, marketed as being WWII-like in order to coerce people unjustly.

Tyler Cowen:
This is done all the time, of course. So the second war against Saddam—so there could be a Straussian argument against talking too loudly about the exceptions, because on average, they are likely to be abused for public choice reasons. But nonetheless, if we’re just stepping back in a philosopher-king kind of way and asking, “Is this an exception?”—you know, I think we should grant that it is. But step quietly with your exceptions if you think, on net, your rules are under-followed rather than over-followed, which at the social level certainly is my view.

Matt Teichman:
And what are some examples of some of these moral rules that people should generally follow? We mentioned don’t lie, we mentioned—

Tyler Cowen:
Well, I wouldn’t even put “don’t lie” on my list.

Matt Teichman:
OK. Yeah, what is on your list?

Tyler Cowen:
That might be a good prudential rule. I would endorse it. But it’s not on my list in the book. The absolute human rights in my book are very basic, simple things: don’t kill innocent people; don’t torture innocent people. And it’s a very short list. It’s not like the UN Declaration of Human Rights, where it goes on for dozens of pages.

Matt Teichman:
Right. Makes all kinds of anachronistic assumptions about what a human life is, and—

Tyler Cowen:
Right. And they’ll tell you you have the right to a toilet. Well, it’s wonderful that people can use toilets.

Matt Teichman:
Yeah.

Tyler Cowen:
But I don’t think it makes sense, in the metaphysical sense, to have a right to a toilet.

Matt Teichman:
Right. But maybe—yeah, right. Maybe the right not to be tortured—it’s hard to see how that isn’t going to be eternal or something.

Tyler Cowen:
Right. Killed and tortured—of innocent people.

Matt Teichman:
Right.

Tyler Cowen:
Separate issue, but I actually personally think that killing and torturing guilty people is problematic. But that’s something I don’t cover in the book at all. It’s easier to just sell people on the “innocent victims” case.

Matt Teichman:
Yeah. You’d really have to be out of your mind, I think, to deny that it’s wrong to kill innocent people.

Tyler Cowen:
But the doctrine of rights forfeiture—we still take it for granted in modern American society. And I think it’s actually pretty hard to justify.

Matt Teichman:
Hmm. And what’s the doctrine of rights forfeiture?

Tyler Cowen:
Well, if you murder someone, you forfeit your rights, and we can, in essence, do almost anything we want with you. We can put you in solitary confinement for life. We can execute you.

Matt Teichman:
Right.

Tyler Cowen:
I recognize a practical problem of needing to deal with violent individuals.

Matt Teichman:
Yeah.

Tyler Cowen:
But it’s not obvious to me that they lose their rights just because they did something very, very wrong.

Matt Teichman:
Right. And certainly, we’d want to take the details on a case-by-case basis. But there certainly seem to be at least some cases where you wonder if it’s motivated, really, by something like sadism, rather than the desire to protect ordinary citizens from crime.

Tyler Cowen:
Or even deterrence. Again, it’s taken for granted to be an acceptable motive, but you can hang up the innocent man to deter the potentially guilty. And that’s morally unjust. So to punish the guilty for reasons of deterrence I also find problematic.

Matt Teichman:
Yeah. I’m definitely sympathetic to that. So the question I still have about economic growth is—is the fact that economic growth is something that we should aim for contingent on the population always growing as well? Is that why economic growth needs to happen—because we’re just getting bigger and bigger as a population?

Tyler Cowen:
I think a larger human population, all other things equal, is clearly better than a smaller human population. But say you had a nation with a more or less constant population. And many countries in the world are like that. England and France are slightly above replacement level, but they’re pretty close to constant.

Matt Teichman:
It’s hard to calibrate it exactly, right?

Tyler Cowen:
Yes. But if they grow, say, 3% a year, rather than 2% a year, over the course of 100 years, you’ll have a very different society. And I think the argument holds for constant or even shrinking population cases, as well. It’s harder to grow with a shrinking population. So that’s one reason to favor a growing population. Paying the bills of your government becomes harder if your total fertility rate is 1.2. In South Korea, I think it’s even below 1. That’s arguably their largest problem.

Matt Teichman:
But wouldn’t a smaller population incur fewer costs for the government? Or does it not work like that?

Tyler Cowen:
Well, usually, the way you get to a smaller population is to have a lower birth rate and a lot of aging. So if you think of old people, on net, as receiving transfers—if you have fewer young people, you’re going to have a high level of taxes, your most talented individuals wanting to leave, a lot of money spent maintaining people, relatively less spent on innovating. And if you could have the same societies with higher birth rates, I think they would do better by virtually everyone, not just the new people who get born. You can get into all sorts of Parfitian dilemmas, but just society as a whole will be more dynamic, more positive, have more space to explore new ideas. Doing something like, say, sending people to the moon is unlikely to happen in a society of declining population.

Matt Teichman:
Right. And maybe, eventually, it’ll even be necessary if the population gets big enough. Or maybe into the sea—one of those places.

Tyler Cowen:
A lot of land is still pretty empty, though—rural Nevada? It’s not all radioactive.

Matt Teichman:
We’ll start with Nevada, and then proceed from there. So another interesting notion that you have in your book you call “wealth plus,” which is—I think of it as more of—it’s similar to measuring the gross domestic product of a country, but then you’re also looking at other stuff, too. So it’s a little more qualitative. So what exactly is the difference between wealth and what you call “wealth plus”?

Tyler Cowen:
Well, when I argue for maximizing the rate of sustainable economic growth, I don’t exactly mean GDP. GDP picks up some critical elements of economic growth, but it’s missing a number of important things. So the value of the environment is often misrepresented in GDP as we use it. The value of household production, as economists call it—work that people do at home that’s not compensated in the marketplace—is missing. The value of leisure time, in some regards—just sitting around, talking philosophy—that’s missing from GDP, the way we measure it.

Matt Teichman:
And indeed, I feel people often act like those are in conflict—like sitting around, enjoying yourself is hurting the economy. What we should all be doing is busting our butts 24 hours a day or something.

Tyler Cowen:
But leisure has value, too. So if you properly count leisure into wealth, say, some Western European nations will be somewhat wealthier than they otherwise appear. I would say that’s a justified correction. Some Asian economies will be somewhat less wealthy than they otherwise appear. Also, the environment is important. And for the most part, we know how to make these corrections.
It’s an interesting question why policymakers don’t do it. But simply, when I call for maximizing wealth, or what I call “wealth plus,” I mean wealth properly understood—not just the number you see in the newspaper as the rate of GDP growth, because that can be misleading.

Matt Teichman:
And is this a quantitative thing? Do you have a formula to calculate “wealth plus,” or is it just a general intuition that these things you mentioned—leisure time, a good environment—are other things we also need to aim for, besides just maximizing GDP?

Tyler Cowen:
Economists already have methods of calculating the values of these other goods. The environment is trickier, just because uncertainty is so high. So if you think, well, global warming is a looming threat to our health—which I would agree with—you can think that problem is quite serious, but it’s still hard to put a number on. That’s an issue everyone has. It’s not unique to my framework. Most of “wealth plus” we have ways of valuing that are already well established and, more or less, non-controversial.

Matt Teichman:
I really like this idea. It’s almost like—to me, it sounds like a little more of a humane notion of wealth than at least some of the notions of wealth that have been handed down to me from the culture. The idea that it’s important to be productive, it’s important to do things, contribute to society—but it’s also important to live a good life and not burn out. Something like that.

Tyler Cowen:
Sure. But there’s still commensurability. So this gets back to—the core feature of the book is, how do we overcome aggregation problems? By putting these other goods into the framework of wealth, there is commensurability between wealth and, say, sitting around, talking about philosophy. And again, over the short run, it will look very messy. The comparisons won’t appear to make much sense. But again, if you just consider two societies, one growing at 1% for centuries, the other growing at 2% for centuries, the one growing at 2% will be several times wealthier and a much better place. It will be able to support more philosophers, support more leisure time, give people nicer, safer, more creative jobs—many other benefits. So ultimately, I’m a pluralist. But I see wealth/utility as sort of, at the relevant margins, driving a lot of the ways that we actually get to more and better plural values.

Matt Teichman:
OK. Let’s define some of those terms. So one thing you mentioned is aggregation problems. So what, exactly, is an aggregation problem?

Tyler Cowen:
Say there’s a policy, and it helps me, and it harms you. In standard economics, right off the bat, we don’t know upfront whether that policy is a good idea or not. Now, if you’re a Benthamite, you could add up the change in my utility, and consider the change in your utility, and try to set one off against the other, and then make a judgment. I don’t think those comparisons are very defensible, necessarily. But I think with higher rates of economic growth over time, again, you have a case where you’re doing something like—well, you’re comparing the living standards of, say, the United States, or Denmark, to those of Albania. And you can’t, in a rigorous sense, prove Denmark is a better place to live than Albania. But I think there’s overwhelming evidence that that is the case. And you can also look at where people choose to migrate. Very few Danes are banging down the doors trying to get to Albania for anything other than their vacations. And I think if you look at human health, life expectancy, again, value of jobs, ability to support higher aesthetic goods—just many, many features of life—they tend to be better in much wealthier societies than much poorer societies. And while that is not a perfect way of getting around aggregation problems, it’s simply, I think, the best one we will ever find. And since we have to make choices, why not go with the best way we will ever find of getting around what economists would call the Arrow impossibility theorem? Philosophers have somewhat different frameworks, but it’s the same basic idea: if a bunch of people are better off, and others worse off, how do you decide what to do?

Matt Teichman:
Right. So it seems like your preferred approach to what might appear to be a trade-off between the interests of me and the interests of you—the interests of Person A or the interests of Person B—is to ascend to the population level, and just look at what’s beneficial for the population. If you do that, you don’t have to arbitrarily decide to prioritize Person A over Person B.

Tyler Cowen:
That’s right. And look for long-run settings where there’s not that much of a trade-off at all, which is like the Denmark versus Albania case.

Matt Teichman:
Yeah.

Tyler Cowen:
I’m not saying no one in Albania is happier than a bunch of people in Denmark. But if you can’t make that judgment, on some level, I don’t think the person is taking actual life seriously.

Matt Teichman:
You mention Ayn Rand a couple of times in your book, both positively and negatively. One thing about this that reminds me a little bit of Ayn Rand is sort of the idea that maybe conflicts of interest between people aren’t as prevalent as we think they are, or something like that.

Tyler Cowen:
That’s right. And she argued that. I mean, as a formal philosopher, she’s not very good. But some of her practical observations—that, say, wealth over time carries a lot of plural values—I think are on the mark.

Matt Teichman:
So in Episode 35 of Elucidations, we talked to Martha Nussbaum about the capabilities approach to development, which she’s done collaboratively with Amartya Sen, as an alternative to GDP as a measure of how good the quality of life in a country is, or something like that. Is your idea of “wealth plus” similar to the capabilities approach? Or do you take there to be differences?

Tyler Cowen:
I would flip it a bit. I would say the capabilities approach is closer to some modified notion of GDP than it wants to let on.

Matt Teichman:
Oh, OK.

Tyler Cowen:
Minus the originality that is claimed. So if you’re just comparing levels, you can say, well, we’ll look at capabilities. People in Kerala, a part of India—their social indicators are pretty high. Their capabilities are better than their wealth alone might make it look. That’s fine. But when you get into the issue of choices at the margin and commensurability—how do you trade capabilities off against each other?—I think you have to end up converting them into something like wealth or modified GDP to have commensurability. And people who push the capabilities approach—it’s always levels, levels, levels, with a kind of moralizing tacked on. But if you’re just hard-nosed, and focus on trade-offs at the margin, commensurability, it’s not really that different. I think it’s been overmarketed, that idea. While I do sympathize with it, it’s not that different.

Matt Teichman:
Yeah. So maybe “wealth plus” is “capabilities” turned up to 11 or something—more capabilities than “capabilities.”

Tyler Cowen:
“Capabilities with commensurability,” you could say.

Matt Teichman:
So another word you mentioned was “pluralism.” And I think that means different things in different areas of philosophy. So what do you mean by “pluralism” in the context of moral philosophy here?

Tyler Cowen:
That there is a multiplicity of goods. Maybe happiness or utility would count—both preference satisfaction and felicity in the Benthamite sense—but aesthetics may have some kind of independent value above and beyond their contribution to happiness. Some notion of justice is relevant. Everyone I’ve actually met, no matter how they describe their supposed philosophic position—“I’m a utilitarian,” “I’m a deontologist”—they all turn out to be pluralists. So when you ask them, “How do you make different notions of utility commensurable with each other?” they’ll sneak in some implicit pluralism. You know, Socrates versus the pig—there are other value judgments in there. You ask Kantians, “How do you make trade-offs at the margin?” which is, to a deontologist, an embarrassing question. Like, OK, rights violations are absolutely wrong. But does that mean you spend 100% of GDP on the police force? “Well, no, we don’t do that.” But then, they’re back to needing a notion of commensurability, and it collapses a bit into pluralism. So I’m just upfront about a framework that I think virtually everyone shares. I don’t pretend to know the true content of the actual, fully-realized pluralist bundle. But it just seems to me ethics is complex, that differences of perspective have still persisted across intelligent, well-meaning people for literally millennia. I think it has to mean there is this multiplicity of goods, and we should care about many of them.

Matt Teichman:
Yeah. So in other words, there isn’t just one good that’s the best possible good, and which we should expend absolutely all of our resources aiming only at that. It isn’t like—well, making bagels is a great thing. Everybody agrees on that. But it isn’t like making bagels is the one thing we should drop everything and prioritize above everything else. There is no one thing like that.

Tyler Cowen:
Right. But you do want to look for: what are the findable cases where the bunch of values we care about more or less co-move?

Matt Teichman:
Right. So then, pluralism says, I’m not exactly sure what goes on that list of ultimate goods, but there’s going to be more than one thing.

Tyler Cowen:
And then, we look for our best judgment about co-movement of the plural values. And in those cases, we can render a kind of judgment. Often, there’s not much co-movement, and in those cases, we should be fairly agnostic as to what’s right or wrong.
Matt Teichman:
So I think it’s fair to say that one of the main moves you make in your book is to say we should care about future people, and the lives of future people, just as much as we care about the lives of presently living people, and that we have an instinctive tendency to care more about the here and now than the future. Is that right? And so, why should we care just as much about future people as current people?

Tyler Cowen:
Biological beings are impatient with their own lives. So we’re programmed to have these intuitions that the here and now matters more. But when future joys, pains, aesthetic values, or justices arrive, they won’t be any less real than those same values today. And the fact that we are impatient, say, to eat now, or get some benefit now, is not a moral justification for counting the more distant future for less, more generally. There are many other arguments you can make. If you just think about Einsteinian physics, with the universe as a frozen, four-dimensional block of spacetime, what is the future depends on the standpoint of the observer. But there’s no moral reason in an Einsteinian framework why you should discount for time any more than you should discount for space. Now, I would discount for the uncertainty of the future. That makes perfect sense. But uncertainty of the future does not operate like length of time, right? It doesn’t expand exponentially, necessarily or generally. So if I’m going to take an act that means 50 years from now, a currently unborn person will suffer severe pain—say I destroy the environment in some way—in the meantime, that person is not sitting around waiting, right? When the person is born and the pain comes, it will be the here and now to that person just as much as the here and now is the here and now to us two.

Matt Teichman:
Right. Yeah, I think this dovetails in interesting ways with Episode 97, where we talked with Meghan Sullivan about what she calls “time biases.” And in that episode, she actually argues that it’s irrational to prefer present benefits over future benefits. But I think she had more in mind future benefits for the same person.

Tyler Cowen:
That’s right.

Matt Teichman:
But yeah, so this is, again, doing a similar thing, but at the population level.

Tyler Cowen:
There are a few embedded questions here. So the argument of my book requires only zero discounting for future and different people. That said, I also believe in zero discounting for oneself, though my arguments do not require that additional and stronger view. Embedded in this also—well, what about the Parfitian distinction? The future you—is it somehow a different “you,” metaphysically? I’m not sure there’s a factual answer to that question, but nonetheless—

Matt Teichman:
Yeah. Nobody knows.

Tyler Cowen:
—your future “you” is not exactly the same as you, in any physical sense or even moral sense. So there’s a way in which the decisions you make that influence your future self involve externalities. And that at least pushes us a bit toward a zero discount rate for yourself. But I think it’s much easier to argue for a zero rate across people over time than to argue for a zero rate for a single person.

Matt Teichman:
I see. So a zero rate for a single person would be: I should care exactly as much about what happens to 50-year-old Matt as I care about what happens to 35-year-old Matt.

Tyler Cowen:
But here’s, I think, the ambiguity with zero discounting for a single person. Say you are at least partly a preference-satisfaction utilitarian. Then it’s not just happiness—satisfying a preference matters. And say a person has a preference—well, they just want the New York Knicks to win the NBA title in 2021, and for the Knicks to win the title in 2030 is somehow not good enough. You might say, well, that’s an irrational preference. The Knicks winning in 2030 should be as good as for them to win in 2021.

Matt Teichman:
Well, tell that to the Knicks who don’t win in 2021—

Tyler Cowen:
They don’t ever win. I know. But once you’re a preference utilitarian—it seems there are a lot of our preferences you can’t justify at all. Why should you care about the Knicks? What’s wrong with the Nets? What’s wrong with the Blazers? But we don’t dismiss preference-utilitarian desires simply because the preference is ungrounded. So wanting the Knicks to win sooner rather than later—it might be totally ungrounded, but I’m not sure that’s a general reason to dismiss a preference-based desire once we’re counting preference-based desires at all. And that’s why I think the zero discounting view within a life—it’s not that easy to pin down in a hard and fast way.

Matt Teichman:
Right. Whereas it is comparatively easy to say it’s totally unfair to some child 30 years from now if they’re born with a birth defect because I messed with the water supply.

Tyler Cowen:
That’s right. Yeah. I think Parfit has the example of Future Tuesday Indifference. What if you have funny preferences? You want good things to happen on Friday, bad things to happen on Tuesday—well, is that justifiable? But again, most of our preferences are not justifiable in the sense that would be required.

Matt Teichman:
So you mentioned utilitarianism. Maybe this would be a good opportunity to go into just generally what utilitarianism is. Because I think one interesting thing you do in your book is show how to make utilitarianism more palatable than it usually is to people. But let’s back up first and say a little bit about: what does a utilitarian think about good and bad?

Tyler Cowen:
Well, there are so many variations of utilitarianism.

Matt Teichman:
Yeah, it’s kind of a trick question, I know.
Tyler Cowen: There’s the Benthamite view where you sum up utilities in some way, and the rate of discount can vary. Economists think they’re a kind of utilitarian in their formal theory. But I would say they’re preference-satisfaction utilitarians who are inconsistent, and they think everything’s captured in market demands, in some way. If you’re a pluralist, you think a whole bunch of different kinds of utility matter, and you want to look for settings where they’ll co-move. That, to me, is the most coherent kind of utilitarianism. But I don’t see it out there that frequently. Matt Teichman: OK. So in other words, doing the right thing means getting the greatest number of these many different plural goods for the biggest number of people, or something roughly like that. Tyler Cowen: Right. And I don’t have a very strict view on how happiness utilitarianism and preference-satisfaction utilitarianism should be aggregated. Because they conflict in many cases, as you well know, and I’m not sure we’ll ever solve those. Well, what if you could take a pill that made you indifferent to all the world’s suffering? You might be happier, but it feels to many of us that that’s wrong, and indeed, you might not want to take that pill. How do you solve that conundrum? I don’t feel I have that answer. But again, if you look at cases where enough of the happiness and preference-satisfaction utilitarianisms co-move in a positive way, you can to some extent skirt those dilemmas. But anyway, the way you make utilitarianism more palatable is if you have a zero discount rate, the Bernard Williams conundrum isn’t everyone obliged to run off and be a doctor to very poor people in Africa. Well, your actual obligation is to produce social value. Most but not all people will produce the most social value by working, and being creative, and being loyal to a free society in a wealthy economy. That will, in turn, do a lot to elevate poor individuals around the world. 
We’ve seen phenomenal catchup growth in emerging economies over the last few decades. That’s certainly more effective than everyone running off to poor countries to just be a doctor. To have catchup growth, the wealthy nations do need to be wealthy. But nonetheless, at the margin, some people should run off and do public health work in Africa, South Asia, wherever it may be. So I try to reframe that as a bit of a game-theoretic problem. Not everyone should run off and fight malaria in poor countries, but some people should. Like most game-theoretic problems, there’s not a single correct solution, but you can argue that you should think of it in terms of randomized Nash equilibrium. The people who can do that at lowest cost are the ones who should do it. Those are the people who more or less are the ones who want to do it, or find it rewarding. And the idea that people who find it rewarding to fight malaria in South Asia are the ones who should do it, and most of us shouldn’t—that doesn’t sound crazy, right? It’s not this extreme obligation where you think utilitarianism is so inconsistent with common sense morality. So a zero discount rate, some economics, and a dose of game theory considerably bridge the gap between consequentialism/utilitarianism and common sense morality. And Parfit, in this later two books—he doesn’t have that. Sedgwick, in a way, came closer to that. But I feel that’s a missing insight in the current literature. People are too distracted by the wonderful rhetoric of Bernard Williams. But it’s less of a problem than Williams thought. Matt Teichman: Right. So if you take the view that, given a choice between doing something that will make me very happy and the entire rest of the world a little bit happy—well, if you can sum up the happiness, maybe I’d benefit the rest of the world more. And if that’s the case, I should do that. 
That seems kind of intuitive to a lot of people, because a lot of people want to be generous and help others, and all that kind of stuff.

Tyler Cowen:
Sure.

Matt Teichman:
But then, if you really follow that view through to its logical conclusion, it seems like it demands this very monkish lifestyle of a person. Like, they have to give all of their money, pretty much, to charity, except for the bare minimum of what they need to eat. Or maybe, as you mentioned, another option would be to drop all of your life plans. Really, I want to be a tap dancer, and that’s my calling. But instead, I’m going to go be a doctor, even though I don’t want to be, because that’s going to help more people who are in need.

So this is a real tension, I think, with a lot of people feeling pulled in two directions. On the one hand, I should drop everything, move to Africa, be a doctor. On the other hand, man, that’s pretty emotionally intense. I mean, I kind of just want to chill out and have a good time here—you know, and try to be the best person I can. But that’s a huge sacrifice.

So it seems like the way you’d want to get around that is by saying, well, if it was this lifeboat scenario that we talked about earlier, this end-of-times scenario, maybe that’s what it would make sense to do. But given that it seems like there’s going to be a future human race for an indefinite amount of time, really, looking after that future involves doing something a little bit closer to what we’re already doing. Of course, there are maybe little adjustments that we can make here and there. But it does not require everyone dropping everything and moving to Africa.

Tyler Cowen:
Sure. And I want to say—you’re a programmer, you move to Seattle, you work for Microsoft, and you earn $350K a year. And you’re just “selfish,” but you buy a lot of goods from China, South Korea, eventually other countries in the world. You’re driving a phenomenal amount of economic growth.
The biggest growth miracle the world has seen has come from export orientation—poorer countries exporting goods to wealthier countries to mostly selfish consumers. A lot of foreign aid is not actually very effective. I do believe we should have foreign aid, but the model of just being selfish and spending money on foreign goods very often drives more benevolence than anything else you can do.
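[A quick illustrative aside on the “zero discount rate” that keeps coming up in this stretch of the conversation: the discount rate sets how much a benefit arriving some years from now counts today. The numbers in this sketch are invented, but they show why the parameter matters so much for long-run arguments. At even a modest positive rate, far-future benefits shrink toward zero; at a zero rate, every generation counts equally.]

```python
def present_value(benefit, rate, years):
    """Value today of a benefit arriving `years` from now,
    discounted at an annual rate `rate`."""
    return benefit / (1 + rate) ** years

# A benefit worth 100 to people living 200 years from now:
print(present_value(100, 0.05, 200))  # well under 0.01 at a 5% discount rate
print(present_value(100, 0.0, 200))   # exactly 100.0 at a zero discount rate
```

[So whether distant generations figure into today’s decisions at all hinges almost entirely on that one parameter.]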

Matt Teichman:
Yeah, I feel like people often don’t talk about that that much. When it comes to China, for example, a few generations ago, there was mass starvation. And look at everybody now. The descendants of those people are now not starving.

Tyler Cowen:
And they are, in turn, elevating poorer countries who are their neighbors, like Cambodia—or, you know, “one belt, one road.” So there’s something cumulative about it that foreign aid often does not have.

Matt Teichman:
Yeah.

Tyler Cowen:
The foreign aid I’m most optimistic about is foreign aid that tends to have cumulative, ongoing benefits. So particular public health problems that lead to, say, malnutrition and lower IQs, which can set nations back for very long periods of time—it seems to me foreign aid in those areas can be pretty effective. But a lot of foreign aid, again, is not. Charities have high overhead. A lot of the money is wasted.

Matt Teichman:
You know, there’s a philosopher—I don’t think I want to out him on the podcast, so let’s just call him “Mitch.” When Mitch was a grad student, he donated his entire graduate student stipend to charity, pretty much. And he just squatted in the graduate student computer lab. And he did this because he was a committed hedonic utilitarian. He wanted to give the maximum number of people the maximum amount of pleasure. And it seemed to him that the best way to do that was just to give away his entire income.

And so it seems like what you were saying earlier is that the people that are cut out to live that way, in terms of emotional temperament or something—those are the people that should do it. But if we forget about just the here and now and look at the long run, things will be much better for everybody if we divide up the self-sacrificing labor in that way.

Tyler Cowen:
I do think at the margin, we should definitely be more charitable. And I think it’s definitely important that people live their philosophy, in some way. So for my book Stubborn Attachments, I donated all of the royalties to one poor family in rural Ethiopia. And I’ve been sending them money, rather than receiving that money myself. And that’s also part of the principles of the book—that economic growth, say, having a successful book and selling some copies of it—that generates revenue. That can go to help poorer people. And yes, we should buy things from poor countries, but at the margin, we should give more. And I’m trying to do more myself.

So I think someone like Peter Singer is helpful and useful, because most of his actual impact is at the margin. People are not en masse dropping their lives and running to poor countries to do whatever. I’m not even sure if they would be very useful if they did, for the most part.

Matt Teichman:
Yeah.

Tyler Cowen:
But some people can be.

Matt Teichman:
Yes. Yeah, it’s funny. In a way, I feel like you and Peter Singer kind of end up in a similar place. His version is, well, we have this really difficult-to-meet ideal. And he himself admits, I can’t actually live the way I think we should all live. But that doesn’t mean we can’t have it as the ideal, and inch a little bit closer and closer to it. But I think it seems like the difference with your approach is, well, actually, maybe we’re a lot closer to the ideal than we think, currently.

Tyler Cowen:
Mine’s a bit like Peter Singer without so much guilt. Like, oh, you’re only doing something at the margin. Well, it depends what margin you’re at, but maybe that’s fine. There is a stipulation in the book that people are, in some way, obliged to be very productive. It’s a broad notion of productivity, but saving, and investing, and working, and trying to be creative—that’s a strong obligation in my framework, in a way that many people do actually find somewhat oppressive. But I’m fine with that. I’m completely willing to bite that bullet.

So I think Americans as a whole—they spend too much. They don’t save enough. Investment would drive more growth, for instance. They’re not cosmopolitan enough at the relevant margins. A lot of people squander their talents. They don’t work hard enough. So there are strong obligations in my framework, but they’re more, you might say, “Puritan,” in a way.

Matt Teichman:
That is sort of a question that I had when reading the book. As we mentioned earlier, you can get around some of these what we called “aggregation problems” by ascending to the population level. If we just look at what’s good for the population in general, we can maybe not worry as much about how, exactly, we are going to resolve a trade-off between what one person wants and what another person wants when there’s a conflict.

But that sort of makes me wonder how actionable the view is, or how operationalizable the view is. Does it mean that I can’t, as an individual person, decide to follow your program, but that rather, the American people have to collectively decide to follow your program? Or is there a way that your moral philosophy can impact my personal decisions? I guess maybe investment is one thing you mentioned.

Tyler Cowen:
Sure. I think there is implicit advice for individuals, and it boils down to figuring out what you can do to best maximize your contribution to “wealth plus,” or this modified notion of GDP growth. There’s not concrete advice in the sense of, like, “you should be a carpenter,” “you should be a philosopher”—that will depend on the facts of the case. But the notion that your real obligation at the margin is to work, save, and invest more, be more creative if you can, create jobs for other people if you’re in a position to do so—and that replaces the Peter Singer-like imperative—that’s absolutely there. And I want people to take that more seriously. And I think it’s quite close to common sense morality—sort of what your proverbial grandmother/grandfather would tell you to do. You know, work hard, get ahead, be loyal to your friends, have and raise a family—I’m not saying everyone can or should do that. There’s people who don’t want a family. Maybe they don’t like kids. Or maybe, they’d prefer to work really hard and not have kids. That all fits into the framework. But nonetheless, your obligation to try and figure out your maximum contribution is there, and it’s real.

Matt Teichman:
And when you mention investing, do you mean, like, investing in the stock market? Or is it a broader idea, like, I’m going to invest time in cultivating this friendship, or—

Tyler Cowen:
Broader idea. If you invest in the stock market, you’re just buying secondary claims from someone else. I mean net investment in real things. So again, not everyone is in this position. But if you can have a startup, or build a factory, or start a new business, that’s net real investment, and that counts. At least if it’s a good idea.

But the stock market in that sense is overrated, right? Secondary claims. You might think, well, if I buy stocks, it pushes up the prices, and that encourages companies to issue additional shares and invest more, because the price is higher. I mean, maybe, but that feels somewhat tenuous to me.

Matt Teichman:
Yeah. It does seem like a complicated thing. Like, at some level, we need a lot of people active in the financial industry to keep the economy going. But there would be something weird if literally everyone did nothing but that.

Tyler Cowen:
Sure.

Matt Teichman:
What’s an example of a policy choice that makes the mistake of not caring enough about people in the future? And then, what would be an example of correcting that policy mistake by caring sufficiently about people in the future?

Tyler Cowen:
Our government spends far too little money subsidizing basic science and research and development. And that has gone down over time, when it should be going up. We’re a wealthier society. We can afford to do more. Hardly anyone views that as an imperative. Politicians don’t campaign on it. Voters don’t seem to care about it. I think it can properly be viewed as a non-partisan issue.

More controversially, I think we, in most regards, regulate business far too much. We should liberate business from many of the shackles imposed on it. In cases where businesses create genuine negative externalities—again, carbon emissions are a very simple case—we should be much tougher with regulations. So be tougher when it matters, but in most cases, it’s too hard to run a business because the attention of, say, the CEO is distracted by legal and regulatory matters, rather than growing the company.

Matt Teichman:
Yeah. So in environmental areas, you favor tighter regulations. But maybe in some other areas, you favor looser regulations.

Tyler Cowen:
In most areas, looser regulations. Finance, I would say, we should have fewer but tougher regulations. Environment, carbon—much tougher regulations. But I wouldn’t defend all environmental regulations, per se. A lot of it may be pointless. But carbon seems to be clearly an issue that matters and will cut into sustainable growth—especially the sustainable part of the equation.

Matt Teichman:
And what would be an example of a regulation that unfairly shackles the ability of a company to grow and make a contribution to society?

Tyler Cowen:
Well, say you want to build more housing in San Francisco or Oakland. That’s very, very hard to do. Existing homeowners, for the most part, keep you out. This is called NIMBY—“not in my backyard.” So for prospective entrepreneurs to start a new company in the Bay Area, you’re saying to your potential employees, well, if you move here, your rent every month will be whatever.

Matt Teichman:
Ridiculous.

Tyler Cowen:
Insanely high. And it’s much harder to start those businesses. So that, to me, is a very simple example.

Matt Teichman:
Yeah. That’s a good example. I certainly feel a strong desire not to move there precisely for that reason.

Tyler Cowen:
Yeah. Chicago’s a pretty cheap city by American standards.

Matt Teichman:
I think this is a sort of typical formal philosopher’s worry, but one thing I thought of when thinking about your position is—let’s say, for the sake of argument, that there’s no mass extinction event on the horizon, and humans are going to be around for quite a long time. Well, it seems to follow from that that there’s just going to be way more future people than present people. And since there are so many more future people than present people, people who currently exist, the priorities of those future people, just numerically, seem like they could swamp the priorities of people now.

And then, I wonder if we get into a situation where we’re forever deferring a benefit. So because there’s so many more future people, I’m not really doing what I’m doing for anybody in the here and now. I’m doing what I’m doing for them, because there’s so many more of them. But then, in the future, there are going to be even more “future-future” people. And in the future, they’re not going to be doing anything for anybody in the here and now. They’re going to be doing stuff for future people. And I wonder if this is going to be this infinite regress, where nobody’s ever really doing anything to actually benefit from it. They’re doing it to hypothetically benefit people in the future forever, or something.

Tyler Cowen:
I think empirically, there’s a fair amount of concordance between what’s good for us and what will be good for the generations to follow us. So if you think, “What will actually help our children and grandchildren?”, well, we can bequeath them some amount of wealth, of course, and producing wealth now is good for us, for the most part. More generally, good, healthy institutions—a well-functioning democracy, checks and balances, or a better regulated capitalistic system, as opposed to a dysfunctional one—that’s what we actually want to pass down to them. And those same things will be good for us now.

I’m not saying there’s no trade-off. And I think with some environmental issues, you see the trade-off pretty starkly. But if it’s mostly concordance, and then the theory says, well, at the margin, you should worry more about some number of environmental issues, it seems to me that’s the correct intuition. It being the correct intuition doesn’t prove it’s correct. But it’s not violating intuitions. There’s not some hand-me-down game where no one ever gets to have fun, right?

If we were living a Spartan—and I mean the word “Spartan” literally, as people lived in Sparta—a Spartan existence today, how good would that be for our grandkids? Well, they’re not getting that much out of it. They’re much better off being descendants of people in the US, Canada, Denmark—other more or less well-functioning countries.

Matt Teichman:
Maybe this is the same kind of answer, but I guess you could say on the pay-it-forward principle, it’s true I’m not working for any benefits to the people in the here and now, and that does seem a little bit intuitively odd. But I’m also receiving the benefits from the work that earlier people did.

Tyler Cowen:
Yeah. There’s a lot of philosophical conundrums. I don’t think you can answer them without consulting the empirical in some broad way. And I don’t mean very particular facts. But just the notion that there are gains from trade across generations, that good institutions benefit many people, and they’re durable—those are very broad empirical facts. And the applicability of my arguments in Stubborn Attachments—they do rely on that, to some extent. I don’t view that as a weakness. But the sort of pure thought experiment—I think you’re just left not knowing what the right answer is. But if you then wake up and see you’re in a world where good institutions matter, well, you have a way forward, and you should take it.

Matt Teichman:
So another question I think somebody might have about this position is, how are we supposed to know what’s going to lead to long-term economic growth? Like, we can’t even predict if it’s going to rain in two weeks, really. So how could I possibly really know that much about whether the creation of blabbity-blah institute, or this company, or whatever particular administrative decision—how can I ever really know for certain about the contribution that’s going to make to economic growth?

Tyler Cowen:
Well, I don’t think you know for certain. There is a body of empirical literature on what boosts growth, and it’s often easier to learn what harms growth. Venezuela right now, right? It’s obviously not going very well. East versus West Germany.

But that said, while you ought to be fairly uncertain, you’re not completely at sea. It’s wrong to think we know nothing about what boosts economic growth in terms of expected value. If you’re starting a new company with a reasonable chance of success, on average, that will contribute to economic growth. But you should not be so sure of your own particular political views. You shouldn’t be so sure you’re choosing exactly the right course of action. You should do what you think is best, but with a kind of floating agnosticism. Like, gee, the chance that this is best—you know, maybe it’s only 5%. But my other selections—they were, like, at 2% or 3%.

So pick the better one. Do it with a kind of modesty and openness to change—and especially on political matters, you don’t see that very often. You have people being really quite sure what is good for either their preferred ends, or what is good for growth, and it’s not highly certain. For me, that’s a feature of the theory, not a bug.
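[The “5% versus 2% or 3%” point above can be made concrete with a toy calculation. All of the options, probabilities, and payoffs below are invented for illustration: the idea is just that even when your best option is unlikely to be right in absolute terms, it can still be the clear choice in expected value.]

```python
# Invented numbers: the probability each course of action is the best one,
# and the social payoff if it turns out to be.
options = {
    "start a company":    (0.05, 1000),
    "policy advocacy":    (0.03, 1000),
    "do nothing special": (0.02, 1000),
}

def expected_value(option):
    """Probability-weighted payoff of an option."""
    probability, payoff = options[option]
    return probability * payoff

# With equal payoffs, this reduces to comparing the probabilities.
best = max(options, key=expected_value)
print(best)  # "start a company" wins, even though 5% is hardly certainty
```

[Picking the 5% option over the 2–3% options is fully rational here, but nothing about the comparison licenses confidence that the 5% option will actually work out.]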

Matt Teichman:
I’m inclined to agree. It’s been a big hobby horse of mine for a while that just emotionally, culturally, we need to get comfortable with not being “on mission” all the time. Just work with the information you have, make your best guess—you know, prudent, cautious, trial and error, try to learn from your mistakes, et cetera, et cetera. But it seems like in the political culture, there’s often this mindset where we shouldn’t do anything unless it’s 100% guaranteed to do exactly what we think it’s going to do.

Tyler Cowen:
And you can bring classic philosophic thought experiments to bear on this. So even if you feel pretty sure, oh, my company will succeed, or this is a good policy, as you know, different choices you make—it remixes the whole future of humanity by changing the timing of individual conceptions. A whole new set of babies get born because you stopped at the red light rather than plowing through it. Maybe your actions lead to a future Hitler, rather than a future leader of great benevolence. And of course you can’t know that. So you have to be fairly uncertain, though at the end of the day, you still need to do what rational argument suggests would be best.

Matt Teichman:
Tyler Cowen, thanks so much for joining us. Hope to have you back sometime.

Tyler Cowen:
My pleasure. Thank you, Matt.

Elucidations isn't set up for blog comments currently, but if you have any thoughts or questions, please feel free to reach out on Twitter!