'The Interview': Peter Singer Wants to Shatter Your Moral Complacency AI transcript and summary - episode of podcast The Daily
Episode: 'The Interview': Peter Singer Wants to Shatter Your Moral Complacency
Author: The New York Times
Duration: 00:42:29
Episode Shownotes
The controversial philosopher discusses societal taboos, Thanksgiving turkeys and whether anyone is doing enough to make the world a better place. Unlock full access to New York Times podcasts and explore everything
from politics to pop culture. Subscribe today at nytimes.com/podcasts or
on Apple Podcasts and Spotify.
Full Transcript
00:00:04 Speaker_03
From the New York Times, this is The Interview. I'm David Marchese. Maybe it sounds corny, but in my own little way, I really do try to make the world a better place. I think about the ethics of what I eat. I donate to charity.
00:00:20 Speaker_03
I give time and energy to helping those less fortunate in my community. And according to Peter Singer, those efforts pretty much add up to bupkis. Singer is arguably the world's most influential living philosopher.
00:00:33 Speaker_03
His work rose out of utilitarianism, the view that a good action is one that, within reason, maximizes the well-being of the greatest number of lives possible.
00:00:43 Speaker_03
He spent decades trying to get people to take a more critical look at their own ethics and what well-meaning, comfortable people can actually do to make the world a better place.
00:00:52 Speaker_03
His landmark 1975 book, Animal Liberation, helped popularize vegan and vegetarian eating habits. His new book, Consider the Turkey, builds on those ideas as a polemic against a Thanksgiving meal.
00:01:05 Speaker_03
And his writing on what the wealthy owe the poor, which is a lot more than they're giving,
00:01:09 Speaker_03
was an important building block for the data-driven philanthropic movement known as effective altruism, which has gotten a lot of attention recently because of some of its high-profile adherents in Silicon Valley, including the disgraced cryptocurrency entrepreneur Sam Bankman-Fried.
00:01:23 Speaker_03
But Singer, who is 78, is as controversial as he is influential. Some of his ideas, like that parents should be allowed to pursue euthanasia for severely disabled infants, have led people to call him dangerous and worse.
00:01:37 Speaker_03
Some of his ideas make me personally uneasy, too. But my discomfort and the way his work forces me to reconsider my own ethical intuitions and assumptions is precisely why I wanted to talk with him. Here's my conversation with Peter Singer. Hi, Peter.
00:01:57 Speaker_03
I'm David. Nice to meet you.
00:01:59 Speaker_02
Very nice to meet you, David.
00:02:01 Speaker_03
You might be wondering why the journalist interviewing you today is sitting in a clothing closet.
00:02:07 Speaker_03
And just for your own context, I normally record in a normal room, but my neighbor has decided today was the day to do some construction just outside my window.
00:02:18 Speaker_01
Lovely.
00:02:19 Speaker_03
Is there an ethical way I can get revenge?
00:02:21 Speaker_01
No. You should just let it go.
00:02:23 Speaker_03
Well, that's not what I wanted you to say.
00:02:27 Speaker_01
Just look at the Middle East, you can see where revenge gets you.
00:02:30 Speaker_03
There you go, yeah. I promise I don't mean this question at all in a facetious way. The question is about why you wrote this book, Consider the Turkey. So it's a small book. There aren't really new arguments in it.
00:02:44 Speaker_03
How do you decide whether writing that book was the best use of your time? Could that time have been better spent doing something else? Is that something that you think about?
00:02:55 Speaker_02
This is an important issue. We're talking about over 200 million turkeys who are reared in a way that comes close to being described as torture. That is, they're mutilated in various ways. They're bred
00:03:10 Speaker_02
to live in such a way that it hurts them when they're getting near full weight. It hurts them to stand up because their immature leg bones don't bear the immense weight that they've been bred to put on in a very short time.
00:03:23 Speaker_02
They suffer at slaughter, and then, as I describe in the book, if they get bird flu, the entire shed is killed by heat stroke quite commonly. It's not the only method used in the United States, but it's used on millions of birds.
00:03:38 Speaker_02
The ventilation is stopped in the shed, heaters are brought in, and they are deliberately heated to death over a period of hours.
00:03:45 Speaker_02
I think that's something that Americans don't know, and it's really important that they should know, because it should stop. So my concern is to reduce unnecessary, avoidable suffering where I can.
00:03:56 Speaker_02
That's one of my major goals throughout my career in philosophy and as an activist. And I think that that's definitely worth the time it took to write this book.
00:04:08 Speaker_03
But in reading the book, it feels pretty hard to deny the unacceptable level of suffering that goes into our Thanksgiving turkey dinners. But millions of people are still going to have them.
00:04:19 Speaker_03
So do you feel at all like you're banging your head against a wall with this stuff?
00:04:24 Speaker_02
No, I don't really feel like I'm banging my head against a wall. I feel like I'm banging my head against something which is pretty hard but not completely unyielding.
00:04:36 Speaker_02
In some parts of the world, we've made progress in the laws and regulations concerning animals.
00:04:42 Speaker_02
The entire European Union has legislation that provides better animal welfare conditions for animals in industrial agriculture than United States laws do, with the exception of a small number of states, California being the most notable.
00:04:59 Speaker_02
that have had citizen-initiated referenda to produce better conditions.
00:05:04 Speaker_02
So on the whole, you know, yes, things are still very bad, but I think it's possible to make progress, and I think we have to keep bringing these facts in front of the public and getting them to think about what they're eating.
00:05:18 Speaker_02
And the Thanksgiving meal, as it's a family festive occasion, seems like a really good place to start.
00:05:25 Speaker_03
You know, there's a cliched journalistic trope of how to talk to your ideologically opposed relative at Thanksgiving. Have you learned anything about
00:05:38 Speaker_03
how we can talk to people who disagree with our ideas in a way that doesn't just make them sort of roll their eyes and ignore you?
00:05:47 Speaker_03
Like, if someone reads your book and then thinks, well, now I have something to say about whether or not we should be eating this turkey at Thanksgiving, what guidance can you give them about how to have that conversation?
00:05:57 Speaker_01
Well, that, of course, will depend on who your relatives are, what sort of relationship you have with them.
00:06:03 Speaker_02
So there's all different sorts of possibilities. But I do think that you can make progress with many people, be civil and reasonable, say, have a look at some of these facts, and say, do you really want to support this?
00:06:17 Speaker_02
Do you really want to be complicit in these practices?
00:06:21 Speaker_02
If somebody doesn't accept the argument and just insists that, you know, this is irrelevant or they're not going to listen, at some point you might say, well, if you want me at your Thanksgiving, I don't want to be there with a big bird sitting on the table who I know has suffered in the ways that the standard American Thanksgiving turkey has suffered.
00:06:42 Speaker_03
At some point you would suggest drawing that hard line for someone.
00:06:46 Speaker_02
Yes, at some point you want to say, and I mean, isn't that true of important moral issues? And I think this is that you just say, look, I'm sorry, I can't go along with that.
00:06:55 Speaker_03
This is just a question I have about what it's like to be you. There aren't a lot of well-known philosophers around. Do you find that sort of in your life people come to you looking for ethical advice?
00:07:11 Speaker_02
Oh, they certainly do. They come to me online a lot nowadays. And in fact, in order to provide that and save my time for more effective things, I have set up Peter Singer AI.
00:07:24 Speaker_02
And so on my website, you can connect to a chatbot who has been trained on all of my works. And actually, you know, it does remarkably well in terms of channeling my views to people with ethical queries. I didn't, you know, set it up.
00:07:38 Speaker_02
I had support from some friends doing this who knew more about the technical side of it. But I have to say they've done a remarkably good job.
00:07:46 Speaker_03
How do you feel about the fact that an AI has been able to adequately replicate your ethical responses to questions?
00:07:53 Speaker_01
Oh, I'm really happy about it. I mean, partly just for the time-saving reason that I mentioned, but also in a sense it means that I can be immortal, you know.
00:08:01 Speaker_01
I mean, this me is not going to be around for, well, I hope another decade maybe, but not too much more than that probably. Whereas the Peter Singer AI could be around indefinitely. So that's great. It's a kind of immortality.
00:08:16 Speaker_03
I'm sure this is arguable, but I think of you as being best known for your work on animals and ethics, which, you know, I think flow out of utilitarian principles, which, you know, basically the belief that the right action is the one that produces the least suffering or the most good.
00:08:33 Speaker_03
But you're also seen as one of the godfathers of effective altruism. Can you explain what effective altruism is and how it's different or builds on utilitarianism?
00:08:47 Speaker_02
Sure. So effective altruism is the view that firstly, we ought to try to make the world a better place. That ought to be one of the goals of our life.
00:08:58 Speaker_02
Doesn't mean that we all have to become saints and think about that in everything we do, but it should be an important goal for people to think, what can I do to make the world better, which might mean to reduce suffering, might mean to reduce premature death, and to think about that
00:09:18 Speaker_02
in a global way, not just for me and my family and those close to me, but to think about it for people anywhere in the world and indeed for beings capable of suffering who are not of our species.
00:09:32 Speaker_02
So, effective altruism then developed into a kind of a social movement to encourage people to do that and to think in that way.
00:09:40 Speaker_02
And effective altruists have done a lot of research to try to find which are the most effective charities in different areas. So it's become an important social movement. What is the connection with utilitarianism?
00:09:55 Speaker_02
I think if you are a utilitarian, you ought to be an effective altruist. Because if you're a utilitarian, you ought to want to reduce suffering and increase happiness. And given that we all have limited resources to do that,
00:10:11 Speaker_02
Even Bill Gates has limits and most of us have much tighter limits on what we can do to make the world a better place.
00:10:19 Speaker_02
Surely we should be using those resources as effectively as possible to do as much good as we can with the money we can donate or the time we can volunteer or whatever it is.
00:10:28 Speaker_02
We want to make sure that that isn't spent on something that does less good than some other alternative open to us.
00:10:37 Speaker_03
And I think the rationality aspect of effective altruism is one of the reasons why it's been so broadly attractive, but also why it's been particularly attractive among entrepreneurs and in the tech world.
00:10:53 Speaker_03
I think these are people who are sort of interested in the idea of rationality and quantification and return on investment. But of course we know that some pretty prominent advocates have been highly irrational.
00:11:07 Speaker_03
You know, the most egregious example would be a Sam Bankman-Fried, or, you know, you could even look at something like, was it that the Effective Ventures Foundation paid 15 million pounds for an English abbey?
00:11:19 Speaker_03
Like, surely that money could have been used in ways that caused more well-being. And my question for you is, what advice do you have for effective altruists to guard against self-interested self-rationalization?
00:11:34 Speaker_02
Yeah, I think that is a serious problem, and I think that may have been the problem with Sam Bankman-Fried. It's not totally clear. Perhaps it wasn't exactly self-rationalization, but it was certainly
00:11:51 Speaker_02
maybe a sense that I don't have to follow the ordinary rules that other people do because, you know, I'm such a whiz kid. It's possible that there was some of that sort of thinking.
00:12:00 Speaker_02
And I certainly think anybody who is very successful needs to guard against that belief that somehow they are above the rules. But I don't see that generally as the case in the effective altruism movement and the people who I talk to.
00:12:19 Speaker_02
I think most of them are genuine and they're not self-deceived. And yes, there may be a couple of conspicuous exceptions or mistakes that have been made.
00:12:32 Speaker_02
So I think you need to take a hard look at that, but I really think that they're the exception, and I don't think that that's a reason for rejecting effective altruism as a positive social force.
00:12:46 Speaker_03
And, you know, an offshoot of effective altruism is long-termism. Basically, the idea that we have as much ethical responsibility to address threats to humanity far off in the future as we do to threats to human lives in the present.
00:13:03 Speaker_03
And I'm just curious, what do you make of long-termism?
00:13:07 Speaker_02
I accept the idea that the badness of suffering is not affected by when it occurs.
00:13:14 Speaker_02
So if I could be certain that something I did now would do more to reduce suffering in 100 or even theoretically 1,000 years than anything I could do to relieve suffering in the present, then sure, I would think that would be the right thing to do.
00:13:33 Speaker_02
But of course, we don't have that certainty about the future. So I think that's a big barrier to making a real priority to think about the future as more important than thinking about the present.
00:13:48 Speaker_02
The other question that needs to be raised is quite a deep philosophical question about the risk of extinction of our species, because that's what a lot of long-termists are focused on. They're saying,
00:14:00 Speaker_02
If our species sort of survives, gets through the next century or two, then it's likely that humans will be around, not just for thousands, but for many millions of years, because by then we'll be able to colonize other planets.
00:14:14 Speaker_02
And you say, yes, but if we become extinct, none of that will happen. So we must give a very high priority to reducing the risk of extinction of our species.
00:14:25 Speaker_02
And that raises the question of, is it as bad that beings do not come into existence and therefore do not have happy lives as it is that an already existing being who could have a happy life is prevented from having a happy life or even has a miserable life?
00:14:44 Speaker_03
And what's the answer?
00:14:46 Speaker_02
Well, as I say, that's a really difficult philosophical question. I think it's still an open question, really. Personally,
00:14:55 Speaker_02
I do think that it would be a tragic loss if our species became extinct, but how do we compare that tragedy with tragedies that might occur now to a billion people or several billion people? And I can't really give a good answer to that.
00:15:14 Speaker_02
So in other words, what I'm saying is it might be reasonable to discount the future of these beings who might not exist at all. I think that's possible.
00:15:25 Speaker_02
I think it could be reasonable to say, no, we should focus on the present, where we can have greater confidence in what we're doing, than focus on the long-term, really distant future.
00:15:37 Speaker_03
I mean, I'm just a ding-dong, but for me, it sort of seems like there are common-sense objections to long-termism. It's like, what would an example be? If I see there's an immediate fire in my yard that I could put out and save
00:15:57 Speaker_03
some people, like, shouldn't I obviously do that rather than say, well, I'm working on a fire-retardant system that could save millions of lives at some undefined point in the future? That's always what the long-termism stuff sounds like.
00:16:13 Speaker_03
It sounds like sci-fi philosophizing. Do you think there's like a common-sense problem it runs into?
00:16:20 Speaker_02
It runs into what appears to be a common sense problem because our intuitions obviously are to help the people right there now, right?
00:16:27 Speaker_02
We've evolved to deal with problems that are right there and now, and our ancestors survived because they dealt with those problems.
00:16:33 Speaker_02
They didn't survive because they had strong intuitions that we ought to act for the distant future, because there was nothing that they could do about the distant future. We now are in a position where we have
00:16:44 Speaker_02
more influence on whether there will be a human future or not. So I'm inclined not really to trust those common sense intuitions.
00:16:54 Speaker_02
My answer would still be, sure, you should put out the fire, not because that's just your common sense intuition, but because you can be highly confident that you can do a lot of good there.
00:17:04 Speaker_02
And anyway, you can put out the fire and go back to your work on the fire retardant tomorrow.
00:17:13 Speaker_03
I think not trusting your common sense intuitions is sort of Peter Singer's whole bag.
00:17:19 Speaker_02
I think that's right. I think a lot of my work, you know, don't trust your common intuitions to think that you ought to help your neighbors in your affluent community rather than distant people elsewhere in the world that you can't relate to.
00:17:32 Speaker_02
That's part of what I talk about. Don't trust your intuitions in thinking that really it's only humans that matter, or that human suffering always is a higher priority than any number of non-human animals suffering. Yeah, I think you're right.
00:17:47 Speaker_02
I'm somewhat skeptical about trusting those moral intuitions.
00:17:51 Speaker_03
Yeah. So you take these subjects or these moral intuitions about things that people really hold closely, like what we eat or how we spend our money, or even the notion that we're good. And you say, well, hold on a second. Are you really?
00:18:08 Speaker_03
Where do you think your impulse to do that comes from?
00:18:13 Speaker_02
Well, it's something that came gradually, I believe, that I started thinking about particular issues where it was obvious that you could reduce suffering, but people had intuitive reasons for not doing so.
00:18:29 Speaker_02
And one of those was actually in the area of biomedical ethics, because I got involved in those questions because I was interested in issues about death and dying. And I have, for a very long time, been a supporter of medical assistance in dying.
00:18:52 Speaker_02
And when I started talking to people about that, especially doctors,
00:18:57 Speaker_02
they would say, look, you know, it's all right for us to allow people who are suffering to die by not treating them, but we can't cross that line that actually assists them in dying, because, and some of them would quote this little couplet that said, thou shalt not kill, but needst not strive, officiously, to keep alive.
00:19:23 Speaker_02
And they would just trot that out as a kind of thing that, yes, that's obviously true. And I would say, well, why? So I mean, I think that example was one where I was critical of intuitions. They were perhaps religiously based intuitions.
00:19:40 Speaker_02
That was one part of it. So the fact that I wasn't religious may have led me to challenge those intuitions.
00:19:46 Speaker_02
But then I started thinking about a whole range of other intuitions that are probably not religious, but may, like the example I gave, be based in
00:19:55 Speaker_02
What is it that helped our ancestors to survive in the circumstances in which they were trying to survive and reproduce, when those circumstances may no longer apply to us?
00:20:07 Speaker_03
I was reading the academic journal that you edit, which is called the Journal of Controversial Ideas.
00:20:13 Speaker_03
The idea, as I understand, behind the journal is to give sort of a rigorous academic treatment and platform to ideas that might be seen as beyond the pale for other outlets.
00:20:24 Speaker_03
And there are, you know, plenty of what seem to me relevant arguments to do with like public health and sort of learning in academia.
00:20:32 Speaker_03
And then there were also, you know, there were like multiple pieces about when blackface should be allowed, or I think the specific term is cross-racial makeup.
00:20:41 Speaker_03
Or there was another piece in there in one of the issues about arguing for zoophilia, probably people, more people know as bestiality. And I thought, well, who's clamoring for deeper arguments in support of either of these things?
00:20:58 Speaker_03
What is the point other than provocation?
00:21:02 Speaker_02
I think both those issues, although they're certainly far less significant than many of the other issues that articles in the journal discuss, I think they both have some significance.
00:21:13 Speaker_02
I mean, the question about blackface, which was the word that was used in the journal, is relevant to drawing lines about what are people gonna get criticized for, and the article takes a nuanced approach to that.
00:21:29 Speaker_02
it acknowledges that there would be cases in which, you know, using blackface would be offensive and, say, inappropriate, but it also refers to other cases in which it's not objectionable.
00:21:41 Speaker_02
And so, if people are going to be sort of outed in some way for doing this, and I know it happened to Justin Trudeau, I think, for having done that a long time ago, then you do need to say, well, what are the cases in which this is not such a bad thing to do, and which of the cases should be?
00:21:58 Speaker_02
And in the case of zoophilia, I mean... Yeah, tell me that one. Well, this is a crime. People go to jail for this, and they may not be causing any harm.
00:22:08 Speaker_02
I think that it's reasonable to say, if somebody is going to be sent to prison, to ask, have you harmed any sentient being? Should this be a crime? Why should it be a crime?
00:22:19 Speaker_02
Now, this may be a very small number of cases would get prosecuted, but I think that's enough justification for airing the issue.
00:22:29 Speaker_03
And I know that people have criticized you for not taking enough into account aspects of personal experience about which you might be fundamentally ignorant. The example I'm thinking of here is
00:22:46 Speaker_03
the idea that parents should have the right to terminate babies born with severe disabilities that might cause them to suffer terribly.
00:22:54 Speaker_03
And the critics say that, you know, you just can't wrap your head around the fact that lives very different from your own might be just as valuable or involve just as much happiness.
00:23:05 Speaker_03
And also that, you know, sort of these ideas might be stigmatizing or objectifying of non-normative bodies. And I don't have a particularly insightful way of putting the question, but do you think there's something to that criticism?
00:23:17 Speaker_03
That just sort of rationally theorizing from a distance is missing something essential?
00:23:25 Speaker_02
I think that rationally theorizing from a distance easily can miss something essential, certainly.
00:23:32 Speaker_02
But I don't think that applies to my views about these cases because I formed those views after having discussions not only with doctors in charge of treating infants born with severe disabilities, but also some of the parents of those infants or parents of those children who were no longer infants.
00:23:55 Speaker_02
I had discussed this with a number of people, both in person and in letters that I had from people who, I remember one who said something, it was really bitter, said, the doctors got to play with their toys, meaning their surgical equipment and their skills at helping my son to survive.
00:24:17 Speaker_02
And then they handed the baby over to us, and the result has been that my child has suffered for nine years. So I do find it strange that people in the disability movement, or some people, I should say, in the disability movement,
00:24:32 Speaker_02
who are mentally as gifted as anyone, but happen to be in wheelchairs, think that the fact that they are in a wheelchair gives them greater insight into what it's like to be a child with severe disabilities that are not just physical, but also mental, or what it's like to be the parents of children like that.
00:24:54 Speaker_03
But I don't know that they're saying necessarily that it gives them particular insights into that specific example.
00:25:01 Speaker_03
I think they're saying they might have specific insights into what it's like to live a different kind of life that you, for example, don't have and can't have access to.
00:25:12 Speaker_02
Yeah, that's true, but that's generally not the kind of case that I'm talking about in suggesting that parents ought to have the option of euthanasia in cases of very severe disabilities.
00:25:27 Speaker_03
But do you think there's any way in which airing some of the more controversial philosophical views you have has maybe been detrimental to your larger project?
00:25:39 Speaker_03
And this is the idea that people might be turned off by what Peter Singer has to say about people with disabilities, and therefore they're not going to pay attention to what he has to say about animal rights.
00:25:52 Speaker_03
Do you think there's any trade-off there between saying what you think is true and saying what do you think will have the most impact?
00:26:00 Speaker_02
I think there is a possible trade-off, yes. But it's particularly difficult as a philosopher because I will always get asked these kinds of questions.
00:26:10 Speaker_02
And if I start to prevaricate or try to be fuzzy about the answer, I think my reputation and standing as a philosopher falls because of that. I think it's important to try to follow the argument wherever it goes.
00:26:24 Speaker_02
And yes, there may be some costs to it, but it's hard to balance those costs against the fact that you're regarded as a rigorously thinking philosopher. And so people pay more attention to what you say for that reason.
00:26:36 Speaker_03
I read your memoir, and I thought it was interesting. Three of your four grandparents, I think they were living in Vienna, died at the hands of the Nazis in the Holocaust. And you write about your grandfather. Is his name David Oppenheim?
00:26:50 Speaker_02
That's correct, yes.
00:26:51 Speaker_03
David Oppenheim, who was a collaborator of Freud's. And you have a line in there where you write that he spent his life trying to understand his fellow human beings, yet seems to have failed to take the Nazi threat to the Jews seriously enough.
00:27:05 Speaker_03
Maybe he had too much confidence in human reason and humanist values. And I just wonder what the connection is between your grandfather's work and your work. Do you see them as sort of interacting with each other or paralleling each other in any way?
00:27:23 Speaker_02
possibly paralleling, but not really interacting, because I didn't read my grandfather's work until the late 1990s, and I'd already written Animal Liberation, I'd already written Practical Ethics, I'd already written Rethinking Life and Death, so those books expressed my ideas relating to animals, relating to global poverty, relating to abortion and assisted dying,
00:27:50 Speaker_02
But what you could point to, I suppose, would be that some of my grandfather's general attitudes were passed down to me by my mother. She may have got them from her father, and that would include the fact that I'm not religious.
00:28:04 Speaker_02
So, some of that, I think, did get passed down to me, but not in terms of my specific views about suffering.
00:28:11 Speaker_02
Now, were they influenced by the knowledge of the suffering that the Nazis inflicted on my grandparents and other members of my extended family, and indeed on my parents by driving them out of their home in Vienna, of course? Yes, perhaps.
00:28:27 Speaker_02
And perhaps the brutality of what the Nazis did, the horror of that has had an effect on me, and that might have led to why
00:28:42 Speaker_02
trying to reduce suffering, trying to prevent unnecessary suffering has been a very leading impulse in the work that I've written.
00:28:51 Speaker_03
You say it might have led, are you just being nice to my line of questioning or do you think it did lead to that?
00:28:58 Speaker_02
No, I honestly don't know. I mean, I don't have the sort of self-awareness to say, to what extent was this knowledge of the Holocaust background of my family decisive in leading me in that direction?
00:29:12 Speaker_02
Would I not have had that if I had not had that background? I think, you know, it's really impossible to answer that question.
00:29:21 Speaker_03
And this is a self-awareness question. When your mom was dying from Alzheimer's?
00:29:28 Speaker_02
Yeah, it was some form of dementia. I don't know if it was Alzheimer's exactly, but she certainly had dementia, yes.
00:29:35 Speaker_03
You know, you spent a fair amount of money on providing her care towards the end of her life, which is obviously completely understandable. But was that the... the most utilitarian use of your money at that time?
00:29:51 Speaker_03
And if not, did that teach you something about the limits of rational thinking when it comes to helping people?
00:30:00 Speaker_02
I think it was probably not the most utilitarian thing to do with those resources, but there would have been personal costs to me, both in thinking that I hadn't looked after my mother, and also I had a sister. If I had said,
00:30:18 Speaker_02
you can pay for our mother's care, but I'm not going to. Obviously, that would have totally disrupted the really close and warm relationship that I had with my sister all the way through her life, and that would have been a really heavy cost to me.
00:30:32 Speaker_02
Now, you could argue that, okay, but the money could have helped many people in important ways, and therefore I was being, in a sense, self-interested in not wanting to cause that family rupture, but no, I think it was
00:30:47 Speaker_02
Yes, so I guess that gets to your second question, does it say there's limits? Yes, I think there are limits and certainly I'm aware that there are limits to things that I am prepared to do in order to produce the greatest good, right?
00:31:00 Speaker_02
So to give a philosophical sort of mock example, if
00:31:06 Speaker_02
I'm at a beach, and the current has swept a number of people out to sea, and I'm a strong swimmer, and I can jump in and save my daughter who's being swept out to my left, or I can jump in and save two people, strangers, who are being swept out to the right.
00:31:24 Speaker_02
Am I going to save more people and let my daughter drown? No.
00:31:28 Speaker_02
So yes, in that sense there were limits, but these limits still allow us obviously to do much, much more good than most people are doing because generally we don't have to make those tragic choices between saving our children and saving a larger number of strangers.
00:31:46 Speaker_02
So, yes, I'm working mostly in that area between those extremely demanding things that ethics may require and where most people are, where they don't even make very small sacrifices, arguably not even sacrifices at all, given the fulfillment and meaning that people get out of helping others.
00:32:09 Speaker_03
Are those limits you just described, are they a version of common sense?
00:32:14 Speaker_02
Well, I think they're a version of what we can reasonably expect people to do, and maybe it's not good to ask people to do more than we can reasonably expect them to do. So, to put it in ethical terms, I think there's a distinction between
00:32:28 Speaker_02
what would be the right thing to do to the extent that we act in a perfectly ethical way, and what is the right thing to ask others to do, and perhaps even to do yourself, to think about or to feel guilty if you don't do yourself.
00:32:46 Speaker_02
And that might take more account of the fact that we are not perfectly rational beings, not perfectly ethical beings, that we are to some extent self-interested.
00:32:55 Speaker_02
And it's not going to be very productive or effective to ask people to do more than those limits.
00:33:11 Speaker_03
After the break, I ask Peter Singer about the places where his heart is in conflict with his head.
00:33:18 Speaker_00
Let's say I'm punishing people who are really evil and have done horrible, brutal things using the death penalty. I can feel a pull of that. I feel a retributive sense of that. Hi Professor Singer, how are you? I'm very well.
00:33:59 Speaker_03
I pulled up AI Peter Singer and was messing around with it. You know, it punts on questions that I bet you're willing to have more definitive answers to.
00:34:12 Speaker_03
You know, just for example, I asked it, you know, is it okay to kill one innocent person in order to save two? And it doesn't give an answer.
00:34:20 Speaker_03
It just suggests I consider different perspectives, you know, the perspectives of virtue ethics or the perspective of utilitarianism.
00:34:28 Speaker_03
What's the point of AI Peter Singer if it's unable or unwilling to answer specific ethical questions related to your work with the definitive answers that real-life Peter Singer can?
00:34:42 Speaker_00
Well, thank you for trying it out. You know, we are still at the trial stage. We've been getting some feedback and I am actually
00:34:51 Speaker_00
aware of what you've just described and I am in contact with the person who does the actual tinkering with the algorithms and I think that's a good point.
00:35:03 Speaker_00
Obviously we don't want Peter Singer AI to make very definitive statements on areas on questions where I would not be prepared to give a definitive answer but certainly I think it should give straighter answers than it does.
00:35:19 Speaker_03
It made me wonder if legal considerations are baked into AI Peter Singer.
00:35:23 Speaker_00
Not as far as I'm aware. Or you think somebody might sue it?
00:35:29 Speaker_03
Not sue it, but maybe there could be liability issues or uncomfortable issues might arise if someone were to ask AI Peter Singer for ethical advice, you know, in matters of life or death and then went ahead and... I see.
00:35:45 Speaker_00
So it would become an accomplice to the crime. I don't know. I mean, interesting.
00:35:52 Speaker_00
Perhaps that would depend on the free speech situation, the constitutional situation of freedom of speech, in the country in which the person was. A really interesting issue, which I've never thought about yet.
00:36:04 Speaker_03
But you need to get your legal team on it.
00:36:07 Speaker_00
If I had a legal team, yes.
00:36:12 Speaker_03
One of the things that I find myself struggling with about your philosophical ideas is, you know, it relates to what Derek Parfit called the repugnant conclusion, that if you follow some of your ideas through to their logical conclusions, you can wind up in some sort of morally disturbing places.
00:36:31 Speaker_03
You know, an example would be, and tell me if I'm wrong, that according to your thinking, a large number of people with lives barely worth living could be considered better than a smaller number of people living great lives.
00:36:45 Speaker_03
And your response to that is what?
00:36:48 Speaker_00
My response on that particular case is that, well, now I'm being like Peter Singer AI, as you said: that's not my clear view.
00:37:01 Speaker_00
I'm still somewhat open-minded on that issue, but maybe you're asking a broader question about whether views of this type, whether I might hold views that leave me uncomfortable in some way or other.
00:37:14 Speaker_00
Yes, I think there are such views that I hold that will leave me quite uncomfortable.
00:37:21 Speaker_03
Like what?
00:37:24 Speaker_00
Views about distribution of well-being. Suppose that you have the choice of helping people who are very badly off by a small amount, or helping people who are reasonably well off already by a much larger amount. And you can't do both.
00:37:43 Speaker_00
I mean, I think you can imagine places where you spend a vast amount of resources making people, a small number of people who are really badly off, slightly, just barely, perceptibly better off.
00:37:54 Speaker_00
or you make, let's say, 95% of the population very significantly better off.
00:38:00 Speaker_00
I think the right thing to do is to make 95% of the population significantly better off, but I'm uncomfortable about the thought that, well, here are these people who are worst off, and you could help them, but you don't.
00:38:13 Speaker_03
But I'm trying to understand if there's ever a scenario in which an action is warranted simply because we believe it's the right thing, regardless of what the empirical balance in lives lost or not might be.
00:38:28 Speaker_03
I mean, do you ever, can you, is there an example of an ethical place where your heart wins out over your head?
00:38:36 Speaker_00
Oh, I've just given you one, right?
00:38:38 Speaker_03
Well, but not in like a thought experiment way, like a practical real life way.
00:38:47 Speaker_00
Let's say I'm punishing people who are really evil and have done horrible, cruel things using the death penalty. I can feel a pull of that. I feel a retributive sense of that. But I'm not a retributivist.
00:39:05 Speaker_03
I think most people, or I suspect most people, see themselves as trying to make the world a better place or on balance a net good for the world. But how does someone know if they're doing enough to make the world a better place?
00:39:30 Speaker_00
Very few people are doing enough to make the world a better place. I'm probably not. I don't think that I'm doing enough to make the world a better place.
00:39:38 Speaker_00
But if you want to know, how would you know? You would look around for other ways of doing more to make the world a better place, and you would say, there are none. That's the extreme position.
00:39:50 Speaker_00
As I say, I can't claim to live up to that myself, but that would be the ultimate limit where you could be confident that you've done everything you could to make the world a better place.
00:40:01 Speaker_03
So where's the line short of that?
00:40:05 Speaker_00
The line short of that, I think, is to say, I'm doing a lot. I'm thinking about how to make the world a better place. I'm doing a lot more than the current social standard is. I'm trying to raise that standard.
00:40:21 Speaker_00
I'm setting an example of doing more than the current standard is. I think if you can say all of those things, you can be content with what you're doing.
00:40:32 Speaker_03
Professor Singer, thank you for taking the time to speak with me. I appreciate it.
00:40:35 Speaker_00
Thanks very much, David. I've enjoyed both the conversations.
00:40:39 Speaker_03
Hey, wait, one last bonus ethical question. I'm in the closet again.
00:40:43 Speaker_00
Yeah, I've noticed. Because a different... Different neighbors.
00:40:47 Speaker_03
Because a different neighbor is doing construction? Do you give me permission to have sweet revenge on that guy?
00:40:52 Speaker_00
No, no revenge, but maybe more double glazing would help to keep the sound out. All right, that I can do. That I could do. Thank you very much. Great. Thanks a lot, David.
00:41:11 Speaker_03
That's Peter Singer. His latest book, Consider the Turkey, is available now. This conversation was produced by Wyatt Orm. It was edited by Annabelle Bacon. Mixing by Efim Shapiro. Original music by Dan Powell, Diane Wong, and Marion Lozano.
00:41:30 Speaker_03
Photography by Adam Ferguson. Our senior booker is Priya Matthew, and Seth Kelly is our senior producer. Our executive producer is Allison Benedict.
00:41:39 Speaker_03
Special thanks to Rory Walsh, Renan Borelli, Jeffrey Miranda, Maddie Masiello, Jake Silverstein, Paula Schumann, and Sam Dolnik. If you like what you're hearing, follow or subscribe to The Interview wherever you get your podcasts.
00:41:53 Speaker_03
To read or listen to any of our conversations, you can always go to NYTimes.com slash The Interview. And you can email us anytime at TheInterview at NYTimes.com. I'm David Marchese, and this is The Interview from The New York Times.