Education · Technology
Rob, Luisa, Keiran, and the 80,000 Hours team
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.
Total 260 episodes
#75 – Michelle Hutchinson on what people most often ask 80,000 Hours [re-release]

Rebroadcast: this episode was originally released in April 2020. Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on their most plausible paths, the key uncertainties they face in choosing between them, and provide resources, pointers, and introductions to help them in those paths. I (Michelle Hutchinson) joined the team a couple of years ago after working at Oxford's Global Priorities Institute, and these days I'm 80,000 Hours' Head of Advising. Since then, chatting to hundreds of people about their career plans has given me some idea of the kinds of things it’s useful for people to hear about when thinking through their careers. So we thought it would be useful to discuss some on the show for everyone to hear. • Links to learn more, summary and full transcript. • See over 500 vacancies on our job board. • Apply for one-on-one career advising. Among other common topics, we cover: • Why traditional careers advice involves thinking through what types of roles you enjoy followed by which of those are impactful, while we recommend going the other way: ranking roles on impact, and then going down the list to find the one you think you’d most flourish in. • That if you’re pitching your job search at the right level of role, you’ll need to apply to a large number of different jobs. So it's wise to broaden your options, by applying for both stretch and backup roles, and not over-emphasising a small number of organisations. • Our suggested process for writing a longer term career plan: 1. shortlist your best medium to long-term career options, then 2. figure out the key uncertainties in choosing between them, and 3. map out concrete next steps to resolve those uncertainties. • Why many listeners aren't spending enough time finding out about what the day-to-day work is like in paths they're considering, or reaching out to people for advice or opportunities. • The difficulty of maintaining the ambition to increase your social impact, while also being proud of and motivated by what you're already accomplishing. I also thought it might be useful to give people a sense of what I do and don’t do in advising calls, to help them figure out if they should sign up for it. If you’re wondering whether you’ll benefit from advising, bear in mind that it tends to be more useful to people: 1. With similar views to 80,000 Hours on what the world’s most pressing problems are, because we’ve done most research on the problems we think it’s most important to address. 2. Who don’t yet have close connections with people working at effective altruist organisations. 3. Who aren’t strongly locationally constrained. If you’re unsure, it doesn’t take long to apply, and a lot of people say they find the application form itself helps them reflect on their plans. We’re particularly keen to hear from people from under-represented backgrounds. Also in this episode: • I describe mistakes I’ve made in advising, and career changes made by people I’ve spoken with. • Rob and I argue about what risks to take with your career, like when it’s sensible to take a study break, or start from the bottom in a new career path. • I try to forecast how I’ll change after I have a baby, Rob speculates wildly on what motherhood is like, and Arden and I mercilessly mock Rob. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. 
Transcriptions: Zakee Ulhaq.
02:14:50 · 30/12/2020
#89 – Owen Cotton-Barratt on epistemic systems and layers of defense against potential global catastrophes

From one point of view academia forms one big 'epistemic' system — a process which directs attention, generates ideas, and judges which are good. Traditional print media is another such system, and we can think of society as a whole as a huge epistemic system, made up of these and many other subsystems. How these systems absorb, process, combine and organise information will have a big impact on what humanity as a whole ends up doing with itself — in fact, at a broad level it basically entirely determines the direction of the future. With that in mind, today’s guest Owen Cotton-Barratt has founded the Research Scholars Programme (RSP) at the Future of Humanity Institute at Oxford University, which gives early-stage researchers leeway to try to understand how the world works. Links to learn more, summary and full transcript. Instead of you having to pay for a masters degree, the RSP pays *you* to spend significant amounts of time thinking about high-level questions, like "What is important to do?” and “How can I usefully contribute?" Participants get to practice their research skills, while also thinking about research as a process and how research communities can function as epistemic systems that plug into the rest of society as productively as possible. The programme attracts people with several years of experience who are looking to take their existing knowledge — whether that’s in physics, medicine, policy work, or something else — and apply it to what they determine to be the most important topics. It also attracts people without much experience, but who have a lot of ideas. If you went directly into a PhD programme, you might have to narrow your focus quickly. But the RSP gives you time to explore the possibilities, and to figure out the answer to the question “What’s the topic that really matters, and that I’d be happy to spend several years of my life on?” Owen thinks one of the most useful things about the two-year programme is being around other people — other RSP participants, as well as other researchers at the Future of Humanity Institute — who are trying to think seriously about where our civilisation is headed and how to have a positive impact on this trajectory. Instead of being isolated in a PhD, you’re surrounded by folks with similar goals who can push back on your ideas and point out where you’re making mistakes. Saving years not pursuing an unproductive path could mean that you will ultimately have a much bigger impact with your career. RSP applications are set to open in the Spring of 2021 — but Owen thinks it’s helpful for people to think about it in advance. In today’s episode, Arden and Owen mostly talk about Owen’s own research. They cover: • Extinction risk classification and reduction strategies • Preventing small disasters from becoming large disasters • How likely we are to go from being in a collapsed state to going extinct • What most people should do if longtermism is true • Advice for mathematically-minded people • And much more Chapters: • Rob’s intro (00:00:00)• The interview begins (00:02:22)• Extinction risk classification and reduction strategies (00:06:02)• Defense layers (00:16:37)• Preventing small disasters from becoming large disasters (00:23:31)• Risk factors (00:38:57)• How likely are we to go from being in a collapsed state to going extinct? (00:48:02)• Estimating total levels of existential risk (00:54:35)• Everyday longtermism (01:01:35)• What should most people do if longtermism is true? 
(01:12:18)• 80,000 Hours’ issue with promoting career paths (01:24:12)• The existential risk of making a lot of really bad decisions (01:29:27)• What should longtermists do differently today (01:39:08)• Biggest concerns with this framework (01:51:28)• Research careers (02:04:04)• Being a mathematician (02:13:33)• Advice for mathematically minded people (02:24:30)• Rob’s outro (02:37:32)  Producer: Keiran Harris Audio mastering: Ben Cordell Transcript: Zakee Ulhaq
02:38:12 · 17/12/2020
#88 – Tristan Harris on the need to change the incentives of social media companies

In its first 28 days on Netflix, the documentary The Social Dilemma — about the possible harms being caused by social media and other technology products — was seen by 38 million households in about 190 countries and in 30 languages. Over the last ten years, the idea that Facebook, Twitter, and YouTube are degrading political discourse and grabbing and monetizing our attention in an alarming way has gone mainstream to such an extent that it's hard to remember how recently it was a fringe view. It feels intuitively true that our attention spans are shortening, we’re spending more time alone, we’re less productive, there’s more polarization and radicalization, and that we have less trust in our fellow citizens, due to having less of a shared basis of reality. But while it all feels plausible, how strong is the evidence that it's true? In the past, people have worried about every new technological development — often in ways that seem foolish in retrospect. Socrates famously feared that being able to write things down would ruin our memory. At the same time, historians think that the printing press probably generated religious wars across Europe, and that the radio helped Hitler and Stalin maintain power by giving them and them alone the ability to spread propaganda across the whole of Germany and the USSR. Fears about new technologies aren't always misguided. Tristan Harris, leader of the Center for Humane Technology, and co-host of the Your Undivided Attention podcast, is arguably the most prominent person working on reducing the harms of social media, and he was happy to engage with Rob’s good-faith critiques. • Links to learn more, summary and full transcript. • FYI, the 2020 Effective Altruism Survey is closing soon: https://www.surveymonkey.co.uk/r/EAS80K2 Tristan and Rob provide a thorough exploration of the merits of possible concrete solutions – something The Social Dilemma didn’t really address. Given that these companies are mostly trying to design their products in the way that makes them the most money, how can we get that incentive to align with what's in our interests as users and citizens? One way is to encourage a shift to a subscription model. One claim in The Social Dilemma is that the machine learning algorithms on these sites try to shift what you believe and what you enjoy in order to make it easier to predict what content recommendations will keep you on the site. But if you paid a yearly fee to Facebook in lieu of seeing ads, their incentive would shift towards making you as satisfied as possible with their service — even if that meant using it for five minutes a day rather than 50. Despite all the negatives, Tristan doesn’t want us to abandon the technologies he's concerned about. He asks us to imagine a social media environment designed to regularly bring our attention back to what each of us can do to improve our lives and the world. Just as we can focus on the positives of nuclear power while remaining vigilant about the threat of nuclear weapons, we could embrace social media and recommendation algorithms as the largest mass-coordination engine we've ever had — tools that could educate and organise people better than anything that has come before. The tricky and open question is how to get there. Rob and Tristan also discuss: • Justified concerns vs. 
moral panics • The effect of social media on politics in the US and developing countries • Tips for individuals Chapters: • Rob’s intro (00:00:00) • The interview begins (00:01:36) • Center for Humane Technology (00:04:53) • Critics (00:08:19) • The Social Dilemma (00:13:20) • Three categories of harm (00:20:31) • Justified concerns vs. moral panics (00:30:23) • The messy real world vs. an imagined idealised world (00:38:20) • The persuasion apocalypse (00:47:46) • Revolt of the Public (00:56:48) • Global effects (01:02:44) • US politics (01:13:32) • Potential solutions (01:20:59) • Unintended consequences (01:42:57) • Win-win changes (01:50:47) • Big wins over the last 5 or 10 years (01:59:10) • The subscription model (02:02:28) • Tips for individuals (02:14:05) • The current state of the research (02:22:37) • Careers (02:26:36) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Sofia Davis-Fogel.
02:35:39 · 03/12/2020
Benjamin Todd on what the effective altruism community most needs (80k team chat #4)

In the last '80k team chat' with Ben Todd and Arden Koehler, we discussed what effective altruism is and isn't, and how to argue for it. In this episode we now turn to what the effective altruism community most needs. • Links to learn more, summary and full transcript • The 2020 Effective Altruism Survey just opened. If you're involved with the effective altruism community, or sympathetic to its ideas, it would be wonderful if you could fill it out: https://www.surveymonkey.co.uk/r/EAS80K2 According to Ben, we can think of the effective altruism movement as having gone through several stages, categorised by what kind of resource has been most able to unlock more progress on important issues (i.e. by what's the 'bottleneck'). Plausibly, these stages are common for other social movements as well. • Needing money: In the first stage, when effective altruism was just getting going, more money (to do things like pay staff and put on events) was the main bottleneck to making progress. • Needing talent: In the second stage, we especially needed more talented people willing to work on whatever seemed most pressing. • Needing specific skills and capacity: In the third stage, which Ben thinks we're in now, the main bottlenecks are organizational capacity, infrastructure, and management to help train people up, as well as specialist skills that people can put to work now. What's next? Perhaps needing coordination -- the ability to make sure people keep working efficiently and effectively together as the community grows. Ben and I also cover the career implications of those stages, as well as the ability to save money and the possibility that someone else would do your job in your absence. If you’d like to learn more about these topics, you should check out a couple of articles on our site: • Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions • How replaceable are the top candidates in large hiring rounds? Why the answer flips depending on the distribution of applicant ability Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
01:25:21 · 12/11/2020
#87 – Russ Roberts on whether it's more effective to help strangers, or people you know

If you want to make the world a better place, would it be better to help your niece with her SATs, or try to join the State Department to lower the risk that the US and China go to war? People involved in 80,000 Hours or the effective altruism community would be comfortable recommending the latter. This week's guest — Russ Roberts, host of the long-running podcast EconTalk, and author of a forthcoming book on decision-making under uncertainty and the limited ability of data to help — worries that might be a mistake. Links to learn more, summary and full transcript. I've been a big fan of Russ' show EconTalk for 12 years — in fact I have a list of my top 100 recommended episodes — so I invited him to talk about his concerns with how the effective altruism community tries to improve the world. These include: • Being too focused on the measurable • Being too confident we've figured out 'the best thing' • Being too credulous about the results of social science or medical experiments • Undermining people's altruism by encouraging them to focus on strangers, who it's naturally harder to care for • Thinking it's possible to predictably help strangers, who you don't understand well enough to know what will truly help • Adding levels of wellbeing across people when this is inappropriate • Encouraging people to pursue careers they won't enjoy These worries are partly informed by Russ' 'classical liberal' worldview, which involves a preference for free market solutions to problems, and nervousness about the big plans that sometimes come out of consequentialist thinking. While we do disagree on a range of things — such as whether it's possible to add up wellbeing across different people, and whether it's more effective to help strangers than people you know — I make the case that some of these worries are founded on common misunderstandings about effective altruism, or at least misunderstandings of what we believe here at 80,000 Hours. We primarily care about making the world a better place over thousands or even millions of years — and we wouldn’t dream of claiming that we could accurately measure the effects of our actions on that timescale. I'm more skeptical of medicine and empirical social science than most people, though not quite as skeptical as Russ (check out this quiz I made where you can guess which academic findings will replicate, and which won't). And while I do think that people should occasionally take jobs they dislike in order to have a social impact, those situations seem pretty few and far between. But Russ and I disagree about how much we really disagree. In addition to all the above we also discuss: • How to decide whether to have kids • Was the case for deworming children oversold? • Whether it would be better for countries around the world to be better coordinated Chapters: • Rob’s intro (00:00:00) • The interview begins (00:01:48) • RCTs and donations (00:05:15) • The 80,000 Hours project (00:12:35) • Expanding the moral circle (00:28:37) • Global coordination (00:39:48) • How to act if you're pessimistic about improving the long-term future (00:55:49) • Communicating uncertainty (01:03:31) • How much to trust empirical research (01:09:19) • How to decide whether to have kids (01:24:13) • Utilitarianism (01:34:01) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
01:49:36 · 03/11/2020
#86 – Hilary Greaves on Pascal's mugging, strong longtermism, and whether existing can be good for us

Had World War 1 never happened, you might never have existed. It’s very unlikely that the exact chain of events that led to your conception would have happened otherwise — so perhaps you wouldn't have been born. Would that mean that it's better for you that World War 1 happened (regardless of whether it was better for the world overall)? On the one hand, if you're living a pretty good life, you might think the answer is yes – you get to live rather than not. On the other hand, it sounds strange to say that it's better for you to be alive, because if you'd never existed there'd be no you to be worse off. But if you wouldn't be worse off if you hadn't existed, can you be better off because you do? In this episode, philosophy professor Hilary Greaves – Director of Oxford University’s Global Priorities Institute – helps untangle this puzzle for us and walks me and Rob through the space of possible answers. She argues that philosophers have been too quick to conclude what she calls existence non-comparativism – i.e., that it can't be better for someone to exist vs. not. Links to learn more, summary and full transcript. Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn't better for them, and thus, perhaps, that it's not better at all. This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn't otherwise have existed) — which would affect how we try to make the world a better place. Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out, we would have no particular reason to be concerned. Furthermore, it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out. This is our second episode with Professor Greaves. The first one was a big hit, so we thought we'd come back and dive into even more complex ethical issues. We discuss: • The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long run future go as well as possible • What it means for us to be 'clueless' about the consequences of our actions • Moral uncertainty -- what we should do when we don't know which moral theory is correct • Whether we should take a bet on a really small probability of a really great outcome • The field of global priorities research at the Global Priorities Institute and beyond. Chapters: • The interview begins (00:02:53) • The Case for Strong Longtermism (00:05:49) • Compatible moral views (00:20:03) • Defining cluelessness (00:39:26) • Why cluelessness isn’t an objection to longtermism (00:51:05) • Theories of what to do under moral uncertainty (01:07:42) • Pascal’s mugging (01:16:37) • Comparing Existence and Non-Existence (01:30:58) • Philosophers who reject existence comparativism (01:48:56) • Lives framework (02:01:52) • Global priorities research (02:09:25) Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:24:54 · 21/10/2020
Benjamin Todd on the core of effective altruism and how to argue for it (80k team chat #3)

Today’s episode is the latest conversation between Arden Koehler and our CEO, Ben Todd. Ben’s been thinking a lot about effective altruism recently, including what it really is, how it's framed, and how people misunderstand it. We recently released an article on misconceptions about effective altruism – based on Will MacAskill’s recent paper The Definition of Effective Altruism – and this episode can act as a companion piece. Links to learn more, summary and full transcript. Arden and Ben cover a bunch of topics related to effective altruism: • How it isn’t just about donating money to fight poverty • Whether it includes a moral obligation to give • The rigorous argument for its importance • Objections to that argument • How to talk about effective altruism for people who aren't already familiar with it Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at [email protected], and we might make them a more regular feature. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
01:24:07 · 22/09/2020
Ideas for high impact careers beyond our priority paths (Article)

Today’s release is the latest in our series of audio versions of our articles. In this one, we go through some more career options beyond our priority paths that seem promising to us for positively influencing the long-term future. Some of these are likely to be written up as priority paths in the future, or wrapped into existing ones, but we haven’t written full profiles for them yet—for example policy careers outside AI and biosecurity policy that seem promising from a longtermist perspective. Others, like information security, we think might be as promising for many people as our priority paths, but because we haven’t investigated them much we’re still unsure. Still others seem like they’ll typically be less impactful than our priority paths for people who can succeed equally in either, but still seem high-impact to us and like they could be top options for a substantial number of people, depending on personal fit—for example research management. Finally, some—like becoming a public intellectual—clearly have the potential for a lot of impact, but we can’t recommend them widely because they don’t have the capacity to absorb a large number of people, are particularly risky, or both. If you want to check out the links in today’s article, you can find those here. Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you, our podcast listeners, article readers, advice receivers, and so on, to let us know how we've helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand. This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too. We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation. Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey before it closes on Sunday (13th of September). You can find it at 80000hours.org/survey Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
27:54 · 07/09/2020
Benjamin Todd on varieties of longtermism and things 80,000 Hours might be getting wrong (80k team chat #2)

Today’s bonus episode is a conversation between Arden Koehler and our CEO, Ben Todd. Ben’s been doing a bunch of research recently, and we thought it’d be interesting to hear about how he’s currently thinking about a couple of different topics – including different types of longtermism, and things 80,000 Hours might be getting wrong. Links to learn more, summary and full transcript. This is very off-the-cuff compared to our regular episodes, and just 54 minutes long. In the first half, Arden and Ben talk about varieties of longtermism: • Patient longtermism • Broad urgent longtermism • Targeted urgent longtermism focused on existential risks • Targeted urgent longtermism focused on other trajectory changes • And their distinctive implications for people trying to do good with their careers. In the second half, they move on to: • How to trade off transferable versus specialist career capital • How much weight to put on personal fit • Whether we might be highlighting the wrong problems and career paths. Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at [email protected], and we might make them a more regular feature. Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you, our podcast listeners, article readers, advice receivers, and so on, to let us know how we've helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand. This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too. We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation. Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey. You can find it at 80000hours.org/survey Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
57:51 · 01/09/2020
Global issues beyond 80,000 Hours’ current priorities (Article)

Today’s release is the latest in our series of audio versions of our articles. In this one, we go through 30 global issues beyond the ones we usually prioritize most highly in our work, and that you might consider focusing your career on tackling. Although we spend the majority of our time at 80,000 Hours on our highest priority problem areas, and we recommend working on them to many of our readers, these are just the most promising issues among those we’ve spent time investigating. There are many other global issues that we haven’t properly investigated, and which might be very promising for more people to work on. In fact, we think working on some of the issues in this article could be as high-impact for some people as working on our priority problem areas — though we haven’t looked into them enough to be confident. If you want to check out the links in today’s article, you can find those here. Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you, our podcast listeners, article readers, advice receivers, and so on, to let us know how we've helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand. This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too. We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation. Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey. You can find it at 80000hours.org/survey Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
32:54 · 28/08/2020
#85 - Mark Lynas on climate change, societal collapse & nuclear energy

A golf-ball sized lump of uranium can deliver more than enough power to cover all of your lifetime energy use. To get the same energy from coal, you’d need 3,200 tonnes of black rock — a mass equivalent to 800 adult elephants, which would produce more than 11,000 tonnes of CO2. That’s about 11,000 tonnes more than the uranium. Many people aren’t comfortable with the danger posed by nuclear power. But given the climatic stakes, it’s worth asking: Just how much more dangerous is it compared to fossil fuels? According to today’s guest, Mark Lynas — author of Six Degrees: Our Future on a Hotter Planet (winner of the prestigious Royal Society Prizes for Science Books) and Nuclear 2.0 — it’s actually much, much safer. Links to learn more, summary and full transcript. Climatologists James Hansen and Pushker Kharecha calculated that the use of nuclear power between 1971 and 2009 avoided the premature deaths of 1.84 million people by avoiding air pollution from burning coal. What about radiation or nuclear disasters? According to Our World In Data, in generating a given amount of electricity, nuclear, wind, and solar all cause about the same number of deaths — and it's a tiny number. So what’s going on? Why isn’t everyone demanding a massive scale-up of nuclear energy to save lives and stop climate change? Mark and many other activists believe that unchecked climate change will result in the collapse of human civilization, so the stakes could not be higher. Mark says that many environmentalists — including him — simply grew up with anti-nuclear attitudes all around them (possibly stemming from a conflation of nuclear weapons and nuclear energy) and haven't thought to question them. But he thinks that once you believe in the climate emergency, you have to rethink your opposition to nuclear energy. At 80,000 Hours we haven’t analysed the merits and flaws of the case for nuclear energy — especially compared to wind and solar paired with gas, hydro, or battery power to handle intermittency — but Mark is convinced. He says it comes down to physics: Nuclear power is just so much denser. We need to find an energy source that provides carbon-free power to ~10 billion people, and we need to do it while humanity is doubling or tripling (or more) its energy demand. How do you do that without destroying the world's ecology? Mark thinks that nuclear is the only way. Read a more in-depth version of the case for nuclear energy in the full blog post. For Mark, the only argument against nuclear power is a political one -- that people won't want or accept it. He says that he knows people in all kinds of mainstream environmental groups — such as Greenpeace — who agree that nuclear must be a vital part of any plan to solve climate change. But, because they think they'll be ostracized if they speak up, they keep their mouths shut. Mark thinks this willingness to indulge beliefs that contradict scientific evidence stands in the way of actually fully addressing climate change, and so he’s helping to build a movement of folks who are out and proud about their support for nuclear energy. This is only one topic of many in today’s interview. Arden, Rob, and Mark also discuss: • At what degrees of warming does societal collapse become likely • Whether climate change could lead to human extinction • What environmentalists are getting wrong about climate change • And much more. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. 
Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:08:26 · 20/08/2020
#84 – Shruti Rajagopalan on what India did to stop COVID-19 and how well it worked

When COVID-19 struck the US, everyone was told that hand sanitizer needed to be saved for healthcare professionals, so they should just wash their hands instead. But in India, many homes lack reliable piped water, so they had to do the opposite: distribute hand sanitizer as widely as possible. American advocates for banning single-use plastic straws might be outraged at the widespread adoption of single-use hand sanitizer sachets in India. But the US and India are very different places, and it might be the only way out when you're facing a pandemic without running water. According to today’s guest, Shruti Rajagopalan, Senior Research Fellow at the Mercatus Center at George Mason University, that's typical and context is key to policy-making. This prompted Shruti to propose a set of policy responses designed for India specifically back in April. Unfortunately she thinks it's surprisingly hard to know what one should and shouldn't imitate from overseas. Links to learn more, summary and full transcript. For instance, some places in India installed shared handwashing stations in bus stops and train stations, which is something no developed country would advise. But in India, you can't necessarily wash your hands at home — so shared faucets might be the lesser of two evils. (Though note scientists have downgraded the importance of hand hygiene lately.) Stay-at-home orders offer a more serious example. Developing countries find themselves in a serious bind that rich countries do not. With nearly no slack in healthcare capacity, India lacks equipment to treat even a small number of COVID-19 patients. That suggests strict controls on movement and economic activity might be necessary to control the pandemic. But many people in India and elsewhere can't afford to shelter in place for weeks, let alone months. And governments in poorer countries may not be able to afford to send everyone money — even where they have the infrastructure to do so fast enough. India ultimately did impose strict lockdowns, lasting almost 70 days, but the human toll has been larger than in rich countries, with vast numbers of migrant workers stranded far from home with limited if any income support. There were no trains or buses, and the government made no provision to deal with the situation. Unable to afford rent where they were, many people had to walk hundreds of kilometers to reach home, carrying children and belongings with them. But in some other ways the context of developing countries is more promising. In the US many people melted down when asked to wear facemasks. But in South Asia, people just wore them. Shruti isn’t sure whether that's because of existing challenges with high pollution, past experiences with pandemics, or because intergenerational living makes the wellbeing of others more salient, but the end result is that masks weren’t politicised in the way they were in the US. In addition, despite the suffering caused by India's policy response to COVID-19, public support for the measures and the government remains high — and India's population is much younger and so less affected by the virus. In this episode, Howie and Shruti explore the unique policy challenges facing India in its battle with COVID-19, what they've tried to do, and how it has gone. 
They also cover: • What an economist can bring to the table during a pandemic • The mystery of India’s surprisingly low mortality rate • Policies that should be implemented today • What makes a good constitution Chapters: • Rob’s intro (00:00:00) • The interview begins (00:02:27) • What an economist can bring to the table for COVID-19 (00:07:54) • What India has done about the coronavirus (00:12:24) • Why it took so long for India to start seeing a lot of cases (00:25:08) • How India is doing at the moment with COVID-19 (00:27:55) • Is the mortality rate surprisingly low in India? (00:40:32) • Why Southeast Asian countries have done so well so far (00:55:43) • Different attitudes to masks globally (00:59:25) • Differences in policy approaches for developing countries (01:07:27) • India’s strict lockdown (01:25:56) • Lockdown for the average rural Indian (01:39:11) • Public reaction to the lockdown in India (01:44:39) • Policies that should be implemented today (01:50:29) • India’s overall reaction to COVID-19 (01:57:23) • Constitutional economics (02:03:28) • What makes a good constitution (02:11:47) • Emergent Ventures (02:27:34) • Careers (02:47:57) • Rob’s outro (02:57:51) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:58:14 · 13/08/2020
#83 - Jennifer Doleac on preventing crime without police and prisons

The killing of George Floyd has prompted a great deal of debate over whether the US should reduce the size of its police departments. The research literature suggests that the presence of police officers does reduce crime, though they're expensive and as is increasingly recognised, impose substantial harms on the populations they are meant to be protecting, especially communities of colour. So maybe we ought to shift our focus to effective but unconventional approaches to crime prevention, approaches that don't require police or prisons and the human toll they bring with them. Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law and incarceration. In this extensive interview, she highlights three alternative ways to effectively prevent crime: better street lighting, cognitive behavioral therapy, and lead reduction. One of Jennifer’s papers used switches into and out of daylight saving time as a 'natural experiment' to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double. Links to sources for the claims in these show notes, other resources to learn more, and a full transcript. The idea here is that if you try to rob someone in broad daylight, they might see you coming, and witnesses might later be able to identify you. You're just more likely to get caught. You might think: "Well, people will just commit crime in the morning instead". But it looks like criminals aren’t early risers, and that doesn’t happen. On her unusually rigorous podcast Probable Causation, Jennifer spoke to one of the authors of a related study, in which very bright streetlights were randomly added to some public housing complexes but not others. They found the lights reduced outdoor night-time crime by 36%, at little cost. The next best thing to sun-light is human-light, so just installing more streetlights might be one of the easiest ways to cut crime, without having to hassle or punish anyone. The second approach is cognitive behavioral therapy (CBT), in which you're taught to slow down your decision-making, and think through your assumptions before acting. There was a randomised controlled trial done in schools, as well as juvenile detention facilities in Chicago, where the kids assigned to get CBT were followed over time and compared with those who were not assigned to receive CBT. They found the CBT course reduced rearrest rates by a third, and lowered the likelihood of a child returning to a juvenile detention facility by 20%. Jennifer says that the program isn’t that expensive, and the benefits are massive. Everyone would probably benefit from being able to talk through their problems but the gains are especially large for people who've grown up with the trauma of violence in their lives. Finally, Jennifer thinks that lead reduction might be the best buy of all in crime prevention… Blog post truncated due to length limits. Finish reading the full post here. 
In today’s conversation, Rob and Jennifer also cover, among many other things: • Misconduct, hiring practices and accountability among US police • Procedural justice training • Overrated policy ideas • Policies to try to reduce racial discrimination • The effects of DNA databases • Diversity in economics • The quality of social science research Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:23:03 · 31/07/2020
#82 – James Forman Jr on reducing the cruelty of the US criminal legal system

No democracy has ever incarcerated as many people as the United States. To get its incarceration rate down to the global average, the US would have to release 3 in 4 people in its prisons today. The effects on Black Americans have been especially severe — Black people make up 12% of the US population but 33% of its prison population. In the early 2000s, when incarceration reached its peak, the US government estimated that 32% of Black boys would go to prison at some point in their lives, 5.5 times the figure for whites. Contrary to popular understanding, nonviolent drug offenders make up less than a fifth of the incarcerated population. The only way to get its incarceration rate near the global average will be to shorten prison sentences for so-called 'violent criminals' — a politically toxic idea. But could we change that? According to today’s guest, Professor James Forman Jr — a former public defender in Washington DC, Pulitzer Prize-winning author of Locking Up Our Own: Crime and Punishment in Black America, and now a professor at Yale Law School — there are two things we have to do to make that happen. Links to learn more, summary and full transcript. First, he thinks we should lose the term 'violent offender', and maybe even 'violent crime'. When you say 'violent crime', most people immediately think of murder and rape — but they're only a small fraction of the crimes that the law deems as violent. In reality, the crime that puts the most people in prison in the US is robbery. And the law says that robbery is a violent crime whether a weapon is involved or not. By moving away from the catch-all category of 'violent criminals' we can judge the risk posed by individual people more sensibly. Second, he thinks we should embrace the restorative justice movement. Instead of asking "What was the law? Who broke it? What should the punishment be?", restorative justice asks "Who was harmed? Who harmed them? And what can we as a society, including the person who committed the harm, do to try to remedy that harm?" Instead of being narrowly focused on how many years people should spend in prison as retribution, it starts a different conversation. You might think this apparently softer approach would be unsatisfying to victims of crime. But James has discovered that a lot of victims of crime find that the current system doesn't help them in any meaningful way. What they primarily want to know is: why did this happen to me? The best way to find that out is to actually talk to the person who harmed them, and in doing so gain a better understanding of the underlying factors behind the crime. The restorative justice approach facilitates these conversations in a way the current system doesn't allow, and can include restitution, apologies, and face-to-face reconciliation. That’s just one topic of many covered in today’s episode, with much of the conversation focusing on Professor Forman’s 2018 book Locking Up Our Own — an examination of the historical roots of contemporary criminal justice practices in the US, and his experience setting up a charter school for at-risk youth in DC. Chapters: • Rob’s intro (00:00:00) • The interview begins (00:02:02) • How did we get here? (00:04:07) • The role racism plays in policing today (00:14:47) • Black American views on policing and criminal justice (00:22:37) • Has the core argument of the book been controversial? (00:31:51) • The role that class divisions played in forming the current legal system (00:37:33) • What are the biggest problems today? (00:40:56) • What changes in policy would make the biggest difference? (00:52:41) • Shorter sentences for violent crimes (00:58:26) • Important recent successes (01:08:21) • What can people actually do to help? (01:14:38) Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
01:28:08 · 27/07/2020
#81 - Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments. Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment. In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents; it’s actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. Nick Bostrom wrote the most fleshed out version of the argument in his book, Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents. Links to learn more, summary and full transcript. There have also been very few skeptical experts that have actually sat down and fully engaged with it, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world. He thinks that most of the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power or general intelligence or goals, and toy thought experiments. And he doesn’t think it’s clear we should take these as a strong source of evidence. Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible. These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them. But Ben points out that it's also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can't specify precisely. If AI systems can recognise faces from images, and fly helicopters, why don’t we think they’ll be able to understand human preferences? Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance. He doesn’t think that there are any slam-dunks for improving the future, and so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in. This is the second episode hosted by our Strategy Advisor Howie Lempel, and he and Ben cover, among many other things: • The threat of AI systems increasing the risk of permanently damaging conflict or collapse • The possibility of permanently locking in a positive or negative future • Contenders for types of advanced systems • What role AI should play in the effective altruism portfolio Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. 
Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:38:28 · 09/07/2020
Advice on how to read our advice (Article)

This is the fourth release in our new series of audio articles. If you want to read the original article or check out the links within it, you can find them here. "We’ve found that readers sometimes interpret or apply our advice in ways we didn’t anticipate and wouldn’t exactly recommend. That’s hard to avoid when you’re writing for a range of people with different personalities and initial views. To help get on the same page, here’s some advice about our advice, for those about to launch into reading our site. We want our writing to inform people’s views, but only in proportion to the likelihood that we’re actually right. So we need to make sure you have a balanced perspective on how compelling the evidence is for the different claims we make on the site, and how much weight to put on our advice in your situation. This piece includes a list of points to bear in mind when reading our site, and some thoughts on how to avoid the communication problems we face..." As the title suggests, this was written with our web site content in mind, but plenty of it applies to the careers sections of the podcast too — as well as our bonus episodes with members of the 80,000 Hours team, such as Arden and Rob’s episode on demandingness, work-life balance and injustice, which aired on February 25th of this year. And if you have feedback on these, positive or negative, it’d be great if you could email us at [email protected].
15:23 · 29/06/2020
#80 – Stuart Russell on why our approach to AI is broken and how to fix it

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed. In his new book, Human Compatible, he outlines the 'standard model' of AI development, in which intelligence is measured as the ability to achieve some definite, completely-known objective that we've stated explicitly. This is so obvious it almost doesn't even seem like a design choice, but it is. Unfortunately there's a big problem with this approach: it's incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we've asked it to. That's true even if the goal isn't what we really want, or the methods it's choosing are ones we would never accept. We already see AIs misbehaving for this reason. Stuart points to the example of YouTube's recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn't something we wanted, but it helped achieve the algorithm's objective: maximise viewing time. Like King Midas, who asked to be able to turn everything into gold but ended up unable to eat, we get too much of what we've asked for. Links to learn more, summary and full transcript. This 'alignment' problem will get more and more severe as machine learning is embedded in more and more places: recommending us news, operating power grids, deciding prison sentences, doing surgery, and fighting wars. If we're ever to hand over much of the economy to thinking machines, we can't count on ourselves correctly saying exactly what we want the AI to do every time. Stuart isn't just dissatisfied with the current model though, he has a specific solution. According to him we need to redesign AI around 3 principles: 1. The AI system's objective is to achieve what humans want. 2. But the system isn't sure what we want. 3. And it figures out what we want by observing our behaviour. Stuart thinks this design architecture, if implemented, would be a big step forward towards reliably beneficial AI.  For instance, a machine built on these principles would be happy to be turned off if that's what its owner thought was best, while one built on the standard model should resist being turned off because being deactivated prevents it from achieving its goal. As Stuart says, "you can't fetch the coffee if you're dead." These principles lend themselves towards machines that are modest and cautious, and check in when they aren't confident they're truly achieving what we want. We've made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean to say based on the context of a situation. And they need to guess when we've rejected an option because we've considered it and decided it's a bad idea, and when we simply haven't thought about it at all. Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political. When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? 
And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want? Chapters: • Rob’s intro (00:00:00) • The interview begins (00:19:06) • Human Compatible: Artificial Intelligence and the Problem of Control (00:21:27) • Principles for Beneficial Machines (00:29:25) • AI moral rights (00:33:05) • Humble machines (00:39:35) • Learning to predict human preferences (00:45:55) • Animals and AI (00:49:33) • Enfeeblement problem (00:58:21) • Counterarguments (01:07:09) • Orthogonality thesis (01:24:25) • Intelligence explosion (01:29:15) • Policy ideas (01:38:39) • What most needs to be done (01:50:14) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
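Not something from the episode itself, but for readers who want to see how principles 2 and 3 might look in code, here is a minimal toy sketch of a machine that stays uncertain about what its user values, updates from observed choices, and checks in rather than acting while unsure. The scenario, names, and numbers are all invented for illustration; this is not Stuart's actual proposal or any real system.

```python
# Toy illustration (not Stuart Russell's actual proposal): a machine that is
# uncertain about the human's objective, updates from observed behaviour,
# and defers to the human while its uncertainty is high.
import math

# Hypotheses about what the human values: weight on "speed" vs "tidiness".
HYPOTHESES = {"values_speed": 0.5, "values_tidiness": 0.5}  # uniform prior

# Assumed utilities each hypothesis assigns to two ways of doing a chore.
UTILITY = {
    "values_speed":    {"fast_and_messy": 1.0, "slow_and_tidy": 0.2},
    "values_tidiness": {"fast_and_messy": 0.1, "slow_and_tidy": 1.0},
}

def likelihood(choice, hypothesis, rationality=3.0):
    """Probability of the human's choice if the hypothesis were true
    (simple Boltzmann-rational choice model)."""
    utils = UTILITY[hypothesis]
    exp = {a: math.exp(rationality * u) for a, u in utils.items()}
    return exp[choice] / sum(exp.values())

def update(posterior, observed_choice):
    """Bayesian update of the machine's beliefs about the human's values."""
    unnorm = {h: p * likelihood(observed_choice, h) for h, p in posterior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

beliefs = dict(HYPOTHESES)
for choice in ["slow_and_tidy", "slow_and_tidy"]:  # the machine watches the human twice
    beliefs = update(beliefs, choice)
    print(beliefs)

# Principle in action: act autonomously only once confident; otherwise check in.
best, confidence = max(beliefs.items(), key=lambda kv: kv[1])
if confidence > 0.9:
    print(f"Proceed, assuming the human {best}")
else:
    print("Still unsure what the human wants -- ask before acting")
```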
02:13:17 · 22/06/2020
What anonymous contributors think about important life and career questions (Article)

What anonymous contributors think about important life and career questions (Article)

Today we’re launching the final entry of our ‘anonymous answers' series on the website. It features answers to 23 different questions including “How have you seen talented people fail in their work?” and “What’s one way to be successful you don’t think people talk about enough?”, from anonymous people whose work we admire. We thought a lot of the responses were really interesting; some were provocative, others just surprising. And as intended, they span a very wide range of opinions. So we decided to share some highlights here with you podcast subscribers. This is only a sample though, including a few answers from just 10 of those 23 questions. You can find the rest of the answers at 80000hours.org/anonymous or follow a link here to an individual entry: 1. What's good career advice you wouldn’t want to have your name on? 2. How have you seen talented people fail in their work? 3. What’s the thing people most overrate in their career? 4. If you were at the start of your career again, what would you do differently this time? 5. If you're a talented young person, how risk averse should you be? 6. Among people trying to improve the world, what are the bad habits you see most often? 7. What mistakes do people most often make when deciding what work to do? 8. What's one way to be successful you don't think people talk about enough? 9. How honest & candid should high-profile people really be? 10. What’s some underrated general life advice? 11. Should the effective altruism community grow faster or slower? And should it be broader, or narrower? 12. What are the biggest flaws of 80,000 Hours? 13. What are the biggest flaws of the effective altruism community? 14. How should the effective altruism community think about diversity? 15. Are there any myths that you feel obligated to support publicly? And five other questions. Finally, if you’d like us to produce more or less content like this, please let us know your opinion at [email protected].
37:10 · 05/06/2020
#79 – A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

#79 – A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, "You know what? She's not so bad." Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history. He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His next book will ask: if we reframe global problems as puzzles, would the world be a better place? Links to learn more, summary and full transcript. This is the first time I’ve hosted the podcast, and I’m hoping to convince people to listen with this attempt at clever show notes that change style each paragraph to reference different A.J. experiments. I don’t actually think it’s that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I suspect I find it more entertaining than almost anyone else will. (Radical Honesty.) We do talk about some useful things — one of which is the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they’re on, maybe you’ll think that you might as well get on the treadmill — just for a minute. And once you’re on for 1 minute, you’ll often stay on for 20. So I’m not asking you to commit to listening to the whole episode — just to put on your headphones. (Drop Dead Healthy.) Another reason to listen is for the facts: • The Bayer aspirin company invented heroin as a cough suppressant • Coriander is just the British way of saying cilantro • Dogs have a third eyelid to protect the eyeball from irritants • A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples) (The Know-It-All.) One extra argument for listening: If you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater — which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the Bible, you should definitely consider making podcasts your main source of entertainment (as long as you’re not listening on the Sabbath). (The Year of Living Biblically.) I’m so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane and Lucas who allowed me to spend the day in their home; the construction worker who told me how to get to my subway platform on the morning of the interview; and Queen Jadwiga for making bagels popular in the 1300s, which kept me going during the recording. (Thanks a Thousand.)
We also discuss: • Blackmailing yourself • The most extreme ideas A.J.’s ever considered • Doing good as a writer • And much more. Chapters: • Rob’s intro (00:00:00) • The interview begins (00:01:51) • Puzzles (00:05:41) • Radical honesty (00:12:18) • The Year of Living Biblically (00:24:17) • Thanks A Thousand (00:38:04) • Drop Dead Healthy (00:49:22) • Blackmailing yourself (00:57:46) • The Know-It-All (01:03:00) • Effective altruism (01:31:38) • Longtermism (01:55:35) • It’s All Relative (02:01:00) • Journalism (02:10:06) • Writing careers (02:17:15) • Rob’s outro (02:34:37) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:38:47 · 01/06/2020
#78 – Danny Hernandez on forecasting and the drivers of AI progress

#78 – Danny Hernandez on forecasting and the drivers of AI progress

Companies use about 300,000 times more computation training the best AI systems today than they did in 2012, and algorithmic innovations have also made them 25 times more efficient at the same tasks. These are the headline results of two recent papers — AI and Compute and AI and Efficiency — from the Foresight Team at OpenAI. In today's episode I spoke with one of the authors, Danny Hernandez, who joined OpenAI after helping develop better forecasting methods at Twitch and Open Philanthropy. Danny and I talk about how to understand his team's results and what they mean (and don't mean) for how we should think about progress in AI going forward. Links to learn more, summary and full transcript. Debates around the future of AI can sometimes be pretty abstract and theoretical. Danny hopes that providing rigorous measurements of some of the inputs to AI progress so far can help us better understand what causes that progress, as well as ground debates about the future of AI in a better shared understanding of the field. If this research sounds appealing, you might be interested in applying to join OpenAI's Foresight team — they're currently hiring research engineers. In the interview, Danny and I (Arden Koehler) also discuss a range of other topics, including: • The question of which experts to believe • Danny's journey to working at OpenAI • The usefulness of "decision boundaries" • The importance of Moore's law for people who care about the long-term future • What OpenAI's Foresight Team's findings might imply for policy • The question of whether progress in the performance of AI systems is linear • The safety teams at OpenAI and who they're looking to hire • One idea for finding someone to guide your learning • The importance of hardware expertise for making a positive impact Chapters: • Rob’s intro (00:00:00) • The interview begins (00:01:29) • Forecasting (00:07:11) • Improving the public conversation around AI (00:14:41) • Danny’s path to OpenAI (00:24:08) • Calibration training (00:27:18) • AI and Compute (00:45:22) • AI and Efficiency (01:09:22) • Safety teams at OpenAI (01:39:03) • Careers (01:49:46) • AI hardware as a possible path to impact (01:55:57) • Triggers for people’s major decisions (02:08:44) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
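For anyone who wants to sanity-check those headline multipliers, here is a back-of-the-envelope conversion into implied doubling and halving times. The 300,000x and 25x figures are the ones quoted above; the time windows are my own assumptions, so the outputs are only illustrative and are not calculations from the papers themselves.

```python
# Rough arithmetic on the headline figures quoted above. The 300,000x compute
# figure and 25x efficiency figure come from the episode description; the time
# windows are my assumptions, so treat the outputs as illustrative only.
import math

def doubling_time_months(multiplier, years):
    """Months per doubling implied by a total multiplier over a given window."""
    doublings = math.log2(multiplier)
    return (years * 12) / doublings

compute_growth = 300_000   # more compute used on the largest training runs
efficiency_gain = 25       # less compute needed to reach the same performance

print(f"Compute: ~{doubling_time_months(compute_growth, 6):.1f} months per doubling "
      "(assuming a ~6-year window from 2012)")
print(f"Efficiency: ~{doubling_time_months(efficiency_gain, 7):.1f} months per halving "
      "(assuming a ~7-year window from 2012)")
```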
02:11:37 · 22/05/2020
#77 – Marc Lipsitch on whether we're winning or losing against COVID-19

#77 – Marc Lipsitch on whether we're winning or losing against COVID-19

In March Professor Marc Lipsitch — Director of Harvard's Center for Communicable Disease Dynamics — abruptly found himself a global celebrity, his social media following growing 40-fold and journalists knocking down his door, as everyone turned to him for information they could trust. Here he lays out where the fight against COVID-19 stands today, why he's open to deliberately giving people COVID-19 to speed up vaccine development, and how we could do better next time. As Marc tells us, island nations like Taiwan and New Zealand are successfully suppressing SARS-CoV-2. But everyone else is struggling. Links to learn more, summary and full transcript. Even Singapore, with plenty of warning and one of the best test and trace systems in the world, lost control of the virus in mid-April after successfully holding back the tide for 2 months. This doesn't bode well for how the US or Europe will cope as they ease their lockdowns. It also suggests it would have been exceedingly hard for China to stop the virus before it spread overseas. But sadly, there's no easy way out. The original estimates of COVID-19's infection fatality rate, of 0.5-1%, have turned out to be basically right. And the latest serology surveys indicate only 5-10% of people in countries like the US, UK and Spain have been infected so far, leaving us far short of herd immunity. To get there, even these worst affected countries would need to endure something like ten times the number of deaths they have so far. Marc has one good piece of news: research suggests that most of those who get infected do indeed develop immunity, for a while at least. To escape the COVID-19 trap sooner rather than later, Marc recommends we go hard on all the familiar options — vaccines, antivirals, and mass testing — but also open our minds to creative options we've so far left on the shelf. Despite the importance of his work, even now the training and grant programs that produced the community of experts Marc is a part of are shrinking. We look at a new article he's written about how to instead build and improve the field of epidemiology, so humanity can respond faster and smarter next time we face a disease that could kill millions and cost tens of trillions of dollars. We also cover: • How listeners might contribute as future contagious disease experts, or donors to current projects • How we can learn from cross-country comparisons • Modelling that has gone wrong in an instructive way • What governments should stop doing • How people can figure out who to trust, and who has been most on the mark this time • Why Marc supports infecting people with COVID-19 to speed up the development of a vaccine • How we can ensure there's population-level surveillance early during the next pandemic • Whether people from other fields trying to help with COVID-19 have done more good than harm • Whether it's experts in diseases, or experts in forecasting, who produce better disease forecasts Chapters: • Rob’s intro (00:00:00) • The interview begins (00:01:45) • Things Rob wishes he knew about COVID-19 (00:05:23) • Cross-country comparisons (00:10:53) • Any government activities we should stop? (00:21:24) • Lessons from COVID-19 (00:33:31) • Global catastrophic biological risks (00:37:58) • Human challenge trials (00:43:12) • Disease surveillance (00:50:07) • Who should we trust? (00:58:12) • Epidemiology as a field (01:13:05) • Careers (01:31:28) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
01:37:05 · 18/05/2020
Article: Ways people trying to do good accidentally make things worse, and how to avoid them

Article: Ways people trying to do good accidentally make things worse, and how to avoid them

Today’s release is the second experiment in making audio versions of our articles. The first was a narration of Greg Lewis’ terrific problem profile on ‘Reducing global catastrophic biological risks’, which you can find on the podcast feed just before episode #74 - that is, our interview with Greg about the piece. If you want to check out the links in today’s article, you can find those here. And if you have feedback on these, positive or negative, it’d be great if you could email us at [email protected]
26:46 · 12/05/2020
#76 – Tara Kirk Sell on misinformation, who's done well and badly, & what to reopen first

#76 – Tara Kirk Sell on misinformation, who's done well and badly, & what to reopen first

Amid a rising COVID-19 death toll, and looming economic disaster, we’ve been looking for good news — and one thing we're especially thankful for is the Johns Hopkins Center for Health Security (CHS). CHS focuses on protecting us from major biological, chemical or nuclear disasters, through research that informs governments around the world. While this pandemic surprised many, just last October the Center ran a simulation of a 'new coronavirus' scenario to identify weaknesses in our ability to quickly respond. Their expertise has given them a key role in figuring out how to fight COVID-19. Today’s guest, Dr Tara Kirk Sell, did her PhD in policy and communication during disease outbreaks, and has worked at CHS for 11 years on a range of important projects. • Links to learn more, summary and full transcript. Last year she was a leader on Collective Intelligence for Disease Prediction, designed to sound the alarm about upcoming pandemics before others are paying attention. Incredibly, the project almost closed in December, with COVID-19 just starting to spread around the world — but received new funding that allowed the project to respond quickly to the emerging disease. She also contributed to a recent report attempting to explain the risks of specific types of activities resuming when COVID-19 lockdowns end. We can't achieve zero risk — so differentiating activities on a spectrum is crucial. Choosing wisely can help us lead more normal lives without reviving the pandemic. Dance clubs will have to stay closed, but hairdressers can adapt to minimise transmission, and Tara, who happens to be an Olympic silver-medalist in swimming, suggests outdoor non-contact sports could resume soon without much risk. Her latest project deals with the challenge of misinformation during disease outbreaks. Analysing the Ebola communication crisis of 2014, they found that even trained coders with public health expertise sometimes needed help to distinguish between true and misleading tweets — showing the danger of a continued lack of definitive information surrounding a virus and how it’s transmitted. The challenge for governments is not simple. If they acknowledge how much they don't know, people may look elsewhere for guidance. But if they pretend to know things they don't, the result can be a huge loss of trust. Despite their intense focus on COVID-19, researchers at CHS know that this is no one-off event. Many aspects of our collective response this time around have been alarmingly poor, and it won’t be long before Tara and her colleagues need to turn their minds to next time. You can now donate to CHS through Effective Altruism Funds. Donations made through EA Funds are tax-deductible in the US, the UK, and the Netherlands. Tara and Rob also discuss: • Who has overperformed and underperformed expectations during COVID-19? • When are people right to mistrust authorities? • The media’s responsibility to be right • What policy changes should be prioritised for next time • Should we prepare for future pandemics while COVID-19 is still going? • The importance of keeping non-COVID health problems in mind • The psychological difference between staying home voluntarily and being forced to • Mistakes that we in the general public might be making • Emerging technologies with the potential to reduce global catastrophic biological risks Chapters: • Rob’s intro (00:00:00) • The interview begins (00:01:43) • Misinformation (00:05:07) • Who has done well during COVID-19? (00:22:19) • Guidance for governors on reopening (00:34:05) • Collective Intelligence for Disease Prediction project (00:45:35) • What else is CHS trying to do to address the pandemic? (00:59:51) • Deaths are not the only health impact of importance (01:05:33) • Policy change for future pandemics (01:10:57) • Emerging technologies with potential to reduce global catastrophic biological risks (01:22:37) • Careers (01:38:52) • Good news about COVID-19 (01:44:23) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
01:53:00 · 08/05/2020
#75 – Michelle Hutchinson on what people most often ask 80,000 Hours

#75 – Michelle Hutchinson on what people most often ask 80,000 Hours

Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on their most plausible paths, the key uncertainties they face in choosing between them, and provide resources, pointers, and introductions to help them in those paths.  I (Michelle Hutchinson) joined the team a couple of years ago after working at Oxford's Global Priorities Institute, and these days I'm 80,000 Hours' Head of Advising. Since then, chatting to hundreds of people about their career plans has given me some idea of the kinds of things it’s useful for people to hear about when thinking through their careers. So we thought it would be useful to discuss some on the show for everyone to hear. • Links to learn more, summary and full transcript. • See over 500 vacancies on our job board. • Apply for one-on-one career advising. Among other common topics, we cover: • Why traditional careers advice involves thinking through what types of roles you enjoy followed by which of those are impactful, while we recommend going the other way: ranking roles on impact, and then going down the list to find the one you think you’d most flourish in. • That if you’re pitching your job search at the right level of role, you’ll need to apply to a large number of different jobs. So it's wise to broaden your options, by applying for both stretch and backup roles, and not over-emphasising a small number of organisations. • Our suggested process for writing a longer term career plan: 1. shortlist your best medium to long-term career options, then 2. figure out the key uncertainties in choosing between them, and 3. map out concrete next steps to resolve those uncertainties. • Why many listeners aren't spending enough time finding out about what the day-to-day work is like in paths they're considering, or reaching out to people for advice or opportunities. • The difficulty of maintaining the ambition to increase your social impact, while also being proud of and motivated by what you're already accomplishing. I also thought it might be useful to give people a sense of what I do and don’t do in advising calls, to help them figure out if they should sign up for it. If you’re wondering whether you’ll benefit from advising, bear in mind that it tends to be more useful to people: 1. With similar views to 80,000 Hours on what the world’s most pressing problems are, because we’ve done most research on the problems we think it’s most important to address. 2. Who don’t yet have close connections with people working at effective altruist organisations. 3. Who aren’t strongly locationally constrained. If you’re unsure, it doesn’t take long to apply, and a lot of people say they find the application form itself helps them reflect on their plans. We’re particularly keen to hear from people from under-represented backgrounds. Also in this episode: • I describe mistakes I’ve made in advising, and career changes made by people I’ve spoken with. • Rob and I argue about what risks to take with your career, like when it’s sensible to take a study break, or start from the bottom in a new career path. • I try to forecast how I’ll change after I have a baby, Rob speculates wildly on what motherhood is like, and Arden and I mercilessly mock Rob. 
Chapters: • Rob’s intro (00:00:00) • The interview begins (00:02:50) • The process of advising (00:09:34) • We’re not just excited about our priority paths (00:14:37) • Common things Michelle says during advising (00:18:13) • Interpersonal comparisons (00:31:18) • Thinking about current impact (00:40:31) • Applying to different kinds of orgs (00:42:29) • Difference in impact between jobs / causes (00:49:04) • Common mistakes (00:55:40) • Career change stories (01:11:44) • When is advising really useful for people? (01:24:28) • Managing risk in careers (01:55:29) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:13:06 · 28/04/2020
#74 – Dr Greg Lewis on COVID-19 & catastrophic biological risks

#74 – Dr Greg Lewis on COVID-19 & catastrophic biological risks

Our lives currently revolve around the global emergency of COVID-19; you’re probably reading this while confined to your house, as the death toll from the worst pandemic since 1918 continues to rise. The question of how to tackle COVID-19 has been foremost in the minds of many, including here at 80,000 Hours. Today's guest, Dr Gregory Lewis, acting head of the Biosecurity Research Group at Oxford University's Future of Humanity Institute, puts the crisis in context, explaining how COVID-19 compares to other diseases, pandemics of the past, and possible worse crises in the future. COVID-19 is a vivid reminder that we are unprepared to contain or respond to new pathogens. How would we cope with a virus that was even more contagious and even more deadly? Greg's work focuses on these risks: outbreaks that threaten our entire future through an unrecoverable collapse of civilisation, or even the extinction of humanity. Links to learn more, summary and full transcript. If such a catastrophe were to occur, Greg believes it’s more likely to be caused by accidental or deliberate misuse of biotechnology than by a pathogen developed by nature. There are a few direct causes for concern: humans now have the ability to produce some of the most dangerous diseases in history in the lab; technological progress may enable the creation of pathogens which are nastier than anything we see in nature; and most biotechnology has yet to even be conceived, so we can’t assume all the dangers will be familiar. This is grim stuff, but it needn’t be paralysing. In the years following COVID-19, humanity may be inspired to better prepare for the existential risks of the next century: improving our science, updating our policy options, and enhancing our social cohesion. COVID-19 is a tragedy of stunning proportions, and its immediate threat is undoubtedly worthy of significant resources. But we will get through it; if a future biological catastrophe poses an existential risk, we may not get a second chance. It is therefore vital to learn every lesson we can from this pandemic, and provide our descendants with the security we wish for ourselves. Today’s episode is the hosting debut of our Strategy Advisor, Howie Lempel. 80,000 Hours has focused on COVID-19 for the last few weeks and published over ten pieces about it, and a substantial benefit of this interview was to help inform our own views. As such, at times this episode may feel like eavesdropping on a private conversation, and it is likely to be of most interest to people primarily focused on making the long-term future go as well as possible. In this episode, Howie and Greg cover: • Reflections on the first few months of the pandemic • Common confusions around COVID-19 • How COVID-19 compares to other diseases • What types of interventions have been available to policymakers • Arguments for and against working on global catastrophic biological risks (GCBRs) • How to know if you’re a good fit to work on GCBRs • The response of the effective altruism community, as well as 80,000 Hours in particular, to COVID-19 • And much more. Chapters: • Rob’s intro (00:00:00) • The interview begins (00:03:15) • What is COVID-19? (00:16:05) • If you end up infected, how severe is it likely to be? (00:19:21) • How does COVID-19 compare to other diseases? (00:25:42) • Common confusions around COVID-19 (00:32:02) • What types of interventions were available to policymakers? (00:46:20) • Nonpharmaceutical Interventions (01:04:18) • What can you do personally? (01:18:25) • Reflections on the first few months of the pandemic (01:23:46) • Global catastrophic biological risks (GCBRs) (01:26:17) • Counterarguments to working on GCBRs (01:45:56) • How do GCBRs compare to other problems? (01:49:05) • Careers (01:59:50) • The response of the effective altruism community to COVID-19 (02:11:42) • The response of 80,000 Hours to COVID-19 (02:28:12) Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:37:17 · 17/04/2020
Article: Reducing global catastrophic biological risks

Article: Reducing global catastrophic biological risks

In a few days we'll be putting out a conversation with Dr Greg Lewis, who studies how to prevent global catastrophic biological risks at Oxford's Future of Humanity Institute. Greg also wrote a new problem profile on that topic for our website, and reading that is a good lead-in to our interview with him. So in a bit of an experiment we decided to make this audio version of that article, narrated by the producer of the 80,000 Hours Podcast, Keiran Harris. We’re thinking about having audio versions of other important articles we write, so it’d be great if you could let us know if you’d like more of these. You can email us your view at [email protected]. If you want to check out all of Greg’s graphs and footnotes that we didn’t include, and get links to learn more about GCBRs, you can find those here. And if you want to read more about COVID-19, the 80,000 Hours team has produced a fantastic package of 10 pieces about how to stop the pandemic. You can find those here.
01:04:15 · 15/04/2020
Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might do to help

Emergency episode: Rob & Howie on the menace of COVID-19, and what both governments & individuals might do to help

From home isolation Rob and Howie just recorded an episode on: 1. How many could die in the crisis, and the risk to your health personally. 2. What individuals might be able to do to help tackle the coronavirus crisis. 3. What we suspect governments should do in response to the coronavirus crisis. 4. The importance of personally not spreading the virus, the properties of the SARS-CoV-2 virus, and how you can personally avoid it. 5. The many places society screwed up, how we can avoid this happening again, and why to be optimistic. We have rushed this episode out to share information as quickly as possible in a fast-moving situation. If you would prefer to read, you can find the transcript here. We list a wide range of valuable resources and links in the blog post attached to the show (over 60, including links to projects you can join). See our 'problem profile' on global catastrophic biological risks for information on these grave threats and how you can contribute to preventing them. We have also just added a COVID-19 landing page on our site. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Producer: Keiran Harris.
01:52:12 · 19/03/2020
#73 – Phil Trammell on patient philanthropy and waiting to do good

#73 – Phil Trammell on patient philanthropy and waiting to do good

To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong. If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you'd have $125,000 to give away instead. And in 200 years you'd have $17 million. This astonishing fact has driven today's guest, economics researcher Philip Trammell at Oxford's Global Priorities Institute, to investigate the case for and against so-called 'patient philanthropy' in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now. He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they'll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn't have known distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn't even know about germs, and almost nothing in medicine was justified by science. ADDED: Does the COVID-19 emergency mean we should actually use resources right now? See Phil's first thoughts on this question here. • Links to learn more, summary and full transcript. What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways? And there's a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It's possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own. Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse? Or might it not drift from its original goals, eventually just serving the interests of its distant future trustees, rather than the noble pursuits you originally intended? Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes Scholarships' initial charter, which limited it to 'white Christian men'. Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good. Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today's conversation with researcher Phil Trammell and my 80,000 Hours colleague Howie Lempel, we try to answer that, and also discuss: • Real attempts at patient philanthropy in history and how they worked out • Should we have a mixed strategy, where some altruists are patient and others impatient? • Which causes most need money now, and which later? • What is the research frontier here? • What does this all mean for what listeners should do differently?
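As a quick sanity check on those opening numbers, here is the compound-growth arithmetic written out. The 5% return is the figure quoted above; the blurb's dollar figures are approximate, so the exact outputs differ slightly from them.

```python
# Compound growth behind the patient philanthropy example above: $1,000
# invested at an average 5% annual return. The episode description quotes
# ~$125,000 and ~$17 million; at exactly 5% compounded annually the arithmetic
# gives slightly different figures, so treat the quoted numbers as rounded.
def future_value(principal, annual_return, years):
    """Value of an investment compounding once per year."""
    return principal * (1 + annual_return) ** years

for years in (100, 200):
    print(f"${future_value(1_000, 0.05, years):,.0f} after {years} years")
# -> roughly $131,501 after 100 years and $17,292,581 after 200 years
```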
Chapters: • Rob’s intro (00:00:00) • The interview begins (00:02:23) • Consequences for getting this question wrong (00:06:03) • What have people had to say about this question in the past? (00:07:22) • The case for saving (00:11:51) • Hundred year leases (00:29:28) • Should we be concerned about one group taking control of the world? (00:34:51) • Finding better interventions in the future (00:37:20) • The hinge of history (00:43:46) • Does uncertainty lead us to wanting to wait? (01:01:52) • Counterarguments (01:11:36) • What about groups who have a particular sense of urgency? (01:40:46) • How much should we actually save? (02:01:35) • Implications for career choices (02:19:49) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:35:22 · 17/03/2020
#72 - Toby Ord on the precipice and humanity's potential futures

#72 - Toby Ord on the precipice and humanity's potential futures

This week Oxford academic and 80,000 Hours trustee Dr Toby Ord released his new book The Precipice: Existential Risk and the Future of Humanity. It's about how our long-term future could be better than almost anyone believes, but also how humanity's recklessness is putting that future at grave risk — in Toby's reckoning, a 1 in 6 chance of being extinguished this century. I loved the book and learned a great deal from it (buy it here, US and audiobook release March 24). While preparing for this interview I copied out 87 facts that were surprising, shocking or important. Here's a sample of 16: 1. The probability of a supervolcano causing a civilisation-threatening catastrophe in the next century is estimated to be 100x that of asteroids and comets combined. 2. The Biological Weapons Convention — a global agreement to protect humanity — has just four employees, and a smaller budget than an average McDonald’s. 3. In 2008 a 'gamma ray burst' reached Earth from another galaxy, 10 billion light years away. It was still bright enough to be visible to the naked eye. We aren't sure what generates gamma ray bursts but one cause may be two neutron stars colliding. 4. Before detonating the first nuclear weapon, scientists in the Manhattan Project feared that the high temperatures in the core, unprecedented for Earth, might be able to ignite the hydrogen in water. This would set off a self-sustaining reaction that would burn off the Earth’s oceans, killing all life above ground. They thought this was unlikely, but many atomic scientists feared their calculations could be missing something. As far as we know, the US President was never informed of this possibility, but similar risks were one reason Hitler stopped… N.B. I've had to cut off this list as we only get 4,000 characters in these show notes, so: Click here to read the whole list, see a full transcript, and find related links. And if you like the list, you can get a free copy of the introduction and first chapter by joining our mailing list. While I've been studying these topics for years and known Toby for the last eight, a remarkable amount of what's in The Precipice was new to me. Of course the book isn't a series of isolated amusing facts, but rather a systematic review of the many ways humanity's future could go better or worse, how we might know about them, and what might be done to improve the odds. And that's how we approach this conversation, first talking about each of the main threats, then how we can learn about things that have never happened before, then finishing with what a great future for humanity might look like and how it might be achieved. Toby is a famously good explainer of complex issues — a bit of a modern Carl Sagan character — so as expected this was a great interview, and one which Arden Koehler and I barely even had to work for. Some topics Arden and I ask about include: • What Toby changed his mind about while writing the book • Are people exaggerating when they say that climate change could actually end civilization? • What can we learn from historical pandemics? • Toby’s estimate of unaligned AI causing human extinction in the next century • Is this century the most important time in human history, or is that a narcissistic delusion? • Competing visions for humanity's ideal future • And more. Get this episode by subscribing: type '80,000 Hours' into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
03:14:17 · 07/03/2020
#71 - Benjamin Todd on the key ideas of 80,000 Hours

#71 - Benjamin Todd on the key ideas of 80,000 Hours

The 80,000 Hours Podcast is about “the world’s most pressing problems and how you can use your career to solve them”, and in this episode we tackle that question in the most direct way possible. Last year we published a summary of all our key ideas, which links to many of our other articles, and which we are aiming to keep updated as our opinions shift. All of us added something to it, but the single biggest contributor was our CEO and today's guest, Ben Todd, who founded 80,000 Hours along with Will MacAskill back in 2012. This key ideas page is the most read on the site. By itself it can teach you a large fraction of the most important things we've discovered since we started investigating high impact careers. • Links to learn more, summary and full transcript. But it's perhaps more accurate to think of it as a mini-book, as it weighs in at over 20,000 words. Fortunately it's designed to be highly modular and it's easy to work through it over multiple sessions, scanning over the articles it links to on each topic. Perhaps though, you'd prefer to absorb our most essential ideas in conversation form, in which case this episode is for you. If you want to have a big impact with your career, and you say you're only going to read one article from us, we recommend you read our key ideas page. And likewise, if you're only going to listen to one of our podcast episodes, it should be this one. We have fun and set a strong pace, running through: • Common misunderstandings of our advice • A high level overview of what 80,000 Hours generally recommends • Our key moral positions • What are the most pressing problems to work on and why? • Which careers effectively contribute to solving those problems? • Central aspects of career strategy like how to weigh up career capital, personal fit, and exploration • As well as plenty more. One benefit of this podcast over the article is that we can more easily communicate uncertainty, and dive into the things we're least sure about, or didn’t yet cover within the article. Note though that what’s in the article is more precisely stated, our advice is going to keep shifting, and we're aiming to keep the key ideas page current as our thinking evolves over time. This episode was recorded in November 2019, so if you notice a conflict between the page and this episode in the future, go with the page! Get the episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
02:57:29 · 02/03/2020
Arden & Rob on demandingness, work-life balance & injustice (80k team chat #1)

Arden & Rob on demandingness, work-life balance & injustice (80k team chat #1)

Today's bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balance, and emotional reactions to injustice. Arden is about to graduate with a philosophy PhD from New York University, so naturally we dive right into some challenging implications of utilitarian philosophy and how it might be applied to real life. Issues we talk about include: • If you’re not going to be completely moral, should you try being a bit more ethical, or give up? • Should you feel angry if you see an injustice, and if so, why? • How much should we ask people to live frugally? So far the feedback on the post-episode chats that we've done has been positive, so we thought we'd go ahead and try out this freestanding one. But fair warning: it's among the more difficult episodes to follow, and probably not the best one to listen to first, as you'll benefit from having more context! If you'd like to listen to more of Arden you can find her in episode 67, David Chalmers on the nature and ethics of consciousness, or episode 66, Peter Singer on being provocative, EA, and how his moral views have changed. Here's more information on some of the issues we touch on: • Consequentialism on Wikipedia • Appropriate dispositions on the Stanford Encyclopaedia of Philosophy • Demandingness objection on Wikipedia • And a paper on epistemic normativity. ——— I mention the call for papers of the Academic Workshop on Global Priorities in the introduction — you can learn more here. And finally, Toby Ord — one of our founding Trustees and a Senior Research Fellow in Philosophy at Oxford University — has his new book The Precipice: Existential Risk and the Future of Humanity coming out next week. I've read it and very much enjoyed it. Find out where you can pre-order it here. We'll have an interview with him coming up soon.
44:12 · 25/02/2020
#70 - Dr Cassidy Nelson on the 12 best ways to stop the next pandemic (and limit nCoV)

#70 - Dr Cassidy Nelson on the 12 best ways to stop the next pandemic (and limit nCoV)

nCoV is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places. But bad though it is, it's much closer to a warning shot than a worst case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both. Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can't do enough to reduce their spread. And we lack vaccines or drug treatments for at least a year, if they ever arrive at all. • Links to learn more, summary and full transcript. This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like. In today's episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University's Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we're to keep the risk at acceptable levels. The ideas are: Science 1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go. 2. Fund research into faster ‘platform’ methods for going from pathogen to vaccine, perhaps using innovation prizes. 3. Fund R&D into broad-spectrum drugs, especially antivirals, similar to how we have generic antibiotics against multiple types of bacteria. Response 4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely. 5. Rigorously evaluate in what situations travel bans are warranted. (They're more often counterproductive.) 6. Coax countries into more rapidly sharing their medical data, so that during an outbreak the disease can be understood and countermeasures deployed as quickly as possible. 7. Set up genetic surveillance in hospitals, public transport and elsewhere, to detect new pathogens before an outbreak — or even before patients develop symptoms. 8. Run regular tabletop exercises within governments to simulate how a pandemic response would play out. Oversight 9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens. 10. Figure out how to govern DNA synthesis businesses, to make it harder to mail order the DNA of a dangerous pathogen. 11. Require full cost-benefit analysis of 'dual-use' research projects that can generate global risks. 12. And finally, to maintain momentum, it's necessary to clearly assign responsibility for the above to particular individuals and organisations. These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem. In the episode Rob and Cassidy also talk about: • How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential. • The pros, and significant cons, of travel restrictions. • Whether the same policies work for natural and anthropogenic pandemics. • Ways listeners can pursue a career in biosecurity.
• Where we stand with nCoV as of today. Chapters: • Rob’s intro (00:00:00) • The interview begins (00:03:27) • Where we stand with nCoV today (00:07:24) • Policy idea 1: A drastic change to diagnostic testing (00:34:58) • Policy idea 2: Vaccine platforms (00:47:08) • Policy idea 3: Broad-spectrum therapeutics (00:54:48) • Policy idea 4: Develop a national plan for responding to a severe pandemic, regardless of the cause (01:02:15) • Policy idea 5: A different approach to travel bans (01:15:59) • Policy idea 6: Data sharing (01:16:48) • Policy idea 7: Prevention (01:24:45) • Policy idea 8: Transparency around lab accidents (01:33:58) • Policy idea 9: DNA synthesis screening (01:39:22) • Policy idea 10: Dual Use Research oversight (01:48:47) • Policy idea 11: Pandemic tabletop exercises (02:00:00) • Policy idea 12: Coordination (02:12:20) Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Transcriptions: Zakee Ulhaq.
02:26:33 · 13/02/2020
#69 – Jeffrey Ding on China, its AI dream, and what we get wrong about both

#69 – Jeffrey Ding on China, its AI dream, and what we get wrong about both

The State Council of China's 2017 AI plan was the starting point of China’s AI planning; China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; and there is little to no discussion of issues of AI ethics and safety in China. How many of these ideas have you heard? In his paper Deciphering China's AI Dream, today's guest, PhD student Jeff Ding, outlines why he believes none of these claims are true. • Links to learn more, summary and full transcript. • What’s the best charity to donate to? He first places China’s new AI strategy in the context of its past science and technology plans, as well as other countries’ AI plans. What is China actually doing in the space of AI development? Jeff emphasises that China's AI strategy did not appear out of nowhere with the 2017 state council AI development plan, which attracted a lot of overseas attention. Rather that was just another step forward in a long trajectory of increasing focus on science and technology. It's connected with a plan to develop an 'Internet of Things', and linked to a history of strategic planning for technology in areas like aerospace and biotechnology. And it was not just the central government that was moving in this space; companies were already pushing forward in AI development, and local level governments already had their own AI plans. You could argue that the central government was following their lead in AI more than the reverse. What are the different levers that China is pulling to try to spur AI development? Here, Jeff wanted to challenge the myth that China's AI development plan is based on a monolithic central plan requiring people to develop AI. In fact, bureaucratic agencies, companies, academic labs, and local governments each set up their own strategies, which sometimes conflict with the central government. Are China's AI capabilities especially impressive? In the paper Jeff develops a new index to measure and compare the US and China's progress in AI. Jeff’s AI Potential Index — which incorporates trends and capabilities in data, hardware, research and talent, and the commercial AI ecosystem — indicates China’s AI capabilities are about half those of America. His measure, though imperfect, dispels the notion that China's AI capabilities have surpassed the US or make it the world's leading AI power. Following that 2017 plan, a lot of Western observers thought that to have a good national AI strategy we'd need to figure out how to play catch-up with China. Yet Chinese strategic thinkers and writers at the time actually thought that they were behind — because the Obama administration had issued a series of three white papers in 2016. Finally, Jeff turns to the potential consequences of China’s AI dream for issues of national security, economic development, AI safety and social governance.  He claims that, despite the widespread belief to the contrary, substantive discussions about AI safety and ethics are indeed emerging in China. For instance, a new book from Tencent’s Research Institute is proactive in calling for stronger awareness of AI safety issues.  In today’s episode, Rob and Jeff go through this widely-discussed report, and also cover:  • The best analogies for thinking about the growing influence of AI  • How do prominent Chinese figures think about AI?  • Coordination with China • China’s social credit system  • Suggestions for people who want to become professional China specialists  • And more. 
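To make the idea of a composite index a little more concrete, here is an illustrative weighted-average sketch. The component categories echo the ones listed above (data, hardware, research and talent, commercial ecosystem), but the weights and sub-scores below are placeholders invented for the example, not the values in Jeff's actual AI Potential Index.

```python
# Illustrative sketch of a composite "AI potential" style index. The component
# categories mirror those mentioned above; the weights and sub-scores are
# made-up placeholders, not the values from Jeff Ding's actual index.
WEIGHTS = {"data": 0.25, "hardware": 0.25, "research_talent": 0.25, "commercial": 0.25}

SCORES = {
    # Each sub-score is normalised so the leading country in that category = 1.0.
    "US":    {"data": 1.0, "hardware": 1.0, "research_talent": 1.0, "commercial": 1.0},
    "China": {"data": 0.8, "hardware": 0.3, "research_talent": 0.4, "commercial": 0.5},
}

def index(country):
    """Weighted average of a country's normalised sub-scores."""
    return sum(WEIGHTS[k] * SCORES[country][k] for k in WEIGHTS)

us, china = index("US"), index("China")
print(f"US index: {us:.2f}, China index: {china:.2f}, ratio: {china / us:.0%}")
# With these placeholder numbers China comes out at half the US score, which is
# the rough "about half" conclusion described above.
```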
Chapters: • Rob’s intro (00:00:00) • The interview begins (00:01:02) • Deciphering China’s AI Dream (00:04:17) • Analogies for thinking about AI (00:12:30) • How do prominent Chinese figures think about AI? (00:16:15) • Cultural cliches in the West and China (00:18:59) • Coordination with China on AI (00:24:03) • Private companies vs. government research (00:28:55) • Compute (00:31:58) • China’s social credit system (00:41:26) • Relationship between China and other countries beyond AI (00:43:51) • Careers advice (00:54:40) • Jeffrey’s talk at EAG (01:16:01) • Rob’s outro (01:37:12) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
01:37:14 · 06/02/2020
Rob & Howie on what we do and don't know about 2019-nCoV

Rob & Howie on what we do and don't know about 2019-nCoV

Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, record an experimental bonus episode about the new 2019-nCoV virus. See this list of resources, including many discussed in the episode, to learn more. In the 1h15m conversation we cover: • What is it? • How many people have it? • How contagious is it? • What fraction of people who contract it die? • How likely is it to spread out of control? • What's the range of plausible fatalities worldwide? • How does it compare to other epidemics? • What don't we know and why? • What actions should listeners take, if any? • How should the complexities of the above be communicated by public health professionals? Here's a link to the hygiene advice from Laurie Garrett mentioned in the episode. Recorded 2 Feb 2020. The 80,000 Hours Podcast is produced by Keiran Harris.
01:18:44 · 03/02/2020
#68 - Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities

#68 - Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities

You’re given a box with a set of dice in it. If you roll an even number, a person's life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it? A committed consequentialist might say, "Sure! Free money!" But most will think it obvious that you should say no. You've only gotten a tiny benefit, in exchange for moral responsibility over whether other people live or die. And yet, according to today’s return guest, philosophy Prof Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others. • Links to learn more, summary and full transcript. • Job opportunities at the Global Priorities Institute. To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you’ve impacted at least 7,500 days — then, statistically speaking, you've probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you've changed the identity of a future person. That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further future conception events, and so on. After 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies. As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the 'new' people will cause car crashes that wouldn't have occurred in their absence, including crashes that prematurely kill people alive today. Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise. So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie, worth $10. Should you do it? This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers. Because most 'non-consequentialists' endorse an act/omission distinction… post truncated due to character limit, finish reading the full explanation here. So what's the best way to fix this strange conclusion? We discuss a few options, but the most promising might bring people a lot closer to full consequentialism than is immediately apparent. In this episode Will and I also cover: • Are we, or are we not, living in the most influential time in history? • The culture of the effective altruism community • Will's new lower estimate of the risk of human extinction • Why Will is now less focused on AI • The differences between Americans and Brits • Why feeling guilty about characteristics you were born with is crazy • And plenty more.
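For readers who want to see the arithmetic behind the "probably influenced a conception event" step, here is a back-of-the-envelope version using only the figures quoted above; it is a sketch of the blurb's own numbers, not a calculation from Will's paper.

```python
# Back-of-the-envelope arithmetic behind the identity-change claim above,
# using only the figures quoted in the episode description.
days_per_life = 30_000     # average life length in days
children_per_life = 2      # average number of children per person
days_affected = 7_500      # person-days whose schedule you perturb

# On average there is one conception event per (30,000 / 2) = 15,000 person-days,
# so perturbing at least 7,500 person-days shifts the timing of ~0.5 conception
# events in expectation -- roughly a coin-flip chance of changing at least one,
# and more if you've affected more than 7,500 days.
days_per_conception = days_per_life / children_per_life
expected_shifted = days_affected / days_per_conception
print(f"Expected conception events whose timing you change: {expected_shifted:.1f}")

# With ~200 million sperm in the running each time, any change in timing almost
# certainly means a different child is conceived, and so a different future person.
```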
Chapters: • Rob’s intro (00:00:00) • The interview begins (00:04:03) • The paralysis argument (00:15:42) • The case for strong longtermism (00:55:21) • Longtermism for risk-averse altruists (00:58:01) • Are we living in the most influential time in history? (01:14:37) • The risk of human extinction in the next hundred years (02:15:20) • Implications for the effective altruism community (02:50:03) • Culture of the effective altruism community (03:06:28) Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
03:25:36 · 24/01/2020
#44 Classic episode - Paul Christiano on finding real solutions to the AI alignment problem

#44 Classic episode - Paul Christiano on finding real solutions to the AI alignment problem

Rebroadcast: this episode was originally released in October 2018. Paul Christiano is one of the smartest people I know. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While challenging at times, I can strongly recommend listening — Paul works on AI himself and has an unusually thought-through view of how it will change the world. This is now the top resource I'm going to refer people to if they're interested in positively shaping the development of AI, and want to understand the problem better. Even though I'm familiar with Paul's writing I felt I was learning a great deal and am now in a better position to make a difference to the world. A few of the topics we cover are: • Why Paul expects AI to transform the world gradually rather than explosively and what that would look like • Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us • Why AI systems will probably be granted legal and property rights • How an advanced AI that doesn't share human goals could still have moral value • Why machine learning might take over science research from humans before it can do most other tasks • In which decade we should expect human labour to become obsolete, and how this should affect your savings plan. • Links to learn more, summary and full transcript. • Rohin Shah's AI alignment newsletter. Here's a situation we all regularly confront: you want to answer a difficult question, but aren't quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who *are* smart enough to figure it out. The bad news is that they disagree. If given plenty of time — and enough arguments, counterarguments and counter-counter-arguments between all the experts — should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over? In other words: does 'debate', in principle, lead to truth? According to Paul Christiano — researcher at the machine learning research lab OpenAI and legendary thinker in the effective altruism and rationality communities — this question is of more than mere philosophical interest. That's because 'debate' is a promising method of keeping artificial intelligence aligned with human goals, even if it becomes much more intelligent and sophisticated than we are. It's a method OpenAI is actively trying to develop, because in the long term it wants to train AI systems to make decisions that are too complex for any human to grasp, but without the risks that arise from a complete loss of human oversight. If AI-1 is free to choose any line of argument in order to attack the ideas of AI-2, and AI-2 always seems to successfully defend them, it suggests that every possible line of argument would have been unsuccessful. But does that mean that the ideas of AI-2 were actually right? It would be nice if the optimal strategy in debate were to be completely honest, provide good arguments, and respond to counterarguments in a valid way. But we don't know that's the case. The 80,000 Hours Podcast is produced by Keiran Harris.
03:51:14 · 15/01/2020
#33 Classic episode - Anders Sandberg on cryonics, solar flares, and the annual odds of nuclear war

#33 Classic episode - Anders Sandberg on cryonics, solar flares, and the annual odds of nuclear war

Rebroadcast: this episode was originally released in May 2018. Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded? According to Bryan Caplan in episode #32, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees. Like Stalin he has eyes for his own immortality - including an insurance plan that will cover the cost of cryogenically freezing himself after he dies - and thinks the technology to achieve it might be around the corner. Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University. Full transcript of the conversation, summary, and links to learn more. The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever - among many other questions. Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford. His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base. Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including: • Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done? • How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened? • If biomedical research lets us slow down ageing would culture stagnate under the crushing weight of centenarians? • What long-shot drugs can people take in their 70s to stave off death? • Can science extend human (waking) life by cutting our need to sleep? • How bad would it be if a solar flare took down the electricity grid? Could it happen? • If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it? • Will lifelike robots make us more inclined to dehumanise one another? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
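On the question of how survivorship bias distorts our estimates of nuclear risk, here's a quick toy simulation added for illustration (the numbers are invented and it isn't from the episode): if observers only exist in histories where no war happened, the history we actually get to observe systematically understates the true annual risk.

```python
import random

# Toy model of observation selection ("we wouldn't be alive to think about it"):
# assume some true annual probability of a civilisation-ending nuclear war, run
# many 75-year histories, and note that surviving observers — the only ones who
# get to do the estimating — have, by construction, seen zero wars.

TRUE_ANNUAL_RISK = 0.01   # made-up number purely for illustration
YEARS = 75                # roughly the nuclear era so far
TRIALS = 100_000

survived = 0
for _ in range(TRIALS):
    if all(random.random() > TRUE_ANNUAL_RISK for _ in range(YEARS)):
        survived += 1

print(f"Chance a history like ours survives: {survived / TRIALS:.2%}")
# Every surviving history contains an observer who has seen 75 war-free years,
# so a naive frequency estimate from their own past is 0% regardless of the
# true risk — the bias Anders tries to correct for.
```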
01:25:1108/01/2020
#17 Classic episode - Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster

#17 Classic episode - Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster

Rebroadcast: this episode was originally released in January 2018. Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents of universal democracy and international cooperation. He also thought that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking of the moral worth of different races. Throughout history we've consistently believed, as common sense, truly horrifying things by today's standards. According to University of Oxford Professor Will MacAskill, it's extremely likely that we're in the same boat today. If we accept that we're probably making major moral errors, how should we proceed? • Full transcript, key points & links to articles discussed in the show. If our morality is tied to common sense intuitions, we're probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide. Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism (EA) community. In this interview we discuss a wide range of topics: • How would we go about a 'long reflection' to fix our moral errors? • Will's forthcoming book on how you should reason and act if you don't know which moral theory is correct. What are the practical implications of so-called 'moral uncertainty'? • If we basically solve existential risks, what does humanity do next? • What are some of Will's most unusual philosophical positions? • What are the best arguments for and against utilitarianism? • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field? • What are some of the biases we should be aware of within academia? • What are some of the downsides of becoming a professor? • What are the merits of becoming a philosopher? • How does the media image of EA differ from the actual goals of the community? • What kinds of things would Will like to see the EA community do differently? • How much should we explore potentially controversial ideas? • How focused should we be on diversity? • What are the best arguments against effective altruism? Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
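For a taste of what reasoning under 'moral uncertainty' can look like, here's a tiny worked sketch of one prominent approach in this literature — weighting how good each option is under each moral theory by your credence in that theory ('maximising expected choiceworthiness'). The theories, credences, and scores below are invented for illustration, and this isn't a summary of Will's book.

```python
# Minimal sketch of "maximise expected choiceworthiness": weight how good each
# option is under each moral theory by your credence in that theory.
# Credences and scores below are made up purely for illustration.

credences = {"utilitarianism": 0.5, "rights_based": 0.3, "virtue_ethics": 0.2}

# How choiceworthy each option is according to each theory (arbitrary units,
# assuming the theories' scales have somehow been made comparable — itself a
# hard problem discussed in the moral uncertainty literature).
choiceworthiness = {
    "break promise to save five": {"utilitarianism": 10, "rights_based": -8, "virtue_ethics": 2},
    "keep promise":               {"utilitarianism": -4, "rights_based":  6, "virtue_ethics": 3},
}

def expected_choiceworthiness(option):
    return sum(credences[t] * choiceworthiness[option][t] for t in credences)

for option in choiceworthiness:
    print(option, expected_choiceworthiness(option))
# -> break promise: 0.5*10 + 0.3*-8 + 0.2*2 = 3.0
#    keep promise:  0.5*-4 + 0.3*6  + 0.2*3 = 0.4
```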
01:52:39 · 31/12/2019
#67 – David Chalmers on the nature and ethics of consciousness

#67 – David Chalmers on the nature and ethics of consciousness

What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience. Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off. The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem': "Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?" Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely. So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different. • Links to learn more, summary and full transcript. • Advice on how to read our advice. • Anonymous answers on: bad habits, risk and failure. Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain. Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human? Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself. Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness.  This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter.  These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?  Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far.  
Chapters: • Rob's intro (00:00:00) • The interview begins (00:02:11) • Philosopher's survey (00:06:37) • Free will (00:13:37) • Survey correlations (00:20:06) • Progress in philosophy (00:35:01) • Simulations (00:51:30) • The problem of consciousness (01:13:01) • Dualism and panpsychism (01:26:52) • Is consciousness an illusion? (01:34:52) • Idealism (01:43:13) • Integrated information theory (01:51:08) • Moral status and consciousness (02:06:10) • Higher order views of consciousness (02:11:46) • The views of philosophers on eating meat (02:20:23) • Artificial consciousness (02:34:25) • The zombie and Vulcan trolley problems (02:38:43) • Illusionism and moral status (02:56:12) • Panpsychism and moral status (03:06:19) • Mind uploading (03:15:58) • Personal identity (03:22:51) • Virtual reality and the experience machine (03:28:56) • Singularity (03:42:44) • AI alignment (04:07:39) • Careers in academia (04:23:37) • Having fun disagreements (04:32:54) • Rob's outro (04:42:14) Producer: Keiran Harris.
04:41:50 · 16/12/2019
#66 – Peter Singer on being provocative, effective altruism, & how his moral views have changed

#66 – Peter Singer on being provocative, effective altruism, & how his moral views have changed

In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he'd actually released way back in 1979. It took a German translation ten years on for protests to kick off. According to Singer, he honestly didn't expect this view to be as provocative as it became, and he certainly wasn't aiming to stir up trouble and get attention. But after the protests and the increasing coverage of his work in German media, the previously flat sales of Practical Ethics shot up. And the negative attention he received ultimately led him to a weekly opinion column in The New York Times. • Singer's book The Life You Can Save has just been re-released as a 10th anniversary edition, available as a free e-book and audiobook, read by a range of celebrities. Get it here. • Links to learn more, summary and full transcript. Singer points out that as a result of this increased attention, many more people also read the rest of the book — which includes chapters with real potential to do good, covering global poverty, animal ethics, and other important topics. So should people actively try to court controversy with one view, in order to gain attention for another more important one? Perhaps sometimes, but controversy can also just have bad consequences. His critics may view him as someone who says whatever he thinks, hang the consequences, but Singer says that he gives public relations considerations plenty of thought. One example is that Singer opposes efforts to advocate for open borders. Not because he thinks a world with freedom of movement is a bad idea per se, but rather because it may help elect leaders like Mr Trump. Another is the focus of the effective altruism community. Singer certainly respects those who are focused on improving the long-term future of humanity, and thinks this is important work that should continue. But he's troubled by the possibility of extinction risks becoming the public face of the movement. He suspects there's a much narrower group of people who are likely to respond to that kind of appeal, compared to those who are drawn to work on global poverty or preventing animal suffering. And that to really transform philanthropy and culture more generally, the effective altruism community needs to focus on smaller donors with more conventional concerns. Rob is joined in this interview by Arden Koehler, the newest addition to the 80,000 Hours team, both for the interview and a post-episode discussion. They only had an hour with Peter, but also cover: • What does he think are the most plausible alternatives to consequentialism? • Is it more humane to eat wild-caught animals than farmed animals? • The re-release of The Life You Can Save • His most and least strategic career decisions • Population ethics, and other arguments for and against prioritising the long-term future • What led him to change his mind on significant questions in moral philosophy? • And more. In the post-episode discussion, Rob and Arden continue talking about: • The pros and cons of keeping EA as one big movement • Singer's thoughts on immigration • And consequentialism with side constraints. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq. Illustration of Singer: Matthias Seifarth.
02:01:21 · 05/12/2019
#65 – Ambassador Bonnie Jenkins on 8 years pursuing WMD arms control, & diversity in diplomacy

#65 – Ambassador Bonnie Jenkins on 8 years pursuing WMD arms control, & diversity in diplomacy

"…it started when the Soviet Union fell apart and there was a real desire to ensure security of nuclear materials and pathogens, and that scientists with [WMD-related] knowledge could get paid so that they wouldn't go to countries and sell that knowledge." Ambassador Bonnie Jenkins has had an incredible career in diplomacy and global security. Today she's a nonresident senior fellow at the Brookings Institution and president of Global Connections Empowering Global Change, where she works on global health, infectious disease and defence innovation. In 2017 she founded her own nonprofit, the Women of Color Advancing Peace, Security and Conflict Transformation (WCAPS). But in this interview we focus on her time as Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation. In that role, Bonnie coordinated the Department of State's work to prevent weapons of mass destruction (WMD) terrorism with programmes funded by other U.S. departments and agencies, as well as other countries. • Links to learn more, summary and full transcript. • Talks from over 100 other speakers at EA Global. • Having trouble with podcast 'chapters' on this episode? Please report any problems to keiran at 80000hours dot org. What was it like to be an ambassador focusing on an issue, rather than an ambassador of a country? Bonnie says the travel was exhausting. She could find herself in Africa one week, and Indonesia the next. She'd meet with folks going to New York for meetings at the UN one day, then hold her own meetings at the White House the next. Each event would have a distinct purpose. For one, she'd travel to Germany as a US Representative, talking about why the two countries should extend their partnership. For another, she could visit the Food and Agriculture Organization to talk about why they need to think more about biosecurity issues. No day was like the previous one. Bonnie was also a leading U.S. official in the launch and implementation of the Global Health Security Agenda, discussed at length in episode 27. Before returning to government in 2009, Bonnie served as program officer for U.S. Foreign and Security Policy at the Ford Foundation. She also served as counsel on the 9/11 Commission. Bonnie was the lead staff member conducting research and interviews, and preparing commission reports, on counterterrorism policies in the Office of the Secretary of Defense and on U.S. military plans targeting al-Qaeda before 9/11. And as if that all weren't curious enough, four years ago Bonnie decided to go vegan. We talk about her work so far as well as: • How listeners can start a career like hers • Mistakes made by Mr Obama and Mr Trump • Networking, the value of attention, and being a vegan in DC • And 2020 Presidential candidates. Chapters: • Rob's intro (00:00:00) • The interview begins (00:01:54) • What is Bonnie working on at the moment? (00:02:45) • Bonnie's time at the Department of State (00:04:08) • The history of Cooperative Threat Reduction work (00:08:48) • Biggest uncontrolled nuclear material threats today (00:11:36) • Biggest security issues in the world today (00:13:57) • The Biological Weapons Convention (00:17:52) • Projects Bonnie worked on that she's particularly proud of (00:20:55) • The day-to-day life of an Ambassador on an issue (00:23:03) • Biggest misunderstandings of the field (00:25:41) • How do we get more done in this area? (00:29:48) • The Global Health Security Agenda (00:32:52) • The implications for countries who give up WMDs (00:34:55) • The fallout from a change in government (00:38:40) • Listener submitted questions (00:39:39) • How might listeners be able to contribute to solving these problems with their own careers? (00:54:55) • Is Bonnie glad she went into the military early in her career? (01:06:25) • Networking in DC (01:12:27) • What are the downsides to pursuing a career like Bonnie's? (01:15:27) • Being a vegan in DC (01:16:47) • Women of Color Advancing Peace, Security and Conflict Transformation (01:19:15) • The value of attention in DC (01:28:25) • Any ways WCAPS could accidentally make things worse? (01:30:08) • Message for women of colour in the audience (01:33:05) • TV shows relevant to Bonnie's work (01:35:19) • Candidates for 2020 (01:36:57) The 80,000 Hours Podcast is produced by Keiran Harris.
01:40:32 · 19/11/2019
#64 – Bruce Schneier on how insecure electronic voting could break the United States — and surveillance without tyranny

#64 – Bruce Schneier on how insecure electronic voting could break the United States — and surveillance without tyranny

November 3, 2020, 10:32PM: CNN, NBC, and FOX report that Donald Trump has narrowly won Florida, and with it, re-election. November 3, 2020, 11:46PM: The NY Times and Wall Street Journal report that some group has successfully hacked electronic voting systems across the country, including Florida. The malware has spread to tens of thousands of machines and deletes any record of its activity, so the returning officer of Florida concedes they actually have no idea who won the state — and don't see how they can figure it out. What on Earth happens next? Today's guest — world-renowned computer security expert Bruce Schneier — thinks this scenario is plausible, and the ensuing chaos would sow so much distrust that half the country would never accept the election result. Unfortunately the US has no recovery system for a situation like this, unlike parliamentary democracies, which can just rerun the election a few weeks later. • Links to learn more, summary and full transcript. • Motivating article: Information security careers for global catastrophic risk reduction by Zabel and Muehlhauser. The Constitution says the state legislature decides, and they can do so however they like; one tied local election in Texas was settled by playing a hand of poker. Elections serve two purposes. The first is the obvious one: to pick a winner. The second, but equally important, is to convince the loser to go along with it — which is why hacks often focus on convincing the losing side that the election wasn't fair. Schneier thinks there's a need to agree how this situation should be handled before something like it happens, and America falls into severe infighting as everyone tries to turn the situation to their political advantage. And to fix our voting systems, we urgently need two things: a voter-verifiable paper ballot and risk-limiting audits. According to Schneier, computer security experts look at current electronic voting machines and can barely believe their eyes. But voting machine designers never understand the security weaknesses of what they're designing, because they have a bureaucrat's rather than a hacker's mindset. The ideal computer security expert walks into a shop and thinks, "You know, here's how I would shoplift." They automatically see where the cameras are, whether there are alarms, and where the security guards aren't watching. In this episode we discuss this hacker mindset, and how to use a career in security to protect democracy and guard dangerous secrets from people who shouldn't get access to them. We also cover: • How can we have surveillance of dangerous actors, without falling back into authoritarianism? • When if ever should information about weaknesses in society's security be kept secret? • How secure are nuclear weapons systems around the world? • How worried should we be about deep-fakes? • Schneier's critiques of blockchain technology • How technologists should be vital in shaping policy • What are the most consequential computer security problems today? • Could a career in information security be very useful for reducing global catastrophic risks? • And more. Chapters: • Rob's intro (00:00:00) • Bruce's Codex talk (00:02:23) • The interview begins (00:15:42) • What is Bruce working on at the moment? (00:16:35) • How technologists could be vital in shaping policy (00:18:52) • Most consequential computer security problems today (00:24:12) • How secure are nuclear weapons systems around the world? (00:34:41) • Stuxnet and NotPetya (00:42:29) • Messing with democracy (00:44:44) • How worried should we be about deepfakes? (00:50:02) • The similarities between hacking computers and potentially hacking biology in the future (00:55:08) • Bruce's critiques of crypto (01:00:05) • What are some of the most kind of widely-held but incorrect beliefs among computer security people? (01:03:04) • The hacking mindset (01:05:35) • Voting machines (01:09:22) • How secretive should people be about potentially harmful information? (01:16:48) • Could a career in information security be very useful for reducing global catastrophic risks? (01:21:46) • How to develop the skills needed in computer security (01:33:44) • Ubiquitous surveillance (01:52:46) • Why is Bruce optimistic? (02:05:28) • Rob's outro (02:06:43) The 80,000 Hours Podcast is produced by Keiran Harris.
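Since 'risk-limiting audits' come up as one of the two urgent fixes, here's a rough sketch of the statistical idea behind one well-known ballot-polling variant (loosely modelled on the BRAVO procedure; the details are simplified and the election data is invented for illustration): sample paper ballots at random and keep going until the evidence for the reported winner is strong enough, or escalate towards a full hand count.

```python
import random

# Simplified sketch of a ballot-polling risk-limiting audit for a two-candidate
# race. The idea: keep sampling paper ballots at random until the evidence that
# the reported winner really won is strong enough, or fall back to a hand count.

RISK_LIMIT = 0.05              # max chance of confirming a wrong outcome
reported_winner_share = 0.55   # the share of votes the machines reported

# Pretend paper trail: True = ballot for the reported winner. Here the paper
# agrees with the reported result; if it didn't, the audit would tend to escalate.
ballots = [random.random() < 0.55 for _ in range(1_000_000)]

T = 1.0   # likelihood ratio vs. the "it was actually a tie" hypothesis
for n, ballot in enumerate(random.sample(ballots, 20_000), start=1):
    T *= (reported_winner_share / 0.5) if ballot else ((1 - reported_winner_share) / 0.5)
    if T >= 1 / RISK_LIMIT:
        print(f"Outcome confirmed after examining {n} ballots")
        break
else:
    print("Not confirmed in this sample — escalate towards a full hand count")
```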
02:11:04 · 25/10/2019
Rob Wiblin on plastic straws, nicotine, doping, & whether changing the long-term is really possible

Rob Wiblin on plastic straws, nicotine, doping, & whether changing the long-term is really possible

Today's episode is a compilation of interviews I recently recorded for two other shows, Love Your Work and The Neoliberal Podcast. If you've listened to absolutely everything on this podcast feed, you'll have heard four interviews with me already, but fortunately I don't think these two include much repetition, and I've gotten a decent amount of positive feedback on both. First up, I speak with David Kadavy on his show, Love Your Work. This is a particularly personal and relaxed interview. We talk about all sorts of things, including nicotine gum, plastic straw bans, whether recycling is important, how many lives a doctor saves, why interviews should go for at least 2 hours, how athletes doping could be good for the world, and many other fun topics. • Our annual impact survey is about to close — I'd really appreciate it if you could take 3–10 minutes to fill it out now. • The blog post about this episode. At some points we even discuss effective altruism and 80,000 Hours, but you can easily skip through those bits if they feel too familiar. The second interview is with Jeremiah Johnson on The Neoliberal Podcast. It starts 2 hours and 15 minutes into this recording. Neoliberalism in the sense used by this show is not the free market fundamentalism you might associate with the term. Rather it's a centrist or even centre-left view that supports things like social liberalism, multilateral international institutions, trade, high rates of migration, racial justice, inclusive institutions, financial redistribution, prioritising the global poor, market urbanism, and environmental sustainability. This is the more demanding of the two conversations, as listeners to that show have already heard of effective altruism, so we were able to get the best arguments Jeremiah could offer against focusing on improving the long-term future of the world. Jeremiah is more of a fan of donating to evidence-backed global health charities recommended by GiveWell, and does so himself. I appreciate him having done his homework and forcing me to do my best to explain how well my views can stand up to counterarguments. It was a challenge for me to paint the whole picture in the half an hour we spent on longtermism, and I expect there are answers in there which will be fresh even for regular listeners. I hope you enjoy both conversations! Feel free to email me with any feedback. The 80,000 Hours Podcast is produced by Keiran Harris.
03:14:33 · 25/09/2019
Have we helped you have a bigger social impact? Our annual survey, plus other ways we can help you.

Have we helped you have a bigger social impact? Our annual survey, plus other ways we can help you.

1. Fill out our annual impact survey here. 2. Find a great vacancy on our job board. 3. Learn about our key ideas, and get links to our top articles. 4. Join our newsletter for an email about what's new, every 2 weeks or so. 5. Or follow our pages on Facebook and Twitter. —— Once a year 80,000 Hours runs a survey to find out whether we've helped our users have a larger social impact with their life and career. We and our donors need to know whether our services, like this podcast, are helping people enough to continue them or scale them up, and it's only by hearing from you that we can make these decisions in a sensible way. So, if 80,000 Hours' podcast, job board, articles, headhunting, advising or other projects have somehow contributed to your life or career plans, please take 3–10 minutes to let us know how. You can also let us know where we've fallen short, which helps us fix problems with what we're doing. We've refreshed the survey this year, hopefully making it easier to fill out than in the past. We'll keep this appeal up for about two weeks, but if you fill it out now that means you definitely won't forget! Thanks so much, and talk to you again in a normal episode soon. — Rob
03:39 · 16/09/2019
#63 – Vitalik Buterin on better ways to fund public goods, blockchain's failures, & effective giving

#63 – Vitalik Buterin on better ways to fund public goods, blockchain's failures, & effective giving

Historically, progress in the field of cryptography has had major consequences. It has changed the course of major wars, made it possible to do business on the internet, and enabled private communication between both law-abiding citizens and dangerous criminals. Could it have similarly significant consequences in future? Today's guest — Vitalik Buterin — is world-famous as the lead developer of Ethereum, a successor to the cryptocurrency Bitcoin, which added the capacity for smart contracts and decentralised organisations. Buterin first proposed Ethereum at the age of 20, and by the age of 23 its success had likely made him a billionaire. At the same time, far from indulging hype about these so-called 'blockchain' technologies, he has been candid about the limited good accomplished by Bitcoin and other currencies developed using cryptographic tools — and the breakthroughs that will be needed before they can have a meaningful social impact. In his own words, *"blockchains as they currently exist are in many ways a joke, right?"* But Buterin is not just a realist. He's also an idealist, who has been helping to advance big ideas for new social institutions that might help people better coordinate to pursue their shared goals. Links to learn more, summary and full transcript. By combining theories in economics and mechanism design with advances in cryptography, he has been pioneering the new interdisciplinary field of 'cryptoeconomics'. Economist Tyler Cowen has observed that, "at 25, Vitalik appears to repeatedly rediscover important economics results from famous papers, without knowing about the papers at all." Along with previous guest Glen Weyl, Buterin has helped develop a model for so-called 'quadratic funding', which in principle could transform the provision of 'public goods'. That is, goods that people benefit from whether they help pay for them or not. Examples of goods that are fully or partially 'public goods' include sound decision-making in government, international peace, scientific advances, disease control, the existence of smart journalism, preventing climate change, deflecting asteroids headed to Earth, and the elimination of suffering. Their underprovision in part reflects the difficulty of getting people to pay for anything when they can instead free-ride on the efforts of others. Anything that could reduce this failure of coordination might transform the world. But these and other related proposals face major hurdles. They're vulnerable to collusion, might be used to fund scams, and remain untested beyond a small scale — not to mention that anything with a square root sign in it is going to struggle to achieve societal legitimacy. Is the prize large enough to justify efforts to overcome these challenges? In today's extensive three-hour interview, Buterin and I cover: • What the blockchain has accomplished so far, and what it might achieve in the next decade; • Why many social problems can be viewed as a coordination failure to provide a public good; • Whether any of the ideas for decentralised social systems emerging from the blockchain community could really work; • His view of 'effective altruism' and 'long-termism'; • Why he is optimistic about 'quadratic funding', but pessimistic about replacing existing voting with 'quadratic voting'; • Why humanity might have to abandon living in cities; • And much more. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
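For readers curious what that 'square root sign' is doing, here's a minimal sketch of the basic matching formula behind quadratic funding (the projects and contribution amounts are invented for illustration, and real deployments add caps and scaling on top): a project's funding is proportional to the square of the sum of the square roots of individual contributions, which favours projects with many small supporters over those with a few large backers.

```python
from math import sqrt

# Minimal sketch of the quadratic funding formula: a project's total funding is
# (sum of square roots of individual contributions) squared, with the gap
# between that and the raw contributions paid from a matching pool.
# Projects and amounts below are made up for illustration.

contributions = {
    "open source library": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # many small donors
    "single-backer project": [10],                            # same total, one donor
}

for project, donations in contributions.items():
    raw = sum(donations)
    ideal = sum(sqrt(d) for d in donations) ** 2   # quadratic funding amount
    match = ideal - raw                            # paid from the matching pool
    print(f"{project}: raw {raw}, quadratic total {ideal:.0f}, match {match:.0f}")

# -> the ten-donor project gets (10 * sqrt(1))^2 = 100, while the single-backer
#    project gets (sqrt(10))^2 = 10, even though both raised 10 directly.
```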
The 80,000 Hours Podcast is produced by Keiran Harris.
03:18:24 · 03/09/2019
#62 – Paul Christiano on messaging the future, increasing compute, & how CO2 impacts your brain

#62 – Paul Christiano on messaging the future, increasing compute, & how CO2 impacts your brain

Imagine that – one day – humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out? In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably are. • Links to learn more, summary, and full transcript. • Paul's first appearance on the show in episode 44. • An out-take on decision theory. We could tell them hard-won lessons from history; mention some research questions we wish we'd started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons. But, as Christiano points out, even if we could satisfactorily figure out what we'd like to be able to tell our ancestors, that's just the first challenge. We'd need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth's surface quickly gets buried far underground. But even if we figure out a satisfactory message, and a way to ensure it's found, a civilization this far in the future won't speak any language like our own. And being another species, they presumably won't share as many fundamental concepts with us as humans from 1700. If we knew a way to leave them thousands of books and pictures in a material that wouldn't break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery? That's just one of many playful questions discussed in today's episode with Christiano — a frequent writer who's willing to brave questions that others find too strange or hard to grapple with. We also talk about why divesting a little bit from harmful companies might be more useful than I'd been thinking, whether creatine might make us a bit smarter, and whether carbon dioxide-filled conference rooms make us a lot stupider. Finally, we get a big update on progress in machine learning and efforts to make sure it's reliably aligned with our goals, which is Paul's main research project. He responds to the views that DeepMind's Pushmeet Kohli espoused in a previous episode, and we discuss whether we'd be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors. Some other issues that come up along the way include: • Are there any supplements people can take that make them think better? • What implications do our views on meta-ethics have for aligning AI with our goals? • Is there much of a risk that the future will contain anything optimised for causing harm? • An out-take about the implications of decision theory, which we decided was too confusing and confused to stay in the main recording. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
02:11:47 · 05/08/2019
#61 - Helen Toner on emerging technology, national security, and China

#61 - Helen Toner on emerging technology, national security, and China

From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did. Some think this is the best historical analogy we have for how machine learning could alter life in the 21st century. In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to communicate quickly with units in the field over great distances. How might international security be altered if the impact of machine learning reaches a similar scope to that of electricity? Today's guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for such disruptive technical changes that might threaten international peace. • Links to learn more, summary and full transcript • Philosophy is one of the hardest grad programs. Is it worth it, if you want to use ideas to change the world? by Arden Koehler and Will MacAskill • The case for building expertise to work on US AI policy, and how to do it by Niel Bowerman • AI strategy and governance roles on the job board Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop 'intuitions' that inform their judgement about future cases. This is something humans do constantly, whether we're playing tennis, reading someone's face, diagnosing a patient, or figuring out which business ideas are likely to succeed. Sometimes these ML algorithms can seem uncannily insightful, and they're only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth -- all in the first five minutes of our day. Rapid advances in ML, and the many prospective military applications, have people worrying about an 'AI arms race' between the US and China. Henry Kissinger and the past CEO of Google Eric Schmidt recently wrote that AI could "destabilize everything from nuclear détente to human friendships." Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands. But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy? In today's episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen's experience living and studying in China. We cover: • Why immigration is the main policy area that should be affected by AI advances today. • Why talking about an 'arms race' in AI is premature. • How Bobby Kennedy may have positively affected the Cuban Missile Crisis. • Whether it's possible to become a China expert and still get a security clearance. • Can access to ML algorithms be restricted, or is that just not practical? • Whether AI could help stabilise authoritarian regimes. 
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
01:54:57 · 17/07/2019
#60 - Phil Tetlock on why accurate forecasting matters for everything, and how you can do it better

#60 - Phil Tetlock on why accurate forecasting matters for everything, and how you can do it better

Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case? Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can't assess the likelihood of different outcomes we're in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul's Drag Race. Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day. He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better. Along with other psychologists, he identified that many ordinary people are attracted to a 'folk probability' that draws just three distinctions — 'impossible', 'possible' and 'certain' — and which leads to major systemic mistakes. But with the right mindset and training we can become capable of accurately discriminating between differences as fine as 56% as against 57% likely. • Links to learn more, summary and full transcript • The calibration training app • Sign up for the Civ-5 counterfactual forecasting tournament • A review of the evidence on good forecasting practices • Learn more about Effective Altruism Global In the aftermath of Iraq and WMDs, the US intelligence community hired him to prevent the same ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2014. That was five years ago. In today's interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement. We discuss how his work can be applied to your personal life to answer high-stakes questions, like how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by the Open Philanthropy Project and Clearer Thinking that teaches you to distinguish your '70 percents' from your '80 percents'.) We also take up some tough methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take to make improving the reasonableness of decision-making in major institutions that shape the world their profession, as it has been for Tetlock over many decades. We view Tetlock's work as so core to living well that we've brought him back for a second and longer appearance on the show — his first was back in episode 15. Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
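As a concrete example of what 'distinguishing your 70 percents from your 80 percents' means in practice, here's a small sketch (not from the episode; the forecasts are invented) of two standard ways to score probabilistic forecasts once the outcomes are known — a Brier score, and a simple calibration check.

```python
from collections import defaultdict

# Toy scoring of probabilistic forecasts: each entry is (stated probability the
# event happens, whether it actually happened). Data invented for illustration.
forecasts = [(0.9, True), (0.7, True), (0.7, False), (0.6, True),
             (0.3, False), (0.2, False), (0.8, True), (0.5, False)]

# Brier score: mean squared error between probability and outcome. Lower is
# better — always saying 0.5 scores 0.25, perfect foresight scores 0.0.
brier = sum((p - (1.0 if happened else 0.0)) ** 2 for p, happened in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration check: of the things you said were ~70% likely, did ~70% happen?
buckets = defaultdict(list)
for p, happened in forecasts:
    buckets[round(p, 1)].append(happened)
for p in sorted(buckets):
    hits = buckets[p]
    print(f"forecast {p:.0%}: {sum(hits)}/{len(hits)} happened")
```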
02:11:39 · 28/06/2019
#59 – Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable

#59 – Cass Sunstein on how change happens, and why it's so often abrupt & unpredictable

It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition. The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably. In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism. How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks? Sunstein — coauthor of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens. He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable. • Links to learn more, summary and full transcript. • 80,000 Hours Annual Review 2018. • How to donate to 80,000 Hours. In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become before they'd reveal them or join a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions. According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case. In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss: • How much people misrepresent their views in democratic countries. • Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis. • When is it justified to encourage your own group to polarise? • Sunstein's difficult experiences as a pioneer of animal rights law. • Whether activists can do better by spending half their resources on public opinion surveys. • Should people be more or less outspoken about their true views? • What might be the next social revolution to take off? • How can we learn about social movements that failed and disappeared? • How to find out what people really think. Chapters: • Rob's intro (00:00:00) • Cass's Harvard lecture on How Change Happens (00:02:59) • Rob & Cass's conversation about the book (00:41:43) The 80,000 Hours Podcast is produced by Keiran Harris.
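To see why 'variable thresholds for action' plus preference falsification make change so abrupt, here's a toy simulation in the spirit of classic Granovetter-style threshold models (my own illustration, not something from the book): each person joins a movement only once enough others already have, and a tiny change in the distribution of thresholds flips the outcome from near-total silence to a full cascade.

```python
# Toy threshold-cascade model: person i speaks up once at least thresholds[i]
# others already have. With hidden (falsified) preferences, observers can't see
# the thresholds, so near-identical societies can behave completely differently.

def cascade_size(thresholds):
    joined = 0
    while True:
        newly = sum(1 for t in thresholds if t <= joined)  # everyone whose bar is met
        if newly == joined:
            return joined
        joined = newly

# Society A: thresholds 0, 1, 2, ..., 99 — one brave person triggers everyone.
society_a = list(range(100))
# Society B: identical except nobody has a threshold of exactly 1.
society_b = [0] + [2] + list(range(2, 100))

print(cascade_size(society_a))  # 100 — a full cascade
print(cascade_size(society_b))  # 1   — the lone activist stays alone
```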
01:43:24 · 17/06/2019
#58 – Pushmeet Kohli of DeepMind on designing robust & reliable AI systems and how to succeed in AI

#58 – Pushmeet Kohli of DeepMind on designing robust & reliable AI systems and how to succeed in AI

When you're building a bridge, responsibility for making sure it won't fall over isn't handed over to a few 'bridge not falling down engineers'. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project. When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design. Far from being an overhead on the 'real' work, it's an essential part of making AI systems work at all. We don't want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development. Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term 'AI safety research' altogether. • Want to be notified about high-impact opportunities to help ensure AI remains safe and beneficial? Tell us a bit about yourself and we'll get in touch if an opportunity matches your background and interests. • Links to learn more, summary and full transcript. • And a few added thoughts on non-research roles. With the goal of designing systems that are reliably consistent with desired specifications, DeepMind have recently published work on important technical challenges for the machine learning community. For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an 'adversary' that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable. He's also looking into 'training specification-consistent models' and 'formal verification', while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards. In today's interview, we focus on the convergence between broader AI research and robustness, as well as: • DeepMind's work on the protein folding problem • Parallels between ML problems and past challenges in software development and computer security • How can you analyse the thinking of a neural network? • Unique challenges faced by DeepMind's technical AGI safety team • How do you communicate with a non-human intelligence? • What are the biggest misunderstandings about AI safety and reliability? • Are there actually a lot of disagreements within the field? • The difficulty of forecasting AI development Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
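To give a flavour of what 'creating an adversary that seeks out the worst failures' can look like, here's a minimal sketch of one standard technique from the broader literature — gradient-based search for a worst-case input within a small perturbation budget. This is a generic illustration in NumPy, not DeepMind's actual code, and the tiny linear 'model' is invented for the example.

```python
import numpy as np

# Minimal sketch of adversarial (worst-case) input search: take steps in the
# direction that most increases the model's loss on an input, while staying
# within a small perturbation budget epsilon. The "model" is a toy logistic
# classifier with made-up weights — a generic illustration only.

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1                     # toy model parameters

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))          # P(label = 1)

x = rng.normal(size=8)
y = 1.0 if predict(x) > 0.5 else 0.0               # a label the model currently gets right

def loss_grad_x(x):
    return (predict(x) - y) * w                    # d(cross-entropy)/dx for this model

epsilon, step = 0.3, 0.05
x_adv = x.copy()
for _ in range(20):
    x_adv += step * np.sign(loss_grad_x(x_adv))          # ascend the loss
    x_adv = np.clip(x_adv, x - epsilon, x + epsilon)     # stay inside the epsilon-box

print("prediction on the original input:", round(float(predict(x)), 3))
print("prediction on the worst-case input found:", round(float(predict(x_adv)), 3))
# If the worst case crosses the decision boundary, the adversary has exposed a
# rare failure that can be addressed (e.g. via adversarial training) before deployment.
```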
01:30:12 · 03/06/2019