Sanjay Puri
Welcome to the Regulating AI: Innovate Responsibly podcast with host and AI regulation expert Sanjay Puri. A pivotal leader at the intersection of technology, policy and entrepreneurship, Sanjay explores the intricate landscape of artificial intelligence governance on this podcast. You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world. Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!
Total 63 episodes
Overcoming the Cultural Clash Between AI Innovation and Data Privacy with Norman Sadeh, Professor of Computer Science, Co-Founder and Co-Director, Privacy Engineering Program, Carnegie Mellon University

AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by Professor Norman Sadeh, a Computer Science Professor at Carnegie Mellon University, and Co-Founder and Co-Director of the Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.

Key Takeaways:
(02:09) How Professor Sadeh’s work in AI and privacy began.
(05:30) The role privacy engineers play in AI governance.
(08:45) Why AI governance must integrate with existing company structures.
(12:10) The challenges of data ownership and consent in AI applications.
(15:20) Privacy implications of foundational models in AI.
(18:30) The limitations of current regulations like GDPR in addressing AI concerns.
(22:00) How user expectations shape the principles of AI governance.
(26:15) The growing debate around the need for specialized AI regulations.
(30:40) The role of transparency in AI governance for building trust.
(35:50) The potential impact of open-source AI models on security and privacy.

Resources Mentioned:
Professor Norman Sadeh - https://www.linkedin.com/in/normansadeh/
Carnegie Mellon University | LinkedIn - https://www.linkedin.com/school/carnegie-mellon-university/
Carnegie Mellon University | Website - https://www.cmu.edu/
EU AI Act - https://artificialintelligenceact.eu/
General Data Protection Regulation (GDPR) - https://gdpr-info.eu/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
41:28 | 19/11/2024
Championing Diversity, AI Skills, and Youth Empowerment: Reshaping Education and the Future of Work

In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.

Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.
28:15 | 07/11/2024
Democratizing AI: The Role of Governments and Ethical Insights in Shaping Policy

In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.

Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics & Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.
17:54 | 07/11/2024
AI Compliance Challenges: Navigating the European AI Act and Regulatory Frameworks

In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.

Our guest, Sean Musch, Founder and CEO of AI & Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.
20:57 | 07/11/2024
Harnessing Geospatial Data: Crisis Response and AI Integration | RegulatingAI Podcast Ft Paul Uithol

In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI to support humanitarian missions in real time.

Our guest, Paul Uithol, Director of Humanitarian Data at the Humanitarian OpenStreetMap Team, shares his insights into how geospatial data and AI are transforming disaster management and crisis response. Discover the innovative strategies that enable faster, more accurate responses to humanitarian challenges.
17:57 | 07/11/2024
Global AI Regulation: Balancing Compliance, Innovation, and Supervision Across Diverse Laws

In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across different legislative frameworks and how these regulations shape the future of AI technologies.

Our featured guest, Huub Janssen, Manager on AI at the Ministry of Economic Affairs and the Dutch Authority for Digital Infrastructure, The Netherlands, shares his insights on navigating the regulatory landscape and driving responsible AI development.
28:32 | 07/11/2024
Bridging the Gap: Navigating AI Governance and Legal Innovation with Hadassah Drukarch

In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolving regulatory landscape.

Our esteemed guest, Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute, shares her expertise on how to navigate the complexities of AI governance, legal frameworks, and the importance of fostering ethical AI practices.
18:21 | 07/11/2024
How AI is Revolutionising Disaster Response: Bridging the Gap for Vulnerable Communities

In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanitarian action and build more resilient systems.

Our special guest, Katya Klinova, Head of AI and Data Insights for Social and Humanitarian Action at the United Nations Secretary-General's Innovation Lab, shares invaluable insights into the role of AI in disaster management and its potential to bridge critical gaps in support for those most in need.
14:11 | 07/11/2024
Transform Your Organization with AI: Augment, Reskill, Improve HCI, & Hire Ethically

In the latest episode of the RegulatingAI Podcast, recorded at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce through augmentation, reskilling, and enhancing human-computer interaction, all while promoting ethical AI hiring practices.

Special guest Dr. Kevin J. Jones, Director at the IU Columbus Center for Teaching and Learning and Associate Professor of Management, shares insights on how leaders can leverage AI to enhance their organizations and stay ahead of the curve.
18:37 | 07/11/2024
How AI Is Reshaping Industries and Society with Professor Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University

On this episode, I’m joined by Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.

Key Takeaways:
(02:14) The need to regulate AI to prevent monopolization by large corporations.
(06:03) The dangers of AI-driven misinformation and its impact on public opinion.
(10:32) The risks AI poses in job displacement across multiple industries.
(14:22) How deepfake technology is evolving and its potential consequences.
(18:47) The challenge of balancing AI innovation with data privacy concerns.
(22:10) AI’s growing role in military applications and the need for careful oversight.
(26:05) How AI agents could autonomously interact and the risks involved.
(31:30) The potential for AI to surpass human performance in certain professions.
(37:14) Why international collaboration is critical for effective AI regulation.
(42:56) The ethical dilemmas surrounding AI’s influence in healthcare and decision-making.

Resources Mentioned:
Ruslan Salakhutdinov - https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/
OpenAI’s Sora Technology - https://openai.com/index/sora/
Geoffrey Hinton and his contributions to AI - https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/
Carnegie Mellon University - https://www.cmu.edu

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
50:42 | 05/11/2024
Breaking Down the Senate AI Policy Roadmap with Senator Todd Young of the United States Senate

The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by Senator Todd Young, United States Senator (R-Ind.). He shares insights into AI policy, national security and the steps needed to maintain US leadership in this critical field.

Key Takeaways:
(01:54) The bipartisan effort behind the Senate AI Working Group.
(03:34) How existing laws adapt to an AI-enabled world.
(05:17) Identifying AI risks and regulatory barriers.
(07:41) The role of government expertise in AI-related areas.
(10:12) Understanding the significance of the $32 billion AI public investment.
(13:17) Applying AI innovations across various industries.
(15:27) The impact of China on AI competition and US strategy.
(17:44) Why semiconductors are vital to AI development.
(20:26) Balancing open-source and closed-source AI models.
(22:51) The need for global AI standards and harmonization.

Resources Mentioned:
Senator Todd Young - https://www.linkedin.com/in/senator-todd-young/
Todd Young - https://www.young.senate.gov/
United States Senate - https://www.linkedin.com/company/ussenate/
National AI Research Resource - https://nairrpilot.org/
CHIPS and Science Act - https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/
Senate AI Policy Roadmap - https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf
National Security Commission on Artificial Intelligence - https://reports.nscai.gov/final-report/introduction

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
27:17 | 16/10/2024
AI's Role in Accelerating Drug Development and Clinical Trials with Raphael Townshend, PhD, Founder and CEO of Atomic AI

AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.

In this episode, Raphael Townshend, PhD, Founder and CEO of Atomic AI, sits down with me to discuss the intersection of AI and RNA in drug development. We explore how AI technologies can reduce the cost and time required for clinical trials and target previously incurable diseases.

Key Takeaways:
(02:15) Raphael's background in AI and biology, and the founding of Atomic AI.
(05:59) Reducing time and failure rate in drug discovery with AI.
(07:16) AlphaFold's breakthrough in understanding molecular shapes using AI.
(09:23) Ensuring transparency and accountability in AI-driven drug discovery.
(12:22) Navigating intellectual property concerns in healthcare AI.
(15:34) Integrating AI with wet lab testing for accurate drug discovery results.
(17:31) Balancing intellectual property and open research in biotech.
(20:02) Addressing data privacy and security in AI algorithms.
(22:30) Educating users and healthcare professionals about AI in drug discovery.
(24:48) Collaborating with global regulators for AI-driven drug discovery innovations.

Resources Mentioned:
Raphael Townshend - https://www.linkedin.com/in/raphael-townshend-9154962a/
Atomic AI | LinkedIn - https://www.linkedin.com/company/atomic-ai-rna/
AlphaFold - https://deepmind.google/technologies/alphafold/
Atomic AI Homepage - https://atomic.ai/
ATOM-1 Large Language Model - https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
31:30 | 01/10/2024
Addressing Bias in AI To Build Trust in Technology with Dr. Rashawn Ray, Vice President of the American Institutes for Research (AIR) and Executive Director of AIR Equity Initiative, Professor of Sociology at the University of Maryland, and Senior Fellow at The Brookings Institution

In this episode, I’m joined by Dr. Rashawn Ray, Vice President at the American Institutes for Research (AIR) and Executive Director of the AIR Equity Initiative, Professor of Sociology at the University of Maryland and Senior Fellow at The Brookings Institution. Dr. Ray’s innovative work lies at the powerful intersection of policing, technology and social equity, where he explores how AI can be designed and implemented to enhance fairness, reduce inequality and ultimately be a force for positive change in both local communities and the broader world.

Key Takeaways:
(01:00) Regulating AI without stifling innovation is crucial.
(07:06) How virtual reality enhances police training by addressing implicit bias.
(12:22) The impact of diverse teams on equitable AI development.
(19:36) Overcoming challenges in implementing VR training in smaller law enforcement agencies.
(25:50) Tech companies collaborating on socially impactful AI projects is vital.
(31:55) Community involvement is critical in shaping AI and VR technologies.
(36:21) The role of DEI initiatives in improving AI’s fairness and effectiveness.
(42:09) The future of AI legislation and its potential to democratize technology.

Resources Mentioned:
Dr. Rashawn Ray - https://www.linkedin.com/in/sociologistray/
AIR | Website - https://www.air.org/
AIR Equity Initiative | LinkedIn - https://www.linkedin.com/showcase/air-equity-initiative/about/
AIR Equity Initiative | Website - https://www.air.org/air-equity-initiative-bridge-more-equitable-world
Lab for Applied Social Science Research - https://socy.umd.edu/centers/lab-applied-social-science-research-%28lassr%29
Brookings Institution - https://www.brookings.edu
Dr. Rashawn Ray - AIR - https://www.air.org/experts/person/rashawn-ray
Dr. Rashawn Ray | Website - https://www.rashawnray.com/
“Extracting Protest Events from Newspaper Articles with ChatGPT” (working paper) - https://uncmap.org/publication/chat-wp/
“5 questions policymakers should ask about facial recognition, law enforcement and algorithmic bias” - https://www.brookings.edu/articles/5-questions-policymakers-should-ask-about-facial-recognition-law-enforcement-and-algorithmic-bias/
“Examining equity in transportation safety enforcement” - https://www.brookings.edu/articles/examining-equity-in-transportation-safety-enforcement/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
46:08 | 23/09/2024
Regulating AI Innovation for National Security and Healthcare with Mike Rounds, US Senator for South Dakota, Co-Chair of the Senate AI Caucus and Member of the Bipartisan Senate AI Working Group

On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and federal government, Senator Rounds shares his insights into the Bipartisan Senate AI Working Group and its roadmap for AI policy.

Key Takeaways:
(01:23) The Bipartisan Senate AI Working Group aims to balance AI regulation and innovation.
(05:07) Why intellectual property protections are essential in AI development.
(07:27) National security implications of AI in weapons systems and defense.
(09:19) The potential of AI to revolutionize healthcare through faster drug approvals.
(10:55) How AI can aid in detecting and combating biological threats.
(15:00) The importance of workforce training to mitigate AI-driven job displacement.
(19:05) The role of community colleges in preparing the workforce for an AI-driven future.
(24:00) Insights from international collaboration on AI regulation.

Resources Mentioned:
Senator Mike Rounds Homepage - https://www.rounds.senate.gov/
GUIDE AI Initiative - https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package
Medshield - https://www.linkedin.com/company/medshield-llc

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
28:52 | 18/09/2024
Protecting Consumer Rights in the Age of AI with Attorney General Charity Rae Clark and Representative Monique Priestley

In this episode, I’m joined by Charity Rae Clark, Vermont Attorney General, and Monique Priestley, Vermont State Representative. They have been instrumental in shaping Vermont’s legislative approach to data privacy and AI. We dive into the challenges of regulating AI to keep citizens safe, the importance of data minimization and the broader implications for society.

Key Takeaways:
(02:10) “Free” apps and websites take payment with your data.
(08:15) The Data Privacy Act includes stringent provisions to protect children online.
(10:05) Protecting consumer privacy and reducing security risks.
(15:29) Vermont’s legislative journey includes educating lawmakers.
(18:45) Innovation and regulation must be balanced for future AI development.
(23:50) Collaboration and education can overcome intense pressure from lobbyists.
(30:02) AI’s potential to exacerbate discrimination demands regulation.
(36:15) Deepfakes present a growing threat.
(42:40) Consumer trust could be lost due to premature releases of AI products.
(50:10) The necessity of a strong foundation in data privacy.

Resources Mentioned:
Charity Rae Clark - https://www.linkedin.com/in/charityrclark/
Monique Priestley - https://www.linkedin.com/in/mepriestley/
Vermont - https://www.linkedin.com/company/state-of-vermont/
“The Age of Surveillance Capitalism” by Shoshana Zuboff - https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697
“Why Privacy Matters” by Neil Richards - https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
01:03:15 | 28/08/2024
Protecting Creative Rights in the AI Era with Keith Kupferschmid, Chief Executive Officer of Copyright Alliance

Dive into the tangled web of AI and copyright law with Keith Kupferschmid, CEO of the Copyright Alliance, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in an AI-driven world.

Key Takeaways:
(02:00) The Copyright Alliance represents over 15,000 organizations and 2 million individual creators.
(05:12) Two potential copyright infringement settings: during the ingestion process and the output stage.
(06:00) There have been 17 or 18 AI copyright cases filed recently.
(08:00) Fair Use in AI is not categorical and is decided on a case-by-case basis.
(13:32) AI companies often shift liability to prompters, but both can be held liable under existing laws.
(15:00) Creators should clearly state their licensing preferences on their works to protect themselves.
(17:50) Current copyright laws are flexible enough to adapt to AI without needing new legislation.
(20:00) Market-based solutions, such as licensing, are crucial for addressing AI copyright issues.
(27:34) Education and public awareness are vital for understanding copyright issues related to AI.

Resources Mentioned:
Keith Kupferschmid - https://www.linkedin.com/in/keith-kupferschmid-723b19a/
Copyright Alliance - https://copyrightalliance.org
U.S. Copyright Office - https://www.copyright.gov
Getty Images Licensing - https://www.gettyimages.com
National Association of Realtors - https://www.nar.realtor

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
33:40 | 22/08/2024
AI Development and Cultural Values with Maria Luciana Axente

The future of AI lies at the intersection of technology and ethics. How do we navigate this complex landscape? Today, I’m joined by Maria Luciana Axente, Head of Public Policy and Ethics at PwC UK and Intellectual Forum Senior Research Associate at Jesus College Cambridge, who offers key insights into the ethical implications of AI.

Key Takeaways:
(03:56) The importance of integrating ethical principles into AI.
(08:22) Preserving humanity in the age of AI.
(12:19) Embedding value alignment in AI systems.
(15:59) Fairness and voluntary commitments in AI.
(21:01) Participatory AI and including diverse voices.
(24:05) Cultural value systems shaping AI policies.
(26:25) The importance of reflecting on AI’s impact before implementation.
(27:48) Learning from other industries to govern AI better.
(28:59) AI as a socio-technical system, not just technology.

Resources Mentioned:
Maria Luciana Axente - https://www.linkedin.com/in/mariaaxente/
PwC UK - https://www.linkedin.com/company/pwc-uk/
Jesus College Cambridge - https://www.linkedin.com/company/jesus-college-cambridge/
PwC homepage - https://www.pwc.co.uk/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
34:07 | 08/08/2024
Empowering Diverse Creators in the AI Era with Lianne Baron

Can AI spark new creative revolutions? On this episode, I’m joined by Lianne Baron, Strategic Partner Manager for Creative Partnerships at Meta. Lianne unveils how AI is not just a tool but a transformative force in the creative landscape, emphasizing the irreplaceable value of human imagination. We explore the rapid pace of innovation, the challenges of embracing new tech, and the exciting future of idea generation and delivery.

Key Takeaways:
(03:50) Embrace AI's changes; it challenges traditional methods.
(05:13) AI speeds up the journey from imagination to delivery.
(07:15) The move to cinematic quality sparks excitement and fear.
(08:30) Education is key in democratizing AI for all.
(15:00) Risk of bias without diverse voices in AI development.
(17:15) Ideas, not skills, are the new currency in AI.
(26:16) Imagination and human experience are irreplaceable by AI.
(29:11) AI can democratize storytelling, sharing diverse narratives.
(33:00) AI breaks down barriers, fostering new creative opportunities.
(36:20) Understanding authenticity is crucial in an AI-driven world.

Resources Mentioned:
Lianne Baron - https://www.linkedin.com/in/liannebaron/
Meta - https://www.meta.com/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
34:52 | 03/08/2024
Balancing Innovation and Regulation in AI with Zico Kolter

The potential of AI is transforming industries, but how do we regulate this rapidly evolving technology without stifling innovation?

On this episode, I’m joined by Zico Kolter, Professor and Director of the Machine Learning Department at Carnegie Mellon University and Chief Expert at Bosch USA, who shares his insights on AI regulation and its challenges.

Key Takeaways:
(02:41) AI innovation outpaces legislation.
(04:00) Regulating technology vs. its usage is crucial.
(06:36) AI is advancing faster than ever.
(11:14) Companies must prevent AI misuse.
(15:30) Bias-free algorithms are not feasible.
(21:34) Human interaction in AI decisions is essential.
(27:49) The competitive environment benefits AI development.
(32:26) Perfectly accepted regulations indicate mistakes.
(37:52) Regulations should adapt to technological changes.
(42:49) AI developers aim to benefit people.
(45:16) Human-in-the-loop AI is crucial for reliability.
(46:30) Addressing gaps in AI systems is critical.

Resources Mentioned:
Zico Kolter - https://www.linkedin.com/in/zico-kolter-560382a4/
Carnegie Mellon University - https://www.linkedin.com/school/carnegie-mellon-university/
Bosch USA - https://www.linkedin.com/company/boschusa/
EU AI Act - https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en
OpenAI - https://www.openai.com/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
49:00 | 20/07/2024
Harnessing Evolutionary Principles To Guide AI Development with Professor Paul Rainey

On this episode, I’m joined by Professor Paul Rainey to discuss the evolutionary principles applicable to AI development and the potential risks of self-replicating AI systems. Paul is Director of the Department of Microbial Population Biology at the Max Planck Institute for Evolutionary Biology in Plön; Professor at ESPCI in Paris; Fellow of the Royal Society of New Zealand; a Member of EMBO and the European Academy of Microbiology; and Honorary Professor at Christian Albrechts University in Kiel.

Key Takeaways:
(00:04) Evolutionary transitions form higher-level structures.
(00:06) Eukaryotic cells parallel future AI-human interactions.
(00:08) Major evolutionary transitions inform AI-human interactions.
(00:11) Algorithms can evolve with variation, replication and heredity.
(00:13) Natural selection drives complexity.
(00:18) AI adapts to selective pressures unpredictably.
(00:21) Humans risk losing autonomy to AI.
(00:25) Societal engagement is needed before developing self-replicating AIs.
(00:30) The challenge of controlling self-replicating systems.
(00:33) Interdisciplinary collaboration is crucial for AI challenges.

Resources Mentioned:
Max Planck Institute for Evolutionary Biology
Professor Paul Rainey - Max Planck Institute
Max Planck Research Magazine - Issue 3/2023
Paul Rainey’s article in The Royal Society Publishing

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
35:04 | 16/07/2024
Understanding China’s AI Policy and Tech Growth with Jaap van Etten

In this episode, I’m joined by Jaap van Etten, CEO and Co-Founder of Datenna, the leading provider of techno-economic intelligence on China. Jaap’s unique background as a diplomat turned entrepreneur provides invaluable insights into the intersection of AI, innovation and policy.

Key Takeaways:
(01:30) Transitioning from diplomat to tech entrepreneur.
(05:23) Key differences in AI approaches between China, Europe and the US.
(07:20) The Chinese entrepreneurial mindset and its impact on innovation.
(10:03) China’s strategy in AI and the importance of being a technological leader.
(17:05) Challenges and misconceptions about China’s technological capabilities.
(23:17) Recommendations for AI regulation and international cooperation.
(30:19) Jaap’s perspective on the future of AI legislation.
(35:12) The role of AI in policymaking and decision-making.
(40:54) Policymakers need scenario planning and foresight exercises to keep up with rapid technological advancements.

Resources:
Jaap van Etten - https://www.linkedin.com/in/jaapvanetten/
Datenna - https://www.linkedin.com/company/datenna/
https://www.nytimes.com/2006/05/15/technology/15fraud.htm
http://www.china.org.cn/english/scitech/168482.htm
https://en.wikipedia.org/wiki/Hanxin
https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/
https://github.com/Kkevsterrr/geneva
https://geneva.cs.umd.edu
https://www.grc.com/sn/sn-779.pdf

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
47:36 | 12/07/2024
Understanding Robot Learning and Its Societal Impact with Dr. Abhinav Valada

On this episode, I’m joined by Dr. Abhinav Valada, Professor and Director of the Robot Learning Lab at the University of Freiburg, to explore the future of robotics and the essential regulations needed for their integration into society.Key Takeaways:(00:00) The potential economic impact of AI. (03:37) The distinction between perceived and actual AI capabilities. (04:24) Challenges in training robots with real-world data. (08:51) Limitations of current AI reasoning capabilities. (13:16) The importance of conveying robot intent for collaboration. (17:33) The need for specific guidelines for robotic systems. (21:00) Mandating AI ethics courses in Germany. (25:10) Collaborative robots and workforce implications. (30:00) Privacy issues in human-robot interaction.(35:02) The importance of pilot programs for autonomous vehicles. (39:00) International collaboration in AI legislation. (40:38) Inclusion of diverse voices in robotics research.Resources Mentioned:Dr. Abhinav Valada - https://www.linkedin.com/in/avalada/University of Freiburg - https://www.linkedin.com/company/university-of-freiburg/EU AI Act - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligenceRobot Learning Lab, University of Freiburg - https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
41:08 | 03/07/2024
AI's Impact on Healthcare and Legislation with Congressman Buddy Carter

Striking a balance between artificial intelligence innovation and regulation is crucial for leveraging its benefits while safeguarding against risks. On this episode, I’m joined by Congressman Buddy Carter, U.S. Representative for Georgia's 1st District, to explore the complexities of AI regulation and its impact on healthcare and other sectors.Key Takeaways:(01:48) President Biden's Executive Order on AI aims to set new standards.(04:34) AI's potential in healthcare, including telehealth and drug development.(05:47) Legal implications for doctors not using available AI technologies.(07:55) AI could speed up the drug development process.(10:52) The need for constantly updated AI standards.(11:56) Debate on creating a separate regulatory body for AI.(14:03) Importance of including diverse voices in AI regulation.(16:57) Federal preemption of state and local AI laws to avoid regulatory patchwork.Resources Mentioned:Buddy Carter - https://www.linkedin.com/in/buddycarterga/President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligenceSection 230 of the Communications Decency Act - https://www.eff.org/issues/cda230Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
20:19 | 01/07/2024
Shaping AI Policy To Safeguard Our Technological Future with Daniel Colson

On this episode, I am joined by Daniel Colson, Executive Director of the AI Policy Institute, to consider some pressing issues. Daniel shares his insights into the risks, opportunities and future directions of AI policy.Key Takeaways:(02:15) Daniel analyzes President Biden's recent executive order on AI.(04:13) Differentiating risks in AI technologies and their applications.(08:52) Concerns about the open-sourcing of AI models and abuse potential.(16:45) The importance of inclusive discussions in AI policymaking.(19:25) Challenges and risks of regulatory capture in the AI sector.(26:45) Balancing innovation with regulation.(33:14) The potential for AI to transform employment and the economy.(37:52) How AI's rapid evolution challenges our role as the dominant thinkers and prompts careful deliberation on its impact.Resources Mentioned:Daniel Colson - https://www.linkedin.com/in/danieljcolson/AI Policy Institute - https://www.linkedin.com/company/aipolicyinstitute/AI Policy Institute | Website - https://www.theaipi.org/#AIRegulation #AISafety #AIStandard
40:09 | 01/07/2024
Balancing AI Innovation and Equitable Health Benefits with Professor Effy Vayena

On this episode of Regulating AI, I sit down with Professor Effy Vayena, Chair of Bioethics and Associate Vice President of Digital Transformation and Governance of the Swiss Federal Institute of Technology (ETH) and Co-Director of Stavros Niarchos Foundation Bioethics Academy. Together we delve deep into the world of AI, its ethical challenges, and how thoughtful regulation can ensure equitable benefits.Key Takeaways:(03:45) The importance of developing and using technology in ways that meet ethical standards.(10:31) The necessity of agile regulation and continuous dialogue with tech developers.(13:19) The concept of regulatory sandboxes for testing policies in a controlled environment. (17:07) Balancing AI innovation with patient privacy and data security.(24:14) Strategies to ensure AI benefits reach marginalized communities and promote health equity.(35:10) Considering the global impact of AI and the digital divide.(41:06) Including and educating the public in AI regulatory processes.(44:04) The importance of international collaboration in AI regulation.Resources Mentioned:Professor Effy Vayena - https://www.linkedin.com/in/effy-vayena-467b1353/Swiss Federal Institute of Technology (ETH) - https://www.linkedin.com/school/eth-zurich/ETH Zurich - https://ethz.ch/en.htmlEuropean Union’s AI Act - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-aiU.S. FDA guidelines on AI in medical devices - https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-deviceThanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
45:36 | 10/06/2024
Ensuring AI Safety and Reliability in Healthcare with Dr. Brennan Spiegel of Cedars-Sinai

The integration of AI into healthcare is not only transforming the way we diagnose, treat and manage patient care but is also redefining the roles of doctors. Join me as I sit down with Dr. Brennan Spiegel to explore how AI is revolutionizing the medical field. Brennan is a Professor of Medicine and Public Health; George and Dorothy Gourrich Chair in Digital Health Ethics; Director of Health Services Research; Director, Graduate Program in Health Delivery Science; Cedars-Sinai Site Director, Clinical and Translational Science Institute; and Editor-in-Chief, Journal of Medical Extended Reality.Key Takeaways:(03:00) Balancing AI benefits with concerns about algorithmic bias and fairness.(05:47) Evaluating AI for implicit bias in mental health applications.(08:03) The need for standardized guidance and rigorous oversight in AI applications.(10:03) Ensuring data transmitted between AI providers and health systems is HIPAA compliant.(16:42) The evolving role of doctors in the context of AI integration.(21:22) The importance of traditional knowledge alongside AI in medical practice.(24:44) International collaboration and standardized approaches to AI in healthcare.Resources Mentioned:Dr. Brennan Spiegel - https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/Cedars-Sinai - https://www.linkedin.com/company/cedars-sinai-medical-center/Brennan Spiegel on X - https://x.com/BrennanSpiegelThanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
27:36 | 03/06/2024
Understanding the Legal and Ethical Implications of AI in Healthcare with Carmel Shachar

In this episode, I welcome Carmel Shachar, Faculty Director of the Health Law and Policy Clinic and Assistant Clinical Professor of Law at Harvard Law School Center for Health Law and Policy Innovation. We delve into how AI is shaping the future of healthcare, its profound impacts and the vital importance of thoughtful regulation. The interplay between AI and healthcare is increasingly critical, pushing the boundaries of medicine while challenging our regulatory frameworks.Key Takeaways:(00:00) AI’s challenges in balancing patient data needs.(03:09) The revolutionary potential of AI in healthcare innovation.(04:30) How AI is driving precision and personalized medicine.(06:19) The urgent need for healthcare system evolution.(09:00) Potential negative impacts of poorly implemented AI.(12:00) The unique challenges posed by AI as a medical device.(15:10) Minimizing regulatory handoffs to enhance AI efficacy.(18:00) How AI can reduce healthcare disparities.(20:00) Ethical considerations and biases in AI deployment.(25:00) AI’s growing impact on healthcare operations and management.(30:00) Enhancing patient-physician communication with AI tools.(39:00) Future directions in AI and healthcare policy.Resources Mentioned:Carmel Shachar - https://www.linkedin.com/in/carmel-shachar-7b3a8525/Harvard Law School Center for Health Law and Policy Innovation - https://www.linkedin.com/company/harvardchlpi/Carmel Shachar's Faculty Profile at Harvard Law School - https://hls.harvard.edu/faculty/carmel-shachar/Precision Medicine, Artificial Intelligence and the Law Project - https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-lawPetrie-Flom Center Blog - https://blog.petrieflom.law.harvard.edu/Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
42:08 | 28/05/2024
The Importance of Diverse Perspectives in Shaping AI Policies with Ari Kaplan

On this episode, I welcome Ari Kaplan, Head Evangelist of Databricks, a leading data and AI company. We discuss the intricacies of AI regulation, how different regions, like the US and EU, are addressing AI’s rapid development, and the importance of industry perspectives in shaping effective legislation.Key Takeaways:(04:42) Insights on the rapid advancements in AI technology and legislative responses.(10:32) The role of tech leaders in shaping AI policy and bridging knowledge gaps.(13:57) Open-source versus closed-source AI — Ari Kaplan advocates for transparency.(16:56) Ethical concerns in AI across different countries.(21:21) The necessity for both industry-specific and overarching AI regulations.(25:09) Automation’s potential to improve efficiency also raises employment risk.(29:17) A balanced, educational approach in the age of AI is crucial.(32:45) Risks associated with generative AI and the importance of intellectual property rights.Resources Mentioned:Ari Kaplan - https://www.linkedin.com/in/arikaplan/Databricks - https://www.linkedin.com/company/databricks/Unity Catalog Governance Value Levers - https://www.databricks.com/blog/unity-catalog-governance-value-leversPresident Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/EU AI Act Information - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligenceThanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
40:49 | 24/05/2024
AI and Regulatory Frameworks in Telecommunications with Nicolas Kourtellis

In this episode, I welcome Nicolas Kourtellis, Co-Director of Telefónica Research and Head of Systems AI Lab at Telefónica Innovación Digital, a company of the Telefonica Group. Nicolas shares his expert insights on the pivotal role of AI in revolutionizing telecommunications, the challenges of AI regulation and the innovative strides Telefónica is making toward sustainable and ethical AI deployment. Imagine a world where every device you own not only connects seamlessly but also intelligently adapts to your needs. This isn’t just a vision for the future; it’s the reality AI is creating today in telecommunications.Key Takeaways:(00:00) AI research focuses and applications in telecommunications.(03:24) AI’s role in optimizing network systems and enhancing user privacy is critical.(06:00) How Telefónica uses AI to improve customer service through AI chatbots.(12:03) The ethical considerations and sustainability of AI models.(16:08) Democratizing AI to make it accessible and beneficial for all users.(18:09) Designing AI systems with privacy and security from the start.(27:00) The challenges and opportunities AI presents for the workforce.(30:25) The potential of 6G and its reliance on AI technologies.(32:16) The integral role of AI in future technological advancements and network optimizations.(39:35) The societal impacts of AI in telecommunications.Resources Mentioned:Nicolas Kourtellis - https://www.linkedin.com/in/nicolas-kourtellis-3a154511/Telefónica Innovación Digital - https://www.linkedin.com/company/telefonica-innovacion-digital/Telefonica Group - https://www.linkedin.com/company/telefonica/You can find all of Nicolas’ publications on his Google Scholar page: http://scholar.google.com/citations?user=Q5oWwiQAAAAJ Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
39:56 | 22/05/2024
Supporting Vulnerable Populations With AI-Driven Initiatives with Dr. Irina Mirkina of UNICEF

On this episode of the Regulating AI Podcast, I'm joined by Dr. Irina Mirkina, Innovation Manager and AI Lead at UNICEF's Office of Innovation. An AI strategist, speaker, and expert for the European Commission, Dr. Mirkina brings a wealth of experience from academia, the private sector, and now, the humanitarian sector. Today’s discussion focuses on AI for social good.Key Takeaways:(03:31) The role of international organizations like UNICEF in shaping global AI regulations.(07:06) Challenges of democratizing AI across different regions to overcome the digital divide.(10:28) The importance of developing AI systems that cater to local contexts.(13:23) The transformative potential and limitations of AI in personalized education.(16:37) Engaging vulnerable populations directly in AI policy discussions.(20:47) UNICEF's use of AI in addressing humanitarian challenges.(25:10) The role of civil society in AI regulation and policymaking.(33:50) AI's risks and limitations, including issues of open-source management and societal impact.(38:57) The critical need for international collaboration and standardization in AI regulations.Resources Mentioned:Dr. Irina Mirkina - https://www.linkedin.com/in/irinamirkina/UNICEF Office of Innovation - https://www.unicef.org/innovation/Policy Guidance on AI for Children by UNICEF - https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdfThanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
44:47 | 17/05/2024
Understanding the Role of Government and Big Tech in China’s AI Landscape with Angela Zhang

On this episode, I’m joined by Professor Angela Zhang, Associate Professor of Law at the University of Hong Kong and Director of the Philip K. H. Wong Center for Chinese Law. We delve into the complexities of AI regulation in China, exploring how the government’s strategies impact both the global market and internal policies.Key Takeaways:(02:14) The introduction of China’s approach to AI regulation.(06:40) Discussion on the volatile nature of Chinese regulatory processes.(10:26) How China’s AI strategy impacts international relations and global standards.(13:32) Angela explains the strategic use of law as an enabler in China’s AI development.(18:53) High-level talks between the US and China on AI risk have not led to substantive actions.(22:04) The US’s short-term gains from AI chip restrictions on China may lead to long-term disadvantages as China becomes self-sufficient and less cooperative.(24:13) Unintended consequences of the Chinese regulatory system.(29:19) Angela advocates for a slower development of AI technology to better assess and manage risks before they become unmanageable.Resources Mentioned:Professor Angela Zhang - http://www.angelazhang.netHigh Wire by Angela Zhang - https://global.oup.com/academic/product/high-wire-9780197682258Article: The Promise and Perils of China’s Regulation - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676Research: Generative AI and Copyright: A Dynamic Perspective - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233Research: The Promise and Perils of China's Regulation of Artificial Intelligence - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676Angela Zhang’s Website - https://www.angelazhang.net/High Wire Book Trailer - https://www.youtube.com/watch?v=u6OPSit6k6sPurchase High Wire by Angela Zhang - 
https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&keywords=high+wire+angela+zhang&qid=1706441967&sprefix=high+wire+angela+zha,aps,333&sr=8-1Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
36:51 | 03/05/2024
Advocating for Stronger AI Regulations To Safeguard Civil Liberties with Congressman Joseph Morelle

On this episode, I am thrilled to sit down with Congressman Joseph Morelle, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts.Key Takeaways:(02:13) Congressman Morelle's extensive experience in AI legislation and its implications.(04:27) Deep fakes and their growing threat to privacy and integrity.(07:13) Introducing federal legislation against non-consensual deep fakes.(14:00) Urgent need for social media platforms to enforce their guidelines rigorously.(19:46) The No AI Fraud Act and protecting individual likeness in AI use.(23:06) The importance of adaptable and 'living' statutes in technology regulation.(32:59) The critical role of continuous education and skill adaptation in the AI era.(37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy.Resources Mentioned:Congressman Joseph Morelle - https://www.linkedin.com/in/joe-morelle-8246099/No AI Fraud Act - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&r=9Preventing Deep Fakes of Intimate Images Act - https://www.congress.gov/bill/118th-congress/house-bill/3106Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
40:18 | 30/04/2024
Empowering Innovators for a Brighter AI Tomorrow with Dr. Sethuraman Panchanathan

On this episode, I welcome Dr. Sethuraman Panchanathan, Director of the U.S. National Science Foundation and a professor at Arizona State University. Sethuraman shares personal insights on the transformative power of artificial intelligence and the importance of democratizing this technology to be sure it benefits humanity as a whole.Key Takeaways:(00:21) AI’s pivotal role in enhancing speech-language services.(01:28) Introduction to Sethuraman’s visionary leadership at NSF.(02:36) NSF’s significant AI investment totaled over $820 million.(06:19) The shift toward interdisciplinary AI research at NSF.(10:26) NSF’s initiative of launching 25 AI institutes for innovation.(18:26) Emphasis on AI democratization through education and training.(25:11) The NSF ExpandAI program boosts AI in minority-serving institutions.(30:21) Focus on ethical AI development to build public trust.(40:10) AI’s transformative applications in healthcare, agriculture and more.(42:45) The importance of ethical guardrails in AI’s development.(43:08) Advancing AI through international collaborations.(44:53) Lessons from a career in AI and advice for the next generation.(50:19) Motivating young researchers and entrepreneurs in AI.(52:24) Advocating for AI innovation and accessibility for everyone.Resources Mentioned:Dr. Sethuraman Panchanathan -https://www.linkedin.com/in/drpanch/U.S. National Science Foundation | LinkedIn -https://www.linkedin.com/company/national-science-foundation/U.S. National Science Foundation | Website -https://www.nsf.gov/Arizona State University -https://www.linkedin.com/school/arizona-state-university/ExpandAI Program -https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-buildingDr. Sethuraman Panchanathan’s NSF Profile -https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchanNSF Regional Innovation Engines -https://new.nsf.gov/funding/initiatives/regional-innovation-enginesNational AI Research Resource (NAIRR) -https://new.nsf.gov/focus-areas/artificial-intelligence/nairrNSF Focus on Artificial Intelligence -https://new.nsf.gov/focus-areas/artificial-intelligenceNSF AI Research Funding -https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-researchGRANTED Initiative for Broadening Participation in STEM -https://new.nsf.gov/funding/initiatives/broadening-participation/grantedThanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
54:43 | 24/04/2024
Evaluating the Effectiveness of AI Legislation in Cybersecurity with Bruce Schneier

The rapid evolution of artificial intelligence in cybersecurity presents both significant opportunities and daunting challenges. On this episode, I'm joined by Bruce Schneier, who is renowned globally for his expertise in cybersecurity and is dubbed a “security guru” by The Economist. Bruce, a best-selling author and lecturer at Harvard Kennedy School, discusses the fast-paced world of AI and cybersecurity, exploring how these technologies intersect with national security and what that means for future regulations.Key Takeaways:(00:00) I discuss with Bruce the challenges of regulating AI in the US.(02:28) Bruce explains the role and future potential of AI in cybersecurity.(05:05) The benefits of AI in defense, enhancing capabilities at computer speeds.(07:22) The need for robust regulations akin to those in the EU.(12:56) Bruce draws analogies between AI regulation and pharmaceutical controls.(19:56) The critical role of knowledgeable staff in supporting legislators.(22:24) The challenges of effectively regulating AI.(26:15) The potential of AI to transform enforcement across various sectors.(30:58) Reflections on the future of AI governance and ethical considerations.Resources Mentioned:Bruce Schneier Website - https://www.schneier.com/EU AI Strategy - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligenceThanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
33:05 | 23/04/2024
AI's Potential in Public Services with Trooper Sanders

On this episode, I’m joined by Trooper Sanders, CEO of Benefits Data Trust and a member of the White House National Artificial Intelligence Advisory Committee. Trooper’s expertise in leveraging AI to enhance the efficiency and humanity of America’s social safety net offers unique insights into the potential and challenges of AI in public services.Key Takeaways:(02:27) The role of Benefits Data Trust in connecting people to essential benefits using AI.(04:54) The components of trustworthy AI: reliability, public interest alignment, security, transparency, explainability, privacy and harm mitigation.(09:38) The ‘tortoise and hare’ challenge in aligning AI advancements with legislative processes.(16:17) The significance of voluntary industry commitments in shaping AI’s ethical use.(20:32) Ethical considerations in deploying AI, focusing on its societal impact and the readiness of systems for AI integration.(22:53) Addressing biases in AI to ensure fairness and equitable benefits across all socioeconomic groups.(27:52) Amplifying diverse voices in the AI discussion to encompass a wide range of societal perspectives.(34:22) The potential workforce disruption by AI and the necessity of supportive measures for affected individuals.(37:26) Considering the potentially massive impact of AI-driven career changes across various professions.Resources Mentioned:Trooper Sanders -https://www.linkedin.com/in/troopersanders/Benefits Data Trust | LinkedIn -https://www.linkedin.com/company/benefits-data-trust/Benefits Data Trust | Website -https://bdtrust.org/White House National Artificial Intelligence Advisory Committee -https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/BDT Launches AI and Human Services Learning Hub -https://bdtrust.org/bdt-launches-ai-learning-lab/Our Vision for an Intelligent Human Services and Benefits Access System -https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-systemHumans Must Control Human-Serving AI -https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/Trooper Sanders’ Bio -https://bdtrust.org/trooper-sanders/Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
41:39 | 19/04/2024
The Impact of AI on Global Military Strategies with Dr. Paul Lushenko

I'm thrilled to be joined by Dr. Paul Lushenko, a Lieutenant Colonel in the U.S. Army and Director of Special Operations at the U.S. Army War College. Dr. Lushenko brings a wealth of knowledge from the front line of AI implementation in military strategy. He joins me to share his insights into the delicate balance between innovation and regulation.Key Takeaways:(02:28) The necessity of addressing AI’s impact on warfare and crisis escalation.(06:37) The gaps in global governance regarding AI and autonomous weapon systems.(08:30) U.S. policies on the responsible use of AI in military operations.(16:29) The importance of cutting-edge research in informing legislative actions on AI.(18:49) The risk of biases in AI systems used in national security.(20:09) Discussion on automation bias and its consequences in military operations.(24:44) Dr. Lushenko argues for the adoption of a strategic framework to guide AI development in military contexts.(32:49) Emphasis on the importance of careful management and extensive testing to build trust in AI systems within the military.(39:51) The critical need for data-driven decision-making in high-stakes environments, advocating for leveraging expert insights.Resources Mentioned:Dr. Paul Lushenko -https://www.linkedin.com/in/paul-lushenko-phd-5b805113/U.S. Army War College -https://www.linkedin.com/school/united-states-army-war-college/Political Declaration on Responsible Use of AI in Military Technologies -https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdfMemorandum on Ethical Use of AI - White House 2023 -https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdfThanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.#AIRegulation #AISafety #AIStandard
41:34 | 18/04/2024
Harnessing AI for Equitable Education with Randi Weingarten, President of American Federation of Teachers

On this episode, I welcome Randi Weingarten, President of the American Federation of Teachers (AFT). She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in shaping equitable and effective educational environments.

Key Takeaways:
(01:08) Introduction of Randi Weingarten and her role in the AFT.
(05:00) The critical issue of ensuring equitable access to AI technologies in education.
(08:06) Addressing bias and discrimination within AI-driven educational systems.
(11:53) The importance of inclusive participation in the implementation of educational technologies.
(13:09) The evolving necessity for educators to acquire new skills in response to AI advancements.
(17:26) The role of personalized teaching as a complement, not a replacement, for traditional educational methods.
(18:08) Concerns surrounding data privacy and security within AI-driven platforms.
(20:25) The need for regulation and oversight in the application of AI in educational settings.
(25:22) The potential for productive industry collaboration in developing AI tools for education.
(30:28) Advocating for a just transition fund to support workers displaced by AI and technological advancements.

Resources Mentioned:
Randi Weingarten - https://www.linkedin.com/in/randi-weingarten-05896224/
American Federation of Teachers - https://www.aft.org/
Testimony to Senator Schumer by Randi Weingarten on equity in AI - https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits
Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
36:49 · 01/04/2024
Crafting Effective AI Policies for National Security With Insights From Anja Manuel

AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome Anja Manuel, the Executive Director of the Aspen Strategy Group and the Aspen Security Forum, as well as Co-Founder and Partner at Rice, Hadley, Gates & Manuel, LLC. Anja’s insights make the path forward clearer, framing effective AI legislation and emphasizing the need for global cooperation and ethical considerations. Her perspective, deeply rooted in national security expertise, underscores the critical balance between innovation and safeguarding against misuse.

Key Takeaways:
(00:17) The functionality of intelligence committees across party lines.
(00:59) AI in warfare reflects a shift from World War I tactics to modern tech battles.
(03:10) The rapid innovation in military technology and the US’s efforts to adapt.
(03:53) Risks of unregulated AI, including in cyber, autonomous weapons and bio-tech.
(07:09) AI regulation is needed both globally and nationally.
(11:21) International collaboration plays a vital role in AI regulation.
(13:39) Ethical considerations unique to AI applications in national security.
(14:31) National security agencies’ openness to regulatory frameworks.
(15:35) Public-private collaboration in addressing national security considerations.
(17:08) Establishing standards in AI technology for national security is necessary.
(18:28) Regulation of autonomous weapons and international agreements.
(19:32) Balancing secrecy in national security operations with public scrutiny of AI use.
(20:17) AI’s role and risks in intelligence and privacy.
(21:13) Regulating AI in cybersecurity and other areas is a challenge.

Resources Mentioned:
Anja Manuel - https://www.linkedin.com/in/anja-manuel-26805023/
Aspen Strategy Group - https://www.aspeninstitute.org/programs/aspen-strategy-group/
Aspen Security Forum - https://www.aspensecurityforum.org/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
24:35 · 26/03/2024
Shaping the Future of Manufacturing With AI Insights with Dr. Gunter Beitinger

On this episode, I’m joined by Dr. Gunter Beitinger, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at Siemens. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufacturing, emphasizing its potential to enhance productivity, ensure workforce well-being and drive sustainable practices without displacing human labor.

Key Takeaways:
(02:17) Dr. Beitinger’s extensive background and role at Siemens.
(05:13) Specific examples of AI-driven improvements in Siemens’ operations.
(07:52) The measurable productivity gains attributed to AI in manufacturing.
(10:02) The impact of AI on employment and the importance of re-skilling.
(13:06) The necessity for a collaborative approach between governments and the private sector in workforce development.
(16:24) The role of AI in improving the working conditions of industrial workers.
(26:53) The potential for smaller companies to leverage AI and compete with industry giants.
(36:49) AI’s future role in creating digital twins and the industrial metaverse.

Resources Mentioned:
Dr. Gunter Beitinger - https://www.linkedin.com/in/gunter-dr-beitinger/
Siemens | LinkedIn - https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text
Siemens | Website - https://www.siemens.com/
https://blog.siemens.com/space/artificial-intelligence-in-industry/
https://blog.siemens.com/2023/07/the-need-to-rethink-production/
https://www.siemens.com/global/en/products/automation/topic-areas/industrial-operations-x.html#GetyourfreeticketforHannoverMesse2023
https://www.siemens.com/global/en/company/innovation/research-development/next-gen-industrial-ai.html

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
40:10 · 19/03/2024
Exploring AI’s Impact on National Security and Legislation with Sarah Kreps

On this episode, I’m joined by Sarah Kreps, the John L. Wetherell Professor in the Department of Government, Adjunct Professor of Law, and the Director of the Tech Policy Institute at Cornell Brooks School of Public Policy. Her expertise in international politics, technology and national security offers a valuable perspective on shaping AI legislation.

Key Takeaways:
(00:20) The significant impact of industry and NGOs on AI regulation and congressional awareness.
(03:27) AI's multifaceted applications and its national security implications.
(05:07) Advanced efficiency of AI in misinformation campaigns and the importance of legislative responses.
(10:58) Proactive measures by AI firms like OpenAI for electoral fidelity and misinformation control.
(14:23) The challenge of balancing AI innovation with security and economic considerations in legislation.
(20:30) Concerns about potential AI monopolies and the economic consequences.
(28:16) Ethical and practical aspects of AI assistance in legislative processes.
(30:13) The critical need for human involvement in AI-augmented military decisions.
(35:32) National security agencies' approach to AI regulatory frameworks.
(39:13) The imperative of Congress's engagement with diverse sectors for comprehensive AI legislation.

Resources Mentioned:
Sarah Kreps - https://www.linkedin.com/in/sarah-kreps-51a3b7257/
Cornell - https://www.linkedin.com/school/cornell-university/
Sarah Kreps’ paper for the Brookings Institution - https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Discussions on AI Global Governance - https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm
Sarah Kreps - Cornell University - https://government.cornell.edu/sarah-kreps

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
44:46 · 14/03/2024
The Ethical Boundaries of AI and Robotics with Professor Emeritus Ronald Arkin

On this episode, I’m joined by Professor Ronald Arkin, a renowned expert in robotics and roboethics from the Georgia Institute of Technology. Our discussion focuses on AI and robotics. We explore the ethical implications and the necessity for regulatory frameworks that ensure responsible development and deployment.

Key Takeaways:
(02:40) Ethical guidelines for AI and robotics.
(03:19) IEEE’s role in creating soft law guidelines.
(06:56) Robotics’ overshadowing by large language models.
(10:13) The necessity of oversight and compliance in AI development.
(15:30) Ethical considerations for emotionally expressive robots.
(23:41) Liability frameworks for ethical lapses in robotics.
(27:43) The debate on open-sourcing robotics software.
(29:52) The impact of robotics on workforce and employment.
(33:37) Human rights implications in robotic deployment.
(42:55) Final insights on cautious advancement in AI regulation.

Resources Mentioned:
Ronald Arkin - https://sites.cc.gatech.edu/aimosaic/faculty/arkin/
Ronald Arkin | LinkedIn - https://www.linkedin.com/in/ronald-arkin-a3a9206/
Georgia Tech Mobile Robot Lab - https://sites.cc.gatech.edu/ai/robot-lab/
Georgia Institute of Technology - https://www.linkedin.com/school/georgia-institute-of-technology/
IEEE Standards Association - https://standards.ieee.org/
United Nations Convention on Certain Conventional Weapons - https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&clang=_en&mtdsg_no=XXVI-2&src=TREATY

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
42:43 · 09/03/2024
Navigating AI Innovation and Ethics in Legislation with Steve Mills

On this episode, I welcome Steve Mills, Global Chief AI Ethics Officer for Boston Consulting Group and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through the often-confusing topic of AI regulation and ethics.

Key Takeaways:
(00:26) The role clear regulations play in fostering innovation.
(02:43) The importance of consultation with industry to set achievable regulations.
(04:07) Addressing the uncertainty surrounding AI regulation.
(06:19) The necessity of sector-specific AI regulations.
(07:33) The debate over establishing a separate AI regulatory body.
(09:22) Adapting AI policy to keep pace with technological advancements.
(11:40) Enhancing AI literacy and upskilling the workforce.
(13:06) Ethical considerations in AI deployment, focusing on trustworthiness and harmlessness.
(15:01) Strategies for ensuring AI systems are fair and equitable.
(20:10) The discussion on open-source AI and combating monopolies.
(22:00) The importance of transparency in AI usage by companies.

Resources Mentioned:
Steve Mills - https://www.linkedin.com/in/stevndmills/
Boston Consulting Group - https://www.linkedin.com/company/boston-consulting-group/
Responsible AI Ethics - https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai
Study on the impact of AI in the workforce - https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
25:23 · 07/03/2024
The Impact of Rapid AI Evolution with Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group) in the European Parliament

On this episode, I welcome Kai Zenner, Head of Office and Digital Policy Adviser at the European Parliament. We discuss the complexities and challenges of artificial intelligence, especially focusing on the legislative efforts within the EU to regulate AI technologies.

Key Takeaways:
(01:36) Diverse perspectives in AI legislation play a significant role.
(02:34) The EU AI Act’s status and its risk-based, innovation-friendly approach.
(07:11) The recommendation for a vertical, industry-specific approach to AI legislation.
(08:32) Measures in the AI Act to prevent AI power concentration and ensure transparency.
(11:50) The global approach of the EU AI Act and its focus on international alignment.
(14:28) Ethical considerations in AI development addressed by the AI Act.
(16:21) Implementation and enforcement mechanisms of the EU AI Act.
(23:31) The involvement of industry experts, researchers and civil society in developing the AI Act.
(29:51) The importance of educating the public on AI issues.
(33:12) Concerns about deepfake technology and election interference.

Resources Mentioned:
Kai Zenner - https://www.linkedin.com/in/kzenner/?originalSubdomain=be
European Parliament - https://www.linkedin.com/company/european-parliament/
EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
38:13 · 04/03/2024
The Role of AI in Society with Lexy Kassan, Lead Data and AI Strategist of Databricks

On this episode, I’m joined by Lexy Kassan, Lead Data and AI Strategist of Databricks and Founder and Host of the Data Science Ethics Podcast. Lexy brings a wealth of knowledge from her dual role as an AI ethicist and industry insider, providing an in-depth perspective on how legislation can shape the future of AI without curbing its potential.

Key Takeaways:
(02:44) The global impact of the EU AI Act.
(03:46) The necessity for risk-based AI model assessments.
(08:20) Ethical challenges hidden within AI applications.
(11:45) Strategies for inclusive AI benefiting marginalized communities.
(13:29) Core ethical principles for AI systems.
(19:50) The complexity of creating unbiased AI data sets.
(21:58) Categories of unacceptable risks in AI according to the EU Act.
(27:18) Accountability in AI deployment.
(30:53) The role of open-source models in AI development.
(36:24) Businesses seek clear regulatory guidelines.

Resources Mentioned:
Lexy Kassan - https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk
Data Science Ethics Podcast - https://www.linkedin.com/company/dsethics/
EU AI Act - https://artificialintelligenceact.eu/
Databricks - https://www.databricks.com/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
39:08 · 29/02/2024
Existential Risk in AI with Otto Barten

In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by Otto Barten, Founder of the Existential Risk Observatory. We focus on the critical issue of AGI and its potential to pose existential risks to humanity. Otto shares valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.

Key Takeaways:
(00:18) Public awareness of AI risks is rising rapidly.
(01:39) The Existential Risk Observatory’s mission is to mitigate human extinction risks.
(02:51) The European Union’s political consensus on the EU AI Act.
(04:11) Otto explains multiple AI threat models leading to existential risks.
(07:01) Why distinguish between AGI and current AI capabilities?
(09:18) Sam Altman and Mark Zuckerberg made recent statements on AGI.
(12:15) The potential dangers of open-sourcing AGI.
(14:17) The current regulatory landscapes and potential improvements.
(17:01) The concept of a “pause button” for AI development is introduced.
(20:13) Balancing AI development with ethical considerations and existential risks.
(23:51) Increasing public and legislative awareness of AI risks.
(29:01) The significance of transparency and accountability in AI development.

Resources Mentioned:
Otto Barten - https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl
Existential Risk Observatory - https://www.linkedin.com/company/existential-risk-observatory/
European Union AI Act -
The Bletchley Process for global AI safety summits -

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
37:51 · 28/02/2024
A Vision for a Balanced AI Future with Daniel Jeffries of AI Infrastructure Alliance and Kentauros AI

On this episode, I'm joined by Daniel Jeffries, Managing Director of the AI Infrastructure Alliance and CEO of Kentauros, to explore the complexities of AI's potential and the critical need for balanced, forward-thinking legislation.

Key Takeaways:
(02:05) Recent executive orders on AI, watermarking and model size regulation.
(03:54) Autonomous weapons and the need for regulation in areas exempted by governments.
(07:01) Liability in AI-induced harm and the challenge of assigning responsibility.
(07:52) The rapid evolution of AI and the legislative challenge to keep pace.
(10:37) The risk of regulatory capture and the importance of preventing AI monopolies.
(13:29) The role of open source in fostering innovation.
(16:32) Skepticism towards the feasibility of a global consensus on AI regulation.
(18:21) Advocacy for industry-specific regulations, emphasizing use-case and industry nuances.
(22:33) Recommendations for policymakers to focus on real-world problems.

Resources Mentioned:
Daniel Jeffries - https://www.linkedin.com/in/danjeffries/
AI Infrastructure Alliance - https://www.linkedin.com/company/ai-infrastructure-alliance/
Kentauros - https://www.linkedin.com/company/kentauros-ai/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
29:25 · 17/02/2024
Crafting Equitable AI Policies for Work and Education with Alex Swartsel

On this episode, I welcome Alex Swartsel, Managing Director of Insights at JFFLabs. We discuss AI’s role in transforming the employment landscape, highlighting the delicate balance between leveraging AI for growth and mitigating its potential disruptions.

Key Takeaways:
(00:16) AI’s transformative impact on employment.
(02:35) The role AI plays in job transformation and skill enhancement.
(04:30) The automation and augmentation of tasks by AI.
(06:10) Rethinking education and skill development in the age of AI.
(09:22) The significance of soft skills in conjunction with technical knowledge.
(11:00) AI’s potential to customize learning experiences.
(17:20) The pivotal role of community colleges in workforce training.
(21:33) The imperative of reskilling and the government’s role.
(29:51) Using AI for personalized education and career guidance.
(35:09) Promoting AI as a tool for human advancement.

Resources Mentioned:
Alex Swartsel - https://www.linkedin.com/in/alexswartsel/
JFFLabs’ New Center for Artificial Intelligence and the Future of Work - https://www.jff.org/
The AI-Ready Workforce report - https://info.jff.org/ai-ready
IMF Report on AI’s Impact on Jobs - https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
35:49 · 14/02/2024
Envisioning a Harmonious Future Between AI and Humanity with Avi Loeb

On this episode, I'm joined by Professor Avi Loeb, Professor of Science at Harvard University, Director of the Institute for Theory and Computation within the Harvard Smithsonian Center for Astrophysics, Head of the Galileo Project, Chair of Harvard's Department of Astronomy and best-selling author. Avi provides an astrophysicist's perspective on the ethical and regulatory frameworks necessary to ensure the responsible use of artificial intelligence.

Key Takeaways:
(00:36) The essential role of academia in fostering dialogue across differing viewpoints.
(06:58) Professor Loeb's concerns about AI's unpredictability.
(09:18) The importance of training AI systems with value-aligned datasets to moderate societal risks.
(10:59) Assigning responsibility for AI's actions.
(14:29) The need for international treaties to regulate AI's use in national security and warfare.
(17:58) Addressing internal disinformation and the role of AI in amplifying societal divisions.
(22:40) Engaging the public in AI regulation discussions to ensure diverse perspectives.
(26:37) The potential for AI to revolutionize space exploration and decision-making in remote environments.

Resources Mentioned:
Harvard University's Galileo Project - https://projects.iq.harvard.edu/galileo/home
Rubin Observatory - https://rubinobservatory.org/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
35:34 · 09/02/2024
The Potential Effect of AI and Autonomous Flying Robots on National Security with Timothy Bean of Fortem Technologies

In this latest episode, I'm joined by Timothy Bean, President and COO of Fortem Technologies, to explore the intricate interplay between artificial intelligence, national security and the legislative landscape that surrounds it.

Key Takeaways:
(02:42) The evolution of national security tools and the advent of AI.
(03:49) The importance of data privacy in AI legislation and national security.
(05:07) The challenges of regulating AI in a rapidly advancing technological landscape.
(10:13) How legislative bodies should adapt and embrace AI to keep pace with technological advancements.
(12:13) The impending impact of quantum computing on AI and national security.
(15:38) The US faces an arms race in AI and quantum computing against global competitors like China and Russia.
(17:25) Public-private partnerships in enhancing national security through AI.
(18:39) The role of transparency and accountability in AI applications for national security.
(22:16) Debating the merits of open-sourcing AI models in the context of national security.
(24:55) The significance of educating the public on data privacy and the potential of AI.

Resources Mentioned:
Timothy Bean - https://www.linkedin.com/in/meghalred/
Fortem Technologies - https://www.linkedin.com/company/fortem-technologies/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Department of Defense AI Ethics Principles - https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
33:28 · 06/02/2024
AI Education and Policy with Nathan Grant of TeachAI

On this episode, I'm thrilled to chat with Nathan Grant, Policy Fellow of TeachAI, an initiative championed by notable organizations including Code.org, ETS, ISTE, Khan Academy and the World Economic Forum. Nathan shares invaluable insights on integrating AI education within K-12, emphasizing the importance of a balanced approach to harness AI's potential while mitigating its risks.

Key Takeaways:
(01:16) Introduction of Nathan Grant and the TeachAI initiative.
(02:14) TeachAI's broad coalition, including tech giants and educational stakeholders.
(03:45) Perspectives on President Biden's Executive Order on AI.
(06:27) AI literacy's critical role across all subjects in K-12 education.
(07:30) Addressing the digital and AI divide for equitable education.
(09:03) Engaging students in the AI legislation dialogue.
(12:44) Concerns over banning AI tools like ChatGPT in schools.
(14:33) The risk of AI tool monopolization by a few large tech companies.
(16:00) The importance of education in demonstrating AI's potential and ensuring its responsible use.
(18:59) The potential for standardized AI education guidelines.

Resources Mentioned:
Nathan Grant - https://www.linkedin.com/in/nathan-grant-t/
Code.org - https://www.linkedin.com/company/code-org/
President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
TeachAI initiative - https://www.teachai.org/

Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#AIRegulation #AISafety #AIStandard
26:53 · 03/02/2024