"Artificial Intelligence and Bias"
Episode Synopsis
It is hard to find a discussion of artificial intelligence these days that does not include concerns about Artificial Intelligence (AI) systems' potential bias against racial minorities and other identity groups. Facial recognition, lending, and bail determinations are just a few of the domains in which this issue arises. Laws are being proposed and even enacted to address these concerns. But is this problem properly understood? If it is real, do we need new laws beyond the anti-discrimination laws that already govern human decision makers, hiring exams, and the like?

Unlike some humans, AI models don't have malevolent biases or an intention to discriminate. Are they superior to human decision-making in that sense? Nonetheless, it is well established that AI systems can have a disparate impact on various identity groups. Because AI learns by detecting correlations and other patterns in a real-world dataset, are disparate impacts inevitable, short of requiring AI systems to produce proportionate results? Would prohibiting certain kinds of correlations degrade the accuracy of AI models? For example, in a bail determination system, would an AI model that learns that men are more likely to be repeat offenders produce less accurate results if it were prohibited from taking gender into account?

Featuring:
- Stewart Baker, Partner, Steptoe & Johnson LLP
- Nicholas Weaver, Researcher, International Computer Science Institute, and Lecturer, UC Berkeley
- Moderator: Curt Levey, President, Committee for Justice
More episodes of the podcast FedSoc Forums
A Seat at the Sitting - November 2025
05/11/2025
SAP, Motorola, and the Future of PTAB Reform
31/10/2025
Law Firm Discrimination Investigations
31/10/2025
Can State Courts Set Global Climate Policy?
10/10/2025
A Seat at the Sitting - October 2025
03/10/2025