Episode notes
In the early 1980s, a computer program at St. George's Hospital Medical School in London was screening out qualified applicants, not because they lacked credentials, but because their names sounded foreign. The program had been built to mimic the school's past admissions decisions, and it absorbed the human selectors' prejudice along with their expertise; nobody caught the discrimination for years. This episode examines how automated systems don't just reflect human prejudice; they industrialize it.
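For listeners who want to see the mechanism concretely, here is a minimal sketch in Python. It is not the actual St. George's program; the data, feature names, and weights below are all invented for illustration. The point is only that a model fit to biased historical decisions reproduces that bias on new applicants.

# Illustrative sketch (invented toy data, not the St. George's program):
# a model fit to biased historical decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: an academic-merit score, plus a flag meaning a
# human screener judged the name "foreign-sounding" (both made up here).
merit = rng.normal(0.0, 1.0, n)
foreign_name = rng.integers(0, 2, n)

# Historical human decisions: admit on merit, but quietly dock
# foreign-sounding names. This is the prejudice baked into the labels.
admitted = (merit - 1.5 * foreign_name + rng.normal(0.0, 0.5, n)) > 0

# The "automation" step: fit a model to mimic the past decisions.
X = np.column_stack([merit, foreign_name])
model = LogisticRegression().fit(X, admitted)

# The model learns the name penalty right along with the merit signal.
print("learned weights [merit, foreign_name]:", model.coef_[0])

# Two equally qualified applicants now get different odds of admission.
same_merit = np.array([[1.0, 0.0], [1.0, 1.0]])
print("P(admit):", model.predict_proba(same_merit)[:, 1])

With this setup, the learned weight on the name flag comes out negative and the second applicant's admission probability drops, despite identical merit. Nothing in the fitting step is malicious; the discrimination arrives entirely through the historical labels.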
We start with the St. George's case as a concrete entry point into the broader problem of algorithmic discrimination. From there, we widen the lens to automated decision-making systems in hiring, criminal justice, healthcare, and financial services, each of which has been caught replicating and scaling human prejudice at a speed and volume no individual decision-maker could match.