Podcast episodes

  • Season 2

  • Being Picky with Piki: Ironing out the Biases in Song Quality Algorithms

    Listen to Krystyn Gutu, M.S. introduce Sasha Stoikov, a senior research associate at Cornell Financial Engineering Manhattan (CFEM). His research focuses on algorithms in high-frequency financial trading, online ratings systems, and recommendation systems, domains in which he has encountered algorithmic biases such as survivorship bias, popularity bias, and inflation bias. He is also the founder of Piki, a startup that gamifies music ratings. Ratings produced by Piki users can mitigate these algorithmic biases, which he discusses in a recent paper aimed at answering a simple but provocative question: “Is the popularity of artists like Justin Bieber or Taylor Swift truly justified?” Check out his paper, Better Than Bieber? Measuring Song Quality Using Human Feedback, to find out. He also authored Evaluating Music Recommendations with Binary Feedback for Multiple Stakeholders and Picky Eaters Make For Better Raters. In this episode, we discuss:
    – the roles of survivorship bias, popularity bias, and inflation bias
    – Piki, the music ratings app designed exclusively for those who know what they like
    – how the algorithms behind Instagram, TikTok, and Spotify compare to how Piki analyzes song quality
    – the data used to train these and other platforms
    – how interfaces collecting data unintentionally encourage certain biases
    – implicit vs. explicit data collection
    – how Piki collects data and how that addresses the main concerns of its users
    – how Piki incentivizes its users to listen to a larger music selection and nudges them into being fair with their ratings
    – the golden era of data and the tremendous opportunities and dangers that lie ahead
    If you like what you hear, follow us on LinkedIn (@Gakovii) or Instagram (@gakovii__). Technically Biased is available on Spotify, Apple, and Amazon, among others. #AlgorithmicBias #PredatoryTech #TechnicallyBiasedPodcast #Gakovii

  • Algorithmic Auditing... it's Not Rocket Science, it's Astrophysics

    Tune in to learn about the applicability of algorithmic auditing. Our guest, Shea Brown, founder and CEO of BABL AI, joins us to share his perspective. An Associate Professor of Instruction at the University of Iowa, Brown holds a PhD in Astrophysics and specializes in AI Ethics and Machine Learning, with a focus on Algorithmic Auditing and AI Governance. In this episode, we discuss:
    – Brown’s use of AI in astrophysics and its applicability, given the sheer volume of data from the sky
    – Brown’s transition to founding BABL AI after recognizing a big problem: countless examples of bias in AI
    – the consulting work BABL AI does and what that process entails
    – algorithmic auditing, which, like any audit, is a check and balance aimed at ensuring people and organizations meet the required standards and follow appropriate procedures (e.g., we don’t want to harm people, we don’t want to infringe on people’s rights, people should be involved in algorithmic decision-making)
    – algorithmic auditing with a focus on the socio-technical aspects of how tech is used more broadly
    – what to expect for the future of AI regulation and governance
    – the importance of accountability and transparency, and the trade-offs that come with regulation
    – algorithmic transparency: whether your audience knows an algorithm is being used and what data is being processed; accountability and transparency build trust within a society, which should demand more from the companies using its data
    – how historical data will always carry a level of bias that needs to be considered
    If you like what you hear, follow us on LinkedIn (@Gakovii) or Instagram (@gakovii__). Technically Biased is available on Spotify, Apple, and Amazon, among others. #AlgorithmicBias #PredatoryTech #TechnicallyBiasedPodcast #Gakovii

  • Muslim American Identities and Their Many Facets

    Listen to Krystyn Gutu, M.S. introduce Hauwa Abbas and Lama Aboubakr. Abbas is the founder of The Halimah Project, a mentoring and tutoring program created for refugee youth in the Greater Lansing (Michigan) Area. She is a graduate student at American University and a grant writer at Miftaah Institute. Aboubakr runs her own practice as a Mindset & Life Coach, combining cognitive psychology-based techniques with Islamic Spirituality coaching; she specializes in dating and relationships for Muslims. Tune in to hear them share their perspectives on:
    ~ the work they do
    ~ working with Muslim refugee youth
    ~ working with Muslim and Muslim American couples and individuals and helping them navigate dating and relationships
    ~ the importance of faith and respect for Allah
    ~ the perpetuation of bias and microaggressions, internally and externally
    If you like what you hear, follow us on any of our social media linked below. Technically Biased is available on Spotify, Apple, and Amazon, among others. #AlgorithmicBias #PredatoryTech #TechnicallyBiasedPodcast #Gakovii

  • Season 1

  • Algorithmic Accountability... and What That Means from a Human Rights Perspective | 1.15

    Damini Satija is a Human Rights and Public Policy professional, as well as Head of the Algorithmic Accountability Lab and Interim Director at Amnesty Tech. Satija has experience working on data and AI, with a focus on government surveillance, algorithmic discrimination, welfare automation, and tech equity and justice. She holds a Master of Public Administration (MPA) from Columbia University, with a specialization in tech policy, and a BA in Economics from the University of California, Berkeley. In this episode, she and Gutu discuss:
    – how bias and discrimination generally emerge in AI algorithms
    – how human rights implications play a big role in data and, consequently, in policy and regulation
    – what needs to be addressed to properly mitigate AI harms... is it the model that should be optimized or the data (i.e., model-centric vs data-centric)?
    – how our biases are codified
    – how we can go about ensuring more inclusivity, more representation, and less bias in tech
    – how net neutrality, encryption laws, copyright, and content moderation affect us
    – how AI is playing an increasingly large role in Hollywood, art, and media: is it possible to reclaim our data? Is data ownership a myth? What are the implications of assigning property rights to personal data?
    – how the hype around ChatGPT and generative AI is overdone, and how environmentally unsustainable these models are
    – whether ChatGPT should be trained on people's writing, such as their books, articles, and/or poetry, and how property rights and copyright law apply
    – how to be more mindful with technology and the ways it uses our data
    Check out our website, LinkedIn, or Instagram to stay up to date! #AlgorithmicBias #PredatoryTech #TechnicallyBiasedPodcast #Gakovii

  • Legal Decisions are Being Codified and the Models are Perpetuating Historical Biases | Episode 1.14

    Patrick K. Lin is a lawyer and researcher focused on AI, privacy, and technology regulation. He is the author of Machine See, Machine Do, a book that explores the ways public institutions use technology to surveil, police, and make decisions about the public, as well as the historical biases that impact that technology. Patrick has extensive experience in litigation and policy, having worked for the ACLU, FTC, EFF, and other organizations that advocate for digital rights and social justice. He is passionate about addressing the ethical and legal challenges posed by emerging technologies, especially in the areas of surveillance, algorithmic bias, and data privacy. He has also published articles and papers on facial recognition, data protection, and copyright law. This episode covers some of the many fascinating topics Lin dives into throughout his book, including:
    – Robert Moses often quoted the saying, “Legislation can always be changed. It’s very hard to tear down a bridge once it’s up.” Unsurprisingly, Moses had enormous influence in shaping the physical layout and infrastructure of New York City and its surrounding suburbs (hundreds of miles of road, the Central Park Zoo, the United Nations (UN) Headquarters, Lincoln Center, and more). Today, the digital landscape is similarly being built on a foundation of bias.
    – Can history be biased? How do we codify bias and build legal models that perpetuate discrimination in policy?
    – It is important to understand what a model outputs and what inputs are considered in the overall assessment. Algorithms like COMPAS, used in the criminal justice system, consider variables such as education, which is indirectly classist, as education is a proxy for wealth (p. 120).
    – The government uses surveillance technology disproportionately to target immigrant communities, and new systems and technologies are usually tested on immigrants first. This is yet another example of how those most affected are those who are already most marginalized.
    – Bias is present throughout all stages of policing: from the criminal trial (where judges use biased algorithms to validate their already biased perspectives, i.e., confirmation bias), to the recidivism assessment process (i.e., models like the aforementioned COMPAS), to cash bail, and many others.
    – Generative AI uses nonconsensual pornography in its training data. How can we mitigate such breaches of privacy?
    – Intellectual property and copyright law play an interesting role and work in the best interest of the AI industry, which is incentivized to keep the space unregulated.
    – Overrepresentation in a model’s training data is an indicator of discriminatory purposes. What can we do to hedge against such bias in an algorithm’s early phases?
    #AlgorithmicBias #PredatoryTech #TechnicallyBiasedPodcast #Gakovii