Lutz & Jasper powered by Cherry Ventures

Breaking down the latest developments in AI with two experts — Jasper Masemann, investment partner at Cherry Ventures, and Lutz Finger, a visiting senior lecturer at Cornell University's SC Johnson College of Business and President of Product and Development at Marpai, Inc.

Podcast episodes

  • Season 2

  • Did our predictions hold true?

    We are back! In this episode, we revisit our 2023 predictions. What happened? What did not? We dive into topics like the state of MLOps and the evolution of AI hardware. Tune in, and welcome back for the 2024 season of Lutz & Jasper!

  • Season 1

  • The synergy between LLMs and knowledge graphs

    Sounding intelligent doesn’t equate to actual intelligence, and that is a core challenge with generative AI. While LLMs might pass the Turing test, they don’t always possess factual knowledge. In this episode, we delve deeper into how startups and enterprises can leverage generative AI effectively in our conversation with Mike Dilinger, a leading expert in knowledge graphs (KGs). Think of a KG as a repository of facts, something LLMs often lack. Mike and Lutz explore how KGs are crucial for building successful business strategies using generative language models.

  • Large Language Models: one interface to rule them all?

    Learn how LLMs can be used as a unified UX interface in this deep dive with Tariq Rauf, founder of Qatalog, as we jump into the product and technical challenges of using generative AI on top of enterprise products.

  • You asked, we answered

    We're always thrilled to see your burning questions, and this time we decided to answer them on air. Curious how AI use cases differ for frontline workers versus high-skilled knowledge workers? Or wondering about the VC hype and the massive investment flowing into AI? We've got it covered.

  • Are we one step closer to using LLMs in practice?

    LLMs + vector search = RAG. In this episode, we introduce you to Retrieval-Augmented Generation (RAG) and how it might allow us to implement LLMs in our day-to-day work better and more easily. We believe current LLMs are best used as an interface, and if we want to make enterprise search useful, RAG can be the solution.
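For the curious, the "LLMs + vector search" loop from the last episode can be sketched in a few lines. This is a toy illustration, not how any particular product works: the bag-of-words embedding stands in for a real embedding model, and the resulting prompt would normally be sent to an LLM.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy embedding: a bag-of-words term-frequency vector.
    # A real RAG system would use a learned embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Vector search: rank documents by similarity to the query.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Augmentation: prepend the retrieved context to the question,
    # so the LLM answers from retrieved facts rather than memory.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our travel policy reimburses economy flights only.",
    "The cafeteria serves lunch from noon to two.",
]
prompt = build_prompt("What flights does the travel policy cover?", docs)
print(prompt)
```

The point of the pattern is visible even in this sketch: the model never has to "know" the answer, it only has to read the retrieved snippet, which is why RAG works well as an enterprise search interface.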