Podcast: Meryem Arik on LLM Deployment, State-of-the-art RAG Apps, and Inference Architecture Stack

In this podcast, Meryem Arik, Co-founder and CEO at TitanML, discusses innovations in Generative AI and Large Language Model (LLM) technologies, including the current state of large language models, LLM deployment, state-of-the-art Retrieval Augmented Generation (RAG) apps, and the inference architecture stack for LLM applications.

By Meryem Arik