
Making artificial intelligence practical, productive, and accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, etc.). The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!
Similar Podcasts

Ship It! DevOps, Infra, Cloud Native
A show about getting your best ideas into the world and seeing what happens. We talk about code, ops, infrastructure, and the people that make it happen. Gerhard Lazu and friends explore all things DevOps, infra, and running apps in production. Whether you’re cloud native, Kubernetes curious, a pro SRE, or just operating a VPS… you’ll love coming along for the ride. Ship It honors the makers, the shippers, and the visionaries that see it through. Some people search for ShipIt or ShipItFM and can’t find the show, so now the strings ShipIt and ShipItFM are in our description too.

The Changelog: Software Development, Open Source
Conversations with the hackers, leaders, and innovators of the software world. Hosts Adam Stacoviak and Jerod Santo face their imposter syndrome so you don’t have to. Expect in-depth interviews with the best and brightest in software engineering, open source, and leadership. This is a polyglot podcast. All programming languages, platforms, and communities are welcome. Open source moves fast. Keep up.

Founders Talk: Startups, CEOs, Leadership
In-depth, one-on-one conversations with founders, CEOs, and makers. The journey, lessons learned, and the struggles. Let’s do this! Host Adam Stacoviak dives deep into the trials, tribulations, successes, and failures of industry leading entrepreneurs, leaders, innovators, and visionaries.
AI trailblazers putting people first
According to Solana Larsen: “Too often, it feels like we have lost control of the internet to the interests of Big Tech, Big Data — and now Big AI.” The latest season of Mozilla’s IRL podcast (edited by Solana) features stories that highlight trailblazers reclaiming power over AI to put people first. We discuss some of those stories along with the issues they surface.
Government regulation of AI has arrived
On Monday, October 30, 2023, the U.S. White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Two days later, a policy paper was issued by the U.K. government entitled The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. It was signed by 29 countries, including the United States and China, the global leaders in AI research. In this Fully Connected episode, Daniel and Chris parse the details and highlight key takeaways from these documents, especially the extensive and detailed executive order, which has the force of law in the United States.
Self-hosting & scaling models
We’re excited to have Tuhin join us on the show once again to talk about self-hosting open access models. Tuhin’s company Baseten specializes in model deployment and monitoring at any scale, and it was a privilege to talk with him about the trends he is seeing in both tooling and usage of open access models. We were able to touch on the common use cases for integrating self-hosted models and how the boom in generative AI has influenced that ecosystem.
Deep learning in Rust with Burn 🔥
It seems like everyone is interested in Rust these days. Even the most popular Python linter, Ruff, isn’t written in Python! It’s written in Rust. But what is the state of training or running inference with deep learning models in Rust? In this episode, we are joined by Nathaniel Simard, the creator of Burn. We discuss Rust in general, the need to have support for AI in multiple languages, and the current state of doing “AI things” in Rust.
AI's impact on developers
Chris & Daniel are out this week, so we’re bringing you a panel discussion from All Things Open 2023 moderated by Jerod Santo (Practical AI producer and co-host of The Changelog) and featuring keynoters Emily Freeman and James Q Quick.
Generative models: exploration to deployment
What is the model lifecycle like for experimenting with and then deploying generative AI models? Although there are some similarities, this lifecycle differs somewhat from previous data science practices in that models are typically not trained from scratch (or even fine-tuned). Chris and Daniel give a high-level overview of this lifecycle and discuss model optimization and serving.
Automate all the UIs!
Dominik Klotz from askui joins Daniel and Chris to discuss the automation of UI, and how AI empowers them to automate any use case on any operating system. Along the way, the trio explore various approaches and the integration of generative AI, large language models, and computer vision.
Fine-tuning vs RAG
In this episode we welcome back our good friend Demetrios from the MLOps Community to discuss fine-tuning vs. retrieval augmented generation. Along the way, we also chat about OpenAI Enterprise, results from the MLOps Community LLM survey, and the orchestration and evaluation of generative AI workloads.
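To make the retrieval side of that comparison concrete, here is a minimal retrieval-augmented generation sketch (ours, not something covered on the show). The `embed` and `generate` parameters are hypothetical stand-ins for whatever embedding model and LLM you use; the point is that RAG grounds the prompt in retrieved context rather than baking new knowledge into the model weights the way fine-tuning does.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query, docs, embed, k=3):
    # Embed the query, score every document, and return the k best matches.
    q_vec = embed(query)
    scored = sorted(((cosine_sim(q_vec, embed(d)), d) for d in docs), reverse=True)
    return [d for _, d in scored[:k]]

def answer_with_rag(query, docs, embed, generate):
    # Retrieval-augmented generation: stuff retrieved context into the prompt,
    # then ask the model to answer using only that context.
    context = "\n\n".join(retrieve(query, docs, embed))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)
```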
Automating code optimization with LLMs
You might have heard a lot about code generation tools using AI, but could LLMs and generative AI make our existing code better? In this episode, we sit down with Mike from TurinTech to hear about practical code optimizations using AI “translation” of slow code into fast code. We learn about their process for accomplishing this, along with the impressive results of running automated code optimization on existing open source projects.
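As a toy illustration of what a slow-to-fast “translation” can look like (our example, not TurinTech’s method), consider the same pairwise-distance computation written with plain Python loops and then rewritten with NumPy broadcasting; the optimized version returns the same result dramatically faster on large inputs.

```python
import numpy as np

def pairwise_distances_slow(points):
    # Naive O(n^2 * d) Python loops: the kind of hotspot an optimizer targets.
    n = len(points)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            out[i][j] = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
    return out

def pairwise_distances_fast(points):
    # Equivalent result, vectorized with NumPy broadcasting.
    p = np.asarray(points, dtype=float)
    diff = p[:, None, :] - p[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))
```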
The new AI app stack
Recently a16z released a diagram showing the “Emerging Architectures for LLM Applications.” In this episode, we expand on things covered in that diagram to a more general mental model for the new AI app stack. We cover a variety of things from model “middleware” for caching and control to app orchestration.
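To give one tiny example of the “middleware” layer in that stack (a sketch of ours, not part of the a16z diagram), response caching can start as simply as keying completions by a hash of the prompt. The `call_model` callable below is a hypothetical stand-in for whatever hosted or self-hosted model sits underneath; production caching layers add semantic (embedding-based) matching, TTLs, and eviction.

```python
import hashlib

class PromptCache:
    """Toy LLM middleware: return a cached completion when the exact prompt
    has been seen before, otherwise call the underlying model once."""

    def __init__(self, call_model):
        self.call_model = call_model  # any callable: prompt -> completion text
        self.store = {}

    def complete(self, prompt):
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        if key not in self.store:
            self.store[key] = self.call_model(prompt)
        return self.store[key]
```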
Blueprint for an AI Bill of Rights
In this Fully Connected episode, Daniel and Chris kick it off by noting that Stability AI released SDXL 1.0, their latest image generation model! They discuss its virtues, and then dive into how the United States, European Union, and other entities are approaching governance of AI through new laws and legal frameworks. In particular, they review the White House’s approach, noting the potential for unexpected consequences.
Vector databases (beyond the hype)
There’s so much talk (and hype) these days about vector databases. We thought it would be timely and practical to have someone on the show that has been hands on with the various options and actually tried to build applications leveraging vector search. Prashanth Rao is a real practitioner that has spent and huge amount of time exploring the expanding set of vector database offerings. After introducing vector database and giving us a mental model of how they fit in with other datastores, Prashanth digs into the trade offs as related to indices, hosting options, embedding vs. query optimization, and more.
There's a new Llama in town
It was an amazing week in AI news. Among other things, there is a new NeRF and a new Llama in town! Zip-NeRF can create some amazing 3D scenes based on 2D images, and Llama 2 from Meta promises to change the LLM landscape. Chris and Daniel dive into these and compare some of the recently released OpenAI functionality to Anthropic’s Claude 2.
Legal consequences of generated content
As a technologist, coder, and lawyer, few people are better equipped to discuss the legal and practical consequences of generative AI than Damien Riehl. He demonstrated this a couple years ago by generating and copyrighting every possible musical melody. Damien joins us to answer our many questions about generated content, copyright, dataset licensing/usage, and the future of knowledge work.
A developer's toolkit for SOTA AI
Chris sat down with Varun Mohan and Anshul Ramachandran, CEO/co-founder and Lead of Enterprise and Partnership at Codeium, respectively. They discussed how to streamline and enable modern development in generative AI and large language models (LLMs). Their new tool, Codeium, was born out of the insights they gleaned from their work in GPU software and solutions development, particularly with respect to generative AI, large language models, and supporting infrastructure. Codeium is a free AI-powered toolkit for developers, with in-house models and infrastructure, not just another API wrapper.