All Episodes
Open source, on-disk vector search with LanceDB
Prashanth Rao mentioned LanceDB as a standout amongst the many vector DB options in episode #234. Now, Chang She (co-founder and CEO of LanceDB) joins us to talk thro...

The state of open source AI
The new open source AI book from PremAI starts with “As a data scientist/ML engineer/developer with a 9 to 5 job, it’s difficult to keep track of all the innovations.”...

Suspicion machines ⚙️
In this enlightening episode, we delve deeper than the usual buzz surrounding AI’s perils, focusing instead on the tangible problems emerging from the use of machine l...

The OpenAI debacle (a retrospective)
Daniel & Chris conduct a retrospective analysis of the recent OpenAI debacle in which CEO Sam Altman was sacked by the OpenAI board, only to return days later with a n...

Generating product imagery at Shopify
Shopify recently released a Hugging Face space demonstrating very impressive results for replacing background scenes in product imagery. In this episode, we hear the b...

AI trailblazers putting people first
According to Solana Larsen: “Too often, it feels like we have lost control of the internet to the interests of Big Tech, Big Data — and now Big AI.” In the latest seas...

Government regulation of AI has arrived
On Monday, October 30, 2023, the U.S. White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Two d...

Self-hosting & scaling models
We’re excited to have Tuhin join us on the show once again to talk about self-hosting open access models. Tuhin’s company Baseten specializes in model deployment and m...

Deep learning in Rust with Burn 🔥
It seems like everyone is interested in Rust these days. Even the most popular Python linter, Ruff, isn’t written in Python! It’s written in Rust. But what is the stat...

AI's impact on developers
Chris & Daniel are out this week, so we’re bringing you a panel discussion from All Things Open 2023 moderated by Jerod Santo (Practical AI producer and co-host of The...

Generative models: exploration to deployment
What is the model lifecycle like for experimenting with and then deploying generative AI models? Although there are some similarities, this lifecycle differs somewhat ...

Automate all the UIs!
Dominik Klotz from askui joins Daniel and Chris to discuss UI automation, and how AI empowers them to automate any use case on any operating system. Along the w...

Fine-tuning vs RAG
In this episode we welcome back our good friend Demetrios from the MLOps Community to discuss fine-tuning vs. retrieval augmented generation. Along the way, we also ch...

Automating code optimization with LLMs
You might have heard a lot about code generation tools using AI, but could LLMs and generative AI make our existing code better? In this episode, we sit down with Mike...

The new AI app stack
Recently a16z released a diagram showing the “Emerging Architectures for LLM Applications.” In this episode, we expand on things covered in that diagram to a more gene...

Blueprint for an AI Bill of Rights
In this Fully Connected episode, Daniel and Chris kick it off by noting that Stability AI released their SDXL 1.0 image generation model! They discuss its virtues, and then dive into a d...

Vector databases (beyond the hype)
There’s so much talk (and hype) these days about vector databases. We thought it would be timely and practical to have someone on the show that has been hands on with ...

There's a new Llama in town
It was an amazing week in AI news. Among other things, there is a new NeRF and a new Llama in town!!! Zip-NeRF can create some amazing 3D scenes based on 2D images, an...

Legal consequences of generated content
Few people are better equipped to discuss the legal and practical consequences of generative AI than Damien Riehl, a technologist, coder, and lawyer. He demonstrate...

A developer's toolkit for SOTA AI
Chris sat down with Varun Mohan and Anshul Ramachandran, CEO / Cofounder and Lead of Enterprise and Partnership at Codeium, respectively. They discussed how to streaml...

Cambrian explosion of generative models
In this Fully Connected episode, Daniel and Chris explore recent highlights from the current model proliferation wave sweeping the world - including Stable Diffusion X...

Automated cartography using AI
Your feed might be dominated by LLMs these days, but there are some amazing things happening in computer vision that you shouldn’t ignore! In this episode, we bring yo...

From ML to AI to Generative AI
Chris and Daniel take a step back to look at how generative AI fits into the wider landscape of ML/AI and data science. They talk through the differences in how one ap...

AI trends: a Latent Space crossover
Daniel had the chance to sit down with @swyx and Alessio from the Latent Space pod in SF to talk about current AI trends and to highlight some key learnings from past ...

Accidentally building SOTA AI
Lately.AI has been working for years on content generation systems that capture your unique “voice” and are tailored to your audience. At first, they didn’t kno...

Controlled and compliant AI applications
You can’t build robust systems with inconsistent, unstructured text output from LLMs. Moreover, LLM integrations scare corporate lawyers, finance departments, and secu...

Data augmentation with LlamaIndex
Large Language Models (LLMs) continue to amaze us with their capabilities. However, using LLMs in production AI applications requires the integration of p...

Creating instruction tuned models
At the recent ODSC East conference, Daniel got a chance to sit down with Erin Mikail Staples to discuss the process of gathering human feedback and creating an instruc...

The last mile of AI app development
There are a ton of problems around building LLM apps in production, especially in the last mile of development. Travis Fischer, builder of open AI projects like @ChatGPTBot, joi...

Large models on CPUs
Model sizes are crazy these days with billions and billions of parameters. As Mark Kurtz explains in this episode, this makes inference slow and expensive despite the ...
