Capabilities of LLMs 🤯
Large Language Model (LLM) capabilities have reached new heights and are nothing short of mind-blowing! However, with so many advancements happening at once, it can be overwhelming to keep up with the latest developments. To help us navigate this complex terrain, we've invited Raj – one of the most adept at explaining State-of-the-Art (SOTA) AI in practical terms – to join us on the podcast.
Raj discusses several intriguing topics such as in-context learning, reasoning, LLM options, and related tooling. But that's not all! We also hear from Raj about the rapidly growing data science and AI community on TikTok.
Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!
Sponsors:
- Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com
- Fly.io – The home of Changelog.com – Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
Featuring:
- Rajiv Shah – Website, GitHub, LinkedIn, X
- Chris Benson – Website, GitHub, LinkedIn, X
- Daniel Whitenack – Website, GitHub, X
Show Notes:
- Solving AI Tasks with ChatGPT and its Friends in HuggingFace | GitHub
- Generative Agents: Interactive Simulacra of Human Behavior
- Wolfram ChatGPT
- Comparing LLMs
- LangChain
- Learn about LLMs:
  - Learning Prompting
  - Getting Started with Transformers
  - Training your own LLM Models:
    - Dolly blog post
    - Illustrating Reinforcement Learning from Human Feedback
Something missing or broken? PRs welcome!
Support this podcast
Creators and Guests
