#77- Ring Attention and 1M context window, is RAG dead?
Episode Synopsis
Hello guys, in this episode I explain how we can scale the context window of an LLM to more than 1M tokens using Ring Attention. I also discuss whether RAG is dead, given these advances in context window size.
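To make the idea concrete, here is a minimal single-machine sketch of Ring Attention (from the Ring Attention paper linked below): the sequence is split into blocks, each "host" keeps one query block, and key/value blocks rotate around a ring so every host eventually attends over the full sequence while only holding one KV block at a time. The block names and the NumPy simulation are illustrative assumptions, not the paper's actual distributed implementation.

```python
import numpy as np

def ring_attention(q_blocks, k_blocks, v_blocks):
    """Simulate Ring Attention on one machine.

    Each 'host' i holds one query block; key/value blocks are passed
    around a ring so every host eventually sees all of them, keeping
    only online-softmax accumulators in memory (as in blockwise
    parallel / flash attention).
    """
    n = len(q_blocks)
    outputs = []
    for i in range(n):
        q = q_blocks[i]
        m = np.full(q.shape[0], -np.inf)   # running max of attention logits
        denom = np.zeros(q.shape[0])       # running softmax denominator
        acc = np.zeros_like(q)             # running weighted sum of values
        for step in range(n):
            j = (i + step) % n             # KV block arriving on the ring
            scores = q @ k_blocks[j].T / np.sqrt(q.shape[1])
            m_new = np.maximum(m, scores.max(axis=1))
            scale = np.exp(m - m_new)      # rescale old accumulators
            p = np.exp(scores - m_new[:, None])
            denom = denom * scale + p.sum(axis=1)
            acc = acc * scale[:, None] + p @ v_blocks[j]
            m = m_new
        outputs.append(acc / denom[:, None])
    return np.concatenate(outputs)

# Sanity check against ordinary full attention on random data.
rng = np.random.default_rng(0)
d, block, n_blocks = 8, 4, 3
q = rng.normal(size=(block * n_blocks, d))
k = rng.normal(size=(block * n_blocks, d))
v = rng.normal(size=(block * n_blocks, d))
scores = q @ k.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
full = (weights / weights.sum(axis=1, keepdims=True)) @ v
ring = ring_attention(np.split(q, n_blocks), np.split(k, n_blocks), np.split(v, n_blocks))
print(np.allclose(full, ring))
```

Because each host only ever stores one KV block plus small accumulators, memory per host stays constant as the total sequence grows, which is what makes 1M+ token contexts feasible.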
Paper Lost in the Middle: https://arxiv.org/pdf/2307.03172.pdf
Gemini technical report: https://storage.googleapis.com/deepmind-media/gemini/gemini_v1_5_report.pdf
Paper Ring Attention: https://arxiv.org/pdf/2310.01889.pdf
Instagram of the podcast: https://www.instagram.com/podcast.lifewithai
Linkedin of the podcast: https://www.linkedin.com/company/life-with-ai
More episodes of the podcast Life with AI
#99- GraphRAG.
05/12/2024
#98- On-device AI with SmolLM.
07/11/2024
#96- Maritaca AI, the Brazilian LLM company.
24/10/2024
#95- Why Chain of Thought works?
26/09/2024
#94- OpenAI o1
19/09/2024
#93- Different types of AI.
12/09/2024
#92- Llama3 benchmarks, vision and speech.
22/08/2024
#91- Llama 3 training.
15/08/2024
#90- Llama 3 paper overview.
25/07/2024