Listen "#78- RAFT: Why just to use RAG if you can also fine tune?"
Episode Synopsis
Hello! In this episode I talk about Retrieval Augmented Fine Tuning (RAFT), a paper that proposes a new technique combining domain-specific fine-tuning with RAG to improve the retrieval capabilities of LLMs.
In the episode I also talk about another paper called RAFT, this time Reward rAnked FineTuning, which proposes a technique to perform RLHF without the convergence problems of reinforcement learning.
Retrieval Augmented Fine Tuning: https://arxiv.org/abs/2403.10131v1
Reward rAnked FineTuning: https://arxiv.org/pdf/2304.06767.pdf
Instagram of the podcast: https://www.instagram.com/podcast.lifewithai
Linkedin of the podcast: https://www.linkedin.com/company/life-with-ai
More episodes of the podcast Life with AI
#99- GraphRAG.
05/12/2024
#98- On-device AI with SmolLM.
07/11/2024
#96- Maritaca AI, the Brazilian LLM company.
24/10/2024
#95- Why does Chain of Thought work?
26/09/2024
#94- OpenAI o1
19/09/2024
#93- Different types of AI.
12/09/2024
#92- Llama3 benchmarks, vision and speech.
22/08/2024
#91- Llama 3 training.
15/08/2024
#90- Llama 3 paper overview.
25/07/2024