ConvLLaVA's Visual Compression, Efficient LLVM, Multilingual Aya 23, and AutoCoder's Code Mastery

28/05/2024 11 min Episode 36

Episode Synopsis

ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models

Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models

Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization

Aya 23: Open Weight Releases to Further Multilingual Progress

Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training

AutoCoder: Enhancing Code Large Language Model with AIEV-Instruct
