OpenAI Drama Continues 😈 // Shallow Networks as Transformer Alternative 🤔 // SelfEval for Generative Model Evaluation ✅

21/11/2023 13 min


Episode Synopsis

We cover the ongoing drama at OpenAI, with Sam Altman trying to return as CEO and staff threatening to quit unless the board resigns. We also discuss a paper exploring shallow feed-forward neural networks as an alternative to the attention layers in the Transformer, and a paper proposing SelfEval, a method that leverages the discriminative nature of generative models for evaluation.
Contact: [email protected]
Timestamps:
00:34 Introduction
01:39 Sam Altman is still trying to return as OpenAI CEO
02:52 OpenAI Staff Threaten to Quit Unless Board Resigns
04:34 Large Language Models and Lost in the Middle
06:09 Fake sponsor
07:33 LLMs cannot find reasoning errors, but can correct them!
09:00 Rethinking Attention: Exploring Shallow Feed-Forward Neural Networks as an Alternative to Attention Layers in Transformers
10:41 SelfEval: Leveraging the discriminative nature of generative models for evaluation
12:28 Outro
