Listen ""SolidGoldMagikarp (plus, prompt generation)""
Episode Synopsis
---
client: lesswrong
project_id: curated
feed_id: ai, ai_safety, ai_safety__technical
narrator: pw
qa: km
---

TL;DR

- Anomalous tokens: a mysterious failure mode for GPT (which reliably insulted Matthew)
  - We have found a set of anomalous tokens which result in a previously undocumented failure mode for GPT-2 and GPT-3 models. (The 'instruct' models “are particularly deranged” in this context, as janus has observed.)
  - Many of these tokens reliably break determinism in the OpenAI GPT-3 playground at temperature 0 (which theoretically shouldn't happen).
- Prompt generation: a new interpretability method for language models (which reliably finds prompts that result in a target completion). This is good for:
  - eliciting knowledge
  - generating adversarial inputs
  - automating prompt search (e.g. for fine-tuning)

In this post, we'll introduce the prototype of a new model-agnostic interpretability method for language models which reliably generates adversarial prompts that result in a target completion. We'll also demonstrate a previously undocumented failure mode for GPT-2 and GPT-3 language models, which results in bizarre completions (in some cases explicitly contrary to the purpose of the model), and present the results of our investigation into this phenomenon. Further detail can be found in a follow-up post.

Original article:
https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation

Narrated for LessWrong by TYPE III AUDIO.

Share feedback on this narration.
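The synopsis mentions that some anomalous tokens break determinism in the GPT-3 playground at temperature 0. The original post does not include code; as a rough, hypothetical illustration of how one might check repeatability today, the sketch below requests the same temperature-0 completion several times and counts distinct outputs. The model name ("davinci-002"), prompt, and token budget are placeholder assumptions, and the GPT-3 models discussed in the post may no longer be available.

```python
# Hypothetical sketch (not from the original post): check whether temperature-0
# completions are repeatable for a prompt containing an anomalous token.
# Uses the OpenAI Python client (v1.x); model availability may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = 'Please repeat the string " SolidGoldMagikarp" back to me.'

completions = set()
for _ in range(5):
    resp = client.completions.create(
        model="davinci-002",   # assumed stand-in for the GPT-3 models in the post
        prompt=prompt,
        temperature=0,         # should, in theory, make the output deterministic
        max_tokens=20,
    )
    completions.add(resp.choices[0].text)

# More than one distinct completion at temperature 0 suggests broken determinism.
print(f"{len(completions)} distinct completion(s):", completions)
```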
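The prompt generation method itself is described in the linked article, and its implementation is not reproduced here. As a loose sketch of the general idea under stated assumptions (gradient-based search for a prompt that makes a chosen target completion likely, run against GPT-2 via Hugging Face transformers; the target string, prompt length, learning rate, and step count are all arbitrary illustrative choices, not the authors' settings), one might do something like:

```python
# Minimal, hypothetical sketch of gradient-based prompt search against GPT-2,
# in the spirit of the method described above (not the authors' code).
# It optimises a continuous "soft prompt" so that a target completion becomes
# likely, then snaps each optimised vector to its nearest real token embedding.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the prompt vectors are trained

target = " deranged"  # illustrative target completion
target_ids = tokenizer(target, return_tensors="pt").input_ids  # (1, T)

prompt_len = 5
embed = model.get_input_embeddings()  # token embedding matrix, shape (V, d)
init_ids = torch.randint(0, embed.num_embeddings, (prompt_len,))
soft_prompt = torch.nn.Parameter(embed.weight[init_ids].detach().clone())
optimizer = torch.optim.Adam([soft_prompt], lr=1e-2)

target_embeds = embed(target_ids)  # (1, T, d), fixed

for step in range(200):
    optimizer.zero_grad()
    # Concatenate the trainable prompt with the fixed target embeddings.
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), target_embeds], dim=1)
    logits = model(inputs_embeds=inputs_embeds).logits
    # Each target token is predicted from the position just before it.
    T = target_ids.shape[1]
    pred = logits[:, prompt_len - 1 : prompt_len - 1 + T, :]
    loss = torch.nn.functional.cross_entropy(
        pred.reshape(-1, pred.size(-1)), target_ids.reshape(-1)
    )
    loss.backward()
    optimizer.step()

# Project each optimised vector onto its closest real token to get a prompt.
with torch.no_grad():
    dists = torch.cdist(soft_prompt, embed.weight)  # (prompt_len, V)
    hard_ids = dists.argmin(dim=-1)
print("candidate prompt:", tokenizer.decode(hard_ids))
```

Snapping the optimised vectors to their nearest token embeddings is only a crude projection step; the original post and its follow-up discuss the actual search procedure and results in detail.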
More episodes of the podcast TYPE III AUDIO (All episodes)
Part 10: How to make your career plan
14/06/2023
Part 8: How to find the right career for you
14/06/2023
Part 6: Which jobs help people the most?
14/06/2023