BLEU: Automatic Machine Translation Evaluation

10/09/2025 20 min

Listen "BLEU: Automatic Machine Translation Evaluation"

Episode Synopsis

This July 2002 paper introduced BLEU (Bilingual Evaluation Understudy), an automatic, inexpensive method for evaluating machine translation (MT) quality. It highlights the limitations of human evaluation, namely its high cost and the time it consumes, and proposes BLEU as a quick, language-independent alternative that correlates strongly with human judgment. The core idea is to measure the "closeness" of a machine translation to one or more human reference translations using a modified n-gram precision metric combined with a brevity penalty. The paper details the mathematical formulation of the BLEU score, applies it to both human and machine translations, and demonstrates its correlation with human assessment across several languages.

Source: https://dl.acm.org/doi/10.3115/1073083.1073135
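
As a rough illustration of how the score comes together, the sketch below (in Python, with illustrative names that are not from the paper's own code) computes clipped n-gram precisions for a candidate against its references, takes their geometric mean, and applies a brevity penalty of exp(1 - r/c) when the candidate length c does not exceed the effective reference length r. Note that the paper accumulates these statistics over an entire test corpus; a single sentence pair is used here only to keep the example small, which is why a shorter n-gram order is chosen.

```python
# Minimal sketch of the BLEU computation described in the paper:
# modified (clipped) n-gram precision combined with a brevity penalty.
# Names and the toy sentences are illustrative, not from the paper.
from collections import Counter
import math


def ngrams(tokens, n):
    """Return a Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def modified_precision(candidate, references, n):
    """Clip each candidate n-gram count by its maximum count in any single reference."""
    cand_counts = ngrams(candidate, n)
    max_ref_counts = Counter()
    for ref in references:
        for gram, count in ngrams(ref, n).items():
            max_ref_counts[gram] = max(max_ref_counts[gram], count)
    clipped = sum(min(count, max_ref_counts[gram]) for gram, count in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total > 0 else 0.0


def bleu(candidate, references, max_n=4):
    """Geometric mean of modified n-gram precisions, scaled by a brevity penalty."""
    precisions = [modified_precision(candidate, references, n) for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0  # geometric mean collapses if any precision is zero
    log_avg = sum(math.log(p) for p in precisions) / max_n
    c = len(candidate)
    # Effective reference length: the reference length closest to the candidate's.
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    brevity_penalty = 1.0 if c > r else math.exp(1 - r / c)
    return brevity_penalty * math.exp(log_avg)


# Toy usage: unigram precision is 1.0, bigram precision is 0.6,
# and the brevity penalty is 1, giving sqrt(0.6) ~= 0.7746.
cand = "the cat is on the mat".split()
refs = ["there is a cat on the mat".split(), "the cat sits on the mat".split()]
print(round(bleu(cand, refs, max_n=2), 4))
```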