Listen "Normalization in type theory: where it is needed, and where not"
Episode Synopsis
Normalization (every term reaches a normal form via some reduction sequence) is needed in type theory essentially because of the Curry-Howard isomorphism: under it, diverging programs become unsound proofs. Traditionally, type theorists have also wanted normalization, or even termination (every term reaches a normal form no matter which reduction sequence is explored in a nondeterministic operational semantics), for conversion checking. Conversion checking is the process of confirming that types are equivalent during type checking, which, because of dependent types, can require checking program equivalence. That equivalence is usually restricted to beta-equivalence (where beta-reduction substitutes the argument for the function's input variable at an application), because richer notions of program equivalence are generally undecidable. I have a mini-rant in this episode explaining why this usual requirement of normalization for conversion checking is not sensible.
I also note that you can find the episodes of the podcast organized by chapter on my web page.
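To make the conversion-checking point concrete, here is a small illustration in Lean 4 (my own sketch, not taken from the episode). The first example type-checks only because the checker reduces 2 + 2 to 4 while comparing the two types; the second is accepted because beta-reduction makes both sides of the equation identical.

-- Conversion checking: the checker must see that Fin (2 + 2) and Fin 4
-- are the same type, which it does by reducing 2 + 2 to 4.
example (x : Fin (2 + 2)) : Fin 4 := x

-- Beta-equivalence: applying the function substitutes 3 for n, so both
-- sides reduce to 4 and rfl is accepted.
example : (fun n : Nat => n + 1) 3 = 4 := rfl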
More episodes of the podcast Iowa Type Theory Commute
Measure Functions and Termination of STLC
14/11/2025
Schematic Affine Recursion, Oh My!
22/08/2025
The Stunner: Linear System T is Diverging!
19/08/2025
Terminating Computation First?
01/08/2025
A Measure-Based Proof of Finite Developments
16/04/2025
Nominal Isabelle/HOL
31/01/2025
The Locally Nameless Representation
02/01/2025