Listen "This is AGI (S1E4): Hallucinations"
Episode Synopsis
Hallucinating LLMs are a critical step towards artificial general intelligence (AGI). We should not try to fix them; instead, we should build more complex agents that channel the LLMs' runaway creativity into self-perpetuating cycles of knowledge discovery.
'This Is AGI' is a podcast about the path to artificial general intelligence. Listen every Monday morning on your favourite podcast platform.
More episodes of the podcast This is AGI
This Is AGI (S1E8): World Models Wtf?
27/10/2025
This Is AGI (S1E7): Latent Spaces Wtf?
19/10/2025
This Is AGI (S1E6): Succession
13/10/2025
This Is AGI (S1E5): Can AGI cure cancer?
06/10/2025
This is AGI (S1E4 Teaser): Hallucinations
26/09/2025
Do LLMs Learn?
22/09/2025
Define 'artificial general intelligence'
16/09/2025
Is AI rational?
16/09/2025