#15: Digging into explainable AI
Episode Synopsis
Finally, an AI model that can tell you why it gives the answers it does!
Angelo Dalli is building a new kind of AI that fixes the problems of current large language models. Existing models generate errors, a.k.a. “hallucinations,” but can’t tell you why.
His AI, built using neurosymbolic techniques, aims to eliminate these errors and, even better, explain why it makes the decisions it does.
Here he talks to me about the state of the art of current AIs and where we are going.
Sponsored by AI Top Tools: www.aitoptools.com