AI Morality
Episode Synopsis
This episode explores whether AI can embody moral values, challenging the neutrality thesis that argues technology is value-neutral. Focusing on artificial agents that make autonomous decisions, the episode discusses two methods for embedding moral values into AI: artificial conscience (training AI to evaluate morality) and ethical prompting (guiding AI with explicit ethical instructions). Using the MACHIAVELLI benchmark, the episode presents evidence showing that AI agents equipped with moral models make more ethical decisions. The episode concludes that AI can embody moral values, with important implications for AI development and use.
https://arxiv.org/pdf/2408.12250
More episodes of the podcast Agentic Horizons
AI Storytelling with DOME
19/02/2025
Intelligence Explosion Microeconomics
18/02/2025
Theory of Mind in LLMs
15/02/2025
Designing AI Personalities
14/02/2025
LLMs Know More Than They Show
12/02/2025
AI Self-Evolution Using Long Term Memory
10/02/2025