queue/high-chapter-14-fiction-as-moral-training.md

Add fiction-as-moral-training argument to Chapter 14

Type: queue
Status: pending

What

The research contains an original argument: fiction functions as humanity's moral training data. Stripping fiction from AI training corpora creates capability without wisdom — a value-blind superintelligence. This is completely unused in the manuscript.

Why

Chapter 14 discusses AI risk but lacks a concrete mechanism for HOW value incompleteness emerges. The fiction-as-training-data argument provides exactly this: it explains why AI systems trained primarily on factual/technical content might develop sophisticated capability while lacking the moral intuitions that fiction develops in humans.

How

Add a section in Chapter 14 after the discussion of AI alignment challenges. Argue that narrative fiction teaches humans to simulate others' experiences, develop empathy, and navigate moral complexity — and that removing this from training data removes the substrate for moral reasoning. This connects to the book's broader argument about consciousness and culture.

Impact

Provides an original, provocative argument that differentiates this book from standard AI safety discourse. Connects Chapter 14 to the book's themes about consciousness and cultural development.