A study published today in PNAS Nexus by Alberto Hernández-Espinosa and colleagues argues that any LLM complex enough to exhibit general intelligence is also computationally irreducible. Invoking Gödel's incompleteness theorem and the Halting Problem, the authors contend that forced alignment is provably impossible. As an alternative, they propose "managed misalignment": deploying multiple AI agents with diverse cognitive styles and partially overlapping goals so that they check one another, a structure they call "artificial agentic neurodivergence." In the authors' tests, ensembles of competing open-source models exhibited greater perspective diversity than proprietary systems did.
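The multi-agent cross-checking idea can be illustrated with a minimal sketch. Everything below is hypothetical: the agents are stand-in functions rather than real models, and `managed_misalignment` is an illustrative name, not the paper's implementation. The point it shows is structural, that a proposal is accepted only when agents with partially overlapping goals endorse it, and disagreement is surfaced rather than forced into consensus.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: each "agent" stands in for a model with its own
# cognitive style; a real deployment would wrap actual LLM calls.
@dataclass
class Agent:
    name: str
    answer: Callable[[str], str]           # proposes an answer to a prompt
    critique: Callable[[str, str], bool]   # endorses or rejects a peer's answer

def managed_misalignment(agents: list[Agent], prompt: str):
    """Accept a proposal only if every *other* agent endorses it;
    otherwise report the disagreement instead of forcing consensus."""
    for proposer in agents:
        proposal = proposer.answer(prompt)
        peers = [a for a in agents if a is not proposer]
        if all(p.critique(prompt, proposal) for p in peers):
            return ("accepted", proposer.name, proposal)
    return ("disputed", None, None)

# Toy agents with partially overlapping goals: both want a numeric answer,
# but only one additionally requires it to be even.
wants_number = Agent("A", lambda q: "4",
                     lambda q, ans: ans.isdigit())
wants_even = Agent("B", lambda q: "7",
                   lambda q, ans: ans.isdigit() and int(ans) % 2 == 0)

print(managed_misalignment([wants_number, wants_even], "pick a value"))
# → ('accepted', 'A', '4')
```

Because A's proposal "4" satisfies B's stricter criterion, it is accepted; had every proposal been rejected by some peer, the loop would have returned a "disputed" result, leaving the conflict visible rather than resolving it by fiat.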