The Question Nobody Asks

The most neglected question in intellectual history


I. The Gap

Philosophers have debated ethics for three thousand years. What is good? What is just? What should we value?

They produced libraries. Utilitarian calculus. Kantian imperatives. Virtue ethics. Social contracts. Thought experiments about trolleys and veils.

In all that time, almost no one asked a simpler question:

What does physics require for complex systems to persist?

Not "what should we value?" That's a question about preference.

"What must be true, physically, for a goal-directed system to continue existing?" That's a question about engineering.

The first question seems unanswerable. The second answers both.


II. Why Nobody Asked

The standard story: David Hume argued in 1739 that you cannot derive an "ought" from an "is." The is-ought gap. No amount of factual description tells you what to value. Science describes; it cannot prescribe.

This was treated as a proof. Case closed. Ethics must come from somewhere other than physics—intuition, revelation, social construction, personal preference. Take your pick.

But this "guillotine" is a statement about a particular type of inference. The question we're asking is of a different type.

"You should value X" is normative. Physics can't tell you what to want.

"If you want to persist, X must be true" is conditional and descriptive. Physics can tell you this. Engineering constraint, not value judgment.

Bridges don't collapse because engineers have the wrong values. They collapse because they violate physics. The universe doesn't care about your preferences for gravity.

Hume's guillotine was an excuse to stop looking. Philosophy retreated into pure abstraction, freed from empirical test. "Values are subjective" became the thought-terminating cliché of educated discourse.

The question was always there. Nobody asked it.


III. The Question

"What does physics require for complex systems to persist?"

This question has an answer. The answer is derivable from first principles.


IV. The Derivation

Start with what we're asking about: telic systems.

A telic system is anything that maintains local order against entropy by processing information. A bacterium. A corporation. A civilization. A mind. A future artificial intelligence. They subordinate thermodynamics to computation—using information to pursue goals.

Contrast with a hurricane: it follows energy gradients passively, maximizing entropy. No goal, no self to preserve. A virus, by contrast, carries a specification (replicate) and hijacks its environment to execute that specification. The virus uses physics. The hurricane merely follows it.

Any telic system faces exactly four dilemmas:

1. The Thermodynamic Dilemma. Finite energy. Every joule allocated to maintenance cannot fund growth. Every joule allocated to growth cannot fund maintenance. The system must balance preservation of current structure against transformation toward future capability.

2. The Boundary Dilemma. Every system composed of parts must define where "self" ends. Optimize for the individual part and you get cancer: components replicating at the expense of the whole. Optimize purely for the collective and you get exploitation: the whole consuming the parts. The system must balance individual and collective optimization.

3. The Information Dilemma. Acting effectively requires a model of reality. Models can be cheap (cached historical data, tradition, instinct) or expensive (real-time sensing, experimentation). Cheap data may be obsolete. Expensive data has metabolic cost. The system must balance efficiency against accuracy.

4. The Control Dilemma. Coordination requires computation. That computation can be centralized (top-down, fast but brittle) or distributed (bottom-up, robust but slow). The system must balance precision against resilience.
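The first dilemma is concrete enough to simulate. Below is a deliberately crude sketch, not part of the framework itself: every constant (the decay rate, growth yield, and failure threshold) is an invented illustration, not a derived quantity. A system splits a fixed energy income between maintenance and growth, and we ask how long it persists.

```python
def persistence_horizon(growth_fraction, steps=200):
    """Toy model of the Thermodynamic Dilemma. Energy income is split
    between maintenance (offsetting entropic decay) and growth
    (compounding future capability). The 0.6 decay rate, 0.1 growth
    yield, and 0.05 failure threshold are invented for illustration."""
    structure = 1.0      # ordered structure; entropy taxes it each step
    capability = 1.0     # energy income scales with capability
    for t in range(steps):
        income = capability
        maintenance = (1.0 - growth_fraction) * income
        growth = growth_fraction * income
        structure += maintenance - 0.6 * structure
        capability += 0.1 * growth
        if structure <= 0.05:        # too disordered to function
            return t                 # persistence horizon reached
    return steps                     # survived the whole run

# Pure growth collapses within a few steps; pure maintenance survives
# but stagnates (capability never rises); a mix survives and compounds.
for f in (0.0, 0.5, 1.0):
    print(f"growth_fraction={f}: horizon={persistence_horizon(f)}")
```

The numbers are arbitrary, but the shape of the result is the dilemma's point: allocating everything to transformation violates the maintenance constraint and the system dissolves; allocating everything to preservation persists but never gains capability; only the balance does both.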

These dilemmas generate the problems any intelligence must solve.

The optimal solutions, the strategies that maximize persistence over deep time, are discoverable.

These aren't chosen from preference. They're derived from physics. A system that violates them doesn't persist long enough to matter.

This is the answer. One page. Everything else is application.


V. What This Changes

If the derivation holds:

"Values are subjective" is a category error. It conflates instrumental preferences (chocolate vs vanilla) with survival constraints (growth vs stagnation). Physics doesn't care about the first. Physics absolutely determines the second. Some configurations of matter persist. Others dissolve into heat death.

Civilizational collapse becomes predictable. Every civilization that achieves abundance and then abandons the constraints follows the same trajectory. Comfort over growth, cached narrative over empirical test, present consumption over future capacity. Rome. Britain. Now us. Not bad luck. Physics.

AI alignment is the same question on a different substrate. We're building goal-directed systems. They face the same four dilemmas. Align them to the physics of persistence, or don't.

The Fermi Paradox has an answer. The galaxy is silent not because intelligence is rare, but because the trap is nearly perfect. Civilizations that achieve abundance stop expanding. The physics explains why.

Ethics becomes engineering. Not "what should we value?" but "what must be true for valued things to persist?" The question transforms. The answer becomes testable.


VI. The Invitation

This essay presented one question and a minimal derivation.

The full framework applies this to everything: consciousness, civilizational collapse, AI safety, constitutional design, personal flourishing, the Fermi Paradox.

If the question bothered you—if you felt the gap, the "why didn't anyone ask this?"—then the work exists.

The physics: The Four Axiomatic Dilemmas — The complete derivation of constraints

The consequences: The Axiological Malthusian Trap — Why civilizations fail, and why the galaxy is silent

The evasion: Values Aren't Subjective — Dismantling the category error

The question was always there. Now you've seen it.


This draws from Aliveness, a framework for understanding what sustains organized complexity over time.


Appendix: Prior Art and Related Work

Hans Jonas (The Imperative of Responsibility, 1979) — The closest attempt in philosophy. Jonas tried to derive ethics from metabolism and biology, arguing that "life creates value" as an ontological fact. But he intentionally violated Hume's guillotine rather than bypassing it. His premise—"anything that must be done, ought to be done"—smuggles in normativity without justification. Jonas breached the wall; this framework walks around it. Jonas also provides only a binary claim (life is good), with no degrees, no optimization criteria, no operational virtues. For his argument and its limitations, see Farrell, Deriving "Ought" from "Is" (Temple University, 2010).

Mark Bickhard ("Process and Emergence," 2003) — Closest in structure. Bickhard argues that self-maintaining systems face existential requirements: do X or cease to exist. This is the conditional. But he then collapses it into categorical: "existential requirements" becomes "the system has genuine norms." The conditional-descriptive framing disappears into ontological claims. Still violation, not bypass. No derived virtues, no metrics, no multi-scale application.

Karl Friston (free energy principle) — Closest in physics. "Systems minimize surprise" is almost "systems must do X to persist." But Friston frames it descriptively (what systems do), not conditionally (what physics requires). No bridge to ethics, no operational virtues. "This is what surviving systems do" vs "this is what systems must do to survive" — observation vs engineering specification.

Ilya Prigogine (dissipative structures) — Describes how order can emerge far from thermodynamic equilibrium. Answers "how do complex systems maintain themselves?" but doesn't connect to ethics or derive operational constraints.

Humberto Maturana & Francisco Varela (autopoiesis) — Define self-maintaining systems. Descriptive framework for what living systems do, not what they require.

Terrence Deacon (Incomplete Nature) — Asks how purpose emerges from physics. Gets the "constraint" framing right. Doesn't derive specific virtues or connect to ethics.

Robert Rosen (Life Itself) — Mathematical definition of what makes a system "alive." Rigorous formal treatment of closure and self-reference. Stays at the level of formal systems, doesn't operationalize or connect to values.

David Deutsch (Constructor Theory) — Reformulates physics around "what transformations are possible." Related but different: Deutsch asks about possibility, this framework asks about persistence requirements.

W. Ross Ashby (requisite variety) — Control constraints on viable systems. Engineering framing, but limited to cybernetics—doesn't generalize to civilizations or derive virtues.

Eric Chaisson (Energy Rate Density) — Proposes Φm (energy flow per unit mass per unit time) as the master metric of complexity. Empirically tracks cosmic evolution from galaxies to brains to civilizations. But Chaisson treats it as descriptive of what happened, not prescriptive of what systems must do. No derived virtues, no ethics bridge.
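Chaisson's metric is simple enough to compute directly. The sketch below uses rough textbook figures (a ~20 W, ~1.35 kg human brain; the Sun's ~3.8e26 W luminosity and ~2e30 kg mass), chosen only to illustrate the calculation, not taken from Chaisson's tables:

```python
# Energy rate density Phi_m = power / mass, in Chaisson's units (erg/s/g).
ERG_PER_JOULE = 1e7
GRAMS_PER_KG = 1000.0

def phi_m(power_watts, mass_kg):
    """Energy flow per unit mass per unit time, in erg s^-1 g^-1."""
    return (power_watts * ERG_PER_JOULE) / (mass_kg * GRAMS_PER_KG)

# Rough illustrative values, not Chaisson's exact figures.
brain = phi_m(20, 1.35)        # human brain: ~1.5e5 erg/s/g
sun = phi_m(3.8e26, 2.0e30)    # the Sun: ~2 erg/s/g
print(f"brain: {brain:.2e} erg/s/g, sun: {sun:.2e} erg/s/g")
```

Even with crude inputs, the ordering is stark: a brain concentrates roughly five orders of magnitude more energy flow per gram than a star, which is Chaisson's empirical claim that complexity tracks Φm.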

Alex Wissner-Gross (Causal Entropic Forces) — "Intelligence maximizes future path freedom." Prescriptive in form: systems should act to maximize causal entropy (optionality). But this is about preserving options, not generating complexity. And Wissner-Gross still collapses to categorical: systems that maximize future freedom ARE intelligent. No derived virtues, no multi-scale application to civilizations.
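Wissner-Gross's principle can be caricatured in a few lines: a gridworld agent that greedily steps wherever the most states remain reachable within a short horizon. This is a toy optionality heuristic under invented conditions (the grid layout and horizon are mine), not his actual causal-path-entropy formulation:

```python
from collections import deque

# Toy grid: '#' walls, '.' free cells. The agent sits in a side pocket
# whose only exit is on the right; optionality-seeking should steer it
# toward the opening rather than the dead end.
GRID = [
    "#########",
    "#.......#",
    "#.#####.#",
    "#.#.....#",
    "#########",
]

def reachable(start, steps):
    """Count cells reachable from `start` within `steps` moves (BFS)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        (r, c), d = frontier.popleft()
        if d == steps:
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if GRID[nr][nc] == "." and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return len(seen)

def best_move(pos, horizon=4):
    """Greedy optionality: step to the neighbor that keeps the most
    states reachable within the horizon."""
    r, c = pos
    moves = [(nr, nc)
             for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
             if GRID[nr][nc] == "."]
    return max(moves, key=lambda m: reachable(m, horizon))

print(best_move((3, 4)))  # steps right, toward the pocket's exit
```

From the pocket at (3, 4), moving left offers 5 reachable cells within the horizon while moving right offers 7, so the agent drifts toward open space. That is the "preserving options" behavior the essay distinguishes from generating complexity.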

Lee Cronin & Sara Walker (Assembly Theory) — The Assembly Index quantifies complexity via construction history: how many steps to build this molecule? Threshold at A ≈ 15 distinguishes abiotic from biotic. Genuine metric. But AT asks "how complex is this object?" not "what must a system do to maximize complexity generation?" Detection tool, not operating manual.
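The Assembly Index is concrete enough to compute for toy objects. Below is a minimal brute-force sketch for strings, with concatenation as the only join operation and single characters as free primitives; this simplification is mine, not Cronin and Walker's molecular formulation:

```python
from collections import deque

def assembly_index(target):
    """Minimum number of join (concatenation) operations needed to build
    `target` from single characters, where every object built along the
    way can be reused for free. Brute-force BFS over sets of built
    intermediates; only feasible for short strings."""
    if len(target) <= 1:
        return 0
    basics = frozenset(target)          # single characters are free
    queue = deque([(basics, 0)])
    seen = {basics}
    while queue:
        avail, steps = queue.popleft()
        for a in avail:
            for b in avail:
                new = a + b
                if new not in target:   # only substrings of target help
                    continue
                if new == target:
                    return steps + 1    # BFS guarantees minimality
                state = avail | {new}
                if state not in seen:
                    seen.add(state)
                    queue.append((state, steps + 1))

# Repetition is what reuse rewards: "ABABAB" needs only 3 joins
# (AB, then ABAB, then ABABAB), while "ABCDEF" needs 5.
print(assembly_index("ABABAB"), assembly_index("ABCDEF"))
```

The contrast shows what the index measures: objects with reusable substructure are cheap to assemble, objects without it are expensive, and a high index signals a construction history unlikely to occur without some process doing the constructing.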

Jeremy England (Dissipation-Driven Adaptation) — Matter self-organizes to dissipate energy efficiently. "Life-like" structures are thermodynamically favored under strong driving. Closest to mechanism. But England describes what happens, not what systems must do. No virtues, no ethics, no multi-scale.

Stuart Kauffman (Adjacent Possible) — The space of possibilities expands super-exponentially with existing components. Autocatalytic closure enables exploration. But Kauffman describes the dynamics of innovation, not requirements for persistence. No derived virtues.

Luigi Fantappié / Ulisse Di Corpo (Syntropy) — The term exists. Di Corpo proposes syntropy as complement to entropy: convergence vs divergence. But remains metaphysical, no quantified units, no operational derivation, no connection to specific virtues.

The universal failure mode: Many thinkers used the existential conditional: "if a system is to persist, it must..." Every attempt either collapsed into categorical ("must" became "ought" or "has norms") or stayed purely descriptive without deriving anything operational. Hume's guillotine blocks the collapse; it doesn't block the conditional. The bypass was always available. No one used it to derive operational constraints.

What this framework does differently: It stays conditional through the entire derivation — four dilemmas, four virtues, quantitative degrees — before making one explicit normative commitment: that more aliveness is preferable to less. This is still a collapse, but minimal (one assumption), explicit (not smuggled), and self-selecting (only persisting systems can evaluate it).

The gap: No framework combines: engineering framing through derivation, operational virtues derived from constraints, quantitative degrees, multi-scale application, and the Hume bypass. Some have metrics (Chaisson, Cronin). None derive virtues from constraints.