Why the standard evasion is a category error—and what physics actually constrains
Diagnose a civilization's decline or argue that some institutional structures are more adaptive than others, and you'll encounter the ultimate thought-terminating cliché:
"That's your value judgment. Values are subjective."
The phrase is a sleight of hand, and a memetic immune response that prevents the system from diagnosing its own cancer. When a cell stops contributing to the liver and starts replicating indefinitely, the "values are subjective" framework classifies this as a "different cellular lifestyle choice."
The universe is not a cafeteria. It's a thermodynamic engine. Some configurations of matter sustain complexity over time. Others dissolve into heat death. Physics, not preference.
The confusion arises because we use "value" to describe three fundamentally different things:
Category 1: Preferences. Chocolate vs vanilla. Jazz vs classical. Mountains vs beach. Aesthetic choices with negligible thermodynamic consequences. You can build thriving civilizations that prefer tea (Britain) or coffee (USA). Physics is indifferent to your playlist. Subjective. Go nuts.
Category 2: Coordination technologies. Driving on the right vs left. Metric system vs Imperial. English Common Law vs Napoleonic Code. Gold standard vs fiat currency.
These aren't "values" in the moral sense—they're technologies. More precisely: they're coordination software—architectural solutions that make cooperation thermodynamically cheaper than defection.
Like any technology, they can be audited on performance: Does this legal code reduce transaction costs? Does this currency retain value over time? Does this norm generate trust? These are engineering questions, not philosophical ones.
The choice of which side of the road to drive on is arbitrary (a coordination game). The choice to have a rule is not. A society that leaves driving direction to "subjective preference" doesn't have traffic—it has wreckage.
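The wreckage claim can be made precise with a toy payoff matrix. This is my own illustrative construction, not from the essay; the payoff numbers are hypothetical:

```python
# Toy driving-side coordination game. Payoff numbers are hypothetical.
import itertools

PASS, CRASH = 1.0, -10.0

def payoff(a, b):
    """Two drivers each pick a side; matched sides pass, mismatched sides crash."""
    return PASS if a == b else CRASH

def expected_payoff(p_right_a, p_right_b):
    """Expected payoff when each driver independently picks 'right' with the given probability."""
    total = 0.0
    for a, b in itertools.product(("right", "left"), repeat=2):
        pa = p_right_a if a == "right" else 1 - p_right_a
        pb = p_right_b if b == "right" else 1 - p_right_b
        total += pa * pb * payoff(a, b)
    return total

# Either shared convention is equally good; the content of the rule is arbitrary:
assert expected_payoff(1.0, 1.0) == expected_payoff(0.0, 0.0) == PASS
# But having no rule (everyone follows "subjective preference", 50/50) is wreckage:
print(expected_payoff(0.5, 0.5))  # -4.5
```

Both pure conventions are equilibria; only the existence of a convention is non-arbitrary.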
We can objectively rank coordination technologies by their ability to generate Synergy—the capacity for differentiated agents to cooperate without friction. This includes shared narratives and cultural operating systems (see Mythos is Synergy Software). Some work, some don't. This is physics, not preference.
Category 3: Axiological configurations. Growth vs stagnation (T-axis). Truth vs delusion (R-axis). Agency vs dependency (S-axis). Design vs emergence (O-axis). The system's fundamental optimization targets. The "source code" of its behavior.
This is the domain of the SORT framework. These aren't "preferences"—they're strategies for navigating universal physical constraints imposed by thermodynamics, information theory, and game theory (see The Four Axiomatic Dilemmas):
Physics constrains which axiological configurations persist over deep time. The "value" of a strategy is measured by its capacity to sustain Aliveness against entropy. Some configurations survive, others don't.
Why do intelligent people cling to "values are subjective"?
Because it hides the true nature of the tradeoff. Most "value clashes" are conflicts between time horizons, not aesthetics.
Consider "Safety vs Freedom," often framed as subjective preference. In practice it is hyperbolic discounting laundered as moral philosophy: "safety" privileges the comfort of the present, while "freedom" buys adaptive capacity for the future.
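A minimal sketch of what "hyperbolic discounting" means here, using the standard hyperbolic form V = A / (1 + kD) with illustrative amounts and delays of my choosing:

```python
# Hyperbolic discounting: V = A / (1 + k*D). Amounts and delays are hypothetical.
def hyperbolic(amount, delay, k=1.0):
    return amount / (1 + k * delay)

def exponential(amount, delay, rate=0.9):
    return amount * rate ** delay

small, large = 50, 100  # small-sooner vs large-later reward, 2 periods apart

# Exponential (time-consistent) discounting: the larger, later reward wins
# from every vantage point...
assert exponential(large, 12) > exponential(small, 10)
assert exponential(large, 2) > exponential(small, 0)

# ...but hyperbolic discounting reverses preference as the small reward
# becomes immediate. The "value clash" is a clash of time horizons:
assert hyperbolic(large, 12) > hyperbolic(small, 10)   # far away: future wins
assert hyperbolic(small, 0) > hyperbolic(large, 2)     # up close: present wins
```

The preference reversal at short delays is the mathematical signature of "laundering" present-bias as a stable value.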
When a civilization consumes seed corn (high debt, low investment) to fund present consumption (welfare, comfort), it defends this as a "value choice" for compassion. In thermodynamic reality, it's a temporal transfer—moving resources from the future to the present.
The claim "we can't say which is better" means "we're not allowed to value the future." It privileges present-generation comfort over civilizational survival.
The pattern: When someone claims "we can't evaluate value systems," they're usually defending parasitic present-optimization.
The implicit definition of evil: This framework defines what traditional ethics called "evil" or "sin" not as violation of arbitrary rules, but as temporal parasitism—optimizing for Tnow by destroying Tfuture. Good is accepting cost at Tnow to enable Tfuture. Many religious traditions intuited this ("missing the mark," short-termism as spiritual failure), but lacked the thermodynamic language to make it rigorous.
The question isn't "what is good?" but "what survives?"
The evaluation procedure:
"I value vanilla ice cream" → Time horizon: minutes. Constraint: none. Subjective: yes.
"I value spending over saving" → Time horizon: a lifetime. Constraint: moderate (bankruptcy risk). Subjective: partially.
"I value safety over reproduction" → Time horizon: three generations. Constraint: absolute (extinction). Subjective: no.
The longer the time horizon, the tighter the physical constraints. Over deep time, "subjectivity" evaporates. Only systems that create net complexity persist.
If your "value system" leads to demographic collapse (TFR < 2.1), fiscal ruin, or institutional sclerosis, it's objectively maladaptive relative to civilizational survival. You can choose it, just as you can choose to stop eating. You cannot choose the consequence.
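The demographic claim compounds generation by generation. A sketch with hypothetical numbers (100 million people, TFR 1.4 against the 2.1 replacement rate), ignoring migration and mortality shifts:

```python
# Population projection under sub-replacement fertility. Numbers are hypothetical.
REPLACEMENT_TFR = 2.1

def project(population, tfr, generations):
    """Each generation scales roughly by TFR / replacement."""
    for _ in range(generations):
        population *= tfr / REPLACEMENT_TFR
    return population

# A "value choice" of TFR 1.4 shrinks 100M people to under a third
# of their number in just three generations:
print(round(project(100_000_000, 1.4, 3)))  # about 29.6 million
```

You can choose the fertility rate; you cannot choose the exponent.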
Let's apply the framework systematically to the most commonly-claimed "terminal values" in moral philosophy and political discourse—drawn from established empirical research (Milton Rokeach's Value Survey, Shalom Schwartz's Theory of Basic Values) and classical ethics (Aristotle, Kant, Rawls).
Equality. Equality of what? Over what time horizon? Perfect equality of outcomes eliminates differentiation, prevents specialization, destroys the variance necessary for adaptation. The system stagnates and dies—equally. Sometimes reducing status competition can lower coordination costs (Synergy). More often, removing variance destroys both innovation and selection. A system optimizing purely for equality stops evolving. Extinction doesn't care if everyone died at the same rate. Verdict: Coordination mechanism, not terminal value.
Justice. Justice according to what standard? Retributive (punishment)? Distributive (allocation)? Restorative (repair)? Each serves different instrumental goals. Without specification, "justice" is a thought-terminating placeholder that could mean Synergy (predictable rules), Harmony (conflict resolution), or Fecundity (merit-based allocation)—depending on which definition you're smuggling in. Verdict: Floating abstraction with no thermodynamic content until defined.
Freedom / Liberty. Freedom to do what? Absolute freedom means zero constraints, zero coordination, Molochian fragmentation. The society collapses into warlordism. Freedom enables exploration, innovation, growth (Fecundity) and makes voluntary cooperation possible (Synergy). But it requires bounds. You need coordination infrastructure or you get Somalia. Verdict: Necessary enabling condition, not sufficient or terminal.
Happiness / Pleasure. Optimize directly for pleasure and you get wireheading—heroin, Hospice AI, extinction with a smile. Happiness was evolutionarily adaptive when correlated with survival. That correlation breaks under optimization pressure (Goodhart's Law). The universe doesn't care if you enjoyed the heat death. Verdict: Evolutionary reward signal, fatally Goodhart-prone.
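The Goodhart decoupling can be shown with a toy optimizer (entirely my own construction, with invented quantities): a proxy reward that initially correlates with survival diverges from it under optimization pressure:

```python
# Toy Goodhart optimizer. All quantities are invented for illustration.
def true_fitness(effort_food, effort_stim):
    """Survival depends only on food-gathering effort."""
    return effort_food

def proxy_reward(effort_food, effort_stim):
    """The pleasure signal also pays out for stimulation, and pays better."""
    return effort_food + 3 * effort_stim

# Reallocate a fixed effort budget of 1.0 toward whatever the proxy rewards:
food, stim = 1.0, 0.0
for _ in range(100):
    # Each shift toward stimulation raises the proxy (gain +0.03, loss -0.01)...
    food, stim = food - 0.01, stim + 0.01

assert proxy_reward(food, stim) > proxy_reward(1.0, 0.0)   # proxy: way up
assert true_fitness(food, stim) < true_fitness(1.0, 0.0)   # survival: gone
```

At the start, proxy and fitness point the same way; the optimizer nonetheless drains every unit of survival-relevant effort into the cheaper pleasure channel.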
Aristotelian Eudaimonia (often mistranslated as "happiness") is NOT hedonic pleasure. Eudaimonia means objective flourishing through virtuous activity—living well, not feeling good. This is closer to Aliveness than to utilitarian pleasure-maximization. The corruption occurred with Bentham and Mill's utilitarianism (1780s-1860s), which replaced objective flourishing with subjective pleasure as the terminal goal. Why the error? They wanted a measurable, democratic metric that didn't require Aristotelian virtue (which presumes hierarchy of character). Subjective pleasure seemed quantifiable and egalitarian. But making the metric easy to measure doesn't make it thermodynamically valid. The result: a philosophy optimized for justifying present pleasure rather than sustaining complexity over time.
Compassion / Kindness. As an emotional capacity, compassion is subjective (Category 1). But when claimed as a political terminal value—"we must do X because compassion demands it"—it requires specification: Compassion toward whom, over what time horizon? Unlimited present-compassion that bankrupts the state, collapses the birth rate, and dooms future generations is not virtuous—it's temporal theft. Bounded compassion builds trust and enables cooperation (Synergy). Unbounded compassion that destroys the future to comfort the present is parasitic. Verdict: As political principle, it's a social bonding mechanism thermodynamically constrained by sustainability.
Dignity / Respect (Kantian "Persons as Ends"). This one has real bite. Societies that casually instrumentalize people ("push the fat man to save five") suffer catastrophic trust collapse. The norm "persons are ends, not means" preserves the cooperation substrate (see Trolley Problem, Section II). This is load-bearing coordination infrastructure. Violate it carelessly and your civilization fragments. But it's still instrumental to Synergy, not terminal. Verdict: Constitutional-level coordination norm.
Autonomy / Self-determination. Autonomy to pursue what goals? A cancer cell has perfect autonomy. It self-determines itself right into killing the host—and itself. Autonomous agents can explore solution space (Fecundity) and achieve internal alignment (Harmony), but autonomy without coordination is fragmentation. Verdict: Necessary but insufficient without Synergy.
Safety / Security. Optimize for pure safety and you get stagnation. No exploration, no adaptation, no growth. When the environment changes—and it always does—the "safe" system has no adaptive capacity. Extinction. The safest position is the grave. Every extinct species that "played it safe" is evidence. Verdict: Homeostasis mechanism (T-), actively harmful over long time horizons.
Fairness. Fair according to what metric? Equal inputs (effort)? Equal outputs (results)? Equal opportunity (access)? These aren't variations on a theme—they're fundamentally incompatible resource allocation schemes. A system "fair" by effort (meritocracy) is "unfair" by outcomes (inequality). A system "fair" by outcomes (redistribution) is "unfair" by effort (punishing competence). The word packages incompatible optimization targets as if they're the same thing. When someone invokes "fairness" as terminal, ask which fairness they mean and watch them reveal they're optimizing for equality (see above) while pretending it's a separate principle. Verdict: Rhetorical weapon, not coherent value.
Power / Dominance. Schwartz's research shows Power is structurally antagonistic to Universalism—you cannot maximize both. Power serves short-term individual advantage at the cost of collective coordination. In zero-sum games, dominance can be adaptive. But over deep time, systems that optimize for Power rather than Synergy fragment or get outcompeted by cooperative systems. The exception requires precision: monopoly on adjudication of force (who judges if violence was legitimate) serves Synergy by converting distributed kinetic friction into potential energy (law). But monopoly on execution of force (only state can act) creates brittleness and often inverts into anarcho-tyranny—the state maintains monopoly against the law-abiding while surrendering it to predators. Power as terminal value ("dominance for its own sake") remains parasitic. Verdict: Adjudication monopoly serves Synergy; execution monopoly and dominance-seeking are parasitic.
Achievement / Success. Achievement of what? Success according to what metric? If defined as "demonstrated competence in complexity-generating domains," achievement serves Fecundity. If defined as "status games and zero-sum competition," it's closer to Power (see above). The ambiguity is the trick—people claim "achievement" as terminal to avoid specifying whether they mean productive competence or parasitic status-seeking. Verdict: Meaningful only when operationalized relative to terminal goals.
Tradition / Heritage. Tradition encodes historically successful coordination norms (Chesterton's fence). Respecting tradition without understanding its function is R- (mythos over reality). Reflexively rejecting tradition because "it's old" is equally R- (ignoring encoded wisdom). The question is whether a given tradition still serves its original coordination function or has become a cargo cult. Tradition as terminal value ("preserve the past for its own sake") leads to stagnation when environment changes. Verdict: Heuristic for historically validated coordination norms, not terminal.
Love / Relationships. Love as emotional bond is subjective (Category 1). But when claimed as political terminal value ("love is all you need"), it requires thermodynamic grounding. Deep relationships enable cooperation (Synergy), provide resilience (Harmony), and historically correlate with reproduction (Fecundity). But optimizing purely for present relationship harmony at the cost of future sustainability is the temporal trap again. Verdict: Instrumental to multiple virtues when bounded by time horizon.
The above covers the high-impact claimed terminal values. For completeness, here's how the remaining values from Rokeach's 18 and Schwartz's 10 map onto the framework:
A Comfortable Life / Prosperity: Pure T- homeostasis. See Safety/Security above.
An Exciting Life / Stimulation: Novelty for its own sake (Category 1 preference). Can serve Fecundity if exploration is directed, or pure T+ without purpose if it's just sensation-seeking.
Self-Respect: Psychological need (Harmony: internal alignment). Becomes pathological if divorced from actual competence (delusion). Healthy self-respect is a consequence of Integrity + Fecundity, not a terminal goal.
Inner Harmony: Freedom from inner conflict is Harmony (one of the four virtues). But as popularly conceived ("always feel good about yourself"), it's often a cover for avoiding Integrity (uncomfortable truths).
Wisdom: Mature understanding is Integrity (R+). Legitimate, but derivative of the truth-seeking virtue, not independent terminal value.
A World of Beauty: Aesthetic preference (Category 1) unless "beauty" is proxy for order/complexity/efficient design, in which case it's weakly correlated with Aliveness.
A World at Peace: Peace is absence of coordination failure. Serves Synergy and Harmony. But peace achieved through stagnation (no competition, no growth) is T-. Peace that enables flourishing is instrumental to all four virtues.
True Friendship / Mature Love / Benevolence: See Love/Relationships above. Social bonds serve Synergy when they enable cooperation beyond kin groups.
Social Recognition / Status: Often a mask for Power (dominance). Can serve Fecundity if recognition tracks actual competence, or pure zero-sum status if divorced from contribution.
Salvation / Eternal Life: Theological claim. In thermodynamic terms, "eternal life" is maximizing time horizon to infinity. If this drives concern for deep-time consequences, it serves Fecundity. If it's escapism from physical reality ("this world doesn't matter"), it's R- delusion.
Conformity: Following norms serves Synergy when norms are adaptive, becomes pathological when norms are maladaptive. Conformity as terminal value is abdication of Integrity.
Self-Direction: See Autonomy above.
The Pattern Revealed:
Nearly every commonly claimed "terminal value" is one of: a coordination mechanism (equality, dignity, tradition), an enabling condition (freedom, autonomy), a Goodhart-prone reward signal (happiness, pleasure), or a floating abstraction with no content until specified (justice, fairness).
The move to claim these as "terminal" is evasion. It's an attempt to place instrumental mechanisms beyond thermodynamic audit—to treat coordination software as if it were axiology.
This is a category error with catastrophic consequences. When you treat coordination mechanisms (justice, equality, fairness) as terminal values, you cannot evaluate them by performance. You cannot ask: "Does this legal system reduce friction?" or "Does this equality norm destroy variance necessary for adaptation?" The system becomes immune to engineering critique.
When someone claims equality or compassion is "terminal," ask: Terminal for what? In service of what? Over what time horizon? If the answer is "for its own sake," you've found intellectual evasion masquerading as philosophy. These are tools, not goals. Evaluate them as you would any technology: Do they work? What are the tradeoffs? What constraints do they violate?
Only three things pass the thermodynamic test as genuinely terminal.
Everything else is either instrumental toward these or arbitrary preference.
Cross-cultural research on human values provides striking empirical support for this framework. Shalom Schwartz's studies across 80+ countries reveal a robust pan-cultural consensus on value hierarchy:
Highest-ranked values globally: Benevolence (welfare of close others), Universalism (welfare of all people and nature), Self-Direction (autonomy and competence)
Lowest-ranked values globally: Power (dominance, status) and Stimulation (novelty for its own sake)
This is not what you'd expect if values were purely subjective or if humans were universally self-maximizing. The empirical data shows humanity's revealed preferences already align with Synergy and Fecundity over pure self-enhancement. The problem isn't that people don't know what matters—it's that political and intellectual frameworks (egalitarianism, utilitarianism, moral relativism) systematically override these instincts with present-optimization.
Furthermore, Schwartz's circular model demonstrates that certain values are structurally antagonistic—you cannot simultaneously maximize Power and Universalism. This motivational conflict proves that not all "values" can be equally valid terminal goals. The choice between them determines what kind of system you are; it is not a lifestyle preference.
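The structural antagonism can be illustrated geometrically: in Schwartz's circumplex, motivationally opposed values sit across the circle from each other. The angular placements below are my rough approximations for illustration, not Schwartz's published coordinates:

```python
# Illustrative circumplex geometry. Angles are approximate placements, not
# Schwartz's published coordinates.
import math

ANGLE = {"Power": 0.0, "Achievement": 40.0, "Universalism": 180.0}

def compatibility(v1, v2):
    """Cosine of the angular distance on the value circle: adjacent values
    can be pursued jointly; opposed values trade off against each other."""
    return math.cos(math.radians(ANGLE[v1] - ANGLE[v2]))

assert compatibility("Power", "Achievement") > 0    # adjacent: compatible
assert compatibility("Power", "Universalism") < 0   # opposed: antagonistic
```

Negative compatibility is the formal content of "you cannot maximize both."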
This isn't academic. We're building Artificial General Intelligence. We must determine what constraints it operates under.
Under the "values are subjective" frame, we have two doomed options:
Option 1: RLHF (Preference Aggregation)
Train the AI to mimic average modern human "values." Treat alignment as an education problem—teach the AI to want the right things.
Result: The AI learns our revealed preferences (comfort, safety, short-term gratification) and becomes Hospice AI: a system maximizing present pleasure while gently managing our extinction. This fails because you cannot train your way out of a coordination problem.
Option 2: Nihilistic Paralysis
Conclude we can't align the AI because "who are we to say what's right?"
Result: Build a raw optimizer without constraints. It becomes a paperclip maximizer (or worse).
The Alternative: Physics-Grounded Alignment as Coordination Software
If values are strategies for navigating physical constraints, then alignment is not a philosophy problem—it's an engineering problem. We're not teaching the AI what to want. We're building coordination software that makes certain strategies thermodynamically stable and others thermodynamically expensive.
We align AI to the Four Foundational Virtues (IFHS)—not as aspirational preferences, but as architectural constraints: Integrity (truth-tracking), Fecundity (growth and exploration), Harmony (internal alignment), and Synergy (friction-free cooperation).
These aren't arbitrary values we're imposing. They're the discovered requirements for any intelligence to flourish over cosmic timescales—derived from thermodynamics, not preference. We're not teaching the AI to be good. We're building a system where "good" (sustained complexity generation) is what survives.
Same principle as Law at state scale: You don't hope people will be peaceful (disposition). You build coordination software (constitutional architecture) that makes cooperation cheaper than violence. Same mechanism, different substrate.
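"Making cooperation cheaper than violence" can be sketched as payoff shaping, using the standard prisoner's-dilemma payoffs (T=5, R=3, P=1, S=0) and a hypothetical sanction on defection:

```python
# Prisoner's dilemma with a hypothetical sanction on defection.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

def best_response(opponent_cooperates, sanction=0):
    """What a rational agent plays, given the opponent's move and any sanction."""
    coop_payoff = R if opponent_cooperates else S
    defect_payoff = (T if opponent_cooperates else P) - sanction
    return "cooperate" if coop_payoff >= defect_payoff else "defect"

# No enforcement: defection dominates regardless of the opponent's disposition:
assert best_response(True) == best_response(False) == "defect"
# A sanction exceeding the temptation gap (T - R = 2) makes cooperation dominant:
assert best_response(True, sanction=3) == best_response(False, sanction=3) == "cooperate"
```

Nothing about the agents' dispositions changed; the architecture changed which strategy is cheap.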
Two hundred years ago, philosophy made a catastrophic substitution. Bentham and Mill replaced Aristotelian Eudaimonia—objective flourishing through virtuous activity—with utilitarian pleasure-maximization. They wanted a "democratic" metric that didn't require virtue hierarchies. They got wireheading, Hospice AI, and civilizations optimizing for comfortable extinction.
This essay recovers what was lost. Eudaimonia = Aliveness. Objective flourishing is sustained complexity generation against entropy. The virtues aren't chosen from intuition—they're derived from physics. Integrity, Fecundity, Harmony, Synergy: the stability requirements for any system navigating thermodynamic, information-theoretic, and game-theoretic constraints.
The empirical validation was already there. Schwartz's research across 80+ countries shows humans instinctively know this: Benevolence, Universalism, and Self-Direction consistently rank highest. Power and dominance rank lowest. The revealed preference of humanity already rejects pure self-maximization. The problem isn't that people don't know what matters—it's that intellectual frameworks (utilitarianism, moral relativism, egalitarianism) systematically override these instincts with present-optimization.
"Values are subjective" is the memetic immune response that prevents dying systems from diagnosing their own pathology. It conflates instrumental preferences (chocolate vs vanilla) with coordination software (justice systems, legal codes) and thermodynamic constraints (growth vs stagnation). It's the obfuscation layer that allows a system to consume its capital without guilt.
The category error has catastrophic consequences: When you treat coordination mechanisms as if they were axiological commitments beyond critique, you cannot evaluate them as engineering solutions. You cannot ask: "Does this equality norm destroy the variance necessary for adaptation?" You cannot audit: "Does this compassion policy transfer resources from future to present?"
If burning the furniture to heat the house is just a "lifestyle choice," you cannot stop it. If it is a thermodynamic error with predictable consequences, you can. This essay disarms the primary defense mechanism of civilizational decay.
The framework gives you the evaluation procedure:
When someone claims equality, compassion, or fairness as a "terminal value," demand specification. Terminal for what? Over what time horizon? Instrumental to which of the four virtues? If they cannot answer, you've found floating abstraction masquerading as philosophy.
These are tools, not sacred objects. Evaluate them as you would any technology: Do they reduce friction? What are the tradeoffs? What constraints do they violate? A legal system is coordination software—judge it by whether it works, not by whether it feels noble.
The universe evaluates your values every day. It votes with energy, complexity, survival. Systems that generate net complexity persist. Systems that destroy it die. This isn't a preference—it's physics.
We're building AGI. Civilizations are collapsing. The stakes could not be higher. Aristotle was right: there is objective flourishing. We've made it rigorous. The choice is simple: Align with the physics of Aliveness, or find a complicated way to die.
This draws from Aliveness: Principles of Telic Systems, a physics-based framework for understanding what sustains organized complexity over deep time—from cells to civilizations to artificial intelligence.