Why good intentions produce bad outcomes—and why we can't see it happening
When a policy fails, you'll hear the same explanations: poor implementation, the wrong people in charge, not enough funding, unforeseen circumstances.
What you'll never hear: "The policy was structurally wrong, and the structure made the failure invisible until it was too late."
This is complexity laundering: the process by which complex causal chains allow harmful policies to seem righteous, because the harm is invisible while the intent is visible.
The structure: good intent is visible; harm flows through a complex causal chain; so the harm stays invisible while the intent earns credit.
This isn't malice. It isn't stupidity. It's structural epistemological failure—the mismatch between system complexity and human cognitive capacity to model it.
The pattern is so pervasive that even the critique is fragmented. This analysis falls between disciplines—economics, political science, psychology, systems theory—so no one owns the question.
All complexity laundering collapses to three mechanisms:
Spatial: Costs happen elsewhere—to other groups, other regions, other countries.
Example: Rent control. Visible: "Help renters by capping prices." Invisible: Reduced housing supply (developers build elsewhere), deteriorating buildings (landlords can't afford maintenance), insider benefits (those who got units keep them, newcomers face black markets). The costs are spatially dispersed to people who aren't in the room when the policy is debated.
Temporal: Costs happen later—next decade, next generation, after my term ends.
Example: Pension promises. Visible: "You will receive €X at retirement." Invisible: Whether the system can fund €X in 30 years requires actuarial modeling most people can't do. The promise feels like a gift. The unfunded liability feels like nothing. By the time the bill comes due, the promisers are retired or dead.
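To make the invisible side concrete, here is a minimal present-value sketch. Every number is an assumption invented for illustration (a EUR 24,000/year benefit, EUR 4,000/year contributions, a 2% real discount rate); the point is that the gap between promise and funding stays invisible unless someone does this arithmetic:

```python
# Illustrative actuarial check. Every number here is an assumption
# invented for demonstration, not a parameter of any real system.

def pv(amount: float, rate: float, years: int) -> float:
    """Present value today of a payment `years` from now."""
    return amount / (1 + rate) ** years

RATE = 0.02                # assumed real discount rate
ANNUAL_BENEFIT = 24_000.0  # assumed promise: EUR/year for 20 years of retirement
ANNUAL_CONTRIB = 4_000.0   # assumed funding: EUR/year for 30 working years
WORK_YEARS = 30
RETIRED_YEARS = 20

# Value of the promise today: benefits paid in years 30..49.
promise = sum(pv(ANNUAL_BENEFIT, RATE, WORK_YEARS + t) for t in range(RETIRED_YEARS))

# Value of the contributions today: paid in years 0..29.
funding = sum(pv(ANNUAL_CONTRIB, RATE, t) for t in range(WORK_YEARS))

print(f"PV of promise:       EUR {promise:10,.0f}")   # ~221,000
print(f"PV of contributions: EUR {funding:10,.0f}")   # ~91,000
print(f"Unfunded gap:        EUR {promise - funding:10,.0f}")
```

The promise feels fully funded to anyone who can't run this calculation; the gap is the laundered cost.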
Causal: Costs are many steps removed—policy → X → Y → Z → harm.
Example: "Tax the rich more." Visible: Rich pay more taxes → more money for services. Invisible (requires modeling): Capital flight, reduced investment, brain drain, changed incentives, Laffer effects, reduced future tax base. These effects are real—their magnitude varies by context. But the advocate typically isn't modeling them at all—they're running a static model in a dynamic system. That's the laundering: second-order effects aren't entering the decision calculus.
The pattern: Benefits are designed to be visible—that's how politics works. Costs are complex, delayed, and distributed—that's how complex systems work. The asymmetry is structural.
Seeing through complexity laundering requires three things: cognitive capacity, relevant frameworks, and willingness to look. Most people are missing at least one.
Some people cannot model second-order effects regardless of education—that's the distribution of cognitive ability. Complex policy requires modeling chains like "A causes B causes C causes D." Many people max out at "A causes B." The system assumes capacity that's absent in the majority.
Even with sufficient capacity, seeing through laundering requires conceptual tools that aren't default human cognition: game theory, incentive analysis, systems thinking, second-order effects, unintended consequences. These are acquired frameworks. A smart person unfamiliar with incentive reasoning won't spontaneously derive Goodhart's Law or notice that a policy optimizes the metric while destroying the goal.
Human cognition is optimized for: small groups, short feedback loops, and causal chains of one or two visible steps.
Modern policy operates at: millions of actors, feedback loops spanning decades, and causal chains of many invisible steps.
The mismatch isn't closed by raw intelligence—it requires specific training in how complex systems behave. Most education doesn't provide this. Economics, game theory, and systems thinking remain niche rather than foundational.
Worse: most people value belonging as an end in itself; truth is valued only insofar as it serves belonging. When complexity laundering analysis threatens group identity ("our welfare state is good," "our democracy works"), the analysis triggers tribal defense, not epistemic update.
The person who points out laundered costs isn't processed as "providing information." They're processed as "attacking the tribe." The defense isn't "your model is wrong"—it's social punishment for disloyalty.
This explains why complexity laundering persists even after being explained. The explanation requires Mode 3 (actual epistemology), but political discourse operates in Mode 2 (tribal signaling). The content never reaches evaluation. It's dismissed as hostile signal before being processed as information.
The standard model of electoral accountability requires specific conditions to function. Political science has identified these; empirical research shows most fail systematically.
| Condition | Required For | Failure Mode |
|---|---|---|
| Voter competence | Correct decisions | Systematic bias, not random error (Caplan) |
| Voter independence | Wisdom of crowds | Tribal herding, media correlation (Condorcet) |
| Issue-based voting | Policy responsiveness | Identity dominates policy (Achen & Bartels) |
| Outcome traceability | Feedback learning | Complexity laundering hides causation |
| Responsibility assignment | Electoral punishment | Coalition diffusion, blame-shifting |
| Time horizon alignment | Long-term optimization | Political cycle ≪ policy effects |
Complexity destroys most of these simultaneously. Policies are too complex to evaluate. Effects are too delayed to create feedback. Responsibility is too diffuse to assign. The political cycle (~4 years) is shorter than most policy feedback loops (decades). And from a deep-time perspective, current voters lack legitimate standing to bind future generations—yet standard democratic theory treats this as unproblematic.
The result: democracies systematically select for policies with visible benefits, invisible costs, present focus, and future burden. This predicts exactly what we observe—unsustainable fiscal trajectories, deferred maintenance, short-term optimization, inability to solve long-term problems.
The complete list of conditions required for mass democracy to produce good outcomes, drawn from Condorcet jury theorem, Achen & Bartels' "folk theory" critique, accountability literature, and this framework:
| # | Condition | Source |
|---|---|---|
| 1 | Competence — Voters >50% likely to be correct | Condorcet; Caplan |
| 2 | Independence — Voters decide independently, no herding | Condorcet |
| 3 | Information access — Relevant facts available and accessed | Folk Theory |
| 4 | Issue-based voting — Policy preferences, not tribal identity | Achen & Bartels |
| 5 | Preference coherence — Individual preferences can aggregate consistently | Arrow |
| 6 | Aggregation validity — Majority preference → good policy | Folk Theory |
| 7 | Outcome traceability — Decisions linkable to results | Accountability lit |
| 8 | Responsibility assignable — Someone owns the outcome | Accountability lit |
| 9 | Feedback correction — Electoral response actually corrects errors | Accountability lit |
| 10 | Intergenerational standing — Current voters legitimately bind future | This framework |
| 11 | Time horizon alignment — Voters optimize for policy-relevant timescale | This framework |
| 12 | Non-pathological discounting — Future weighted appropriately | This framework + behavioral econ |
Conditions 1-9 are from canonical democratic theory. Conditions 10-12 are the Aliveness framework's contribution: standard theory asks "given preferences, does democracy aggregate them correctly?" The deeper question is "are those preferences worth aggregating? Do current voters have standing over deep time?"
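Condition 1 deserves emphasis because it cuts both ways: aggregation amplifies whatever competence voters have, in either direction. A minimal exact computation of the Condorcet jury theorem (standard library only; the p values are illustrative):

```python
from math import comb

def p_majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent voters is right,
    when each voter is independently right with probability p (Condorcet)."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# With p just above 0.5, bigger electorates get better (~0.58, ~0.67, ~0.74);
# with p just below 0.5, the same aggregation makes them worse (~0.42, ~0.33, ~0.26).
for p in (0.51, 0.49):
    for n in (101, 501, 1001):
        print(f"p={p:.2f}, n={n:4d}: P(majority correct) = {p_majority_correct(n, p):.3f}")
```

This is why Caplan's finding of systematic bias (p below 0.5 on specific questions) matters more than random ignorance: scale makes biased electorates reliably wrong, not merely noisy.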
The paradox: Democracy's legitimacy rests on informed choice, but structural complexity makes informed choice impossible. The more complex society becomes, the worse standard democratic mechanisms perform—yet complexity generally increases with development.
There's a specific time horizon beyond which democratic feedback becomes impossible:
| Time Horizon | Feedback Capacity | Example |
|---|---|---|
| < 4 years (political cycle) | Voters can verify | "Did my taxes go up?" |
| 1-2 generations (20-50 years) | Voters who approved ≠ voters who pay | Pension promises, debt accumulation |
| > Living memory (80+ years) | No one remembers the decision | Demographic policy, institutional decay |
The rule: Any policy with a feedback loop longer than the political cycle cannot be managed through ongoing electoral accountability. Effects beyond the cycle can't generate electoral feedback—the politician is already re-elected or retired before consequences arrive.
This creates selection pressure favoring policies with costs beyond the horizon. A policy with visible benefits now and invisible costs in 20 years outcompetes a policy with balanced timing. Politicians who exploit this win; those who don't, lose. It's not that politicians CAN exploit long-horizon policies—it's that selection pressure guarantees they will.
Currently, pension promises (T = 30-50 years), demographic policy (T = 20-80 years), debt accumulation (T = variable), and infrastructure maintenance (T = decades) all operate beyond the Laundering Horizon. They're systematically exploited by every government in every democracy. The pattern is structural, not coincidental—selection pressure makes it inevitable. The only solutions are mechanisms that remove ongoing electoral discretion: constitutional constraints, independent institutions, algorithmic triggers.
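The rule is mechanical enough to state as a decision procedure. A sketch using representative points from the ranges above (the mapping encodes the rule itself, not an empirical finding; the 2-year case is added for contrast):

```python
# The Laundering Horizon rule as a decision procedure. The T values are
# representative points from the ranges above.

POLITICAL_CYCLE_YEARS = 4

def governance_mode(feedback_years: float) -> str:
    if feedback_years <= POLITICAL_CYCLE_YEARS:
        return "electoral accountability viable"
    return ("needs commitment device (constitutional rule, "
            "independent institution, algorithmic trigger)")

POLICIES = {
    "tax rate change": 2,
    "infrastructure maintenance": 20,
    "pension promises": 40,
    "demographic policy": 50,
}

for name, t in POLICIES.items():
    print(f"{name} (T={t}y): {governance_mode(t)}")
```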
The most insidious form: complex causal chains let you feel moral while causing harm.
Structure: I support policy X; X causes Y, which causes Z; Y and Z are invisible; I feel moral for supporting X; I'm insulated from moral responsibility for Z.
Example: Minimum wage increases. Visible: workers who keep their jobs earn more. Invisible: reduced hiring at the margin, cut hours, substitution toward automation, the least-skilled priced out of work entirely.
These effects are real—their magnitude varies by context. But the advocate typically isn't modeling them at all—they're responding to the first-order effect. The moral feeling attaches to the visible action, not the net outcome. Even if the policy is net positive, the advocate isn't reaching that conclusion through modeling; they're reaching it through moral intuition about the visible part.
This pattern appears in many feel-good policies: welfare programs, worker protections, education funding. The advocate is morally insulated from potential harms by causal complexity. They're not lying—they often can't see the connection. The complexity hides the second-order effects behind the moral feeling about first-order effects.
Complex systems diffuse responsibility so no one is accountable.
Structure: many actors make small decisions; the aggregate effect is bad; no single actor made "the bad decision"; "the system" gets blamed; and "the system" can't be held accountable.
Example: Coalition government. Five parties share power; policy fails; each says "we wanted something different but had to compromise"; voters can't assign blame.
This is proportional representation's core dysfunction: it's an accountability laundering machine. The system is designed to prevent any party from owning outcomes—and therefore from being punished for bad ones.
In 2024, a senior executive at Kone (one of Finland's largest corporations) was caught manipulating summer job applications to favor a family member. Internal investigation confirmed nepotism. The consequence: a verbal conversation. No warning. No record. The executive returned to work. (See Theatrical Accountability for the full pattern.)
The Finnish Bar Association resolved 594 cases in 2023. Two resulted in license revocation. Norway, with comparable population, revokes roughly 24 per year—12x the rate. Finnish lawyers aren't 12x more ethical. The system is designed to produce a different outcome.
This is accountability laundering in pure form: the ritual simulates the feeling of justice without the reality of consequence. The complexity of the disciplinary process—boards, hearings, appeals, procedures—hides the absence of actual accountability behind procedural sophistication.
The three core types (spatial, temporal, causal) are the fundamental mechanisms. All extended types reduce to one or more of these:
| Type | Core Mechanism | Description |
|---|---|---|
| Spatial | S | Costs happen elsewhere—other groups, regions, countries |
| Temporal | T | Costs happen later—next decade, generation, term |
| Causal | C | Costs are many steps removed—policy → X → Y → Z → harm |
Morality Laundering [C]: Complex causal chains let you feel moral while causing harm. I support policy X; X causes Y causes Z; Y and Z are invisible; I feel moral for supporting X; I'm insulated from moral responsibility for Z. The advocate is morally insulated from harm by the very complexity that causes it.
Accountability Laundering [S+C]: Complex systems diffuse responsibility so no one is accountable. Many actors make small decisions; aggregate effect is bad; no single actor made "the bad decision"; "the system" is blamed; but "the system" can't be held accountable. Coalition government is the pure form: five parties share power, policy fails, each says "we wanted something different but had to compromise," voters can't assign blame.
Illegibility Laundering [Cross-cutting]: Effects that can't be measured don't enter decision calculus. James Scott's "Seeing Like a State" shows how states can't see local knowledge. The flip side: citizens can't "see like an economy." Finland optimizes PISA scores (legible) while agency development (illegible) deteriorates. What you can't measure, you can't manage—or blame.
Expertise Laundering [C]: Experts CAN model complexity, but experts also have interests. Complexity prevents non-experts from checking experts. "Trust the experts" + experts benefit from X = X happens. Self-dealing laundered as expertise. The Ministry of Finance has genuine expertise AND institutional interests—citizens can't distinguish which drives policy.
Narrative Laundering [C]: Humans think in stories. Stories have protagonists and antagonists. Complex systems don't. A simple narrative ("the rich are taking everything") is imposed on complex reality. The narrative is wrong but feels true. Complexity is illegible; story is legible. Story dominates.
Selection Effects Laundering [T+C]: Systems select for certain types over time. Selection is invisible. Survivors assume merit. Brain drain: high-agency people leave. Remaining population is selected for low-agency. But they don't see themselves as "the ones who stayed"—they see themselves as "normal." The filtering is misattributed as meritocracy.
Feedback Destruction Laundering [T+C]: Complexity breaks action→result→learning loops. In simple systems: Action → Visible result → Learning. In complex systems: Action → ??? → Result (maybe, eventually, somewhere). By the time effects are visible, policy is forgotten or normalized. Can't learn from mistakes you can't attribute.
Preference Laundering [C]: Can't distinguish "I want X" from "X is good." Complexity prevents falsification. "I benefit from welfare state" + "Welfare state is good" are conflated. Whether it's actually good is complex. Belief is laundered preference, not independent judgment.
Variance Laundering [S+C]: Human variance is real (IQ, agency, time preference). Variance has consequences (different outcomes). Variance denial makes consequences illegible. Policies assume non-existent homogeneity. When outcomes differ, it must be "system failure" (not variance). Solution is always "more resources." Dysfunction compounds because the actual cause is invisible.
Trust Laundering [Cross-cutting]: Not all trust enables laundering equally. Active trust ("I trust but verify") maintains feedback. Dependency trust ("I trust because I can't evaluate") severs feedback entirely. The chain: High Trust → Low Audit → High Laundering Capacity → Structural Corruption. High-trust societies with dependency trust (Finland) are maximally vulnerable. Low-trust societies (Italy) are ironically protected—citizens assume the state is lying and check. This explains the "Finnish paradox": the "cleanest" country has massive hidden liabilities because no one audits.
Parasitic Empathy Laundering [C]: Policy fails → More misery → More empathy triggered → More resources to failed policy → Policy grows → More failure. Failure increases resources. Success decreases resources. System optimizes for failure. Complexity hides the feedback inversion—you see the empathy, not the perverse incentive.
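The inversion can be simulated in a few lines. The coefficients are invented; only the direction of the loop matters:

```python
# Feedback inversion: funding tracks visible misery, so failure is rewarded.
# Coefficients are invented; only the direction of the loop matters.

def run(program_effect: float, years: int = 5) -> None:
    """program_effect > 0: the program adds misery; < 0: it reduces misery."""
    misery = 100.0
    for year in range(1, years + 1):
        budget = 0.5 * misery            # empathy channel: more misery, more money
        misery *= (1 + program_effect)   # the program's actual effect
        print(f"  year {year}: budget {budget:6.1f}, misery now {misery:6.1f}")

print("Failing program (adds 10% misery/year): budget grows")
run(+0.10)
print("Working program (removes 10% misery/year): budget shrinks")
run(-0.10)
```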
Shared Blindness [C]: Classic principal-agent problems assume the agent CAN do the right thing but WON'T (misaligned incentives). This is the extension: even with aligned incentives, complexity prevents both principal and agent from knowing what actions produce desired outcomes. Politicians genuinely want good outcomes; bureaucrats genuinely try to deliver; neither can model correctly. The dysfunction isn't corruption—it's shared incapacity invisible because detecting it requires the modeling capacity both lack.
Semantic Laundering [C]: Words like "adequate," "sustainable," and "quality" are semantic voids—they sound meaningful but have no operational definition. "We must ensure adequate healthcare" → What is "adequate"? Undefined. The void is where laundering hides. Tradeoffs disappear into undefined terms. Everyone agrees on the word, no one agrees on the meaning.
Algorithmic Laundering [C]: Policy logic encoded in proprietary or unintelligible algorithms. "The computer decided." No one can audit the calculation. The complexity of the code launders administrative failure, bias, or error. Example: automated welfare denial systems where the bureaucrat says "the system calculated it" but no human understands the model. Black-box AI is the modern accountability escape.
Crisis Laundering [T+C]: Declaring emergency to suspend normal cost-benefit analysis. "Necessity knows no law." The urgency launders the lack of scrutiny. Invisible costs: permanent expansion of state power, massive waste, suspension of rights. Example: COVID procurement—billions spent without tender because "it's an emergency." The panic button bypasses the modeling requirement entirely.
Consensus Laundering [C]: Manufacturing appearance of expert unanimity to silence debate. "All serious people agree." "There is no alternative" (TINA). Political choices laundered as technical necessity. The complexity of the domain is weaponized to claim only one path is possible. Invisible cost: suppression of valid dissent, groupthink, fragility. Example: Euro crisis management framed as having no alternative when alternatives existed but were politically inconvenient.
All forms of complexity laundering interact and reinforce each other:
```
Reality (complex, high-dimensional)
        ↓ cognitive limits: can't model full reality
Simplification (necessary but lossy)
        ↓ systematic errors: visible/present/simple favored
Policies (optimize legible, ignore illegible)
        ↓ invisible costs accumulate in background
Compound dysfunction (effects interact)
        ↓ delayed collapse when invisible becomes visible
"Unexpected" crisis (was actually predictable)
```
The pattern repeats because each iteration increases complexity, which increases laundering, which increases invisible costs, which increases eventual collapse magnitude. It's a positive feedback loop toward catastrophe.
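A toy version of the loop. The growth rate and threshold are arbitrary; the shape is the point: quiet accumulation, then a discontinuous "surprise":

```python
# Invisible costs compound quietly until they cross a visibility threshold.
# Growth rate and threshold are arbitrary illustration parameters.

complexity, invisible_cost = 1.0, 0.0
THRESHOLD = 50.0  # the point where hidden rot becomes undeniable

for cycle in range(1, 21):
    complexity *= 1.15             # each iteration adds complexity
    invisible_cost += complexity   # more complexity, more laundered cost
    if invisible_cost > THRESHOLD:
        print(f"cycle {cycle:2d}: 'unexpected' crisis, magnitude {invisible_cost:.0f}")
        break
    print(f"cycle {cycle:2d}: all fine (hidden cost {invisible_cost:5.1f})")
```

Every cycle before the break prints "all fine." That is the visible record the system produces right up to the crisis.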
This is Moloch operating through epistemic channels. The selection pressure (compete for votes) + metric drift (optimize visible metrics, ignore invisible costs) → stable dysfunction (policies that feel good and cause harm). Complexity laundering is how Moloch achieves coordination failures without anyone intending them.
This is one mechanism behind the "everything seems fine until it isn't" pattern—Rome, the USSR, many financial crises. Complexity laundering hides the rot. By the time it's visible, it's often terminal.
Not all complexity laundering is accidental. Actors deliberately create complexity to launder their actions: financial products designed to obscure risk, regulations written to benefit insiders who can navigate them, legal structures that require expensive expertise to decode, bureaucratic processes that serve as job security for the bureaucracy.
Strategic complexity-creation is distinct from emergent complexity—and harder to fix, because the complexity serves someone's interest. The beneficiaries will defend it. They'll frame simplification as "naive" or "dangerous." The complexity becomes load-bearing for their position, so they have strong incentives to maintain it.
The fundamental equation: System Complexity > Aggregate Modeling Capacity → Laundering Possible
There are three ways to fix this inequality. Most reform proposals only address one. A complete solution requires all three.
Fix 1: Reduce complexity. Make systems simple enough to model.
Who fights this: Central governments, those who benefit from scale and complexity, the compliance industry, anyone whose power depends on being the only one who knows the maze.
Fix 2: Increase modeling capacity. Make the collective smarter at modeling complexity.
Who fights this: Egalitarian ideology (any selection is "elitist"), current representatives (delegation threatens their role), those whose power depends on citizen ignorance, those who would be revealed as wrong by prediction markets.
Fix 3: Build for cognitive limits. Design systems that work despite them.
Force definitions. Pop the semantic voids.
| Laundered | Deflated |
|---|---|
| "Adequate healthcare" | "Max wait time < 14 days, cost cap < X% GDP" |
| "Sustainable fiscal policy" | "Debt/GDP ratio < Y%, declining trajectory" |
| "Quality education" | "Graduate employment rate > Z% within 2 years" |
The void is where laundering hides. Force definitions and the tradeoffs become visible.
Bring consequences inside the political cycle.
If feedback takes 30 years, it won't happen. Design systems where feedback takes 3 years.
Shift from "did we follow the right process?" to "did we get the right outcome?"
Current system: Voters evaluate proposals (impossible—too complex). Target system: Voters evaluate outcomes (possible—did the promise get kept?).
Example: Instead of "trust us, the healthcare model works," require: "Healthcare within 14 days. If we fail, patient compensated €X per day automatically."
Citizens don't need to understand how the system achieves goals. They just need to verify: Did it achieve them? Y/N.
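The verification step can literally be a function. A sketch of the hypothetical guarantee above (the 14-day limit is the essay's example; the EUR 50 daily compensation is an invented stand-in for "EUR X"):

```python
# Output-legitimacy check: citizens verify the outcome, not the mechanism.
MAX_WAIT_DAYS = 14           # the promise from the example above
DAILY_COMPENSATION_EUR = 50  # invented stand-in for "EUR X per day"

def settle(actual_wait_days: int) -> tuple[bool, int]:
    """Return (promise_kept, compensation_owed). Nobody needs to model how
    the healthcare system works, only whether the promise held."""
    overrun = max(actual_wait_days - MAX_WAIT_DAYS, 0)
    return overrun == 0, overrun * DAILY_COMPENSATION_EUR

for wait in (10, 14, 21):
    kept, owed = settle(wait)
    print(f"waited {wait} days: kept={kept}, compensation=EUR {owed}")
```

The design choice is that compensation is automatic: no hearing, no discretion, no procedural layer where accountability can be laundered away.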
The Goodhart caveat: Output metrics can themselves be gamed. The system may hit the metric while missing the goal (14-day wait times achieved by redefining "wait" or triaging away complex cases). There's no final escape from the complexity problem—only better and worse architectures. Output legitimacy is better than input legitimacy, but not perfect.
For decisions beyond the Laundering Horizon, remove democratic discretion entirely. Examples: debt brakes (Switzerland's constitutional spending limit passed with 85% referendum support), independent institutions with long tenure (central banks, supreme courts), and algorithmic rules that can't be evaded by politicians.
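A simplified rendering of the Swiss-style rule: spending is capped at projected revenue scaled by a cyclical factor, so surpluses are forced in booms. This follows the commonly described design; the statutory formula has more detail:

```python
# Simplified Swiss-style debt brake: the spending ceiling is projected
# revenue scaled by a cyclical factor, forcing surpluses in booms.
# A sketch of the commonly described design, not the statutory formula.

def spending_ceiling(projected_revenue: float,
                     trend_gdp: float,
                     expected_gdp: float) -> float:
    k = trend_gdp / expected_gdp  # cyclical factor: <1 in a boom, >1 in a recession
    return projected_revenue * k

# Boom (GDP above trend): must spend less than revenue.
print(spending_ceiling(100.0, trend_gdp=100.0, expected_gdp=104.0))  # ~96.2
# Recession (GDP below trend): may spend more than revenue.
print(spending_ceiling(100.0, trend_gdp=100.0, expected_gdp=97.0))   # ~103.1
```

No election-year judgment enters the calculation; that is the entire point of an algorithmic rule.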
Democracy cannot process signals from beyond the Laundering Horizon. Pretending it can is itself a form of laundering.
Let people vote with their feet.
Exit provides feedback without requiring modeling. You don't need to understand WHY a place is failing—you just need to observe that it is and leave. This is why mobility is essential for good governance.
The complete solution: Any one approach is insufficient. You need all three: reduced complexity (so systems can be modeled), increased modeling capacity (so someone actually models them), and architecture that works despite cognitive limits (so unmodeled failures don't compound).
Current reform proposals typically address only one dimension. That's why they fail.
The standard response: "If we explain complexity laundering clearly enough, people will demand better policies."
This is itself simulated metamorphosis—the feeling of changing things as the mechanism by which things stay the same. Reading this essay, understanding the concept, feeling informed: this is the pressure valve that prevents structural change. You feel like something happened. Nothing happened.
Education campaigns, "awareness," civic participation—these are the simulation layer. They let participants feel engaged while the actual architecture remains untouched. The citizen who understands complexity laundering and votes accordingly is still feeding input into a system that cannot process signals beyond the Laundering Horizon.
The real question isn't "how do we explain this better?" It's "how do we build architecture where understanding is required at the design layer, not the operational layer?" Someone must understand—but not everyone, and not for every decision.
The required function is a Fourth Branch: institutional architecture that continuously asks "Does this institution produce its stated outcome?" Not process compliance. Not activity metrics. Actual outcomes compared to stated purposes.
This is the "Department of Aliveness" concept: constitutional protection for the mechanism audit function. When a pension system diverges from sustainability, when a welfare system creates dependency, when a disciplinary process produces no discipline—something must detect and flag the divergence before it compounds.
The bootstrapping problem: Who designs architecture that requires understanding at the design layer? The Fourth Branch itself must be designed by people who understand the problem it solves. Constitutional moments—rare windows where new institutional architecture becomes possible—are the historical answer. The question is whether such moments can be deliberately created or only exploited when crisis makes them available.
The function exists nowhere in current architecture. Building it is the prerequisite for everything else.
The system isn't broken by bad actors. It's broken by complexity exceeding cognitive and institutional capacity to process it. This is actually hopeful: we're not fighting malice (hard to change) or stupidity (can't change), but structural mismatch (can partially fix). Build architecture that works without requiring comprehensive modeling.
Complexity laundering is one instance of a larger pattern: the strategic utilization of asymmetry. Any situation where agents leverage information gaps, cognitive limits, or structural opacity to sever authority from accountability.
Related phenomena that don't fit the "complexity" framing exactly include: information laundering (legitimizing disreputable sources through citation chains), narrative laundering (manufacturing consensus through repetition), preference falsification (Kuran's "private truths, public lies"), and pure information asymmetry exploitation (classic adverse selection). These share the asymmetry-exploitation structure but operate through different mechanisms than causal complexity specifically.
The common thread: asymmetry enables accountability escape. Complexity laundering is the variant where the asymmetry is cognitive—the gap between system complexity and modeling capacity. Other variants exploit different asymmetries (information access, social pressure, temporal position). The solution pattern is also shared: structural recoupling of decisions to consequences, whether through architectural design, commitment devices, or feedback-forcing mechanisms.
| Type | Asymmetry Exploited | Mechanism | Solution Direction |
|---|---|---|---|
| Complexity Laundering | Cognitive capacity | System complexity exceeds modeling capacity; costs hidden in causal chains | Simplification, delegation, output metrics |
| Information Laundering | Source verification cost | Disreputable claims legitimized through citation chains until origin forgotten | Provenance tracking, source transparency |
| Narrative Laundering | Attention/repetition | Repetition manufactures consensus; "everyone knows" without anyone checking | Adversarial verification, prediction markets |
| Preference Falsification | Social punishment cost | True preferences hidden; public consensus masks private dissent (Kuran) | Anonymous aggregation, revealed preference mechanisms |
| Adverse Selection | Information access | One party knows more; exploits ignorance of counterparty | Disclosure requirements, signaling mechanisms |
| Fiscal Illusion | Tax visibility | Indirect taxes, withholding, debt hide true burden (Puviani/Buchanan) | Tax consolidation, explicit cost statements |
| Rational Ignorance | Information acquisition cost | Cost of learning exceeds benefit of single vote; ignorance is rational (Downs) | Delegation, liquid democracy, skin-in-game |
| Concentrated Benefits/Dispersed Costs | Organization cost | Beneficiaries organize; victims don't (Olson) | Class actions, automatic standing, advocacy defaults |
| Bootleggers & Baptists | Motive visibility | Rent-seeking hidden behind moral coalition (Yandle) | Cui bono analysis, interest disclosure |
| Agnotology | Doubt production cost | Manufacturing uncertainty cheaper than proving certainty (Proctor) | Burden of proof assignment, prediction markets |
| Organized Irresponsibility | Causal attribution | Fragmented decisions prevent liability assignment (Beck) | Clear ownership, decision logging, automatic triggers |
| Structural Secrecy | Organizational hierarchy | Information segregated by structure; decision-makers don't know (Vaughan) | Flat reporting, whistleblower protection, red teams |
| Blame Avoidance | Procedural complexity | Protocolization and ambiguity deflect liability (Hood) | Output accountability, automatic consequences |
| Strategic Complexity | Expertise barrier | Deliberate opacity to prevent evaluation (financial products) | Mandatory simplification, approval regimes |
| Temporal Displacement | Discount rate mismatch | Benefits now, costs later; voters who approve ≠ voters who pay | Constitutional constraints, intergenerational representation |
| Sanctioning Asymmetry | Enforcement cost | Cheap to break rules, expensive to enforce them (petty crime, spam, regulatory violations) | Automated enforcement, deposit systems, bonds |
| Grievance Asymmetry | Voice intensity | Angry minorities loud; satisfied majorities silent. Policy skews to squeaky wheels | Random sampling (citizen lottery), silent majority polling |
| Regulatory Arbitrage | Jurisdictional boundaries | Moving activity to where rules don't apply (tax havens, carbon leakage) | Border adjustments, global minimums, harmonization |
| Build vs Maintain | Visibility gap | Building new is visible (ribbon cutting); maintaining old is invisible | Depreciation accounting, "Maintenance First" laws |
| Metric Hacking (Goodhart) | Proxy-goal gap | Optimizing the measure (test score) destroys the goal (education) | Paired metrics, adversarial audits, outcome sampling |
| Algorithmic Opacity | Code intelligibility | "The computer decided." Black-box models launder bias and error | Explainability requirements, algorithmic audits |
| Crisis Exception | Urgency | Emergency suspends normal scrutiny; "necessity knows no law" | Sunset clauses, mandatory post-crisis review |
Pattern: All exploit gaps between authority and accountability. Authority is exercised now, here, by identifiable actors. Accountability requires tracing consequences through space, time, causation, or social structure. Any asymmetry in tracing capacity can be exploited.
Meta-solution: Structural recoupling—architecture that reconnects decisions to consequences regardless of cognitive limits. Commitment devices, automatic triggers, exit rights, prediction markets, output legitimacy.
Key Takeaways

- Complexity laundering hides costs through three mechanisms: spatial (elsewhere), temporal (later), and causal (many steps removed). Benefits are built to be visible; costs are structurally invisible.
- The failure is structural, not moral: system complexity exceeds aggregate modeling capacity, so harm stays hidden while intent stays visible.
- Electoral accountability cannot process any policy whose feedback loop is longer than the political cycle; beyond that Laundering Horizon, only commitment devices work.
- No single reform suffices. You need reduced complexity, increased modeling capacity, and architecture that recouples decisions to consequences.
This framework synthesizes several existing research traditions: public choice economics (Downs, Olson, Buchanan, Caplan), empirical critiques of democratic theory (Condorcet, Arrow, Achen & Bartels), organizational sociology and risk theory (Beck, Vaughan, Hood), legibility and state capacity (Scott), preference falsification (Kuran), and agnotology (Proctor).
The contribution here is synthesis: these phenomena share a common structure (complexity exceeding modeling capacity enables cost-hiding) and a common solution space (architectural recoupling of decisions to consequences). The prior literature tends to treat these as separate problems in separate domains.
This essay draws from the diagnostic framework developed in Aliveness, applying thermodynamic epistemology to the question of why modern governance systematically fails.
Related reading: