The 99.999% You Had to Throw Away
Levels of description, macroscopic concepts, and even categories like "agent," "purpose," "pressure," "self," and "meaning" are not separate layers of reality. They are computational compressions forced upon subsystems embedded in the world by the constraints of representation, memory, computation, and control.
The world does not come pre-divided into physics, thermodynamics, biology, psychology, and economics. There is one physical dynamics. What appears to us as different layers is a consequence of the fact that agents internal to a system cannot represent and exploit its full microstructure. They are forced to coarse-grain: to sacrifice nearly all available information in order to preserve the small amount that can be computed on and acted upon.
It follows that a level of description is not "the truth," nor is it "the best compression" in any absolute sense. It is a tradeoff between the structure of the world and the computational limitations of the agent. A description is good only relative to a particular entity, with a particular computational budget, particular goals, and particular control capacity. For a more powerful entity, the same description may be too coarse; for a weaker one, too rich. "Pressure," for example, is an excellent variable for a system that cannot track 10²³ molecules, but for an ideal Maxwell's demon it is a terrible description, because it throws away precisely the microscopic information that could enable work extraction.
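The pressure example can be made concrete with a toy sketch. This is illustrative only: the squared-Gaussian "speeds" and the particle count are stand-ins, not a real Maxwell-Boltzmann simulation. The point is the asymmetry in what each observer stores and what each can therefore act on:

```python
import random

random.seed(0)

# Toy gas: N particle "speeds" (squared-Gaussian stand-in, not a
# physical distribution).
N = 100_000
speeds = [random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 for _ in range(N)]

# Coarse-grained observer: keeps ONE number (a temperature proxy),
# discarding N - 1 degrees of freedom.
temperature = sum(speeds) / N

# Demon-like observer: keeps the full microstate and can act on it,
# e.g. selecting the fast half of the particles -- a sorting operation
# that "temperature" makes invisible by construction.
fast_half = sorted(speeds)[N // 2:]
hot_temperature = sum(fast_half) / len(fast_half)

print(f"coarse variable (temperature proxy): {temperature:.3f}")
print(f"demon's hot partition:               {hot_temperature:.3f}")
print(f"numbers stored by coarse observer:   1")
print(f"numbers stored by demon:             {N}")
```

The coarse observer pays almost nothing in representation and, in exchange, loses exactly the fast/slow distinction the demon exploits.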
What Is a High-Level Description?
Under this thesis, a macroscopic description is a viable compression policy. It does not preserve all distinctions — only those worth the cost of representation and control for an embedded agent. Therefore:
- Object
- Not a metaphysical entity, but a cluster of properties that are worth tracking together — where the optimal ratio of representation cost to predictive power is achieved by bundling them.
- Causality
- Not a mysterious structure separate from physics, but a compression of counterfactual dependencies relevant to control. The compressed control language that becomes available once physical processes (such as decoherence) render certain distinctions stable and independently manipulable.
- Self
- A compressed boundary that allows a system to maintain control loops over part of itself against the environment. The set of variables on which it is most cost-effective — computationally and thermodynamically — to run continuous error-correction loops as a single unit.
- Will
- A compact description of dynamics that corrects errors toward a particular region of state space.
- Meaning
- Coupling relevance. Information is "meaningful" only if it can be integrated into the agent's control loop to reduce uncertainty without exceeding the representation budget. Simply: relevance for a bounded entity trying to exploit resources and preserve itself.
- Intelligence
- The ability to discover levels of description (coarse-grainings) that maximize prediction and control under hard resource constraints. Not "seeing more than everyone" — but knowing what to give up so that something remains actionable at all.
None of these are "less real" than microphysics. But neither are they separate layers. They are what physical dynamics looks like from within computational constraints.
The Deep Innovation: Compression Is Not Just Epistemic Convenience
The strongest point of this thesis is that compression is not merely a "convenient" way to describe the world. It is a physical necessity for an internal subsystem within a closed system.
If a closed system is to "exploit parts of itself," one cannot assume an external omniscient observer in the style of Maxwell's demon. Any mechanism that measures, remembers, computes, and controls is itself part of the system, and therefore has a physical cost, a memory limit, and a computational bound. Because of this, an internal subsystem cannot operate on a full description of itself and its environment: if it tried, the cost of representation, storage, and control would destroy any possible gain.
Therefore, coarse-graining is the physical precondition for performing internal work. Only sufficiently compressed representations can be integrated into control loops cheap enough to exploit structure in the world. The conclusion is that compression is not merely information loss; it is the mechanism that enables one subsystem to stand in a relationship of effective control over another.
Put differently: an embedded system cannot exploit the full microstructure of itself. To extract work, it must sacrifice most of the information and retain only distinctions that can be represented, updated, and coupled to control at sufficiently low cost.
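This accounting can be written as a schematic inequality in the Szilard/Landauer tradition. The mutual-information bound on extractable work is standard; the cost terms C_meas and C_erase are labels introduced here for the agent's internal measurement and memory-reset costs, not standard notation:

```latex
W_{\text{net}} \;\le\;
  \underbrace{k_B T \ln 2 \cdot I(M;S)}_{\text{work bought by correlation}}
  \;-\;
  \underbrace{C_{\text{meas}} + C_{\text{erase}}}_{\text{paid inside the same system}}
```

Here I(M;S) is the mutual information between the agent's memory M and the controlled system S. Extraction is profitable only when the retained distinctions buy more work than their representation costs: exactly the compression condition described above.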
Where Thermodynamic and Informational Entropy Meet
This is where the connection emerges: for an embedded agent, there is a deep link between thermodynamic entropy and informational entropy. The claim is not that they are "the same thing" in a trivial sense, but that both mark the same effective boundary of inexploitability.
If physical structure is too diffuse, too chaotic, or too close to equilibrium, it provides no ordered gradient from which work can be extracted. That is thermodynamic entropy in the operative sense. If the same structure is also too rich, too fine-grained, or too expensive to represent, then it cannot be encoded, updated, and used for control at reasonable cost. That is informational entropy in the operative sense.
What is too entropic to compress is also too entropic to control. Or put another way: thermodynamic inexploitability and informational inexploitability are two manifestations of the same structural failure — the inability of an embedded subsystem to create a coupling between distinction, representation, and control that is both stable and cheap enough.
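A minimal way to see the informational side of this boundary is to count the bits a coarse-graining sacrifices. This is a toy 16-state example; the connection to the thermodynamic side is Landauer's principle, under which each sacrificed bit corresponds to k_B T ln 2 of work the coarse observer can no longer address:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Fine-grained state space: 16 microstates, uniform (maximally diffuse).
micro = [1 / 16] * 16

# Coarse-graining: lump the microstates into 2 macrostates of 8 each.
macro = [sum(micro[:8]), sum(micro[8:])]

h_micro = shannon_entropy(micro)   # 4 bits: cost of tracking the microstate
h_macro = shannon_entropy(macro)   # 1 bit:  cost of tracking the macrostate

print(f"bits to represent microstate:       {h_micro:.1f}")
print(f"bits to represent macrostate:       {h_macro:.1f}")
print(f"bits sacrificed by coarse-graining: {h_micro - h_macro:.1f}")
```

The 3 sacrificed bits are the distinctions that are simultaneously too expensive to represent (informational entropy) and unavailable as a gradient to act on (thermodynamic entropy, in the operative sense above).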
It also follows that free energy is not merely an "objective" property of the world, but a property of the world relative to the agent's representation and control capacity. What looks like lost entropy to a weak agent can be an exploitable resource for a stronger one. The availability of work depends on the observer's compressed ontology, not just on the world "as such."
Every Level of Description Is a Massive Sacrifice
Another important formulation: every ontology is a form of deliberate computational self-impoverishment. The agent gives up the ability to exploit 99.999% of possible resources in order to be able to reliably exploit 0.001% of them.
Macroscopic concepts are not a sign of total cognitive success — on the contrary, they are born from the fact that the entity cannot afford to process the entire world. Intelligence, in this sense, is not "seeing more than everyone" but knowing what to give up so that something remains actionable.
From here we get a picture where different entities effectively live in different "worlds" of resources:
- A weak entity
- Sees only very coarse variables. Many fine-grained structures are invisible to it.
- A stronger entity
- Can decompose some of the macro and exploit finer structures that the weak entity cannot access.
- An ideal Maxwell's demon
- Would live in an entirely different ontology — one where "pressure" and "temperature" are wasteful compressions.
- A future super-agent
- Might not need the same psychological, economic, or even physical categories that we require.
Complexity, agency, and even regularity itself are, in a certain sense, properties of the world-relative-to-the-computational-constraints-of-the-observer.
Psychology as Physics of Embedded Systems
The thesis takes one more step: if every level of description is a functional compression for the purposes of prediction and control, then the separation between physics and psychology begins to dissolve. Not because "psychology isn't real," but because it may simply be the physics of systems with memory, an internal model, control loops, a self-boundary, and computational constraints.
- Beliefs
- Compressed internal states that direct action.
- Goals
- Stable biases over future dynamics.
- Identity
- A compression schema of a preserved boundary.
- Value
- What remains relevant after compression under control constraints.
Psychology as a whole, in this framework, is a compressed protocol that allows local physics to control other physics. It is not that "physics will expand" in a simplistic sense, but that mental, biological, and social concepts may turn out to be effective observables of complex physical systems under computational constraints. They resemble temperature and pressure more than soul or separate essence.
The Ontological Rounding Error
Because of the necessity to discard information, every compressed model leaves behind a dynamic residual. The neglected micro-dynamics do not disappear — they continue to evolve beneath the radar of the macroscopic ontology.
Acting in the world therefore entails a necessary ontological rounding error. The accumulation of these residuals is the primary engine that drives systems toward learning (when the error is detected and the ontology is updated), ontological change (when the old compression policy becomes untenable), or collapse (when the residual overwhelms the model entirely).
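The learning branch of this trichotomy can be sketched as a toy control loop. Everything here is an illustrative assumption: the drifting signal, the residual threshold, and the "snap to current observation" update rule stand in for ontology revision in general, not for any particular system:

```python
import random

random.seed(1)

def track(signal, threshold=1.0):
    """Toy embedded agent: models a drifting signal with one coarse
    parameter. The neglected dynamics (the drift) accumulate as
    residual error until the model is forced to update."""
    estimate = signal[0]
    residual = 0.0
    updates = 0
    for x in signal[1:]:
        error = x - estimate
        residual += abs(error)      # neglected micro-dynamics pile up
        if residual > threshold:    # "learning": the ontology is revised
            estimate = x
            residual = 0.0
            updates += 1
    return estimate, updates

# A world whose fine structure drifts beneath the model's radar.
signal = [0.01 * t + random.gauss(0, 0.05) for t in range(200)]
estimate, updates = track(signal)
print(f"final estimate: {estimate:.2f}, model updates: {updates}")
```

Lowering the threshold gives a more vigilant (and more expensive) learner; raising it past the residual's growth rate is the collapse regime, where the model is never revised and the error is unbounded.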
The Core, Compressed
- The world is not truly divided into layers; layers are compressions of a single physical dynamics under computational constraints.
- A level of description is not "the truth" but a compression policy that survived for a resource-limited agent.
- An embedded subsystem in a closed system cannot exploit itself through a full description; coarse-graining is the physical precondition for performing internal work.
- Thermodynamic and informational entropy meet at exactly the boundary where structure becomes too fine, too rich, or too expensive to represent and control.
- What is too entropic to compress is also too entropic to exploit.
- Psychology, agency, purpose, and meaning are macroscopic descriptions of embedded physics under computational and control constraints.
The power of this thesis is that it connects several fields that usually remain separate — thermodynamics, information theory, control theory, bounded rationality, agency, and philosophy of levels of description — and gives them a unifying principle: compression is not just a way to think about the world; it is the only way that one part of the world can exploit another part without collapsing under the cost of representation itself.