Is a New Law Needed to Explain the World’s Complexity
As suggested by some authors and Quanta?
The way Mr. Philip Ball, the science writer behind this Quanta article, enthusiastically embraces, endorses, and portrays other people’s research keeps amusing me. Every new controversial theory he covers for Quanta, and that then gets parroted by Wired, is presented as the theory that will revolutionise science and explain life, time, and everything else, presumably for good this time, unlike the last one. This is a travesty of science because it suggests everything and nothing. Science should not be promoted like a commercial product to be sold.
Philip Ball also keeps promoting what one can presume are his favourite theories, such as Assembly Theory (which has been proven false), and his favourite authors, such as Cronin, Walker, and Davies, over and over again, without allowing any digression or perspective from other researchers. Here is an independent view of this new theory and its limitations, the kind of view Mr. Ball does not seek from anyone outside the biased circle he has chosen; instead he simply and enthusiastically covers these topics without any critical thinking, true neutrality, or proper literature review to seek balance.
In addition to other criticisms already made, which suggest that this new ‘law’ describes rather than explains or predicts anything, here is a list of issues I see from my own area of expertise:
1. Conceptual Vagueness and Misapplication of Randomness
Wong and Hazen et al. posit that as systems evolve under selection pressures, they exhibit increased functional information. This is illustrated through probabilistic sampling from configuration spaces, under the assumption that randomness supplies the combinatorial raw material from which selection extracts functionality. However, this use of “randomness” is imprecise, rooted in informal or Shannon-type statistical entropy. It fails to engage with the formal, algorithmic definition of randomness, under which a string or configuration is random if it is incompressible, i.e., if it lacks any shorter generative description. The result is a conflation of entropy, stochasticity, and algorithmic structure. This confusion is not trivial. Algorithmic randomness (as developed by Kolmogorov, Chaitin, and Solomonoff) provides a model-independent standard for randomness, essential for comparing systems across domains and scales, even though Philip Ball cites another author claiming that Kolmogorov complexity is unable to explain how molecules form, while clearly ignoring Algorithmic Probability. Without this grounding, any probabilistic claim about the emergence or frequency of configurations risks circularity or anthropic tautology, just as Assembly Theory did.
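To make the distinction concrete, here is a minimal Python sketch (an illustration of mine, not a substitute for the formal theory) that uses compressed length as a computable upper bound on algorithmic information content: a configuration is algorithmically random roughly to the extent that no lossless encoding shortens its description.

```python
# Minimal sketch: compressed length as a computable upper bound on
# algorithmic (Kolmogorov) complexity. A sequence is algorithmically
# random roughly when no lossless encoding shortens it.
import random
import zlib

def description_length(data: bytes) -> int:
    """Upper bound (in bytes) on the algorithmic information content of `data`."""
    return len(zlib.compress(data, 9))

structured = b"01" * 500                                   # generated by a tiny program: repeat "01" 500 times
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(1000))  # no exploitable structure (with high probability)

print(description_length(structured))  # far below the raw 1,000 bytes
print(description_length(noise))       # around the raw length, or slightly above, due to overhead
```

A general-purpose compressor only gives a loose upper bound on Kolmogorov complexity, but even this crude proxy separates configurations with short generative descriptions from incompressible ones, which is exactly the distinction a purely statistical notion of randomness misses.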
2. Algorithmic Probability Already Explains Emergence of Functional Structures
Wong and Hazen’s claim that selection over random ensembles leads to increasing functional information amounts to a restatement of what algorithmic probability (Levin’s Universal Distribution) already predicts: structures with lower Kolmogorov complexity are more likely to occur because they are generated by shorter programs. In other words, functional, modular, and simple structures are biased to appear more frequently than arbitrary ones, not because of any new law but because of the intrinsic properties of computation. These claims have been formalised and demonstrated, empirically and theoretically, in our paper “Algorithmically Probable Mutations Reproduce Aspects of Evolution, such as Convergence Rate, Genetic Memory, and Modularity”, Royal Society Open Science, 5:180399, 2018, which shows how modularity, reusability, and simplicity bias naturally arise from algorithmic probability and can be quantified in real-world systems. Algorithmic Probability (AP) predicts the following foundational phenomena, each of which is relevant to the emergence of biological or physical complexity (see the toy sketch after this list):
Modularity: Structures composed of smaller, reusable units (modules) have shorter descriptions and thus higher probability. Modular systems can be generated from compact programs using repetition, recursion, or symmetry, all of which are favoured by AP.
Bias toward simplicity: AP inherently biases generation toward simple patterns over complex ones, in accordance with Occam’s Razor. This explains the prevalence of regular, compressible structures across nature without invoking new selective laws.
Causal and generative sufficiency, and completeness: AP is closely tied to the likelihood of a system arising from a small set of generative rules. Systems with low algorithmic complexity have higher causal probability under the algorithmic prior. AP also guarantees completeness, meaning that any meaningful pattern (for causal characterisation, for example) will be characterised by AP, something that neither Shannon entropy nor either of these proposals (Hazen’s law of increasing functional information or Assembly Theory) can guarantee.
Robustness and reusability: Simpler generating mechanisms are not only more likely but also more robust to perturbation and easier to recombine, which aligns with observed evolutionary principles.
Pathway asymmetry: The AP framework naturally favours asymmetry in path dependency: fewer steps or instructions are needed to construct certain structures than others. This explains why some complex configurations emerge far more frequently than others, even when their Shannon entropy is identical.

Taken together, this implies that the rise of biological function, chemical self-organisation, or mineral diversity does not require a new law complementary to thermodynamics. It is a computational consequence of algorithmic distributions that already account for the functional bias toward modularity, reusability, and adaptability.
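The coding-theorem intuition behind these points can be illustrated with a deliberately small enumeration. The instruction set below is a toy of my own invention, not a universal machine (Levin’s distribution is defined over a universal prefix machine and is only semi-computable), but it shows the mechanism at work: when every program p is weighted by 2^(-|p|), outputs reachable through short, modular programs (repetition, duplication) accumulate far more probability mass than arbitrary strings of the same length.

```python
# Toy illustration of the coding-theorem intuition behind Algorithmic
# Probability: weight every program p by 2^(-|p|) and sum the weights of
# all programs producing a given output. The instruction set below is an
# invented, non-universal toy, used only to show the weighting at work.
from collections import defaultdict
from itertools import product

def run(program: str, max_len: int = 16) -> str:
    """Interpret 2-bit opcodes: 00 append 'A', 01 append 'B', 10 double the output, 11 halt."""
    out = ""
    for i in range(0, len(program) - 1, 2):
        op = program[i:i + 2]
        if op == "00":
            out += "A"
        elif op == "01":
            out += "B"
        elif op == "10":
            out += out              # duplication: a cheap source of repetition/modularity
        else:                       # "11"
            break
        if len(out) > max_len:
            return ""               # discard runaway outputs
    return out

mass = defaultdict(float)
for n_bits in range(2, 13, 2):                      # enumerate every program of up to 12 bits
    for bits in product("01", repeat=n_bits):
        out = run("".join(bits))
        if out:
            mass[out] += 2.0 ** -n_bits             # shorter programs contribute exponentially more

# The modular string 'AAAA' (buildable as: append A, double, double) receives
# far more probability mass than the same-length arbitrary string 'ABBA',
# which can only be spelled out literally in this toy language.
print(mass["AAAA"], mass["ABBA"])
```

In the universal setting, the same weighting yields Levin’s semi-measure, with -log2 m(x) equal to the Kolmogorov complexity K(x) up to an additive constant, which is precisely the simplicity bias discussed in the Royal Society Open Science paper cited above.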
3. The Vagueness of “Functional Information” and Its Contextual Relativity
Wong and Hazen define “functional information” as the negative log of the fraction of configurations achieving a function above some performance threshold. While intuitive, this concept is observer-relative and heavily dependent on context, resolution, and utility. In contrast, algorithmic complexity provides an observer-independent, universal measure of the information content of a configuration, agnostic to function or purpose. Moreover, equating rarity in configuration space with functionality confuses utility with descriptional economy. Many structures are rare yet trivial or non-functional; conversely, some ubiquitous structures (e.g., Fibonacci spirals, crystal lattices) are both highly compressible and highly functional. Algorithmic Information Theory (AIT) captures this duality with precision.
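For concreteness, here is a toy computation of the quantity they define, I(E) = -log2 F(E), where F(E) is the fraction of configurations whose performance meets or exceeds the threshold E. The performance function below is an arbitrary choice of mine, which is exactly the point about observer-relativity: choose a different function or threshold over the same configuration space and the number changes.

```python
# Toy computation of Hazen-style functional information,
# I(E) = -log2( fraction of configurations with performance >= E ).
# The performance measure here is an arbitrary, observer-chosen assumption.
import math
from itertools import product

def performance(config: tuple) -> int:
    """An observer-chosen utility: here, simply the number of 1s in the configuration."""
    return sum(config)

n = 12                                            # 2**12 = 4096 binary configurations
space = list(product((0, 1), repeat=n))
for threshold in (6, 9, 12):
    frac = sum(performance(c) >= threshold for c in space) / len(space)
    print(threshold, -math.log2(frac))            # the rarer the "function", the higher the functional information
```

Nothing in the resulting value reflects the intrinsic descriptional content of any individual configuration; it only measures how the observer’s chosen yardstick partitions the space.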
4. Selection and Complexity Through a Computational Lens
AIT reframes natural selection as a computational filter that favours low-complexity generators with high utility. From this view, evolution is not just a trajectory in fitness space but also a search process over program space. Selection acts not only on phenotypes but also on the short programs that describe those phenotypes compactly and robustly.
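A minimal sketch of this view, assuming an MDL-style scoring of my own choosing rather than any published model: candidates are rewarded for utility and penalised for the length of their (approximate) shortest description, so that compact, modular generators of useful structure outrank verbose ones.

```python
# Schematic of "selection as a computational filter": candidates are scored
# by utility while being penalised for description length (an MDL-style
# trade-off). The fitness function and penalty weight are illustrative
# assumptions, not a reconstruction of Wong and Hazen's model.
import random
import zlib

def description_length(genome: str) -> int:
    """Compression-based proxy for the length of the genome's shortest description."""
    return len(zlib.compress(genome.encode()))

def utility(genome: str) -> float:
    """Illustrative target: reward genomes rich in the motif 'GA'."""
    return genome.count("GA")

def score(genome: str, alpha: float = 0.5) -> float:
    return utility(genome) - alpha * description_length(genome)

random.seed(1)
population = ["".join(random.choice("ACGT") for _ in range(64)) for _ in range(200)]
population.append("GA" * 32)              # a compact, modular, high-utility candidate
best = max(population, key=score)
print(best, score(best))                  # the short-description, high-utility genome wins
```

The compact, repetitive candidate wins not because an extra law privileges function, but because high utility and a short description go hand in hand under this scoring.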
5. No New Law Is Needed
Wong and Hazen’s law is redundant if the phenomena it claims to cover are already explained by algorithmic probability and compressibility. Proposing a new law of nature without addressing an already mathematically proven mechanism (Levin’s universal semi-measure, or Solomonoff’s prior) risks misunderstanding the roots of emergent order. Furthermore, invoking a separate thermodynamics-like principle for functionality introduces epistemic inflation: multiplying explanatory entities beyond necessity. The computational lens grounded in AIT is parsimonious, predictive, and experimentally approximable, as we have shown in our work (dozens of peer-reviewed papers).
I encourage the author of the Quanta and Wired articles and the broader scientific community to engage more deeply with the extensive body of work on algorithmic complexity and algorithmic dynamics, which offers robust, model-independent foundations for the study of emergence, evolution, and functional complexity. Without doing proper research on the very topics of an article, one does the reader no favours and instead contributes to misinformation.
In summary, while the work by Wong and Hazen is commendable, my main criticism of it is its lack of engagement with the established, mathematically rigorous definitions of randomness, complexity, and emergence provided by AIT. My other criticism concerns the lack of neutrality, balance, and critical thinking in the Quanta/Wired coverage that some science writers display, which is unfortunate and damaging to science.
I have reported on this before here.
