The 8 Fallacies of Assembly Theory

Dr. Hector Zenil
Dec 29, 2022


In response to the misunderstandings the proponents allege are evident in our exposure of the many serious issues undermining the foundations and methods of ‘Assembly Theory’.

(disclaimer: The opinions and views expressed in my updates are personal and reflect my own individual perspectives)

UPDATE (Wednesday 31 May 2023):

Cronin and Walker strike back with another rehash. Their ever-changing definition of Assembly Theory keeps, paradoxically, evolving in circles, and it now looks more than ever like algorithmic information and Bennett’s logical depth. Cronin and Walker now say Assembly Theory is a measure of ‘memory’ size (algorithmic information’s idea, not their own) and/or computing time (Bennett’s logical depth, not their own idea). They also explain that their motivation is to explain how some configurations are more likely than others (which is known as ‘algorithmic probability’, again not their own idea). They have gone ballistic in misappropriating the ideas of complexity science as their own, in what looks like a masterclass of a publicity stunt staged with the help of their friends and colleagues.

Walker and Cronin seem to like jumping from old mystical terms to new ones, with a new article now appearing each month in a bulletin from the Santa Fe Institute introducing terms like ‘memory’ measurement into their simplistic theory, now adapted to identify byproducts of life. This surprising new description (which does not match their simplistic indexes) makes AT identical to Bennett’s logical depth, introduced in the 1980s, but ill-defined and incomplete in practice, as they are unable to instantiate it with anything better than the equivalent of a simple 1960s statistical algorithm.

In their efforts to make their theory and measure look different without making any real change, the authors of Assembly Theory now claim their measure is not about distinguishing life only, but also the by-products of life, to accommodate the fact that their measure predicted beer to be the most alive product on Earth. This new development makes Assembly Theory indistinguishable from Bennett’s logical depth on paper, were it not for the fact that, in practice, it does not work and does not perform better than statistical counting functions introduced in the 1960s.

In their never-ending rehashing of a simplistic theory of life, intended to make it more credible and immune to criticism, the authors of Assembly Theory have decided to change the narrative around their measure and introduce yet another term, ‘memory’. This makes their approach a carbon copy of algorithmic complexity, as it is about the length of the model that generates the object’s history (including time). And when they focus on ‘time’, which they also introduce in grandiose fashion as if it were being considered for the first time, they make AT a carbon copy of Bennett’s logical depth. None of these sources do the authors cite or attribute correctly. Moreover, their simplistic measure does not match their grandiose descriptions, so not only are these carbon copies of existing theories of life, they are bad carbon copies.

Instead of taking the opportunity to explain their claims, and how, without using their ‘physical data’, other measures can reproduce and even outperform what they think and claim are outstanding results (see our figures below in our original post), they used it to continue spreading unfounded claims as a self-promotion exercise.

In our paper, we have shown that we don’t require all the ‘physical data’ they refer to in order to separate organic from non-organic compounds.

As a deception exercise worthy of some radical politicians, Assembly Theory has gained traction because the authors highlight a property of life nobody can disagree with: that life is highly hierarchical, reuses resources, and is heavily nested. We have known this for decades from biological evolution, genetics, self-assembly, and so on. The problem is that pointing out such an obvious feature, common knowledge for most experts, earned them credit for the wrong reasons, whether for introducing ideas as if they were their own or for introducing ill-defined concepts and measures.

In a previous version of Assembly Theory (AT), its authors suggested that the number of ‘physical’ copies of an element used to assemble an object measures how ‘alive’ it may be. Buildings use the same bricks in all possible configurations to make walls and rooms inside self-similar floors made of the same materials throughout, from beams to screws, repeated across and within other building blocks in a highly modular fashion. Is such a building alive according to Assembly Theory? It seems to suggest that these objects, just like Lego constructions and natural fractals, are alive because they are highly modular and have a long assembly history, just as it predicted beer was alive in their experiments. They have now realised this was wrong, because their measure designated beer as the most alive element on Earth, above even yeast itself, so they have adapted their theory to include the products of living systems, including beer (which still does not work). They have morphed their theory yet again to accommodate new criticisms without backing down from previous unfounded claims or changing anything that is wrong with their theory or measure. The new claims make it completely empty.

Complexity theorists have long known that nested modularity is a key feature of life but in no way the only one. We are convinced that what characterises life is the agency in its interaction with the environment and not a single intrinsic (and rather simplistic) property.

Let us also address what they claim is another exclusive property of Assembly Theory. They claim that their theory captures ‘physical’ copies, that it is the only one to do so, and that even AIT misses them. They also claim their measure is the only one that is experimentally validated.

The leading authors of Assembly Theory seem to suggest that their measure has some mystical power to capture the concept of a ‘physical copy’, even though that concept is encoded in a computable description and fed to their measure, just as it is fed to the measures we used to reproduce their results step by step, regardless of whether it is ‘physical’ or not, since any algorithm (theirs or ours) has to be computably represented/encoded in order to be read.

The input data to their assembly index includes InChI IDs, distance matrices (.mol files), and MS2 spectra (also a type of distance matrix). In other words, it all comes down to three pieces of information that have a written, computable representation, come from observations, and are fed to a complexity measure. This is in no way or form different from the use of any other complexity measure. Take, for example, LZW as used by Li and Vitányi to define their normalised information/compression measure, validated with genomic data. Is genomic data not physical for the authors of Assembly Theory? Moreover, we have shown that InChI ID strings and distance matrices are enough to classify compounds into organic and inorganic, hence making their supposedly different approach, which adds yet another input, unnecessary.

The authors claim that their Assembly Theory represents the first time in the history of science that an index takes ‘physical’ data from observations, effectively claiming that they have invented science.
In a paper predating Cronin and Walker’s work, published in the journal Parallel Processing Letters, we proved that by taking only InChI nomenclature IDs, sometimes enriched with distance matrices, one can come up with very interesting classifications, including organic v. inorganic, which they rediscovered 4–5 years later, claiming to have found the secret of life.
In a landmark paper in the field of complexity science, which also makes the mistake of overusing algorithmic complexity but relies on a weaker form based on compression algorithms, the authors took genomic sequences from different species and correctly reconstructed an evolutionary phylogenetic tree that corresponds to current biological knowledge. According to the authors of Assembly Theory, genomic sequences are not physical, since they claim theirs is the only measure of complexity validated on real ‘physical’ data and therefore ‘experimentally validated’. Yet this is pretty much what every complexity theorist has done for the last 50 years (including ourselves): take observable data that has a representation and feed it to a measure in order to classify it or extract information from it.

What is deplorable is not making mistakes but the authors’ open ‘fake it until you make it’ Silicon Valley approach to science, which unfortunately often pays off with some science journalists and science enthusiasts, who seem to be the main audience the Cronin and Walker groups are speaking to (every other scientist I have spoken with is either sceptical of their work or becomes convinced it is a scam after reading these arguments). Enthusiasts and science writers are drawn to such grandiose stories, sometimes just as much as grant agencies that actively seek media impact, which is rewarded with resources for cheap labour in the form of armies of wrongly labelled ‘postdocs’: underpaid researchers executing most of the research but often misled by people with agendas for personal academic gain and greater undeserved influence.

It is wrong to misappropriate the ideas of others without attribution and to choose to ignore results that predate one’s own without recognition. The community should repudiate these practices and this way of practising science as a marketing exercise. Their prose may be commendable, making their work look rigorous and deep, when it is in fact fundamentally wrong and shallow.

In summary, this is what is wrong with Assembly Theory, why it needs fixing, and why the authors should stop promoting it:

  • The authors take any criticism badly, and it makes them double down instead of correcting course.
  • Their theory and papers lack all sorts of control experiments and present everything as de novo ideas without attribution. Had they performed any control experiment, they would have found that they didn’t need any extra information to classify their compounds, or any new measure for that matter, as they could have used almost any other measure of complexity that already counts copies (from Huffman to RLE to LZW, you name it).
  • They have blatantly misappropriated concepts and ideas from others, knowing so and without crediting anyone.
  • They created a pseudo problem for which they created a pseudo solution.
  • Their theory is inconsistent with their method.
  • They have taken a marketing approach to doing science.
  • All the eight fallacies below, from mounting a strawman fallacy against algorithmic information to embracing an algorithm that does not do what they say it does.

UPDATE (Friday 5 May 2023): In a recent popular article written by the science writer Philip Ball in Quanta (which we won’t link here, to avoid increasing what we think is undeserved attention to this theory), the authors of Assembly Theory seem to suggest that the idea of considering the entire history of how entities come to be originates with Assembly Theory (AT). This is, again, incorrect; the idea was explored in the 1980s and belongs to Charles Bennett, one of the most outstanding computer scientists and complexity theorists. Roughly, Bennett’s logical depth measures the computational time (number of steps) required to compute an observed structure. It is “the number of steps in the deductive or causal path connecting a thing with its plausible origin”.

Bennett’s motivation was exactly that of Cronin and his group, about 35 years earlier: how complex structures evolve and emerge from the large pool of possible (random) combinations. This is also a main subject of interest of Algorithmic Information Theory, which has resource-bounded measures of which AT is a weak special case, not only because it is computable but because it is trivial, as proven in our paper and this blog post. The authors of Assembly Theory seem to keep jumping from one set of unsubstantiated claims to the next. Unfortunately, most of the people quoted positively about Assembly Theory in the Quanta article are too close to Sara Walker (co-author of AT) to be taken as entirely objective (one of them is among Walker’s most prolific co-authors) and should not have been chosen to comment as if they were neutral. This is a serious failure of journalistic practice.

We also found it unfortunate that the mistaken idea that Kolmogorov (algorithmic) complexity is too abstract or theoretical to be applied was put forward again, along with the claim that Kolmogorov complexity ‘requires a device’ (just as much as AT requires a computer algorithm to be instantiated; see the fallacy below on the apparently ‘mystical’ properties that the authors attribute to AT).

Algorithmic Information Theory (AIT), of which Kolmogorov complexity is only one index, has been applied for over half a century and makes possible all the compression algorithms used daily in digital communication. It has also found applications in biology, genetics, and molecular biology. Yet one does not even need Kolmogorov complexity to prove Assembly Theory incorrect, because it does not do what it says it does, and what it does do, it does not do better than almost any other control measure of complexity. It takes only one of the simplest algorithms known to computer science to prove Assembly Theory redundant, because the Huffman coding scheme counts copies better than Assembly Theory does. Huffman coding has been the basis of statistical lossless compression since the 1950s and has been used (and sometimes abused) widely in the life sciences and complexity science to characterise aspects of life. Nothing theoretical or abstract makes such applications impossible; that claim is yet another common fallacy, parroted with high frequency.
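To make this concrete, here is a minimal sketch of how Huffman coding turns copy counts into codeword lengths; the implementation and names are our own illustration (Python), not taken from any of the papers discussed:

```python
import heapq
from collections import Counter

def huffman_code_lengths(data):
    """Return the Huffman codeword length for each symbol in `data`.
    Symbols with more copies get shorter codes: counting copies is
    exactly what the scheme is built on."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate input: one repeated symbol
        return {next(iter(freq)): 1}
    # Heap entries: (total frequency, tiebreaker, {symbol: depth so far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: d + 1 for s, d in {**left, **right}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_code_lengths("ABRACADABRA")
print(lengths["A"])  # 'A' has the most copies, hence the shortest code
```

On "ABRACADABRA", the five copies of 'A' earn it the shortest codeword, while the rarer symbols get longer ones; no notion of 'physical' copy is needed beyond the frequency count itself.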

Original Post:

We have identified at least eight significant fallacies in the rebuttal of the proponents of Assembly Theory to our paper criticising the theory, available at

In a recent blog post, one of the leading authors of a paper on Assembly Theory suggested that our criticism of Assembly Theory was based on a misunderstanding. At the end of this response, we have included screenshots of this rebuttal to our critique, for the record.

By the time you reach the end of this reply, you will have learned how the main results of Assembly Theory can be reproduced, and even outperformed, by some of the simplest algorithms known to computer science, algorithms that were (correctly) designed to do exactly what the proponents of Assembly Theory set out to do. We were able to reproduce all of the results and figures from their original paper, thus demonstrating that Assembly Theory does not add anything new to the decades-old discussion about life. You can go directly to the MAIN RESULT section below should you want to skip the long list of fallacies and cut to the chase (for the empirical demonstration only, hence skipping most of the foundational issues).

Fallacy 1: Assembly Theory vs AIT

According to the authors’ rebuttal, “[We] contrasted Assembly Theory with measures common in Algorithmic Information Theory (AIT)” and ‘AIT has not considered number of copies’.

This is among the most troubling statements in their reply as it shows the degree of misunderstanding. The number of copies is among the most basic aspects AIT would cover and is the first feature that any simple statistical compression algorithm would look for, so the statement is false and makes no sense.

Furthermore, in our critique, we specifically covered measures of classical information and coding theory unrelated to AIT, which they managed to disregard or distract the reader’s attention from. We showed that their measure was fundamentally and methodologically suboptimal under AIT, under classical Shannon information theory, and under basic traditional statistics and common sense. As discussed in this reply, Assembly Theory and its proponents’ rebuttal of our critique of it mislead the reader into believing that the core of our criticism is predicated upon AIT or Turing machines — an example of a fallacy of origins.

AIT plays little to no role in comparing Assembly Theory with other coding algorithms. As discussed under Fallacies 2 and 4, Assembly Theory proposes a measure that performs more poorly than certain simple coding algorithms introduced in the 1950s and 1960s. These simple coding algorithms are based on entropy principles and traditional statistics. Yet the authors make unsubstantiated and disproportionate claims about their work in papers and on social media.

This type of fallacious argument continues to appear in the text of the rebuttal of our critique, which suggests a lack of formal knowledge of the mathematics underpinning statistics, information theory, and the theory of computation; or represents a vicious cycle in which the authors have felt unable to recognise that they have seriously overstated their case.

To try to distinguish AIT from Assembly Theory, in hopes of explaining why our paper’s theoretical results do not serve as a critique, their text keeps mischaracterising the advantages and challenges of AIT and attributing false mathematical properties to it, for example those discussed under Fallacies 2, 5, and 6 below.

One of the many issues we pointed out was that their molecular assembly index would fail at identifying as a copy any variant, no matter how negligible the degree of variation, e.g., resulting from DNA methylation. This means the index would need to be tweaked to identify small copy variations, meaning it would no longer be agnostic. For example, even linear transformations (e.g., change of scale, reflection, or rotation) would not be picked up by Assembly Theory’s simplistic method for counting identical copies, from which complexity theory largely moved on decades ago. Given that one cannot anticipate all possible transformations, perturbations, noise, or interactions with other systems to which an object may be subject, it is necessary to have recourse to more robust measures. These measures will typically be ultimately uncomputable or semi-computable, because they will look for all these non-predictable changes, large or small. So indeed, there is a compromise to be made, yet that they are uncomputable or semi-computable does not mean we have to abandon them in favour of trivial methods or that such measures and approaches cannot be estimated or partially implemented.
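As a toy illustration of this point (Python; the sequence and helper function are our own hypothetical example, not the authors’ code), exact copy counting is blind to a near-copy that differs by a single symbol:

```python
def count_exact_copies(sequence, block):
    """Count non-overlapping exact occurrences of `block` in `sequence`."""
    return sequence.count(block)

motif = "GATTACA"
variant = "GATTACT"            # one substitution, cf. a methylation-like change
sequence = motif + variant     # two near-identical blocks, obvious to the eye

print(count_exact_copies(sequence, motif))  # 1: the near-copy is invisible
```

Any exact-match copy counter behaves this way; detecting the near-copy requires a more robust (and ultimately only semi-computable, in the general case) notion of similarity.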

When it comes to testing trivial algorithms like the one Assembly Theory proposes, algorithms such as RLE and Huffman, introduced in the 1950s and 1960s, are special-purpose coding methods that were designed to count copies in data and have been proven optimal at counting copies and minimising the number of steps needed to reproduce an object, unlike the ill-defined Assembly Theory indexes.
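For reference, run-length encoding, the most literal of these copy counters, fits in a few lines (a Python sketch of the textbook scheme, with our own naming):

```python
def rle_encode(s):
    """Run-length encoding: collapse each run of identical symbols
    into a (symbol, count) pair, i.e. literally count adjacent copies."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1          # one more copy of the current symbol
        else:
            runs.append([ch, 1])      # a new run starts
    return [(symbol, count) for symbol, count in runs]

print(rle_encode("AAAABBBCCD"))  # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```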

Below, we compared Assembly Theory (AT) and its molecular assembly (MA) index against these and other more sophisticated algorithms, showing that neither AT nor MA offer any particular advantage and are, in fact, suboptimal both theoretically and in practice at separating living from non-living systems using their own data and taking at face value their own results.

Figure taken from our paper (v2) showing how Huffman coding does what the authors meant to do with Assembly Theory and its primary measure (the ‘molecular assembly index’, as it is called in the paper in question), namely count copies, but failed to do (later you can also see how Huffman coding does similarly to or better than AT at classifying their own molecules, without recourse to any structural data). This is a classical word problem in mathematics, often a first-year problem in a computer science degree, solvable with a very simple algorithm, Huffman coding, which can be implemented by a finite automaton (the authors mock Turing machines as too abstract, yet their measure runs on a strictly weaker version of a Turing machine, a finite automaton, and assumes life can be defined by processes that behave or produce objects in that way). The figure shows that, in this example, the number of steps needed to reproduce their favourite example is shorter than what AT and the molecular assembly index establish, hence AT overestimates the real ‘molecular assembly’. AT is therefore fundamentally and methodologically incorrect; it does not offer a ‘new’ measure but a less efficient one (by their own standard of counting ‘physical copies’) that is redundant, because the relevant method was introduced decades ago.

When we say that AT and MA are suboptimal compared to Huffman, we don’t mean that we expect Assembly Theory to be an optimal compression algorithm (as the authors pretended we were suggesting, in a straw-man attack). Huffman coding is not an optimal statistical compressor overall, but it is optimal at doing what Assembly Theory claimed to be doing. This is another point the authors get wrong, naively or disingenuously repeating ad infinitum that AT and MA are not compression algorithms, in the hope that such a claim would make them immune to this criticism.

In no way do we expect Assembly Theory to be like AIT in attempting to implement optimal lossless compression. In other words, the above (and the rest of the paper and post) compares Assembly Theory to one of the most basic coding algorithms for counting copies, a method every compression algorithm has taken for granted since the 1960s, and nothing more. The bar is thus quite low to begin with.

Given that the authors seem to take ‘compression’ as a synonym for AIT, we have decided to change the term ‘compression’ to ‘coding’ where appropriate (in most cases) in the new version of our paper, so that the authors, and readers, know we are talking about the properties attributed to AT and MA as found in other algorithms, regardless of whether those algorithms are seen as, or have been used as, compression methods.

Ultimately, their (molecular) assembly index (MA) is an upper bound on the algorithmic complexity of the objects it measures, including molecules, notwithstanding the scepticism of the proponents of Assembly Theory. Hence their MA is, properly speaking, an estimation of algorithmic complexity, even if a basic or suboptimal one compared to other available algorithms.

Fallacy 2: ‘We are not a compression algorithm,’ so Assembly Theory is immune from any criticism that may be levelled at the use of compression algorithms

Interestingly, from the point of view of computer science, Assembly Theory’s molecular assembly index falls into the category of a compression algorithm for all intents and purposes, to the possible consternation of its proponents. This is because their algorithm looks for statistical repetitions (or copies, as they call them), which is at the core of every basic statistical compression algorithm.

Compression is a form of coding. Even if the authors fail to recognise or name it as such, their algorithm is, for all technical purposes, a suboptimal version of a coding algorithm that has been used for compression for decades. Even if they only wanted to capture (physical) ‘copies’ in data or a process, which is exactly what algorithms like RLE and Huffman do, and confine themselves to doing, optimally, their rebuttal of our critique fails to recognise that what they have proposed is a limited special case of an RLE/Huffman coding algorithm, which means their paper introduces a simpler version of what was already considered one of the simplest classes of coding algorithm in computer science.

By ‘simple’, we mean less optimal and weaker at what it is designed to do, meaning that it may miss certain very basic types of ‘copies’ that the assembly index, for some reason, may disregard as nested in a pathway, hence not even properly counting (physical) ‘copies’, which the Huffman coding algorithm does effectively, outperforming AT in practice too (see figure below).
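That looking for repeated blocks is the engine of every dictionary compressor can be seen in a textbook LZW sketch (Python; our own illustrative implementation, not the assembly index):

```python
def lzw_compress(s):
    """Textbook LZW: grow a dictionary of previously seen substrings
    ('copies') and emit one code per longest matching block."""
    dictionary = {chr(i): i for i in range(256)}  # start with single bytes
    current, codes = "", []
    for ch in s:
        if current + ch in dictionary:
            current += ch                      # extend the matched copy
        else:
            codes.append(dictionary[current])  # emit the longest known block
            dictionary[current + ch] = len(dictionary)
            current = ch
    if current:
        codes.append(dictionary[current])
    return codes

data = "ABABABABABABABAB"
print(len(data), len(lzw_compress(data)))  # repeats collapse into few codes
```

A string full of copies compresses to far fewer codes than symbols, precisely because the algorithm discovers and reuses the repeated blocks as it goes.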

Fallacy 3: Assembly theory is the first (experimental) application of biosignatures to distinguish life from non-life

The claim to be the first to have done so is misleading. The entire literature of the complexity theory community is about trying to find the properties of living systems. The community has been working on identifying properties of living systems that can be captured with and by complexity indexes for decades, perhaps since the concept of entropy itself was introduced. Here is one example from us, published a decade before Assembly Theory. The problem has even inspired models of computation, such as membrane computing and P systems, introduced in the 1990s, which exploit nested modularity.

We could also not find any evidence in favour of the claimed experimental nature of their index, given that all the other measures could separate the molecules as theirs did, without any special experimental data and based mostly on molecular nomenclature. Thus, the defence that the claims about their measure are experimentally validated does not make sense. The agnostic algorithms that we tested, which should have been the control experiments in their original exploration, take the same input from their own data source and produce the same output (the same separation of classes) or better.

We have updated our paper online to cover all their results, reproduced using measures introduced in the 1960s, showing how all the other measures produce the same results or even outperform the assembly index.

In fact, we had already reported that nomenclature could drive most complexity measures (especially simple ones like AT and MA) into separating living from non-living molecules, which seems to be what the authors of Assembly Theory replicated years later. In this paper, for example, published in 2018, we showed how organic and inorganic molecules could be classified with complexity indexes of different flavours introduced before, based on basic coding, compression, and AIT (a preprint is available here), predating the Assembly Theory indexes by four years.

In 2018, before Assembly Theory was introduced, we showed, in this paper, that complexity indexes could separate organic from inorganic molecules/compounds.
In 2018, in the same paper, we showed that repetitions in nomenclature would drive some complexity indexes and that some measures would pick up structural properties of these molecules/compounds. However, the authors of Assembly Theory disregarded the literature. They rehashed work in complexity science and information theory done over the last 60 years, which they failed to cite or attribute correctly (even when told about it before their first publication). They have attracted much attention with an extensive marketing campaign and PR engine that most labs and less social-media-oriented researchers cannot access or spare resources on.

We are convinced that it is impossible to define life by looking solely at the structure of an agent’s individual or intrinsic material process in such a trivial manner, without considering its relationship and exchange with its environment, as we explored here, for example, or here, where, by the way, we explained how evolution might create modularity (something the Quanta article on Assembly Theory says, wistfully, that perhaps Assembly Theory could do).

What differentiates a living organism from a crystal is not the nested structure (which can be very similar) but the way the living system interacts with its environment and how it extracts or mirrors its environment’s complexity to its own advantage. So the fact that Assembly Theory pretends to characterise life by counting the number of identical constituents an object is made of does not make scientific sense, and may also be why Assembly Theory suggests beer is the most alive of the living systems that the authors considered, including yeast.

Fallacy 4: Surprise at the correlation

The authors say they were surprised, and found it interesting, that the tested compression algorithms, introduced in the 1950s, produce similar or better results than Assembly Theory, as we reported. This should not be a surprise, as those algorithms implement an optimal version of what the authors meant to implement in the first place, with the results conforming to the expectation implicit in their formal and informal specifications.

In this case, an effective counter to our argument would be to show that, although theoretically and empirically relevant from the statistical perspective, the correlation is not due to structural similarities within the measure itself (i.e., to the fact that the assembly index is a particular type of coding algorithm). However, the latter is obviously false. Furthermore, if a statistically significant classification task is being performed with equal or greater capacity by another measurement algorithm and method, then this fact per se requires further explanation, namely, why an equal or superior performance should be disregarded. Providing such an explanation would entail exposing the measures’ foundational structural characteristics, bringing the reader face to face with the other fallacies in the text.

The authors must now explain how we could reproduce their main result with every other measure. They would have toned down their claims had they performed basic control experiments. Neither the foundational theory nor the methods of Assembly Theory offer anything not explored decades ago with those other indexes, which could separate organic and inorganic compounds just as MA and AT did.

Fallacy 5: Assembly Theory vs Turing machines (or computable processes)

The authors wrote:

“They have not demonstrated how those algorithms would manifest in the absence of a Turing machine, how those algorithms could result in chemical synthesis, or the implications of their claims for life detection. Their calculations do not have any bearing on the life detection claims made in Marshall et. al. 2021, or the other peer-reviewed claims of Assembly Theory. Despite the alternative complexity measures discussed, there are no other published agnostic methods for distinguishing the molecular artifacts of living and non-living systems which have been tested experimentally.”

Coming back again to the same straw-man fallacy, they seem to conflate a Turing machine with simplicity and to disparage the model; they do not realise that their index is a basic coding scheme widely used in compression since the 1960s, even if they claim that they do not wish to compress (which they effectively do), and is thus also related to algorithmic complexity as an upper bound. They proceeded to ask “how those algorithms would manifest in the absence of a Turing machine,” misconstruing our arguments and dismissing our results by saying they were ‘surprised’ by them. We could not make sense of this statement.

Later on, they say: “MA is grounded in the physics of how molecules are built in reality, not in the abstracted concept of Turing machines.” We could not make any sense of this either. We can only assume that they think we are suggesting that AIT implies that nature operates as a Turing machine. This is an incorrect implication. What AIT does imply is that a measure of algorithmic complexity captures computable (and statistical, as a subset) features of a process. There is nothing controversial about this. Science operates on the same assumption, seeking mechanistic causes for natural phenomena. If the implication is that AIT assumes nature operates as a Turing machine because AIT is typically defined in terms of Turing machines (an oversimplification of the concept of an algorithm), then by the same token they are implying that their assembly index assumes nature to operate as an even simpler and much sillier automaton, given that the assembly index can be defined in terms of, and executed by, a bounded automaton (an automaton more basic, equally mechanistic, and ‘more simplistic’ than a Turing machine). However, we are not even invoking any Turing-machine argument. The use of AIT only supports the logical arguments and a small part of the demonstration of the many issues undermining Assembly Theory.

Algorithms such as RLE and Huffman coding make no particular ontological commitments. They can be implemented as finite automata, just as the assembly index can; they do not require Turing machines. This, again, shows a lack of understanding on the part of the authors of basic concepts in computer science and complexity theory. In other words, if we were to construct a hierarchy of simplicity, with the simpler machines lower down, their assembly index would occupy a very lowly position in the food chain of automata theory, counting as one of the simplest algorithms possible, one that does not even require the power of sophisticated machines like Turing machines or a general-purpose algorithm.
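To make concrete just how simple such copy-counting coding schemes are, here is a minimal run-length encoder in Python (an illustrative sketch of the classic 1960s-era idea, not any particular published implementation). Note that it needs only a single left-to-right pass with constant control state, which is why it can be realised by a finite automaton:

```python
def rle_encode(s):
    """Run-length encoding: collapse each run of identical symbols
    into a (symbol, count) pair -- the simplest way to 'count copies'."""
    out = []
    i = 0
    while i < len(s):
        j = i
        # extend j to the end of the current run of s[i]
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

# A string full of repeats compresses well; one with no repeats does not.
print(rle_encode("AAAABBBCCD"))  # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```

A string such as ‘Marshallarane’, with every symbol distinct, yields one pair per symbol, i.e., no compression at all, which is exactly the behaviour discussed below.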

The claim that a Turing machine is an abstract model unable to capture the subtleties of their measure defies logic and comprehension, because their measure can run on an automaton significantly simpler than a Turing machine; it does not even require the power of a universal Turing machine. And ultimately, we could not find their measure to be grounded in physics or chemistry in any way that makes it special, as they suggest. For example, when they say that “the goal of assembly theory is to develop a new understanding of evolved matter that accounts for causal history in terms of what physical operations are possible and have a clear interpretation when measured in the laboratory” and that “assembly spaces for molecules must be constructed from bonds because the bond formation is what the physical environment must cause to assemble molecules,” they disclose the limitations of Assembly Theory: it cannot handle other, more intricate environmental (or biological) catalytic processes that can increase the odds of a particular molecule being produced as a by-product, which other, more capable compression methods can. What they do accept is that their algorithm counts copies, and as such it can run on a very simple automaton of strictly less power than a Turing machine.

This misunderstanding is blatantly evinced in the rebuttal’s passage in which they call the process of generating a stochastically random string an “algorithm.” The authors fail to distinguish between the class of computable processes and the class of algorithms run with access to an oracle (in this case, a stochastic source). Then, to construct a counterexample against our methods, they implicitly assume that the generative process of the string belongs to the latter class while the coding/compression processes for the string belong to the former. Despite these basic mistakes, the authors go on to argue that this is one of the reasons AIT fails to capture their notion of “complete randomness”, with Assembly Theory having been designed to do just that. These mistakes suggest an en passant reading of the theoretical computer science and complex systems science literature (see also Fallacy 6). Similarly, their rebuttal claims that our results cannot handle the assembly process as their method can. However, their oversimplified method for generating the assembly index is feasibly computable (and advertised as such by the authors) and can easily be reproduced or simulated by decades-old compression algorithms, let alone by other, more powerful computable processes.

In most of our criticism, the use of a Turing machine is irrelevant, regardless of what the authors may think of Turing machines. Their assembly index, RLE, and the Huffman coding do not require any description or involvement of a Turing machine other than the fact that they can all be executed on a Turing machine. This holds because our empirical results do not require AIT (see Fallacies 2 and 4), and our theoretical results do not require Turing machines.

Note that even if completely different from an abstract or physically implemented Turing machine, a physical process can be computable, capable of universal computation, or both. They try to depict the proofs against their methods as if our position were that physical or chemical processes are Turing machines, which makes no sense (here they seem to be employing a type of clichéd-thinking fallacy). Moreover, they ignore ongoing state-of-the-art advances in complexity science on hybrid physical processes, that is, processes that are partially computable and partially stochastic.

The authors also fail to see that any recursive algorithm, including the methods of Assembly Theory, is equivalent to a computer program running on a Turing machine, or to a Turing machine itself, and thus the distinction between algorithms and Turing machines is irrelevant to their point. The only way to make sense of their argument is to assume they believe their Turing-computable algorithm can capture non-Turing-computable processes by some mystical or magical power.

Fallacy 6: Assembly index has certain “magical” properties validated experimentally, including solving a problem that was demonstrated decades ago could not be solved by the likes of it.

The authors claim that unless we create a molecule that the assembly index fails to characterise, we cannot disprove their methods. Besides being an instance of the appeal-to-ignorance fallacy, this misrepresents the core of our arguments by ignoring what our results imply. As also discussed under Fallacies 2 and 4, we showed that the assembly measure can be replaced by simple statistical compression measures that do a similar or better job both of capturing their intended features and of classifying the data (the alleged biosignatures), while also offering better (optimal) foundations. We have also proposed and tested computable approximations to semi-computable measures and shown them to outperform their index; even these computable versions are better than Assembly Theory, both in principle and in practice.

Furthermore, they presented a compound as a counter-example that they modestly called Marshallarane (after the lead author of their paper), defined as “[producing] a molecule which contains one of each of the elements in the periodic table.” This is an instance of a combined straw-man and false analogy fallacy, in which an oversimplified explanatory or illustrative example that is supposed to be analogous or similar to the target argument is employed to concentrate the discussion around that oversimplified example.

If the authors regard it as an advantage that an index says nothing about a compound like ‘Marshallarane’ (which I’d rather call Arrogantium or Gaslightium), then RLE, Huffman, and indeed any basic Shannon-entropy-based algorithm qualify equally, since they too would find no copies or repetitions, as we explain in depth in this paper. In fact, RLE and Huffman coding are among the first, simplest, and most limited coding schemes that ‘count copies’. This would be different from algorithmic complexity, but it means that we do not need algorithmic complexity to perform like MA: there is no need to invoke algorithmic complexity if we can replace Assembly Theory and its molecular assembly index with simplistic algorithms such as RLE or Huffman (introduced in the 1960s) that appear to outperform Assembly Theory itself.

This strategy fails to address our criticism because such a counterexample of building a molecule poses, first of all, a contradiction internal to their own proposition. They have provided a Turing-computable process to build their new compound, even though, according to the authors, “describing a molecule is not the same as causing the physical production of that molecule. Easily describing a graph that represents a molecule is not the same as easily synthesising the real molecule in chemistry or biology”. Yet their algorithm requires a computable representation that is no more special than any other computable representation of a process. Besides possibly being an instance of the clichéd-thinking fallacy (given the literature’s use of the term ‘descriptive complexity’ to refer to algorithmic complexity), this contradicts their own claim that Turing machines are not an appropriate abstract model, because it is exactly the mechanistic nature of a process that a Turing machine can emulate or simulate, which lies at the heart of AIT. This passage shows a total misunderstanding of AIT, of Turing machines, and of internal logical coherence (see also Fallacy 5). Our own research on Algorithmic Information Dynamics is concerned with causal model discovery and analysis based on the principles of AIT. Still, the authors present AIT as oblivious to causality and advance their oversimplified, weak, and suboptimal algorithm as able to capture the subtleties of the physical world.

Such an argument also fails because it distorts or omits, in straw-man fashion, the crucial part of our theoretical results: that there are objects or events (e.g., a molecule) that satisfy their own statistical criteria for significance and their own criteria for distinguishing pure stochastic randomness from constrained (or biased) assembling processes. In other words, there is a computable process whose result mimics a fair-coin-toss stochastically random outcome, yet satisfies their own statistical criteria for checking whether the resulting molecule's sample frequency is statistically significant, and satisfies their own mathematical criteria for distinguishing random events (in their own words, those with a “copy number of 1”) from non-random events (in their own words, “those with the repeated formation of identical copies with sufficiently high MA”).

The authors indeed appear to suggest that their index has some “magical” properties and is the only measure that can capture and tell apart physical or chemical processes from living ones. For example, their rebuttal of our critique employs the argument that it can tell apart physical or chemical processes from living ones because it “handles” the problem of randomness differently from AIT. Moreover, when they say that “using compression alone, we cannot distinguish between complete randomness and high algorithmic information,” such a claim already contradicts, even in an at-first-glance reading, the fact that the assembly index can be employed as an oversimplified compression process. In any event, the difference between Assembly Theory and AIT certainly and trivially cannot lie in how randomness is handled because Assembly Theory does not handle randomness at all, as it is in sheer contradiction to what is mathematically defined as randomness.

The formal concept of randomness was only established in the mathematical literature after a long series of inadequate definitions and open problems that exercised the ingenuity of mathematicians, especially in the last century; those decades-old results (inadvertently or not) already cover the statistical and compression method of Assembly Theory as one of the proven cases in which randomness fails to be mathematically characterised. An object is random when it passes every formal-theoretic statistical test one may possibly devise to check whether it is somehow “biased” (that is, more formally, whether it has some distinctive property or pattern preserved at the measure-theoretic asymptotic limit). However, there are statistical tests (as shown in our paper) that an event satisfying Assembly Theory’s criteria for “randomness” nonetheless fails. In other words, Assembly Theory fails even against the most intuitive notion of randomness: there are objects whose subsequent constituents are less predictable (more random) according to Assembly Theory than they actually are.
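A toy illustration of this last point (our own construction here, not the specific one in the paper): a binary de Bruijn sequence contains every length-n block exactly once, so a copy-counting criterion in the spirit of Assembly Theory (every block having “copy number of 1” at that block length) would rate it maximally novel or random, yet it is the output of a trivial deterministic program and therefore has very low algorithmic complexity:

```python
def de_bruijn(k, n):
    """Generate the de Bruijn sequence B(k, n) via the standard
    Lyndon-word (FKM) construction: concatenate, in lexicographic
    order, all Lyndon words over k symbols whose length divides n."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, seq))

def has_repeated_block(s, n):
    """Crude copy-counting test: does any length-n block occur twice?"""
    seen = set()
    for i in range(len(s) - n + 1):
        block = s[i:i + n]
        if block in seen:
            return True
        seen.add(block)
    return False

s = de_bruijn(2, 8)  # 256 bits produced by a few lines of code
print(len(s), has_repeated_block(s, 8))  # 256 False: no length-8 'copies'
```

A pure copy counter sees no repeated length-8 block and so reports maximal novelty, even though the generating program is only a few lines long; this is precisely the kind of object that is far more predictable than a copy-counting index says it is.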

Contrary to their claims, randomness is synonymous with maximum algorithmic information. Unlike what the authors propose as research, the real scientific debate they fail to grasp has to do with complexity measures for complex systems with intertwinements of both computable and stochastic dynamics, as discussed under Fallacy 5, systems which the results in our critique already showed Assembly Theory failed to measure even as well as other well-known compression methods.

Fallacy 7: “Many groups around the world are using our [assembly] index”

An instance of the bandwagon fallacy. The authors claim that several groups around the world are working on Assembly Theory. If so, this does not invalidate our criticism but makes it more relevant. That the authors have published their ideas in high-impact journals also corroborates (if only anecdotally) the ongoing and urgent concern in scientific circles about current scientific practice: how biased the peer-review process is in its tendency to value social and symbolic capital; how behaviour once considered inappropriate is now rewarded by the social-network dynamics impacting scientific dissemination; and how the fancy university titles of corresponding authors may play a role in dissemination, to the detriment of science.

There are definitely more groups working on information theory and AIT (which Cronin calls ‘a scam’; see his Twitter post below) than there are groups working on Assembly Theory, so the fallacy cuts both ways. And as of today, despite Leroy Cronin's high media profile, methods based on AIT (a field in which one of his collaborators, Sara Walker, has been productive), including our own, have far more citations and are used by many more groups than Assembly Theory.

That a small army of misled and underpaid postdocs in an over-funded lab led by an academic highly active on social media has managed to make other researchers follow their lead is not that surprising. But it is our duty to inform less informed researchers that they may have been misled by severely unsubstantiated claims, naively or viciously. If their work had been ignored, we would not have invested so much time and effort debunking it.

Fallacy 8: “100 molecules only”

We turn now to the claim that we used only 100 molecules; the conclusions the authors draw from this reveal a misunderstanding of their own work. In their supplemental information, they state that the 100 molecules in Fig. 2 constitute the main test for MA to establish the chemical space on the basis of which they can distinguish biological samples in attempting to detect life. These are therefore the same molecules they used, and defended the use of, when confronted with reviewers who pointed out the weaknesses of their paper. The authors mislead the reader by claiming that the 100-molecule experiment has little or nothing to do with their main claims and that we have therefore misunderstood their methods and results. This is false; their own reviewers were concerned that their claims were entirely based on tuning their measure over these 100 molecules. We simply replicated their experiment with proper controls and found that other computable statistical measures (therefore not only AIT but classical information-coding measures) were equivalent to or better than Assembly Theory.

The authors argue that our empirical findings, shown in Fig. 2 of the first version of our paper, do not disprove their Fig. 4 findings because, according to them, Fig. 2 only shows how they ‘calibrated’ their index and set the chemical space. This does not follow. In any case, we have updated our paper online to cover Fig. 4 too, showing that all the other measures reproduce the same results or even outperform the assembly index, so that now each and every one of their results and figures has been covered.

Their paper was not reproducible, which is one of the reasons we had not reproduced Fig. 4, so we took the values of their molecular assembly index from their own plot as given. Even granting their results, they are inferior when compared against controls chosen without any cherry-picking: every other measure tested produced similar or better results, some even outperforming their index without any effort.

Notice that they had claimed that the central hypothesis and framework for MA computation were built upon the data and validation whose results are shown in Figs. 2 and 3, that is, upon the 100 molecules we originally used in our critical analysis. Had that failed, their first hypothesis would have failed on its own terms. So the reference to Fig. 4 only serves to distract readers from the reported issues, because the original 100-molecule experiment was the basis for their final results.

They used the 100 molecules to set the chemical space and validated it with the larger database of molecules. They claimed multiple times that complex real biological samples like E. coli extract or yeast extract are just complex mixtures of some of the molecules from which they built the chemical space (i.e., the 100 molecules).


Finally, we provide the main figure showing that other measures, simple and more sophisticated alike, reproduce the molecular assembly results at separating biological molecules, using the same data that their index uses, and performing either similarly to or better than Assembly Theory.

It reproduces their results in full using traditional measures, some of which the main author of Assembly Theory has called ‘a scam’, showing that their assembly index does not add anything new, their algorithm does not display ‘special properties’ to count ‘physical’ copies, and that, in fact, some measures even outperform it when taking their results at face value (as we were unable to reproduce them). It is thus fatal to the said theory.

The new version of our paper, available online, includes this figure that incorporates all the molecules/compounds in their paper. The authors insisted that we could not replicate their results using decades-old simplistic algorithms (or outperform them using simple and more sophisticated ones). This added figure shows that what the main author has called a scam (‘algorithmic complexity,’ see image below) can also reproduce the same results as their allegedly novel and ground-breaking concept based on the methodologically ill-defined and fundamentally weak framework of Assembly Theory.
Table taken from our paper showing how other measures trivially outperform AT and MA. The authors of Assembly Theory have never done or offered any control experiments comparing their work or indexes to other measures. The above shows that other complexity measures do similarly or better and that no claim to ‘physical’ processes is justified because with and without structural data, their results are reproducible with trivial algorithms known for decades.

In summary

Readers should judge for themselves, by reading the papers and the rebuttals, whether or not their fallacious arguments can be worked around. Their response to our critique does not address our main conclusions. It tries, hopefully unintentionally, to distract the reader from the results with statements about their work’s empirical assumptions or putative theoretical foundations. Contrary to their claim that “[our] theoretical and empirical claims do not undermine previously published work on Assembly Theory or its application to life detection,” our arguments seriously undermine Assembly Theory and seem fatal to it.

The assembly index can be replaced by simple statistical coding schemes that are truly agnostic, not being designed with any particular purpose in mind, and do a similar or better job at capturing both the features that Assembly Theory is meant to capture and, in practice, classifying the data (the alleged biosignatures) while also offering better foundations (optimality at counting copies) even without having recourse to more advanced methods such as AIT (which also reproduce their results, e.g. 1D-BDM).

The animus of the senior author toward one of the core areas of computer science suggests a lack of understanding of some of the basics of computer science and complexity theory (some replies to his Tweet even told him that his own index was a weak version of AIT, to which the author never replied or elaborated). Coming from an established university professor, dismissing algorithmic complexity measures as ‘a scam’ (see the image from Twitter below), with no reasons given even when asked to unpack the claim, is dismaying. Especially so since one of the original paper’s main co-authors has done serious work in the area of algorithmic probability (see Sara Walker's paper). The authors fail to realise that, for all foundational and practical purposes, Assembly Theory and its methods are a special weak case of AIT and would only work because of the principles of algorithmic complexity. AIT is the theory that underpins Assembly Theory as an (unintended and suboptimal) upper bound of algorithmic complexity.

On behalf of the authors,

Dr. Hector Zenil
Machine Learning Group
Department of Chemical Engineering and Biotechnology
University of Cambridge, U.K.

The following are the honest answers to the authors’ FAQs that they have published on their website (

Second Q&A from the original authors led by L. Cronin

Honest answer: It is a simplistic index of pseudo-statistical randomness that counts ‘copies’ of elements in an object. Surprisingly, the authors claim it captures key physical processes underlying living systems. Indeed, according to the authors, counting copies of an object reveals whether or not it is alive. The idea that such a simplistic ‘theory of life’ can characterise a phenomenon as complex as life is naive at best.
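To illustrate how simple a ‘count the copies’ index really is, here is a toy sketch in Python (our own illustrative construction, not the authors' published algorithm): it greedily builds a string by appending either one new symbol or the longest chunk that already occurs in what has been built, and counts the steps, so that strings full of reusable copies need fewer steps:

```python
def toy_assembly_steps(s):
    """Toy 'count the copies' index: number of greedy steps needed to
    build s, where each step appends either one new symbol or the
    longest prefix of the remainder already present in what was built.
    (Illustrative sketch only; NOT the published assembly index.)"""
    built = ""
    steps = 0
    while len(built) < len(s):
        rest = s[len(built):]
        best = 1
        # longest prefix of the remainder that reuses an existing 'copy'
        for k in range(len(rest), 1, -1):
            if rest[:k] in built:
                best = k
                break
        built += rest[:best]
        steps += 1
    return steps

print(toy_assembly_steps("ABABABAB"))  # 4: copies are reused
print(toy_assembly_steps("ABCDEFGH"))  # 8: no copies, one symbol per step
```

Anything in this family of copy-counting schemes is a few lines of elementary code, which is the point: there is nothing in such an index that could plausibly single out living matter.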

In our opinion, no definition of life can disregard the agent’s environment, as life is about this interaction. A simplistic intrinsic measure like the assembly index would astonish any past or future scientist if it were actually capable of the feats its authors believe it capable of (the senior author even claims to be able to detect extraterrestrial life). However, we have shown that it does not perform better than other simple (and better-defined) coding measures introduced in the 1960s, and that their measure is ill-defined and suboptimal at counting copies (see

We think that the key to characterising life lies in the way living systems interact with their environments and that any measure that does not consider this state-dependent variable, which is the basis of, for example, Darwinian evolution, will fail. We have published some work in this area featuring content that Lee Cronin considers ‘a scam’ (see below), one of which was co-authored with one of the senior authors of the Assembly Theory paper with Cronin, Sara Walker (

Third Q&A from the original authors led by L. Cronin

Honest answer: Despite the apparent fake rigour of the answer, the assembly index is a weak (and unattributed) version of a simplistic coding algorithm widely used in compression and introduced in the 1960s, mostly regarded today as a toy algorithm. As such, and given that all compression algorithms are measures of algorithmic complexity, it is an algorithmic complexity measure. To the authors’ misfortune, their assembly index is the most simplistic statistical compression algorithm known to computer scientists today (see

Their original paper is full of, perhaps unintended, fake rigour. The authors even included a proof of computability of their ‘counting-copies algorithm’, which nobody would have doubted was trivially computable. Nobody has ever bothered to prove that algorithms like RLE or Huffman coding are computable, precisely because it is trivial: these algorithms can be implemented on finite automata and are therefore trivially computable. Assuming their proof is right, it was unnecessary. What they have shown is that they seem to be on some sort of straw-man crusade against AIT, in apparent desperation to show how much better they are compared to AIT (before even comparing themselves to other trivial computable measures).

Fourth Q&A from the original authors led by L. Cronin

Honest answer: Yes, their second sentence indicates that their algorithm tries to optimise for the minimum number of steps after compression by looking at how many copies an object is made of, just as Huffman coding does, but AT and MA do this suboptimally. It is therefore not only a frequency-based compression scheme but, unfortunately for them, a bad one (see
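For reference, here is a minimal Huffman coder in Python (a textbook sketch, not tied to any particular library): it repeatedly merges the two lowest-frequency subtrees so that frequent symbols, i.e., the most-copied parts of the object, receive the shortest codewords:

```python
import heapq
from collections import Counter

def huffman_code(s):
    """Build a Huffman code for the symbols of s: frequent symbols
    (many 'copies') get shorter codewords."""
    freq = Counter(s)
    # heap entries: (weight, unique tiebreaker, {symbol: codeword})
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        # prepend '0' to one subtree's codewords and '1' to the other's
        merged = {sym: "0" + code for sym, code in c1.items()}
        merged.update({sym: "1" + code for sym, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_code("abracadabra")
# 'a' occurs 5 times out of 11, so it gets the shortest codeword;
# the 11-character string encodes in 23 bits instead of 88 at 8 bits/char
encoded_bits = sum(len(codes[c]) for c in "abracadabra")
print(encoded_bits)  # 23
```

This 1950s-era scheme is provably optimal among symbol-by-symbol frequency codes, which is exactly the optimality that a suboptimal copy-counting index forgoes.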

To their surprise, perhaps, almost everything is related to compression. In AI, for example, the currently most successful models, based on so-called Transformers (e.g., ChatGPT), are found to perform well precisely because they can reduce/compress the high dimensionality of their large training data. Science itself is about compressing observed data into succinct mechanistic explanatory models.

Fifth Q&A from the original authors led by L. Cronin

Honest answer: The authors say this has nothing to do with algorithmic complexity, but their answer is almost the definition of algorithmic complexity and of algorithmic probability, which looks at the likelihood of a process producing other simple processes (including copies). Unfortunately, the assembly index is such a simplistic measure that complexity science left it behind in the 1960s, or rather incorporated it as one of the most basic devices that literally every compression and complexity measure takes into account. These days it is taken for granted and set as an exercise for first-year computer science students.

Update (14 May 2023): How to fix Assembly Theory?

One can anticipate, and brace for, the next move of the group behind AT: announcing that they have created artificial life in their lab, as measured by this simplistic assembly index (which we believe was their original motivation), a test that even a crystal can pass and at which beer excels (according to their own experimental results), in what may appear to be another publicity stunt with a well-oiled marketing engine behind it.

AT is unfixable in some fundamental ways, but the authors have invested so much effort and so much of their own credibility that they are unlikely to back down. In a constructive spirit, and having discussed this with a group of colleagues (larger than the set of authors of our paper and blog post), we can get creative and see how to make AT more relevant. Here we suggest a way to somehow ‘patch’ AT:

1. Embrace an optimal method for what Assembly Theory originally set out to do, which is to count nested copies. It could start with Huffman encoding or any other coding scheme from the compression literature. They can explore weaker versions by relaxing assumptions if they want to make different interpretations (and explore ideas properly attributed to the right people, such as their ‘number of steps’ and ‘causally connected processes’, which come from Logical Depth). Things like counting the frequency of molecules are already taken into account, as shown in our papers and others.

2. Drop the claim that ‘physical’ bonds, reactions, or processes can only be captured by AT; it makes no sense: everything is physical, or nothing is. Whatever material they feed their assembly index must be symbolic and computable, as their measure is an algorithm that takes a piece of data containing a representation of those ‘physical’ copies, such as chemical nomenclature (e.g. InChI) or structural distance matrices.

3. Drop the assumption that bonds or chemical reactions happen with equal probability. To begin with, this is not the case: depending on the environment, each reaction has a different probability of happening. And even under that assumption, algorithmic probability already indicates a simplicity bias, encoded in the universal distribution (see this paper and derived work, motivated by our own extensive research).

4. Factor in the influence of the environment, which changes the probability distribution of how likely specific physical or chemical steps are. This is the state-dependency we defend above, which no measure can ignore, and it is the chemical basis of biological evolution.

5. In an ever-changing environment, any agent (physical/chemical process) would need to adapt, thus changing the likelihood of particular chemical reactions. It is the internal dynamics of this relationship that we know to be the hallmark of life. After all, amino acids can be found on dead asteroids with no problem; not many people would call that, or beer (as AT does), life.

For AT to work, therefore, the agreement among colleagues seems to be that these steps would need to be taken before anyone suggests any ‘validation’ using spectrometry data and puts a complexity number on a molecule to call it a good measure for detecting life. However, this is not Assembly Theory, nor what the authors of AT did, but what others in the field have long been doing, with both breakthroughs and incremental progress.

Cited screenshots

The original rebuttal to our paper ( from the authors of Assembly Theory (for reference in case it changes or is taken down in the future)


Let’s address the — entirely unwarranted — reservations the authors of Assembly Theory (and many more uninformed researchers) seem to have against a classical model of computation. While researchers from Assembly Theory seem to disparage anything related to the foundations of computer science, such as the model of Turing machines and algorithmic complexity, other labs, such as the Sinclair Lab at Harvard, have recently reported surprising experimental epigenetic aging results deeply connected to information and computer sciences. Says Prof. David Sinclair: “We believe it’s a loss of information — a loss in the cell’s ability to read its original DNA so it forgets how to function — in much the same way an old computer may develop corrupted software. I call it the information theory of aging.”

While we are not embracing a supremacist view of AIT or Turing machines, we have no reason to disparage the Turing machine model. There are very eminent scientists, such as Sydney Brenner (Nobel Prize in Chemistry), who believe not only that nature can be expressed and described by computational systems but that the Turing machine is a fundamental and powerful analogue for biological processes, as Brenner argues in his paper in Nature entitled “Life’s code script” (, ideas that Cronin and his co-authors seem to mock. According to Brenner, DNA is a quintessential example of a Turing machine-like system in nature, at the core of terrestrial biology, underpinning all living systems.

As for the importance of AIT (algorithmic complexity and algorithmic probability) in science, in this video of a panel discussion led by Sir Paul Nurse (Nobel Prize in Physiology or Medicine, awarded by the Karolinska Institute) at the World Science Festival in New York, Marvin Minsky, considered a founding father of AI and one of the most brilliant scientists, expressed his belief that the area was probably the most important human scientific achievement, urging scientists to devote their lives to it ( He did this sitting next to Gregory Chaitin, fellow panelist and one of the founders of AIT and algorithmic probability (and thesis advisor to Dr. Zenil, the senior author of the paper critiquing Assembly Theory).



Dr. Hector Zenil

Senior Researcher & Faculty Member @ Oxford U. & the Alan Turing Institute. Currently @ Department of Chemical Engineering & Biotechnology, Cambridge U.