DMT-Nexus

Universality in nature
 
dromedary
#1 Posted : 4/4/2012 5:24:49 PM

This is a story in four parts of mathematics, logic, science and philosophy. I hope you enjoy reading it as much as I have enjoyed writing it.

Part 1: Search for a Universal Machine

In 1985, mathematician Stephen Wolfram made a remarkable claim: that the number 110, used as a kind of colouring code for filling in the next row of a grid, defines a computer as powerful as any we have ever built. "Rule 110", according to Wolfram, is a "universal machine". Specifically, it is "Turing-complete", which means it can be given the same instructions (software) and input data, and produce the same result, as any other Turing-complete computer - which is to say, essentially all computers. That was Wolfram's claim, at least.

Wolfram had been developing and studying these simple rules since the early 1980s, and they would become the centrepiece of his 2002 magnum opus A New Kind of Science; the numbering scheme used to identify them is now known as the "Wolfram code". The principle is relatively simple, and I will try to explain it in words and then visually. Imagine you have a sheet of paper containing a grid of squares. You colour some in at random along the top row. This is your "input string". You then colour in the next row by following a simple rule for each square, according to the three squares above it (the square directly above and the one on either side of it). If you think of a coloured-in square as a 1 and an empty square as a 0, there are eight possible combinations for any segment of three squares, which you can write as: 111, 110, 101, 100, 011, 010, 001, 000. Each rule colours in the square below some of these combinations and leaves blank the square below the others. There are 256 (2^8) different colouring schemes, which we call rules, and you can work out the colouring scheme for any rule by writing its number in binary below the 8 combinations, like this example for Rule 30 (image courtesy Wolfram MathWorld):



Along the top we see each of the possible patterns that might appear above a given square. Along the bottom we see the number 30 written out in binary (00011110). In the middle we see that squares corresponding to a 1 in the rule number are coloured in, while squares corresponding to a 0 are left blank. By following this colouring rule, line after line, a strange pattern emerges, as we see here for an input string consisting of a single coloured-in square in the middle of our top row (left, from Wolfram MathWorld) and after a few hundred iterations (right, my own render):



These rules are called the elementary cellular automata. Further examples and a more thorough explanation are available from the Wolfram MathWorld website. They are a fascinating topic in their own right, but for now let us return to Wolfram's peculiar claim that one of these rules - Rule 110 - is capable of universal computation. I have generated my own output for the first few iterations of Rule 110 below:



The variably sized triangular shapes that emerge in the pattern are referred to as "spaceships", or, more specifically for simple triangles like these, "gliders". They earn that name because of the way they seem to 'fly' through the grid over time, an impressively 2-dimensional feat for an object which emerges from exclusively 1-dimensional input and output. Wolfram saw that gliders of different types or sizes, moving at different speeds, could crash into each other and stop, or be destroyed, or cause a new glider to emerge from the wreckage. Certain patterns in the initial input string (or occurring naturally later) could become 'emitters' from which new gliders would launch periodically. More importantly, he noticed that certain patterns or 'structures' in the input string were stable, and would be conserved from line to line - at least until a glider crashed into them and caused a change. Wolfram hypothesised that an input string could be constructed that took advantage of the interactions between gliders, emitters and structures to simulate the essential parts of a computer - to store data, and to work through a series of instructions to perform computations.
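If you would like to generate these patterns yourself, here is a minimal sketch in Python (my own construction, not taken from any of the sources below; the rule number, grid width and number of steps are arbitrary choices). Each cell of the next row is found by looking up the three cells above it in the rule's 8-bit table:

def step(row, rule):
    """Compute the next row: each cell is set by looking up the three
    cells above it (left neighbour, itself, right neighbour) in the
    rule's 8-bit table. The row wraps around at the edges."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule=110, width=79, steps=40):
    row = [0] * width
    row[width // 2] = 1          # input string: a single coloured square
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row, rule)

run()        # try run(30) to reproduce the Rule 30 pattern shown above

The wrap-around at the edges is a simplification; a faithful simulation of an unbounded row would pad it with blank cells instead.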

This was Wolfram's contention, but as of 1991 (six years after the initial conjecture) he hadn't been able to prove it. He certainly could not demonstrate a working Rule 110-based computer! As his attention became increasingly stretched by his ballooning responsibilities as president of his growing company (Wolfram Research, makers of Mathematica), he assigned the task of proving the universality of Rule 110 to an enthusiastic young assistant named Matthew Cook, just 21 years old at the time. Over the coming months, as he struggled with ever more sophisticated approaches to the problem, Cook's enthusiasm faded, and eventually he confessed to Wolfram that he thought the problem was unsolvable. Wolfram disagreed, and encouraged Cook to keep working on it. Cook complied, and spent the next several years carefully identifying and cataloguing the different types of glider that emerged from the rule, in the hope of understanding their behaviour more thoroughly. By 1994 he had gathered enough information to suggest that a solution, however far off, might really be possible. All the right interactions between the gliders seemed to be there to make a universal machine, but constructing a mathematical proof required far more rigorous reasoning than the vague assertion of having the right pieces. A working computer based on Rule 110 still seemed far away.

By 1998, Cook was convinced that he had solved it. He presented his results, including the details of his proof, at a Santa Fe Institute conference that year, to the extreme displeasure of his employer. Wolfram Research, acting as a direct proxy for Stephen Wolfram, sued Cook for violating his non-disclosure agreement, which asserted intellectual property rights over all research conducted by company employees. Wolfram successfully obtained an injunction against publication of Cook's results, and everything went quiet for a while. A New Kind of Science was published in 2002, and explained Cook's proof in rough detail; in 2004 the original proof was finally published in Wolfram's own journal, Complex Systems. The years since have seen incremental improvements to Cook's work, mainly in reducing the overhead of the simulation. The universality of Rule 110 remains an esoteric area of research, and there is still no general method for compiling an arbitrary computer program into an input string. The science is there, so to speak, but the technology is still waiting to be built.

Part 2: Biology and Universality

In many ways the most exciting consequence of Cook's proof is not that we may be able to use it to build computers, but that the mechanism is so simple that nature has almost certainly already done so. Patterns resembling other elementary cellular automata have already been observed in nature. You might compare our example Rule 30 to the pattern displayed by Conus textile shells:



Any situation in which a 1-dimensional pattern (a line) is copied and changed may be explainable as a cellular automaton. The mechanism of deriving each cell from the three cells above it is reminiscent of many physical processes, from the simple extrusion of a blade of grass from a stem, to the complex process of DNA transcription into RNA, repeated billions of times every second:



The appearance of cellular automata in nature need not be so direct. The versatile Rule 184 has been found to model the way that falling particles deposit onto a surface, the way that particles and antiparticles mutually annihilate on collision, and even the way that traffic congestion builds and disperses on a busy road. The question is not whether nature creates cellular automata; it is simply a matter of finding them. Matthew Cook needed an input string of a few million cells - around a megabyte of data - to demonstrate universality. Although this figure has been substantially improved upon by subsequent research, it is an amount easily dwarfed by many common biological processes. The Paris japonica flower, to take an extreme example, has DNA containing around 149 billion base pairs. Since each base pair can be represented by two bits, this flower's DNA transcription process can be said to have an input string of over 35 gigabytes!
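For the sceptical, the arithmetic is easy to check (a back-of-the-envelope sketch; the base-pair count is the published estimate, everything else is just unit conversion):

cells = 8_000_000                 # "a few million cells", one bit each
print(cells / 8 / 1e6, "MB")      # 1.0 - about a megabyte

base_pairs = 149e9                # Paris japonica genome
bits = base_pairs * 2             # two bits encode each of A, C, G, T
print(bits / 8 / 1e9, "GB")       # 37.25 - comfortably over 35 gigabytes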



The endless folds of DNA and RNA encodings could comfortably be hiding computational systems in their depths. The consequence is that many natural processes which we have assumed to be regular, mechanical and predictable may in fact be the product of careful calculations executed simultaneously by trillions of molecular computers. Our powers of prediction over nature may very well turn out to be limited not by the progress of scientific research, but by a much deeper problem: the inherent unpredictability of computational systems. In computer science, programmers recognise the "halting problem" as a fundamental law of computing: there is no general procedure that can examine the source code (input string) of an arbitrary program and decide whether or not that program will eventually terminate. If computational systems arise in nature, the halting problem would seem to impose a similar limitation on our ability to predict how and when vital components of life will cease to function. Is death really inevitable? The problem may turn out to be undecidable.
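Turing's argument for this is short enough to sketch in code. The function halts below is hypothetical - the whole point of the argument is that it cannot actually be written - and the rest is just the standard diagonal trick:

def halts(program, input_data):
    """Hypothetical oracle: True if program(input_data) eventually
    terminates, False if it runs forever. (No such function exists.)"""
    raise NotImplementedError

def troublemaker(program):
    # Do the opposite of whatever halts() predicts about running a
    # program on its own source code.
    if halts(program, program):
        while True:
            pass              # predicted to halt, so loop forever
    else:
        return                # predicted to loop forever, so halt

# Now ask whether troublemaker(troublemaker) halts. If halts() answers
# True, troublemaker loops forever; if it answers False, troublemaker
# halts immediately. Either way the oracle is wrong, so it cannot exist.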

Part 3: A Logical Conclusion

The halting problem, first identified by Alan Turing in 1936, turned out to be a special case of a more general limitation in reasoning proven by the logician Kurt Gödel five years earlier. To understand where Gödel was coming from, we need to look back earlier still, to a mammoth effort by Alfred North Whitehead and Bertrand Russell to formalise the rules of mathematics into a consistent logical system. They published their work between 1910 and 1913 in a three-volume tome called Principia Mathematica, which to this day provides an important framework for the unification of mathematical research. Their work was widely celebrated, and seemed to close many logical gaps that had been left as unanswered questions in mathematics - but this logical soundness came at the cost of some expressive power.

One such cost was a weakening of set theory, which is closely related to computation and the construction of rigorous mathematical proofs. Set theorists had been plagued by a number of "paradoxes" involving self-reference: does the set of all sets that do not contain themselves contain itself? Russell and Whitehead closed the self-referential gap by denying that it exists. They assigned a numerical order to each level of self-reference in a set. A set that contained no sets, only regular objects, was of order 0. A set that contained other sets, which themselves contained only regular objects, was of order 1. A set that contained order 1 sets was of order 2, and so on. By modifying set theory so that a set could only contain sets of a lower order than itself, or regular objects, all the set-related paradoxes could be avoided. Principia Mathematica systematically tamed all the known self-reference in mathematics and, with considerable fanfare, was put forward as both consistent (all statements derived from it would be true) and complete (all true statements in mathematics could be derived from it).

Gödel, an expert in the area of formal proof, found the claim to be specious. He noticed that the claim to completeness - that the foundations set out in Principia Mathematica were powerful enough to derive all true statements in mathematics - was itself a self-referential claim which could not be proven. His reasoning went something like this:

1. Imagine the set of all possible true statements in mathematics.
2. We want to prove the claim that "the set of all true statements in mathematics can be derived from the rules of Principia Mathematica".
3. If our claim is true, then it must be part of the set of true mathematical statements we imagined in (1).
4. Since our claim contains a reference to the set of all true statements, it must not be part of the set of statements that can be derived from Principia Mathematica, from which only non-self-referencing statements can be derived.
5. Principia Mathematica must then be either inconsistent or incomplete - and possibly both.

Douglas Hofstadter illustrates how the theorems of a formal system can expand to fill the space of all true statements in that system - but never fill it completely - by drawing the shape of the derivations as a tree (image taken from Gödel, Escher, Bach):



Though the picture does not show it, the true shape of the tree, which is generated by a repeated application of a simple set of rules, is fractal in nature. The edges are not blurry, but are nevertheless impossible to define, because the pattern is self-similar at every level, no matter how closely you look. This property highlights the infinite detail that can occur in a sufficiently powerful formal system, which still somehow takes on a recognisable shape.

Notice that this kind of reasoning applies equally to any logical system that claims to be consistent and complete, not just Principia Mathematica. The general principle is called Gödel's incompleteness theorem, and its proof - which is significantly more nuanced than the plain-English version I have presented here! - is quite irrefutable, and more than a little bit eerie. It says, in essence, that it is impossible to formalise truth. Truth will always escape from any attempt to capture it in a framework of rules, no matter how broad and powerful those rules may be. With truth as the cornerstone of knowledge, we are forced against our will to scale back the hunt for ultimate knowledge. Science will prove no grand unified theory of everything, because it inherently lacks the tools to do so. Objective truth simply cannot exist, not even theoretically.

Part 4: Irreducible Complexity

None of this is to say that the scientific approach of reductionist positivism - breaking apart a problem, observing the behaviour of its parts, and finding logically consistent explanations - has no place. Science, and its more practically-minded cousin Technology, have been responsible for the most important advances ever made by humanity. We are conquering disease, and a growing proportion of the species is becoming part of a global communications network that allows any member to communicate with any other, instantly, from just about anywhere. The achievements of science and technology are not in question here. Yet the once seemingly boundless potential of understanding the world, and harnessing its power, by looking at how its parts interact may find the limits of its influence far nearer than anyone expected. If there are computational systems in nature, then the best we can hope for from scientific enquiry is to identify them. It will be a different field of study that sheds light on how these systems work, what they do, and whether we can use that knowledge to predict or control them.

It is through this channel of research that we will arrive, inexorably, back at Gödel's incompleteness theorem. It starts by trying to model the system. The behaviour and output of a system of computation - whether it takes the form of semiconducting silicon in your desktop computer or a plant's protein chain being transcribed and altered by a sophisticated enzyme - cannot in the general case be reduced to a simpler problem or solved in less time. It can only be computed, which is, unsurprisingly, exactly what these computational systems do. We can create useful abstractions (isomorphisms, in the language of formal systems) of the behaviour we identify, as we have done with our own computers by creating programming languages, and more programming languages on top of those programming languages. This is certainly useful for understanding (and recreating) certain abstract patterns in a way that makes sense to our own ideas about logic, but it will always be an unsatisfying, partial understanding. Moreover, it will never be complete in a formal sense - our isomorphisms will always eventually meet some observed phenomenon that seems to be inconsistent with the current theory.

Eventually we will all have to accept, as mathematicians did in the 1930s, that logical consistency is a property of our own subjective interpretations, not an inherent property of any system. The system is what it is. We can infer a set of rules and axioms and say that it is consistent with the observed evidence - that we have merely not yet found any evidence which is inconsistent - but we rarely consider how weak this statement really is. It is enough to describe many observed physical phenomena in a way that is useful to other humans, but it inevitably breaks down in the face of universality, where cause and effect are not related by any reducible physical mechanism. Building up digestible theorems based on the observed behaviour of a universal machine will never yield a higher understanding of it. We should learn from history, and not find ourselves working at the same hopeless task as Whitehead and Russell: attempting to capture all of mathematics in a few axioms and rules.

Fortunately we can study these systems in isolation with simulated cellular automata, which give us a way of exploring the strange internal landscape of our own logic. It is my contention that this new form of investigation, which looks deep into the structure of our own thought rather than the physical structure of matter, will be the next stage for humanity in our unending pursuit of understanding.

Bibliography

Weisstein, Eric W. "Elementary Cellular Automaton." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram...ryCellularAutomaton.html
Shalizi, Cosma (2002) "Review: A New Kind Of Science" http://www.cscs.umich.edu/~crshalizi/reviews/wolfram/
Cook, Matthew (2004) "Universality in Elementary Cellular Automata" Complex Systems, 15 1–40
Martinez, Genaro J. et al. "Reproducing the cyclic tag system developed by Matthew Cook with Rule 110 using the phases fi_1"
Hofstadter, Douglas (1979) "Gödel, Escher, Bach"
 

 
polytrip
#2 Posted : 4/4/2012 7:08:09 PM
Interesting. I wonder though..... Doesn't Gödel's theorem just apply to formal, linear systems alone, like maths and formal logic?

I'm asking because I've recently read an interesting article on the mathematical/informational phenomenon of 'reflection', which occurs in the human mind, DNA decoding, etc.
I've thought about this a lot and to me it seems that a phenomenon like reflection is not just a vital part of human consciousness, but the thing that separates rationality from logic.

I mean... it increasingly seems to me that there is a way of storing and processing information that is fundamentally different from linear systems. A way of information processing that isn't just linear. A way of information processing that differs from linear logic in that it holds literally one or more wholly 'new' dimensions of information.

This would mean that there can be information, a theory or whatever, and even certainty or proof of that information... but a form of proof that cannot be formalised into formal, linear systems, simply because it is too complex - because the proof would be expressed somewhere within its complex layers of reflection.

I'm especially talking about the kind of abstract phenomena that DO exist, like for instance determining what is 'rational'... As far as I can see, logically proving that something is a rational decision is almost impossible within formal logic. Yet not only is it true that within a set of choices there can always be one or more choices that are more rational; we can also somehow know that a decision is rational.

But the very processes that would make a decision rational cannot be formalised. For instance, everybody who's familiar with game theory knows that a model like the prisoner's dilemma shows that one choice can be more rational than the other... but the proof of that is inside our heads and not expressed anywhere within the matrix of the game.

There is no mathematical or logical formalisation for any form of reflection. There is no formalisation for "wait a minute... I've been looking for X, but it now seems I'd do better to look for Y".
A.I. tries to simulate those kinds of processes, and it is succeeding more and more. But even something not so very rigid, like language, can never 'contain' the truths contained in such a line of thinking... because language is still very linear.

So I think Gödel is definitely right. I even think that his theorem is only one application of a much more fundamental form of uncertainty and incompleteness... I tend to think that UNIVERSAL uncertainty is really an axiom, which can only be 'seen', but never proven from A to Z. But I also think that there is more to it than that. I think that there can be something such as 'truth' or 'rationality', but that those concepts come with a specific range, a specific usefulness or application, which cannot be expressed in a linear way.

I'd like to hear what you think about those thoughts.
 
nen888
#3 Posted : 4/5/2012 8:27:19 AM
..thanks dromedary, there's a few things to ponder there..
a few quick thoughts..you wrote:
Quote:
Fortunately we can study these systems in isolation with simulated cellular automata, which give us a way of exploring the strange internal landscape of our own logic.

..if i may go all the way back to the Platonists, what actually are numbers? we have the abstract notion of, say, 'three'..but does this relate to the structure of our minds, rather than something that exists 'objectively'?.. mathematics seems to tell us a lot about the limitations of mind..and computation..
..there are, after all, 'practically uncomputable problems', meaning they would need an infinite amount of time to be solved..yet some kind of pattern-recognising ability within us allows us to 'feel' or comprehend infinities..
the human mind, or consciousness, seems, to me, a means of achieving the impossible..partitioning within infinity
..in other words, any kind of absolute understanding of reality (if that's possible) must be beyond the scope of mathematics alone to describe...

 
dromedary
#4 Posted : 4/5/2012 7:17:39 PM

So much to answer, so little time! I appreciate all your responses even if I do not have time to address every point in each of them.

polytrip wrote:

I wonder though..... Doesn't Gödel's theorem just apply to formal, linear systems alone, like maths and formal logic?

As it turns out, there is not much difference between a Turing machine and a formal system. The initial state of the machine (the instructions and memory) is equivalent to an axiom of the system, and the process of execution is equivalent to the set of rules by which one may derive theorems in the system. Executing a program is essentially theorem building, and determining whether a particular state (such as the halting state) is reachable is an exercise in theorem proving.
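To make the analogy concrete, here is a toy sketch (my own construction, not anything standard): a configuration of the machine is written as a tuple, the starting configuration is the lone axiom, one step of execution is the single rule of derivation, and asking whether the machine halts is asking whether any configuration in the "HALT" state is a derivable theorem. The machine itself is an arbitrary example that flips bits until it runs off the end of its input.

transitions = {
    # (state, symbol) -> (new_symbol, head_move, new_state)
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "HALT"),
}

def derive(config):
    """One derivation step: from one configuration (theorem) produce the next."""
    tape, head, state = config
    symbol = tape[head] if head < len(tape) else "_"
    new_symbol, move, new_state = transitions[(state, symbol)]
    tape = tape[:head] + new_symbol + tape[head + 1:]
    return (tape, head + move, new_state)

config = ("0110", 0, "flip")      # the axiom: starting tape, head position, state
while config[2] != "HALT":        # follow the derivation until a HALT theorem appears
    config = derive(config)
print(config)                     # ('1001_', 4, 'HALT')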

polytrip wrote:

I've thought about this a lot and to me it seems that a phenomenon like reflection is not just a vital part of human consciousness, but the thing that separates rationality from logic.

I think your notion of reflection is equivalent to my notion of self-reference, or Hofstadter's notion of 'Strange Loops'. It's interesting that you bring it up in relation to consciousness, and I agree wholeheartedly that the ability to 'break out' of a system by 'reflecting' on it must indeed be an important part of consciousness and intelligence.

nen888 wrote:

if i may go all the way back to the Platonists, what actually are numbers? we have the abstract notion of, say, 'three'..but does this relate to the structure of our minds, rather than something that exists 'objectively'.. mathematics seems to tell us a lot about the limitations of mind..and computation

You may! If we were to visit Plato's Academy c. 300BC, we might find a young Euclid working on the first and most tenacious attempt to formalise number theory: his treatise of geometry, Elements. I must make this point indirectly, so I hope you will bear with me.

Euclid struggled to express abstract (but seemingly universally true) numerical concepts in the Greek language of his day, but with a set of ten simple axioms (that is, assumptions) he managed to set out a consistent framework for making statements about numbers and shapes that is still used today. One of these axioms bothered Euclid, though it was necessary for many of the theorems in Elements, and he had to admit it did seem like a universal truth. It was the parallel postulate, which asserts that if you have a line and a point not on that line, then there is only one possible line through that point which never intersects the first line. Nearly two millennia later, a handful of mathematicians around the world attempted to prove the parallel postulate by asserting its opposite and noting the contradictions that arise (this method of 'proof by contradiction' was in fashion at the time).

To everyone's amazement, there were no contradictions to be found. The system remained consistent because their interpretation of the rules had changed. It turns out you can arbitrarily change the rules and axioms of a formal system and it never becomes inconsistent, because consistency is a property of our interpretation, not a property of the system itself. Replacing the parallel postulate with altered versions led to the discovery of hyperbolic and elliptic geometry, which produce theorems totally inconsistent with those of traditional Euclidean geometry, yet both are in their own ways equally valid. I don't see any fundamental reason why you could not do the same thing with the rules and axioms that relate directly to numbers. Would an alternative number system that gives different answers to the same problems be less 'real' than the number system we use? I suspect that the way we define 'real' may be just another aspect of interpretation.

I find the opposing viewpoint equally compelling. Say we manufactured an indestructible steel tablet and engraved upon it a pattern of dots:

.
.
..
...
.....
........
.............

and fired it off at some distant galaxy, and it was scooped up by an alien civilisation with nothing in common with humanity that would suggest any reason why we might share common traits in our habits of interpretation. It is likely that the aliens would at least recognise that it was a manufactured object, since it does not look like anything that occurs naturally. It does not seem like much of a jump to suggest that they would notice the engraved dots, and think to count them, and if they did that, then it seems very reasonable that they might notice a pattern - in this case the Fibonacci sequence - which requires (or at least implies) a common understanding of numbers, addition and sequences to interpret. Does this mean those concepts are universal? Are they perhaps an essential part of intelligence?

Zip wrote:

Mathematics is also a relational structure. If there is an isomorphism between two relational structures, in what sense is it useful to see them as different? Why is it necessarily the case that we will "always eventually meet some observed phenomena that seems to be inconsistent with the current theory"?


There can be a world of difference between two isomorphisms that relate the same structure. I hinted at one in my original post - the isomorphism between DNA (the genotype) and the individual it embodies (the phenotype). You might consider a simpler relation - between a video stored on an optical disc and the same video rendered on a screen. Both are "accessed" the same way - by reflecting light off a surface to a receiver - but no one would claim they are equivalent. Even if we step back into the world of formal systems, we can readily find examples where it is useful to see two isomorphisms as different. We could create a trivial formal system that produces statements of the form "xPyQz", where x, y, z are strings of dots, where ".P..Q..." is the only axiom, and where the only rule is that "if xPyQz is a theorem, then x.PyQz. and xPy.Qz. are both theorems". A useful isomorphism would be to say that P means plus and Q means equals, and to use the system to add up the dots in the strings. Even though every theorem derived in our PQ system comes out true under this isomorphism, the interpretation is still richer than the system, because we can write something clearly true like 1+1=2 which is unprovable in the PQ system: .P.Q.. cannot be derived from the axiom .P..Q... using our simple rule of derivation, since the rule only ever adds dots. It is an unreachable truth.
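To make that concrete, here is a short sketch (my own, loosely in the spirit of Hofstadter's pq-system) that enumerates every theorem of the toy system up to a given length and confirms that .P.Q.. never turns up:

def theorems(max_length):
    """Enumerate theorems of the toy PQ system described above.
    Axiom: ".P..Q..."; rule: from xPyQz derive x.PyQz. and xPy.Qz."""
    found, frontier = set(), {".P..Q..."}
    while frontier:
        t = frontier.pop()
        if t in found or len(t) > max_length:
            continue
        found.add(t)
        x, rest = t.split("P")
        y, z = rest.split("Q")
        frontier.add(x + "." + "P" + y + "Q" + z + ".")   # lengthen x and z
        frontier.add(x + "P" + y + "." + "Q" + z + ".")   # lengthen y and z
    return found

ts = theorems(12)
print(".P..Q..." in ts)    # True  - the axiom (1+2=3) is a theorem
print("..P..Q...." in ts)  # True  - 2+2=4 is derivable
print(".P.Q.." in ts)      # False - 1+1=2 is true but unreachable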

In the PQ system example, we are creating an isomorphism in a more complex system and finding that we can use it to reach truths that could not be derived in the original system. When we try to "model" (i.e. simplify with qualified assumptions) a Turing machine, we find that the tables have been turned. Since Turing machines are 'universal', their output cannot be predicted without doing the work of the machine - not even in a general sense, not to any degree of accuracy. Any effort to describe the output of these systems using a less complex system must fail, so we must find ways of studying them without attempting to reduce their complexity. This is where the study starts to bleed into computer science, because computer scientists are already well acquainted with methods of working with irreducibly complex processes.
 
dromedary
#5 Posted : 4/5/2012 7:55:56 PM

Zip wrote:

I wasn't talking about isomorphisms that relate to the same structure, but rather relational structures that are isomorphic. This was prefixed by presupposing the rather popular thesis of ontic structural realism in the philosophy of physics and an isomorphic mathematical relational structure. I do appreciate the explications on incompleteness though.


My apologies for the misunderstanding. I am afraid I am not familiar enough with ontic structural realism to say anything useful about it here. Perhaps you can recommend some readings?
 
 