Programmers can usually agree on whether the answer a program gives to a given problem is ‘correct’ or not.
For instance, in most languages there is an equivalent to the program:
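A minimal sketch in Python (any equivalent one-liner in another language will do):

```python
# add two numbers and print the result
print(3 + 4)
```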
And we expect the output of the program to be ‘7’.
But when you look at cellular automata the concept of ‘correct’ output is more complicated.
For instance, take ‘life’, by Conway.
I assume that everybody is familiar with the game of ‘life’, but in case you’re not, in a nutshell the ‘cells’ on an infinite grid are governed by a few very simple rules:
if a live cell has fewer than two neighbours it will die (‘loneliness’)
if a cell has 3 neighbours and is empty it will ‘come alive’
if a cell is alive and has 2 or 3 neighbours it will continue to be alive
if a cell has more than three neighbours it will die (‘overcrowding’)
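The four rules boil down to a single update function; here is a minimal Python sketch (the function and parameter names are my own):

```python
def next_state(alive, live_neighbours):
    """Apply Conway's rules to one cell, given its 8-neighbour census."""
    if alive:
        # loneliness (< 2) and overcrowding (> 3) both fall out of the
        # same comparison: a live cell survives only with 2 or 3 neighbours
        return live_neighbours in (2, 3)
    # an empty cell comes alive with exactly 3 neighbours
    return live_neighbours == 3
```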
In spite of the simple rules, the ‘life’ cells can exhibit extremely complex patterns, and in fact can be used to build a computer that will calculate anything you want, given enough resources (time, and ‘cell space’).
The game of life as envisioned by Conway works in generations, where each generation is created atomically.
Usually this is achieved by having two sets of states for each cell: during one generation the first set is the ‘source’ and the other the ‘destination’, then during the next generation the roles are reversed.
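This double-buffered scheme can be sketched in Python, assuming live cells are stored as a set of coordinates (my own representation, not part of any particular implementation):

```python
from collections import Counter

def step(source):
    # source: set of (x, y) coordinates of live cells (the 'source' state)
    # count live neighbours for every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in source
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # build the 'destination' state in one go; the source set is never
    # modified, so the whole generation switches atomically
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in source)}
```

Calling `step` twice on a row of three live cells returns the original row, which is the simplest ‘life’ oscillator mentioned below.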
When you implement ‘life’ cells as electronic building blocks you then have to store the current state, compute the new state, and on a synchronized clock signal have all cells copy the newly computed state to their output.
This is a serious limitation, though. For one, the size of a ‘world’ of cells like this is limited by the need to distribute the clock to all cells so they can switch state synchronously; it also creates the need for a central authority providing the clock.
To speed things up you could implement the ‘cell’ as a device that takes a census of its input pins: as long as the input is < 2 the output is ‘0’, 2 or 3 produce a ‘1’, and > 3 produces a ‘0’ again. You can do this fairly easily with some common logic gates.
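In Python terms, such a census device would behave like this (a behavioural sketch, not a gate-level design; the name is my own):

```python
def census_cell(inputs):
    # inputs: the 8 neighbour pins, each 0 or 1
    n = sum(inputs)
    # < 2 -> 0 (loneliness), 2 or 3 -> 1, > 3 -> 0 (overcrowding)
    return 1 if n in (2, 3) else 0
```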
Technically I think you could argue that this circuit still obeys the rules of life.
But the gates, even though they are digital in function from our high-level viewpoint, are actually built up from analog components.
And when wired up as described above, this analog nature of the logic gates suddenly becomes very apparent. The change of the output of a gate is not a clean ‘0’-to-‘1’ transition in zero time; it is a relatively slow rise through all the intermediate voltages between the voltage used to represent a ‘0’ and the one used to represent a ‘1’.
Care should be taken that such circuits would not start to oscillate.
But the game of ‘life’ actually has whole classes of oscillators! They’re digital, not analog, but they oscillate just the same.
A row of three live cells will turn into a column of three in the next generation, and then back into a row.
This oscillator, known as the ‘blinker’, is the simplest of all of them; it has a ‘period’ of only 2.
So maybe it is possible to ‘tame’ the wild oscillations of the analog components making up the gates, and to use some extra circuitry to dampen them, so the circuit can still be free running (‘clock-free’) and produce correct results.
So, the question is:
Is it possible to create a ‘black box’ circuit, with 8 input lines and 1 output line, without a global clock input that will give correct output when wired up into infinitely large configurations?
Or is the consequence of not having a central clock the introduction of some level of uncertainty into the calculations, and if so, how much of a problem does this pose?
Digital computing seems to me to be an all-or-nothing proposition: it either works, or it does not. But in real life there is no central authority, and a certain amount of ‘play’ in the system is taken for granted. And in spite of that it seems to work quite well, so maybe there is a way to relax these constraints in a digital context without losing the ability to produce correct results.