Inefficient Machines

In most computers today you have the same basic structure: computing hardware, composed of millions of transistors, that gets data from its surroundings (normally registers) and puts values back (into other registers), and data storage. Of course, you can have multiple computing units (integer, floating point, vector, etc.) and multiple layers of data storage (registers, caches, main memory, disk, network, etc.), but it all boils down to these two basic components.

Between them you have the communication channels, which are responsible for carrying the information back and forth. In most machines, the further you are from the central processing unit, the slower the channel: satellite links are slower than network cables, which are slower than PCIe, the CPU bus, and so on. But, in a way, since the whole objective of the computer is to transform data, you must have access to all the data storage in the system to have a useful computer.
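To put rough numbers on that hierarchy, here is a minimal sketch in Python. The latencies are ballpark, order-of-magnitude figures only; real values vary widely from machine to machine.

# Ballpark latencies per access, in nanoseconds. Orders of magnitude only.
APPROX_LATENCY_NS = {
    "register":       0.5,            # roughly one CPU cycle
    "L1 cache":       1.0,
    "main memory":    100.0,
    "SSD":            100_000.0,      # ~0.1 ms
    "LAN round trip": 500_000.0,      # ~0.5 ms
    "spinning disk":  10_000_000.0,   # ~10 ms seek
    "satellite link": 600_000_000.0,  # ~600 ms round trip
}

for channel, ns in APPROX_LATENCY_NS.items():
    slowdown = ns / APPROX_LATENCY_NS["register"]
    print(f"{channel:15} ~{ns:>15,.1f} ns ({slowdown:,.0f}x a register access)")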

Not-so-useful

Imagine a machine where you don’t have access to all the data available, but you still depend on that data to do useful computation. What happens is that you have to infer what the data you needed was, or get it through another, indirect path, converted into subjective ideas and low-quality patterns, which then have to be analysed and matched against previous patterns, with almost-random results coming out of such poor analysis.

This machine, as a whole, is not so useful. A lot less useful than a simple calculator or a laptop, you might think, and I’d agree. But this machine also has another twist. The data that cannot be accessed has a way of changing how the CPU behaves, in unpredictable ways. It can increase the number of transistors, change the width of the communication channels, completely remove peripherals or add new ones, and so on.

This machine has, in fact, two completely separate execution modes: the short-term mode, executed within the inner layer, in which the CPU makes decisions based on its inherent hardware and on the information that lies far beyond the outer layer; and the long-term mode, executed in the outer layer, which can be influenced by the information beyond it (plus a few random processes) but never (this is the important bit: never) by the inner layer.
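A toy model, with made-up names and values, may make that one-way influence more concrete: the inner layer reads the outer layer’s data to make its decisions, but there is deliberately no path through which it can write anything back.

# Toy model only: the class names and values below are invented to
# illustrate the one-way influence between the two layers.

class OuterLayer:
    """Long-term data: changed only by external (or random) processes."""
    def __init__(self, blueprint):
        self._blueprint = dict(blueprint)

    def read(self, key):
        return self._blueprint[key]

    def external_mutation(self, key, value):
        # Only the outside world calls this; the inner layer never does.
        self._blueprint[key] = value

class InnerLayer:
    """Short-term mode: configured by the outer layer, cannot write back."""
    def __init__(self, outer):
        self._outer = outer

    def decide(self, stimulus):
        # Decisions depend on the outer layer's data, but this class has
        # no method that modifies the OuterLayer.
        return "act" if stimulus > self._outer.read("threshold") else "wait"

outer = OuterLayer({"threshold": 5})
inner = InnerLayer(outer)
print(inner.decide(7))                    # "act"
outer.external_mutation("threshold", 10)  # only an external process changes it
print(inner.decide(7))                    # "wait"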

The outer layer

This outer layer changes data by itself; it doesn’t need the CPU for anything, because the data is, itself, the processing unit. The way external processes act on this layer is what makes it change, on a very (very) slow time scale, especially when compared to the inner layer’s. The inner layer is, in essence, at the mercy of the outer layer.

This machine we’re talking about, sometimes called the ultimate machine, has absolutely nothing ultimate about it. We can build computers that can easily access their outer layers of data, change them or even erase them for good, as easily as they do with the data in the inner layer.

We can, today, build machines much better designed than this infamous machine. Comparing designs, our current computers follow a much more elaborate, precise and analytical design; we just need more time to get it to perfection, but it is my opinion that, in matters of design, we are already far beyond that of life.

Living machines

Living creatures have brains (the CPU and the inner memory), a body (all the other communication channels and peripherals to the world beyond), and genes, the long-term storage that defines how all the rest is assembled and how it behaves. But living creatures, contrary to Lamarck’s beliefs, cannot change their own genes at will. Not yet.

The day humans start changing their own genes (and that day is not too far away), we’ll have perfected the design, and only then will we be able to call it the ultimate machine. Only then will the design have been perfected, and only then will the machine be able to evolve.

Writing your own genes would be like giving an application the right to rewrite the whole operating system. You rarely see that in a computer system, but that’s only because we’re limited to creating designs similar to ourselves. This is why all CPUs are sequential (even when they’re parallel): because our educational model is sequential (to cope with mass education). This is why our machines haven’t been self-mending from the beginning: because we aren’t.
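As a hypothetical, small-scale illustration of the idea, here is a toy Python script that rewrites its own source file every time it runs; the generation counter and the rewrite rule are invented for the example, nothing more.

# Toy example: a script allowed to edit its own long-term storage
# (its source file), bumping a counter on every run.
import re

GENERATION = 0  # rewritten in place by rewrite_self() on each run

def rewrite_self():
    with open(__file__, "r") as f:
        source = f.read()
    new_source = re.sub(r"GENERATION = \d+",
                        f"GENERATION = {GENERATION + 1}",
                        source, count=1)
    with open(__file__, "w") as f:
        f.write(new_source)

if __name__ == "__main__":
    print(f"Running generation {GENERATION}")
    rewrite_self()  # the next run will report the next generation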

Self-healing is a complex (and dangerous) subject for us because we don’t have first-hand experience of it, but given the freedom we have when creating machines, it is a complete lack of imagination not to attempt it. It is a complete waste of time to model intelligent systems as if they were humans, to create artificial life with simple neighbouring rules, and to think that an automaton is only a program that runs alone.
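We do already build very modest forms of self-mending. Here is a sketch, with an invented flaky task, of the kind of supervisor loop that notices a failure and restarts the work:

# Minimal self-mending sketch: a supervisor retries a failing task.
# The failure rate and the task itself are made up for illustration.
import random
import time

def flaky_task():
    if random.random() < 0.3:      # pretend the task sometimes fails
        raise RuntimeError("transient failure")
    print("task completed")

def supervise(task, attempts=5, delay=0.1):
    for attempt in range(1, attempts + 1):
        try:
            task()
            return True
        except RuntimeError as err:
            print(f"attempt {attempt} failed ({err}); restarting")
            time.sleep(delay)
    return False

supervise(flaky_task)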

Agile Design

The concept of intelligent design was coined by people who understand very little about design and even less about intelligence. The design of life is utterly poor: it wastes too much energy, it provides very little control over the process, it has too many variables and too little real gain at each step.

It is true that, from a hardware point of view, our designs are very bad when compared to nature’s. Chlorophyll is much more efficient than a solar cell, spider silk is much stronger than steel of the same weight, and so on. But the overall design, how the process works and how it gets selected, is just horrible.

If there were creators of our universe, it would have to be a good bunch of engineers with no management at all, creating machines at random just because it was cool. There was no central planning, no project, just ad-hoc features emerging and lots of easter eggs. If that’s the image people want to have of a God, so be it. Long live the Agile God: a bunch of nerdy engineers playing with toys.

But design would be the last word I’d use for it…