Compute Life


Whatever the first self-replicating molecule was, whether it was RNA-like, or even a cooperation of more than one molecule … whether the membrane came first, or the stuff inside it … we evolution believers think that something came "first."

Once that first self-replicating molecule existed, it was able to multiply. Evolution was off and running. The question is: How hard was it to generate the first such molecule?

I have heard apologists preach about how impossible it was for any such molecule to come about randomly. An analogy is often made to the unlikely event of finding a watch on the ground in the middle of the woods. Would you ever say, "Wow, look at what evolved out here!"? Never. But a watch cannot copy itself; the lack of any mechanism for self-replication is exactly what makes this analogy inappropriate.

Another argument puts forth the idea that even the simplest living thing is "irreducibly complex," and therefore could not have evolved. The canonical case for this argument is the flagellum of certain bacteria. It was touted as being so complex that the removal of a single component would make the entire structure useless, and that it therefore could never have evolved. This was widely believed until a similar structure (the type III secretion system) was found in bacteria such as Salmonella: essentially a flagellum missing several of those "necessary" components. It cannot produce motion, but it still serves a purpose -- it injects proteins into host cells.

I think a more appropriate analogy can be made to a class of problems known as "NP-complete." In 1971, the mathematician Stephen Cook introduced the concept of NP-completeness, a class of problems for which no polynomial-time algorithm is known -- i.e. as far as anyone can tell, they take a really long time to solve. The traveling salesman is perhaps the best known such problem.

Consider a salesman who must visit several locations on a map. He (or she) wants to minimize the distance traveled to save time and money, so he attempts to determine the shortest route that visits every location at least once. There is no known quick way to find the optimal answer. Many of the world's smartest people have been trying for 40 years; no one has found a fast general algorithm, and it is widely believed, though not yet proven, that none exists. You may not need to try every possibility, but as far as anyone can tell you have to try so many that the problem can't be solved in polynomial time.
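
To make that growth concrete, here is a minimal brute-force sketch in Python. The distance matrix is made up purely for illustration; the point is that the only sure-fire method is to enumerate orderings, and the count of orderings explodes.

```python
# Brute-force traveling salesman: try every ordering of the stops and keep the
# cheapest round trip. The 5x5 distance matrix below is invented for the demo.
from itertools import permutations

def shortest_tour(dist):
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):            # fix location 0 as the start
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
print(shortest_tour(dist))   # 5 locations -> only 4! = 24 orderings to check

# At 20 locations it is 19! (about 1.2 * 10^17) orderings -- hopeless this way.
```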

We are now able to solve small instances of this problem very quickly, and at extremely low power, using a molecular computer. This "computer" is basically a soup of smaller molecules that combine into bigger ones. Each small molecule represents a link between two locations. A complex analysis of the resulting larger molecules reveals the optimal solution to the salesman problem.
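
As a toy software stand-in for that molecular soup, the sketch below draws a large batch of random candidate tours "in parallel" and keeps the shortest. Real DNA computers encode locations and links as strands that assemble into candidate routes and are then filtered chemically, so this only mimics the spirit of the method; the function name and trial counts are my own illustrations.

```python
# Soup-style search: massive random trial-and-error instead of clever planning.
import random

def soup_search(dist, n_candidates=100_000):
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for _ in range(n_candidates):                 # each draw ~ one random "molecule"
        perm = list(range(1, n))
        random.shuffle(perm)
        tour = [0] + perm + [0]
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# On a random 8-location instance there are only 7! = 5040 orderings, so a
# hundred thousand blind draws almost surely include the optimal one.
cities = 8
dist = [[0 if i == j else random.randint(1, 20) for j in range(cities)] for i in range(cities)]
print(soup_search(dist))
```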

This computer brings us back to the discussion above. The salesman problem is NP-complete. There are many other NP-complete problems, and they are all related to each other in a way that's referred to as "polynomial-time reducible." It just means that if you find a good algorithm for solving one, it can be translated to other NP-complete problems. I submit that the problem of assembling a given molecule from a soup of random pieces is also an NP-complete problem.
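
The post doesn't spell out a reduction, but a classic textbook one gives the flavor: the Hamiltonian-cycle question can be recast as a salesman instance, so a fast salesman solver would instantly hand you a fast Hamiltonian-cycle solver. A small sketch (the function names and the 4-vertex example are mine):

```python
# Hamiltonian cycle -> traveling salesman, the standard textbook reduction.
from itertools import permutations

def hamiltonian_to_tsp(n, edges):
    """Build a TSP instance whose optimal tour has length exactly n
    if and only if the n-vertex graph given by `edges` has a Hamiltonian cycle."""
    w = [[2] * n for _ in range(n)]               # weight 2 for missing edges
    for u, v in edges:
        w[u][v] = w[v][u] = 1                     # weight 1 for real edges
    return w

def tsp_optimal_length(w):
    n = len(w)
    best = float("inf")
    for p in permutations(range(1, n)):           # brute force, fine for a demo
        tour = (0,) + p + (0,)
        best = min(best, sum(w[a][b] for a, b in zip(tour, tour[1:])))
    return best

# The 4-cycle 0-1-2-3-0 has a Hamiltonian cycle, so the reduced instance
# admits a tour of length exactly 4.
w = hamiltonian_to_tsp(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(tsp_optimal_length(w) == 4)   # True
```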

The salesman problem was solved by trillions of trial-and-error attempts all done in parallel. Eventually they had to hit upon the solution because the options are limited. Given a finite number of atoms, there is a finite number of potential molecules. NOTE: I used the word "trillion" (10^12) because it sounds big. The number of potential interactions per second per drop is much greater. A single drop of water may have 10^22 atoms!

Let x be the number of atoms in the first self-replicating molecule. There is a finite number of possible combinations of all the elements which can form a molecule having x atoms. Nature searches this space by random accident -- trillions upon trillions of random accidents all executed in parallel. Searching this space is NP-hard, but not impossible. In fact, I suspect it's polynomial-time reducible to the salesman problem for x locations.

The problem is made easier by the fact that atoms do not float around on their own. They are often assembled into small molecules such as amino acids.
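
A back-of-envelope sketch of what that parallelism buys. Every number below is a rough assumption of mine, not a figure from the post; the point is only that the trial budget of a wet planet over a billion years dwarfs even a very large search space.

```python
# Rough orders of magnitude only -- all inputs are illustrative guesses.
interactions_per_drop_per_sec = 1e12    # the post's deliberately low "trillion"
drops_of_water_on_earth = 1e24          # order-of-magnitude guess for the oceans
seconds_in_a_billion_years = 1e9 * 365 * 24 * 3600

trial_budget = interactions_per_drop_per_sec * drops_of_water_on_earth * seconds_in_a_billion_years
print(f"trial budget: {trial_budget:.1e}")          # ~3e52 random accidents

# Toy search space: chains of length 30 built from 20 kinds of building blocks
# (think amino acids) -- huge, yet still far smaller than the trial budget.
search_space = 20 ** 30
print(f"search space: {search_space:.1e}")          # ~1e39 possible chains
```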

The problem is made even easier by the fact that there might have been many -- perhaps an enormous number -- of potential first self-replicating molecules. Our first self-replicator may not be the only one possible. All nature had to do was to find one -- just one! Nature kept pulling the slot-machine's lever until it hit a jackpot. This was happening all over the earth many trillions of times per second in every drop of water.
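
The slot-machine picture can be made quantitative. The sketch below computes the chance of at least one jackpot given the per-pull odds and the number of pulls; both numbers are arbitrary illustrations, not estimates.

```python
# Probability of at least one success across many independent random trials.
import math

def p_at_least_one_hit(p_single, n_trials):
    # exp/log form avoids underflow when p_single is tiny and n_trials is huge
    return 1.0 - math.exp(n_trials * math.log1p(-p_single))

# Even a 1-in-10^30 shot becomes more likely than not after 10^30 pulls,
# and a near-certainty with a few times that many.
print(p_at_least_one_hit(1e-30, 1e30))   # ~0.63
print(p_at_least_one_hit(1e-30, 5e30))   # ~0.99
```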

Considering the sheer enormity of the computational capacity of a single drop of water, it seems inevitable that a planet full of the stuff would eventually compute a self-replicating molecule.

For any planet, finding that first self-replicating molecule is a challenge, but it's only NP-hard (there are harder problems). Not only can it be solved, I suspect that any planet which could support life will support life (given at least a billion years of habitable time).

Each one of us is in essence a pocket of extraordinarily low entropy. Variance happens. It just does. Our particular existence wasn't inevitable, but the existence of some living thing certainly was. That's why it came about so quickly -- almost instantaneously (geologically speaking).

Life is a computation problem.

---------------------------------------------------------------------------
Links:

NP-Complete

Automata, Computability, and Complexity (GITCS)

DNA computer helps travelling salesman

Traveling Salesman Problem Based on DNA Computing


2 Responses to “Compute Life”

  1. Sowmya Rajasekaran

    Very thought provoking, as usual! :-) The salesman problem... it indicates that, probably, all problems are computationally tractable though specific algorithms to solve the problem may not be so - it is then a question of finding an algorithm that is capable of solving the problem and efficient enough to become computationally tractable. Something to remember when the answer seems to be hard to reach! :-)

    Once that first self-replicating molecule occurs, under what circumstances would it induce similar phenomena in atoms/molecules that previously were 'inert' to manifesting such phenomena? For instance, we believe that Life requires presence of certain matter (such as, nitrogen, oxygen, etc) but why should it necessarily be so? Supposing there exists a range of states that support life on a planet? That is, instead of stating that Life can exist iff xyz condition exists, perhaps we can state that Life can exist in set N(n1,n2,n3,...) where n1, n2, n3,... etc are states some of which may also be mutually exclusive. We know n1 (life on earth) and perhaps we do not know the other states. It then puts the focus on our definition of Life... Supposing n2 induces n3 state to come into existence? The possibilities are endless and that is the charm of pure science! :-)

    It is sort of shocking, though, to think that of the trillions of interactions happening every second, just one had to result in a self-replicating molecule and bring about so much transformation of living and non-living matter in a geologically small period of time. How do we know that one interaction might not result in death in a geologically small period of time? But then, death is only absence of life and not a real state by itself. All that it would mean is that n1 ceases to exist but some other conducive state could come about in future resulting in life and it could be a really long time when each state could emerge and dissolve or be substituted by others and life existing in discontinuous time series... and all we have is hundred years or so to understand the vastness and experience the richness! :-)

    Thank you once again for such a lovely post! :-)

  2. Graham Morehead

    Thanks Sowmya!

    The first self-replicating molecule was probably more like a virus than anything we would currently call "alive". I would still call it inert.

    I love the ideas surrounding a more generalized definition for life -- definitions that don't include oxygen, hydrogen, etc. Some don't even require matter. Here's a definition that I've cobbled together, taken mostly from Stuart Kauffman:

    Define a closed surface. Let H(t) be the entropy of the matter and energy within that surface at t. If there exists a time t1 > t0 such that: H(t0) >= H(t1), and during this time there was a constant flux of matter and/or energy through the closed surface, and the system as a whole increased in entropy, then the surface enclosed a living thing during the time t0 to t1.
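
    Putting the same definition in symbols may help (notation mine: H(t) is the entropy inside the closed surface, \Phi(t) the flux of matter and/or energy through it, and S_total(t) the entropy of the contents plus surroundings):

    ```latex
    \exists\, t_1 > t_0 \ \text{such that}\quad
      H(t_1) \le H(t_0), \qquad
      \Phi(t) \ne 0 \ \ \forall\, t \in [t_0, t_1], \qquad
      S_{\mathrm{total}}(t_1) > S_{\mathrm{total}}(t_0)
    \;\Longrightarrow\;
    \text{the surface encloses a living thing on } [t_0, t_1].
    ```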

    Thanks again for your comments!
