
Translation of “Waarom ID niet klopt”

[ Translation of Jan Willem Nienhuys, “Waarom ID niet klopt,” Skepter, vol. 17, no. 3 (Fall 2004), pp. 36-38 (in Dutch; the title means “Why ID doesn’t add up”). Translation by Jan Willem Nienhuys. ]

The neocreationists hide behind the term Intelligent Design and try to pass themselves off as scientific. But their science is a shaky, ramshackle affair.

The Dutch magazine Skepter hosted a discussion about so-called Intelligent Design from March 2002 to the fall of 2003. The proponents, Dekker and Meester (professors of nanophysics and probability theory, respectively), think that standard sciences like biology, physics and chemistry don’t tell the whole story about the universe and life on earth. Indeed, they think they have scientific proof that something or somebody has cleverly put this world together. Some parts of our world simply can’t have arisen ‘by accident.’

Science is filled with unsolved puzzles. You have to be blind not to know that. When Newton had discovered his law of gravitation and worked out the consequences, he was confronted with the problem that the solar system and the universe might not be stable. When Darwin published his theory of evolution, it was clear that evolution needed lots of time. Physicists remonstrated: the sun could not produce so much energy for so long, and neither could the earth stay hot, with volcanoes and the like.

It is tempting to seek explanations for things you can’t explain. In the 1930s physicists had noticed that in some kinds of radioactivity energy seemed to disappear, and an amount of rotation (technically: angular momentum) along with it. Wolfgang Pauli then suggested that an invisible particle was responsible. That was the neutrino, but it was actually detected only much later, in 1956. In today’s cosmology one can see from the motion of galaxies (among other things) that the universe must contain much more mass or energy or both, of an altogether mysterious nature. For these riddles many explanations are offered.

Newton himself thought that God’s hand guided the solar system; possibly he held God responsible for gravity too. Wisely he kept his thoughts to himself, and merely wrote in his Principia ‘hypotheses non fingo’: I’m not going to make up hypotheses. In other words: here are the rules of computation that must hold because of the experimental data, and what I happen to think about the ultimate causes is irrelevant.

Biology is much more complicated than physics. Two scientists, the biochemist Behe and the mathematician Dembski, have tried to show that some tiny bits of biology are so complicated that it looks as if some smart guy put them together. They don’t say explicitly that this smart guy is the author of the Bible. Behe and Dembski are part of an American Christian conspiracy that wants to use the Bible as a biology textbook. The plan of the conspirators is first to make their alternatives to evolutionary theory respectable, then to demand that those alternatives be incorporated into education, and that at the same time the E-word and scientific materialism be removed from the classrooms, or demoted to the status of Germanic creation myths: something not necessarily true, and so unimportant that one could do without it. The first step is the effort to show that it is mathematically certain that the theory of evolution cannot solve specific problems, and that other possibilities must therefore be investigated.

If you don’t want to take that first step, you are a dogmatic ass with large blinders, quite unlike the true scientists, who of course don’t want to abolish evolution at all, but who have a broader outlook than their narrow-minded colleagues. But do those unsolvable problems really exist?

Weevil

Fred Hoyle once proposed the metaphor of a hurricane blowing through a junkyard full of parts and leaving behind a perfectly working airplane. The probability of life as we know it arising would be comparable to that of this event. Behe and Dembski have refined this argument. Behe pointed out constructions in living beings that consist of three or more essential parts; he called these irreducibly complex. An example was the bacterial flagellum, but his paradigm was a mousetrap. Dembski looked in more detail into how improbable such things are. He claims that things containing the amount of information of a sequence of about 100 signs (digits, or letters with spaces) are too improbable to arise ‘accidentally’, even taking the age of the universe into account. If such an object has a more or less predetermined function, then it must be intelligently designed.
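Where the figure of 100 signs comes from is easy to reconstruct. The following back-of-the-envelope computation is my own, but the ingredients are the ones Dembski uses to derive his ‘universal probability bound’ of 1 in 10^150:

```python
import math

# Dembski's "universal probability bound": about 10^80 particles in the
# observable universe, at most 10^45 state changes per second, for 10^25
# seconds: at most 10^150 chance events in the history of the universe.
trials = 10**80 * 10**45 * 10**25
bound_bits = math.log2(trials)            # about 498 bits

# With 27 equally likely signs (26 letters plus the space), how long a
# sequence carries that much information?
signs = bound_bits / math.log2(27)
print(f"{bound_bits:.0f} bits, or a sequence of about {signs:.0f} signs")
# -> 498 bits, or a sequence of about 105 signs: hence "about 100 signs"
```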

Dembski also has a paradigm taken from daily life. If you find an arrow sticking in a barn door, with larger and smaller circles around it, that does not by itself mean the bowman is a good shot. We know that only if we know that the circles were painted before the arrow was shot. It is tacitly assumed that the arrow was shot, rather than stuck in by hand. Dembski uses the term complex specified information.

Dembski knows, of course, that evolution theorists surmise that complicated structures are formed in small steps, one step at a time. As a metaphor they use a procedure for seeking the highest point in a hilly landscape: keep going up, but regularly take a random step, so as not to get stuck on a low hill. Dembski argues that this method can’t work. There is a theorem that says that no search method is better than a random search: the ‘No Free Lunch’ theorem.
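The metaphor is easy to make concrete. Here is a minimal sketch of my own (the landscape and all the parameters are invented for illustration): a climber that mostly steps uphill, but occasionally makes a blind jump so that it does not stay stuck on the first low hill it finds.

```python
import math
import random

def fitness(x):
    # A toy landscape: a low hill near x = 2 and a higher hill near x = 8.
    return math.exp(-(x - 2) ** 2) + 2 * math.exp(-((x - 8) ** 2) / 4)

x = 0.0
best = fitness(x)
for _ in range(10_000):
    if random.random() < 0.05:
        x += random.uniform(-4, 4)          # occasional random step, taken blindly
    else:
        candidate = x + random.uniform(-0.1, 0.1)
        if fitness(candidate) > fitness(x):
            x = candidate                   # ordinary uphill step
    best = max(best, fitness(x))

print(f"best fitness found: {best:.2f} (the global peak is 2.0)")
```

Without the blind jumps the climber would settle on the low hill near x = 2 and stay there; with them it reliably finds the higher peak.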

The physicists Matt Young and Taner Edis have put together a volume that specifically addresses Behe and Dembski’s argument that there is an unsolvable problem here.

There is a short introduction, but after that Gert Korthof leads the pack. Some creationists (let’s call them that, even if they deny being creationists) grant evolution a modest role, but they don’t believe that a guinea pig can change into a dog (that’s the example Dekker and Meester gave). Creationists have a vague concept of kinds (Genesis 1:24-25) or fundamental types. Judging from their publications these types are usually domestic animals. Korthof explains that this can’t be biology. Some authors say that kinds are families, like the great apes. But that would imply that the weevil is a kind too, and that evolution produced the 65,000 or so weevil species. Another author holds that Homo sapiens is a kind. That is very restrictive. Humans resemble each other genetically so much that biologists don’t even like to speak of different human races. On the other hand, some ID proponents don’t mind believing in common descent for all of life.

Wings and flagella

Korthof makes an important remark about the experimental evidence for the theory of evolution. When Darwin and Wallace thought up their theory, nobody knew anything about heredity. Nobody suspected that hereditary properties are atomistic, and the ‘continuous’ theory of heredity was initially a serious objection to the theory: a beneficial variation would be diluted away too quickly in subsequent generations. So one can consider the atomistic character of heredity as strong support for evolution.

After that, people found out that at the cellular level organisms resemble each other very much. Bacteria have the same DNA code as whales. Plants, fungi and animals have the same types of cells as many one-celled organisms. If you are willing to consider remarkable coincidences as something to be explained, then you can’t ignore the fact that all life has more or less the same biochemistry, which is exactly what any theory of common descent would lead one to expect. Next it was discovered that the history of descent could be read off from the DNA as well. This yielded the same picture as had been deduced from externally visible characteristics.

Man and mouse are both placental mammals, and the standard theory says their common ancestor must have lived about 100 million years ago. The genetic code not only codes for size, but also for the build of bodies and organs and the organisation of the whole biochemical factory of cells. Man and mouse have a lot in common in this respect, and the match of the DNA with what was already surmised is a nice confirmation of common descent.

The book has a nice chapter on bird wings. Alan Gishlick explains that one doesn’t need to descend to biochemical details to find irreducibly complex systems. A bird wing is ingeniously built, and has parts that control each other’s operation; none of the parts can be dispensed with. Nonetheless the development of the bird wing can be tracked very well in the fossil record. The ancestors of the birds had hands for gripping prey. Nobody knows what the function of feathers was at that time, but there are a number of plausible possibilities.

Something similar holds for the bacterial flagellum. Ian Musgrave makes it clear that this case can be compared to the classical example of the eye. In the animal world there is a vast assortment of light-detecting equipment, of all kinds and in several stages of efficiency. Likewise, the flagellum is one of a complete spectrum of small organs that serve for excretion, attachment, propulsion and attacking prey, again in several degrees of perfection. The idea that the flagellum (the arrow in Dembski’s metaphor) is only meant for propulsion (Dembski’s target with its bull’s eye) just isn’t correct, and neither is the idea that the parts of the thing are of no use by themselves. So Dembski’s highfalutin statistical theory in support of the biochemist Behe collapses if one has a wider background in biology.

Weasel, camel and whale

The computer scientists Shallit and Elsberry dig into Dembski’s argument about information content. In my opinion they miss one aspect of information that never seems to be spelled out in this kind of discussion.

If one throws a fair coin 500 times and writes down the results, one has created 500 bits of information. If one has a typewriter that randomly produces letters or spaces with equal probability, then each sign represents 4.75 bits of information, and a series of 23 such signs is 109 bits. In the various reasonings we are dealing with information that has some kind of meaning: an arrow just anywhere on a barn door doesn’t count, but one shot into the middle of the target does.
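These numbers are easy to verify; here is a quick check of my own, assuming the 27-sign alphabet of 26 letters plus the space:

```python
import math

# A fair coin has two equally likely outcomes: log2(2) = 1 bit per toss.
print(500 * math.log2(2))            # 500 tosses -> 500.0 bits

# A typewriter producing 26 letters or a space, all equally likely.
bits_per_sign = math.log2(27)
print(round(bits_per_sign, 2))       # -> 4.75 bits per sign

# A series of 23 such signs.
print(round(23 * bits_per_sign, 1))  # -> 109.4 bits
```

The 99 bits of the next paragraph then follow in the same way, as 109 minus log2(1024) = 109 minus 10.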

How special is a sentence like ‘methinks it is a weasel’? (This is what Dawkins uses to illustrate how quickly random mutations can have results if there is selection.) If this happens to be the only meaningful sequence of 23 signs in the world, then one can think it represents 109 bits. But when a sequence happens to be the output of a typewriter gone mad, then any sequence is as ‘special’ as any other. Neither case applies here. How many sequences do we want to consider special? Let’s assume there are 1024 of them. Pointing out which one of the ‘interesting’ sequences has been produced corresponds to 10 bits of information. This, too, is not what we are interested in. What we want to study is the situation that a special sentence is obtained, rather than some nonsensical series of signs, and we want to assign a numerical value to the information that a special sentence has been formed. It seems most logical to assign this 109 minus 10 bits, hence 99 bits. That is what Dembski seems to mean.

Dembski emphasizes the fact that the fraction of ‘special’ sequences among all sequences stays the same if the sequences are somehow transformed according to a complicated recipe. He calls this the law of conservation of information; it is not clear what that has to do with reality. ‘Special’ is related in his view to some kind of usefulness according to an external criterion, and that doesn’t change when you mix the sequences. But if you scramble the sequences and not the external criterion of specialness, then the scrambled sequences stop being special and the computation of the quantity of information no longer makes sense.
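Dawkins’s demonstration itself is easy to reproduce. Here is a minimal sketch of my own, using the 23-sign sentence above (the population of 100 copies and the 5 percent mutation rate are my choices for illustration, not necessarily Dawkins’s exact parameters): starting from a random string, each generation keeps the mutant copy that matches the target best.

```python
import random
import string

TARGET = "methinks it is a weasel"          # the 23-sign sentence used above
ALPHABET = string.ascii_lowercase + " "

def score(s):
    # How many positions already match the target sentence.
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while current != TARGET:
    generations += 1
    # Make 100 copies, each sign mutating with probability 0.05, and keep
    # the best one: cumulative selection, not a fresh random draw each time.
    copies = ["".join(random.choice(ALPHABET) if random.random() < 0.05 else c
                      for c in current)
              for _ in range(100)]
    current = max(copies, key=score)

print(f"target reached after {generations} generations")
```

Whereas the mad typewriter would need on the order of 27^23 (about 10^33) attempts, cumulative selection typically reaches the target in roughly a hundred generations.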

So in a certain sense we can estimate the amount of information contained in a DNA sequence. However, we have to estimate how many sequences would have been just as good as the one we are studying, where ‘just as good’ means: equally helpful in the struggle for life. That is hard to put into numbers, especially because proteins built according to DNA specifications always seem to have some kind of weak catalytic function. In the example with the Hamlet sentence, ‘very like a whale’ a few lines further on, or ‘it is like a camel indeed’ a few lines back, would have been just as good.

In the example with the sentence, we know the chance mechanism: a typewriter that produces random letters. But as a metaphor for the building of DNA sequences that is not so good: nobody claims that DNA sequences have been produced randomly, as if by a mad typewriter. So one has to come up with a more plausible chance mechanism. That is an almost impossible job; or rather, it is what biologists try to do, namely figure out how recombination and mutation of genes, and constant adaptation to ever-changing circumstances, make organisms and their genes evolve.

Fine tuning

Dembski wrote No Free Lunch, in which he uses the quoted theorem to plead for intelligent design. The physicist Mark Perakh consulted the person who thought up this theorem, and concludes that Dembski applies it incorrectly. In a nutshell: the theorem averages efficiency over very many hill landscapes, many of them very ‘wild’. That is the wrong metaphor for evolution. In real evolution there is often a search for a single optimum along one dimension, for example the size of the organism or the reaction speed in a specific biochemical pathway, with only one local optimum, which is also the global optimum.
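To see the point concretely, compare an uphill search with blind random guessing on such a smooth, single-optimum landscape. This toy comparison is my own, not taken from Perakh’s chapter:

```python
import random

def fitness(x):
    # A smooth landscape with a single optimum, at x = 5.
    return -(x - 5) ** 2

def hill_climb(steps=1000):
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)
        if fitness(candidate) > fitness(x):
            x = candidate                # only uphill steps are kept
    return fitness(x)

def random_search(steps=1000):
    # Blind guessing over the whole interval, with no memory of past guesses.
    return max(fitness(random.uniform(-10_000, 10_000)) for _ in range(steps))

print("uphill search:", round(hill_climb(), 4))      # essentially 0, the optimum
print("random search:", round(random_search(), 1))   # typically far below 0
```

Averaged over all conceivable landscapes, including wildly jagged ones, the two methods would do equally well; on this kind of landscape the uphill search wins easily, which is exactly Perakh’s objection.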

Dembski basically tries to prove by mathematics that something can’t be ‘chance’ but has to be designed somehow or other. I think that effort is doomed to fail. Of course I don’t know how supernatural beings go about their business. But human designers start out with several (mental or concrete) representations of the parts and abstractions thereof, and also of the natural laws that describe their behavior. All this is combined in a more or less random fashion in the mind (or the workshop) of the designer, until a (representation of the) final product emerges. The process may seem less random because a method is used, but this method, and the many other methods that happened not to be used, can be seen as yet more of the aforementioned abstractions. So the design process is just another random process, differing from the evolutionary one mainly in the ease of the manipulations. Moreover, the design process has to be followed by some form of implementation to get a concrete working model, a problem the evolutionary process doesn’t have. One may call this an example of the law of conservation of trouble, something well known to mathematicians: clever tricks to make computations easier often amount to merely shifting the core of the problem around.

Dembski was at the World Skeptics Conference in Burbank, and from what he said there I gathered that he thinks the design of our world was already put in during the Big Bang. It was cunningly arranged that on one planet among ten billion trillion stars life would come into existence, that some time later some bacteria would get a flagellum, and presumably again somewhat later that man would appear.

One of the last chapters is written by Victor Stenger. One of the arrows in the ID quiver is the argument that if some fundamental constants of physics were just a little different, the universe would look totally different, for example because atomic nuclei larger than a single proton would not exist, or stars would live too short a time or not form at all. Hoyle himself – whom we met above – discovered one such odd fact. Carbon is made in stars by a process in which two helium nuclei fuse into an extremely unstable form of beryllium, which must be hit immediately by another helium nucleus before it decomposes again. This is possible because the carbon nucleus happens to have an excited state with just the right amount of energy, predicted by Hoyle. Stenger shows that this fine tuning isn’t what it is supposed to be: lots of combinations of constants produce (in simulations) universes with long-lived stars.
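The shape of such a simulation is easy to sketch. The following toy Monte Carlo is entirely my own invention: the ranges and the ‘lifetime’ rule are made up for illustration and have nothing to do with Stenger’s actual physics. It only shows the form of the experiment: sample combinations of constants and count how many yield long-lived stars.

```python
import random

# A schematic Monte Carlo over "universes": vary two toy constants and
# count how many yield long-lived stars. The ranges and the lifetime
# rule below are invented for illustration; they are NOT Stenger's.
def star_lifetime(alpha, beta):
    # Toy rule: lifetime peaks when the two constants are balanced.
    return 10.0 / (1.0 + (alpha / beta - 1.0) ** 2)

viable = 0
trials = 100_000
for _ in range(trials):
    alpha = random.uniform(0.1, 10)   # each toy constant varied over
    beta = random.uniform(0.1, 10)    # two orders of magnitude
    if star_lifetime(alpha, beta) > 1.0:   # arbitrary "long-lived" threshold
        viable += 1

print(f"{100 * viable / trials:.0f}% of toy universes have long-lived stars")
```

The point of the exercise is not the percentage itself but the method: whether fine tuning exists is a question about how large the viable region of parameter space is, and that can be checked by sampling rather than asserted.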

Altogether an inspiring book, which I strongly recommend to anyone interested in these problems. One review cannot possibly do justice to the richness of ideas and arguments in it.

 

Matt Young and Taner Edis (eds.), Why Intelligent Design Fails: A Scientific Critique of the New Creationism. Rutgers University Press, New Brunswick, N.J., 2004. ISBN 0-8135-3433-X.

Jan Willem Nienhuys
