Releasing the Truth

Digging for knowledge…

Tag Archives: design

ATP-synthase: wonderful molecular machine

Today, we’re going to talk about an absolutely wonderful biological machine called ATP synthase, another marvel built into almost every living being, one that fascinates and intrigues naturalistic minds! Again, to conceive that such an intricate system could have arisen through random mutations defies logic. But, unfortunately, nothing prevents evolutionists from contriving the most bizarre hypotheses for the purpose of giving the credit to chance, to nothingness, yet again.

ATP synthase is a molecular machine found in every living organism. It serves as a miniature power generator, producing an energy-carrying molecule, adenosine triphosphate, or ATP. The ATP synthase machine has many parts we recognize from human-designed technology, including a rotor, a stator, a camshaft or driveshaft, and other basic components of a rotary engine. This machine is just the final step in a long and complex metabolic pathway involving numerous enzymes and other molecules, all so the cell can produce ATP to power biochemical reactions and provide energy for other molecular machines in the cell. Each of the human body’s 14 trillion cells performs this reaction about a million times per minute. Over half a body’s weight of ATP is made and consumed every day!
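The scale of this daily turnover can be illustrated with a rough calculation. This is only a hedged sketch: the ~50 g standing ATP pool and the ~35 kg daily turnover are commonly quoted textbook ballpark figures, not numbers taken from this article.

```python
# Rough sanity check of the "over half a body weight of ATP per day" claim.
# Assumed figures (textbook ballpark values, not from this article):
atp_pool_g = 50.0           # ATP present in the body at any instant (~50 g)
daily_turnover_g = 35000.0  # ATP made and consumed per day (~half of 70 kg)

# Each ATP molecule must therefore be recycled (ATP -> ADP -> ATP)
# many times per day by ATP synthase:
recycles_per_day = daily_turnover_g / atp_pool_g
print(round(recycles_per_day))  # -> 700
```

In other words, under these assumptions each ATP molecule is broken down and rebuilt roughly 700 times every day, which is why the recycling role of ATP synthase is so central.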

ATP-driven protein machines power almost everything that goes on inside living cells, including manufacturing DNA, RNA, and proteins, clean-up of debris, and transporting chemicals into, out of, and within cells. Other fuel sources will not power these cellular protein machines for the same reasons that oil, wind, or sunlight will not power a gasoline engine.

ATP synthase occurs on the inner membranes of bacterial cells, and the innermost membranes of both mitochondria and chloroplasts, which are membrane-bound structures inside animal and plant cells.

ATP synthase manufactures ATP from two smaller chemicals, ADP and phosphate. ATP synthase is so small that it is able to manipulate these tiny molecules one at a time. To do so, it must convert some other form of energy into new ATP. This energy comes in the form of a hydrogen-ion (H+) gradient, generated by a separate protein system. Hydrogen ions pour through ATP synthase like wind through a windmill. This flow constitutes a positively charged electric current, in contrast to our electric motors, which use a negative current of electrons.

ATP synthase is a complex engine and pictures are necessary to describe it. Scientists use clever techniques to resolve the exact locations of each of many thousands of atoms that comprise large molecules like ATP synthase. This protein complex contains at least 29 separately manufactured subunits that fit together into two main portions: the head and the base. The base is anchored to a flat membrane like a button on a shirt (except that buttons are fixed in one place, whereas ATP synthase can migrate anywhere on the plane of its membrane). The head of ATP synthase forms a tube. It comprises six units, in three pairs. These form three sets of docking stations, each one of which will hold an ADP and a phosphate. ATP synthase includes a stator (stationary part), which arcs around the outside of the structure to help anchor the head to the base.



Notice in figure 1 a helical axle labeled “γ” in the middle of the ATP synthase. This axle runs through the center of both the head and base of ATP synthase like a pencil inside a cardboard toilet paper tube.

Here is the “magic”: When a stream of tiny hydrogen ions (protons) flows through the base and out the side of ATP synthase, passing across the membrane, they force the axle and base to spin. The stiff central axle pushes against the inside walls of the six head proteins, which become slightly deformed and reformed alternately. Each of your trillions of cells has many thousands of these machines spinning at over 9,000 rpm!

The spinning axle causes squeezing motions of the head so as to align an ADP next to a phosphate, forming ATP … in bucket loads. Many other cellular protein machines use ATP, breaking it down to ADP and phosphate again; this is then recycled back into ATP by ATP synthase. Lubert Stryer, author of Biochemistry, adds:

“… the enzyme appears to operate near 100% efficiency …”1

Two Canadian researchers looked into the innermost workings of ATP synthase. Using electron cryomicroscopy, they produced the first-ever three-dimensional representation of all the enzyme’s parts fitted together the way they are in the actual enzyme.2 Their study results, published in the journal Nature, enabled them to reconstruct the specific sequence of timed events that makes the enzyme work. And it’s a good thing that it functions, because every living cell, from bacteria to brain cells, depends on one or another version of ATP synthase.2

The team found two half-channels situated in the base of the motor, forming something like two half-stroke cylinders. The first half-channel directs a single proton to a precise spot on one of the rotor’s 12 segments where a negatively charged oxygen atom receives and temporarily holds it. After spinning 330 degrees on the rotor, the proton re-enters the cylinder assembly through the second half-channel, and is finally released into an area of lower proton concentration. (ICR)

The F1-ATPase motor

In a paper published in March 1997, Hiroyuki Noji et al. directly observed the rotation of the enzyme F1-ATPase, a subunit of the larger enzyme ATP synthase. This had been suggested as the mechanism for the enzyme’s operation by Paul Boyer. Structural determination by X-ray diffraction by a team led by John Walker had supported this theory. A few months after Noji et al. published their work, it was announced that Boyer and Walker had won a half share of the 1997 Nobel Prize in Chemistry for their discovery.

The F1-ATPase motor has nine components: five different proteins with the stoichiometry 3α:3β:1γ:1δ:1ε. In bovine mitochondria these contain 510, 482, 272, 146 and 50 amino acids respectively, giving Mr = 371,000. F1-ATPase is a flattened sphere about 10 nm across by 8 nm high, so tiny that 10^17 of them would fill the volume of a pinhead. It has been shown to spin ‘like a motor’ to produce ATP, the chemical that is the ‘energy currency’ of life. This motor produces an immense torque (turning force) for its size: in the experiment, it rotated a strand of another protein, actin, 100 times its own length. Also, when driving a heavy load, it probably changes to a lower gear, as any well-designed motor should.
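The quoted stoichiometry and subunit sizes can be tallied with a quick back-of-envelope check. This is only a sketch: the ~110 Da average mass per amino-acid residue is a standard rule of thumb, not a figure from the text.

```python
# Residue counts per subunit of bovine mitochondrial F1-ATPase (from the text):
# (copies, residues) for each of the five distinct proteins, 3a:3b:1g:1d:1e.
subunits = {
    "alpha":   (3, 510),
    "beta":    (3, 482),
    "gamma":   (1, 272),
    "delta":   (1, 146),
    "epsilon": (1, 50),
}

total_residues = sum(copies * length for copies, length in subunits.values())
print(total_residues)  # -> 3444

# Approximate molecular mass, assuming ~110 Da per residue (rule of thumb):
approx_mr = total_residues * 110
print(approx_mr)  # -> 378840, in the same ballpark as the quoted Mr of 371,000
```

The rule-of-thumb estimate lands within a few per cent of the quoted relative molecular mass, which is about as close as such an average-residue approximation can be expected to get.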

ATP synthase also contains the membrane-embedded FO subunit, functioning as a proton (hydrogen ion) channel. Protons flowing through FO provide the driving force for the F1-ATPase motor. They turn a wheel-like structure as water turns a water wheel, though researchers are still trying to determine precisely how. This rotation changes the conformation of the three active sites on the enzyme, so that each in turn can attach ADP and inorganic phosphate to form ATP. Unlike most enzymes, where energy is needed to link the building blocks, ATP synthase uses energy to link them to the enzyme and to throw off the newly formed ATP molecules; separating the ATP from the enzyme requires much energy. (CMI)

Evolutionists’ reverie

Evolutionary scientists have suggested that the head portion of ATP synthase evolved from a class of proteins used to unwind DNA during DNA replication, i.e., the hexameric helicase enzymes.3

However, how could ATP synthase “evolve” from something that needs ATP, manufactured by ATP synthase, in order to function? An absurd chicken-and-egg paradox! Consider also that ATP synthase is made by processes that all need ATP, such as the unwinding of the DNA helix by helicase to allow transcription, and then translation of the coded information into the proteins that make up ATP synthase. And the manufacture of the 100 enzymes/machines needed to achieve this itself needs ATP! And making the membranes in which ATP synthase sits needs ATP, yet without those membranes it would not work. This is a truly vicious circle for evolutionists to explain.

Some say that not every living being needs ATP synthase; anaerobic bacteria, for instance, produce ATP via glycolysis alone. Thus they imply that evolution really did produce ATP synthase… But every organism needs ATPase!

Obligate anaerobes may not use ATP synthase to manufacture ATP, but they do use it to pump protons out of their cytoplasm. They would die otherwise. All cells have ATP synthase, because all cells need it. In sum, all life depends on ATPase, but not all life depends on it for ATP production. Anaerobic bacteria use it to maintain pH balance instead. So ATPase must have been present in the very first cell.

As research advances, more impressive facts are disclosed! The necessity of engineering ATPase is actually just the tip of the iceberg. One amazingly revealing 2010 study in the journal Nature demonstrated that not only ATPase, but the entire electron transport chain apparatus, and in fact whole mitochondria, were absolutely essential to the ‘first’ eukaryote.4

So the evolutionary dilemma only grows stronger! Oh, surely they miss Darwin’s epoch, when cells were thought to be mere organic “jellybeans” with no complex content, when eye anatomy was the most complicated biological mechanism they had to deal with (and even the era’s knowledge of the eye was enough to puzzle Darwin’s mind!), when there was no annoying DNA (with its informational content pointing to an intelligent Creator), no Second Law of Thermodynamics denying the spontaneous increase in complexity of polymers occurring naturally, and so on… Damn it, science, always disturbing godless people’s dreams!



1 Stryer, L., Biochemistry, 18.4.3, The world’s smallest molecular motor: rotational catalysis, online: < 2539>.

2 Lau, W. C. Y. and J. L. Rubinstein. 2012. Subnanometre-resolution structure of the intact Thermus thermophilus H+-driven ATP synthase. Nature. 481 (7380): 214-218.

3 Evolution of the F1-ATPase <>.

4 Lane, N. and W. Martin. 2010. The energetics of genome complexity. Nature. 467 (7318): 929-934.



God bless you!

“I will praise thee; for I am fearfully and wonderfully made: marvellous are thy works; and that my soul knoweth right well.” Psalms 139.14


Our inverted retina: Not a bad design

Evolutionists frequently maintain that the vertebrate retina exhibits a feature which indicates that it was not designed because its organisation appears to be less than ideal. They refer to the fact that for light to reach the photoreceptors it has to pass through the bulk of the retina’s neural apparatus, and presume that consequent degradation of the image formed at the level of the photoreceptors occurs. In biological terms this arrangement of the retina is said to be inverted because the visual cells are oriented so that their sensory ends are directed away from incident light. It is typical of vertebrates but rare among invertebrates, being seen in a few molluscs and arachnids.







As usual, evolutionists like to point this out as evidence of “bad design”, supposedly better explained by a natural, unguided process. Dawkins, while admitting that light traversing the inverted retina is not disturbed significantly during its passage to the photoreceptors, writes as follows:

‘Any engineer would naturally assume that the photocells would point towards the light, with their wires leading backwards towards the brain. He would laugh at any suggestion that the photocells might point away, from the light, with their wires departing on the side nearest the light. Yet this is exactly what happens in all vertebrate retinas. Each photocell is, in effect, wired in backwards, with its wire sticking out on the side nearest the light. The wire has to travel over the surface of the retina to a point where it dives through a hole in the retina (the so-called ‘blind spot’) to join the optic nerve. This means that the light, instead of being granted an unrestricted passage to the photocells, has to pass through a forest of connecting wires, presumably suffering at least some attenuation and distortion (actually, probably not much but, still, it is the principle of the thing that would offend any tidy-minded engineer). I don’t know the exact explanation for this strange state of affairs. The relevant period of evolution is so long ago.’ 1

First, we must review some things about ocular anatomy:


Figure 2. Light enters the human eye via the transparent cornea, the eye’s front window, which acts as a powerful convex lens. After passing through the pupil (the aperture in the iris diaphragm), light is further refracted by the crystalline lens. An image of the external environment is thus focused on the retina, which transduces light into neural signals and is the innermost (relative to the geometric centre of the eyeball) of the three tunics of the eye’s posterior segment. The other two tunics of the eye’s posterior segment are the tough white fibrous sclera, which is outermost and continuous with the cornea anteriorly, and the choroid, a pigmented and highly vascular layer which lies sandwiched between the retina and sclera.

The retina consists of ten layers, of which the outermost is the dark retinal pigment epithelium (RPE) which because of its melanin pigment is opaque to light. The RPE cells have fine hair-like projections on their inner surface called microvilli which lie between and ensheath the tips of the photoreceptor outer segments. There is thus a potential plane of cleavage between the RPE and the photoreceptors which is manifested when the neurosensory retina becomes separated from the RPE, e.g. as a result of injury, a condition known as retinal detachment.

Each photoreceptor, whether rod or cone, consists of an inner and an outer segment, the former having organelles (intracellular apparatus) for manufacturing the visual pigment present in the latter. The rod and cone layer and all eight layers internal to it constitute (in distinction from the RPE) what is known as the neurosensory retina which is virtually transparent to light. By means of many complex nerve connections within the neurosensory retina, electrical impulses generated by light reaching the photoreceptors are processed and transmitted to the retina’s nerve fibre layer and thence pass up the optic nerve to the brain.

In many species for which vision at very low levels of illumination is important, a layer of reflective crystalline material, the tapetum (Latin: carpet), is incorporated in the RPE or choroid.2 Acting as a mirror, the tapetum reflects light which has passed between the photoreceptors, so augmenting the light bombarding the photoreceptors. Hence the proverbial ‘cat’s eyes’ when caught by a beam of light in the dark.

The retinal pigment epithelium

Fundamental to understanding the inverted retina is the crucial role played by the RPE. Many of its important functions are now well known. Each RPE cell is in intimate contact with the tips of 20 or more photoreceptor outer segments which number over 130 million. Without the RPE the photoreceptors and the rest of the neurosensory retina cannot function normally and ultimately atrophy.

The outer segment of a photoreceptor consists of a stack of discs containing light-sensitive photopigment. These discs are being continually formed by the inner segment from where they move in succession outwards in the outer segment towards the RPE which phagocytoses (Greek: φάγω (phagō) = eat) them and recycles their chemical components.

The RPE stores vitamin A, a precursor of the photopigments, and thus participates in their regeneration. There are four photopigments, all bleached on exposure to light: rhodopsin (found in the rods, for night vision) and one for each of the three different types of cone (one per primary colour). The RPE also synthesises glycosaminoglycans for the interphotoreceptor matrix, i.e. the material lying between and separating the photoreceptors.

Besides oxygen, the RPE selectively transports nutrients from the choroid to supply the outer third of the retina and removes the waste products of photoreceptor metabolism to be cleared by the choroidal circulation. By selective pumping of metabolites and the presence of its tight intercellular junctions, the RPE acts as a barrier, called the blood-retinal barrier, preventing access of larger or harmful chemicals to retinal tissue, thereby contributing to the maintenance of a stable and optimal retinal environment.

The RPE has complex mechanisms for dealing with toxic molecules and free radicals produced by the action of light. Specific enzymes such as the superoxide dismutases, catalases, and peroxidases are present to catalyse the breakdown of potentially harmful molecules such as superoxide and hydrogen peroxide. Antioxidants such as α-tocopherol (vitamin E) and ascorbic acid (vitamin C) are available to reduce oxidative damage.

Our photoreceptors thus continually synthesise new outer segment discs with their specific photopigments, recycling materials from used discs digested by the RPE. This prompts the question, ‘Why have such a complicated process?’ The answer must be that it is an example of biological renewal, by means of which tissues exposed to damaging chemicals, radiation, mechanical trauma, etc., are able to survive. Without self-renewal, tissues such as the skin, the lining of the gut, blood cells, etc., would quickly accumulate fatal defects. In the same way, by continually replacing their discs the photoreceptors counter the relentless process of disintegration accelerated by toxic agents, particularly short-wavelength light.


The choroidal heat sink


It has been observed that damage to photoreceptors in an experimental model is strongly related to temperature, and other studies have confirmed that heat exacerbates photochemical injury; any system designed to protect against the one should also protect against the other. In 1980, a paper was published which explained for the first time something already known about the choroid:3 its very high rate of blood flow, which far exceeds the nutritional needs of the retina, despite the latter being highly active metabolically, as noted above.

The choroidal capillaries (the choriocapillaris) form a rich plexus lying immediately external to the RPE, predominantly its central area, and separated from it by only a very thin membrane (Bruch’s). The absorption of excess light by the RPE produces heat in the outer retina which has to be dissipated if thermal damage to the delicate and complex biological machinery, its own and that of its neighbourhood, is to be avoided.

The authors of this study cogently argue that an important function of the choroid with its torrential blood flow (in local terms) and its close proximity to the RPE, is to act as a heat sink and cooling device. Still more fascinating are the results of further studies by the same workers indicating that there are central (via the brain), light-mediated nervous reflexes regulating choroidal blood flow, increasing the blood flow with increased illumination. Both RPE and choroid are essential for vision, but they are opaque, so it follows that for light to reach the photoreceptors, both RPE and choroid have to be located external to the neurosensory retina; hence we can conclude that there are sound reasons for the inverted configuration of the human and vertebrate retina.


The foveola

Although the neurosensory retina is virtually transparent apart from the blood in its very slender blood vessels, there is an additional refinement of its structure in its central region, called the macula. The retina and the occipital cerebral cortex of the brain (called the visual cortex), to which the retina transmits visual information, are so organised that visual acuity (VA) is maximal in the visual axis. The visual axis passes through the foveola, which forms the floor of a circular pit with a sloping wall, the fovea (Latin: pit), at the centre of the macula. Away from the fovea the VA diminishes progressively towards the periphery of the retina. Thus the colour photoreceptors, the cones for red, green and possibly also blue, have their greatest density of 150,000 per square mm at the foveola, which measures only 300–330 µm across.


Xanthophyll pigment

The optical system of the human eye is such that ambient light tends to fall with peak intensity on the macular area of the retina, with much less on the retinal periphery. It must be significant, therefore, that not only is melanin more abundant in the macular region, because its RPE cells are taller and more numerous per unit area than elsewhere, but the retina’s central area also contains the yellow pigment xanthophyll (Greek: ξάνθος xanthos, yellow). In this region of the retina, xanthophyll permeates all layers of the neurosensory retina between its two limiting membranes and is concentrated in the retinal cells, both the neurons and the supporting tissue cells. Recently attention has been drawn to the presence of a collection of retinal supporting tissue cells (called Müller cells after the person who first described them) over the internal surface of the fovea, forming a cone whose apex plugs the foveolar depression.

Retinal xanthophyll is a carotenoid, chemically related to vitamin A, whose absorption spectrum peaks at about 460 nm and ranges from 480 nm down to 390 nm. It helps to protect the neurosensory retina by absorbing much of the potentially damaging shorter-wavelength visible light, i.e. blue and violet, which is scattered more by small molecules and structures.


The blind spot

Because of the retina’s inverted arrangement, the axons (nerve fibres) transmitting data to the brain pass under cover of the retina’s inner surface to converge on a small area, the optic nerve head, where they all exit the eye together as the optic nerve. The optic nerve head has no photoreceptors and so is blind, thereby producing a small blind spot in the visual field. Unsurprisingly, evolutionists have criticized this. As Williams puts it:

‘Our retinal blind spots rarely cause any difficulty, but rarely is not the same as never. As I momentarily cover one eye to ward off an insect, an important event might be focused on the blind spot of the other.’ 4
Notwithstanding, this issue has to be viewed in perspective: the blind spot is centred 15° away from the visual axis (3.7 mm from the foveola) and is very small in relation to the visual field of an eye, occupying less than 0.25% of it. As mentioned above, the further a point on the retina is from the foveola, the lower its VA and sensitivity. The retina surrounding the optic nerve head, in the light-adapted state, has a VA of only about 15% of that at the foveola. We can safely infer that the theoretical risk referred to by Williams, arising from the blind spot in a one-eyed person, is negligible; in keeping with this, it is considered safe for a one-eyed person to drive a private motor car, i.e. for non-vocational purposes.

Because the two visual fields overlap to a large degree, the blind spot of one eye is covered by the other eye’s visual field. It is true that occlusion or loss of one eye is a handicap, but this is not because of the blind spot of the seeing eye for the reasons given above.


Invertebrate eyes

Some claim that the verted retinae of cephalopods, such as squids and octopuses, are more efficient than the inverted retinae found in vertebrates. But this presupposes that the inverted retina is inefficient in the first place, and we have seen that this is not the case. Moreover, it has never been shown that cephalopods actually see better. On the contrary, their eyes merely ‘approach some of the lower vertebrate eyes in efficiency’, and they are probably colour blind. Further, the cephalopod retina, besides being ‘verted’, is actually much simpler than the ‘inverted’ retina of vertebrates; as Budelmann states, ‘The structure of the [cephalopod] retina is much simpler than in the vertebrate eye, with only two neural components, the receptor cells and efferent fibres’.5 It is an undulating structure with ‘long cylindrical photoreceptor cells with rhabdomeres consisting of microvilli’, so that the cephalopod eye has been described as a ‘compound eye with a single lens’. Finally, cephalopods live in regions of much lower light intensity than most vertebrates, which helps to show that their eyes do not need to be as complex as is usually claimed.

Despite the efforts of evolution promoters, the inverted retina is not evidence of bad design; quite the opposite: even its “backwards-wired” design is a clear sign of planned origin, suited to the demands of each living being in accordance with its environment.

(From the article:  Is our ‘inverted’ retina really ‘bad design’?-Creation Ministries)


God bless you!




1 Dawkins, R., The Blind Watchmaker: Why the evidence of evolution reveals a universe without design. W.W. Norton and Company, New York, p. 93, 1986

2 Duke-Elder, S., System of Ophthalmology, Henry Kimpton, London, vol. 1, p. 147, 1958.

3 Parver, L.M., Auker, C., Carpenter, D.O., Choroidal blood flow as a heat dissipating mechanism in the macula, Am. J. Ophthalmol. 89:641–646, 1980.

4 Williams, G.C., Natural Selection: Domains, Levels and Challenges, Oxford University Press, Oxford, pp. 72–73, 1992.

5 Budelmann, B.U., Cephalopod sense organs, nerves and brain. In Pörtner, H.O., O’Dor, R.J. and Macmillan, D.L. (eds), Physiology of cephalopod molluscs: lifestyle and performance adaptations, Gordon and Breach, Basel, Switzerland, p. 15, 1994.

Blood and seawater, relatives? Blood clotting, the Creator’s work!

Evolutionists sometimes claim that our blood is composed of elements (sodium, chlorine, etc.) very similar to those of seawater, and they attribute this to our ancestors supposedly having evolved in the oceans, eons ago. Several promoters of evolution have made this claim. For example, Robert Lehrman, in his The Long Road to Man (Fawcett Publications, 1961), states:

“One human characteristic, a chemical one, traces back to our ancestry in the oceans… The percentages of sodium, potassium, calcium, magnesium, iodine, chlorine, and other minerals in human blood coincide with the percentages of these in seawater. Our ocean-dwelling ancestors developed cells adapted to the chemical environment of salt water. When they left the ocean, they carried part of that environment with them in the form of a fluid that bathes the cells; later it was incorporated into the bloodstream.”

This argument has not been widely used lately, but it surfaces from time to time; see Presidents and evolution.

There are immense problems with this argument:

The concentrations of minerals in human blood plasma and/or serum and in seawater are quite different. They are not at all similar (see table). The chlorine and sodium content of blood is only about 20% to 30% of that of seawater, whereas the iron content of blood is 250 times greater. Compared with seawater, blood has very little magnesium, but on the other hand 9,000 times more selenium. The data in the table contradict the idea of evolution from the sea. Lehrman and others are utterly mistaken in saying that mineral percentages in human blood coincide with those of seawater.

Even from an evolutionary point of view, the claim makes no sense at all. According to evolutionary belief, amphibians left the sea more than 350 million years ago. Salt has been added to the sea the whole time, for example by rivers carrying dissolved salt from the continents into the sea. It would take at most 62 million years to accumulate all the salt now found in the oceans (using current rates and the evolutionists’ own assumption that “the present is the key to the past”, and being as generous to the evolutionists as possible by assuming pure water at the start); see Salty seas: evidence for a young earth. In other words, 350 million years ago, when amphibians supposedly evolved, there should have been no salt in the oceans at all! So even if the salt in amphibian blood were similar to that of seawater today, which in fact it is not, it could not be due to the sea’s salt content at the time they supposedly evolved! Of course, the oceans are nowhere near millions of years old, since the evidence shows an insufficient amount of accumulated minerals in them.

Our blood!




Mineral       Blood (mg/L)   Seawater (mg/L)
Sodium        3220           10800
Chlorine      3650           19400
Potassium     200            392
Calcium       50             411
Magnesium     27             1290
Phosphorus    36             0.09
Iron          1              0.004
Copper        1              0.001
Zinc          1.1            0.005
Chromium      1.1            0.002
Bromine       4              67
Fluorine      0.1            1.3
Boron         1              5
Selenium      0.9            0.0001

Table. Mineral content of human blood plasma or serum1,2 and of seawater3 (mg per litre).

Studies of blood reveal how incredible it is! Blood carries oxygen from our lungs to every cell of our body and carries carbon dioxide from the cells back to the lungs, which eliminate it during breathing. However, blood does much more than that. It transports food to every cell in the form of energy (glucose) and chemical building blocks, such as minerals, vitamins, amino acids and fatty acids, for constructing countless cellular components. Our blood transports wastes such as urea to the kidneys, from which they are eliminated. Blood carries a complex series of agents that stop bleeding; if we suffer a cut, it begins the repair of the injured area (see below). The systems that regulate our body temperature likewise depend on blood, which carries heat to the extremities of the body, where it is dissipated. And there is still much more to learn about blood and its fascinating abilities.

The levels of elements in the blood are controlled by the body within narrow limits, so that the blood can carry out its various functions efficiently. Genetic defects (mutations) that make certain enzymes less efficient, causing, for example, an excess or deficiency of iron in the blood, produce disease. Such genetic defects have accumulated since the Fall (Adam and Eve were created perfect) and are now identified as the cause of many of humanity’s ills.

Blood clotting: God’s handiwork


Biochemically, one of the most marvellous functions of blood is clotting, which works through cascades (one process activating another process, which activates another, in succession)! Being a function essential to the organism, clotting must be precise, rapid, and infallible: it must seal every wound without running the risk of clogging the entire bloodstream. About 2 to 3 per cent of the protein in blood plasma (the part left over after the red blood cells are removed) consists of a complex protein called fibrinogen. Fibrinogen is easy to remember, because this protein “makes fibres” that form the clot. Fibrinogen, however, is only the potential clot material; almost all the other proteins involved are concerned with controlling blood clotting and the placement of the clot. Fibrinogen is a composite of six protein chains, containing twinned pairs of three different proteins. Electron microscopy has shown that fibrinogen is a rod-shaped molecule, with two round bumps on each end of the rod and a single round bump in the middle; fibrinogen thus resembles a set of barbells with an extra set of weights in the middle of the bar. Normally, fibrinogen is dissolved in the plasma, as salt is dissolved in ocean water. It floats around, peacefully minding its own business, until a cut or injury causes bleeding. Then another protein, called thrombin, slices off several small pieces from two of the three pairs of fibrinogen’s protein chains. These trimmed pieces, now called fibrin, aggregate into long “threads”, rather than forming a single large lump, which would cover less of the wound and require more protein in the process. Thrombin, which slices pieces off fibrinogen, is like a circular saw; but what would become of the process if only these two proteins existed? It would surely run out of control: thrombin could slice up all the fibrinogen in the blood, making so much fibrin that it would congest the animal’s circulatory system. It would be the end!

To prevent this, the system has to control thrombin.


The cascade


The blood normally stores enzymes (proteins that catalyze, i.e. speed up, a chemical reaction) in an inactive state for later use. These inactive forms are called proenzymes (or zymogens). When a signal is received that a certain enzyme is needed, the corresponding proenzyme is activated.

Thrombin exists in an inactive form, prothrombin. Being inactive, it cannot slice fibrinogen, which blocks the clotting process. Here lies the dilemma and the need for strict control: left unchecked, thrombin would cut up all the fibrinogen, causing death; but without a process to switch it on, the clotting system would never start. Nor would it be enough merely to slice off fibrin at random, since the pieces would simply float around in the blood without sealing the injured site. Not for nothing does the cascade come into play! A protein called Stuart factor (or factor X) cleaves prothrombin, turning it into active thrombin, which then cleaves fibrinogen to form fibrin. But, again, these three proteins alone would not be enough to run the whole process. At this point there is a small inversion of order in the process: even Stuart factor cannot activate prothrombin by itself! Mix the two proteins in a test tube for as long as you like, and no thrombin will form. Here another protein, accelerin, is needed to boost the activity of Stuart factor. The dynamic duo of accelerin and activated factor X cleaves prothrombin fast enough to help the bleeding animal. So in this process we need two separate proteins to activate one proenzyme.

Yes, accelerin also exists in an inactive form, called proaccelerin! And what activates it? Thrombin! But, as we have seen, thrombin is even further down the regulatory cascade than proaccelerin. Nevertheless, because Stuart factor cleaves prothrombin so slowly, traces of thrombin are always found in the bloodstream. Clotting is therefore auto-catalytic, because proteins in the blood accelerate the production of more of the same proteins!

We have to backtrack a little in the story here because, as we saw, prothrombin as initially produced by the cell cannot be turned into thrombin, even in the presence of activated Stuart factor and accelerin. Prothrombin must first be modified by having ten specific amino acid residues, called glutamate (Glu) residues, converted into γ-carboxyglutamate (Gla) residues. The modification can be compared to fitting the lower jaw onto the upper jaw of a skull. The complete structure can bite and hold what it bites; without the lower jaw, the skull cannot grip. In the case of prothrombin, Gla residues 'bite' (or bind) calcium, allowing prothrombin to stick to the surface of cells. Only the intact, modified calcium-prothrombin complex, bound to a cell membrane, can be cleaved by activated Stuart factor and accelerin to become thrombin.

The modification of prothrombin does not happen by accident. Like virtually all biochemical reactions, it requires catalysis by a specific enzyme. Besides the enzyme, however, the conversion of Glu to Gla needs another component: vitamin K. This is not a protein but a small molecule, like the 11-cis-retinal involved in the biochemistry of vision. As a gun needs bullets, the enzyme that converts Glu to Gla needs vitamin K to work. One type of rat poison is based on the role vitamin K plays in blood clotting. The synthetic poison called warfarin was made to look like vitamin K to the enzymes that use it. When a rat eats food poisoned with warfarin, its clotting is impaired, because prothrombin is no longer modified or cleaved, and the animal bleeds to death.

Now we need to know what activates Stuart factor. We shall see that it can be activated by two different routes, called the intrinsic and extrinsic pathways. In the intrinsic pathway, all the proteins required for clotting are already contained in the blood plasma; in the extrinsic pathway, some of these proteins occur on cells. Let us first examine the intrinsic pathway.


When an animal is cut, a protein called Hageman factor sticks to the surface of cells near the wound. This protein is then cleaved by a protein called HMK, yielding activated Hageman factor. The activated Hageman factor immediately converts another protein, called prekallikrein, into its active form, kallikrein. Kallikrein helps HMK speed up the conversion of more Hageman factor to its active form. Activated Hageman factor and HMK together transform another protein, called PTA, into its active form. Activated PTA, in turn, together with the active form of another protein, convertin, activates a protein called Christmas factor. Finally, activated Christmas factor, together with antihemophilic factor (which is itself activated by thrombin in a manner similar to proaccelerin), switches Stuart factor to its active form.
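The activation chain just described can be pictured as a dependency graph: each factor becomes active only once everything it depends on is already active. Below is a minimal sketch in that spirit; it is an illustrative toy model only, not a biochemical simulation (real kinetics involve rates, membrane surfaces and feedback loops), and the dependency lists simply follow the text above:

```python
# Toy model of the clotting cascade as a dependency graph.
# A protein becomes active only when all of its activators are active.
CASCADE = {
    "Hageman factor":   ["injury", "HMK"],
    "kallikrein":       ["Hageman factor"],            # from prekallikrein
    "PTA":              ["Hageman factor", "HMK"],
    "Christmas factor": ["PTA", "convertin"],
    "Stuart factor":    ["Christmas factor", "antihemophilic factor"],
    "thrombin":         ["Stuart factor", "accelerin"],  # from prothrombin
    "fibrin":           ["thrombin"],                  # from fibrinogen
}

def activate(present):
    """Repeatedly activate any protein whose activators are all present."""
    active = set(present)
    changed = True
    while changed:
        changed = False
        for protein, needs in CASCADE.items():
            if protein not in active and all(n in active for n in needs):
                active.add(protein)
                changed = True
    return active

helpers = {"HMK", "convertin", "antihemophilic factor", "accelerin"}
# Without an injury, the proenzymes just wait -- no fibrin forms:
assert "fibrin" not in activate(helpers)
# With an injury, the whole chain runs through to fibrin:
assert "fibrin" in activate(helpers | {"injury"})
```

The point the toy captures is the text's own: remove any link in the chain and the signal never reaches fibrin, which is why the author stresses that every component matters.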



Like the intrinsic pathway, the extrinsic pathway is also a cascade. It begins when a protein called proconvertin is converted into convertin by Hageman factor and thrombin. In the presence of another protein, tissue factor (also called thromboplastin or factor III, CD142), convertin switches Stuart factor to its active form. Tissue factor, however, occurs only on the outside of cells that are normally not in contact with the blood. Therefore, only when an injury brings tissue into contact with blood does the extrinsic pathway begin.


The two pathways cross at several points. Hageman factor, activated by the intrinsic pathway, can activate the proconvertin of the extrinsic pathway. Convertin can then collaborate in the intrinsic pathway by helping PTA activate Christmas factor. Thrombin alone can trigger both branches of the cascade: by activating antihemophilic factor, which is required to help activated Christmas factor convert Stuart factor to its active form, and also by activating proconvertin. Phew, what complexity! But do not rest yet, because we have not finished our cascade!


Once clotting begins, we may ask what makes it stop at the right time, before it solidifies all the animal's blood (and kills it). Clotting is confined to the site of the wound by several means. First, a plasma protein called antithrombin binds to the active (but not the inactive) forms of most of the clotting proteins and inactivates them. Antithrombin is itself relatively inactive until it binds to a substance called heparin, which occurs inside cells and on undamaged blood vessels. A second means of keeping clotting local rather than general is the action of protein C. After its activation by thrombin, protein C destroys accelerin and activated antihemophilic factor. Finally, a protein called thrombomodulin lines the surface of the cells on the inside of the blood vessels. It binds thrombin, making it less able to cut fibrinogen and at the same time more able to activate protein C.

When a clot forms, it is initially fragile: if the damaged area is knocked, the clot can easily be broken and bleeding starts again. To prevent this, the organism has a method of strengthening the clot once it has formed. Aggregated fibrin is 'tied together' by an activated protein called FSF (fibrin-stabilizing factor), which forms chemical cross-links between different fibrin molecules. Eventually, the clotted blood has to be removed once healing of the wound is well advanced. A protein called plasmin acts as a special pair of scissors to cut up fibrin clots. Fortunately, plasmin does not work on fibrinogen. Plasmin cannot act too quickly, or the wound would not have enough time to heal completely. It occurs initially in an inactive form called plasminogen. The conversion of plasminogen into plasmin is catalyzed by a protein called t-PA. There are other proteins that control clot dissolution, including α2-antiplasmin, which binds to plasmin and prevents it from destroying the fibrin of the clot.


As we can see, all these steps, from the start of the cascade, the timed activation of each protein, the precise timing, and the step-by-step progress of the process up to the final removal of the clot when the wound heals, are part of a complex and intricate whole, in which any failure or misadjustment could be harmful, even fatal. To imagine that all of this is the fruit of mere 'naturalistic', blind, random, irrational processes goes against common sense and logic, not to mention the probabilities. For example, t-PA has four different types of domains; the odds of it arising by chance are on the order of one in 30,000 raised to the fourth power!
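That back-of-envelope figure is easy to check. A minimal sketch of the arithmetic (the 1-in-30,000-per-domain estimate is the article's own figure, following Behe; nothing here is an independent calculation of the biology):

```python
# Four domain types, each estimated at 1 chance in 30,000:
odds = 30_000 ** 4
print(f"1 in {odds:,}")      # 1 in 810,000,000,000,000,000
print(f"1 in {odds:.2e}")    # 1 in 8.10e+17
```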


There are other mechanisms that help stanch blood. For example, the body can constrict the vessels near the cut, reducing blood flow to the region. There are also blood cells called platelets, which stick to the wounded region like little bricks, helping to plug the wound. But blood clotting is the main one, the most magnificent and elaborate of all! Something that clearly refutes any materialist, antitheist belief: clear evidence of creation coordinated by the Lord.


As David wrote:


"I will praise thee; for I am fearfully and wonderfully made: marvellous are thy works; and that my soul knoweth right well." Psalm 139:14


Sources: CMI

The book Darwin's Black Box, by Michael Behe


Autopoiesis: evidence of Intelligent Design

From: CMI

Life’s irreducible structure—Part 1: autopoiesis


The commonly cited case for intelligent design appeals to: (a) the irreducible complexity of (b) some aspects of life. But complex arguments invite complex refutations (valid or otherwise), and the claim that only some aspects of life are irreducibly complex implies that others are not, and so the average person remains unconvinced. Here I use another principle—autopoiesis (self-making)—to show that all aspects of life lie beyond the reach of naturalistic explanations. Autopoiesis provides a compelling case for intelligent design in three stages: (i) autopoiesis is universal in all living things, which makes it a pre-requisite for life, not an end product of natural selection; (ii) the inversely-causal, information-driven, structured hierarchy of autopoiesis is not reducible to the laws of physics and chemistry; and (iii) there is an unbridgeable abyss between the dirty, mass-action chemistry of the natural environment and the perfectly-pure, single-molecule precision of biochemistry. Naturalistic objections to these propositions are considered in Part II of this article.

Snowflake photos by Kenneth G. Libbrecht.

Figure 1. Reducible structure. Snowflakes (left) occur in hexagonal shapes because water crystallizes into ice in a hexagonal pattern (right). Snowflake structure can therefore be reduced to (explained in terms of) ice crystal structure. Crystal formation is spontaneous in a cooling environment. The energetic vapour molecules are locked into solid bonds with the release of heat to the environment, thus increasing overall entropy in accord with the second law of thermodynamics.

The commonly cited case for intelligent design (ID) goes as follows: ‘some biological systems are so complex that they can only function when all of their components are present, so that the system could not have evolved from a simpler assemblage that did not contain the full machinery.’1 This definition is what biochemist Michael Behe called irreducible complexity in his popular book Darwin’s Black Box2 where he pointed to examples such as the blood-clotting cascade and the proton-driven molecular motor in the bacterial flagellum. However, because Behe appealed to complexity, many equally complex rebuttals have been put forward,3 and because he claimed that only some of the aspects of life were irreducibly complex, he thereby implied that the majority of living structure was open to naturalistic explanation. As a result of these two factors, the concept of intelligent design remains controversial and unproven in popular understanding.

In this article, I shall argue that all aspects of life point to intelligent design, based on what European polymath Professor Michael Polanyi FRS, in his 1968 article in Science called ‘Life’s Irreducible Structure.’4 Polanyi argued that living organisms have a machine-like structure that cannot be explained by (or reduced to) the physics and chemistry of the molecules of which they consist. This concept is simpler, and broader in its application, than Behe’s concept of irreducible complexity, and it applies to all of life, not just to some of it.

The nature and origin of biological design

Biologists universally admire the wonder of the beautiful ‘designs’ evident in living organisms, and they often recoil in revulsion at the horrible ‘designs’ exhibited by parasites and predators in ensuring the survival of themselves and their species. But to a Darwinist, these are only ‘apparent designs’—the end result of millions of years of tinkering by mutation and fine tuning by natural selection. They do not point to a cosmic Designer, only to a long and ‘blind’ process of survival of the fittest.5 For a Darwinist, the same must also apply to the origin of life—it must be an emergent property of matter. An emergent property of a system is some special arrangement that is not usually observed, but may arise through natural causes under the right environmental conditions. For example, the vortex of a tornado is an emergent property of atmospheric movements and temperature gradients. Accordingly, evolutionists seek endlessly for those special environmental conditions that may have launched the first round of carbon-based macromolecules6 on their long journey towards life. Should they ever find those unique environmental conditions, they would then be able to explain life in terms of physics and chemistry. That is, life could then be reduced to the known laws of physics, chemistry and environmental conditions.

However, Polanyi argued that the form and function of the various parts of living organisms cannot be reduced to (or explained in terms of) the laws of physics and chemistry, and so life exhibits irreducible structure. He did not speculate on the origin of life, arguing only that scientists should be willing to recognize the impossible when they see it:

‘The recognition of certain basic impossibilities has laid the foundations of some major principles of physics and chemistry; similarly, recognition of the impossibility of understanding living things in terms of physics and chemistry, far from setting limits to our understanding of life, will guide it in the right direction.’7

Reducible and irreducible structures

To understand Polanyi’s concept of irreducible structure, we must first look at reducible structure. The snowflakes in figure 1 illustrate reducible structure.

Meteorologists have recognized about eighty different basic snowflake shapes, and subtle variations on these themes add to the mix to produce a virtually infinite variety of actual shapes. Yet they all arise from just one kind of molecule—water. How is this possible?


Figure 2. Irreducible structure. The silver coins (left) have properties of flatness, roundness and impressions on faces and rims, that cannot be explained in terms of the crystalline state of silver (close packed cubes) or its natural occurrence as native silver (right).

When water freezes, its crystals take the form of a hexagonal prism. Crystals then grow by joining prism to prism. The elaborate branching patterns of snowflakes arise from the statistical fact that a molecule of water vapour in the air is most likely to join up to its nearest surface. Any protruding bump will thus tend to grow more quickly than the surrounding crystal area because it will be the nearest surface to the most vapour molecules.8 There are six ‘bumps’ (corners) on a hexagonal prism, so growth will occur most rapidly from these, producing the observed six-armed pattern.

Snowflakes have a reducible structure because you can produce them with a little bit of vapour or with a lot. They can be large or small. Any one water molecule is as good as any other water molecule in forming them. Nothing goes wrong if you add or subtract one or more water molecules from them. You can build them up one step at a time, using any and every available water molecule. The patterns can thus all be explained by (reduced to) the physics and chemistry of water and the atmospheric conditions.


Figure 3. Common irreducibly structured machine components: lever (A), cogwheel (B) and coiled spring (C). All are made of metal, but their detailed structure and function cannot be reduced to (explained by) the properties of the metal they are made of.

To now understand irreducible structure, consider a silver coin.

Silver is found naturally in copper, lead, zinc, nickel and gold ores—and rarely, in an almost pure form called ‘native silver’. Figure 2 shows the back and front of two vintage silver coins, together with a nugget of the rare native form of silver. The crystal structure of solid silver consists of closely packed cubes. The main body of the native silver nugget has the familiar lustre of the pure metal, and it has taken on a shape that reflects the available space when it was precipitated from groundwater solution. The black encrustations are very fine crystals of silver that continued to grow when the rate of deposition diminished after the main load of silver had been deposited out of solution.

Unlike the case of the beautifully structured snowflakes, there is no natural process here that could turn the closely packed cubes of solid silver into round, flat discs with images of men, animals and writing on them. Adding more or less silver cannot produce the roundness, flatness and image-bearing properties of the coins, and looking for special environmental conditions would be futile because we recognize that the patterns are man-made. The coin structure is therefore irreducible to the physics and chemistry of silver, and was clearly imposed upon the silver by some intelligent external agent (in this case, humans).

Whatever the explanation, however, the irreducibility of the coin structure to the properties of its component silver constitutes what I shall call a ‘Polanyi impossibility’. That is, Polanyi identified this kind of irreducibility as a naturalistic impossibility, and argued that it should be recognized as such by the scientific community, so I am simply attaching his name to the principle.

There are endless examples of such irreducible structures in living systems, but they all work under a unifying principle called ‘autopoiesis’.

Polanyi pointed to the machine-like structures that exist in living organisms. Figure 3 gives three examples of common machine components: a lever, a cogwheel and a coiled spring. Just as the structure and function of these common machine components cannot be explained in terms of the metal they are made of, so the structure and function of the parallel components in life cannot be reduced to the properties of the carbon, hydrogen, oxygen, nitrogen, phosphorus, sulphur and trace elements that they are made of. There are endless examples of such irreducible structures in living systems, but they all work under a unifying principle called ‘autopoiesis’.

Autopoiesis defined

Autopoiesis literally means ‘self-making’ (from the Greek auto for self, and the verb poiéō meaning ‘I make’ or ‘I do’) and it refers to the unique ability of a living organism to continually repair and maintain itself—ultimately to the point of reproducing itself—using energy and raw materials from its environment. In contrast, an allopoietic system (from the Greek allo for other) such as a car factory, uses energy and raw materials to produce an organized structure (a car) which is something other than itself (a factory).9

Autopoiesis is a unique and amazing property of life—there is nothing else like it in the known universe. It is made up of a hierarchy of irreducibly structured levels. These include: (i) components with perfectly pure composition, (ii) components with highly specific structure, (iii) components that are functionally integrated, (iv) comprehensively regulated information-driven processes, and (v) inversely-causal meta-informational strategies for individual and species survival (these terms will be explained shortly). Each level is built upon, but cannot be explained in terms of, the level below it. And between the base level (perfectly pure composition) and the natural environment, there is an unbridgeable abyss. The enormously complex details are still beyond our current knowledge and understanding, but I will illustrate the main points using an analogy with a vacuum cleaner.

A vacuum cleaner analogy

My mother was excited when my father bought our first electric vacuum cleaner in 1953. It consisted of a motor and housing, exhaust fan, dust bag, and a flexible hose with various end pieces. Our current machine uses a cyclone filter and follows me around on two wheels rather than on sliders as did my mother’s original one. My next version might be the small robotic machine that runs around the room all by itself until its battery runs out. If I could afford it, perhaps I might buy the more expensive version that automatically senses battery run-down and returns to its induction housing for battery recharge.

Notice the hierarchy of control systems here. The original machine required an operator and some physical effort to pull the machine in the required direction. The transition to two wheels allows the machine to trail behind the operator with little effort, and the cyclone filter eliminates the messy dust bag. The next transition to on-board robotic control requires no effort at all by the operator, except to initiate the action to begin with and to take the machine back to the power source for recharge when it has run down. And the next transition to automatic sensing of power run-down and return-to-base control mechanism requires no effort at all by the operator once the initial program is set up to tell the machine when to do its work.

If we now continue this analogy to reach the living condition of autopoiesis, the next step would be to install an on-board power generation system that could use various organic, chemical or light sources from the environment as raw material. Next, install a sensory and information processing system that could determine the state of both the external and internal environments (the dirtiness of the floor and the condition of the vacuum cleaner) and make decisions about where to expend effort and how to avoid hazards, but within the operating range of the available resources. Then, finally, the pièce de résistance, to install a meta-information (information about information) facility with the ability to automatically maintain and repair the life system, including the almost miraculous ability to reproduce itself—autopoiesis.

Notice that each level of structure within the autopoietic hierarchy depends upon the level below it, but it cannot be explained in terms of that lower level.

Notice that each level of structure within the autopoietic hierarchy depends upon the level below it, but it cannot be explained in terms of that lower level. For example, the transition from out-sourced to on-board power generation depends upon there being an electric motor to run. An electric vacuum cleaner could sit in the cupboard forever without being able to rid itself of its dependence upon an outside source of power—it must be imposed from the level above, for it cannot come from the level below. Likewise, autopoiesis is useless if there is no vacuum cleaner to repair, maintain and reproduce. A vacuum cleaner without autopoietic capability could sit in the cupboard forever without ever attaining to the autopoietic stage—it must be imposed from the level above, as it cannot come from the level below.

The autopoietic hierarchy is therefore structured in such a way that any kind of naturalistic transition from one level to a higher level would constitute a Polanyi impossibility. That is, the structure at level i is dependent upon the structure at level i−1, but cannot be explained by the structure at that level. So the structure at level i must have been imposed from level i+1 or above.

The naturalistic abyss

Most origin-of-life researchers agree (at least in the more revealing parts of their writings)10 that there is no naturalistic experimental evidence directly demonstrating a pathway from non-life to life. They continue their research, however, believing that it is just a matter of time before we discover that pathway. But by using the vacuum cleaner analogy, we can give a solid demonstration that the problem is a Polanyi impossibility right at the foundation—life is separated from non-life by an unbridgeable abyss.

Dirty, mass-action environmental chemistry

The ‘simple’ structure of the early vacuum cleaner is not simple at all. It is made of high-purity materials (aluminium, plastic, fabric, copper wire, steel plates etc) that are specifically structured for the job in hand and functionally integrated to achieve the designed task of sucking up dirt from the floor. Surprisingly, the dirt that it sucks up contains largely the same materials that the vacuum cleaner itself is made of—aluminium, iron and copper in the mineral grains of dirt, fabric fibres in the dust, and organic compounds in the varied debris of everyday home life. However, it is the difference in form and function of these otherwise similar materials that distinguishes the vacuum cleaner from the dirt on the floor. In the same way, it is the amazing form and function of life in a cell that separates it from the non-life in its environment.

Naturalistic chemistry is invariably ‘dirty chemistry’ while life uses only ‘perfectly-pure chemistry’. I have chosen the word ‘dirty chemistry’ not in order to denigrate origin-of-life research, but because it is the term used by Nobel Prize winner Professor Christian de Duve, a leading atheist researcher in this field.11 Raw materials in the environment, such as air, water and soil, are invariably mixtures of many different chemicals. In ‘dirty chemistry’ experiments, contaminants are always present and cause annoying side reactions that spoil the hoped-for outcomes. As a result, researchers often tend to fudge the outcome by using artificially purified reagents. But even when given pure reagents to start with, naturalistic experiments typically produce what a recent evolutionist reviewer variously called ‘muck’, ‘goo’ and ‘gunk’12—which is actually toxic sludge. Even our best industrial chemical processes can only produce reagent purities in the order of 99.99%. To produce 100% purity in the laboratory requires very highly specialized equipment that can sort out single molecules from one another.

Another crucial difference between environmental chemistry and life is that chemical reactions in a test tube follow the Law of Mass Action.13 Large numbers of molecules are involved, and the rate of a reaction, together with its final outcome, can be predicted by assuming that each molecule behaves independently and each of the reactants has the same probability of interacting. In contrast, cells metabolize their reactants with single-molecule precision, and they control the rate and outcome of reactions, using enzymes and nano-scale-structured pathways, so that the result of a biochemical reaction can be totally different to that predicted by the Law of Mass Action.
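The mass-action prediction mentioned above can be made concrete. For a reaction A + B → C, the predicted rate is proportional to the product of the reactant concentrations. A minimal sketch (the rate constant k and the concentrations are arbitrary illustrative values, not data from the article):

```python
# Law of Mass Action sketch for A + B -> C:
# rate = k * [A] * [B], with k a rate constant.
def mass_action_rate(k, conc_a, conc_b):
    """Predicted bulk reaction rate from reactant concentrations."""
    return k * conc_a * conc_b

# Doubling either reactant concentration doubles the predicted rate:
base = mass_action_rate(0.5, 1.0, 3.0)
assert mass_action_rate(0.5, 2.0, 3.0) == 2 * base
assert mass_action_rate(0.5, 1.0, 6.0) == 2 * base
```

This statistical, bulk prediction is precisely what the cell's enzyme-guided, single-molecule chemistry does not obey, which is the contrast the paragraph is drawing.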

The autopoietic hierarchy

Perfectly-pure, single-molecule-specific bio-chemistry

The vacuum cleaner analogy breaks down before we get anywhere near life because the chemical composition of its components is nowhere near pure enough for life. The materials suitable for use in a vacuum cleaner can tolerate several percent of impurities and still produce adequate performance, but nothing less than 100% purity will work in the molecular machinery of the cell.

One of the most famous examples is homochirality. Many carbon-based molecules have a property called ‘chirality’—they can exist in two forms that are mirror images of each other (like our left and right hands) called ‘enantiomers’. Living organisms generally use only one of these enantiomers (e.g. left-handed amino acids and right-handed sugars). In contrast, naturalistic experiments that produce amino acids and sugars always produce an approximately 50:50 mixture (called a ‘racemic’ mixture) of the left- and right-handed forms. The horrors of the thalidomide drug disaster resulted from this problem of chirality. One enantiomeric form had therapeutic benefits for pregnant women, but the other form caused shocking fetal abnormalities.

The property of life that allows it to create such perfectly pure chemical components is its ability to manipulate single molecules one at a time. The assembly of proteins in ribosomes illustrates this single-molecule precision. The recipe for the protein structure is coded onto the DNA molecule. This is transcribed onto a messenger-RNA molecule which then takes it to a ribosome where a procession of transfer-RNA molecules each bring a single molecule of the next required amino acid for the ribosome to add on to the growing chain. The protein is built up one molecule at a time, and so the composition can be monitored and corrected if even a single error is made.
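The codon-by-codon procedure just described can be sketched in code. Below is a toy translator that reads an mRNA sequence one triplet at a time, appending one amino acid per codon, the way the text describes tRNAs delivering residues to the ribosome. Only a small, illustrative fragment of the real 64-codon genetic code table is included, and real ribosomes also handle initiation, release factors and proofreading:

```python
# Fragment of the standard genetic code (codon -> amino acid).
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UGU": "Cys", "AAA": "Lys",
    "UAA": None,   # stop codon: release the finished chain
}

def translate(mrna):
    """Read codons (triplets) in order, adding one amino acid at a time,
    as the ribosome does with incoming tRNAs; stop at a stop codon."""
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE[mrna[i:i + 3]]
        if amino is None:
            break
        chain.append(amino)
    return chain

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
```

The one-residue-at-a-time loop is the point of the analogy: because the chain grows a single molecule per step, an error at any position can be detected and corrected before the next residue is added.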

Specially structured molecules

Life contains such a vast new world of molecular amazement that no one has yet plumbed the depths of it. We cannot hope to cover even a fraction of its wonders in a short article, so I will choose just one example. Proteins consist of long chains of amino acids linked together. There are 20 amino acids coded for in DNA, and proteins commonly contain hundreds or even thousands of amino acids. Cyclin B is an average-sized protein, with 433 amino acids. It belongs to the ‘hedgehog’ group of signalling pathways which are essential for development in all metazoans. Now there are 20^433 (20 multiplied by itself 433 times) ≈ 10^563 (10 multiplied by itself 563 times) possible proteins that could be made from an arbitrary arrangement of 20 different kinds of amino acids in a chain of 433 units. The human body—the most complex known organism—contains somewhere between 10^5 (= 100,000) and 10^6 (= 1,000,000) different proteins. So the probability (p) that an average-sized biologically useful protein could arise by a chance combination of 20 different amino acids is about p = 10^6/10^563 = 1/10^557. And this assumes that only L-amino acids are being used—i.e. perfect enantiomer purity.14

For comparison, the chance of winning the lottery is about 1/10^6 per trial, and the chance of finding a needle in a haystack is about 1/10^11 per trial. Even the whole universe only contains about 10^80 atoms, so there are not even enough atoms to ensure the chance assembly of even a single average-sized biologically useful molecule. Out of all possible proteins, those we see in life are very highly specialized—they can do things that are naturally not possible. For example, some enzymes can do in one second what natural processes would take a billion years to do.15 Just like the needle in the haystack. Out of all the infinite possible arrangements of iron alloy (steel) particles, only those with a long narrow shape, pointed at one end and with an eye-loop at the other end, will function as a needle. This structure does not arise from the properties of steel, but is imposed from outside.
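The exponents in the preceding paragraphs can be checked with logarithms, since 20^433 itself is far too large to work with directly. A minimal sketch of the arithmetic, using the article's own figures (20 amino acids, a 433-residue chain, at most 10^6 human proteins):

```python
import math

# log10 of the number of possible 433-residue sequences, 20**433:
log_sequences = 433 * math.log10(20)
print(round(log_sequences))    # 563  -> 20**433 ~ 10**563

# log10 of the probability p = 10**6 / 10**563:
log_p = 6 - log_sequences
print(round(log_p))            # -557 -> p ~ 1/10**557
```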

Water, water, everywhere

There is an amazing paradox at the heart of biology. Water is essential to life,16 but also toxic—it splits up polymers by a process called hydrolysis, and that is why we use it to wash with. Hydrolysis is a constant hazard to origin-of-life experiments, but it is never a problem in cells, even though cells are mostly water (typically 60–90%). In fact, special enzymes called hydrolases are required in order to get hydrolysis to occur at all in a cell.17 Why the difference? Water in a test tube is free and active, but water in cells is highly structured, via a process called ‘hydrogen bonding’, and this water-structure is comprehensively integrated with both the structure and function of all the cell’s macromolecules:

‘The hydrogen-bonding properties of water are crucial to [its] versatility, as they allow water to execute an intricate three-dimensional “ballet”, exchanging partners while retaining complex order and enduring effects. Water can generate small active clusters and macroscopic assemblies, which can both transmit and receive information on different scales.’18

Water should actually be first on the list of molecules that need to be specially configured for life to function. Both the vast variety of specially structured macromolecules and their complementary hydrogen-bonded water structures are required at the same time. No origin-of-life experiment has ever addressed this problem.

Functionally integrated molecular machines


Figure 4. ATP synthase, a proton-powered molecular motor. Protons (+) from inside the cell (below) move through the stator mechanism embedded in the cell membrane and turn the rotor (top part), which adds inorganic phosphate (iP) to ADP to convert it to the high-energy state ATP.

It is not enough to have specifically structured, ultra-pure molecules; they must also be integrated together into useful machinery. A can of stewed fruit is full of chemically pure and biologically useful molecules, but it will never produce a living organism19 because the molecules have been disorganized in the cooking process. Cells contain an enormous array of useful molecular machinery. The average machine in a yeast cell contains 5 component proteins,20 and the most complex—the spliceosome, which orchestrates the reading of separated sections of genes—consists of about 300 proteins and several nucleic acids.21

One of the more spectacular machines is the tiny proton-powered motor that produces the universal energy molecule ATP (adenosine tri-phosphate) illustrated in Figure 4. When the motor spins one way, it takes energy from digested food and converts it into the high-energy ATP, and when the motor spins the other way, it breaks down the ATP in such a way that its energy is available for use by other metabolic processes.22

Comprehensively regulated, information-driven metabolic functions

It is still not enough to have spectacular molecular machinery—the various machines must be linked up into metabolic pathways and cycles that work towards an overall purpose. What purpose? This question is potentially far deeper than science can take us, but science certainly can ascertain that the immediate practical purpose of the amazing array of life structures is the survival of the individual and the perpetuation of its species.23 Although we are still unravelling the way cells work, a good idea of the multiplicity of metabolic pathways and cycles can be found in the BioCyc collection. The majority of organisms so far examined, from microbes to humans, have between 1,000 and 10,000 different metabolic pathways.24 Nothing ever happens on its own in a cell—something else always causes it, links with it, benefits from it, or is affected by it. And all of these links are multi-step processes.

All of these links are also ‘choreographed’ by information—a phenomenon that never occurs in the natural environment. At the bottom of the information hierarchy is the storage molecule—DNA. The double-helix of DNA is ‘just right’ for genetic information storage, and this ‘just right’ structure is beautifully matched by the elegance and efficiency of the code in which the cell’s information is written there.25 But it is not enough even to have an elegant ‘just right’ information storage system—it must also contain information. And not just biologically relevant information, but brilliantly inventive strategies and tactics to guide living things through the extraordinary challenges they face in their seemingly miraculous achievements of metabolism and reproduction. Yet even ingenious strategies and tactics are not enough. Choreography requires an intricate and harmonious regulation of every aspect of life to make sure that the right things happen at the right time, and in the right sequence, otherwise chaos and death soon follow.

Recent discoveries show that biochemical molecules are constantly moving, and much of their amazing achievements are the result of choreographing all this constant and complex movement to accomplish things that static molecules could never achieve. Yet there is no spacious ‘dance floor’ on which to choreograph the intense and lightning-fast (up to a million events per second for a single reaction26) activity of metabolism. A cell is more like a crowded dressing room than a dance floor, and in a show with a cast of millions!

Inversely causal meta-information

The Law of Cause and Effect is one of the most fundamental in all of science. Every scientific experiment is based upon the assumption that the end result of the experiment will be caused by something that happens during the experiment. If the experimenter is clever enough, then he/she might be able to identify that cause and describe how it produced that particular result or effect.

Causality always happens in a very specific order—the cause always comes before the effect.27 That is, event A must always precede event B if A is to be considered as a possible cause of B. If we happened to observe that A occurred after B, then this would rule out A as a possible cause of B.

In living systems however, we see the universal occurrence of inverse causality. That is, an event A is the cause of event B, but A exists or occurs after B. It is easier to understand the biological situation if we refer to examples from human affairs. In economics, for example, it occurs when behaviour now, such as an investment decision, is influenced by some future event, such as an anticipated profit or loss. In psychology, a condition that exists now, such as anxiety or paranoia, may be caused by some anticipated future event, such as harm to one’s person. In the field of occupational health and safety, workplace and environmental hazards can exert direct toxic effects upon workers (normal causality), but the anticipation or fear of potential future harm can also have an independently toxic effect (inverse causality).

Darwinian philosopher of science Michael Ruse recently noted that inverse causality is a universal feature of life,28 and his example was that stegosaur plates begin forming in the embryo but only have a function in the adult—supposedly for temperature control. However most biologists avoid admitting such things because it suggests that life might have purpose (a future goal), and this is strictly forbidden to materialists.

The most important example of inverse causality in living organisms is, of course, autopoiesis. We still do not fully understand it, but we do understand the most important aspects. Fundamentally, it is meta-information—it is information about information. It is the information that you need to have in order to keep the information you want to have to stay alive, and to ensure the survival of your descendants and the perpetuation of your species.

This last statement is the crux of this whole paper, so to illustrate its validity let's go back to the vacuum cleaner analogy. Let's imagine that one lineage of vacuum cleaners managed to reach the robotic, energy-independent stage but lacked autopoiesis, while a second made it all the way to autopoiesis. What is the difference between these vacuum cleaners? Both will function very well for a time. But as the Second Law of Thermodynamics begins to take its toll, components will begin to wear out, vibrations will loosen connections, dust will gather and short-circuit the electronics, blockages in the suction passage will reduce cleaning efficiency, wheel axles will go rusty and make movement difficult, and so on. The former will eventually die and leave no descendants. The latter will repair itself, keep its components running smoothly and reproduce itself to ensure the perpetuation of its species.

In summary, autopoiesis is the information—and associated abilities—that you need to have (repair, maintenance and differential reproduction) in order to keep the information that you want to have (e.g. vacuum cleaner functionality) alive and in good condition to ensure both your survival and that of your descendants.

But what happens if the environment changes and endangers the often-delicate metabolic cycles that real organisms depend upon? Differential reproduction is the solution. Evolutionists from Darwin to Dawkins have taken this amazing ability for granted, but it cannot be overlooked. There are elaborate systems in place—for example, the diploid to haploid transition in meiosis, the often extraordinary embellishments and rituals of sexual encounters, the huge number of permutations and combinations provided for in recombination mechanisms—to provide offspring with variations from their parents that might prove of survival value. To complement these potentially dangerous deviations from the tried-and-true there are also firm conservation measures in place to protect the essential processes of life (e.g. the ability to read the DNA code and to translate it into metabolic action). None of this should ever be taken for granted.

In a parallel way, my humanity is what I personally value, so my autopoietic capability is the repair, maintenance and differential reproductive capacity that I have to maintain my humanity and to share it with my descendants. The egg and sperm that produced me knew nothing of this, but the information was encoded there and only reached fruition six decades later as I sit here writing this—the inverse causality of autopoiesis.


There are three lines of reasoning pointing to the conclusion that autopoiesis provides a compelling case for the intelligent design of life.

  • If life began in some stepwise manner from a non-autopoietic beginning, then autopoiesis will be the end product of some long and blind process of accidents and natural selection. Such a result would mean that autopoiesis is not essential to life, so some organisms should exist that never attained it, and some organisms should have lost it by natural selection because they do not need it. However, autopoiesis is universal in all forms of life, so it must be essential. The argument from the Second Law of Thermodynamics as applied to the vacuum cleaner analogy also points to the same conclusion. Both arguments demonstrate that autopoiesis is required at the beginning for life to even exist and perpetuate itself, and could not have turned up at the end of some long naturalistic process. This conclusion is consistent with the experimental finding that origin-of-life projects which begin without autopoiesis as a pre-requisite have proved universally futile in achieving even the first step towards life.

• Each level of the autopoietic hierarchy is dependent upon the one below it, but is causally separated from it by a Polanyi impossibility. Autopoiesis therefore cannot be reduced to any sequence of naturalistic causes.

• There is an unbridgeable abyss below the autopoietic hierarchy, between the dirty, mass-action chemistry of the natural environment and the perfect purity, the single-molecule precision, the structural specificity, and the inversely causal integration, regulation, repair, maintenance and differential reproduction of life.



  1. Mark J. Pallen and Nicholas J. Matzke, From The Origin of Species to the origin of bacterial flagella, Nature Reviews Microbiology 4(10):1493, 2006; <>, 19 Mar. 2007. Return to text.
  2. Behe, M., Darwin’s Black Box: The biochemical challenge to evolution, Free Press, New York, 1996. Return to text.
  3. See Pallen and Matzke, ref. 1, also web articles and links at: <>, 19 Mar. 2007, and <>, 19 Mar. 2007. Return to text.
  4. Polanyi, M., Life’s irreducible structure, Science 160:1308–1312, 1968. Return to text.
  5. Dawkins, R., The Blind Watchmaker: Why the evidence of evolution reveals a universe without design, Norton, New York, 1996. Return to text.
  6. The molecules of the non-living world are usually fairly short, most consisting of fewer than 10 different atoms. In stark contrast, living organisms crucially depend upon long chains of hundreds, thousands and tens of thousands of atoms, called macromolecules. The reason for their extraordinary length is so that they can be shaped into biologically useful tools, structures and molecular machines. The carbon atom is uniquely suited to making long chain molecules because it has unusually versatile bonding capabilities. Return to text.
  7. Polanyi, ref. 4, p. 1312. Return to text.
  8. Information and illustrations adapted from <>, 19 Mar. 2007. Return to text.
  9. Autopoiesis, <>, 19 Mar. 2007. Return to text.
  10. Nobel Prize winning origin of life researcher Christian de Duve admitted in the foreword to his latest book that he had not been entirely clear on this point in his earlier books on the subject and wished to correct this oversight. See, de Duve, C., Singularities: Landmarks on the Pathways of Life, Cambridge University Press, UK, 2005. Return to text.
  11. de Duve, C., Singularities: Landmarks on the pathways of life, Cambridge University Press, UK, 2005. Return to text.
  12. Conway Morris, S., Life’s Solution: Inevitable humans in a lonely universe, Cambridge University Press, UK, Chs 3–4, 2003. Return to text.
  13. Mass action, <>, 19 Mar. 2007. Return to text.
  14. Many proteins will tolerate some variations in their amino acid sequence, but only substitutions by certain other L-amino acids. Cytochrome c can tolerate 10^35 such variations (Yockey, H., Information Theory, Evolution and the Origin of Life, Cambridge University Press, UK, 2005, Ch. 6), but this makes no significant difference to the outcome of probability calculations. In contrast, ubiquitin, a protein found in all forms of life except bacteria, will tolerate no variation at all at most of its amino acid positions (Truman, R., The ubiquitin protein: chance or design? Journal of Creation 19(3):116–127, 2005; <> 19 Mar. 2007). Return to text.
  15. Enzyme, <>, 19 Mar. 2007. Return to text.
  16. Some organisms have life stages that can dry out and survive, but they still need a majority composition of water to grow and reproduce. Return to text.
  17. Some metabolic processes are confusingly called ‘hydrolysis reactions’ but they are not the hydrolysis referred to here. ‘ATP hydrolysis,’ for example, is a highly structured way of transferring chemical energy through a metabolic coupling without the loss to the environment that free hydrolysis would cause. Return to text.
  18. Chaplin, M., Do we underestimate the importance of water in cell biology? Nature Reviews Molecular Cell Biology 7:861–866, 2006. Return to text.
  19. If a can of food did happen to pop its lid because of biological activity inside, an examination would find it to have been caused, not by a newly evolved form of life, but by a common and well-known contaminant organism that was not eliminated by the sterilization process. Return to text.
  20. Krogan, N.J. et al., Global landscape of protein complexes in the yeast Saccharomyces cerevisiae, Nature 440:637–643, 2006. Return to text.
  21. Nilsen, T.W., The spliceosome: the most complex macromolecular machine in the cell? Bioessays 25(12):1147–1149, 2003. Return to text.
  22. Images and information are available at: <>, 19 March 2007, and animation movies are available at: <>, 19 Mar. 2007. Return to text.
  23. The usual definition of autopoiesis does not include survival of the species, but it is built-in to living organisms and should be included in the definition. Return to text.
  24. Karp, P.D., Ouzounis, C.A., Moore-Kochlacs, C., Goldovsky, L., Kaipa, P., Ahrén, D., Tsoka, S., Darzentas, N., Kunin, V. and López-Bigas, N., Expansion of the BioCyc collection of pathway/genome databases to 160 genomes, Nucleic Acids Research 33(19):6083–6089, 2005; <>, 19 Mar. 2007. Return to text.
  25. Conway Morris, S., ref. 12, pp. 27–31. Return to text.
  26. The enzyme carbonic anhydrase can exchange carbon dioxide in blood at this rate. <>, 19 Mar. 2007. Return to text.
  27. In physics there are some apparent exceptions in extreme conditions under which life cannot survive. In very powerful gravitational fields and at velocities near the speed of light, the time sequence of events and their apparent (but not actual) chain of causality may be violated. Apparent violations can also occur at the quantum level, but only with quantum particles, not with objects as large as molecules or living cells. Return to text.
  28. Ruse, M., Darwin and Design: Does Evolution have a Purpose? Harvard University Press, MA, 2003. See review, Journal of Creation 18(3):31–34, 2004; <>, 19 Mar. 2007. Return to text.



Life’s irreducible structure—Part 2: naturalistic objections


In Part I of this article, I showed that autopoiesis (self-making) provides a compelling case for the intelligent design of life because all aspects of life lie beyond the reach of naturalistic explanation. Here in Part II the argument from autopoiesis is tested against commonly cited naturalistic objections to intelligent design. It comes through soundly intact, even strengthened because the opponents of design agree on the facts. They disagree on the historical inferences, but only intelligent design meets the criterion of an acceptable historical inference according to the Law of Cause and Effect. Naturalistic explanations of biological origins in the face of universally contradictory evidence depend upon faulty reasoning such as: (i) exclusion by definition and ridicule, (ii) assuming what must be proved, (iii) misinterpreting the scientific evidence, (iv) assigning unrealistic properties to the environment, and (v) misusing the concept of chance. In Polanyi’s terms, now is a very reasonable time to declare the impossibility of a naturalistic origin of life and accept that it was intelligently designed.

Image by Alex Williams

Figure 1. The irreducible structure of the autopoietic hierarchy is separated from the dirty chemistry of the natural environment by an unbridgeable abyss.

In Part I of this article,1 I argued as follows:

  1. Autopoiesis (self-making) is universal and therefore essential to life, so it is required at the beginning for life to exist and is thus not the end product of some long naturalistic process.
  2. Each level of the autopoietic hierarchy is separated from the one below it by a Polanyi impossibility, so it cannot be reduced to any sequence of naturalistic causes.
  3. There is an unbridgeable abyss between the autopoietic hierarchy and the dirty mass-action chemistry of the natural environment.

In this part, I test the integrity of this argument in the face of naturalistic objections to intelligent design. I then go on to assess evolutionary arguments for a naturalistic origin of life in the face of universally contradictory evidence.

Objective knowledge and historical inference

Science gets results by observation and experiment upon repeatable phenomena. Its most valued products are general laws that are observed repeatedly which we can confidently call ‘objective knowledge’. These general laws may be incomplete or even false, but they are objective in that they are open to testing by others. New information may cause them to be modified or discarded. Meanwhile, this objective knowledge is usually useful in curing disease, improving technology and food production, etc.

Our general laws can tell us what might have happened in the past but they cannot tell us what did happen.

But the subject of origins is quite different. It deals with unique sequences of unobservable and unrepeatable past events. No one can develop general laws about unique, unobservable and unrepeatable past events. Our general laws can tell us what might have happened in the past but they cannot tell us what did happen. Nor does anyone have a time machine to go back and observe what actually happened.

The best that science can do is extrapolate backwards in time from present day objective knowledge, using the principle of uniformity. This principle says that the laws of nature remain the same through all of time and space. Note that this principle is not objective knowledge—we cannot visit all of time and space to verify it, so it is just a convenient but necessary philosophical assumption. Most people do not realize that this principle underlies all of evolutionary theory, nor do they realize that it is potentially an anti-God assumption because it assumes that God has never intervened in history.

Historical inference is thus quite different to objective knowledge. We cannot test it by observation or experiment, so it is only as good as the assumptions it is built upon. If the assumptions are wrong, the ‘knowledge’ will be faulty. In the following discussion, the objective knowledge of life is available to all sides. Surprisingly, there is universal agreement on the fact that at present there is no naturalistic explanation for the origin of life. The controversy lies entirely in the historical inferences about what might have happened in the past. The only way we can evaluate these historical inferences is to examine the assumptions used to make those historical inferences and test the logical connections for internal consistency.

Naturalistic objections to Intelligent Design

The fact of autopoiesis

There has been a general reluctance among biologists to acknowledge and develop the idea of autopoiesis.2 But it is a fact of biology beyond dispute, so the reasons must be ideological rather than scientific. Organisms do repair themselves. For example, there are at least 148 known genes dedicated to DNA repair, using at least 14 known different methods, carrying out up to a million repair events per cell per day.3 Organisms do maintain themselves. For example, every production pathway for every molecular component in a cell has a corresponding degradation pathway so that redundant, used and/or damaged molecules can be broken down and the parts recycled. There are even programmed cell death mechanisms to remove unwanted cells from a developmental pathway (apoptosis) and to cleanly dispose of malfunctioning or injured cells (necrosis). Damage to these degradation pathways often leads to disease and death because cells and tissues become clogged with molecular rubbish. Organisms do reproduce themselves, in an astonishing variety of ways, and they do produce variable offspring, as everyone since Darwin has acknowledged. There are no sustainable objections to the fact of autopoiesis.
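To get a feel for the repair rate quoted above, a back-of-the-envelope conversion (the million-per-day figure is the article's; the rest is plain arithmetic):

```python
# The article cites up to one million DNA repair events per cell per day.
repairs_per_day = 1_000_000
seconds_per_day = 24 * 60 * 60  # 86,400 seconds in a day

repairs_per_second = repairs_per_day / seconds_per_day
print(f"about {repairs_per_second:.0f} repair events per cell per second")
# prints: about 12 repair events per cell per second
```

That is, at the cited peak rate a single cell performs roughly a dozen repairs every second, around the clock.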

The universality of autopoiesis

The universality of autopoiesis is also a fact of biology beyond dispute. In Kirschner and Gerhart’s groundbreaking book The Plausibility of Life: Resolving Darwin’s Dilemma,4 in which they announce the first ever theory—called facilitated variation—of how life works at the molecular level, they identify two basic components:

  • conserved core processes of cellular architecture, metabolic function and body plan organization; and
  • modular regulatory mechanisms that are built in special ways that allow them to be easily rearranged into new combinations to generate new and variable phenotypes.

Concerning the conserved core processes, they say:

    “Core processes may have emerged together as a suite, for we know of no organism today that lacks any part of the suite … The most obscure origination of a core process is the creation of the first prokaryotic cell. The novelty and complexity of the cell is so far beyond anything inanimate in the world of today that we are left baffled” (pp. 253–256).

The central message of Kirschner and Gerhart’s theory is that not genes but the cell, with its highly conserved architecture, machinery and regulatory circuitry, is the centrepiece of life and heredity. When these ideas are combined—that the cell as a whole is the functional entity, that cell structure and function is highly conserved, that its origination as a whole entity has no naturalistic explanation, and that the “suite of core processes” is universal—this clearly supports the universality of autopoiesis.

The separation of autopoietic levels by Polanyi impossibilities

The existence of Polanyi impossibilities is also beyond dispute. This is demonstrated in Part 1 in figures 1 and 2, where man-made artefacts clearly have structure that cannot be explained by the properties of the materials they are made of. The parallel with biology is also clear—life is made of carbon, hydrogen, oxygen, nitrogen, phosphorus etc., but life cannot be explained simply by the properties of these materials.

Nobel Prize winning biochemist Christian de Duve, in his latest book on the origin of life, itemizes numerous obstacles to a naturalistic origin, which he calls singularities—events that only happened once and have never been repeated. He then offers seven different possible explanations, six of which are naturalistic and the seventh is intelligent design. Of the latter he says “it can come into account only after all natural explanations have been ruled out, and, obviously, they never can be.”5 This is an appeal to ignorance, not knowledge. What we do know, even by de Duve’s own admission, rules out naturalistic explanations and leaves only intelligent design.

The ground level of the autopoietic hierarchy is perfectly pure components, such as only left-handed amino acids (in contrast to the dirty chemistry of the natural environment). De Duve has no naturalistic explanation for this transition because the mass-action laws of environmental chemistry drive it towards mixtures rather than purity. The next level is the specific structure of individual molecules. De Duve has no naturalistic explanation for this transition because the mass-action laws of environmental chemistry drive it towards the statistically far, far more likely non-functional structures. The all-pervasive problem of hydrolysis is not even mentioned in his book. The next level in the hierarchy is the integration of specially structured molecules into functional machines. De Duve has no naturalistic explanation for this transition because the mass-action laws of environmental chemistry have no functional goal-orientation. The next level is the information-driven regulation of the cellular machinery. De Duve has no naturalistic explanation for this transition because environmental chemistry carries no coded information. The next level is the inversely causal meta-information that keeps the functional information intact and passes it on to its offspring for the purpose of survival in a changing world. De Duve has no naturalistic explanation for this transition because, without any coded information, environmental chemistry has no mechanism for handling meta-information.

De Duve can explain none of the structure or function of life using the properties of its constituent materials because in every case the laws of environmental chemistry work against, not towards, life. Each level of the autopoietic hierarchy is thus separated by Polanyi impossibilities. The most reasonable historical inference to make from this conclusion is that it could not have arisen by any of de Duve’s six naturalistic processes, so that leaves only the seventh, intelligent design.

The unbridgeable abyss

The third crucial argument is that there is an unbridgeable abyss below the autopoietic hierarchy, between it and the dirty, mass-action chemistry of the natural environment. Does this abyss actually exist?

The existence of the abyss is clearly established by the title of Professor de Duve’s book just mentioned, Singularities. Even though he puts all his great intelligence and skill into seeking ways to circumvent these singular obstacles, he (and many others) cannot, and that is why he chose that title. Another recent book by Hubert Yockey, the result of half a century of research on the subject, approaches the origin of life from the point of view of information theory and comes to the conclusion that the question of origin is undecidable.6 Together, these two long-time researchers in their respective fields give us a good definition of the abyss:

  • The environment can provide organic ‘building blocks’ such as amino acids, thioesters, and pyrophosphates, but only in a “dirty gemisch (heterogeneous collection of molecules)” of other useless and often toxic materials (de Duve).
  • Life runs on 100% pure reagents. De Duve has no explanation.
  • Life processes are information-driven, a feature unknown in the natural world (Yockey).
  • The digital information of the genetic code has been faithfully transmitted across the whole time span of life on Earth and leads back to no known naturalistic originating source (Yockey).
  • Both the laws of physics and Gödel’s incompleteness theorem allow for undecidable propositions, so we should not shy away from concluding that the origin of life is an undecidable question (Yockey).

This leads to a simple definition of the abyss: “it is a naturalistically undecidable question because there is no evidence of a naturalistic cause.” Yockey’s claim of undecidability is not compelling, however, because neither physics nor Gödel’s theorem identify which questions are undecidable. Yockey has simply grabbed onto this excuse to conveniently avoid the uncomfortable conclusion that life was intelligently designed.

Naturalistic fudges and fumbles

Since even the specialist scientist opponents of intelligent design agree that there is at present no naturalistic explanation for the origin of life, why is the world at large so convinced otherwise? Here are five common reasons:

  1. exclusion by definition and ridicule,
  2. assuming what needs to be proved,
  3. misinterpreting the scientific evidence (unintentionally),
  4. assigning unrealistic properties to the environment, and
  5. misusing the concept of chance.

Exclusion by definition and ridicule

Dawkins and Coyne write, “[Intelligent design] is not a scientific argument at all, but a religious one. It might be worth discussing in a class on the history of ideas, in a philosophy class on popular logical fallacies, or in a comparative religion class on origin myths from around the world. But it no more belongs in a biology class than alchemy belongs in a chemistry class, phlogiston in a physics class or the stork theory in a sex education class.”7

Exclusion of intelligent design by definition fails on the grounds that the issue is fundamentally about history, not science.

By defining intelligent design out of the field of science, they appear not to have to answer its scientific challenges. But the issue here is history, not science. Unique events of history—either creation or evolution—are not science. But we can certainly use science to assess historical inferences of either kind, and when we do so we come up with very strong support for intelligent design as an event in history, and very strong evidence against a naturalistic origin. Exclusion of intelligent design by definition fails on the grounds that the issue is fundamentally about history, not science. Exclusion by ridicule would only be valid if the arguments were ridiculous, but they clearly are not, so the ploy is nothing more than bluff—the resort of those who have nothing better to offer.

Assuming what must be proved

In Singularities, Professor de Duve personally rejects both chance and intelligent design as explanations for life, and concludes that life evolved naturalistically, via “strictly chemical phenomena that … were bound to occur under the physical-chemical conditions that prevailed … leaving no room for chance” (p. 238). How did this happen?

The first trick that he uses is equivocation—using two different meanings of the same word, protometabolism, within the one argument. On p. 15 he says,

“These early chemical processes [cosmically produced and Miller-type amino acids] are generally referred to as prebiotic, or abiotic, chemistry. They will be designated protometabolism in this book [emphasis in original].”

Then, on p. 150 he presents a summary table of his model, and there we find that all the essential properties of metabolism (life chemistry) have been moved down into protometabolism, and before that he still has ‘abiotic chemistry’ continuing to churn out the building blocks.

The second trick he uses is assuming what must be proved. His first singularity is the 100% chiral purity (homochirality) of proteins. “How this could have happened is not known. … but whatever the starting situation, one would expect homochirality to emerge by selection” (p. 12). But selection can only occur if you already have organisms. He assumes what he is trying to prove, and even admits to doing so:

“How RNA could possibly have emerged from the clutter [dirty gemisch] without a ‘guiding hand’ would baffle any chemist; it seems explainable only by selection, a process that presupposes replication [emphasis in original]” (p. 78).

In his famous Blind Watchmaker argument, Richard Dawkins does the same thing, saying “The theory of the blind watchmaker is extremely powerful, given that we are allowed to assume replication and hence cumulative selection.”8 Replication with cumulative selection only occurs in living organisms so he assumes the existence of what he is trying to prove.

Misinterpretation of biological evidence

Because of a prior commitment to naturalism,9 many scientists and media organizations reject any thought of design and only discuss evidence of apparent naturalistic origin. Here are five common examples, all of which are faulty.

(1) Natural variation

Neo-Darwinists assume that genes produce organisms, that mutations in genes produce changes in organisms, and that genes have a continual influence on organisms. Since only about 3% of our genome consists of protein-coding genes, they assume the rest is mostly ‘junk’—left-over mutation-disabled genes from past evolutionary stages. In his book The Ancestor’s Tale: A Pilgrimage to the Dawn of Evolution, Richard Dawkins says,

“We don’t need fossils to peer back into history. Because DNA changes very slowly through the generations, history is woven into the fabric of modern animals and plants, and inscribed in the coded characters.”10

In his book Climbing Mount Improbable11 Dawkins overcame the most daunting design challenges by “going around the back way”—natural selection captures every tiny useful mutation and accumulates them until self-aware human beings emerge at the top. He continues to rely on this mechanism in his latest book, The God Delusion.12 In the section where he ‘refutes’ intelligent design, he argues that we can easily imagine situations where “half an eye [i.e. 50%] is better than 49%” and so natural selection will select the superior version and work towards ever-more-advanced and “apparently designed” eyes.

Natural variation thus appears to point back to a naturalistic origin of life, but it actually assumes everything that needs to be proved—the existence of fully functional organisms with the ability to reproduce variable offspring. The assumed limitless range and plasticity of these natural variations are contradicted by all our experience with plant and animal breeding, which shows that there is a limit to natural variation—it is not infinitely plastic.

Recent discoveries in molecular biology have completely overturned this neo-Darwinian picture of life. The ‘junk DNA’ concept has been discredited by the ENCODE project.13 They examined the RNA sequences transcribed from just 1% of the human genome and discovered that virtually all the DNA is transcribed from both strands of the double helix (not just the gene-coding regions of the ‘positive’ strand, as expected). And there are multiple layers of interleaved transcripts, not the beads-on-a-string model that neo-Darwinists used. So the centrepiece of life and heredity is no longer the gene, but the vast numbers of RNAs derived from multiple overlapping transcriptions of the whole genome. Almost all DNA is in use right now, so Dawkins’ woven record of history does not exist!

According to Kirschner and Gerhart’s facilitated variation theory mentioned earlier, genes do not have a continual influence on organisms; they only work when switched ON. Natural variation is mostly the result of rearrangements of modular regulatory switching circuits, plus some contribution from mutations that disrupt these switching circuits. The conserved core processes (all the architecture and the machinery in the cell) and the modular regulatory circuits (which they compare to Lego® blocks that can be easily pulled apart and rearranged) have to be in place before natural variation can occur.

An example of facilitated variation is found in the phylogenetic history of a group of sibling species in the fruit fly genus Drosophila, where a particular wing pigment pattern has been gained twice and lost twice, but for different reasons. All pigment patterns were produced by the one pleiotropic14 gene called yellow. The two loss events occurred via mutations that inactivated the switch that turned yellow ON. But the two independent gains of the pattern resulted from the gene being switched ON by other switches.15 These gene switches have ‘signature sequences’ that can be changed about in numerous different permutations and combinations to produce different outcomes.16

This means that natural variation is not merely the passive result of mutations, as neo-Darwinists assume; rather, cells actively use random changes to produce useful new combinations of existing circuitry. Natural variation is thus built-in. Kirschner and Gerhart argue that without this built-in capacity for variation, a purely mechanical kind of life would break down at its first encounter with a mechanical malfunction. This is powerful evidence of design.


Figure 2. Truly random outcomes are difficult to obtain. They require precisely designed structures (such as coins, dice, or a roulette wheel) that can consistently maintain their integrity and performance. They point to an intelligently designed source. Image by Alex Williams.

(2) Random outcomes

Gregor Mendel showed experimentally that—for certain carefully chosen characters—inheritance was carried by paired factors (genes on homologous chromosomes) that dissociate during gamete formation (meiosis) and then recombine randomly (according to the laws of chance) during fertilization. It has ever since been widely assumed among biologists that random natural variation points back to the possibility of a random natural origin. Nothing could be further from the truth.

A random outcome is surprisingly difficult to obtain, and it is always constrained and not open-ended as evolutionists require for ‘goo-to-you-via-the-zoo’ evolution. The tossing of an unbiased coin can produce a random result but only between two possibilities—heads or tails. The tossing of an unbiased die can produce a random result, but only among its six possible faces. Even a computer cannot produce a truly random result because it does calculations and calculations always produce predictable results.17
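The point about computed ‘randomness’ can be sketched in a few lines of Python (the seed value and roll count are arbitrary choices for illustration): a pseudorandom generator is just a calculation, so restarting it from the same seed reproduces the identical ‘random’ sequence, and every outcome stays inside the constrained set of faces it was given.

```python
import random

# A software 'random' generator is really a deterministic calculation:
# the same seed always yields the same sequence.
random.seed(42)
first_run = [random.randint(1, 6) for _ in range(10)]   # ten simulated die rolls

random.seed(42)
second_run = [random.randint(1, 6) for _ in range(10)]  # repeat from the same seed

print(first_run == second_run)  # the two 'random' sequences are identical

# The outcomes are also constrained: every roll lands on one of six faces.
print(all(1 <= roll <= 6 for roll in first_run))
```

This is why such numbers are technically called pseudorandom (see note 17): the sequence only looks random because its underlying calculation is long and complicated.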

Truly random outcomes are difficult to obtain because they crucially depend upon the stability of the system that produces them. If Mendel’s pea plants had not reliably produced seeds from independently segregating cell divisions every generation, and had not produced a sufficiently large amount of pollen to ensure independent fertilization events, he could never have discovered the random outcomes that showed him the laws of hybridization. Likewise, coin-tossing produces random outcomes only while the coin remains solidly round and flat, and the die only works if it remains rigid and unbroken. Any system that is capable of continually producing a chance outcome must have a stable core mechanism. Indeed, any system that varies continually in any manner, random or otherwise, without a core of stability will quickly encounter an error catastrophe—changes mount upon changes until the core functionality collapses.

The random variation we observe in biology provides a powerful case for intelligent design. It requires a well-engineered underlying mechanism of stability to protect itself from error catastrophe, and it is not infinitely plastic but constrained to the range of possible outcomes provided by the kinds of gene regulation combinations accessible to it.

(3) Error tolerance

Living things tolerate errors remarkably well. Evolutionists use this property to argue that, since life is error tolerant, it could have arisen in an error-tolerant (sloppy, haphazard, inefficient, mutation-ridden) stepwise, Darwinian manner. This fallaciously assumes that error tolerance is an intermediate step between non-functionality and functionality, but it is not. Error-tolerant systems are very much more complex than error-intolerant systems.

The computer industry provides an excellent illustration of this principle. Word-processing software of thirty years ago produced results very similar to today’s, but with very much shorter program code. Today’s error-tolerant software, which detects, interprets and corrects errors as you type, requires far more code, far greater programming skill, and far more computer memory and processing power than the earlier versions did. Error tolerance is therefore not a sign of error-prone evolution, but a sign of advanced engineering design.
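As a rough analogy (not from the original article; the command list and similarity cutoff below are invented for illustration), compare an error-intolerant lookup with an error-tolerant one in Python. The tolerant version needs extra machinery (similarity scoring, a threshold, candidate ranking) on top of everything the strict version already does:

```python
import difflib

COMMANDS = ["open", "save", "print", "close"]

def strict_lookup(word):
    # Error-intolerant: only an exact match is accepted.
    return word if word in COMMANDS else None

def tolerant_lookup(word):
    # Error-tolerant: also accepts near-misses, at the cost of extra
    # machinery (similarity scoring, a cutoff threshold, best-match ranking).
    if word in COMMANDS:
        return word
    matches = difflib.get_close_matches(word, COMMANDS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(strict_lookup("sabe"))    # None: the typo defeats the simple version
print(tolerant_lookup("sabe"))  # 'save': the typo is detected and corrected
```

Even in this toy case the tolerant version is roughly twice as long and relies on an additional library; industrial-strength error correction multiplies that overhead many times over.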

As I showed in Part I of this article, the reason that organisms tolerate errors is because they have the most wonderful repair and maintenance mechanisms built-in by design!

(4) Redundancy

A common objection to Michael Behe’s claim that the bacterial flagellum is irreducibly complex is to point to other bacterial flagella that require fewer parts than the one Behe chose. This argument is superficially persuasive, but false, because it assumes an important property of life that cannot be assumed—redundancy. Living organisms usually carry with them more than they really need to survive. The obvious reason for this is that God intended them to have the capacity to adapt to changing conditions, in particular to the stress of living under the curse of Adam’s sin after the Fall. Evolutionists have never come near to explaining how even the simplest living organism could arise naturalistically, so the difficulty is multiplied many-fold if the first organism has to contain more than it needs at that time to survive. If it did not, it could not have adapted to environmental change and would have gone extinct before life got to the second generation.

To illustrate how much redundancy can be present, consider the bacterium Salmonella enterica. Of 700 enzymes identified in infected mouse hosts, over 400 can be knocked out without reducing Salmonella virulence, reflecting “extensive metabolic redundancies and access to surprisingly diverse host nutrients.”18 The mouse genome provides another example. In gene knockout experiments, only about 15% of single-gene knockouts were developmentally lethal.19 That is, about 85% of mouse genes can be knocked out (one or a few at a time) and still produce a viable adult. If naturalistic experiments are unlikely to produce an organism with sufficient functionality to survive and reproduce, then they are even less likely to produce one with more functionality than it needs. Redundancy is powerful evidence of design.
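A quick back-of-envelope check of the fractions implied by the figures cited above (using only the numbers quoted in the text):

```python
# Redundancy figures quoted in the text (Becker et al. 2006; mouse knockout studies).
salmonella_total = 700        # enzymes identified in infected mouse hosts
salmonella_dispensable = 400  # "over 400" knocked out without reducing virulence

mouse_lethal = 0.15           # ~15% of single-gene knockouts are lethal

print(f"Salmonella enzymes dispensable: {salmonella_dispensable / salmonella_total:.0%}")
print(f"Mouse genes individually dispensable: {1 - mouse_lethal:.0%}")
```

So well over half of the identified Salmonella enzymes, and about 85% of mouse genes, are individually dispensable under the cited experimental conditions.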

(5) Self-organizing chemicals

Because many steps in biochemistry have a self-organizing component, origin-of-life researchers are always looking for self-organizing systems in nature that might perhaps explain the origin of life. However, the self-organizing components in life already have ultra-pure composition and ultra-specific structure. For example, tubulin, the protein that forms much of the internal scaffolding (cytoskeleton) of cells, and kinesins, the motor proteins that travel along tubulin pathways, will, when put together in a test tube, spontaneously form networks similar to those inside cells, such as the mitotic spindle apparatus that assists in cell division.20 It is the pure composition and remarkable structure of these amazing proteins (whatever their origin) that causes them to behave in this way, not any innate tendency of environmental chemistry towards self-organization.

Similarly, RNA shows a wide range of interesting self-organizing activity in pure solutions. However, this very same activity creates great problems for any origin-of-life experiment. A long strand of RNA is like a long strand of sticky tape—it sticks to anything it touches, including itself, and quickly ‘self-organizes’ into a jumbled mess. Moreover, it is highly unstable outside of its normal cellular environment and breaks down in a matter of minutes.

Assigning unrealistic properties to the environment

According to Christian de Duve, the two components that produced life from non-life were chemistry and environment. At no point does he make any systematic attempt to describe what these special conditions in the environment might have been, so it is an appeal to ignorance once again, not an argument based on objective knowledge.

The most he says is things like “it is not known”. However, on p. 167 he speculates on what environmental conditions might have caused nascent proto-life to overcome the final singularity and become the first life form. What were these special conditions? “Starvation, acidification, and excessive heat.” These conditions are not at all special—they are reproducible in any laboratory—and none of them produces life!

Misuse of the concept of chance

Since no one has any naturalistic explanation for life, cosmologists have suggested that perhaps an infinite number of other universes exist and we are just the lucky one where life occurred by chance. But chance cannot make impossible events possible. Chance is nothing more than the mathematical calculation of how often real events might occur if they are not certain to occur.

For example, the laws of physics do not prevent a cow from jumping over a fence. Cows do jump over fences, but only rarely, so we could gather information and use statistical theory to predict how likely that event might be, given various circumstances. However, the laws of physics do prevent a cow from jumping over the moon (it would need a rocket engine to do that) so the idea of a cow jumping over the moon by chance is absurdly anti-scientific. In similar manner, Professor de Duve has met impossibility after impossibility in his search for the origin of life because the laws of chemistry work against, not towards, his goal. To propose that chemicals could come to life by chance is as absurdly anti-scientific as the idea that a cow could jump over the moon by chance. Both are Polanyi impossibilities.

Identity of the Designer

Photo by William Wallace Denslow.

The laws of physics do not prevent a cow from jumping over this moon, but they do prevent a cow from jumping high enough to escape the Earth’s gravity and jump over the real Moon. In exactly the same way, the laws of chemistry prevent environmental chemicals from organizing themselves into living organisms. Neither event can occur by chance, and it is profoundly anti-scientific to suggest that either could.

Richard Dawkins argues that intelligent design is a non-solution to the origin of life issue because it begs the question of the identity of the designer.

“If complex organisms demand an explanation, so does a complex designer. And it’s no solution to raise the plea that the Intelligent Designer is simply immune to the normal demands of scientific explanation. To do so would be to shoot yourself in the foot. You cannot have it both ways.”7

This is a red herring. There is a pencil on my desk that I can deduce was intelligently designed, and Richard Dawkins would agree with me. But neither of us needs to know the identity of the designer in order to come to that conclusion. All we need is the evidence of objective knowledge and the logic of historical inference. The identity of the designer is a separate issue from the evidence of design.

Actually, the Law of Cause and Effect that Dawkins appeals to does, when used correctly, give us a strong argument for design and at least some clue to the designer’s identity. An effect can only be produced by a cause that is sufficient, or competent, to produce that effect. For example, an ant cannot push a bulldozer, but a bulldozer can push an ant. The movement of an ant therefore cannot be accepted as a sufficient cause to explain the movement of a bulldozer, but the movement of a bulldozer could be accepted as a sufficient cause to explain the movement of an ant. Correspondingly, the astonishing sophistication of autopoietic life could only be explained by a comparably astonishingly sophisticated cause. The only causes available are chance, chemistry-and-the-environment, and intelligent design. Of these, only intelligent design meets this criterion.


Conclusion

Life’s irreducible structure and the concept of autopoiesis are not in any way contradicted by the common arguments against intelligent design. Yockey’s claim that the origin of life is an undecidable question does not stand up to scrutiny—it is an empty play on words designed to hide the uncomfortable conclusion of design.

The idea that life arose naturalistically from non-living chemicals is not objective knowledge, nor is it based upon any inference, deduction or extrapolation from objective knowledge. Quite the reverse—it is an ideological statement formulated in opposition to universally contradictory objective knowledge. Only intelligent design meets the criterion of an acceptable explanation according to the Law of Cause and Effect.

Naturalistic explanations of biological origins all depend upon faulty reasoning such as: (i) exclusion by definition and ridicule, (ii) assuming what must be proved, (iii) misinterpreting the scientific evidence, (iv) assigning unrealistic properties to the environment, and (v) misusing the concept of chance. In Polanyi’s terms, now is a very reasonable time to declare the impossibility of a naturalistic origin of life and accept that it was intelligently designed.



  1. Williams, A., Life’s irreducible structure Part 1: autopoiesis, Journal of Creation 21(2):109–115, 2007. Return to text.
  2. Luisi, P.L., Autopoiesis: a review and a reappraisal, Naturwissenschaften 90(2):49–59, 2003. Return to text.
  3. DNA repair, <>; Human DNA repair genes, <>, 20 June 2007. Return to text.
  4. Kirschner, M.W. and Gerhart, J.C., The Plausibility of Life: Resolving Darwin’s Dilemma, Yale University Press, New Haven, CT, 2005. Return to text.
  5. de Duve, C., Singularities: Landmarks on the Pathways of Life, Cambridge University Press, 2005, pp.4–5. Return to text.
  6. Yockey, H., Probability Theory, Evolution, and the Origin of Life, Cambridge University Press, 2005. Return to text.
  7. Dawkins, R. and Coyne, J., One side can be wrong, The Guardian, September 1 2005, <,13026,1559743,00.html>, 20 June 2007. Return to text.
  8. Dawkins, R., The Blind Watchmaker, Penguin, London, 1988, p.140. Return to text.
  9. Lewontin, R., Billions and billions of demons, The New York Review, p. 31, 9 January 1997; see Amazing admission. Return to text.
  10. Dawkins, R., The Ancestor’s Tale: A Pilgrimage to the Dawn of Evolution, Houghton Mifflin, New York, p. 20, 2004. Return to text.
  11. Dawkins, R., Climbing Mt Improbable, Norton, New York, 1996. Return to text.
  12. Dawkins, R., The God Delusion, Houghton Mifflin, New York, 2006. Return to text.
  13. Williams, A., Astonishing DNA complexity uncovered, 20 June 2007. Return to text.
  14. A pleiotropic gene is one that influences many different aspects of an organism’s development. Return to text.
  15. Prud’homme, B. et al., Repeated morphological evolution through cis-regulatory changes in a pleiotropic gene, Nature 440:1050–1053, 2006. Return to text.
  16. Carroll, S.B., Endless Forms Most Beautiful: The new science of Evo Devo, Norton, New York, pp.114–122, 2005. Return to text.
  17. Computer-generated random numbers are technically called pseudorandom; they are chosen from a series that is very much longer than most common applications require so the series is unlikely to start repeating itself. Return to text.
  18. Becker, D. et al., Robust Salmonella metabolism limits possibilities for new antimicrobials, Nature 440:303–307, 2006. Return to text.
  19. Knockout mouse, <>, 20 June 2007. Return to text.
  20. Karsenti, E., Nédélec, F. and Surrey, T., Modelling microtubule patterns, Nature Cell Biology 8(11):1204–1211, 2006. Return to text.