Releasing the Truth


Bacteria Share Light Spectrum with Leaves

Photosynthesis is one of the most important chemical processes for the existence and survival of the entire biosphere. Every plant and some microscopic organisms are capable of using light from the Sun to produce and store energy.

It’s another crucial example of a complex biological mechanism that defies any plausible explanation from the evolutionary community, and delights theistic views!

Plant leaves convert light into chemical energy for use in cells. Their biochemistry specifically absorbs the blue and red areas of the visible light spectrum. Now researchers have discovered that light-harvesting bacteria living on the surfaces of leaves gather energy from the green part of the spectrum, meaning that they cooperate rather than compete with plants. How did this perfectly balanced energy-sharing system come about?

Knowing that light-harvesting microbes live in aquatic environments, the researchers tested the hypothesis that similar bacteria live on leaves. They were right. And the light that the microbes gather was “compatible with the plant’s photosynthesis,” resulting in “a significant ecological advantage to microbes inhabiting this environment.”1

In a study published online in Environmental Microbiology, the research team screened genetic material from the surfaces of different leaves harvested from an oasis near the Dead Sea. They found genetic codes for specific types of rhodopsins, which are molecules that capture light. Some enable sight in vertebrate eyes, but many of the rhodopsins found on leaf surfaces were part of light-gathering apparatuses used by bacteria as tiny energy generators called “light-driven proton pumps.”1

The researchers found that the bacteria absorb the most light at exactly the same point where plants absorb no light.

Not only does the sharing of ecosystem resources between these species—as between plants and animals—indicate design,4 but the ingenious machinery required to capture and convert light into useful cellular energy points to an Engineer of surpassing brilliance.5

This was emphasized by yet another observation. The researchers found that the bacteria use some of their rhodopsins as light sensors so they can most effectively use the energy available to them. “This suggests that microorganisms in the phyllosphere [leaf surfaces] are intensively engaged in light sensing, to accommodate the effects of fluctuations in light quality, intensity and UV radiation at the leaf surface,” according to the study authors.1

From: ICR

References

  1. Atamna-Ismaeel, N. et al. Microbial rhodopsins on leaf surfaces of terrestrial plants. Environmental Microbiology. Published online before print September 1, 2011. <http://www.imls.uzh.ch/research/vonmering/publ/21883799.pdf>
  2. Darwin, C. 1859. On the Origin of Species by Means of Natural Selection: or The Preservation of Favoured Races in the Struggle of Life. New York: D. Appleton and Company.
  3. Mackay, J. Leaves and Microbes Share the Light. Evidence News. Creation Research. Posted on evidenceweb.net November 16, 2011, accessed November 29, 2011.
  4. Demick, D. 2000. The Unselfish Green Gene. Acts & Facts. 29 (7).
  5. Swindell, R. 2002. Shining Light on the Evolution of Photosynthesis. Journal of Creation (formerly TJ). 17 (3): 74-84.

ATP synthase: a wonderful molecular machine

Today, we’re going to talk about an absolutely wonderful biological machine called ATP synthase, another marvel built into almost every living being, one that fascinates and intrigues naturalistic minds! Again, to conceive that such an intricate system could have arisen from random mutations defies logic. But, unfortunately, nothing prevents evolutionists from contriving the most bizarre hypotheses with the purpose of giving the credit to chance, to nothingness, once again.

ATP synthase is a molecular machine found in every living organism. It serves as a miniature power generator, producing an energy-carrying molecule, adenosine triphosphate, or ATP. The ATP synthase machine has many parts we recognize from human-designed technology, including a rotor, a stator, a camshaft or driveshaft, and other basic components of a rotary engine. This machine is just the final step in a long and complex metabolic pathway involving numerous enzymes and other molecules—all so the cell can produce ATP to power biochemical reactions and provide energy for other molecular machines in the cell. Each of the human body’s 14 trillion cells performs this reaction about a million times per minute. Over half the body’s weight in ATP is made and consumed every day!

ATP-driven protein machines power almost everything that goes on inside living cells, including the manufacture of DNA, RNA, and proteins, the clean-up of debris, and the transport of chemicals into, out of, and within cells. Other fuel sources will not power these cellular protein machines, for the same reasons that oil, wind, or sunlight will not power a gasoline engine.

ATP synthase occurs on the inner membranes of bacterial cells, and the innermost membranes of both mitochondria and chloroplasts, which are membrane-bound structures inside animal and plant cells.

ATP synthase manufactures ATP from two smaller chemicals, ADP and phosphate. ATP synthase is so small that it is able to manipulate these tiny molecules, one at a time. ATP synthase must convert some other form of energy into new ATP. This energy is in the form of a hydrogen ion (H+) gradient, which is generated by a whole different protein system from ATP synthase. Hydrogen ions pour through ATP synthase like wind through a windmill. This constitutes a positively charged electric current, in contrast to our electric motors, which use a negative current of electrons.

ATP synthase is a complex engine and pictures are necessary to describe it. Scientists use clever techniques to resolve the exact locations of each of many thousands of atoms that comprise large molecules like ATP synthase. This protein complex contains at least 29 separately manufactured subunits that fit together into two main portions: the head and the base. The base is anchored to a flat membrane like a button on a shirt (except that buttons are fixed in one place, whereas ATP synthase can migrate anywhere on the plane of its membrane). The head of ATP synthase forms a tube. It comprises six units, in three pairs. These form three sets of docking stations, each one of which will hold an ADP and a phosphate. ATP synthase includes a stator (stationary part), which arcs around the outside of the structure to help anchor the head to the base.

Figure 1: F1-ATPase

Notice in figure 1 a helical axle labeled “γ” in the middle of the ATP synthase. This axle runs through the center of both the head and base of ATP synthase like a pencil inside a cardboard toilet paper tube.

Here is the “magic”: When a stream of tiny hydrogen ions (protons) flows through the base and out the side of ATP synthase, passing across the membrane, they force the axle and base to spin. The stiff central axle pushes against the inside walls of the six head proteins, which become slightly deformed and reformed alternately. Each of your trillions of cells has many thousands of these machines spinning at over 9,000 rpm!

The spinning axle causes squeezing motions of the head so as to align an ADP next to a phosphate, forming ATP … in bucket loads. Many other cellular protein machines use ATP, breaking it down to ADP and phosphate again. This is then recycled back into ATP by ATP synthase. Lubert Stryer, author of Biochemistry adds,

“… the enzyme appears to operate near 100% efficiency …”1

Two Canadian researchers looked into the innermost workings of ATP synthase. Using electron cryomicroscopy, they produced the first-ever three-dimensional representation of all the enzyme’s parts fitted together the way they are in the actual enzyme.2 Their study results, published in the journal Nature, enabled them to reconstruct the specific sequence of timed events that makes the enzyme work. And it’s a good thing that it functions, because every living cell—from bacteria to brain cells—depends on one or another version of ATP synthase.2

The team found two half-channels situated in the base of the motor, forming something like two half-stroke cylinders. The first half-channel directs a single proton to a precise spot on one of the rotor’s 12 segments where a negatively charged oxygen atom receives and temporarily holds it. After spinning 330 degrees on the rotor, the proton re-enters the cylinder assembly through the second half-channel, and is finally released into an area of lower proton concentration. (ICR)

The F1-ATPase motor

In a paper published in March 1997, Hiroyuki Noji et al. directly observed the rotation of the enzyme F1-ATPase, a subunit of a larger enzyme, ATP synthase. This had been suggested as the mechanism for the enzyme’s operation by Paul Boyer. Structural determination by X-ray diffraction by a team led by John Walker had supported this theory. A few months after Noji et al. published their work, it was announced that Boyer and Walker had won a half share of the 1997 Nobel Prize for Chemistry for their discovery.

The F1-ATPase motor has nine components—five different proteins with the stoichiometry 3α:3β:1γ:1δ:1ε. In bovine mitochondria, they contain 510, 482, 272, 146 and 50 amino acids respectively, so Mr = 371,000. F1-ATPase is a flattened sphere about 10 nm across by 8 nm high—so tiny that 10^17 of them would fill the volume of a pinhead. This has been shown to spin ‘like a motor’ to produce ATP, a chemical which is the ‘energy currency’ of life. This motor produces an immense torque (turning force) for its size—in the experiment, it rotated a strand of another protein, actin, 100 times its own length. Also, when driving a heavy load, it probably changes to a lower gear, as any well-designed motor should.
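
As a rough sanity check on that quoted relative molecular mass, here is a minimal Python sketch; the ~108 Da average residue mass is our illustrative assumption, not a figure from the article:

```python
# Rough check of the F1-ATPase Mr quoted above (3 alpha, 3 beta, 1 gamma, 1 delta, 1 epsilon).
# Assumption (ours, not the article's): an average amino-acid residue mass of ~108 Da.
residues_per_subunit = {"alpha": 510, "beta": 482, "gamma": 272, "delta": 146, "epsilon": 50}
copies = {"alpha": 3, "beta": 3, "gamma": 1, "delta": 1, "epsilon": 1}

total_residues = sum(copies[name] * n for name, n in residues_per_subunit.items())
average_residue_mass = 108.0  # daltons, a common textbook estimate

print(total_residues)                         # 3444 residues in total
print(total_residues * average_residue_mass)  # ~372,000 Da, close to the quoted Mr of 371,000
```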

ATP synthase also contains the membrane-embedded FO subunit functioning as a proton (hydrogen ion) channel. Protons flowing through FO provide the driving force of the F1-ATPase motor. They turn a wheel-like structure as water turns a water wheel, but researchers are still trying to determine precisely how. This rotation changes the conformation of the three active sites on the enzyme. Then each in turn can attach ADP and inorganic phosphate to form ATP. Unlike most enzymes, where energy is needed to link the building blocks, ATP synthase uses energy to link them to the enzyme, and throw off the newly formed ATP molecules. Separating the ATP from the enzyme needs much energy. (CMI)

Evolutionists’ reverie

Evolutionary scientists have suggested that the head portion of ATP synthase evolved from a class of proteins used to unwind DNA during DNA replication, i.e., the hexameric helicase enzyme.3

However, how could ATP synthase “evolve” from something that needs ATP, manufactured by ATP synthase, to function? Absurd “chicken-egg” paradox! Also, consider that ATP synthase is made by processes that all need ATP—such as the unwinding of the DNA helix with helicase to allow transcription and then translation of the coded information into the proteins that make up ATP synthase. And manufacture of the 100 enzymes/machines needed to achieve this needs ATP! And making the membranes in which ATP synthase sits needs ATP, but without the membranes it would not work. This is a really vicious circle for evolutionists to explain.

Some say that not every living being needs ATP synthase (anaerobic bacteria, for example, produce ATP via glycolysis only), and thus they imply that evolution really did produce ATP synthase… But every organism needs ATPase!

Obligate anaerobes may not use ATP synthase to manufacture ATP, but they do use it to pump protons out of their cytoplasm. They would die otherwise. All cells have ATP synthase, because all cells need it. In sum, all life depends on ATPase, but not all life depends on it for ATP production. Anaerobic bacteria use it to maintain pH balance instead. So ATPase must have been present in the very first cell.

As research advances, more impressive facts are disclosed! The necessity of engineering ATPase is actually just the tip of the iceberg. One amazingly revealing 2010 study in the journal Nature demonstrated how not only ATPase, but the entire electron transport chain apparatus and in fact whole mitochondria were absolutely essential to the ‘first’ eukaryote.4

So, the evolutionary dilemma only grows stronger! Oh, surely they miss Darwin’s epoch, when cells were just organic “jellybeans” with no complex content, when the anatomy of the eye was the most complicated biological mechanism they had to deal with (and even the knowledge of the eye available then was enough to puzzle Darwin’s mind!), when there was no annoying DNA (with its smart informational content pointing to an intelligent Creator), no second law of thermodynamics denying the spontaneous increase in complexity of polymers occurring naturally, and so on… Damn it, science, always disturbing godless people’s dreams!

References

______________________________________________

1 Stryer, L., Biochemistry, 18.4.3, The world’s smallest molecular motor: rotational catalysis, online: <www.ncbi.nlm.nih.gov/books/bv.fcgi?rid=stryer.section.2528#2539>.

2 Lau, W. C. Y. and J. L. Rubinstein. 2012. Subnanometre-resolution structure of the intact Thermus thermophilus H+-driven ATP synthase. Nature. 481 (7380): 214-218.

3 Evolution of the F1-ATPase <www.life.uiuc.edu/crofts/bioph354/Evol_F1.html>.

4 Lane, N. and W. Martin. 2010. The energetics of genome complexity. Nature. 467 (7318): 929-934.

Also:

http://creation.com/atp-synthase-in-all-life

http://creation.com/design-in-living-organisms-motors-atp-synthase

 

God bless you!

“I will praise thee; for I am fearfully and wonderfully made: marvellous are thy works; and that my soul knoweth right well.” Psalms 139.14

The restless quest for “magic” dark matter continues!

Year after year, researchers keep up their Mission Impossible-like struggle (without Tom Cruise, of course) to find the totally hypothetical substance (or whatever it may be) called dark matter! What would it be?

“In astronomy and cosmology, dark matter is a type of matter hypothesized to account for a large part of the total mass in the universe. Dark matter cannot be seen directly with telescopes; evidently it neither emits nor absorbs light or other electromagnetic radiation at any significant level. Instead, its existence and properties are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. According to the Planck mission team, and based on the standard model of cosmology, the total mass–energy of the Universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy.” Wikipedia

Is it clear for you now? It is, to date, a fantasy! Following a reverse, upside-down scientific methodology, the BBT proponents have faced some inexplicable facts; for example, a 3D view of the Universe shows countless galaxies separated from each other by gigantic VOIDS! This cosmological “scratchiness” poses an enormous conundrum for naturalistic theories of origins.

According to standard cosmologies, an explosive beginning such as the Big Bang should have distributed matter more smoothly across the universe. Shaun Thomas, lead author of the research appearing in the journal Physical Review Letters, told Wired Science, “This potentially could be one of the first signs that something peculiar is going on.” (wired.com)

This month’s (June) edition of Nature reports an ambitious (and quite expensive, of course!) project to build a ground-based telescope array to detect and analyse high-energy γ-rays, as the title affirms:

High-energy γ-ray astronomy comes back to Earth

With Earth’s atmosphere acting as a near-total shield against high-energy γ-rays, astronomers have traditionally relied on space telescopes to detect them. But plans that will be presented in early July at the International Cosmic Ray Conference in Rio de Janeiro, Brazil, indicate that γ-ray astronomers are betting their future on an ambitious ground-based telescope. On dark, moonless nights, the proposed Cherenkov Telescope Array (CTA) would capture the fleeting trails of blue light that are produced when γ-ray photons, emitted by collapsing stars or gas-guzzling black holes, are absorbed in the upper atmosphere.

“For high-energy γ-ray astronomy, the future is on the ground,” says Rene Ong, an astroparticle physicist at the University of California, Los Angeles, who is part of the CTA consortium of more than 1,000 physicists and engineers from 27 countries. Proponents of the CTA say that it would be able to solve two mysteries: the origin of ultra-high-energy cosmic rays and the nature of dark matter. The facility could also test theories of quantum gravity, they say.

In the 1950s, astronomers pioneered the technique of tracking γ-rays by their atmospheric signature (see ‘Tell-tale trails’). Three operational ground-based arrays consisting of just a few telescopes have since identified more than 150 high-energy γ-ray sources.

Source: Nature

The CTA would have the energy range, sensitivity and angular resolution to find many more. It would consist of two sites, one in the Northern Hemisphere and one in the Southern, each with dozens of telescopes spread over about ten square kilometres. Together, they could identify an estimated 1,000 high-energy γ-ray sources. With a construction start in 2015, the facilities are projected to carry a price tag of €200 million (US$268 million).

The arrays would build on the range of energies up to 100 gigaelectronvolts (GeV) already mapped by the Fermi Gamma-ray Space Telescope, and could cover energies up to 100,000 GeV, a region that has never before been imaged. To achieve the same coverage in space, “you would have to fly an instrument the size of a football stadium,” says CTA spokesperson Werner Hofmann of the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. But the CTA’s upper-energy limit is still only one-millionth of the highest energy cosmic rays detected so far.

US$268 million easily spent on this useless mission shows one more reason why mainstream scientists keep clinging to these standard (nonetheless flawed, unsatisfactory) theories: “streams” of money (paid with our hard-earned taxes…) flow more easily for them! How much money has been wasted on the BBT, evolution, the “primordial soup” and other fanciful hypotheses is mind-blowing!

Another lame effort, by the way: earlier research has totally failed, as we read in a Wired Science article:

“The result could mean cosmologists need to reassess their understanding of dark energy, the mysterious force that drives the universe outward at an ever-increasing rate. Dark energy itself is supposed to be almost perfectly smooth, but clumps of dark energy could draw clumps of visible matter around them.

The extra lumps could also mean dark energy doesn’t exist at all. Instead, gravity could behave differently on very large scales than it does on smaller scales, meaning Einstein’s theory of general relativity needs an overhaul.”

Conclusion

No one has a clue what these (dark matter, dark energy) are. This huge appeal to hypothetical stuff is making many uncomfortable. Richard Panek, in a March 11, 2007 New York Times article, quipped, “‘You get to invoke the tooth fairy only once,’ meaning dark matter, ‘but now we have to invoke the tooth fairy twice,’ meaning dark energy.” In an April 11, 2007 article in Nature, Jenny Hogan described the mood at a recent cosmology conference; one astronomer said, “There is a sense of desperation…. The standard model is horribly ugly, but the data support it.” Dark energy was called “a profound problem from the viewpoint of fundamental physics.”

It remains to be seen if cosmologists will be able to establish the existence of dark matter and dark energy to everyone’s satisfaction. But it becomes difficult to defend against charges of pseudoscience when the bulk of your model depends on imponderable substances. If they only serve to shield a model from being falsified, appeals to dark things seem occult in more than one sense.

God bless you!

Entropy in General Chemistry

Contents

  1. Introduction
  2. What is entropy, really?
  3. What is entropy good for?
  4. Entropy proves that ‘heat’ will always flow from a hotter body to a cooler one
  5. What is the change in entropy when a gas expands or gases mix or liquids mix?
  6. Why gases mix. Why liquids mix.
  7. What happens to the entropy of a solvent when an (ideal) solute is dissolved in it?
  8. ‘Positional entropy’ and thermal entropy are similar.
    ‘Unavailable’ energy is available!

Introduction

From the preceding description of the second law of thermodynamics, you already have a good start toward understanding entropy. In this section we will go further by seeing exactly how the basic process of energy becoming dispersed or spread out in spontaneous events is measured by entropy increase.

Some recaps involving the second law plus a few new details will give us a strong foundation. First, in chemistry the energy on which we focus most is the motional energy of molecules that is responsible for the temperature of a substance or a system of chemicals. When that energy is transferred from hotter surroundings to a cooler system, it has been called “heat” energy. However, what is really happening is that faster moving molecules in the surroundings are colliding with the walls of the system and that way some of their energy gets to the molecules of the system and makes them move faster. “Heat” is really the motional energy of molecules being transferred.

  • (The molecules in a gas like nitrogen at room temperature at any instant are moving at an average speed of nearly a thousand miles an hour, constantly colliding and therefore exchanging energy so that their individual speeds are always changing. If two molecules with exactly the same speed have just collided head-on, they will even be motionless — but only for an instant, before being hit by another molecule! Some other molecules are racing at as fast as 2500 miles an hour. When we measure the temperature of a system and find it is higher than ‘room temperature’, this means that the average molecules are moving even faster than a thousand miles an hour and thus their motional energy is considerably greater.)
    • As you now can see, it takes motional molecular energy (‘heat energy’) from hotter surroundings like faster moving molecules in a flame or violently vibrating iron atoms in a hot plate, etc., to melt or to boil a substance (the system) at the temperature of its melting or boiling point. That amount of motional energy from the surroundings that is required for melting or boiling is called the phase change energy, specifically the enthalpy of fusion or of vaporization, respectively. What this phase change energy does to the molecules in the system is to break bonds between them (not chemical bonds inside the molecules that hold the atoms together!). Thus, the added energy does not contribute to the motional energy and make the molecules move any faster (and thus doesn’t raise the temperature). Instead, it is necessary so that they can break free to move as a liquid or as a vapor.
    • Energy-wise, this means that when a solid becomes a liquid or a liquid a vapor, motional energy coming from the surroundings is changed to ‘ potential energy ‘ in the substance. (This phase change energy in a substance is released back to the surroundings when the surroundings become cooler than its boiling or melting temperature, respectively.) Phase change energy increases the entropy of a substance or system because it is energy that must be spread out in the system from the surroundings so that the substance can exist as a liquid or vapor at a temperature above its melting or boiling point. (Often, the phase change process — and other processes — are said to occur in a ‘universe’ that consists of the surroundings plus the system. Clearly then, the total energy of such a universe becomes more dispersed or spread out if part of the greater energy that was only in the hotter surroundings becomes transferred so some is in the cooler system. That’s energy dispersal.)

Second, our important overall principle is: “Energy of all types changes from being localized to becoming dispersed or spread out, if it is not hindered from doing so.” Entropy (or better, entropy change) is the quantitative measure of that kind of a spontaneous process: how much energy has been transferred/T, or how widely it has become spread out at a specific temperature.

We will ignore the old error of talking about entropy as “disorder”. That has been deleted from most US university chemistry textbooks (http://entropysite.oxy.edu/#whatsnew). Most of these texts also have adopted the description of entropy change as involving energy dispersal that we are using here.

What is entropy, really?

Entropy was first defined and used in 1865, long before the behavior of molecules was understood. Chemists and physicists had no idea that temperature was due to the motional energy of molecules or that “heat” was actually the transferring of that motional molecular energy from one place to another. Back then, entropy change, ΔS, could only be described in macro terms that could be measured, such as volume or temperature or pressure. The 1865 equation that is still completely correct is stated in most modern texts as simply ΔS = q (rev)/T. (And millions of students for almost a century and a half have silently or loudly asked the question, “What does that really mean?” and been frustrated by inadequate explanations!)

Fortunately today, a complete explanation is simple and easy to understand because we can use the old equation but interpret it in modern terms of how molecules are responsible for what is happening. Here is that equation expanded, part by part:

  • ΔS = the change in entropy of a system, i.e., of a substance or a group of substances: its entropy after some motional energy (“heat”) has been transferred to it by fast-moving molecules, minus the entropy of that system before any such energy was transferred to it. So, ΔS = S (final) – S (initial).
  • Then, ΔS = S (final) – S (initial) = q, motional energy (“heat”) that is transferred reversibly to the system from the surroundings (that can be just another system of chemicals that is in contact with the first one) divided by T, the absolute temperature at which the transfer occurs = q (rev) / T.
    • That “reversible” or “reversibly” simply means that T, the temperature of the system, has to stay exactly the same while any energy is being transferred to or from it. That’s easy in the case of phase changes, where the system absolutely must stay in the solid or liquid form until enough energy is given to it to break bonds between the molecules before it can change to a liquid or a gas. For example in the melting of ice at 273.0 K, no matter what temperature the surroundings are — from 273.1 K to even 500 K and higher, the temperature of the ice will stay at 273.0 K until the last molecules in the ice are changed to liquid water, i.e., until all the hydrogen bonds between the water molecules in ice are broken and new, less-exactly fixed hydrogen bonds between liquid water molecules are formed. This amount of energy necessary for ice melting per mole has been found to be 6008 joules at 273 K. Therefore, the entropy change of q(rev)/T = 6008 J/273 K or 22 J/K.
    • The situation and calculations are completely different at temperatures other than the melting or boiling point of a substance, i.e., when no intermolecular bond-breaking is possible. Then, when motional molecular energy (“heat”) from the surroundings is being transferred to a system, this raises the temperature of the system by making its molecules move faster and faster. However, the temperature is constantly rising, so how can you measure a particular value of “T” at which you transfer some energy?!
    • The only way you could transfer the energy reversibly at T over a temperature range would be by measuring how much energy is transferred at each of many many small temperature intervals or increments. For example, if you wanted to know the entropy change, q(rev)/T, from 300 K to 310 K, you should measure the amount of energy transferred at dozens or hundreds of temperature increments, say from 300.00 K to 300.01 K and then 300.01 to 300.02 and on and on and on, dividing the q by each T, and finally adding them all.
    • Fortunately, the process is far easier via calculus if the effect of energy input to the system is linearly dependent on the temperature change. This is true in simple heating of a system at moderate to relatively high temperatures. (Then, from calculus we find that the energy being transferred “per incremental change in temperature” (the heat capacity, Cp), multiplied by the integral of dT/T from T(initial) to T(final), directly gives ΔS = Cp ln T(final)/T(initial).) A short numerical sketch of both routes follows below.
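
To make the two routes just described concrete, here is a minimal Python sketch; the constant heat capacity of 75.3 J/(K·mol) (liquid water) and the 300 K to 310 K range are illustrative assumptions, not values from the text:

```python
import math

Cp = 75.3                          # J/(K·mol), assumed constant over the range
T_initial, T_final = 300.0, 310.0

# Route 1: add up q/T over many tiny temperature increments.
steps = 100_000
dT = (T_final - T_initial) / steps
dS_sum, T = 0.0, T_initial
for _ in range(steps):
    q = Cp * dT                    # energy transferred in this tiny increment
    dS_sum += q / (T + dT / 2)     # divide by the mid-increment temperature
    T += dT

# Route 2: the closed-form result from calculus.
dS_closed = Cp * math.log(T_final / T_initial)

print(dS_sum, dS_closed)           # both come out to about 2.47 J/(K·mol)
```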

There… now you know more about “what entropy really is” than most beginning students have known for the past century! Entropy in a heating process is just a measure of the motional energy of molecules from hotter surroundings being transferred (reversibly) to a cooler system, divided by the T at which it is transferred. What’s so mysterious or complicated about that? (Of course, in advanced work after the first year of chemistry, entropy change can be very complex, but that is not our problem now.)

What is entropy good for? To begin with, see what standard molar entropy means:

Let’s look at one aspect of the surprising power of entropy: how it quickly gives us a general idea of how much energy a substance needs to exist at a given temperature compared to other chemicals. If you are now taking a course in chemistry and have a textbook, turn to the table or short list of ‘Standard Molar Entropy Values’ for elements and compounds. (The standard temperature used in such tables is 298.15 K and a solid is (s), a liquid (l), and a gas (g). The entropies are often stated to several figures in terms of joules/K mol but let’s use whole numbers rather than precise values.)

All textbooks give you generalities, like “The liquid forms of substances have higher standard entropies than the solid, and gases have higher entropies than their corresponding liquids” and “For different substances in the same phase, molecular complexity determines which ones have higher entropies.” Great. Some more junk to memorize, right? But why, fundamentally, are entropies higher or lower for various elements and compounds or in solid, liquid and gas states?

Of course, we have a start on a good answer to that question: it must have something to do with the motional energy (plus any phase change potential energy) in the substance at that temperature. However, we need one more fact about energy and entropy for a complete understanding. At absolute zero, 0 K, all substances are assumed to have zero entropy. Then, when a substance is warmed from 0 K, more and more motional energy is added to it as its temperature increases to the ‘standard entropy temperature’ of 298.15 K. (From what we have already discussed, a substance such as hydrogen would require additional energy to change phase from a solid at 0 K to a liquid, and then more to change from liquid to gas, before the temperature reaches 298 K.) So it is understandable that a solid like carbon at 298 K (as diamond) has a low standard state entropy (S°) of 2 J/K mol, whereas liquid water has a medium S° of 70 J/K mol and hydrogen gas has an S° of 131 J/K mol. Liquids and gases need more energy because they have had to be changed from a solid to a liquid and then to a gas at 298.15 K, and they must stay that way at that temperature.

There. You have a pretty good idea of why the standard entropies are larger or smaller for various substances and for various states — solid or otherwise. The larger the number for entropy in the tables of standard entropy values, the greater the quantity of energy that had to be dispersed from the surroundings to that substance for it to exist and be stable at 298.15 K rather than at 0 K!

Then, it’s obvious why “liquid forms of substances have higher standard entropies than their solid form” — the solid had to be given additional energy (the enthalpy of fusion) so the molecules could move more freely. The entropy of that added energy, ΔH/T of fusion/melting, has to be added to the entropy of the solid substance. No problem. But why does pure carbon in the form of solid graphite have a higher S° (6 J/K mol) than pure carbon in the form of solid diamond (2 J/K mol)? That means that graphite must need energy to move in some way that diamond can’t. Aha — diamond is totally rigid, each carbon atom tightly held in a tetrahedral framework. Graphite is totally different — its atoms are even more tightly held within layers, but those layers are like sheets of paper in a pile. The individual atoms can’t move readily, but the sheets of atoms can slide over one another a tiny bit without great difficulty. That requires some energy, and therefore it means that graphite has to have more energy in it than diamond at 298.15 K.

Entropy proves that ‘heat’ will always flow from a hotter body to a cooler one

Here’s a question students have asked for generations, although it doesn’t bother some because it just seems like common sense. (Well, it is common experience!) “How can you prove scientifically that ‘heat’ will always flow from a place that is hotter to one that is cooler?”

Let’s use a warm metal bar touching a similar bar that is just barely cooler (both of them isolated from the surroundings so no ‘heat’ is lost). Designate q for the thermal energy/“motional energy” that will be transferred from the hotter to the cooler bar. (Actually, it is vibrational energy in the hotter bar that will be transferred.) Under these nearly reversible conditions, let us designate the temperature of the hotter bar as T(hot) and that of the slightly cooler bar as T(cool). Because T(hot) is larger than T(cool), q/T(hot) is smaller than q/T(cool). Now, when q is transferred from the hotter bar to the barely cooler bar, the hotter bar loses entropy q/T(hot) while the cooler bar gains entropy q/T(cool). But since q/T(cool) is larger than q/T(hot), the cooler bar has increased in entropy more than the hot bar has decreased in entropy.
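
Here is the same argument in numbers, as a minimal Python sketch; the amount of energy transferred and the two temperatures are illustrative assumptions, not values from the text:

```python
q = 100.0       # joules of 'motional energy' transferred from hotter to cooler bar (assumed)
T_hot = 300.1   # K, temperature of the hotter bar (assumed)
T_cool = 300.0  # K, temperature of the barely cooler bar (assumed)

dS_hot = -q / T_hot    # entropy change of the hotter bar (it loses energy)
dS_cool = q / T_cool   # entropy change of the cooler bar (it gains energy)

print(dS_hot + dS_cool)  # about +0.00011 J/K: the cooler bar gains more entropy than the hotter bar loses
```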

Important note! Considering only the originally cooler bar as the system, the above simple calculation shows that the system has increased in entropy because energy has been transferred to it. However, considering the originally hotter bar as the surroundings and the cooler as the system, that whole small universe of surroundings and system has increased in entropy because of the energy spreading out to the cooler bar as they both reach the same temperature.

[Only for advanced students and only after reading ahead about the Boltzmann entropy equation: Since the conditions are near reversibility (and qualitatively, the same entropy increase can be shown to be true at a larger temperature difference), the foregoing conclusions can be generalized as “energy transfer from hot surroundings to a colder system always results in an increase in entropy in the cooler system.” Then, because ΔS = k_B ln(W_final/W_initial), the number of accessible microstates in the cooler bar must have increased. The energy, q, didn’t change in the transfer, but it is more dispersed in the cooler bar because there are more accessible microstates for the entire energy of the cooler bar, in any one of which it might be at one instant. Thus, the entropy increase that we see in a spontaneous thermal energy transfer is due to increased energy dispersal in the cooler system, and in the universe of surroundings plus system, and (especially helpful to beginners because it is easily visualized) increased dispersal in three-dimensional space. This is a powerful example of the utility of viewing the nature of energy as becoming more dispersed spontaneously — if it is not constrained.]

What is the change in entropy when a gas expands or gases mix or liquids mix?

In all of the preceding examination of entropy, we have been considering — essentially — the change in entropy when energy is transferred from the surroundings to a system, and we found that the traditional entropy equation of q/T aids in understanding why things happen: the less it is hindered, the more the q ‘motional energy’ tends to spread out or disperse. But what does entropy have to do with spontaneous events where there isn’t any transfer of energy from the surroundings? Why do gas molecules spread out into a vacuum? That doesn’t take any energy flowing from the surroundings. Why do gases spontaneously mix and liquids that are somewhat like each other mix easily without any help? Why do so many chemicals dissolve in water?

By now, you can hardly help guessing correctly! All these events must have something to do with energy dispersing or spreading out. The only difference between them and what we have talked about before is that they all involve a system’s initial energy spreading out within the system itself. No energy is transferred to or from the surroundings; no heating or cooling at all IF (as we always assume in our elementary approach) the substances involved are ideal gases or liquids or solids whose molecular attractions do not make the results complex.

A first and an excellent example is shown in most textbooks as two spherical glass bulbs connected with a stopcock between them. One bulb has a gas in it and the other is evacuated. Open the stopcock and the gas flows into the evacuated bulb. Then, for a paragraph or even a dozen, some texts make a big deal about how improbable it is that all the gas would flow back into the first bulb or all go into the second bulb. That should sound dumb to you because you know the basic principle: the initial motional energy of the gas will spread out as much as it can — and stay that way as the speeding molecules continually collide with each other and bounce around everywhere throughout the two bulbs. Molecules with motional energy will disperse in three dimensional space if they are not constrained, i.e., if they’re not hindered by stopcocks or small containers!

The following indented sections are only for honors students. They can be skipped by the majority of those in beginning chemistry because the preceding and following pages provide a superior introduction for starting to understand entropy change in chemistry. However, more and more texts are introducing probability and microstates ineptly or incorrectly and therefore students going on as chemistry majors should be aware of a valid approach to those topics.

Actually, thermodynamic entropy increase in chemistry is dependent on two factors. It is enabled by the inherent motional energy of molecules above 0 K (that also can arise from bond energy change in a reaction). It is only actualized if a process makes available a larger number of microstates, a maximal probability for the distribution of the molecular motional energy in the final state. Both factors are essential. Neither is sufficient by itself. (See http://entropysite.oxy.edu/calpoly_talk.html.) In this introduction to entropy for beginning students, I have focused on the first factor and left tacit the presence of the second, the actualization details that have always been implicit in the examples given. Now, let us look at what microstates are, and then at what “maximal probability of the distribution of energy” means.

One way of describing a microstate is that it is like an instantaneous photo of all the molecules in a system that shows each of their positions and their energies. (The next photo, equally impossible to take (!), would be ‘snapped’ about a trillionth of a trillionth of a second later, when only two molecules have collided and changed their energies. This would be another microstate.) It would require many more than trillions of trillions of years of taking such photos to show the possible number of microstates in a mole of any liquid or gaseous substance at 298 K. Even at a temperature of around 4 K, there are about 10 raised to an exponent of 26,000,000,000,000,000,000 microstates in any substance. This is an unimaginable number, but it is a reliable calculation by K. S. Pitzer (see “Order to Disorder”). In quantum mechanics, only the energies of the molecules are considered to be distributed on energy levels.

In summary, a microstate is one way of a huge number of ways in which the energy of the system can be distributed on quantized energy levels. The larger the number of possible (sometimes called “accessible”) microstates that there are for a system, the greater is the system’s entropy. This is how microstates and probability are related to entropy. Mobile, energetic molecules are continually colliding seeking the most probable situation, exploring a fraction of the gigantic number of microstates that are ‘accessible’. If a process — such as expansion into a larger volume — is made available to them, an increase in entropy is actualized because in a larger volume the energy levels are closer together. Thus, without any change in their initial motional energy, it is more probable that there are more ways in which that energy can be distributed at any one instant because there are more accessible microstates. It is not that the energy of the system is in more than one microstate at one instant!! That is impossible. Rather, a greater dispersal or spreading out of energy after a process like expansion of a gas or mixing of fluids than before it occurred means that at any instant there are many many more choices of possible microstates in the next instant than were possible before the expansion or other kind of change in state.

General chemistry textbooks commonly show microstates as fixed positions of three or four molecules and then calculate probabilities to prove how more probable it is to have the molecules more widely distributed. This is misleading in that it omits why molecules should become “more widely distributed” — i.e., that they are always energetic and mobile particles! Further, it is equally misleading to treat microstates in the classic sense of just an arrangement in space. A microstate of molecular energies in quantum mechanics is a distribution of those energies on energy levels, not a static pattern. Molecules obey quantum mechanics rather than the statistical mechanics with which we may calculate.

Why gases mix. Why liquids mix.

Just as simple as the reason for gas expanding into a vacuum is the reason for two ideal gases mixing. You could use the same apparatus as the one for gas expansion. Put some red orange bromine vapor in one of the bulbs and air (nitrogen plus oxygen) in the other. Soon both bulbs will be pale red orange and, if you would analyze the composition of both bulbs you would find the identical amount of bromine and oxygen and nitrogen in them. But why? Because each molecule of each kind in the bulbs has a greater volume in which to bounce around if it is permitted to be anywhere in two bulbs rather than one. How obvious! All of the energetic colliding molecules — the ‘motional energy’ of oxygen, nitrogen, bromine and any other gas or liquid — will spread out if they are not hindered from doing so. (The same is true of molecules in solids, but they can’t move appreciably because they are really hindered by being so strongly attracted to their close neighbors!) The more sophisticated, but still easy to understand, way of describing why gases or liquids mix is “there are more ways in which the energy of the system can be distributed in a larger volume than in a smaller volume.” Or those ‘more ways’ can be considered the greater probability of a system to be in a state in which its energy is more widely distributed, in the sense of having more microstates. This modern view of ‘more ways’ fits perfectly with the predictions of a genius, Ludwig Boltzmann, who lived in the late nineteenth century, and used “W” for his ‘more ways’.

Before much was known about molecules (the greatest scientist of his day didn’t believe in them; Boltzmann himself thought that an almost infinite number of molecules might exist in a nearly infinitesimal space), before speeds or energy levels for atoms or molecules were dreamed of — Boltzmann developed the basic theory that entropy, S, was related to the number of different ways that a system of molecules could achieve a particular total energy. He said that this total of ways was the most probable state for the system and now we realize that those probable “ways” of distribution of molecular energy are equivalent to what we now call quantized ‘microstates’ . The equation that bears his name was not written (by Planck) until the year of Boltzmann’s death in 1906 and there is no evidence that Boltzmann ever saw it or ever used “Boltzmann’s constant, k”. (That constant was first used by Planck in 1900 but nobly, he never claimed it should be called “Planck’s constant”.) The Boltzmann entropy equation is simply S = k ln W, where k is Boltzmann’s constant of R/N, the gas constant divided by Avogadro’s constant.

ΔS = k ln W(final state)/W(initial state) is a most useful form of the Boltzmann entropy equation. Calculations from it most readily yield quantitative results for the kinds of entropy change that we have just been discussing: volume expansion of gases, mixing of gases or of liquids, and dissolving of solids in solvents. That last type of entropy change is surely one of the most important in general chemistry because it is the cause of colligative effects such as the raising of boiling points and the depression of freezing points of solvents, as well as of osmosis. We will not develop the calculations, but it is important to realize that the concepts behind the processes are well supported by seeing entropy change as involving a spreading out of the motional energy of the constituents, the solvent and also the solid.
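
As one standard textbook illustration (not a calculation developed in this passage) of the kind of result the equation yields: if an ideal gas doubles its volume, each of its N molecules has twice as many places it could be, so the number of accessible microstates grows by a factor of 2^N, and, using k = R/N from the previous paragraph,

```latex
\Delta S \;=\; k \ln\!\frac{W_{\text{final}}}{W_{\text{initial}}}
         \;=\; k \ln 2^{N}
         \;=\; N k \ln 2
         \;=\; n R \ln 2
         \;\approx\; 5.76~\mathrm{J\,K^{-1}}\ \text{per mole of gas.}
```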

When liquid water and alcohol are added to one another, they spontaneously mix — as do any ‘like’ (similar to each other in chemical structure) liquids. This is true regardless of volume change because the mere presence of two substances in the same total volume involves a dispersal of the energy of each in that whole mixture — and increased dispersal and greater entropy change as the relative quantities of each liquid in the mixture approach a 50:50 mole ratio. This is one of several cases in which the simple view of energy becoming more dispersed when substances are mixed works — i.e., predicts the correct result of spontaneity and of entropy increase. However, the fundamental calculations are moderately complex and involve statistical mechanics in which the solution is considered to be in many ‘cells’ with each cell a different combination of molecules in the ratio of the quantities of the two liquids present. The equation for the entropy increase in the mixture uses the relative molar quantities of liquids that were mixed.
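
For reference, the ideal form of the equation referred to here (a standard result, not spelled out in the passage) is

```latex
\Delta S_{\text{mix}} \;=\; -R\,\bigl(n_1 \ln x_1 + n_2 \ln x_2\bigr),
```

where n1 and n2 are the moles of the two liquids and x1 and x2 their mole fractions; per mole of mixture it is largest at a 50:50 mole ratio, where it equals R ln 2, about 5.76 J/K.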

What happens to the entropy of a solvent when an (ideal) solute is dissolved in it?

When even a small amount of solid solute is added to a solvent, just as in the mixtures of liquids above, the individual solvent molecules (and the solute molecules) in the new solution are each more separated from their own kind of molecule than before, and thus each individual molecule’s motional energy is more spread out or dispersed. The entropy of the solvent and of the solute each increases in the solution. (The more fundamental reasoning and resulting equations from statistical mechanics are the same as described above for liquid mixtures.) If we realize that a solvent’s energy is more dispersed in a solution than when it is a pure solvent, we can see why a solvent in a solution should have increased entropy compared to its entropy as a pure solvent. Then also, it is obvious that the entropy increase will be larger depending on how many molecules of solute are added to the solvent. That increased entropy of the solvent in a solution is the cause of the “colligative effects” that we study: (1) osmotic pressure, (2) boiling point elevation, and (3) freezing point depression.

Now, if the solvent tends to stay in a solution (because its energy is more dispersed there) rather than being only with its own kind of molecules in pure solvent, it will stay in that solution if it has a ‘choice’! That means: (1) if a membrane is placed between a pure solvent and a solution containing it (and that membrane allows solvent molecules from the pure solvent to go through it from the other side, but not the solute molecules), pure solvent will go through the membrane to get to the solution, because its energy is more spread out there. That’s ‘osmosis’, a very important phenomenon in biochemistry. (2) Solvent molecules in a solution will not leave that solution to become vapor molecules in the atmosphere above the solution as readily as at the normal boiling point of the pure solvent; a higher temperature will be necessary to cause enough molecules to leave the solution to be in equilibrium with the atmospheric pressure and ‘boil’. (3) Solvent molecules in a solution will not be in equilibrium with the solid phase (composed of pure solvent molecules) at the normal freezing point; a lower temperature is necessary to reduce the motional energy of the solvent molecules so that they can make intermolecular bonds to form a solid, i.e., be in equilibrium with their molecules in the solid phase. All of these colligative effects increase as the amount of solute in the solvent is increased, because the entropy (i.e., energy dispersal) of the solvent increases in the solution with a greater concentration of solute.
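
For completeness, the quantitative forms of these three colligative effects found in general-chemistry texts (standard results, not derived in the passage) are

```latex
\Pi \;=\; M R T, \qquad
\Delta T_{b} \;=\; K_{b}\, m, \qquad
\Delta T_{f} \;=\; K_{f}\, m,
```

where Π is the osmotic pressure, M the molar concentration of solute, m the molality of solute particles, and K_b and K_f are constants characteristic of the solvent.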

‘Positional entropy’ and thermal entropy are more similar than different!
Entropy, ‘unavailable energy’, is available!

Two important final notes: (1) about the similarity of all kinds of entropy change — whether “heat” transfer to a system from surroundings (or the converse) or the volume expansion of a system or the mixing of fluids or solid and solvent; and (2) the meaning of “unavailable energy” and “entropy as waste heat”.

  • All types of entropy increase involve an increased spreading out of motional energy in three-dimensional space. Of course, it is clear that this is what happens in gas expansion and in the mixing of fluids (and is called “configurational” or “positional entropy” in some texts.  However, this entropy change is actually due to spreading out or dispersal of the initial molecular motional energy that was in the system, not just due to the number of “positions” of the molecules in space!). “Thermal entropy” change always involves surroundings plus system. In chemistry these consist of real gases or liquids or solid in three-dimensional space and so, spontaneous energy dispersal between them must entail an increase in spreading out of motional energy in that real 3-D space. Thus, entropy increase is increased motional energy dispersal in space, no matter what may be the process in chemistry.
  • We used a phrase several pages back, when discussing standard state entropy, that is very important in clearing up a century-old misunderstanding about entropy. It described the amount of energy that is present in a compound at a given temperature, the amount necessary for it to be stable and stay that way at that temperature. “To stay that way at a specific temperature” is very important in understanding two of the most confusing statements made by textbook authors about entropy. Many texts have said (and a few advanced texts still do) “entropy is unavailable energy” and “entropy is waste heat”. Those sentences are ambiguous, either untrue or true depending on exactly what is meant by the words. If we put an ice cube on a small iron pan, the ice cube would begin to melt and the pan would become cooler. In this sense, the sentence is foolishly untrue: the pan’s entropy (which is roughly related to its motional energy content at T) represents instantly available energy or “heat”.
  • However, if any amount of energy is transferred from that 298 K pan, the pan no longer has enough motional energy for its q/T value to equal the entropy needed for that amount of iron to exist at 298 K. Thus, from this viewpoint, the sentence is true, but tricky: We can easily transfer energy from the pan. It is not at all “unavailable” — except that when we actually transfer the slightest amount of energy from it, the pan no longer is in its original energy and entropy states! For the pan to remain in its original state, the energy is unavailable.
  • “Entropy is waste heat” is equally ambiguous when motional energy is transferred to the surroundings as a result of a chemical reaction. An example would be the “waste heat” coming out of a car’s exhaust pipe as a result of the chemical reaction between gasoline and oxygen in a car’s engine. That energy is spread out in the surroundings and is no longer available for running the car. However, it is completely available for many other processes, e.g., heating water or cooking food (indirectly!). Therefore, the energy measured by entropy change is not waste heat in terms of any process; it simply is no longer available for the original process that occurred in the system at the original temperature.