Think about this: is energized matter natural? If it were, then why is it losing energy and breaking down toward an eventual state of heat death? Matter shows us that this is not its normal state; it is changing back to a state where energy is distributed equally.
Sigh...
"In recent years, the thermodynamic interpretation of evolution in relation to entropy has begun to utilize the concept of the
Gibbs free energy, rather than entropy.
[9] This is because biological processes on earth take place at roughly constant temperature and pressure, a situation in which the Gibbs free energy is an especially useful way to express the
second law of thermodynamics. The Gibbs free energy is given by:
The minimization of the Gibbs free energy is a form of the
principle of minimum energy, which follows from the
entropy maximization principle for closed systems. Moreover, the Gibbs free energy equation, in modified form, can be utilized for
open systems when
chemical potential terms are included in the energy balance equation. In a popular 1982 textbook,
Principles of Biochemistry by noted American biochemist
Albert Lehninger, it is argued that the order produced within cells as they grow and divide is more than compensated for by the disorder they create in their surroundings in the course of growth and division. In short, according to Lehninger, "living organisms preserve their internal order by taking from their surroundings
free energy, in the form of nutrients or sunlight, and returning to their surroundings an equal amount of energy as heat and entropy."
[10]
Similarly, according to the chemist
John Avery, from his recent 2003 book
Information Theory and Evolution, we find a presentation in which the phenomenon of life, including its origin and evolution, as well as human cultural evolution, has its basis in the background of
thermodynamics,
statistical mechanics, and
information theory. The (apparent) paradox between the
second law of thermodynamics and the high degree of order and complexity produced by living systems, according to Avery, has its resolution "in the information content of the
Gibbs free energy that enters the biosphere from outside sources."
[11] The process of natural selection responsible for such local increase in order may be mathematically derived directly from the expression of the second law equation for connected non-equilibrium open systems.
[12]"
https://en.wikipedia.org/wiki/Entropy_and_life
Well, it is the case.
It's the way your questions are formed, and the fact that they could have been resolved by simply looking them up, as I have already pointed out. Look at the way one of your questions is laid out:
You appear to know that ID people agree about an intelligent agency, but then you immediately roll into assuming what their evidence contains. Someone genuinely trying to learn will gather the freely available information on a subject first and then ask questions about it, or about something that isn't easily looked up. In many cases, people who wish to attack a viewpoint don't care what those who hold it have to say; they form cornering questions to force a foolish answer. It's a form of verbal attack.
I mean, you have answered the question by selecting something else. I guess I could have asked: Is the Intelligent Designer Christian, Muslim, or Can Nothing Be Said About Him... I Mean, It?
So if you wish to learn about ID's evidence for yourself quickly and easily, here you go:
http://www.designinference.com/documents/2005.06.Specification.pdf
In response to his flagellum argument:
"In my last
post, I explained why the bacterial flagellum remains so powerful an icon for the Intelligent Design (ID) movement: it looks and functions just like the outboard motor, a machine designed by intelligent human engineers. So conspicuous is the resemblance that it seems perfectly logical to infer a Designer for the flagellum.
Yet as we saw, appearances can be deceiving. ID advocates William Dembski and Jonathan Witt agree that “a careful investigator will be on guard against deceiving appearances. The sun looks like it rises in the east and sets in the west, but really the Earth spins on its axis as it revolves around the sun. A healthy skepticism about appearances is vital…To distinguish appearance from reality, the successful investigator must remain open to various possibilities and follow the evidence.”
Despite the strong appearance of special design, most scientists, myself included, believe the evidence points to a gradual development for the bacterial flagellum. We’ll delve into some of that evidence in future posts. First, however, I want to explain how flagella are assembled in bacteria. This amazing process gives me such delight in our Father’s world; I hope it does for you as well.
How does the flagellum assemble?
The bacterial flagellum may look like an outboard motor, but there is at least one profound difference: the flagellum assembles spontaneously, without the help of any conscious agent. The self-assembly of such a complex machine almost defies the imagination. As I showed with an earlier blog on the self-assembly of viruses (much simpler contraptions by comparison), all such phenomena seem astonishing and counterintuitive.
Because the tail of the flagellum extends well beyond the bacterial cell wall, many of its 40 or so components have to be extruded through an export apparatus that assembles first and makes up the base of the final structure. In general, assembly occurs as a linear process, with components in the base coming together first, followed by the formation of the hook, followed by formation of the filament (see figure).
First, the MS-ring (orange) assembles in the inner cell membrane, most likely in conjunction with some of the export proteins (light green; labeled Type III secretion system). The MS-ring serves as housing for the export apparatus and as a mounting plate for the rotor, which will assemble later.
Next, the stator (gray) assembles around the MS-ring, followed by the rotor (light blue; labeled C-ring). The stator remains fixed in the cell’s frame of reference, while the rotor spins; together, these two parts make up the proton-powered motor.
Now that the base of the flagellum is built, most of the remaining parts are assembled from proteins exported through its center. First comes the rod (yellow), made of four different kinds of proteins, guided by a fifth, the “rod cap,” which is believed to help break down the tough bacterial cell wall.
This “rod cap” is then displaced by a “hook cap,” which guides the formation of the hook structure (dark blue). The hook acts as a universal joint to connect the rod and the filament. When the hook reaches its characteristic length, several “junction zones” form, followed by the export of the “filament cap” protein. This cap structure, different than the rod or hook caps, guides the bundling of more than 20,000 copies of a protein called flagellin into a helical tail (dark green; labeled filament).
The helical filament is long and fragile, but breakage is not too serious a concern for the bacterium. Like a lizard, the flagellum can grow a new tail if it breaks, because flagellin proteins continue to move down the central channel from the cell body toward the tip. Other parts of the flagellum are dynamic as well: individual proteins in the rotor and stator, for example, can exchange with freely-diffusing proteins in the membrane. Such activity may be important for the bacterium’s direction-sensing capability.
How do we know all this?
Scientists are pretty clever at teasing out the workings of microscopic machines like the flagellum. The general order of assembly was meticulously worked out by removing individual protein components one at a time and observing what occurred. If you remove the flagellin protein, for instance, you get the base and the hook, but not the tail. This tells us that the tail forms late in the assembly process. If you remove one of the proteins that make up the MS-ring, on the other hand, the motor elements do not assemble and neither does the rest of the flagellum. That’s how we know the MS-ring isn’t just tacked on at the end.
Other scientists have looked at how the timing of the assembly process is controlled at the genetic level. The genes that contain the instructions for making all the protein components of the flagellum are organized in a number of clusters called operons. Each operon is read when its “master sequence” is activated like a light switch. When the switch is flipped, the genes in that particular operon are interpreted by the cell so that the corresponding proteins are made. It turns out that the genes needed to produce proteins in the base of the flagellum are activated first. Once the base is complete, a clever feedback mechanism flips the next switch, activating the next set of genes, which allows later stages of assembly to occur, and so on. (It’s actually more complicated than that, but you get the idea.) So the parts of the flagellum are made “just in time,” shortly before each piece is needed.
Natural forces work “like magic”
Nothing we know from everyday life quite prepares us for the beauty and power of self-assembly processes in nature. We’ve all put together toys, furniture, or appliances; even the simplest designs require conscious coordination of materials, tools, and assembly instructions (and even then there’s no guarantee that we get it right!). It is tempting to think the spontaneous formation of so complex a machine is “guided,” whether by a Mind or some “life force,” but we know that the bacterial flagellum, like countless other machines in the cell, assembles and functions automatically according to known natural laws. No intelligence required.1
Video animations like this one by Garland Science beautifully illustrate the elegance of the self-assembly process (see especially the segment from 2:30-5:15). Isn’t it extraordinary? When I consider this process, feelings of awe and wonder well up inside me, and I want to praise our great God.
Several ID advocates, most notably Michael Behe, have written engagingly about the details of flagellar assembly. For that I am grateful—it is wonderful when the lay public gets excited about science! But I worry that in their haste to take down the theory of evolution, they create a lot of confusion about how God’s world actually operates.
When reading their work, I’m left with the sense that the formation of complex structures like the bacterial flagellum is miraculous, rather than the completely normal behavior of biological molecules. For example, Behe writes, “Protein parts in cellular machines not only have to match their partners, they have to go much further and assemble themselves—a very tricky business indeed” (Edge of Evolution, 125-126). This isn’t tricky at all. If the gene that encodes the MS-ring component protein is artificially introduced into bacteria that don’t normally have any flagellum genes, MS-rings spontaneously pop up all over the cell membrane. It’s the very nature of proteins to interact in specific ways to form more complex structures, but Behe makes it sound like each interaction is the product of special design. Next time I’ll review some other examples from the ID literature where assembly is discussed in confusing or misleading ways.
Notes:
1. Some would say this kind of statement violates the sovereignty of God. Not so! I fully believe God is sovereign, but I don’t take that to mean he himself carries out everything that happens inside each cell."
http://biologos.org/blog/self-assembly-of-the-bacterial-flagellum-no-intelligence-required
A critique of earlier work:
"1. Many years ago, I read this advice to a young physicist desperate to get his or her work cited as frequently as possible: Publish a paper that makes a subtle misuse of the second law of thermodynamics. Then everyone will rush to correct you and in the process cite your paper. The mathematician William Dembski has taken this advice to heart and, apparently, made a career of it.
2. Specifically, Dembski tries to use information theory to prove that biological systems must have had an intelligent designer. [Dembski, 1999; Behe et al., 2000] A major problem with Dembski's argument is that its fundamental premise - that natural processes cannot increase information beyond a certain limit - is flatly wrong and precisely equivalent to a not-so-subtle misuse of the second law.
3. Let us accept for argument's sake Dembski's proposition that we routinely identify design by looking for specified complexity. I do not agree with his critics that he errs by deciding after the fact what is specified. That is precisely what we do when we look for extraterrestrial intelligence (or will do if we think we've found it).
4. Detecting specification after the fact is little more than looking for nonrandom complexity. Nonrandom complexity is a weaker criterion than specified complexity, but it is adequate for detecting design and I will show below that there is no practical difference between nonrandom complexity and Dembski's criterion. [For other criticisms of Dembski's work, see especially Fitelson et al., 1999; Korthof, 2001; Perakh, 2001; Wein, 2001. For the Argument from Design in general, see Young, 2001a.]
5. Specified or nonrandom complexity is, however, a reliable indicator of design only when we have no reason to suspect that the nonrandom complexity has appeared naturally, that is, only if we think that natural processes cannot bring about such complexity. More to the point, if natural processes can create a large quantity of information, then specified or nonrandom complexity is not a reliable indicator of design.
6. Let us, therefore, ask whether Dembski's "law" of conservation of information is correct or, more specifically, whether natural processes can create large quantities of information.
7. Entropy. We begin by considering a machine that tosses coins, say five at a time. We assume that the coins are fair and that the machine is not so precise that its tosses are predictable.
8. Each coin may turn up heads or tails with 50 % probability. There are in all 32 possible combinations:
H H H H H
H H H H T
H H H T T
and so on. Because the coins are independent of each other, we find that the total number of permutations is 2 x 2 x 2 x 2 x 2 = 2^5 = 32.
9. The exponent, 5, is known as the entropy of the system of five coins. The entropy is the number of bits of data we need to describe the arrangement of the coins after each toss. That is, we need one bit that describes the first coin (H or T), one that describes the second (H or T), and so on till the fifth coin. Here and in what follows, I am for simplicity neglecting noise.
10. In mathematical terms, the average entropy of the system is the sum over all possible states of the quantity -p(i) x log p(i), where p(i) is the probability of finding the system in a given state i and the logarithm is to the base 2. In our example, there are 2^N permutations, where N is the number of coins, in our case, 5. All permutations are equally likely and have probability 1/2^N. Thus, we may replace p(i) with the constant value p = 1/2^N. The sum over all i of -p(i) x log p(i) becomes 2^N x [-p x log(p)], which is just equal to -log(p). That is, the entropy of N coins tossed randomly (that is, unspecified) is -log(p), or 5.
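To see the arithmetic for yourself, here is a quick Python sketch (my own illustration, not part of Young's article) that evaluates the entropy formula for N fair coins and confirms that a uniform distribution over 2^N permutations carries N bits:

import math

def uniform_entropy(num_states):
    # Shannon entropy in bits of a uniform distribution: sum of -p*log2(p) over all states.
    p = 1.0 / num_states
    return num_states * (-p * math.log2(p))

for n_coins in (1, 2, 5):
    print(n_coins, uniform_entropy(2 ** n_coins))   # prints 1.0, 2.0, 5.0 bits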
11. Information. At this point, we need to discuss an unfortunate terminological problem. The entropy of a data set is the quantity of data - the number of bits - necessary to transmit that data set through a communication channel. In general, a random data set requires more data to be transmitted than does a nonrandom data set, such as a coherent text. In information theory, entropy is also called uncertainty or information, so a random data set is said to have more information than a nonrandom data set. [Blahut, 1991] In common speech, however, we would say that a random data set contains little or no information, whereas a coherent text contains substantial information. In this sense, the entropy is our lack of information, or uncertainty, about the data set or, in our case, about the arrangement of the coins. [Stenger, 2002; for a primer on information theory, see also Schneider, 2000]
12. To see why, consider the case where the coins are arranged by an intelligent agent; for example, suppose that the agent has chosen a configuration of 5 heads. Then, there is only one permutation:
H H H H H
Because 2^0 = 1, the entropy of the system is now 0. The information gained (the information of the new arrangement of the coins) is the original entropy, 5, minus the final entropy, 0, or 5 bits.
13. In the terminology of communication theory, we can say that a receiver gains information as it receives a message. [Pierce, 1980] When the message is received, the entropy (in the absence of noise) becomes 0. The information gained by the receiver is again the original entropy minus the final entropy.
14. In general, then, a decrease of entropy may be considered an increase of information. [Stenger, 2002; Touloukian, 1956] The entropy of the 5 coins arranged randomly is 5; when they are arranged in a specified way, it is 0. The information has increased by 5 bits. As I have noted, this definition of information jibes with our intuitive understanding that information increases as randomness decreases or order increases. In the remainder of this paper, I will use "entropy" when I mean entropy and "information" when I mean decrease of entropy. Information used in this way means "nonrandom information," and I will show below how it is related to Dembski's complex specified information.
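As a concrete check of that bookkeeping (again my own sketch, not Young's), the information gained when an agent fixes one specific arrangement is just the drop in entropy from the random case to the specified case:

import math

def shannon_entropy(probs):
    # Shannon entropy in bits of any discrete distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

random_coins = [1 / 32] * 32   # five coins tossed at random: 32 equally likely permutations
specified = [1.0]              # the agent fixes a single arrangement, e.g. H H H H H
print(shannon_entropy(random_coins) - shannon_entropy(specified))   # 5.0 bits gained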
15. Dembski correctly notes that you do not need a communication channel to talk of information. In precisely the sense that Dembski means it, the system of coins loses entropy and therefore gains information when the intelligent agent arranges the coins. That is, a nonrandom configuration displays less entropy and therefore more information than a random configuration. There is nothing magic about all heads, and we could have specified any other permutation with the same result.
16. Similarly, the genome contains information in nonrandom sequences of the four bases that code genetic information. It also contains a lot of junk DNA (at least as far as anyone is able to deduce). [Miller, 1994] If we write down the entire genome, we at first think it has a very high entropy (many bases with many more possible combinations). But once we find out which bases compose genes, we realize that those bases are arranged nonrandomly and that their entropy is 0 (or at least very much less than the entropy of an equivalent, random set of bases). That is, the genes contain information because their entropy is less than that of a random sequence of bases of the same length.
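A toy illustration of that last point (mine, with a cartoonishly repetitive "gene" standing in for real nonrandom structure): compare the empirical entropy of length-3 blocks in a random base sequence with that of a highly ordered one.

import math
import random
from collections import Counter

def block_entropy(seq, k=3):
    # Empirical Shannon entropy in bits of the distribution of non-overlapping length-k blocks.
    blocks = [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

random.seed(0)
random_seq = "".join(random.choice("ACGT") for _ in range(999))
ordered_seq = "ACG" * 333   # a maximally nonrandom stand-in for a structured sequence

print(block_entropy(random_seq))   # a bit under the 6-bit maximum (4^3 = 64 possible blocks)
print(block_entropy(ordered_seq))  # 0.0 bits: only one block ever occurs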
17. Natural selection. Suppose now that we have a very large number, or ensemble, of coin-tossing machines. These machines toss their coins at irregular intervals. The base of each machine is made of knotty pine, and knots in the pine sometimes leak sap and create a sticky surface. As a result, the coins sometimes stick to the surface and are not tossed when the machine is activated.
18. For unknown reasons, machines that have a larger number of, say, heads have a lower probability of malfunctioning. Perhaps the reverse side of the coins is light-sensitive, corrodes, and damages the working of the machine. For whatever reason, heads confers an advantage to the machines.
19. As time progresses, many of the machines malfunction. But sometimes a coin sticks to the knotty pine heads up. A machine with just a few heads permanently showing is fitter than those with a few tails permanently showing or those with randomly changing permutations (because those last show tails half the time, on average). Given enough machines and enough time (and enough knots!), at least some of the machines will necessarily end up with five heads showing. These are the fittest and will survive the longest.
20. You do not need reproduction for natural selection. Nevertheless, it must be obvious by now that the coins represent the genome. If the machines were capable of reproducing, then machines with more heads would pass their "headedness" to their descendants, and those descendants would outcompete machines that displayed "tailedness." After a few generations, there would be a preponderance of headedness in the genomes of the ensemble.
21. Thus do we see a combination of regularity (the coin-tossing machines) and chance (the sticky knots) increasing the information in a genome.
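Here is a rough Python simulation of that thought experiment; the population size, sticking chance, and failure rule are numbers I made up, but the outcome illustrates Young's point that regularity plus chance leaves the surviving "genomes" far from random:

import random

random.seed(1)
N_COINS = 5
STICK_PROB = 0.02   # chance a tossed coin lands on a sappy knot and sticks, face up as it fell
BASE_FAIL = 0.05    # per-round malfunction risk, reduced for every head showing (arbitrary)

class Machine:
    def __init__(self):
        self.faces = [random.choice("HT") for _ in range(N_COINS)]
        self.stuck = [False] * N_COINS

    def toss(self):
        for i in range(N_COINS):
            if not self.stuck[i]:
                self.faces[i] = random.choice("HT")
                if random.random() < STICK_PROB:
                    self.stuck[i] = True   # this coin now shows the same face forever

    def survives(self):
        heads = self.faces.count("H")
        return random.random() > BASE_FAIL * (1 - heads / N_COINS)

machines = [Machine() for _ in range(10000)]
for _ in range(500):   # repeated rounds of tossing and culling
    for m in machines:
        m.toss()
    machines = [m for m in machines if m.survives()]

avg_heads = sum(m.faces.count("H") for m in machines) / max(len(machines), 1)
print(len(machines), "machines survive, averaging", round(avg_heads, 2), "heads out of 5")
# The survivors cluster near all-heads, far from the 2.5-head average of random tosses:
# nonrandom information produced by chance plus a lawlike survival rule.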
22. Explanatory filter. Dembski's explanatory filter is a flow chart that is designed to distinguish between chance and design. The coin-tossing machines would escape Dembski's explanatory filter and suggest design where none exists, because the filter makes a false dichotomy between chance and design. Natural selection by descent with modification is neither chance nor design but a combination of chance and law. Many self-organizing systems would also pass through Dembski's filter and "prove" design where none exists. Indeed, the intelligent designauts give short shrift to self organization, an area where they are most vulnerable.
23. The 747 argument. Nonrandom information can thus be generated by natural causes. In order to quantify the meaning of specified complexity, Dembski defines complex specified information as nonrandom information with 500 bits or more. He claims that complex specified information could not appear naturally in a finite time and argues that, therefore, life must have been designed. What about that?
24. You will hear the argument that there is a very small chance of building a Boeing 747 by tossing the parts into the air and expecting them to fall down as a fully assembled airplane. Similarly, the argument goes, there is a very small chance of building a complex organism (or, equivalently, a genome) by chance. The analogy is false for at least two reasons.
25. First, airplanes and mousetraps are assembled from blueprints. The arrangement of the parts is not a matter of chance. The locations of many of the parts are highly correlated, in the sense that subsystems such as motors are assembled separately from the airplane and incorporated into the airplane as complete units. All airplanes and mousetraps of a given generation are nominally identical. When changes are made, they are apt to be finite and intentional. This is one reason, by the way, that Michael Behe's mousetrap [Behe, 1996] as an analogy for an irreducibly complex organism is a false analogy. [Young, 2001b]
26. Birds and mice, by contrast, are assembled from recipes, not blueprints. The recipes are passed down with modification and sometimes with error. All birds and mice of a given generation are different. When changes are made, they are apt to be infinitesimal and accidental.
27. When Dembski appeals to specified complexity, he is presenting the 747 argument in a different guise. He presents a back-of-the-envelope calculation to "prove" that there has not been enough time for complex specified information to have accumulated in a genome. The calculation implicitly assumes that each bit in the genome is independent of all others and that no two changes can happen simultaneously.
28. Creationists used to argue, similarly, that there was not enough time for an eye to develop. A computer simulation by Dan Nilsson and Susanne Pelger [1994] gave the lie to that claim: Nilsson and Pelger estimated conservatively that 500,000 years was enough time. I say conservatively because they assumed that changes happened in series, whereas in reality they would almost certainly have happened in parallel, and that would have decreased the time required. Similarly with Dembski's probability argument: Undoubtedly many changes of the genotype occurred in parallel not in series as the naive probability argument assumes.
29. Additionally, many possible genomes might have been successful; minor modifications in a given gene can still yield a workable gene. The odds of success are greatly increased when success is fuzzily defined. The airplane parts could as well have assembled themselves into a DC-10 and no one would have been the wiser. Dembski's analysis, however, ignores the DC-10 and all other possibilities, and in effect assumes that the only possible airplane is the 747. More specifically, by assigning a probability to a specific outcome, Dembski ignores all other possible outcomes and thereby calculates far too low a probability.
30. In assuming that the genome is too complex to have developed in a mere billion years, Dembski in essence propagates the 747 argument. Organisms did not start out with a long, random genome and then by pure chance rearrange the bases until, presto, Adam appeared among the apes. To the contrary, they arguably started with a tiny genome. How that first genome appeared is another matter; I think here we are arguing about natural selection by descent with modification, not about the origin of life. No less quantitatively than Dembski, we may argue that the genome gradually expanded by well known mechanisms, such as accidental duplications of genes and incorporation of genomes from other organisms, until it was not only nonrandom, but also complex, that is, contained more than 500 bits. To put it as simply as possible, if an organism with a 400-bit genome incorporates an organism with a 300-bit genome, then the resulting organism has a genome of 700 bits. Similarly, if an organism with a 100-bit genome incorporates five other organisms with 100-bit genomes, the resulting genome has 600 bits. There is nothing to prevent either genome from growing even larger, either in theory or in practice. Dembski's law of conservation of information, which is really a law of conservation of complex specified information, can thus be rendered moot as regards an entire genome.
31. Even if the 500-bit limit had validity, then, it would have to be applied to individual genes or perhaps groups of genes rather than whole organisms - and then only if it can be shown that the bits in the genes in question mutated wholly independently of each other.
32. To see exactly what Dembski is doing, let us suppose that there are 2 manufacturers of jet engines and that they share the market equally. Then, in the absence of further information, we would assume that there is a 50 % chance that the engines of the 747 were made by Manufacturer A. Dembski, by contrast, would argue that Manufacturer A's engine has N parts that could have been bought from various subcontractors. He would assign a probability p(i) to each part and calculate the probability p = p(1) x p(2) x ... x p(N) that the engine exists in its present form. Since the engine has many parts, p is a very small number. Dembski would conclude that it is very unlikely that the 747 uses the engine of Manufacturer A. Indeed, he would think it extremely unlikely that the 747 has any engine at all.
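The flaw is easy to demonstrate; in this little sketch (my own, with made-up per-part probabilities) the "probability that this engine exists" collapses toward zero no matter how real the engine in front of you is:

def naive_engine_probability(n_parts, p_per_part=0.1):
    # Multiply an assumed probability for every part, as in the argument above.
    p = 1.0
    for _ in range(n_parts):
        p *= p_per_part
    return p

for n in (10, 100, 1000):
    print(n, "parts:", naive_engine_probability(n))
# Output: roughly 1e-10, then 1e-100, then an underflow to 0.0. The calculation
# "proves" that an engine sitting right in front of you is impossibly unlikely.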
33. Even if complex specified information were a valid concept, it would not apply to the entire genome but only to specific genes. It is impossible to distinguish whether a specific gene is subject to the 500-bit limit, because the calculation depends on the unknown history of the gene (whether it contains duplicated segments, for example). I can, therefore, see no practical difference between specified complexity and nonrandom complexity. In distinguishing between specified and nonrandom complexity, I mean to imply that the concept of complex specified information is meaningless unless Dembski can demonstrate that the bits in a given gene mutated independently of each other, throughout the entire history of that gene; otherwise, the 500-bit limit does not apply.
34. At the risk of adding to Dembski's already complex terminology, let us define aggregated complexity. A complex entity is aggregated if it consists of a number of subunits, no one of which demonstrates specified complexity. Aggregated complexity may exceed 500 bits yet not be specified in the way that Dembski means it. Thus, given a gene or a genome with more than 500 bits, how will Dembski demonstrate that the information in that gene is truly specified and not simply aggregated? How will he demonstrate that my far simpler analysis is incorrect? If he can do neither, then complex specified information is at best a meaningless innovation and at worst a smokescreen to hide a simple misapplication of information theory.
35. Reversing entropy. The definition of entropy in information theory is precisely the same as that in thermodynamics, apart from a multiplicative constant. Thus, Dembski's claim that you cannot increase information beyond a certain limit is equivalent to the claim that you cannot reverse thermodynamic entropy. That claim, which has long been exploited by creationists, is not correct. The correct statement is that you cannot reverse entropy in a closed or isolated system. A living creature (or a coin-tossing machine) is not a closed system. A living creature thrives by reversing entropy and can do so in part because it receives energy from outside itself. It increases the entropy of the universe as a whole as it discards its wastes. Dembski's information-theoretical argument amounts to just another creationist ploy to yoke science in support of a religious preconception."
http://www.pcts.org/journal/young2002a.html
And see also all of the information above regarding his "specified complexity."
I'm surprised you haven't considered how SETI thinks it can identify intelligence. Suppose you were searching for something that you could be fairly sure came from an intelligent agency: what exactly would you look for?
Has it found the intelligence that indicates an ID yet?