


Science/Tech

Title: A Universe Designed for Life (Chapter 10 of "Human Devolution - A Vedic Alternative To Darwin's Theory" by Michael Cremo)
Author: Michael Cremo
Published: Jan 7, 2012
Post Date: 2012-01-07 06:43:22 by wudidiz

A Universe Designed for Life

The universe itself appears designed for life. Certain fundamental constants of nature, certain ratios between the forces of nature, appear to be very finely tuned. If their numerical values were even slightly different, the universe as we know it would not exist. Stable atoms, stars, and galaxies could not form (Barrow and Tipler 1996, p. 20). And thus, life itself, as we know it, could not exist.

The values of the constants and ratios appear to be entirely arbitrary. In other words, as far as scientists today can tell, the values are not determined by any law of nature or property of matter. It is as if the values had been set by chance. But the odds against this are so staggering, in some cases trillions to one, that we are confronted with a genuine problem in cosmology, called the fine tuning problem.

One possible explanation is that the finely tuned values were set by a providential intelligence. In these times, this is the last thing most scientists would concede. One way to avoid the God conclusion is to suppose that there are innumerable universes. Therefore cosmologists have begun to favor theories that result in the production of such universes. They imagine that in each universe, the fundamental constants and ratios have, by chance, different values. And we just happen to live in the universe where all the values are properly adjusted for life to exist. We should not be surprised at this. After all, if the values of the constants and ratios were not just the way they are, then we would not be here to observe them. Another way to avoid the God conclusion is to find some as yet undiscovered physical explanation, a new theory of everything, such as superstring theory, that would yield the fine tuning we observe in this universe. All of the current discussions about the fine tuning problem take place within the general framework of the Big Bang cosmology.
Under the general heading of the Big Bang cosmology, there are dozens of Big Bang theories, almost as many as there are Big Bang cosmologists. It is not my purpose here to explore all the technicalities of these theories. I just want to give a general composite picture of what they involve. First, the universe emerges as a fluctuation of the quantum mechanical vacuum, which is compared to a sea of energy. In the case of multiverse theories, many universes emerge as fluctuations of the quantum mechanical vacuum. These universes in their beginning stages are immeasurably small, dense, and hot. Then they begin to expand rapidly for a short period of time. As they continue to expand, they are filled with a super hot plasma. Later, after more expansion and cooling, the super hot plasma condenses into subatomic particles, which later begin to condense into the gases hydrogen, helium, and deuterium. More exotic types of matter and energy, called dark matter and dark energy, are also produced. In regions of denser concentrations of dark matter and energy, the atomic gases condense into stars and galaxies. Within the superheated cores of these stars, heavier elements form. And when the stars finally explode into supernovas, still heavier elements form in the heat and shock. And after billions of years, we have the universe we observe. Its fate is not precisely known, but according to some cosmologists the universe will eventually contract into a black hole, and perhaps rebound again, emerging through a white hole.

The Big Bang theory grew out of observations that the universe appears to be expanding. In the 1920s, astronomer Edwin Hubble discovered that light coming to us from distant galaxies was shifted toward the red end of the spectrum. The more distant the galaxies, the larger the red shift. Toward the red end of the spectrum, light waves become longer. So the wavelengths of light coming from these galaxies had been stretched.
If the galaxies were moving away from us, that would explain the stretching of the light waves. Imagine a paddle boat in an otherwise still lake, with the paddle turning at a certain fixed rate. You are observing the boat from the shore. If the boat were anchored and held its position, the waves would come toward you at regular intervals. But if the boat started moving away from you, then the waves reaching you would come in further apart, even though the paddle on the boat was still turning at the same fixed rate. The wavelength of light coming from receding galaxies would get longer in the same way. And that is what scientists actually observe. The wavelengths of light are stretched.

Scientists also say that the Big Bang theory predicted the temperature of the cosmic microwave background radiation. The cosmic microwave background is the heat left over from the initial superheated expansion of the universe billions of years ago. Furthermore, scientists say that the Big Bang theory predicts the abundances of hydrogen, deuterium, and helium that we observe today in the universe.

There are many critics of the Big Bang theory. Among them is astronomer Tom Van Flandern, who has compiled a list of twenty principal problems with the Big Bang theory. These do have to be taken into account. To some, these problems suggest that the universe did not expand from an initial small state, as most cosmologists now believe. They suggest a steady state universe. To me, the current problems with the Big Bang theory suggest only that the expansion of the universe from a tiny seedlike form cannot be completely described without taking into account God and His powers. Aside from that, many elements of the Big Bang theory correspond to accounts of the origin of the universe found in the ancient Sanskrit writings of India. Here is a brief summary of the Vedic account of the origin of the universe, taken from the Shrimad Bhagavatam and the Brahma Samhita.
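The paddle-boat analogy above can be put in numbers. A minimal sketch, using illustrative (hypothetical) wavelength values rather than any figures from the text: the redshift z is the fractional stretch of a spectral line, and for small z the recession velocity is approximately c times z, after which Hubble's law gives a distance estimate.

```python
import math  # not strictly needed here; kept for consistency with later sketches

C = 299_792.458  # speed of light, km/s

def redshift(lam_emit_nm, lam_obs_nm):
    """Fractional stretch of the wavelength: z = (observed - emitted) / emitted."""
    return (lam_obs_nm - lam_emit_nm) / lam_emit_nm

def recession_velocity(z):
    """Low-redshift Doppler approximation: v ~= c * z (valid only for z << 1)."""
    return C * z

# Hypothetical example: a hydrogen-alpha line emitted at 656.3 nm
# is observed stretched to 662.9 nm.
z = redshift(656.3, 662.9)
v = recession_velocity(z)   # km/s
H0 = 70.0                   # assumed Hubble constant, km/s per megaparsec
d_mpc = v / H0              # Hubble's law: distance = velocity / H0
print(f"z = {z:.4f}, v = {v:.0f} km/s, d = {d_mpc:.0f} Mpc")
```

The farther the galaxy, the larger the stretch: this is exactly the distance-redshift relation Hubble observed.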
Beyond time and space as we know them, Maha Vishnu floats in cosmic slumber upon the waves of the Causal Ocean. From the pores of the Maha Vishnu emerge numberless universes in seedlike form. When Maha Vishnu glances upon these seedlike universes, energizing them with His potencies, they begin to expand in a flash of golden light. Within each universe, elements are gradually formed, beginning with the lighter ones and proceeding toward the heavier. While this is happening the celestial bodies are formed. And the universe continues to expand. The universes exist for the length of one breath of the Maha Vishnu. The universes come out from His body when He exhales and reenter His body when He inhales. The length of one breath is estimated to be 311 trillion years. Within this vast period of time, each universe continuously undergoes subcycles of manifestation and nonmanifestation lasting about 8.6 billion years each.

Both the Big Bang cosmology and the Vedic cosmology posit a sea of transcendental energy existing before the material manifestation of universes. Some cosmologists propose that universes expand from white holes and contract into black holes. White holes spit out universes, black holes eat them up. The Vedic version also proposes that universes expand from and contract into holes, the skin holes of Maha Vishnu. Both accounts propose that there is an initial period of rapid inflation. Both accounts propose an initial burst of light, or radiation. Both accounts propose that the universe goes on to expand. Both accounts involve many universes.

When the Big Bang theory was originally presented to my guru, Bhaktivedanta Swami Prabhupada, he was opposed to it. His disciples presented it to him as an explosion of an original lump of matter, with no involvement by God. Of course, a simple explosion like that could not produce the universe we observe. But that was not an accurate picture of what the Big Bang theory actually says.
When more realistic accounts of the theory were reported to him, he was more favorably inclined toward them. He accepted the principle of an expanding universe, as can be seen in a conversation with disciples that took place in Los Angeles, on December 6, 1973 (Conversations 1989, v. 6, pp. 228–229).

Bali mardana: Prabhupada, when the universes are emanated from the body of Maha-Vishnu, they begin to expand.

Prabhupada: Yes, yes.

Bali mardana: Is the universe still expanding?

Prabhupada: Yes. . . .

Karandhara: While the exhaling is going on, the universe is expanding . . .

Prabhupada: Yes.

Karandhara: In the inhaling, the universe is contracting.

Prabhupada: Yes.

And in his commentary on one of the verses of Shrimad Bhagavatam (3.29.43), he stated that “the total universal body is increasing.” The emergence of the universes from the body of the Maha Vishnu and their reentry into it is described in Brahma Samhita (5.48), which characterizes the Maha Vishnu as an expansion of God “into whom all the innumerable universes enter and from whom they come forth again simply by His breathing process.” Nevertheless, the Vedic expanding universe cosmology is distinct from the modern materialistic Big Bang cosmology in that the substance of the Vedic universe emerges from God, as one of His energies, and the further deployment of this energy is accomplished and controlled by God.

In the following discussion, I will show how careful examination of the modern Big Bang cosmology leads one toward the same conclusion. The argument takes this basic form: even if one assumes, as modern cosmologists do, that the origin and development of the universe are to be explained solely in terms of the interaction of various kinds of matter and physical forces, then one is led to the conclusion that the finely tuned nature of these interactions implies a cosmic intelligence behind them. Once that conclusion is reached, we may then have to go back and reevaluate the initial assumptions of the modern Big Bang cosmology, and we may be warranted in making substantial changes in those assumptions, so as to bring the Big Bang cosmology and the Vedic cosmology into sharper agreement.

In my discussion of the Big Bang cosmology, it may appear that I am accepting its assumptions as fundamentally correct. But I am simply saying that if, for the sake of argument, we accept the assumptions underlying the current state of Big Bang cosmology as correct, then certain conclusions follow. It would not be practical for me, however, to qualify each and every reference I make to Big Bang cosmology in this way.

The Anthropic Principle

Hundreds of years ago, most astronomers believed the earth was the center of the universe. Then the astronomer Copernicus introduced the idea that the earth rotated around the sun. Astronomers adopted what they called the Copernican Principle, the idea that the earth and its human inhabitants do not occupy any special place in the universe. But in the twentieth century astronomer Brandon Carter proposed that our position is to some extent special, returning to some degree to the previous view, which is also found in the Vedic cosmology. In order for humans to exist as observers, said Carter, we have to find ourselves in a certain position in a universe with certain characteristics. Carter (1974, p. 291) called this the anthropic principle. For one thing, according to current ideas about cosmology, the universe would have to be of a certain age in order for there to be human observers—about ten billion years old. According to current theories, it would take that long for successive generations of stars to convert helium and hydrogen into the heavier elements, such as carbon, one of the main ingredients of organic life. In an expanding universe, the size of the universe is related to its age. This means, say modern cosmologists, that we should expect a universe capable of supporting carbon-based human life to be at least ten billion light years in diameter, and our universe is of that size, according to currently accepted observations and calculations (Barrow and Tipler 1996, p. 3). I do not necessarily agree with the exact size and age estimates given for the universe by modern cosmologists and note that these are constantly changing. Some versions of the anthropic principle hold that human observers should not only expect to find themselves in a universe of a certain age and size, but also in a universe where the values of physical constants and ratios of natural forces are finely tuned to allow the very formation of that universe and the human life in it. 
It is this initial fine tuning which is of most interest to me. My discussion of the fine tuning problem will be based primarily on two main sources: the book Just Six Numbers by Sir Martin Rees, the astronomer royal of Great Britain, and the book The Anthropic Cosmological Principle by astronomer John D. Barrow and physicist Frank J. Tipler.

Fine Tuning

Physicist John Wheeler, known for his advocacy of the many worlds interpretation of quantum mechanics, wrote (Barrow and Tipler 1986, p. vii), “It is not only that man is adapted to the universe. The universe is adapted to man. Imagine a universe in which one or another of the fundamental . . . constants of physics is altered by a few percent one way or the other. Man could never come into being in such a universe. That is the central point of the anthropic principle. According to this principle, a life-giving factor lies at the center of the whole machinery and design of the world.” Let us now look at the numerical values associated with some of these fundamental constants and ratios of natural forces, and see exactly what would happen if each of them were changed only slightly.

The Large Number N and Gravity

According to modern cosmology, the size of the universe and the sizes of the objects and living things in it are related to the ratio between the force of electromagnetism and the force of gravity (Rees 2000, pp. 27–31). Atoms are composed of subatomic particles with different electric charges. Among these subatomic particles are electrons and protons. Electrons have negative charge and protons have positive charge. The electromagnetic attraction between the positive and negative charges of electrons and protons is one of the factors holding the atom together. The force of gravity also acts among the subatomic particles making up the atom. But the force of gravity is much weaker than the electromagnetic force. The ratio of gravity to electromagnetism is obtained by dividing the strength of the electromagnetic force by the strength of the gravitational force. The resulting number (N) is 10^36, which means that the gravitational force is 1,000,000,000,000,000,000,000,000,000,000,000,000 times weaker than the electromagnetic force. On the atomic scale, gravity does not have much of an effect. But on the larger scale, gravity does have a very noticeable effect, even though it is much weaker than electromagnetism. The positive and negative charges in atoms cancel each other out. This means that on the large scale we do not feel very much of an effect from electromagnetism (unless the electromagnetic charges in an object are aligned, as in a magnet or an electric current). But gravity is always attractive; it never cancels out. The more mass present in an object, the greater its gravity. So when there are large aggregates of atoms their masses add up, and the force of gravity increases proportionately. The combined force of the gravity in the mass of all the atoms in the earth holds us down on the surface of the earth. In fact, the force of gravity determines how big living things can be on a particular planet.
If the force of gravity were slightly stronger, the maximum size of living things would decrease. Let us imagine that N were 10^30 instead of 10^36. Then gravity would be “only” 1,000,000,000,000,000,000,000,000,000,000 times weaker than electromagnetism. As a result of this small change (just a few zeroes), the force of gravity on earth would be so heavy that no creatures larger than insects would be able to survive the pressure. And even these little insect-sized creatures would have to have massive legs. And that’s not all. Everything in the universe would be much smaller. For example, it would take a billion times fewer atoms to make a star.

According to current thinking, stars form when the gravitational force of the atoms in hydrogen and helium gas clouds causes the gas to condense. As the gas condenses it becomes heated, and when the gas becomes dense enough and hot enough, it triggers fusion reactions. The heat of the fusion reactions pushes the star’s material outwards, but the force of gravity holds it in. The balance between the outward expansion and inward contraction causes stars to assume a particular size. The star has to be big enough to have enough gas molecules to collapse into a core with enough pressure to start the atomic fusion process. And the star has to retain enough mass to keep the heat generated by that fusion process from forcing all the materials of the star out into empty space. In general, stars normally have to be quite big. If, however, the force of gravity were greater, it would take fewer atoms to start the fusion process and fewer atoms to overcome the outward expansion. If N were 10^30 instead of 10^36, it would take a billion times fewer atoms to overcome the force of outward expansion. This means that stars would be much smaller. They would also burn their nuclear fuel much more quickly. A fire with a little bit of fuel is normally going to go out more quickly than a fire with a large amount of fuel. According to Rees (2000, p. 31), the average lifetime of a star would be ten thousand years rather than ten billion years. That would have quite a negative impact on the possibility of biological evolution of the kind scientists now imagine. Galaxies would also be much smaller, and would be more densely packed with stars. The dense packing of stars would interfere with the orbits of planets circling those stars. And we have to remember that the presence of life depends on stable orbits for planets. If our own planet’s orbit were not stable, the extremes of temperature on earth would be too great for life as we know it to survive. Why does N have the precise value it does? Rees (2000, p. 31) says, “We have no theory that tells us the value of N. All we know is that nothing as complex as humankind could have emerged if N were much less than 1,000,000,000,000,000,000,000,000,000,000,000,000.”
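The value of N can be recomputed directly from standard physical constants. A minimal sketch, taking N as Rees defines it (the ratio of the electric to the gravitational force between two protons) and using CODATA-style constant values:

```python
import math

E_CHARGE = 1.602176634e-19   # proton charge, coulombs
EPS0     = 8.8541878128e-12  # vacuum permittivity, F/m
G        = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
M_PROTON = 1.67262192e-27    # proton mass, kg

# Both forces fall off as 1/r^2, so their ratio is independent of distance:
#   N = (e^2 / (4 * pi * eps0)) / (G * m_p^2)
N = E_CHARGE**2 / (4 * math.pi * EPS0 * G * M_PROTON**2)
print(f"N = {N:.2e}")   # about 1.2e36, i.e. roughly 10^36 as the text states
```

The distance-independence is why a single dimensionless number can characterize the relative strength of the two forces at every scale.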

The Binding Energy µ

The binding energy µ is another cosmic number that greatly influences the characteristics of our universe (Rees 2000, pp. 43–49). It determines how atoms are formed, and how nuclear reactions take place. Of course, this is also very important for the existence of life forms. Atoms of different elements have different binding energies. For us, the most important is the binding energy of helium. According to today’s astrophysicists, the first generation of stars converts hydrogen into helium by fusion. The nucleus of a hydrogen atom contains one proton. The nucleus of deuterium, an isotope of hydrogen, contains one proton and one neutron. When two deuterium atoms fuse, they form an atom of helium, with two protons and two neutrons. The nucleus of the helium atom has a mass equivalent to .993 (99.3 percent) of the mass of the two protons and two neutrons it contains. In the process of fusion, .007 (0.7 percent) of the mass is converted into energy, mostly heat. This number .007 is µ, the binding energy of atomic nuclei. It is related to the strong nuclear force, which keeps the protons in the atom together. Rees (2000, p. 48) says, “The amount of energy released when simple atoms undergo nuclear fusion depends on the strength of the force that ‘glues’ together the ingredients in an atomic nucleus.” The greater the binding energy, the greater the strength of the strong nuclear force. The protons in the nucleus have positive charge, and normally positive charges will repel each other, thus blowing the atom apart. But the strong nuclear force is just strong enough to overcome this repulsion, and holds the protons together in the nucleus. We do not feel this force, because it operates only within the nucleus of the atom. If the value of µ were even slightly different, there would be major effects on atomic structure. If, for example, the value of µ were .006 instead of .007, this would mean that the strong nuclear force was slightly weaker than it is now. 
But this would be enough to interrupt the formation of elements heavier than hydrogen. Heavier elements are formed by adding protons to the nuclei of atoms. Hydrogen, with one proton, is the lightest element. Iron has 26 protons. But to get to iron and the heavier elements, we first have to go from hydrogen to helium. The helium nucleus usually contains two protons and two neutrons, while the simple hydrogen nucleus consists of just one proton. So to go from hydrogen to helium requires a middle step, the conversion of hydrogen to its isotope deuterium, which consists of one proton and one neutron. Then two deuterium nuclei can fuse to form a helium nucleus, with two protons and two neutrons. The strength of the nuclear binding force between the protons and neutrons in the helium nucleus causes the release of part of their mass as energy, the binding energy. Now if this binding energy were .006 of the total mass of the protons and neutrons, instead of .007, the strong nuclear force would be weaker. It would be just weak enough so that a neutron could not bind to a proton. Deuterium nuclei could not form, and therefore helium nuclei could not form. The hydrogen atoms would still condense into heavy masses, and these masses would heat up. But there would be no fusion reactions to keep the star going. No other elements would be formed. There would be no planets and no life as we know it. What if µ were .008 instead of .007, indicating that the strong nuclear force was slightly stronger than it is today? That would lead to a problem of another kind in the process of element formation. As we have seen, the strong nuclear force is needed to bind protons together. Today, the strong force is not strong enough to bind just two protons together. A combination of two protons is called a diproton. There are no stable diprotons in the universe today.
This is because the repulsion between the two positively charged protons is stronger than the binding energy of the strong nuclear force. But the binding energy, at its current value of .007, is strong enough to cause a proton to bind to a neutron, thus forming deuterium. And then two deuterium atoms can combine to form helium. This happens because the neutrons supply the extra binding energy needed to bring the two protons together. Because the neutrons are neutral in electric charge, they do not add any additional force of repulsion. Now if µ were .008, then two protons could join together, forming a diproton, an isotope of helium with two protons and no neutrons. This means that all of the hydrogen atoms (each with one proton) in the early universe would quickly combine into diprotons. Today, only some of the hydrogen atoms form deuterium and normal helium, over long periods of time. This leaves hydrogen in the universe for the formation of hydrogen compounds necessary for life. Barrow and Tipler (1996, p. 322) put it like this: “If the strong interaction were a little stronger, the diproton would be a stable bound state with catastrophic consequences—all the hydrogen in the Universe would have been burnt to He2 during the early stages of the Big Bang and no hydrogen compounds or long-lived stable stars would exist today. If the diproton existed we would not!” The most important hydrogen compound is water, and in a universe in which µ were .008, there would be no water. The stable stars would not exist because they require hydrogen for fuel and there would be no hydrogen. Going from helium to carbon also requires some fine tuning (Barrow and Tipler 1996, pp. 250–253). According to cosmologists, the first generations of stars burn hydrogen nuclei by a fusion process that yields helium nuclei. Eventually, the star runs out of hydrogen, and the helium core of the star begins to become denser.
The condensation raises the temperature of the star to the point where helium begins to fuse into carbon. A helium nucleus has two protons. A carbon nucleus has six protons. Theoretically, three helium nuclei could fuse to form a carbon nucleus. But in practice this does not happen, because it is not very likely that three helium nuclei could collide at the same instant in just the way necessary to produce a carbon nucleus. Instead, there is a two step process. First two helium nuclei combine to form a beryllium nucleus, with four protons. Then a beryllium nucleus combines with another helium nucleus to form carbon. The problem is that the beryllium nuclei are unstable and rather quickly break back down into helium nuclei. Therefore, physicists would expect that very little carbon would be produced, certainly not the amounts of carbon present in the universe. But then the English astronomer Fred Hoyle showed that the carbon nucleus just happens to have a particular resonant energy level that lies just above the combined energy levels of beryllium and helium. The additional energy supplied to beryllium and helium by the heat of the stellar core brings the beryllium and helium nuclei up to this level, enabling them to combine into carbon nuclei much more rapidly than might otherwise be expected. It is possible that all of the carbon produced in this way could have been immediately converted into oxygen, if the carbon nuclei combined with helium nuclei. But the oxygen nucleus has a resonant energy level that is below the combined energies of carbon and helium. This lucky circumstance means that the fusion reaction between carbon and helium becomes less likely. And therefore we have enough carbon for carbon-based life forms. Rees (2000, p. 50) noted: “This seeming ‘accident’ of nuclear physics allows carbon to be built up, but no similar effect enhances the next stage in the process, whereby carbon captures another helium nucleus and turns into oxygen.
The crucial ‘resonance’ is very sensitive to the nuclear force. Even a shift by four per cent would severely deplete the amount of carbon that could be made. Hoyle therefore argued that our existence would have been jeopardized by even a few percentage points’ change in µ.” Commenting on the finely tuned resonances that enabled the production of heavy elements in the stellar interior, Hoyle said, “I do not believe that any scientist who examines the evidence would fail to draw the inference that the laws of physics have been deliberately designed with regard to the consequences they produce inside the stars” (Barrow and Tipler 1996, p. 22).
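The 0.007 figure discussed above can be checked against tabulated atomic masses. A minimal sketch, using the standard net conversion of four hydrogen-1 atoms into one helium-4 atom (the deuterium step described in the text is an intermediate stage of this net reaction) and standard atomic masses in unified atomic mass units:

```python
M_H1  = 1.00782503   # atomic mass of hydrogen-1, u
M_HE4 = 4.00260325   # atomic mass of helium-4, u

mass_in   = 4 * M_H1             # mass going into the fusion reaction
mass_lost = mass_in - M_HE4      # mass that disappears in the process
mu = mass_lost / mass_in         # fraction of mass converted to energy
print(f"mu = {mu:.4f}")          # about 0.007, as in the text

# The released energy per kilogram of hydrogen fused, via E = m c^2:
C = 2.99792458e8                 # speed of light, m/s
energy_per_kg = mu * 1.0 * C**2  # joules per kg of hydrogen
print(f"about {energy_per_kg:.1e} J per kg of hydrogen")
```

The tiny fractional mass loss, multiplied by c squared, is what makes stellar fusion such a potent energy source.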

Ω (omega) and the Cosmic Balance of Forces

According to modern cosmologists, the expanding universe in its very beginnings had three possible fates. (1) The force of gravity could have overwhelmed the force of expansion, and the universe could have rapidly collapsed back on itself, before any stars and galaxies could have formed. (2) The force of expansion could have overwhelmed the force of gravity, so that the universe would have expanded too rapidly for stars and galaxies to form. (3) The forces of gravity and expansion could have been adjusted very carefully so that the universe expanded at just the right speed for stars and galaxies to form and persist over billions of years. The fate of the universe therefore depends upon a critical average density of matter. According to cosmologists, the critical density is 5 atoms per cubic meter. If the density is more than 5 atoms per cubic meter, gravity will be strong enough to cause the universe to collapse. If the density is much less than 5 atoms per cubic meter, the universe will expand too rapidly for stars and galaxies to form. The cosmic number omega is the ratio between the actual density and the critical density (Rees 2000, pp. 72–90). If the actual density and the critical density are equal, then the ratio is 1, and hence Ω (omega) = 1. This allows a slowly expanding universe in which stars and galaxies can form, as is the case with our universe. But in our universe, the actual density of visible matter is far less than the critical density. If all visible matter, in the form of stars, galaxies and gas clouds, is taken into account, the actual density is only .04 of the critical density. But observations of the movement of the visible matter have convinced scientists that there must exist another form of matter in the universe, called dark matter. For example, spiral galaxies are shaped like rotating pinwheels, with two or more curving “arms” of stars streaming from a bright central core.
When astronomers look at spiral galaxies, they see that the galaxies do not contain enough ordinary visible matter to keep the arms curving as closely as they do toward the centers of such galaxies. According to the current laws of gravity, the arms should be less curved. For the galaxies to maintain their observed shapes, they should have ten times more matter than they visibly have. This means there is some “missing matter.” What form does it take? Some astrophysicists suggest the dark matter may be made of neutrinos, strange particles generated during the Big Bang with very small mass, or myriads of black holes of extremely great mass. “It’s embarrassing,” said Rees (2000, p. 82), “that more than ninety per cent of the universe remains unaccounted for—even worse when we realize that the dark matter could be made up of entities with masses ranging from 10^-33 grams (neutrinos) up to 10^39 grams (heavy black holes), an uncertainty of more than seventy powers of ten.” When the dark matter is added to the visible matter, the actual density of matter in the universe becomes about .30 of the critical density. For this to be the situation now, after billions of years of expansion, the ratio of the actual density of matter in the universe to the critical density had to be extremely close to unity (i.e., one to one). Rees (2000, p. 88) stated, “Our universe was initiated with a very finely-tuned impetus, almost exactly enough to balance the decelerating tendency of gravity. It’s like sitting at the bottom of a well and throwing a stone up so that it just comes to a halt exactly at the top—the required precision is astonishing: at one second after the Big Bang, Ω cannot have differed from unity by more than one part in a million billion (one in 10^15) in order that the universe should now, after ten billion years, be still expanding and with a value of Ω that has certainly not departed wildly from unity.”
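The "about 5 atoms per cubic meter" figure for the critical density can be reproduced with the standard formula rho_crit = 3 H0^2 / (8 pi G). A minimal sketch, assuming a Hubble constant of 70 km/s/Mpc (the inferred atom count shifts somewhat with the adopted value of H0):

```python
import math

G   = 6.67430e-11   # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22     # meters per megaparsec
M_H = 1.6735e-27    # mass of a hydrogen atom, kg

H0 = 70e3 / MPC     # assumed Hubble constant, converted to SI units (1/s)

# Critical density: the average density at which the expansion and the
# decelerating pull of gravity exactly balance.
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3
atoms_per_m3 = rho_crit / M_H              # expressed as hydrogen atoms per m^3
print(f"rho_crit = {rho_crit:.2e} kg/m^3 = {atoms_per_m3:.1f} H atoms/m^3")
```

With this H0 the result lands between 5 and 6 hydrogen atoms per cubic meter, matching the round figure quoted in the text.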

Λ (lambda): levity in addition to Gravity?

If gravity were the only force operating in connection with the expansion of the universe, then astronomers should detect that the rate of expansion is decreasing. Gravity should be slowing down the rate at which all the material objects in the universe are moving away from each other. In short, we should observe deceleration of the expansion. The force of gravity depends on the total density of matter. The more density, the more gravity. The more gravity, the more deceleration. Depending on the exact density of matter in the universe, the rate of deceleration could be faster or slower. But there should be some deceleration, as the force of gravity counteracts the expansion. Instead, scientists have noted an apparent acceleration in the rate of expansion. This was somewhat unexpected, as it indicates that in addition to gravity there may be another fundamental natural force that is repulsive, rather than attractive. In other words, there may be antigravity in addition to gravity. The antigravity force was discovered by scientists who were hoping to find the total amount of dark matter in the universe (Rees 2000, pp. 91–95). The visible matter in the universe contributes only 0.04 of the critical density. The critical density is the exact amount of matter necessary for a Big Bang expanding universe to exist for long periods of time with relatively stable stars and galaxies. There must be enough matter to slow the rate of expansion so that all the matter in the universe does not quickly disperse into a featureless gas. But there must not be so much matter as to thoroughly overcome the expansion, causing the universe to quickly recollapse into a black hole. Because the visible matter in the universe is distributed in ways not possible according to the laws of gravity, scientists have inferred the existence of clumps of dark matter, which although invisible possess gravitational force.
Taking into account the gravitational force of these clumps of invisible dark matter allows cosmologists to explain the distribution of visible matter. But when the clumped dark matter is added to the visible matter, the total amount of matter is still only 0.30 of the critical density. Some scientists have proposed that the present state of our universe would most easily be explained if the actual density of matter in the universe very closely approached the critical density, so that their ratio (Ω) was one to one (Ω = 1). But that would require that there be some more dark matter in the universe. Therefore, some scientists have proposed that there might be large amounts of extra dark matter evenly distributed throughout the universe. Unlike the clumped dark matter, this evenly distributed dark matter would not exert noticeable gravitational force on individual galaxies. And it would therefore not show its influence in the form of anomalies in the distribution of matter in and among galaxies. However, the evenly distributed dark matter might be slowing down the overall expansion of the universe. To test their ideas, scientists measured the red shifts of a particular type of supernova: “A distinctive type of supernovae, technically known as a ‘Type 1a’, signals a sudden nuclear explosion in the center of a dying star, when its burnt-out core gets above a particular threshold of mass and becomes unstable,” stated Rees (2000, p. 93). “It is, in effect, a nuclear bomb with a standard yield. . . . What is important is that Type 1a supernovae can be regarded as ‘standard candles’, bright enough to be detected at great distances. From how bright they appear, it should be possible to infer reliable distances, and thereby (by measuring the red shift as well) to relate the expansion speed and distance at a past epoch.
Cosmologists hoped that such measurements would distinguish between a small slowdown-rate (expected if the dark matter has all been accounted for) or the larger rate expected if—as many theorists suspected—there was enough extra dark matter to make up the full ‘critical density.’” Two groups of researchers were surprised to find that their measurements of these supernova red shifts showed no deceleration effect at all. Instead, their measurements showed the rate of the expansion of the universe was actually increasing. This meant two things. First, there was not any significant amount of extra dark matter. Second, in order to explain the increase in the rate of the universe’s expansion, scientists had to propose a kind of antigravitational force. The idea of an antigravitational force goes back to Einstein. In the 1920s, Einstein was working on the assumption that the universe was static. But his equations would not allow a universe to exist in a static state. The attractive force of gravity would cause all the matter in the universe to contract. To balance this attractive force, Einstein added to his equations a “cosmological constant,” called Λ (lambda). When cosmologists accepted an expanding universe, they lost interest in the idea of a cosmological constant tied to equations describing a static universe. But now it turns out that the expanding universe model itself appears to require Λ. What exactly does Λ measure? It does not measure the force of any kind of light or dark matter. Cosmologists have been reduced to proposing that Λ “measures the energy content of empty space” (Rees 2000, p. 154). The current measured value of Λ appears to be quite special. “A higher-valued Λ would have overwhelmed gravity earlier on, during the higher-density stages,” stated Rees (2000, p. 99).
“If Λ started to dominate before galaxies had condensed out from the expanding universe, or if it provided a repulsion strong enough to disrupt them, then there would be no galaxies. Our existence requires that Λ should not have been too large.”
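
The bookkeeping in this section can be tallied in a few lines. If matter, visible plus dark, supplies only about 0.30 of the critical density, and the total is nevertheless taken to be at or near Ω = 1, then the Λ term must supply the remaining fraction. The 0.26 dark-matter figure and the resulting 0.70 for Λ are back-of-envelope values implied by the text's round numbers, not measurements quoted from it:

```python
# Back-of-envelope cosmic energy budget implied by the figures in the text.
# The individual fractions are illustrative round numbers.
visible_matter = 0.04       # fraction of critical density in visible matter
clumped_dark_matter = 0.26  # inferred from galaxy dynamics (0.30 total matter)

total_matter = visible_matter + clumped_dark_matter

# If the universe is at critical density overall (Omega = 1), the balance
# must come from the "energy of empty space" measured by lambda.
lambda_fraction = 1.0 - total_matter

print(f"matter: {total_matter:.2f} of critical density")
print(f"lambda: {lambda_fraction:.2f} of critical density")
```

Note how the supernova result reframed the problem: instead of hunting for extra dark matter to close the 0.70 gap, cosmologists assigned that share to Λ.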

Q

According to the Big Bang cosmology, our universe started out as a small, dense, globular mass of extremely hot gas. As it expanded, it became cooler. If the globe of gas had been perfectly smooth, then as the expansion continued the atoms of gas would have distributed themselves evenly in space. In order for matter to have organized into structures like stars, galaxies, and clusters of galaxies, there had to have been some variations in the smoothness of the original globular cloud of gas. Some regions had to have been slightly denser than others. In these slightly denser regions, the atoms became attracted to each other by the force of gravity, eventually becoming stars and galaxies. Rees (2000, p. 106) explains the measure of this force: “The most conspicuous structures in the cosmos—stars, galaxies, and clusters of galaxies—are all held together by gravity. We can express how tightly they are bound together—or, equivalently, how much energy would be needed to break up and disperse them—as a proportion of their total ‘rest-mass energy’ (mc²). For the biggest structures in our universe—clusters and superclusters—the answer is about one part in a hundred thousand. This is a pure number—a ratio of two energies—and we call it Q.” In other words, it would not take very much energy to overcome the force of gravity holding galaxies and clusters of galaxies together.

Q is necessarily related to the original density variations in the fireball of the early stages of the Big Bang. If there were no density variations at all, then the matter in the universe would have expanded completely evenly, so that there would have been no clumping of matter in the denser regions. So according to the present value of Q (one in a hundred thousand, i.e. 10⁻⁵), the initial variations in the energy of the Big Bang universe were no greater than one hundred thousandth of its radius. Scientists plan to confirm this with space satellites that can very accurately measure minute variations in the cosmic microwave background radiation, which scientists take to be the remnants of the original Big Bang fireball. It turns out that Q’s present value (10⁻⁵) is just about the only one that allows for the kind of universe in which there can be stable stars and planets on which life as we know it could exist. What if Q were smaller than 10⁻⁵? Rees (2000, p. 115) said “the resulting galaxies would be anaemic structures, in which star formation would be slow and inefficient, and ‘processed’ material would be blown out of the galaxy rather than being recycled into new stars that could form planetary systems.” If Q were still smaller (smaller than 10⁻⁶), then “gas would never condense into gravitationally bound structures at all, and such a universe would remain forever dark and featureless” (Rees 2000, p. 115). But what would happen if Q were much greater than 10⁻⁵? Rees (2000, p. 115) said in such a universe most matter would quickly collapse into huge black holes and any remaining stars “would be packed too close together and buffeted too frequently to retain stable planetary systems.” So although the current value of Q is critical for our existence, there is no particular reason why Q has that value. As Rees (2000, pp. 113–114) put it, “The way Q is determined . . .
is still perplexing.” I do not want to leave the impression that there are no problems with the general scenario of galaxy formation implicit in this discussion. Although scientists do believe that stars and galaxies form more or less automatically according to physical laws during the condensation of gas clouds in space, they have not been able to accurately model the process on computers. Rees (2000, p. 110) noted that “nobody has yet performed a simulation that starts with a single cloud and ends up with a population of stars.” In other words, the evidence for the fine tuning of constants combined with the inability of scientists to accurately model the process of star and galaxy formation may lead us to the conclusion that more is required than matter acting according to certain laws. The overall active intervention of a supreme being may also be required. In other words, God is not necessary just to fill in the gaps, but as an overall enabling and coordinating factor.
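
Rees's definition of Q can be illustrated with a toy calculation. For a self-gravitating structure of mass M and radius R, the binding energy is of order GM²/R, so Q is roughly GM/(Rc²). The supercluster mass and radius below are hypothetical round values, chosen only to show that plausible large-scale numbers land near Rees's one part in a hundred thousand:

```python
# Toy estimate of Rees's Q: gravitational binding energy as a fraction
# of rest-mass energy, Q ~ (G * M**2 / R) / (M * c**2) = G * M / (R * c**2).

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8             # speed of light, m/s
SOLAR_MASS = 1.989e30   # kg
MPC = 3.086e22          # one megaparsec in meters

# Hypothetical round figures for a supercluster-scale structure
mass = 1e16 * SOLAR_MASS    # ~10^16 solar masses
radius = 30 * MPC           # ~30 megaparsecs

Q = G * mass / (radius * C**2)
print(f"Q ~ {Q:.1e}")   # same order of magnitude as Rees's 10^-5
```

Because both energies scale with the mass, M cancels out of the ratio, which is why Q ends up as a pure number independent of the units chosen.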

D: the number of Dimensions

The number of spatial dimensions, D, determines important features of our universe. For our universe D is three. If D were two or four or some other number, life as we know it could not exist. In our universe gravity and electricity obey the inverse square law. If you move an object twice as far away from you as it is now, the force of its gravity upon you will be only one quarter of what it was. Four is the square of two (2 × 2), and one quarter is the inverse square of two (1/(2 × 2)). If the object is moved four times as far away, its gravitational force becomes one sixteenth of what it was, one sixteenth being the inverse square of four. In a four-dimensional world, gravity would follow an inverse cube law instead of an inverse square law. This would have a devastating effect, according to Rees (2000, p. 135): “An orbiting planet that was slowed down—even slightly—would then plunge ever-faster into the Sun, rather than merely shift into a slightly smaller orbit, because an inverse-cube force strengthens so steeply towards the center; conversely, an orbiting planet that was slightly speeded up would quickly spiral outwards into darkness.” Only an inverse square law of gravity allows for stable orbits of planets. The same is true for orbits of electrons. If gravity and electromagnetism operated according to anything other than an inverse square law, there would be no stable atoms (Rees 2000, p. 136; Barrow and Tipler 1996, pp. 265–266). If there were only two dimensions, it would be difficult for a functioning brain to exist. Barrow and Tipler (1996, p. 266), citing the work of Whitrow (1959), said, “He argues that if the spatial structure were of dimension two or less then nerve cells (or their analogues) would have to intersect when superimposed and a severe limitation on information processing of any complexity would result.” It also appears that reliable electromagnetic signaling (of the kind we use in radios, televisions, computers, and telephone systems, as well as in biological neural systems) is possible only in a three-dimensional universe. Barrow and Tipler (1996, p. 268) explained, “In two-dimensional spaces wave signals emitted at different times can be received simultaneously: signal reverberation occurs. It is impossible to transmit sharply defined signals in two dimensions.” Reliable transmission requires not only that waves travel without reverberation but also without distortion. Barrow and Tipler went on to say, “Three-dimensional worlds allow spherical waves . . . to propagate in distortionless fashion. . . . Only three-dimensional worlds appear to possess the ‘nice’ properties necessary for the transmission of high-fidelity signals because of the simultaneous realization of sharp and distortionless propagation. . . . If living systems require high-fidelity wave propagation for their existence to be possible, then we could not expect to observe the world to possess other than three spatial dimensions.” Also, the gravity waves of Einstein’s general theory of relativity could only propagate in a universe with three spatial dimensions and one time dimension (Barrow and Tipler 1996, p. 273). A modern cosmological theory, string theory, relies on a universe with ten spatial dimensions and one time dimension; however, all but three of the spatial dimensions are compacted on the microscopic level and have no visible effect on wave propagation (Barrow and Tipler 1996, pp. 274–275).
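
The inverse-square arithmetic above can be checked directly, and compared with the steeper inverse-cube law that would hold with four spatial dimensions. The generalization used here, a force falling off as 1/r^(D−1) in D spatial dimensions, is the standard one; the function name is my own:

```python
def force_ratio(distance_factor: float, spatial_dims: int = 3) -> float:
    """Relative force after moving an object distance_factor times
    farther away, assuming a 1/r^(D-1) law in D spatial dimensions."""
    return distance_factor ** -(spatial_dims - 1)

# Three dimensions: the inverse square law described in the text
assert force_ratio(2) == 0.25       # twice as far away -> one quarter
assert force_ratio(4) == 0.0625     # four times as far -> one sixteenth

# Four dimensions: an inverse cube law, which weakens much faster
# with distance (and strengthens much faster toward the center)
print(force_ratio(2, spatial_dims=4))   # one eighth instead of one quarter
```

The steeper falloff is exactly what makes orbits unstable in Rees's four-dimensional scenario: a small inward drift meets a disproportionately stronger pull.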

How to explain the Fine tuning

The existence of a universe in which human life as we know it is possible depends on the fine tuning of several constants and ratios of physical forces. How did this fine tuning come about? Today, theorists recognize three main possibilities. First, the fine tuning in the universe of our experience might be determined by an as yet undiscovered physical law. Second, it could be that our universe is only one of an infinite number of universes, each with different values for the constants and ratios, and we just happen to be living in the one with the values that will allow life to arise. Third, the fine tuning could be the result of providential design. Let us now consider each of these possibilities, beginning with physical determination of the fundamental constants and ratios. In modern cosmology, some theorists propose that the fine tuning of universal constants and ratios of fundamental forces of nature will eventually be predicted by a grand unified theory of everything. At the present moment the biggest obstacle to a theory of everything is the unification of quantum mechanics and Einstein’s general relativity theory. Quantum mechanics does very well in explaining the world of atoms and subatomic particles, where the main forces are electromagnetism, the atomic weak force, and the atomic strong force. Relativity theory does very well in explaining the action of gravity on the larger scale of the universe. At present no theory has successfully integrated both quantum mechanics and relativity theory, and this unification is especially necessary to explain the very early history of the Big Bang universe, when all the forces of nature were unified. One theory that promises to unify gravity with the other three fundamental forces is superstring theory. According to superstring theory, the basic units of matter are very tiny circular “strings” of energy. The various subatomic particles are strings vibrating at different frequencies in ten dimensional space. 
Superstring theorists claim that many of the fine tunings of fundamental constants and ratios of natural forces could be directly derived from the theory. But at the present moment there is no physical verification of superstring theory. “Strings” are many orders of magnitude smaller than the smallest subatomic particles visible in the biggest particle colliders. Rees (2000, p. 145) calls attention to the “unbridged gap between the intricate complexity of ten-dimensional string theory and any phenomena that we can observe or measure.” Until some kind of verification can be obtained, superstring theory remains in the realm of speculation and cannot be called upon to resolve the fine tuning problem. In the absence of a physical theory that determines the finely tuned fundamental constants observed in our universe, one can consider the possibility that some intelligent designer adjusted the constants. A good many cosmologists would rather not have it come down to this, so they appeal to the existence of innumerable other universes, in which the constants vary randomly. Among these universes is ours. There are various ways to get many universes. One proposal is that the Big Bang is cyclical. A Big Bang universe ends in a “Big Crunch,” compacting itself into a singularity, a point of unlimited density, and then bounces back into existence in another Big Bang. And the process repeats itself endlessly, with each universe having a different set of fundamental constants. But Barrow and Tipler (1996, pp. 248–249) stated: “Only in those cycles in which the ‘deal’ is right will observers evolve. . . . The problem with this idea is that it is far from being testable. . . . Also, if the permutation at each singularity extends to the constants of Nature, why not to the space-time topology and curvature as well? And if this were the case, sooner or later the geometry would be exchanged for a noncompact structure bound to expand for all future time.
No future singularity would ensue and the constants of Nature would remain forever invariant. . . . However, why should this final permutation of the constants and topology just happen to be the one which allows the evolution of observers!” For our purposes, the main point is that the cyclic universe idea is an untestable speculation motivated by the desire to avoid the idea that God finely tuned the fundamental constants and ratios we observe in our universe. One of the main interpretations of quantum mechanics also assumes many universes. Quantum mechanics involves transforming the deterministic equations of ordinary physics to yield a wave function specifying a range of statistical probabilities. The situation we observe in the universe of our experience represents only one of these statistical probabilities. According to the “many worlds” interpretation of quantum mechanics, the other possibilities are simultaneously realized in separate universes. Yet another way to introduce many universes is to propose that just after the initial Big Bang many regions of the universe had their own Mini Bangs and moved away from each other so quickly that light signals were no longer able to pass between them. Thus isolated from each other, these noncommunicating regions are in effect separate simultaneously existing universes. No matter how they get many universes, cosmologists concerned with the fine tuning question go on to propose that in each of these universes the fundamental constants are adjusted differently, by chance. And we just happen to find ourselves in the universe where all these constants are adjusted so as to allow the presence of stable stars, planets, atoms, and the development of life forms. Among modern cosmologists, Rees (2000, p. 4), for example, favors this many universe explanation. But he himself has admitted it is “speculative” (Rees 2000, p. 11). 
There is no way of demonstrating by the methods of modern materialistic science that these many alternative universes actually exist. And even if it could be shown they existed, one would have to further show that in each of them the fundamental constants varied randomly. According to the Vedic cosmology, alternative material universes do exist. An unlimited number of them emanate from Maha Vishnu. But in each one of them there is life, according to the Vedic accounts, indicating that in each universe the fundamental constants of nature would show the appropriate fine tuning. In short, the hypothesis of many universes does not in itself provide an escape from the fine tuning problem or rule out providential design. In the absence of a physical theory that yields the observed values of the fundamental constants, and in the absence of experimental evidence for a multiplicity of universes with randomly varying fundamental constants, the fine tuning of physical constants that we observe in our own universe, the only one that we can observe, points directly to providential design. In essence, all the attempts by modern cosmologists to come up with alternative explanations are motivated by the desire to avoid the default conclusion that God is responsible for the fine tuning.




#6. To: wudidiz (#0)

Some light afternoon reading huh?

intotheabyss  posted on  2012-01-09   16:08:34 ET  Reply   Untrace   Trace   Private Reply  



