Sunday, September 13, 2009

The Brain

The brain has three main parts: the cerebrum, the cerebellum, and the brain stem. The brain is divided into regions that control specific functions.

THE CEREBRUM:
Frontal Lobe

* Behavior
* Abstract thought processes
* Problem solving
* Attention
* Creative thought
* Some emotion
* Intellect
* Reflection
* Judgment
* Initiative
* Inhibition
* Coordination of movements
* Generalized and mass movements
* Some eye movements
* Sense of smell
* Muscle movements
* Skilled movements
* Some motor skills
* Physical reaction
* Libido (sexual urges)

Occipital Lobe

* Vision
* Reading

Parietal Lobe

* Sense of touch (tactile sensation)
* Appreciation of form through touch (stereognosis)
* Response to internal stimuli (proprioception)
* Sensory combination and comprehension
* Some language and reading functions
* Some visual functions

Temporal Lobe

* Auditory memories
* Some hearing
* Visual memories
* Some vision pathways
* Other memory
* Music
* Fear
* Some language
* Some speech
* Some behavior and emotions
* Sense of identity

Right Hemisphere (the representational hemisphere)

* The right hemisphere controls the left side of the body
* Temporal and spatial relationships
* Analyzing nonverbal information
* Communicating emotion

Left Hemisphere (the categorical hemisphere)

* The left hemisphere controls the right side of the body
* Produce and understand language

Corpus Callosum

* Communication between the left and right side of the brain

THE CEREBELLUM

* Balance
* Posture
* Cardiac, respiratory, and vasomotor centers

THE BRAIN STEM

* Motor and sensory pathway to body and face
* Vital centers: cardiac, respiratory, vasomotor

Hypothalamus

* Moods and motivation
* Sexual maturation
* Temperature regulation
* Hormonal body processes

Optic Chiasm

* Vision and the optic nerve

Pituitary Gland

* Hormonal body processes
* Physical maturation
* Growth (height and form)
* Sexual maturation
* Sexual functioning

Spinal Cord

* Conduit and source of sensation and movement

Pineal Body

* Unknown

Ventricles and Cerebral Aqueduct

* Contains the cerebrospinal fluid that bathes the brain and spinal cord

excerpt borrowed from http://www.enchantedlearning.com/subjects/anatomy/brain/Structure.shtml

Thursday, September 10, 2009

The Structure of the Brain

The nervous system is your body's decision and communication center. The central nervous system (CNS) is made of the brain and the spinal cord, and the peripheral nervous system (PNS) is made of nerves. Together they control every part of your daily life, from breathing and blinking to helping you memorize facts for a test. Nerves reach from your brain to your face, ears, eyes, nose, and spinal cord... and from the spinal cord to the rest of your body. Sensory nerves gather information from the environment and send it to the spinal cord, which then speeds the message to the brain. The brain then makes sense of that message and fires off a response. Motor neurons deliver the instructions from the brain to the rest of your body. The spinal cord, made of a bundle of nerves running up and down the spine, is similar to a superhighway, speeding messages to and from the brain every second.

The brain is made of three main parts: the forebrain, midbrain, and hindbrain. The forebrain consists of the cerebrum, thalamus, and hypothalamus (part of the limbic system). The midbrain consists of the tectum and tegmentum. The hindbrain is made of the cerebellum, pons and medulla. Often the midbrain, pons, and medulla are referred to together as the brainstem.

The cerebrum is divided into four sections, called lobes. What does each of these lobes do?

* Frontal Lobe- associated with reasoning, planning, parts of speech, movement, emotions, and problem solving
* Parietal Lobe- associated with movement, orientation, recognition, perception of stimuli
* Occipital Lobe- associated with visual processing
* Temporal Lobe- associated with perception and recognition of auditory stimuli, memory, and speech

Note that the cerebral cortex is highly wrinkled. Essentially this makes the brain more efficient, because it increases the surface area of the brain and the number of neurons within it.

A deep furrow divides the cerebrum into two halves, known as the left and right hemispheres. The two hemispheres look mostly symmetrical, yet it has been shown that each side functions slightly differently from the other. Sometimes the right hemisphere is associated with creativity and the left hemisphere with logical abilities. The corpus callosum is a bundle of axons which connects these two hemispheres.

Nerve cells make up the gray surface of the cerebrum which is a little thicker than your thumb. White nerve fibers underneath carry signals between the nerve cells and other parts of the brain and body.

The neocortex occupies the bulk of the cerebrum. This is a six-layered structure of the cerebral cortex which is only found in mammals. It is thought that the neocortex is a recently evolved structure, and is associated with "higher" information processing by more fully evolved animals (such as humans, primates, dolphins, etc).

The Cerebellum: The cerebellum, or "little brain", is similar to the cerebrum in that it has two hemispheres and has a highly folded surface or cortex. This structure is associated with regulation and coordination of movement, posture, and balance.

The cerebellum is assumed to be much older than the cerebrum, evolutionarily. What do I mean by this? In other words, animals which scientists assume to have evolved prior to humans, for example reptiles, do have developed cerebellums. However, reptiles do not have a neocortex. For a more detailed look at the evolution of brain structures and intelligence, see "Ask the Experts": Evolution and Intelligence.

Limbic System: The limbic system, often referred to as the "emotional brain", is found buried within the cerebrum. Like the cerebellum, evolutionarily the structure is rather old.

This system contains the thalamus, hypothalamus, amygdala, and hippocampus.

Brain Stem: Underneath the limbic system is the brain stem. This structure is responsible for basic vital life functions such as breathing, heartbeat, and blood pressure. Scientists say that this is the "simplest" part of the human brain, because the entire brains of animals such as reptiles (which appear early on the evolutionary scale) resemble our brain stem.

The brain stem is made of the midbrain, pons, and medulla.

excerpt borrowed from http://serendip.brynmawr.edu/bb/kinser/Structure1.html#cerebrum

Tuesday, September 8, 2009

My Personal View - Fred Hoyle

Fred Hoyle - A Personal View [1960]

Looking to the Future

I come now to an entirely different class of question. With the clear understanding that what I am going to say has no agreed basis among scientists but represents my own personal views, I shall try to sum up the general philosophic issues that seem to come out of our survey of the Universe.
It is my view that man's unguided imagination could never have chanced on such a structure as I have put before you. No literary genius could have invented a story one-hundredth part as fantastic as the sober facts that have been unearthed by astronomical science. You need only compare our inquiry into the nature of the universe with the tales of such acknowledged masters as Jules Verne and H. G. Wells to see that fact outweighs fiction by an enormous margin. One is naturally led to wonder what the impact of the new cosmology would have been on a man like Newton, who would have been able to take it in, details and all, in one clean sweep. I think that Newton would have been quite unprepared for any such revelation, and that it would have had a shattering effect on him.
Is it likely that any astonishing new developments are lying in wait for us? Is it possible that the cosmology of 500 years hence will extend as far beyond our present beliefs as our cosmology goes beyond that of Newton? It may surprise you to hear that I doubt whether this will be so. If this should appear presumptuous to you, I think you should consider what I said earlier about the observable region of the Universe. As you will remember, even with a perfect telescope we could penetrate only about twice as far into space as the new telescope at Palomar. This means that there are no new fields to be opened up by the telescopes of the future, and this is a point of no small importance in our cosmology. There will be many advances in the detailed understanding of matters that still baffle us. Of the larger issues I expect a considerable improvement in the theory of the expanding Universe. Continuous creation I expect to play an important role in the theories of the future. Indeed, I expect that much will be learned about continuous creation, especially in connection with atomic physics. But by and large, I think that our present picture will turn out to bear an appreciable resemblance to the cosmologies of the future.
In all this I have assumed that progress will be made in the future. It is quite on the cards that astronomy may go backward, as, for instance, Greek astronomy went backwards after the time of Hipparchus. And in saying this I am not thinking about an atomic war destroying civilization, but about the increasing tendency to rivet scientific inquiry in fetters. Secrecy, nationalism, the Marxist ideology: these are some of the things that are threatening to choke the life out of science. You may possibly think that this might be a good thing, as we have obviously had quite enough of atom bombs, disease-spreading bacteria, and radioactive poisons to last us for a long time. But this is not the way in which it works. What will happen if science declines is that there will be more work, not less, on the comparatively easy problems of destruction. It will be the real science, where the adversary is not man but the Universe itself, that will suffer.
Next we come to a question that everyone, scientist and nonscientist alike, must have asked at some time. What is man's place in the Universe? I should like to make a start on this momentous issue by considering the view of the out-and-out materialist. The appeal of their argument is based on simplicity. The universe is here, they say, so let us take it for granted. Then the Earth and other planets must arise in the way we have already discussed. On a suitably favored planet like the Earth, life would be very likely to arise, and once it had started, so the argument goes, only the biological processes of mutation and natural selection are needed to produce living creatures as we know them. Such creatures are no more than ingenious machines that have evolved as strange by-products in an odd corner of the universe. No important connection exists, so the argument concludes, between these machines and the universe as a whole, and this explains why all attempts by the machines themselves to find such a connection have failed.
Most people object to this argument for the not very good reason that they do not like to think of themselves as machines. But taking the argument at face value, I see no point that can actually be disproved, except the claim of simplicity. The outlook of the materialist is not simple; it is really very complicated. The apparent simplicity is only achieved by taking the existence of the Universe for granted. For myself there is a great deal more about the Universe that I should like to know. Why is the Universe as it is and not something else? Why is the Universe here at all? It is true that at present we have no clue to the answers to questions such as these, and it may be that the materialists are right in saying that no meaning can be attached to them. But throughout the history of science, people have been asserting that such and such an issue is inherently beyond the scope of reasoned inquiry, and time after time they have been proved wrong. Two thousand years ago it would have been thought quite impossible to investigate the nature of the Universe to the extent I have been describing it to you in this book. And I dare say that you yourself would have said, not so very long ago, that it was impossible to learn anything about the way the universe is created. All experience teaches us that no one has yet asked too much.
And now I should like to give some consideration to contemporary religious beliefs. There is a good deal of cosmology in the Bible. My impression of it is that it is a remarkable conception, considering the time when it was written. But I think it can hardly be denied that the cosmology of the ancient Hebrews is only the merest daub compared with the sweeping grandeur of the picture revealed by modern science. Is it in any way reasonable to suppose that it was given to the Hebrews to understand mysteries far deeper than anything we can comprehend, when it is quite clear that they were completely ignorant of many matters that seem commonplace to us? No, it seems to me that religion is but a desperate attempt to find an escape from the truly dreadful situation in which we find ourselves. Here we are in this wholly fantastic Universe with scarcely a clue as to whether our existence has any real significance. No wonder then that many people feel the need for some belief that gives them a sense of security, and no wonder that they become very angry with people like me who say that this security is illusory. But I do not like the situation any better than they do. The difference is that I cannot see how the smallest advantage is to be gained from deceiving myself. We are in rather the situation of a man in a desperate, difficult position on a steep mountain. A materialist is like a man who becomes crag-fast and keeps shouting: "I'm safe, I'm safe" because he doesn't fall. The religious person is like a man who goes to the other extreme and rushes up the first route that shows the faintest hope of escape, and who is entirely reckless of the yawning precipices that lie below him.
I will illustrate all this by saying what I think about perhaps the most inscrutable question of all: do our minds survive death? To make any progress with this question it is necessary to understand what our minds are. If we knew this with any precision then I have no doubt we should be well on the way to getting a satisfactory answer. My own answer would be that mind is an intricate organization of matter. In so far as the organization can be remembered and reproduced there is no such thing as death. If ordinary atoms of carbon, oxygen, hydrogen, nitrogen, etc., could be fitted together into exactly the structural organization of Homer, or of Titus Oates, then these individuals would come alive exactly as they were originally. The whole issue therefore turns on whether our particular organization is remembered in some fashion. If it is, there is no death. If it is not, there is complete oblivion.
I should like to discuss a little further the beliefs of the Christians as I see them myself. In their anxiety to avoid the notion that death is the complete end of our existence, they suggest what is to me an equally horrible alternative. If I were given the choice of how long I should live with my present physical and mental equipment, I should decide on a good deal more than 70 years. But I doubt whether I should be wise to decide on more than 300 years. Already I am very much aware of my own limitations, and I think that 300 years is as long as I should like to put up with them. Now what the Christians offer me is an eternity of frustration. And it is no good their trying to mitigate the situation by saying that sooner or later my limitations would be removed, because this could not be done without altering me. It strikes me as very curious that the Christians have so little to say about how they propose eternity should be spent.
Perhaps I had better end by saying how I should arrange matters if it were my decision to make. It seems to me that the greatest lesson of adult life is that one's own consciousness is not enough. Which of us would not like to share the consciousness of half a dozen chosen individuals? What writer would not like to share the consciousness of Shakespeare? What musician that of Beethoven or Mozart? What mathematician that of Gauss? What I would choose would be an evolution of life whereby the essence of each of us becomes welded together into some vastly larger and more potent structure. I think such a dynamic evolution would be more in keeping with the grandeur of the physical Universe than the static picture offered by formal religion.
What is the chance of such an idea being right? Well, if there is one important result that comes out of our inquiry into the nature of the Universe it is this: when by patient inquiry we learn the answer to any problem, we always find, both as a whole and in detail, that the answer thus revealed is finer in concept and design than anything we could ever have arrived at by random guess. And this, I believe, will be the same for the deeper issues we have been discussing. I think that all our present guesses are likely to prove but a very pale shadow of the real thing; and it is on this note that I must now finish. Perhaps the most majestic feature of our whole existence is that while our intelligences are powerful enough to penetrate deeply into the evolution of this quite incredible Universe, we still have not the smallest clue to our own fate.

Wednesday, August 19, 2009

The Standard Model of Particle Physics

Chemistry can be understood in the physics of 3 particles (proton, neutron and electron), and the influence of the electromagnetic force. Nuclear physics can be understood in the physics of 4 particles (proton, neutron, electron and electron neutrino), and the influence of the strong and weak nuclear forces together with the electromagnetic force. The Standard Model Theory (SM) of particle physics provides a framework for explaining chemistry and nuclear physics (low energy processes). It additionally provides an explanation for sub-nuclear physics and some aspects of cosmology in the earliest moments of the universe (high energy processes).

The Standard Model is conceptually simple and contains a description of the elementary particles and forces. The SM particles are 12 spin-1/2 fermions (6 quarks and 6 leptons), 4 spin-1 ‘gauge’ bosons and a spin-0 Higgs boson. These are shown in the figure below and constitute the building blocks of the universe. The 6 quarks include the up and down quarks that make up the neutron and proton. The 6 leptons include the electron and its partner, the electron neutrino. The 4 bosons are particles that transmit forces and include the photon, which transmits the electromagnetic force. With the recent observation of the tau neutrino at Fermilab, all 12 fermions and all 4 gauge bosons have been observed. Seven of these 16 particles (charm, bottom, top, tau neutrino, W, Z, gluon) were predicted by the Standard Model before they were observed experimentally! There is one additional particle predicted by the Standard Model called the Higgs, which has not yet been observed. It is needed in the model to give mass to the W and Z bosons, consistent with experimental observations. While photons and gluons have no mass, the W and Z are quite heavy. The W weighs 80.3 GeV (80 times as much as the proton) and the Z weighs 91.2 GeV. The Higgs is expected to be heavy as well. Direct searches for it at CERN dictate that it must be heavier than 110 GeV.
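
As a quick arithmetic check of the mass comparison quoted above, the following sketch converts the boson masses into proton masses; the proton mass of roughly 0.938 GeV is a standard value that is not stated in the excerpt.

    # Check of the mass comparison quoted above (all masses in GeV).
    # The proton mass (~0.938 GeV) is a standard value, not given in the text.
    m_proton = 0.938
    m_W = 80.3          # W boson mass, from the text
    m_Z = 91.2          # Z boson mass, from the text

    print(f"W is about {m_W / m_proton:.0f} proton masses")   # ~86, i.e. "roughly 80"
    print(f"Z is about {m_Z / m_proton:.0f} proton masses")   # ~97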

The matter and force particles of the Standard Model. Up and down quarks were observed for the first time in electron-scattering experiments at SLAC in the late 1960s. The 1990 Nobel Prize in physics for this discovery was awarded to SLAC's Richard Taylor and to Jerome Friedman and Henry Kendall from MIT. The charm quark was discovered simultaneously in experiments at SLAC and at Brookhaven in 1974. SLAC's Burton Richter and MIT's Samuel Ting shared the 1976 Nobel Prize in physics for this discovery. The tau lepton was discovered at SLAC in 1975, for which SLAC's Martin Perl was awarded the 1995 Nobel Prize in physics.

The SM particles are considered to be point-like, but contain an internal ‘spin’ (angular momentum) degree of freedom which is quantized and can have values of 0, ½ or 1. Spin-1/2 particles obey Fermi statistics, which have as a consequence that no two electrons can be in the same quantum state. This feature is necessary for forming atoms more complex than hydrogen. Spin-1 and spin-0 particles obey Bose-Einstein statistics, which prefer to have many particles in the lowest energy or ground state. This phenomenon is responsible for superconductivity.
The Standard Model says that forces are the exchange of gauge bosons (the force particles) between interacting quarks and leptons. Feynman diagrams are useful to describe this pictorially. As illustrated in the figures below, two electrons may interact by scattering and exchanging a photon; or an electron and positron may collide and annihilate to form a Z particle, which then decays into a quark and anti-quark. Electromagnetic forces occur via exchange of photons; weak nuclear forces occur via exchange of W and Z particles; and strong nuclear forces occur via exchange of gluons.

Electromagnetic forces and interactions are familiar to everyone. They are responsible for visible light and radio waves, and are the physics behind the electronics and telecommunications industries. All quarks and leptons can interact electromagnetically. Strong nuclear forces are responsible for holding protons and neutrons together inside the nucleus, and for fueling the power of the sun. Only quarks interact via the strong interaction. Weak nuclear forces are responsible for radioactivity and also exhibit some peculiar symmetry features not seen with the other forces. In contrast to electromagnetic and strong forces, the laws of physics (i.e. the strengths of the forces) for the weak force are different for particles and anti-particles (C Violation), for a scattering process and its mirror image (P Violation), and for a scattering process and the time reversal of that scattering process (T Violation). All quarks and leptons can interact via the weak interaction.

The Standard Model provides much more than simply a description of electromagnetic, strong and weak interactions. Its mathematics provides explicit and accurate calculations for the rates at which these processes take place and relative probabilities for decays of unstable particles into other lower-mass particles (such as for a Z particle to decay into different types of quarks and leptons).

excerpt borrowed from http://www-sldnt.slac.stanford.edu/alr/standard_model.htm

Wednesday, August 5, 2009

Conditioned Reflexes

Who was Ivan Pavlov?

The Russian scientist Ivan Petrovich Pavlov was born in 1849 in Ryazan, where his father worked as a village priest. In 1870 Ivan Pavlov abandoned the religious career for which he had been preparing, and instead went into science. There he had a great impact on the field of physiology by studying the mechanisms underlying the digestive system in mammals.

For his original work in this field of research, Pavlov was awarded the Nobel Prize in Physiology or Medicine in 1904. By then he had turned to studying the laws on the formation of conditioned reflexes, a topic on which he worked until his death in 1936. His discoveries in this field paved the way for an objective science of behavior.

Pavlov's drooling dogs

While Ivan Pavlov worked to unveil the secrets of the digestive system, he also studied what signals triggered related phenomena, such as the secretion of saliva. When a dog encounters food, saliva starts to pour from the salivary glands located in the back of its oral cavity. This saliva is needed in order to make the food easier to swallow. The fluid also contains enzymes that break down certain compounds in the food. In humans, for example, saliva contains the enzyme amylase, an effective processor of starch.

Pavlov became interested in studying reflexes when he saw that the dogs drooled without the proper stimulus. Although no food was in sight, their saliva still dribbled. It turned out that the dogs were reacting to lab coats. Every time the dogs were served food, the person who served the food was wearing a lab coat. Therefore, the dogs reacted as if food was on its way whenever they saw a lab coat.

In a series of experiments, Pavlov then tried to figure out how these phenomena were linked. For example, he struck a bell when the dogs were fed. If the bell was sounded in close association with their meal, the dogs learnt to associate the sound of the bell with food. After a while, at the mere sound of the bell, they responded by drooling.

Different kinds of reflexes

Reflexes make us react in a certain way. When a light beam hits our eyes, our pupils shrink in response to the light stimulus. And when the doctor taps you below the knee cap, your leg swings out. These reflexes are called unconditioned, or built-in. The body responds in the same fashion every time the stimulus (the light or the tap) is applied. In the same way, dogs drool when they encounter food.

Pavlov's discovery was that environmental events that previously had no relation to a given reflex (such as a bell sound) could, through experience, trigger a reflex (salivation). This kind of learnt response is called a conditioned reflex, and the process whereby dogs or humans learn to connect a stimulus to a reflex is called conditioning.

Animals generally learn to associate stimuli that are relevant to their survival. Food aversion is an example of a natural conditioned reflex. If an animal eats something with a distinctive vanilla taste and then eats a tasteless poison that leads to nausea, the animal will not be particularly eager to eat vanilla-flavoured food the next time. Linking nausea to taste is an evolutionarily successful strategy, since animals that failed to learn their lesson did not last very long.

Why were Pavlov's findings given so much acknowledgment?

Pavlov's description of how animals (and humans) can be trained to respond in a certain way to a particular stimulus drew tremendous interest from the time he first presented his results. His work paved the way for a new, more objective method of studying behavior.

So-called Pavlovian training has been used in many fields, with anti-phobia treatment as but one example. An important principle in conditioned learning is that an established conditioned response (salivating in the case of the dogs) decreases in intensity if the conditioned stimulus (bell) is repeatedly presented without the unconditioned stimulus (food). This process is called extinction.
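
As an illustration only, the acquisition-then-extinction pattern described above can be pictured with a toy simulation. The sketch below uses the Rescorla-Wagner update rule, a formal model developed decades after Pavlov, and the learning rate and trial counts are arbitrary choices, not parameters from his experiments.

    # Toy Rescorla-Wagner-style simulation of conditioning (illustrative only).
    # V is the associative strength of the bell (conditioned stimulus);
    # the target is 1 when food (the unconditioned stimulus) is present, 0 otherwise.
    alpha = 0.2                      # learning rate (arbitrary)
    V = 0.0
    history = []

    for trial in range(20):          # acquisition: bell paired with food
        V += alpha * (1.0 - V)
        history.append(V)

    for trial in range(20):          # extinction: bell presented alone
        V += alpha * (0.0 - V)
        history.append(V)

    print("response after acquisition:", round(history[19], 2))   # close to 1
    print("response after extinction: ", round(history[-1], 2))   # close to 0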

In order to treat phobias evoked by certain environmental situations, such as heights or crowds, this phenomenon can be used. The patient is first taught a muscle relaxation technique. Then he or she is told, over a period of days, to imagine the fear-producing situation while trying to inhibit the anxiety by relaxation. At the end of the series, the strongest anxiety-provoking situation may be brought to mind without anxiety. This process is called systematic desensitization.

Conditioning forms the basis of much of learned human behavior. Nowadays, this knowledge has also been exploited by commercial advertising. An effective commercial should be able to manipulate the response to a stimulus (like seeing a product's name) which initially does not provoke any feeling. The objective is to train people to make the "false" connection between positive emotions (e.g. happiness or feeling attractive) and the particular brand of consumer goods being advertised.

Pavlov's prize

Although the first image that comes to mind while mentioning Ivan Pavlov's name is his drooling dogs, he became a Nobel Laureate for his research in a different field. In 1904 he received the Nobel Prize in Physiology or Medicine for his pioneering studies of how the digestive system works.

Until Pavlov started to scrutinize this field, our knowledge of how food was digested in the stomach, and what mechanisms were responsible for regulating this, was quite foggy.

In order to understand the process, Pavlov developed a new way of monitoring what was happening. He surgically made fistulas in animals' stomachs, which enabled him to study the organs and take samples of body fluids from them while they continued to function normally.

excerpt taken from http://nobelprize.org/educational_games/medicine/pavlov/readmore.html

Thursday, July 16, 2009

Panspermia

An idea, with ancient roots, according to which life arrives, ready-made, on the surface of planets from space. Anaxagoras is said to have spoken of the "seeds of life" from which all organisms derive. Panspermia began to assume a more scientific form through the proposals of Berzelius (1834), Richter (1865), Thomson (Lord Kelvin) (1871), and Helmholtz (1871), finally reaching the level of a detailed, widely-discussed hypothesis through the efforts of the Swedish chemist Svante Arrhenius. Originally in 1903, but then to a wider audience through a popular book in 1908, Arrhenius urged that life in the form of spores could survive in space and be spread from one planetary system to another by means of radiation pressure. He generally avoided the problem of how life came about in the first place by suggesting that it might be eternal, though he did not exclude the possibility of living things generating from simpler substances somewhere in the universe. In Arrhenius's view, spores escape by random movement from the atmosphere of a planet that has already been colonized and are then launched into interstellar space by the pressure of starlight ("radiopanspermia"). Eventually, some of the spores fall upon another planet, such as the Earth, where they inoculate the virgin world with new life or, perhaps, compete with any life-forms that are already present.

Arrhenius's ideas prompted a variety of experimental work, such as that of Paul Becquerel, to test whether spores and bacteria could survive in conditions approximating those in space. A majority of scientists reached the conclusion that stellar ultraviolet would probably prove deadly to any organisms in the inner reaches of a planetary system and, principally for this reason, panspermia quietly faded from view, only to be revived some four decades later.


Sagan's analysis

In the early 1960s, Carl Sagan analyzed in detail both the physical and biological aspects of the Arrhenius scenario. The dynamics of a microorganism in space depend on the ratio p/g, where p is the repulsive force due to the radiation pressure of a star and g is the attractive force due to the star's gravitation. If p > g, a microbe that has drifted into space will move away from the star; if p < g, the microbe will fall toward the star. For a microbe to escape into interstellar space from the vicinity of a star like the Sun, the organism would have to be between 0.2 and 0.6 microns across. Though small, this is within the range of some terrestrial bacterial spores and viruses. The ratio p/g increases for more luminous stars, enabling the ejection of larger microbes. However, main sequence stars brighter than the Sun are also hotter, so that they emit more ultraviolet radiation which would pose an increased threat to space-borne organisms. Additionally, such stars have a shorter main sequence lifespan, so that they provide less opportunity for life to take hold on any worlds that might orbit around them. These considerations, argued Sagan, constrain "donor" stars for Arrhenius-style panspermia to spectral types G5 (Sun-like) to A0. Stars less luminous than the Sun would be unable to eject even the smallest of known living particles. "Acceptor" stars, on the other hand, must have lower p/g ratios in order to allow microbes, approaching from interstellar space, to enter their planetary systems. The most likely acceptor worlds, Sagan concluded, are those circling around red dwarfs (dwarf M stars), or in more distant orbits around G stars and K stars. In the case of the solar system, he surmised, the best place to look for life of extrasolar origin would be the moons of the outer planets, in particular Triton.
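
A rough numerical sketch of the p/g argument is easy to write down. The version below assumes a spherical, fully absorbing grain of roughly spore-like density near a Sun-like star; those assumptions (and the density figure) are mine, not Sagan's detailed treatment, so the result should only be read as an order-of-magnitude check on the size range quoted above.

    import math

    # Ratio p/g of radiation-pressure force to gravity for a small absorbing
    # sphere near a Sun-like star. Both forces fall off as 1/r^2, so the ratio
    # depends only on the grain's radius and density, not on its distance.
    G   = 6.674e-11     # gravitational constant (SI units)
    c   = 2.998e8       # speed of light, m/s
    L   = 3.83e26       # solar luminosity, W
    M   = 1.989e30      # solar mass, kg
    rho = 1.3e3         # assumed density of a dried spore, kg/m^3 (a guess)

    def p_over_g(radius_m):
        """p/g for a fully absorbing sphere of the given radius."""
        return 3.0 * L / (16.0 * math.pi * G * M * c * rho * radius_m)

    for radius_um in (0.1, 0.3, 0.6, 1.0):
        print(f"radius {radius_um:3.1f} micron: p/g = {p_over_g(radius_um * 1e-6):4.2f}")
    # Only grains with radii below roughly half a micron come out with p/g > 1,
    # in rough agreement with the size range quoted above.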


Life-carrying rocks?

Many variations on the panspermia theme have been put forward. William Thomson (Lord Kelvin) proposed that spores might travel aboard meteorites ("lithopanspermia"), thus affording them better protection from high-energy radiation in space. Whether events violent enough to hurl rocks from the surface of a biologically active planet into interstellar space ever occur is not clear. But there is now overwhelming evidence that ballistic panspermia occasionally operates between worlds of the same planetary system. This follows the discovery of meteorites on Earth that have almost certainly come from the surface of Mars (see SNC meteorites) and the Moon. There is also controversial evidence for fossil remains aboard some carbonaceous chondrites, including the Orgueil meteorite.


Contamination

In the 1960s, Thomas Gold pointed out another way in which life might travel from world to world (see the "garbage theory" of the origin of life). A team of explorers from an advanced, interstellar-faring race might land on the planet of a foreign star and, unwittingly, leave behind "bugs" which then adapt to the local conditions. He imagined, for example, the visitors having a picnic and not clearing up afterward. What effect microscopic alien fauna and flora might have on the indigenous species is impossible to predict, but such considerations were foremost in the minds of scientists receiving the first samples of rock and soil from the Moon. Precautions against alien contamination will be even more important when the first spacecraft return from Mars or Europa, where the possibility of extant life is far greater (back-contamination). And there is the reverse problem (forward-contamination). The remarkable case of Surveyor 3 makes it clear that some terrestrial microbes can survive for significant periods in hostile conditions on other worlds. What if such a world (like Mars) had life-forms of its own? What chaos might the "alien" microbes from Earth wreak? It would be tragic indeed if the very means of discovering the first examples of extraterrestrial life were also to be the vehicle of its extinction. On the other hand, as Carl Sagan pointed out, if Gold's "picnic scenario" had actually happened in the Earth's past "some microbial resident of a primordial cookie crumb may be the ancestor of us all." Just as the chance of accidental contamination arising from intelligent activity cannot be ruled out, there is the complementary possibility of intentional or directed panspermia.

Life from space

Today, the panspermia hypothesis has finally achieved some measure of scientific respectability. Although it remains the orthodox view that life evolved in situ on this world and, possibly, many others, there is mounting evidence of at least some extraterrestrial input to the formative stages of planet-based biology. Prebiotic chemicals have been detected in interstellar clouds (similar to that from which the Solar System formed), comets, and meteorites (see astrochemistry). At the very least, it seems that some of the raw ingredients for life, such as amino acids, may have fallen from the sky in addition to being manufactured here on Earth. But some researchers have gone much further in their speculations. Most notably, Fred Hoyle and Chandra Wickramasinghe have argued persistently since the 1970s that complex organic substances, and perhaps even primitive organisms, might have evolved on the surface of cosmic dust grains in space and then been transported to the Earth's surface by comets and meteorites (see life, in space). The extraordinary durability of some extremophiles, bacterial spores, and even exposed DNA, lends credence to the view that simple life-forms may have originated between the stars or been capable of surviving long interstellar journeys.

excerpt taken from http://www.daviddarling.info/encyclopedia/P/panspermia.html

Monday, July 13, 2009

Carl Gustav Jung

Amid all the talk about the "Collective Unconscious" and other sexy issues, most readers are likely to miss the fact that C.G. Jung was a good Kantian. His famous theory of Synchronicity, "an acausal connecting principle," is based on Kant's distinction between phenomena and things-in-themselves and on Kant's theory that causality will not operate among things-in-themselves the way it does in phenomena. Thus, Kant could allow for free will (unconditioned causes) among things-in-themselves, as Jung allows for synchronicity ("meaningful coincidences"). Next to Kant, Jung is close to Schopenhauer, praising him as the first philosopher he had read, "who had the courage to see that all was not for the best in the fundaments of the universe" [Memories, Dreams, Reflections, p. 69]. Jung was probably unaware of the Friesian background of Otto's term "numinosity" when he began to use it for his Archetypes, but it is unlikely that he would object to the way in which Otto's theory, through Fries, fits into Kantian epistemology and metaphysics.

Jung's place in the Kant-Friesian tradition is on a side that would have been distasteful to Kant, Fries, and Nelson, whose systems were basically rationalistic. Thus Kant saw religion as properly a rational expression of morality, and Fries and Nelson, although allowing an aesthetic content to religion different from morality, nevertheless did not expect religion to embody much more than good morality and good art. Schopenhauer, Otto, and Jung all represent an awareness that more exists to religion and to human psychological life than this. The terrifying, uncanny, and fascinating elements of religion and ordinary life are beneath the notice of Kant, Fries, and Nelson, while they are indisputable and irreducible elements of life, for which there must be an account, with Schopenhauer, Otto, and Jung. As Jung again said of Schopenhauer: "He was the first to speak of the suffering of the world, which visibly and glaringly surrounds us, and of confusion, passion, evil -- all those things which the others hardly seemed to notice and always tried to resolve into all-embracing harmony and comprehensibility" [ibid. p. 69]. It is an awareness of this aspect of the world that renders the religious ideas of "salvation" meaningful; yet "salvation" as such is always missing from moralistic or aesthetic renderings of religion. Only Jung could have written his Answer to Job.

Jung's great Answer to Job, indeed, represents an approach to religion that is all but unique. Placing God in the Unconscious might strike most people as reducing him to a mere psychological object; but that is to overlook Jung's Kantianism. The Unconscious, and especially the Collective Unconscious, belongs to Kantian things-in-themselves, or to the transcendent Will of Schopenhauer. Jung was often at pains not to complicate his theory of the Archetypes by committing himself to a metaphysical theory -- he wanted the theory to work whether he was talking about the brain or about the Transcendent -- but that was merely a concession to the materialistic bias of contemporary science. He had no materialistic commitment himself and, when it came down to it, was not going to accept such naive reductionism. Instead, he was willing to rethink how the Transcendent might operate. Thus, he says about Schopenhauer:

I felt sure that by "Will" he really meant God, the Creator, and that he was saying that God was blind. Since I knew from experience that God was not offended by any blasphemy, that on the contrary He could even encourage it because He wished to evoke not only man's bright and positive side but also his darkness and ungodliness, Schopenhauer's view did not distress me. [ibid. pp. 69-70]

The Problem of Evil, which for so many people simply denuminizes religion, and which Schopenhauer used to reject the value of the world, became a challenge for Jung in the psychoanalysis of God. The God of the Bible is indeed a personality, and seemingly not always the same one. God as a morally evolving personality is the extraordinary conception of Answer to Job. What Otto saw as the evolution of human moral consciousness, Jung turns right around on the basis of the principle that the human unconscious, expressed spontaneously in religious practice and literature, transcends mere human subjectivity. But the transcendent reality in the unconscious is different in kind from consciousness. As Jung said in Memories, Dreams, Reflections again:

If the Creator were conscious of Himself, He would not need conscious creatures; nor is it probable that the extremely indirect methods of creation, which squander millions of years upon the development of countless species and creatures, are the outcome of purposeful intention. Natural history tells us of a haphazard and casual transformation of species over hundreds of millions of years of devouring and being devoured. The biological and political history of man is an elaborate repetition of the same thing. But the history of the mind offers a different picture. Here the miracle of reflecting consciousness intervenes -- the second cosmogony [ed. note: what Teilhard de Chardin called the origin of the "noosphere," the layer of "mind"]. The importance of consciousness is so great that one cannot help suspecting the element of meaning to be concealed somewhere within all the monstrous, apparently senseless biological turmoil, and that the road to its manifestation was ultimately found on the level of warm-blooded vertebrates possessed of a differentiated brain -- found as if by chance, unintended and unforeseen, and yet somehow sensed, felt and groped for out of some dark urge. [p. 339]

In other words, a "meaningful coincidence." Jung also says,

As far as we can discern, the sole purpose of human existence is to kindle a light in the darkness of mere being. It may even be assumed that just as the unconscious affects us, so the increase in our consciousness affects the unconscious. [p. 326]

However, Jung has missed something there. If consciousness is "the light in the darkness of mere being," consciousness alone cannot be the "sole purpose of human existence," since consciousness as such could appear as just a place of "mere being" and so would easily become an empty, absurd, and meaningless Existentialist existence. Instead, consciousness allows for the meaningful instantiation of existence, both through Jung's process of Individuation, by which the Archetypes are given unique expression in a specific human life, and from the historic process that Jung examines in Answer to Job, by which interaction with the unconscious alters in turn the Archetypes that come to be instantiated. While Otto could understand Job's reaction to God, as the incomprehensible Numen, Jung thinks of God's reaction to Job, as an innocent and righteous man jerked around by God's unconsciousness. Jung's idea that the Incarnation then is the means by which God redeems Himself from His morally false position in Job is an extraordinary reversal (I hesitate to say "deconstruction") of the consciously expressed dogma that the Incarnation is to redeem humanity.

It is not too difficult to see this turn in other religions. The compassion of the Buddhas in Mahâyâna Buddhism, especially when the Buddha Shakyamuni comes to be seen as the expression of a cosmic and eternal Dharma Body, is a hand of salvation stretched out from the Transcendent, without, however, the complication that the Buddha is ever thought responsible for the nature of the world and its evils as their Creator. That complication, however, does occur with Hindu views of the divine Incarnations of Vishnu. Closer to a Jungian synthesis, on the other hand, is the Bahá'í theory that divine contact is through "Manifestations," which are neither wholly human nor wholly divine: merely human in relation to God, but entirely divine in relation to other humans. Such a theory must appear Christianizing in comparison to Islam, but it avoids the uniqueness of Christ as the only Incarnation in Christianity itself. This is conformable to the Jungian proposition that the unconscious is both a side of the human mind and a door into the Transcendent. When that door opens, the expression of the Transcendent is then conditioned by the person through whom it is expressed, possessing that person, but it is also genuinely Transcendent and reflecting the ongoing interaction that the person historically embodies. The possible "mere being" even of consciousness then becomes the place of meaning and value.

Whether "psychoanalysis" as practiced by Freud or Jung is to be taken seriously anymore is a good question; but both men will survive as philosophers long after their claims to science or medicine may be discounted. Jung's Kantianism enables him to avoid the materialism and reductionism of Freud ("all of civilization is a substitute for incest") and, with a great breadth of learning, employs principles from Kant, Schopenhauer, and Otto that are easily conformable to the Kant-Friesian tradition. The Answer to Job, indeed, represents a considerable advance beyond Otto, into the real paradoxes that are the only way we can conceive transcendent reality.

excerpt taken from http://www.friesian.com/jung.htm

Monday, July 6, 2009

Quasars

In the 1960s it was observed that certain objects emitting radio waves but thought to be stars had very unusual optical spectra. It was finally realized that the reason the spectra were so unusual is that the lines were Doppler shifted by a very large amount, corresponding to velocities away from us that were significant fractions of the speed of light. The reason that it took some time to come to this conclusion is that, because these objects were thought to be relatively nearby stars, no one had any reason to believe they should be receding from us at such velocities.

Quasars and QSOs

These objects were named Quasistellar Radio Sources (meaning "star-like radio sources"), which was soon contracted to quasars. Later, it was found that many similar objects did not emit radio waves. These were termed Quasistellar Objects or QSOs. Now, all of these are often termed quasars (only about 1% of the quasars discovered to date have detectable radio emission).

Quasars Are Related to Active Galaxies

The quasars were deemed to be strange new phenomena, and initially there was considerable speculation that new laws of physics might have to be invented to account for the amount of energy that they produced. However, subsequent research has shown that the quasars are closely related to the active galaxies that have been studied at closer distances. We now believe quasars and active galaxies to be related phenomena, and that their energy output can be explained using the theory of general relativity. In that sense, the quasars are certainly strange, but perhaps are not completely new phenomena.

Quasar Redshifts Imply Enormous Distance and Energy Output

The quasars have very large redshifts, indicating by the Hubble law that they are at great distances. The fact that they are visible at such distances implies that they emit enormous amounts of energy and are certainly not stars.
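
As a rough illustration of how a measured redshift translates into the velocities and distances described here, the sketch below uses the special-relativistic Doppler formula and a Hubble constant of about 70 km/s/Mpc; both are standard values rather than figures from the excerpt, and at the largest redshifts a full cosmological model is really required.

    # Order-of-magnitude conversion from redshift to recession velocity and
    # Hubble-law distance; not a substitute for a proper cosmological model.
    c  = 3.0e5        # speed of light, km/s
    H0 = 70.0         # assumed Hubble constant, km/s per Mpc

    def beta_from_redshift(z):
        """Recession velocity as a fraction of c (special-relativistic Doppler)."""
        return ((1 + z)**2 - 1) / ((1 + z)**2 + 1)

    for z in (0.16, 0.5, 2.0):        # 0.16 is roughly the redshift of 3C273
        beta = beta_from_redshift(z)
        print(f"z = {z:4.2f}: v ~ {beta:.2f} c, naive distance ~ {beta * c / H0:5.0f} Mpc")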

The Energy Source of Quasars is Extremely Compact

Quasars are extremely luminous at all wavelengths and exhibit variability on timescales as short as hours, indicating that their enormous energy output originates in a very compact source. Light curves at different wavelengths illustrate this variability in the intensity of quasars and other active galaxies. In all cases, the timescale for variability of the light from an active galaxy sets an upper limit on the size of the compact energy source that powers the active galaxy. These limits are typically the size of the Solar System or smaller.
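
The light-travel-time argument behind that size limit amounts to one multiplication: a source cannot vary coherently on a timescale shorter than the time light takes to cross it. A minimal sketch follows; the few-hour timescales come from the text, while the astronomical-unit comparison values are standard.

    # Causality limit on source size: R <= c * (variability timescale).
    c  = 3.0e8          # speed of light, m/s
    au = 1.496e11       # one astronomical unit, metres

    for hours in (3, 12, 24):
        R_max = c * hours * 3600
        print(f"variability of {hours:2d} h -> size limit of about {R_max / au:5.1f} AU")
    # A few hours of variability limits the emitting region to a few tens of AU,
    # i.e. no larger than the planetary Solar System.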

Some quasars emit at radio frequencies, but most (99%) are radio quiet. Careful observation shows faint jets coming from some quasars; images of the quasar 3C273, for example, show a jet in the optical as well as radio frequency emission associated with that jet.

Relationship of Quasars and Active Galaxies

The quasars are thought to be powered by supermassive rotating black holes at their centers. Because they are the most luminous objects known in the universe, they are the objects that have been observed at the greatest distances from us. The most distant are so far away that the light we see coming from them was produced when the Universe was only one tenth of its present age.

The present belief is that quasars are actually closely related to active galaxies such as Seyfert Galaxies or BL Lac objects in that they are very active galaxies with bright nuclei powered by enormous rotating black holes. However, because the quasars are at such large distances, it is difficult to see anything other than the bright nucleus of the active galaxy in their case. As we have noted above, modern observations have begun to detect jets around some quasars, along with evidence for the surrounding faint nebulosity of a galaxy-like object.

Evolution of Quasars

The standard theory is that quasars turn on when there is matter to feed their supermassive black hole engines at the center and turn off when there is no longer fuel for the black hole. Recent Hubble Space Telescope observations indicate that quasars can occur in galaxies that are interacting with each other. This suggests the possibility that quasars that have turned off because they have consumed the fuel available in the original galaxy may turn back on if the galaxy hosting the quasar interacts with another galaxy in such a way as to make more matter available to the black hole. Recent surveys of quasar host galaxies shed light on this issue.

Abundance of Quasars in the Early Universe

Looking at large distances in the Universe is equivalent to looking back in time because of the finite speed of light. Thus, the observation of quasars at large distances and their scarcity nearby implies that they were much more common in the early Universe than they are now.

This is one piece of evidence that argues against the steady state theory of the Universe but would be consistent with the big bang theory. We shall discuss this further below.

Hungry Black Holes

Notice that the greater abundance of quasars early in the Universe would be consistent with the mechanism discussed above whereby a quasar shuts off when its black hole engine has consumed the fuel available in the host galaxy. We would expect that generally in the early Universe there may have been more mass easily accessible to the black hole than later, after much of it had been consumed. Perhaps later quasars are more dependent on interactions between galaxies to disturb mass distributions and cause galaxies to begin to feed the hungry black hole.

excerpt taken from http://csep10.phys.utk.edu/astr162/lect/active/quasars.html

Sunday, July 5, 2009

Dark Energy

The discovery in 1998 that the Universe is actually speeding up its expansion was a total shock to astronomers. It just seems so counter-intuitive, so against common sense. But the evidence has become convincing.

The evidence came from studying distant type Ia supernovae. This type of supernova results from a white dwarf star in a binary system. Matter transfers from the normal star to the white dwarf until the white dwarf attains a critical mass (the Chandrasekhar limit) and undergoes a thermonuclear explosion. Because all white dwarfs achieve the same mass before exploding, they all achieve the same luminosity and can be used by astronomers as "standard candles." Thus by observing their apparent brightness, astronomers can determine their distance using the 1/r² law.

By knowing the distance to the supernova, we know how long ago it occurred. In addition, the light from the supernova has been red-shifted by the expansion of the universe. By measuring this redshift from the spectrum of the supernova, astronomers can determine how much the universe has expanded since the explosion. By studying many supernovae at different distances, astronomers can piece together a history of the expansion of the universe.
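
A minimal sketch of those two measurements, under simple assumptions: apparent brightness is converted to distance with the standard distance-modulus form of the 1/r² law, and the measured redshift gives the factor by which the universe has stretched since the light was emitted. The peak absolute magnitude of about -19.3 and the example numbers are typical values, not data from the excerpt.

    # Standard-candle distance from apparent brightness (the 1/r^2 law written
    # in magnitudes), plus the expansion factor implied by the redshift.
    M_PEAK = -19.3                     # typical peak absolute magnitude of a type Ia SN

    def distance_mpc(apparent_mag):
        """Distance in Mpc from the distance modulus m - M = 5*log10(d / 10 pc)."""
        d_pc = 10 ** ((apparent_mag - M_PEAK + 5) / 5)
        return d_pc / 1e6

    m_obs, z = 22.5, 0.5               # illustrative numbers for a distant supernova
    print(f"distance ~ {distance_mpc(m_obs):.0f} Mpc")
    print(f"universe has expanded by a factor of {1 + z:.1f} since the light left")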

In the 1990s two teams of astronomers, the Supernova Cosmology Project and the High-Z Supernova Search, were looking for distant type Ia supernovae in order to measure the expansion rate of the universe with time. They expected that the expansion would be slowing, which would show up as the supernovae appearing brighter than their redshifts would suggest. Instead, they found the supernovae to be fainter than expected. Hence, the expansion of the universe was accelerating!

In addition, measurements of the cosmic microwave background indicate that the universe has a flat geometry on large scales. Because there is not enough matter in the universe - either ordinary or dark matter - to produce this flatness, the difference must be attributed to a "dark energy". This same dark energy causes the acceleration of the expansion of the universe. In addition, the effect of dark energy seems to vary, with the expansion of the Universe slowing down and speeding up over different times.

Astronomers know dark matter is there by its gravitational effect on the matter that we see and there are ideas about the kinds of particles it must be made of. By contrast, dark energy remains a complete mystery. The name "dark energy" refers to the fact that some kind of "stuff" must fill the vast reaches of mostly empty space in the Universe in order to be able to make space accelerate in its expansion. In this sense, it is a "field" just like an electric field or a magnetic field, both of which are produced by electromagnetic energy. But this analogy can only be taken so far because we can readily observe electromagnetic energy via the particle that carries it, the photon.

Some astronomers identify dark energy with Einstein's Cosmological Constant. Einstein introduced this constant into his general relativity when he saw that his theory was predicting an expanding universe, which was contrary to the evidence for a static universe that he and other physicists had in the early 20th century. This constant balanced the expansion and made the universe static. With Edwin Hubble's discovery of the expansion of the Universe, Einstein dismissed his constant. It later became identified with what quantum theory calls the energy of the vacuum.

In the context of dark energy, the cosmological constant is a reservoir which stores energy. Its energy scales as the universe expands. Applied to the supernova data, it would distinguish effects due to the matter in the universe from those due to the dark energy. Unfortunately, the amount of this stored energy required is far more than observed, and would result in very rapid acceleration (so much so that the stars and galaxies would not form). Physicists have suggested a new type of matter, "quintessence," which would fill the universe like a fluid which has a negative gravitational mass. However, new constraints imposed on cosmological parameters by Hubble Space Telescope data rule out at least simple models of quintessence.

Other possibilities being explored are topological defects, time varying forms of dark energy, or a dark energy that does not scale uniformly with the expansion of the universe.

excerpt taken from http://imagine.gsfc.nasa.gov/docs/science/mysteries_l1/dark_energy.html

Saturday, July 4, 2009

Quantum Cosmology

The physical laws that govern the universe prescribe how an initial state evolves with time. In classical physics, if the initial state of a system is specified exactly then the subsequent motion will be completely predictable. In quantum physics, specifying the initial state of a system allows one to calculate the probability that it will be found in any other state at a later time. Cosmology attempts to describe the behaviour of the entire universe using these physical laws. In applying these laws to the universe one immediately encounters a problem. What is the initial state that the laws should be applied to? In practice, cosmologists tend to work backwards by using the observed properties of the universe now to understand what it was like at earlier times. This approach has proved very successful. However it has led cosmologists back to the question of the initial conditions.

Inflation (a period of accelerating expansion in the very early universe) is now accepted as the standard explanation of several cosmological problems. In order for inflation to have occurred, the universe must have been formed containing some matter in a highly excited state. Inflationary theory does not address the question of why this matter was in such an excited state. Answering this demands a theory of the pre-inflationary initial conditions. There are two serious candidates for such a theory. The first, proposed by Andrei Linde of Stanford University, is called chaotic inflation. According to chaotic inflation, the universe starts off in a completely random state. In some regions matter will be more energetic than in others and inflation could ensue, producing the observable universe.

The second contender for a theory of initial conditions is quantum cosmology, the application of quantum theory to the entire universe. At first this sounds absurd because typically large systems (such as the universe) obey classical, not quantum, laws. Einstein's theory of general relativity is a classical theory that accurately describes the evolution of the universe from the first fraction of a second of its existence to now. However it is known that general relativity is inconsistent with the principles of quantum theory and is therefore not an appropriate description of physical processes that occur at very small length scales or over very short times. To describe such processes one requires a theory of quantum gravity.

In non-gravitational physics the approach to quantum theory that has proved most successful involves mathematical objects known as path integrals. Path integrals were introduced by the Nobel prizewinner Richard Feynman, of CalTech. In the path integral approach, the probability that a system in an initial state A will evolve to a final state B is given by adding up a contribution from every possible history of the system that starts in A and ends in B. For this reason a path integral is often referred to as a `sum over histories'. For large systems, contributions from similar histories cancel each other in the sum and only one history is important. This history is the history that classical physics would predict.
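To make the "sum over histories" concrete, the standard textbook form of the path integral (added here for orientation, not part of the excerpt) writes the amplitude to evolve from state A to state B as

\[
K(B,A) \;=\; \int \mathcal{D}[x(t)]\; e^{\,i S[x]/\hbar},
\qquad
P(A \to B) \;=\; \bigl|K(B,A)\bigr|^{2},
\]

where \(S[x]\) is the classical action of a history \(x(t)\). For large systems the rapidly oscillating phases from neighbouring histories cancel except near the history of stationary action, which is exactly the path predicted by classical physics.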

For mathematical reasons, path integrals are formulated in a background with four spatial dimensions rather than three spatial dimensions and one time dimension. There is a procedure known as `analytic continuation' which can be used to convert results expressed in terms of four spatial dimensions into results expressed in terms of three spatial dimensions and one time dimension. This effectively converts one of the spatial dimensions into the time dimension. This spatial dimension is sometimes referred to as `imaginary' time because it involves the use of so-called imaginary numbers, which are well defined mathematical objects used every day by electrical engineers.
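The "analytic continuation" referred to here is usually the Wick rotation (a standard convention, added for reference):

\[
t \;\to\; -\,i\tau,
\qquad
e^{\,i S/\hbar} \;\to\; e^{-S_{E}/\hbar},
\]

so that real time \(t\) is traded for an imaginary-time coordinate \(\tau\), and the oscillatory weight in the path integral becomes a decaying exponential in the Euclidean action \(S_{E}\), which is what makes the four-spatial-dimension formulation mathematically better behaved.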

The success of path integrals in describing non-gravitational physics naturally led to attempts to describe gravity using path integrals. Gravity is rather different from the other physical forces, whose classical description involves fields (e.g. electric or magnetic fields) propagating in spacetime. The classical description of gravity is given by general relativity, which says that the gravitational force is related to the curvature of spacetime itself i.e. to its geometry. Unlike for non-gravitational physics, spacetime is not just the arena in which physical processes take place but it is a dynamical field. Therefore a sum over histories of the gravitational field in quantum gravity is really a sum over possible geometries for spacetime.

The gravitational field at a fixed time can be described by the geometry of the three spatial dimensions at that time. The history of the gravitational field is described by the four dimensional spacetime that these three spatial dimensions sweep out in time. Therefore the path integral is a sum over all four dimensional spacetime geometries that interpolate between the initial and final three dimensional geometries. In other words it is a sum over all four dimensional spacetimes with two three dimensional boundaries which match the initial and final conditions. Once again, mathematical subtleties require that the path integral be formulated in four spatial dimensions rather than three spatial dimensions and one time dimension.

The path integral formulation of quantum gravity has many mathematical problems. It is also not clear how it relates to more modern attempts at constructing a theory of quantum gravity such as string/M-theory. However it can be used to correctly calculate quantities that can be calculated independently in other ways e.g. black hole temperatures and entropies.

We can now return to cosmology. At any moment, the universe is described by the geometry of the three spatial dimensions as well as by any matter fields that may be present. Given this data one can, in principle, use the path integral to calculate the probability of evolving to any other prescribed state at a later time. However this still requires a knowledge of the initial state, it does not explain it.

Quantum cosmology is a possible solution to this problem. In 1983, Stephen Hawking and James Hartle developed a theory of quantum cosmology which has become known as the `No Boundary Proposal'. Recall that the path integral involves a sum over four dimensional geometries that have boundaries matching onto the initial and final three geometries. The Hartle-Hawking proposal is to simply do away with the initial three geometry i.e. to only include four dimensional geometries that match onto the final three geometry. The path integral is interpreted as giving the probability of a universe with certain properties (i.e. those of the boundary three geometry) being created from nothing.

In practice, calculating probabilities in quantum cosmology using the full path integral is formidably difficult and an approximation has to be used. This is known as the semiclassical approximation because its validity lies somewhere between that of classical and quantum physics. In the semiclassical approximation one argues that most of the four dimensional geometries occurring in the path integral will give very small contributions to the path integral and hence these can be neglected. The path integral can be calculated by just considering a few geometries that give a particularly large contribution. These are known as instantons. Instantons don't exist for all choices of boundary three geometry; however, those three geometries that do admit the existence of instantons are more probable than those that don't. Therefore attention is usually restricted to three geometries close to these.
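In this semiclassical approximation the path integral is dominated by these saddle points, each weighted roughly by its exponentiated Euclidean action (a standard schematic form, added here rather than taken from the excerpt):

\[
\Psi[h] \;\approx\; \sum_{\text{instantons}} N\, e^{-S_{E}[g_{\mathrm{inst}}]/\hbar},
\]

where \(h\) denotes the boundary three geometry, \(S_{E}\) is the Euclidean action of an instanton matching that boundary, and \(N\) is a slowly varying prefactor. Boundary geometries whose instantons have lower action are therefore exponentially more probable.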

Remember that the path integral is a sum over geometries with four spatial dimensions. Therefore an instanton has four spatial dimensions and a boundary that matches the three geometry whose probability we wish to compute. Typical instantons resemble (four dimensional) surfaces of spheres with the three geometry slicing the sphere in half. They can be used to calculate the quantum process of universe creation, which cannot be described using classical general relativity. They only usually exist for small three geometries, corresponding to the creation of a small universe. Note that the concept of time does not arise in this process. Universe creation is not something that takes place inside some bigger spacetime arena - the instanton describes the spontaneous appearance of a universe from literally nothing. Once the universe exists, quantum cosmology can be approximated by general relativity so time appears.

People have found different types of instantons that can provide the initial conditions for realistic universes. The first attempt to find an instanton that describes the creation of a universe within the context of the `no boundary' proposal was made by Stephen Hawking and Ian Moss. The Hawking-Moss instanton describes the creation of an eternally inflating universe with `closed' spatial three-geometries.

It is presently an unsolved question whether our universe contains closed, flat or open spatial three-geometries. In a flat universe, the large-scale spatial geometry looks like the ordinary three-dimensional space we experience around us. In contrast to this, the spatial sections of a realistic closed universe would look like three-dimensional (surfaces of) spheres with a very large but finite radius. An open geometry would look like an infinite hyperboloid. Only a closed universe would therefore be finite. There is, however, nowadays strong evidence from cosmological observations in favour of an infinite open universe. It is therefore an important question whether there exist instantons that describe the creation of open universes.

The idea behind the Coleman-De Luccia instanton, discovered in 1987, is that the matter in the early universe is initially in a state known as a false vacuum. A false vacuum is a classically stable excited state which is quantum mechanically unstable. In the quantum theory, matter which is in a false vacuum may `tunnel' to its true vacuum state. The quantum tunnelling of the matter in the early universe was described by Coleman and De Luccia. They showed that false vacuum decay proceeds via the nucleation of bubbles in the false vacuum. Inside each bubble the matter has tunnelled. Surprisingly, the interior of such a bubble is an infinite open universe in which inflation may occur. The cosmological instanton describing the creation of an open universe via this bubble nucleation is known as a Coleman-De Luccia instanton.

The Coleman-De Luccia Instanton

Remember that this scenario requires the existence of a false vacuum for the matter in the early universe. Moreover, the condition for inflation to occur once the universe has been created strongly constrains the way the matter decays to its true vacuum. Therefore the creation of open inflating universes appears to be rather contrived in the absence of any explanation of these specific pre-inflationary initial conditions.

Recently, Stephen Hawking and Neil Turok have proposed a bold solution to this problem. They constructed a class of instantons that give rise to open universes in a similar way to the instantons of Coleman and De Luccia. However, they did not require the existence of a false vacuum or other very specific properties of the excited matter state. The price they pay for this is that their instantons have singularities: places where the curvature becomes infinite. Since singularities are usually regarded as places where the theory breaks down and must be replaced by a more fundamental theory, this is a quite controversial feature of their work.

The Hawking-Turok Instanton

The question of course arises as to which of these instantons correctly describes the creation of our own universe. The way one might hope to distinguish between different theories of quantum cosmology is by considering quantum fluctuations about these instantons. The Heisenberg uncertainty principle in quantum mechanics implies that vacuum fluctuations are present in every quantum theory. In the full quantum picture, therefore, an instanton provides us just with a background geometry in the path integral with respect to which quantum fluctuations need to be considered.

During inflation, these quantum mechanical vacuum fluctuations are amplified and due to the accelerating expansion of the universe they are stretched to macroscopic length scales. Later on, when the universe has cooled, they seed the growth of large scale structures (e.g. galaxies) like those we see today. One sees the imprint of these primordial fluctuations as small temperature perturbations in the cosmic microwave background radiation.

Since different types of instantons predict slightly different fluctuation spectra, the temperature perturbations in the cosmic microwave background radiation will depend on the instanton from which the universe was created. In the next decade the satellites MAP and PLANCK will be launched to measure the temperature of the microwave background radiation in different directions on the sky to a very high accuracy. The observations will not only provide us with a very important test of inflation itself but may also be the first possibility to observationally distinguish between different theories for quantum cosmology.

excerpt taken from http://www.damtp.cam.ac.uk/user/gr/public/qg_qc.html

Thursday, July 2, 2009

Why is the sky blue?

It is easy to see that the sky is blue. Have you ever wondered why? A lot of other smart people have, too. And it took a long time to figure it out!

The light from the Sun looks white. But it is really made up of all the colors of the rainbow.


A prism is a specially shaped crystal. When white light shines through a prism, the light is separated into all its colors.

If you visited The Land of the Magic Windows, you learned that the light you see is just one tiny bit of all the kinds of light energy beaming around the Universe--and around you!

Like energy passing through the ocean, light energy travels in waves, too. Some light travels in short, "choppy" waves. Other light travels in long, lazy waves. Blue light waves are shorter than red light waves.

Different colors of light have different wavelengths.

All light travels in a straight line unless something gets in the way to--

* reflect it (like a mirror)

* bend it (like a prism)

* or scatter it (like molecules of the gases in the atmosphere)

Sunlight reaches Earth's atmosphere and is scattered in all directions by all the gases and particles in the air. Blue light is scattered in all directions by the tiny molecules of air in Earth's atmosphere. Blue is scattered more than other colors because it travels as shorter, smaller waves. This is why we see a blue sky most of the time.
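The claim that shorter waves are scattered more strongly can be made quantitative with Rayleigh's scattering law (an illustration added here, not part of the original page):

\[
I_{\mathrm{scattered}} \;\propto\; \frac{1}{\lambda^{4}},
\qquad
\frac{I_{\mathrm{blue}}}{I_{\mathrm{red}}} \;\approx\; \left(\frac{700\ \mathrm{nm}}{450\ \mathrm{nm}}\right)^{4} \;\approx\; 6,
\]

so blue light, with a wavelength near 450 nm, is scattered several times more strongly by air molecules than red light near 700 nm.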


Closer to the horizon, the sky fades to a lighter blue or white. The sunlight reaching us from low in the sky has passed through even more air than the sunlight reaching us from overhead. As the sunlight has passed through all this air, the air molecules have scattered and rescattered the blue light many times in many directions. Also, the surface of Earth has reflected and scattered the light. All this scattering mixes the colors together again so we see more white and less blue.

What Makes a Red Sunset?

As the Sun gets lower in the sky, its light is passing through more of the atmosphere to reach you. Even more of the blue light is scattered, allowing the reds and yellows to pass straight through to your eyes.


Sometimes the whole western sky seems to glow. The sky appears red because larger particles of dust, pollution, and water vapor in the atmosphere reflect and scatter more of the reds and yellows.

Why Does Scattering Matter?

How much of the Sun's light gets bounced around in Earth's atmosphere and how much gets reflected back into space? How much light gets soaked up by land and water, asphalt freeways and sunburned surfers? How much light do water and clouds reflect back into space? And why do we care?

Sunlight carries the energy that heats Earth and powers all life on Earth. Our climate is affected by how sunlight is scattered by forests, deserts, snow- and ice-covered surfaces, different types of clouds, smoke from forest fires, and other pollutants in the air.

excerpt taken from http://spaceplace.nasa.gov/en/kids/misrsky/misr_sky.shtml

Friday, June 19, 2009

Nature vs Values

by: Bertrand Russell

The philosophy of nature is one thing, the philosophy of value is quite another. Nothing but harm can come of confusing them. What we think good, what we should like, has no bearing whatever upon what is, which is the question for the philosophy of nature. On the other hand, we cannot be forbidden to value this or that on the ground that the nonhuman world does not value it, nor can we be compelled to admire anything because it is a "law of nature." Undoubtedly we are part of nature, which has produced our desires, our hopes and fears, in accordance with laws which the physicist is beginning to discover. In this sense we are part of nature; in the philosophy of nature we are subordinated to nature, the outcome of natural laws, and their victims in the long run.
The philosophy of nature must not be unduly terrestrial; for it, the earth is merely one of the smaller planets of one of the smaller stars of the Milky Way. It would be ridiculous to warp the philosophy of nature in order to bring out results that are pleasing to the tiny parasites of this insignificant planet. Vitalism as a philosophy, and evolutionism, show in this respect a lack of sense of proportion and logical relevance. They regard the facts of life, which are personally interesting to us, as having a cosmic significance, not a significance confined to the earth's surface. Optimism and pessimism, as cosmic philosophies, show the same naive humanism; the great world, so far as we know it from the philosophy of nature, is neither good nor bad, and is not concerned to make us happy or unhappy. All such philosophies spring from self-importance and are best corrected by a little astronomy.
But in the philosophy of value the situation is reversed. Nature is only a part of what we can imagine; everything, real or imagined, can be appraised by us, and there is no outside standard to show that our valuation is wrong. We are ourselves the ultimate and irrefutable arbiters of value, and in the world of value nature is only a part. Thus in this world we are greater than nature. In the world of values, nature in itself is neutral, neither good nor bad, deserving of neither admiration nor censure. It is we who create value and our desires which confer value. In this realm we are kings, and we debase our kingship if we bow down to nature. It is for us to determine the good life, not for nature.

Wednesday, June 10, 2009

Edwin Hubble

was born in the small town of Marshfield, Missouri, USA, on November 29th, 1889. In 1898, his family moved to Chicago, where he attended high school. Young Edwin Hubble had been fascinated by science and mysterious new worlds from an early age, having spent his childhood reading the works of Jules Verne (20,000 Leagues Under the Sea, From the Earth to the Moon) and Henry Rider Haggard (King Solomon's Mines). He was a fine student and an even better athlete, having broken the Illinois State high jump record. When he attended university, Hubble continued to excel in sports such as basketball and boxing, but he also found time to study and earn an undergraduate degree in mathematics and astronomy.

Edwin Hubble went to Oxford University on a Rhodes scholarship, where he did not continue his studies in astronomy, but instead studied law. At this point in his life, he had not yet made up his mind about pursuing a scientific career.

In 1913, Hubble returned from England and was admitted to the bar, setting up a small practice in Louisville, Kentucky. It didn't take long for Hubble to realize he wasn't happy as a lawyer and that his real passion was astronomy, so he studied at the Yerkes Observatory and, in 1917, received a doctorate in astronomy from the University of Chicago.

Following a tour of duty in the First World War, Hubble took a job at the Mount Wilson Observatory in California, where he took many photographs of Cepheid variables through the 100-inch Hooker reflecting telescope, proving that they lay outside our galaxy and establishing the existence of galaxies beyond our own Milky Way, which until then had been believed to be the entire universe.

Hubble also devised a classification system for the various galaxies he observed, sorting them by content, distance, shape, and brightness. It was then that he noticed redshifts in the light emitted by the galaxies and saw that they were moving away from each other at a rate proportional to the distance between them. From these observations, he was able to formulate Hubble's Law in 1929, helping astronomers determine the age of the universe and proving that the universe was expanding.
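Hubble's Law itself is a one-line relation (added here for reference):

\[
v \;=\; H_{0}\, d,
\]

where \(v\) is a galaxy's recession velocity, \(d\) is its distance, and \(H_{0}\) is the Hubble constant. The inverse of \(H_{0}\) gives a rough estimate of the time since the expansion began, which is why the law helps astronomers estimate the age of the universe.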

It is interesting to note that in 1917, Albert Einstein had already introduced his general theory of relativity and produced a model of space based on that theory, holding that space was curved by gravity and therefore must be able to expand or contract; but he found this implication so far-fetched that he revised his theory, stating that the universe was static and immobile. Following Hubble's discoveries, Einstein is quoted as having said that second-guessing his original findings was the biggest blunder of his life, and he even visited Hubble to thank him in 1931.

excerpt taken from http://www.edwinhubble.com/hubble_bio_001.htm

Monday, June 8, 2009

The Double Helix

The Discovery of the Double Helix, 1951-1953

The discovery in 1953 of the double helix, the twisted-ladder structure of deoxyribonucleic acid (DNA), by James Watson and Francis Crick marked a milestone in the history of science and gave rise to modern molecular biology, which is largely concerned with understanding how genes control the chemical processes within cells. In short order, their discovery yielded ground-breaking insights into the genetic code and protein synthesis. During the 1970s and 1980s, it helped to produce new and powerful scientific techniques, specifically recombinant DNA research, genetic engineering, rapid gene sequencing, and monoclonal antibodies, techniques on which today's multi-billion dollar biotechnology industry is founded. Major current advances in science, namely genetic fingerprinting and modern forensics, the mapping of the human genome, and the promise, yet unfulfilled, of gene therapy, all have their origins in Watson and Crick's inspired work. The double helix has not only reshaped biology, it has become a cultural icon, represented in sculpture, visual art, jewelry, and toys.

Researchers working on DNA in the early 1950s used the term "gene" to mean the smallest unit of genetic information, but they did not know what a gene actually looked like structurally and chemically, or how it was copied, with very few errors, generation after generation. In 1944, Oswald Avery had shown that DNA was the "transforming principle," the carrier of hereditary information, in pneumococcal bacteria. Nevertheless, many scientists continued to believe that DNA had a structure too uniform and simple to store genetic information for making complex living organisms. The genetic material, they reasoned, must consist of proteins, much more diverse and intricate molecules known to perform a multitude of biological functions in the cell.

Crick and Watson recognized, at an early stage in their careers, that gaining a detailed knowledge of the three-dimensional configuration of the gene was the central problem in molecular biology. Without such knowledge, heredity and reproduction could not be understood. They seized on this problem during their very first encounter, in the summer of 1951, and pursued it with single-minded focus over the course of the next eighteen months. This meant taking on the arduous intellectual task of immersing themselves in all the fields of science involved: genetics, biochemistry, chemistry, physical chemistry, and X-ray crystallography. Drawing on the experimental results of others (they conducted no DNA experiments of their own), taking advantage of their complementary scientific backgrounds in physics and X-ray crystallography (Crick) and viral and bacterial genetics (Watson), and relying on their brilliant intuition, persistence, and luck, the two showed that DNA had a structure sufficiently complex and yet elegantly simple enough to be the master molecule of life.

Other researchers had made important but seemingly unconnected findings about the composition of DNA; it fell to Watson and Crick to unify these disparate findings into a coherent theory of genetic transfer. The organic chemist Alexander Todd had determined that the backbone of the DNA molecule contained repeating phosphate and deoxyribose sugar groups. The biochemist Erwin Chargaff had found that while the amount of DNA and of its four types of bases--the purine bases adenine (A) and guanine (G), and the pyrimidine bases cytosine (C) and thymine (T)--varied widely from species to species, A and T always appeared in ratios of one-to-one, as did G and C. Maurice Wilkins and Rosalind Franklin had obtained high-resolution X-ray images of DNA fibers that suggested a helical, corkscrew-like shape. Linus Pauling, then the world's leading physical chemist, had recently discovered the single-stranded alpha helix, the structure found in many proteins, prompting biologists to think of helical forms. Moreover, he had pioneered the method of model building in chemistry by which Watson and Crick were to uncover the structure of DNA. Indeed, Crick and Watson feared that they would be upstaged by Pauling, who proposed his own model of DNA in February 1953, although his three-stranded helical structure quickly proved erroneous.

The time, then, was ripe for their discovery. After several failed attempts at model building, including their own ill-fated three-stranded version and one in which the bases were paired like with like (adenine with adenine, etc.), they achieved their breakthrough. Jerry Donohue, a visiting physical chemist from the United States who shared Watson and Crick's office for the year, pointed out that the configuration for the rings of carbon, nitrogen, hydrogen, and oxygen (the elements of all four bases) in thymine and guanine given in most textbooks of chemistry was incorrect. On February 28, 1953, Watson, acting on Donohue's advice, put the two bases into their correct form in cardboard models by moving a hydrogen atom from a position where it bonded with oxygen to a neighboring position where it bonded with nitrogen. While shifting around the cardboard cut-outs of the accurate molecules on his office table, Watson realized in a stroke of inspiration that A, when joined with T, very nearly resembled a combination of C and G, and that each pair could hold together by forming hydrogen bonds. If A always paired with T, and likewise C with G, then not only were Chargaff's rules (that in DNA, the amount of A equals that of T, and C that of G) accounted for, but the pairs could be neatly fitted between the two helical sugar-phosphate backbones of DNA, the outside rails of the ladder. The bases connected to the two backbones at right angles while the backbones retained their regular shape as they wound around a common axis, all of which were structural features demanded by the X-ray evidence. Similarly, the complementary pairing of the bases was compatible with the fact, also established by the X-ray diffraction pattern, that the backbones ran in opposite directions to each other, one up, the other down.

Watson and Crick published their findings in a one-page paper, with the understated title "A Structure for Deoxyribose Nucleic Acid," in the British scientific weekly Nature on April 25, 1953, illustrated with a schematic drawing of the double helix by Crick's wife, Odile. A coin toss decided the order in which they were named as authors. Foremost among the "novel features" of "considerable biological interest" they described was the pairing of the bases on the inside of the two DNA backbones: A=T and C=G. The pairing rule immediately suggested a copying mechanism for DNA: given the sequence of the bases in one strand, that of the other was automatically determined, which meant that when the two chains separated, each served as a template for a complementary new chain. Watson and Crick developed their ideas about genetic replication in a second article in Nature, published on May 30, 1953.
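The copying rule described in the paper (each strand serving as a template for its complement) is simple enough to sketch in a few lines of code. The Python snippet below is an illustration added to this post, not anything from the excerpt, and it ignores real biochemistry (enzymes, strand orientation, proofreading) entirely.

# Minimal sketch of complementary base pairing (A with T, C with G),
# the rule Watson and Crick described. Illustration only.

PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand: str) -> str:
    """Return the base-paired complement of a DNA sequence."""
    return "".join(PAIRING[base] for base in strand.upper())

template = "ATGCCGTA"               # hypothetical example sequence
print(complement_strand(template))  # -> TACGGCAT

Given one strand, the pairing dictionary fixes the other completely, which is exactly why the structure "immediately suggested a copying mechanism for DNA."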

The two had shown that in DNA, form is function: the double-stranded molecule could both produce exact copies of itself and carry genetic instructions. During the following years, Crick elaborated on the implications of the double-helical model, advancing the hypothesis, revolutionary then but widely-accepted since, that the sequence of the bases in DNA forms a code by which genetic information can be stored and transmitted.

Although recognized today as one of the seminal scientific papers of the twentieth century, Watson and Crick's original article in Nature was not frequently cited at first. Its true significance became apparent, and its circulation widened, only towards the end of the 1950s, when the structure of DNA they had proposed was shown to provide a mechanism for controlling protein synthesis, and when their conclusions were confirmed in the laboratory by Matthew Meselson, Arthur Kornberg, and others.

Crick himself immediately understood the significance of his and Watson's discovery. As Watson recalled, after their conceptual breakthrough on February 28, 1953, Crick declared to the assembled lunch patrons at The Eagle that they had "found the secret of life." Crick himself had no memory of such an announcement, but did recall telling his wife that evening "that we seemed to have made a big discovery." He revealed that "years later she told me that she hadn't believed a word of it." As he recounted her words, "You were always coming home and saying things like that, so naturally I thought nothing of it."

Retrospective accounts of the discovery of the structure of DNA have continued to elicit a measure of controversy. Crick was incensed at Watson's depiction of their collaboration in The Double Helix (1968), castigating the book as a betrayal of their friendship, an intrusion into his privacy, and a distortion of his motives. He waged an unsuccessful campaign to prevent its publication. He eventually became reconciled to Watson's bestseller, concluding that if it presented an unfavorable portrait of a scientist, it was of Watson, not of himself.

A more enduring controversy has been generated by Watson and Crick's use of Rosalind Franklin's crystallographic evidence of the structure of DNA, which was shown to them, without her knowledge, by her estranged colleague, Maurice Wilkins, and by Max Perutz. Her evidence demonstrated that the two sugar-phosphate backbones lay on the outside of the molecule, confirmed Watson and Crick's conjecture that the backbones formed a double helix, and revealed to Crick that they were antiparallel. Franklin's superb experimental work thus proved crucial in Watson and Crick's discovery. Yet, they gave her scant acknowledgment. Even so, Franklin bore no resentment towards them. She had presented her findings at a public seminar to which she had invited the two. She soon left DNA research to study tobacco mosaic virus. She became friends with both Watson and Crick, and spent her last period of remission from ovarian cancer in Crick's house (Franklin died in 1958). Crick believed that he and Watson used her evidence appropriately, while admitting that their patronizing attitude towards her, so apparent in The Double Helix, reflected contemporary conventions of gender in science.

excerpt taken from http://profiles.nlm.nih.gov/SC/Views/Exhibit/narrative/doublehelix.html