Tuesday, April 28, 2009

The Cosmic Microwave Background Radiation (CMB)

by: Douglas Scott & Martin White


Cosmology is the study of the beginning and evolution of the universe.

The big bang

It is now generally agreed among astronomers and physicists alike that the Universe was created some 10 to 20 billion years ago in a leviathan explosion dubbed the "Big Bang". The exact nature of the initial event is still cause for much speculation, and it's fair to say that we know little if anything about the first instant of creation. Nevertheless, we do know that the Universe used to be far hotter and denser than it is today. The expansion and cooling that followed the cataclysm of the Big Bang produced all of the physical contents of the Universe which we see today: light in the form of "photons"; matter in the form of "leptons" (electrons, positrons, muons) and "baryons" (protons, antiprotons, neutrons, antineutrons); more esoteric particles like "neutrinos" and perhaps some exotic "dark matter" particles; and, subsequently, the formation of the Universe's first chemical elements.

The concept of the Big Bang was not immediately obvious to astrophysicists, but rather grew out of a steady accumulation of evidence gathered from both theoretical and observational research throughout the course of the 20th century. A wide range of theories attempting to explain the origin of the Universe were eventually discredited and superseded by the Big Bang hypothesis based upon the following critical considerations:

  • the current expansion, or Hubble flow, of the Universe.
  • the observed helium and deuterium abundances.
  • the cosmic background radiation.
  • the cosmological solutions of Einstein's equations.
  • agreement between various independent estimates of the age of the Universe.

The Cosmic Microwave Background Radiation

Perhaps the most conclusive (and certainly among the most carefully examined) piece of evidence for the Big Bang is the existence of an isotropic radiation bath that permeates the entire Universe, known as the "cosmic microwave background" (CMB). The word "isotropic" means the same in all directions; the degree of anisotropy of the CMB is about one part in a thousand. In 1965, two young radio astronomers, Arno Penzias and Robert Wilson, almost accidentally discovered the CMB using a small, well-calibrated horn antenna. It was soon determined that the radiation was diffuse, emanated uniformly from all directions in the sky, and had a temperature of approximately 2.7 Kelvin (i.e. 2.7 degrees above absolute zero). Initially, they could find no satisfactory explanation for their observations, and considered the possibility that their signal was due to some undetermined systematic noise. They even considered the possibility that it was due to "a white dielectric substance" (i.e. pigeon droppings) in their horn!

However, it soon came to their attention through Robert Dicke and Jim Peebles of Princeton that this background radiation had in fact been predicted years earlier by George Gamow as a relic of the evolution of the early Universe. This background of microwaves was in fact the cooled remnant of the primeval fireball - an echo of the Big Bang.

If the universe was once very hot and dense, the photons and baryons would have formed a plasma, i.e. a gas of ionized matter coupled to the radiation through the constant scattering of photons off ions and electrons. As the universe expanded and cooled there came a point when the radiation (photons) decoupled from the matter - this happened a few hundred thousand years after the Big Bang. That radiation has since cooled and is now at 2.7 Kelvin. The fact that the spectrum of the radiation is almost exactly that of a "black body" (a physicist's way of describing a perfect radiator) implies that it could not have had its origin through any prosaic means. This, for example, led to the death of the steady state theory. In fact the CMB spectrum is a black body to better than 1% accuracy over more than a factor of 1000 in wavelength. This is a much more accurate black body than any we can make in the laboratory!
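
A quick check, not part of the original text: Wien's displacement law, lambda_max = b/T, locates the peak of a black-body spectrum, and for a 2.7 K black body it lands squarely in the microwave band (hence the radiation's name). A minimal Python sketch, assuming only the standard value of Wien's constant:

# Wien's displacement law for the CMB (illustrative sketch).
b = 2.898e-3   # Wien's displacement constant, m*K
T = 2.725      # CMB temperature, K
lam_peak = b / T
print(f"Peak wavelength: {lam_peak * 1e3:.2f} mm")  # ~1.06 mm, i.e. microwaves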

By the early 1970s it became clear that the CMB sky is hotter in one direction and cooler in the opposite direction, with the temperature difference being a few mK (or about 0.1% of the overall temperature). The pattern of this temperature variation on the sky is known as a "dipole", and is exactly what is expected if we are moving through the background radiation at high speed in the direction of the hot part. The inference is that our entire local group of galaxies is moving in a particular direction at about 600 km/s. In the direction we are moving, the wavelengths of the radiation are squashed together (a blueshift), making the sky appear hotter there, while in the opposite direction the wavelengths are stretched out (a redshift), making the sky appear colder there. When this dipole pattern, due to our motion, is removed, the CMB sky appears incredibly isotropic. Further investigations, including more recent ones by the COBE satellite (e.g. Smoot et al.), confirmed the virtual isotropy of the CMB to better than one part in ten thousand.
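
The numbers in this paragraph can be tied together with the first-order Doppler formula, dT/T ~ v/c. A back-of-the-envelope Python sketch, using the ~600 km/s Local Group figure quoted above:

# Dipole anisotropy from motion through the CMB (illustrative sketch).
c = 2.998e5   # speed of light, km/s
v = 600.0     # Local Group speed from the text, km/s
T = 2.725     # mean CMB temperature, K
dT = (v / c) * T
print(f"Dipole amplitude: {dT * 1e3:.1f} mK")  # a few mK, as quoted above

With these inputs the fractional shift is 0.2%, the same order of magnitude as the quoted 0.1%; the exact figure depends on whose motion (the Sun's alone or the whole Local Group's) is being measured.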

Given this level of isotropy, together with the accurate black-body spectrum, any attempt to interpret the origin of the CMB as due to present astrophysical phenomena (i.e. stars, dust, radio galaxies, etc.) is no longer credible. Therefore, the only satisfactory explanation for the existence of the CMB lies in the physics of the early Universe.

The Cosmological Dark Ages

The age of the universe is around 10 to 20 billion years. The early Universe was so hot and dense that conditions were like those within a particle accelerator or nuclear reactor. As the Universe expanded it cooled, so that the average energy of its constituent particles decreased with time. All of the high energy particle and nuclear physics was over in the first three minutes (see Steven Weinberg's 1977 book The First Three Minutes). By that time all of the main constituents of the Universe had formed, including the light elements and the radiation.

It is generally believed that little of note happened for the next 300,000 years or so. This period is sometimes referred to as the "Dark Ages" of the Universe. One way to learn about physical processes which might have occurred at these times is to search for minor deviations from a black body in the spectrum of the CMB. An injection of energy, through, for example, the decay of an exotic particle, could distort the spectrum a little away from the characteristic black-body shape. So far no such distortions have been found, so we have no reason to believe that anything particularly exciting happened during this time.

The important thing which happened about 300,000 years after the Big Bang is that the Universe became cool enough for atoms to become neutral. Before that time all of the protons and electrons existed as free ions moving around in a plasma. Every time a proton snatched an electron, the pair would be zapped by a photon with high enough energy to rip them apart again. Only after a few hundred thousand years was the average temperature low enough that the protons could hold onto their electrons to form neutral hydrogen atoms. This period is referred to as the epoch of "recombination" (in general, when atoms become neutral after being ionized we talk of them recombining - here in fact the ions and electrons are combining for the first time, so it should perhaps be called "combination"!).

When the Universe was ionized, the matter was constantly interacting with the radiation, i.e. photons were continually being scattered by ions and electrons. Looking back at the CMB we see the surface of "last scattering", when the photons last significantly interacted with the matter. At earlier times the universe was opaque, and so we cannot see back further than the epoch of recombination. Between last scattering and today the universe is almost totally transparent. So when we look at the CMB we are seeing, in each direction, out to the time when the radiation last scattered. This means we are effectively seeing back in time to a few hundred thousand years after the Big Bang.

After the Universe recombined, the stars, galaxies and clusters of galaxies started to form. We know little in detail about this process, largely because it is a very complex physical process. One of the biggest uncertainties is understanding the "seeds" from which the galaxies and other structures grew. Everything that we see with optical telescopes (or telescopes in any other wavelength range) tells us about objects which have existed in the last 10 billion years or so. It becomes more and more difficult to probe conditions in the Universe at earlier times.

Detailed observations of the CMB provide exactly the sort of information required to attack most of the major cosmological puzzles of our day. By looking for small ripples in the temperature of the microwave sky we can learn about the seed fluctuations as they existed 300,000 years after the Big Bang, and well before galaxies had started to form. We can also learn what the Universe as a whole was like back then: whether it was open or closed; what the dominant form of dark matter is; and how the Universe has been expanding since that time. Through careful examination of the Cosmic Microwave Background we can probe the cosmological Dark Ages.

Temperature Fluctuations

While the CMB is predicted to be very smooth, the smoothness cannot be perfect. At some level one expects to see irregularities, or anisotropies, in the temperature of the radiation.

These temperature fluctuations are the imprints of very small irregularities which through time have grown to become the galaxies and clusters of galaxies which we see today.


excerpt taken from http://www.astro.ubc.ca/people/scott/cmb_intro.html

Sunday, April 26, 2009

Jung's Theory of Dreams

by: Mark L. Dotson

Why do we have dreams? Where do they originate? Do they have meaning? Are dreams of any value to us, or are they just so much nonsense? These questions have puzzled thinkers since the dawn of humanity. Every culture in the world has offered explanations. For instance, the Australian Aborigines believe that what we consider the realm of dreams is the real world (the Dreamtime), and the world we experience with our senses is a dream.

C.G. Jung put forth a theory of dreams which is quite popular today. Following in the footsteps of Sigmund Freud, Jung claimed that dream analysis is the primary way to gain knowledge of the unconscious mind. He says that the dream is a natural phenomenon which we can study, thereby gaining knowledge of the hidden part of our mind. The images are symbolic of conscious and unconscious mental processes.

There is a significant difference between a symbol and a sign in Jung's view. A sign merely points to something. For instance, a red stop-light points to the idea that we should stop our car; the green light points to the idea that we should go. These lights are not symbols, because a symbol, according to Jung,

is a term, a name, or even a picture that may be familiar in daily life, yet that possesses specific connotations in addition to its conventional and obvious meaning. It implies something vague, unknown, or hidden from us (Jung 3).

A good example of a symbol is the American flag. If one who did not know what the flag symbolized saw it for the first time, he or she would not be able to relate the connotations attached to it that we, as American citizens, are familiar with. It is not obvious to a foreigner what deeper meaning the flag holds for us. Another good example of a symbol which holds deep meaning is the swastika.

For Jung, dreams originate in the unconscious. They are naturally occurring phenomena, arising spontaneously and autonomously into the conscious mind. Generally, we cannot decide beforehand which dreams we will have each night. It would be interesting to know what Jung would think of present-day research into "lucid dreaming," where one is said to be aware, while in the dream state, that one is in a dream, thus allowing one to guide the outcome. In this type of dream, the spontaneity and autonomy of the dream seem less evident; the dreamer seems to have more control.

Jung explains the phenomenon of dreaming by saying that the psyche regulates itself by a process of compensation. He was influenced here by the psychologist Alfred Adler, who introduced the notion of compensation into psychology. Jung was also inspired by the Greek philosophers Heraclitus and Anaximander. Heraclitus taught that "when a one-sided attitude persists, . . . the opposite attitude comes to the fore in an automatic attempt to restore a balanced attitude" (Bennet 92). Anaximander talked about a continual, cyclical process by which opposing forces do battle. Taking these views into consideration, Jung developed a theory which claimed that, when there is an imbalance between the conscious and unconscious minds, a neurosis or psychosis occurs. This is a fragmentation of the personality, in the sense that the psyche is split into two opposing energies which refuse to be reconciled. Schizophrenia is a good example of such a conflict. In schizophrenia, the intellectual faculties and the affective elements of the personality become dissociated, i.e., there is a split between the rational elements and the emotional elements. As compensation for the imbalance, the psyche will attempt to right itself by providing clues, or possible solutions to the problem, through dreams, according to Jung. He claims that if the dreamer can understand and apply what the dream is saying, the imbalance will be corrected. As evidence for this, he offered many case studies where dreams would give him an idea of the problem confronting a particular individual, and how to proceed with treatment. He claimed to help many of his patients in this manner.

"Imbalance" sounds very negative and pathological to our ears. I do not think Jung meant for the idea to be taken that way. For the psyche to be perfectly balanced at some point in one's life, one would have to be in a state of perfection, or so it seems to me. Very small fluctuations between the energies would constantly be correcting themselves via dreams (if Jung is correct, that is). These small fluctuations would not result in full-blown mental pathology, but rather in something like very mild mood-swings. It is obvious that we all experience these.

Jung believes that the unconscious communicates with the conscious mind through dream imagery. When the dream is considered, one finds in one's consciousness certain associations which are connected to the images. Associations, in this context, are ideas or feelings which arise in the mind of the dreamer when contemplating the dream. Jung contends that only through these associations can the true meanings be discovered. He referred to this as amplification.

Unlike Freud, Jung did not believe the dream should be interpreted using "free association." Rather, he claimed that one could come closer to the meaning by focusing on the specific images that the dream provides. For instance, one person might dream of an obelisk, and another of a Saturn rocket. Freud might claim that both are, in general, phallic symbols, and may allude to some sexual dysfunction, depending on the context of the dream imagery. On the other hand, Jung would want to know why one dream contained an obelisk and the other a rocket. This difference could affect the entire interpretation. In Jung's words, "I concluded that only the material that is clearly and visibly part of the dream should be used in interpreting it" (Jung 14). A dream image, he says, can have many different meanings according to the dreamer's associations. Because of this, Jung was vehemently opposed to any kind of "dream dictionary," where the images are given fixed meanings.

Jung believes that creative ideas can come to us through dreams. He points to the nineteenth-century German chemist Kekulé and his discovery of the molecular structure of benzene. It seems that Kekulé dreamed one night of a snake swallowing its own tail. He took this to mean that the structure was a closed carbon ring. Jung also refers to the author of Dr. Jekyll and Mr. Hyde, Robert Louis Stevenson. The plot for the book came to Stevenson one night in a dream. For Jung, the unconscious is a "rich vein" of creativity and the source of all genius (Jung 25-26).

Up until now, the discussion has focused on dreams which are of a personal nature. Sometimes, however, a "collective dream" may appear, which contains symbolism pertaining to an entire culture or race, or perhaps even the entire human population. Jung once visited a primitive tribe called the Elgoni in East Africa. They told him they distinguished between "big dreams" and "little dreams." According to Jung, the former refer to collective dreams, which arise from the collective unconscious; the latter to personal dreams, emanating from the personal unconscious. Collective dreams contain symbols which are common to all human beings. For example, in most religious mythologies, there are stories of a destruction of the world by the supreme deity. In the Bible, we read of Noah and the great flood, and of the battle of Armageddon. In Germanic mythology, there is the tale of Ragnarok, the Norse myth of the final battle of gods and warriors. The Cherokee believe that someday the earth will sink into the ocean (Eliade 59). Collective dreams are not easily understood by the dreamer because they are of an impersonal nature. Usually, with these kinds of dreams there will be few, if any, associations. Why these images exist in the human psyche remains a mystery. Jung says "their origin is so far buried in the mystery of the past that they seem to have no human source" (Jung 42). Thinking along Jungian lines, perhaps there is a need, at times, for a balancing of the collective psyche of humanity, just as the opposing energies of the individual personality are stabilized by dreams. The apocalyptic myths may be adjustments to the attitude which assumes that the world is permanent and indestructible. Surely all the movies and books in the last fifty years about nuclear holocaust have helped adjust our thinking about the permanency of the human race and this planet.

Jung's theory is quite popular in our modern culture, even though there are several things which must be closely pondered. First of all, the fact that the dream is a subjective phenomenon makes an objective study nearly impossible. The only dream images we can examine are our own. We have no assurance that others will relate their dreams accurately and truthfully. And even if they do, how do we ascertain their relevance? On the other hand, there is at least one subjective phenomenon which science gives credence to, namely, pain. We all experience pain just as we experience dreams. We must relate our pain to our doctor so that he or she can make a diagnosis of our condition. The difference, however, is that modern medicine can find empirical evidence that pain exists by finding the affected physical component, whereas no physical component can be found which corresponds to a certain dream image.

Also, there is the problem of dream interpretation. How can we test an interpretation for accuracy? How can we be certain that the associations which arise in our minds are really connected with the dream? They may simply be our overactive imaginations. Moreover, how do we know that the interpretation we decide upon (if we ever do) is correct? The answer is, we have no certainty in these areas. Jung seems to believe that whatever interpretation one comes up with is the correct one for that person. The result is that objectivity is impossible in these interpretations.

The notion of compensation is intriguing, but is there a way to test this hypothesis? What if we conducted an experiment where we allowed one person to sleep and dream normally, and another we deprived of sleep, and hence of dreaming as well? Does the person who is deprived of sleep act in an abnormal manner? Does he seem to be out of balance in some way? Does he exhibit any symptoms of neurosis or psychosis? I have read that persons who are deprived of sleep for a few days sometimes suffer from hallucinations. Could this be the psyche attempting to right itself, as in Jungian dream theory?

And what of Jung's idea of the collective dream? We can plainly see there are striking similarities between the religious mythologies of different races and cultures, even between those which are separated by thousands of miles of ocean. The end-of-the-world motif mentioned earlier is one example. Another would be the idea that all cultures seem to have heroes who deliver the people from evil. Christ is an obvious example. Others which come to mind are the stories of Krishna in Hinduism, and of Gautama in Buddhism. There seems to be evidence for a comparative mythology, but is its source a collective unconscious?

Jung's ideas concerning dreams are a fascinating topic for casual conversation and speculation, but they are by no means on a solid scientific foundation, at least not yet. Perhaps future discoveries in dream psychology will give us more to work with.

His thoughts on comparative mythology and collective dreams have some objective support, but not in the sense of empirical, objective scientific investigation. Rather, it is akin to the manner in which Immanuel Kant spoke of the a priori as an underlying reality that is prior to experience, and hence makes experience possible. Similarly, it seems that Jung's theory of the collective unconscious (and collective dreams) rests on the assumption of a transcendental (in the Kantian sense) objectivity. Just as Kant posited a priori structures of the mind which make human experience possible, Jung posits a certain structure, the collective unconscious, which is the source of all mythology (and possibly all experience as well).

Jungian dream theory is open to much scrutiny at this point in the history of science and philosophy. It is impossible to prove beyond a shadow of a doubt that he was correct because his theories are akin to literary interpretations. He gathers various dreams from his patients and then tries to interpret them into a meaningful framework to support his theories. Pending new discoveries in dream research, one should remain quite skeptical.


excerpt taken from http://members.core.com/~ascensus/docs/jung1.html

Friday, April 24, 2009

The Drake Equation

Is there a way to estimate the number of technologically advanced civilizations that might exist in our Galaxy? While working at the National Radio Astronomy Observatory in Green Bank, West Virginia, Dr. Frank Drake conceived a means to mathematically estimate the number of worlds that might harbor beings with technology sufficient to communicate across the vast gulfs of interstellar space. The Drake Equation, as it came to be known, was formulated in 1961 and is generally accepted by the scientific community.

N = R* × fp × ne × fl × fi × fc × L

where,
  • N = The number of communicative civilizations
  • R* = The rate of formation of suitable stars (stars such as our Sun)
  • fp = The fraction of those stars with planets. (Current evidence indicates that planetary systems may be common for stars like the Sun.)
  • ne = The number of Earth-like worlds per planetary system
  • fl = The fraction of those Earth-like planets where life actually develops
  • fi = The fraction of life sites where intelligence develops
  • fc = The fraction of communicative planets (those on which electromagnetic communications technology develops)
  • L = The "lifetime" of communicating civilizations
Frank Drake's own current solution to the Drake Equation estimates 10,000 communicative civilizations in the Milky Way. Dr. Drake, who serves on the SETI League's advisory board, has personally endorsed SETI's planned all-sky survey.
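
Since the equation is a straight product of seven factors, evaluating it is trivial once values are chosen. Below is a minimal Python sketch; every parameter value is an assumption picked for illustration (published estimates vary by orders of magnitude), chosen here so the product lands on Drake's quoted figure of 10,000:

# Drake Equation: N = R* * fp * ne * fl * fi * fc * L (illustrative sketch).
def drake(R_star, fp, ne, fl, fi, fc, L):
    """Estimate the number of communicative civilizations in the Galaxy."""
    return R_star * fp * ne * fl * fi * fc * L

N = drake(
    R_star=10,    # suitable stars formed per year (assumed)
    fp=0.5,       # fraction of stars with planets (assumed)
    ne=2,         # Earth-like worlds per planetary system (assumed)
    fl=1.0,       # fraction of those worlds where life develops (assumed)
    fi=0.1,       # fraction of life sites developing intelligence (assumed)
    fc=1.0,       # fraction that become communicative (assumed)
    L=10_000,     # lifetime of a communicating civilization, years (assumed)
)
print(f"N = {N:,.0f}")  # 10,000 with these inputs

Writing it out makes one property obvious: N is linear in every factor, so halving any single estimate halves N, which is why the enormous uncertainties in fl and fi dominate the answer.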

Tuesday, April 21, 2009

Utilitarianism

By John Stuart Mill

"Act as if to bring about the most amount of happiness to the most amount of people"

There are few circumstances among those which make up the present condition of human knowledge, more unlike what might have been expected, or more significant of the backward state in which speculation on the most important subjects still lingers, than the little progress which has been made in the decision of the controversy respecting the criterion of right and wrong. From the dawn of philosophy, the question concerning the summum bonum, or, what is the same thing, concerning the foundation of morality, has been accounted the main problem in speculative thought, has occupied the most gifted intellects, and divided them into sects and schools, carrying on a vigorous warfare against one another. And after more than two thousand years the same discussions continue, philosophers are still ranged under the same contending banners, and neither thinkers nor mankind at large seem nearer to being unanimous on the subject, than when the youth Socrates listened to the old Protagoras, and asserted (if Plato's dialogue be grounded on a real conversation) the theory of utilitarianism against the popular morality of the so-called sophist.

It is true that similar confusion and uncertainty, and in some cases similar discordance, exist respecting the first principles of all the sciences, not excepting that which is deemed the most certain of them, mathematics; without much impairing, generally indeed without impairing at all, the trustworthiness of the conclusions of those sciences. An apparent anomaly, the explanation of which is, that the detailed doctrines of a science are not usually deduced from, nor depend for their evidence upon, what are called its first principles. Were it not so, there would be no science more precarious, or whose conclusions were more insufficiently made out, than algebra; which derives none of its certainty from what are commonly taught to learners as its elements, since these, as laid down by some of its most eminent teachers, are as full of fictions as English law, and of mysteries as theology. The truths which are ultimately accepted as the first principles of a science, are really the last results of metaphysical analysis, practised on the elementary notions with which the science is conversant; and their relation to the science is not that of foundations to an edifice, but of roots to a tree, which may perform their office equally well though they be never dug down to and exposed to light. But though in science the particular truths precede the general theory, the contrary might be expected to be the case with a practical art, such as morals or legislation. All action is for the sake of some end, and rules of action, it seems natural to suppose, must take their whole character and colour from the end to which they are subservient. When we engage in a pursuit, a clear and precise conception of what we are pursuing would seem to be the first thing we need, instead of the last we are to look forward to. A test of right and wrong must be the means, one would think, of ascertaining what is right or wrong, and not a consequence of having already ascertained it.

The difficulty is not avoided by having recourse to the popular theory of a natural faculty, a sense or instinct, informing us of right and wrong. For, besides that the existence of such a moral instinct is itself one of the matters in dispute, those believers in it who have any pretensions to philosophy have been obliged to abandon the idea that it discerns what is right or wrong in the particular case in hand, as our other senses discern the sight or sound actually present. Our moral faculty, according to all those of its interpreters who are entitled to the name of thinkers, supplies us only with the general principles of moral judgments; it is a branch of our reason, not of our sensitive faculty; and must be looked to for the abstract doctrines of morality, not for perception of it in the concrete. The intuitive, no less than what may be termed the inductive, school of ethics, insists on the necessity of general laws. They both agree that the morality of an individual action is not a question of direct perception, but of the application of a law to an individual case. They recognise also, to a great extent, the same moral laws; but differ as to their evidence, and the source from which they derive their authority. According to the one opinion, the principles of morals are evident a priori, requiring nothing to command assent, except that the meaning of the terms be understood. According to the other doctrine, right and wrong, as well as truth and falsehood, are questions of observation and experience. But both hold equally that morality must be deduced from principles; and the intuitive school affirm as strongly as the inductive, that there is a science of morals. Yet they seldom attempt to make out a list of the a priori principles which are to serve as the premises of the science; still more rarely do they make any effort to reduce those various principles to one first principle, or common ground of obligation. They either assume the ordinary precepts of morals as of a priori authority, or they lay down as the common groundwork of those maxims, some generality much less obviously authoritative than the maxims themselves, and which has never succeeded in gaining popular acceptance. Yet to support their pretensions there ought either to be some one fundamental principle or law, at the root of all morality, or if there be several, there should be a determinate order of precedence among them; and the one principle, or the rule for deciding between the various principles when they conflict, ought to be self-evident.

To inquire how far the bad effects of this deficiency have been mitigated in practice, or to what extent the moral beliefs of mankind have been vitiated or made uncertain by the absence of any distinct recognition of an ultimate standard, would imply a complete survey and criticism of past and present ethical doctrine. It would, however, be easy to show that whatever steadiness or consistency these moral beliefs have attained, has been mainly due to the tacit influence of a standard not recognised. Although the non-existence of an acknowledged first principle has made ethics not so much a guide as a consecration of men's actual sentiments, still, as men's sentiments, both of favour and of aversion, are greatly influenced by what they suppose to be the effects of things upon their happiness, the principle of utility, or as Bentham latterly called it, the greatest happiness principle, has had a large share in forming the moral doctrines even of those who most scornfully reject its authority. Nor is there any school of thought which refuses to admit that the influence of actions on happiness is a most material and even predominant consideration in many of the details of morals, however unwilling to acknowledge it as the fundamental principle of morality, and the source of moral obligation. I might go much further, and say that to all those a priori moralists who deem it necessary to argue at all, utilitarian arguments are indispensable. It is not my present purpose to criticise these thinkers; but I cannot help referring, for illustration, to a systematic treatise by one of the most illustrious of them, the Metaphysics of Ethics, by Kant. This remarkable man, whose system of thought will long remain one of the landmarks in the history of philosophical speculation, does, in the treatise in question, lay down a universal first principle as the origin and ground of moral obligation; it is this: "So act, that the rule on which thou actest would admit of being adopted as a law by all rational beings." But when he begins to deduce from this precept any of the actual duties of morality, he fails, almost grotesquely, to show that there would be any contradiction, any logical (not to say physical) impossibility, in the adoption by all rational beings of the most outrageously immoral rules of conduct. All he shows is that the consequences of their universal adoption would be such as no one would choose to incur.

On the present occasion, I shall, without further discussion of the other theories, attempt to contribute something towards the understanding and appreciation of the Utilitarian or Happiness theory, and towards such proof as it is susceptible of. It is evident that this cannot be proof in the ordinary and popular meaning of the term. Questions of ultimate ends are not amenable to direct proof. Whatever can be proved to be good, must be so by being shown to be a means to something admitted to be good without proof. The medical art is proved to be good by its conducing to health; but how is it possible to prove that health is good? The art of music is good, for the reason, among others, that it produces pleasure; but what proof is it possible to give that pleasure is good? If, then, it is asserted that there is a comprehensive formula, including all things which are in themselves good, and that whatever else is good, is not so as an end, but as a mean, the formula may be accepted or rejected, but is not a subject of what is commonly understood by proof. We are not, however, to infer that its acceptance or rejection must depend on blind impulse, or arbitrary choice. There is a larger meaning of the word proof, in which this question is as amenable to it as any other of the disputed questions of philosophy. The subject is within the cognisance of the rational faculty; and neither does that faculty deal with it solely in the way of intuition. Considerations may be presented capable of determining the intellect either to give or withhold its assent to the doctrine; and this is equivalent to proof.

We shall examine presently of what nature are these considerations; in what manner they apply to the case, and what rational grounds, therefore, can be given for accepting or rejecting the utilitarian formula. But it is a preliminary condition of rational acceptance or rejection, that the formula should be correctly understood. I believe that the very imperfect notion ordinarily formed of its meaning, is the chief obstacle which impedes its reception; and that could it be cleared, even from only the grosser misconceptions, the question would be greatly simplified, and a large proportion of its difficulties removed. Before, therefore, I attempt to enter into the philosophical grounds which can be given for assenting to the utilitarian standard, I shall offer some illustrations of the doctrine itself; with the view of showing more clearly what it is, distinguishing it from what it is not, and disposing of such of the practical objections to it as either originate in, or are closely connected with, mistaken interpretations of its meaning. Having thus prepared the ground, I shall afterwards endeavour to throw such light as I can upon the question, considered as one of philosophical theory.


excerpt taken from http://www.utilitarianism.com/mill1.htm

Monday, April 20, 2009

The Hubble Space Telescope

The Hubble Space Telescope (HST) is a space telescope that was carried into orbit by the Space Shuttle Discovery in April 1990. It is named after the American astronomer Edwin Hubble. Although not the first space telescope, the Hubble is one of the largest and most versatile, and is well-known as both a vital research tool and a public relations boon for astronomy. The HST is a collaboration between NASA and the European Space Agency, and is one of NASA's Great Observatories, along with the Compton Gamma Ray Observatory, the Chandra X-ray Observatory, and the Spitzer Space Telescope.[3]

Space telescopes were proposed as early as 1923. The Hubble was funded in the 1970s, with a proposed launch in 1983, but the project was beset by technical delays, budget problems, and the Challenger disaster. When finally launched in 1990, scientists found that the main mirror had been ground incorrectly, severely compromising the telescope's capabilities. However, after a servicing mission in 1993, the telescope was restored to its intended quality. Hubble's position outside the Earth's atmosphere allows it to take extremely sharp images with almost no background light. Hubble's Ultra Deep Field image, for instance, is the most detailed visible-light image ever made of the universe's most distant objects. Many Hubble observations have led to breakthroughs in astrophysics, such as accurately determining the rate of expansion of the universe.

The Hubble is the only telescope ever designed to be serviced in space by astronauts. To date, there have been four servicing missions. Servicing Mission 1 took place in December 1993 when Hubble's imaging flaw was corrected. Servicing missions 2, 3A, and 3B repaired various sub-systems and replaced many of the observing instruments with more modern and capable versions. However, following the 2003 Columbia Space Shuttle disaster, the fifth servicing mission was canceled on safety grounds. After spirited public discussion, NASA reconsidered this decision, and administrator Mike Griffin gave the green light for one final Hubble servicing mission. This was planned for October 2008, but in September 2008, another key component failed.[4] The servicing mission has been postponed until May 2009[5] to allow this unit to be replaced as well.

The planned repairs to the Hubble should allow the telescope to function until at least 2013, when its successor, the James Webb Space Telescope (JWST), is due to be launched. The JWST will be far superior to Hubble for many astronomical research programs, but will only observe in the infrared, so it will complement (not replace) Hubble's ability to observe in the visible and ultraviolet parts of the spectrum.

excerpt taken from http://en.wikipedia.org/wiki/Hubble_Space_Telescope

Thursday, April 16, 2009

The Greenhouse Effect

Introduction

The Goldilocks Principle can be summed up neatly as "Venus is too hot, Mars is too cold, and Earth is just right." The fact that Earth has an average surface temperature comfortably between the boiling point and freezing point of water, and thus is suitable for our sort of life, cannot be explained by simply suggesting that our planet orbits at just the right distance from the sun to absorb just the right amount of solar radiation. Our moderate temperatures are also the result of having just the right kind of atmosphere. A Venus-type atmosphere would produce hellish, Venus-like conditions on our planet; a Mars atmosphere would leave us shivering in a Martian-type deep freeze.

Instead, parts of our atmosphere act as an insulating blanket of just the right thickness, trapping sufficient solar energy to keep the global average temperature in a pleasant range. The Martian blanket is too thin, and the Venusian blanket is way too thick! The 'blanket' here is a collection of atmospheric gases called 'greenhouse gases' based on the idea that the gases also 'trap' heat like the glass walls of a greenhouse do.

These gases, mainly water vapor (H2O), carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O), all act as effective global insulators. To understand why, it's important to understand a few basic facts about solar radiation and the structure of atmospheric gases.

Solar Radiation

The sun radiates vast quantities of energy into space, across a wide spectrum of wavelengths.

Most of the radiant energy from the sun is concentrated in the visible and near-visible parts of the spectrum. The narrow band of visible light, between 400 and 700 nm, represents 43% of the total radiant energy emitted. Wavelengths shorter than the visible account for 7 to 8% of the total, but are extremely important because of their high energy per photon. The shorter the wavelength of light, the more energy it contains. Thus, ultraviolet light is very energetic (capable of breaking apart stable biological molecules and causing sunburn and skin cancers). The remaining 49 - 50% of the radiant energy is spread over the wavelengths longer than those of visible light. These lie in the near infrared range from 700 to 1000 nm; the thermal infrared, between 5 and 20 microns; and the far infrared regions. Various components of earth's atmosphere absorb ultraviolet and infrared solar radiation before it penetrates to the surface, but the atmosphere is quite transparent to visible light.
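
As a rough cross-check (not in the original excerpt), Wien's displacement law, lambda_max = b/T, predicts where a radiator's spectrum peaks. A short Python sketch, assuming standard constants and round temperatures:

# Emission peaks for the Sun and for Earth's surface (illustrative sketch).
b = 2.898e-3   # Wien's displacement constant, m*K
for name, T in [("Sun", 5778.0), ("Earth's surface", 288.0)]:
    lam = b / T
    print(f"{name} ({T:.0f} K): peak near {lam * 1e6:.1f} microns")
# Sun -> ~0.5 microns (visible light); Earth -> ~10 microns (thermal infrared)

This is why sunlight arrives mostly as visible light while the ground re-radiates in the thermal infrared, the band that the greenhouse gases absorb.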

Absorbed by land, oceans, and vegetation at the surface, the visible light is transformed into heat and re-radiated in the form of invisible infrared radiation. If that were all there was to the story, then during the day earth would heat up, but at night, all the accumulated energy would radiate back into space and the planet's surface temperature would fall far below zero very rapidly. The reason this doesn't happen is that earth's atmosphere contains molecules that absorb the heat and re-radiate it in all directions. This reduces the heat radiated out to space. Called 'greenhouse gases' because they serve to hold heat in like the glass walls of a greenhouse, these molecules are responsible for the fact that the earth enjoys temperatures suitable for our active and complex biosphere.

Greenhouse Gases

Carbon dioxide (CO2) is one of the greenhouse gases. It consists of one carbon atom with an oxygen atom bonded to each side. Its atoms are bound loosely enough that the carbon dioxide molecule can absorb infrared radiation, which sets the molecule vibrating. Eventually, the vibrating molecule will emit the radiation again, and it will likely be absorbed by yet another greenhouse gas molecule. This absorption-emission-absorption cycle serves to keep the heat near the surface, effectively insulating the surface from the cold of space.

Carbon dioxide, water vapor (H2O), methane (CH4), nitrous oxide (N2O), and a few other gases are greenhouse gases. They all are molecules composed of more than two component atoms, bound loosely enough together to be able to vibrate with the absorption of heat. The major components of the atmosphere, nitrogen (N2) and oxygen (O2), are two-atom molecules too tightly bound together to vibrate, and thus they neither absorb heat nor contribute to the greenhouse effect.

Greenhouse Effect

Atmospheric scientists first used the term 'greenhouse effect' in the early 1800s. At that time, it was used to describe the naturally occurring functions of trace gases in the atmosphere and did not have any negative connotations. It was not until the mid-1950s that the term greenhouse effect was coupled with concern over climate change. And in recent decades, we often hear about the greenhouse effect in somewhat negative terms. The negative concerns are related to the possible impacts of an enhanced greenhouse effect. This is covered in more detail in the Global Climate Change section of this Web site. It is important to remember that without the greenhouse effect, life on earth as we know it would not be possible.

While the earth's temperature is dependent upon the greenhouse-like action of the atmosphere, the amounts of heating and cooling are strongly influenced by several factors, just as greenhouses are affected by various factors.

In the atmospheric greenhouse effect, the type of surface that sunlight first encounters is the most important factor. Forests, grasslands, ocean surfaces, ice caps, deserts, and cities all absorb, reflect, and radiate energy differently. Sunlight falling on a white glacier surface is strongly reflected back into space, resulting in minimal heating of the surface and lower atmosphere. Sunlight falling on dark desert soil, on the other hand, is strongly absorbed and contributes to significant heating of the surface and lower atmosphere. Cloud cover also affects greenhouse warming, both by reducing the amount of solar radiation reaching the earth's surface and by reducing the amount of radiation energy emitted into space.

Scientists use the term albedo to define the percentage of solar energy reflected back by a surface. Understanding local, regional, and global albedo effects is critical to predicting global climate change.
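
Albedo plugs directly into the simplest model of a planet's temperature. As a hedged illustration (a standard zero-dimensional energy-balance calculation, not taken from the excerpt), balancing absorbed sunlight S(1-a)/4 against black-body emission sigma*T^4 gives Earth's effective temperature; the gap between it and the observed mean surface temperature of about 288 K is the greenhouse warming:

# Zero-dimensional energy balance (illustrative sketch): S*(1-a)/4 = sigma*T^4
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant at Earth, W m^-2
a = 0.30           # Earth's mean albedo (approximate)
T_eff = (S * (1 - a) / (4 * sigma)) ** 0.25
print(f"Effective temperature: {T_eff:.0f} K")        # ~255 K
print(f"Greenhouse warming:    {288 - T_eff:.0f} K")  # ~33 K

Raising the albedo in this sketch cools the planet, which is the quantitative version of the glacier-versus-desert contrast described above.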


excerpt taken from http://www.ucar.edu/learn/1_3_1.htm

Wednesday, April 15, 2009

Bishop James Ussher

When Clarence Darrow prepared his famous examination of William Jennings Bryan in the Scopes trial, he chose to focus primarily on a chronology of Biblical events prepared by a seventeenth-century Irish bishop, James Ussher. American fundamentalists in 1925 found—and generally accepted as accurate—Ussher’s careful calculation of dates, going all the way back to Creation, in the margins of their family Bibles. (In fact, until the 1970s, the Bibles placed in nearly every hotel room by the Gideon Society carried his chronology.) The King James Version of the Bible introduced into evidence by the prosecution in Dayton contained Ussher’s famous chronology, and Bryan more than once would be forced to resort to the bishop’s dates as he tried to respond to Darrow’s questions.

The chronology first appeared in The Annals of the Old Testament, a monumental work first published in London in the summer of 1650. In 1654, Ussher added a part two which took his history through Rome’s destruction of the Temple in Jerusalem in 70 A.D. The project, which produced 2,000 pages in Latin, occupied twenty years of Ussher’s life.

Ussher lived through momentous times, having been born during the reign of Elizabeth and dying, in 1656, under Cromwell. He was a talented fast-track scholar who entered Trinity College in Dublin at the early age of thirteen, became an ordained priest by the age of twenty, and a professor at Trinity by twenty-seven. In 1625, Ussher became the head of the Anglo-Irish Church in Ireland.

As a Protestant bishop in a Catholic land, Ussher’s obsession with providing an accurate Biblical history stemmed from a desire to establish the superiority of the scholarship practiced by the clergy of his reformed faith over that of the Jesuits, the resolutely intellectual Roman Catholic order. (Ussher had absolutely nothing good to say about “papists” and their “superstitious” faith and “erroneous” doctrine.) Ussher committed himself to establishing a date for Creation that could withstand any challenge. He located and studied thousands of ancient books and manuscripts, written in many different languages. By the time of his death, he had amassed a library of over 10,000 volumes.

The date forever tied to Bishop Ussher appears in the first paragraph of the first page of The Annals. Ussher wrote: “In the beginning, God created heaven and earth, which beginning of time, according to this chronology, occurred at the beginning of the night which preceded the 23rd of October in the year 710 of the Julian period.” In the right margin of the page, Ussher computes the date in “Christian” time as 4004 B.C.

Although Ussher brought stunning precision to his chronology, Christians for centuries had assumed a history roughly corresponding to his. The Bible itself provides all the information necessary to conclude that Creation occurred less than 5,000 years before the birth of Christ. Shakespeare, in As You Like It, has his character Rosalind say, “The poor world is almost six thousand years old.” Martin Luther, the great reformer, favored (liking the round number) 4000 B.C. as a date for creation. Astronomer Johannes Kepler concluded that 3992 B.C. was the probable date.

As paleontologist Stephen Jay Gould points out in an essay on Ussher, the bishop’s calculation of the date of Creation fueled much ridicule from scientists who pointed to him as “a symbol of ancient and benighted authoritarianism.” Few geology textbook writers resisted taking a satirical swing at Ussher in their introductions. How foolish, the authors suggested, to believe that the earth’s geologic and fossil history could be crammed into 6,000 years. Gould, while not defending the bishop’s chronology, notes that judged by the research traditions and assumptions of his time, Ussher deserves not criticism, but praise for his meticulousness. The questionable premise underlying Ussher’s work, of course, is that the Bible is inerrant.

Ussher began his calculation by adding the ages of the twenty-one generations of people of the Hebrew-derived Old Testament, beginning with Adam and Eve. If the Bible is to be believed, they were an exceptionally long-lived lot. Genesis, for example, tells us that "Adam lived 930 years and he died." Adam's great-great-great-great-great-grandson, Methuselah, claimed the longevity record, coming in at 969 years. Healthier living conditions contributed, or so it was believed, to the long life spans of the early generations of the Bible. Josephus, a Jewish historian writing in the first century, explained it this way: "Their food was fitter for the prolongation of life…and besides, God afforded them a longer lifespan on account of their virtue."

To calculate the length of time since Creation, knowledge of more than the ages of death of the twenty-one generations was required; one also needed to know the ages of people of each generation at the time the next generation began. Fortunately, the Bible provided that information as well. For example, Genesis says that at the time Adam fathered his son Seth, he had "lived 130 years." Augustine (as might a lot of people) wondered how a 130-year-old man could sire a child. He concluded that "the earth then produced mightier men" and that they reached puberty much later than did people of his own generation.

The Old Testament’s genealogy took Ussher up to the first destruction of the Temple in Jerusalem during the reign of Persian king Nebuchadnezzar. Ussher’s key to precisely dating Creation came from pinning down, by references in non-Christian sources, the precise dates of Nebuchadnezzar’s reign. He finally found the answer in a list of Babylonian kings produced by the Greek astronomer Ptolemy in the second century. By connecting Greek events to Roman history, Ussher tied the date of Nebuchanezzar’s death (562 B.C.) to the modern Julian calendar. Once the date of 562 B.C. was calculated, there remained only the simple matter of adding 562 years to the 3,442 years represented by the generations of the Old Testament up to that time: 4004.

Ussher next turned his attention to identifying the precise date of Creation. Like many of his contemporary scholars, he assumed that God would choose to create the world on a date that corresponded with the sun being at one of its four cardinal points—either the winter or summer solstice or the vernal or autumnal equinox. This view sprang from the belief that God had a special interest in mathematical and astronomical harmony. The deciding factor for Ussher came from Genesis. When Adam and Eve found themselves in the Garden of Eden, the fruit was invitingly ripe. Ussher reasoned, therefore, that it must have been harvest time, which corresponded with the autumnal equinox: “I have observed that the Sunday, which in the year [4004 B.C.] aforesaid, came nearest the Autumnal Aequinox, by Astronomical Tables, happened upon the 23 day of the Julian October.”

A London bookseller named Thomas Guy in 1675 began printing Bibles with Ussher's dates printed in the margin of the work. Guy's Bibles became very popular, though their success might be as much attributed to the engravings of bare-breasted biblical women as to the inclusion of Ussher's chronology. In 1701, the Church of England adopted Ussher's dates for use in its official Bible. For the next two centuries, Ussher's dates so commonly appeared in Bibles that his dates "practically acquired the authority of the word of God."


excerpt taken from http://www.law.umkc.edu/faculty/projects/ftrials/scopes/ussher.html

Tuesday, April 14, 2009

Patriotism + Nationalism :(

"Law is a farce unless there is power to enforce it, and power to enforce international law against great states is impossible while each possesses vast armaments. Great states have, at present, the privilege of killing members of other states whenever they feel so disposed, though this liberty is disguised as the heroic privilege of dying in defense of what is right and just. Patriots always talk of dying for their country, and never of killing for their country.

War has so long been part of human life that it is difficult for our feelings and imaginations to grasp that the present anarchic national freedoms are likely to result in freedom only for corpses. If institutions could be created which would prevent war, there would be much more freedom in the world than there is at the present, just as there is more freedom owing to the prevention of individual murder"

excerpt taken from Bertrand Russell "Has Man a Future" pg 78-79

Monday, April 13, 2009

Metaethics

The term "meta" means after or beyond, and, consequently, the notion of metaethics involves a removed, or bird's eye view of the entire project of ethics. We may define metaethics as the study of the origin and meaning of ethical concepts. When compared to normative ethics and applied ethics, the field of metaethics is the least precisely defined area of moral philosophy. Two issues, though, are prominent: (1) metaphysical issues concerning whether morality exists independently of humans, and (2) psychological issues concerning the underlying mental basis of our moral judgments and conduct.
Metaphysical Issues: Objectivism and Relativism

"Metaphysics" is the study of the kinds of things that exist in the universe. Some things in the universe are made of physical stuff, such as rocks; and perhaps other things are nonphysical in nature, such as thoughts, spirits, and gods. The metaphysical component of metaethics involves discovering specifically whether moral values are eternal truths that exist in a spirit-like realm, or simply human conventions. There are two general directions that discussions of this topic take, one other-worldly and one this-worldly. Proponents of the "other-worldly" view typically hold that moral values are objective in the sense that they exist in a spirit-like realm beyond subjective human conventions. They also hold that they are absolute, or eternal, in that they never change, and also that they are universal insofar as they apply to all rational creatures around the world and throughout time. The most dramatic example of this view is Plato, who was inspired by the field of mathematics. When we look at numbers and mathematical relations, such as 1+1=2, they seem to be timeless concepts that never change, and apply everywhere in the universe. Humans do not invent numbers, and humans cannot alter them. Plato explained the eternal character of mathematics by stating that they are abstract entities that exist in a spirit-like realm. He noted that moral values also are absolute truths and thus are also abstract, spirit-like entities. In this sense, for Plato, moral values are spiritual objects. Medieval philosophers commonly grouped all moral principles together under the heading of "eternal law" which were also frequently seen as spirit-like objects. 17th century British philosopher Samuel Clarke described them as spirit-like relationships rather than spirit-like objects. In either case, though, they exist in a sprit-like realm. A different other-worldly approach to the metaphysical status of morality is divine commands issuing from God's will. Sometimes called voluntarism, this view was inspired by the notion of an all-powerful God who is in control of everything. God simply wills things, and they become reality. He wills the physical world into existence, he wills human life into existence and, similarly, he wills all moral values into existence. Proponents of this view, such as medieval philosopher William of Ockham, believe that God wills moral principles, such as "murder is wrong," and these exist in God's mind as commands. God informs humans of these commands by implanting us with moral intuitions or revealing these commands in scripture.

The second and more this-worldly approach to the metaphysical status of morality follows in the skeptical philosophical tradition, such as that articulated by the Greek philosopher Sextus Empiricus, and denies the objective status of moral values. Technically, skeptics did not reject moral values themselves, but only denied that values exist as spirit-like objects, or as divine commands in the mind of God. Moral values, they argued, are strictly human inventions, a position that has since been called moral relativism. There are two distinct forms of moral relativism. The first is individual relativism, which holds that individual people create their own moral standards. Friedrich Nietzsche, for example, argued that the superhuman creates his or her morality distinct from and in reaction to the slave-like value system of the masses. The second is cultural relativism, which maintains that morality is grounded in the approval of one's society - and not simply in the preferences of individual people. This view was advocated by Sextus, and in more recent centuries by Michel de Montaigne and William Graham Sumner. In addition to espousing skepticism and relativism, "this-worldly" approaches to the metaphysical status of morality deny the absolute and universal nature of morality and hold instead that moral values in fact change from society to society throughout time and throughout the world. They frequently attempt to defend their position by citing examples of values that differ dramatically from one culture to another, such as attitudes about polygamy, homosexuality and human sacrifice.

excerpt taken from http://www.iep.utm.edu/e/ethics.htm

Sunday, April 12, 2009

Epistemology

Since at least the 17th century, a sharp distinction has been drawn between a priori knowledge and a posteriori knowledge. The distinction plays an especially important role in the work of David Hume (1711–76) and Immanuel Kant (1724–1804).

A proposition is said to be necessary if it holds (is true) in all logically possible circumstances or conditions. “All husbands are married” is such a proposition. There are no possible or conceivable conditions in which this proposition is not true (on the assumption, of course, that the words “husband” and “married” are taken to mean what they ordinarily mean). In contrast, “All Model T Fords are black” holds in some circumstances (those actually obtaining, which is why the proposition is true), but it is easy to imagine circumstances in which it would not be true. To say, therefore, that a proposition is contingent is to say that it is true in some but not in all possible circumstances. Many necessary propositions, such as “All husbands are married,” are a priori—though it has been argued that some are not (see below Necessary a posteriori propositions)—and most contingent propositions are a posteriori.

A proposition is said to be analytic if the meaning of the predicate term is contained in the meaning of the subject term. Thus, “All husbands are married” is analytic because part of the meaning of the term “husband” is being married. A proposition is said to be synthetic if this is not so. “All Model T Fords are black” is synthetic, since “black” is not included in the meaning of “Model T Ford.” Some analytic propositions are a priori, and most synthetic propositions are a posteriori. These distinctions were used by Kant to ask one of the most important questions in the history of epistemology, namely, whether a priori synthetic judgments are possible.

A proposition is said to be tautological if its constituent terms repeat themselves or if they can be reduced to terms that do, so that the proposition is of the form “a = a” (“a is identical to a”). Such propositions convey no information about the world, and accordingly they are said to be trivial, or empty of cognitive import. A proposition is said to be significant if its constituent terms are such that the proposition does provide new information about the world.
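
To make the definition concrete: a proposition of tautological form is true under every assignment of truth values to its constituent terms, something a brute-force truth-table check can verify. Here is a minimal Python sketch; the helper function and the examples are ours, added for illustration only:

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Check a propositional form on every assignment of truth values."""
    return all(formula(*values) for values in product([True, False], repeat=n_vars))

print(is_tautology(lambda a: a == a, 1))      # True: "a = a", empty of cognitive import
print(is_tautology(lambda a: a or not a, 1))  # True: also holds in all circumstances
print(is_tautology(lambda a, b: a or b, 2))   # False: fails when both terms are false
```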

The distinction between tautological and significant propositions figures importantly in the history of the philosophy of religion. In the so-called ontological argument for the existence of God, St. Anselm of Canterbury (1033/34–1109) attempted to derive the significant conclusion that God exists from the tautological premise that God is the only perfect being together with the premise that no being can be perfect unless it exists. As Hume and Kant pointed out, however, it is fallacious to derive a proposition with existential import from a tautology, and it is now generally agreed that, from a tautology alone, it is impossible to derive any significant proposition. Tautological propositions are generally a priori, necessary, and analytic, and significant propositions are generally a posteriori, contingent, and synthetic.


excerpt taken from http://www.britannica.com/EBchecked/topic/190219/epistemology/247952/A-priori-and-a-posteriori-knowledge

Thursday, April 9, 2009

Carl Sagan's Baloney Detection Kit

The following are suggested as tools for testing arguments and detecting fallacious or fraudulent arguments:

  • Wherever possible, there must be independent confirmation of the facts.
  • Encourage substantive debate on the evidence by knowledgeable proponents of all points of view.
  • Arguments from authority carry little weight (in science there are no "authorities").
  • Spin more than one hypothesis - don't simply run with the first idea that caught your fancy.
  • Try not to get overly attached to a hypothesis just because it's yours.
  • Quantify, wherever possible.
  • If there is a chain of argument, every link in the chain must work.
  • "Occam's razor" - if there are two hypotheses that explain the data equally well, choose the simpler.
  • Ask whether the hypothesis can, at least in principle, be falsified (shown to be false by some unambiguous test). In other words, is it testable? Can others duplicate the experiment and get the same result?
Additional issues are:
  • Conduct control experiments - especially "double blind" experiments where the person taking measurements is not aware of the test and control subjects.
  • Check for confounding factors - separate the variables.
Common fallacies of logic and rhetoric
  • Ad hominem - attacking the arguer and not the argument.
  • Argument from "authority".
  • Argument from adverse consequences (putting pressure on the decision maker by pointing out dire consequences of an "unfavourable" decision).
  • Appeal to ignorance (absence of evidence is not evidence of absence).
  • Special pleading (typically referring to god's will).
  • Begging the question (assuming an answer in the way the question is phrased).
  • Observational selection (counting the hits and forgetting the misses).
  • Statistics of small numbers (such as drawing conclusions from inadequate sample sizes - see the sketch after this list).
  • Misunderstanding the nature of statistics (President Eisenhower expressing astonishment and alarm on discovering that fully half of all Americans have below average intelligence!).
  • Inconsistency (e.g. military expenditures based on worst case scenarios but scientific projections on environmental dangers thriftily ignored because they are not "proved").
  • Non sequitur - "it does not follow" - the logic falls down.
  • Post hoc, ergo propter hoc - "it happened after so it was caused by" - confusion of cause and effect.
  • Meaningless question ("what happens when an irresistible force meets an immovable object?").
  • Excluded middle - considering only the two extremes in a range of possibilities (making the "other side" look worse than it really is).
  • Short-term v. long-term - a subset of excluded middle ("why pursue fundamental science when we have so huge a budget deficit?").
  • Slippery slope - a subset of excluded middle - unwarranted extrapolation of the effects (give an inch and they will take a mile).
  • Confusion of correlation and causation.
  • Straw man - caricaturing (or stereotyping) a position to make it easier to attack.
  • Suppressed evidence or half-truths.
  • Weasel words - for example, use of euphemisms for war such as "police action" to get around limitations on Presidential powers. "An important art of politicians is to find new names for institutions which under old names have become odious to the public."
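
Two of the statistical fallacies above (statistics of small numbers, and misunderstanding the nature of statistics) lend themselves to a quick demonstration. Here is a minimal Python sketch, with all numbers invented for illustration:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Statistics of small numbers: estimates of a fair coin's P(heads)
# from small samples scatter widely; large samples settle near 0.5.
for n in (5, 20, 100, 10_000):
    estimates = [sum(random.random() < 0.5 for _ in range(n)) / n
                 for _ in range(10)]
    print(f"n={n:>6}: estimates span {min(estimates):.2f}-{max(estimates):.2f}")

# Eisenhower's "alarm": half of any population is below the *median*
# by definition; only in a symmetric distribution does the median
# coincide with the mean. Skewed data (one invented outlier below)
# pulls the two apart, so "half below average" carries no alarm at all.
incomes = [20, 25, 30, 35, 40, 45, 50, 55, 60, 500]
print("mean:", statistics.mean(incomes), "median:", statistics.median(incomes))
# Here 9 of 10 values sit below the mean, but exactly 5 below the median.
```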

Wednesday, April 8, 2009

Evolution of the Eye

When evolution skeptics want to attack Darwin's theory, they often point to the human eye. How could something so complex, they argue, have developed through random mutations and natural selection, even over millions of years?

If evolution occurs through gradations, the critics say, how could it have created the separate parts of the eye -- the lens, the retina, the pupil, and so forth -- since none of these structures by themselves would make vision possible? In other words, what good is five percent of an eye?

Darwin acknowledged from the start that the eye would be a difficult case for his new theory to explain. Difficult, but not impossible. Scientists have come up with scenarios through which the first eye-like structure, a light-sensitive pigmented spot on the skin, could have gone through changes and complexities to form the human eye, with its many parts and astounding abilities.

Through natural selection, different types of eyes have emerged in evolutionary history -- and the human eye isn't even the best one, from some standpoints. Because blood vessels run across the surface of the retina instead of beneath it, it's easy for the vessels to proliferate or leak and impair vision. So, the evolution theorists say, the anti-evolution argument that life was created by an "intelligent designer" doesn't hold water: If God or some other omnipotent force was responsible for the human eye, it was something of a botched design.

Biologists use the range of less complex light-sensitive structures that exist in living species today to hypothesize the various evolutionary stages eyes may have gone through.

Here's how some scientists think some eyes may have evolved: The simple light-sensitive spot on the skin of some ancestral creature gave it some tiny survival advantage, perhaps allowing it to evade a predator. Random changes then created a depression in the light-sensitive patch, a deepening pit that made "vision" a little sharper. At the same time, the pit's opening gradually narrowed, so light entered through a small aperture, like a pinhole camera.

Every change had to confer a survival advantage, no matter how slight. Eventually, the light-sensitive spot evolved into a retina, the layer of cells and pigment at the back of the human eye. Over time a lens formed at the front of the eye. It could have arisen as a double-layered transparent tissue containing increasing amounts of liquid that gave it the convex curvature of the human eye.

In fact, eyes corresponding to every stage in this sequence have been found in existing living species. The existence of this range of less complex light-sensitive structures supports scientists' hypotheses about how complex eyes like ours could evolve. The first animals with anything resembling an eye lived about 550 million years ago. And, according to one scientist's calculations, only 364,000 years would have been needed for a camera-like eye to evolve from a light-sensitive patch.
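
The estimate mentioned above is usually attributed to Dan-Eric Nilsson and Susanne Pelger's 1994 model. Here is a rough Python sketch of the commonly cited arithmetic behind it; the fold-change and step size below are assumptions of that model as commonly reported, so treat the exact figures as illustrative rather than definitive:

```python
import math

# How many 1%-change steps take a flat light-sensitive patch to a
# camera-type eye? The commonly cited estimate assumes an overall
# ~80-million-fold morphological change, accumulated in 1% increments.
total_change = 80_129_540   # assumed overall fold-change (illustrative)
step = 1.01                 # assumed 1% morphological change per step

steps = math.log(total_change) / math.log(step)
print(f"{steps:,.0f} steps of 1% change")   # roughly 1,800 steps

# Converting steps into time requires further genetic assumptions
# (heritability, selection intensity); the published model arrives at
# about 364,000 generations, i.e. ~364,000 years at one generation
# per year - an eyeblink on evolutionary timescales.
```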

Tuesday, April 7, 2009

Fear, the foundation of religion and what we must do

Religion is based, I think, primarily and mainly upon fear. It is partly the terror of the unknown and partly, as I have said, the wish to feel that you have a kind of elder brother who will stand by you in all your troubles and disputes. Fear is the basis of the whole thing -- fear of the mysterious, fear of defeat, fear of death. Fear is the parent of cruelty, and therefore it is no wonder if cruelty and religion have gone hand in hand. It is because fear is at the basis of those two things. In this world we can now begin a little to understand things, and a little to master them by help of science, which has forced its way step by step against the Christian religion, against the churches, and against the opposition of all the old precepts. Science can help us to get over this craven fear in which mankind has lived for so many generations. Science can teach us, and I think our own hearts can teach us, no longer to look around for imaginary supports, no longer to invent allies in the sky, but rather to look to our own efforts here below to make this world a better place to live in, instead of the sort of place that the churches in all these centuries have made it.

We want to stand upon our own feet and look fair and square at the world -- its good facts, its bad facts, its beauties, and its ugliness; see the world as it is and be not afraid of it. Conquer the world by intelligence and not merely by being slavishly subdued by the terror that comes from it. The whole conception of God is a conception derived from the ancient Oriental despotisms. It is a conception quite unworthy of free men. When you hear people in church debasing themselves and saying that they are miserable sinners, and all the rest of it, it seems contemptible and not worthy of self-respecting human beings. We ought to stand up and look the world frankly in the face. We ought to make the best we can of the world, and if it is not so good as we wish, after all it will still be better than what these others have made of it in all these ages. A good world needs knowledge, kindliness, and courage; it does not need a regretful hankering after the past or a fettering of the free intelligence by the words uttered long ago by ignorant men. It needs a fearless outlook and a free intelligence. It needs hope for the future, not looking back all the time toward a past that is dead, which we trust will be far surpassed by the future that our intelligence can create.

excerpt taken from http://users.drew.edu/~jlenz/whynot.html

Monday, April 6, 2009

The Big Bang

One of the most persistently asked questions has been: How was the universe created? Many once believed that the universe had no beginning or end and was truly infinite. With the inception of the Big Bang theory, however, the universe could no longer be considered infinite. It was forced to take on the properties of a finite phenomenon, possessing a history and a beginning.

About 15 billion years ago a tremendous explosion started the expansion of the universe. This explosion is known as the Big Bang. At the point of this event, all of the matter and energy of space was contained at one point. What existed prior to this event is completely unknown and is a matter of pure speculation. This occurrence was not a conventional explosion, but rather an event filling all of space, with all of the particles of the embryonic universe rushing away from each other. The Big Bang was not like the explosion of a bomb, where fragments are thrown outward; it was an explosion of space within itself. The galaxies were not all clumped together; rather, the Big Bang laid the foundations for the universe.

The origin of the Big Bang theory can be credited to Edwin Hubble. Hubble made the observation that the universe is continuously expanding. He discovered that a galaxy's velocity is proportional to its distance: galaxies that are twice as far from us move twice as fast. Another consequence is that the universe is expanding in every direction. This observation means that it has taken every galaxy the same amount of time to move from a common starting position to its current position. Just as the Big Bang provided the foundation of the universe, Hubble's observations provided the foundation of the Big Bang theory.
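
The proportionality Hubble found is now written v = H0 * d. The excerpt quotes no number for the Hubble constant H0, so the value below is an assumed, commonly quoted modern figure; a minimal Python sketch:

```python
H0 = 70.0  # km/s per megaparsec (assumed value; not from the excerpt)

for d_mpc in (10, 100, 200):
    v = H0 * d_mpc  # Hubble's law: recession velocity grows linearly with distance
    print(f"galaxy at {d_mpc:>3} Mpc recedes at about {v:,.0f} km/s")
# A galaxy twice as far away (200 vs. 100 Mpc) recedes twice as fast,
# exactly the pattern described above.
```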

Since the Big Bang, the universe has been continuously expanding, and thus there has been more and more distance between clusters of galaxies. The stretching of light from these receding galaxies toward longer wavelengths is known as the redshift: as light from a distant galaxy travels toward Earth, the space between Earth and the galaxy increases, and the light's wavelengths are stretched along with it.
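
The stretching can be quantified: astronomers define the redshift z as the fractional increase in wavelength, z = (lambda_observed - lambda_emitted) / lambda_emitted. A minimal Python sketch, with the observed value invented for illustration:

```python
lambda_emitted = 656.3   # nm: hydrogen-alpha line as measured in the lab
lambda_observed = 722.0  # nm: the same line from a distant galaxy (invented value)

z = (lambda_observed - lambda_emitted) / lambda_emitted
print(f"z = {z:.3f}")  # ~0.100: every wavelength arrives stretched by ~10%
```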

In addition to the understanding of the velocity of galaxies emanating from a single point, there is further evidence for the Big Bang. In 1964, two astronomers, Arno Penzias and Robert Wilson, in an attempt to detect microwaves from outer space, inadvertently discovered a noise of extraterrestrial origin. The noise did not seem to emanate from one location, but instead came from all directions at once. It became apparent that what they had detected was radiation from the farthest reaches of the universe, left over from the Big Bang. This discovery of the radiant afterglow of the initial explosion lent much credence to the Big Bang theory.

Even more recently, NASA's COBE satellite was able to detect cosmic microwaves emanating from the outer reaches of the universe. These microwaves were remarkably uniform, which illustrated the homogeneity of the early stages of the universe. However, the satellite also discovered that as the universe cooled while still expanding, small fluctuations began to appear due to temperature differences. These fluctuations verified prior calculations of the possible cooling and development of the universe just fractions of a second after its creation. The fluctuations provide a more detailed description of the first moments after the Big Bang, and they also help to tell the story of the formation of galaxies, which will be discussed in the next chapter.

The Big Bang theory provides a viable solution to one of the most pressing questions of all time. It is important to understand, however, that the theory itself is constantly being revised. As more observations are made and more research conducted, the Big Bang theory becomes more complete and our knowledge of the origins of the universe more substantial.

excerpt taken from http://www.umich.edu/~gs265/bigbang.htm

Sunday, April 5, 2009

How is time related to the Mind?

Physical time is public time, the time that clocks are designed to measure. Psychological time or phenomenological time is private time. It is perhaps best understood as awareness of physical time. Psychological time passes swiftly for us while we are enjoying reading a book, but it slows dramatically if we are waiting anxiously for the water to boil on the stove. The slowness is probably due to focusing our attention on shorter intervals of physical time. Meanwhile, the clock by the stove is measuring physical time and is not affected by anybody's awareness. When a physicist defines speed to be the rate of change of position with respect to time, the term "time" refers to physical time. Physical time is more basic for helping us understand our shared experiences in the world, and so it is more useful than psychological time for doing science. But psychological time is vitally important for understanding many human thought processes. We have an awareness of the passage of time even during our sleep, and we awake knowing we have slept for one night, not for one month. But if we have been under a general anesthetic or have been knocked unconscious and then wake up, we may have no sense of how long we have been unconscious. Psychological time stopped. Some philosophers claim that psychological time is completely transcended in the mental state called "nirvana."

Within the field of cognitive science, one wants to know what neural mechanisms account not only for our experience of time's flow, but also for our ability to place events into the proper time order. See (Damasio, 2006) for further discussion of the progress in this area of cognitive science. The most surprising scientific discovery about psychological time comes from Benjamin Libet's experiments in the 1970s, which show, or so it is claimed, that the brain events involved in initiating free choices occur about a third of a second before we are aware of our choice. Before Libet's work, it was universally agreed that a person first is aware of deciding to act freely, and only later does the body initiate the action.

Psychologists are interested in whether we can speed up our minds relative to physical time. If so, we might become mentally more productive, get more high-quality decision making done per fixed amount of physical time, and learn more per minute. Several avenues have been explored: using drugs such as cocaine and amphetamines, undergoing extreme experiences such as jumping backwards off a tall tower with bungee cords attached to the legs, and trying different forms of meditation. So far, none of these avenues has led to success in terms of productivity.

Any organism's sense of time is subjective, but is the time that is sensed also subjective, a mind-dependent phenomenon? Without minds in the world, nothing in the world would be surprising or beautiful or interesting. Can we add that nothing would be in time? If judgments of time were subjective in the way judgments of being interesting vs. not-interesting are subjective, then it would be miraculous that everyone can so easily agree on the ordering of public events in time. For example, first, Einstein was born, then he went to school, then he died. Everybody agrees that it happened in this order: birth, school, death. No other order. The agreement on time order for so many events is part of the reason that most philosophers and scientists believe physical time is an objective phenomenon not dependent on being consciously experienced. The other part of the reason time is believed to be objective is that our universe has a large number of different processes that bear consistent time relations, or frequency of occurrence relations, to each other. For example, the frequency of a fixed-length pendulum is a constant multiple of the half-life of a specific radioactive uranium isotope; the relationship does not change as time goes by (at least not much and not for a long time). The existence of these sorts of relationships makes our system of physical laws much simpler than it otherwise would be, and it makes us more confident that there is something objective we are referring to with the time-variable in those laws. The stability of these relationships over a long time also makes it easy to create clocks. Time can be measured easily because we have access to long-term simple harmonic oscillators that have a regular period or "regular ticking." This regular motion shows up in completely different stable systems when they are disturbed: a ball swinging from a string (a pendulum), a ball bouncing up and down from a coiled spring, a planet orbiting the sun, organ pipes, electric circuits, and atoms in a crystal lattice. Many of these systems make good clocks.
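
As a concrete instance of that "regular ticking": for small swings, a pendulum's period depends only on its length and the local gravity, T = 2*pi*sqrt(L/g), so it repeats at a stable rate. A minimal Python sketch, with the lengths chosen arbitrarily:

```python
import math

g = 9.81  # m/s^2, surface gravity

for L in (0.25, 1.0, 4.0):  # pendulum lengths in metres
    T = 2 * math.pi * math.sqrt(L / g)  # small-angle period
    print(f"L = {L:>4} m  ->  T = {T:.2f} s")
# Quadrupling the length doubles the period: the kind of stable,
# law-governed relationship that makes a system usable as a clock.
```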

Aristotle raised this issue of the mind-dependence of time when he said, "Whether, if soul (mind) did not exist, time would exist or not, is a question that may fairly be asked; for if there cannot be someone to count there cannot be anything that can be counted..." [Physics, chapter 14]. He does not answer his own question because, he says rather profoundly, it depends on whether time is the conscious numbering of movement or instead is just the capability of movements being numbered were consciousness to exist.

St Augustine, adopting a subjective view of time, said time is nothing in reality but exists only in the mind's apprehension of that reality. In the 11th century, the Persian philosopher Avicenna doubted the existence of physical time, arguing that time exists only in the mind due to memory and expectation. The 13th century philosophers Henry of Ghent and Giles of Rome said time exists in reality as a mind-independent continuum, but is distinguished into earlier and later parts only by the mind. In the 13th century, Duns Scotus clearly recognized both physical and psychological time.

At the end of the 18th century, Kant suggested a subtle relationship between time and mind--that our mind actually structures our perceptions so that we can know a priori that time is like a mathematical line. Time is, on this theory, a form of conscious experience.

In the 20th century, the philosopher of science Bas van Fraassen described physical time by saying, "There would be no time were there no beings capable of reason" just as "there would be no food were there no organisms, and no teacups if there were no tea drinkers," and no cultural objects without a culture.

The controversy in metaphysics between idealism and realism is that, for the idealist, nothing exists independently of the mind. If this controversy is settled in favor of idealism, then time, too, would have that subjective feature--physical time as well as psychological time.

It has been suggested by some philosophers that Einstein's theory of relativity, when confirmed, showed us that time depends on the observer, and thus that time is subjective, or dependent on the mind. This error is probably caused by Einstein's use of the term "observer." Einstein's theory does imply that the duration of an event is not absolute but depends on the observer's frame of reference or coordinate system. But what Einstein means by "observer's frame of reference" is merely a perspective or framework from which measurements could be made. The "observer" does not have to be a conscious being or have a mind. So, Einstein is not making a point about mind-dependence.
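
The frame-dependence of duration is quantitative: a clock moving at speed v runs slow by the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2) relative to a given frame. A minimal Python sketch, with the speeds chosen purely for illustration:

```python
import math

c = 299_792_458.0  # speed of light, m/s

for fraction in (0.1, 0.5, 0.9, 0.99):
    v = fraction * c
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(f"v = {fraction:4.2f}c -> one onboard second lasts {gamma:.2f} s in the rest frame")
# The dilation follows from the geometry of reference frames; an
# unmanned atomic clock shows the same effect, which is why no
# conscious observer - no mind - is required.
```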

excerpt taken from http://www.iep.utm.edu/t/time.htm#H2

Saturday, April 4, 2009

Objective vs Subjective reality

If everything is an illusion, does it make any difference? We can't do anything about it, so it makes no practical difference. Would it rob you of your motivation to do anything? It might not make a difference to some people, but it might to others. The question of the ultimate nature of reality can make some difference to some people. Some philosophers are pragmatists in the ordinary sense; some are positivists. A positivist says that if we can perform no experiment that would show a difference, then there is no real difference: it is just a matter of semantics. If we are just figures in other people's dreams, then it follows that we have no control over our lives, and that could make a difference to people.

Russell addresses the question of our beliefs about ordinary things. The issue is the reality of color. Are colors part of reality or merely subjective appearances? Russell says that color disappears in the dark. But we might disagree, and say that the color remains even if we can't see it. So we say color is part of reality, even though the appearance changes. Russell is saying color is subjective, not objective. One of his arguments is that things look different colors in different lights, and none is the real color. Russell wants to argue that all the senses are subjective: smell, touch, taste, color, sound. Ordinarily, we tend to say that some appearances are fairly objective: we can be wrong about them. There are real distinctions we make about appearance that make a difference to practical life. On the other hand, it's not clear that science can provide an objective account of what colors things really are. What about shapes? In a two-dimensional field, tables do not look rectangular, but in a three-dimensional field, normal tables do look rectangular. Could shape itself be subjective? Are there any objective properties of physical objects?

  • Atomic composition
  • Their reality
  • Mass
  • Number of atoms/molecules
  • Size and relationship of sides – shape

Something is subjective if it varies from person to person and there is no way to determine which person is right. Something is objective if there is a definite right answer about it that does not vary from person to person.

Friday, April 3, 2009

The Russell-Einstein Manifesto

In the tragic situation which confronts humanity, we feel that scientists should assemble in conference to appraise the perils that have arisen as a result of the development of weapons of mass destruction, and to discuss a resolution in the spirit of the appended draft.

We are speaking on this occasion, not as members of this or that nation, continent, or creed, but as human beings, members of the species Man, whose continued existence is in doubt. The world is full of conflicts; and, overshadowing all minor conflicts, the titanic struggle between Communism and anti-Communism.

Almost everybody who is politically conscious has strong feelings about one or more of these issues; but we want you, if you can, to set aside such feelings and consider yourselves only as members of a biological species which has had a remarkable history, and whose disappearance none of us can desire.

We shall try to say no single word which should appeal to one group rather than to another. All, equally, are in peril, and, if the peril is understood, there is hope that they may collectively avert it.

We have to learn to think in a new way. We have to learn to ask ourselves, not what steps can be taken to give military victory to whatever group we prefer, for there no longer are such steps; the question we have to ask ourselves is: what steps can be taken to prevent a military contest of which the issue must be disastrous to all parties?

The general public, and even many men in positions of authority, have not realized what would be involved in a war with nuclear bombs. The general public still thinks in terms of the obliteration of cities. It is understood that the new bombs are more powerful than the old, and that, while one A-bomb could obliterate Hiroshima, one H-bomb could obliterate the largest cities, such as London, New York, and Moscow.

No doubt in an H-bomb war great cities would be obliterated. But this is one of the minor disasters that would have to be faced. If everybody in London, New York, and Moscow were exterminated, the world might, in the course of a few centuries, recover from the blow. But we now know, especially since the Bikini test, that nuclear bombs can gradually spread destruction over a very much wider area than had been supposed.

It is stated on very good authority that a bomb can now be manufactured which will be 2,500 times as powerful as that which destroyed Hiroshima. Such a bomb, if exploded near the ground or under water, sends radio-active particles into the upper air. They sink gradually and reach the surface of the earth in the form of a deadly dust or rain. It was this dust which infected the Japanese fishermen and their catch of fish. No one knows how widely such lethal radio-active particles might be diffused, but the best authorities are unanimous in saying that a war with H-bombs might possibly put an end to the human race. It is feared that if many H-bombs are used there will be universal death, sudden only for a minority, but for the majority a slow torture of disease and disintegration.

Many warnings have been uttered by eminent men of science and by authorities in military strategy. None of them will say that the worst results are certain. What they do say is that these results are possible, and no one can be sure that they will not be realized. We have not yet found that the views of experts on this question depend in any degree upon their politics or prejudices. They depend only, so far as our researches have revealed, upon the extent of the particular expert's knowledge. We have found that the men who know most are the most gloomy.

Here, then, is the problem which we present to you, stark and dreadful and inescapable: Shall we put an end to the human race; or shall mankind renounce war? People will not face this alternative because it is so difficult to abolish war.

The abolition of war will demand distasteful limitations of national sovereignty. But what perhaps impedes understanding of the situation more than anything else is that the term "mankind" feels vague and abstract. People scarcely realize in imagination that the danger is to themselves and their children and their grandchildren, and not only to a dimly apprehended humanity. They can scarcely bring themselves to grasp that they, individually, and those whom they love are in imminent danger of perishing agonizingly. And so they hope that perhaps war may be allowed to continue provided modern weapons are prohibited.

This hope is illusory. Whatever agreements not to use H-bombs had been reached in time of peace, they would no longer be considered binding in time of war, and both sides would set to work to manufacture H-bombs as soon as war broke out, for, if one side manufactured the bombs and the other did not, the side that manufactured them would inevitably be victorious.

Although an agreement to renounce nuclear weapons as part of a general reduction of armaments would not afford an ultimate solution, it would serve certain important purposes. First, any agreement between East and West is to the good in so far as it tends to diminish tension. Second, the abolition of thermo-nuclear weapons, if each side believed that the other had carried it out sincerely, would lessen the fear of a sudden attack in the style of Pearl Harbour, which at present keeps both sides in a state of nervous apprehension. We should, therefore, welcome such an agreement though only as a first step.

Most of us are not neutral in feeling, but, as human beings, we have to remember that, if the issues between East and West are to be decided in any manner that can give any possible satisfaction to anybody, whether Communist or anti-Communist, whether Asian or European or American, whether White or Black, then these issues must not be decided by war. We should wish this to be understood, both in the East and in the West.

There lies before us, if we choose, continual progress in happiness, knowledge, and wisdom. Shall we, instead, choose death, because we cannot forget our quarrels? We appeal as human beings to human beings: Remember your humanity, and forget the rest. If you can do so, the way lies open to a new Paradise; if you cannot, there lies before you the risk of universal death.