The scientific method
is built up of facts, as a house is built of stones;
but an accumulation of facts is no more science
than a heap of stones is a house.
The term "scientific method" gets paraded about as if the average person untutored in science knows what it is. Even in my engineering college days, I was taught the practice of the scientific method but was given no philosophical understanding of it; I had to take a philosophy elective in logic to fill the gap. So it is no surprise that so few understand what the scientific method is. I would describe it as a method of logic designed to keep us tethered to tangible reality. We don't have to be scientists to apply it.
I came across a book that explains the scientific method better than any publication I've seen. It's titled How to Think Straight About Psychology by Keith E. Stanovich. As the title suggests, the author is out to set the record straight because he thinks Freud and popular psychology have given the profession a bad name. The author's writing is so clear and concise that I've decided to copy selections verbatim. While his focus is psychology, I've sprinkled in some illustrative examples to put the scientific method in a broader perspective, with an emphasis on religion.
The word "empiricism" defines the practice of relying on observation. Up to Galileo's time it was thought that knowledge was best obtained through pure thought or appeal to authority. Galileo's accusers refused to look through his telescope.
Scientific observation is termed systematic because it is structured so that the results of the observations reveal something about the underlying nature of the world. This is done by comparing theories with observations. The results of the observations will either support or refute those theories.
Scientists avoid theories that are not testable or not solvable. Examples would be "What is the meaning of life?" or "When did the universe begin?" Even the question "How did life begin?" may not be solvable.
The falsifiability criterion says that for a theory to be useful, the predictions drawn from it must be specific. The theory should predict what should and what should not happen. If the predicted outcomes don't happen, the theory has to be modified or replaced with an entirely new theory. Either way, you wind up with a theory closer to the truth. In contrast, if a theory does not rule out any possible observations, then the theory cannot be changed, and we are frozen into our current way of thinking with no possibility of improvement.
As one example of unfalsifiability, a shaman might apply some magic potions to heal a sick person. If the person gets well, he takes credit for it. If the person dies, he says it was the will of the gods. He can't be wrong no matter what the outcome. The god hypothesis on any matter cannot be falsified. Today's psychologists consider Freudian theory scientifically useless: it explains human behavior only after the fact, it can explain everything, and it makes no specific predictions.
There is a misconception that one theory is as good as another, as if theories were unverified hypotheses, mere guesses, hunches. On the contrary, a theory in science explains a body of data and makes predictions about the results of future experiments. What scientists most often mean by a solvable problem is a "testable theory." The way scientists make sure they are dealing with testable theories is by ensuring that they are falsifiable.
Embedded in the principle of falsifiability is the idea that a successful theory is not one that accounts for every possible happening, because such a theory robs itself of its predictive power. Bad theories do not put themselves in jeopardy in this way. They make predictions that are so general that they are almost bound to be true.
The difference between a layperson's and a scientist's use of the term "theory" has often been exploited by religious fundamentalists who want creationism taught in the public schools. Grand theories so global, complicated, and fuzzy that they can explain everything are constructed for emotional support; they are not meant to be changed or discarded.
Hypotheses are specific predictions derived from theories (which are more general and comprehensive). Current viable theories are those that have many of their hypotheses confirmed. If the hypotheses are confirmed by the experiments, then the theory receives some degree of corroboration. They are called hypotheses because they are incomplete, not because they are wrong in every respect.
The Bible has many errors, omissions and contradictions with established facts. Those determined to defend the Bible as inerrant content themselves with hypothesized possibilities.
There are many relationships that have been confirmed so many times that they are termed laws because it is extremely doubtful that they will be overturned by future experimentation.
That scientists gravitate to problems on the fringes of what is known and ignore things that are well confirmed (so-called laws) is very confusing to the general public. It seems that scientists are emphasizing what they don't know rather than what is known. This is because, to advance knowledge, scientists must work at the outer limits of what is known.
Religionists have taken these debates at the fringes as a weakness when it is a strength. Conversely, religion's weakness is its tradition of clinging to pure thought and authority. It's the same fallacious logic that led Galileo's accusers to refuse to look through his telescope.
Essentialism versus operationism
Essentialism is defined as the idea that the only good scientific theories are those that give ultimate explanations of phenomena in terms of their underlying essences or their essential properties. People who hold this view usually also believe that any theory that gives less than an ultimate explanation of a phenomenon is useless.
Scientists do not claim to produce perfect knowledge; the unique strength of science is not that it is an error-free process, but that it provides a way of eliminating the errors in the knowledge base. Nor does science attempt to answer "ultimate" questions about the universe. Scientists consider such "ultimate" questions unanswerable, and claims of perfect or absolute knowledge tend to choke off inquiry. This is why scientists reject essentialism.
Instead, science advances by developing operational definitions of concepts, i.e., definitions in terms of how things operate. The operational definition removes the concept from the feelings of a particular individual and allows it to be tested by anyone who can carry out the measurable operations. As such, theories must be grounded in, or linked to, observable events that can be measured.
For example, scientists can explain how gravity operates, but they cannot explain its underlying essence. They do not engage in word games such as asking what the word "life" really means. They would not define hunger by feelings of discomfort; they would use something measurable like blood sugar.
Scientists consider testimonials worthless as evidence of truth. First, there is the placebo effect, which is well documented in medical research. Second, there is the vividness problem. When faced with a problem-solving or decision-making situation, people retrieve from memory the information that seems relevant to the situation at hand. Thus, they are more likely to use the facts that are most accessible to solve a problem or make a decision. Testimony is also dependent on honesty.
Testimonials open the door to pseudoscience such as astrology and parapsychology. Christianity, Judaism and Islam's claims to being revealed religions are based on testimonials.
Correlation and causation
The presence of correlation does not necessarily imply causation. The limitations of correlational evidence are not always easy to recognize. When the causal link seems obvious to us, when we have a strong pre-existing bias, or when our interpretations become dominated by our theoretical orientation, it is tempting to treat correlations as evidence of causation.
Stanovich gives a case example: based on statistical evidence, pellagra was believed to be a transmissible disease caused by unsanitary conditions. Joseph Goldberger suspected it was caused by inadequate diet. He thought that the correlation arose because families with sanitary plumbing were likely to be economically advantaged. To prove his point, he tried to infect himself and volunteers with the body fluids of pellagra victims; nothing happened. For his second test, he fed one group of volunteers a high-carbohydrate, low-protein diet and another group a more balanced diet. Within five months, the low-protein group was ravaged by pellagra.
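Goldberger's reasoning can be sketched as a toy simulation (all rates below are hypothetical, chosen only to illustrate confounding): a hidden factor, household poverty, drives both poor sanitation and poor diet, so sanitation correlates with pellagra even though, in the model, only diet causes it.

```python
import random

random.seed(1)

def simulate_household():
    """One household: poverty is the hidden confounder."""
    poor = random.random() < 0.5
    sanitary = random.random() < (0.2 if poor else 0.9)   # wealth buys plumbing
    bad_diet = random.random() < (0.7 if poor else 0.1)   # poverty forces a poor diet
    pellagra = bad_diet and random.random() < 0.8         # only diet causes pellagra here
    return sanitary, pellagra

trials = [simulate_household() for _ in range(100_000)]
rate_unsanitary = (sum(p for s, p in trials if not s)
                   / sum(1 for s, p in trials if not s))
rate_sanitary = (sum(p for s, p in trials if s)
                 / sum(1 for s, p in trials if s))
print(f"pellagra rate, unsanitary homes: {rate_unsanitary:.2f}")
print(f"pellagra rate, sanitary homes:   {rate_sanitary:.2f}")
# Pellagra is far more common in unsanitary homes, yet sanitation plays
# no causal role in this model -- poverty drives both variables.
```

The correlation between sanitation and disease is real, which is exactly why the causal story looked obvious before Goldberger's experiments.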
The directionality problem
When correlations become apparent, it is a common error to confuse effect for cause. So many of these errors pass for conventional wisdom that I couldn't scratch the surface. People will cite a list of social ills as if they were root causes, when in reality they are the effects of underlying causes. 1) When the economy is running well, political officials take credit; when it goes sour, they blame consumers for not spending enough. 2) My interest in nutrition led me to discover that most metabolic illnesses are caused by poor diet. Pharmaceutical medicines alleviate the effects of bad diet, but do not address the underlying causes, while producing side effects of their own. 3) Are religious people moral because they believe in (or fear) God? Or are they moral because they had those inclinations in the first place?
Human behavior often has multiple causes; fields such as history, economics and psychology come to mind.
Connectivity and Convergence
The connectivity principle states that a new theory in science must make contact with previously established empirical facts. To be considered an advance, it must not only explain new facts but account for old ones. The theory may explain old facts in quite a different way from a previous theory, but explain them it must. This requirement ensures the cumulative progress of science.
If a new theory accounts for some new facts but fails to account for a host of old ones, it will not be considered a complete advance over old theories and, thus, will not immediately replace them. Instead, the old and new theories will contend simultaneously in the marketplace of ideas until a new synthesis renders them all obsolete.
The breakthrough model of scientific progress leads us astray by implying that new discoveries violate the principle of connectivity. This implication is dangerous because, when the principle of connectivity is abandoned, the main beneficiaries are purveyors of pseudoscience and bogus theories.
Stanovich calls this the "Einstein syndrome": Einstein's achievement has made the breakthrough the dominant model of scientific progress in the public's mind. The tabloids are notorious for headlines that start with "New Breakthrough...". These theories derive part of their appeal and much of their publicity from the fact that they are said to be startlingly new. The second stratagem is to dismiss previous data by declaring them irrelevant: because the theory is so new, contradicting data are said not yet to exist. It's a rich environment for the growth of pseudoscience.
Evolutionary theory, the bugbear of creationism, displays connectivity with such disparate areas of science as paleontology, embryology, morphology, biogeography, and others. If the universe and Earth are only about ten thousand years old, then the modern sciences of cosmology, astronomy, physics, chemistry, geology, paleontology, paleoanthropology and early human history are all invalidated. Darwin's theory wasn't perfect: to supply a mechanism of heredity to go along with natural selection, he proposed a theory called pangenesis, abandoning the principle of connectivity. Pangenesis was eventually discarded because it did not cohere with the rest of biology. The problem is that creationism shows no connectivity with anything else in science: not with biology, geology, ecology, chemistry or genetics. Evolution, in contrast, shows extreme connectivity with all the other sciences.
Many laws and relationships in some sciences are stated in probabilities rather than certainties. We can see this in such fields as medical science, meteorology and psychology. Human activities generate the most controversies; in debates, it is easy to find exceptions to the general rule. For example, medical science can predict with confidence that the odds of developing lung cancer are greater among smokers, but the prediction does not hold in every case. This gives rise to what are called cognitive illusions: it is a fallacy of reasoning to overweight individual-case evidence and underweight statistical information.
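The arithmetic behind this kind of statistical claim is simple. The sketch below uses hypothetical rates, invented purely for illustration, to show how a strong statistical link coexists with many individual exceptions:

```python
# Hypothetical lifetime rates, chosen only to illustrate the logic.
p_cancer_smoker = 0.15      # lung cancer rate among smokers (assumed)
p_cancer_nonsmoker = 0.01   # lung cancer rate among non-smokers (assumed)

relative_risk = p_cancer_smoker / p_cancer_nonsmoker
print(f"relative risk for smokers: {relative_risk:.0f}x")
print(f"smokers who never develop lung cancer: {1 - p_cancer_smoker:.0%}")

# With these numbers the statistical link is strong (15x the risk),
# yet most smokers in the model never develop lung cancer. A single
# healthy lifelong smoker therefore refutes nothing.
```

This is why "my uncle smoked for fifty years and was fine" carries no weight against the statistics: the law is probabilistic, not a certainty about each case.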
The Gambler's Fallacy
The gambler's fallacy is the tendency for people to see links between events in the past and events in the future when the two are really independent. Two outcomes are independent when the occurrence of one does not affect the probability of the other.
Chance and randomness
Our brains have evolved in such a way that they engage in a relentless search for patterns in the world. We seek relationships, explanations, and meaning in the things that happen around us. What confounds our quest for structure and obscures understanding? You guessed it: probability. Or more specifically: chance and randomness.
Chance and randomness are integral parts of our environment. The mechanisms of biological evolution and genetic recombination are governed by the laws of chance and randomness. Why do bad things sometimes happen to good people? Answer: chance and randomness, being in the wrong place at the wrong time. There is a common tendency to search for explanations of coincidental events, based on the mistaken idea that rare events never happen. The laws of probability don't guarantee an even distribution.
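A short simulation makes the point about uneven distribution concrete: random coin flips routinely produce long "streaks" that look meaningful but are pure chance (fair coin assumed, thresholds chosen for illustration).

```python
import random

random.seed(0)

def longest_run(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = cur = 1
    for a, b in zip(flips, flips[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

trials = 10_000
hits = sum(longest_run([random.random() < 0.5 for _ in range(100)]) >= 6
           for _ in range(trials))
print(f"P(a run of 6+ identical flips in 100 tosses) = {hits / trials:.2f}")
# Runs of six or more identical outcomes appear in most sequences of
# 100 fair flips: "clumps" are the norm in random data, not evidence
# of a hidden cause.
```

People asked to fake a random sequence almost never include streaks this long, which is one way statisticians spot fabricated data.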
What is seen and what is not seen
This is not specifically discussed in Stanovich's book, but it improves our skill at thinking scientifically. One should look not merely at the immediate effects but also at the longer-term effects and side effects of any act or phenomenon. As discussed above, in nature there may be a chain of causes or multiple causes. With human actions, there can be a chain of consequences that cascades into areas not apparent.
In what he calls the broken window fallacy, Frédéric Bastiat explains this common failure to take into consideration all the consequences of an action.
We live in a sea of disinformation on topics for which we have no expertise, and even our own lives are full of uncertainties. The scientific method provides a framework on which to improve our judgment. In addition, it requires curiosity, alertness and the willingness to trade lesser ideas for better ones.
These sources do a commendable job of explaining science and debunking pseudoscience.