Monday, May 24, 2010

Recognizing Bad Science Part 1

This is the first part of what will be an occasional series on recognizing bad or poor science. 

One of the aspects of astronomy that makes it so fun is that there are so many weird and wonderful things in the cosmos, from black holes to the Big Bang.  Unfortunately, this vast zoo of the unexpected and unknown has resulted in lots of unscientific and unsupported claims.  If you are interested in the Universe but don't have the time to spend learning advanced physics and math, how can you separate good science from bad science and pseudoscience?

Thankfully, there are some basic criteria that just about anyone can learn to apply that will help.  In this new occasional series, I'll outline some of the easier criteria to apply.  Tonight we start with bad science as seen on TV. 

A few weeks ago I was flipping through channels on the TV.  I stopped on the History Channel when I saw a glimpse of hieroglyphics.  Hoping to get a Zahi Hawass hit, I stopped just in time to hear the narrator speculating that the ancient Egyptian images of gods with human bodies and animal heads may have been the results of genetic experiments by the ancient Egyptians.  Then some random talking head popped up and said that "mainstream science" could not prove that the Egyptians didn't experiment in genetics.  As I reached for the remote to move on to more substantial fare like the Three Stooges, the show went on to imply that aliens taught the ancient Egyptians about genetic engineering.  (Dr. Hawass was nowhere to be seen and presumably had nothing to do with this program.)

In that short 30 seconds of television, I saw one of the most common signs of bad science.

First, let's discuss the scientific method.  The scientific method is the process of science.  In a short and idealized form, the scientific method starts with an observation of something, which leads to a hypothesis (an educated guess) about what the natural laws might be that would lead to such an observation.  This hypothesis should make a prediction about what should happen in a different situation.  Next an experiment is devised that creates that new situation, and the prediction of the hypothesis is compared to the results of the experiment.  If they agree, the hypothesis gains credibility.  If they disagree, the hypothesis is sent back to the drawing board.

So, let's look at the hypothesis that images of humans with animal heads are the results of genetic experiments by the ancient Egyptians. It's not too hard to think of predictions such a hypothesis would make. If the ancient Egyptians really performed genetic experiments, we should find artifacts from that work: clean laboratories, microscopes, computing equipment, even hybrid mummies dating from ancient Egypt, and there are many more predictions that could be made. None of these have been found.

But the talking head in the program jumped the rails of the scientific method even earlier when he insisted that nobody could prove this hypothesis wasn't true. That's a completely unscientific statement. A scientific approach requires that the hypothesis make testable predictions that are then found to be true. Or, put another way, a hypothesis should be assumed to be wrong until it is proven correct. Similar to the legal standard "innocent until proven guilty", the scientific standard is "wrong until proven right".

Let me be clear: there is nothing wrong or unscientific about making speculations, even if those speculations are way outside the mainstream. It's fine if a person wants to hypothesize that aliens taught the ancient Egyptians how to make chimeras. But the burden of proof lies on the speculator to prove his or her assertion, not on the rest of us to disprove it.

Let's briefly consider one example of the proper application of the scientific method. Between 1907 and 1915, Albert Einstein developed the theory of general relativity. From our vantage today, we often fail to appreciate how radical an idea this was. General relativity not only described how gravity worked; it introduced entirely new ways of looking at matter, space, and time.

Einstein and others excited by his idea realized that general relativity had to prove itself, so they set about forcing general relativity to make predictions. One of these predictions was that the sun should slightly bend the light from background stars, and bend it about twice as much as Newton's Law of Gravity predicted. This bending could be measured during a total solar eclipse, though it is a very difficult measurement. It took several failed expeditions over several years before Sir Arthur Eddington finally obtained evidence in 1919 that starlight was bent as much as predicted by general relativity, a measurement confirmed in 1922 by William Wallace Campbell (who had led some of the failed expeditions) and a team from Lick Observatory.
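For readers who like to see the numbers, the "twice as much" claim can be made concrete. For a light ray grazing the edge of the sun, general relativity predicts a deflection angle of about 1.75 arcseconds, while a Newtonian calculation (treating light as particles falling in the sun's gravity) gives half that, about 0.87 arcseconds:

```latex
\theta_{\mathrm{GR}} = \frac{4 G M_\odot}{c^2 R_\odot} \approx 1.75''
\qquad
\theta_{\mathrm{Newton}} = \frac{2 G M_\odot}{c^2 R_\odot} \approx 0.87''
```

Here $G$ is the gravitational constant, $M_\odot$ and $R_\odot$ are the sun's mass and radius, and $c$ is the speed of light. An arcsecond is 1/3600 of a degree, which gives a sense of just how delicate Eddington's eclipse measurement was.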

Over time, several more predictions of general relativity have been made, and experimental results have agreed with these predictions.  Therefore, even though general relativity was a radical theory when it was proposed, it has become widely accepted as true. 

Yet there are still aspects of general relativity that remain to be tested. General relativity predicts the existence of gravitational waves. We have seen indirect evidence for gravitational waves, but they have not been directly detected yet. Many people think that general relativity might fail on subatomic scales, though this is extremely hard to test. And if dark matter and dark energy (which are allowed but not predicted by general relativity) do not actually exist, then general relativity probably cannot explain many observations of the distant universe.

In summary: if you ever hear anyone try to justify a new hypothesis with the argument "You can't prove it isn't true", they aren't talking science, and you should be very wary of what they have to say. If they say, "Here's what the hypothesis predicts, here are the predictions that have been verified, and here are some more we have yet to test", then you are much more likely to be hearing good science!
