Studying human beliefs reveals they are usually wrong. Scientific beliefs are wrong less often than others because modern science is built on this realization.
In an earlier post, I explained how I went from trusting science based on authority and personal preference to trusting it for well-considered reasons.
To come to this conclusion, I learned about the history and methodology of science, about religions, psychology, philosophy … What I came away with was the realization that, throughout history, people have firmly believed an endless list of false things about the world, and we remain vulnerable to doing so again.
As I wrote: Once you see all those beliefs laid out side by side in an endless list, all the different gods, and old physical theories, and ineffective treatments, and oppressive racist or sexist beliefs, and misinterpreted observations … and see how many of them contradict each other or have been disproven by more objective methods, and how alleged eternal truths vary from culture to culture and have clearly traceable historical origins, and how easy it is to fall into various powerful illusions … the only reasonable conclusion is to be extremely skeptical.
What I promised to explain later was why science is different. Why should seeing human fallibility about exactly the kinds of things science explores lead to trusting science?
The answer is deceptively simple: modern science, which has not existed for all that long, is effectively built on this very skepticism. The scientific community will not accept a claim until it has been thoroughly tested by methods specifically designed so that the hypothesis could be disproven.
One of the reasons people believe false things is confirmation bias: the tendency to interpret evidence in a way that seems to confirm your existing belief. The scientific method is built, in part, to make this harder. You need to run tests that clearly distinguish confirmation from disconfirmation. In addition, other scientists stand ready to question your results.
Another thing that makes it hard to find out what is really true is that the world is full of confounding factors. For example, who knows whether an apparent cure was really produced by some particular treatment, rather than by the patient's own constitution or the passage of time? That's why scientific studies, where possible, include safeguards like randomization and control groups that rule out confounding variables as far as possible. Even this is not enough, and a single study is not thought to prove much; only converging evidence from multiple sources leads to scientific consensus.
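The effect of randomization can be seen in a toy simulation (all numbers here are made up for illustration). Suppose healthier people are more likely to seek out a treatment: then naively comparing treated to untreated patients overstates the treatment's true effect, while random assignment recovers it.

```python
import random

random.seed(0)

def recovery(treated, healthy):
    # True effect: treatment adds 0.1 to recovery probability;
    # being healthy adds 0.3 regardless of treatment (the confounder).
    p = 0.3 + (0.1 if treated else 0.0) + (0.3 if healthy else 0.0)
    return random.random() < p

def observed_effect(assign, n=100_000):
    # Estimate the treatment effect as the difference in recovery rates
    # between the treated and control groups.
    treated_outcomes, control_outcomes = [], []
    for _ in range(n):
        healthy = random.random() < 0.5
        treated = assign(healthy)
        outcome = recovery(treated, healthy)
        (treated_outcomes if treated else control_outcomes).append(outcome)
    rate = lambda xs: sum(xs) / len(xs)
    return rate(treated_outcomes) - rate(control_outcomes)

# Confounded assignment: healthier people are more likely to get treated.
confounded = observed_effect(lambda healthy: random.random() < (0.8 if healthy else 0.2))
# Randomized assignment: a coin flip, independent of health.
randomized = observed_effect(lambda healthy: random.random() < 0.5)

print(f"confounded estimate: {confounded:.2f}")  # well above the true 0.1
print(f"randomized estimate: {randomized:.2f}")  # close to the true 0.1
```

The confounded comparison attributes the health advantage of the treated group to the treatment itself; randomization breaks that link, which is exactly what a controlled trial is designed to do.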
None of this is to say that science does these things perfectly. Scientists are still human, with human failings, and the ideals are often not lived up to. Only very well-established scientific facts, such as (in spite of what propagandists say) evolution, can be trusted as near certainties.
The attitude of trusting science shouldn’t be one of trusting everything scientists say – it should be one of realizing it’s the best thing we’ve got for certain kinds of questions. Ideally, one should know enough about science to know whether claims made in the name of science are really good science.