Scientific Method Archive

Irrationality of Science

The scientific method, as with any process, is not immune to adverse human influences.

Journalism has a persistent bias for the new and exciting. Novelty and excitement sell in pop culture, and as it turns out, they sell in scientific culture too. This creates unintended consequences.

Unlike pop or mainstream journalism, objectivity and peer review form critical cornerstones of the scientific method. Summarizing “Journalistic Deficit Disorder” (The Economist, September 22, 2012 edition) and “The Truth Wears Off” by Jonah Lehrer (The New Yorker, December 13, 2010 edition), scientific journals tend to prefer studies that:

  • Will sell more publications
  • Explore popular fields
  • Produce exciting, outlying results
  • Prove their hypotheses
  • Are new, not reruns of previous studies
  • Produce supporting results for a new, fashionable paradigm
  • Have substantial corporate investment or interest

These tendencies pressure scientists and researchers whose careers, reputations, incomes and funding depend on the publicity their work receives. Several consequences undermine the credibility of science and research as a result:

  • Emphasis on proving outlandish hypotheses
  • Diminished importance of peer review
  • Increased biases in interpreting data and statistics
  • More focus on confirming popular findings or those with substantial financial backing
  • Defunding contrarian work
  • Skewing results toward extremes

Exciting often means extreme. In science, extreme means outlying results, the tails of the bell curve. As Lehrer writes, since outliers receive the press, duplicating results is often difficult. Therefore, while hypotheses might be true, they are not as true as first reported. More often, though, results are simply wrong, caused by inadequate research methodology, poor statistical analysis or ordinary human biases.
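Lehrer’s point about outliers failing to replicate can be sketched as a small simulation. All the numbers below are invented for illustration: a modest true effect, noisy individual studies, and journals that publish only the most extreme results. Fresh replications of those published studies then regress toward the true effect.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.2   # hypothetical modest real effect
NOISE = 0.5         # sampling noise in each study's estimate
N_STUDIES = 1000

# Each study estimates the true effect with random noise.
studies = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in range(N_STUDIES)]

# Journals publish only the most extreme (exciting) 5% of results.
published = sorted(studies, reverse=True)[: N_STUDIES // 20]

# A replication of a published study is a fresh, unbiased draw.
replications = [TRUE_EFFECT + random.gauss(0, NOISE) for _ in published]

def avg(xs):
    return sum(xs) / len(xs)

print(f"true effect:            {TRUE_EFFECT:.2f}")
print(f"avg published effect:   {avg(published):.2f}")    # inflated by selection
print(f"avg replication effect: {avg(replications):.2f}") # regresses toward truth
```

Nothing about the individual replications is worse than the original studies; the inflation comes entirely from publishing only the extremes.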

In other words, we can’t practice safe science by simply relying upon the scientific method. Human nature is too strong, even in scientists and trained researchers. We need to provide our own protection. That means educating ourselves on the scientific method and on the questions to ask. It also means taking nothing on blind faith . . . even science.

 


Problems with Science

By Mike Lehr

The biggest problem with science is people – not only scientists but also those who fund, publish, cite and use it. As sharp a process of inquiry as the scientific method (SM) – the overarching process powering science and academic research – is, unchecked human biases dull it, as they do any process.

Supposedly, SM’s ultimate bulwark against such biases is peer review (review of findings and process by others in the field). However, it’s under assault from money, prestige, publication and unconscious biases. Consequently, peer review is barely up to the task of providing this defense anymore, so much so that market forces are producing opportunities for firms to do what scientists increasingly have difficulty doing themselves: protecting science from sloppy research (“Metaphysicians” [The Economist, March 15, 2014 edition]).

Such problems with science are not new. In 2005, John Ioannidis, Professor of Health Research and Policy at Stanford School of Medicine, published in PLOS Medicine a groundbreaking paper, “Why Most Published Research Findings Are False” (see also “Science, Its Irrational Aspects” for additional related research). Jonathan Schooler, of the University of California, Santa Barbara, is another who has taken on “broader issues and associated questions regarding the frontiers of science.”

Many of these problems originate from extending science beyond its inherent limitations. For example, science cannot prove great leadership begets great business. Pragmatically, this means soft sciences such as psychology, medicine and sociology, as opposed to hard ones such as chemistry and physics, will contain the biggest infections of adverse human influences. Another major source of problems is scientists’ belief that they are immune to unconscious, subjective influences. Yet this belief often makes people most susceptible.

Science has greatly improved our lives, and all of it is because scientists have used the scientific method creatively, wisely and appropriately. Let’s ensure it stays that way.

 


This entry is part 6 of 9 in the series Leadership - The Secret

A professor and I were discussing the effect of goals on employee performance. He commented that research shows goals raise performance until people believe they are unattainable, at which point performance becomes worse than if no goals existed. I then asked him if any research showed that leaders – who could increase people’s belief in themselves – could get people to accept previously unrealistic goals as realistic and thus achieve even higher performance levels. He replied, “No,” but then asked, “How would you set up an experiment to test that?”

I did not know, but it got me thinking, and after considerable thought and research I finally concluded that we cannot use the scientific method to prove that good leadership begets good things. For example, before running such an experiment we have to define good leadership and good results. Since leadership has a high emotional component, it exposes itself to much subjectivity.

Even if we could do this, defining good results is difficult itself. Let’s say we use profits as a starting point. Do we base our assessment on months, quarters, years or five-year periods? Do we assess only during leaders’ tenures or include afterwards to see in what condition they left their groups? If so, how long after their tenure do we examine? Finally, are profits our only consideration? Do morale, social considerations, ethics and other such tangibles enter the picture?
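The measurement-window problem above can be shown with a toy calculation. The profit figures are invented: one hypothetical leader delivers steady results, the other delivers a strong start followed by decline, and the “better” leader flips depending on how many quarters we count.

```python
# Two leaders' quarterly profits over two years (hypothetical numbers).
leader_a = [5, 5, 5, 5, 5, 5, 5, 5]    # steady performer
leader_b = [12, 10, 6, 4, 2, 1, 1, 1]  # strong start, then decline

def total_profit(profits, quarters):
    """Sum profits over the first `quarters` quarters of tenure."""
    return sum(profits[:quarters])

# Judged on the first year, B looks better; over two years, A does.
print(total_profit(leader_a, 4), total_profit(leader_b, 4))  # 20 vs 32 -> B "wins"
print(total_profit(leader_a, 8), total_profit(leader_b, 8))  # 40 vs 37 -> A "wins"
```

The arithmetic is trivial; the point is that the verdict depends entirely on an arbitrary choice of assessment window.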

Okay, assuming we could create good definitions, how do we establish the controls? If we pit a good leader against a bad leader, how do we create identical conditions? How do we ensure the same economic forces, people, money, markets, etc.? How do we even ensure the same uncertainties would arise?

In the end, good leadership is very much like good dancing, athleticism or engineering. It’s subjective. It’s scientifically unproven.

 


Irrationality of Science

Irrationality enters science when people either operate the scientific method or are its subjects. Scientists are not immune to pressures, biases and subjectivity. Despite humor to the contrary, they are human. Moreover, people as subjects of experiments are so different that any one individual could respond quite differently from an experimental group. Finally, money greatly influences “objectivity.”

The editorial “How Science Goes Wrong” and the article “Trouble at the Lab” (both: The Economist, October 19, 2013 edition) detail influences calling science’s objectivity into question. This does not mean science is not important to us, but it’s also not gospel. It requires vetting even when it works well.

For example, scientists suffer even simple biases associated with names: common names are more likely to receive research grants than unusual ones. As another example, peer review occurs when other scientists rerun experiments to verify the results of the experiment’s originators. However, time, money and recognition work against this: there is not enough of the first two, and little of the third in publications, which tend to prefer hot, interesting topics to boost sales. Consequently, many findings never go through this important self-regulating tool. Scientists might not be any better at self-regulation than bankers are.

On the other side, when people are subjects, headline-making findings could be the result of an unusual mix of people. Even if peers achieve similar findings, there is no guarantee any one individual will respond similarly. Pharmaceutical examples are good here. Even if clinical trials show a particular outcome, dosages must still be adjusted to ensure severe, adverse reactions don’t occur in patients sensitive to the drug. Social sciences, where it’s difficult to enforce controls, offer other examples.
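The gap between a group result and any one individual’s response can be sketched with a toy simulation. The effect sizes are invented: a treatment with a clearly positive average effect still leaves a sizable fraction of simulated patients worse off.

```python
import random

random.seed(7)

MEAN_EFFECT = 1.0        # average benefit found in the trial (arbitrary units)
INDIVIDUAL_SPREAD = 1.5  # person-to-person variation around that mean

# Each patient's individual response varies around the group average.
patients = [random.gauss(MEAN_EFFECT, INDIVIDUAL_SPREAD) for _ in range(10_000)]

group_mean = sum(patients) / len(patients)
adverse = sum(1 for p in patients if p < 0)  # patients who respond negatively

print(f"group mean effect:  {group_mean:.2f}")               # positive, as reported
print(f"adverse responders: {adverse / len(patients):.0%}")  # yet many are worse off
```

A headline reporting only the group mean would hide the substantial minority of adverse responders, which is exactly why dosages still need individual adjustment.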

Whatever irrationalities influence people, you have also found the ones that influence scientists.

 


This entry is part 13 of 15 in the series Creative Innovation

One of the points Giovanni Gavetti makes in “The New Psychology of Strategic Leadership” (Harvard Business Review, July–August 2011 edition) about associative thinking – one that holds true for all aspects of creative innovation and decision making – is our own biases. As a result of “the human mind’s confirmatory nature,” “strategists often look selectively for evidence that supports the analogy” they’ve formed in associative thinking.

In other words, when doing our research we are more inclined to focus on evidence, or types of evidence, supporting our points rather than contradicting them. For instance, we might value statistical evidence over anecdotal or empirical evidence. We might value evidence produced by the scientific method over an alternative process such as trial and error. Yet, in both cases, accepting different types of evidence, or evidence produced by different processes, stimulates creativity. Moreover, by holding the team to these restrictions, such as requiring quantification, we not only restrict creativity but also reinforce the status quo, inertia.

However, it’s difficult for people to come out from under their own biases. This means it becomes incumbent upon the managers of these teams to be prepared and to have the talent to lead the change that innovation brings. One thing that truly distinguishes leadership from management is the degree to which each must promote change. That includes change in the evidence and processes the team will consider in evaluating options.

Thus, while diversity in our creative innovation teams is important, diversity in our approaches and processes for tackling problems and making decisions is too. We can view an organization’s policies and processes as a form of “group bias” that can impose itself on our teams and drastically negate their inherent advantages.

Beware of not only individual biases but institutional ones too.


This entry is part 3 of 3 in the series Over Thinking Decisions

What’s the antidote for over thinking (OT) as referenced in Ian Leslie’s article, “Non Cogito, Ergo Sum,” (Intelligent Life, May/June 2012 edition)? It begins with four steps.

First, recognize warning signs. Awareness alone helps us focus on minimizing our thinking by giving our first thoughts priority before diving too deeply into the problem.

Second, ensure a positive frame of mind before contemplating the problem. Avoid feeling pressured, tense, afraid or anxious. Most importantly, take advantage of times when we really feel like tackling the problem. Don’t pass these up for other tasks.

Third, find a quiet setting; this encourages creativity. Distractions interrupt it.

The last step is relaxing, which also encourages creativity. Lie down without reading material. Tiredness encourages our minds to think less structurally, allowing creative juices to flow. Thus, exercising before (or while) tackling the problem helps. Hot showers also encourage relaxing.

After these steps we can employ several techniques depending upon the warning signs we notice:

  • Avoid thinking about the consequences of our decisions and focus on the solution, the plan of action.
  • Skim the information (don’t read intensely) or ignore some altogether since it’s likely repetitive.
  • Alter the way we approach the problem, especially if it is within our area of expertise and that expertise has a standard problem-solving methodology (e.g., the scientific method, extensive research, cost-benefit analysis).
  • Minimize interactions with others, especially if they do nothing but heighten expectations (e.g., “How are you ever going to resolve this!”), or terminate discussions if they begin talking about your decision.
  • Limit the time we think about decisions by moving up the deadline or delaying when we address them.

Still, we shouldn’t fret if we don’t succeed the first time. Training our minds, as with our bodies, takes practice.

 


House of Arbitrariness & Conditionality

 

We often view measurements as unchangeable. A meter is a meter, a pound a pound. We often forget that at some time someone somewhere declared what those were and that they would be a standard. The point is this: arbitrariness underlies almost all objective standards by which we live.

For example, in the January 29, 2011 edition of The Economist, the article “The Constant Gardeners” explores the kilogram. The official standard is a platinum-iridium alloy cast in 1879. Today, however, its weight seems to vary from that of its copies by up to 69 micrograms, about half a grain of sand – an important variance when weighing small things. So, the question is this: How heavy is a kilogram . . . really?

The relevancy to problem solving is similar to that which I wrote in my post, “Arbitrariness: The Cornerstone of Conditions”:

By searching for the underlying arbitrary aspect of any apparently objective situation, we can often find the perspective – when altered – that can cause us to see that situation in a different light.

For example, when someone asks us, “What’s the best way to get from A to B?” we often give the fastest route. The assumption is that the “best way” is the “fastest,” when “best” could have many different attributes. Over time, the best-fastest link becomes the arbitrary point – when altered – that sheds a different light on what route might be best, such as the most scenic one or the most fuel-efficient.

As a more sophisticated example, consider our reliance upon “proven outcomes.” What does that mean, especially when we cannot scientifically prove that good leadership begets good results? When we look at what it took for something to be “proven,” we often find that it’s subjective, based upon who determines what “good leadership” and “good results” are.

 


Our names unconsciously influence people. We smile at actors who change their names to make them more appealing. Yet some people relate, because they wish their parents had given them better names.

Even in a field striving for objectivity such as science, your name can influence the peer review process. In the August 20, 2011 issue of The Economist, the article “A Black and White Answer” reports racial name research by Donna Ginther of the University of Kansas indicating that it does. The article also references the 2003 racial name study, “Racial Bias in Hiring,” by Marianne Bertrand of the University of Chicago and Sendhil Mullainathan, then of the Massachusetts Institute of Technology, in which names influenced who received job interviews.

While the article focused on the racial connotation of names, an October 23, 2008 article in The New York Times mentions research about non-racial correlations, focused on similar names, initials, sounds and letters. Of course, if we overlay the concept of branding from advertising on these two areas of research and the territory between them, we come back to “what’s in a name?”

From an intuitive perspective, what connotation does each of our names have? What feelings do people get when they hear it? How do we feel when we run across names far different from ours, ones we can’t pronounce? Subconsciously, do they trigger our defense mechanisms? All you need to do is look at popular baby names to know we do not distribute names randomly even if we account for ethnicity.

What we can learn from this research is that no matter how objective we think we are, our objectivity is no match for the unconscious emotions truly driving our decisions.

 


In Robert Heinlein’s science fiction book, Starship Troopers, the instructor, Mr. Dubois says, “One can lead a child to knowledge but one cannot make him think.” Automatically, a picture forms in my mind of a person who collects a garage full of tools and doesn’t fix anything or who collects a kitchen full of utensils and always orders out. There are many people who treat knowledge the same way; they collect it but never think about it or employ it.

Often I will begin certain seminars by declaring, “You won’t learn anything new, but if you’re like others, you’ll still find it helpful.” We are so preconditioned to view the stuffing of our minds as a benefit, that we have difficulty seeing how this could be true. So, I go on to say, “Most of what I will cover you already know; however, I will present it in a way that will encourage you to think about it differently and take action.”

I contend that rather than going out and collecting more knowledge, if we just used even 20% of what we already know but don’t use, we would see substantial changes in our careers and lives. How many people collect business improvement books as though they were collecting stamps?

Intuitively, we know that we must consider the emotional aspect of knowledge. This appears in the form of motivation to think and employ that knowledge. Simply, learning something new shouldn’t be the benchmark of a worthwhile learning effort. Did it encourage us to look at things differently? Did it move us from inertia to action?

Now, that is real power.


Knowledge States

By Mike Lehr

While I was helping a non-profit, a board member said, “We can only deal with a problem if we know there is one.” Here, the state our knowledge assumes alters our perspective. In this case, it causes us to ignore the idea of prevention, dealing with problems before they arise. In reality, problems don’t care whether we know or prove they exist. Thus, if knowledge’s form can alter our perspective and prevent us from seeing potential solutions, it is important to have a grasp of the different states of knowledge.

To that end, I’ve created the map to the right. It has five basic states: Unknown, Aware, Know, Prove and Quantify. Each is a subset of the previous one:

Knowledge Map
 

  • Unknown: Not knowing what we don’t know
  • Aware: Knowing what we don’t know, or not being able to express what we do know
  • Know: Knowing without proof but being able to express what we know
  • Prove: Using approaches that adhere closely to the scientific method or the one used in courts of law
  • Quantify: Being able to count, calculate or formulate
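The nesting of these five states can be sketched in code, treating each state as implying the weaker ones. The item names and their assigned states below are purely illustrative.

```python
from enum import IntEnum

class Knowledge(IntEnum):
    # Ordered so each state implies all weaker ones, per the map above.
    UNKNOWN = 0   # not knowing what we don't know
    AWARE = 1     # knowing what we don't know
    KNOW = 2      # able to express it, but without proof
    PROVE = 3     # demonstrable via scientific or legal standards
    QUANTIFY = 4  # able to count, calculate or formulate

def at_least(state: Knowledge, threshold: Knowledge) -> bool:
    """True if our knowledge of an item meets the given bar."""
    return state >= threshold

# If we act only on what we can quantify, most of the map is excluded.
items = {
    "competitor's next move": Knowledge.AWARE,
    "customer loyalty": Knowledge.KNOW,
    "last quarter's revenue": Knowledge.QUANTIFY,
}
actionable = [name for name, state in items.items()
              if at_least(state, Knowledge.QUANTIFY)]
print(actionable)  # only the quantifiable item survives the filter
```

Lowering the threshold to, say, `Knowledge.AWARE` admits far more of reality into the decision, which is the argument the next paragraph makes.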

By looking at knowledge’s states in this manner, we see how much reality we exclude if we only accept what is quantifiable and provable. Imagine in warfare or the game of poker if we took no action unless we could prove it was the right one. Business is not immune to this. Therefore, success is determined more by how we treat what we don’t know or barely know than by how we treat what we can prove and quantify. Thus, if we lived by the advice of the board member above, we would surely fail without a great deal of luck.
