Human beings are experts at motivated reasoning. When Chase Utley of the Dodgers broke Mets player Ruben Tejada’s leg with a hard slide into second base, Dodgers fans saw a player aggressively helping his team win a playoff game, while Mets fans saw a dirty play that deserved punishment. Fans of each team had access to the same data, but differed in their analysis. Unfortunately, the mechanisms of motivated reasoning kick in unconsciously from the moment we look at data. As a result, there is a tendency to see what we expect to see. That is, there is a danger that data will not lead us to think differently, but instead will solidify existing beliefs that really should have been challenged.

In order to counteract motivated reasoning, it is crucial to start treating data the way scientists do. The scientific method starts by making predictions about what you would expect to observe if a particular belief about the world is correct. If that prediction is consistent with the data, then you can continue to hold that belief. But if the prediction is inconsistent with the data, then you have to revise your understanding of the world and generate new hypotheses to test.

A second reason why it is important to make predictions about what you expect to see before looking at the data is that patterns in data are often quite subtle. Often, we imagine that there will be some big, obvious pattern. For example, perhaps people will click on red buttons more than blue buttons. These effects of a single factor (like the color of a button) are called main effects in statistics lingo. It turns out that there just aren’t that many main effects in the world, and most of them are ones we know about already. For example, people have already spent lots of time looking at the ways men and women differ. That is a main effect. Instead, most of the key insights in data involve what are called interactions.
The best way to tell that you have an interaction is that when someone asks you whether a particular factor matters, you have to say, “It depends.” What it depends on is the interaction. For example, studies of advertising effectiveness ask whether it is better to give people a message focused on the positive benefits they will get from using a product or on the negative problems the product will help them avoid. It depends. If the product is generally associated with desirable things (like cars or lipstick), then it is better to focus on the benefits the product provides. If the product is associated with undesirable things (like medications or diapers), then it is better to focus on the problems the product avoids.

The problem is that as the number of things you have measured grows, the number of combinations you have to test to find these interactions grows as well. As a result, the best way to find these interactions is to approach the problem scientifically and to develop questions that lead you to examine your data looking for particular interactions.

You might say that you can’t approach your business scientifically, because good scientists can run controlled studies to test their beliefs, while you cannot run many experiments with your business beyond the occasional test on a website. Remember, though, that astronomy and geology are perfectly good sciences, even though they focus mostly on gathering information from the world as it is rather than running experiments. Just because your data did not result from a true experiment doesn’t mean you can’t approach it like a scientist.

In the end, data can be a powerful source of new insights for your company, but only if you allow it to change your existing beliefs rather than reinforce them.

This blog first appeared on Harvard Business Review on 10/20/2015.
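For readers who want to see the arithmetic behind “it depends,” here is a minimal sketch in Python using entirely hypothetical numbers (the cell means, the 0–10 purchase-intent scale, and the function names are all illustrative, not from any real study). It contrasts a main effect, which averages over the other factor, with an interaction, which is a difference of differences:

```python
# Hypothetical 2x2 design: ad framing (benefit vs. problem) crossed with
# product type (desirable vs. undesirable). Cell values are made-up mean
# purchase-intent scores on a 0-10 scale, chosen to illustrate the pattern
# described in the text: framing helps for desirable products and hurts
# for undesirable ones.
means = {
    ("desirable",   "benefit"): 7.0,
    ("desirable",   "problem"): 5.0,
    ("undesirable", "benefit"): 4.5,
    ("undesirable", "problem"): 6.5,
}

def main_effect_of_framing(means):
    """Average effect of benefit vs. problem framing, ignoring product type."""
    benefit = (means[("desirable", "benefit")] + means[("undesirable", "benefit")]) / 2
    problem = (means[("desirable", "problem")] + means[("undesirable", "problem")]) / 2
    return benefit - problem

def interaction(means):
    """Difference of differences: does framing's effect depend on product type?"""
    effect_desirable = means[("desirable", "benefit")] - means[("desirable", "problem")]
    effect_undesirable = means[("undesirable", "benefit")] - means[("undesirable", "problem")]
    return effect_desirable - effect_undesirable

print(main_effect_of_framing(means))  # 0.0 -> no overall framing effect
print(interaction(means))             # 4.0 -> the effect flips with product type
```

Notice that the main effect here is exactly zero: if you only asked “does framing matter overall?” you would conclude it does not, and miss that it matters a great deal in opposite directions for the two product types. That is why looking for a predicted interaction, rather than scanning for big one-factor effects, is where the insight usually lives.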