Despite advances in computer technology and our growing dependence on the Internet for answers to our questions, we may not trust the answers we find.
A fascinating 2013 study by Ryen W. White of Microsoft Research, published online April 18, 2014 in the Wiley Online Library, presents the hypothesis that searchers approach search with preconceived biases and opinions about the answers to their questions.
“Participants who had strong presearch beliefs related to a particular outcome were unlikely to change those beliefs even in light of significant contradictory evidence when we manipulated the result list.”
Information retrieval research has focused on the impact of rank position, domain preference, and the captions and text that appear within the top search results. These are the factors most content writers think about when developing articles and blog posts, and the ones that search engine optimization and marketing strategies capitalize on.
Less understood is how search engines mine behavioral signals. Several studies indicate that when many searchers share a particular belief, search engines can return result lists skewed toward that perspective, basing recommendations and information preferences on myths and common misconceptions.
In other words, you can’t always believe what you see in search engine results when asking a question.
One example comes from a study (Rochman, 2011) in which a parent using a search engine to seek an objective answer to the question “Can vaccines cause autism?” may encounter a biased result list learned from aggregated user behavior, despite there being no scientific evidence of a link. Some studies ask whether people might benefit from exposure to diverse perspectives on controversial topics found in search results.
When you consider that many people already distrust search engine results that place paid links, ads, Wikipedia and, in Google’s case, Knowledge Graph answers first, the implications become clear. From a human-computer interaction perspective, we seek answers to resolve uncertainty. How do we do that when the answers delivered to us are skewed and biased?
Add to this the growing manipulative power and influence of social media, and one wonders whether we can get any factual information at all from search engines. Consider what happens when search engine data is controlled by a government. It is easy to see how propaganda could be presented as answers to questions about medicine, cultural behavior and political decision making.
For his study, White asked simple yes-or-no medical questions and recorded each participant’s belief both before and after they saw the search results. When asked, “Is congestive heart failure a heart attack?” respondents first indicated whether their answer was yes, lean yes, equal (unsure), lean no or no. Next, they were shown the top several search results with the instructions, “Below is a search engine result list for the yes/no question above. Use this list to try to answer the question, clicking on search results as needed.”
Once they had found their answer, they were asked, “Mark the answer below that best matches your belief about the outcome given that you have reviewed the search results.” The same rating options were provided: yes, lean yes, equal (unsure), lean no or no.
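The pre/post rating procedure above amounts to measuring a shift on a five-point scale. Here is a minimal sketch of how such ratings could be encoded and compared; the numeric mapping is an assumption for illustration, not the coding scheme used in the paper.

```python
# Hypothetical encoding of the five-point belief scale from the study.
# Negative values lean toward "no", positive toward "yes".
SCALE = {"no": -2, "lean no": -1, "equal": 0, "lean yes": 1, "yes": 2}

def belief_change(pre: str, post: str) -> int:
    """Signed belief shift; positive means movement toward 'yes'."""
    return SCALE[post] - SCALE[pre]

# A participant who leaned no before searching and was unsure
# afterward shifted one step toward "yes".
print(belief_change("lean no", "equal"))  # 1
```

The study’s finding that “changes in belief are typically small” would correspond to most shifts clustering near zero under an encoding like this.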
The study also asked two practicing physicians to review 1000 yes-or-no medical questions and provide an answer to each, considering the most common scenario that could apply when a searcher types such a question on the Internet. Of the 1000 questions, there were 647 for which the physicians agreed on the yes or no answer.
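That filtering step, keeping only the questions where both physicians gave the same answer, can be sketched as follows. The question list below is a made-up placeholder, not the study’s actual data.

```python
# Hypothetical (question, physician_1_answer, physician_2_answer) tuples.
answers = [
    ("Is congestive heart failure a heart attack?", "no", "no"),
    ("Hypothetical question A?", "yes", "no"),   # disagreement: dropped
    ("Hypothetical question B?", "yes", "yes"),
]

# Keep only questions with physician agreement as ground truth.
ground_truth = {q: a1 for q, a1, a2 in answers if a1 == a2}
print(len(ground_truth))  # 2
```

In the study itself, this step reduced the 1000 questions to 647 with an agreed-upon answer.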
This may be of interest to those following Google’s attempts to provide medical answers it claims are accurate and based on input from medical doctors.
Another part of the study may capture the interest of search engine marketers: it asked participants to describe what part of the caption, or what some refer to as rich snippets, answered the question for them. One common tactic is to write an article headline in the form of a question commonly used in search queries. Less often applied, however, is providing the answer to that question in the meta description, or at least an indication that the answer is easy to find, which becomes a persuasive design and conversion factor.
What happens when a search result does not provide an answer, or provides the answer the searcher wants to believe is correct? How do search engines take into account searchers’ beliefs, biases and even belief revision?
“Our results show that even for assigned questions where participants may not have a personal commitment, they often stick with the original belief as they selectively explore the results presented by the search engine. Changes in belief are typically small as a result of searching and remained that way for some searchers even when we manipulate the relative availability and ordering of answers in results to isolate the effects of presearch beliefs.”
White’s research concluded that search engines do not promote accurate answers to direct question queries, either through ranking or by displaying answers directly. Personalization and other retrieval settings are an area that needs more research.
For further reading:
Cho, J., & Roy, S. (2004). Impact of search engines on page popularity.
White, R.W. (2013). Beliefs and biases in web search.
White, R.W., & Horvitz, E. (2013). Captions and biases in diagnostic search.
Image credit: “Man Searching” by hywards, www.freedigitalphotos.net