
New model for belief-revision which accounts for confirmation bias

Posted July 22, 2014

Many of our cognitive decisions can be said to be made without our own consent. For example, it has been observed that people who perform pointless or futile tasks tend, when asked, to make up fairly consistent explanations for why they are behaving in such a way. This cognitive phenomenon is usually referred to as cognitive dissonance, and its main characteristic is that a person rationalizes their behavior without strictly believing the rationalization.

Bayes’ rule is often used as a reference in modeling belief-revision. Yet it fails to account for more nuanced aspects of the way humans change their opinions. Image credit: Bayes’ Theorem MMB 01 by mattbuck via Wikimedia Commons

Another, similar unconscious cognitive phenomenon is confirmation bias. It occurs when a person selectively favors those opinions that are in accord with their prior beliefs. Examples of such bias can be found in all sorts of contexts, ranging from politics to media to everyday life.

Besides being at times amusing and at other times frustrating features of our minds, such phenomena pose serious challenges to cognitive scientists aiming to provide models that explain how we acquire, form and change our opinions.

Standard belief-revision models in cognitive science assign subjective probabilities to propositions for each opining agent. For example, someone might be 90% certain that the next day is going to be sunny, thus assigning the probability 0.9 to this particular proposition. Then, when someone else reports a 30% chance that tomorrow will be cloudy, the first person has to account for this and compute the probability of tomorrow being sunny given the new information. In this way the information presented by the second person helps the first to revise their opinion. The scheme itself corresponds to one of the fundamental rules of probability theory, Bayes’ rule, and for this reason such models are called Bayesian.
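To make the update concrete, here is a minimal sketch of a single application of Bayes’ rule in Python. The likelihood values are illustrative assumptions for the weather example above, not figures from the study:

```python
# One Bayesian update of the belief "tomorrow is sunny".
# All numbers are illustrative assumptions.

prior_sunny = 0.9        # the agent is 90% certain of a sunny day
like_if_sunny = 0.3      # assumed: chance of hearing the "cloudy" report if it IS sunny
like_if_not_sunny = 0.7  # assumed: chance of hearing it if it is NOT sunny

# Total probability of the report, then Bayes' rule.
p_report = like_if_sunny * prior_sunny + like_if_not_sunny * (1 - prior_sunny)
posterior_sunny = like_if_sunny * prior_sunny / p_report

print(f"revised belief in a sunny day: {posterior_sunny:.3f}")  # ~0.794
```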

Bayesian models are widely used in today’s research in artificial intelligence. They often provide an adequate description of subjective probability, which is thought to be essential in human reasoning. But Armen E. Allahverdyan of the Yerevan Physics Institute (Armenia) and Aram Galstyan of the USC Information Sciences Institute in California, USA, claim that the Bayesian model of belief-revision fails to provide an adequate description of how we change our opinions.

One of the major shortcomings of the model described above is that it becomes unrealistic when reapplied multiple times. For example, if the first person has already revised their opinion about the next day being sunny in light of the reported 30% chance of clouds, according to the Bayesian model they would change their opinion once more when given exactly the same information. What is more, the probability the first person assigns to tomorrow being sunny will gradually approach zero: with each repetition of the same argument they become more and more convinced that it is going to be cloudy.

Another problem is that according to the Bayesian model an agent will revise their opinion even when presented with exactly the same information that they already believe to be true.
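A few lines of Python make this pathology visible. With the same illustrative likelihoods as before, an agent who treats each repetition of the claim as fresh evidence grows ever more certain of clouds:

```python
# Repeatedly applying Bayes' rule to the SAME piece of information.
# Likelihoods are the illustrative assumptions from the earlier sketch.

p_sunny = 0.9
like_if_sunny, like_if_not_sunny = 0.3, 0.7

for repetition in range(1, 11):
    p_report = like_if_sunny * p_sunny + like_if_not_sunny * (1 - p_sunny)
    p_sunny = like_if_sunny * p_sunny / p_report
    print(f"after hearing the claim {repetition:2d} time(s): P(sunny) = {p_sunny:.4f}")

# P(sunny) falls from 0.9 toward 0, although no new information ever arrived.
```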

Both of these problems run counter to the way humans form and change their opinions. We do not end up assigning zero probability to a belief merely because an argument that made it seem somewhat less likely is repeated a sufficient number of times.

That is why Allahverdyan and Galstyan devised a new, non-Bayesian belief-revision model that accords better with empirical findings. It solves the two problems described above and also gives a mathematical description of cognitive phenomena such as confirmation bias and cognitive dissonance.

Empirical studies of confirmation bias aim to measure how a person’s willingness to change their opinion depends on the discrepancy between their own beliefs and those presented to them. It had been thought that the relation is linear – that is, that the size of the opinion change is simply proportional to the discrepancy. However, although it is true that humans tend to reject opinions that are either too far from or too close to their own, according to the gathered data the relation is linear only at small discrepancies and non-monotonic otherwise.
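The qualitative shape can be sketched with a toy weighting function. The formula below is a hypothetical illustration of “linear for small gaps, non-monotonic overall”, not the authors’ actual equation:

```python
import math

def opinion_shift(discrepancy, tolerance=1.0):
    """Toy model (not the paper's equation): the shift grows roughly
    linearly for small discrepancies, then decays once the presented
    opinion lies too far from the agent's own."""
    return discrepancy * math.exp(-(discrepancy / tolerance) ** 2)

for d in [0.1, 0.25, 0.5, 1.0, 2.0, 4.0]:
    print(f"discrepancy {d:4.2f} -> opinion shift {opinion_shift(d):.3f}")

# The shift rises almost linearly at first, peaks, then falls toward zero:
# a sufficiently distant opinion produces almost no change at all.
```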

Allahverdyan and Galstyan’s model accounts for the details above. It also accommodates the possibility of cognitive dissonance – an agent’s holding two widely conflicting beliefs simultaneously – within the mathematical description, which is harder to do in the Bayesian model.
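If an agent’s opinion is represented as a probability distribution over possible stances, as the paper does, cognitive dissonance can be pictured as a distribution with two separate peaks. The snippet below is only a cartoon of that idea, not the authors’ update rule:

```python
import math

def gaussian(x, mean, sigma):
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def dissonant_belief(x):
    # Equal mixture of two widely separated beliefs about a stance x:
    # the agent assigns high credence to two incompatible positions at once.
    return 0.5 * gaussian(x, -2.0, 0.5) + 0.5 * gaussian(x, 2.0, 0.5)

for x in range(-3, 4):
    bar = "#" * round(40 * dissonant_belief(x))
    print(f"x = {x:+d}  {bar}")

# The crude histogram shows two modes instead of one consensus value.
```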

A model that accounts for various involuntary and sometimes conflicting cognitive phenomena can be regarded as a more accurate and complete description of the way our minds work. Beyond that, a mathematical description of such details may bring science closer to human-like AI by formalizing ever more nuanced aspects of our minds.

Article: Allahverdyan AE, Galstyan A (2014) Opinion Dynamics with Confirmation Bias. PLoS ONE 9(7): e99557. doi:10.1371/journal.pone.0099557
