

These diagrams cram a lot into the gray box in the middle representing a "weighting algorithm". Sometimes the algorithm will place almost all its weight on raw experience, and the end result will be raw experience only slightly modulated by context. Other times it will place almost all its weight on context, and the end result will barely depend on experience at all. Still other times it will weight them 50-50. The factors at play here are very complicated, and I'm hoping you can still find this helpful even when I treat the gray box as, well, a black box.

The cognitive version of this experience is normal Bayesian reasoning. Suppose you live in an ordinary California suburb and your friend says she saw a coyote on the way to work. You believe her; your raw experience (a friend saying a thing) and your context (coyotes are plentiful in your area) add up to more-likely-than-not. But suppose your friend says she saw a polar bear on the way to work. Now you're doubtful: the raw experience (a friend saying a thing) is the same, but the context (ie the very low prior on polar bears in California) makes it implausible.

Normal Bayesian reasoning slides gradually into confirmation bias. Your friend makes a plausible-sounding argument for a Democratic position. You believe it; your raw experience (an argument that sounds convincing) and your context (the Democrats are great) add up to more-likely-than-not true. But suppose your friend makes a plausible-sounding argument for a Republican position. Now you're doubtful: the raw experience (a friend making an argument with a certain inherent plausibility) is the same, but the context (ie your very low prior on the Republicans being right about something) makes it unlikely. Still, your friend just has to give you a good enough argument; each argument will do a little damage to your prior against Republican beliefs.
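The weighting of context against raw experience can be sketched as a toy Bayesian update in odds form. All the numbers below (priors, likelihood ratios) are made-up illustrative values, not anything claimed in the text; the point is only the shape of the calculation.

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same raw experience: a friend's report, which we take to be
# 20x more likely if the sighting really happened (hypothetical number).
report_strength = 20.0

# Only the context differs: coyotes are common, polar bears are not
# (both priors are hypothetical).
print(posterior(0.30, report_strength))  # well above 0.5: you believe her
print(posterior(1e-6, report_strength))  # still tiny: you doubt her

# "Each argument will do a little damage to your prior": repeated modestly
# convincing arguments (hypothetical likelihood ratio of 3 each) gradually
# erode even a strong prior against the position.
p = 0.01
for _ in range(5):
    p = posterior(p, 3.0)
print(p)  # now more likely than not
```

Note that the same update rule produces both behaviors: with an extreme enough prior, the identical report barely moves you, while a merely skeptical prior can be worn down by a handful of decent arguments.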
