Transcript: Why We Believe, with Michael Shermer

This is an automatically generated transcript from Descript. Please excuse any mistakes.

Turi: Today, we’re thrilled to be talking to Dr. Michael Shermer. Michael is a science writer, historian of science, founder of the Skeptics Society, which is one of the biggest societies of skeptics anywhere in the world, and editor-in-chief of its magazine, Skeptic. His most recent book is called Giving the Devil His Due, on the free speech wars raging across the West, which is of course also central to our Parlia project, and to which you’ll find a link in our show notes. But today we want to be talking about belief, on which Michael wrote the seminal book The Believing Brain. We’ll be talking about belief not just as a phenomenon related to God and extraterrestrials and all sorts of other things which skeptics are interested in debunking, but also more generally, thinking about belief as one of the ways in which we think. Michael, we’re thrilled to be talking to you. Thank you for joining the Parlia podcast.

Michael: Oh, well, you’re welcome. And thanks for having me on, I appreciate it.

Turi: Michael, if you don’t mind, I want to kick off with a bold statement that you make in your book, which is the following: that belief comes first, and the evidence for it comes after. What do you mean?

Michael: I mean that our brains are wired to think more like lawyers than scientists, that is to say, to win arguments, to bolster our case, to reinforce what we already believe, which we usually formed from non-empirical sources, that is, influences in our immediate environment: family, friends, colleagues, mentors, school, literature, pop culture, media and so on. And then if asked to defend our belief, we come up with reasons. But those usually weren’t the reasons we initially formed our beliefs from. So the larger picture is: to what extent are our senses designed for what’s called veridical perception, that is to say, perceiving the world as it really is?

And we know from evolutionary theory that perceptions are very species-specific, the famous example being bats, or maybe dolphins, with their echolocation systems. Whatever a shark looks like to my brain, it surely doesn’t look like that to a bat’s brain or to a dolphin’s brain, or say a tree branch to a bat’s brain, because they have different sensory apparatus. So the image, whatever that would be, in their brain is going to be rather different from mine. And that’s good enough, because all you have to do is survive and reproduce and get your genes into the next generation. The purpose of evolution is not to design brains that accurately and correctly identify reality as it really is.

Although you have to know some basics, like there really is a tree limb there, or there really is a shark in the water for the dolphin. But beyond that, if we want to get social about it, since we’re a social primate species, it’s about showing that our tribe, our group, is the best group, or that we have the best arguments, or we have the best theory. So political tribalism is very much about winning arguments, not about figuring out what’s true. So we might make the distinction, back to where I began, and I’ll finish that answer here: lawyers’ job is not to figure out what happened. They’re hired to defend their client.

Let’s say you’re a defense attorney. Alan Dershowitz famously says he didn’t even ask OJ if he did it. He doesn’t want to know; it’s not his job to figure out what OJ actually did. His job was just to get him off. That’s it. And if he doesn’t try to do that, he’s not doing his job; he could be fired. Whereas a scientist would think, well, no, I don’t want to win or lose, I just want to figure out what actually happened. So those are the two different ways of thinking about it. And our brains are wired more like the lawyer than the scientist.

Turi: That’s a beautiful metaphor. But why is evidence so unimportant to our beliefs? It’s critically important to be able to see the tree branch so we don’t smack into it, and the shark so we don’t get eaten by it, and to make sure we know which mushrooms we should be eating and which not; there we rely on evidence. So why is evidence so unimportant to our beliefs, and yet our beliefs so important to who we are?

Michael: It’s not that evidence isn’t important. It is important once you have the belief, and you then find evidence to support it. So we’re pretty good at collecting evidence that fits and ignoring evidence that doesn’t fit, or counters our beliefs.

So this is famously called the confirmation bias, where, for example, you look for and find confirming evidence for what you already believe, and you ignore the disconfirming evidence. So once you’ve formed a belief, say in a particular conspiracy theory, you know, the Jews are running banking and the media, well, then all of a sudden you’re going to start noticing newspaper articles and television news stories about this guy doing that; oh, he’s Jewish, oh, that explains it. And you’re going to automatically ignore, or rationalize away, anybody who’s powerful in banking or the media who’s not Jewish. And so you just notice the hits and you forget the misses. That’s the confirmation bias, under the larger umbrella of motivated reasoning. We all do it. And even scientists, who are trained not to do it, still do it anyway. And so one of the built-in mechanisms of science is its self-correcting machinery, in which your colleagues are going to challenge you.

And they’re going to look for the disconfirming evidence to try to disprove your hypothesis, so you’d better do it first. And that’s the motivation for the scientist to act like a scientist rather than a lawyer: because of the social nature of science, the colleagues are going to challenge you on it.

Turi: From an evolutionary, neurological point of view, you describe our belief function as an example of patternicity, this word which I hadn’t come across, and you’ve just given us a beautiful example in confirmation bias: we look for patterns out in the world. What is this patternicity that you describe? Why did we evolve it? Why is it helpful to us?

Michael: So I coined two terms, because I felt like the language of skepticism was a little short in this area, and that’s patternicity and agenticity. Patternicity is the tendency to find meaningful patterns in both meaningful and meaningless noise. The other terms, like apophenia and one other, only talk about misperceiving things, that is, seeing a pattern in random noise that’s not actually there. I wanted a broader term. That is to say, some patterns are real and some are not; how do we know the difference? So patternicity is a tendency for the brain to wire up and find connections,

whether the connections are real or not. And therefore we need some other tool to discern whether the pattern is real or not, and that’s called science. So my thought experiment on how this evolved is: imagine you’re a hominid on the plains of Africa three and a half million years ago. Your name is Lucy. You’re a little tiny Australopithecus afarensis, and you hear a rustle in the grass. Is it a dangerous predator, or is it just the wind? Well, if you think the rustle in the grass is a dangerous predator and it turns out it’s just the wind, that’s a type one error, a false positive, and you just kind of move around it or you run off.

But if you think that the rustle in the grass is just the wind and it turns out it’s a dangerous predator, that’s a type two error; that is to say, you’ve misperceived it and you’re lunch. That’s a high-cost error to make. So my argument is that we evolved the tendency to make more type one errors than type two errors. That is to say, assume that most rustles in the grass are dangerous predators rather than the wind, just in case. And this is the basis of magical thinking and superstition: connecting A to B and assuming that A causes B, or A is the equivalent of B, or it’s a proxy for B. And that’s called learning.

I mean, that’s a good thing to do to survive and reproduce and flourish; again, it’s not veridical perception where you get it right every time. For example, why can’t you just wait and collect more data about the rustle in the grass? Well, the answer is, predators don’t wait around for prey animals to collect more data. They try to stay upwind so their scent can’t be tracked, or they’re camouflaged, or they stalk and sneak up on their prey. All of that is to prevent a potential prey animal from collecting more data and making a correct decision. So since the prey animal can’t do that,

it has to make a snap decision. And so the tendency is there to err on the side of caution, that is, to be a little paranoid. By the way, in later work I’ve applied this to conspiratorial thinking. Most conspiracies have to do with negative things, bad things: people cheating people, taking advantage of people, collaborating in secret to gain some unfair financial or moral or political advantage over others without them knowing about it. It’s always something negative. And the reason for that is the negativity bias: there are more ways for things to go bad than good. So we’ve also evolved this propensity to notice bad things and pay close attention to them. That’s not always good for optimism, it makes us more pessimistic, but again, there’s a kind of evolutionary logic behind it that makes perfect sense.
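To make the cost asymmetry Michael describes concrete, here is a minimal sketch in Python. The predator probability and the two error costs are invented numbers for illustration, not figures from the conversation.

```python
# Rustle-in-the-grass decision: compare the expected cost of always fleeing
# (tolerating type I errors) with never fleeing (risking type II errors).
# All numbers below are illustrative assumptions.

p_predator = 0.05       # assumed chance the rustle really is a predator
cost_false_alarm = 1.0  # type I error: flee from the wind (wasted energy)
cost_miss = 1000.0      # type II error: ignore a real predator (you're lunch)

expected_cost_always_flee = (1 - p_predator) * cost_false_alarm
expected_cost_never_flee = p_predator * cost_miss

print(f"always flee: expected cost = {expected_cost_always_flee:.2f}")
print(f"never flee:  expected cost = {expected_cost_never_flee:.2f}")
# Even at a 5% predator probability, fleeing every time is roughly 50x cheaper
# on average, which is the evolutionary logic for defaulting to the false positive.
```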

Turi: On some level it sort of sounds like, as you said, learning. It sounds like thinking, extrapolating, trying to make sense of the world around us. Is that not kind of what the human mind does when it’s most intelligent? So what’s the flaw in this method of interpreting the world?

Michael: Well, that’s the point. It’s not a flaw, it’s not a bug in the system, it’s a feature. It’s a good thing that we find patterns, because again, a lot of patterns are real. So let’s take something like: do CO2 gases cause global warming? Well, there’s a pattern for this, the famous graph that Al Gore demonstrated quite strikingly, visually, in his film An Inconvenient Truth, where you have this jagged sawtooth curve going ever upward of increase in CO2 gases, followed, with a small lag time behind, by the earth getting warmer. So is that a true pattern or is that a false pattern? That is, are we seeing a pattern that isn’t really there, or is it really a pattern? Well, over 25 years of climate science, climate scientists have determined yes, that is a real pattern; there is a causal link between those two variables.

And this gets to a deeper question, since you asked: how do we determine causality in the first place? David Hume famously first defined it as constant conjunction: A happens, then B happens; A happens, then B happens; and the brain thinks, huh, I think there must be a link between A and B. That’s called learning; again, the rustle in the grass, the dangerous predator. But then Hume famously debunked his own definition: the rooster crows and the sun rises, and this happens every morning.

So it’s natural to think, hey, I can make the sun come up, watch. And of course he then gave a second definition, which is a counterfactual theory of causality. That is to say, if you remove A, does B happen anyway? So you silence the rooster and the sun comes up, and it’s like, oh, okay, so it wasn’t the rooster after all, it was something else. So, you know, if you remove the CO2 gases, would the earth start cooling? Well, yes, we know examples where there’s variation from year to year, time periods where these fluctuate and the Earth’s temperature also fluctuates. So you remove the one and you still have the other, or you don’t, and that’s another way of determining causality. And again, I’m kind of getting into philosophy of science here for you, but back to the brain: science is so new. This whole discussion of what causality is and how we determine it, the randomized controlled experiment in which you hold all other variables constant and vary just the one you’re interested in, a drug treatment say, that’s all new. That’s barely a century and a half old, two centuries at most, maybe two and a half centuries going back to Hume. The human mind evolved over millions of years of understanding causality anecdotally, that is to say, from things that happen in our environment. So again, it’s not that we’re bad at it. I mean, gosh, we can do quantum physics, for gosh sakes. It’s amazing what we’ve been able to figure out about the world. But in our day-to-day lives, that’s not how most of us think. We just go about our days noticing anecdotes: you know, I vaccinated my kid, and then a month later he was diagnosed with autism.

There must be something to that. Okay, that’s just an anecdote, but it’s a powerful one. And Richard Dawkins makes this point that we evolved in Middle World in Africa, where things are of a middling size, say from an ant to a mountain range, and they move at a middling speed, walking speed to running speed, or maybe lightning at the fastest is what you can barely see. So things like global warming, or evolution that happens over thousands or tens of thousands or millions of years, or continents drifting, or quantum physics, the expanding universe, galaxies: there’s nothing in the world we evolved in to give us any sense of these things. They’re just counterintuitive; it’s mystifying how this could be. And so it requires extra steps, called science.
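As a rough illustration of the counterfactual and randomized-controlled-experiment logic described above, here is a small Python sketch; the baseline, the effect size and the noise level are all invented for the example.

```python
# Toy randomized experiment: one group gets the "treatment", an otherwise
# identical group does not. Because everything else is held constant on
# average, the difference in group means estimates the causal effect.
# Baseline, effect size and noise level are invented for illustration.
import random

random.seed(0)

def outcome(treated: bool) -> float:
    baseline = 50.0
    true_effect = 5.0 if treated else 0.0   # the one variable we deliberately vary
    noise = random.gauss(0.0, 10.0)         # everything we cannot control
    return baseline + true_effect + noise

treated = [outcome(True) for _ in range(5000)]
control = [outcome(False) for _ in range(5000)]

estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated effect = {estimate:.2f} (true effect set to 5.0)")
```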

Turi: So just to recap: humans have an embedded pattern-making faculty, a spectacularly good pattern-discerning faculty, let’s call it, and the desire to see patterns, because they’ve helped our survival, embedded from the earliest times back to Lucy trying to figure out how not to get eaten by whatever equivalent of a lion she was sitting next to. That actually helps us think, and it’s helped us discover how the world is made, from Einstein through to Richard Dawkins, as you say. And it turns out, just as a side point, that all sorts of species, kind of all species, have this pattern-making faculty, and I’m going to ask you to talk about that in a second. But the key piece here is that the difference between what you would classify as a belief and a piece of information is the corroboration. And to come back to a point you were making about the difference between science and anecdote: the human mind is conditioned to rely on anecdote. It sees anecdote as a way of reinforcing the patterns that it sees in the world, and it finds it much, much harder to engage with science on that level.

And this is in a sense what leads to this misplaced thinking, this extension of belief into the world.

Michael, you’ve identified this fundamental capacity for pattern discerning on behalf of humans, but you also describe another activity that we do, and you call it agenticity. You say, and I quote you: we are natural-born supernaturalists, driven by our tendency to find meaningful patterns and then to impart to them intentional agency. Help us understand that last part.

Michael: Well, let’s go back to our hominid on the plains of Africa, hearing the rustle in the grass: is it a dangerous predator, or is it just the wind? What’s the difference between a dangerous predator and the wind? Wind is an inanimate force. A predator is an agent, with intention, and its intention is to eat me. Therefore I should be very cautious about it, more so than about just an inanimate object or force. So that’s the idea of imparting agency to things, although I should note that in many animistic worldviews, even the wind has agency to it, you know, there are gods behind it, or everything has some kind of agency to it.

So that idea can expand quite dramatically, but the idea is that it’s better if I assume intention, because then I can take it more seriously, rather than just thinking that things happen; instead, things happen for a reason. And this comes all the way up to conspiracy theories, or modern religious beliefs, or beliefs in spiritualism, or angels and demons, and so on. You hear this meme amongst religious believers in particular: everything happens for a reason. And this could be fate, or synchronicity, or some kind of destiny, all the way up to the evangelical belief that God knows the fall of every sparrow.

And he’s kind of directing everything I do, from finding a parking space at the mall, to getting this job, to finding this person to marry, and so on. That is a much easier explanation of how the world works. I was just writing about QAnon recently, and this whole idea that everything happens for a reason applies to it. Let me just read you this opening thing I got here from a QAnon posting: “Have you ever wondered why we go to war, or why you never seem to be able to get out of debt? Why there is poverty, division and crime? What if I told you there was a reason for it all? What if I told you it was done on purpose?”

And then I went on to say: these things are curious. Why is there war? Why is there crime? And so on. Well, it turns out there are scholars who do nothing but study these problems, and they have elaborate explanations with half a dozen different variables operating at the same time. You have to run these elaborate regression equations to tease out which is the most influential variable that causes crime to go up or down. And there are great debates: is it the broken-windows theory of crime, or is it poverty, or whatever. But to the average person it’s like, screw that. I mean, what if there are just, like, 12 guys in London called the Illuminati?

Turi: In evolutionary terms, I get why all of us want simple explanations, because it just hurts our heads to come up with complicated ones. But what’s with the intentionality? Why do we so yearn for intentionality? Why do we need it to be 12 guys in a room in London? Why does it need to be the Illuminati or the Bilderberg Group, a bad guy or God? What’s the point of that intentionality in evolutionary terms?

Michael: Well, the point would be that it brings into sharper focus the problem you want to identify. And I think there’s another cognitive process here, where you carry a kind of sense of self and self-identity in your head, and therefore it’s easy to project that onto other things. So this is called theory of mind, where we mind-read each other. I don’t mean psychically; I mean, you know, I imagine what you’re thinking, and you imagine what I’m thinking, and then I imagine that you’re thinking what I’m thinking.

And so on. All human relationships are based on this mind reading, and we have to project ourselves into the other person’s shoes and think, well, is he going to find this funny if I tell this joke? If I were him and I heard it, what would I think is funny? So agenticity is a little bit like that. I’m projecting my own agency: I’m an intentional agent, I walk around with intentions and behave based on those intentions, and I bet other people do too. And then it’s a small step from other people to: animals have intentions too, and so on. And then all of a sudden you’re concocting demons and angels and gods and secret conspiracies.

Turi: It’s the pattern matching again, on some level. Because you understand the world through your own agenticity, it makes sense for you to describe external phenomena via the lens of agenticity too.

Turi: There is a wonderful story that I have to ask you to tell, about B.F. Skinner’s pigeons. The reason I want you to tell it is because it speaks both to all species’ capacity to make patterns and to that issue of agenticity. Could you recount that experiment for us?

Michael: Yeah. So B.F. Skinner, famously, in 1954, put these pigeons on a random, variable schedule of reinforcement. That is, instead of saying I’m going to reinforce the pigeon after every six pecks of the key, or three on the right and two on the left and then you get a reward; you know, they’re bird brains, but they’re not that dumb: they figure out the pattern and they figure out how to get more food.

So in one experiment, he just put it on a random schedule where there was no pattern at all. And yet the pigeons all figured out that there was some pattern: whatever they were doing just before they got rewarded is what got repeated. Skinner famously defined a reinforcement as anything that causes the organism to repeat its behavior. So maybe just before it got this random reinforcement, it did, say, two twists to the left and one to the right, and then all of a sudden it got rewarded. So it thinks, or it just kind of remembers automatically: well, let’s see, I did two counterclockwise moves and then one clockwise move,

and then I got rewarded, so I’m just going to repeat that. So basically the organisms, and this applies to rats as well, and to humans: just go to Las Vegas, to the slot machines, and watch people with the elaborate superstitious rituals they go through. So all organisms do this, and again, it’s just that A happens, then B happens: whatever you were doing just before you got better gets the credit. And this explains a lot of so-called alternative medicine, like homeopathy. Homeopathic remedies do nothing by definition; there’s nothing of the original substance in there, because they filter it out. So what’s it actually doing? Well, what’s really going on is: whatever you did just before your headache went away, or your tumor went into remission, or your achy knee got better, that’s what gets the credit. And it’s not just homeopathy, of course. It could be that you went and did a meditation session, or you got more sleep, or you ate this or that remedy; whatever it is, that’s what gets the credit. So again, it’s part of that anecdotal thinking.
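The random-reinforcement setup lends itself to a quick simulation. Below is a rough sketch in Python under an assumed, very simplified learning rule: whatever action happened to precede the food becomes a bit more likely to be repeated. The rule and the numbers are illustrative assumptions, not Skinner’s own analysis.

```python
# Toy model of Skinner's "superstitious pigeon" setup: food arrives on a timer,
# completely independent of behavior, yet behavior still gets shaped because
# whatever the bird was doing just before the food appears gets reinforced.
import random

random.seed(1)
actions = ["peck", "turn_left", "turn_right", "bob_head"]
weights = {a: 1.0 for a in actions}  # relative tendency to perform each action

for step in range(1, 2001):
    # Choose an action in proportion to its current weight.
    action = random.choices(actions, weights=[weights[a] for a in actions])[0]
    if step % 15 == 0:          # food delivered on a timer, not for the action
        weights[action] += 1.0  # ...but the preceding action gets the credit

print({a: round(w, 1) for a, w in weights.items()})
# Typically one arbitrary action snowballs: early lucky coincidences make it
# more frequent, so it is more often the action "rewarded" the next time.
```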

Turi: Which is a wonderful, beautiful example in humans, which I think all of us would recognize: we’ve all either drunk too much of a substance that made us feel sick, or eaten too much of a substance that made us feel sick, and forever after there is no way you can smell the thing you ate or drank without that nausea kicking back in. That’s pattern-making live, pattern-making in humans, right? That’s exactly what’s been going on.

Michael: You’ve taught yourself. So there you have a particularly important one, called aversion training, or an aversion connection, where if it’s something toxic that you could die from, it’s probably good that you get it in what’s called one-trial learning. That is to say, you drink too much; in my case, in college I drank too much red wine and got violently sick, and I couldn’t drink red wine for years after that. Because if you’re, again, back as a hominid on the plains of Africa, and you eat a poison mushroom and you get violently ill and you don’t die,

lucky you, you don’t try that again, right? So there’s not enough time to run like a hundred experiments and see what’s actually going on there, whether it’s that mushroom but not the one that’s a slightly different color, and that one’s safe and that one’s not. It’s probably just better to be repulsed by all mushrooms, just in case.

Turi: Yeah, that makes lots of sense. And it’s amazing to see that pattern-making faculty, that pattern-discerning faculty, at work so instantly in that way. But let me use that as a way of asking you about something else. We’ve spoken about these cognitive processes, both patternicity and agenticity, which drive our intelligent thinking and also drive our beliefs.

But there’s also a physicality to belief, and I’d love to ask you a little bit more about that from your research. It sounds as if there are certain types of people who are more prone to seeing patterns in the world, more prone to beliefs, than others. There’s therefore a physical element, maybe a biochemical element, to belief. Is there a belief chemical in the body? Are there certain kinds of chemicals which enhance belief?

Michael: Well, in The Believing Brain I wrote about dopamine, because dopamine is associated with learning and reward and reinforcement, again back to Skinner. And so if you are given extra doses of dopamine, which is done with certain disorders like Parkinson’s, because Parkinson’s attacks the dopaminergic neurons in the brain and kills them; so Parkinson’s patients have less dopamine, which is why they shake, because dopamine is also involved in controlling muscular movements in the motor cortex. And so I speculated there that perhaps there is variation. Well, first of all, back up: we know there’s variation between people on all traits, including how gullible or skeptical you are. So some people are just inherently more skeptical, some are more gullible. For example, on surveys, people who tick the box for one conspiracy theory tend to believe most of them; people who tick the box for astrology also think Bigfoot is real and psychics can really speak to the dead, and whatnot.

So there seems to be some kind of tendency to believe these things, or not, based on something. Of course, upbringing, culture, all that makes a difference. But in the brain, I was speculating about dopamine: maybe people who are higher in dopamine tend to find more patterns and connections. And I made sure to point out that this is not necessarily a bad thing. I mean, artists and musicians and architects and scientists and so on, who are very creative at coming up with new patterns or finding new patterns, we reward them with Nobel prizes and with big contracts and commissions for music and art and whatnot. That’s a good thing. But the problem is, maybe if you’re so open-minded to finding new patterns, when it comes to what we want to know as an empirical truth, is it really true or not, maybe you’re too open to pretty much every claim that comes down the pike being real. So in the book I use the example of Kary Mullis, whom I knew, who won the Nobel Prize for the PCR technique of analyzing DNA.

I used to meet him at these TED-like conferences called Adventures of the Mind, for advanced high school kids. And he was a super interesting guy, really smart and interested in everything, but he was totally convinced astrology was real and aliens were here; HIV doesn’t cause AIDS; he was skeptical of climate change. Pretty much everything we debunk in Skeptic, he was right on board with. I’m like, oh my God, Kary, you’re obviously smart. So it isn’t intelligence that’s the problem here; it’s something else. And I just got the impression he was just super open-minded. Plus, apparently he did a lot of mind-altering chemicals, and maybe that also affected his dopamine levels, or how open he was. Again, it’s not bad to be open, but the problem is, if you’re so open-minded that everything is true, and if everything is real and true, then nothing is. How do you know? And that’s the problem.

Turi: That’s amazing. So there’s this idea that your capacity to form beliefs can be stronger or weaker. We’ve spoken about some cognitive processes which are sort of hardwired in us. We’ve spoken about the belief drug dopamine, and the fact that some people have a little bit more of it than others and tend to be able to find, or discover, or imagine patterns in the world that are maybe not there. But you also talk about a fascinating experiment by Michael Persinger and his famous God helmet, which leads you to say we’re sort of hardwired to believe in God. Can you explain what Persinger did, and what that tells us about how belief works in our heads?

Michael: Yeah. Persinger is interested in kind of drilling down to the neurology, or the neuroscience, of not just God beliefs, but, I mean, he called it the God helmet. It’s this motorcycle helmet with electrical solenoids on the side that bombard your temporal lobes with electromagnetic waves. It’s relatively harmless; they’re very, very light, something slightly stronger than a cell phone signal, say, but you’re not getting zapped and harmed.

His idea was, first of all, that Persinger kind of accepts at face value the stories people tell of their paranormal experiences, and he wonders: is there something going on in the brain to explain it? Well, a lot of times there’s nothing going on, because there’s nothing to explain; they’ve misperceived what they thought happened. But let’s set that aside for a minute. What he wanted to know is: maybe all paranormal and supernatural activity is a product of these neural processes that are happening. Well, what would be the equivalent of that in the environment?

The equivalent of solenoids on a motorcycle helmet bombarding your temporal lobes? And here he kind of goes off the rails, I think, suggesting that, for example, earthquake activity causes these electrical fields in the environment that you are living in, and so you have these experiences. I don’t think there’s anything to that. I think it’s more probably internal, you know, just like Oliver Sacks talking about, in all of his books, where every chapter is some patient he had who had some weird experience, and he drills down and pinpoints it: oh, it’s right here, V1 in the visual cortex, or it’s here on the temporal lobes at the fusiform gyrus, which is the facial-recognition software of your brain, and that’s fried because of a tumor or a stroke, and therefore this person has face blindness. I think it’s more like that: there’s just some neurochemical thing that’s causing the experience to happen internally, not an external event at all.

Turi: It sounds as if belief is also heritable, right? There are lots of twin studies which suggest that identical twins are more likely to share beliefs than, say, fraternal twins, who are themselves more likely to believe or not believe alike than ordinary siblings. So is there a belief gene?

Michael: Yeah, I think the research on this is pretty clear now, with twin studies conducted by behavioral geneticists, that much of our temperament, personality and preferences, which incorporate beliefs, are highly heritable. That is to say, the differences between people are largely accounted for by their genes. So, for example, just take something simple like shyness, introversion versus extroversion, on the Big Five personality dimensions: identical twins are very similar in those characteristics compared to fraternal twins, and compared to people raised in different homes, and even twins raised in different homes, and so forth. So we know there’s a genetic component to something like that. Now, when you get to something like spirituality or religiosity, twins are more likely to be similar in terms of their religiosity and spirituality than non-twins. Well, what could that possibly mean? It can’t be a gene for being a Catholic or something like that.

Well, no, but it’s the tendency to want to join, say, a religious community. And if you happen to live in a country that’s highly Catholic or highly Protestant, like Germany, where my wife is from, which is pretty well split 50-50 Protestant and Catholic, you’re going to be one or the other; of course you’re going to gravitate to one of those, because that’s the cultural element. And then if we go to Republican versus Democrat, liberal versus conservative: again, twins are similar on those. What could that possibly mean? There can’t be a gene for being a Democrat. No, but you tend to like to hang around people who are like you, that feels better, and you like certain characteristics about the environment. Let’s say you’re high in openness to experience: you like to travel a lot,

you’re open to new ideas and to trying new things. Well, people who score high on those features also tend to be more liberal: they travel more, they’re more open to different cultures, and therefore they’re more open to, say, a more open immigration policy. Therefore they’re going to drift toward the Democratic Party rather than the Republican Party. And that’s how we end up with those kinds of features. And from there you can go in any direction, to religion or beliefs in the supernatural, whatever; there’s going to be a genetic component to it.
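For readers who want the arithmetic behind “identical twins are more similar than fraternal twins,” here is a back-of-the-envelope sketch using Falconer’s formula. The correlations below are hypothetical, and real behavioral-genetic studies fit more elaborate models than this.

```python
# Falconer's formula: estimate heritability from how much more similar
# identical (MZ) twins are than fraternal (DZ) twins on a trait.
# The correlations below are hypothetical, chosen only to show the arithmetic.

r_mz = 0.60  # hypothetical MZ twin correlation on, say, a religiosity scale
r_dz = 0.35  # hypothetical DZ twin correlation on the same scale

heritability = 2 * (r_mz - r_dz)    # h^2 = 2 * (r_MZ - r_DZ)
shared_env = r_mz - heritability    # c^2: shared family environment
unique_env = 1 - r_mz               # e^2: unshared environment (plus measurement error)

print(f"h2 = {heritability:.2f}, c2 = {shared_env:.2f}, e2 = {unique_env:.2f}")
```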

Turi: That makes lots of sense. Michael, I want to end with Spinoza’s conjecture, which I love. Your rephrasing of it is: belief comes quickly and naturally, skepticism is slow and unnatural, and most people have a low tolerance for ambiguity. You’ve already touched upon this, the fact that humans are much quicker to jump to anecdotes than to evidence, and there’s a fundamental difference between the two. But if that’s the case, how do we keep ourselves mentally healthy? What are your mental keep-fit exercises?

Michael: Yeah. First of all, since I wrote that in The Believing Brain, let’s see, that was 2011, there’s been new research showing that we’re maybe not as gullible, and maybe we’re more skeptical, than I had written about then, because there’s new research showing that it is hard to convince people to, say, change political parties, change religions, or give up beliefs when you present them with new evidence. Because, again, I got this part right: we’re committed to our certain beliefs. But, for example, Hugo Mercier, in his book Not Born Yesterday, makes the point that most political ads, most corporate advertising, is a complete waste of money. Almost nobody changes their mind or buys a new product because they saw a commercial or an ad for this candidate or that car, or whatever. And most people don’t join cults, for example. I write a lot about cults, and the characteristics of cults, and why people join cults, but most people don’t join cults. There are tens of thousands of self-help movements and groups and religions and sects and this and that in the world, and most of them are harmless.

They don’t become cults. And most people are disinclined to join a cult that’s damaging, and so forth. So it may be that we’re not that gullible. I’m encouraged by that, actually; it’s a more optimistic view of human nature than I had written about initially, and that’s good. But the problem obviously still exists of political tribalism. What’s emerged in the last four years is this whole bubble thing, these media bubbles; we live in these silos where, if you’re liberal, you only read the New York Times, and if you’re conservative, you only read the Wall Street Journal, or you watch Fox News and listen to conservative talk radio.

And if you’re a liberal, you listen to other shows, or watch Rachel Maddow, or whatever. So the solution to this is to force yourself to be exposed to other positions. If you’re a pro-lifer, read what the pro-choice positions are, and vice versa. If you’re a liberal, read the Wall Street Journal op-ed pages to see what these people are thinking. How good are your arguments if you don’t even know what the other side is arguing? They’re not very good. This is the point I make in Giving the Devil His Due, quoting John Stuart Mill: he who knows only his own side of the case knows little of that. You’ve got to know what the other position is. You’ve got to force yourself to talk to people who are different from you. And this goes back to science: this is what science does. To make sure you haven’t gone off the rails before you put something into print, you talk to one of your colleagues and say, I have this idea; just to make sure I haven’t gone off the rails, can you take a look at this and give me some feedback? Particularly somebody who would not tend to agree with you. You wouldn’t want to ask a student this, because they’re going to think, well, I’m going to tell him what I think he wants me to tell him.

No, that’s not what you want. You want a so-called team of rivals: people who don’t like you, or disagree with you, or are motivated to find something wrong with your ideas. That’s who you want in your circle, along with your friends and people who agree with you, of course.

Turi: Well, that’s a beautiful place to end. That’s very much what Parlia is trying to do, and it’s very much up our alley. This has been a thrilling conversation. Thank you so much for taking the time to talk to us, and I hope we get to talk to you again about your latest book.

Michael: Well, and again, one of the solutions to this problem is people like you and your site. We just need more of that, this kind of open conversation and dialogue.
