Kyle interviews Steven Sloman, Professor in the school of Cognitive, Linguistic, and Psychological Sciences at Brown University. Steven is co-author of The Knowledge Illusion: Why We Never Think Alone and Causal Models: How People Think about the World and Its Alternatives. Steven shares his perspective and research into how people process information and what this teaches us about the existence of and belief in fake news.
Thanks to Galvanize for sponsoring this episode.
People define fake news in very different ways, and they also tend to believe that they know much more than they really do. According to our guest, Steven Sloman, there's a sense in which fake news is like pornography, meaning you know it when you see it. While most of us believe that we have direct access to reality, that some things are true and other things are clearly false, Steven doesn't.
Welcome to Data Skeptic, a podcast about Data Science and Fake News, from an algorithmic perspective. Here's your host, Kyle Polich.
Kyle: Coming to you from an undisclosed location in the Los Angeles area, welcome to the podcast famous for resolving Goldbach's conjecture. This is Data Skeptic, and you're listening to our ongoing series on fake news. Today, I'll be interviewing Steven Sloman, a professor of Cognitive, Linguistic, and Psychological Sciences at Brown University, and the current Editor-in-Chief of the journal Cognition.
He's the author of several books, including my favorite, “The Knowledge Illusion: Why We Never Think Alone”. This was one of the books that really got me thinking about fake news, so I was glad to have the opportunity to speak with Steven about his research.
Steven: People define fake news in very different ways, and so I think it's important not to get too caught up in the definition. There's a sense in which fake news is like pornography, that is, you know it when you see it. I'm not one who believes that we have direct access to reality, that some things are true and other things are clearly false.
Nevertheless, there are certain things we're told that are grounded in empirical reality, and there are other things about which people are intentionally trying to deceive us; and I think fake news refers to the latter kind of thing.
Kyle: Do you think this is a new phenomenon? Obviously it's in the public consciousness, but is something fundamentally different today than it was maybe 20 years ago?
Steven: Well, something is fundamentally different, but no, it's not a new phenomenon. In fact, I think it's hundreds if not thousands of years old. People have been trying to deceive one another by telling stories about the horrible things the enemy has done, for instance, since time immemorial. In fact, Walter Lippmann wrote a book in 1922 describing propaganda in World War I that clearly included descriptions of events that did not occur, and descriptions of people who were nothing like those descriptions.
And moreover, you know, there's a history of stereotypes and caricatures of all kinds of people that goes back hundreds of years. Fake news is part and parcel of how societies have operated since time immemorial. Clearly, technology has changed the nature of the game, in more than one way. On one hand, it's provided us untold amounts of information; we now have access to all kinds of information that we didn't have access to before. Some of that information corresponds to reality, so we have more of that, and some of it doesn't correspond to reality; we have more of that too.
So there's a much bigger problem in sorting through all of the data that we now have on our eyeballs and at our fingertips.

Kyle: So I have a little bit of a bias, I guess. I feel like I can tell fake news when I see it, and that I'm not going to fall for it. And maybe that's a dangerous point of view to have. Do you find that people generally think that way, or are people aware that they're in a world that's actively working on deceiving them?
Steven: I definitely think that the cognitive system is such that, when we encounter information, we naturally perceive it to be factual. Dan Gilbert, the social psychologist at Harvard who started the whole literature on this topic, distinguished what he called the Cartesian view of truth from the Spinozan view of truth. Descartes said, “We encounter a fact and we decide whether it's true or false. If it's true we accept it; if it's false we reject it”, whereas Spinoza said, “We encounter a fact, it automatically enters our database of knowledge, and then, maybe, under the right circumstances, we decide that it's false and throw it out.”
And what Gilbert's data suggest is that people are much more Spinozan than they are Cartesian. The act of rejecting stuff is not automatic; it takes a real effort. So in that sense, we all think that the information we encounter is reality, in the same sense that we think the stories we hear are direct reflections of actual events in the world.
Though I suspect it's also the case that if you asked people, “What proportion of the news that you are told by the media is true?”, they would tend to give low estimates. I mean, the psychology of what people accept in the media, and don't accept, is rather complicated, and depends a lot on our ideological commitments.
Kyle: In my mind, there's sort of a gradient. At one extreme, there are academic publications in peer-reviewed journals with high impact factors, and that, you know, is not ground truth, but it's about as close as we can get. At the other end are maybe the rags you pick up in a grocery store line. And I'm always trying to assess where a bit of information falls along that spectrum. Am I somewhat unique in that way, or do other people have this ability to place context around news when they receive it?
Steven: I wouldn't say you're unique, but I would say you're… a member of a rare breed. And of course, I don't know you, Kyle; we've never met before, and so I would really want to test you and find out whether you're really as critical as you think you are. I'm sure you are, since…
Kyle: Well, I didn't claim I was good at it. I just said I aspire to it.
Steven: Yeah, I don't think most people do. In psychology, there's a test called the Cognitive Reflection Test, developed by a guy named Shane Frederick, who is now at Yale. It's three very simple questions, all of which have the following structure: there's a clear intuitive answer that pops to mind, but it's an answer that's wrong, and it's not hard to find the right answer, but it does take a small amount of cognitive work.
This… so, I'll give you an example. This isn't one of the questions, but it gives you a flavor of them: how many animals of each kind did Moses load on the ark?
Kyle: Well, I'm inclined to say two, but I feel like there's a trap being set for me here [laughs]
Steven: Well, in fact, Moses didn't load any animals on the ark. It was Noah who loaded animals on the ark. So there was a trap [laughs], and the point of the example is that there's a simple answer that pops to mind, but it doesn't take much thought to discover that, yes, there's a trap, and to come up with the right answer.
And that's the nature of these simple questions on the Cognitive Reflection Test: there's an intuitive answer that you have to inhibit in order to think a little bit more and come up with the correct answer. So people who get these questions right, we can call reflective. People who don't get them right, we can call intuitive or non-reflective. And it turns out the vast majority of the population is non-reflective.
About 70% of Americans are non-reflective, and over 50% of MIT students are non-reflective. There are non-reflective people everywhere, in every discipline, in every area of life. And what the data show is that non-reflective people are much less likely to deliberate about what they encounter. And so they're much less likely to evaluate fake news and decide whether it's true or false. They're much more likely to just accept it and go on with their day.
Which is not unreasonable, in a way. I mean, the vast majority of information you encounter in a day, you just accept and go on with. You don't question everything. Yes, you should be critical of news, but that requires putting news into a special category. It requires a real, thoughtful act to be critical. And even with news, you can't do it all the time. It would just take too much time.
Kyle: So, from the statistics you gave, if we consider non-reflectiveness to be sort of an ailment, then it's rather prevalent in our culture. Is that something you think we should be working on, better educating people and trying to develop these skills, or do we just need a society that works with the knowledge that people are like this?
Steven: I would say mostly the latter. I do not want to call non-reflectivity an ailment. Any trait that 70% of people have, I would not want to call an ailment. You know, there's a reason that we're not reflective about everything. It takes time to be reflective. It takes resources to be reflective, and it inhibits other kinds of processes that we might want to engage in. It might make us less creative to be reflective.
So we have to trade off deliberating and being critical of everything against all of our other goals in life. Of course we want to teach people critical reasoning strategies, in the sense that when there's an opportune time to be reflective, or to be critical, people should know what to do. I gotta tell you, though, it feels to me like that's not really the critical problem with fake news. Now, if I'm standing in line in the grocery store, that's one thing, but if I'm sitting at my computer and come across some wild claim, I don't generally find it really hard to discover whether it's true or false. I mean, there are websites you can go to, but you can also just Google it and see whether credible organizations are reporting it as well.
Critical reasoning skills are important, but I actually think there are two other things which are more important. One is just making people aware that critical reasoning requires effort. It's something you do have to think about, something you do have to learn to do, because there are biases built into the cognitive system. But even more important than that: we don't want to just focus on individuals; we have to focus on the culture.
Individuals fundamentally don't think alone; we're constantly dependent on others for information, for all kinds of cognitive processing, for remembering things, for doing calculations. There's a sense in which, whenever we do anything, we do it as a community. And so, if we're going to be critical… since we can't be critical all the time, we have to be reminded by others when to be critical. What we really need more than anything is a culture in which it's okay to say to someone, “Is that true? Where did you learn that? Is that credible? Is that plausible? Here's a reason why that might be wrong.”
There are communities already in which I think that is the norm. Effective scientific lab meetings are like that: people are constantly being challenged; people make claims and other people scratch their heads and are skeptical. Any sort of philosophical debate is also like that. It's okay to be skeptical. So I feel like that is the closest thing we can come to a silver bullet to solve this problem of fake news: to foster a culture which allows skepticism. I guess you know that already, given the name of your organization.
Kyle: Well, we aspire to that as well, yeah. So, we're touching on a lot of the topics mentioned in your book, “The Knowledge Illusion”, which I think is a great read that I definitely want to recommend listeners check out. Let's start there. Could you tell me a little bit about the book and what you cover in it?
Steven: The book is a two-fold argument. We point out that, as individuals, we're relatively ignorant, and that we don't fully understand how ignorant we are; we think we understand how things work better than we actually do. So that's the one theme of the book. The second theme is an attempt to explain, given that individuals are ignorant, how it is that humanity is able to accomplish so much.
We operate within a community of knowledge. There's a division of cognitive labor, such that whenever we do anything, we do it as a group. We cover a lot of data from cognitive science and other areas. And a big part of the book is drawing out implications of this view for politics, for education, for our relationship with technology, for decision making, and for one or two other things that are slipping my mind at the moment.
Kyle: So I think you built that case very strongly in the book, and I didn't find a point that I thought was particularly controversial in that regard: that people are in certain ways ignorant, and that we get past that with communities of knowledge. Is this a radical new idea in your field, or is it something that researchers have been migrating toward for a while?
Steven: It's certainly not a radical new idea. Walter Lippmann, again, had similar ideas back in 1922, and I'm sure one could find precursors of the ideas even before that. There are certainly precursors of the idea in psychology, in cognitive science, and in philosophy. I actually think the importance of the idea is not so much that it's entirely novel, but that it's inconsistent with, first, the way cognitive science thinks about the mind, and second, the way everyday people think about the mind.
In cognitive science, there's a tendency to build models of mental processing, of mental functioning, which are descriptions of what's going on inside the skull. And in fact, a lot of people now are doing this at the neural level; that is, they're building models of how the brain works, and those models are in some sense replacing a lot of the theorizing that's going on in cognitive science.
This way of thinking, of everything being inside the skull, I find deeply wrong-headed. If we really want to understand how human beings process information, we have to think about things at a much higher level. We have to think about interactions between people. It's not revolutionary in cognitive science, but it's something that diverges from the mainstream view in cognitive science, and in fact from the mainstream direction in which cognitive science is moving.
So, human beings are always looking from the inside out. We see the environment around us, we see the world that we're living in, and we see how it impinges on us as individuals. We see what we bring to the table, and it always seems different from what others are bringing to the table. And so it's very natural for individual human beings to think about their own mental processes as things that are going on inside their brain.
In fact, as we argue in the book, it turns out that many of the conclusions we come to, the inferences we make, the decisions we make, the actions we take, are a function not only of brain processes but of all kinds of other stuff going on in our bodies, in the world, and in other people's minds. We outsource a lot of the information processing we need to do. You know, we want to know whether a policy is good or bad.
We don't just think it through; we see what the people we respect think. And in fact, that's the right thing to do. I happen to be a great believer in climate change, but it's not because I can give you a super-sophisticated analysis of how and why the Earth is warming and what the consequences of that are. That's just not the kind of scientist I am. I rely on the knowledge that other scientists have, and I incorporate it. I'm not fully aware of the degree to which I incorporate other people's thinking, and other people's beliefs, into my own.
Kyle: When you put it that way, it seems to me like a strength of our culture, in a way. If you asked me, “How does a computer work?”, something I have some expertise in, I could tell you a great deal, but at some level I don't know how a transistor is actually manufactured. I know someone else out there knows it and takes care of that for me.
So in that regard, this way that we share collective knowledge: can we measure it? Can we ascribe any sort of properties to it that we want to study? Can we say, maybe scientifically, where a culture is, even if its individual people aren't up to speed on the latest and greatest?
Steven: That's an interesting question. We are studying it, and in many ways the way my lab is focusing on the issue is by trying to figure out the ways in which people outsource problems without being aware that they're outsourcing those problems. I have a grad student, Babak Hemmatian, who has shown that we often use single words as satisfactory explanations for things.
So, I'll give you an example that's not from his studies, but it'll give you a sense of what he's doing. Someone might say, “Why is this food healthy?” And someone else might answer, “Because it's organic.” Well, I would argue that “organic” actually doesn't carry a lot of information.
Kyle: It's made of carbon, right.
Steven: Well, okay, so it tells you that, but that's a fact you probably already knew, that it was made of carbon. You know, people use “organic” to mean multiple different things. I lived in California for a while, and there are sort of five different levels of organicity in Berkeley, depending on which supermarket you go to. People mean very different things by it. And yet, if you say it, it turns out people think that's a satisfactory explanation. In general, words serve as satisfactory explanations if those words are entrenched in a community, and not if those words are not entrenched in a community.
So what Babak has shown is, he comes up with situations where the word literally has zero informational content. It carries zero bits of information, but if the community uses the word, people will say it's a satisfactory explanation. And if someone has just come up with the word on their own, then it's not a satisfactory explanation. So that's one example of how we outsource judgments that we have to make to other people, without really being aware that we're doing so.
I could go on with examples if you want me to.
Kyle: Sure. Absolutely.
Steven: So, another one is work that my colleague Elinor Amit has done, in which you describe an act by someone, like a CEO selling some drug, and the drug either has a seriously harmful effect or a not-so-bad effect (still bad, but not so bad). Moreover, we tell people either that selling the drug is legal, or that selling the drug is illegal. And then we ask, “How bad is this act?” On a scale from 1 to 7, how morally acceptable is what the CEO did?
People don't care at all about the impact of the drug; that is, the impact of the drug has zero influence on people's judgments of moral acceptability. What matters to people is whether the act was legal or illegal. It's quite shocking. I mean, the data are really quite powerful. You know, it's important to understand that each person only sees one impact or the other, and only learns whether it is legal or illegal. They don't get a variety. They're not comparing one situation to another. Our explanation for this is that people appeal to legality because it allows them to outsource this very difficult question of how moral the action is.
Instead of answering the question, “How moral is this action?”, they're able to answer a much simpler question: “How legal is this action?” So what they're doing is outsourcing the question of morality to the people who make the law. That's at least our hypothesis. That's another example. Here's a third example, and this is more of a hypothetical example of how we outsource. You can imagine a society in which I believe I understand the effect of, say, some medical insurance policy, because the people around me think they understand the effect of the medical insurance policy.
Kyle: [laughs] Knowledge by osmosis, then…
Steven: Exactly. And we actually have some direct data that learning that other people understand does increase your own sense of understanding. So if I think I understand because the people around me think they understand, and the people around me think they understand because the people around them think they understand, and everybody has that sense of understanding only because they're surrounded by other people who have a sense of understanding, then you can have an entire community with this deep sense of understanding, even though nobody understands.
So that's one consequence of outsourcing and not knowing that you're outsourcing, of being non-reflective about outsourcing. And to be honest, I think that is probably the most important role that fake news plays in our society. I don't think it's the information content per se; it's more that you have other people who are telling you, “Look at these evil people. Look what they do. Look at this conspiracy that's going on.”
“I hate them.” And now everyone who sees this piece of news is going to be more likely to hate them, and we have reason to, because we understand, because here's some information. The person receiving the fake news doesn't necessarily fully comprehend, or even accept, what they're seeing, but they're nevertheless left with this sense of affiliation with the people who delivered it to them, because they know that other people are seeing it too.
There's a sense of community affiliation, which of course is to be contrasted with the target group in the fake news, for which, inevitably, you build up resentment. So it's just a way of polarizing people, by making them feel closer to one community and more distant from another community; and indeed, polarization has increased by leaps and bounds in the last several years.
Kyle: Well, if we were to take this idea of my confidence that I know something based on whether I think the people around me know it, I could describe that as a strategy, right? In one sense, I could go independently learn everything, in which case I'd never complete my task. Or I could trust that, since all my neighbors seem to do an activity and it doesn't seem too bad for them, I'll just take for granted that maybe someone out there has double-checked things, that they've arrived at the truth, and I can follow their lead. Is there an issue with that strategy overall?
The alternative being that, like you were saying, I have to go and learn everything myself. Have we settled on this because it's a sort of game-theoretically nice place to be? Is that why societies behave this way?
Steven: Oh, I'd say it more strongly than that. We've settled on…, I mean, evolution has settled on this strategy because it's the only option available to us. The world is incredibly complex. It's not only the case that it would take our entire lifetime to learn everything we'd have to know to, say, make the best possible political decision; it would take our entire lifetime to learn everything we'd have to know about any one issue in order to make a wise decision. You know, I don't even know if it's evolution so much that has built in this strategy as, you know, metaphysical reality. The world is just so complex that we can only be specialists in some tiny, narrow little sliver of possibility, and we depend on others; we have to depend on others; we have no choice but to depend on others.
So, yeah, we have to use that strategy. We have no choice, but we can at least be aware that we're using it. So I do try to be reflective. I mean, I try to be reflective about whether news is true or false. I also try to be reflective about the source of my own beliefs. You know, I suffer as much hate and resentment about certain politicians as anybody else does, and I'm constantly trying to tell myself that I'm being manipulated by my own community, because I think I am. That's just how communities work.
And I think if we were all a little more accepting of that fact, then perhaps, conversation would be a little easier. We’d be able to reach compromise a little more quickly, because we'd be less confident about our own accuracy.
Kyle: Speaking of confidence, there's an interesting approach you can take that I learned about in the book. When I hear about the ways in which I, like everyone else, am ignorant, it's kind of disheartening. I have this goal of believing as many true things and as few false things as possible, yet I know that's sort of unattainable. But one thing that really brought me some… I don't know if pleasure is the right word, but I was happy to learn about it from the book: how you can get someone to see the light by, I guess, exploring the illusion of explanatory depth. Can you tell me a little bit about the mechanisms that you can take people through there?
Steven: So, this is a psychological phenomenon first uncovered by a couple of psychologists named Leonid Rozenblit and Frank Keil. What they did was take everyday common objects, like zippers and ballpoint pens and toilets, and ask people how well they understood how they worked, on a 7-point scale. People gave numbers on average like 4 and 5, and then the researchers said, “Okay, how do they work? Explain it in as much detail as you possibly can.”
And people hemmed and hawed and struggled, and found that they were not able to give a complete explanation, or anything close to a complete explanation. So when Rozenblit and Keil again asked them, “Now, how well do you understand how the thing works?”, people lowered their judgments. In other words, people themselves admitted that beforehand they had been living in this illusion of understanding, or the illusion of explanatory depth, as Rozenblit and Keil call it; that in fact their understanding was not what they thought it was.
So, the process of explanation revealed to them that they don't understand. What I did with some of my colleagues was show the same thing in the political domain. Just prior to the 2012 election, we gave people some policy issues, things that were common discourse at the time, like whether there should be a cap-and-trade policy on carbon emissions, or whether there should be unilateral sanctions on Iran (some things never change). We asked people how well they understood the issues, we asked them to explain them, and we found that that lowered their subsequent judgments of understanding.
By having people explain, or try to explain, how these policies would lead to consequences, we punctured their sense of understanding. What we also showed is that puncturing their sense of understanding also punctured their confidence. So people became less confident in their attitude regarding the policy after trying to explain how it worked.
Kyle: It seems like a very powerful mechanism, then, that maybe we could take advantage of.
Steven: It does. And I do think that we should take advantage of it. There are actually examples out there where we have taken advantage of it. Along with Babak, my grad student, and some others, we looked at people's attitudes toward same-sex marriage. Those have changed a lot over the past 10 years, since Obama took office. What we did was take a bunch of comments on Reddit about same-sex marriage and do a statistical analysis of them. We applied what's called a topic model, an LDA analysis, and we showed that, over time, people discuss the topic less in terms of their sacred values and more in terms of consequences.
So, the nature of the discourse about same-sex marriage has changed in parallel with changes in people's attitudes. Minds change when the conversation changes from one about people's basic, intrinsic, sacred values to one about what the actual consequences of the policy would be. If you're talking about what the Bible says about marriage, or about how everyone has a fundamental right to choose whom they love, then you're not going to achieve much consensus.
You're not going to change anybody's mind. But if you're talking about whether having two fathers or two mothers has an effect on a child's welfare, well, that's something you can actually talk about. And it turns out that which of those things people talk about is, in a sense, up to them: you can frame issues in different ways, and how you frame them will matter. So that's the positive side. I've got to be honest that it's not clear to me that one can easily scale up the kind of experiment we ran to an entire nation's discourse, because when you ask somebody to explain something, you kind of annoy them.
People don't like being asked to explain things, especially when they can't. People don't like having their ignorance pointed out. So they'll accept it, and they'll realize they're ignorant, but then they won't want to talk to you. It's something that we should do, and actually I think it's the media's responsibility more than anyone's to change the discourse, basically from one about who's going to win the next election, which is 90% of what they talk about, to one about what the actual consequences of things would be, about how our world works.
I find it very annoying when I encounter almost all news these days, even respectable outlets, even PBS and NPR. There's so little discussion of the gory details by which things change, by which policies are going to have effects, because it's hard to talk about that; it's not always pleasant. It's a little like going to engineering class. Instead, what we talk about is what we value, and what our friends think, and what the polls say, and who's going to win the next election, again and again and again.
Kyle: Well, Steven, this may seem like a little bit of a left turn to close out our interview, but this program, which is now focusing on fake news, has just come off a long stretch of discussing artificial intelligence, which in a way does come up in the book. Could you leave us with a teaser about your thoughts on how machines that make decisions fit into the way human beings share and retain knowledge?
Steven: Well, look, it's very clear that the most important member of our community of knowledge these days is Google. If we want to learn something, that's the primary place we go, and in fact we all have dinner partners who we wish would stop going there. The internet, in that sense, is critically important. It has completely widened our community of knowledge, to an incredible degree. That's the good news. The bad news is that technology is not a member of our community in the same sense that other human beings are members of our community.
Technology does not share our intentionality. Human beings have this very special ability to share goals with others, to focus attention on common goals and on ways of achieving those goals, so that we're constantly collaborating. Human beings have this incredible ability to work together, and there are psychologists who argue that no other species comes close to human beings in the ability to do that. What's clear is that technology is on another dimension.
Technology is, for the most part, completely unable to share our intentionality. You know, engineers are busy trying to get Google to understand what we mean when we type in a search query, but it takes a human being to really figure out what another human being is trying to achieve, and to step in and help them achieve it. And that's the reason that technology has led people astray so often. In the book we cover a variety of examples of how various transportation tragedies resulted from people misusing technology, or the technology misunderstanding what the humans were trying to do.
When we use AI, we have to be super careful, because we come more and more to depend on it, even though it's lacking this really, really important trait: the ability to figure out what, in the end, we're trying to accomplish.
Kyle: So I think that's an excellent challenge to developers of future AI systems. Perhaps there will one day be a system where I can give it my intentions, the types of articles I want related to whatever my interests are, and it would automatically optimize to bring me as many true statements and as few false statements as possible. But until we have such machines, it seems we're going to continue living in a world where the internet brings us a mix of real and fake news. If fake news is at least partially unavoidable, what are the consequences for society?
Steven: We have a tendency to think that fake news is just information and that it changes people's beliefs: we see this information, and we update our beliefs in a way that corresponds to the fake news. It's not at all obvious that that is the consequence. The consequence may be one that's more emotional, or, as I suggested earlier, it may be more about who we affiliate with, who we love, and who we hate. It may not really affect our beliefs at all.
One thing we do know is that we're much more likely to be critical of fake news that's inconsistent with our beliefs, but that may be more because the fake news is questioning our communities, questioning our team, our family, those whom we love, and so we respond in that kind of emotional way. These are all open questions, and I hope we're able to answer them some day.
Kyle: Yeah, me too. And I look forward to watching your future work, and this topic in general. Is there a best place people can follow you online?
Steven: I have a website, the Sloman Lab website, but it's full of academic papers. I really do hope people go and read the book, and read my next book when it comes out as well.
Kyle: Excellent. Well, that's “The Knowledge Illusion: Why We Never Think Alone”, and be sure to give me a heads up when your next book hits the stands.
Steven: Thanks Kyle. It's been a great pleasure.