Transcript: Why Rational People Get Stuck in Echo Chambers with Jens Koed Madsen

This is an automatically generated transcript from Descript.com; please excuse any mistakes.

Turi: Today we're over the moon to be talking to Jens Koed Madsen, who is a cognitive psychologist at the London School of Economics. He's interested in misinformation and complex human-environment systems. He's really focused on how people change their beliefs and how they act in social networks.

In 2018, Jens did the most extraordinary experiment, which seems to confirm some of our worst fears about the ways that social networks manipulate humans into echo chambers, radicalize their opinions and help polarize them. That's what we want to unpick today, and then follow up by looking at subsequent work Jens has done on the role that media can play inside these social network ecosystems. So Jens, tell us about large networks of rational agents. Tell us about this experiment.

Jens: Yeah, and thanks so much for having me on this podcast. I'm really honored to have been invited. Well, fundamentally it came down to a question that sprang into my mind in December 2017, when I was listening to Mark Zuckerberg talk about the benefits of social networks and social media.

You've heard it so many times, especially from the people who are building social media: all we need to do is connect people. The more people we can bring together, the more we get the classic marketplace of ideas, where the bad ideas sink to the bottom and the good ideas rise to the top. It's this classic idea that has been parroted to us all the way from Enlightenment philosophy and the foundations of deliberative democracy up to today's Silicon Valley tech people. And a thought struck me: that's a really nice assumption, and a really nice intuitive prediction, but does it actually hold? So we were interested in testing this fundamentally: whether or not increasing the scale of the network helps decrease the amount of echo chambers and polarization that we're seeing.

Turi: So the key thesis here, Zuckerberg's thesis, is that the more people we can put into these systems, the faster we will process the bad ideas out, and the faster we will accelerate towards the truth. And that's what you were testing.

Jens: Yeah, exactly. So we built this big model, a simulation of a social network, and we put different components into it. We put in some agents who are optimally rational, so they integrate information in a totally rational way. They are completely honest, so they will always say exactly what they think about the world. They're a hundred percent trusting, so they will always completely trust what everyone else is telling them, which incidentally is a good thing in a world where everyone is a hundred percent honest.

And they're quite open-minded, so they're willing to engage with anyone within roughly 95% of what they consider to be a possible reality. So they're really super open-minded, totally rational, totally trusting and totally honest. And they have complete, perfect memory, so they can remember everything they've ever seen in their lifetimes. The ideal citizen. We thought we would put the best possible idealized citizen into this network because we wanted to avoid baking in the results: if we just put in a bunch of bigoted people, then of course they would look bigoted once they were in a social media network. We wanted as clean a test as possible of this idea of the marketplace of ideas. So we implemented these idealized citizens in a social network structure. All we did was allow each of them to sample a bit of information on their own to begin with, from a normal distribution, and then we let them talk to each other. Each agent could say, 'I've seen this,' and send that information to another agent, who would integrate it in an optimally rational, perfect way and update their beliefs. The next time that agent talked to another agent, it would pass on everything it had seen throughout its lifetime. The only manipulation we made, to test this theory that the more people we put together the better it becomes, was to vary the extent to which each agent had access to the network. So for instance, in one simulation each agent could talk to 5% of the total network; in other simulations they could talk to 10%, 50% and then a hundred percent of the network. Of course, if the theory is correct, if the marketplace of ideas and this idea that connecting people is going to help with this problem holds,

then as we increase the number of people each agent can talk to in that network, it should bring more and more people together into the middle and build consensus around the ideas which are correct. And I should say, for the absolutely perfect scenario here, we also implemented an objective truth into the model, which the agents have to find. So there is actually a truth out there in this simulation, which the agents are trying to find. Basically, the more people in our simulation who end up close to that true statement, the better the agents can be said to have effectively found the right solution or the right idea.
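
To make the setup concrete, here is a minimal sketch in Python of the kind of simulation Jens describes: idealized agents each start with one private observation of a hidden truth, pool every piece of evidence they are sent, and can only reach a fixed fraction of the network. The class name, parameters and update rule are illustrative assumptions, not the published model's code.

```python
import random
import statistics

TRUTH = 0.0          # the objective truth the agents are trying to find
N_AGENTS = 100
CONNECTIVITY = 0.05  # share of the network each agent can reach (5%, 10%, 50%, 100%)
N_ROUNDS = 500

class Agent:
    def __init__(self):
        # each agent starts by sampling one observation from a normal distribution
        self.evidence = [random.gauss(TRUTH, 1.0)]

    @property
    def belief(self):
        # idealized, perfect-memory agent: its estimate pools all evidence seen so far
        return statistics.mean(self.evidence)

    def receive(self, evidence):
        # fully trusting: integrate everything the sender has ever seen
        self.evidence.extend(evidence)

agents = [Agent() for _ in range(N_AGENTS)]
reach = max(1, int(CONNECTIVITY * (N_AGENTS - 1)))

for _ in range(N_ROUNDS):
    sender = random.choice(agents)
    others = [a for a in agents if a is not sender]
    receiver = random.choice(random.sample(others, reach))
    receiver.receive(sender.evidence)

print("mean distance from truth:",
      statistics.mean(abs(a.belief - TRUTH) for a in agents))
```

Rerunning with CONNECTIVITY at 0.1, 0.5 and 1.0 mirrors the manipulation described above; what produces the echo chambers is the severing rule Jens introduces next.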

Turi: Gotcha. So does it bring everybody together, as Zuckerberg or John Stuart Mill or many other people have suggested? And then the results were not as nice as we would have liked?

Jens: No, unfortunately not. This was what surprised us a bit when we ran these simulations. I should say that there's one component I haven't mentioned yet, which is that if another agent is two or more standard deviations away from what an agent thinks is the right picture of the world, it will sever contact with that person. So basically, if you meet someone in the social network who says, 'the moon is made of cheese,' you go, okay, this is beyond the pale for what I could consider to be even reasonably rational or possible.

So in that case, they won't engage with that person. Those are the only mechanisms we had in. And indeed, as you say, the results weren't as promising as John Stuart Mill et al. would have wanted them to be. We found that the more we increased the connectivity between people, the more people got stuck in extreme positions and echo chambers on the extreme edges of the belief structure. So not only did we not find a positive effect of increasing network connectivity, not only did we find no effect at all, we found a negative effect, such that the bigger we made the network, the worse this problem became.
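
The severing rule can be sketched as a simple filter on top of the agents above. The two-standard-deviation cutoff comes from the transcript, while the way the spread is estimated here is an assumption for illustration.

```python
import statistics

def willing_to_engage(agent, claim, cutoff_sd=2.0):
    # Sever contact when a claim sits more than ~2 standard deviations
    # away from what this agent currently considers a possible reality.
    mu = statistics.mean(agent.evidence)
    sd = statistics.pstdev(agent.evidence) or 1.0  # avoid a zero spread early on
    return abs(claim - mu) <= cutoff_sd * sd
```

In the interaction loop, an exchange only happens if the receiver is willing to engage with the sender's current belief; otherwise the contact is dropped, which is what lets like-minded clusters wall themselves off as the network grows.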

You can think of it this way: imagine you had three people in a rural village, and one is a socialist, one is a social democrat and one is a communist. If they're in a tiny little town, they can only talk to each other if they want to have a left-wing political club, so they kind of have to come together in this little club. But now imagine you take those three people into a big town like Manchester. All of a sudden there's a social democratic society, there's a socialist society and there's a communist society, and each of these people can now self-segregate and self-purify into something that becomes extremely confident about its ideological positions. Now take that communist and shove him into London, and all of a sudden you don't just have one communist society, you have all these variations of communist societies. Exactly. So what this shows, in a sense, is that the bigger we make the network, the more people we include, and by virtue of that the greater the diversity of opinion we include, and the more options we give the agents to talk only to the people with whom they already agree and to become increasingly confident that their version of reality is the right one. And that is fundamentally contradictory, I think, to that interpretation of the marketplace of ideas and to the principle of connectivity that is foundational both for deliberative democracies and for social media networks as they've been conceived.

Turi: Yeah, this is fascinating, obviously. What you're demonstrating here is this extraordinary phenomenon: the larger the social network, the more likely it is to lead to polarization, because people will be able to find their tribes down to the very smallest degree.

There's one way of understanding that, which is to say that this is a spectacular breeding ground for new ideas, and for going really deep into those new ideas. On some level, for the marketplace, that's a great thing. But the key piece that you show is that actually, when people get stuck in those little bubbles, those echo chambers as you describe them, they stay there and they radicalize inside them.

Jens: Yes. So there's some good and some bad news in that. As you say, on the good news side, one of the things that I think is so tremendously exciting about these types of models is that they allow us to test these fundamental theories and to build better structures that actually try to live up to the marketplace of ideas.

It might not be that connectivity is the answer; it may be that we need to think creatively about how we design these structures. But as you say, the negative thing is that once you allow people to self-organize within these social media structures, we see increased echo chamber formation and polarized, extremist positions bubbling up in the network. And it's so easy to think about how this plays out in real life, with the intellectual dark web, with QAnon. No one ever starts out that extreme. They're gently pushed into it through their connectivity and through algorithms suggesting content that might keep them online on those platforms.

So understanding explicitly how these networks self-organize, and how people can deviate and trail along little path-dependent tracks that can lead either to extremism or to more convergence in the middle, is incredibly important. We need to understand these structures fundamentally, and not just from a principle of intuitive Enlightenment philosophy.

Turi: That makes sense. So, two things to tease out of this. The first is the idea that there is no difference between the people who end up in these peculiar radicalized echo chambers and everyone else. So your great uncle Joe who's joined QAnon is, cognitively, just as rational as anyone else. That's one.

Jens: At least in this model. So obviously we're not saying that there are no differences between people in real life. We're just saying that you don't need a special sauce to get stuck in these echo chambers and these extreme positions.

Turi: Exactly. And that speaks to the second fundamental point, which is one you've made elsewhere: we focus on fake news and misinformation and polarization, and on the role of politics in sorting us into different camps, et cetera. We think about victims. But we never really think about the actual architecture of the networks themselves as being systemically designed for, or systemically incapable of producing anything other than, this sort of radicalization at the fringes.

Jens: Yeah, exactly. And I think that in order to really get to grips with de-radicalizing people and understanding the deleterious effects that disinformation can have, we cannot just focus exclusively on flagging disinformation and on teaching citizens how to critically evaluate data. We have to think about the systemic properties of how these algorithms self-select and curate the newsfeed for people, as well as the capacity for sharing disinformation that has now been given to everyone. So I think you're entirely right in saying that the focus has been very much on the fake news itself, the content, or on the citizens and their inability to pick up on fake news. This is not to say those things aren't important, but there's this third component, which is the structure of the information system itself. And it has fundamentally flipped in the last couple of decades with the introduction of social media. It has gone from a traditionally top-down mass media communication model. Since Gutenberg's press, and since the invention of paper, there has simply been an increase in the top-down capacity to reach larger and larger audiences. Think about who had access to paper, who had access to the printing press, to radio, to TV: it was always a self-selected little group of people who were either literate or who had access to these media conglomerates and who could push their agendas.

So in that way it was a great democratizer when the social media companies came in and said, no, no, hang on, everyone can contribute now. But the big challenge, at least from an information-theoretical perspective, is that this fundamentally changed the system from being top-down to being a top-down and bottom-up information system, where information flows in radically different ways. And I think we haven't caught up in our theoretical understanding and appreciation of what that means. This is why the model simulations that my colleagues and I have been building, and that many other colleagues around the world are building as well, are an attempt to use more sophisticated computational simulations

to really understand what the impact of these structures is, and what the impact of these cognitive and social-psychological ideas and assumptions is in conjunction with these structures and algorithms, not in isolation from them, but in conjunction and interplay with them.

Turi: So yes, we talk about misinformation and bad actors, and we also talk about the design of these social networks. Tristan Harris, who I'll link to in the show notes, has written a lot about the ethics of attention-mongering and the like button and what that does on Twitter and Facebook and elsewhere. All of this speaks, in a sense, to the details of these social networks, but what your model looks at is the very function of them and how they

contribute to the polarization. So that's experiment one: fascinating, and slightly heartbreaking. But it gives us a basis from which to look at how to fix things, possibly. One of the things you then did subsequently, I think in 2020, was another experiment where you took the same model and dropped a broadcaster into the middle of it. So not just the network of the twittering masses, but a big BBC or an NPR, whatever it might be, dropped into the middle with authority. What does old-fashioned media do inside a social network?

Jens: Yeah. So we've been looking at this in two different ways. The first reason we wanted to drop a broadcaster in was to see if we could break the echo chamber effect by essentially giving a clear signal to all the agents in the model, at a certain frequency, about the true state of the world. The first broadcaster we plonked into this model was simple: every five interactions, instead of interacting with someone within the network, every single agent got broadcast a message with the true state of the system.

So just the truth, and they integrated it in the exact same way that they would integrate any other evidence from any other person. We thought, okay, having 20% of the interactions be literally with the truth surely must break these echo chambers. But it didn't. It just goes to show how unbelievably resilient these findings are in the model: despite 20% of the interactions being literally with the truth, we still found agents who got stuck in extreme positions. So that was the first little foray into broadcasters. For the second one we thought, okay, instead of just using this as an intervention to see how we can nudge the echo chambers in towards the middle and towards the truth, why don't we have competing broadcasters and see what different media landscapes do to the population?
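
Continuing the sketch above, this first broadcaster intervention amounts to replacing every fifth interaction with a message carrying the true value, integrated exactly like peer evidence. The loop below is an illustrative reconstruction, not the paper's code, and reuses the Agent class and willing_to_engage helper from the earlier sketches.

```python
BROADCAST_EVERY = 5   # one interaction in five is "with the truth" (20%)

for step in range(N_ROUNDS):
    if step % BROADCAST_EVERY == 0:
        for agent in agents:
            agent.receive([TRUTH])   # every agent hears the true state of the system
    else:
        sender, receiver = random.sample(agents, 2)
        if willing_to_engage(receiver, sender.belief):
            receiver.receive(sender.evidence)
```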

So what we could manipulate now was how frequently the broadcasters would engage with the people they could reach, but also what they said: their media strategies. Just to give you one example, we put two competing broadcasters into the model. One broadcaster we conceptualized as a partisan disinformer, so it would constantly say something that was wrong and just blast it out, bom, bom, bom, constantly saying something that's wrong. The other one we conceptualized as a populist news channel that just reflected the majority belief, the mean of the population, and would simply say that, reflecting as many people's opinions as possible. And what we saw in this model was that we could manipulate how widely spread people's beliefs were in the beginning, but eventually this disinformer that's just pounding away at something disingenuous and false drags the populist broadcaster over to itself. Because eventually enough people start believing the disinformation channel, and because there is no critical voice to push against it, and because the populist channel is essentially just following what is most popular and most commonly held in the population, as the population slides towards the disinformer, so slides the populist news channel.
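
The two competing broadcasters can be written as simple message strategies layered onto the same sketch; the function names and the specific false value are assumptions for illustration, not the published model's parameters.

```python
FALSE_MESSAGE = 5.0   # a fixed claim well away from TRUTH

def partisan_disinformer(agents):
    # constantly pushes the same false message, regardless of the audience
    return FALSE_MESSAGE

def populist_channel(agents):
    # simply mirrors the current mean belief of the population
    return statistics.mean(a.belief for a in agents)
```

At each broadcast step, agents willing to engage with a channel's message integrate it as one more piece of evidence; as the disinformer gradually shifts the population mean, the populist channel's own output follows it, which is the drift Jens describes.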

Turi: So we see this politically all over the place: when Marine Le Pen in France, for example, keeps driving through a particular message, eventually pulling the centre-right party closer to her because they need to collect more of her voters. But you've done this in an entirely abstract, modelled way, and from a broadcaster perspective you're describing essentially how the Overton window moves. Is that right?

Jens: Yes, exactly. And in the scenario I described to you, you can think of it as a news network that is committed to being, quote unquote, 'fair and balanced', so it says something on both sides because it has to reflect debate on both sides of the divide and the average of the population. It would have to say something close to the mean of the population, and then that interacts with shifting population dynamics, given a disingenuous misinformer. We have other types of broadcasters as well. There's one very sneaky disinformer that I'm very proud of that we came up with: a disinforming broadcasting network that co-opts the mean of the population. What it does is initially sit itself at the mean of the population to gain credibility with the electorate, with the citizenry. And then, at every step of the model, it moves 5% towards the misinforming message that it eventually wants to push, so it's dragging the population with it. And again, combining that with a populist channel, you essentially get two little ships sailing towards the disinformation message. Because if we have a disinformer that just starts out far too extreme and goes, 'the moon is made of cheese,' boom,

it will find no audience, because again it's too far away from what people believe. So we introduced this sneaky little disinformer to see if we could lead the population along, the shepherd leading the sheep, without calling people sheep.
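
The sneaky disinformer can be sketched the same way, continuing the code above: it opens at the population mean to appear credible, then drifts toward the false message it ultimately wants to push. The transcript says it moves 5% per model step; whether that is 5% of the remaining distance or a fixed increment is not specified, so the drift rule below is an assumption.

```python
class SneakyDisinformer:
    def __init__(self, agents, target=FALSE_MESSAGE, drift=0.05):
        # start exactly at the population mean to gain credibility
        self.position = statistics.mean(a.belief for a in agents)
        self.target = target
        self.drift = drift

    def message(self):
        # broadcast the current position, then creep towards the target
        current = self.position
        self.position += self.drift * (self.target - self.position)
        return current
```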

Turi: Can that sneaky disinformer only ever drag the population, drag the messaging, in incremental 5% steps? What is feasible? If it made a 20% jump, would it lose the population behind it, so it wouldn't work? Let me anchor it in an example, say the BBC. The BBC is criticized for having moved too far left; it's also been criticized for having moved too far right, by both sides. Both of them think of it as a sort of arbiter of what counts as reality, as collated by mainstream British opinion. But that imperceptible shift one way or another: how far can a sneaky disinformer go? Not to call the BBC sneaky, but as an example, how far can it move to one side or the other without losing everybody?

Jens: Yeah. And I wouldn't call the BBC a disinformer, obviously. But in the model, it depends entirely on the cognitive structure. So in our model 20% would be too far, but again, that's in the context of an idealized citizen; it entirely depends on the psychological description that you put in. The next step is to understand the psychology of the citizens in more detail, above and beyond an abstracted, idealized citizen. Once we get to a state where we have a better psychological description of the citizen, we then have the potential for introducing a media landscape and various types of disinformation channels, with disinformation and misinformation coming from the bottom up as well as from the top down.

That allows us fundamentally to stress test information systems. It gives us a modelling tool to figure out, in principle, when and why information systems are vulnerable to disinformation, who within the information system is vulnerable to it, and the most effective ways of combating that disinformation campaign. This is not where we are yet, but this is where I want to take this model in the next three to five years. That's the fundamental idea, and incidentally it's taken from the environmental and ecological sciences, where I've been doing some work on environmental sustainability and where some of these same modelling principles are used. Because if you think about it, an environmental system is incredibly complex and interactive: species eating other species, things migrating, environmental factors doing whatever they do to species.

So in the ecological and environmental sciences, these kinds of dynamic models for stress testing the fragilities of ecosystems are fairly commonplace. But they've never really, to my mind at least, been applied with any degree of psychological sophistication to this problem of disinformation. And this is exactly what we're trying to do: to describe it as an ecosystem, if you will, in the same way that epidemiologists describe the spread of diseases. You can think of it as a kind of infodemiology, where we're trying to figure out what the spread of misinformation does in a complex, dynamic system. And this is where we want to take this.

Turi: So this is moving us exactly where I wanted us to go, but just a little recap first. Large networks have polarization built into them by the very nature of the connectivity that they supply. You drop in a broadcaster, a truthful broadcaster, and it has some impact, especially if that broadcaster,

that educator or whatever you want to call it, is brought in really early. It can have an impact on the vitality of thought if it's brought in early; the earlier the better. Populist broadcasters, sneaky broadcasters, or straightforwardly disinforming ones, malicious actors of any kind, have an impact not just on the sectors of the network they touch, but actually across all of it, because they pull the ecosystem itself in their direction. You say, I think elsewhere, that especially when trust is low, populist broadcasters tend to do better, so you can also look at media ecosystems from the perspective of the citizenry, not just from the perspective of the broadcasters or the networks themselves. When you're in an environment where there's a real lack of trust amongst the citizenry, it turns out that certain kinds of media thrive, and those aren't necessarily the good ones. And that feels a little bit like where we are today. Would you agree?

Jens: Yeah, exactly, exactly. And don't get me wrong: we're still in the infancy of this science of understanding complex information systems, so we need much more research into the psychology, the social connection structures and networks, and all of this. But exactly as you say, one of the tests we've been running is under low-trust conditions, where we manipulate the degree to which people have faith in each other and faith in systems.

And once we lowered that threshold and lowered that perception of trust, then indeed, as you say, more polarized and extreme media started to do better. And this is just a natural product of how those interactions play out: the impact and the influence that the various broadcasters have on the citizenry, and how that whole ecosystem then starts to vacillate, move and self-organize. These vacillations are so critical to understand, and I think that's where the real fragilities come in. Because, as you say, in a world with low trust it's easier to go in and further erode trust in traditional news or in establishment figures like politicians or journalists or NGOs, which then also undercuts the ability to fact-check things.
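
One plausible way to sketch the low-trust manipulation, continuing the code above, is to let agents discount peer and broadcaster reports by a trust weight before integrating them, so that reports move beliefs less than they would under full trust. This is an assumption for illustration; the published models may implement trust differently.

```python
def receive_with_trust(agent, evidence, trust=0.3):
    # With trust = 1.0 this reduces to the fully trusting agent above;
    # lower values shrink each report towards the agent's current belief.
    for report in evidence:
        agent.evidence.append(trust * report + (1 - trust) * agent.belief)
```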

Turi: So this brings us very nicely to the last part I want to ask you about. You've described a particular systemic, architectural challenge with these social networks. You've taken it one step further and looked at, again, the architectural challenge of media ecosystems with broadcasters involved, by which I mean newspapers, radio, politicians with loud voices, TV channels. So there are some systemic challenges here. What do we do about this?

Jens: I mean, there are a couple of different things that I think would be beneficial. And again, I'll stress that it's not just the structure; it's also the citizens and their psychology, and it's also a political thing. So the solutions, the things I might want to push, speak to each of these components. On the one hand, there's a technical solution that we're already talking about right now, which is to flag misinformation. I think that's going to do very little, to be honest, because I think it depends on the credibility of whoever does the flagging. But it's still worthwhile, and especially Twitter...

Turi: Twitter tried to launch this and was attacked roundly by all sides for this idea of policing.

Jens: Exactly, exactly. A more sophisticated way to pseudo-flag things, or at least to think about this, is to ask people why they believe something: instead of just making claims and statements, give a little argument for why you think something is true, or link to a paper or some source that you think backs it up.

But again, that would kill a lot of social media, so that's never going to fly. A second thing that could technologically be done is to bury things deeper in the newsfeed when they're flagged as potential disinformation or harmful information. But then again, you come down to who decides what is harmful, and that's a really big challenge, because you can't have Mark Zuckerberg, say, decide that, with a cadre within Facebook deciding what is and is not good information to be receiving. So who gets to decide is a really big question that I think we, as a society, have to take very seriously. We should not just leave it to politicians, because politicians will have a vested interest in a particular version of reality, and not just leave it to tech companies, because they too will have a vested interest in a particular version of reality. Should there be some kind of arbitration, some way of deciding? But again, we don't want to nitpick and we don't want to limit people's freedom of speech, so it's a really, really hard task. Still, I think some of these flagging mechanisms, and asking for justification for what you're trying to argue within social media, might be a good thing. One thing that Twitter has done, which I've seen, is that sometimes if you try to retweet something that has a link to an article, but you haven't clicked on that article,

it very nicely asks whether you're sure you want to retweet this without reading it first, and it invites you to go in and read it. I heard about this, went in and tried it, and it does do it, which is kind of nice, because it means that you can't, at least in a passing kind of way, just retweet something on the back of some source sending it to you. Now, on the citizen side, what Jon and Sander at Cambridge are doing on inoculation projects, and what Sarah and others are doing with debunking, is superb work, and I think that's really needed as well. We need to educate ourselves, our citizens and our children to be critical consumers of information. I think critical theory and critical reasoning need to be on the school curriculum. It would be a fantastic thing if we took seriously evidential reasoning, logical argumentation, critical reasoning and critical thinking.

Teach every kid how to draw a Bayesian network and describe to them how evidential reasoning works, although there are a lot of adults we'd have to teach first. Oh, I know, I know. But that would be fantastic, if we could build educational programs around critical reasoning. And finally, a bit more of a fun pet theory of mine, or pet suggestion: I would like people who are running for official office to be held to the same standards of evidential reasoning as companies who are trying to sell products. Because fundamentally, politicians who are running for official office are trying to sell themselves as an ideological product, a societal product that can be used to cure ailments in society, be it inequality, discrimination, or whatever else is on the agenda. So I would like politicians to be held to the same account. For instance, if a toothpaste maker or, let's say, a soda manufacturer comes out and says, drink my fizzy drink, it will give you better teeth,

there's an advertising standard that will just come out and say, you cannot make that claim, because it's simply not true; there's no backing for any of it. Why don't we hold our elected officials to the same degree of scrutiny as we do toothpaste and Coca-Cola? This is not to say that they can't say anything. If they have opinions about things, they can voice their opinions about anything. But if they're making references to fact-based statements that can be fact-checked and assessed for some degree of veracity, and those are found to be erroneous, then they should be asked to stop saying them, on pain of paying fines. If I went out and said that Britain is the most populous country in the world, and that was the linchpin and cornerstone of my policy, that is obviously contradicted by fact: Britain is not the most populous country in the world. So someone should be able to go in and say, you're making a fact-based statement that is demonstrably wrong. So the big debate, I think, for the 21st-century information system is how we structure that balance in a way that allows for all the liberal democratic rights that we hold so dear, without opening ourselves up to being incredibly vulnerable to disinformation attacks from malevolent sources. And there's no easy answer to that.

Turi: Yeah, you've wrapped it up for us beautifully: this fundamental tension between allowing all the flowers to bloom and making sure that, precisely given these kinds of architectural features of our media ecosystem that your models so beautifully describe, we are also protected from the worst of them.

Thank you so much for walking us through these amazing experiments in such a clear and engaging way. That was perfect.

Jens: My absolute pleasure and thanks so much for having me.
