Transcript: with Michal Kosinski

This transcript is automatically generated; please excuse any mistakes.

Turi: Today, we’re thrilled to be talking to Michal Kosinski. Michal is an associate professor at Stanford Graduate School of Business, where he studies humans in a digital environment using computational methods, AI, and big data, looking at all the digital footprints that we leave. I should also say that Michal was behind the very first press article warning about the threat of Cambridge Analytica. And today we’re going to be talking about two pieces of research he’s just completed, which ask even more probing questions about privacy in today’s world. Michal, great to be talking to you.

Michal: Turi, thanks for having me.

Turi: Can we kick off? Can I ask you to give us a brief overview of these two pieces of research that you’ve done, and then we’ll start digging in a little bit deeper?

One: that openly available algorithms are able to predict sexuality at a degree of precision which, I think, in some instances comes close to the precision with which AI can find cancers in mammograms, one of the oldest and most accurate uses of AI. And two: that these same algorithms are able to predict political orientation to a much higher degree than humans can. So those are the two pieces. Can we jump into the first one first? What did you discover about what these facial recognition algorithms can read from images of people’s faces about sexual orientation?

Michal: So, being educated in a kind of classical psychology school, I was convinced that faces should not be linked with intimate traits, and so I treated those patterns with quite some skepticism. But then, when I started working on my research, I realized that psychology provides us with quite a few different mechanisms that should, in fact, be responsible for the existence of links between facial features and intimate traits such as political views or personality. And those mechanisms come in three sets. One set encompasses mechanisms where our mind influences our faces. A few examples here: say you are a politically liberal person.

This is a property of your mind, and it will affect your choices of community and the friend groups that you hang out with. It will influence the activities that you engage in. It will influence fashion choices, fashion choices also pertaining to how you shave, what makeup you’re using, what hairstyle you’re using, and so on.

And this is just one example of how a property of your mind, political liberalism, will affect properties of your face, such as hairstyle and so on. Now, there’s a second set of mechanisms, where your face affects your mind. One of the examples here is physical attractiveness. People with attractive faces receive more eye contact from their moms as infants; they are typically more successful in the educational system; they get better grades; they later receive better salaries; they get promoted more easily; they get invited to parties more easily than people who are not lucky enough to have attractive faces.

They get voted for in political elections with higher likelihood. And of course, this kind of increased popularity, and the better outcomes that attractive faces to some extent facilitate, are, over a lifetime, going to affect people’s minds. If you have a higher salary, if you have a better job, if people like you more... and there’s actually evidence showing that people with more attractive faces become more extroverted with time, more social, simply because they receive a lot of positive feedback whenever they interact with others.

And then there is a third group of mechanisms. Those mechanisms encompass factors that affect both people’s faces and people’s minds. They include genes: we know that facial appearance is heritable, but we also know that political orientation, personality, intelligence, and many other psychological factors are, to a significant extent, heritable. Hormones affect our facial appearance: the hairline, the collagen in our skin, and facial hair are all affected by testosterone, and we also know very well that testosterone affects our behavior: social dominance, levels of aggression, and so on.

So it was surprising to me to realize that I had been educated to learn about all of those mechanisms, and yet I never connected the dots to realize that, hey, if we are right about those mechanisms, it should imply that faces are linked with psychological traits. And yet it was treated as just a completely crazy idea in my field. So this is why all of my colleagues in the field, and I myself, were pretty surprised by those findings.

Turi: There’s a hint here, of course, to put it kindly, of phrenology, that old 19th-century science of measuring people’s faces and working out whether they were naturally inclined, I suppose, towards criminality.

And to put it less nicely, there’s a stench of Mengele around this as well. And of course, one of the areas of your concern here, which you flag all through your work, is precisely how this facial recognition technology, now that it’s been understood to be grounded in the science of these three areas that you’ve just described, could potentially be used for political or social purposes.

Michal: Look, I fully agree, and this is why I was so concerned to discover that those technologies have been developed and are being used. The main problem with phrenology and related pseudoscientific disciplines is that we should just not be making judgements about people based on their faces, regardless of whether those judgments are accurate or not. It just doesn’t matter whether the judgment is accurate or not; we should just not be doing it. Now, the very unpleasant and uncomfortable fact is that those judgements are actually, to some extent, accurate. And I should stress here: they’re accurate when made by AI, not by humans. We actually have a lot of evidence that humans are better than random when judging people’s political orientation, personality, sexual orientation, and other intimate traits from their faces.

But people’s accuracy is so close to random, it’s better than random, but so close to random, that it just lacks any practical significance. Luckily, we cannot just judge someone’s political views that easily from their face. Now, sadly, it turns out that AI can do this with a very significant, very remarkable accuracy. And because, as I said before, people’s traits should not be judged based on their faces, that’s not how we should go about things, the fact that technology can do it somewhat accurately, and the fact that people seem to be using this technology to do exactly that, as one can very easily find in the press or in patents that are published online, publicly: those are reasons to be pretty worried.

Turi: So just jumping into both these studies, we’ll link to both in the show notes. In terms of facial recognition algorithms’ capacity to determine sexual orientation, you found that they were somewhere between 80 and 90% accurate in men,

and 70 to 85% accurate for women, which is radically better than the human equivalent; I think you said people are able to determine the sexuality of men with 61% accuracy, which is just a little bit better than random, and are almost not able to determine the sexuality of women at all. So there’s a big, big difference there. What was it based on? You mentioned in the study larger jaws, narrower noses. If we’re going to go into the phrenology, as peculiar as it is, what emerged out of your research there?

Michal: So first, a little side note on how accuracy is calculated. Given that, on average, the gay population is about 6 to 7% of the entire population, if you just said "hey, everyone is straight," you’d have around 93% accuracy, right? So in order to overcome this problem, the accuracy in studies like that is usually presented as what we call the area under the curve (AUC) coefficient.

This coefficient is equivalent to estimating accuracy in the following situation: you take one gay and one straight individual, show them to a judge, a human judge or a computer judge, a decision maker, and then measure or count how many times the judge made the correct decision. In this way, you kind of overcome the issue of the fractions in the population being imbalanced, and the fact that different populations are studied in different studies. So boiling it down to this one, kind of oversimplified, coefficient allows you to compare accuracy across many different scenarios and many different studies. So yes, the accuracy in our study was about 90%, 90% plus, when the algorithm was presented with five images of a person, and around 80-plus percent when it was presented with just one image of a person. And I need to stress that this algorithm has not been trained to predict sexual orientation. We are talking about an off-the-shelf facial recognition algorithm, the kind that is built into your phone and into Facebook’s and Instagram’s algorithms for detecting faces.
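The AUC idea described here can be sketched in a few lines. This is a toy illustration with synthetic scores, not the study’s data or model; it just contrasts the misleadingly high raw accuracy of a trivial "everyone is straight" guesser under a ~7% base rate with the pairwise game Kosinski describes: pick one individual from each group and count how often the judge’s score ranks them correctly.

```python
import random

random.seed(0)

# Synthetic population: ~7% positives, and a "judge" that assigns each
# person a score that is only somewhat informative (Gaussian score shifted
# by +1.0 for positives). Entirely made-up numbers for illustration.
labels = [1 if random.random() < 0.07 else 0 for _ in range(2_000)]
scores = [random.gauss(1.0, 1.0) if y else random.gauss(0.0, 1.0) for y in labels]

# A trivial judge that calls everyone "straight" already looks very accurate,
# purely because of the imbalanced base rate.
trivial_acc = labels.count(0) / len(labels)

def pairwise_auc(labels, scores):
    """AUC as a pairwise game: draw one positive and one negative and count
    how often the positive gets the higher score (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = pairwise_auc(labels, scores)
print(f"trivial 'everyone is straight' accuracy: {trivial_acc:.1%}")
print(f"AUC of the toy judge: {auc:.2f}  (0.5 = coin flip, 1.0 = perfect)")
```

Unlike raw accuracy, the pairwise number is 0.5 for any useless judge no matter how rare the minority class is, which is why figures like "80-plus" or "90% plus" from such studies are comparable across populations.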

So I’m not going to go and train a specific algorithm to predict people’s political orientation or sexual orientation, because the whole point of my research is that those algorithms are worrying and they should not be used. But I believe that if someone trained an algorithm specifically to do that, and again, from what we know from patents and press reports, it seems that people are doing this, the accuracy of those algorithms is probably higher than whatever I found and showed in my studies auditing this technology. So, to go back to your original question about what is actually giving the signal: well, we don’t know exactly, because the algorithms making those distinctions are so-called black box algorithms. They will make a decision, and you can measure the accuracy of this decision very easily, but it’s very difficult to understand how the decision is made. You can, of course, look under the hood at the algorithm; the algorithm is just maths, so you can just check how the equation works.

The problem is that this equation has millions of coefficients that all change depending on what face it’s looking at, and those coefficients are not interpretable. It’s not that there’s some one coefficient that is responsible for, say, the width or height of the face. So in order to try to see what might be giving the signal, we as humans, when we interact with those black box algorithms, are kind of reduced to searching for some kind of pattern in the data. So in the context of this particular study, we looked at average faces. When we look at the average faces of gay and straight people, we try to compare the shape of the outlines of facial features such as the jaw and so on.

And we try to look at the different adornments that people wear, such as glasses or jewelry on their faces, and so on, and try to identify, manually, the differences, and then test whether those were the differences that the computer was using in its predictions. And it turns out that the shape of the face,

so the shape of the jaw, differed between the gay and straight populations, and the computer was using the shape of the jaw as a source of signal. Now, does that mean that gay and straight people have different-shaped faces? Well, we don’t know, because we’re using pictures that people have taken of themselves and uploaded to

social media and dating websites. And of course, when people take pictures of themselves, they hold the camera at a different angle and a different distance from the face, which of course affects the shape of facial features. But at the end of the day, that doesn’t matter from the point of view of the prediction, right? Whether this difference in the shape of your jaw stems from how you hold the camera, or from what the real shape of your jaw is, at the end of the day the computer can still employ the signal to make a very accurate prediction.

Turi: Michal, that’s extremely carefully navigated. These are tricky, shark-infested waters. Can I ask you to do, in exactly the same way, the same sort of analysis on what you discovered around political orientation?

Michal: Political orientation? Actually, there the issue is even more difficult, as we analyzed dozens of different factors stemming from expression. It seems that conservatives smile a little bit more, and liberals are 1% more likely to be wearing glasses, right? There is some difference also in facial hair: conservatives were slightly less likely to have a beard in a social media picture. But those differences are tiny, literally one or 2%, maybe 3%, difference between conservatives and liberals. And even all combined, when you combine all of those interpretable features that we looked at: the orientation of the face, whether you look down or up, left or right,

whether you are wearing sunglasses, whether you are wearing a beard, what facial expression you adopted; so, essentially, a bunch of extracted, interpretable facial cues; if you combine all of them together, the prediction accuracy will be around 60%. Now, if you use a black box approach that doesn’t look at those explicit features, but just takes the face as it is and extracts some features that are not fully interpretable, then, well, we have no way of fully understanding that accuracy, and it goes from 60% to 70-plus percent. And I should mention, by the way, that of course features such as gender, age, and ethnicity are also related to political orientation and enable predicting political orientation, but I have controlled for those in this study.

Right? So the accuracy that we’re discussing here has been computed while controlling for age, gender, and ethnicity, the kind of obvious giveaways of political orientation.
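The step from tiny per-cue differences to a usable combined accuracy can be sketched numerically. The cue prevalences below are invented for illustration (they are not the paper’s measurements); the point is that summing per-cue log-likelihood ratios, a naive-Bayes combination, lifts a set of individually near-useless cues measurably above chance, and with dozens of cues the same mechanism pushes toward the roughly 60% figure quoted.

```python
import math
import random

random.seed(1)

# Invented prevalences: P(cue present | liberal) vs P(cue present | conservative).
# Each gap is only 1-3 percentage points, like the differences described above.
cue_rates = [(0.31, 0.30),   # e.g. wearing glasses
             (0.22, 0.20),   # e.g. having a beard
             (0.55, 0.52),   # e.g. smiling
             (0.11, 0.09)]   # e.g. wearing sunglasses

def sample(group):
    """Draw one person's binary cue vector; group 0 = liberal, 1 = conservative."""
    return [1 if random.random() < rates[group] else 0 for rates in cue_rates]

def log_odds(cues):
    """Sum of per-cue log-likelihood ratios for 'liberal' over 'conservative'."""
    total = 0.0
    for x, (p_lib, p_con) in zip(cues, cue_rates):
        total += math.log((p_lib if x else 1 - p_lib) /
                          (p_con if x else 1 - p_con))
    return total

n = 10_000
correct = 0
for group in (0, 1):
    for _ in range(n):
        guess = 0 if log_odds(sample(group)) > 0 else 1
        correct += (guess == group)
acc = correct / (2 * n)
print(f"combined accuracy from four near-useless cues: {acc:.1%}")
```

With only four weak cues the combined accuracy lands just a couple of points above 50%; adding more cues, or stronger ones, compounds the log-odds and raises it further, which is the same logic by which dozens of interpretable features reach around 60% while a black box extracting richer features goes higher still.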

Turi: Thank you for that. So, in light of your first point, that physiognomy and sexuality, and physiognomy and political orientation, are associated in those three ways that you flagged initially: your face impacts your politics, your politics impacts your face, and there’s a whole bunch of inherited genetic information which has been passed through to...

Michal: Oh no, not only genetic: there are also hormones, and there’s also environment. The environment where you grew up affects what your face looks like. Think about growing up in the city, in a big enclosed apartment most of the time, versus growing up in the country and having access to wonderful outdoor activities, and essentially being exposed to different amounts of sunlight. So again, growing up in a different community will shape your political orientation, but it will also affect your facial appearance.

There is affluence: wealthy people use very different products on their faces; they also groom in different ways; they engage in different activities. So wealth will affect your facial appearance, and of course we understand very well that privilege and wealth also affect your psychological traits; they shape your psychological traits over a long time.

Turi: So there’s an enormous amount of environmental material here which goes in. But I still want to ask you the one-to-one, perhaps sledgehammer, perhaps banal question, which is: if, literally, our sexuality and our political orientation are written on our faces... I remember, as a child, being told "you’ve got guilty written across your face" whenever I got it wrong.

But if, literally, our sexuality and our political orientation are written on our faces, what does that tell us about opinions and the immutability of opinions, the immutability of sexuality? Having done this research, having looked as carefully as you have here, do you wonder how fixed our personalities actually are?

Michal: Well, there are much better ways of studying this issue and answering this question: for example, looking at genes and genetics, and the fraction of variance in your political orientation, personality, and other psychological traits that is explained by genetic factors.

And you can study it very carefully using twin studies, or adoption studies of identical twins: hopefully that’s happening as rarely as possible, but there have been situations where identical twins have been adopted by two different families, and then you can study how their political views or their personality or their intelligence and other traits are related. You can calculate differences between identical twins and non-identical twins, or just siblings. So essentially, there are just much better study designs that can teach us about those kinds of permanent elements in our traits. And you can look at the stability of personality over a lifetime. In fact, there’s a lot of evidence showing that even the most conscientious, well-organized kid

is less well-organized than most adults, right? But a very well-organized kid will become a very well-organized adult. So essentially, the position inside the population will stay relatively constant, even if the entire population, as it ages, becomes better organized and more conscientious. So there are all sorts of better designs to make those claims. Now, our study was mostly focused on privacy risks, and it shows essentially that our political orientation and other psychological traits are displayed on our faces; that, one can very easily conclude. But it also suggests that maybe our faces influence what our political views are, and other traits.

So the bottom line here is: the fact that our faces are connected with our personality is consistent with the hypothesis that our personality is relatively stable, or that our political views are relatively stable, but it’s not proving it, right? There are much better studies to prove that those phenomena are relatively stable.

Turi: The most striking element of your research on privacy, anyhow, is that we’ve always known that our Facebook likes could be harvested to build a kind of aggregate portrait of who we are, but you can always delete your Facebook likes, right? The thing about your face is that it’s very expensive to change.

Michal: No, you know, you can’t delete your Facebook likes. Once you’ve liked something... you can close your Facebook account, and Facebook will retain your data, not to mention the bunch of companies that in the meantime recorded your data. Also, maybe you can avoid creating Facebook likes by not liking anything on Facebook, but try to avoid purchasing things with your credit card, or not using a credit card at all, in today’s society; try to avoid using email; try to avoid using WhatsApp, or texting; try to avoid using a phone that is sending your voice in a digital manner over the cable, voice that is being recorded. It’s essentially impossible, in a modern society, to live a fulfilling and happy and efficient life without using those technologies and essentially leaving a digital footprint, and only the most privileged of us can do it. If you’re the CEO of a big company, maybe you can have assistants doing stuff for you, and you can kind of avoid leaving much footprint. But if you’re a single mom working two jobs, you cannot afford going to a bank in person, you have to use online banking; you cannot afford sending people letters written by hand, you have to be using those modern technologies; you cannot avoid using Google Maps to get to places, right? It’s a sign of privilege, people saying, "oh, I’m just going to leave Facebook and everything is going to be fine."

Not to mention that it’s also a bit silly, because by leaving Facebook you’re kind of punishing not just yourself but others. I like being connected with my friends, so when my friends leave all of those environments, that is a bit silly and punishing to me as well. And it’s also meaningless, because the data is still being recorded. Not to mention that it’s also this kind of escaping into privilege, right? The solution to the privacy risks is not that wealthy, affluent, able people just make sure that those risks don’t apply to them, while the poor and underprivileged are left suffering. The solution should be holistic; it should essentially apply to everyone equally.

Turi: You’ve made my day worse. I was going to suggest that what you’re flagging here, this extraordinary capacity of AI for facial recognition, was sort of the cutting edge of privacy infringement, but actually you’re saying it’s just part of the general mass, which of course it is. There is something extremely threatening, however, about the idea, I live in the UK, where we have a giant percentage of CCTV cameras per head of population, that all those CCTV cameras are not just tracking me but also figuring out my sexuality, my political opinions, and the rest of it. It feels very threatening. But you’re right to point out that it’s just one of the very many digital footprints that we leave around the world.

Michal: And one that is relatively difficult to analyze, right? Facial visual data is just larger: your profile picture is a few megabytes, whereas the record of every single thing you liked on Facebook throughout your life is probably a few kilobytes, if that. So visual data is more difficult to capture, more difficult to process, and so on. And by the way, again, people get freaked out by those studies showing that you can predict something from the face, but let’s not forget: you can predict that, and more, and more accurately, from your Facebook likes and your tweets and your history of credit card purchases and so on. So this is just one of many examples of sensitive data.

One could essentially argue that every kind of data we’re leaving behind is highly sensitive. In my earlier studies, I’ve shown that Facebook likes are revealing of intimate traits; Facebook likes that people usually think of as very innocent and superficial: a like of this picture of a funny goat, or a like of whatever funny status my friend posted on Facebook. What can one possibly learn from that? Well: your sexual orientation, your political views, your personality, your intelligence, what drugs you’re taking, and even whether your parents were divorced when you were a kid or not. All of this is essentially clearly visible and predictable from digital footprints, even ones as innocent as Facebook likes. Essentially, the bottom line is that the data we’re leaving behind is just the tip of an iceberg: when you combine this data with modern predictive algorithms, you can extract so much more information from it than the data carries by itself.

Turi: So you’ve spoken about this post-privacy world that many have mentioned; you’ve described us as already being in it. How do we manage it?

Michal: Well, first of all, I want to start by saying that I’m pretty uneasy about the idea of privacy being taken away from us. This is why I became interested in privacy research, and I would call myself a privacy scholar. My initial hope was that if I just put enough work, and others put enough work, into studying privacy risks and developing privacy-protecting technologies, and if we all just vote for the right politicians, the ones who have our privacy at heart, then maybe we’ll be able to stop this trend of our privacy being taken away from us. Now, unfortunately, the more I know about this issue, the more I study it, the more I observe what’s happening around us, the more I realize that, unfortunately, privacy is gone already, and it will be even more gone in the coming months and years.

So we are essentially already living in a post-privacy era, where a motivated third party, be it your government, or the government of another country, or a corporation, or even your neighbor with a little time and a little skill, can essentially learn way more about you than you think, and discover your intimate traits, and predict your future behavior, including behaviors and traits that you are not even aware of; such as, in the context of predicting people’s personality or their intelligence from their digital footprints: you may have never measured your personality, you may have never measured your intelligence, you may be young enough not to think about what your political views are. And yet a relatively simple algorithm will be able to look at your data, compare it with what it knows about other people, and make a very accurate prediction about your traits, future behaviors, and so on.

Now, having said that, I should also say another thing: the fact that we are losing our privacy, while scary and carrying many negative aspects, also has quite a few advantages. Taking away your and my privacy also takes it away from criminals, corrupt politicians, human traffickers, and all sorts of big corporations not paying their taxes, and governments trying to hide stinky things somewhere in a database. Now, this greater transparency is going to have many advantages for individuals and societies. Realize that whatever you write in your email, or whatever you write in your private message to someone, is essentially not private anymore. First of all, it’s not private even originally, because if you’re writing an email to someone, there’s at least one other person that has access to this information, and, by the way, they can choose to share it.

But now, if you also realize that there’s also Facebook that looks at it, and Google that looks at it, and your government, and the government of another country, suddenly it will make you think twice before you write something stupid or unpleasant. And this thinking twice will not only make you safer in the long term, because you’re essentially just not going to be writing some things that just should not be written, but it may also make you a slightly better person: a person that thinks twice before they write something stupid and possibly incriminating or embarrassing is just a better person, in many ways, I believe. Now, the fact that we are all losing our privacy at more or less the same time also means that

certain taboo subjects will go away, right? So, if we cannot really make sure that the world easily forgets what’s on this picture from this crazy party when we were 21 or 22, and now we are running for prime minister 40 years later, it’s much more likely that this picture will be dragged out from some depths of Facebook or Instagram or whatnot. But the same would apply to everyone else in this society, and maybe we’ll just stop obsessing so much over how many beers someone drank when they were 22, or what they smoked. Essentially, some of those taboo subjects: when you lose the privacy of the fact that you smoked weed as a teenager, today that’s maybe a big problem, or maybe actually not that big anymore.

And maybe, three years from now, it’ll be much less of a problem. And in fact, it brings me to another thought here, which is that very often we treat privacy as a quick fix, as a band-aid, for much deeper and more serious problems. Let me give you the following example. I very much appreciate the fact that my sexual orientation is private information, that I have some control over this information, simply because information about your sexual orientation can get you in big trouble in many places, right? But the problem here is not privacy itself. The problem here is the prejudice and bias, both in societies and in legal systems, that create a situation in which we cannot safely share information about our political orientation, sexual orientation, religiosity, and other traits. And in fact, some people may argue that a world in which I can freely share information about my political orientation, my religion, my sexual orientation, and so on, without fearing repercussions, is a better world to aspire to. Now, it also kind of shows how ridiculous it sometimes becomes when we essentially say: well, if you’re afraid of persecution on political grounds, just keep your political views private. This is just an inhumane thing to say. Let me actually bring in another example. We have no privacy of our gender, right? Our gender is visible immediately when you interact with someone; it can be heard in the voice; it can even be read from the names that we have. And we know that sexism exists. No one would suggest, though, that the fix to the problem of sexism is privacy of gender. No one would say, "oh, let’s just recommend that women hide their gender, and the world will be equal for them." That’s stupid and inhumane. First of all, we should celebrate differences.
We should be allowed to express ourselves in whatever gendered way we wish, without having to worry that we are going to be suffering from prejudice, and the same applies to other characteristics that we’re kind of keeping private out of fear of persecution or prejudice or whatnot.

I’m hopeful that, essentially, as our privacy is, unfortunately, being taken away from us, we will treat it as an opportunity and behave like adults about it. And history actually gives us some examples. Harvey Milk was arguing that if you can afford coming out as gay, you should do this. You will pay the price: you may pay with job opportunities, you may pay by being excluded from certain communities, maybe you pay by your family excluding you. And in fact, Harvey Milk paid the highest price: he was murdered because he came out. Now, those heroes that followed his advice and came out, often risking their lives and paying the price, they also paved the way for everyone else to follow. They paved the way by removing the taboo subject, by proving to the neighbors and to politicians and to society as a whole that gay people are normal, great, wonderful people, not the evil perverts they were often painted as by religious leaders and political leaders. And I hope that people will see that we can do the same now. One could argue that if you can afford losing the privacy of your political orientation, religiosity, sexual orientation, maybe

you should do this, because there are people in other places, in other countries, in other neighborhoods, that cannot yet afford to lose their privacy, and you losing yours will help to pave the way, change the ways in which society is thinking, remove the taboos, and reduce the pressure that is on those pioneers, the first people losing their privacy. And we should also change the way in which we think about these issues. Let me just give you one particular example: Western countries are selling arms and technology to countries such as Saudi Arabia, and then buying oil from them in return. And when voters tell their politicians, "hey, why would we have interactions of this kind with a country which stones you to death when they find out that you are gay? This doesn’t seem to be the right partner to have such a friendly trade relationship with," the politicians would say, "well, that’s kind of unpleasant and distasteful, but you know what, if you’re gay and you’re born in Saudi Arabia, why don’t you just stay quiet about it?" Well, unfortunately, we know that gay people, people who make choices not accepted in their society, have less and less ability to keep it private, to hide it. And this is why there’s no more time left: we have to actually take action and make sure that we are making decisions, voting for politicians, and choosing social and legal structures in which the rights of minorities are protected, because those minorities essentially have less and less opportunity to hide themselves behind privacy.

Turi: Michal, what a fantastic place to end, and an exhortation to us all. This has been fascinating, hugely instructive. We will link to many of your pieces of research, and to all the many people that you’ve discussed here, in the show notes. Thank you for joining us.

Michal: Thanks. Thanks for having me.

This page was last edited on Wednesday, 12 May 2021 at 09:57 UTC