
Reclaiming Truth in the Age of Information Disorder

June 21, 2024

By Skoll Foundation

“You can’t solve any crisis before solving the information crisis,” said journalist Patricia Campos Mello to kick off a 2024 Skoll World Forum panel about credible truth and information. Mello was quoting former U.S. Vice President and Forum speaker Al Gore amid concerns that policymakers and technology companies aren’t doing enough to address online hate, disinformation, and threats to human rights.

Emerging technologies have made “truth” more difficult to discern. How do we safeguard truth and promote accuracy in the face of growing information disorder? Panelists agreed that addressing mis- and disinformation requires trust-building within and between communities and news platforms. They also focused on trustworthy information and forward-looking policies and regulation. To that end, legislation against hate speech or false information could positively impact the information environment.

Watch the session and read the transcript to hear more about reclaiming truth from Mello and other leaders, including Sam Gregory of WITNESS; Imran Ahmed, founder and CEO of the Center for Countering Digital Hate; and Mevan Babakar, news and information credibility lead at Google.

Transcript from “Reclaiming Truth in the Age of Information Disorder,” filmed on April 10, 2024 at the Skoll World Forum:

Patricia Campos Mello: It's so great to be here. Good morning, everyone. This session, the idea is to discuss the importance of access to quality information. And we have the privilege of having here four people who are on the front lines of defending information integrity. As we know, emerging technologies are a growing challenge for access to quality information, and in this year, when we have a record number of people voting in elections around the world, it's even more important to guarantee that people can access information to make informed decisions.

So having said that, I’m going to introduce our great panelists who, as I said, are heroes in the front line of defending information integrity. To my far left, not in ideological terms or anything, we have Imran Ahmed, did I pronounce correctly your name? Okay, cool. CEO and founder of Center for Countering Digital Hate. He’s an authority on identity-based hate, extremism, disinformation, and conspiracy theories. And he advises politicians around the world on policy and legislation.

To my far right, also not ideologically, Mevan Babakar, did I pronounce this good?

Mevan Babakar: Yes.

Patricia Campos Mello: Okay. She’s the News and Information Credibility Lead at Google, working to tackle misinformation globally. Mevan was previously Deputy CEO of Full Fact, the UK’s independent fact-checking charity, and she founded Full Fact’s automated fact-checking team, which built AI technologies to help scale the work of fact checkers.

To my right, we have Nighat Dad, Executive Director of Digital Rights Foundation. She's also a Member of the UN Secretary-General's High-level Advisory Body on AI and a Founding Member of Meta's Oversight Board. And here we have Sam Gregory, the Executive Director of WITNESS. He leads their strategic plan to fortify the truth and champions their global team who support millions of people using video and technology for human rights. Sam co-chairs the Partnership on AI's expert group on AI and the media.

Well, I would like to start by asking Mevan, how did you get here? I mean, you come from a background of activism and social issues, and then you got into disinformation studies and now you’re at Google. How did you get here?

Mevan Babakar: Yeah, I ask myself that question every day. I think from a very young age, I didn’t take democracy for granted. I was born in Iraq, I’m Kurdish from Iraq, and I was born into essentially a genocide of the Kurds. And a big part of me understanding that story as an adult was also understanding just how much misinformation, disinformation, propaganda, the stories that were the fabric of our society were manipulated. And how that manipulation led to harassment, hate, and actually led to discrimination, which ultimately led to my family and I becoming refugees for five years and eventually coming to the UK.

So for me, the issue of mis and disinformation isn’t just about posts on the internet. It’s about the real world harms and the real world impacts that this has. And those harms are more than just, you know, someone being wrong on the internet. It can actually affect people’s lives, it affects health risks, life, it affects democracies. And so, when I voted for the first time in the UK, I was quite shocked at the quality of debate in the country and considering just how important democracy was and how the breakdown of it in my own life had led to this outcome. I didn’t take it for granted in the UK setting. And so, when I saw just how much misinformation was affecting my friends, my family, I ended up campaigning.

And I campaigned on many different issues around democracy, like from voting rights to fact checking, eventually, because I believe that people should have the right to good information. And that’s not necessarily giving people a yes or no answer, a true or false answer, but actually giving them the means to decide for themselves, giving them the context that they need to be able to decide for themselves at a time when it’s really vital.

And so, fact-checking led me eventually to the International Fact-Checking Network, which was 300 fact-checking organizations around the world. But even with all the will in the world, there were a lot of unknowns. There were a lot of questions that I had that were not being answered by the platforms, frankly. And so, I joined one, and that was the only way that I was going to get some of the answers to some of the questions I had. And I've been a year and a half at Google, and I've learned a lot, and I'm using what I've learned to build tools to actually help the people on the ground who are doing that important, vital work day to day.

Patricia Campos Mello: Great. I'm going to ask you a lot about those tools, which we really need, but first I'm going to ask, Nighat, how did you decide that you wanted to help empower women in the digital sphere? How did you get to this role?

Nighat Dad: Right. So more than a decade ago, I was practicing law and only focusing on women’s rights in the offline space. And slowly and gradually, women in my country started getting access to technology. But then they started facing a lot of hurdles and challenges in using that technology. For instance, just like a feature phone, mobile phone, no access to internet back then. But even facing harassment on phone was something that they absolutely had no idea how to deal with.

They came to me as a lawyer. I had no idea how to deal with, we looked into the laws, there were no laws. And then I think that kind of pushed me to look into a state of, you know, information and communication technology in my own country. But from the perspective of marginalized groups, from the perspective of women, sexual minorities, really just minorities who, in many countries, in global majority, Global South, so-called democracies, have really no real space to talk about their issues like in the physical world because there is a fear, there is a chilling effect of their free speech.

So internet provided that space to them. And I was so excited that, okay, we have now this space where women can actually, while sitting in their homes where they’re not allowed to go outside of their homes, they can actually access something where they can exercise their fundamental rights like free expression or access to information. And the challenges that they were facing, I’m like, so we really need to address those challenges. And it was hard when you don’t have resources.

I had to pave my own way as a one-woman army and I was like, “I cannot do this alone.” Started looking into international practices, organizations, and I think that’s how I got into this work. I started Digital Rights Foundation in 2013, but then that abuse and harassment that me and my team started addressing, by the time, it actually increased, even though we had some tools, laws, you know, platforms started coming into the debate, started addressing some of the issues. Governments started making regulations, bad ones, and it didn’t really address the issue.

So we initiated this helpline, the Cyber Harassment Helpline, in 2016. And I felt it's so important that we focus on community solutions. So I started from Pakistan and now I've ended up sitting at, you know, the UNSG AI High-level panel, and I feel it's such a privilege to be there, not actually becoming part of the Global North debate, but actually bringing the perspective of my own communities to those tables and telling them, "You cannot decide alone. You know, you have to listen to us. You should know what the ground realities are. You should know we are still facing abuse, harassment, disinformation. You are talking about generative AI. We are still facing, you know, the issues that started several decades ago."

And I just wanted to share some very good news: a couple of days back, we won a case for a woman journalist. I have a legal team as well, and it was against disinformation by our biggest TV channel in the country. We fought for two years; that disinformation on the TV channel led to online abuse that the woman journalist faced for two years. We first lost the case at the media regulatory body and then won it at the Islamabad High Court, and the TV channel had to apologize to her publicly.

Patricia Campos Mello: Congratulations, that's an awesome win for all of us. Sam, you've been working with human rights and human rights activists for over 25 years, since you were born, I mean since you were five. And you are focused now on the convergence of technology and the way technology can be used, you know, to subvert democracy or to threaten activists. So can you tell us a little bit about how you came to this work?

Sam Gregory: Sure, I was seven, not five. No, I was one of those very annoying people who at the age of 18 told people what I wanted to do, and it was to combine media and activism. And really my trajectory is all about how you engage with the evolutions, particularly in audio-visual technology, over the last 25 years, and how do you put it at the service of human rights defenders and a more globally diverse range of journalists?

And I, at the moment, I’m very centered in the world of deepfakes and AI, which can feel very novel, but frankly from the perspective of human rights defenders and journalists, so much of it feels old. And what I mean by that is, looking back over the last 25 years, you know, the problems of not believing human rights defenders and frontline journalists, the problems of targeting journalists and human rights defenders with gender-based violence, the problems of security, the problems of competing amidst the volume of information, all of those are problems we’ve experienced before. Now that’s no consolation, but it’s at least a starting point to say, as we start to engage with this moment, how do we do that?

And so, a lot of the work of WITNESS, which my work has been very bound up with, I’ve worked for the same organization for 25 years because I care deeply about the mission, is to think how do we put these tools at the service of human rights defenders, but also how do we make sure that the infrastructure that sets the terms for human rights defenders works for them? Because otherwise you’re just working with one hand, perhaps both hands, tied behind your back, right? The infrastructure of technology sets the terms of how journalists and human rights defenders work.

And so, as we sort of fast forward to now, like, a lot of what I grapple with in my work and in the work at WITNESS is, you know, how do we deal with the fact that frontline human rights defenders have to defend the truths of what they are experiencing and are exposed to an even greater sort of burden of proof? How do they fortify their truths in this really complex environment? And sometimes that's about facts. How do you show your facts? But it's also about narratives. And so, you know, my story and my work is always about facts, sometimes it's also about narratives; facts and narratives complement each other in human rights work and journalism.

And then layer on top of it, about six or seven years ago at WITNESS, we started working on an initiative called Prepare, Don't Panic. And we were working on deepfakes long before anyone else cared about this issue, when it seemed very hypothetical. And what we said is, "We're about to set ourselves up for the same failures as we've had with social media." So I know others on this panel like Imran have struggled with the failures of social media, the failures to regulate that early. And we've experienced that in WITNESS's work, so many places where social media failed the human rights defenders and the frontline journalists.

And so, when we started working on deepfakes, we said, “Can we try and avoid repeating the errors of social media? Can we center the voices of human rights defenders and journalists so we don’t end up spending, for me the next 15 years of my life, fighting a rearguard battle, but more importantly, for everyone in this room and journalists and human rights defenders, we don’t end up fighting a rearguard fight against AI that we spent the last 15 years fighting around social media?” so.

Patricia Campos Mello: Thank you, Sam. I think this is essential and it's a concern for all of us, you know, how to avoid making the same mistakes we made regarding social media. Imran, you came from UK politics and you also worked in finance, I mean, a very diverse background. Can you tell us?

Imran Ahmed: Yeah, I mean, I could go through my entire CV, it's rather convoluted, but I'll start in parliament. So I was a special advisor to the Right Honourable Hilary Benn, the Shadow Foreign Secretary for the Labour Party, from 2012 until after the referendum. And in about 2015, three things started happening simultaneously. One was the rise of a virulent antisemitism, a really activistic antisemitism, in elements of the British left, which we could tell was being driven by memes and themes being promulgated online.

The second thing, I don't know if anyone's ever worked as an advisor in politics, but you're basically human chattel. So when the referendum happened, the leader of our party didn't want to lead that referendum campaign for Labour, and it was given to a man called Alan Johnson. And Alan liked the look of me, he had me powdered and brought to his office, and I was his advisor for the campaign. And in that campaign, we saw a massive amount of conspiracy theories and disinformation and hate, primarily aimed at Muslims.

A conspiracy theory that the EU was trying to bring in Muslim men to rape 14-year-old girls and thereby destroy the white race. Conspiracy theories about election rigging, conspiracy theories that if you put your vote to leave in pencil, they would rub it out and put "remain" in pen. And then as a result of those conspiracy theories and the disinformation and lies and hate being spread online, my colleague Jo Cox, who was a 35-year-old mother of two, was shot, stabbed, and beaten to death on the streets of her constituency in Batley and Spen in Northern England, where I come from. And it broke my heart because I realized that politics wasn't viable with a wave of weaponized disinformation that was sweeping over our country.

And because it was happening on both left and right simultaneously, and I could look overseas to Hungary and see Orbán, I could look to Brazil and see Bolsonaro, I could look overseas and see this happening, it was happening in the US, too at the time, if you remember, in the summer of 2016. I realized it wasn’t based on contingent circumstances. There was no one individual who was responsible for it. There was something fundamental that was happening.

And what I realized was that the primary locus of information exchange of where we create and maintain relationships, where we negotiate our social morals, the norms of our attitudes and behavior, where we negotiate our values, even where we negotiate the corpus of information that we call facts had shifted to digital spaces and that bad actors were able to weaponize them incredibly effectively.

Now what I then spent two and a half years doing obsessively was studying those bad actors and actually going to the platforms. And I talked to the platforms and I realized that they welcomed me in and they said, “You know, come and speak to our teams in San Francisco and Ireland, all over the world, we need to know about what you’re finding.” And it was only after a couple of years that I realized that they’d known all along. That the entire process of policymaking was actually an elaborate gaslighting scheme and that they were never going to change. They had no intention of doing so because the economics disincentivized them from doing so.

In fact, because of the peculiarities of US law, of Section 230 of the Communications Decency Act of 1996, passed before the first social media platform emerged, the fiduciary responsibility of those executives was actually to do nothing, because they were not responsible if harm was caused and they did nothing. And if they did something, it's a cost and they lose revenue, because all content is a monetizable asset. And when I realized that, I realized that I had to find a way to create costs for the production and dissemination of this harmful content, the weaponized content that is algorithmically accelerated and mass distributed around the world. To stop making it so profitable for them, because it's addictive and we know that.

And so, I set up CCDH as a formal organization in September 2019. I'm really proud to say that we launched in a sort of fit of, you know, I was angry, and now we're 30 people around the world, 20 in London, we've just opened an office in Brussels, we have 10 in DC. And what we do is we create those costs through both regulation and by working with advertisers in a collaborative way, and my background in business helps with that, as well. I started my career at Merrill Lynch, bizarrely, after going to med school. Don't even ask. Like, most disappointing child to South Asian parents.

My mother, when she heard I went to Merrill Lynch, she went, “So you go to people, they give you, like normal bank, they give you 10 pound, 20 pound, now you give them million pound, two million pound.” I’m like, “I’m not, I’m not a teller, I’m a strategist.” She has no idea. Anyway. So that’s what CCDH is and that’s why I set it up. And now it’s an organization that has the ability to move the dial.

I'm really proud of that. And I know that's true because Elon Musk sued us for costing him, he said, "a hundred million dollars in advertising," when we put out a research report showing that when he took over the platform, there was a 202% increase in use of the N word on his platform, which I think is disgusting. And so did the New York Times, which put it on the front page, and so did lots of advertisers who decided they don't want to advertise on a platform that's rife with hate, that gives freedom of speech to abusers and makes it a toxic environment for everyone else. And they saw that and he lost money and he sued us for it, and I'm very proud to say that a few weeks ago, we won.

Patricia Campos Mello: It’s great, we’re celebrating two victories here. Nighat’s victory, your victory, congratulations. And I just wanted, since you were talking about this issue, it is great that, you know, the case was dismissed, but we know there’s a growing wave of intimidation efforts against disinformation researchers. I mean, there’s a backlash. How do you see this?

Imran Ahmed: So first of all, I see it as inevitable. Look, CCDH is the inevitable reaction to an industry that thought it could get away with murder for so long. Did they not think that at some point an organized civil society countermeasure would emerge? Because we're checks and balances on that industry. That's fine. That's how democracy works, right? Checks and balances, so that we can avert tyranny.

And my family are Afghan Pashtun, same tribe as the Taliban. Like, believe me, my grandfather wrote me letters telling me exactly how quickly a country can collapse into tyranny from, you know, from the 70s when we had women in government. So I am aware, like, this is something, that seed in the back of my head, that we need checks and balances, they’re vital. They’re vital to the health of a democracy. So we were the inevitable check and balance. But you know what, when we attack them, do you not think that they will fight back? Of course they will.

And so, what we've seen, actually, over the last couple of years in particular, as we've had success with things like the Online Safety Act, I was very proud to be the first witness to give evidence to that committee in parliament and that's now law, the Digital Services Act in Europe, and efforts underway in Washington, and I've given evidence to Congress on those, is that they've actually worked really hard to shut down the avenues for us to get data. So Meta, for example, threatened to sue Laura Edelson, a researcher at New York University, for the same thing that Elon ended up suing us for, to try and stymie her work.

They’ve shut down CrowdTangle, which was the most effective tool we had to see the dissemination of harmful content on their website. TikTok, after we put out a research report showing that within 2.6 minutes of setting up an account as a 13-year-old girl, you get eating disorder content pumped to you algorithmically from the For You feed, within eight minutes, self-harm content. They shut down our ability to see how many times that content had been viewed. So they’re making themselves more opaque in response to attempts to hold them accountable because why? Because transparency is vital for accountability. And so, as a result, we’ve seen them all shutting down.

Now the irony is that Elon sued us, not for defamation, he couldn't, everything we said was accurate. He sued us for the act of doing research. He sued us for scraping data, which he said broke the terms of service of his platform, and asked for $10 million for that. But what he has done is make the most eloquent case possible, ironically, for why it's absolutely necessary that we have statutory data access pathways for researchers and academics so we can understand those platforms. And that sits at the heart of our STAR framework.

So what we actually do in our advocacy model, we don’t want to- I don’t like the idea of anyone banning speech, what I like the idea of is more speech and more informed speech. So we want transparency, T, accountability, A, responsibility if they are negligent, that they should be economically responsible, create disincentives for the production of negative externalities, and that leads to a culture of safety by design. Now, unfortunately, that spells “TARS” and “STAR” sounds better, so that’s why we call it the STAR framework.

Patricia Campos Mello: You’re talking about really necessary regulation and transparency and access to researchers, but as we know in countries that are not mature democracies, regulation can also be really bad regulation and increase the power of government. And I was speaking with Nighat about this, you know, about taking care, I mean, being careful about what kind of regulation is being discussed and implemented because in other countries around the world, it’s not as easy to give more power to government to, you know, how do you see the danger of having the wrong kind of regulation in countries that are, you know, still-?

Nighat Dad: Yeah. So we have already witnessed that, not only in my country but in different countries in South Asia: several regulations initiated by governments very quickly in the name of holding platforms accountable. And you know, I am someone who has been very critical of platforms around their behaviors, especially when it comes to different jurisdictions, and those behaviors are very unequal. They will listen to the EU and they will listen to the US and UK; they won't listen to us, right?

So I am all for platform accountability, but we are also dealing with another actor, right? It's our governments, the governments who will try to hold them accountable in the name of accountability, but their basic objective is to basically control dissent in their own country on these platforms. And we saw that already. So, you know, I'm not saying that we don't need regulation, but we have to be very cautious. And that's why I always push for the global majority to be in these conversations, that when the Global North is looking into these regulations and laws, please do not only look within your own jurisdiction, do not only look towards your own sovereigns, think about the public and people who are actually dependent on you, you know?

We see champions of human rights, you know, in the Global North. And our governments, also, you know, give us examples: if they have done this, we will also do this, and they just copy paste it. There is no structure there around rule of law; they copy paste and then they will implement it according to their own wishes and desires. And that makes our work very difficult. And that's why I'm like, we need regulations, but we need good regulations, we need to bring that conversation into our own jurisdiction. Not everyone of us can leave our countries. I'm still in Pakistan working for the communities, working to try to make things better, to make the system better. And I really want, you know, our countries to do better.

And I think that's why we all need to find a solution, how we can introduce good regulations in our own countries. Maybe it won't be like the DSA, it won't be like the US executive order, but what are the good practices we can bring and push our own governments on? But how can we all play a role? Like, I'm also an activist, I also face intimidation, discrimination, disinformation. There are times when, you know, I have to relocate myself, you know? But I'm still there working, so I think we really need to look into the ecosystem, how the ecosystem is responding, and not in silos. Think about other marginalized communities as well, think about countries that we think are authoritarian, but they're still there. They are in the UN, they are giving their opinions, they are shaping the decisions. So that's my opinion: regulation, it's difficult, it's complex, but I think we just need to find a way to play our part.

And global majority, you know, not only focus on Global North, Global North also make mistakes. It’s not that they are the perfect ones. So if they are making mistakes, admit them and then see how they can bring global majority actors into the- We are the ones who face challenges, we are the ones who bring examples. We face the ground realities and we can shape your policy better because we are bringing real examples into the room.

Patricia Campos Mello: Great, thank you. Sam, you also value the engagement of the community, empowering the community and human rights defenders, you know, to deal with, to combat disinformation. So regulation alone is not enough. How have you been trying to engage communities and to empower human rights defenders to combat it?

Sam Gregory: Yeah, so I think there’s two directions that we’re trying to do that and one is- I also think it’s really important to focus on how communities create trustworthy information, right? I think our focus on mis and disinformation often obscures the fact that what most people want to do is share information that can be trusted. Now the challenge there is it’s getting harder to do that because of the pressures around it. So a lot of our focus is how do you support people to- If they’re documenting facts, they document in ways that are harder to challenge, that meet the burden of proof that’s expected of people and how do you help them to compete?

And there's an unfortunate dynamic that happens in human rights settings of the expectation of having, you know, a forensic visualization, an open-source investigation for every human rights crisis, when in fact, sometimes you just have your camera or you have an interview. So it is challenging, but you've got to really focus first on actually supporting the direct witnesses, the human rights defenders and the journalists to create trustworthy information.

The second is thinking about narrative strategy. And I think most of the battles we’re fighting are about narrative, not facts. And what we’ve heard from, for example, land rights defenders in Latin America is how do we compete in the narrative battle, and again, as I said, I think AI and some of the new advances layer new problems on that because, for example, one of the challenges we frequently see is like, the narrative overload. Our narratives are overwhelmed by volume, while unfortunately AI makes it easier to create a greater volume of competing narratives, particularly when it doesn’t matter whether they’re true or false, it just matters that there’s a volume of them. So I think as one part, is you invest very deeply in communities and say, “We want to support you and we want to help you create trustworthy information and also challenge falsehood.”

Now the challenging falsehood is also complicated by AI. I’ll share a story from a mechanism we run called the Deepfakes Rapid Response Force, it’s the only global mechanism that enables frontline journalists and human rights defenders to go to media forensics experts when they encounter a piece of media that is claimed to be AI or might be made with AI. Again, there’s a very gray area here where people kind of use the excuse that something’s made with AI to dismiss compromising reality. And what we’ve seen in that mechanism, it’s pretty hard.

You find that, you know, one result shows that a piece of audio is faked, another one suggests that it's real, you reach 85 to 90% probability, and that's when you have experts there. So when you translate that into the front lines of human rights defense and journalism, we have a real challenge in access to the tools for detection and access to the skills in this emerging AI era where it's just super easy to create falsified content and super easy to claim that what's real is fake.

I want to pick up on Nighat’s point, which is, and it’s really where we focused our work around synthetic media and deepfakes for the last six or seven years is, like many technological infrastructures and policies, this is being built primarily based on Global North priorities, and a certain set of Global North priorities. And so, the way we built our advocacy agenda was by bringing together journalists and fact checkers and technologists from 2018 and 2019 when the threat was hypothetical to say, “It may feel hypothetical, but people are building the infrastructure now for how this will be responded to.” And I think that’s really important that we bring in global majority voices, human rights defenders and journalists early in technology, not late.

And so, what that’s meant for us is, for example, trying to make sure that some emerging standards that exist or are coming into place around how you understand the recipe of how AI and human was used to create a piece of media, right? Where did AI come in to make that video? Where was a human involved in the editing process? Really focuses on some concerns that we heard really clearly from human rights defenders and journalists globally, which were, “Is this going to be privacy protecting or is it going to be a honeypot for the types of surveillance and fake news laws that we are seeing around us? Is this going to be globally accessible on the tools that we use? Is it going to be available on the types of platforms that we have available to us at a cost we can provide? And is it designed with the languages, the media formats that we’re engaged with?”

Otherwise it becomes something that is perfect for the New York Times, perfect for the FBI maybe, but not great for the rest of us to engage with an increasingly complex world. And I think it’s critical we engage on both ends of the spectrum, support people who are on the front lines, we need to support their truths, their facts, their narratives, but remember that a lot of this is set at the policy and infrastructure level and that’s where the voices are missing in these emerging technologies.

Patricia Campos Mello: Can you very briefly explain to us why audio deepfakes are a great concern right now in terms of identifying them?

Sam Gregory: Sure, so I'll give an example, actually, from this morning where we had an audio deepfake that came in from a country in Africa that's actually seen a lot of audio fakes recently, and claims of audio fakes. This is a global phenomenon that's affecting elections as well as conflict zones as well as everyday life. It was a piece of audio that sounded like it was recorded off the radio, and our media forensics experts looked at it and they came back and they said it's likely not AI, and this happens a lot, people are saying it's AI because they want to dismiss the truth. But we're having a problem because we don't have other sources, so we can't sort of verify it against other sources, and with audio it's hard to track down the original, and maybe it doesn't have an original. It's radio, and all our systems are trained on clean audio. It's in Sudanese Arabic, right? Well, did we train our systems on that? We primarily trained on English, probably North American accented English, right?

So you have all of these technical challenges that you then have to explain to a suspicious public, right, “We are 85, 90% certain this is faked, we’re 85, 90% certain this is real.” And let alone the fact that this had to go to media forensics experts and in the time that has happened, probably whatever rumor has set in, or truth, this is faked or it’s not. So we have this very big challenge around detection that is common to many scientific communication challenges and particularly disadvantages frontline communities. And it’s partly technical, it’s partly a scientific communication problem, and it’s partly a question of who’s being resourced and who’s being prioritized.

Patricia Campos Mello: Mevan, you guys are developing several tools to help fact checkers, journalists, navigate this new world of AI and disinformation in general and also, you know, engaging in pre-bunking. Can you tell us a little bit about the tools you’re developing and why it is important to strengthen journalism?

Mevan Babakar: Yeah. I think I’m just deeply pragmatic about the fact that this is a very complex problem and it’s changing all the time and there are definitely lots of different issues, as you have heard here, all the way from business models to people on the front lines who are doing journalism and fact checking, or to access to research and there’s so many layers of different problems and each of them require a deep intervention.

The bit that I work on and I have the most experience in is thinking about how do we give people who are at those front lines, and how do we give the ecosystem, the ability to have insight into the problem a bit better? How do we give them the ability to check things and that’s for the expert users, but also for the everyday public? And more and more, the public just want the ability to appraise trust for themselves. They want to, they’re overwhelmed with information that’s pointing in lots of different directions and we want to build a world where somebody can say with confidence, “I trust this piece of information and here’s why.”

And some of the public-facing things that are happening at Google are about giving people that extra context. So there are features like “about this image” or “about this result” on search that gives you more than just the information about the result, it gives you an extra layer of information. So in the case of an image, it will tell you how old is this image? Has it been like, in the public sphere for five, 10 years? Have you seen repetitions of it? Has it been fact checked before? That kind of thing. So that’s the public-facing version of it.

Then there's this work that I'm particularly proud of that's coming out quite soon. It's a piece of research that actually took every single fact check that's been written from 1995 to 2019 and extracted from it all of the images that were actually fact checked. This is hundreds of thousands of fact checks that were looked at. And on top of that, we added two million annotations. So understanding exactly what were the extra pieces of information that were available, what were the types of manipulation that took place, how quickly did someone respond to it? And as far as I know, that's one of the biggest pieces of research that's been done on image fact checking in particular. And some of the things that came out of that, I think, were really fascinating and actually helped us figure out what are the solvable problems in image verification.

So 80% of all the fact checks that have been published involve some sort of media. We haven’t been able to quantify that before, but we can now. In the past two years, video misinformation has gone up significantly and we expect it to go up even more. 20% of all the images that have been fact checked were screenshots, which was surprising to me, and we’re still trying to figure out why, but screenshots, surprisingly. And I think the most telling thing, which actually helped in figuring out what the next steps are for me and the team that I work with is the image manipulation that takes place a lot of the time, over 60% of the time, is actually not a manipulation of the image itself. So it’s not pixel manipulation, it’s context manipulation. So it’s the same image, but it’s just been taken out of context.

So you might see an image of Afghanistan in 2001 and someone might republish it, the same image, and say it's Ukraine 2022, right? So it's actually a relatively simple manipulation. But now that we have that kind of context, and we can go even deeper and understand that the context manipulation is mostly date, location, and person, that's a very solvable problem all of a sudden. Can we build tools that actually give expert users that kind of information at a glance, when it matters most? And so, the Google research team and I are essentially building into the Fact Check Explorer this image context tool, which extracts all the context of an image on the page it's on, creates a timeline of how that image has been used over time, and tells you whether the context of that image has changed.

That’s a relatively new tool that we’re beta testing at the moment. But I think the combination of that research, finding a really deeply solvable problem, and then actually building something that is being used now by 1,600 journalists and fact checkers around the world, hopefully builds a little pilot of something that can happen more and more in this space because I think for all that we talk about mis and disinformation, we still don’t know enough, actually, about what are the key solvable problems in it? And I agree that part of that is down to the fact that we need more research and we need more openness about that.

Patricia Campos Mello: It's great how you're mentioning that, of course, AI and deepfakes are a concern, but you know, context, which is a pretty basic and classic form of disinformation, is still pretty much widespread. And even though that is a big concern, you were mentioning, Nighat, that gendered disinformation has been increasing and it's getting- Cyber harassment against minority groups has been getting more and more sophisticated. Could you tell us in what ways is that changing?

Nighat Dad: Yeah, I mean, so we are, during elections, we basically started working with, actually, Skoll awardee, Meedan, I think they are sitting here. So they are working with a lot of partners around the world and we are one of them from Pakistan. So we basically collected data and we focused on women journalists, women politicians. So our last elections happened in 2018 and a recent one happened in February, 2024, and we actually saw this like- So, people were having conversation around the fact of generative AI, use of AI during elections, and we actually witnessed it, like in real time.

So bad actors started creating synthetic histories of women politicians, women journalists, especially women journalists who were actually sort of doing a lot of journalism around political parties and sort of being very critical around their manifestos and their role, and, you know, Pakistan is also a very interesting country when it comes to politics. So a lot was happening and we started getting a lot of these complaints, but my team was collecting this data, but we just collected it, we just had no idea how to deal with it.

But then there was the kind of role that platforms played on the eve of the election, which I don't think they were ready to face and we also didn't anticipate: political party supporters on the eve of the election created so much generative AI content in support of their candidates. So one of the candidates, one of the politicians who is a very famous, you know, celebrity, the former prime minister of Pakistan, Imran Khan, is in prison, and he was in prison during the elections as well.

So a lot of these audio deepfakes that, Sam, you were talking about, his speeches, you know, fake speeches were, you know, all over these platforms and then they were boosted by the supporters. So you know, political ads, and we were like- And hundreds of thousands of people were then sharing it, and not only on those platforms, but on WhatsApp as well. So it was not only how political parties used generative AI, but how they then used platforms to boost their content with no real oversight.

And that’s why I’m like, “Hmm–” So platforms are ready to deal with US elections, but they were not ready to deal with Pakistani elections. They were not ready to deal with Indonesian elections or Bangladesh elections. And now after all our examples, there is a focus on Indian elections and that’s where you’re like, “Okay, we are the same region,” but you know, like, they did to us, but it has happened. But now what they can do is they can learn from our examples and the data that we have collected.

I think that’s why, what we are trying to do is that, I have done this for so long, you know, telling people what is happening to us and asking for their support, but now I’m like, you know, we need to do South by South collaboration, so we need to make our friend- One of our friends is sitting here, Amber, sorry to put you on spot, but there is this community that we are trying to make, it’s colorized community, and it’s like Global South to Global South collaboration because I think it’s very important for us to share our own examples with each other and see what kind of resources we can make to empower our own self instead of looking at other people. And I think Global North can learn from us.

We have so much to tell you, a lot of things that we have already faced, you can learn from us and see how you can make your elections better, platforms can learn from us. But yeah, TFGBV, tech-facilitated gender-based violence has gone really high. And I think that’s why we should not only limit ourselves to regulations and policies, but see how we can empower communities who can deal with this, and one of the cases, you know, the journalist winning against our biggest TV channel.

Patricia Campos Mello: Thank you. I know I still owe you guys a question, but we need to open for questions from the audience, so maybe you can start by taking this, but you want to add something?

Imran Ahmed: I just want to add one point.

Patricia Campos Mello: Sure.

Imran Ahmed: I mean, and Mevan, you know, with the greatest respect, I don’t think we need more research at this point, because that’s the thing that social media companies tell me every single time. Because when we do the research, nothing happens. I’m going to give you two very, very quick examples that are relevant to your company.

One is a study that we did on Google search. We found that two years after George Floyd was murdered, Google was selling the search term "George Floyd cause of death" and rerouting it to a Daily Wire article that was paid for by the Daily Wire, which said that he'd been killed because of a fentanyl overdose. Two years after, and they were being paid for that. And we showed them a list of appalling things, including greenwashing terms; they were selling "net zero" to an oil company. So the first result you get is disinformation. And given that 98.6% of users never go past the first page of results, you basically own the information ecosystem, it distorts the lens through which you see the world. So that verb "to Google" actually means to be misled for money. Nothing changed.

The second example would be on YouTube, where we've just done a study. Every Earth Day, YouTube, which is a Google company, announces that they are going to deamplify and demonetize climate denial content, every year. So we check it every year, and they haven't; they just announced it. They haven't actually done anything about it. We just put out a research report a few months ago. I'm actually flying to New York tonight to go and speak to Al Gore's Climate Leadership Corps about that. And again, we've done the research, but the action's not being taken. And the problem is, fundamentally, a system that's broken. This is not just about disinformation, either, about one guy here and tinkering around the edges and can we fact check everything that's out there? It's the amplification by algorithms.

Zuckerberg himself once put out a chart showing that engagement on his platform, so X axis is how violative the content is, Y axis is engagement. Flat, flat, flat until you get to the edge of being violative and then it goes shooting up. And he said this was a problem that Facebook had to confront that actually, the most violative content gets the most engagement, usually from people saying, “This is nonsense,” or, “This is disgusting,” and therefore gets the most amplification. The algorithms then amplify it.

This is not just about the existence of disinformation, it’s about the mass acceleration of that disinformation at the expense of the truth because it engages people, keeps them on platform, maximizes ad revenues. That’s the real problem we’re dealing with here. Please don’t let’s make this about someone said something stupid once on the internet, that’s not the problem. The problem is platforms that amplify it and profit from it.

Patricia Campos Mello: Do you want to comment on that Mevan?

Mevan Babakar: I mean, it’s not really– I think–

Imran Ahmed: Sorry, I didn’t mean to be- I know it’s not you personally, Mevan.

Mevan Babakar: No, it’s not me, personally.

Imran Ahmed: I do.

Mevan Babakar: I’m like a year in, but okay. But I hear you, look, I hear you. I get it. The thing that I think when you say that, because I do agree with you, I think it’s an issue, I think the amplification of it is obviously the thing that is most scary and the thing that’s growing. The question that I have around it is, there’s amplification on one side, but also there’s human behavior on the other side. We know that our brains are also wired to actually be more responsive to things that cause fear, that cause surprise, that cause disgust, and those are actually the things that are also amplified, right? I’m not saying that that is therefore the only answer, but I’m saying that’s part of it.

It’s not necessarily just by itself the algorithm, it’s also how we all behave as a society and where we decide as a society is the line. And actually what does that line look like in terms of different cultures, in terms of different societies, in terms of like, places where there are fewer or more press freedoms. That’s actually a moving target. And the only point I was making about research is that we need to actually delve deeper to understand where that moving target is. That’s all.

Patricia Campos Mello: Okay, we could have this conversation forever, but we want to get questions, give the chance for the audience to, if you could identify yourself before asking the question. Okay. Okay, one from each side. One here, okay.

Marco: Hello, good morning. I'm Marco from Brazil. And Imran, as I was listening to you, I was reminding myself of the relationship between fossil fuel industries and climate change. Of course, back in the 70s, Exxon had a program called Bell Labs, they had the best climate scientists, they were doing research on lithium batteries, renewable energies, until someone in the company realized that the profit pool of the industry was so large that they could just engage in propaganda, and they have deceived us for 40 years, right?

So what worries me when I hear you speak is that because of AI, the economic power of digital platforms has been increasing dramatically. Meta's market cap has increased, in the past year, by three Netflixes. On the day of its latest quarterly financial results, which were so amazing, it added the value of Shell to its market cap in one day. Scott Galloway from NYU has labeled this "metastasis." And so, what I'd like to ask you is, how do you see this playing out? Because the economic power with AI of digital platforms is growing so quickly, how can civil society and government, you know, stand up to this?

Patricia Campos Mello: Let’s just gather three questions and then you all answer. The lady over there.

Tessa Dooms: Good morning, I'm Tessa Dooms from the Rivonia Circle in South Africa. I want to go into the human behavior part of this conversation, because part of what we are trying to grapple with is- So we've got digital divide issues, which means there are only very small pockets where the digital kind of filters in, and the majority of the population don't have high data access, aren't going to be able to fact check for themselves or use many of the tools that are created on the side of the platforms.

And so, what we’ve been experimenting with in different formats, is how do we empower people in communities to start doing the kind of fact checking and what does it look like to do organic fact checking from below? And so, being able to say to people, “This is how we have conversations that help us get to better truths,” even when we don’t know who exactly to trust from the outside. From the outside might be the platforms, it might be the Global North, it might be scientists, we don’t know who to trust in general terms, but we do trust each other. And so, how do we start encouraging people to have conversations that build trust?

Asking the right questions, figuring out as communities of people who trust each other, how we come up with the most reliable, most useful answer to the misinformation or to what is disinformation and misinformation. Because it’s also not enough to say, “This information is good and that information is bad,” because anybody can make that claim. It’s what people trust and what resonates with them in their communities, in their context, that’s most important. And I think we’ve lost, in focusing on the digital solves, we’ve lost the human behavioral solves that say, “From community’s perspectives themselves, we need to be able to have citizens that drive new forms of testing what is true based on what works in their context and actually gets the right outcomes for them in their communities.”

Patricia Campos Mello: Thank you. Two more questions. Okay, one here and one here. Gentleman here.

Darshan: Hi, I am Darshan from Oxford. One of the questions I had is two-pronged. One of them is how, and sorry to bring up Google again, but I’ll give an example on YouTube, there’s a guy called Monu Manesar who keeps getting, he’s got a gold play button and he’s one of the Hindu extremists in India who’s actively murdered people and YouTube is funding him. And I want to try and understand, like, we can talk about all this stuff, but when do we kill that model?

And the other side of it is, those platforms also, we talked about the human side, they also very quickly facilitate the ability to take down activists. So my account gets reported all the time from India constantly, you know, and from other places. And I’m just one person, but like, I’m sure it happens to everyone else. So coming up and sort of saying, “We’re going to fact check and we’re going to do this,” it’s very quick, for example, in India for the BJP IT Cell to literally put millions of people reporting one account and it gets taken down.

So on the one side you're funding the extremists, on the other side, you're taking down the people fighting them. How can we deal with that before we go into anything new? Because if we don't fix that, you know, all our democracies are in danger, but for YouTube to be funding a mass murderer and giving him a- It's a very weird photo of him with a gold play button from YouTube. So I just wanted to raise that question.

Patricia Campos Mello: Thank you.

Darshan: Yeah.

Patricia Campos Mello: And this lady here in the front.

Mona Shtaya: Thank you. My name is Mona Shtaya, I'm from Digital Action. We're the convener for the Global Coalition for Tech Justice, which has over 200 organizations; Nighat's and Sam's organizations are part of that. So my question is also regarding YouTube. Like Nighat said, hopefully companies are taking lessons from what happened in Indonesia, Pakistan, Taiwan, among others since the beginning of the year, to reflect that in the Indian elections. However, one of our partners, Global Witness, conducted a very recent investigation where they tested YouTube by submitting 48 ads in English, Hindi, and one more language. And they were all approved, although they violate your community standards and contain misinformation and disinformation; they were all approved, every single one of them.

And this is not the first time, this is not the first platform, every other platform has also the same thing. And if anything, this can show us the systematic failure, unfortunately, to address the disinformation, misinformation that is basically leading to real world harm and affecting, like, democracies globally. So maybe this is a question on how can we address that together, since I know, like, we, among other civil societies, have been reaching out to companies and engaging with companies for so long, trying to convince you guys to announce your plans on how you are addressing or handling different tech harms during this year, the biggest year for democracies. And unfortunately there is a problem, a lack of transparency on how companies are handling that. Which, like, bring us here together to speak about that globally. Thank you.

Patricia Campos Mello: Maybe we could start with Mevan, and then give some space for Sam, who hasn’t spoken.

Mevan Babakar: I’m going to give you a deeply unsatisfying answer, which is I don’t work at YouTube. Like, I understand they’re both owned by Alphabet, but the values of YouTube are very different, the policies are different, I work on the Google side and I’m not going to talk about how YouTube does its policies. Sorry, but I just don’t think that’s my place right now. It’s funny also, because I used to be on that side, making the same comments about YouTube, but I do think that you’d be better answered from somebody in the YouTube trust and safety team and not me. So sorry about that.

Patricia Campos Mello: Do you want to comment on other-?

Mevan Babakar: I do, I want to comment on the community aspect of it. I think the community aspect is really important, actually. I think it's really important to not just identify what pieces of misinformation are affecting a community, but actually how to then engage that community in answering it for themselves. And Meedan, next to you, does a brilliant job of doing that. And I think that, actually, it's more than just giving people the answers, it's about giving them the tools to be able to assess that for themselves. But not only that, you know, narratives have come up a lot.

Certain narratives affect certain communities a lot more. And actually giving people the understanding on what is this narrative? Is it going to come up again in a different guise? Is it actually something that’s recurring? Because a lot of the misinformation and narratives that we see are recurring. You know, you mentioned earlier, Imran, that there’s like, election misinformation around, “If I vote, it’ll be rubbed out and then replaced.” That comes up every single year in every election and there’s lots of different types of misinformation that like, that are repetitive. And actually, we could do a better job of giving people ahead of time a heads up that this is going to happen. And I think the communities are a big important part of that.

Patricia Campos Mello: Thank you.

Sam Gregory: I’ll comment on the question around community-based work. I think it’s easy to mystify AI as a new element in this, but the only way our ability to deal with mis- and disinformation is going to be successful in an AI era is by embedding it in community-based communicators. Because that 85 to 90 percent probability, that trust, is probably going to be explained by talking to someone you trust, not by, you know, hearing it abstractly in scientific communication terms.

On AI exacerbating where we are now, there are very clearly risks in the way AI is going to exacerbate problems around advertising and surveillance, right? The way we interact with chatbots, the way we interact with AI systems, is a way in which we provide more information to the companies who control them: more personal information, more information that could be used for surveillance and that will probably be used for advertising, unless people can come up with an alternative model to fund the way they work. So we’ve got two exacerbations there.

The place I’ll focus, though, where we do our work at WITNESS, is of course the impact on the information ecosystem and how we understand AI’s presence, and there are a lot of hypotheticals here. And I want to be clear: a lot of our work is de-hyping AI and its impacts until we really know, right? What are going to be the impacts of an increased volume of information? What are going to be the impacts of increased velocity, increased personalization, an increased ability to dismiss the truth by claiming it could have been falsified?

We don’t really know the answers, so I do think we need research there. I’m not saying that as a defensive thing; we actually need to know so we can come up with the responses. But as we’re waiting for research, we should also be pushing for a range of safeguards and for regulation. What I mean by that is there are some robust foundations that need to be part of regulation on AI to support us in this.

One is something that’s already very widely discussed, which is transparency about how AI is being used. Without transparency, we have no way, as consumers, of knowing that AI is being used in an individual content item, and no way, as regulators, of knowing that this model is abusive, that this approach is creating mis- and disinformation at scale, or child sexual abuse material at scale, or non-consensual sexual images at scale. So we need transparency, and we need to think very clearly about liability across a broader pipeline.

And I think one danger in the way we’ve thought about platform accountability is that we focus very much on social media, the distribution side. When we’re talking about AI, the only way the preventive measures will work, like detection, like ways to understand the recipe of how AI and human input were used in a piece of media, is if we have accountability that runs from the foundation models, to the deployers of those models, to the tools, to the distribution. That includes, of course, many parts of a conglomerate like Meta or Alphabet, not just Instagram or WhatsApp or Google Search, right?

And so we have to think more broadly, and we have to do that now, and we have to emphasize transparency to benefit regulators as well as consumers and citizens. And we have to emphasize the ability to place liability where it needs to sit across that pipeline of responsibility. Otherwise the solutions we’re proposing, and that are needed by frontline human rights defenders, journalists, and our societies, won’t work, and there won’t be accountability for the broader systemic issues.

Nighat Dad: So, with regards to AI, there’s a really good friend of mine, Rumman Chowdhury, who is now a US envoy on artificial intelligence. We were talking to each other right after the Pakistani elections and I said, “You know, there is this form of generative AI content which is not harmful, but which is there and is very uncomfortable.” It’s like a politician who is behind bars, you know, presenting his image as a very cuddly, cute politician. So the one is Indonesian, but the Pakistani one also has the same kind of quality, and I said, “It’s not harmful, so what will we call it?” And we were just thinking.

And she coined a term, “softfakes,” and her article just came out in Nature yesterday. This is a space that we are all exploring, and I think people really need to pay attention to what communities are dealing with and how they are coining these new terms, because these terms are actually representing their realities. So this is what is happening in elections. But with regards to monetization, I actually could relate to it.

You know, there’s this right-winger in Pakistan who is very anti-woman and anti-feminist, has really put those movements in danger, and had this golden YouTube play button. And we were like, “Oh, well, you are a religious right-winger, you are against any technology, you are against TV, you are against radio, you say women should not listen to songs, and yet you have this golden button.” So they thrive on, you know, monetization.

There’s this very good piece of research, I think it’ll come out in a few days, by a very good friend of mine, Victor, I’m forgetting the last name, but she has done a lot of work on Myanmar and her research is coming out. Please keep a lookout for that research, because it’s all about monetization, building on the experiences of the global majority. And with regards to the CSO question: I think CSOs can only participate when they are resourced, you know, and if we can build their resilience. If we are okay with only CSOs from the Global North participating, then fine. But if we really want proper space for CSOs around the world, we need to enable an environment where they can actually thrive and do their work without being threatened and silenced by the powerful actors around them.

Imran Ahmed: To answer the question on what we can do: look, the situation we are in now is the result of market and regulatory failure, complete market and regulatory failure. And it makes me laugh when people say that we can rewrite the business models of a company like Meta. You know, Mark Zuckerberg is younger than I am and he’s worth a hundred billion dollars. It’s one of the greatest, most parasitic business models in history: get other people to write the content, have an algorithm that reorders it to put the most addictive stuff first, and Bob’s your uncle, you’re worth a hundred billion dollars. He’s not going to change it. He’s going to do everything he can to defend it. They’re going to reduce transparency.

When we put out probably our most famous research report, The Disinformation Dozen, showing how 12 people produced 65% of the anti-vax disinformation about the pandemic that was shared online, it led President Biden to call Facebook “killers,” because they were putting out the content that led to 200,000 people in America choking to death in ICUs because they thought the vaccine would harm them. 200,000 preventable deaths. The problem we have is that those companies have no incentive to change the way they behave. And we’ve got to change that.

We have a broken, oligopolistic market with a very small number of companies with weird names like Meta and Alphabet that actually own everything and that amplify hate and disinformation. It is a weaponized age, and this is not something that conventional solutions or tinkering around the edges will fix. I once had the great privilege of meeting Bobby Shriver, JFK’s nephew, and I explained this to him really early on in CCDH, in year one, during the pandemic. I explained what I was doing and what I thought was happening, and he said to me, “This reminds me of something my uncle faced in the Cuban Missile Crisis.”

The admirals were coming in and saying, “This is the conventional way to respond to threats by the Soviets,” but he realized that for all their genius and all their experience, those admirals’ experience was meaningless, because the nuclear age changed everything. It changed what diplomacy means, what peace means, what war means, how to conduct yourself, and the importance and weight of all the different aspects that go into that decision-making process. And so he did his own thing, and he was right.

We are going to need to deal with the nuclear age of disinformation, where we are being overwhelmed in our information ecosystem with disinformation and hate content that re-socializes our world. We are the experiment. You can’t really see how bad it is, because we’re actually in it right now. If you go home and say, “You know what, those politicians are fascists,” we are basically being polarized. That vociferous, sclerotic sort of process that’s happening to our society, we’re victims of it too.

And so we are going to have to rethink things, and that fundamentally means we can’t just tinker around the edges; we’re going to have to change the economics of this industry once and for all. The way we do that is not by banning types of speech, because you’re right, other countries will immediately start banning speech too. But what we can do is force genuine transparency from these companies: of their algorithms, of their content enforcement policies. If you take content down, tell us why and what rule you applied. If you leave it up, tell us what rule you applied and why you came to that decision. Help us understand how you operate your systems. And transparency of the advertising.

We need meaningful accountability, and that means better-equipped bodies that can hold them accountable, whether that’s select committees in Congress, Ofcom in the UK, or other regulatory bodies. And then, just like every other company, if you buy a cup of tea and they’ve got cleaning fluid in it and it kills you or hurts you, you can sue them, right? Negligence, product design law; they should be subject to the same laws. And that’s why we’re saying that Section 230 of the CDA of 1996 in the US needs to be amended to add a reasonableness test, so that you get that protection if you act reasonably.

Now those three changes are small in reality: transparency, accountability, and responsibility. No one disagrees with them. I’ve briefed this to Republicans and they agree with it. We just need to get the heck on with it, because in the meantime there’ll be more and more victims of social media. And I’ll leave you with one last thing, and I want to go back to Meta because we haven’t talked about them enough. On the board of my organization is a man called Ian Russell. This is why I do this work.

You know, Ian’s daughter, Molly, was 14 when he walked in and found her hanging. And when the coroner investigated it and forced Meta to show them what they had shown her on Instagram, it took the company three years or so to actually provide the data to the coroner. The coroner’s conclusion was that she had been shown so much content, so frequently, by the algorithm, that it would have been inevitable for her to conclude that if you hurt inside, you hurt yourself outside. And that if you really hurt inside, you take your own life.

And my wife and I are having a little girl in September; she’s due September the sixth, and she’s going to be called Lillian, we hope. We pray that she’s healthy and happy. But I’m scared about what will happen if we don’t do something about these platforms, because she may never have the mental health issues that would cause her to take her own life, but I know that one day she’ll look in a mirror and think, “I’m a piece of shit and I’m fat and I’m ugly and I’m brown and I’ve got a mustache,” because, you know, she’s going to be one of us, just because she’s had these images forced down her throat, and I don’t want that to happen.

Patricia Campos Mello: Thank you. We have only two minutes, so I’m going to ask the three of you a very challenging thing. We know there’s no silver bullet, but in two sentences, what does a solution look like?

Mevan Babakar: So why don’t we start with Sam?

Imran Ahmed: More research.

Nighat Dad: I can go with it. I think working with the entire ecosystem is an answer that addresses everyone in the world, not just one part of the world. That means working with communities and enabling those communities, holding your governments accountable when they make bad laws, and keeping platforms accountable while also demanding more transparency from them. Imran mentioned different committees.

I sit on the independent Oversight Board, and we have been asking the same questions of Meta. They have responded to some of them very positively. So I think we have to push all these actors from all sides. I sit on these big boards, but then I work with young women and young girls in Pakistan, go to the public schools, and speak to them, because I think it’s so important to let them know what kind of information disorder they are living in.

Patricia Campos Mello: Mevan?

Mevan Babakar: You know, my time working in fact checking has taught me that you need a deep respect for everyone’s point of view, and that the only way people actually change their minds is when you start from accepting that the person in front of you is genuinely giving you their point of view, right? You have to treat it with that kind of reverence and that kind of respect. And so I think the thing that would help most is more of these conversations, but starting from the point that everyone has something really valuable to add. It’s not something where one person is going to arrive with one beautiful, packaged solution; it requires everyone, from different walks of life, at every level of society. And yeah, it actually does require research as well. But I think it has to start with that deep reverence and respect for one another.

Patricia Campos Mello: Sam, one minute.

Sam Gregory: One minute.

Patricia Campos Mello: She’s looking at me.

Sam Gregory: That’s very generous, goodness.

Patricia Campos Mello: Sorry.

Sam Gregory: Support a diverse range of human rights defenders and frontline journalists to be able to do their work in this new age. Invest in them, support them, and make sure their voices, and those of the range of communities they serve, are heard in building an infrastructure that will ensure we can tell real from fake when we need to in this new age.

Patricia Campos Mello: Thank you so much. Thank you everyone. This was great.
