
How To Spot & Stop Disinformation

Written by Wizer Team | Oct 31, 2024 4:05:34 PM
The following is a transcript of the “How To Spot & Stop Disinformation” panel; it has been minimally edited for clarity.

What Is Truth? Is It Absolute Or Subjective?

Gabriel Friedlander: Let’s start with the first question. It’s a bit philosophical, but I think it will set the stage. What is truth? Is it absolute, personal, or something else? I often hear people say, “this is my truth.” What’s your take on this? 

Charity Wright: I’ll start. This weekend, I spent some time with someone who said, “facts can change, but truth does not.” I struggle with that because I argue that truth is subjective and largely influenced by your worldview. Facts can evolve as we gain more context. If we have evidence of a specific event, we can assess what we know to be true at that moment. I’m interested in hearing what the other panelists think.

Gabriel Friedlander: I would add that you may have facts, but not all the facts. The question is, how many facts do you have? Missing pieces of information can change the narrative if you only have part of the facts. Rafi, you mentioned some interesting things about this—do facts even matter?

Look For The Intent Behind The Content

Rafi Mendelsohn: We might be looking at this from a slightly different perspective. Cyabra isn’t a fact-checking tool. If anyone knows someone in the fact-checking industry, buy them a coffee because they’re likely overwhelmed by the amount of misinformation being spread. When considering truth, we must also look at the motives of those spreading information. Malicious actors don’t care about truth; they seek out topics that provoke the most emotion and outrage. This can change rapidly. The mechanisms they use—fake accounts, deepfakes, etc.—make it challenging to navigate truth and fact-checking.

Gabriel Friedlander: It sounds like we’re too focused on facts and truth without considering who presents those facts and their motives.

Rafi Mendelsohn: Exactly. What matters is how effectively the misinformation spreads. For instance, there’s a technique called Deceptive Imagery Persuasion (DIP) identified by disinformation researcher Ryan McBeth. It involves using factual images in misleading contexts. For example, images from one war zone might be presented as if they are from another. The key issue is the narrative being amplified and the motives behind it.

Gabriel Friedlander: That connects to what Charity said—an image may be factual, but if it’s not from the correct context, that missing information is critical. Andrew, what do you think?

Andrew Fox: We can talk about emotion and the brain later, but one relevant point is truthful intention. It’s important to consider the victims of disinformation, who may genuinely believe what they’re sharing is true. We need to differentiate between the victims and the perpetrators because those who are duped by disinformation don’t intend to spread falsehoods.

Gabriel Friedlander: So, if someone believes a fact is true and spreads it, they are a victim, not a malicious actor. This complicates the issue further.

Andrew Fox: Yes, if I convince you that this object is an apple and you tell everyone it is, that’s not your fault. We need to be careful about whom we target in countering misinformation.

Charity Wright: That’s a crucial point. A few years ago, my colleagues and I developed a framework for analyzing disinformation based on the Diamond Model used for cyber intrusions. Instead of using “adversary,” we referred to these individuals as “influencers.” Many spreading misinformation might not know it’s false, similar to someone sharing unverified posts on social media. It’s important to use terminology accurately because not everyone shares misinformation with malicious intent.
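
To make that adaptation concrete, here is a minimal sketch of how an analyst might record such an “influence event” in code. The field names and example values are illustrative assumptions, not the published framework:

```python
from dataclasses import dataclass

# Minimal sketch of a Diamond-Model-style record for influence operations.
# The classic intrusion model's four vertices are adversary, capability,
# infrastructure, and victim; here "adversary" becomes "influencer" because
# the spreader's intent may be unknown. Field names are illustrative.

@dataclass
class InfluenceEvent:
    influencer: str           # account or persona spreading the content
    capabilities: list[str]   # e.g. deepfakes, bot amplification
    infrastructure: list[str] # domains, IPs, social media accounts used
    audience: str             # targeted community or demographic
    narrative: str            # the claim or story being amplified

event = InfluenceEvent(
    influencer="@example_persona",
    capabilities=["recycled imagery", "coordinated reposting"],
    infrastructure=["example-news-site.com", "linked bot accounts"],
    audience="regional news consumers",
    narrative="out-of-context war-zone images",
)
print(event.influencer, "->", event.narrative)
```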

Gabriel Friedlander: I hope we can explore some solutions later because this issue is becoming increasingly complex. We’ve discussed motives, and we now recognize that even those with motives can also be victims. Eyal, when does disinformation cross the line and start causing harm?

When Is Disinformation Harmful?

Rafi Mendelsohn: Disinformation becomes harmful when it undermines trust, erodes decision-making, or compromises security. For instance, deepfake-based impersonation in phishing attacks can mislead individuals into taking dangerous actions because they believe in the authenticity of the message. The shift from being misleading to manipulative is where disinformation becomes very harmful.

Gabriel Friedlander: If I’m a victim of misinformation—let’s say I take action based on it—I might not care about the motive at that point. I wonder how we can protect ourselves from harm, regardless of how misinformation originated. Let’s discuss the life cycle of a misinformation campaign. How does it start, and how fast does it spread?

The Life Cycle of Disinformation

Rafi Mendelsohn: It really depends on the platform. The rates at which disinformation propagates on platforms like TikTok differ from traditional media.

Gabriel Friedlander: Are these campaigns started randomly, or are they organized efforts by malicious actors?

Andrew Fox: We need to clarify our terminology. As Mark Bowinski mentioned in the chat, we should distinguish between misinformation and disinformation. Disinformation is spread with a malign intent, while misinformation spreads unintentionally when someone believes they are sharing the truth.

Rafi Mendelsohn: To build on Andrew’s point, the question of who is behind disinformation is critical. We need to understand the intent behind the campaign. Disinformation can come from state-sponsored efforts, organized crime, or opportunistic individuals. It’s challenging because, in social media, the anonymity of fake accounts complicates our ability to discern the sources.

Gabriel Friedlander: So, you’re saying we shouldn’t focus too much on identifying the source because it’s often too difficult?

Rafi Mendelsohn: No, we should always look for the source. Understanding who is behind these campaigns is essential.

Does Disinformation = Uneducated?

Rafi Mendelsohn: I recently came across something that really struck me regarding the Salem witch trials. While we know the trials' history, I was surprised to learn that Salem had one of the highest literacy rates in the world at that time. This community was relatively affluent and well-educated, yet they fell prey to a significant disinformation campaign about witches. It raises questions about how misinformation can thrive, even in knowledgeable societies.

Also, about one-third of Americans aged 18 to 29 rely on TikTok for news. Compared to the global context, these young Americans are quite privileged, yet this trend is concerning.

Gabriel Friedlander: It seems that with less access to information, there’s a lower risk of manipulation.

Rafi Mendelsohn: True, but malicious actors target where people are most engaged. The focus should be on understanding where the attention is and the techniques being used to exploit these communities, so we can counteract them effectively.

Gabriel Friedlander: More screen time on social media likely increases the need for self-education to mitigate risks.

Rafi Mendelsohn: Exactly. To borrow Andrew’s words, greater skepticism is essential with increased screen time.

What About Disinformation And Corporate Brands?

Gabriel Friedlander: Let’s steer this back to the corporate realm. If a company faces a misinformation attack, how should we respond? Once we detect it, and possibly identify the source, how do we address the damage to our brand? Should we outright deny the misinformation or take another approach?

Charity Wright: I can tackle this. I work with several large enterprises in North America, and this is a frequent topic, especially concerning hacktivism fueled by geopolitical issues. Companies often become targets if their leadership takes a public stance on controversial issues.

We must focus on disrupting disinformation campaigns. Research shows that simply stating the truth might not change opinions; people are wired to trust what they see and hear first and most. This makes social media a powerful tool for spreading misinformation.

We can flag disinformation and provide context—clarifying what is true versus false. Additionally, PR teams can launch positive campaigns to counteract negativity, while legal resources can help remove malicious content affecting our brand.

Eyal Benishti: You’re right; once something is online, it’s nearly impossible to erase it completely. We must focus on educating consumers to help them discern information. With the ease of creating convincing content using AI and deepfake technology, it’s vital to prioritize user awareness and education—starting early, perhaps in elementary school.

Andrew Fox: What’s intriguing about this discussion is that the challenge isn’t new. Humans have always faced deception; what’s changed is the scale of misinformation possible today. The internet enables unprecedented information flow, making it hard for individuals to manage.

People make about 17,000 decisions daily, many of which are automated. The key is teaching individuals how to identify which decisions require careful thought and which can be ignored. Training should focus on helping people recognize when to engage in critical thinking and how to approach it effectively.

What Is Key In Identifying Disinformation?

Gabriel Friedlander: Andrew, if I want to learn about this and get some tips as a beginner, what practical steps can I take tomorrow to incorporate more skepticism into my life? I don’t have the time to completely change myself right now. Are there specific changes in behavior that can help?

Andrew Fox: It’s all about knowing what to look for, and I keep emphasizing this because I believe it's crucial: beware of emotional triggers. When someone tries to deceive you, they often aim to provoke an emotional reaction. So, if you come across something online that shocks or angers you, pause before reacting. This can be especially tough when scrolling through social media, like when you see something outrageous from a politician you already dislike. That's your confirmation bias at work, reinforcing your existing beliefs.

You should be aware of biases like anchoring bias, where you cling to the first idea you encounter, and confirmation bias, which leads you to seek out information that supports your preexisting views. It's essential to double-check and triple-check information. If you find something important, verify it before making a decision. For trivial matters, like whether the Indiana Pacers won a game, it might not matter as much, so you can simply scroll past it. But for issues you care deeply about, take the time to confirm the information before reacting. Developing good habits in your decision-making process and being selective about what affects your emotions is key.

Self-Awareness: A Tool Against Disinformation

Gabriel Friedlander: Awareness plays a huge role in this, particularly self-awareness. Many people believe they are immune to misinformation, thinking they have the truth while others are misled. It's crucial to recognize that misinformation is everywhere. If you think you possess all the truth, you're likely mistaken. So, first and foremost, develop self-awareness.

One tip I recommend is to adjust how you search for information. Instead of simply Googling a topic, add terms like "fraud," "scam," or "misinformation" to your search. This habit can help you uncover if you're being manipulated. Remember, even Google has biases and will give you results based on your previous searches. By adding those keywords, you can explore perspectives you might otherwise miss.
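
As a simple illustration of that habit, the sketch below pairs a topic with skeptical keywords to generate searches that surface criticism the default ranking might bury. The keyword list is a starting point, not a canonical set:

```python
# Sketch of the search habit described above: pair a topic with skeptical
# keywords so results surface criticism you might otherwise miss.

SKEPTIC_TERMS = ["fraud", "scam", "misinformation", "fact check"]

def skeptical_queries(topic: str) -> list[str]:
    """Build one search query per skeptical keyword for the given topic."""
    return [f'"{topic}" {term}' for term in SKEPTIC_TERMS]

for query in skeptical_queries("miracle supplement"):
    print(query)  # e.g. "miracle supplement" fraud
```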

Speed Is Key To Combat Brand Misinformation

Rafi Mendelsohn: I look forward to the day everyone approaches information as thoughtfully as Gabriel. For brands facing misinformation, speed is essential. A narrative that emerges might not be truthful, but if a company witnesses it going viral—especially if they're publicly traded and see their stock price drop or journalists inquiring about the claims—they face a significant challenge.

Understanding the narrative and who is spreading it is vital. We've seen brands attempt to address online crises by having customer support engage with each negative commenter on social media. Unfortunately, they often find out that a significant portion of those accounts are fake, wasting resources on bots instead of real people.

To respond effectively, brands need to identify which accounts are causing the most damage—these might be "super spreaders"—and either work to address those or take them down. Uncovering the context of the situation quickly will help you make informed decisions, whether to communicate your position, respond to journalists, or tackle harmful narratives.
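
A rough sketch of that triage idea follows. The account fields and the scoring formula are assumptions for illustration, not any vendor’s actual method:

```python
from dataclasses import dataclass

# "Super spreader" triage sketch: rank accounts by the reach they contribute
# to a narrative so response effort targets the few accounts doing the most
# damage. Fields and scoring are illustrative assumptions.

@dataclass
class Account:
    handle: str
    posts_on_narrative: int  # how often the account pushed the narrative
    followers: int           # potential reach per post
    suspected_fake: bool     # output of a separate bot-detection step

def spread_score(a: Account) -> int:
    """Crude reach estimate: posts times audience size."""
    return a.posts_on_narrative * a.followers

accounts = [
    Account("@amplifier_01", 120, 50_000, True),
    Account("@casual_user", 2, 300, False),
]

# Loudest accounts first; flagged fakes become takedown candidates,
# apparently real accounts become candidates for direct engagement.
for acct in sorted(accounts, key=spread_score, reverse=True):
    action = "report/takedown" if acct.suspected_fake else "engage"
    print(f"{acct.handle}: score={spread_score(acct)}, action={action}")
```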

Some companies hesitate to engage in emotional topics, thinking that by staying quiet, they can avoid controversy. However, ignoring discussions about your brand can backfire, especially if those conversations are negative. Even if you don't engage, you might find your brand becoming part of a viral discussion, often led by bot networks. The key is to uncover the narratives and accounts rapidly.

While this won’t provide an immediate solution, it will help you navigate the situation and make informed decisions regarding your response.

Who's Responsible Within The Company To Address Misinformation Campaigns?

Gabriel Friedlander: Who is responsible within a company for addressing this? Is it the security team or PR? It seems like such a vast issue.

Rafi Mendelsohn: That's a great question! It really depends on the organization. Sometimes, there isn’t a clear owner. Security teams often look to marketing because they manage social media accounts, while marketing teams focus on metrics like brand mentions and sentiment. When the CEO comes to them concerned about share prices due to negative online discussions, it becomes less about who has the keys or the budget and more about the immediate need to address the issue.

I believe we will see significant changes in the next 12 to 18 months regarding who owns monitoring and tracking misinformation. We might even see the emergence of a Chief Disinformation Officer role to tackle these challenges.

Charity Wright: In my experience, it’s the threat intelligence team or the security personnel. We already have years, even decades, of experience analyzing digital warfare, and now propaganda is spreading through those same digital channels. We already know how to analyze the capabilities of certain threat actors, the infrastructure they’re using, and how to identify IP addresses, domains, and social media accounts.

Oftentimes, security teams ask me, “Why are they putting this on us? We’ve never analyzed disinformation before.” It really does require an extra skill set of understanding the psychology behind it, like what Andrew does.

I think it’s important to teach and educate those teams to incorporate the psychological aspects of our human vulnerabilities. This way, we can understand both the technical vulnerabilities behind a campaign and the psychological ones it exploits. A lot of the threat intel teams in these big corporations are responsible for analyzing any disinformation that impacts their brand because they’re also in charge of brand monitoring.

Does Disinformation Leak into AI Tools?

Gabriel Friedlander: Okay, last question. There’s a whole topic I didn’t touch on, but I’ll raise it now with one question that could probably be a webinar in itself. Does disinformation leak into AI tools like ChatGPT? Is it now entering all those data models? And if it is, is it permanent? How do we change it? And how do we validate their outputs?

Eyal Benishti: Outputs are being validated to a certain degree. I know that companies like OpenAI and others are using human-feedback reinforcement methods to review and fine-tune their answers. They’re also scrutinizing the data they feed into their models, but nothing is perfect.

Again, it seems that in the future, we will have models to look after other models to ensure that the data they use is reliable. This is because everything is happening on such a massive scale. Even when they can kind of cross-reference and use more than one source to remove bias, I think even that is not 100 percent secure in terms of preventing bias from being added to our models. We haven’t even started to address the alignment problems they’re trying to deal with. So yeah, there are a lot of problems to address.
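
As a toy illustration of “models looking after models,” the sketch below cross-references several stand-in sources on the same claim and flags low agreement for human review. The sources and the two-thirds threshold are assumptions for illustration:

```python
from collections import Counter

# Toy sketch of cross-referencing: ask several independent stand-in sources
# about the same claim and escalate when they disagree too much.

def cross_check(claim: str, sources) -> str:
    verdicts = [source(claim) for source in sources]
    verdict, votes = Counter(verdicts).most_common(1)[0]
    # Require a clear majority before trusting any single answer.
    if votes / len(verdicts) < 2 / 3:
        return "disputed: escalate to human review"
    return verdict

sources = [
    lambda claim: "supported",    # stand-in for model/source A
    lambda claim: "supported",    # stand-in for model/source B
    lambda claim: "unsupported",  # stand-in for model/source C
]
print(cross_check("example claim", sources))  # -> supported (2 of 3 agree)
```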

Gabriel Friedlander: And we have a lot of AI feeding other AIs with information. People are using ChatGPT to write stuff—static data—and then other AI models are being fed what ChatGPT wrote. So it’s just a monster feeding itself. Again, I won’t open it up; I have a lot of questions around that.

About The Panelists

Gabriel Friedlander: What I want to do is, each of you is an expert in this area, so I would love for the audience to hear about what you guys are doing and what services you can offer to help mitigate this. This is part of addressing this issue, combining technology and the human aspect. So please feel free to talk about your company, your products, and your services so the audience will know the different types of solutions available to tackle this huge topic.

Eyal Benishti: If you’re worried about social engineering and AI-generated phishing emails, business email compromise, and all types of fraud that land in inboxes and lure people into actions they shouldn’t take or into giving away information, our company, IRONSCALES, can help with that. We are fighting AI with AI. Our adaptive AI platform is very good at understanding the real intent behind communications, including who is communicating with whom and what type of information they’re usually exchanging. If your vendor was compromised or if someone is impersonating one of your vendors or customers, we can help you protect against that.

Gabriel Friedlander: Just to add a little bit, because I know you guys, right? We’ve been working together for some years, and I’ve seen a lot of today’s scams. Many phishing emails don’t even involve a link anymore, right? People are communicating, and the attack doesn’t happen immediately. It’s not just an email with a link; it’s back and forth, back and forth. You really want to listen to those conversations and pick up on the sentiment and the style of communication to stop the attack in its tracks. Phishing emails have become much more advanced than a single email with a link, so this is what you guys are doing, and I think you’re doing a great job. I wish I could do this for each one of you, but yeah, go ahead.

Andrew Fox: Through my think tank, The Henry Jackson Society, we offer counter-disinformation training. We look at the mechanics of it and the psychology behind it, and we provide some solutions on how people might want to go about countering it. We’ve done a couple of tours of university campuses, helping students. So if anyone’s interested in that training, by all means reach out.

Gabriel Friedlander: Are there virtual and physical options? What type of training is it? Is it for the whole company or just management?

Andrew Fox: We can tailor it to anyone who wants it, whether it’s senior management, someone on the shop floor, university students, or a team that wants to learn so they can cascade the training themselves. We can even do "train the trainer" sessions.

Charity Wright: At Recorded Future, we have over 15 years of experience putting together research and analysis around how cyber threat actors work. Now, we’re applying that same skill, experience, and tools to malign influence operations. We have a team of over 100 researchers and analysts contributing to this work, checking each other’s biases, and being thorough in not just investigating but revealing to our clients—both companies and governments—how these threat actors operate, their objectives, and how this will eventually impact their organizations. We provide not just educational pieces but a holistic understanding of campaigns and how to defend against them.

Gabriel Friedlander: That sounds really cool! I would love to see how this works one day; it sounds fascinating. Rafi?

Rafi Mendelsohn: I think webinars like these are fantastic because a big part of what we do at Cyabra is educating and thinking about the frameworks and the way malicious actors operate. As I mentioned before, governments are fully aware of this challenge; they have the tools and are using them. We have worked with 19 different democracies over the last 12 months, purely helping them monitor and safeguard elections.

However, for brands, there’s a big “what they don’t know” challenge, which makes it really hard for companies to address and understand the issue. So, I’d like to extend an open invitation. If people want to understand a little bit more about their brand or the risks associated with their brand online, we’d be happy to look into it. We can run a search and use our platform to shine a light on a dark corner of the internet, which is part of our mission—to fight disinformation.

Andrew Fox: I can vouch for Cyabra because I’ve done some work with Rafi’s company, and the outputs are absolutely fantastic. I can heartily recommend them.

Gabriel Friedlander: So if you are unsure whether someone is spreading misinformation about you, it’s better to find out sooner rather than later. That’s essentially what you’re saying, Rafi?

Rafi Mendelsohn: I would guess that most people are here because they think something’s going on, right? Or they’ve seen enough examples or are encountering accounts that seem a bit fishy or fake, but they’re not really able to put the pieces together. I think fighting disinformation is an exciting time for this industry, and there are fantastic tools and solutions out there. However, it’s still an emerging industry.

If you think of it like a Persian rug, from the top you see a beautiful design, and you can see exactly what’s going on. But if you look from underneath, it’s not as clear. The more threads you pull, the better idea you have of the picture on top. The picture on top is how disinformation campaigns are run, and we’ll never see the full picture unless we’re working inside those campaigns for a state actor or a criminal organization. The more threads we can pull, the better; all of these tools and features are fantastic for that. If you want to pull more threads and follow up on that hunch about what you’re seeing online, we’d be happy to use our platform to see what we can uncover.

Gabriel Friedlander: Thank you, everyone. This conversation has highlighted the varied nature of disinformation and the need for collaboration among various sectors to tackle it effectively. 
