Certain and Wrong: Why False Facts Feel True
By The Staff
November 25, 2025
How likely are we to believe misinformation or disinformation when it comes from someone we trust? Cecilie S. Traberg, psychologist and assistant professor of digitalization at Copenhagen Business School in Denmark, joins the show to give us the full picture of misinformation, disinformation, and propaganda, and to explain why people can be vulnerable to false claims—especially when they come from people they trust.
TRANSCRIPT
[Ad]
[Music: "Wandering" Remix by Nat Keefe]
[Celia]
Welcome to Wired For This—a deep dive into how we think, believe, change, and connect.
In this limited series, we explore the psychology of human behavior and neuroscience—what drives us forward, what holds us back, and how we navigate a world bursting with noise, contradiction, and complexity.
Today, we’re joined by Cecilie S. Traberg. She’s a psychologist and assistant professor of digitalization at Copenhagen Business School, where she conducts research on social influence, misinformation, and the psychology of beliefs. She holds a PhD in psychology from the University of Cambridge and was recently a Visiting Scholar at Harvard Business School and Princeton University. She also holds two master’s degrees in psychology and social cognition from the University of Cambridge and University College London.
From American Scientist, I’m Celia Ford, and you’re listening to Wired For This.
[Celia]
What first drew you to studying why people believe what they believe?
[Cecilie S. Traberg]
I think my curiosity about belief started really early—just from how I grew up. I went to nine different schools throughout my education. I went to school and lived in four different countries. I was always in very international environments with people from very different cultural backgrounds, religious backgrounds, who had very different perceptions of the world and beliefs about the world and about people.
I think being in all these different environments led me to be curious about what it is that actually leads everyone to have different beliefs and how social influence processes could impact how we see the world and what we think about it, how we judge information, and how our beliefs come to be.
When I got to university, I studied marketing and psychology. I really enjoyed the parts of marketing that were all about how we purchase certain things to express our identities, and how our sense of self is developed, and also the persuasion elements. What makes someone persuaded? How can you change someone’s mind?
I wasn’t really comfortable with trying to then use what I had learned there to actually influence people to buy products that they didn’t need. But then I quickly realized that I could actually turn this into a career in research and continue studying these processes. I thought that was the perfect career path for me.
[Celia]
Selfishly, I’m glad you pivoted away from marketing because your current job is very cool. Before we dive too deep into your work, I want to make sure that we understand the distinction between “disinformation” and “misinformation.” They’re not the same, right? What’s the difference?
[Cecilie]
Normally, at least in the research world, when we talk about misinformation, that’s meant to be any information that turns out to be false or misleading or inaccurate, but it doesn’t have to be on purpose.
For example, if a journalist misinterprets a scientific article and writes it up in the wrong way, or something that is published that later turns out to be false, or someone simply doesn’t have enough expertise to write what they’re writing, but they’re not actively trying to mislead anyone, we would call it misinformation.
But disinformation is when someone is actively trying to mislead you or manipulate you or spread falsehoods intentionally.
Sometimes we add a third one, which is propaganda—which is like disinformation with a specific political intent, because disinformation could have other intentions than just political.
[Celia]
It sounds like all three of these are bad. But do they differ in terms of how dangerous they can potentially be?
[Cecilie]
Each one can be dangerous in different ways. It all comes down to the information, the sharer, and what ends up happening in social networks when it spreads.
[Celia]
In case you missed it, Dr. Philip Lorenz-Spreen told us all about the spread of information across social networks in episode four of this series. Add it to your queue if you need to catch up! Back to Cecilie.
[Cecilie]
With misinformation, some recent work shows that information can be simply misleading in its framing—for example, framing vaccines as dangerous by linking them with autism.
Often, misinformation is only mildly misleading, or the narrative is spun in an unintentionally slanted way. That can end up spreading further, because it often comes from very popular outlets that people place high trust in. It ends up being more influential, even though, at the information level, it might not be as dangerous.
Whereas with disinformation, sometimes when it’s intentional or when it’s more orchestrated and large-scale, potentially each individual article would have less of an effect, but it might be strategically spread a lot more.
Often, misinformation is spread because someone wants to share a piece of information with their network—they think it's important for people to know. When they share it online, they're not intending to mislead anyone; they themselves don't realize it's actually misinformation.
[Celia]
One example: Back in December, a bunch of news outlets published write-ups of a peer-reviewed chemistry paper, saying that black plastic kitchenware could be slowly poisoning us. But a tiny typo in the original paper meant that the amount of toxins shed by these utensils was overstated by an order of magnitude—and published everywhere.
[Cecilie]
When it later turns out to be false, it’s impossible to take that back. In that way, misinformation can have the same large-scale effects as disinformation and propaganda.
[Celia]
A couple of years ago, you published a paper about psychological inoculation against misinformation, where you describe misinformation as spreading like a contagious virus that we could potentially vaccinate against, in a sense. I have a couple of questions about this. First, what makes false information so infectious?
[Cecilie]
For one, false information tends to be very shocking. It tends to be much more novel-sounding. Usually, it’s made to grab people’s attention, whether it’s intentionally fake or just intentionally shocking because a journalist wants to highlight something.
It also plays on negative emotions like fear and anger and disgust, which tend to rile people up and draw people’s attention in. When people see these negative headlines that play on things like moral outrage that get us riled up, people want to share it with each other. They’re much more likely to want to share these headlines with each other than something that’s much more boring and factual and straightforward.
The second aspect is that misinformation very often draws on group differences. It highlights group differences and polarizes us, riling us up against each other. When this group element comes in, people often want to get involved and share on behalf of their group, or to signal some form of group identity. That makes it much more likely to be shared, because people are sharing it for its signal value as well.
Another aspect is that it’s just, in general, highly emotional. People are more susceptible to believing misinformation when they’re put in an emotional state. It’s not the same for factual information; you’re not more likely to believe boring factual information when you’re in an emotional state. It’s only the case for misinformation.
And it’s infectious because we want to share it with others—to signal our identity, to fit in with our group, and also to signal what we know. It’s infectious because we’re in these echo chambers, where you only follow and are friends with people that, on some level, you have some trust in—that you like in some way.
Just like a virus, it can only spread via a human host. Someone needs to spread that information, and misinformation just lends itself much more to being spread through people than factual information. Facts are often much more boring than misinformation.
[Celia]
That makes sense. People love gossip, and we rarely gossip about dry, factual things. Can you help me understand what psychological inoculation might look like in practice?
[Cecilie]
As you said, psychological inoculation is a sort of vaccine for your brain. With biological vaccines, the patient is exposed to a weakened dose of a virus in order to build antibodies that can respond to future, stronger versions of the real virus. The psychological vaccine likewise exposes people to weakened doses of misinformation, or of a manipulative argument or tactic.
What we mean by a weakened dose is, essentially, that you don't give people a really persuasive, strong piece of misinformation—because if you give them that first, they're going to be persuaded by it. You have to give them something that is somewhat weakened. Either the argument isn't that good, or there's some form of humor involved. You don't play on real, existing groups in society. You don't tell people real false information.
This also has to be combined with a form of threat. Your mental immune system needs to feel like it's under attack, or that it will be attacked in the future. You have to combine this weakened dose with the actual tactics that you're teaching people to recognize.
[Celia]
In one study, Cecilie’s team had people read an inoculation message, a few sentences explaining that headlines often try to use emotional language to trick you into paying attention. Compared to controls who didn’t see the inoculation message, people who were warned about misinformation were less likely to perceive misinformation as reliable.
[Cecilie]
You need both a sense of threat in the form of affect, making someone aware that they’re going to be attacked, but then also, you need to give them the tools to defend themselves against that persuasive attack.
[Celia]
I imagine that, like with a biological virus, you’d need herd immunity across a large group of people in order for the threat to go away. You’ve also written about the potential for herd immunity against misinformation—how might that work in practice?
[Cecilie]
That’s what my colleagues and I are trying to grapple with at the moment. One strategy is to target as many people as we can, making sure that all these tools are as widely available as possible.
One way is to gamify inoculation and have people play online games that anyone with an internet connection can access.
[Celia]
She’s doing this with an immersive murder mystery game called Solomon’s Secret, where players have to constantly question whether other characters are manipulating them. The project is looking for collaborators who love storytelling and educational gaming! So if that's you, you can get in touch. I’ll post a link in the show notes.
[Cecilie]
But at the same time, I think it’s unrealistic to assume that we can just inoculate everyone.
A second problem with a lot of these tools is that they tend to be picked up by those people who want to learn about misinformation, who are curious. Those people tend to be the ones who are the least likely to fall for misinformation, and so we’re preaching to the choir.
There are two ways to go about this. On the one hand, we need interventions that meet people where they are—interventions that don’t necessarily focus on misinformation itself but on our entire information ecosystem and how we evaluate information in general. It’s not always useful to tell someone that this is a tool for misinformation, because everyone disagrees about what misinformation even is.
Interventions that focus more on identifying how our beliefs are even formed, how we’re influenced by others, and how we should also challenge each other and challenge each other’s information sources—that’s something I think research should move toward.
A second aspect is that many communities listen more to certain people within their groups. We call these social referents. One strategy for inoculation is to target those social referents that people pay attention to, so that they can spread the message throughout their networks. We’re not trying to target each individual; we’re targeting people of influence who can then influence others to get better at identifying misinformation, disinformation, and propaganda.
A final, but really important aspect is also our education system. Currently, most Western education systems are built on the notion that we consume information in the same way that we did 10 or 15 years ago—we just don’t anymore. For example, recent work looking at who is most susceptible to misinformation finds that Gen Z are actually among the most susceptible.
The education system is still geared toward teaching us how to evaluate media sources, how to cross-check our references—a very critical-thinking approach which older generations also grew up with.
But that better matches the way those older generations consume information, because they still get information from traditional media outlets, journalistic articles, news coverage, and cable news. I’m not saying that’s free from misinformation—far from it—but their education matches the way they consume information.
Meanwhile, younger generations are still being taught what my generation and older generations were taught in terms of critical thinking and how to evaluate information. What we really need, within education systems, is to actually teach students how they’re influenced by each other. All these students get their information on TikTok and Instagram—from each other, from influencers, from their social networks sharing information. We need to teach younger generations how they’re consuming information, and reach people much earlier than we’re currently trying to, when often the damage has already been done.
[Celia]
In the US, at least, we’re facing a huge crisis in science, in part because a large portion of our population doesn’t trust the elite institutions that do and talk about science. But these same skeptics might trust their close friends or charismatic people on YouTube. What influences what sources different people end up finding trustworthy?
[Cecilie]
There are several factors that contribute to how credible we see a source. Some psychologists say it can be broken down into their trustworthiness and their expertise, where their expertise is whether we think they have the knowledge to share what they’re sharing, but trustworthiness is often much more complicated.
Something very simple that contributes to whether or not we think someone is credible is their likability. It’s a simple term, but whether we actually like someone—it can be if they make us feel at ease when we’re with them, but it can also be if we feel that they’re similar to us. This can be based on several different attributes; it can be if we have a similar background, we grew up in the same place, we have a similar appearance, or if we share the same gender.
Something very important, which often shows up in research and in practice, is whether we share the same political orientation, because it signals a deeper, shared underlying value system. If we feel that we share a value system with someone, we’re much more likely to accept any information they share with us, even if it’s totally outside of what we perceive as their expertise.
I might follow an influencer who’s a mom, who shares very relatable mom content, and then all of a sudden she’s sharing something about politics. I might feel inclined to trust her view on that because I share the same background, or we have certain attributes that we share. But also, if I know that someone shares my political views on several key topics, or we vote for the same party or something like that, if they share something about nutrition, I’ll be more likely to believe them, simply because we share opinions on something totally unrelated. This is something that can often lead us astray.
[Celia]
A quick aside: While some cognitive biases may just be that—biases—plenty of people have good reason to distrust people and institutions that they don’t identify with, given harm caused in the past.
For example, the US government has exposed humans to radiation and dosed them with LSD without consent, and kept those experiments under wraps for decades. Until the mid-nineties, women were rarely included in clinical trials, leaving medical institutions poorly equipped to treat women in clinics. Police and the criminal justice system are biased against Black and brown people. The list goes on.
[Cecilie]
I want to share one of my favorite studies, in which the researchers wanted to test whether there’s what they call an "epistemic spillover" effect: if you share political views with someone, will you trust them on anything?
They had people do what they call a “blap task,” which is a fake task that they devised in which you’re supposed to identify whether some little figure is a blap or not a blap. There’s no system to whether it’s a blap or not. It’s random figures. But you’re supposed to think there’s some form of system.
You can choose who to get advice from. It turns out that one of these advisors shares your political orientation and the other one doesn’t. But the one who shares your political orientation gives you more wrong advice. Every time you get advice from them, it turns out that they’re wrong. It’s not a blap when they say it’s a blap, and it’s a blap when they say it’s not a blap.
But in the study, participants consistently preferred to receive advice from this wrong advisor who shared their political orientation, on a task that was totally unrelated to politics—even though they were wrong. It highlights that, first, people want to listen to others who share their political views, even on topics that are totally unrelated. Also, we still want to listen to them even though they turn out to be wrong, simply because we share the same views.
[Celia]
There are circumstances where it might be safest to automatically distrust someone whose political beliefs include wanting to strip fundamental human rights from people who share your identity. But in other circumstances, there’s probably a point where you should accept good information from sources you otherwise disagree with and vice versa. How can people overcome this bias, and when does it make sense to do so?
[Cecilie]
One thing is to just be self-aware, that our perceptions of trust in someone often have nothing to do with what they’re actually talking about. It can be because of all these values that I just described previously, in terms of similarity, likability—it could even be someone’s attractiveness. If they’re attractive, we’re more likely to listen to them and so on.
Number one, it’s just having that self-awareness around how our perceptions of credibility are not actually always rooted in true credibility. We should be better at paying attention to the history of events. Have they been delivering accurate and true information up to this point or not? Seeing the whole history of a person.
Also, what they do and how they respond in the face of updated evidence. Do they stand their ground and not admit to being wrong, even when the evidence is screaming in their face? Or do they update, apologize, and explain and update their beliefs? If they’re not someone who updates, then we know their communication is misleading or deceitful in some way, because they’ve not updated what they’re saying in line with the evidence.
[Celia]
Earlier, you mentioned that some people are more resistant to misinformation than others. What do you think contributes to these individual differences?
[Cecilie]
I have the overarching belief that if you put anyone in a similar context, they will also be susceptible to misinformation. We can’t pinpoint individuals and say, "This person is going to be more likely to fall for misinformation because of some trait." But, often, the context is going to lend itself to believing misinformation.
That being said, there are some very clear differences we see in research that highlight that there are—statistically speaking, on a population level—many differences between how likely people are to believe misinformation.
One is age. Younger people are more likely to fall for misinformation, and so are much older people—it’s a kind of U-shaped curve.
I think with the younger generation, it’s a mismatch between where they consume information and how they’ve been taught to filter media. It just doesn’t align with where they get their information.
The speed at which misinformation spreads on social media platforms like TikTok and Instagram is unprecedented. We can’t stop it. They’ve grown up in a time where—even when I was studying marketing, we learned about the "human era": No one trusts institutions anymore. People want humans. Gen Z is getting information from other humans, not from institutions and so on. But that means—there are so many more humans than there are institutions. There’s so much more information spread between individuals that can just go viral.
A recent study by a colleague at Cambridge showed that Gen Z is also very aware that they are vulnerable to misinformation. That’s kind of the opposite of another finding: conservatives tend to fall for misinformation a lot more than liberals, and yet they are extremely confident in their ability to identify it. It’s a clear mismatch. They don’t realize.
This can also be explained by several different factors. One is that they have a higher need for certainty—for the world to be certain—than liberals. They’re less open-minded. Often simple explanations are the explanations they prefer rather than handling uncertainty—“Oh, we just don’t know.” They want an answer. Misinformation is often there to fill that gap.
There’s also the factor of cognitive reflection—our thinking style. Are you someone who tends to think about things deeply and analytically, or someone who goes with your gut and intuition? Misinformation often plays on that; it targets those who think intuitively. On average, people who are more conservative tend to go with their intuition rather than processing things in a deep and analytical way.
And of course also education. People who are more educated are less susceptible to misinformation. If we’ve been given the skills to spot things, we’ll be better at it. But also, I remember in the paper that people who were more educated had a bigger gap between their actual performance and their perceived performance. The more educated we are, sometimes, we also get overconfident and think, “I have a good education, so I know what’s true and false.” That can be quite dangerous when we don’t realize how susceptible we are.
[Celia]
Now, with AI making it very easy to create convincing fake content, how can we evolve our psychological defenses to keep up?
[Cecilie]
It’s a difficult time for misinformation because, as you say, it can be generated at scale. It’s also much easier to create things that look very credible, polished, professional, and trustworthy. That can lower our natural skepticism. AI and LLMs [Large Language Models] can be used to generate super-persuasive content, personalized attacks, and so on.
But I think there’s also potential for researchers and practitioners to take advantage of AI. The same tools that spread misinformation can potentially also help us fight it. AI can also then be used to detect patterns, to potentially personalize these inoculation messages, to figure out why someone is individually being targeted by misinformation, to help people decipher information and headlines. Potentially, AI can be used to generate traffic meters for information—also at scale—and keep up with this pace and help identify manipulative content before it spreads.
Just as we saw with the COVID-19 pandemic—when researchers, doctors, and biologists across the world, in companies and universities, got together at an unprecedented scale to develop a vaccine because of the gravity of the situation and the immense need for a quick turnaround.
We’re seeing the same thing with AI: researchers across fields—political science, communication, machine learning, physics, psychology, and sociology—are working very quickly, even as this unfolds, to understand how we can prevent AI from making the misinformation problem a lot worse.
[Celia]
Before we wrap up, I’d love to ask for some advice. What does your research say about how we can have productive conversations with people we care about who hold beliefs that we suspect are built on misinformation?
[Cecilie]
This is one of the hardest things but, for one, it helps to recognize that not everyone prioritizes factual accuracy. Maybe that’s an insult to some, but I don’t mean it in that way. For some people, what matters most is whether something feels true—whether it aligns with their values, their community, or their lived experience.
Even the people who don’t care that much about what truth is or what is true, they care about autonomy. They don’t want to be manipulated or lied to. Instead of saying to someone, “That’s false,” it can be sometimes more effective to ask whether they’re sure the people promoting that are acting in their best interests. Get them to identify the underlying motivations of whoever they’re getting this information from. See whether there could be an alternative motive for sharing this information.
It also never works to just tell someone that something is factually incorrect if their belief is deeply ingrained in their values. Therefore, it can be beneficial to just hear someone out and help them explain why it is that they have come to have that belief in general. Once you see why someone holds that belief, it’s easier to understand how they got there. Once you understand how they got there, you can tackle the underlying reasons for their belief in that piece of misinformation. You can talk about that rather than just coming in with one simple fact that you expect is going to refute that.
A final thing is that sometimes we can’t. Sometimes we can’t change people’s false perceptions, and sometimes we don’t need to talk about the misinformation at all. If it’s not damaging to their health or their social lives—if it’s just a belief they hold that’s technically untrue—there’s no need to go in and try to change it. It becomes important when it’s actually impacting their health, societal well-being, or their social lives, or negatively affecting others. Then it’s quite important to talk about.
But then it’s also—maybe the final thing is to figure out whether you’re the right messenger of this inoculation or information, or whether there’s someone they’re going to trust more. As we just talked about, if you share very different political perspectives, potentially you are not going to be the one who’s most effective at changing a deeply ingrained false belief in that individual. Someone else would have a better shot at that.
[Celia]
All very helpful. Cecilie, thanks for joining us.
[Cecilie]
Thank you so much for having me.
[Music: "Wandering" Remix by Nat Keefe]
[Celia]
Thanks again to Cecilie S. Traberg for joining in on this episode of Wired For This. You can find links to our sources in the episode description.
You’ve been listening to a podcast by American Scientist, published by Sigma Xi, the Scientific Research Honor Society.
Wired For This is produced and edited by Nwabata Nnani and hosted by me, Celia Ford.
Thanks for listening.