Technology is not just changing; it's transforming the way we take in and process information.
In this episode of Wired for This, Jason Lodge and Philipp Lorenz-Spreen discuss how we consume, process, and share information—and how these practices are changing as our relationships with technology evolve.
TRANSCRIPT
[Music: Wandering by Nat Keefe FADE IN]
[Celia]
Welcome to Wired For This—a deep dive into how we think, believe, change, and connect.
In this limited series, we explore the psychology of human behavior and neuroscience—what drives us forward, what holds us back, and how we navigate a world bursting with noise, contradiction, and complexity.
In this episode, I’m excited to share highlights from two conversations I had: one with Dr. Jason Lodge, and one with Dr. Philipp Lorenz-Spreen. We talked about how we consume, process, and share information, and how all of this is changing as our relationships with technology evolve.
Jason Lodge is the Director of the Learning, Instruction, and Technology Lab and professor of educational psychology in the School of Education at The University of Queensland, in Australia. He explores the cognitive, metacognitive, and emotional aspects of learning, particularly in higher education and digital environments. He’s also an award-winning educator and advisor to the Australian Government on technology in education.
Philipp Lorenz-Spreen leads the junior research group “Computational Social Science” within the Center Synergy of Systems at TU Dresden, in Germany. He and his team study the societal impact of digitalization and how complex online discourse affects democracies worldwide.
From American Scientist, I’m Celia Ford, and you’re listening to Wired For This.
[Celia]
Every morning, I wake up and read newsletters, mostly roundups of headlines from the day before. And almost every day, I’m surprised by what stories I actually remember. I asked Dr. Jason Lodge to help me understand why some information seems to “stick,” while other things seem to go in one ear and out the other.
[Jason]
One thing we know that is a significant bottleneck is our ability to pay attention to only a certain number of things at any point in time. That, I think, gives us a window into how these technologies are impacting us: what are the things that we selectively pay attention to? What are the things that we, perhaps, don’t?
Research for decades has told us that the things we select to pay attention to are much more likely to be processed, obviously, because they enter into consciousness, and then they enter into memory. They’re things that then impact us in a persistent way.
The way that technology has evolved, for various reasons—some of it is due to advertising being built into these tools—it’s about trying to capture that selective attention.
[Celia]
I asked Dr. Philipp Lorenz-Spreen the same question.
[Philipp]
Novelty and surprise are elements of information that we prefer. Our attention guides us to that, because obviously it’s important to know new things. If it’s something we already know, why should that be interesting to us?
But there are other factors as well, especially if you think about the competition between information. There are more sources, more outlets that try to get our attention.
There, negativity and other emotions often guide our attention. There’s this term negativity bias, a phenomenon people have intuitively known about for a very long time. Psychologists have looked at it, but now we also see it in data. Negative headlines online are more successful. They get produced more over time. Negativity seems to draw us in, as do strong emotions like outrage, often the negative ones.
There’s another factor: out-group and in-group feelings. We humans seem quite susceptible to those. Whenever content is talking about us, or them, or us vs. them, it can be more successful than other content.
[Jason]
There are some more insidious ones as well, which are aligned with this idea of fluency. What’s easier for me to process? That can be things like, I have a particular bias in how I see the world, and I come across somebody who says something that aligns with that bias. That’s easier for me to make sense of because I already have this idea in my mind. That’s how we end up with these sorts of bubbles that people talk about, where people are only talking to each other within their bubble and they might all have the same misconceptions underpinning their thinking.
It can also play out in terms of things that are entertaining, things that are aesthetically pleasing. These all feed into this idea that we like information that can be processed fluently. It’s not hard work for our minds to process that stuff. That will tend to draw our attention away, because it feels nice for things to go down easily. That includes information.
We know again from a lot of the research, if you want to learn something deeply, that requires hard mental work. Often that hard mental work also comes with confusion, frustration, anxiety sometimes, depending on what the material is that you’re trying to learn. These are not things that should be avoided. These are critical parts of the learning process.
[Celia]
Most of us don’t have time to put that kind of mental effort towards skimming our news feeds every day—the constant firehose of information is simply too much, and discourse shifts too quickly. Philipp’s research group studies how people share information online, and how public discourse evolves within social networks.
[Philipp]
Online we know there are millions and even billions of people being active, posting stuff, writing content, sharing and engaging with it.
We have the content. We can look at the text, which is also increasingly possible through the development of language models.
We can also look at the metadata, like the engagement. What gets a lot of likes? How does it move through the network? Who is sharing it? We can see if there are certain terms that get a lot of traction. We see these trends going up and trends going down again.
But we can also look at networks, where we can follow who is sharing certain content in the network, who’s resharing it. How does it move through the social network? These are all things that become very quantifiable now. Of course we need computers for that. This is all digital data that we can then process.
[Celia]
Back in 2019, Philipp’s team used these tools to study collective attention—how long bits of cultural information circulated around social media before being forgotten.
[Philipp]
We were inspired by sociologists who have been talking about social acceleration, about this idea that technological progress speeds up our lives and makes us more efficient, but in turn we also pack more tasks into our lives, and society moves faster. We’re basically trapped, in a way, in this vicious cycle.
We thought it would be cool to measure that, to really quantify it. We went for various platforms. We started with Twitter. Back then it was quite easy to get Twitter data for longer periods of time. What we call public discourse here is the popularity of hashtags.
We did actually see that hashtags gained popularity quicker, and also lost popularity quicker, over the years. We thought, okay, that’s a hint that there’s some acceleration going on.
Then we went for other data sets. Some of them were also online, like Google search queries: how quickly interest in a search query rises and falls. Also how Reddit discussions, for example, evolve over time. But we even went outside, offline so to speak, and looked at how authors in the Google Books data set, digitized books, used certain terms. We looked at how movie tickets are sold.
We also saw these accelerating dynamics. These waves of interest in public discourse jump from one topic to the other very quickly.
[Celia]
We can blame at least some of this acceleration on our devices.
[Philipp]
We did hypothesize that this has something to do with technological progress—our information systems have become faster. We have push notifications. We have access to news 24 hours a day. These are the drivers.
The consequences—what does it do to the quality of discourse, for example? This is something I’m still researching, but obviously you could think that the quicker that turnover is, the more difficult it is for, say, journalism to keep up. Journalists have to do their research. That usually takes some time. If there’s this very brief wave of interest and you have to catch it, you’re probably better off just publishing something you already have on the topic, which is maybe not of that high quality.
And with that, we are back at this vicious cycle. We have to keep up, but at the same time, we might do so to the detriment of other qualities, like the depth of the discourse.
Long, complicated content of course needs more resources. We need more time to read through it and understand it. If there’s a simple claim saying, “It’s their fault,” or “I solved this problem,” that’s easier to process, so we attend to it more easily.
[Celia]
This makes sense, given everything we know about human cognition. Like Jason said, our brains generally don’t like working too hard, even though that effort is important for deep learning. He explained more:
[Jason]
Many of the things we need to understand in the world are complicated. They just are. There’s no avoiding that. In almost all circumstances—not all, but the majority—learning those things requires effort.
The problem that we have—I like to draw an analogy between our minds and our bodies. If you want to get fit, you similarly have to put in hard work. But we don’t see the muscles of our brains growing, or the mental equivalent of our body fat percentage decreasing, in the same concrete way we can see those effects in our bodies.
It can feel really tempting for us to just get to the finish line and not realize that just getting there was not the point of the exercise. It was the process you go through in that learning—the hard work that takes you from point A to point B—that is actually the key piece. It’s not just about getting to the end result here. You have to invest the effort in order for that to be an effective learning experience.
If you can find a way to shortcut that, one that feels like you get to the end result without doing that work, that’s a tempting thing for a lot of people.
Something we’ve seen a lot—and this was particularly true when we started to use multimedia resources for learning, things like videos and podcasts, videos in particular—is how fluent and easy to process the information becomes. Video is fantastic for this, because you can create amazing animations and talk through things in a slow and considered way, and students can rewind if they need to go over something again. This gives us a feeling that the information is sometimes easier to process than it should be.
To give a clear example of this: you might have seen some of those fancy documentaries that are sometimes produced about really complex ideas, like cosmology or quantum physics. Because they’re so beautifully made, at the end of watching one for an hour you feel like, oh, I completely understand quantum physics now.
Of course you don’t, but because there is something about the way in which the information is so easily processed—it’s entertaining. It’s beautiful to look at. It’s presented very well. It lures us into being overconfident about how much we know.
[Celia]
This overconfidence can create a mismatch between what we think we know and what we actually know.
[Jason]
Any time we’re learning something, it’s always happening on two levels. On one level is the actual stuff we’re learning. The pure, if you like, cognitive component of it, which is "I’m acquiring this knowledge. I’m figuring out this new way to use the information," whatever that might be, whatever the task is or whatever the learning is.
Running alongside that, always, is another level, which is the way that we monitor and understand our own learning as it’s happening. It doesn’t just happen as an automatic process. We’re constantly monitoring how we’re going. On the basis of that monitoring we make decisions about where we go to next.
We call these processes, broadly, metacognitive, or thinking about our thinking if you like. The whole cycle—we refer in the research to this notion of self-regulated learning. It does involve these critical components of making judgments about what I know relative to what I need to know, understanding where I need to go in terms of my goal-setting, but also being able to make good decisions about what I do next on the basis of where I think I am relative to where I need to be.
[Celia]
We'll be right back.
[AD]
IFoRE 2025 registration is now open. Be among the first to register for IFoRE ‘25, Sigma Xi’s annual conference featuring award-winning research presentations, keynote speakers, and panels. This year’s virtual format offers an international conference experience at affordable rates with no travel required. Visit www.experienceifore.org to learn more and register today.
[Celia]
You’re listening to Wired for This.
[Celia]
But having access to near-infinite information all the time, via the supercomputers we carry in our pockets and our backpacks, can trick our brains into thinking we know more than we really do. Jason said that can be a problem.
[Jason]
Now, because technology lures us into thinking that we can get to the finish line faster, what tends to happen over time is that these misjudgments compound. That is, I think I understand this thing better than I actually do, and as a result of that I’m not making the best decision about what to do next.
To give you one example, I’ve just watched a very beautiful video explaining quantum physics to me in an hour. At the end of that my judgment is that I understand quantum physics. My decision is I don’t need to study that anymore. For students, that misjudgment is leading them to make the wrong decision when they probably need to decide to actually spend more time studying the material, because they’ve developed a level of overconfidence, which is the misjudgment here. This becomes a bit of a cycle.
[Celia]
There is a way to break the cycle.
[Jason]
In our research studies we test people’s knowledge. We use the classic testing approach to see where their understanding is at. But we also ask people how confident they are. How much effort do they feel they need to put into something? How difficult did they think the material was?
If somebody doesn’t perform very well on a test but felt very confident, didn’t feel like they needed to put in much effort, and didn’t feel the material was difficult, it’s clear that that person is overconfident in their learning.
The trick for us all as individuals, and for teachers, is that it’s really about taking whatever opportunity we can to test our own understanding in a meaningful way, or to help our students to test their understanding in a meaningful way. For example, one way we can do this as individuals is if I think I understand something, try to explain it to someone else. If you can’t explain it to somebody else in a way that makes sense to them, you probably don’t understand it quite well enough. This is classic work that people call the testing effect.
Until you test your level of understanding in some way, whether through an exam or having a conversation with somebody else and trying to explain it to them, it’s hard to get a good grasp on your level of understanding.
But that calibration is critical. If we think about the end result of a lot of these processes as creating people who have expertise, then what is an expert? An expert is not just someone who knows lots of things. It’s someone who knows the limits of their knowledge and is well-calibrated in terms of what they know and what they don’t know. That, I think, is the journey that we all want to go on.
[Celia]
We’ve alluded to it already, but digital technology is fundamentally changing how we take in and process information. This is a relatively new field of research, but scientists like Jason have already made some surprising observations.
[Jason]
Our brains are essentially doing the same sorts of tasks and activities that they were previously.
What obviously has changed is that the vehicle by which information comes to us, and the volume of information that comes to us, has changed significantly. Some of the ways in which the information is packaged are creating some issues with making sense of the information that we come across.
What tends to happen, and this has been true pretty much since the internet became available, is that when we process information from a screen, we often do it in such a way that we’re searching for something.
We see this in eye-tracking studies of where people’s gaze goes on a screen. It will often follow a pattern that aligns with the navigation functions we use to work our way around that digital environment. This is very different from the sorts of patterns you see if someone is, for example, reading from a physical book or a piece of paper, where people tend to be much more linear in their approach to reading.
What has happened, though—we have some very recent data on this, which is very interesting and slightly scary. For young people in particular—we’re doing a lot of this work with secondary school, high school students—a lot of the behavior we previously saw in digital environments, this scanning and searching behavior, is now translating over to the way they also process information from books, from physical resources.
It’s kind of scary that some of this processing is translating across, because we also know from the research that we’ve done that you don’t necessarily process the information as deeply when you’re scanning and searching through it as opposed to working through it in a linear way, when you’re reading the material properly.
It generally means that when we work from screens, as many of us have gotten used to, we become quite good at synthesizing different pieces of information and pulling those things together, in ways that you don’t necessarily get from reading something in a linear way from a physical resource. There’s an upside and a downside to that. But the data we have would suggest that the downsides at this point are winning. This surface-level processing of material, even on paper, is starting to become a real problem.
[Celia]
All of this starts to get weirder when we consider our relationships with generative AI.
[Jason]
Generative AI has taken this and really expanded on the way that these technologies allow us to have a fluent interaction with them. Because while it’s still a tool, it feels like a collaborator. It’s not a collaborator, but it feels like a collaborator.
We know that because—I ask this of people all the time. Do you say please and thank you to generative AI? The majority of people say that they do.
Do you say please and thank you to a calculator? Do you say please and thank you to your car? Your car has 40 computers in it. There’s something about this technology that makes it feel like we’re collaborating with something.
What that then does is it takes this fluency, because generative AI is also really good at explaining things. Sometimes it explains the wrong things in the wrong ways. It makes mistakes. But there’s something about that fluency that comes from generative AI that particularly seems to be driving people to feel like they understand something better than they do. It does such a great job of explaining things. But perhaps it does too good a job.
We’re starting to see these clear differentiations in terms of thresholds of use that we’ve been talking about. The first one is just getting in and using these tools at all. Just getting in there and testing them out and seeing what a prompt does and all that sort of thing.
The second level is where you—we see this with students in some of our studies. They treat this technology like previous technologies. Something like a calculator or a search engine. They put something in, they expect a response from it, and off they go. It’s very transactional.
When people pass that threshold into the way in which these tools have been designed to be much more interactive, it then feels much more like a conversation. That’s where we start to see some of this fluency impact on the way that people process that information.
[Celia]
Even though it sometimes feels like our devices control us, we do have agency here. We can be intentional about building better information ecosystems for ourselves, on our phones and within our social networks. I asked Philipp for his advice.
[Philipp]
I think many people say that they are not particularly happy with their information consumption. They say they’re on their phones too much, or they mindlessly scroll through things and so on.
Sure, you can try to practice self-control and stuff like that. But I think this is putting a bit too much responsibility on the individual. We’re facing a technological system that’s trying to get our attention. The algorithms of platforms, but also people who produce content, obviously they do this very intentionally. In that sense I think—I would rather focus on that part.
Some of these biases and tendencies are probably very old and deeply ingrained in our psychology. They’re evolutionary. We can’t just easily change that. We can’t just tell ourselves, don’t look at negative information anymore, ignore that, just look at positive stuff. That wouldn’t really work.
The problem is that these platforms, with their algorithms for example, by learning about what we engage with, they’ve learned our biases very well, and now they amplify them even further.
Then of course this can also do something to us, because we see more of that content. We engage more with it. We have a kind of feedback loop going on here.
And yes, I think we have to break those feedback loops, in one way or another. I do think that we need to have a bit of a societal conversation about what kind of information environment we want to have. These things are also human-made. The algorithms, the platforms, the technology that drives our information environment now, it’s not a force of nature. It’s something that people developed, and that people can also change. We should tackle these parts.
[Celia]
So, if the goal is to build better online platforms, we should have a clear idea of what “better” looks like. I asked Philipp.
[Philipp]
If I were to think about the ideal information environment, I would think about one that’s less driven commercially. Many of the big platforms have commercial interests. The internet has mostly moved to this advertisement-driven business model. This is the root cause of some of the things we’re seeing.
The advertisement business, it’s quantity before quality. It’s better to show a lot of ads to the customer and keep them on the screen longer. But these are values that are in contradiction with the values that we want as a society for our public discourse, which might be information quality or time well spent or constructive interactions with others. These are all not really goals that are baked into those systems. I do think we have to bring them back in.
[Celia]
One thing that can make public discourse feel biased, polarized, or hostile, is our tendency to connect with people who already share our values.
[Philipp]
That’s something we can easily find online. We find this all the time. Of course this can lead to surrounding yourself with specific voices that, for example, are echoing, quite literally, your positions. By that you become less understanding of other political sides, for example.
Of course the intuitive suggestion would be to just follow some people from the other political side, but it’s not that simple. Some results show that it might have the opposite effect: you can end up repelled by that side. I don’t think we know enough yet about those dynamics to make very concrete suggestions.
I want to emphasize that here, too, our power to shape our own information environment is slightly limited, and it’s being reduced with time. If you think about platforms like TikTok, the social network element is becoming less important, because they are much more focused on what they call the “For You” algorithm. The algorithm decides for you.
[Celia]
Before we wrapped up, I asked Philipp what he would change about how we share information online.
[Philipp]
The first thing I would do is to make the algorithms transparent. This would give users back some more control, because they might say, okay, this algorithm, I don’t like that algorithm that shows me all these things. Maybe I can change that. Or I’ll switch platforms. But if we don’t know what the algorithm is actually doing, we don’t have the possibility to make those decisions.
And then also, on the societal level, opening up the algorithms would help, for example, researchers like us to understand the dynamics. Why is the algorithm choosing certain content? Is it actually the psychological biases that are amplified?
Once we crack open the black box, then we can start talking about how we want to redesign it.
[Music: Wandering by Nat Keefe FADE IN]
[Celia]
Thanks again to Jason Lodge and Philipp Lorenz-Spreen for joining in on this episode of Wired For This. You can find links to our sources in the episode description.
You’ve been listening to a podcast by American Scientist, published by Sigma Xi, the Scientific Honor Society.
Wired For This is produced and edited by Nwabata Nnani and hosted by me, Celia Ford.
Thanks for listening.