1 August 2018 395 words, 2 min. read

Filter bubbles: scientific evidence that algorithms aren’t to be blamed

By Pierre-Nicolas Schwab, PhD in marketing, director of IntoTheMinds

What can you discover about filter bubbles when you analyse datasets with billions of tweets over an 8-year period? Some very interesting results, for sure. In particular, that you don’t need algorithms to stay within an echo chamber, and that bridging echo chambers comes at a price. This contributes to the ongoing discussion on the very existence of algorithmic filter bubbles.

Introduction

Garimella et al. (2018) studied political exchanges on Twitter. Their research differs from that of Bakshy et al. (2015) in that both the messages and the network that propagates them were studied. This is a significant methodological improvement over the study by Bakshy et al., which focused on the consumption of cross-cutting content. One should however remember that Bakshy’s study (based on Facebook data, not Twitter’s) came at a time when Facebook was accused of influencing voters’ opinions. The focus at that time was therefore on the content itself.

Propagating partisan content is rewarded

The title of this section may look partisan itself, but it reflects the spirit of the research results: those who propagate partisan messages (i.e. content leaning exclusively towards one end of the political spectrum) enjoy statistically higher retweet rates, retweet volumes, favorite rates and favorite volumes.
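To make these four engagement measures concrete, here is a minimal sketch (not the study’s code) computing them from hypothetical per-tweet counts for a single user. I assume "volume" means the total across a user’s tweets and "rate" means the average per tweet; the paper’s exact definitions may differ (e.g. normalised per follower).

```python
# Hypothetical per-tweet engagement counts for one user (made-up data).
tweets = [
    {"retweets": 12, "favorites": 30},
    {"retweets": 4,  "favorites": 10},
    {"retweets": 8,  "favorites": 20},
]

# Volume: total engagement across all of the user's tweets.
retweet_volume = sum(t["retweets"] for t in tweets)
favorite_volume = sum(t["favorites"] for t in tweets)

# Rate: average engagement per tweet (assumed definition).
retweet_rate = retweet_volume / len(tweets)
favorite_rate = favorite_volume / len(tweets)

print(retweet_volume, retweet_rate)  # 24 8.0
```

Comparing these statistics between partisan and bipartisan user groups is what underpins the finding above.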

Bridging echo chambers comes at a cost

So-called bipartisan users (i.e. users who tweet messages from different sides of the political spectrum) occupy a less central position in their network (as represented by their PageRank) and are less popular (lower retweet rate and volume, as well as favorite rate and volume). However, no significant difference was found in the number of followers.
Being bipartisan and trying to expose your community to opposing political opinions therefore comes at a cost: decreased engagement from your community.
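PageRank, the centrality measure used above, can be sketched in a few lines of plain Python. This is a toy illustration (not the authors’ pipeline), with a made-up retweet graph in which an edge u → v means u retweeted v; accounts retweeted by many well-connected accounts score higher.

```python
def pagerank(edges, damping=0.85, iters=50):
    """Power-iteration PageRank over a directed edge list."""
    nodes = sorted({n for e in edges for n in e})
    out = {n: [] for n in nodes}
    for u, v in edges:
        out[u].append(v)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for u in nodes:
            if out[u]:
                # Spread u's rank evenly over the accounts it retweeted.
                share = damping * rank[u] / len(out[u])
                for v in out[u]:
                    new[v] += share
            else:
                # Dangling node: distribute its rank evenly over all nodes.
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# Hypothetical network: three users all retweet a partisan "hub";
# only one retweets the bridging "bridge" account.
edges = [("alice", "hub"), ("bob", "hub"), ("carol", "hub"), ("alice", "bridge")]
ranks = pagerank(edges)
print(max(ranks, key=ranks.get))  # hub
```

In this toy graph the heavily retweeted hub ends up far more central than the bridging account, mirroring the pattern the study reports for partisan versus bipartisan users.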

Do we need algorithms to be trapped in a filter bubble?

This research shows that we don’t necessarily need an algorithm to be trapped in an echo chamber. Cognitive dissonance and group effects are sufficient.
Twitter doesn’t filter messages out (unlike Facebook, it has no recommendation algorithm curating the feed), and yet polarization is a reality. Bridging echo chambers is not rewarded, and users are not nudged to express less polarized opinions.
Nudging users towards less polarization is something that next-generation recommendation algorithms could harness. They must become tools to promote democracy.


