A Mathematician Walks Into a Bar (of Disinformation)

The Evolving Landscape of Media and Disinformation
Terms like disinformation, misinformation, infotainment, and “algowars” have become commonplace in discussions surrounding the future of media over the past few decades, leaving a noticeable mark on the English language. Significant concern has arisen regarding the impact of social media, ranging from individual psychological and neurological effects to broader implications for democratic societies. As Joseph Bernstein recently observed, the transition from the perceived “wisdom of the crowds” to the prevalence of “disinformation” has been remarkably swift.
Defining and Identifying Disinformation
What constitutes disinformation? Does it genuinely exist, and if so, where can it be found, and how can we reliably identify it? Is it necessary to be concerned about the content presented to us by the algorithms of our preferred platforms as they attempt to maximize user engagement? These complex mathematical and social science inquiries initially sparked Noah Giansiracusa’s interest in this subject.
Noah Giansiracusa’s Research
Giansiracusa, a professor at Bentley University in Boston, has a background in mathematics, with research in areas such as algebraic geometry, and a long-standing interest in analyzing social issues through a mathematical lens, exemplified by his work connecting computational geometry to Supreme Court decisions. His most recent book, “How Algorithms Create and Prevent Fake News,” delves into the challenging questions surrounding the current media environment and the ways in which technology both intensifies and mitigates its problems.
A Conversation on Twitter Spaces
I recently hosted Giansiracusa on a Twitter Space, and since those conversations are ephemeral and hard to access after the fact, I have extracted the most insightful segments of our discussion for wider dissemination and archival purposes.
This interview has been edited for brevity and clarity.
Motivation for the Book
Danny Crichton: What motivated you to undertake research on fake news and subsequently write this book?
Noah Giansiracusa: I observed a significant amount of insightful sociological and political science discourse concerning fake news and related phenomena. Simultaneously, on the technical side, figures like Mark Zuckerberg have suggested that AI will resolve these problems. It appeared challenging to reconcile these perspectives.
Politicians, such as President Biden, have expressed concerns about misinformation on social media, stating it is “killing people.” However, it’s difficult for them to fully comprehend the algorithmic complexities involved. Conversely, computer scientists possess a deep understanding of the technical details. I find myself positioned between these two groups, allowing me to adopt a broader perspective.
Ultimately, I was driven by a desire to explore further the interactions between society and technology, where complexities arise and mathematical precision becomes less straightforward.
Nuance in Existing Research
Crichton: Coming from a mathematical background into this contested field with its many divergent viewpoints, what does the current research get right, and where is nuance lacking?
Giansiracusa: The quality of journalism in this area is remarkable; I was impressed by journalists’ ability to grapple with technical concepts. However, I noticed a tendency to cautiously present findings, often quoting press releases or experts rather than deeply investigating the underlying details. This seems to stem from a fear of making errors.
My experience as a mathematics teacher has shown me that people are often hesitant to express ideas for fear of being incorrect. This apprehension extends to journalists reporting on technical subjects. In pure mathematics, exploration and experimentation are encouraged, with details verified later. This approach, fostered by my mathematical training, actually reduced my apprehension about making mistakes.
Furthermore, many algorithmic processes are less complex than they appear. While implementing these algorithms may be challenging, the core principles are often readily understandable. The key lies in identifying the input variables and the desired output.
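To make that concrete, here is a minimal sketch, with invented feature names and weights rather than any platform’s actual formula, of how an engagement-driven feed reduces to input variables and a single output score:

```python
# A toy sketch (invented features and weights, not a real platform's
# formula): a feed-ranking algorithm is, at its core, a function from
# input variables describing a post to one output score used to sort.

def engagement_score(post: dict) -> float:
    """Hypothetical ranking function: a weighted sum of predicted signals."""
    return (
        2.0 * post["predicted_click_rate"]    # how likely the user clicks
        + 1.5 * post["predicted_share_rate"]  # shares spread content widely
        + 0.5 * post["predicted_watch_time"]  # minutes of expected attention
    )

posts = [
    {"id": "calm-news", "predicted_click_rate": 0.10,
     "predicted_share_rate": 0.01, "predicted_watch_time": 1.0},
    {"id": "outrage", "predicted_click_rate": 0.35,
     "predicted_share_rate": 0.20, "predicted_watch_time": 2.5},
]

# The feed is simply the posts sorted by the output score.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # the higher-engagement post ranks first
```

The implementation details inside a real system are far more elaborate, but the question a reader should ask is exactly the one this sketch isolates: what goes in, and what is being maximized?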
The Challenge of Transparency
Crichton: A significant obstacle in analyzing these algorithms is the lack of transparency. Unlike the collaborative nature of the mathematical community, many companies are reluctant to share data and analysis.
Giansiracusa: It appears there is an inherent limit to what can be deduced from an external perspective.
Studying YouTube’s Algorithm
Consider YouTube as an example. Researchers attempting to determine whether the recommendation algorithm leads users down paths of conspiracy theories and extremism face a significant hurdle. The algorithm utilizes deep learning, incorporating hundreds of predictors based on search history, demographics, and viewing behavior. This personalization makes it difficult to conduct meaningful studies.
Most studies rely on “incognito mode,” simulating a user with no prior history. This experience differs drastically from that of a real user with an established online profile. Consequently, effectively studying the YouTube algorithm from the outside remains a challenge.
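As a toy illustration of that gap, with invented categories and videos standing in for a system nothing like YouTube’s actual deep-learning recommender, consider a model that conditions on watch history: an empty-history “incognito” user and an established profile see entirely different slates.

```python
# A toy sketch (invented data, not YouTube's system) of why incognito
# studies diverge from real users: a personalized recommender conditions
# on watch history, so an empty history yields a different slate than
# an established profile does.

from collections import Counter

CATALOG = {
    "cooking": ["knife skills", "pasta 101"],
    "politics": ["debate recap", "election explainer"],
    "conspiracy": ["moon hoax", "flat earth"],
}

def recommend(watch_history: list[str], k: int = 2) -> list[str]:
    """Recommend from the user's dominant category; fall back to a generic mix."""
    if not watch_history:  # the incognito case: no history at all
        return [videos[0] for videos in CATALOG.values()][:k]
    top_category, _ = Counter(watch_history).most_common(1)[0]
    return CATALOG[top_category][:k]

print(recommend([]))  # incognito "user": generic mix
print(recommend(["conspiracy", "conspiracy", "politics"]))  # established profile
```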
The only viable approach, in my opinion, would involve a large-scale study recruiting volunteers and tracking their online activity. However, even this method is complicated by the algorithm’s reliance on individual data, making it difficult to draw aggregate conclusions.
Moreover, even those within these companies who understand the algorithm’s design may struggle to predict its actual behavior. It’s akin to Frankenstein’s monster – a creation whose operation remains unpredictable. Meaningful study requires internal access to data and dedicated resources.
Evaluating Metrics of Misinformation
Crichton: Numerous metrics are employed to assess misinformation and measure engagement on platforms. From your mathematical perspective, how robust are these measures?
Giansiracusa: Attempts to debunk misinformation often inadvertently increase engagement through comments, retweets, and shares. Consequently, engagement metrics may not accurately reflect positive or negative responses. The data is often aggregated without distinction.
A similar issue arises in academic research, where citations are used to gauge success. However, even demonstrably flawed research, like the Wakefield study linking autism to vaccines, can accumulate numerous citations, including those from researchers debunking the claims. A citation remains a citation, regardless of its context.
This ambiguity extends to engagement metrics. When someone posts a comment expressing disbelief, how can the algorithm determine whether it’s supportive or critical? While AI language processing could potentially address this, it’s uncertain whether it’s being implemented, and it would require significant effort.
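Here is a rough sketch of that measurement gap, with a crude keyword matcher standing in for the stance-detection model such a system would actually need:

```python
# A sketch of the measurement gap: raw engagement counts supportive and
# critical interactions identically. The keyword "classifier" below is a
# crude placeholder for a trained NLP stance model.

comments = [
    "This is so true, sharing now!",
    "This is false, here is the debunk...",
    "Can't believe people fall for this nonsense.",
]

# What platforms typically measure: one unit of engagement per comment.
raw_engagement = len(comments)

def naive_stance(comment: str) -> str:
    """Toy stand-in for a stance classifier: keyword matching only."""
    critical_markers = ("false", "debunk", "nonsense")
    return "critical" if any(m in comment.lower() for m in critical_markers) else "supportive"

stances = [naive_stance(c) for c in comments]
print(raw_engagement)  # 3 -- the debunking comments still count
print(stances.count("supportive"), stances.count("critical"))  # 1 2
```

A production system would replace the keyword matcher with a trained language model, but the aggregation problem is the same: unless stance is explicitly modeled, a debunk and an endorsement are indistinguishable in the engagement totals.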
The Threat of Synthetic Media
Crichton: Finally, let’s discuss GPT-3 and the concerns surrounding synthetic media and fake news. There’s a fear that AI bots will flood media with disinformation – how concerned should we be?
Giansiracusa: My approach, stemming from my teaching experience, is to remain impartial and present information objectively, allowing individuals to form their own conclusions. I aimed to cut through the debate and let both sides be heard. I believe that newsfeed and recommendation algorithms amplify harmful content, which is undeniably detrimental to society. However, there is also significant progress in using algorithms to effectively limit fake news.
There are those who believe AI will solve everything, providing truth-telling and fact-checking capabilities. While progress is being made, this outcome is unlikely to be fully realized and will always require human oversight. Conversely, there’s an irrational fear that algorithms will become so powerful they will destroy us.
When deepfakes emerged in 2018, and GPT-3 was released a few years later, there was concern that they would exacerbate the problem of fake news. However, with some distance, we can see that their impact has been less significant than anticipated. The primary challenge is psychological and economic rather than purely technological.
The original authors of GPT-3 conducted a test in which they expanded a single headline into a paragraph and asked volunteers to identify the algorithmically generated text. The volunteers achieved accuracy barely above random guessing. While impressive, the experiment involved extending short text snippets; attempting to generate full-length articles would reveal inconsistencies and a lack of coherent thought.
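To see what “barely above random guessing” means statistically, here is an illustrative back-of-the-envelope check; the counts are assumed for the sake of the example, not taken from the paper’s raw data.

```python
# An illustrative check (counts assumed, not the paper's raw data) of
# whether accuracy slightly above 50% is distinguishable from the
# chance baseline, using a normal-approximation z-test.

import math

def z_score_vs_chance(correct: int, n: int, p0: float = 0.5) -> float:
    """Normal-approximation z-statistic for observed accuracy vs. chance."""
    p_hat = correct / n
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

# Hypothetical: 312 correct out of 600 judgments (52% accuracy).
z = z_score_vs_chance(312, 600)
print(f"z = {z:.2f}")  # ~0.98: well below 1.96, not significant at the 5% level
```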
GPT-3 can create convincing articles, but the main reason it hasn’t been transformative is economic: most fake news is low-quality and cheap to produce anyway, and it’s already easy to pay people to churn out large volumes of it quickly, so there is little need for algorithmic authorship.
Mathematics has instilled in me a sense of skepticism, encouraging me to question assumptions and approach information with a critical mindset.