Intel / North America

Real-time monitoring of security incidents, escalation signals and threat indicators across global hotspots, focusing on rapid alerts and emerging risk developments. Topic: North-America. Updated briefs and structured summaries from curated sources.
AI and disinformation – How can Europe safeguard trust in the media?
2026-02-05T01:06:21Z
Summary
Artificial intelligence is significantly influencing societal interactions and media trust. The AI for Trust project, funded by Horizon Europe, aims to combat disinformation through collaborative efforts: an online platform that collects and processes data from social and news media using advanced AI tools, providing insights on disinformation to fact-checkers, journalists, policymakers, and researchers.

AI is a powerful tool that can both threaten and provide solutions in the fight against disinformation, particularly during elections. The financial burden of producing quality news complicates the maintenance of a healthy information ecosystem, and the European Union faces increasing pressure to uphold the Brussels effect amid disinformation and the influence of large technology companies.

Disinformation is a multifaceted issue that requires a comprehensive approach beyond simply prohibiting misleading content. Effective tools must empower individuals and organizations to combat misinformation while fostering democratic participation and integrity in online communication. Critics caution that relying on AI tools alone assumes technology can address deep-rooted societal issues, and that this neglects the influence of powerful entities that may manipulate these tools for their own agendas.

Collaboration among stakeholders is essential for developing effective counter-disinformation tools, and engaging end users helps tailor those tools to their workflows. The discussion also highlighted the need to extend public service regulation to digital information providers, including AI platforms, as younger audiences increasingly rely on influencers for news.
Perspectives
Analysis of AI's role in combating disinformation and the importance of collaboration among stakeholders.
Proponents of AI in combating disinformation
  • Advocate for the use of AI tools to enhance fact-checking and transparency
  • Emphasize the need for collaboration among stakeholders to develop effective solutions
  • Highlight the importance of empowering individuals and organizations to combat misinformation
Critics of AI reliance
  • Question the effectiveness of technology alone in addressing deep-rooted societal issues
  • Raise concerns about the influence of powerful entities manipulating AI for their own agendas
Neutral / Shared
  • Acknowledge the complexity of disinformation as a multifaceted issue
  • Recognize the need for a comprehensive approach beyond simply prohibiting misleading content
  • Discuss the importance of media literacy and public trust in combating disinformation
Metrics
  • Collaboration (other): "the AI Against Disinformation Collaboration", a joint effort to tackle disinformation that is crucial for addressing shared challenges. Quote: "We built a collaboration that we called the AI Against Disinformation Collaboration."
  • Cost: producing news and verified knowledge is very expensive; high costs hinder the sustainability of quality information. Quote: "It's expensive to produce news, of course, as you all know; it's also very expensive to produce scientific knowledge."
  • Impact: AI makes creating and spreading content very fast and cheap; this efficiency alters the landscape of information sharing. Quote: "AI, on the other hand, as we all know, is making it very fast and cheap and easy to create and spread."
  • Risk: democracy itself is at risk in a predatory information ecosystem, where the erosion of shared realities threatens democratic processes. Quote: "It's becoming a predatory battleground, in reality, for large geo-economic actors, where democracy itself is at risk."
  • Other: pressure on the EU; the so-called Brussels effect is strained as the EU struggles to maintain influence in the face of disinformation. Quote: "...the so-called Brussels effect can withstand what you can call the billionaires effect."
  • Other: AI in content moderation; platforms currently rely on AI and algorithms to manage disinformation. Quote: "Platforms do use AI and algorithms to detect this type of content."
  • Other: focus of the discussion; the debate centers less on AI tools themselves than on the broader implications of disinformation. Quote: "The conversation isn't so much about the AI tools and the problems around those."
  • Cost: roughly 100 EUR/USD for a fully AI-generated newspaper; this low cost highlights how accessible disinformation tools have become. Quote: "For 100 euros or a hundred dollars you can buy a fully AI-generated newspaper."
Key entities
Companies
Bluesky • European Commission • European Fact-Checking Standards Network • Finconnes • Grok • Google • Horizon Europe • Mastodon • Reuters Institute • Twitter • US Department of Homeland Security • University of Cambridge
Countries / Locations
USA
Themes
#diplomatic_activity • #escalation_risk • #information_warfare • #ai_accountability • #ai_disinformation • #ai_for_trust • #ai_regulation • #ai_threats • #ai_tools
Timeline highlights
00:00–05:00
Artificial intelligence is significantly influencing societal interactions and media trust. The AI for Trust project, funded by Horizon Europe, aims to combat disinformation through collaborative efforts.
  • Artificial intelligence is reshaping societal interactions with information. It presents both opportunities and challenges in the media landscape
  • The event focuses on how Europe can maintain trust in media amidst the rise of AI and disinformation
  • Jennifer Baker serves as the moderator, guiding the discussion and facilitating audience questions through the Slido tool
  • The AI Act and related initiatives aim to address the risks associated with AI technologies. This includes the AI content action plan
  • The AI for Trust project, funded by Horizon Europe, is a collaborative effort to combat disinformation using AI
  • A senior official provides opening remarks and reflects on the project's three-year journey
05:00–10:00
The AI for Trust project aims to create an online platform that collects and processes data from social and news media using advanced AI tools. This initiative seeks to provide insights on disinformation to various stakeholders, enhancing the understanding of the European media landscape.
  • The AI for Trust project aimed to create an online platform that collects data from social and news media. It processes this data with advanced AI tools
  • This platform provides disinformation insights to fact-checkers, journalists, policymakers, and researchers. It enhances the understanding of the European media landscape
  • Collaboration among project partners was crucial. They developed innovative technology and research while maintaining a cooperative spirit throughout the project's duration
  • The project faced challenges due to rapid developments in AI. This was particularly evident following significant events like the acquisition of Twitter and the public release of ChatGPT
  • A shared technological infrastructure and data space were deemed necessary. This was to address common challenges faced by various projects tackling disinformation
  • The efforts of the AI for Trust project contribute to building a shield for European democracy. This was a key topic in their recent collaborative event in Brussels
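The pipeline described in this segment (collect posts from social and news media, score them with AI tools, surface aggregate insights to fact-checkers and researchers) can be sketched in miniature. Everything below is illustrative: `Post`, `disinfo_score`, and the keyword-based flagging are placeholders standing in for the project's real collectors and trained classifiers, which are not detailed here.

```python
from dataclasses import dataclass

@dataclass
class Post:
    source: str  # e.g. "news" or "social"
    text: str

def disinfo_score(post: Post) -> float:
    """Placeholder scorer: a real system would call a trained model.
    Here we simply flag posts containing a known false claim."""
    known_false_claims = {"the election was cancelled"}
    text = post.text.lower()
    return 1.0 if any(claim in text for claim in known_false_claims) else 0.0

def build_insights(posts: list[Post], threshold: float = 0.5) -> dict:
    """Aggregate per-source counts of flagged posts: the kind of
    summary a dashboard for fact-checkers might show."""
    insights: dict = {}
    for post in posts:
        stats = insights.setdefault(post.source, {"total": 0, "flagged": 0})
        stats["total"] += 1
        if disinfo_score(post) >= threshold:
            stats["flagged"] += 1
    return insights

posts = [
    Post("social", "Breaking: the election was cancelled!"),
    Post("social", "Turnout figures published by the electoral commission."),
    Post("news", "Fact check: no, the election was not cancelled."),
]
print(build_insights(posts))
```

The design point the project's description implies is the separation of collection, scoring, and aggregation: the scorer can be swapped for a better model without touching the rest of the pipeline.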
10:00–15:00
AI is a powerful tool that can both threaten and provide solutions in the fight against disinformation, particularly during elections. The financial burden of producing quality news complicates the maintenance of a healthy information ecosystem.
  • AI is a powerful tool that can pose threats and offer solutions in the fight against disinformation. It is essential to harness AI effectively to protect democracy and maintain information integrity
  • Misinformation and disinformation can have severe consequences, especially during elections. The impact of misleading information on democratic processes is a critical concern that must be addressed
  • The challenges facing information integrity are compounded by the high costs of producing quality news and verified knowledge. This financial burden makes it difficult to sustain a healthy information ecosystem
  • AI facilitates the rapid and cost-effective creation and dissemination of information. This alters how people process and perceive content, potentially leading to an erosion of shared facts and realities
  • Generative AI is increasingly used to deceive and influence public perception through targeted messaging. This creates a predatory environment where democracy is at risk from powerful economic actors
  • Trust in technology companies and government institutions is fragile and can be easily lost. The intertwining of trust in technology and government complicates the regulatory landscape for democracies
15:00–20:00
The European Union faces increasing pressure to uphold the Brussels effect amid challenges posed by disinformation and the influence of large technology companies. AI tools present both opportunities and risks in combating disinformation, necessitating a focus on transparency and accountability in their application.
  • The pressure on the European Union to maintain the Brussels effect is increasing. This is especially true in the context of disinformation and the influence of large technology companies
  • Media literacy alone is not a complete solution to the challenges posed by disinformation. Many individuals remain unaware of the predatory nature of the information environment
  • Disinformation is complex and often not illegal, which makes it difficult to address through policy. AI tools can both create and combat disinformation, complicating the response
  • Platforms currently use AI to detect disinformation. However, the focus should be on ensuring transparency and accountability in how these tools are applied
  • The consequences of flagging misleading posts must be carefully considered. Disinformation can range from harmless to potentially harmful, especially in influencing elections
  • A comprehensive policy agenda should prioritize making platforms responsible for their use of AI in content moderation. It should also address the systemic risks associated with disinformation
20:00–25:00
Disinformation is a multifaceted issue that requires a comprehensive approach beyond simply prohibiting misleading content. Effective tools must empower individuals and organizations to combat misinformation while fostering democratic participation and integrity in online communication.
  • Disinformation is a complex issue that cannot simply be prohibited or ignored. It often exists in a gray area where misleading content may not be illegal but remains problematic
  • AI tools can facilitate the spread of disinformation and help combat it. The challenge lies in ensuring these tools are used transparently and accountably to address disinformation effectively
  • Creating a healthier information environment requires a broader approach than tackling individual instances of disinformation. It involves building systems that support democratic participation and strengthen the integrity of online communication
  • Fact-checking is essential in the current landscape, but it is not sufficient on its own. A more comprehensive strategy is needed to address the systemic issues affecting information environments
  • Tools developed to combat disinformation should empower individuals and organizations rather than rely solely on large technology companies. Citizen fact-checkers and journalists need accessible resources to counter misinformation effectively
  • Technical solutions must be user-friendly while providing powerful functionalities. These tools should evolve to keep pace with the changing strategies of disinformation campaigns
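One building block of the accessible fact-checking tools described above is matching a new claim against claims that have already been checked. A minimal sketch under stated assumptions: `FACT_CHECKS` is a made-up lookup table, and word-level Jaccard similarity stands in for the embedding-based retrieval a production tool would more likely use.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Hypothetical database of previously checked claims and verdicts.
FACT_CHECKS = {
    "candidate X withdrew from the race": "FALSE: campaign confirmed participation",
    "turnout reached a record high": "TRUE: official figures",
}

def match_claim(claim: str, min_sim: float = 0.5):
    """Return the best-matching checked claim and its verdict,
    or None if nothing is similar enough."""
    best = max(FACT_CHECKS, key=lambda known: jaccard(claim, known))
    if jaccard(claim, best) >= min_sim:
        return best, FACT_CHECKS[best]
    return None

print(match_claim("candidate X withdrew from the race today"))
```

A tool like this keeps citizen fact-checkers from re-checking claims that circulate in slightly reworded forms, which is one way tooling can "keep pace with the changing strategies of disinformation campaigns".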
25:00–30:00
Artificial intelligence is increasingly being utilized to spread disinformation, significantly threatening journalism and media integrity. The efficiency and low cost of AI tools make them essential for combating disinformation while also raising concerns about their potential misuse.
  • Artificial intelligence is being used to spread disinformation, posing a significant threat to journalism and media integrity. For instance, AI-generated newspapers can produce thousands of articles tailored to disinformation agendas
  • Deepfakes have become increasingly sophisticated. A recent incident during the Irish elections involved a realistic deepfake that falsely claimed a leading candidate was withdrawing just before the vote
  • The efficiency and low cost of AI tools make them essential in combating disinformation. Leveraging AI's capabilities can help counteract the threats posed by disinformation campaigns
  • AI can assist in enforcing existing legislation, such as the Digital Services Act. This act requires platforms to mitigate risks associated with disinformation and provide more transparency about the content circulating on their platforms
  • Practical tools like fact-checking are essential for compliance with regulatory frameworks. The code of conduct on disinformation emphasizes the importance of fact-checking obligations for platforms
  • Projects under the Horizon program aim to enhance fact-checking efficiency and detect manipulative behavior. These initiatives focus on using AI to analyze polarization in debates across member states
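The final bullet mentions using AI to analyze polarization in debates across member states. As a toy illustration only (not the Horizon projects' actual method), one crude proxy is the Shannon entropy of stance labels: an evenly split debate scores higher than a one-sided one.

```python
from collections import Counter
import math

def stance_entropy(stances: list[str]) -> float:
    """Shannon entropy (in bits) of the stance-label distribution.
    Higher values mean opinion is more evenly divided."""
    counts = Counter(stances)
    n = len(stances)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

one_sided = ["pro"] * 9 + ["anti"]       # lopsided debate
split = ["pro"] * 5 + ["anti"] * 5       # evenly polarized debate
print(stance_entropy(one_sided), stance_entropy(split))
```

Real polarization analysis would need stance detection from text first; this sketch assumes the labels are already available.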