In the ever-changing digital landscape, where algorithms quietly shape our online experiences, a team of digital detectives is working hard to uncover the hidden mechanisms that influence our daily lives. Meet AI Forensics, a European non-profit organisation dedicated to holding large tech companies and their unseen algorithms accountable.
In an exclusive conversation with NGI, Katya Viadziorchyk, AI Forensics’ head of funding and partnerships, discussed the organisation’s journey and mission. The story begins with Claudio Agosti, who believed that algorithms should be subject to public scrutiny. This conviction inspired the formation of Tracking Exposed, a community of activists who laid the groundwork for what would later become AI Forensics. Claudio and Marc Faddoul co-founded AI Forensics in 2021.
“Algorithms are very smart and tricky,” Katya explains. “They provide you with information that you are searching for, creating a sort of rabbit hole effect.” This seemingly innocuous process, however, can lead users down dangerous paths, affecting everything from consumer decisions to democratic processes.
In its pursuit of the hidden algorithms that govern our online environment, AI Forensics’ work resembles digital archaeology. Their open-source tools let researchers and activists see what’s going on behind the scenes of big tech platforms. “The hardest part in that process is basically data collection,” Katya reveals, describing their use of both official APIs and more adversarial methods when necessary. Their audits are always data-driven, but the data they need is typically not made available by the platforms. Some tools let them mimic user behaviors and collect behavioral data on the algorithms, using methods such as sock-puppeting, scraping, or user data donation. These adversarial approaches allow them to remain fully independent of the platforms and to guarantee the integrity of the data.
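To make that methodology concrete, here is a minimal, hypothetical Python sketch of what one sock-puppet-style collection run could look like. The endpoint, query, and persona fields are invented for illustration; this is a sketch of the general technique, not AI Forensics’ actual tooling.

```python
# Hypothetical sketch of a sock-puppet collection run: scripted "personas"
# with distinct profiles issue the same query, and every response is
# archived verbatim so an audit can later compare what each persona saw.
# The URL, query, and persona fields below are illustrative placeholders.
import json
import time
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

import requests


@dataclass
class Persona:
    """A scripted account profile used to probe personalisation."""
    name: str
    language: str    # e.g. "de-DE" vs "fr-FR"
    user_agent: str  # distinct browser fingerprint per persona


def collect(persona: Persona, query: str, url: str) -> dict:
    """Fetch one result page as the given persona and keep the raw payload."""
    response = requests.get(
        url,
        params={"q": query},
        headers={
            "User-Agent": persona.user_agent,
            "Accept-Language": persona.language,
        },
        timeout=30,
    )
    return {
        "persona": asdict(persona),
        "query": query,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "status": response.status_code,
        "raw_html": response.text,  # archived verbatim for later labeling
    }


if __name__ == "__main__":
    personas = [
        Persona("de_user", "de-DE", "Mozilla/5.0 (X11; Linux x86_64)"),
        Persona("fr_user", "fr-FR", "Mozilla/5.0 (Macintosh; Intel Mac OS X)"),
    ]
    with open("observations.jsonl", "a", encoding="utf-8") as sink:
        for persona in personas:
            record = collect(persona, "election results",
                             "https://example.org/search")  # placeholder URL
            sink.write(json.dumps(record) + "\n")
            time.sleep(5)  # pace requests to keep runs gentle and comparable
```

Archiving the raw payload verbatim, alongside the persona and a timestamp, is one way this kind of collection stays verifiable after the fact.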
A spotlight on digital democracy
Their investigations have yielded impressive results. In one recent project, they exposed the unreliability of Microsoft’s Copilot (formerly Bing Chat) in providing accurate information during elections. “About a third of answers related to elections contained factual errors,” Katya discloses, highlighting the potential impact on voter information and, by extension, democratic processes.
The true power of AI Forensics’ work came into focus with their recent investigation into political advertising on Meta platforms. In a report titled “No embargo in sight”, the organisation uncovered a startling reality: a flood of pro-Russian propaganda ads was reaching millions of European users, largely unchecked by Meta’s moderation systems.
But AI Forensics doesn’t just uncover problems; it enables change. Its reports led the European Commission to initiate proceedings against Meta, which could result in significant fines for the tech giant, showcasing the real-world impact of its digital investigations.
Advancing algorithmic auditing tools with the support of NGI
The organisation’s commitment to transparency extends to their own tools. By releasing their auditing software for free, AI Forensics aims to democratise the field of algorithmic accountability. “The idea is to empower the research ecosystem,” Katya explains, hoping to grow a community of digital watchdogs.
Furthermore, AI Forensics is currently engaged in an NGI Search project that is enhancing their algorithmic auditing capabilities. “Our project focuses on algorithmic auditing of LLM-based search engines,” Katya explains. “It primarily funds developments initially undertaken for our EU elections research involving Microsoft Copilot. However, our scope extends beyond this, encompassing pipelines for YouTube and TikTok as well.”
The project has allowed AI Forensics to significantly improve their tools. Starting from a basic codebase used to scrape answers from Bing Chat during the Swiss and German elections, they have since refactored their code to be more modular and robust. This not only supports their study of LLM-based search engines but has also led to the development of data labeling tools and more resilient scraping pipelines for platforms like YouTube and TikTok.
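As a rough illustration of what such a modular structure can look like (a sketch under assumptions, not the organisation’s actual codebase): a shared pipeline contract separates collection from parsing, so an interface change on YouTube, TikTok, or Copilot breaks, and requires fixing, only one module.

```python
# Illustrative sketch of a modular audit pipeline. The class and method
# names are invented for this example; only the design idea is taken from
# the article: shared orchestration, thin platform-specific modules.
import json
from abc import ABC, abstractmethod


class AuditPipeline(ABC):
    """Common contract every platform-specific pipeline satisfies."""

    @abstractmethod
    def collect(self, query: str) -> str:
        """Fetch the raw payload for a query (HTML, JSON, a transcript...)."""

    @abstractmethod
    def parse(self, raw: str) -> list[dict]:
        """Turn a raw payload into structured records ready for labeling."""

    def run(self, query: str) -> list[dict]:
        # Orchestration is shared; only collect() and parse() vary per platform.
        return self.parse(self.collect(query))


class DemoSearchPipeline(AuditPipeline):
    """Toy stand-in showing the shape of one platform-specific module."""

    def collect(self, query: str) -> str:
        # A real module would drive a browser or call an API here.
        return json.dumps({"query": query, "answer": "example answer"})

    def parse(self, raw: str) -> list[dict]:
        record = json.loads(raw)
        return [{"query": record["query"], "answer": record["answer"]}]


if __name__ == "__main__":
    print(DemoSearchPipeline().run("who won the election?"))
```

The appeal of this shape is maintenance: when a platform updates its interface, the fix is confined to that platform’s collect() and parse() rather than rippling through the whole pipeline.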
Katya emphasizes the importance of this ongoing work: “This funding is crucial because these pipelines can have a limited lifespan. Each time YouTube, TikTok, or Microsoft Copilot updates their interfaces, our pipelines can break, necessitating ongoing maintenance and fixes.”
Looking ahead, the team is focusing on code cleanup, documentation, and the development of data visualization tools. True to their open-source philosophy, they aim to share some of their code, furthering their mission to democratise algorithmic auditing tools.
As our world becomes increasingly governed by lines of code and algorithms, the work of AI Forensics stands as a safeguard for digital transparency and algorithmic accountability. The power to shape our online experiences should not rest solely in the hands of tech giants. Instead, it should be a collaborative effort, with users empowered to understand the algorithms that influence their daily lives.