Disinex - Digital Entrepreneurship Expert
A proof-of-concept study conducted before a grant application for future funding
The Digital Awareness and Protection Assistant (DAPA) is a browser extension that introduces an AI-powered approach to media literacy education. Unlike traditional lessons or workshops, DAPA provides real-time feedback on news articles, helping users identify bias, persuasion techniques, and credibility issues as they browse.
Real-time feedback of this kind is common in fact-checking tools, but DAPA takes a different path. Instead of telling users what is true or false, it helps them understand how content is designed to influence them. By integrating Human-Computer Interaction and UX principles, DAPA makes media literacy engaging, contextual, and non-intrusive—so users can learn naturally while browsing.
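To make the idea concrete, the sketch below shows how a content script in such an extension might extract an article and request an AI analysis. The endpoint URL, response fields, and function names are illustrative assumptions, not DAPA's actual implementation.

```typescript
// content-script.ts: illustrative sketch of the in-page flow (endpoint and field names are assumed)

interface ArticleAnalysis {
  credibilityScore: number;        // assumed 0-100 scale shown in the floating indicator
  persuasionTechniques: string[];  // e.g. "emotional framing", "selective reporting"
  summary: string;                 // short summary surfaced before the full analysis
}

// Pull the visible article text from the page (simplified heuristic).
function extractArticleText(): string {
  const article = document.querySelector<HTMLElement>("article") ?? document.body;
  return article.innerText.slice(0, 20000); // cap the payload size
}

// Ask a backend service for an AI-assisted analysis (placeholder URL).
async function analyzeCurrentArticle(): Promise<ArticleAnalysis> {
  const response = await fetch("https://example.invalid/api/analyze", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: location.href, text: extractArticleText() }),
  });
  if (!response.ok) throw new Error(`Analysis request failed: ${response.status}`);
  return response.json();
}

// Render only a subtle badge; the detailed view opens on demand.
function renderFloatingIndicator(analysis: ArticleAnalysis): void {
  const badge = document.createElement("div");
  badge.textContent = `Credibility: ${analysis.credibilityScore}/100`;
  badge.style.cssText = "position:fixed;bottom:16px;right:16px;padding:8px;background:#fff;border:1px solid #ccc;";
  document.body.append(badge);
}

analyzeCurrentArticle()
  .then(renderFloatingIndicator)
  .catch(() => { /* fail silently so browsing is never interrupted */ });
```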
Misinformation and digital manipulation are growing problems, and traditional media literacy training often struggles to reach people at scale, in real time, and in an engaging way.
Our project team—bringing expertise in HCI, UX, and cognitive science—recognizes that disinformation is more than just spreading falsehoods. Misinformation is often not an outright lie; instead, it relies on subtle framing, emotional triggers, and selective reporting to shape perception and influence decision-making.
This leads us to a key realization: fact-checking alone is not enough. People need critical thinking skills to interpret information effectively. Media literacy education could help bridge this gap, but it also faces its own challenges.
We knew that to overcome these challenges, we had to go beyond traditional approaches. DAPA needed to meet users where they are, providing real-time, AI-driven insights as they browse. Instead of simply labeling content as “true” or “false,” it had to be engaging and transparent, showing how content is designed to persuade. To make critical thinking more accessible, DAPA also had to reduce cognitive effort, helping users analyze information without feeling overwhelmed.
DAPA is a pilot study conducted in collaboration with European partners as a preliminary step before applying for EU funding. Its primary objective was to validate a new approach to media literacy—real-time, AI-assisted content evaluation integrated directly into users' browsing experiences.
To develop DAPA in a way that was both practical and scientifically rigorous, we had two key objectives: developing a functional product and ensuring research-backed validation, as required by EU-funded projects. To achieve this, we adopted Design Science Research (DSR), a well-established methodology that focuses on creating and evaluating innovative solutions while maintaining scientific rigor.
DSR follows an iterative process of designing, building, and testing solutions in real-world contexts to ensure both usability and impact. This approach allowed us to develop DAPA as a real, working tool while systematically testing its effectiveness and usability.
Before designing DAPA, we conducted in-depth research to understand how people consume and evaluate digital content, why misinformation spreads so effectively, and what cognitive and psychological biases shape media perception. Our research process included:
Our research revealed several critical behavioral insights that shaped DAPA’s design:
Based on our findings, we developed a set of UX and functional requirements for DAPA, guided by key design principles. The table below aligns practical UX solutions with user needs and implementation strategies.
DAPA's core functionality is built on the Media Deconstruction Framework, a well-established media literacy method used to break down and analyze messages. By automating this framework with AI, DAPA ensures a structured, step-by-step analysis, which then needed to be effectively translated into the user interface (UI).
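As an illustration of how this framework could be automated, the sketch below models the analysis as a fixed set of deconstruction questions the AI answers for each article. The question wording and field names are assumptions drawn from commonly used media-literacy key questions, not DAPA's published schema.

```typescript
// Hypothetical data model for an automated Media Deconstruction analysis.
// Question wording and field names are illustrative assumptions.

interface DeconstructionDimension {
  question: string;   // the framework question the AI answers
  finding: string;    // the AI's answer for the current article
  evidence: string[]; // passages from the article that support the finding
}

const DECONSTRUCTION_QUESTIONS: string[] = [
  "Who created this message, and for what purpose?",
  "What techniques are used to attract and hold attention?",
  "What values or points of view are represented, and which are omitted?",
  "How might different people interpret this message differently?",
];

interface DeconstructionReport {
  dimensions: DeconstructionDimension[]; // one entry per question, answered in order
  credibilityScore: number;              // aggregated score, assumed 0-100
}
```

Keeping the output in this shape preserves the framework's step-by-step structure and makes it straightforward to map each dimension onto a section of the UI.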
The primary challenge was not defining DAPA’s features—they were predetermined by this methodology—but rather determining how to present the information in a way that was effective, engaging, and non-intrusive.
Early UX Questions:
How do we alert users to biased content without being intrusive?
How much information should we show at first glance?
What level of AI transparency do users expect?
Early user testing of different interaction models revealed varied responses:
• Fact-checking labels that provided direct “true” or “false” verdicts were perceived as too aggressive and polarizing, reducing trust in the system.
• Highlighting manipulative techniques within the text was seen as useful and educational, but it was intrusive and distracted users from reading.
• Passive indicator systems, which placed subtle icons next to headlines to signal content credibility, offered a low-friction engagement method, though some users wanted more context.
The most effective approach was the layered information model, which used progressive disclosure—starting with a simple alert and allowing users to expand for more details when needed. This method was best received, as it gave users control over how much information they wanted to engage with.
Based on these insights, we developed the floating window UI, which presents main credibility metrics at a glance, with the option to open a full analysis on demand. This balanced engagement and usability, ensuring users could access critical insights without disrupting their browsing experience.
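One way to express this layered model in code is as a small disclosure state machine for the floating window. The state names below are an illustrative sketch under that assumption, not the production UI code.

```typescript
// Hypothetical state model for the floating window's progressive disclosure.
type DisclosureLevel = "indicator" | "summaryCard" | "fullAnalysis";

interface FloatingWindowState {
  level: DisclosureLevel;
}

// Users move one layer deeper only when they ask for it.
function expand(state: FloatingWindowState): FloatingWindowState {
  switch (state.level) {
    case "indicator":
      return { level: "summaryCard" };   // subtle alert -> key metrics at a glance
    case "summaryCard":
      return { level: "fullAnalysis" };  // metrics -> full on-demand analysis
    case "fullAnalysis":
      return state;                      // already fully expanded
  }
}

// A single action always returns to the least intrusive view.
function collapse(_state: FloatingWindowState): FloatingWindowState {
  return { level: "indicator" };
}
```

Modeling the UI this way keeps each additional layer of detail strictly opt-in, which is what made the approach feel non-intrusive in testing.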
After refining the core UX interactions, we built interactive prototypes and conducted usability tests with participants. One of the biggest challenges was selecting which aspects of the analysis would appear in the pop-up window and how users would interact with them.
To assess how users perceived and interacted with DAPA, we conducted a series of usability tests and feedback sessions, using:
User behavior during testing revealed that most participants skimmed articles rather than reading them in full, focusing primarily on headlines and summaries. DAPA’s credibility score played a significant role in engagement, with many users checking the evaluation first and only reading the article if the result contradicted their expectations. Additionally, users expressed a preference for a quick way to understand an article before fully engaging with it. These insights led to a major design update: the integration of a short article summary within the interface, enabling users to grasp key points quickly before deciding whether to read the full content.
Based on collected insights, we refined DAPA’s UI and interaction model to improve clarity, engagement, and user control.
The final design introduced a pop-up window that provided both graphical and descriptive grading, offering users a clear assessment of an article's factual accuracy, bias levels, and use of persuasive techniques. We also integrated a concise article summary, allowing users to quickly understand the key points before engaging with the full content.
For the Full Analysis Window, we implemented collapsible explanation sections with detailed analysis to make the information more digestible. This introduced an additional layer of progressive disclosure, ensuring that only key points were visible by default while allowing users to expand sections for more details if needed. Each section now includes an option to open a Sheets file containing the raw analysis data, giving users access to the full evaluation. During testing, users engaged with the raw data only once (if at all)—they were curious about how the analysis worked but did not revisit it. However, having access to this data was essential for maintaining transparency and trust in the system.
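A lightweight way to realize these collapsible sections in a browser extension is with native details/summary elements, as sketched below. The section fields and the raw-data link are illustrative assumptions rather than DAPA's actual markup.

```typescript
// Hypothetical rendering of one collapsible section in the Full Analysis Window.
// Only the one-line key point is visible until the user expands the section.

interface AnalysisSection {
  title: string;       // e.g. "Emotional framing"
  keyPoint: string;    // short takeaway shown by default
  details: string;     // full explanation, revealed on expand
  rawDataUrl?: string; // optional link to the raw analysis data (e.g. a shared spreadsheet)
}

function renderSection(section: AnalysisSection): HTMLElement {
  const container = document.createElement("details"); // collapsed by default
  const summary = document.createElement("summary");
  summary.textContent = `${section.title}: ${section.keyPoint}`;
  container.append(summary);

  const body = document.createElement("p");
  body.textContent = section.details;
  container.append(body);

  if (section.rawDataUrl) {
    const link = document.createElement("a");
    link.href = section.rawDataUrl;
    link.textContent = "Open raw analysis data";
    link.target = "_blank";
    container.append(link);
  }
  return container;
}
```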
To assess the effectiveness of DAPA, we conducted a controlled experimental study designed to evaluate improvements in misinformation detection and engagement compared to traditional media literacy interventions and no intervention at all.
We employed a mixed-methods approach, combining quantitative performance metrics with qualitative user feedback. The study consisted of 36 participants, divided into three groups:
Control Group: Browsed news normally, without any intervention.
Media Literacy Training Group: Completed a traditional 2-hour media literacy course.
DAPA User Group: Used the DAPA browser extension while browsing online content in real time.
The study followed a four-phase design to measure the impact of DAPA on misinformation detection and critical thinking skills:
1. Pre-Test: Participants' initial skills and attitudes were assessed using a set of baseline articles.
2. Training Session: The Media Literacy Training Group completed a 2-hour media literacy course, while the DAPA User Group received no formal training but used the tool during browsing.
3. Post-Test: Participants were presented with new articles to evaluate their ability to detect bias and misinformation immediately after intervention.
4. Follow-Up Test & Interviews: Three weeks later, participants were re-evaluated to measure skill retention, followed by qualitative interviews to explore their experiences and perceptions of the tool.
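For illustration only, the sketch below shows one way per-group improvement could be computed from these phases. The scoring scale and function names are assumptions, not the study's actual analysis code.

```typescript
// Hypothetical scoring sketch: relative improvement in detection accuracy per group.

interface ParticipantScores {
  preTest: number;   // detection accuracy before the intervention (assumed 0-1 scale)
  postTest: number;  // detection accuracy immediately after the intervention
  followUp: number;  // detection accuracy three weeks later (retention)
}

function mean(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Improvement of a group relative to its own pre-test baseline, in percent.
function relativeImprovement(group: ParticipantScores[], phase: "postTest" | "followUp"): number {
  const pre = mean(group.map((p) => p.preTest));
  const after = mean(group.map((p) => p[phase]));
  return ((after - pre) / pre) * 100;
}
```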
Participants using DAPA demonstrated a 30% increase in their ability to detect biased or misleading content compared to the control group. Real-time, AI-driven feedback produced larger gains than the traditional media literacy course, whose group improved by only 10%. Users exposed to real-time analysis also retained their critical thinking skills longer, as evidenced by the follow-up retention test.
Additionally, we found that this approach was particularly effective for highly polarized participants, helping them overcome political bias. Users with strong ideological leanings showed greater improvements in critical evaluation of content, suggesting that real-time, AI-driven feedback can mitigate bias more effectively than traditional media literacy training.
In addition to the controlled experiment, we conducted qualitative research to explore user perceptions of DAPA’s design, usability, and effectiveness. This study aimed to understand how participants interacted with the tool, how they perceived its credibility, and what features they found most useful.
One of the most striking findings was the extent to which participants perceived DAPA’s explanations as obvious and self-evident once they had seen them. This "I Knew It All Along" Effect led to increased trust in the system, as users felt that DAPA reaffirmed what they already knew rather than shaping their interpretations. However, in reality, DAPA’s evaluations played a subtle but significant role in influencing participant perspectives, even when they did not explicitly acknowledge it.
While the DAPA group actively engaged with credibility evaluations, the Media Literacy Training group exhibited passive participation. Many participants in this group—particularly older users—found the training materials redundant, reinforcing a slightly different version of the "I Already Knew That" Effect. Instead of perceiving DAPA’s evaluations as obvious, they viewed the training content itself as unnecessary, assuming they already possessed the required knowledge. As a result, their interaction with the materials was minimal, leading to a limited impact from the training.
Another key insight was the strong user preference for summary evaluations over detailed explanations. Many participants valued quick, digestible insights rather than in-depth reasoning or breakdowns. This preference further reinforced the "I Already Knew That" Effect, as users often skimmed through explanations, assuming they fully understood the evaluation without deeply engaging with the underlying reasoning.
The findings from this study provide strong evidence that AI-powered real-time media literacy education is a promising alternative to traditional media literacy programs. Further development will focus on refining engagement strategies, improving personalization, and expanding accessibility to reach a broader audience.
With DAPA’s pilot study completed and key insights gathered, the next phase focuses on scaling, refining, and expanding the tool’s impact. Our roadmap includes further development, broader testing, and securing funding for large-scale implementation.
To increase engagement and combat the “I Knew It All Along” Effect, we plan to introduce interactive learning elements, such as prediction-based interactions, gamified challenges, and microlearning interventions that encourage active critical thinking. At the same time, we are preparing to apply for EU funding, develop partnerships with fact-checking organizations and educators, and expand multilingual support to make DAPA more widely accessible.