DAPA: A UX-Driven Approach to Media Literacy Education

Pilot Study for the Digital Awareness and Protection Assistant (DAPA)

User Research
UX Design
UI Design
User Testing
Project description

A proof-of-concept study conducted before a grant application for future funding

platform
Browser extension
Role
UX Research
UX Design
Information Architecture
Wireframing and Prototyping
User Testing

About the project

The Digital Awareness and Protection Assistant (DAPA) is a browser extension that introduces an AI-powered approach to media literacy education. Unlike traditional lessons or workshops, DAPA provides real-time feedback on news articles, helping users identify bias, persuasion techniques, and credibility issues as they browse.

This interactive approach is commonly used in fact-checking tools, but DAPA takes a different path. Instead of telling users what is true or false, it helps them understand how content is designed to influence them. By integrating Human-Computer Interaction and UX principles, DAPA makes media literacy engaging, contextual, and non-intrusive—so users can learn naturally while browsing.
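To make this interaction model concrete, the sketch below shows how a browser-extension content script could hand an article to an analysis backend and surface a passive indicator instead of a verdict. It is a minimal TypeScript illustration, assuming a Chrome Manifest V3 extension; the message shape, field names, and helper functions (extractArticleText, analyzeArticle, renderIndicator) are assumptions made for this sketch, not DAPA's actual implementation.

```typescript
// Minimal content-script sketch (Chrome Manifest V3 assumed, @types/chrome for typings).
// Field names and the message shape are illustrative assumptions, not DAPA's real API.

interface AnalysisResult {
  credibilityScore: number;        // 0-100 aggregate score shown at a glance
  persuasionTechniques: string[];  // e.g. ["emotional framing", "selective reporting"]
  summary: string;                 // short explanation revealed on demand
}

// Grab the visible article text from the current page.
function extractArticleText(): string {
  const article = document.querySelector("article") ?? document.body;
  return (article as HTMLElement).innerText.slice(0, 20000); // cap payload size
}

// Ask the extension's background worker (which calls the AI service) for an analysis.
function analyzeArticle(text: string): Promise<AnalysisResult> {
  return chrome.runtime.sendMessage({ type: "ANALYZE_ARTICLE", text });
}

// Show a small, passive badge instead of an intrusive true/false verdict.
function renderIndicator(result: AnalysisResult): void {
  const badge = document.createElement("div");
  badge.textContent = `Credibility: ${result.credibilityScore}/100`;
  badge.title = result.summary; // more context only when the user hovers or clicks
  badge.style.cssText =
    "position:fixed;bottom:16px;right:16px;padding:8px 12px;" +
    "background:#fff;border:1px solid #ccc;border-radius:8px;z-index:9999;";
  document.body.appendChild(badge);
}

const articleText = extractArticleText();
if (articleText.trim().length > 0) {
  analyzeArticle(articleText).then(renderIndicator);
}
```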

Background

Misinformation and digital manipulation are growing problems, and traditional media literacy training often struggles to reach people at scale, in real-time, and in an engaging way.

Our project team, bringing expertise in HCI, UX, and cognitive science, recognized that disinformation is more than just spreading falsehoods. Misinformation is often not an outright lie; instead, it relies on subtle framing, emotional triggers, and selective reporting to shape perception and influence decision-making.

This led us to a key realization: fact-checking alone is not enough. People need critical thinking skills to interpret information effectively. Media literacy education could help bridge this gap, but it faces challenges of its own.

Why Traditional Media Literacy Education Falls Short
Passive Learning Isn’t Enough: People struggle to retain fact-checking lessons when passively consuming content. Without real-time application, the impact of media literacy education fades quickly.
Emotional Reactions Shape Perception: Information is often processed emotionally rather than rationally. Quick emotional responses can override critical thinking, making it harder for people to evaluate content objectively.
Trust Issues: Many users distrust fact-checking tools, perceiving them as politically biased. The same issue affects media literacy programs, as users often see political agendas in educational curricula. This skepticism limits the effectiveness of traditional approaches.

We knew that to overcome these challenges, we had to go beyond traditional approaches. DAPA needed to meet users where they are, providing real-time, AI-driven insights as they browse. Instead of simply labeling content as “true” or “false,” it had to be engaging and transparent, showing how content is designed to persuade. To make critical thinking more accessible, DAPA also had to reduce cognitive effort, helping users analyze information without feeling overwhelmed.

The Goal of the Pilot Study

The DAPA pilot study was conducted in collaboration with European partners as a preliminary step before applying for EU funding. Its primary objective was to validate a new approach to media literacy—real-time, AI-assisted content evaluation integrated directly into users' browsing experiences.

The study had two key objectives:

Validate the design, effectiveness, and usability of DAPA before moving to full-scale development.
Gather empirical evidence to support funding applications for full-scale deployment within an EU-funded media literacy program.

Methodology: A User-Centered Approach

To develop DAPA in a way that was both practical and scientifically rigorous, we pursued two parallel goals: building a functional product and providing the research-backed validation required of EU-funded projects. To achieve this, we adopted Design Science Research (DSR), a well-established methodology that focuses on creating and evaluating innovative solutions while maintaining scientific rigor.

DSR follows an iterative process of designing, building, and testing solutions in real-world contexts to ensure both usability and impact. This approach allowed us to develop DAPA as a real, working tool while systematically testing its effectiveness and usability.

Research & Discovery

Before designing DAPA, we conducted in-depth research to understand how people consume and evaluate digital content, why misinformation spreads so effectively, and what cognitive and psychological biases shape media perception. Our research process included:

Literature Review
• Cognitive Science Research – Examining how cognitive biases, trust, and emotions influence how users judge content credibility.
• HCI & UX Research – Analyzing strategies and methodologies from Human-Computer Interaction (HCI) literature on countering disinformation.

Quantitative & Qualitative User Research
Triangulated insights from multiple research methods:
• Online survey to gather broad user perspectives.
• 8 semi-structured interviews for deeper individual insights.
• Focus groups and usability testing with users of different ages, political orientations, and media consumption habits.

Competitive Analysis
Evaluating existing media literacy and misinformation detection tools to understand their strengths, limitations, and why users engage or disengage with them.

First Findings

Our research revealed several critical behavioral insights that shaped DAPA’s design:

Cultural and Social Influences Shape Media Perception
Not surprisingly, users trusted their personal networks more than institutional fact-checking. We also found that users relied on media sources in their native language, even when they were aware of those sources' biases.
💡 UX Solution:
DAPA needed to be culturally sensitive and adaptable to different media ecosystems and user backgrounds.
Information Overload Leads to Cynical Disengagement
Users exposed to excessive, conflicting information often gave up on seeking objective truth. Many felt that misinformation was inevitable, and mistrust in mainstream media was high: all sources were perceived as inherently biased.
💡 UX Solution:
DAPA should provide clear, concise, and actionable insights to prevent cognitive overload. Instead of overwhelming users, it should focus on real-time education and small, manageable learning moments.
Limited Interest in Traditional Media Literacy Training
Most participants saw media literacy training as unnecessary or irrelevant, and none expressed interest in attending a traditional course.
💡 UX Solution:
DAPA should integrate media literacy into users’ daily habits rather than requiring a separate learning module. Learning should be contextual, real-time, and interactive.
Skepticism Toward AI-Driven Media Analysis
Many users distrusted AI-based fact-checking due to concerns about bias and complexity. They were also vulnerable to echo chambers and social media influence, which could further reinforce skepticism.
💡 UX Solution:
Ensure transparency—DAPA should explain AI decisions rather than making black-box recommendations.

User Needs & Design Requirements

Based on our findings, we developed a set of UX and functional requirements for DAPA, guided by key design principles. The table below summarizes the user feedback that informed these requirements.

Category | Feedback Summary | # of Users
Feed Creation | Users found the process clear and easy to understand. | 5 out of 6
Terminology | Terms like “feed” and “request” caused confusion. | 4 out of 6
Guided Task Creation | Felt intuitive and reduced user hesitation. | 5 out of 6
Visual Confirmation | Users wanted clearer confirmation that actions (e.g., claiming a task) were successful. | 3 out of 6
Tagging and Categorization | Tagging structure made sense and aligned with familiar workflows. | 5 out of 6
Setting Tags | Some users needed time to understand how setting filters works. | 3 out of 6
Reliability | Users expressed concern about missing important tasks when filters were applied. | 2 out of 6
Usefulness | Believed the tool would make their work more efficient. | 6 out of 6

From Research to UX: Early Prototyping & Testing

DAPA's core functionality is built on the Media Deconstruction Framework, a well-established media literacy method used to break down and analyze messages. By automating this framework with AI, DAPA ensures a structured, step-by-step analysis, which then needed to be effectively translated into the user interface (UI).

The primary challenge was not defining DAPA’s features—they were predetermined by this methodology—but rather determining how to present the information in a way that was effective, engaging, and non-intrusive.
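As a rough illustration of what automating this framework can yield, the analysis can be represented as a structured object: one entry per deconstruction step, plus the aggregate metrics the interface surfaces first. This is a sketch only; the field names and the example question are assumptions, not the framework's canonical wording or DAPA's internal data model.

```typescript
// Sketch of a structured result produced by automating the Media Deconstruction
// Framework. Field names and the example question are illustrative assumptions.

interface DeconstructionStep {
  question: string;                       // e.g. "Who created this message?"
  finding: string;                        // the AI's answer for this article
  evidence: string[];                     // supporting quotes from the text
  confidence: "low" | "medium" | "high";  // how certain the model is
}

interface MediaDeconstruction {
  url: string;
  credibilityScore: number;               // aggregate metric shown at a glance
  biasLevel: "low" | "moderate" | "high";
  persuasionTechniques: string[];         // e.g. ["emotional framing"]
  steps: DeconstructionStep[];            // the full step-by-step analysis
}
```

Keeping the result structured this way is what allows the UI to show only the aggregate metrics by default and reveal individual steps on demand, which is exactly what the questions below address.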
Early UX Questions:

How do we alert users to biased content without being intrusive?

How much information should we show at first glance?

What level of AI transparency do users expect?

Early user testing of different interaction models revealed varied responses:

• Fact-checking labels that provided direct “true” or “false” verdicts were perceived as too aggressive and polarizing, reducing trust in the system.
• Highlighting manipulative techniques within the text was seen as useful and educational, but it was intrusive and distracted users from reading.
• Passive indicator systems, which placed subtle icons next to headlines to signal content credibility, offered a low-friction engagement method, though some users wanted more context.

The most effective approach was the layered information model, which used progressive disclosure—starting with a simple alert and allowing users to expand for more details when needed. This method was best received, as it gave users control over how much information they wanted to engage with.

Based on these insights, we developed the floating window UI, which presents main credibility metrics at a glance, with the option to open a full analysis on demand. This balanced engagement and usability, ensuring users could access critical insights without disrupting their browsing experience.
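The sketch below illustrates that layered model in code, reusing the MediaDeconstruction shape from the earlier sketch. The three levels and what each one exposes are assumptions chosen to mirror the description above, not the shipped component.

```typescript
// Illustrative progressive-disclosure logic for the floating window.
// Level 1: passive indicator; Level 2: floating window with main metrics;
// Level 3: full analysis. The level split is an assumption for this sketch.

type DisclosureLevel = 1 | 2 | 3;

// Each click expands exactly one level, so the user controls how deep they go.
function expand(level: DisclosureLevel): DisclosureLevel {
  return Math.min(level + 1, 3) as DisclosureLevel;
}

// Decide what the UI may show at the current level (MediaDeconstruction as sketched earlier).
function visibleContent(level: DisclosureLevel, analysis: MediaDeconstruction): string[] {
  const lines = [`Credibility: ${analysis.credibilityScore}/100`]; // always visible
  if (level >= 2) {
    lines.push(
      `Bias level: ${analysis.biasLevel}`,
      `Techniques: ${analysis.persuasionTechniques.join(", ")}`,
    );
  }
  if (level >= 3) {
    lines.push(...analysis.steps.map((s) => `${s.question} ${s.finding}`));
  }
  return lines;
}
```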

Iterative Design & User Testing

After refining the core UX interactions, we built interactive prototypes and conducted usability tests with participants. One of the biggest challenges was selecting which aspects of the analysis would appear in the pop-up window and how users would interact with them.

To assess how users perceived and interacted with DAPA, we conducted a series of usability tests and feedback sessions, using:

Real-Time Observation

Monitoring how participants navigated DAPA during think-aloud sessions to track usability issues and initial reactions.

Follow-Up Semi-Structured Interviews

Conducting post-test interviews to gather in-depth feedback on:
• Clarity of grading and explanations.
• Trust in the AI’s evaluations.
• Challenges faced while using the tool.
• Most and least useful features identified by users.
• Suggestions for improvement.

Key User Feedback Summary

User behavior during testing revealed that most participants skimmed articles rather than reading them in full, focusing primarily on headlines and summaries. DAPA’s credibility score played a significant role in engagement, with many users checking the evaluation first and only reading the article if the result contradicted their expectations. Additionally, users expressed a preference for a quick way to understand an article before fully engaging with it. These insights led to a major design update: the integration of a short article summary within the interface, enabling users to grasp key points quickly before deciding whether to read the full content.

Based on collected insights, we refined DAPA’s UI and interaction model to improve clarity, engagement, and user control.

The final design introduced a pop-up window that provided both graphical and descriptive grading, offering users a clear assessment of an article’s factual accuracy, bias levels, and use of persuasive techniques. We also integrated a concise article summary, allowing users to quickly understand the key points before engaging with the full content.

For the Full Analysis Window, we implemented collapsible explanation sections with detailed analysis to make the information more digestible. This introduced an additional layer of progressive disclosure, ensuring that only key points were visible by default while allowing users to expand sections for more details if needed. Each section now includes an option to open a Sheets file containing the raw analysis data, giving users access to the full evaluation. During testing, users engaged with the raw data only once (if at all)—they were curious about how the analysis worked but did not revisit it. However, having access to this data was essential for maintaining transparency and trust in the system.

Experimental Evaluation

To assess the effectiveness of DAPA, we conducted a controlled experimental study designed to evaluate improvements in misinformation detection and engagement compared to traditional media literacy interventions and no intervention at all.

We employed a mixed-methods approach, combining quantitative performance metrics with qualitative user feedback. The study consisted of 36 participants, divided into three groups:

Control Group: Browsed news normally, without any intervention.

Media Literacy Training Group: Completed a traditional 2-hour media literacy course.

DAPA User Group: Used the DAPA browser extension while browsing online content in real-time.

The study followed a four-phase design to measure the impact of DAPA on misinformation detection and critical thinking skills:

1. Pre-Test: Participants' initial skills and attitudes were assessed using a set of baseline articles.
2. Training Session: The Media Literacy Training Group completed a 2-hour media literacy course, while the DAPA User Group received no formal training but used the tool during browsing.
3. Post-Test: Participants were presented with new articles to evaluate their ability to detect bias and misinformation immediately after intervention.
4. Follow-Up Test & Interviews: Three weeks later, participants were re-evaluated to measure skill retention, followed by qualitative interviews to explore their experiences and perceptions of the tool.

Key Findings

Participants using DAPA demonstrated a 30% increase in their ability to detect biased or misleading content compared to the control group, while the traditional media literacy training group improved by only 10%, indicating that real-time, AI-driven feedback was more effective. Users exposed to real-time analysis also retained their critical thinking skills longer, as evidenced by the follow-up retention test.

Additionally, we found that this approach was particularly effective for highly polarized participants, helping them overcome political bias. Users with strong ideological leanings showed greater improvements in critical evaluation of content, suggesting that real-time, AI-driven feedback can mitigate bias more effectively than traditional media literacy training.

Qualitative Study: User Experience & Perception

In addition to the controlled experiment, we conducted qualitative research to explore user perceptions of DAPA’s design, usability, and effectiveness. This study aimed to understand how participants interacted with the tool, how they perceived its credibility, and what features they found most useful.

Research Approach
Think-Aloud Protocols – Participants verbalized their thoughts while interacting with the tool.
Post-Intervention Interviews – Focused on user trust, engagement, and feature preferences.
Screen Recording & Behavioral Analysis – Tracked real-time interactions with the interface.

Key Findings: "I Knew It All Along" Effect

One of the most striking findings was the extent to which participants perceived DAPA’s explanations as obvious and self-evident once they had seen them. This "I Knew It All Along" Effect led to increased trust in the system, as users felt that DAPA reaffirmed what they already knew rather than shaping their interpretations. However, in reality, DAPA’s evaluations played a subtle but significant role in influencing participant perspectives, even when they did not explicitly acknowledge it.

While the DAPA group actively engaged with credibility evaluations, the Media Literacy Training group exhibited passive participation. Many participants in this group—particularly older users—found the training materials redundant, reinforcing a slightly different version of the "I Already Knew That" Effect. Instead of perceiving DAPA’s evaluations as obvious, they viewed the training content itself as unnecessary, assuming they already possessed the required knowledge. As a result, their interaction with the materials was minimal, leading to a limited impact from the training.

Another key insight was the strong user preference for summary evaluations over detailed explanations. Many participants valued quick, digestible insights rather than in-depth reasoning or breakdowns. This preference further reinforced the "I Already Knew That" Effect, as users often skimmed through explanations, assuming they fully understood the evaluation without deeply engaging with the underlying reasoning.

Next steps

The findings from this study provide strong evidence that AI-powered real-time media literacy education is a promising alternative to traditional media literacy programs. Further development will focus on refining engagement strategies, improving personalization, and expanding accessibility to reach a broader audience.

With DAPA’s pilot study completed and key insights gathered, the next phase focuses on scaling, refining, and expanding the tool’s impact. Our roadmap includes further development, broader testing, and securing funding for large-scale implementation.

To increase engagement and combat the “I Knew It All Along” Effect, we plan to introduce interactive learning elements, such as prediction-based interactions, gamified challenges, and microlearning interventions that encourage active critical thinking. At the same time, we are preparing to apply for EU funding, develop partnerships with fact-checking organizations and educators, and expand multilingual support to make DAPA more widely accessible.

Watch the video