For this project we looked at the issue of false information online and how digital design could help people recognise it more easily. Our research focused on the difference between misinformation and disinformation, the groups most affected by it, and digital interventions that could reduce its spread.

Although the two terms are often used together, they are not the same: misinformation is false or inaccurate content shared without the intent to deceive, while disinformation is deliberately created and spread in order to mislead.
This distinction is important because both can be damaging, even though the intent behind them is different.
False and misleading content spreads very quickly online. Social media makes it possible for a single post, video or headline to reach huge numbers of people in a very short time. Because people are exposed to so much content every day, it can be difficult to tell what is genuine and what is not.
This creates several problems. It can confuse users, reduce trust in reliable information and lead people to make poor or harmful decisions. As misleading content becomes more common and easier to share, there is a growing need for better ways to identify it and warn users before it spreads further.
Older adults may be more at risk because they can be less confident with digital tools or less familiar with checking whether a source is trustworthy. Younger people are also affected because they often spend a large amount of time online and are constantly exposed to fast-moving content. People with low digital literacy can struggle to evaluate what they are seeing, and communities experiencing crisis situations can be especially vulnerable because urgent or emotional circumstances make misleading content easier to believe.
During ideation, we explored a range of ways digital products might respond to this problem.
One idea was deepfake detection tools, which could help users identify edited or AI-generated media. Another was clickbait detection, aimed at spotting exaggerated or misleading headlines (a small sketch of how such a check might look appears after these directions). We also considered reporting systems and stronger platform regulation, which could allow suspicious content to be flagged more effectively.
Other directions included content labels or tags that highlight questionable posts, trusted information hubs that gather verified sources in one place, and AI-based detection systems that could scan for potentially false or misleading material on social platforms. We also looked at browser extensions or plugins that support users while they browse, as well as educational tools such as lessons or short courses that teach people how to recognise misinformation more confidently.
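To make the clickbait-detection direction more concrete, the sketch below shows one way a simple rule-based headline check could work. It is only illustrative: the phrase patterns, the style cues and the threshold are all hypothetical examples we chose for this sketch, and a real system would need a trained model and proper evaluation rather than a hand-written list.

```typescript
// Minimal, illustrative clickbait heuristic. The patterns, cues and
// threshold below are hypothetical examples, not a validated model.

const CLICKBAIT_PATTERNS: RegExp[] = [
  /you won'?t believe/i,
  /\bshocking\b/i,
  /doctors hate (him|her|them)/i,
  /number \d+ will/i,
  /what happened next/i,
];

interface HeadlineScore {
  headline: string;
  score: number;    // 0..1, based on matched patterns and style cues
  flagged: boolean; // true when the score crosses the (hypothetical) threshold
}

function scoreHeadline(headline: string, threshold = 0.3): HeadlineScore {
  let hits = 0;
  for (const pattern of CLICKBAIT_PATTERNS) {
    if (pattern.test(headline)) hits += 1;
  }
  // Style cues: repeated punctuation and all-caps words often accompany clickbait.
  if (/!{2,}/.test(headline)) hits += 1;
  if (/\b[A-Z]{4,}\b/.test(headline)) hits += 1;

  const score = Math.min(1, hits / CLICKBAIT_PATTERNS.length);
  return { headline, score, flagged: score >= threshold };
}

// Example usage:
console.log(scoreHeadline("You won't believe what happened next!!"));
// -> flagged: true (two phrase patterns plus the punctuation cue)
```

Even this crude version shows the trade-off we discussed during ideation: simple rules are transparent and easy to explain to users, but they miss subtle cases and can be gamed, which is why AI-based detection appeared alongside it as a separate direction.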
Our concept focused on making warnings visible at the moment a user encounters suspicious content. Instead of expecting people to investigate every post on their own, the interface would help by flagging potentially misleading information and showing an explanation.
For example, a social media post could display a visible warning label if the content appears unreliable. The user would then be able to open more detail, such as why the post was flagged, what evidence supports the warning, and whether the source is considered trustworthy. This gives users a clearer basis for deciding whether to believe, ignore or report the content.
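To show how this concept could be wired up, here is a minimal sketch of the warning flow as a React component in TypeScript. The `Flag` shape, the field names and the on-screen copy are assumptions made for this sketch; in a real product the flag data would come from whatever detection or fact-checking service the platform uses.

```tsx
import { useState } from "react";

// Hypothetical shape of a flag attached to a post by a detection service.
interface Flag {
  reason: string;         // why the post was flagged, in plain language
  evidence: string[];     // supporting evidence, e.g. fact-check links
  sourceTrusted: boolean; // whether the source is considered trustworthy
}

interface PostProps {
  author: string;
  body: string;
  flag?: Flag; // absent when the post is not considered suspicious
}

// A post that shows a visible warning label when flagged, with an
// expandable panel explaining why, so users can judge for themselves.
export function Post({ author, body, flag }: PostProps) {
  const [showDetail, setShowDetail] = useState(false);

  return (
    <article>
      <header>{author}</header>
      <p>{body}</p>
      {flag && (
        <aside role="alert">
          <strong>⚠ This post may be misleading.</strong>
          <button onClick={() => setShowDetail((open) => !open)}>
            {showDetail ? "Hide details" : "Why was this flagged?"}
          </button>
          {showDetail && (
            <div>
              <p>{flag.reason}</p>
              <ul>
                {flag.evidence.map((item) => (
                  <li key={item}>{item}</li>
                ))}
              </ul>
              <p>Source considered trustworthy: {flag.sourceTrusted ? "yes" : "no"}</p>
            </div>
          )}
        </aside>
      )}
    </article>
  );
}
```

Keeping the warning collapsed by default reflects the design intent described above: the label is visible at the moment of encounter, but the explanation and evidence only appear when the user chooses to look deeper, rather than expecting them to investigate every post on their own.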