Think online ads are harmless? They could be revealing your private life
UNSW Sydney
A new study has uncovered a significant and largely invisible privacy risk in the online advertising ecosystem: the ads you see may be enough to reveal sensitive personal information.
Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ARC ADM+S) at UNSW Sydney and QUT have demonstrated that artificial intelligence can infer personal attributes, including political preferences, education level, and employment status, based solely on the advertisements a person is shown online.
The study analysed more than 435,000 Facebook ads seen by 891 Australian users, collected through the Australian Ad Observatory project — a signature project of the ARC ADM+S.
Using advanced large language models (LLMs), researchers found that:
- Personal traits could be inferred without access to browsing history or personal data
- Profiles could be built from short browsing sessions
- AI systems matched and sometimes exceeded human ability to infer personal characteristics
- The process was over 200 times cheaper and 50 times faster than human analysis
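The approach described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual pipeline: the attribute list and prompt wording are assumptions, and the resulting prompt would be sent to an off-the-shelf LLM for classification.

```python
# Minimal sketch of attribute inference from an ad stream.
# Illustrative only: the prompt wording and attribute list below are
# assumptions, not the study's actual setup.

ATTRIBUTES = ["gender", "age range", "education level",
              "employment status", "political preference"]

def build_profile_prompt(ad_texts: list[str]) -> str:
    """Aggregate the ads one user was shown into a single classification prompt."""
    ads = "\n".join(f"- {t}" for t in ad_texts)
    attrs = ", ".join(ATTRIBUTES)
    return (
        "The following advertisements were shown to one social media user:\n"
        f"{ads}\n"
        f"Based only on these ads, infer the user's likely: {attrs}.\n"
        "Answer with a short value for each attribute."
    )

# Example usage: the prompt would then be sent to an off-the-shelf LLM,
# which returns its best guess for each attribute.
prompt = build_profile_prompt([
    "Graduate job fair this weekend",
    "Retirement savings calculator",
])
```

The key point the study makes is that no per-user model training is needed: a general-purpose LLM, given only the ad texts, can produce these inferences directly.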
In a paper presented at the ACM Web Conference 2026, the researchers say: “Our results demonstrate that off-the-shelf LLMs can accurately reconstruct complex user private attributes.
“Critically, actionable profiling is feasible even within short observation windows, indicating that prolonged tracking is not a prerequisite for a successful attack.”
Lead author Baiyu Chen, from UNSW, said the findings challenge common assumptions about online privacy.
“The key point is that the ads a person sees are not random. Advertising systems optimise delivery based on inferred profiles and behaviours, so the overall pattern of ads shown to a user can carry signals about traits such as gender, age, education, employment status, political preference, and broader socioeconomic position,” Chen said.
“Our study shows that LLMs can analyse those patterns and infer private attributes from ad exposure alone.
“These findings provide the first empirical evidence that ad streams serve as a high-fidelity digital footprint, enabling off-platform profiling that inherently bypasses current platform safeguards, highlighting a systemic vulnerability in the ad ecosystem and the urgent need for responsible web AI governance in the generative AI era.
“This work reveals a critical blind spot in Web privacy: the latent leakage of user private attributes through passive exposure to algorithmic advertising.”
A critical blind spot in privacy
By using AI to analyse ad content, the researchers – including Professor Flora Salim, Professor Daniel Angus, Dr Benjamin Tag and Dr Hao Xue – show that streams of ads act like highly detailed digital fingerprints, allowing private attributes to be reconstructed with an accuracy that often matches or even exceeds human judgement.
Crucially, the research shows this is not a theoretical risk. Profiles can be built quickly and at scale, even from short browsing sessions, and without long-term tracking. Even when predictions are not exact, they are often close enough to reveal meaningful insights about a person’s life stage or financial situation.
How it could be exploited
While major platforms have restricted advertisers from targeting sensitive categories, the study shows that algorithmic ad delivery still encodes these traits indirectly and that this information can now be extracted using widely available AI tools.
This creates a new form of privacy risk where:
- Users do not actively share information
- No hacking or platform-side access is required
- Profiling can happen outside platform oversight
The researchers warn that everyday tools such as browser extensions could be repurposed to quietly collect ads and build detailed user profiles — bypassing platform safeguards and leaving little trace.
In the paper, they say: “We identify browser extensions that abuse legitimate privileges as the potential primary vector for this attack. This scenario is severe due to its inherent stealth and scalability.
“Rather than distributing specialised malware, an adversary can opportunistically deploy this attack within the existing ecosystem of widely installed, benign functioning extensions, such as ad blockers, coupon finders, or page translators.
“These extensions legitimately require permissions to read web page content to function, providing a perfect cover for data harvesting.”
Implications for policy and regulation
The findings suggest current privacy protections may not go far enough.
As AI tools make this kind of analysis easier and more accessible, the researchers argue that regulation must evolve to address not just data collection, but what can be inferred from the content people are exposed to.
Addressing this risk will require rethinking privacy frameworks to account for the hidden signals embedded in everyday online experiences — including the ads users passively consume.
“In terms of protection, users can reduce the risk by being cautious with browser extensions, limiting unnecessary permissions, and using available privacy and ad-personalisation settings,” said Chen.
“However, this is not something users can fully solve on their own, because the broader issue is systemic: people cannot easily opt out of the ad ecosystem altogether, so stronger platform safeguards are also needed.”
About the research
The study draws on data from the Australian Ad Observatory, a citizen science initiative that collects ads seen by everyday users. It represents one of the largest real-world investigations into how AI can infer personal information from online advertising.
The research, titled “When Ads Become Profiles: Uncovering the Invisible Risk of Web Advertising at Scale with LLMs,” will be presented at the ACM Web Conference 2026.
Contact details:
For enquiries about this story and interview requests, please contact Neil Martin, News & Content Coordinator.
Email: [email protected]