AI Allows Hackers To Identify Anonymous Social Media Accounts, Study Finds
Source: Slashdot
Overview
An anonymous reader quotes a report from The Guardian: AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned. In most test scenarios, large language models (LLMs)—the technology behind platforms such as ChatGPT—successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted. The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost‑effective to perform sophisticated privacy attacks, forcing a “fundamental reassessment of what can be considered private online”.
In their experiment, the researchers fed anonymous accounts into an AI and instructed it to scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school and walking their dog Biscuit through a “Dolores park.” In that fictional case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence. While the example was illustrative, the paper’s authors highlighted real‑world scenarios in which governments could use AI to surveil dissidents and activists posting anonymously, or hackers could launch “highly personalized” scams.
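The matching described above is, at its core, a linkage attack: distinctive details (a pet's name, a neighborhood park) posted on one platform are cross-referenced against candidate profiles elsewhere. The study used LLMs for this; the following is only a minimal keyword-overlap sketch of the underlying idea, with entirely hypothetical account names and data, not the researchers' method.

```python
# Hypothetical sketch of a linkage attack: score candidate identities by how
# many distinctive tokens their posts share with an anonymous account's posts.
# The actual study used LLMs; this keyword heuristic is only illustrative.
import re
from collections import Counter

def distinctive_tokens(posts, min_len=4):
    """Count lowercase tokens of at least min_len characters (e.g. place or pet names)."""
    tokens = Counter()
    for post in posts:
        tokens.update(t for t in re.findall(r"[a-z]+", post.lower()) if len(t) >= min_len)
    return tokens

def link_score(anon_posts, candidate_posts):
    """Overlap of distinctive tokens between two accounts' post histories."""
    a, b = distinctive_tokens(anon_posts), distinctive_tokens(candidate_posts)
    return sum(min(a[t], b[t]) for t in set(a) & set(b))

def best_match(anon_posts, candidates):
    """Return the candidate name whose posts overlap most with the anonymous account."""
    return max(candidates, key=lambda name: link_score(anon_posts, candidates[name]))

# Toy data echoing the article's fictional example: school struggles,
# a dog named Biscuit, walks in Dolores Park. All names are invented.
anon = ["Struggling at school again", "Walked Biscuit through Dolores Park today"]
known = {
    "jane_doe": ["My dog Biscuit loves Dolores Park", "School is rough this term"],
    "john_roe": ["Great pizza downtown", "Watching the game tonight"],
}
print(best_match(anon, known))  # → jane_doe
```

Even this naive heuristic links the accounts from a handful of shared details, which is the point the researchers make: LLMs automate far more sophisticated versions of this matching at scale.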