
Combating Fake Accounts with World
Trust is eroding and consumer experiences are worsening online as “AI slop” takes over our timelines.
On social media, gaming platforms and online marketplaces, it’s increasingly hard to tell what or whom you’re interacting with. Accounts could represent real people…or bots…or one of many personas controlled by the same actor. Their posts, replies and engagement might look real, but we’re all seeing much more artificial content every day.
Why? AI bots have entered the chat.
In the past, bots were often seen the same way as spam emails: annoying, but something we could filter out and ignore. With AI in the mix, bots have become exponentially better at looking and sounding human. They produce realistic content, engage in conversation and can fool people into accepting their friend requests.
Whether they’re steering your attention to low-quality websites covered in ads, trying to trick you into sharing private information or selling you the latest must-have gadget, the data shows that more and more people mistake these bots for people they can trust.
A recent World-commissioned survey of 2,000 American adults found that:
- Over three-quarters of respondents say the internet has “never been worse” in terms of differentiating between real and artificial content
- Respondents believe half of the news stories and articles they see online feature some element of AI
- Only 30% of respondents could tell the difference between business reviews written by humans and those written by AI
Why Current Solutions Don’t Work
When it comes to combating fake accounts in the age of AI, platforms are fighting a losing battle. Automated tools make it easy to create email addresses in bulk and then use them to spawn unlimited spam accounts. And as AI grows more capable, these bots behave more and more like humans, making them hard for platforms to detect and shut down.
This is more than a content moderation problem or a platform policy decision. It’s a systemic issue affecting social trust, civic discourse, consumer rights and regulatory compliance. And the current infrastructure isn’t built to handle what’s coming next.
Without new tools to verify authenticity online, governments will default to blunt regulatory instruments, often with unintended consequences, like over-collection of personal data (which can paradoxically exacerbate the problem by giving AI bots more data to train on), exclusion of underserved populations or national fragmentation of the internet.
A Better Solution: World ID
Online platforms don’t have to check government IDs or integrate KYC to keep bots out; they don’t even need to know who you are. And since AI can solve CAPTCHAs and forge IDs with ease, those approaches wouldn’t work anyway. Instead, platforms can integrate World ID into their sign-in systems, just as Razer has done for video games.
World ID takes a fundamentally different approach from the solutions currently available. It allows platforms to verify that an individual is a real, unique person, without collecting personal data, requiring KYC-style onboarding or subjecting users to high-friction experiences like CAPTCHAs.
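For developers, here is a sketch of what that integration can look like. The TypeScript snippet below verifies a World ID proof on the server after a user completes the sign-in widget; it assumes World’s hosted verification endpoint and the proof payload produced by the IDKit widget, and the APP_ID and ACTION values are placeholders for your own developer portal configuration.

```ts
// Minimal sketch: server-side check of a World ID proof during sign-in.
// Assumes World's hosted verify endpoint and the proof payload returned by
// the IDKit widget; APP_ID and ACTION are placeholders.

type WorldIDProof = {
  proof: string;
  merkle_root: string;
  nullifier_hash: string; // stable per person per action; contains no personal data
  verification_level: string;
};

const APP_ID = "app_xxxxxxxx"; // placeholder: your app ID from the developer portal
const ACTION = "sign-in";      // placeholder: the action configured for this flow

async function isUniqueHuman(p: WorldIDProof): Promise<boolean> {
  const res = await fetch(`https://developer.worldcoin.org/api/v2/verify/${APP_ID}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...p, action: ACTION }),
  });
  return res.ok; // a 200 response means the zero-knowledge proof checked out
}
```

Because the nullifier_hash stays the same whenever the same person verifies for the same action, a platform can enforce one account per human simply by storing that hash, without ever learning who the person is.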
Implementing World ID on online platforms enables:
- Human-to-human interactions: Consumers can trust that the accounts they interact with are controlled by actual humans.
- More trust in platforms: Online platforms can get back to facilitating human interactions, which also helps restore trust in their platform economies.
- Reduced risk to personal data: Individuals don’t need to share personal data, so platforms don’t have to worry about losing it.
- AI-resistance: World ID offers a human-centered approach to authentication that AI just can’t fake.
Today, people are navigating online spaces that feel less safe, less real and less fair. World ID offers a smarter, more human approach: restoring trust online and helping platforms stay open and safe in an AI-driven world.