
The business models of digital platforms are undermined by bots, fake accounts, and deepfakes, and the problem is rapidly accelerating. World’s proof of human technology gives platforms a privacy-preserving way to restore trust online by ensuring that their users are unique humans. This is expected to significantly increase a platform’s user base and per-user revenue due to better product experience and user retention, lower user acquisition costs, higher ad revenue, and a competitive advantage over other platforms. The new World ID 4.0 allows credential issuers and the World protocol to monetize the use of proof of human technologies via World ID fees. Specifically, the expected model would have applications using World ID paying fees, while keeping protocol usage free for end-users. The affected industries, including social media, dating, ticketing, and banking, serve billions of users and generate trillions of dollars in annual revenue, offering significant revenue potential for World ID.
How AI is eroding the foundations of the internet
Digital platforms were built on the assumption that most participation comes from real people. That assumption is breaking down. Tools that let bots pretend to be humans are now good enough to fool users and platforms in text, voice, and video. In a 2025 Turing test, GPT-4.5 was judged to be human 73% of the time. In voice, researchers found that people could correctly identify AI-generated clones only about 60% of the time, and a broader 2024 meta-analysis found that human deepfake detection overall is only slightly above chance. Furthermore, these deepfake tools have become cheap, widely available, and easy to use.
The inability to tell humans from bots is severely eroding trust on the internet and undermining the business models of digital platforms. Recently, X’s Head of Product said that X is suspending more than 200 bot accounts per minute. Already in 2023, a University of Queensland study found that X was “flooded with platform manipulation of various kinds”. In 2025, researchers from the University of Zurich conducted a study on Reddit, showing that their AI bot was three to six times as successful as humans at changing users’ opinions. This heightens concerns that bad actors could use bots to systematically influence the opinions of targeted groups. It is widely known that “bot farms” using racks of real smartphones are being deployed by state actors and other groups to fake social media popularity, manipulate algorithms, and make fabricated narratives appear genuinely popular. When trust erodes, engagement quality falls, retention weakens, acquisition becomes less efficient, and monetization gets harder.
Companies are already spending enormous resources to manage these problems. Imperva research estimates that bot attacks impose direct economic costs of up to $116 billion annually. Ad fraud adds an estimated $140 billion, projected to reach $172 billion by 2028. Recently, Europol warned that AI is accelerating “an online fraud epidemic” with unprecedented sophistication and scale, “expected to outpace other types of serious and organized crime.”
The problem is about to get orders of magnitude worse with the advancement of autonomous AI agents. In the hands of scammers, AI agents can deceive victims across long-running conversations, strategically combine actions like comments, likes, and messages across different platforms, and patiently build consistent activity profiles that will soon be indistinguishable from humans. This is a problem because existing safeguards were designed for a world where deception was relatively expensive and limited in scale, not one where AI can imitate human behavior cheaply, convincingly, and across every channel.
Proof of human: A new technology to restore trust online
The World ID protocol was designed to tackle the challenges posed by AI at its source: to restore trust on the internet, we need to ensure that the participants of online communities are humans, not bots. At the core of World ID is the proof of human credential, generated by Orb verification, which gives platforms a privacy-preserving way to ensure that their users are unique humans.
Importantly, World ID’s proof of human is not just another identity or authentication service; it is a new foundational layer for the internet that restores trust online. The World ID protocol enables any entity (e.g., enterprises or governmental institutions) to create a new World ID Credential (e.g., a credit report or a university certificate), which individuals can attach to their World ID. Enterprises can attach existing authentication or identification services to an employee's World ID.
World ID enables individuals to share things about themselves without revealing their real identity. For example, Orb-verified individuals can use the proof of human credential to prove to an application “I am a unique human,” while individuals who have added a passport credential can prove “I am over 18” or “I am a US citizen.” Credentials can also be combined to make composite proofs, for example “I am a unique human who is over 18,” without revealing any other information. Since the World ID protocol is implemented via zero-knowledge cryptographic proofs, the user does not need to entrust their private information to any central party.
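The selective-disclosure flow above can be sketched in a few lines of Python. This is a toy model, not the actual World ID API: the credential fields, predicate names, and the `prove` function are hypothetical stand-ins, and the real protocol replaces the local predicate evaluation with zero-knowledge proofs.

```python
from datetime import date

# Toy model of selective disclosure (NOT the World ID API).
# The private credential stays on the user's device; a proof
# exposes only the requested boolean predicates.
PRIVATE_CREDENTIAL = {
    "birth_date": date(1990, 4, 12),   # from a passport credential
    "nationality": "US",
    "orb_verified": True,              # from the Orb's proof of human
}

def prove(claims: list[str], today: date = date(2026, 1, 1)) -> dict[str, bool]:
    """Evaluate the requested predicates locally and return only
    true/false answers -- a stand-in for a zero-knowledge proof."""
    predicates = {
        "is_unique_human": PRIVATE_CREDENTIAL["orb_verified"],
        "is_over_18": (today - PRIVATE_CREDENTIAL["birth_date"]).days >= 18 * 365.25,
        "is_us_citizen": PRIVATE_CREDENTIAL["nationality"] == "US",
    }
    return {claim: predicates[claim] for claim in claims}

# Composite proof: "I am a unique human who is over 18".
# The verifier learns these two booleans; the birth date and
# nationality never leave the device.
proof = prove(["is_unique_human", "is_over_18"])
```

The key property being modeled: the verifier receives only the answers to the predicates it asked for, never the underlying attributes.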
Many proactive enterprises foreseeing the challenge of reliable proof of human have already integrated with World ID, including Razer (a leader in eSports tournaments) and Match Group (the world’s pioneer in online dating). To date, nearly 18 million users have verified their humanness at an Orb and can now use their World ID to verify their unique humanness online.
There is currently no other technology solution that enables platforms to distinguish bots from humans in a robust, privacy-preserving, scalable, and globally accessible way. CAPTCHAs and behavioral detection can be easily evaded by modern AI. IP-based rate limiting and client-side device fingerprinting are increasingly unreliable as standalone defenses: for example, modern bots can rotate IPs to bypass rate limits, while browser privacy protections and spoofing tools are eroding the reliability of fingerprint-based identification. eKYC introduces massive friction and privacy risks for users, and it is not globally accessible because large segments of the world’s population lack verifiable government ID. This places World ID’s proof of human in a category of its own (see here for a more detailed discussion).
How proof of human can increase platform revenue
At scale, adoption of World ID’s proof of human can unlock enormous value by restoring trust online and ensuring that a platform’s users are unique humans. While this value flows to both users and platforms, it is most readily measurable at the platform level. Integrating with World ID can increase a platform’s user base and ARPU (average revenue per user) due to (a) better product experience and user retention, (b) lower user acquisition cost, (c) higher ad revenue and better creator economics, and (d) a competitive advantage over other platforms.
Better product experience and user retention. Users tend to disengage rapidly when they believe a platform is populated by bots rather than real people. For example, user engagement on the 10 largest dating apps declined by 16% between 2023 and 2024, a period in which bot attacks on dating platforms increased by more than 2000%. A 2025 survey found that six in ten dating app users believe they have encountered at least one AI-generated conversation. World ID addresses this directly, which is one reason why Tinder (owned and operated by Match Group), the world’s pioneer in online dating, is piloting World ID in Japan, where verified users are amplified via a verified Human Badge and receive 5 boosts on Tinder. But the dynamic extends beyond retention to the quality of participation itself. In any product whose value depends on authentic human participation, bot activity degrades the experience and corrupts the environment. A social feed where every participant is a real human produces better content and more accurate recommendations. An online game where every player is a unique person produces fairer matches. An eCommerce platform where reviews are written by real purchasers produces better buying decisions. In each case, World ID enables the formation of more trustworthy and valuable user communities by retaining genuine users, reliably excluding bad actors, and improving the product experience for everyone.
Lower user acquisition cost. Bots and declining trust increase user acquisition costs by driving spend toward non-converting traffic and degrading targeting efficiency, while also making it harder to attract new users. World ID’s proof of human enables platforms to capture the additional revenue from the latent demand of users who would join, pay, and engage if they trusted the platform. Beyond unlocking this latent demand, proof of human also makes customer acquisition spending more efficient. Free trials, sign-up bonuses, promotions, and referral rewards are designed to acquire real users. Proof of human lets platforms ensure that these offers reach actual humans. This reduces marketing spend that is otherwise wasted on bot activity and duplicate accounts, and it improves conversion rates across acquisition channels. Furthermore, the value of promotions and bonuses can be increased if they are truly limited to one per person, increasing growth through better promotional efficiency.
Higher ad revenue and better creator economics. Advertisers pay for human attention, but a meaningful share of ads is consumed by bots, not humans. Ad fraud costs an estimated $140 billion annually. Proof of human creates value for platforms, advertisers, and creators. Platforms can offer a "verified human" tier at a higher cost per impression, because the impressions are guaranteed to reach real people. Advertisers whose conversions come from verified humans can better measure their marketing ROI, which justifies sustained or increased spend. This is one of the reasons why Hakuhodo, Japan’s second-largest marketing agency, plans to use World ID to build a fraud-proof ad network so that it can affordably reach more unique humans, not paid “click farmers”. For creators (e.g., in Spotify’s streamshare model), the same logic applies to royalty pools: when fake artists are excluded due to proof of human, the share of the revenue pool paid to legitimate streams increases, making the platform more attractive to the real creators who drive its value.
Competitive advantage. Eventually, a share of users will leave platforms that have too many bots and migrate towards platforms that can guarantee human participants. For example, a dating app where every match is a verified human will outcompete one that cannot make that claim. A ticketing platform that offers fair, bot-free access to tickets will win artists and fans. An ad network that can prove its impressions reach real people will command the budgets. Being an early adopter of proof of human gives platforms a competitive advantage as platforms without it will lose users, creators, and advertisers to competitors.
All of these benefits reinforce one another, allowing the value of proof of human to compound over time. Better user experience improves retention, stronger retention attracts more genuine users, better audience quality increases monetization, and stronger monetization creates more resources to invest in growth. As these effects accumulate, a platform’s willingness to pay for proof of human should increase.
World ID fees: Capturing value for the World protocol
One way to capture the value created by a proof of human protocol is to charge for the benefits that accrue to the applications that use it. World ID 4.0 enables such a mechanism: applications may pay World ID fees for using a credential, while protocol usage remains free for end users.
Applications can integrate pre-existing credentials (e.g., the Orb’s proof of human credential), create and use their own institutional credential (e.g., a credit report), use credentials issued by others (such as a digital government ID), or adopt a mix of multiple credentials. Protocols like World ID make these credentials usable by applications while ensuring the credential issuers can charge fees for the credential’s use if they so choose.
World ID fees could consist of two components:
- Credential fee: Each credential issuer (e.g., the World Foundation for the Orb credential, enterprises or institutions for their credentials) can set and receive a fee for their credential. This ensures that credential issuers have an incentive to create and maintain their credentials.
- Protocol fee: The protocol can set a base fee and additionally charge a small premium on top of the credential fee.
From an application’s perspective, there would only be one World ID fee: the sum of the credential fee and the protocol fee. That World ID fee would be automatically charged when an application (identified via a unique app id) requests a World ID proof.
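As a minimal sketch of how the two components combine into the single fee an application sees: the function shape and all numbers below are illustrative assumptions, not published protocol parameters.

```python
# Illustrative fee assembly (all names and numbers are assumptions,
# not published World ID protocol parameters).

def world_id_fee(credential_fee: float,
                 protocol_base_fee: float,
                 protocol_premium_rate: float) -> float:
    """Total fee charged to the application for one credential use:
    the issuer's credential fee plus the protocol fee, where the
    protocol fee is a base amount plus a small premium proportional
    to the credential fee."""
    protocol_fee = protocol_base_fee + protocol_premium_rate * credential_fee
    return credential_fee + protocol_fee

# Example: issuer charges $0.30, base fee $0.05, premium 10%.
total = world_id_fee(credential_fee=0.30,
                     protocol_base_fee=0.05,
                     protocol_premium_rate=0.10)
# total is roughly $0.38: the app sees one fee; the split is internal.
```

The design point the sketch captures is that the application never reasons about the split: it pays one fee, and the allocation between issuer and protocol happens behind the scenes.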
The new World ID 4.0 protocol supports the enforceability of World ID fee payments. Once fees are enabled, an application’s wallet must be pre-funded with the asset used to settle fees on the protocol. Web3-native applications can directly pre-fund their wallet on the blockchain. Alternatively, applications (e.g., Web2 platforms) might use a third-party pre-funding service that handles the wallet balances for them and charges the application in fiat. In either model, tokens are ultimately used to pay all fees.
Pricing mechanisms
While several different pricing mechanisms are possible (see here for a detailed discussion), it is likely that many applications will prefer a per-monthly-active-user fee, as it allows applications to easily compare the fee amount with the value that World ID generates for them per user per month (e.g., due to an increase in ARPU). Monthly pricing also reflects the fact that a credential verified once and never refreshed degrades over time: a user may lose their World ID; fraud signals may be added to the protocol; more recent user actions may identify a user as fraudulent or compromised; etc. Thus, most applications will not treat proof of human as a one-time event. Instead, they will likely want the proof-of-human signal to be refreshable on an ongoing, risk-based basis, with periodic renewal where appropriate, similar to how existing password managers require periodic re-authentication.
Each World ID proof is generated on the user’s device, so proof generation has essentially zero marginal cost to the protocol and there is no need for a per-proof fee. Given this, a monthly fee per active verified user best aligns the cost structure with the needs of applications and the protocol.
Fee allocation and token burns
Each credential issuer will have discretion over how to use their credential fees. Protocol fees can be programmatically allocated according to pre-defined onchain mechanisms. Such a mechanism may direct a portion of the protocol fees toward the network’s operations, or it may burn a share of the fees. Initially, the World Foundation will program the configuration of this mechanism.
Continued adoption and use of the World ID protocol (more participants, more applications, and more World ID proofs) can lead to the generation of more fees, which credential issuers and the protocol can direct back into the ecosystem to create further growth, leading to a self-reinforcing mechanism.
An example World ID fee calculation
Consider a platform with 100 million monthly active users and $5 billion in annual revenue, implying a $50 ARPU. But a share of those users is not real: bots, duplicate accounts, fake profiles. A recent study estimates that more than 50% of all internet traffic is now generated by bots. They appear in the denominator of every metric the business tracks, but they do not subscribe, buy products, click ads with purchase intent, or engage with creators in ways that translate to revenue. Every fake user mechanically suppresses ARPU, because the platform is dividing real revenue by an inflated user count. Thus, for this example, let’s assume that the real user number is only 50 million. Given this, with the same $5 billion in annual revenue, the corrected ARPU (for real users) is $100 rather than $50.
Now assume that integrating World ID’s proof of human solution increases the platform’s corrected ARPU by 25% due to a better user experience (thus increasing users’ willingness to pay), better user retention, lower user acquisition cost, better promotion conversion, and higher ad revenue. Thus, the corrected ARPU increases by $25 from $100 to $125. Another way to arrive at the same number is to consider what would happen if the platform did not implement any proof of human solution. In that scenario, it is plausible to assume that the platform’s corrected ARPU would decrease by $25 over time. Thus, under either perspective, the integration of the proof of human solution leads to a corrected ARPU that is $25 higher.
If 20% of this $25 ARPU increase were captured by the World ID protocol in the form of fees, this would correspond to a yearly World ID fee per active verified user of $5, or a monthly fee of roughly $0.42. In the long run, the total amount of World ID fees generated will depend on the number of applications that choose to integrate with World ID, the number of Orb-verified users that choose to use those integrations, and the value that proof of human generates via those integrations.
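The worked example above reduces to a few lines of arithmetic; every input is one of the assumptions stated in the text.

```python
# Worked example from the text: all inputs are the stated assumptions.
users_reported = 100_000_000        # monthly active users as reported
users_real = 50_000_000             # assumed real humans among them
annual_revenue = 5_000_000_000      # $5B

arpu_reported = annual_revenue / users_reported      # $50
arpu_corrected = annual_revenue / users_real         # $100

uplift = 0.25                       # assumed ARPU uplift from proof of human
arpu_gain = arpu_corrected * uplift                  # $25 (from $100 to $125)

capture_share = 0.20                # share of the gain captured as fees
annual_fee_per_user = capture_share * arpu_gain      # $5 per verified user per year
monthly_fee_per_user = annual_fee_per_user / 12      # roughly $0.42 per month
```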
Industry analysis: 13 industries with World ID use cases
World ID’s value lies in curating a trusted set of unique humans that serves as a foundation for all subsequent value creation. Across many industries, systems can operate more efficiently and safely if they’re built on the assurance that every participant in the network is a verified, unique human.
These 13 industries suffer from bots, fake accounts, and fraud at scale. They serve billions of users, generate trillions of dollars in annual revenue, and together illustrate what is at stake if proof of human remains unsolved: not isolated fraud problems in isolated markets, but a structural erosion of trust across the digital economy as a whole. The following table shows the number of annual users, the market size today (in annual revenue), the ARPU (average revenue per user), and a brief summary of some of the bot activity that occurs today that World ID can prevent. We discuss each industry in detail in the Appendix (with links to sources).
| Industry | Users | Annual Revenue | Average Revenue Per User (ARPU) | World ID Solves |
|---|---|---|---|---|
| Social media | 5.1B | $239B | $47 | Bots destroy trust by flooding platforms with misinformation, manipulating users, and displacing authentic content from algorithmic distribution |
| eCommerce | 2.8B | $642B | $229 | Bots scrape prices, create fake accounts and reviews, manipulate ratings, and hoard scarce inventory at scale |
| Ticketing | 500M | $53B | $106 | Bots flood ticketing platforms with fake accounts to scalp tickets at scale and resell them at extreme multiples of face value |
| Advertising & creator economies | 6B | $411B | $69 | Fake impressions and clicks dilute ad revenue, ruin ROI, and steal royalty share from creator royalty pools |
| Enterprise security & remote work | 620M | $32B | $52 | Bad actors use deepfakes in video calls or gain unauthorized access via synthetic identities |
| Dating | 350M | $6.2B | $18 | Fake profiles erode trust, steal matching capacity from real users, and are used to engage in romance scams |
| Gaming | 3.6B | $189B | $53 | Bots farm in-game currencies, corrupt real player rankings, and hurt the overall player experience |
| AI agents & GenAI | 980M | $53B | $54 | AI agents act without transparent human delegation, making it impossible for platforms to distinguish them from malicious bots |
| Banking & payments | 4.9B | $2.4T | $489 | Bad actors create fake bank accounts and perform deepfake impersonations of other account holders to achieve account takeover |
| Blockchain | 659M | $56B | $85 | Bad actors perform sybil attacks on airdrops and governance votes and exploit compliance gaps |
| Government services | 3.7B | -- | -- | Bad actors defraud benefits systems via synthetic identities |
| Gig economy | 2.5B | $557B | $223 | On the driver side, fraudsters use the same work authorization for multiple drivers and engage in sophisticated schemes defrauding the platform. On the customer side, fraudsters abuse refunds and promotions. |
| Travel & hospitality | 1.4B | $650B | $433 | Bots engage in loyalty point fraud, perform fake bookings, and abuse promotions via automated account creations |
Social Media
The problem. Social media is an industry with 5.1B annual users [1] and combined annual revenue of $239B [2], implying an ARPU of $47. Social media platforms grapple with billions of fake accounts. Modern bots can create accounts autonomously and bypass anti-bot protections with increasing success, driving misinformation, political polarization, user manipulation, and low-quality content, all of which erode user trust and attract significant regulatory scrutiny. Related costs show up in moderation budgets, but even more importantly in deteriorating product quality and user engagement. Head of Instagram Adam Mosseri highlighted “infinitely reproducible [authenticity]”, i.e., the ability for bots to imitate humans and create synthetic content without bound, as a key risk in 2026. There is also a subtler cost: algorithmic distribution is finite, and fake engagement consumes it. Every bot-driven interaction that surfaces in a recommendation feed is one that displaces a real creator, costing them reach, ad revenue share, and eventually the incentive to create at all. Reddit already removes 100,000 accounts per day for spam and bot activity, and its CEO recently announced that the platform would begin requiring suspicious accounts to verify they are human, writing: "The internet feels different lately. It's getting harder to tell who, or what, you're interacting with."
Why current approaches fail. Today, social platforms employ three main approaches, each with a critical flaw. Behavioral detection catches bot patterns temporarily, until bots are updated to better mimic human behavior, at which point detection becomes stale and requires expensive upgrades and additional data collection to catch up. Phone number verification is cheap to bypass with virtual number services and therefore does little to prevent one person from operating a large number of accounts. Government ID verification, while temporarily effective at proving identity, is subject to AI spoofing and undermines the anonymity that is central to how most social platforms function. Reddit's CEO has been explicit: the platform needs to verify that participants are human but will not require identity disclosure, because anonymity is not incidental to Reddit's product; it is the product. Every current approach forces platforms to choose between catching bots and preserving the environment their users came for.
What World ID enables. World ID allows platforms to verify that a participant is a real, unique human without revealing anything else about them. Platforms can limit account creation, posting, or amplification on a one-person basis while guaranteeing their users that no personal identity data is collected and accounts cannot be linked across services. Social platforms already incur significant and rising costs to moderate abuse, and privacy-invasive implementations create adverse effects that degrade content quality and retention, both of which directly translate to lower revenue. World ID eliminates this trade-off: the platform gets proof of human without storing any personal information, removing the risk of data breaches and the liability that comes with them. Social media platforms are also heavily dependent on advertising revenue, which becomes more effective and trusted when ads are reliably shown to humans with purchasing power and not bots.
eCommerce
The problem. eCommerce is an industry with 2.8B [3] annual users and combined annual revenue of $642B [4], implying an ARPU of $229. Bots now account for a large share of eCommerce traffic, creating fake accounts, abusing promotions, manipulating reviews and ratings, and hoarding scarce inventory. This creates direct costs for platforms and degrades the product experience for real users. Bot traffic comprised 57% of eCommerce website traffic during the 2024 holiday season, with bots increasingly mimicking human behavior to make traditional defenses less effective. “Bad” bots, comprising 31% of traffic, violate service policies and engage in price scraping, content scraping, account takeover, fake account registration, and malicious cart abandonment. eCommerce services incur $4.61 in costs for every $1 of fraud on their platform. Amazon, for example, has disclosed employing more than 12,000 people dedicated to combating fraud, fake reviews, and abuse across its marketplaces, and blocking over 275 million suspected fake reviews in 2024 alone.
Why current approaches fail. eCommerce platforms face a difficult trade-off between detection effectiveness and friction for real users. While behavioral analysis can be lower friction, it is locked in an arms race with increasingly sophisticated bots. CAPTCHAs introduce friction, drive abandonment, and can now be bypassed by consumer-grade AI or outsourced cheaply. Invasive approaches like eKYC are unacceptable for most retail use cases due to cost, effort, and privacy concerns. This results in sustained defensive spending that does not improve the shopping experience for real customers and often introduces painful friction that harms conversion.
What World ID enables. World ID lets marketplaces know that a single, unique human is on the other side of a given account, review, or action, rather than a bot farm. World ID offers a way to shift some of this defensive spend from reactive enforcement to preventative infrastructure that maintains a quality user experience, improving trust and engagement with reviews and ratings and increasing spend. Moreover, integrating World ID allows eCommerce services to offer promotions and discounts that can only be claimed once per person without adding friction at checkout, to offer fair access to high-demand inventory, and to increase the quality of reviews and ratings and the discoverability of products.
Ticketing
The problem. Ticketing is an industry with 500M [5] users and annual revenue of $53B [6], implying an ARPU of $106. The user experience on ticketing platforms is best when there is fair access, with each user being able to get a handful of tickets, not more, and each user having a chance at securing tickets they want. This is not the case today, as scalpers deploy automated software that purchases tickets in bulk the instant they go on sale, faster than any human can act. Nearly 40% of online ticket sales are made through automated purchasing, and in high-demand on-sales the share is far higher. A Queue-it analysis found 96% of traffic to a major concert sale came from bots and uninvited visitors, with just 4% from legitimate fans. The New York Attorney General documented a single broker acquiring over 1,000 concert tickets for a U2 show in one minute, despite a four-ticket limit. During Taylor Swift's Eras Tour, resale prices reached seventy times face value on the secondary market. The result is a market that systematically redirects value from fans and artists to scalpers and leaves platforms exposed to regulatory and reputational risk.
Why current approaches fail. Ticketing system defenses operate in layers: virtual waiting rooms with randomized queue positions, SMS verification to confirm a real phone number exists behind each account, CAPTCHAs to distinguish human interaction from automated input, behavioral analysis to flag bot-like session patterns, and the Verified Fan pre-registration program, which invites only vetted fans to presales. Each layer has a known circumvention. SMS verification is defeated by virtual number services like Google Voice or bulk burner phones. CAPTCHAs are bypassed by consumer-grade AI or cheaply outsourced to human solvers. Behavioral signals are evaded by bots trained to mimic legitimate browsing patterns. Purchase limits are circumvented by operating thousands of accounts simultaneously. And Verified Fan pre-registration, while effective when it works, relies on the same account-level verification that bots are built to defeat. For example, access codes from the Taylor Swift Eras Tour presale were guessable by automated systems because of their random alphanumeric structure. Ticketmaster now blocks 200 million bots per day, a fivefold increase from 2019, and the problem has not gone away. Legislation adds pressure but not resolution: the BOTS Act has been on the books since 2016, but the FTC has resolved just one case under it in nearly a decade.
What World ID enables. World ID allows ticketing platforms to enforce a robust one-person-per-purchase guarantee. A fan presents an anonymous proof of human prior to queue access or at checkout, confirming they are a unique human. Scalpers operating thousands of fake accounts cannot do this, since each account would require a distinct verified human behind it, making bulk acquisition economically unviable at any meaningful scale. Platforms can apply this selectively, requiring proof of human only for high-demand on-sales where bot abuse is most severe. World’s proof of human is already being used by Mundo Arjona, the fan network for major Latin pop singer-songwriter Ricardo Arjona, to ensure fair access to presale tickets.
Advertising and Creator Economies
The problem. The advertising industry (outside of social media) and the related creator platforms have 6B [7] annual users and annual combined revenue of $411B [8], implying an ARPU of $69. Advertisers pay advertising platforms under the assumption that impressions, views, and clicks represent real human attention that can convert into meaningful revenue and engagement. However, bots, fake accounts, and automated agents generate large volumes of traffic, inflate engagement metrics, suppress ARPU, and siphon advertising spend without delivering real audience value [9]. This distorts ad pricing and inventory, degrades critical ROI measurement, and erodes trust between advertisers and platforms.
The same dynamic plays out in the creator economy. Streaming platforms assume that plays and listens represent genuine human creators reaching real human audiences. Spotify removed 75 million fake tracks in 2024, including AI-generated music uploaded by fake artist accounts designed to farm royalty payouts through bot-driven streams. Today, 5–10% of all music streams are likely fraudulent. Real human creators compete for algorithmic visibility and royalty pools against a flood of content from accounts with no human behind them.
Why current approaches fail. Current detection approaches heavily rely on pattern matching against known bad behavior, but as detection improves, evasion adapts to match it. Both sides draw on the same underlying AI capabilities, so the investment gap never closes and the costs increase on both sides. In 2024, ad fraud cost advertisers over $140 billion globally, projected to reach $172 billion by 2028, despite years of sustained investment in detection. On the creator side, distributors have financial incentives to onboard as many tracks as possible and apply no meaningful verification that a real human is behind the account. Once tracks are live, the same arms race dynamic applies: platforms flag suspicious listen patterns, and bots are trained to mimic legitimate ones.
What World ID enables. World ID allows platforms to distinguish verified human attention from automated activity, anchoring impressions and engagement to real, unique humans without identifying them. This sharpens the targeting and measurement systems advertisers already rely on. Hakuhodo, Japan's second-largest marketing agency, is already trialing this for their ad network. For the creator economy, World ID enables a verified human creator credential: not a statement about whether AI tools were used, but a guarantee that a real person owns and operates the account, making bot-operated upload farms economically unviable. On the advertising side, this lets platforms charge higher CPM through credibly lower bot traffic or a "verified human" offering. On the streaming side, it protects royalty pools from the hundreds of millions of royalty dollars that are misallocated and flow to fake artists, increasing revenue for real creators and making the platform more attractive for both sides of the market.
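To make the advertising-side claim concrete, the sketch below shows how a verified-human signal changes effective CPM arithmetic. All numbers, field names (`verified_human`), and the flat impression structure are illustrative assumptions, not a real World ID API; the point is only that paying per verified human reprices the same traffic.

```python
# Hypothetical sketch: repricing ad spend against verified-human impressions.
# The "verified_human" flag is an assumed field, not a real World ID interface.

def effective_cpm(impressions: list[dict], spend: float) -> float:
    """Cost per 1,000 impressions that reach a verified human."""
    human = sum(1 for imp in impressions if imp.get("verified_human"))
    if human == 0:
        raise ValueError("no verified-human impressions")
    return spend / human * 1000

# A campaign buys 10,000 impressions for $50; 7,000 carry a proof of human.
impressions = [{"verified_human": i < 7000} for i in range(10_000)]

nominal = 50 / len(impressions) * 1000     # CPM on raw, bot-inflated traffic
verified = effective_cpm(impressions, 50)  # CPM per verified human reached
```

Under these assumed numbers, the nominal CPM is $5.00, while the true cost per thousand verified humans is about $7.14; a platform that can credibly certify human traffic can price that gap explicitly.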
Enterprise Security & Remote Work
The problem. Enterprise security and remote work is an industry with more than 620M10 annual users and combined annual revenue of $32B11 in 2025, implying an ARPU of $52. Organizations routinely authorize payments, approve contracts, and exchange sensitive information based on emails, messages, and video calls. If the “person” on the other end is not real or is impersonating someone else, this can create financial loss, legal exposure, or reputational damage. Video has traditionally been used as an ultimate proof of authenticity, but with the prevalence of deepfake technology, this is no longer valid. Fraud through deepfakes is estimated to have cost organizations $1.5B in 2025. 72% of US businesses anticipate AI fraud as a major challenge in 2026. In one high-profile case, a company incurred a $25M loss due to a deepfake of the CFO ordering banking transfers. A 2026 survey found that 72% of respondents had lost up to 5% of their business profits to AI-powered attacks over the previous 12 months, and 94% are concerned about the risk of attacks in the coming year.
Why current approaches fail. Enterprise remote security is today largely implemented along two avenues: first, the security of devices that hold identifiers, session tokens, and two-factor keys, and second, the authenticity of video. Device security, while the subject of massive investments by organizations, is of course imperfect: laptops and phones get stolen and accounts get compromised. Video authenticity today largely relies on detecting artifacts of AI generated or modified video, locked in a persistent arms race with ever improving deepfake models. Detection must avoid false positives, which becomes increasingly difficult as video models improve, while the cost of deepfake attacks continues to decline.
What World ID enables. World ID augments device security with human continuity. A user connects their World ID to their enterprise account. For particularly sensitive actions and messages, applications can request that the user confirm, via Face Auth, that they still control their device. World ID thus provides a second factor, in the sense of 2FA, that demonstrates not merely continuous control over a device but that the same person is still behind it. For video authenticity, World ID’s Deep Face feature prevents deepfake compromise. A user can connect their World ID to a video stream, and Deep Face matches the frames transmitted over video against the high-fidelity images taken by the Orb. In this way, human continuity and Deep Face integration give organizations an additional layer of defense.
Dating
The problem. Dating is an industry with 350M12 annual users and annual revenue of $6.2B13, implying an ARPU of $18. Arkose Labs reports that, between January 2023 and January 2024, dating apps saw a 2087% increase in bot attacks. Deepfakes in dating apps are among the most nefarious uses of AI today. Modern bots sustain convincing multi-turn conversations, generate realistic profile photos, and mimic emotional cues well enough to evade detection. Agents can swipe, match, and message autonomously across thousands of fake profiles simultaneously. A 2025 Norton survey found six in ten dating app users believe they've encountered at least one AI-written conversation. Users who encounter bots (or simply suspect they might) disengage from interactions, doubt that their matches are genuine, and drift toward lower engagement and churn, which compounds adversely in a service that depends heavily on network effects.
Why current approaches fail. Phone verification is cheap to bypass with virtual number services. Photo review is losing ground to generative image models. Behavioral signals that once identified bots by response time, message cadence, conversation flow, etc., are now replicated by large language models built specifically to pass as humans in dating or romantic situations; a 2025 study found humans mistook GPT-4.5 for a human 73% of the time. Report-based moderation is reactive by design: it catches bad actors after harm has been done, and only when users report rather than simply churn. Dating platforms also face trade-offs similar to those in other industries: aggressive enforcement generates false positives that alienate real users and worsen engagement metrics. And banning a bad-acting account does not prevent the bad-acting person from creating a new one. The result is a persistent population of bots and bad actors that degrades the product.
What World ID enables. World ID allows dating platforms to verify that a profile belongs to a unique human without requiring identity disclosure, government documents, or cross-service data sharing. Users present an anonymous proof of human at account creation or before initiating contact. This anchors the profile to a genuine person without constraining what name, photo, or persona they choose. Platforms can apply proof of human selectively: requiring verification to access matching features, to send a first message, or to appear in search results. Alternatively, platforms can simply show a verified Human Badge on the user’s profile as a signal to other users. The one-human-per-account guarantee also closes the multi-profile abuse vector directly: operating hundreds of fake profiles from a single infrastructure becomes economically unviable when each account must be tied to a distinct verified human. Platforms are already demonstrating that they can credibly offer genuine human encounters as a product differentiator. Tinder has integrated World ID in Japan to display a “verified human” badge on user profiles for those with a valid proof of human credential.
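The selective-enforcement idea above can be sketched in a few lines: a platform lets everyone browse, but gates first contact on a proof of human. The `proof_of_human_verified` field and the gating point are assumptions chosen for illustration; a real integration would verify a cryptographic proof rather than read a flag.

```python
# Illustrative gating policy: verification is required only at first contact,
# so friction lands on exactly one high-leverage moment instead of signup.

def can_send_message(sender: dict, is_first_message: bool) -> bool:
    """Browsing and existing conversations stay open; first contact is gated."""
    if not is_first_message:
        return True
    return sender.get("proof_of_human_verified", False)

verified = {"proof_of_human_verified": True}
unverified = {}

assert can_send_message(verified, is_first_message=True)
assert not can_send_message(unverified, is_first_message=True)
assert can_send_message(unverified, is_first_message=False)  # old threads unaffected
```

Placing the check at first contact, rather than at account creation, preserves top-of-funnel conversion while still denying bots the interaction that matters.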
Gaming
The problem. Gaming is an industry with 3.6B14 annual users and annual revenue of $189B15, implying an ARPU of $53. Gaming platforms battle on the order of millions of bot accounts (e.g., Lost Ark (2023, 2025), RuneScape (2024, 2025), World of Warcraft (2023), Lineage2M (2025)). Bots can make up a significant share of the player base (e.g., 14% in RuneScape). Much of this bot activity consists of “farming” operations that generate in-game currency, items, or player upgrades through repeated in-game actions, and then sell those for real money. Bot activity is undesirable for players because it distorts game economies to the point of preventing access to certain mechanics, and it disrupts social expectations and the perception of fairness. It is equally undesirable for game operators because it discourages players and distorts the engagement metrics that investors and advertisers rely on; for example, Roblox faces an SEC investigation for the latter reason. Gaming ranks as the number one vertical for bot attacks by percentage of fraudulent traffic. This introduces real costs for gaming platforms: first-party fraud such as bonus abuse, multi-accounting, and fake engagement cost online gaming operators $2.8 billion in 2024 alone, despite sophisticated AI and thousands of human employees engaged in content moderation.
Why current approaches fail. Current approaches that flag and ban problematic accounts face a fundamental problem: banning an account does not effectively ban a person who behaves badly. When an account is flagged, the fraudster creates a new one at effectively zero cost. Behavioral detection catches fake accounts until bots are updated to convincingly mimic human behavior, after which detection lags. Rate limits and device fingerprinting are defeated by rotating proxies and virtual machines. iGaming platforms report a 6.48% fraud rate in 2024, with more than 8% of new account applicants identified as fraudsters in some markets, and 83% of operators saying it is getting worse. Studios end up in a permanent enforcement cycle that consumes engineering resources without resolving the underlying economics: as long as account creation is free and anonymous, operating fake accounts at scale remains profitable.
What World ID enables. Tying a verified World ID to each account would make ban evasion genuinely costly for the first time. Farming operations that depend on running hundreds of simultaneous accounts become economically unviable and bonus abuse tied to new account creation is eliminated. Studios can apply verification selectively to ranked play, tournament entry, or real-money wagering, without touching casual play. Razer has already moved in this direction, introducing "Razer ID verified by World ID" to link player accounts to World IDs.
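The ban-economics argument above can be made concrete with a small sketch: when every account must be tied to a distinct verified human, a ban attaches to the person's anonymous per-app identifier (often called a nullifier) rather than to the account. Class and identifier names here are illustrative assumptions, not World ID's actual interfaces.

```python
# Sketch of human-level (rather than account-level) bans. Banning by the
# anonymous per-app identifier means a fresh account from the same person
# is rejected at registration, which is what makes ban evasion costly.

class BanRegistry:
    def __init__(self) -> None:
        self._banned_humans: set[str] = set()

    def ban(self, nullifier: str) -> None:
        self._banned_humans.add(nullifier)

    def can_register(self, nullifier: str) -> bool:
        # A new account is refused if the underlying human is banned.
        return nullifier not in self._banned_humans

registry = BanRegistry()
registry.ban("human-42")                      # account flagged for farming
assert not registry.can_register("human-42")  # new account, same person: blocked
assert registry.can_register("human-7")       # unrelated players unaffected
```

Because the identifier is app-scoped and anonymous, the studio learns nothing about who the person is, only that the same person is returning.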
AI Agents & GenAI
The problem. AI agents and GenAI are a rapidly growing industry with 980M16 annual users and annual revenue of $53B17, implying an ARPU of $54 — a figure that is conservative, given the pace of agent adoption. AI agents are AI systems that autonomously execute actions on behalf of a user: browsing websites, filling forms, making purchases, booking travel, and interacting with third-party services. To the platform on the receiving end, an agent is indistinguishable from a bot. The original human context, such as who authorized this action, and who bears responsibility for it, is invisible. This creates two problems. First, platforms that defend against bot traffic have no basis on which to allow legitimate agents through; blocking bots means blocking agents, which degrades agent utility and restricts legitimate traffic. Second, when something goes wrong, accountability is unclear: for example, an agent may make an unintended purchase on behalf of a user, leading to a dispute later on and exposing the merchant to risk.
Why current approaches fail. AI agents are a new technology, and how to prove that an agent is acting on behalf of a human is a topic of ongoing research and development. For services with which the user has a prior relationship, it is relatively straightforward to authenticate the agent using a credential shared by the user. However, this kind of “blanket sharing” comes with major problems: the agent becomes indistinguishable from its human principal, the approach does not work for new interactions (e.g., agents browsing unfamiliar websites), and it carries the risk of agents exposing their credentials.
What World ID enables. World ID gives agents a portable, anonymous proof of human authorization that works across all interactions, not just established relationships. A human can attach a proof of delegation to an agent. The agent can then prove to a platform that it acts on behalf of a human, and the platform receives an anonymous identifier for that human. This lets platforms draw a distinction between delegated agents and adversarial bots. The human remains responsible for the actions of their agent and the platform remains in control: when a delegated agent misbehaves, the platform can rate-limit or block agent interactions for that human or, in extreme cases, block interactions with that human completely.
World ID can also be used to certify that specific actions taken by an agent have been reviewed and approved by its human principal. Platforms can use this for purchases, for example, giving them the confidence to support agent workflows without concerns of costly disputes later on. Organizations can use human approval proofs for high-risk actions, letting them deploy agent workflows while maintaining compliance, accountability, and control over operational risk.
World has already launched AgentKit, providing infrastructure for agents to register proof of human authorization across use cases. The economic stakes scale directly with agent adoption: as agents become standard business infrastructure, platforms that cannot distinguish them from bots face a compounding cost with lost legitimate traffic on one side and unmitigated bot abuse on the other.
Banking & Payments
The problem. Banking and payments is an industry with 4.9B18 annual users and annual revenue of $2.4T19, implying an ARPU of $489. Financial compliance (KYC/AML) and account recovery today overwhelmingly rely on the assumption that video and documents can be trusted. For example, users trying to create an account with a fintech company in Germany usually need to go through an eKYC process that consists of a video call with a service representative. The call is made to collect biometric data while weeding out deepfakes, by having the customer move their head in a specific way, holding, bending, and tilting their government ID, and reciting personal information from memory while looking into the camera. Such extreme measures are implemented because financial institutions are prime targets for attacks using sophisticated AI bots, with $16B lost to account takeover and 42% of clients abandoning their institution after such an incident.
Why current approaches fail. Video-based eKYC and identity verification is common in the financial industry, but it is clearly locked in an arms race with ever more sophisticated AI models. Current deepfake models can already defeat many verification models deployed in practice. The industry responds to this by additional, layered checks of the video feed, but this will likely not be enough as deepfakes reach a quality level indistinguishable from reality and deepfake detection becomes impractical.
What World ID enables. World ID’s proof of human is not based on a reactive detection of anomalies in a video feed, but on high-fidelity images taken by a specialized hardware device, the Orb. Because of this, the ground truth proof of human is not exposed to deepfakes. World ID’s Deep Face feature lets users attach their proof of human to a video feed, proving that the person in the video is the holder of a corresponding World ID. This can be used for the initial account setup and to approve particularly sensitive transactions. Banks and fintechs already use extensive infrastructure for risk scoring, with spending on anti-fraud technology estimated at $16.3B in 2024. World ID fits well into these frameworks, increasing the accuracy of the overall system.
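The claim that World ID "fits into" existing risk frameworks can be illustrated with a toy additive risk score, where a Deep Face verification acts as one strong negative signal among the usual positives. The weights, thresholds, and feature names below are invented for exposition and carry no relation to any real bank's model.

```python
# Toy fraud-risk score (0..100, lower is safer) showing how a proof-of-human
# signal would slot into an existing layered scoring system. All weights
# are illustrative assumptions.

def risk_score(tx: dict) -> int:
    score = 50                        # neutral prior
    if tx.get("new_device"):
        score += 20                   # unfamiliar device raises risk
    if tx.get("amount", 0) > 10_000:
        score += 20                   # large transfers raise risk
    if tx.get("deep_face_verified"):
        score -= 40                   # live video bound to an Orb-grade reference
    return min(max(score, 0), 100)

risky = {"new_device": True, "amount": 25_000}
same_but_verified = {**risky, "deep_face_verified": True}

assert risk_score(risky) == 90
assert risk_score(same_but_verified) == 50
```

The same transaction drops from a likely-decline score to a neutral one once the video feed is anchored to a verified human, which is the sense in which the signal "increases the accuracy of the overall system" rather than replacing it.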
Blockchain
The problem. Blockchain is an industry with 659M20 annual users and annual revenue of $56B21, implying an ARPU of $85. Blockchains have traditionally been pseudonymous: any given set of wallets may belong to any number of humans, or to no human at all. This provides some degree of privacy to users, but many use cases for blockchain technology require a reliable proof of unique human. Among those are fair token airdrops and voting schemes, undercollateralized lending, limited free products, and reputation systems. Recently, many traditionally off-chain industries have significantly expanded onchain, such as payments (Stripe, Circle), stock trading (Robinhood, Nasdaq, NYSE), and securities issuance (e.g., Superstate). Compliance and risk management for these new use cases cannot be realized in a fully pseudonymous system.
Why current approaches fail. Historically, in the absence of proof of unique human, blockchain protocols had to resort to mechanisms that cannot be exploited through multi-wallet “sybil” attacks, such as airdrops proportional to a capital commitment, linear token voting schemes, per-gas transaction fees, and collateralization. However, such “financialized” mechanisms are often deeply suboptimal, both from a purpose-driven and an economic point of view. Some airdrops used network analytics to detect sybils, but the transaction network has fundamentally limited predictive power, leading to some major failures (e.g., Celestia, Arbitrum, MYX). Another approach is to associate eKYC information with a cryptographic proof of wallet ownership. This works but requires an undesirably high degree of trust in the eKYC provider, who is now able to identify the user across all their wallets and transactions.
What World ID enables. The core bottleneck in blockchain technology has been the absence of a reliable, privacy-preserving proof of unique human. Without it, protocols have had to use financial proxies for fairness: governance votes weighted by token holdings, airdrops proportional to capital, lending requiring overcollateralization. World ID makes sybil resistance a solved problem, enabling protocols to treat every verified human equally regardless of capital position. This is already in production: uncollateralized lending on World Chain is a product category that is structurally impossible without proof of unique human. Priority Blockspace for Humans extends the same logic to infrastructure, giving verified humans fair ordering access over bots.
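The equal-treatment property can be sketched with a toy governance tally: votes are counted per unique human via the proof's anonymous identifier (nullifier), so splitting holdings across wallets buys no extra voting power. The data shapes below are illustrative assumptions, not an onchain contract interface.

```python
# Toy one-human-one-vote tally. Duplicate submissions from the same human
# (same nullifier, e.g. via a second wallet) are ignored; the first counts.

def tally(votes: list[dict]) -> dict[str, int]:
    counted: dict[str, str] = {}  # nullifier -> choice
    for v in votes:
        counted.setdefault(v["nullifier"], v["choice"])
    result: dict[str, int] = {}
    for choice in counted.values():
        result[choice] = result.get(choice, 0) + 1
    return result

votes = [
    {"nullifier": "n1", "choice": "yes"},
    {"nullifier": "n2", "choice": "no"},
    {"nullifier": "n1", "choice": "yes"},  # same human, second wallet: not recounted
]
assert tally(votes) == {"yes": 1, "no": 1}
```

Contrast this with token-weighted voting, where the third ballot, funded from a second wallet, would have doubled that participant's influence.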
The economic opportunity scales with the broader onchain financial buildout. Stripe acquired stablecoin provider Bridge for $1.1B and launched payments chain Tempo with Visa. Circle launched stablecoin finance chain Arc. IEX made a strategic investment in OKX. These systems require uniqueness primitives to function at scale, and World ID is the only infrastructure that provides them without compromising user privacy. The $5T US consumer credit market has been largely inaccessible to blockchain-based products because unsecured credit requires proof of unique human to prevent multi-wallet abuse — World ID is the missing primitive that makes it accessible onchain for the first time.
Government Services
The problem. The US GAO estimates between $100 billion and $135 billion in unemployment insurance went to fraudsters during the COVID pandemic, between 11% and 15% of total benefits paid. The mechanism involved bots flooding state portals with synthetic identities, filing hundreds of fraudulent claims from single IP addresses faster than any human reviewer could respond. In June 2025, the DOJ announced the largest healthcare fraud takedown in history, with 324 defendants charged across 50 districts for schemes totaling $14.6 billion in fraudulent Medicare claims for patients with fake identities. These massive programs designed for human applicants had no way to verify whether a real person was on the other side of the claim activity.
Governments have recognized this problem. India's biometric identity system, Aadhaar, was built precisely to eliminate such “ghost beneficiaries” from welfare programs and now includes over 1.4 billion enrollments. The objective is well founded. However, centralized biometric databases are attractive targets for attackers: 815 million Aadhaar records appeared for sale on dark web forums in 2023, and compromised biometric credentials cannot be reset.
Why current approaches fail. Traditional verification relies on documents and databases that are increasingly easy to fabricate. Synthetic identity fraud where real data is combined with fabricated details is the fastest-growing form of fraud targeting government programs. Centralized systems create single points of failure. And privacy-invasive approaches also generate their own costs, such as false positives that deny legitimate beneficiaries and the legal exposure of storing sensitive data on hundreds of millions of citizens.
What World ID enables. World ID can prevent benefits fraud of the kind illustrated by the GAO’s $100–135B estimate above. Nor is the problem limited to that example: government benefits fraud increased 242% between 2020 and 2024. World ID provides a proof of unique human without storing sensitive personal data centrally. A citizen demonstrates that they are real and unique, and optionally proves specific facts like age or residency, without revealing identity. For benefits programs, this eliminates synthetic identity fraud: each claim is anchored to a distinct verified human, and bots cannot apply at scale. Through its zero-knowledge architecture, World ID can provide governments the assurance they need while avoiding the data liabilities and surveillance concerns that have undermined centralized alternatives.
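The claim-anchoring mechanism can be sketched as a per-program deduplication set: each program sees its own scoped nullifier for a given person, so one human cannot claim twice within a program, and programs cannot link claimants to each other. Class and identifier names are illustrative assumptions, not a real government integration.

```python
# Sketch of anchoring benefit claims to unique humans. The program stores
# only an anonymous, program-scoped identifier: no name, no biometric data.

class BenefitsProgram:
    def __init__(self, name: str) -> None:
        self.name = name
        self._claimed: set[str] = set()

    def file_claim(self, program_nullifier: str) -> bool:
        if program_nullifier in self._claimed:
            return False  # this human has already claimed in this program
        self._claimed.add(program_nullifier)
        return True

ui = BenefitsProgram("unemployment-insurance")
assert ui.file_claim("nullifier-A")       # first claim accepted
assert not ui.file_claim("nullifier-A")   # synthetic duplicate rejected
assert ui.file_claim("nullifier-B")       # a different human is unaffected
```

Because the nullifier is scoped per program, the set stored here is useless for cross-program surveillance, which is the structural difference from a centralized biometric database like the one described above.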
Gig Economy
The problem. The gig economy is an industry with 2.5B22 users and an annual revenue of $557B23, implying an ARPU of $223. Gig economy apps are unique in being fundamentally trust-based marketplaces: the product they sell is the assurance that the driver, courier, or tasker on the other end is who they claim to be, and that the customer requesting the service is legitimate. Fraud attacks both sides of this marketplace simultaneously. On the provider side, accounts are sold or rented to unverified workers, letting buyers circumvent background checks and exploit the previous owner's reputation. A 2024 case exposed 18 people running such a scheme, using over 2,000 stolen identities, renting accounts to unqualified drivers and netting nearly $800,000 over two years. On the consumer side, fake accounts enable refund abuse, scam payments, and money laundering. This practice is rampant, with 31% of Gen Z and Millennial gig workers having sold or rented out an account.
Why current approaches fail. Platforms face a structural asymmetry: they can impose verification friction on providers, but not on consumers. Uber, for example, prompts drivers to take randomized selfies and compares them against registration photos. However, consumer-side verification is deployed sparingly because friction at that point kills conversion. Uber's rider verification, introduced in 2024, remains lightly enforced for exactly this reason. The result is a platform that verifies one side of the transaction but leaves the other largely open. This matters because consumer fraud is where volume losses accumulate: refund abuse, promotion exploitation, and fake account creation all depend on consumer accounts being easy to create and hard to attribute to a real person.
What World ID enables. World ID addresses the asymmetry directly. On the provider side, the Orb's high-fidelity biometric images serve as a reliable reference. On the consumer side, World ID's value depends on the growth of its verified user base: as more people obtain World ID credentials, platforms can require proof of human at account creation with minimal abandonment, because the marginal cost of presenting an existing credential is essentially zero. A 2024 TransUnion survey found one in three consumers had already been victimized by fraud on a gig platform, and 75% said they would switch platforms or stop using the app entirely if it happened to them. That churn risk is the clearest economic signal: as World ID's user base grows, gig platforms gain the ability to verify consumers at scale for the first time, protecting their highest-value users without sacrificing conversion.
Travel & Hospitality
The problem. Travel and hospitality is an industry with 1.4B24 annual users and annual revenue of $650B25, implying an ARPU of $464. Loyalty programs are the travel industry's most valuable retention asset, but also its most exploited one. Major travel and hospitality companies lose over $1 billion annually to loyalty fraud alone, with loyalty-linked accounts attacked four to five times more often than non-loyalty accounts. Points and miles have become digital currency and are stolen, traded, and liquidated on dark web markets at scale. In Q1 2024, 94% of attacks on the airline industry were bot-driven, and the travel industry became the single most attacked sector on the internet, absorbing 27% of all global AI-powered bot attacks. Fake accounts are created in bulk to abuse promotional offers, hold inventory without booking, and harvest referral bonuses. As AI agents begin booking travel autonomously, the problem is compounding because platforms cannot distinguish a legitimate agent acting on a traveler's behalf from a bot conducting fraud.
Why current approaches fail. Over 90% of travel websites are not fully protected against even simple bot attacks. Credential stuffing, account takeover, and fake account creation all exploit the same structural weakness: account identity is either not verified at all or verified only at registration, and rarely again. Behavioral detection is losing ground as bots are trained on legitimate booking workflows until their activity is indistinguishable from real travelers.
What World ID enables. World ID ties each account to a unique human, closing the fake account creation vector that underlies most loyalty fraud. Promotional abuse, which depends on creating new accounts to claim first-time offers repeatedly, becomes economically unviable, and points can only be earned and redeemed by the person who accumulated them. The AI agent problem also has a clean solution: agents acting on behalf of World ID-verified humans can present a delegated proof of human, allowing platforms to confidently serve legitimate agentic bookings while blocking autonomous bots. The global travel loyalty program market is valued at $29B, a retention asset that fraud directly erodes through stolen points, disputed redemptions, and degraded trust. Protecting it is not a compliance exercise; it is a revenue imperative.
Footnotes
- 1.
- 2.
- 3.
- 4.This reflects an eCommerce gross merchandise value of $6.42T, with a 10% take rate for the services themselves, https://www.shopify.com/blog/global-ecommerce-sales, https://www.oberlo.com/statistics/global-ecommerce-sales-growth
- 5.52% of the US population attends live events each year, and we scale this up to a global value: https://wifitalents.com/concert-statistics/
- 6.
- 7.We use the number of people who use the internet annually, assuming they’ve seen at least one advertisement digitally: https://datareportal.com/global-digital-overview
- 8.This considers $650B in overall digital ad spend (including social media, search engines, video sites, retail sites, and mobile apps), less $239B in social media: https://finance.yahoo.com/news/digital-ad-spending-market-size-123300420.html?guccounter=1
- 9.“Ad fraud occurs when bad actors generate fake interactions with digital advertising, such as fake clicks, impressions, or conversions. The goal of these fraudulent interactions is to slowly and steadily siphon off marketing spend without delivering genuine value.” With more detail here: https://www.anura.io/blog/truth-about-what-ad-fraud-costs
- 10.Microsoft Teams has 320M DAU, and Zoom has 300M DAU: https://www.demandsage.com/microsoft-teams-statistics/, https://www.demandsage.com/zoom-statistics/
- 11.This is a combined value for the video conferencing market and identity and account management industry: https://electroiq.com/stats/video-conferencing-statistics/, https://www.skyquestt.com/report/identity-and-access-management-market
- 12.
- 13.
- 14.
- 15.This is somewhat conservative as other sources have the gaming industry at $298B globally: https://www.grandviewresearch.com/industry-analysis/gaming-industry, https://newzoo.com/resources/trend-reports/newzoo-global-games-market-report-2025
- 16.
- 17.
- 18.This is the population of banked adults globally: https://www.worldbank.org/en/publication/globalfindex.
- 19.
- 20.
- 21.
- 22.This is the number of users of rideshare services globally in 2024. Note that 3B users used delivery apps, but given overlap we used the lower of the two: https://www.marketgrowthreports.com/market-reports/ride-hailing-app-market-114672, https://www.prosus.com/~/media/Files/P/Prosus-CORP/documents/livelihoods-in-a-digital-world.pdf
- 23.
- 24.
- 25.
Disclaimer
The contents of this post only speak to public information in existence prior to the time of its original publication and are not a representation of current undertakings or future plans. No organization or individual is making a commitment, promise, or obligation to develop, launch, or make available any feature, product, or functionality described in this post, and there is no guarantee that they will ever exist or perform as described in the future. Any future development or existence of any features, products, integrations, or functions described herein remains hypothetical. None of the content herein should be construed as implying or representing that any cryptocurrency or other crypto asset will have any value or utility now or in the future. Any figures and values are only estimates based on publicly-available information, but are not guaranteed to be accurate or comprehensive. Any conclusions or assumptions contained in the post are the opinions of its authors and may be inaccurate or subject to change. This post does not constitute an offer to sell or a solicitation of an offer to buy any security, token, or other asset, and should not be relied upon for any investment, legal, or financial purpose.