Meta, which owns WhatsApp, said the accounts were taken down before they became active. These accounts were flagged during internal enforcement aimed at stopping scams before they reach potential victims.
Criminals behind these scams use a mix of platforms to reach people. They begin contact on one service, then push users toward encrypted or private channels. From there, they guide the person toward a payment page or cryptocurrency platform. Most schemes involve fake investment offers, low-effort income promises, or job tasks that seem profitable but require deposits.
In one recent case, Meta worked with OpenAI to stop a scam traced to Cambodia. The group behind it used AI tools to create messages that directed people to WhatsApp chats. From there, victims were redirected to Telegram and told to complete social media tasks, including liking videos. After some interaction, the scammers demanded money transfers to cryptocurrency accounts.

WhatsApp also introduced features that warn users when they are added to group chats by unknown contacts. A new group overview screen shows basic information before the chat opens, and users can exit without entering the conversation. Notifications stay muted until the user chooses to remain in the group.
Other tools focus on direct messages. WhatsApp is testing alerts that appear when a user starts a chat with someone outside their contact list. These alerts display limited context, allowing users to think twice before replying.
Criminal groups use a wide range of tactics to build trust. Some impersonate relatives, while others pretend to offer help or work opportunities. They often create urgency by claiming bills are overdue or accounts are at risk. Victims are then pressured into sending money or personal information.
Authorities in countries like Singapore have warned people to avoid acting quickly when approached by unknown contacts. Police recommend checking messages carefully, looking out for payment demands, and using app features like two-step verification.
Meta said many of the scams rely on a combination of fear and financial pressure. Messages often make quick profits sound easy, yet nearly always require an upfront payment or deposit. These patterns repeat across platforms and regions.
The company noted that its systems, including AI models, play a central role in detecting suspicious activity. Most enforcement actions happen before user reports. Even so, some users have reported mistaken account blocks or delays in appeal responses.
The situation highlights the scale of the problem and the limits of automation. Meta continues to expand its safety tools, but the complexity of the scams has raised concerns about the company’s reliance on algorithm-based enforcement. Some users and small businesses say account removals hurt their access to essential communication, prompting calls for more human review and better safeguards.