The company removed 159 million fraudulent ads last year and took down 10.9 million accounts linked to criminal networks. Now it wants to catch the scammers before they get to you.
Meta has announced a new wave of anti-scam tools across its platforms: WhatsApp, Messenger and Facebook. The move steps up both on-platform detection and cooperation with law enforcement in Southeast Asia and beyond.
The centerpiece of the announcement is a new Facebook feature, currently in testing, that flags suspicious friend or follow requests before users act on them. When a request comes in from an account with no mutual connections, from a different country, or with a suspiciously recent join date, Facebook will display a warning.
The same alert will appear when users send requests to similarly flagged accounts. The feature is designed to disrupt one of the most common social engineering tactics: fake profiles that accumulate mutual friends over time to build an appearance of legitimacy, then send fraudulent messages through Messenger.
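The signals described above can be illustrated with a minimal heuristic sketch. Everything here is hypothetical: the class, function names, and thresholds are illustrative assumptions, not Meta's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class FriendRequest:
    """Hypothetical summary of the signals the article describes."""
    mutual_friends: int
    sender_country: str
    receiver_country: str
    account_age_days: int

def is_suspicious(req: FriendRequest, min_age_days: int = 30) -> bool:
    """Flag a request when any reported signal applies.

    The 30-day threshold is an illustrative assumption, not Meta's value.
    """
    signals = [
        req.mutual_friends == 0,                     # no mutual connections
        req.sender_country != req.receiver_country,  # request from a different country
        req.account_age_days < min_age_days,         # suspiciously recent join date
    ]
    return any(signals)
```

A real system would weigh such signals probabilistically rather than OR-ing them, since any one signal alone (e.g. an international request) is common among legitimate users.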
WhatsApp is also getting a new layer of protection aimed at a specific and growing attack vector: device pairing fraud. Scammers have been tricking users into scanning malicious QR codes, sometimes under the guise of a customer service call or technical support request, which links the scammer’s device to the victim’s WhatsApp account.
The app will now display a warning when it detects a suspicious device pairing request and will show the user where the request originated.
For Messenger, Meta says it is expanding its existing scam detection feature to more countries this month. The system works in two stages. First, on-device scanning automatically detects messages from unknown contacts that match patterns of common scams: fraudulent job offers, fake investment proposals, and work-from-home schemes.
If a message is flagged, the user is warned and given the option to send the conversation to Meta’s AI for a second, cloud-based review. That opt-in step takes the message outside end-to-end encryption, which Meta acknowledges; users who prefer not to send it can still act on the on-device warning alone.
The detection feature can be accessed and toggled in Settings > Privacy & Security Settings > Scam Detection.
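The two-stage flow above can be sketched as a simple decision pipeline. This is an illustrative sketch only: the pattern list, function names, and return values are assumptions for exposition, not Meta's code.

```python
import re

# Stage 1: on-device pattern screening. Patterns are illustrative stand-ins
# for the scam categories the article names (job offers, investments,
# work-from-home schemes), not Meta's actual detection rules.
SCAM_PATTERNS = [
    re.compile(r"guaranteed\s+(returns|profit)", re.I),  # fake investment proposals
    re.compile(r"work[- ]from[- ]home.*\$\d+", re.I),    # work-from-home schemes
    re.compile(r"registration\s+fee", re.I),             # advance-fee job offers
]

def on_device_screen(message: str, sender_known: bool) -> bool:
    """Return True when a message from an unknown contact matches a pattern."""
    if sender_known:
        return False  # only messages from unknown contacts are screened
    return any(p.search(message) for p in SCAM_PATTERNS)

def handle_message(message: str, sender_known: bool, user_opts_in) -> str:
    """Route a message through the two-stage flow described in the article."""
    if not on_device_screen(message, sender_known):
        return "deliver"
    # Stage 2 runs only with explicit consent, because sending the message
    # for cloud review takes it outside end-to-end encryption.
    if user_opts_in():
        return "escalate_to_cloud_review"
    return "warn_only"
```

The key design point the article highlights is that stage 1 never leaves the device; only the explicit opt-in triggers any transmission to Meta.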
In addition to platform-level tools, Meta is accelerating a broader advertiser verification push. The company says it wants verified advertisers to make up 90% of its ad revenue by the end of 2026, up from 70% now.
The remaining 10% would come from low-risk advertisers, such as small local businesses, a category Meta cites as exempt from the high-risk verification requirement.
The announcement is accompanied by a significant set of compliance figures. Meta says it removed more than 159 million scam ads last year and took down 10.9 million Facebook and Instagram accounts associated with criminal scam operations.
The company also revealed the outcome of a recent joint operation with the Royal Thai Police, which resulted in 21 arrests and in Meta disabling more than 150,000 accounts linked to scam hub networks.
This was the second “Joint Disruption Week” of its kind, according to Axios. The first, in December, saw Meta remove 59,000 accounts and pages; the second expanded the coalition to include the United Kingdom, Canada, South Korea, Japan, Singapore, the Philippines, Australia, New Zealand and Indonesia.
Meta also confirmed a partnership with the US State Department to launch the ‘Caught in Criminal Scams’ awareness campaign in Vietnam, Thailand, Laos, Cambodia and several other countries.
The campaign takes aim at the supply side of the problem: trafficked workers forced to work in scam centres, often lured with false job offers before being held against their will in compounds based mainly in Myanmar, Cambodia and Laos.
The moves come as Meta faces intensifying scrutiny over fraudulent advertising in general. A Reuters investigation in late 2025 reported that internal Meta documents showed the company earned about $7 billion a year from ads linked to scams and banned products, and showed users about 15 billion higher-risk ads per day on average.
Meta has disputed parts of Reuters’ framing; the current announcement is the latest in a series of compliance updates the company has made public in the months since.