This is not about a single feature or policy change. It is about a pattern that has been repeated for over a decade.

When Meta bought WhatsApp in 2014 for $19 billion, its founders Jan Koum and Brian Acton made a public promise: no ads, no games, no gimmicks. Koum had grown up in Soviet Ukraine, where the government monitored private communications. He built WhatsApp explicitly as a reaction to that. Privacy was not a feature. It was the point.

Both founders eventually left Meta. Acton later said he had “sold his users’ privacy” and donated $50 million to the Signal Foundation, WhatsApp’s direct competitor. Koum left, citing disagreements over data sharing and weakening encryption.

The people who built WhatsApp to be private left because Meta was making it less private. They may have a point:

  • In 2016, Meta updated WhatsApp’s terms to share user phone numbers and account data with Facebook for ad targeting. The European Commission fined the company €110 million in 2017 for providing misleading information about this capability during the acquisition approval process.
  • In 2018, the Cambridge Analytica scandal became public. Facebook had allowed third-party apps to harvest data on up to 87 million users without their knowledge. The data was used to build psychological profiles for political micro-targeting, including the 2016 US election and the Brexit campaign. Facebook knew third-party apps were doing this and had allowed it for years.
  • In 2021, Meta issued a new WhatsApp privacy policy requiring users to accept expanded data sharing with Facebook or lose access to the app entirely. Regulators in multiple countries objected. The policy went ahead.
  • In 2025, Meta began using data from interactions with its AI tools to personalize ads across all its platforms. It announced this in a blog post. There was no opt-out for most of the world.
  • In 2026, Meta embedded its AI assistant permanently into WhatsApp, where it may be difficult or impossible to disable. Meta says interactions with its AI can be used to improve and personalize its services. Given its advertising model, it is difficult to see how that data would not ultimately feed into ad targeting. Three billion users were not asked.

This does not look like a company that made a few privacy missteps. It looks like a company that has deliberately taken a product people trusted in their personal life and used that trust to extract data for revenue.

Cambridge Analytica was not an anomaly

It is worth being precise about what Cambridge Analytica actually was, because it is often discussed as though it were a one-off data breach: a rogue actor exploiting a gap that has since been closed. It was neither.

Facebook’s platform made large amounts of personal data accessible to third parties because third parties paid for that access. Cambridge Analytica used a personality quiz app to harvest psychological data on 87 million people, most of whom had never even taken the quiz. Their data was pulled from their friends’ networks. Facebook allowed this since its business model depended on it.

What Cambridge Analytica then did with that data was not to sell advertising in any ordinary sense. It sold the ability to identify psychologically vulnerable people: people prone to anxiety, to conspiratorial thinking, to specific emotional triggers. It then pushed targeted political content at them. The clients were political campaigns. The goal was to shift votes.

Cambridge Analytica did not hack Facebook. It used Facebook in ways the platform enabled and tolerated. The product was always access to people’s psychology. Meta has since tightened what third-party developers can access directly. But it has not changed the underlying model. It still builds detailed psychological and behavioral profiles of its users. It still sells access to those profiles to whoever pays. The difference now is that Meta itself is the intermediary, rather than allowing third parties to extract the data directly.

The people buying access to your psychology today are still, largely, unknown to you. I assume they are brands, political campaigns, hedge funds running sentiment analysis, and anyone else who can afford to use Meta’s advertising tools. Meta does not publish a list, and you do not get to approve them.

The thread that connects to WhatsApp now

The business model that made Cambridge Analytica possible is the same one now being extended into WhatsApp. Collect as much personal data as possible. Make it available to buyers. Treat regulatory fines as a cost of doing business.

The difference with WhatsApp’s 2026 AI integration is the intimacy of the data. Facebook posts are things people choose to make semi-public. WhatsApp conversations are things people think of as private. An AI assistant embedded in a private messaging app, connected to an ad network, with no off switch, is designed to close that gap. It gets access to the private self that people withheld from their public social media profiles.

Meta already knows your public self from Facebook and Instagram. The WhatsApp AI is reaching for your private self. This is data that is worth considerably more, and there is no reason to believe the buyers will be different this time.

The 2026 change, specifically

Meta AI now lives inside WhatsApp. It appears in the search bar. It can join group chats. It cannot currently be fully removed or disabled in most versions. If you interact with it, those conversations may be used to build your advertising profile across all Meta platforms. People tell AI assistants things they would not post publicly. This can include health concerns, relationship problems, financial stress, and mental health struggles. Meta knows this. It built the product anyway.

The opt-out that isn’t

Users in the EU, UK, and Brazil can submit a formal objection through Meta’s privacy settings. This limits future use of their data. It does not delete data already collected. It does not apply retroactively.

Everyone else — the United States, India, Nigeria, Indonesia, and most of the world — has more limited options. Meta appears to actively lobby in Washington against the kind of federal privacy law that would change this.

Roughly 2.5 billion people have limited legal rights, if any, to stop Meta from using their WhatsApp AI conversations for advertising or political targeting. This is a business decision made by Meta.

What to do

The obvious answer is to stop using WhatsApp. Since most people will not, the next best step is to never interact with Meta AI inside WhatsApp. Do not tap it. Do not use @MetaAI. This is the only real protection currently available to most users.

If you have used it, delete those conversations. The data has likely already been processed, but deletion at least removes it from your interface. This pattern — where the burden of protection falls on the individual rather than the institution that created the risk — is not unique to Meta. It is how most of the technology industry handles security failures.

If you are in the EU, EEA or UK, submit a Right to Object request in Meta’s privacy settings.

If the conversations you are having are sensitive, use Signal and invite your contacts to do the same. Signal is run by a non-profit. It has no advertising business and no AI in the interface. It was co-founded by Brian Acton, the WhatsApp co-founder who left Meta and said publicly that he had sold his users’ privacy.

The plain version

Meta bought a private messaging app. Promised not to change it. Changed it. Drove out the founders who objected. Let a political operation profile tens of millions of people psychologically. Paid a fine that amounted to a parking ticket. Admitted nothing. Meta has now installed a permanent AI in the private conversations of 3 billion people, because the private self is worth more to advertisers than the public one ever was.

Meta consistently expands data collection in ways that align with its ad business. Extracting private profit while pushing the cost onto everyone else is not a bug in the model. It is the model. At some point, it stops being about any single feature or policy change. It becomes a pattern people recognize but continue to accept.

None of this works without participation. Three billion people do not use WhatsApp by accident. The trade-off has been clear for years: convenience in exchange for data. The only thing that changes is how far the boundary moves.

AI Transparency Statement: The author used ChatGPT and Claude to assist with research and editing. Any AI-generated content has been verified for accuracy, and the author maintained full control over the final decisions and direction of the work.
