WhatsApp's new AI feature: Convenient or a privacy nightmare?

Meta's latest WhatsApp update allows its AI to remember user details and preferences. But are people ready for the potential data risks that come with this personalisation?

WhatsApp's new feature allows Meta AI to automatically retain specific details shared during conversations with the chatbot, such as dietary preferences and birthdays. / Photo: Reuters

Last month, after WhatsApp surpassed 100 million users in the United States, Meta co-founder and CEO Mark Zuckerberg announced new artificial intelligence advancements for the app, including a voice chat mode, image editing tools, and features designed to support businesses.

WhatsApp serves 2.9 billion users—more than the total populations of India, Europe, and North America combined—making any change to the platform resonate worldwide.

Following these developments, WhatsApp rolled out its latest beta update (2.24.22.9) on Saturday, featuring an enhanced Meta AI with memory capabilities.

The new feature allows Meta AI to automatically retain specific details shared during conversations with the chatbot, such as dietary preferences and birthdays.

Users' interactions, preferences, and personal information will now be stored across the Meta AI platform to improve future interactions.

But what really happens behind the scenes when our conversations are stored, and how much control do we truly have over what Meta AI learns about us?

Experts suggest this seemingly convenient addition masks deeper privacy concerns that extend beyond simple data collection.

'AI inferences'

What happens when the type of coffee you drink becomes a predictor of your political views?

And what if an app could deduce your income level from the content of your messages?

"AI inferences refer to predictions made by artificial intelligence based on patterns found in data," says Ignacio Cofone, a professor of law and regulation of AI from Oxford University in an interview with TRT World.

"These inferences consist not of direct user inputs or collected data, but are generated through the analysis of an array of information sources—such as your location, browsing habits, and even seemingly unrelated details like your coffee preferences or music choices," Cofone said.

The primary ethical concern with the new update is "the unpredictable inferences about individuals and groups that will be drawn from that information," he added.

Cofone said that AI systems process vast amounts of data to make connections and predictions in ways that users – and even companies themselves – cannot fully anticipate.

"Companies like Meta can't always predict what AI will infer in advance because these patterns emerge from the processing of several large datasets. Therefore, while users might consent (or not) to data collection, they have no say over the inferences drawn from it​"

'The consent illusion'

Meta promotes the new memory feature as a way to improve personalised user experiences.

The feature introduces a Memory section in Meta AI's contact card, giving users the option to view and manage the information the AI stores about them.

While Meta states that it offers the option to update or delete specific information, Cofone warned that these assurances are insufficient when dealing with AI-generated inferences:

"Once information has been processed and integrated into datasets, the inferences drawn from it are entangled in ways that make them nearly impossible to reverse or delete."

This limitation exposes a deeper issue with current privacy frameworks.

According to Cofone: "This highlights a significant gap in protection mechanisms that focus on user control, such as data sovereignty, because managing the raw collected personal data doesn't account for the consequences of the inferences that the AI creates."
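The gap can be illustrated with a short, entirely hypothetical Python sketch: deleting a record from a memory store does not undo anything already computed from it. The store and the derived statistic below are stand-ins for illustration, not Meta's implementation.

    # Hypothetical sketch: deletion removes the raw record,
    # not what was already derived from it.
    from collections import Counter

    # A stand-in for the assistant's memory store.
    memories = {
        "user42": {"diet": "vegetarian"},
        "user43": {"diet": "omnivore"},
        "user44": {"diet": "vegetarian"},
    }

    # An artefact computed while user42's record was present.
    diet_stats = Counter(m["diet"] for m in memories.values())

    # The user exercises the delete option...
    del memories["user42"]

    # ...but the derived statistic still reflects the deleted record.
    print(diet_stats)  # Counter({'vegetarian': 2, 'omnivore': 1})

The same holds, far less visibly, for the parameters of a large model trained on such data: removing a record's influence would require retraining or machine unlearning, which a delete button does not trigger.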

In his book, The Privacy Fallacy, Cofone writes, "Individual agreements leave out inferences made about us, large anonymised datasets that affect us, and information about us that other people agreed to disclose."

Even when users agree to share their data, they cannot predict how that data will be combined with other datasets to generate insights beyond their control.

In the book's 'Consent Illusion' chapter, Cofone argues that AI inferences don't just predict individual behaviour; they also shape corporate strategies and consumer interactions at a broader level.

"Inferences are important because they shape the way companies interact with individuals and society.

"They influence personalised ads, insurance rates, and even eligibility for financial services," he adds, showing the reality of AI systems—from determining loan approvals to setting insurance premiums—without users ever realising how or why such decisions are made.

'Moral hazard'

The heart of the issue lies in what Cofone describes as a moral hazard—a scenario where companies like Meta face little accountability for the risks they impose on users.

Cofone warns, "Privacy's moral hazard is the misalignment of incentives between corporations, who want to maximise profit from data, and their users, who wish to participate in the information economy without exposing themselves to harm.

"People don't know how their information will be used and what exactly can and will be done with it," he writes, highlighting the imbalance of power between users and corporations.

"While Meta can be transparent about the information it collects—such as voice data or chat history—it can't offer the same transparency for the insights and inferences the AI will generate."

These predictions, Cofone suggests, "can significantly affect people's lives, yet users often have no control over or awareness of the inferences made about them."

Ultimately, he emphasises the urgency of addressing these risks: "In a society where our information can be used to exploit us and where our wellbeing is influenced by how our information is turned into credit scores, risk assessments, and employability, developing functional protection against privacy harm is urgent."
