Meta AI Incognito Chat makes user conversations private, Zuckerberg says

Written on 05/14/2026

Meta AI Incognito Chat is supposed to be "completely private."

A feature called Incognito Chat is coming to Meta AI and WhatsApp soon, according to Mark Zuckerberg.

The Meta CEO announced the feature on his Facebook page and described it as allowing users a "completely private way" to interact with the company's AI assistant.

"This is the first major AI product where there is no log of your conversations stored on servers," Zuckerberg wrote.

He said the feature is similar to end-to-end encryption, which "means no one can read your conversations, even Meta or WhatsApp."

In addition to being unreadable by the platforms themselves, the conversations vanish when a user ends their session.


"To get the most from personal superintelligence, we'll all need ways to discuss sensitive topics in ways that no one else can access," Zuckerberg wrote.

Incognito Chat privacy and safety concerns

Disappearing chats raise safety questions that Zuckerberg and Meta's blog post on Incognito Chat didn't address.

While absolute privacy may encourage users to ask sensitive questions about their health, finances, or career, it also prevents Meta from knowing when users may need urgent help or intervention.

For example, conversations with Meta AI in WhatsApp indicating that a user may be considering self-harm or suicide can trigger a human review, according to Mashable's testing. The same is true for discussions of violence.

With Incognito Chat, such messages couldn't be identified, nor would any record of them remain afterward.

Meta said that it implements safeguards designed to refuse potentially harmful prompts, and that Meta AI will not comply with dangerous requests. Additionally, users who repeatedly submit harmful prompts will be temporarily blocked, according to the company.

Both scenarios — suicidal behavior and acts of public violence — are the subject of lawsuits and criminal inquiries against the biggest AI companies.

OpenAI has been sued multiple times by bereaved families of users who allege that ChatGPT coached their loved ones to take their own lives. OpenAI has denied the allegations in one case involving a 16-year-old who died.

Separately, the Florida state attorney general recently opened a criminal investigation into whether ChatGPT offered "significant" advice to a gunman who allegedly killed two people and injured five others in an April 2025 shooting.

Google, maker of the chatbot Gemini, was sued for wrongful death earlier this year by the family of an adult man after Gemini allegedly convinced him to kill himself.

"Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect," Google said in a statement following the allegations.

The lawsuits against Google and OpenAI draw heavily on user chat transcripts.

Can teens use Incognito Chat with Meta AI?

Meanwhile, in an effort to strengthen safeguards for teen Meta AI users, the company recently debuted a feature that allows parents to view the topics their teens discuss with AI.

Incognito Chat is meant for users 18 and older, according to Meta. Users will be prompted to confirm their age prior to using the feature. When legally required, Meta will implement additional age assurance methods to verify that a user is an adult.

Sarah Gardner, CEO of Heat Initiative, an advocacy group focused on online safety and corporate accountability, voiced concern over Incognito Chat, particularly given Meta's previous rollout of AI chatbots that permitted "sensual" conversations with children.

"The new features announced today should absolutely raise alarm bells for parents," Gardner said in a statement to Mashable. "We don't have confidence in Meta's record on age verification, so they need to answer a lot more questions about how they are going to guarantee kids' safety."