DuckDuckGo founder: AI surveillance should be banned

The founder of privacy-focused search engine DuckDuckGo is sounding the alarm about artificial intelligence surveillance, warning that Congress must act swiftly to prevent a repeat of the privacy disasters that have plagued online tracking for decades.

Gabriel Weinberg argues that AI presents far more serious privacy threats than traditional search engines ever did. While search queries reveal personal interests and problems, AI conversations expose something much deeper and more dangerous.

“Longer input invites more personal information to be provided, and people are starting to bare their souls to chatbots,” Weinberg explains in a blog post. “The conversational format can make it feel like you’re talking to a friend, a professional, or even a therapist.”

This intimate communication style creates unprecedented opportunities for manipulation. Unlike traditional advertising that follows users around the web, AI can craft personalized arguments tailored to individual psychological triggers. The technology can incorporate “an improperly sourced ‘fact’ that you’re unlikely to fact-check or a subtle product recommendation you’re likely to heed,” according to Weinberg.

The manipulation potential grows even more concerning when combined with chatbot memory features that learn from past conversations. This allows AI systems to become increasingly attuned to what persuades each individual user, making influence campaigns far more subtle and effective than crude banner ads.

Recent privacy breaches underscore the urgency of the situation. Weinberg points to a series of alarming incidents from just the past few weeks: Grok leaked hundreds of thousands of private chatbot conversations, Perplexity’s AI agent proved vulnerable to hackers seeking personal information, and major AI companies are openly discussing plans for comprehensive user tracking.

“OpenAI is openly talking about their vision for a ‘super assistant’ that tracks everything you do and say (including offline),” Weinberg notes. “And Anthropic is going to start training on your chatbot conversations by default.”

These developments represent a dramatic shift from earlier, more privacy-conscious approaches. Where companies once defaulted to protecting user conversations, many are now moving toward data collection as the standard practice.

DuckDuckGo has responded by launching Duck.ai, offering protected chatbot conversations and anonymous AI-assisted search results. Weinberg sees this as proof that privacy-respecting AI services are entirely feasible, contradicting industry claims that surveillance is necessary for functionality.

However, he acknowledges the broader regulatory landscape remains bleak. The United States still lacks comprehensive online privacy legislation, let alone constitutional privacy protections. While there appears to be some congressional appetite for AI-specific regulations, Weinberg warns that time is running out.

“Every day that passes further entrenches bad privacy practices,” he emphasizes. “Congress must move before history completely repeats itself and everything that happened with online tracking happens again with AI tracking.”

The stakes extend beyond individual privacy concerns. Research has already shown that chatbots can be more persuasive than humans, with some users falling into what researchers describe as delusional spirals after extended AI interactions. As these systems become more sophisticated and personalized, their capacity for both commercial and ideological manipulation will only grow.

Weinberg’s call for an outright ban on AI surveillance reflects a recognition that half-measures may prove insufficient against such powerful technologies. With AI systems becoming increasingly integrated into daily life, establishing strong privacy protections now could be the last opportunity to prevent a surveillance infrastructure far more invasive than anything seen in the early internet era.

The DuckDuckGo founder remains committed to offering privacy-protected alternatives regardless of regulatory outcomes, but his warning is clear: without swift congressional action, AI surveillance may become as entrenched and difficult to regulate as online tracking has proven to be over the past two decades.