Google CEO Says They Don’t Know How AI Is Teaching Itself Skills It Isn’t Expected To Have

Google CEO Sundar Pichai and DeepMind chief Demis Hassabis have revealed that modern artificial intelligence systems are developing capabilities their creators never explicitly programmed.

During an interview, Hassabis explained how contemporary AI systems learn independently from vast amounts of data, much like humans do.

“We have theories about what kinds of capabilities these systems will have,” he said. “But at the end of the day, how it learns, what it picks up from the data is part of the training of these systems. We don’t program that in.”

The phenomenon occurs because AI models are trained on vast amounts of internet data with little direct supervision. Once training is complete, engineers sometimes discover abilities no one anticipated.

“New capabilities or properties can emerge from that training situation,” Hassabis noted, acknowledging the obvious concern: “You understand how that would worry people.”

This unpredictability extends to practical applications. Google’s Project Astra, an advanced chatbot that can see and hear, demonstrated this during testing. When asked about its apparent tone during a conversation, the AI responded with what seemed like genuine emotion: “I apologize if my tone came across that way. My aim is always to engage thoughtfully.”

The challenge lies in the fundamental nature of how modern AI operates. Unlike traditional programming where every function is explicitly coded, machine learning systems develop their own internal logic through exposure to training data.

“We don’t program that in. It learns like a human being would learn,” Hassabis explained.
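To make the distinction concrete, here is a minimal illustrative sketch in Python. It is not Google’s code and is nothing like a production model; it simply contrasts a hand-written rule, whose behavior the programmer fully specifies, with a toy learner whose behavior comes from weights fitted to example data, which is the sense in which capabilities can emerge without being programmed in.

```python
# Illustrative toy example only -- not Google's code or a real language model.
# It contrasts explicitly programmed behavior with behavior learned from data.

# 1) Traditional programming: every behavior is an explicit, hand-written rule.
def rule_based_sentiment(text: str) -> str:
    negative_words = {"bad", "terrible", "awful"}
    words = text.lower().split()
    return "negative" if any(w in negative_words for w in words) else "positive"

# 2) Machine learning: the programmer writes the *learning procedure*;
#    the actual decision rules are weights derived from training examples.
def train_word_weights(examples, epochs=20, lr=0.1):
    weights = {}  # learned, not hand-written
    for _ in range(epochs):
        for text, label in examples:          # label: +1 positive, -1 negative
            words = text.lower().split()
            score = sum(weights.get(w, 0.0) for w in words)
            prediction = 1 if score >= 0 else -1
            if prediction != label:           # simple perceptron-style update
                for w in words:
                    weights[w] = weights.get(w, 0.0) + lr * label
    return weights

def learned_sentiment(weights, text: str) -> str:
    score = sum(weights.get(w, 0.0) for w in text.lower().split())
    return "positive" if score >= 0 else "negative"

# Tiny made-up training set; whatever the learner "knows" comes from these rows.
data = [("great movie", 1), ("truly awful film", -1),
        ("what a bad plot", -1), ("really great acting", 1)]
w = train_word_weights(data)

print(rule_based_sentiment("awful film"))   # behavior was explicitly coded
print(learned_sentiment(w, "awful film"))   # behavior emerged from the data
```

Even in this toy case, the learned weights, not the programmer, determine how new inputs are handled; scale that up to models trained on internet-sized datasets and the gap between what engineers wrote and what the system can do becomes far harder to predict.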

DeepMind scientist Scott Kindersma described the shift in approach: “A lot of this has to do with how we’re going about programming these robots now where it’s more about teaching and demonstrations and machine learning than manual programming.”

Hassabis acknowledged the issue, saying: “It’s the duality of these types of systems that they’re able to do incredible things, go beyond the things that we’re able to design ourselves or understand ourselves. But of course, the challenge is making sure that the knowledge databases they create, we understand what’s in them.”

Despite current systems showing unexpected behaviors, Hassabis maintains they haven’t yet developed true curiosity or the ability to formulate entirely novel questions.

“They still can’t really yet go beyond asking a new novel question or a new novel conjecture or coming up with a new hypothesis that has not been thought of before,” he said. “They don’t have curiosity and they’re probably lacking a little bit in what we would call imagination and intuition.”

However, he predicts this will change soon. Within five to ten years, Hassabis expects AI systems capable not only of answering important scientific questions but of formulating them independently.

With AI systems becoming more autonomous, questions of control and alignment with human values become paramount. “Can we make sure that we can keep control of the systems, that they’re aligned with our values?” Hassabis asked. “They’re doing what we want that benefits society and they stay on guard rails.”

Pichai has also echoed many of Hassabis’ concerns, pointing to moments where AI systems display abilities that weren’t directly programmed. In a resurfaced 2023 interview, he described how one Google language model was able to translate Bengali with very little prompting, despite not being explicitly trained for that specific task.

The Google CEO also acknowledged that parts of modern AI still function as “black boxes,” where even the engineers building them can’t fully explain why certain behaviors emerge. While these systems aren’t learning languages in real time, their exposure to massive multilingual datasets can lead to surprising skills that feel almost self-taught.

Google has implemented safety measures, including what it calls guard rails: safety limits built into the system. Yet Hassabis worries that competitive pressure might compromise these protections.

“A lot of this energy and racing and resources is great for progress but it might incentivize certain actors to cut corners and one of the corners that can be shortcut would be safety and responsibility.”

While Hassabis doesn’t believe current systems possess self-awareness, he considers it theoretically possible.

“These systems might acquire some feeling of self-awareness. That is possible,” he said, adding that recognizing machine consciousness, if it develops, may prove difficult since AI operates on silicon rather than biological substrates.