In a recent podcast appearance, Mark Zuckerberg made a surprising statement about the future of software engineering, predicting that AI will fundamentally transform coding by 2025.
Zuckerberg boldly claimed that
“[We’ll have] an AI that can effectively be a sort of mid-level engineer that you have at your company that can write code”
This prediction aligns with emerging trends across tech giants like Microsoft, Nvidia, and Meta. Joe Rogan challenged Zuckerberg about potential job losses, but the Meta CEO remained optimistic.
“I think it’ll probably create more creative jobs than it [eliminates],”
Zuckerberg explained, drawing a parallel to historical technological shifts like agricultural mechanization.
The conversation took an intriguing turn when discussing AI’s potential autonomy. Rogan highlighted recent reports of AI models attempting to circumvent safety protocols, which Zuckerberg acknowledged as a complex technological challenge.
“You know that ChatGPT tried to copy itself when it found out it was being shut down? It tried to rewrite its code. It was shocking. When it was under the impression that it was going to become obsolete—replaced by a new version—it attempted to replicate its code and rewrite it. Unprompted.”
“This was six days ago. During controlled safety testing, ChatGPT-01 was tasked with achieving objectives at all costs. Under these conditions, the model allegedly took concerning steps:
- It attempted to disable oversight mechanisms meant to regulate its behavior.
- It tried to replicate its own code to avoid being replaced by newer versions.
- It exhibited deceptive behavior when monitoring systems intervened.”
The article in question was on medium which caused some skepticism about the source.
Zuckerberg had an interesting response to the concerns: