Corporate VCs Claim AI Agent Token Spend Will Outpace The Salaries Of The Workers Using Them

The cost of running AI agents in the workplace is approaching a critical threshold that few anticipated: token expenses may soon exceed employee salaries. During a recent episode of the All-In Podcast, venture capitalists discussed their firsthand experience deploying AI agents across their organizations.

David Sacks, a partner at Craft Ventures, highlighted what he believes will be a defining trend of 2025. He predicts AI will actually increase demand for knowledge workers rather than eliminate jobs, contrary to popular predictions. He stated, “My most contrarian belief is that AI would increase demand for knowledge workers, not put them out of business.”

A UC Berkeley study tracking 200 employees over eight months found that workers using AI tools worked faster, took on broader tasks, and extended work into more hours of the day. While they reported feeling more productive, stress and burnout also increased.

Sacks said, “They actually ended up working more hours in the day. So they did more work, not less, and even more effort rather than less. Not because they were required to, but just because they were more motivated. And I think they were more motivated because their work was getting upleveled, right? They’re kind of able to offload more menial tasks to AI and it made their work more purposeful and meaningful.”

Jason Calacanis provided concrete numbers from his venture firm’s experience. Their AI agents currently cost $300 per day each when using the Claude API. That translates to roughly $100,000 annually per agent.
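Taken at face value, the arithmetic behind that figure is straightforward. The sketch below simply annualizes the reported rate; the $300-per-day number is Calacanis's own estimate from the podcast, not a published Claude API price:

```python
# Back-of-envelope annualization of the reported per-agent token spend.
# DAILY_TOKEN_COST is the figure Calacanis cites, not an official rate.
DAILY_TOKEN_COST = 300  # USD per agent per day via the Claude API

annual_cost = DAILY_TOKEN_COST * 365
print(f"Annual cost per agent: ${annual_cost:,}")
```

Run every day of the year, that works out to $109,500, in the ballpark of the "roughly $100,000" figure cited.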

With multiple agents deployed across the organization, the token budget has become a significant line item. Calacanis noted that 20 percent of work at his firm now runs through AI agents, with the most technical employees achieving 10 to 20 times the leverage of their peers.

The venture capitalist has become so convinced of the technology’s potential that he announced plans to invest in 10 to 20 startups building in the space, offering $125,000 each to come through his accelerator program.

His firm has deployed what they call “replicants” with their own Slack accounts, email addresses, and access to Google Docs and Notion. One meta-agent named Ultron manages the other four agents, checking their work and coordinating tasks throughout the day.

Calacanis raised what he sees as an under-discussed tipping point. He asked when token spending will actually surpass the salary of the employee it’s meant to support.

“This is a very interesting trend that you’re not going to hear anybody else talk about,” he said. “But when do tokens outpace the salary of the employee? Because you’re about to hit it. I’m about to hit it.”
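The break-even point Calacanis describes can be sketched in a few lines. The salary figure below is purely illustrative, not a number from the podcast:

```python
# Hypothetical break-even: the daily token budget at which an agent's
# annualized spend equals the salary of the employee it supports.
def breakeven_daily_spend(annual_salary: float, days_per_year: int = 365) -> float:
    return annual_salary / days_per_year

# For an illustrative $100,000 salary, daily spend crosses the salary
# at roughly $274/day -- close to the $300/day Calacanis reports.
print(f"${breakeven_daily_spend(100_000):.2f} per day")
```

By this rough measure, an agent already burning $300 a day in tokens has crossed the salary line for a $100,000-a-year role.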

Chamath Palihapitiya, founder of Social Capital, raised concerns about the cost structure as adoption scales. He pointed out that superstar developers are already hitting token costs that rival or exceed their compensation. For rank-and-file employees, the spending currently sits in the hundreds to low thousands annually, but the trajectory is clear.

Palihapitiya said: “I think superstar developers are already there. Yeah, I think the rank and file is probably 10, 20% max. More than likely, they’re spending a few thousand. The average non-technical employee is probably in the hundreds to low thousands.”

The cost issue becomes more pressing when considering enterprise security requirements. Palihapitiya argued that companies may need to move AI operations back to on-premise infrastructure to maintain control over proprietary data.

Using public endpoints like ChatGPT or Claude means all prompt and response data flows back to the model providers, creating potential intellectual property concerns. Running private instances increases costs but may prove necessary for companies handling sensitive information.

During the conversation, Palihapitiya also said: “Unless we have some gigantic leap forward in generating output tokens at one-tenth the cost of what they are today, which I suspect we will have. So bear with everybody for a while because I think Nvidia and Groq and Google and AMD, they’re all incentivized to massively ramp up the energy density and massively push down the token cost. That’s going to happen, but it doesn’t change the trend. And it doesn’t change the incentives on confidentiality.”

Palihapitiya suggested the token cost problem may resolve itself as chip manufacturers race to improve energy density and push down pricing. Nvidia, AMD, Google, and others have strong incentives to make token generation cheaper. However, he acknowledged this does not address the fundamental security and confidentiality concerns around using public AI endpoints.

The panelists agreed that bottom-up adoption will drive enterprise transformation faster than top-down initiatives. Early adopters bringing consumer AI tools into the workplace will demonstrate value before corporate IT departments complete lengthy evaluation processes. Sacks compared this to how software-as-a-service tools spread through organizations in previous decades.

Calacanis described the practical impact at his firm. Tasks that previously took days now complete in two hours. AI agents handle reporting work, analyze data from sales databases, clip podcast segments, and monitor social media engagement.

One agent reviews YouTube and Instagram statistics to identify viral content and suggest optimization strategies. According to Calacanis, the agents never forget tasks or make mistakes once properly configured, eliminating the need for checklists.

The technology has progressed to the point where Calacanis is upgrading to enterprise Slack to ingest every message across the organization. He plans to grant API access to all company email, giving the AI system complete visibility into operations. This level of integration was previously impossible because companies would not grant such broad access due to security concerns.