In late March, Microsoft published at least three job openings within its Azure public cloud division, looking for candidates to work on features for an AI chip. The following month the team listed an opening for a silicon program manager and "an engineer for software/hardware codesign and optimization for AI acceleration."
Under the leadership of CEO Satya Nadella, Microsoft is showing it’s willing to spend the money it takes to have a full-featured cloud as it competes with Amazon Web Services and Google. Specialized processors are one way Microsoft can prove it’s serious about bulking up its AI services for businesses within its cloud.
Google, which trails AWS and Microsoft in the cloud market, first promoted the idea of a custom chip for AI in the cloud. The company is now on the third iteration of its tensor processing unit (TPU), an alternative to Nvidia’s graphics processing unit (GPU), which has become the early industry standard for performing AI workloads.
TPUs and GPUs can be used for training artificial neural networks, which analyze large amounts of data, such as photos, and learn to recognize patterns. Once trained, the networks can make predictions based on what they have learned, helping computers recognize different people or pick up on the presence of certain objects.
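To illustrate the train-then-predict cycle described above, here is a minimal sketch in Python with NumPy: a single-neuron network learns a toy pattern (the logical OR of two inputs) from labeled examples, then makes predictions on new inputs. This is a deliberately tiny stand-in for the photo-scale workloads that run on GPUs and TPUs, not anything resembling Microsoft's or Google's actual software.

```python
import numpy as np

# Training data: four labeled examples of the OR pattern.
# In real AI workloads, X would be millions of photos and y their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # the network's learnable weights
b = 0.0                 # the network's learnable bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training phase: gradient descent nudges the weights so the
# network's outputs move toward the correct labels.
for _ in range(2000):
    p = sigmoid(X @ w + b)          # current predictions
    grad_w = X.T @ (p - y) / len(y) # gradient of the loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= 1.0 * grad_w
    b -= 1.0 * grad_b

# Inference phase: the trained network predicts labels for inputs.
def predict(inputs):
    return (sigmoid(inputs @ w + b) > 0.5).astype(int)

print(predict(X))  # the learned OR pattern: [0 1 1 1]
```

The heavy lifting in both phases is matrix arithmetic (`X @ w` and its gradients), which is exactly the kind of computation that chips like GPUs and TPUs are built to accelerate.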
In a memo to employees in March, Nadella mentioned AI 18 times. But that's mostly futuristic. The public cloud is one of the main businesses driving growth today, and Microsoft announced last week that it's spending $7.5 billion to acquire GitHub, a popular collaboration tool for developers.
Semiconductors aren't entirely new for Microsoft. The company has taken steps to supercharge AI computing in its cloud with field-programmable gate array (FPGA) chips through an effort called Project Brainwave. Today, those chips are available for training and running AI models with Azure's ready-to-use machine learning software. And last year Microsoft said it was building a custom AI chip for the next version of its HoloLens mixed reality headset — though that's outside of its cloud unit.
The new job openings are not part of the FPGA program, a Microsoft spokesperson told CNBC, but are related to the work the company does in designing its own cloud hardware, an initiative called Project Olympus.
“That group has been working on server design, silicon and AI to enable cloud workloads for some time,” the spokesperson said in a statement.
Jason Zander, executive vice president for Azure, referenced that work in an interview last month with CNBC’s Jon Fortt.
It’s a costly endeavor. Patrick Moorhead, an analyst at Moor Insights & Strategy, estimates that Google has spent $200 million to $300 million on its TPU project. Microsoft has shown its willingness to spend, reporting a record $3.5 billion in capital expenditures last quarter.
In an April interview with CNBC, Doug Burger, a Microsoft technical fellow who has led the FPGA work, was asked if the company is working on an AI server chip. He declined to say.
“If … you want to pick the economically rational decision, of course you’re going to look at the various options,” Burger said. “We do the model and we say, ‘What’s the right thing for us?'”