When Microsoft invested $1 billion in OpenAI in 2019, it agreed to build a massive, cutting-edge supercomputer for the artificial intelligence research startup.
OpenAI was trying to train an increasingly large set of artificial intelligence programs called models, which were ingesting ever greater volumes of data and learning ever more parameters, the variables the AI system tunes through training and retraining.
The outfit needed access to powerful cloud computing services for long periods of time. To meet that challenge, Microsoft had to find ways to string together tens of thousands of Nvidia's A100 graphics chips -- the workhorse for training AI models -- and change how it positions servers on racks to prevent power outages.
Scott Guthrie, the Microsoft executive vice president who oversees cloud and AI, wouldn't give a specific cost for the project, but said it's "probably larger" than several hundred million dollars.
Vole uses that same set of resources it built for OpenAI to train and run its own large artificial intelligence models, including the new Bing search bot introduced last month.
It has also sold the system to other customers. The software giant is already at work on the next generation of the AI supercomputer, part of an expanded deal with OpenAI in which Microsoft added $10 billion to its investment.