
Meta starts building its own AI chips

22 May 2023


Meta Training and Inference Accelerator can't play Crystalis yet

Meta is building its first custom chip specifically for running AI models. 

CEO Mark Zuckerberg recently said the company sees "an opportunity to introduce AI agents to billions of people in ways that will be useful and meaningful" — the chip and other infrastructure plans revealed Thursday could be critical tools for Meta to compete with other tech giants also investing significant resources into AI.

Meta's new MTIA chip, which stands for Meta Training and Inference Accelerator, is its "in-house, custom accelerator chip family targeting inference workloads."  

It is an ASIC (application-specific integrated circuit), which combines different circuits on a single chip and can be programmed to carry out one or many tasks in parallel.

The idea first appeared on Meta's whiteboards when executives realised that the company lacked the hardware and software to support demand from product teams building AI-powered features.

As a result, the company scrapped plans for a large-scale rollout of an in-house inference chip and instead started work on a more ambitious chip capable of both training and inference, Reuters reported.

Separately, Meta said it has built an AI-powered system to help its engineers write computer code, similar to tools offered by Microsoft, Amazon and Alphabet.

Meta VP and head of infrastructure Santosh Janardhan wrote in a blog post that MTIA will not be ready until 2025. 

Janardhan told TechCrunch: "This level of vertical integration is needed to push the boundaries of AI research at scale."

Meta says that it created the first generation of the MTIA, MTIA v1, in 2020, built on a 7-nanometer process. It can scale beyond its internal 128 MB of memory to up to 128 GB, and in a Meta-designed benchmark test (which, of course, has to be taken with a grain of salt) Meta claims that the MTIA handled "low-complexity" and "medium-complexity" AI models more efficiently than a GPU.

Work remains to be done in the memory and networking areas of the chip, Meta says, which present bottlenecks as AI models grow in size, requiring workloads to be split up across several chips. (Not coincidentally, Meta recently acquired an Oslo-based team that had been building AI networking tech at British chip unicorn Graphcore.) And for now, the MTIA's focus is strictly on inference, not training, for "recommendation workloads" across Meta's app family.
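To make that bottleneck concrete: once a model no longer fits in one accelerator's memory, its weights have to be sharded across several chips, and the partial results must be stitched back together over the interconnect. The sketch below is purely illustrative and is not Meta's software stack; sharded_matmul is a made-up helper, with plain NumPy arrays standing in for on-chip memory and the concatenate step standing in for the inter-chip traffic Meta says it still needs to optimise.

# Illustrative sketch only, NOT Meta's MTIA stack: why splitting a model
# across chips turns the network between them into a bottleneck.
import numpy as np

def sharded_matmul(x, weight, num_devices):
    # Column-shard the weight matrix; each shard stands in for the slice
    # of the model held by one accelerator.
    shards = np.array_split(weight, num_devices, axis=1)
    # Each "chip" computes its partial result locally, with no traffic.
    partials = [x @ shard for shard in shards]
    # Combining the pieces is the communication step that has to cross
    # the inter-chip network as models (and shards) grow.
    return np.concatenate(partials, axis=1)

x = np.random.rand(4, 256)
w = np.random.rand(256, 1024)          # pretend this is too big for one device
y = sharded_matmul(x, w, num_devices=4)
assert np.allclose(y, x @ w)           # sharding moves the work, not the result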

 
