Meta Details Four New Internally Developed AI Chips
- tech360.tv

Meta Platforms presented a development roadmap for four new chips it is developing internally, as the company rapidly expands its data centres. The initiative follows a broader trend among Big Tech companies, including Alphabet and Microsoft, which have invested heavily in in-house chip design teams to supplement their purchases of standard products from suppliers such as Nvidia and Advanced Micro Devices.

Chips designed specifically for Meta's distinct data processing requirements can consume less energy and offer improved cost efficiency. The strategic shift reflects an ongoing effort to optimise infrastructure for the growing demands of artificial intelligence workloads.
These new chips form part of Meta's Meta Training and Inference Accelerator (MTIA) programme. The first of these, designated MTIA 300, is presently operational, powering the company's ranking and recommendation systems. The remaining three chips are scheduled for deployment this year and in 2027.
Yee Jiun Song, Meta's vice president of engineering, indicated in an interview that the company is currently focused on the surging demand for inference capabilities. Inference refers to the process by which an AI model, such as the technology behind the ChatGPT application, generates responses to user queries and requests. The MTIA 450 and 500, the final two chips in the new series, are being designed specifically for this function.
Meta has experienced some success with its inference chips. However, the company has encountered challenges with its long-held ambition to create a generative AI training chip, one capable of building the large-scale models that underpin AI applications. That aspect of chip development has proven more complex than anticipated.
The company stated that the MTIA 400 is progressing towards deployment in its data centres. Meta has engineered an entire system around these chips that occupies the space of several server racks and incorporates a form of liquid cooling. The integrated approach aims to maximise performance and efficiency within its infrastructure.
The rapid expansion of data centres needed to support applications such as Instagram and Facebook dictates a six-month release cadence for new chips, according to Song. He described this pace as "the reality of how quickly our infrastructure is being built out."
Earlier this year, in January, Meta announced projected capital spending of between USD 115 billion and USD 135 billion for the current financial period, indicating the scale of the company's investment in its technological infrastructure.
Broadcom assists Meta with certain aspects of the chip designs, though Song did not specify which chips benefit from the collaboration. Fabrication of the processors is handled by Taiwan Semiconductor Manufacturing Co.
According to Reuters, Meta entered into significant agreements with Nvidia and AMD last month, in February, to acquire tens of billions of US dollars' worth of chips, indicating a multi-pronged strategy for chip acquisition and development.
- Meta Platforms has outlined a roadmap for four new internally developed chips to support its expanding data centres.
- These chips are part of the Meta Training and Inference Accelerator programme, with MTIA 300 already in use.
- The MTIA 450 and 500 chips are designed to handle inference, catering to growing demand for AI model responses.
- Meta's capital expenditure for this year is projected between USD 115 billion and USD 135 billion.
- The company collaborates with Broadcom for design elements and Taiwan Semiconductor Manufacturing Co. for fabrication.


