Foxconn Launches First Large Language Model, FoxBrain
Taiwan’s Foxconn has introduced its first large language model, FoxBrain, to enhance manufacturing and supply chain management.

The model was trained on 120 of Nvidia’s H100 GPUs, with training completed in about four weeks, the company said on Monday. Foxconn, the world’s largest contract electronics manufacturer, assembles iPhones for Apple and produces Nvidia’s artificial intelligence servers.
FoxBrain is based on Meta’s Llama 3.1 architecture and is Taiwan’s first large language model with reasoning capabilities optimised for traditional Chinese and Taiwanese language styles.
Foxconn acknowledged a slight performance gap compared with China’s DeepSeek distillation model but said FoxBrain’s overall performance is close to world-class standards.
Initially designed for internal applications, FoxBrain supports data analysis, decision-making, document collaboration, mathematics, reasoning, problem-solving, and code generation.
Foxconn plans to collaborate with technology partners to expand the model’s applications, share open-source information, and promote AI in manufacturing, supply chain management, and intelligent decision-making.
Nvidia provided support through its Taiwan-based supercomputer, Taipei-1, and offered technical consulting during the model’s training. Taipei-1, the largest supercomputer in Taiwan, is owned and operated by Nvidia in Kaohsiung.
Foxconn will reveal further details about FoxBrain during Nvidia’s GTC developer conference in mid-March.
Source: REUTERS