MiniMax Launches M1 AI Model, Claims It Halves Compute Needs of DeepSeek-R1
- tech360.tv
- Jun 19
Shanghai-based artificial intelligence start-up MiniMax has unveiled its first open-source reasoning model, M1, which it says uses less than half the computing power of rival DeepSeek-R1 for certain tasks.

The company announced the release of MiniMax-M1 on Tuesday via its official WeChat account, positioning the model as a more efficient alternative in reasoning tasks involving up to 64,000 tokens.

According to a technical paper released alongside the model, M1 significantly reduces computational costs during both inference and large-scale training. MiniMax researchers said this efficiency gives M1 an edge over DeepSeek-R1, a model that has gained widespread attention in China’s AI sector.
The launch comes amid a surge in development of advanced reasoning models by Chinese tech firms, as they aim to compete with DeepSeek’s affordable and widely adopted R1 model. MiniMax referenced DeepSeek 24 times in its technical paper, highlighting its intent to challenge the Hangzhou-based company.
MiniMax cited third-party benchmarks showing that M1 performs on par with leading global models from Google, Microsoft-backed OpenAI, and Amazon-backed Anthropic in areas such as mathematics, coding, and domain knowledge.
Built on the 456-billion-parameter MiniMax-Text-01 foundation model, M1 uses a hybrid mixture-of-experts architecture, a compute-saving design also employed by DeepSeek. It also incorporates Lightning Attention, a technique that accelerates training, reduces memory usage, and allows the model to process longer texts.
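Lightning Attention belongs to the linear-attention family, which replaces the quadratic attention matrix with a small running summary state, so memory grows with sequence length rather than its square. The sketch below illustrates that general idea in plain NumPy; it is a toy, not MiniMax's actual kernel, and the ELU-plus-one feature map and function names are assumptions chosen for the example.

```python
import numpy as np

def causal_linear_attention(Q, K, V):
    """Causal linear attention: keeps an O(d*d) running state instead of
    an O(n*n) attention matrix. Illustrative only; not MiniMax's
    Lightning Attention implementation, which adds tiling and other
    hardware-aware optimisations."""
    n, d = Q.shape
    # Positive feature map (assumption: ELU + 1, a common choice)
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qf, Kf = phi(Q), phi(K)

    S = np.zeros((d, V.shape[1]))   # running sum of phi(K_t)^T V_t
    z = np.zeros(d)                 # running sum of phi(K_t) for normalisation
    out = np.empty_like(V)
    for t in range(n):              # single pass over the sequence
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = Qf[t] @ S / (Qf[t] @ z + 1e-6)
    return out

# Tiny usage example with random data
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 8, 16))
print(causal_linear_attention(Q, K, V).shape)  # (8, 16)
```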
M1 supports a context window of up to 1 million input tokens, about eight times that of DeepSeek-R1, and can generate up to 80,000 output tokens. This extended capacity enables the model to handle complex, real-world tasks that require long inputs and sustained reasoning.
MiniMax said the model is production-ready for sophisticated business applications and is part of its broader product rollout during what it is calling “MiniMax Week.” The company hinted at additional announcements to come.
- MiniMax launched its first open-source reasoning model, M1, claiming it halves the compute needs of DeepSeek-R1
- M1 supports up to 1 million input tokens and 80,000 output tokens
- The model matches the performance of top global AI models in maths, coding, and domain knowledge
Source: SCMP