ManaMind Secures USD 1.1 Million, Launches AI Game Testing Platform
- tech360.tv
London-based startup ManaMind has secured USD 1.1 million in pre-seed funding to launch an artificial intelligence platform whose autonomous agents play and test games, aiming to cut the tedium and cost of quality assurance at production studios.

The AI agents operate by watching video and listening to audio, mimicking a human player's perception, and autonomously decide their next actions within a running game environment.

Founder and Chief Executive Emil Kostadinov experienced testing challenges firsthand as a teenager. He noted the repetitive tasks involved in manual quality assurance.
Kostadinov stated that quality assurance can account for up to 12% of a game's total budget in some productions. Existing script-based tools often lack scalability and player-like behaviour across diverse genres or platforms.
Chief Technology Officer Sabtain Ahmad, who holds a PhD in machine learning from TU Wien, developed ManaMind's proprietary vision language model. Ahmad built the system after public models proved unreliable for interpreting games.
Internal tests showed Ahmad's model outperformed systems from OpenAI, Google, and Anthropic in bug detection tasks. Ahmad explained that the breakthrough came from abandoning universal automation.

"It became obvious pretty early that no existing model could actually understand or move through a game the way we needed," Ahmad said. Because studios need automation for many different tasks, the team focused on agents that behave like real testers, working only from audio and video.
This gives the approach a flexibility unmatched by methods that rely on code or engine hooks. Kostadinov demonstrated the platform running a vertical sync test.
During the test, the agent independently navigated from an options menu to gameplay to collect evidence before returning to complete its evaluation. "It came up with that on its own," Kostadinov said.
ManaMind operates with a two-person founding team: Kostadinov, 30, handles business and product, while Ahmad, 31, built the technical system. The company began work ten months ago, initially focused on testing the technology.
The platform is currently pre-revenue, with its first commercial rollouts scheduled for January. Four early access partners, including THQ Nordic and several unnamed studios, are already using it.
The system is engine-agnostic, running purely from captured audio and video. It facilitates tests across many genres without framework changes, producing logs, evidence, and reports that integrate into existing quality assurance workflows.
Investors see the complexity of modern game design as an ideal training ground for general-purpose agents. Daniel Dippold, chief executive of EWOR, compared ManaMind's methodology to early DeepMind and OpenAI work in games.
Dippold highlighted ManaMind's focus on delivering commercial value rather than research prototypes. Imti Basharat, an investor with Heartfelt Capital, noted the agents' ability to operate in unfamiliar digital environments.
Basharat believes this provides a broad foundation for expansion beyond gaming. "Games are the perfect proving ground for AI," Kostadinov stated. He added that games combine complexity, interactivity, and scale, which are essential ingredients for AI systems to understand and act in the real world.
Kostadinov's long-term plan is to evolve the company's perception and reasoning stack to support general software testing and, eventually, robotics.
Currently, the company focuses on the games industry's most repetitive and effortful tasks. "QA is an innately boring, repetitive, expensive job," Kostadinov said. "People who want to build games should not spend their best years walking into every wall to see what breaks."
- ManaMind, a London startup, secured USD 1.1 million in pre-seed funding for its AI game testing platform.
- The platform uses autonomous agents that test games by perceiving video and audio, mimicking human players.
- This AI-driven approach aims to reduce the high costs and repetitive nature of traditional game quality assurance.
Source: FORBES