
ManaMind Secures USD 1.1 Million, Launches AI Game Testing Platform

  • Writer: tech360.tv
  • 3 min read

London-based startup ManaMind has secured USD 1.1 million in pre-seed funding to launch an artificial intelligence platform that uses autonomous agents to play and test games. The goal is to cut the tedium and cost of quality assurance in production studios.


[Image: Split-screen view — AI code testing a game on the left; a game scene with a character by a scenic ocean view on the right. Credit: ManaMind]

The AI agents operate by observing video and listening to audio, mimicking human players' perception. These agents then autonomously decide subsequent actions within a running game environment.


[Image: Two men side by side. Credit: ManaMind]

Founder and Chief Executive Emil Kostadinov experienced testing challenges firsthand as a teenager. He noted the repetitive tasks involved in manual quality assurance.


Kostadinov stated that quality assurance can account for up to 12% of a game's total budget in some productions. Existing script-based tools often lack scalability and player-like behaviour across diverse genres or platforms.


Chief Technology Officer Sabtain Ahmad, who holds a PhD in machine learning from TU Wien, developed ManaMind's proprietary vision language model. Ahmad built the system after publicly available models proved unreliable at interpreting games.


Internal tests showed Ahmad's model outperformed systems from OpenAI, Google, and Anthropic in bug detection tasks. Ahmad explained that the breakthrough came from abandoning universal automation.


[Image: AI navigating game settings — code reasoning on the left, a game menu on the right. Credit: ManaMind]

"It became obvious pretty early that no existing model could actually understand or move through a game the way we needed," Ahmad said. Because studios need automation for many different tasks, the team focused on agents that behave like real testers, relying only on audio and video.


This approach allows for flexibility unmatched by other methods that rely on code or engine hooks. Kostadinov demonstrated the platform running a vertical sync test.


During the test, the agent independently navigated from an options menu to gameplay to collect evidence before returning to complete its evaluation. "It came up with that on its own," Kostadinov said.


ManaMind operates with a two-person founding team: Kostadinov, 30, handles business and product, while Ahmad, 31, developed the technical system. The company began work ten months ago, focusing first on testing the technology.


The platform is currently pre-revenue, with its first commercial rollouts scheduled for January. Four early access partners, including THQ Nordic and several unnamed studios, are already using the platform.


The system is engine-agnostic, running purely from captured audio and video. It facilitates tests across many genres without framework changes, producing logs, evidence, and reports that integrate into existing quality assurance workflows.
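The loop described above — perceive captured video and audio, decide an action, log evidence for a report — can be sketched in broad strokes. This is a hypothetical illustration only: the class names, the `decide_action` stub standing in for ManaMind's proprietary vision language model, and the report shape are all assumptions, not the company's actual code or API.

```python
# Hypothetical sketch of a perception-only game-testing agent loop.
# The agent sees only captured video and audio (no engine hooks),
# decides its next action, and accumulates findings for a QA report.
from dataclasses import dataclass, field


@dataclass
class Observation:
    frame: bytes   # captured video frame
    audio: bytes   # captured audio chunk


@dataclass
class TestReport:
    actions: list = field(default_factory=list)
    findings: list = field(default_factory=list)


def decide_action(obs: Observation) -> str:
    # Stand-in for a vision-language-model call: in the real system,
    # a model would interpret the frame and audio before choosing.
    return "press_start" if obs.frame else "wait"


def run_test(observations) -> TestReport:
    report = TestReport()
    for obs in observations:
        report.actions.append(decide_action(obs))
        if not obs.audio:  # e.g. flag silent output as a finding
            report.findings.append("no audio detected")
    return report
```

The point of the sketch is the engine-agnostic boundary: nothing inside the loop touches game code, so the same loop could in principle run against any title that can be captured as audio and video.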


Investors recognise the complexities of modern game design as ideal for training general-purpose agents. Daniel Dippold, chief executive of EWOR, compared ManaMind's methodology to early DeepMind and OpenAI work in games.


Dippold highlighted ManaMind's delivery of commercial value over research prototypes. Imti Basharat, an investor with Heartfelt Capital, noted the agents' capability to operate in unfamiliar digital environments.


Basharat believes this provides a broad foundation for expansion beyond gaming. "Games are the perfect proving ground for AI," Kostadinov stated. He added that games combine complexity, interactivity, and scale, which are essential ingredients for AI systems to understand and act in the real world.


Kostadinov envisions the company's long-term plan evolving its perception and reasoning stack. This evolution aims to support general software testing and eventually robotics.


Currently, the company focuses on the games industry's most repetitive and effortful tasks. "QA is an innately boring, repetitive, expensive job," Kostadinov said. "People who want to build games should not spend their best years walking into every wall to see what breaks."

  • ManaMind, a London startup, secured USD 1.1 million in pre-seed funding for its AI game testing platform.

  • The platform uses autonomous agents that test games by perceiving video and audio, mimicking human players.

  • This AI-driven approach aims to reduce the high costs and repetitive nature of traditional game quality assurance.


Source: FORBES
