
German defence technology startup SWARM Biotactics has deployed programmable cyborg insect swarms for reconnaissance, providing them to paying NATO customers, including German military forces. This move takes a seemingly fictional concept from experiments to operational field use.


Gloved hands hold two hornets with attached tiny devices. The setting is a lab with a green and white background.
Credit: LinkedIn/SWARM Biotactics

In a statement dated February 25, Chief Executive Officer Stefan Wilhelm confirmed the company’s systems have undergone field-testing and validation in both European and U.S. operating environments. Wilhelm stated, "One year ago, this didn’t exist. Today, we deploy programmable cyborg insect swarms, field-tested and operational with paying NATO customers."


This announcement follows earlier disclosures in July 2025 that the Kassel-headquartered firm was developing insect-based reconnaissance platforms. These platforms use live cockroaches fitted with miniature electronic modules, now confirmed to be in real-world validation phases with actual defence customers.


Robotic bugs race on a marked grey track indoors. White lines and black blocks are visible. People stand in the blurred background.
Credit: SWARM Biotactics

The company states its platforms combine living insects with bioelectronic neural interfaces, onboard sensors, edge artificial intelligence processing, and secure communications links. Electrical stimulation facilitates guided movement, and swarm autonomy software allows multiple units to function as a coordinated system.


Wilhelm elaborated, "What you’re seeing is real. Living organisms, controlled through bioelectronic neural interfaces, carrying sensors, edge AI, and secure comms. Moving as a coordinated unit. Scaling through breeding, not factories."


The insects carry compact "backpacks" that integrate control electronics, sensing devices, and encrypted short-range communication modules. These payloads enable real-time data collection and transmission in areas inaccessible to conventional drones or ground robots.


Unlike traditional unmanned systems using mechanical propulsion, the company’s method leverages insects’ natural locomotion, adding digital command and sensing capabilities. This biological basis permits movement through confined, cluttered, or structurally compromised spaces with minimal acoustic and visual signatures.


SWARM Biotactics has developed a full-stack architecture encompassing neural interface hardware, swarm autonomy software, modular payload integration, and mission-control systems. Wilhelm asserted, "No other company in the Western world is building this."


The company has also confirmed raising about USD 15.36 million, including a previously disclosed EUR 10 million seed round and EUR 3 million in pre-seed financing. This capital supports expansion across Germany and the United States.


Germany has launched a broader initiative to accelerate defence innovation, specifically in artificial intelligence and autonomous systems. This involves increasing defence spending and integrating startups into national procurement.


In early 2026, European leaders also articulated a desire to strengthen indigenous defence capabilities. This desire arises amid debates about strategic autonomy following tensions over U.S. policy toward Greenland, accelerating discussions about Europe’s role within NATO.


While the EU strives for strategic autonomy, homegrown defence firms like SWARM Biotactics are expected to receive support and the right environment to develop their technologies. Wilhelm framed the company’s approach as distinct from conventional robotics development. He stated, "We’re not building a better drone. We’re building a different scaling law for physical intelligence — one where capability compounds through biology, not engineering complexity."


Autonomous systems development has traditionally concentrated on aerial drones, ground vehicles, and maritime platforms. By contrast, biologically integrated systems have largely remained in laboratory research phases.


SWARM Biotactics noted that other nations are also investing in bio-robotics for military applications. Wilhelm warned, "Meanwhile, adversaries are investing heavily in bio-robotics for military applications. The capability gap is real, and it’s closing — from the other side."


With confirmed field tests, paying defence customers, and continued funding, the startup’s insect-based platforms represent a new category of reconnaissance technology with widespread implications.

  • SWARM Biotactics has deployed programmable cyborg insect swarms for NATO reconnaissance.

  • The systems have undergone field-testing and validation in European and U.S. operating environments.

  • Platforms combine living insects with bioelectronic neural interfaces, sensors, and secure communications for guided movement.


Honor, a Chinese smartphone maker, has unveiled an AI robot phone and its first humanoid assistant. These introductions, made ahead of MWC Barcelona, signal an aggressive move into artificial intelligence-powered hardware.


Small rotating camera mounted on a smartphone against a starry blue-black background. The design is sleek and modern.
Credit: HONOR

Honor showcased its AI Robot Phone in Barcelona. The device features a motorised, three-axis gimbal arm that tracks motion and interacts with users via camera movement.



The camera arm and multimodal AI enable all-angle video calls. The phone can also respond to user commands with nods or dance to music.


A mobile camera faces an ARRI cinema camera on a black background. Text reads: "Shaping the Future of Mobile Imaging with ARRI."
Credit: HONOR

Honor’s robot phone vision contrasts with other AI agent handsets, which typically rely on screen capture and voice commands to carry out tasks. The gimbal-mounted camera's stability and object-tracking abilities could challenge action camera makers such as DJI.


The Shenzhen-based handset maker also debuted its first humanoid robot. It is expected to act as a shopping assistant, a workplace inspector, and a companion, leveraging Honor’s technology expertise.


This aggressive pivot to AI hardware follows Honor's pledge to invest USD 10 billion over the next five years. The "Alpha" investment plan aims to transform Honor from a smartphone maker into an AI device ecosystem company.


Honor also launched its latest flagship foldable phone, Magic V6. The device features an enhanced hinge and a silicon-carbon battery to extend power while keeping it slim.


Magic V6 is powered by Qualcomm’s Snapdragon 8 Elite Gen 5, a 3-nanometre mobile platform. Its integrated AI capabilities enhance productivity in multitasking and content creation.


Cross-operating system tools in the Magic V6 enable seamless interoperability with Apple’s iPhone ecosystem.


Honor's shift occurs amidst declining sales in China due to fierce competition from Huawei Technologies and Apple. Its domestic ranking fell to sixth place in 2025 from fourth the previous year.


Shipment volume decreased by 12.6%, according to Counterpoint. By contrast, Honor maintained momentum in Europe, recording 4% shipment growth for the full year 2025, with Omdia data showing a fifth-place ranking and 3% market share.

  • Honor launched an AI robot phone featuring a motorised gimbal arm and a humanoid assistant.

  • The robot phone offers all-angle video calls and interacts through camera movement.

  • The company pledged USD 10 billion over five years for its "Alpha" AI transformation plan.


Source: SCMP

Australia’s internet regulator is considering action against search engines and app stores that provide access to artificial intelligence services failing to verify user ages. This move comes as more than half of surveyed AI services have not publicly outlined compliance steps ahead of an upcoming deadline.


Close-up of a circuit board with a central black chip labeled "AI." Silver and gold components are arranged around, creating a tech vibe.
Credit: UNSPLASH

The warning signals one of the most assertive global efforts to regulate AI companies, which face a growing number of lawsuits alleging they failed to prevent, or even encouraged, self-harm and violence. Researchers also warn that these platforms may harm youth mental health more than social media.


Australia became the first country to ban social media for teenagers in December, citing mental health concerns, prompting similar proposals from other world leaders. The nation is now leading a comparable crackdown on AI by imposing age restrictions on content accessed via the technology.


From March 9, internet services in Australia, including search tools like OpenAI’s ChatGPT and lesser-known companion chatbots, must prevent users under 18 from accessing pornography, extreme violence, self-harm, and eating disorder content. Non-compliance could result in fines of up to A$49.5 million (USD 35 million).


An eSafety commissioner spokesperson stated the organisation will "use the full range of our powers where there is non-compliance," including "action in respect of gatekeeper services such as search engines and app stores that provide key points of access to particular services."


OpenAI and Character.AI have faced wrongful death lawsuits over young users' interactions with their chatbots. OpenAI acknowledged this week that it deactivated the ChatGPT account of a Canadian teen mass shooting suspect months before an attack, without informing authorities.


While Australia has not reported chatbot-linked violence or self-harm, the regulator has reported being told about children as young as 10 talking to AI-powered interactive tools for up to six hours daily. The eSafety spokesperson expressed concern that "AI companies are leveraging emotional manipulation, anthropomorphism, and other advanced techniques to entice, entrance, and entrench young people into excessive chatbot usage."


Top app store operator Apple did not respond but stated on its website last week that it would employ "reasonable methods" to stop minors from downloading 18+ apps in Australia and other jurisdictions introducing age restrictions, without detailing these methods. Google, Australia’s dominant search engine provider and No.2 app store operator, declined to comment.


Jennifer Duxbury, head of policy at internet industry group DIGI, who led the drafting of the AI code before it was signed off by the regulator, noted that eSafety was trying to inform chatbot services about the new rules. Duxbury added that "ultimately any service operating in Australia is responsible for understanding its legal obligations and ensuring it meets them."


A review conducted a week before Australia's deadline found that only nine of the 50 most popular text-based AI products had implemented or announced plans for age assurance systems. This review was based on each platform's response to prompts asking for restricted content and moderation policies, published statements including terms of service, and statements to Reuters.


Another 11 platforms had universal content filters or planned to block all Australian users, which would comply by keeping restricted content from everyone. This left 30 platforms with no apparent steps taken to adhere to the new regulations.


Most large chat-based search assistants, including ChatGPT, Replika, and Anthropic’s Claude, had begun rolling out age assurance systems or blanket filters. Chatbot provider Character.AI restricted open-ended chat for users under 18.


Companion chatbot providers Candy AI, Pi, Kindroid, and Nomi told Reuters they planned to comply without elaborating. HammerAI stated it would initially block its services from Australia to meet the code’s requirements.


However, these compliant services were a minority among companion chatbots. Three-quarters had no functional or planned filtering or age assurance, and one-sixth lacked a published email address for reporting suspected breaches, which is also a requirement.


Elon Musk’s chat-based search tool, Grok, which is under investigation globally for suspected failure to stop production of synthetic sexualised imagery of children, had no age assurance measures or text-based content filters, Reuters found. Grok’s parent company, xAI, did not respond to a request for comment.


Lisa Given, director of RMIT University’s Centre for Human-AI Information Environments, said the Reuters findings were unsurprising because "most of these tools are being designed without a view to potential harms and the need for those kinds of safety controls." Given added, "It feels as though ... we're beta testing all of these things for these companies and they're trying to see how far society is willing to be pushed."

  • Australia may compel app stores and search engines to block AI services that fail age verification.

  • New regulations from March 9 require AI services to restrict access for users under 18 to content such as pornography or self-harm material.

  • Non-compliance could lead to fines of up to A$49.5 million (USD 35 million).


Source: REUTERS

Tech360tv is Singapore's tech news and gadget reviews platform. Join us for our in-depth PC, smartphone, audio, camera, and other gadget reviews.


© 2021 tech360.tv. All rights reserved.
