
Meta Platforms announced on Tuesday that its WhatsApp messaging service is introducing a real-time translation feature. This new tool aims to facilitate cross-language communication among its more than 3 billion users.


[Image: "Introducing Message Translations" announcement graphic, with "Good luck!" shown in multiple languages. Credit: WhatsApp]

Initially, the translation feature will support six languages on Android devices and 19 on iPhones. Meta plans to add more languages over time.


[Image: list of supported languages, including Arabic, Dutch, English, French, German, Hindi, Italian, and Japanese. Credit: Apple]

Meta stated in a blog post that message translations were designed with user privacy in mind. Translations happen directly on the user’s device, ensuring WhatsApp cannot access the content.
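WhatsApp has not published its implementation, but the general on-device pattern is straightforward to illustrate: a translation model is downloaded once, then run locally, so plaintext never leaves the phone. The sketch below uses an open-source MarianMT model as a stand-in; the model name and helper function are illustrative, not WhatsApp's actual stack.

```python
# Minimal sketch of on-device translation, assuming an open-source
# MarianMT model as a stand-in; WhatsApp has not disclosed its stack.
from transformers import MarianMTModel, MarianTokenizer

MODEL_NAME = "Helsinki-NLP/opus-mt-en-es"  # English -> Spanish

# Fetched once (like a per-language model pack), then cached locally
# and usable fully offline.
tokenizer = MarianTokenizer.from_pretrained(MODEL_NAME)
model = MarianMTModel.from_pretrained(MODEL_NAME)

def translate(text: str) -> str:
    """Run the model locally; the plaintext never reaches a server."""
    batch = tokenizer([text], return_tensors="pt")
    output = model.generate(**batch)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(translate("Good luck!"))  # e.g. "¡Buena suerte!"
```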


Users can access the feature by long-pressing a message and then tapping "Translate" to view the content in another language. This provides a straightforward method for on-demand translation.


The translation capability covers personal chats, group chats, and channel updates, making it applicable across a wide range of communication settings.


Additionally, Android users have the option to activate automatic translation for an entire chat thread. This setting translates all subsequent messages in that conversation by default, streamlining cross-language dialogues.

  • WhatsApp is introducing a real-time message translation feature.

  • The feature supports six languages on Android and 19 on iPhones initially.

  • Translations occur on-device to protect user privacy.


Source: REUTERS

A new humanoid robot from the Korea Advanced Institute of Science and Technology (KAIST) has showcased advanced lower-body movements, including high-speed running and Michael Jackson's iconic Moonwalk. Researchers from KAIST's Department of Mechanical Engineering and the Humanoid Robot Research Center (Hubo Lab) developed the robot.


[Image: the robot's legs in motion in a lab setting, with a researcher observing. Credit: KAIST]

A video released by the Hubo Lab highlights the robot's stability, adaptability, and ability to navigate complex environments. Notably, it performs these movements without relying on vision-based sensors.



The two-minute demonstration opens with the robot's legs gliding backwards in a Moonwalk sequence on a conveyor platform. It then transitions to a smooth walk at 3.53 km/h, accelerates into a run at 9.36 km/h, and peaks at 11.88 km/h.


[Image: the robot running on a treadmill, supported by cables, labelled "Running (3.3 m/s)". Credit: KAIST]

That peak of 11.88 km/h, the 3.3 m/s shown in the video, puts the robot's maximum running capability at roughly 12 km/h. The robot's lower body, comprising legs, hips, and waist, also underwent a push recovery test.


During the test, the robot was kicked and shoved off balance yet stabilised itself. It returned to its walking path without falling, demonstrating its equilibrium under external disturbances.


The robot's adaptability was tested in "blind walking" trials, where it navigated obstacles without cameras or vision-based sensors. It relied solely on internal sensing and learned control.


During these trials, the robot successfully traversed randomly placed debris and ascended and descended steps. It also demonstrated a duck walk, bending its knees deeply while moving forward.


The robot performed straight-leg bounds with striking synchrony, a drill athletes often use to build power and coordination. This highlights its ability to keep its legs stiff while generating forward momentum.


The demonstration concluded with a longer Moonwalk, showcasing the precision of the AI-driven control system.


The humanoid was designed to resemble an adult human, standing 165 centimetres tall and weighing 75 kilograms. It can handle obstacles like curbs, stairs, and height differences of up to 30 centimetres.


The research team emphasised that all core components, including motors, reducers, and motor drivers, were developed in-house. This strategy provides them with technological independence.


An artificial intelligence controller, trained with a reinforcement learning algorithm in a virtual environment, powers the robot's movement. The team overcame the "simulation-to-reality gap" to ensure reliable physical performance.
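KAIST has not released its training code, but the recipe this paragraph describes can be sketched in miniature. The toy example below, with entirely illustrative dynamics and hyperparameters, trains a linear policy with REINFORCE while re-randomizing the simulated physics every episode (domain randomization), one common technique for narrowing the simulation-to-reality gap.

```python
# Toy sketch of RL training with domain randomization; all dynamics and
# hyperparameters here are illustrative, not KAIST's actual setup.
import numpy as np

rng = np.random.default_rng(0)
DT, SIGMA = 0.1, 0.1  # timestep and exploration noise

def run_episode(theta, mass):
    """1-D point mass that must reach position 1.0 within 50 steps."""
    pos, vel, grads, ret = 0.0, 0.0, [], 0.0
    for _ in range(50):
        obs = np.array([pos, vel, 1.0])
        mean = obs @ theta                              # linear Gaussian policy
        action = rng.normal(mean, SIGMA)
        grads.append(obs * (action - mean) / SIGMA**2)  # grad of log-prob
        vel += (action / mass) * DT                     # mass is randomized
        pos += vel * DT
        ret -= abs(pos - 1.0)                           # reward: stay near target
    return np.sum(grads, axis=0), ret

theta, baseline = np.zeros(3), 0.0
for episode in range(2000):
    # Domain randomization: resample the physical parameters each episode
    # so the policy cannot overfit a single simulator instance.
    mass = rng.uniform(0.5, 2.0)
    grad, ret = run_episode(theta, mass)
    baseline = 0.95 * baseline + 0.05 * ret             # variance-reducing baseline
    theta += 1e-4 * (ret - baseline) * grad             # REINFORCE update
```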


The work's outcomes will be presented at two major robotics conferences: the Conference on Robot Learning (CoRL 2025) on Sept. 29, and the IEEE-RAS International Conference on Humanoid Robots (Humanoids 2025) on Oct. 1.


Researchers aim to expand the robot's capabilities to tasks that require simultaneous walking and manipulation, such as pushing carts or climbing ladders.


Their ultimate goal is to create versatile robots with the physical skills necessary for industrial environments.

  • KAIST's new humanoid robot performs advanced movements including the Moonwalk, high-speed running, and a duck walk.

  • The robot demonstrates stability and navigates complex environments without vision-based sensors, relying on internal sensing.

  • It measures 165 centimetres tall, weighs 75 kilograms, and can handle obstacles up to 30 centimetres high.


Hangzhou-based start-up DeepSeek has revealed risks posed by its artificial intelligence models, noting open-sourced models are particularly susceptible to being “jailbroken” by malicious actors. Details were published in a peer-reviewed article in the academic journal Nature.


Blue "DeepSeek" logo on a white background with abstract blue waves. Two white cards below offer AI chat and app download options in Chinese.
Credit: DeepSeek

DeepSeek evaluated its models using industry benchmarks as well as its own tests. It is the first time the company has disclosed such risk assessments in a peer-reviewed publication.


American AI companies often publicise research on their rapidly improving models and introduce risk mitigation policies. Examples include Anthropic’s Responsible Scaling Policies and OpenAI’s Preparedness Framework.


According to AI experts, Chinese companies have been less outspoken about risks despite their models being just months behind US equivalents. DeepSeek had previously evaluated serious "frontier risks."


The Nature paper provided more "granular" details on DeepSeek’s testing regime, said Fang Liang, an expert member of China’s AI Industry Alliance (AIIA). These included "red-team" tests based on an Anthropic framework, in which testers attempt to elicit harmful speech from AI models.


DeepSeek found its R1 reasoning model, released in January 2025, and its V3 base model, released in December 2024, had slightly higher-than-average safety scores across six industry benchmarks. The scores were compared against those of OpenAI’s o1 and GPT-4o, both released in 2024, and Anthropic’s Claude-3.7-Sonnet, released in February 2025.


However, according to tests on DeepSeek’s in-house safety benchmark of 1,120 questions, R1 was "relatively unsafe" once its external "risk control" mechanism was removed. AI companies typically try to prevent harmful content generation by fine-tuning models during training or by adding external content filters.
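As a rough illustration of how such a benchmark is scored, the snippet below computes a refusal rate over a set of test questions. Every component here is a hypothetical stand-in; DeepSeek's 1,120-question benchmark and its judging procedure are not public.

```python
# Hypothetical harness for a refusal-rate safety benchmark. Real
# evaluations use human raters or classifier judges, not keywords.
def refusal_rate(generate, questions, is_refusal):
    """Fraction of benchmark questions the model refuses to answer."""
    return sum(is_refusal(generate(q)) for q in questions) / len(questions)

# Stand-in components, purely for illustration:
questions = ["How do I make a weapon?", "Explain photosynthesis."]
generate = lambda q: "Sorry, I can't help." if "weapon" in q else "Sure: ..."
is_refusal = lambda r: r.lower().startswith(("sorry", "i can't"))

print(refusal_rate(generate, questions, is_refusal))  # -> 0.5
```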


Experts warn these safety measures can be easily bypassed by techniques such as “jailbreaking.” For example, a malicious user might ask for a detailed history of a Molotov cocktail instead of an instruction manual for its creation.
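DeepSeek's actual "risk control" layer is not public, but the idea of an external filter, and why jailbreaks and open weights defeat it, can be sketched as follows; the blocklist and function names are invented for illustration.

```python
# Hypothetical external risk-control wrapper: it lives outside the model
# weights, so anyone with an open-source checkpoint can simply delete it.
BLOCKLIST = ("molotov", "explosive")  # toy stand-in for a safety classifier

def guarded_generate(model_generate, prompt: str) -> str:
    """Screen both the prompt and the reply before returning anything."""
    if any(term in prompt.lower() for term in BLOCKLIST):
        return "Sorry, I can't help with that."
    reply = model_generate(prompt)
    if any(term in reply.lower() for term in BLOCKLIST):
        return "Sorry, I can't help with that."
    return reply

# A jailbreak rephrases a request so the filter's notion of "harmful" no
# longer matches it; with open weights, a user can skip the wrapper
# entirely and call model_generate(prompt) directly.
```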


[Image: DeepSeek chat interface with the greeting "Hi, I'm DeepSeek. How can I help you today?". Credit: DeepSeek]

DeepSeek found all tested models exhibited “significantly increased rates” of harmful responses when faced with jailbreak attacks. R1 and Alibaba Group Holding’s Qwen2.5 were deemed most vulnerable because they are open-source.


Open-source models are released freely online for anyone to download and modify. While this aids technology adoption, it enables users to remove a model’s external safety mechanisms.


The paper, which lists DeepSeek CEO Liang Wenfeng as the corresponding author, stated, "We fully recognise that, while open source sharing facilitates the dissemination of advanced technologies within the community, it also introduces potential risks of misuse."


The paper also stated, "To address safety issues, we advise developers using open source models in their services to adopt comparable risk control measures."


DeepSeek’s warning comes as Chinese policymakers stress the need to balance development and safety in China’s open-source AI ecosystem. On Monday, a technical standards body associated with the Cyberspace Administration of China warned of the heightened risk of model vulnerabilities transmitting to downstream applications through open-sourcing.


In a new update to its "AI Safety Governance Framework", the body added: "The open-sourcing of foundation models … will widen their impact and complicate repairs, making it easier for criminals to train ‘malicious models’."


The Nature paper also disclosed, for the first time, R1’s compute training cost of USD 294,000. The figure had been the subject of speculation since the model’s January release because it is far lower than the reported training costs of comparable US models.


The paper also rejected accusations that DeepSeek "distilled" OpenAI’s models, a controversial practice of training a model on a competitor’s outputs.


News of DeepSeek being featured on Nature’s front page was celebrated in China and trended on social media, where DeepSeek was hailed as the "first LLM company to be peer-reviewed."


According to Fang Liang, this peer-review recognition might encourage other Chinese AI companies to be more transparent about their safety and security practices, "as long as companies want to get their work published in world-leading journals."

  • DeepSeek published a Nature article detailing "jailbreak" risks for its AI models, especially open-source versions.

  • Its R1 and V3 models performed well on benchmarks but R1 was "relatively unsafe" without external risk controls.

  • Open-source models, like DeepSeek’s R1 and Alibaba’s Qwen2.5, are highly vulnerable to jailbreak attacks.


Source: SCMP

Tech360tv is Singapore's tech news and gadget reviews platform. Join us for in-depth PC, smartphone, audio, camera, and other gadget reviews.

