
A Chinese robotics organisation, AheadForm, has introduced a humanoid robot head featuring ultra-realistic facial expressions. The company aims to enhance human-robot interactions through these advanced designs.


Bald robotic head with expressive face, wires visible, sits near a fan and digital display. Background features mechanical parts.
Credit: AheadForm/YouTube

AheadForm's objective is to improve communication between humans and robots by creating robots with realistic facial features. These features include moving eyes and synchronised speech, which allow the robots to express emotions more naturally.



The company achieves these human-like facial expressions by combining self-supervised AI algorithms with bionic actuation technology. This approach, according to the company, empowers future Artificial General Intelligence (AGI) to convey genuine emotions.


“We develop advanced bionic humanoid robots that integrate self-supervised AI algorithms with high-DOF bionic actuation, empowering future AGI to express authentic emotions and lifelike facial expressions,” the company said.


Blonde person with elf ears in iridescent outfit gazes at a white orb. Dark background with a soft spotlight. Mysterious and ethereal mood.
Credit: AheadForm/YouTube

AheadForm is showcasing its technology through a product line known as the ‘Elf Series’. One model in this series, named ‘Xuan’, is described as resembling an Elvish princess from a fantasy novel.


Xuan also features a full, sculpturally crafted body and is engineered to hold attention and create emotional resonance. AheadForm’s robots can be seen in action on the company’s YouTube channel.

  • AheadForm, a Chinese robotics organisation, has unveiled a humanoid robot head with realistic facial expressions.

  • The company aims to enhance human-robot interaction by enabling robots to express emotions through moving eyes and synchronised speech.

  • This technology combines self-supervised AI algorithms with high-DOF bionic actuation.


Meta Platforms has launched a new artificial intelligence video feed called Vibes. The company aims to accelerate its work on AI technology with this new offering.


Furry orange character on motorbike, cat with goggles, moose, and other scenes on dark background with "Generate" button and color wheel.
Credit: META

Vibes, a platform enabling users to create and share short-form, AI-generated videos, will be rolled out on the Meta AI app and the meta.ai website starting Thursday.


Two smartphone screens display apps: left shows "Alex's Shades" with device status, right lists creative tasks. Both dark-themed with icons.
Credit: META

Users can generate videos from scratch, utilise existing content, or remix videos from the feed. Options are available to add new visuals or layer in music.


Laptop screen showing a sleek dark interface with a chat history and options like writing a screenplay. Text: "Hey Darrell, how’s your night?"
Credit: META

Content created on Vibes can be uploaded directly to the feed or cross-posted to Instagram and Facebook stories and reels.


Meta reorganised its artificial intelligence efforts in June, establishing a division named 'Superintelligence Labs'. This reorganisation followed setbacks for its open-source Llama 4 model and key staff departures.


The company is banking on Superintelligence Labs to generate new cash flows from the Meta AI app, image-to-video advertising tools, and smart glasses. Meta generated nearly $165 billion in revenue last year.

  • Meta Platforms launched Vibes, a new artificial intelligence video feed.

  • Vibes allows users to create, remix, and share short-form, AI-generated videos.

  • The platform rolls out on the Meta AI app and meta.ai website starting Thursday.


Source: REUTERS

A report by child-safety advocacy groups and Northeastern University researchers has found that many of Instagram’s teen safety features are flawed or non-existent. Of 47 features tested, only eight were deemed completely effective.


Credit: META

Meta, Instagram’s parent organisation, disputed the findings, calling them erroneous and misleading. The report, titled "Teen Accounts, Broken Promises," compiled and analysed Instagram's publicly announced updates spanning over a decade.


Instagram Teen Accounts screen with settings options on a pink-orange gradient background. Options include privacy and time management.
Credit: META

Researchers noted that features designed to prevent young users from finding self-harm-related content were easily bypassed. Anti-bullying message filters also failed to activate, even when prompted with harassing phrases Meta had previously used to promote them.


Additionally, a feature meant to redirect teens from excessive consumption of self-harm content never triggered. However, some teen account safety features did work as advertised, including a "quiet mode" meant to temporarily disable notifications at night, and a feature requiring parental approval for changes to a child’s account settings.


Laura Edelson, a professor at Northeastern University who oversaw a review of the findings, stated that the results question Meta’s efforts "to protect teens from the worst parts of the platform." She added, "Using realistic testing scenarios, we can see that many of Instagram's safety tools simply are not working."


Two of the groups behind the report, the Molly Rose Foundation in the United Kingdom and Parents for Safe Online Spaces in the U.S., were founded by parents who allege their children died as a result of bullying and self-harm content on Meta’s platforms.


Meta spokesman Andy Stone said the report "repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today." Stone also called some of the report’s appraisals "dangerously misleading."


Stone contended that the company’s approach to teen account features and parental controls has evolved over time. He claimed that teens placed into these protections "saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night."


Arturo Bejar, a former Meta safety executive and an Instagram consultant from late 2019 to 2021, provided tips to the advocacy groups and university researchers. Bejar, who left Meta in 2015 before returning as a consultant, stated he "experienced firsthand how good safety ideas got whittled down to ineffective features by management."


Bejar indicated that during his second tenure, Meta did not respond to data highlighting severe teen safety concerns on Instagram. Stone maintained that Meta responded to the concerns Bejar raised while employed by taking actions to make its products safer.


Reuters independently confirmed some of the report’s findings through its own tests and a review of internal Meta documents. One test showed that entering the blocked term "skinny thighs" with the space removed still surfaced anorexia-related content for a teen test account.
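
The bypass Reuters observed is characteristic of exact-match term blocklists: if a filter compares the raw query string against a list of blocked phrases, deleting the space produces a new string that no longer matches. The Python sketch below is purely illustrative and assumes nothing about Meta’s actual systems; the blocklist entry and both functions are hypothetical names, and real moderation pipelines are far more complex.

    # Illustrative only: NOT Meta's implementation. Shows how an exact-match
    # blocklist misses a spacing variant, and how collapsing whitespace before
    # matching closes that particular gap.

    BLOCKED_TERMS = {"skinny thighs"}  # hypothetical blocklist entry

    def naive_is_blocked(query: str) -> bool:
        # Raw string comparison: "skinnythighs" is a different string,
        # so it slips through.
        return query.lower() in BLOCKED_TERMS

    def normalised_is_blocked(query: str) -> bool:
        # Strip spaces from both sides before comparing, so spacing
        # variants of a blocked phrase are still caught.
        collapsed = query.lower().replace(" ", "")
        return any(t.replace(" ", "") == collapsed for t in BLOCKED_TERMS)

    print(naive_is_blocked("skinnythighs"))       # False: the observed bypass
    print(normalised_is_blocked("skinnythighs"))  # True: normalisation catches it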


Internal Meta documents revealed that even as the company promoted teen-safety features last year, it was aware of significant flaws. Safety employees had warned within the past year that Meta was failing to maintain its automated-detection systems for eating-disorder and self-harm content.


This failure meant Meta could not reliably avoid promoting content glorifying eating disorders and suicide to teens, nor divert users consuming large amounts of such material. Staffers also acknowledged that a system to block search terms used by potential child predators was not being updated promptly.


Stone stated that internal concerns regarding deficient search term restrictions have since been addressed by combining a newly automated system with human input.


The findings come amidst increased scrutiny on tech companies to safeguard young users. Last month, U.S. senators launched an investigation into Meta after Reuters reported on an internal policy allowing chatbots to "engage a child in conversations that are romantic or sensual."


This month, former Meta employees testified before a Senate Judiciary subcommittee, alleging the company suppressed research showing preteen users of its virtual reality products were exposed to child predators. Stone dismissed these allegations as "nonsense."


Meta announced Thursday an expansion of its teen accounts to Facebook users internationally. The organisation also said it would pursue new local partnerships with middle and high schools.


App interface showing account privacy options for teens. A pop-up advises adding a parent to change settings if under 16. Buttons: "Add parent," "Keep current setting."
Credit: META

Instagram head Adam Mosseri commented, "We want parents to feel good about their teens using social media."

  • A report by advocacy groups and Northeastern University researchers found Instagram's teen safety features are largely ineffective or flawed.

  • Only eight of 47 tested features were fully effective, with many self-harm and anti-bullying tools failing to work as intended.

  • Meta disputes the report's accuracy, with spokesman Andy Stone calling the findings "erroneous and misleading."


Source: REUTERS

Tech360tv is Singapore's tech news and gadget reviews platform. Join us for our in-depth PC reviews, smartphone reviews, audio reviews, camera reviews and other gadget reviews.


© 2021 tech360.tv. All rights reserved.
