
At the 5th edition of Future of Health Asia held in Singapore, global healthcare leaders, insurance policymakers, and industry speakers converged to explore how AI, digital transformation, and cross-sector partnerships are reshaping the future of healthcare in the Asia-Pacific region. Organised by The Economist Impact, the event spotlighted not just technology’s potential, but also its limitations, and the shared responsibility across sectors to ensure patients remain at the heart of innovation.


Future of Health Asia, 5th Edition in Singapore

 

AI Augmenting, Not Replacing, Human Care

From administrative automation to surgical precision, AI’s role in healthcare is rapidly expanding. Speakers across sessions emphasised that AI is here to augment clinical decision-making, not replace it. Its most immediate impact lies in reducing administrative inefficiencies, freeing clinicians to spend more time with patients. Yet clinical AI remains in its early adoption phase, limited by challenges such as data bias, poor interoperability, and the need for rigorous validation. Experts on the panel also warned that failing to establish these foundations would result in "rubbish in, rubbish out": the quality of AI's output depends entirely on the quality of its input. This caution, alongside concerns about automation bias, cybersecurity, and evolving regulatory oversight, underscores the critical need to keep human oversight central to both clinical decision-making and surgical procedures.

 

Rebuilding Trust and Access

While digital tools promise efficiency, the region still faces fundamental access barriers. Data presented at the event revealed that 8 in 10 people in Asia delay seeking care due to confusion, costs, or fear of burdening loved ones, and over 60% of Singaporeans struggle to navigate their own healthcare systems. This reality reinforces that innovation cannot be separated from access. Panellists on healthcare equity pointed out that the patient voice, though often referenced, remains insufficiently integrated into system design and policymaking. Trust, they argued, is the missing currency in healthcare transformation. A multi-country patient survey revealed that 75% of ethnic minorities reported negative healthcare experiences affecting their trust, and 15% of marginalised populations avoid the system altogether. Rebuilding trust requires data transparency, evidence-based policy, and consistent engagement between clinicians, insurers, industry, and media.

 

Health as Economic Capital

The event also reframed health not merely as a cost burden but as foundational economic capital. Through the Health Dividend Initiative, The Economist Impact positions itself as a catalyst, calling for sustained investment in health as a driver of productivity, education, and equity. Panellists echoed the idea that spending on health is not merely an expense but an investment that pays dividends. This economic framing dovetails with the technological transformation narrative: AI, robotics, and digital health can yield immense efficiency and societal returns, but only if investment decisions align with long-term public good rather than short-term budget cycles.

 

Collaboration Is the Catalyst

Discussions Between Multiple Sectors at the Future of Health Asia

A central message was that no single actor can achieve healthcare transformation alone. Governments, hospitals, insurers, industry players, and patient organisations must move from siloed initiatives to integrated partnerships. As The Economist Impact’s moderators summarised, progress depends on connecting the dots between innovation and regulation, between investment and inclusion, and between systems and the human stories that will define them.

 

Ultimately, the event reminded everyone that while AI may shape the future of healthcare, empathy, trust, and collaboration will determine its success. Technology can analyse, predict, and even assist, but it cannot listen. Keeping the patient’s voice at the centre remains the truest form of innovation.

Microsoft rolled out a series of artificial intelligence upgrades to Windows 11 on Thursday, aiming to make its Copilot AI assistant more appealing to users. These enhancements streamline task automation and connection with services across devices.


Credit: MICROSOFT

Users can now activate the AI assistant by using the wake word "Hey Copilot" and execute voice commands. This new opt-in feature is available on any Windows 11 PC, Microsoft stated.


Laptop with glowing keys and "Copilot+ PC" text on screen.
Credit: MICROSOFT

An experimental 'Copilot Actions' mode is also included in the update, allowing the AI assistant to perform real-world tasks directly from the desktop. These tasks include booking restaurant reservations and ordering groceries.


This new tool expands on a similar capability first announced for the web browser in May. Microsoft clarified that these agents will begin with limited permissions, only obtaining access to resources explicitly provided by the user.


Search bar interface with layered tabs in pastel blue and green, featuring a magnifying glass icon. Blue grid and Windows logo in background.
Credit: MICROSOFT

Microsoft also launched its 'Gaming Copilot' embedded in Xbox Ally handhelds on Thursday. This feature allows players to engage with the AI assistant for real-time tips, recommendations, and support during gameplay.


"We think we're on the cusp of the next evolution, which is where AI happens, not just in that chatbot, but gets naturally integrated into the hundreds of millions of experiences that people use every day," said Yusuf Mehdi, Microsoft's consumer chief marketing officer.


The company has been actively boosting Copilot's adoption and usage to better compete with tech giants Google and Meta. These competitors have also pushed their own AI assistants through various features in devices, applications, and browsers.

  • Microsoft introduced significant AI upgrades to Windows 11, enhancing its Copilot assistant.

  • New features include voice activation with "Hey Copilot" and expanded Copilot Vision.

  • Experimental 'Copilot Actions' enable real-world tasks such as booking reservations and ordering groceries.


Source: REUTERS

Chinese fintech giant Ant Group has open-sourced dInfer, an inference framework for diffusion language models, which it claims makes artificial intelligence systems more efficient. The Alibaba Group Holding affiliate stated dInfer surpasses a framework proposed by Nvidia and is faster than an open-source inference engine developed by researchers at the University of California, Berkeley.


Building facade with Ant Financial logo and text, framed by tree leaves. The sky is a clear blue, creating a calm urban atmosphere.
Credit: ANT GROUP

Ant Group, the operator of Alipay, announced on Monday that dInfer is designed for diffusion language models. These models generate outputs in parallel, differing from autoregressive systems, such as ChatGPT, which produce text sequentially. Diffusion models are already widely utilised in image and video generation.
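The difference between the two paradigms can be illustrated with a deliberately simplified toy sketch (not Ant Group's actual implementation): an autoregressive model makes one sequential pass per token, while a diffusion-style model starts from fully masked positions and refines every position in parallel over a small, fixed number of denoising steps. The `vocab`, step counts, and random choices below are placeholders standing in for real model forward passes.

```python
import random

random.seed(0)

def autoregressive_generate(n_tokens, vocab):
    # One token per step: N tokens require N sequential "model calls".
    out = []
    for _ in range(n_tokens):
        out.append(random.choice(vocab))  # stand-in for a forward pass
    return out, n_tokens  # (tokens, sequential steps taken)

def diffusion_generate(n_tokens, vocab, n_steps=4):
    # Start from all-masked positions and refine every position in
    # parallel at each denoising step: N tokens in only n_steps passes.
    seq = ["<mask>"] * n_tokens
    for _ in range(n_steps):
        seq = [random.choice(vocab) for _ in seq]  # parallel refinement
    return seq, n_steps  # (tokens, sequential steps taken)

vocab = ["the", "cat", "sat", "on", "mat"]
_, ar_steps = autoregressive_generate(16, vocab)
_, dd_steps = diffusion_generate(16, vocab)
print(ar_steps, dd_steps)  # 16 sequential steps vs 4
```

The speed appeal of diffusion language models comes from this gap: the number of sequential passes is decoupled from output length, which is also why inference frameworks like dInfer target them specifically.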


The company asserted that dInfer is up to three times faster than vLLM, an open-source inference engine from University of California, Berkeley researchers. Furthermore, it is 10 times faster than Fast-dLLM, Nvidia’s own framework.


Autoregressive language models, including OpenAI’s GPT-3.5 and DeepSeek’s R1, have largely powered the chatbot boom due to their strengths in understanding and generating human language. Nevertheless, researchers continue to explore diffusion language models for potentially greater capabilities.


Ant Group’s focus on alternative model paradigms highlights how China’s technology firms are enhancing algorithmic and software optimisation. This strategy aims to counterbalance the country’s disadvantages in advanced AI chips.


Internal tests conducted on Ant’s diffusion model LLaDA-MoE showed dInfer generated an average of 1,011 tokens per second on the HumanEval code-generation benchmark. This compares with 91 tokens per second for Nvidia’s Fast-dLLM and 294 for Alibaba’s Qwen-2.5-3B model, which was optimised with vLLM.


Researchers noted that these results help address a primary limitation of diffusion language models: their high computational cost. "We believe that dInfer provides both a practical toolkit and a standardised platform to accelerate research and development in the rapidly growing field of dLLMs," Ant researchers wrote in a technical report.


This announcement follows other artificial intelligence activities from the Hangzhou-based firm. On Tuesday, Ant Group unveiled a one-trillion-parameter large language reasoning model, one of the world's biggest open-sourced models, which scored strongly on reasoning benchmarks.


Ant Group entered the AI model race in 2023 with a self-developed financial large language model. Its current portfolio includes the Ling series non-thinking large language models, Ring series reasoning models, Ming series multimodal models, and the experimental diffusion model LLaDA-MoE. The company is also developing AWorld, a framework supporting continual learning among AI agents.


Digital circuit board with glowing blue AI chip in the center, surrounded by intricate lines on a dark blue background, conveying technology.

"At Ant Group, we believe artificial general intelligence (AGI) should be a public good – a shared milestone for humanity’s intelligent future," said Chief Technology Officer He Zhengyu. AGI refers to a theoretical AI system that could surpass humans in most economically valuable tasks, a goal for companies like OpenAI and Alibaba.


Other Chinese technology firms are also experimenting with alternative model paradigms. In late July, TikTok owner ByteDance introduced Seed Diffusion Preview, a diffusion language model it claimed achieved speeds five times faster than comparable autoregressive models.

  • Ant Group open-sourced dInfer, an AI inference framework for diffusion language models.

  • dInfer is claimed to be up to 10 times faster than Nvidia’s Fast-dLLM and three times faster than vLLM from the University of California, Berkeley.

  • The framework helps address the high computational cost typically associated with diffusion language models.


Source: SCMP

Tech360tv is Singapore's tech news and gadget reviews platform. Join us for our in-depth PC reviews, smartphone reviews, audio reviews, camera reviews, and other gadget reviews.

  • YouTube
  • Facebook
  • TikTok
  • Instagram
  • Twitter
  • LinkedIn

© 2021 tech360.tv. All rights reserved.
