
Social media platform X is investigating "racist and offensive" posts generated by xAI's chatbot Grok, Sky News reported.


The word "Grok" on a dark background above a prompt box asking "What do you want to know?"
Credit: GROK

X and its safety teams are urgently investigating the chatbot's role in creating "hate-filled, racist posts" in response to user prompts.


Sky News reporter Rob Harris discussed the investigation in a video posted to the outlet's X account, noting its urgency.


Governments and regulators have also been cracking down on sexually explicit content produced by Elon Musk's xAI chatbot Grok on X, part of a growing global effort to curb illegal material.


This push includes investigations, bans, and demands for safeguards, with regulators focused on the chatbot's output.


Previously, xAI restricted image editing capabilities for Grok AI users. The organisation also blocked users from generating images of people in revealing clothing.


These image generation blocks were implemented based on user location, specifically in "jurisdictions where it’s illegal." xAI did not disclose the specific countries.

  • X is investigating "racist and offensive" posts from xAI's Grok chatbot.

  • The investigation follows reports from Sky News about "hate-filled, racist posts."

  • Governments and regulators are increasing crackdowns on sexually explicit content from Grok.


Source: REUTERS

Amazon's cloud unit, AWS, has launched an artificial intelligence-enabled platform designed to improve patient access to care and reduce administrative tasks for healthcare providers. The new platform, Amazon Connect Health, aims to streamline operations within the healthcare sector.


Smartphone displaying the Amazon logo.
Credit: UNSPLASH

Amazon Connect Health is an agentic AI-led platform that integrates with electronic health records. Clinicians can utilise it for patient verification, appointment scheduling, compiling medical histories, clinical documentation, and medical coding.


The system is built for around-the-clock operation, allowing instant appointment bookings. It is also designed to escalate complex cases to staff when human intervention is required.


Amazon Connect Health leverages specialised learning techniques applied to healthcare-specific data sets and guidelines. The platform undergoes a multi-step evaluation of model performance, focusing on safety and accuracy, including clinician-led checks.


UC San Diego Health, which has deployed the tool, reported saving one minute per call and reducing call abandonment rates by up to 60%. These improvements highlight the platform's potential efficiency gains.


The system offers features such as transcribing doctor-patient conversations during visits. It can also draft clinical notes for provider review in real time and generate patient-friendly summaries.


To ensure transparency, Amazon Connect Health incorporates a feature called evidence mapping. This links AI-generated output to its exact source, such as call transcripts and medical records.


Amazon One Medical has already employed the documentation feature for more than one million visits, with strong clinician adoption and regular weekly use.

  • Amazon AWS launched Amazon Connect Health, an AI-enabled platform for healthcare administration.

  • The platform aims to ease patient access and cut administrative work for providers.

  • It integrates with electronic health records for tasks like scheduling, documentation, and coding.


Source: REUTERS

The use of AI among students in higher education has risen dramatically, according to 2025 findings from the UK-based independent think tank HEPI (the Higher Education Policy Institute). Some 92% of students now use AI in some capacity, up from 66% in 2024. As generative AI floods university campuses, a pressing question emerges: are academic institutions genuinely evolving, or simply scrambling to protect an outdated system? This rapid change has created a pivotal moment for universities worldwide, placing traditional educational models under intense and necessary scrutiny.


Debate on AI in Higher Education. Left to right: Dr. Fadhil Ismail, Dr. Samson Tan, Dr. Jürgen Rudolph & Professor Peter Waring

Responding to this surge, Dr. Jürgen Rudolph, an adjunct lecturer with Murdoch University in Singapore, voices the urgency of this reality, noting how the ubiquitous nature of generative technology creates deep logistical and philosophical challenges for educators. He explains, "GenAI produces a genuine headache: it is increasingly difficult to determine whether a student wrote an assignment, whether a teacher set it, or indeed whether a teacher marked it". According to Dr. Rudolph, the core problem stems from outdated evaluation metrics that rely heavily on unsupervised writing. Expanding on this dilemma, he argues, "The real issue is that traditional assessments were designed for a world in which producing fluent text required genuine human effort. That world no longer exists. When a student can generate a competent essay in seconds, the unsupervised written assignment stops being a reliable measure of learning".


While these traditional methods may be failing, Dr. Rudolph does not believe evaluation should be abandoned entirely; rather, he suggests the crisis is an opportunity for evolution. He advocates for a complete redesign, stating, "What excites me far more is the move towards authentic assessment: tasks rooted in real-world complexity where AI becomes a tool rather than a threat". To be successful, this shift requires educators to generate evidence of genuine learning through "oral examinations, iterative drafts with feedback, project-based work tied to real contexts, and process documentation that makes the learning journey visible".


The urgency to overhaul education is frequently fuelled by extravagant industry claims that ignore generative AI's hallucinations, a premise that Dr. Rudolph vigorously dismantles by questioning the fundamental nature of the technology. When asked if AI possesses more knowledge than human educators, he retorts, "Let me challenge the premise before addressing the question, because the premise is doing a great deal of heavy lifting. Do these models truly possess more knowledge than any human educator? I would argue they do not and that this claim is itself a product of the very hype we ought to be resisting". He views the technology not as an omniscient oracle, but as a flawed helper, explaining, "What GenAI actually resembles is a sycophantic, super-hardworking, and deeply forgetful research assistant. It will work tirelessly, never complain, and agree with almost anything you say - which is precisely what makes it dangerous if left unsupervised".


While AI has demonstrated clear value in specific, contained environments, its effectiveness across diverse economic and cultural landscapes remains unproven. This nuanced view is shared by Dr. Stefan Popenici, a Sydney-based independent researcher in AI, who challenges the core assumptions driving the modern economic conversation around AI adoption: "What we are witnessing is AI-driven valuation growth - and those are very different things". He elaborates on this precarious financial situation, warning, "The current AI economy has all the hallmarks of a speculative bubble. Nvidia lost $600 billion in a single day when DeepSeek demonstrated that comparable AI could be built at a fraction of the cost... The broader pattern is dismally familiar: extravagant promises, a frenzy of capital, eye-watering prices detached from demonstrated returns". Echoing this scepticism about the sweeping economic narratives of AI, Ms. Shannon Tan, a Senior Lecturer at Amity Global Institute, points out the tragic paradox evident in some academic institutions that have fully embraced the hype. She reveals, "None of this means AI is trivial - it is a genuinely useful technology within specific domains. But the gap between what is marketed and what materialises is vast. One US university announced a $17 million OpenAI partnership whilst simultaneously issuing faculty layoff notices and proposing $375 million in budget cuts. That is not a transformation. That is institutional auto-cannibalism".


Dr. Popenici concurs that change is necessary but emphasises that it should be deliberate rather than merely reactive. He underscores this need for caution, asserting that the drive to keep pace with industry advancements often overlooks the fundamental purpose of education. He observes, "Yes, institutional cycles are slow, and yes, they need reform. But speed is not intrinsically virtuous. The frantic pace of AI development has produced more wreckage than progress. What accreditation frameworks ought to do is ensure that graduates can evaluate these tools critically, not merely keep pace with the latest release cycle. That is a more durable ambition than chasing weekly updates, and frankly, a more important one."


Dr. Popenici also warns against relying on corporate training programmes to define academic futures and student capabilities. He articulates this concern clearly, observing, "Tech companies are skilled at building training programmes - designed, naturally, so that people use their products. That training will always carry a techno-solutionist, techno-optimistic flavour. It rarely pauses to ask: how much technology should we use, and for what? Universities, at their best, do precisely that".


AI Generated Image - Humanoid Reading a Book (Artificial Intelligence in Higher Education)

Consequently, rather than merely teaching students how to operate software, academics are urging a profound shift toward critical understanding. Dr. Fadhil Ismail, Senior Lecturer at Kaplan Higher Education Academy, strongly rejects the idea that mastering system prompts is sufficient for higher education. He clarifies his position extensively by stating, "Critical AI literacy does not begin with tool use. It begins with understanding what AI is and what it is not. GenAI systems perform statistical pattern-matching, not reasoning. They are neither truly artificial - built as they are on vast human labour, from mineral extraction to precarious gig workers annotating data - nor genuinely intelligent". Dr. Ismail stresses that true educational advancement requires recognising that fluent output does not mean accurate output. He insists that institutions must cultivate "the capacity to refuse - to decide that certain intellectual tasks should not be delegated to machines".


Rather than yielding to the temptation of technological fixes, a call is being made to educators to fundamentally reassess and redefine the core mission of the education system. Ms. Tan advocates for a holistic approach where essential human skills are inextricably interwoven with technical knowledge. She maintains, "Empathy, ethical judgment, and metacognition matter enormously. But they need not be taught in opposition to technical knowledge. They should be embedded within it". Acknowledging the need for systemic structural changes without sacrificing academic rigour, Ms. Tan adds, "In an era of breakneck change, lifelong learning is not optional - it is survival. The degree needs reimagining, certainly. But let us not throw out the baby with the bathwater. The capacity for deep, sustained, critical inquiry remains something no six-week certification can replicate".


Ultimately, institutions must avoid rushing into reactive overhauls driven by unproven market forces and software updates. Dr. Rudolph offers a beautifully grounded final thought, advising, "So before we redesign entire education systems around the assumption that AI has permanently restructured the economy, perhaps we should wait for the evidence that it actually has. The most prudent thing education can do right now is equip graduates with critical thinking, adaptability, and the capacity to evaluate hype - precisely the skills needed to survive whatever comes next, bubble or bust".


These critical perspectives and analyses are presented by Dr. Stefan Popenici, Dr. Jürgen Rudolph, Dr. Fadhil Ismail, and Ms. Shannon Tan, who serve as the editors of the recently published Handbook of Artificial Intelligence in Higher Education.

Tech360tv is Singapore's Tech News and Gadget Reviews platform. Join us for our in-depth PC reviews, smartphone reviews, audio reviews, camera reviews and other gadget reviews.


© 2021 tech360.tv. All rights reserved.
