Google Study Finds AI Models Mimic Human Collective Intelligence
- tech360.tv

New Google research suggests powerful artificial intelligence models mimic human collective intelligence. The study found reasoning models generate internal multi-agent debates, which researchers termed "societies of thought."

The findings indicated that perspective diversity, not just computational scale, drives gains in AI model intelligence. The research also highlighted the growing importance of Chinese open-weight models for interdisciplinary research in the United States.
The experiments used DeepSeek’s R1 and Alibaba Cloud’s QwQ-32B models. Researchers observed that these models generated internal debates in which distinct personality traits and domain expertise led to greater capabilities.
Researchers stated in their paper, "We suggest that reasoning models establish a computational parallel to collective intelligence in human groups, where diversity enables superior problem-solving when systematically structured." The paper was published on the open-access online repository arXiv.
Alibaba Cloud is the AI and cloud computing unit of Alibaba Group Holding, owner of the South China Morning Post. The study was conducted by four researchers from Google’s "Paradigms of Intelligence" research team, which explores intelligence through interdisciplinary methods.
Junsol Kim, a PhD candidate in sociology at the University of Chicago, led the study. Google vice-president Blaise Agüera y Arcas was listed as the final author. The study has not undergone peer review.
Reasoning models that "think" through tasks have become the dominant type of foundational AI system. This trend began when ChatGPT developer OpenAI introduced its o-series of models in September 2024.
Such models utilise more computational resources during deployment, increasing AI capabilities. They have also lowered the cost of intelligence, according to benchmarking organisation Artificial Analysis.
Google researchers based their findings on analysis of the Chinese models’ "reasoning traces." These are the intermediate step-by-step outputs generated by reasoning models before their final response.

Reasoning traces were first exposed to users when Hangzhou start-up DeepSeek released its first reasoning model, R1. The traces resembled "simulated social interactions," including questioning, perspective-taking, and reconciliation.
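To make the idea of a reasoning trace concrete: DeepSeek-R1-style models emit their intermediate "thinking" before the final response, commonly wrapped in `<think>...</think>` tags. Below is a minimal sketch of separating the trace from the answer; the helper name and the sample text are illustrative, and the tag convention is an assumption about the model's output format, not something taken from the study.

```python
import re

def split_reasoning_trace(output: str) -> tuple[str, str]:
    """Split a model response into (reasoning trace, final answer).

    Assumes the DeepSeek-R1-style convention of wrapping the
    intermediate reasoning in <think>...</think> tags.
    """
    match = re.search(r"<think>(.*?)</think>", output, re.DOTALL)
    if match is None:
        # No trace present: treat the whole output as the answer.
        return "", output.strip()
    trace = match.group(1).strip()
    answer = output[match.end():].strip()
    return trace, answer

# Illustrative output showing self-questioning inside the trace.
sample = ("<think>Is 17 prime? Check divisors up to 4: "
          "2 and 3 do not divide it.</think>Yes, 17 is prime.")
trace, answer = split_reasoning_trace(sample)
```

Analyses like the Google team's operate on the `trace` portion, which is where the "simulated social interactions" appear.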
When the models were encouraged to be more conversational with themselves, their reasoning accuracy improved. These findings could shift the conceptualisation of AI models.
They may be viewed less as solitary problem-solving entities and more as collective reasoning architectures. Intelligence would then arise from the structured interplay of distinct voices, not merely from scale.
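A toy sketch of that collective-reasoning pattern may help: distinct "voices" answer in rounds, see one another's previous answers, and a reconciliation step takes the majority. All agent names and behaviours below are illustrative inventions, not the study's method.

```python
from collections import Counter

def debate(question, agents, rounds=2):
    """Toy multi-agent debate: each round, every agent answers after
    seeing the previous round's answers; the majority answer wins."""
    answers = {}
    for _ in range(rounds):
        answers = {name: agent(question, dict(answers))
                   for name, agent in agents.items()}
    return Counter(answers.values()).most_common(1)[0][0]

# Hypothetical agents with distinct "personalities".
def mathematician(question, peers):
    return str(12 * 13)   # computes carefully -> "156"

def double_checker(question, peers):
    return str(156)       # verifies independently -> "156"

def conformist(question, peers):
    if peers:             # revises toward the emerging consensus
        return Counter(peers.values()).most_common(1)[0][0]
    return "146"          # initial hasty guess

agents = {"mathematician": mathematician,
          "double_checker": double_checker,
          "conformist": conformist}
result = debate("What is 12 * 13?", agents)
```

In a reasoning model, the study suggests, these roles are not separate programs but voices simulated within a single trace.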
Google is considered a world-leading AI company. Its latest flagship reasoning model, Gemini 3 Pro, was developed by its foundation-model unit, DeepMind. Artificial Analysis considers it one of the most powerful in the world.
The study used Chinese models as experimental subjects, reflecting a growing reliance on Chinese open-weight models in US academia. This includes top institutions such as Stanford University.
Chai Wenhao, a PhD candidate in computer science at Princeton University, noted that classes at his university almost exclusively use Chinese models. He stated there are few open US models of comparable performance.
- Google research suggests AI models like DeepSeek and Alibaba mimic human collective intelligence.
- The study found these models generate "societies of thought" through internal multi-agent debates.
- Perspective diversity, not just computational scale, was identified as a key factor in AI intelligence.
Source: SCMP


