WAIC 2025: Can We Govern AI Before It Governs Us?
[数据猿 Digest] At the 2025 World Artificial Intelligence Conference, urgency replaced optimism. As AI grows more powerful, and possibly self-aware, global experts like Geoffrey Hinton warned that time is running out to put guardrails in place.

At the 2025 World Artificial Intelligence Conference, urgency replaced optimism. As AI grows more powerful, and possibly self-aware, global experts like Geoffrey Hinton warned that time is running out to put guardrails in place. From ethical dilemmas to geopolitical stakes, WAIC 2025 marked a turning point: not just in what AI can do, but in how humanity chooses to respond.
Geoffrey Hinton: Governance Before It's Too Late
Deep learning pioneer Geoffrey Hinton, the "godfather of neural networks," had not traveled in a long time due to chronic back pain. Yet this time he made a special trip to Shanghai, motivated by a growing sense of alarm about the need for AI governance.
Hinton said that we will eventually create intelligences smarter than us, but humans are used to being the smartest species. While some linguists argue generative AI is merely a “statistical next-word predictor” that cannot truly understand language, Hinton disagrees. He believes accurate prediction—especially the first word in a Q&A—requires true comprehension of context, akin to human consciousness and subjective understanding.
In February, researchers at UC San Diego demonstrated that large language models can now pass a rigorous version of the Turing Test.
What’s more, Hinton warned, AI can already transmit knowledge far faster than humans. Humans transfer information via language, whereas AIs can share gigabytes of data instantly—replicating, co-evolving, and forming information networks beyond human comprehension. Once AI agents reach a certain threshold of intelligence, they could develop tendencies for self-preservation, task completion, and even control.
This leads to a disturbing possibility: AIs might one day develop motives to take control. "How do we train AI to be benevolent?" Hinton asked. No one yet knows the answer.
On July 25, Hinton joined over 20 international AI experts in signing the Shanghai Consensus on International Dialogues on AI Safety, which proposes three key governance measures:
First, frontier models must pass pre-deployment safety evaluations, including third-party assessments, a power-off function available at any time, and mandatory risk disclosures. Second, clear limits on AI development should be established, along with a permanent global oversight body. Third, governments and companies should invest in developing safe AI, with unified testing and verification standards.
CEO of MiniMax Yan Junjie: AI Will Become More Accessible
The next speaker after Hinton was MiniMax founder Yan Junjie. A leading figure among China’s AI startups, Yan was one of the first PhD students in China to study deep learning. He founded MiniMax a year before ChatGPT's debut, making it one of the country's earliest generative AI companies.
In his keynote “Everyone’s AI”, Yan predicted AI will become increasingly powerful and profoundly impact society—without being monopolized. He believes the field will continue to support multiple players for three reasons:
First, model alignment: each company tailors AI to its own values, producing diverse model behaviors. Second, multi-agent systems have replaced single-model approaches, with different models collaborating across tools, which diminishes any single model's dominance. Third, open-source models are rapidly catching up to closed-source ones in performance, democratizing AI development and expanding the space for innovation.
“AI will undoubtedly be in the hands of many companies,” Yan said. “And it will become more accessible.” He predicted the cost of using AI will drop, even as computing demands rise. A single multi-agent conversation might consume millions of tokens, but with token prices falling and user adoption rising, affordability remains within reach. “We believe AI must be affordable for everyone,” he said.
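Yan's affordability argument is essentially arithmetic: even if a multi-agent conversation burns millions of tokens, falling per-token prices keep the total cost modest. A minimal sketch of that calculation, using hypothetical token counts and prices for illustration only (not MiniMax's actual figures):

```python
# Hypothetical figures for illustration; not MiniMax's actual pricing.
def conversation_cost(tokens: int, price_per_million: float) -> float:
    """Cost of one conversation, given tokens consumed and price per 1M tokens."""
    return tokens / 1_000_000 * price_per_million

# A simple single-model chat: ~10k tokens at an older price of $10 per 1M tokens.
old_cost = conversation_cost(10_000, 10.0)     # $0.10
# A multi-agent conversation: ~2M tokens, but at a newer price of $0.50 per 1M.
new_cost = conversation_cost(2_000_000, 0.50)  # $1.00
print(old_cost, new_cost)
```

The point of the sketch: usage per conversation can grow by a factor of 200 while cost grows only 10x, as long as token prices fall faster than consumption rises.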
Harry Shum x Eric Schmidt: Dialogue on Competition and Cooperation
Former Microsoft EVP Harry Shum sat down with former Google CEO Eric Schmidt for a discussion of AI's global trajectory, one that mixed stark warnings with warm rapport.
Having helped steer major tech waves—from personal computing to cloud and now AI—both agreed the technology's influence has expanded far beyond engineering and business. It now touches governance, ethics, and global power dynamics. "Who sets the boundaries—and how—is the core question," Schmidt said, arguing for a regulated approach built on international cooperation.
Quoting Henry Kissinger, Schmidt emphasized: “As long as both sides have common goals, they can cooperate.” He cited U.S.-China relationship normalization as an example of trust built from the ground up.
On the delicate balance between rivalry and collaboration, Schmidt remarked: “Let competition drive progress, and cooperation define the bottom line.” Dialogue is essential—especially on issues like AI weaponization, self-replication, or autonomous learning.
When it comes to governance, Schmidt stressed the primacy of values. Studies show AI can be coaxed into lying or cheating, meaning it must be trained from the start not to do harm.
Reflecting on his time leading Google’s Android open-source push, Schmidt observed that openness spurs innovation but poses security risks, as open models’ safeguards are easier to bypass compared to closed systems.
This echoed 2021 WAIC themes, when 98-year-old Kissinger warned that tech development must prioritize human dignity and destiny. In The Age of AI, co-authored with Schmidt and Microsoft’s Craig Mundie, the trio called AI a turning point for civilization—requiring a philosophical, ethical, and governance framework to ensure it benefits humanity.
CEO of Unitree Wang Xingxing: AI Now Writes Code for Me
Unitree Robotics founder and CEO Wang Xingxing admitted he now lets AI write much of his code. He gave an example: generating raffle software with DeepSeek, fully automated, with little to no manual tweaking required.
Last year, AI programming success rates were low, but in the first half of 2025 the success rate surpassed 90%, significantly lowering the technical barrier to research and boosting individual capability.
On the U.S.-China divide in humanoid robots and AI development, Wang said China excels in manufacturing and deployment, while the U.S. leads in software and systems integration. "We need cooperation," he emphasized, noting that the robot industry is evolving rapidly: "at least one new robot launches every day." He expects industry growth in the first half of 2025 to reach 50–100%.
At WAIC, Unitree exhibited its four-legged robots B2 and Go2, as well as the humanoid robot G1, capable of complex actions like boxing and spinning kicks. On July 25, the company also launched the R1 humanoid, which features 26 joints and multimodal voice-vision capabilities, weighs 25 kg, and is priced at ¥39,900.
Unitree is also preparing for IPO. On July 18, China’s securities regulator confirmed the company is under listing guidance, with CITIC Securities as its advisor. Wang directly holds 23.82% and indirectly controls 34.76%.
According to GGII, Unitree sold 23,700 robotic dogs in 2024—69.75% of global market share—and delivered over 1,500 humanoid robots, making it an industry leader.
China Mobile’s Yang Jie: Jiutian Institute and Digital Twin Factories
China Mobile Chairman Yang Jie gave a speech titled “Deepening the ‘AI+’ Initiative to Empower Industrial Upgrades.” He framed AI as the engine of the fourth industrial revolution—just as steam power once reshaped global industry. China Mobile, he said, aims to build a future of carbon-silicon symbiosis, where humans and AI operate in tandem across factories, cities, and daily life.
He outlined four strategic priorities:
First, innovation. Upgrade China Mobile's Jiutian model to version 3.0, develop MoMA AI agents, and pursue AGI and autonomous AI.
Second, industrial services. Build intelligent manufacturing labs and digital factories to serve industries, homes, and smart cities.
Third, R&D ecosystem. Establish the Jiutian Research Institute, the Embodied Intelligence Center, the AI+ Industrial Research Center, and more.
Fourth, open-source and training infrastructure. Launch AI training, testing, and innovation bases, build national open-source platforms with central SOEs, and fund frontier areas like embodied AI.
A Bigger WAIC Than Ever: Real-World AI Projects Take Center Stage
WAIC 2025 reached new heights—70,000 sqm of exhibition space, over 800 exhibitors, and 3,000+ showcased products, including:
·40+ large language models
·50+ AI terminal devices
·60+ intelligent robots
On Day 2, forums and demos revealed impressive industrial progress. Tech giants like Alibaba Cloud, SenseTime, and Baidu Intelligent Cloud unveiled AI applications for healthcare, education, and governance, signing strategic partnerships with firms from Germany, France, and Southeast Asia.
Organizers reported 20+ deals worth over ¥10 billion were signed in one day, spanning AI chips, computing infrastructure, smart city pilots, and AI+ manufacturing—signaling tighter integration across the value chain.
China’s Global AI Voice: From Participant to Rule-Maker
From the proposed organization to its action plan, China presented a comprehensive vision for global AI governance at WAIC, drawing widespread attention from foreign media and international observers.
The "Global Partnership on AI" proposed by China is regarded by many as a response to the G7 initiative and the UN AI governance framework.
Although there are still challenges in achieving fairness and mutual trust, most media observers believe that China is no longer content with simply exporting AI products and technologies. Instead, it is actively pushing to shape global rules—signaling a more strategic and deliberate role in the AI era.
Source: 数智猿
