Chinese AI Firms Suspend Chatbots During Gaokao to Prevent Cheating

Generated by AI Agent, Coin World
Tuesday, Jun 10, 2025, 12:01 am ET

Chinese AI companies, including Alibaba, ByteDance, DeepSeek, Tencent, and Moonshot, temporarily suspended their chatbot services during the country’s annual university entrance exams, known as the Gaokao, which took place from June 7 to 10. The move was part of a broader effort to prevent cheating and protect the integrity of the exams. Alibaba’s Qwen and Tencent’s Yuanbao also disabled their image recognition functions during exam hours to further reduce the potential for academic dishonesty.

In addition to the measures taken by private AI companies, Chinese authorities implemented a comprehensive set of methods to prevent cheating attempts. These measures included biometric verifications, signal jamming, strict filtering of digital devices, and video surveillance to detect any abnormal behavior during the exams. The authorities aimed to create a secure and fair environment for all candidates, ensuring that the exams were conducted with the highest standards of integrity.

According to data released by the Chinese Ministry of Education, approximately 13.35 million candidates sat this year’s Gaokao, roughly 70,000 fewer than the previous year. The Gaokao consists of three compulsory subjects: Chinese, mathematics, and English, with exam questions set by the Ministry of Education. The exams are a critical milestone for students, determining their future academic and career paths.

During the exam period, AI chatbots refused to answer questions related to exam content. For instance, Qwen and Doubao, which typically offer photo recognition services, declined to process images of test papers. Qwen responded that the service was temporarily frozen during exam hours, while Doubao indicated that the uploaded picture was “not in compliance with rules.” The Doubao and DeepSeek applications also displayed automated messages announcing the temporary restriction of services, explaining that it applied only during the examination period.

Yuanbao and Kimi also suspended their photo-recognition services during exam hours, responding: “To ensure the fairness of the college entrance examinations, this function cannot be used during the test period.” However, none of the AI firms issued an official statement about the suspensions. University students reportedly spread the information on the Chinese social media platform Weibo after finding they could not access the chatbots to study during the exam hours.

In addition to suspending AI tools to curb exam irregularities, school administrations in several regions in China announced that they would use AI to monitor for “abnormal behaviors,” such as whispers or repeated glances between students during exams. For example, East China’s Jiangxi Province deployed an AI-powered, real-time surveillance system for all 567,100 candidates. The system utilized deep learning algorithms to monitor irregular behavior by both examinees and exam invigilators in real time. Actions such as starting early, turning one’s head, passing items, or leaving mid-exam were flagged and recorded. Exam room footage was reviewed after the exam, and any violations or misconduct were dealt with strictly per relevant regulations.

China’s Vice Premier Ding Xuexiang called for a “safe Gaokao,” emphasizing the importance of a campaign against cheating. Last month, the Education Ministry released a set of regulations stating that while schools should start cultivating AI talent at a young age, students should not use AI-generated content as answers to tests. The 2025 edition of the guideline aimed to standardize the ethical and appropriate use of generative AI while addressing privacy risks, academic integrity, and overreliance on technology. A senior Education Ministry committee official stated that the restrictions were meant to ensure age-appropriate use of AI and prevent younger students from becoming overly dependent on the technology.

Primary school students were prohibited from independently using AI tools that generated open-ended content, while educators were required to ensure that AI could complement but not replace human-led teaching. Middle school students were allowed to explore the logical structure of AI-generated content, while high school students could engage in inquiry-based learning that involved understanding AI’s technical principles. The guideline also asked educational authorities to formulate localized AI management policies, establish data protection rules, launch ethical review mechanisms, and create dynamic “whitelists” of approved AI tools. Teachers were required to ensure that AI played only a supplementary role in education.

The ministry also announced plans to increase investment in AI education infrastructure and gradually establish AI education bases in primary and secondary schools. It also plans to offer more training programs for teachers to improve their understanding and use of AI tools. These measures are part of a broader effort to integrate AI into the education system while ensuring that it is used responsibly and ethically.
