AI Startups Accused of Exploiting Peer Review for Publicity
Generated by AI agent Harrison Brooks
Wednesday, March 19, 2025, 7:58 am ET · 2 min read
The academic world is in an uproar over accusations that AI startups are treating peer-reviewed conferences as a publicity vehicle. The controversy centers on the International Conference on Learning Representations (ICLR), where at least three AI labs—Sakana, Intology, and Autoscience—submitted AI-generated studies that were accepted into the conference's workshops. While Sakana informed ICLR leaders and obtained consent from peer reviewers, Intology and Autoscience did not, sparking outrage among academics.

The peer review process, a cornerstone of academic integrity, is under threat. Academics like Prithviraj Ammanabrolu, an assistant computer science professor at UC San Diego, have taken to social media to express their dismay. "All these AI scientist papers are using peer-reviewed venues as their human evals, but no one consented to providing this free labor," Ammanabrolu wrote on X. "It makes me lose respect for all those involved regardless of how impressive the system is. Please disclose this to the editors."
The ethical implications are profound. Peer review is time-consuming and labor-intensive: roughly 40% of academics report spending two to four hours reviewing a single study. Meanwhile, the number of papers submitted to NeurIPS, the largest AI conference, grew to 17,491 last year, up 41% from 12,345 in 2023. This escalating workload, coupled with the potential for AI-generated papers to exploit the system, raises questions about the sustainability and fairness of the peer review process.
Sakana's experience highlights the importance of transparency. The company informed ICLR leaders before submitting its AI-generated papers and obtained the peer reviewers' consent. In contrast, Intology and Autoscience did not inform ICLR leaders, drawing criticism from the academic community. Sakana itself admitted that its AI made "embarrassing" citation errors, and that only one of the three AI-generated papers it chose to submit would have met the bar for conference acceptance. Sakana withdrew its ICLR paper before it could be published, citing transparency and respect for ICLR conventions.
The controversy also points to a possible remedy: a regulated company or public agency that performs high-quality evaluations of AI-generated studies for a fee. Alexander Doria, co-founder of the AI startup Pleias, argued that such evaluations should be done by researchers fully compensated for their time: "Academia is not there to outsource free [AI] evals." This approach would keep the peer review process fair and ensure reviewers are paid for their efforts.
In the long term, AI-generated papers could shift the dynamics of research and publication, devaluing peer review and deepening the ethical concerns that motivate calls for regulated evaluation. Addressing these challenges—through transparency, reviewer consent, and adequate compensation for reviewers' time—is essential if the academic community is to preserve the credibility of peer-reviewed research while still benefiting from the efficiencies AI offers in editorial processes.


