Hong Kong, March 8, 2025 – The prestigious Conference on Computer Vision and Pattern Recognition (CVPR) 2025, a leading global AI conference, has sparked controversy after rejecting a paper for allegedly using AI to generate responses to reviewer comments. The incident raises serious ethical questions about the integrity of the peer-review process in the age of large language models (LLMs).
The news arrived as AI research groups worldwide were anxiously awaiting the release of CVPR 2025 acceptance decisions. This year’s conference saw a record number of submissions, reflecting the continued surge in popularity and competitiveness of the AI field.
According to sources familiar with the matter, the paper in question was flagged after reviewers noticed inconsistencies and generic language in the authors’ responses to their feedback. Suspicions arose that the responses were not genuinely crafted by the authors but rather generated by an LLM. While the specific LLM used, if any, remains unconfirmed, the incident highlights the potential for misuse of AI tools within the academic publishing ecosystem.
“The integrity of the peer-review process is paramount to maintaining the quality and credibility of scientific research,” stated a CVPR official, speaking on condition of anonymity. “The conference takes any attempt to manipulate this process extremely seriously. Using AI to generate responses to reviewers, without proper disclosure and oversight, is a clear violation of ethical guidelines.”
The rejection comes amidst growing concerns about the use of LLMs in various aspects of research, including writing, editing, and even reviewing papers. While LLMs can be valuable tools for accelerating research and improving communication, their potential for misuse raises significant ethical dilemmas.
Record Submissions and Lower Acceptance Rate Reflect Intense Competition
This year’s CVPR saw a significant increase in submissions, further intensifying the competition for acceptance. The conference received 13,008 valid submissions, with the program committee ultimately recommending 2,878 papers for acceptance. This translates to an acceptance rate of just 22.1%, a historic low, down from last year’s 23.6% (2,719 papers accepted out of 11,532 submissions).
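The acceptance-rate figures above follow directly from the reported submission and acceptance counts, as a quick calculation confirms:

```python
# Acceptance-rate arithmetic using the figures reported in the article.
submissions_2025, accepted_2025 = 13_008, 2_878
submissions_2024, accepted_2024 = 11_532, 2_719

rate_2025 = accepted_2025 / submissions_2025 * 100
rate_2024 = accepted_2024 / submissions_2024 * 100

print(f"CVPR 2025 acceptance rate: {rate_2025:.1f}%")  # 22.1%
print(f"CVPR 2024 acceptance rate: {rate_2024:.1f}%")  # 23.6%
```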
The lower acceptance rate reflects the increasingly competitive landscape of the computer vision field. Despite the challenges, many researchers celebrated their success. Tianxing Chen, a fourth-year undergraduate student at Shenzhen University who will be pursuing his Ph.D. at the University of Hong Kong, announced the acceptance of three of his papers at CVPR 2025.
Ethical Implications and Future Directions
The CVPR 2025 incident underscores the urgent need for clear guidelines and ethical frameworks governing the use of AI in academic publishing. The community must grapple with questions such as:
- What constitutes acceptable use of LLMs in research?
- How can we detect and prevent the misuse of AI in the peer-review process?
- What are the implications of using AI to evaluate AI research?
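On the detection question, one illustrative (and admittedly crude) approach is to flag rebuttal text with an unusually high density of stock phrases of the kind reviewers reportedly noticed. The phrase list, threshold, and function names below are hypothetical, shown only as a minimal sketch; real detection systems are far more sophisticated and far less reliable than this suggests:

```python
# Illustrative sketch only: a naive heuristic that flags rebuttal text
# containing many stock phrases often associated with LLM-generated
# responses. The phrase list and threshold are invented for this example.
STOCK_PHRASES = [
    "we thank the reviewer for the insightful comment",
    "we appreciate the opportunity to clarify",
    "as suggested, we have revised",
    "we hope this addresses the concern",
]

def generic_phrase_density(text: str) -> float:
    """Return the fraction of stock phrases appearing in the text."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in STOCK_PHRASES)
    return hits / len(STOCK_PHRASES)

def looks_generic(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose stock-phrase density meets the threshold."""
    return generic_phrase_density(text) >= threshold
```

Even this toy example hints at the core difficulty: polite boilerplate is common in genuine rebuttals too, so any phrase-matching scheme risks false accusations, which is precisely why the community needs agreed-upon standards rather than ad hoc suspicion.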
The incident serves as a wake-up call for the AI community to proactively address the ethical challenges posed by the rapid advancement of LLMs. Moving forward, conferences and journals must develop robust policies and procedures to ensure the integrity and fairness of the peer-review process in the age of AI. The future of AI research depends on it.