
Hong Kong, March 8, 2025 – The prestigious Conference on Computer Vision and Pattern Recognition (CVPR) 2025, a leading global AI conference, has sparked controversy after rejecting a paper for allegedly using AI to generate responses to reviewer comments. The incident raises serious ethical questions about the integrity of the peer-review process in the age of large language models (LLMs).

The news broke as AI research groups worldwide anxiously awaited the release of CVPR 2025 acceptance decisions. This year's conference saw a record number of submissions, reflecting the continued surge in popularity and competitiveness of the AI field.

According to sources familiar with the matter, the paper in question was flagged after reviewers noticed inconsistencies and generic language in the authors’ responses to their feedback. Suspicions arose that the responses were not genuinely crafted by the authors but rather generated by an LLM. While the specific LLM used, if any, remains unconfirmed, the incident highlights the potential for misuse of AI tools within the academic publishing ecosystem.

"The integrity of the peer-review process is paramount to maintaining the quality and credibility of scientific research," stated a CVPR official, speaking on condition of anonymity. "The conference takes any attempt to manipulate this process extremely seriously. Using AI to generate responses to reviewers, without proper disclosure and oversight, is a clear violation of ethical guidelines."

The rejection comes amidst growing concerns about the use of LLMs in various aspects of research, including writing, editing, and even reviewing papers. While LLMs can be valuable tools for accelerating research and improving communication, their potential for misuse raises significant ethical dilemmas.

Record Submissions and Lower Acceptance Rate Reflect Intense Competition

This year’s CVPR saw a significant increase in submissions, further intensifying the competition for acceptance. The conference received 13,008 valid submissions, with the program committee ultimately recommending 2,878 papers for acceptance. This translates to an acceptance rate of just 22.1%, a historic low, down from last year’s 23.6% (2,719 papers accepted out of 11,532 submissions).
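As a quick sanity check on the reported figures, the acceptance rates can be recomputed from the submission and acceptance counts quoted above (a minimal sketch; the helper function name is illustrative, not part of any official tooling):

```python
def acceptance_rate(accepted: int, submitted: int) -> float:
    """Return the acceptance rate as a percentage, rounded to one decimal place."""
    return round(100 * accepted / submitted, 1)

# Figures as quoted in this article.
rate_2025 = acceptance_rate(2878, 13008)  # 22.1 — CVPR 2025
rate_2024 = acceptance_rate(2719, 11532)  # 23.6 — CVPR 2024
print(rate_2025, rate_2024)
```

Both results match the percentages reported for the two years.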

The lower acceptance rate reflects the increasingly competitive landscape of the computer vision field. Despite the challenges, many researchers celebrated their success. Tianxing Chen, a fourth-year undergraduate student at Shenzhen University who will be pursuing his Ph.D. at the University of Hong Kong, announced the acceptance of three of his papers at CVPR 2025.

Ethical Implications and Future Directions

The CVPR 2025 incident underscores the urgent need for clear guidelines and ethical frameworks governing the use of AI in academic publishing. The community must grapple with questions such as:

  • What constitutes acceptable use of LLMs in research?
  • How can we detect and prevent the misuse of AI in the peer-review process?
  • What are the implications of using AI to evaluate AI research?

The incident serves as a wake-up call for the AI community to proactively address the ethical challenges posed by the rapid advancement of LLMs. Moving forward, conferences and journals must develop robust policies and procedures to ensure the integrity and fairness of the peer-review process in the age of AI. The future of AI research depends on it.


