As generative AI advances rapidly, misuse of the technology has fueled the proliferation of deepfakes, posing significant risks to society and to individuals. Building secure, reliable systems for detecting and defending against deepfake content is more critical than ever.

In response to the emerging security challenges posed by AI-generated forgeries, the International Joint Conference on Artificial Intelligence (IJCAI) 2025 will host a workshop and challenge focused on Deepfake Detection, Localization, and Explainability. The event aims to bring together leading global scholars and industry experts to tackle core technical challenges, including multimodal forgery, weakly supervised forgery localization, forgery explainability, and adversarial attacks and defenses against generative AI.
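To make the first two of these tasks concrete: detection asks whether a sample is forged at all, while localization asks where it was manipulated. Below is a minimal sketch of how the two are often combined in one network, assuming a generic PyTorch setup; the model, layer sizes, and head names are illustrative placeholders, not the challenge's baseline.

```python
import torch
import torch.nn as nn

class DetectAndLocalize(nn.Module):
    """Illustrative two-headed network: a shared convolutional backbone
    feeds (1) an image-level real/fake classifier and (2) a per-pixel
    mask head that marks suspected forged regions."""

    def __init__(self) -> None:
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Image-level detection head: one logit, > 0 means "fake".
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
        # Localization head: one logit per pixel for the forgery mask.
        self.mask_head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        feats = self.backbone(x)
        return self.cls_head(feats), self.mask_head(feats)

model = DetectAndLocalize()
images = torch.randn(2, 3, 224, 224)         # a dummy batch
fake_logits, mask_logits = model(images)
print(fake_logits.shape, mask_logits.shape)  # (2, 1) and (2, 1, 224, 224)
```

Explainability, the third task, is typically layered on top of such heads, for example by attributing the classifier's score back to the regions highlighted by the predicted mask.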

The challenge, organized by Ant Group Digital Technologies, will provide participants with a massive dataset of over 1.8 million samples for hands-on attack-and-defense exercises against AI fraud risks, advancing AI security research. The event is co-organized by several prestigious institutions, including the Agency for Science, Technology and Research (A*STAR) in Singapore, Nanyang Technological University, Tsinghua University, the Institute of Automation of the Chinese Academy of Sciences, Hefei University of Technology, the Anhui Provincial Key Laboratory of Digital Security, the University of Rochester, the University at Buffalo, and the University of Campinas.

The challenge will feature both an image track and an audio-visual track, adopting a defense-through-competition approach to address the limitations of existing detection algorithms in accurately locating forged regions and in identifying coordinated multimodal audio-visual forgeries. The initiative aims to raise the level of content security in the AI era.
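As a concrete illustration of what "accurately locating forged regions" means in evaluation terms, localization quality is commonly scored by comparing a predicted forgery mask against a ground-truth mask, for example with intersection-over-union (IoU). The metric below is a generic sketch of that idea, not the challenge's official scoring code.

```python
import numpy as np

def mask_iou(pred_probs: np.ndarray, gt_mask: np.ndarray,
             threshold: float = 0.5) -> float:
    """Intersection-over-union between a predicted forgery-probability
    map and a binary ground-truth mask of the manipulated region."""
    pred = pred_probs >= threshold
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:   # nothing forged and nothing predicted: perfect score
        return 1.0
    inter = np.logical_and(pred, gt).sum()
    return float(inter) / float(union)

# Toy check: a prediction overlapping half of a 4-pixel forged patch.
gt = np.zeros((8, 8)); gt[2:4, 2:4] = 1
pred = np.zeros((8, 8)); pred[2:4, 3:5] = 0.9
print(round(mask_iou(pred, gt), 3))  # intersection 2 / union 6 = 0.333
```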

Notably, Ant Group Digital Technologies has opened a multimodal deepfake dataset of over 1.8 million forged samples, covering 88 forgery techniques, to challenge participants. This effort will promote the establishment of verifiable …

