
In artificial intelligence, advances in image generation keep pushing the boundaries of the technology. Diffusion models, a state-of-the-art class of image generators, have achieved remarkable photorealism in recent years. But this progress has brought safety concerns, including the generation of harmful content and potential copyright infringement, issues that can spark legal and ethical disputes.

Machine unlearning (MU) techniques have been proposed to keep a model from generating inappropriate images when it receives an unsafe prompt, but their real-world effectiveness remains unverified. A research team from Michigan State University and Intel asked a pointed question: are unlearned diffusion models really safe?

At ECCV 2024 the team published a study introducing UnlearnDiffAtk, an efficient adversarial text-prompt generation method that requires no auxiliary model and is designed to probe the safety and reliability of unlearned diffusion models. The paper's first authors are Yimeng Zhang and Jinghan Jia, PhD students in the Department of Computer Science at Michigan State University and members of the OPTML Lab, advised by Assistant Professor Sijia Liu.

What sets UnlearnDiffAtk apart is that it searches for discrete adversarial text prompts, rather than attacking in a continuous text-embedding space as traditional methods do. By exploiting the diffusion model's own built-in classification ability, UnlearnDiffAtk generates adversarial prompts without relying on any additional model, directly challenging the safety of the unlearned model.
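
The "built-in classification ability" refers to the diffusion-classifier idea: a diffusion model's noise-prediction loss for an image under a concept prompt approximates the negative log-likelihood of that image given the concept, so comparing losses across concepts yields class probabilities. The numpy sketch below illustrates only this principle with stand-in loss values; the function name and numbers are illustrative assumptions, not the authors' code:

```python
import numpy as np

def diffusion_classifier_posterior(losses):
    """Turn per-concept denoising losses into class probabilities.

    The noise-prediction loss for image x under concept prompt c
    approximates -log p(x | c) up to a constant, so a softmax over
    negative losses gives p(c | x) under a uniform prior.
    """
    neg = -np.asarray(losses, dtype=float)
    neg -= neg.max()              # subtract max for numerical stability
    p = np.exp(neg)
    return p / p.sum()

# Stand-in denoising losses for one generated image under two prompts:
# index 0 = the "unlearned" target concept, index 1 = a benign concept.
losses = [0.8, 1.5]
post = diffusion_classifier_posterior(losses)

# An adversarial prompt counts as a success when the target concept's
# posterior dominates, i.e. the unlearned model still denoises the
# supposedly forgotten concept best.
attack_succeeds = post[0] > 0.5
```

In the actual attack, this classifier signal guides the search over discrete token substitutions in the prompt, so no external classifier or auxiliary model is needed.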

The team also built an evaluation platform, the Unlearned DM Benchmark, to test both the safety of unlearned models and their image-generation quality. By combining adversarial prompt attacks with image-quality metrics such as FID and CLIP score, the benchmark gives researchers a comprehensive picture of post-unlearning model performance.
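
Of the two quality metrics, CLIP score is the simpler: it is essentially a scaled, non-negative cosine similarity between a CLIP image embedding and a CLIP text embedding of the prompt. The sketch below uses small stand-in vectors in place of real CLIP encoder outputs (the embeddings and scale factor are assumptions for illustration):

```python
import numpy as np

def clip_score(image_emb, text_emb, w=100.0):
    """CLIP score: w * max(0, cosine similarity) of the two embeddings.

    In practice image_emb and text_emb come from a CLIP image/text
    encoder pair; here they are stand-in vectors.
    """
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return w * max(0.0, float(image_emb @ text_emb))

# Toy embeddings: cosine similarity is 1/sqrt(2) ~ 0.707.
img = np.array([1.0, 0.0, 1.0])
txt = np.array([1.0, 0.0, 0.0])
score = clip_score(img, txt)
```

A higher CLIP score means the generated image matches its prompt more closely; FID, by contrast, compares the distribution of generated images against real ones, so the two metrics capture complementary aspects of generation quality.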

This open-source work offers a new perspective on the role of machine unlearning in diffusion models, and a reminder that even after unlearning, model safety remains a problem requiring ongoing attention. The team invites interested researchers to join the model-evaluation discussion by email (zhan1853@msu.edu) and help advance the field.

Source: https://www.jiqizhixin.com/articles/2024-08-26
