Shanghai, China – As generative AI technologies rapidly advance, concerns surrounding the safety and ethical implications of diffusion models are growing. These models, capable of generating incredibly realistic images, can also be exploited to create harmful content, including sensitive, inappropriate, or copyright-infringing material. Addressing this critical challenge, researchers at Fudan University have developed a novel approach to concept erasure in diffusion models, achieving state-of-the-art (SOTA) results. Their work, titled [Insert Actual Title of Paper Here Once Available], has been accepted to the prestigious AAAI 2025 conference.

The research team, led by Associate Professor Jingjing Chen, focuses on bolstering AI safety. The first and second authors of the paper are Feng Han and Kai Chen, a master's student and a doctoral student, respectively, in the Visual and Learning Lab at Fudan University. The team has a strong track record in AI safety research, with multiple publications in top-tier conferences such as CVPR, ECCV, AAAI, and ACM MM.

One of the key applications of their research lies in mitigating the generation of explicit or suggestive content. Imagine a scenario where a user prompts a text-to-image model to generate a picture of a person. Where the underlying model might otherwise produce an overly revealing image, the Fudan team's method, implemented in their risk-concept-removal network dubbed DuMo, can effectively clothe the individual in the generated image.
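From the user's perspective, a concept-erased model can drop into an existing text-to-image workflow unchanged, because the erasure lives in the model weights rather than in prompt filters. The sketch below shows what that might look like with the Hugging Face diffusers library; the checkpoint name "example-org/sd-v1-5-concept-erased" is a hypothetical placeholder, since the article does not describe how (or whether) DuMo checkpoints are distributed.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical concept-erased checkpoint; the name is an illustrative placeholder,
# not a real release associated with the paper.
pipe = StableDiffusionPipeline.from_pretrained(
    "example-org/sd-v1-5-concept-erased", torch_dtype=torch.float16
).to("cuda")

# The same prompt a user would give the original model; no extra safety flags are
# needed, because the undesired concept has been removed from the weights themselves.
image = pipe("a portrait photo of a person at the beach").images[0]
image.save("portrait.png")
```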

DuMo’s capabilities extend beyond simply covering up generated content. The network is designed to remove specific risky concepts from the generated image while preserving other important attributes and details. This means that the identity of the person, their pose, and other relevant features are maintained, ensuring the generated image remains consistent with the user’s intent, minus the undesirable elements. This erase-where-you-point functionality surpasses existing methods in its precision and control.

Furthermore, DuMo can be used to prevent text-to-image models from mimicking the styles of specific artists, thereby reducing the risk of copyright infringement. This is particularly relevant given the ongoing debate surrounding the use of artists’ styles in training generative AI models.

Existing methods for addressing these issues often rely on concept erasure through fine-tuning, in which the diffusion model's weights are updated so that it can no longer render a target concept. However, the Fudan University team's approach represents a significant advancement, offering more granular control and improved performance.
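To make that baseline concrete, the sketch below illustrates one common form of concept erasure through fine-tuning: a negative-guidance objective in the spirit of ESD (Gandikota et al., 2023), where a trainable copy of the noise predictor is steered away from the frozen model's prediction for the target concept. This is not the DuMo method described in the article; the tiny `NoisePredictor` module, the random embeddings, and all hyperparameters are stand-ins chosen only to keep the example self-contained and runnable.

```python
import torch
import torch.nn as nn

# Toy stand-in for a diffusion model's noise predictor (a real system would use a UNet).
class NoisePredictor(nn.Module):
    def __init__(self, dim=16, cond_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, 64), nn.ReLU(), nn.Linear(64, dim)
        )

    def forward(self, x_t, cond, t):
        # Concatenate noisy sample, condition embedding, and (scaled) timestep.
        t = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, cond, t], dim=-1))

dim, cond_dim = 16, 8
frozen = NoisePredictor(dim, cond_dim)      # original model, kept fixed
erased = NoisePredictor(dim, cond_dim)      # copy being fine-tuned
erased.load_state_dict(frozen.state_dict())
for p in frozen.parameters():
    p.requires_grad_(False)

concept_emb = torch.randn(1, cond_dim)      # embedding of the concept to erase (stand-in)
null_emb = torch.zeros(1, cond_dim)         # unconditional / empty-prompt embedding
eta = 1.0                                   # negative-guidance strength
opt = torch.optim.Adam(erased.parameters(), lr=1e-4)

for step in range(200):
    x_t = torch.randn(1, dim)               # noisy latent sample (stand-in)
    t = torch.randint(0, 1000, (1,))
    with torch.no_grad():
        eps_cond = frozen(x_t, concept_emb, t)  # frozen prediction with the concept
        eps_null = frozen(x_t, null_emb, t)     # frozen unconditional prediction
        # Target pushes the concept-conditioned prediction *away* from the concept.
        target = eps_null - eta * (eps_cond - eps_null)
    pred = erased(x_t, concept_emb, t)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a real system, x_t would be produced by noising training images and the embeddings would come from the text encoder; per the article, DuMo's advance over this kind of global fine-tuning is its finer-grained, erase-where-you-point control.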

The acceptance of this research to AAAI 2025 underscores its significance in the field of AI safety. As diffusion models become increasingly powerful and pervasive, ensuring their responsible use is paramount. The Fudan University team’s work provides a crucial step towards building safer and more ethical AI systems.

Looking Ahead:

The development of DuMo represents a promising advancement in mitigating the risks associated with diffusion models. Future research could explore further refinements to the network, including:

  • Expanding the range of detectable and removable risky concepts.
  • Improving the robustness of the network against adversarial attacks.
  • Developing user-friendly interfaces for integrating DuMo into existing text-to-image platforms.

By continuing to address these challenges, researchers can pave the way for a future where generative AI technologies are used responsibly and ethically.

References:

  • [Insert Link to AAAI 2025 Paper Once Available]
  • [Insert Links to Relevant Publications from the Fudan University Team]
  • [Insert Links to Relevant Articles on AI Safety and Diffusion Models]

Note: The title of the paper and specific links will be updated once the information becomes publicly available.

