
News Title: “Meta Launches 2.0 Version of ‘Segment Anything’ Model: Easily Segmenting Videos”

Keywords: Meta, open source, video segmentation, SAM 2

News Content:
Meta announced its latest research breakthrough at SIGGRAPH: Segment Anything Model 2 (SAM 2), the successor to the original SAM model released last year and a significant step forward in computer vision. SAM 2 performs real-time, automatic segmentation of objects in both static images and video, unifying image and video segmentation in a single system for the first time.

SAM 2 not only segments objects in static images but also segments video content in real time, a major advance for many application scenarios. It can handle any object in any video or image, including objects and visual domains it has never seen, without custom adaptation. This breakthrough opens up new possibilities across industries, particularly autonomous driving, video editing, and mixed reality.
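The core interaction pattern described above is promptable segmentation: the user supplies a prompt (for example, a click on a point) and the model returns the mask of the object containing it. The toy sketch below illustrates only that prompt-to-mask idea using a flood fill over a tiny label grid; it is a hypothetical simplification, not SAM 2's learned model or its API.

```python
def segment_from_click(grid, click):
    """Toy 'promptable segmentation': flood-fill the connected region of
    identical values containing the clicked cell. A conceptual stand-in
    for point-prompted mask prediction; SAM 2 uses a neural network."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = click
    target = grid[r0][c0]
    mask = [[False] * cols for _ in range(rows)]
    stack = [(r0, c0)]
    while stack:
        r, c = stack.pop()
        if 0 <= r < rows and 0 <= c < cols and not mask[r][c] and grid[r][c] == target:
            mask[r][c] = True  # cell belongs to the clicked object
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

# A 3x3 "image": the 1s form the object, the 0s are background.
image = [
    [0, 0, 1],
    [0, 1, 1],
    [0, 0, 0],
]
mask = segment_from_click(image, (0, 2))  # click a pixel of the object
print(sum(row.count(True) for row in mask))  # 3 pixels in the segment
```

In the real system, the same click-style prompt drives a learned predictor rather than a flood fill, which is what allows it to generalize zero-shot to unseen objects.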

SAM 2's real-time performance is enabled by its innovative streaming memory design. Meta has also open-sourced the model and provided a web-based demo that lets users segment and track objects in videos.

When discussing SAM 2, Meta CEO Mark Zuckerberg stated, “To be able to do this in video, and to do it in zero-shot, telling it what you want, that’s really cool.” The release of SAM 2 not only demonstrates the progress in computational efficiency but also hints at the immense potential of AI in enhancing human productivity, creativity, and quality of life.

SAM 2 was trained with the support of a vast dataset of real-world videos containing over 600,000 masklets. Meta has also released this dataset and plans to use it in further research and applications.

In summary, the release of SAM 2 marks an important milestone in the field of computer vision, opening up more possibilities for AI applications and driving the development of related technologies.

Source: https://www.jiqizhixin.com/articles/2024-07-30-5
