
Hong Kong, March 30, 2025 – The field of artificial intelligence has seen another breakthrough. Professor Yizhou Yu of the University of Hong Kong and his PhD student Meng Lou have released a new convolutional vision foundation model named OverLoCK (Overview-first-Look-Closely-next ConvNet with Context-Mixing Dynamic Kernels). The model mimics the human visual system's two-stage "overview first, then focus on details" cognitive mechanism, delivers excellent performance on image recognition, and once again demonstrates the potential of convolutional neural networks.

Inspired by Human Vision: The Top-down Attention Mechanism

When observing a complex scene, humans typically form a quick overall impression first and then focus on the key details. This two-stage "Overview-first-Look-Closely-next" cognitive mechanism, also known as Top-down Attention, is one of the main reasons the human visual system is so powerful.

However, most existing vision backbones, such as Swin, ConvNeXt, and VMamba, still adopt the classic pyramid architecture and lack explicit top-down semantic guidance. As a result, when handling complex visual tasks, they struggle to locate key regions as efficiently as humans do.

OverLoCK: A Biologically Inspired Design

To address this problem, the HKU research team introduced the Top-down Attention mechanism into vision backbone design and built the OverLoCK model. Its core idea is to decompose the deep network into three sub-models:

  • Base-Net: extracts low- and mid-level features, acting as the "retina" of the visual system; it uses Dilated RepConv layers for efficient low-level perception.
  • Overview-Net: extracts coarse high-level semantics, completing the "first glance"; also built on Dilated RepConv layers, it quickly produces high-level semantic information that serves as Top-down Guidance.
  • Focus-Net: performs fine-grained analysis under this global prior, realizing the "closer look"; built on the ContMix dynamic convolution and a gating mechanism, it makes full use of the Top-down Guidance.
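The three-sub-network decomposition above can be sketched in PyTorch. This is a hypothetical minimal illustration, not the paper's implementation: plain (dilated depthwise) convolutions stand in for Dilated RepConv layers, and the top-down guidance is injected by simple feature concatenation rather than ContMix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OverLoCKSketch(nn.Module):
    """Hypothetical sketch of the Base-Net / Overview-Net / Focus-Net split
    described above. Module contents are illustrative stand-ins only."""
    def __init__(self, dim=64, num_classes=1000):
        super().__init__()
        # Base-Net: low/mid-level features (the "retina")
        self.base_net = nn.Sequential(
            nn.Conv2d(3, dim, 4, stride=4),  # patchify stem
            nn.Conv2d(dim, dim, 3, padding=2, dilation=2, groups=dim),  # dilated depthwise stand-in
            nn.GELU(),
        )
        # Overview-Net: a cheap pass producing coarse high-level semantics
        self.overview_net = nn.Sequential(
            nn.AvgPool2d(2),
            nn.Conv2d(dim, dim, 3, padding=2, dilation=2, groups=dim),
            nn.Conv2d(dim, dim, 1),
            nn.GELU(),
        )
        # Focus-Net: fine-grained analysis guided by the overview output
        self.focus_net = nn.Conv2d(2 * dim, dim, 3, padding=1)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        feat = self.base_net(x)          # "retina" features
        guide = self.overview_net(feat)  # "first glance": coarse semantics
        guide_up = F.interpolate(guide, size=feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
        # Feature-level injection of top-down guidance (concatenation here;
        # OverLoCK also modulates kernel weights, omitted in this sketch)
        fused = self.focus_net(torch.cat([feat, guide_up], dim=1))
        return self.head(fused.mean(dim=(-2, -1)))  # global pool + classify

model = OverLoCKSketch(dim=64, num_classes=1000)
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```

The point of the structure is that the guidance tensor is computed once from a cheap coarse pass and then reused to steer the expensive fine-grained pass.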

The key innovation of OverLoCK is that the Top-down Guidance from the Overview-Net not only steers the Focus-Net at both the feature level and the kernel-weight level, but is also continuously updated in every block along the forward pass. This pervasive injection of semantic information gives the Focus-Net a more robust feature representation.
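Kernel-level guidance can be illustrated with a small dynamic convolution: the coarse guidance generates per-sample depthwise kernel weights for the fine features, alongside a simple sigmoid gate. This only loosely mirrors the idea behind ContMix; the layer below, its kernel generator, and its gate are all hypothetical constructions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedDynamicConv(nn.Module):
    """Hypothetical illustration of kernel-level top-down guidance:
    pooled guidance context produces 3x3 depthwise kernels that convolve
    the fine features, and the guidance also drives a feature gate."""
    def __init__(self, dim):
        super().__init__()
        self.kernel_gen = nn.Linear(dim, dim * 9)  # one 3x3 kernel per channel
        self.gate = nn.Conv2d(dim, dim, 1)         # feature-level gate

    def forward(self, feat, guide):
        b, c, h, w = feat.shape
        ctx = guide.mean(dim=(-2, -1))                 # (b, c) pooled context
        k = self.kernel_gen(ctx).view(b * c, 1, 3, 3)  # per-sample depthwise kernels
        # Grouped-conv trick: fold batch into channels so each sample is
        # convolved with its own dynamically generated kernels
        out = F.conv2d(feat.reshape(1, b * c, h, w), k, padding=1, groups=b * c)
        out = out.view(b, c, h, w)
        return out * torch.sigmoid(self.gate(guide))   # gated by guidance

feat = torch.randn(2, 16, 14, 14)   # fine features inside a Focus-Net block
guide = torch.randn(2, 16, 14, 14)  # upsampled top-down guidance
layer = GuidedDynamicConv(16)
print(layer(feat, guide).shape)  # torch.Size([2, 16, 14, 14])
```

Because the kernels are regenerated from the guidance at each block, the semantic prior can keep influencing the computation throughout the forward pass rather than only at a single fusion point.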

Strong Results on ImageNet, COCO, and ADE20K

OverLoCK shows strong performance on three challenging benchmarks: ImageNet, COCO, and ADE20K. For example, the 30M-parameter OverLoCK-Tiny model reaches 84.2% Top-1 accuracy on ImageNet-1K, a clear advantage over prior ConvNet, Transformer, and Mamba models.

Outlook: The Continued Evolution of Convolutional Networks

The success of OverLoCK not only demonstrates the great potential of Top-down Attention in vision tasks, but also reaffirms the vitality of convolutional neural networks. Although newer architectures such as the Transformer have risen to prominence in recent years, ConvNets, with their efficiency and strong local perception, still hold an important place in computer vision.

The release of OverLoCK will no doubt spur further development of convolutional networks and offer new directions for the design of future vision foundation models. As this line of research deepens, there is good reason to believe ConvNets will continue to play an important role in artificial intelligence.
