News Title: Yann LeCun Discusses New Approaches to Processing Video Data with AI
Keywords: AI, Video Data, Abstract Space Prediction
News Content:
At the recent World Economic Forum meeting, Turing Award winner and Meta’s Chief AI Scientist Yann LeCun shared his views on how AI can be made to understand video data. LeCun believes that although there is no clear answer yet, it is certain that the models suited to processing video data are not the generative models widely used today. Instead, he said, new models should learn to predict in an abstract representation space rather than in pixel space.
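To make the distinction LeCun draws more concrete, the following is a minimal, illustrative PyTorch sketch, not any model he has proposed: all module names, sizes, and the training setup are assumptions. It encodes consecutive video frames into compact embeddings and trains a predictor to match the next frame's embedding, so the loss is computed in representation space rather than over pixels.

# A toy sketch of prediction in representation space (illustrative only).
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Maps a video frame (3x64x64) to a compact embedding."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, embed_dim),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.net(frame)

class LatentPredictor(nn.Module):
    """Predicts the next frame's embedding from the current one."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

if __name__ == "__main__":
    encoder, predictor = FrameEncoder(), LatentPredictor()
    # Two consecutive frames (random stand-ins): batch of 8, each 3x64x64.
    frame_t = torch.randn(8, 3, 64, 64)
    frame_t1 = torch.randn(8, 3, 64, 64)

    z_t = encoder(frame_t)
    # The target embedding is detached; real systems use extra machinery
    # (e.g. a separate target encoder) to prevent representation collapse.
    z_t1 = encoder(frame_t1).detach()

    # The loss lives in representation space, not pixel space.
    loss = nn.functional.mse_loss(predictor(z_t), z_t1)
    loss.backward()
    print(f"latent prediction loss: {loss.item():.4f}")

The point of the sketch is the placement of the loss: nothing in it asks the network to reconstruct pixels, only to anticipate how the abstract representation of the scene evolves from one frame to the next.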
LeCun’s remarks point to a likely direction for AI in handling video data. AI has long made significant progress in image recognition and processing, yet video data still poses many challenges. Compared with generative models, prediction in an abstract representation space may offer a new path toward AI understanding of video.
Source: https://mp.weixin.qq.com/s/sAWFkcTFfZVJ_oLKditqVA