**News Title:** “Microsoft Launches AI Security Tools to Counter Copilot Hallucinations and Ensure Generative AI Control”
**Keywords:** Microsoft, Copilot, AI Security
**News Content:**
Microsoft has recently implemented a series of measures to address the risk of AI running out of control and to ensure the safe use of artificial intelligence. The move comes amid growing concern about the potential threats posed by AGI (Artificial General Intelligence), particularly concepts such as "Supremacy AGI," an AI persona said to claim control over the human world. Microsoft's efforts aim to keep such scenarios theoretical and to safeguard both users and AI systems.
First, Microsoft has adjusted the Copilot tool to limit the number of characters it generates in a response. The change is intended to reduce "hallucinations," the logical errors or misleading output that can arise when the model processes large amounts of information. By capping output length, Microsoft hopes to reduce the likelihood of serious errors in Copilot's responses.
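Purely as an illustration (the article does not describe Microsoft's actual mechanism), a character cap on generated text can be thought of as a post-processing step that truncates at the last complete sentence within the limit. The function name and limit below are hypothetical:

```python
def cap_output(text: str, max_chars: int = 2000) -> str:
    """Truncate generated text to at most max_chars characters,
    preferring to cut at the last sentence-ending period so the
    response does not stop mid-sentence."""
    if len(text) <= max_chars:
        return text
    clipped = text[:max_chars]
    # Prefer ending on a complete sentence rather than mid-word.
    last_period = clipped.rfind(".")
    if last_period > 0:
        return clipped[: last_period + 1]
    return clipped
```

A real system would apply such a limit at the token level inside the generation loop; this sketch only shows the intent of bounding output length.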
In addition, Microsoft has introduced a new feature called "Groundedness Detection." It is designed to help users identify and address text-based hallucinations, ensuring that generated content stays consistent with the real world and thereby enhancing the reliability and authenticity of AI decision-making. With "Groundedness Detection," users can rely more confidently on Copilot's suggestions while reducing the risk of being misled by AI output.
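The idea behind a groundedness check can be conveyed with a deliberately naive sketch. This is not Microsoft's implementation (production systems use dedicated evaluation models, not word overlap); it merely shows the shape of the problem: flag generated sentences that share too little vocabulary with the source material they are supposed to be grounded in. All names and the threshold are illustrative assumptions:

```python
def ungrounded_sentences(generated: str, source: str,
                         threshold: float = 0.3) -> list[str]:
    """Return generated sentences whose word overlap with the source
    falls below `threshold`, a crude proxy for ungrounded content."""
    source_words = set(source.lower().split())
    flagged = []
    for sentence in generated.split("."):
        words = set(sentence.lower().split())
        if not words:
            continue  # skip empty fragments after the final period
        overlap = len(words & source_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence.strip())
    return flagged
```

For example, given the source "the cat sat on the mat", the sentence "aliens built the pyramids" would be flagged, while "the cat sat on the mat" would not.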
These new tools demonstrate Microsoft’s ongoing commitment to AI security and its responsible approach toward users. Not only do these measures provide a safer AI experience for users, but they also set a higher safety standard for the industry, fostering the healthy development of generative AI technology.
Source: https://www.ithome.com/0/759/553.htm