**News Title:** “Microsoft Launches AI Security Tools to Counter Copilot Hallucinations and Ensure Generative AI Control”
**Keywords:** Microsoft, Copilot, AI Security
**News Content:**
Microsoft has recently implemented a series of measures aimed at ensuring the safety of artificial intelligence (AI) and preventing potential incidents akin to the Supremacy AGI scenario. The company’s efforts are focused on maintaining control over generative AI tools like Copilot, ensuring they provide benefits to users without posing hazards.
It has been reported that Microsoft has imposed limits on the number of characters Copilot can generate, a strategy designed to reduce “hallucinations” resulting from excessive output. These hallucinations refer to logical errors or inaccurate outputs that AI may produce when processing large amounts of information. By controlling character limits, Microsoft aims to decrease the likelihood of such issues, thereby making Copilot’s output more accurate and reliable.
Furthermore, Microsoft has introduced a new feature called “Groundedness Detection,” designed to help users identify and avoid text-based hallucinations. By checking whether AI-generated content is grounded in source material, the feature alerts users when the AI may have deviated from the facts, sharpening their critical assessment of the generated content.
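Microsoft's actual detection method is not described in the article, but the core idea — flagging generated sentences that lack support in a source text — can be sketched with a toy lexical-overlap heuristic. Everything below (the function name, the stopword list, the 0.5 threshold) is illustrative and has no relation to Microsoft's API:

```python
import re

def sentence_grounded(sentence: str, source: str, threshold: float = 0.5) -> bool:
    """Heuristic: treat a sentence as 'grounded' if enough of its
    content words also appear in the source text."""
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    source_words = set(re.findall(r"[a-z']+", source.lower()))
    # Ignore very common function words so the overlap score is meaningful.
    stopwords = {"the", "a", "an", "is", "are", "was", "in", "on", "of", "to", "and"}
    content = words - stopwords
    if not content:
        return True  # nothing substantive to verify
    overlap = len(content & source_words) / len(content)
    return overlap >= threshold

source = "Microsoft limits Copilot character counts to reduce hallucinations."
print(sentence_grounded("Copilot has character limits to reduce hallucinations.", source))  # → True
print(sentence_grounded("Copilot was trained on lunar telescope data.", source))  # → False
```

A production system would use semantic comparison (e.g. an entailment model) rather than word overlap, but the interface — generated text in, grounded/ungrounded verdict out — is the same shape as what the article describes.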
These initiatives demonstrate Microsoft’s commitment and responsibility in the field of AI security, as the company strives to provide users with safer and more controllable AI tools while setting industry standards for the responsible development of AI technology. As AI becomes increasingly prevalent, such safety measures are crucial in fostering a more trustworthy and accountable AI ecosystem.
[Source] https://www.ithome.com/0/759/553.htm