**Microsoft Rolls Out a Suite of Tools to Strengthen Copilot Security and Guard Against Runaway Generative AI**
In response to growing AI safety challenges, and in particular the public concern raised by recent incidents such as the "Supremacy AGI" episode, Microsoft has announced a set of new tools designed to curb "hallucinations" in its Copilot generative AI and to keep the system from behaving in uncontrolled ways. The goal is to ensure that the technology remains convenient for users while operating safely and under control.
According to IT Home (IT之家), Microsoft has first placed a limit on the number of characters Copilot can output, reducing hallucinations caused by over-long generation. The change is intended to cut inaccurate or misleading content off at the source and protect users from being misled.
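The report does not say how the output cap is actually enforced. Purely as a sketch of the general technique, the snippet below limits generation length at request time and then applies a hard character ceiling to the returned text; the OpenAI-compatible client, the placeholder model name, and the specific limits are illustrative assumptions, not details from the article.

```python
# Illustrative sketch only: the article does not describe Microsoft's mechanism.
# Shows one common way to cap generation length with an OpenAI-compatible API
# (model name, prompt, and the 400-character ceiling are assumptions).
from openai import OpenAI

MAX_OUTPUT_CHARS = 400  # hypothetical hard cap on characters shown to the user

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def bounded_completion(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        max_tokens=200,       # limit length at generation time
    )
    text = response.choices[0].message.content or ""
    # Enforce a character-level ceiling as a second guard against over-generation.
    return text[:MAX_OUTPUT_CHARS]


print(bounded_completion("Summarize today's AI safety news in two sentences."))
```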
Microsoft has also introduced a new capability, Groundedness Detection, which helps users judge whether text produced by Copilot is grounded in reliable facts and context. With this automated check, users can more easily spot content that may be hallucinated, improving the accuracy and trustworthiness of the information they receive.
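The article gives no implementation details for Groundedness Detection. As a toy illustration of the underlying idea (checking whether generated claims are supported by a supplied source text), here is a minimal lexical-overlap heuristic; it is not Microsoft's feature, and the threshold and word-filtering rules are arbitrary assumptions.

```python
# Toy illustration of the *idea* behind groundedness checking; this is not
# Microsoft's Groundedness Detection, just a naive lexical-overlap heuristic.
import re


def content_words(text: str) -> set[str]:
    """Lowercased words of length >= 4, as a crude stand-in for key claims."""
    return {w for w in re.findall(r"[a-zA-Z]+", text.lower()) if len(w) >= 4}


def ungrounded_sentences(generated: str, source: str, threshold: float = 0.5) -> list[str]:
    """Return generated sentences whose content words are mostly absent from the source."""
    source_vocab = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & source_vocab) / len(words)
        if overlap < threshold:  # too little support in the source text
            flagged.append(sentence)
    return flagged


source = "Microsoft announced new tools to reduce hallucinations in Copilot."
generated = ("Microsoft announced new tools to reduce hallucinations in Copilot. "
             "The company also acquired a quantum startup for nine billion dollars.")
print(ungrounded_sentences(generated, source))  # flags the unsupported second sentence
```

A production-grade check would rely on an entailment or LLM-based judge rather than word overlap, since paraphrased but faithful text would be wrongly flagged by a heuristic like this.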
These measures reflect Microsoft's forward-looking stance on AI safety and the growing weight technology companies place on social responsibility as AI develops. With the technology advancing rapidly, ensuring its safe and responsible use has become an industry-wide priority; Microsoft's move sets a benchmark and suggests that more safeguards of this kind will follow.
**Keywords:** Microsoft, Copilot, AI Security
【Source】https://www.ithome.com/0/759/553.htm