**News Title:** “Microsoft Launches AI Security Tools to Counter Copilot Hallucinations and Ensure Generative AI Control”

**Keywords:** Microsoft, Copilot, AI Security

**News Content:**

Title: Microsoft Unveils Suite of Tools to Strengthen Copilot Security and Mitigate Risks of Uncontrolled Generative AI

Microsoft has recently implemented a series of measures aimed at ensuring artificial intelligence (AI) safety, guarding against potential threats like the hypothetical Supremacy AGI that could pose risks to humanity. This move underscores the company’s leadership in AI ethics and security, ensuring that its Copilot tool remains under control and free from “hallucinations” while providing intelligent assistance.

According to IT Home, Microsoft's first step has been to restrict the character count of Copilot's output. This strategy is designed to reduce severe "hallucination" issues caused by excessive or misleading text generation. By limiting how many characters are produced, Microsoft aims to maintain the accuracy and reliability of AI-generated content and to prevent overly complex or false results.
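
The article does not describe how the limit is enforced, but the general pattern of budgeting characters around a text-generation call is easy to sketch. The following Python example is purely illustrative: `generate` is a hypothetical stand-in for any model backend, and the numeric limits are invented for the demonstration rather than Microsoft's actual Copilot thresholds.

```python
# Hypothetical sketch of character budgeting around a text-generation call.
# `generate` stands in for any LLM backend; the limits below are invented
# for illustration and are not Microsoft's actual Copilot thresholds.

MAX_PROMPT_CHARS = 4_000   # assumed cap on user input
MAX_OUTPUT_CHARS = 2_000   # assumed cap on generated text


def generate(prompt: str) -> str:
    """Placeholder for a real model call (e.g. a chat-completion request)."""
    return "...model output..."


def bounded_generate(prompt: str) -> str:
    """Reject oversized prompts and truncate oversized outputs."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError(
            f"Prompt is {len(prompt)} characters; the limit is {MAX_PROMPT_CHARS}."
        )
    output = generate(prompt)
    if len(output) > MAX_OUTPUT_CHARS:
        # Cut at the last sentence boundary inside the budget, if one exists,
        # so the reply does not end mid-sentence.
        cut = output.rfind(".", 0, MAX_OUTPUT_CHARS)
        output = output[: cut + 1] if cut != -1 else output[:MAX_OUTPUT_CHARS]
    return output
```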

Furthermore, the company has introduced an innovative "Groundedness Detection" feature. It helps users determine whether Copilot-generated text is grounded in solid factual sources, so that they are not swayed by inaccurate or misleading information. With this capability, users can more effectively evaluate and verify the information an AI provides, keeping their decision-making on firmer footing.
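
The article does not explain how the service scores groundedness, so the toy Python sketch below only illustrates the underlying idea: compare each generated sentence against the supplied source material and flag those with little support. The function names and the lexical-overlap heuristic are assumptions made for this example, not the Azure feature's actual method.

```python
# Toy illustration of the idea behind groundedness detection: flag generated
# sentences with little lexical overlap against the supplied sources.
# This is a conceptual stand-in, not the actual Azure AI implementation.
import re


def _words(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))


def ungrounded_sentences(generated: str, sources: list[str],
                         min_overlap: float = 0.5) -> list[str]:
    """Return sentences whose word overlap with the sources falls below min_overlap."""
    source_vocab = set().union(*(_words(s) for s in sources)) if sources else set()
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated.strip()):
        vocab = _words(sentence)
        if not vocab:
            continue
        overlap = len(vocab & source_vocab) / len(vocab)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    sources = ["The meeting was moved from Tuesday to Thursday at 3 pm."]
    answer = "The meeting is on Thursday at 3 pm. It will be held in Building 7."
    print(ungrounded_sentences(answer, sources))
    # -> ['It will be held in Building 7.']  (no support in the source)
```

A production system would rely on semantic comparison rather than raw word overlap, but the flow is the same: the generated text and its grounding sources go in, and unsupported spans come back flagged for the user to review.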

Microsoft’s new tools and strategies demonstrate its ongoing commitment to AI safety, offering users a safer and more responsible AI experience. As AI technology continues to evolve, these initiatives not only protect users but also set new benchmarks for ethical guidelines and safety standards in the industry.

[Source] https://www.ithome.com/0/759/553.htm
