Microsoft has recently introduced a series of new tools designed to strengthen control over Copilot's generative AI and prevent potentially dangerous incidents like the Supremacy AGI episode. The move underscores Microsoft's emphasis on AI safety, aiming to ensure that the technology can advance without exposing human society to unforeseeable risks.
According to IT Home, Microsoft has first imposed a character-count limit to reduce the chance of Copilot "hallucinating." The change means Copilot will operate under stricter constraints when generating text, helping it avoid producing inaccurate or misleading content. In addition, Microsoft has introduced a feature called Groundedness Detection, which helps users identify and correct text-based hallucinations, improving the accuracy and reliability of the information Copilot provides.
The rollout of these measures shows that Microsoft is actively addressing the ethical and safety challenges AI technology may pose. By using technical means to make AI more controllable and transparent, the company aims to give users safer and more responsible AI tools. These efforts matter not only for Microsoft's own technology but also set a new benchmark for AI safety standards across the industry.
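As an illustration of what such a groundedness check might involve, the sketch below compares a generated sentence against its source documents through a REST call. It is a minimal sketch only: the endpoint path, API version, and request fields are assumptions modeled on Azure AI Content Safety's preview groundedness-detection interface, and the report does not confirm how Copilot itself invokes the feature.

"""
Minimal sketch of a groundedness-detection style check.
The endpoint path, API version, and field names are assumptions and
may not match Microsoft's actual Copilot-facing interface.
"""
import os
import requests

# Assumed configuration; replace with your own resource endpoint and key.
ENDPOINT = os.environ.get("CONTENT_SAFETY_ENDPOINT", "https://<resource>.cognitiveservices.azure.com")
API_KEY = os.environ.get("CONTENT_SAFETY_KEY", "<key>")

def check_groundedness(generated_text: str, sources: list[str]) -> dict:
    """Ask the service whether generated_text is supported by the given sources."""
    resp = requests.post(
        f"{ENDPOINT}/contentsafety/text:detectGroundedness",
        params={"api-version": "2024-02-15-preview"},  # assumed preview version
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={
            "domain": "Generic",
            "task": "Summarization",
            "text": generated_text,       # the model output to verify
            "groundingSources": sources,  # reference documents to check against
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # expected to flag ungrounded (hallucinated) spans

if __name__ == "__main__":
    result = check_groundedness(
        "The product launched in 2019.",
        ["The product was first released to customers in 2021."],
    )
    print(result)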
English version:
News Title: “Microsoft Tackles Runaway AI: New Tools Help Curb Copilot ‘Hallucinations’ for Safer Generative AI”
Keywords: Microsoft, Copilot, AI Safety
News Content: Microsoft recently unveiled a series of new tools aimed at reinforcing control over Copilot’s generative AI to prevent potentially hazardous incidents akin to the Supremacy AGI scenario. This move underscores the company’s commitment to AI safety, ensuring that technological advancements do not pose unforeseen risks to human society.
As reported by IT Home, Microsoft has begun by implementing character limits to reduce the likelihood of Copilot “hallucinations.” This adjustment means that Copilot will face stricter constraints when generating text, curbing the production of inaccurate or misleading content. Furthermore, the company has introduced a feature called “Groundedness Detection,” which helps users identify and correct textual hallucinations, enhancing the accuracy and reliability of information.
These measures signal Microsoft’s proactive approach to addressing ethical and safety challenges posed by AI technology. By enhancing AI’s controllability and transparency through technical means, the company strives to provide users with safer and more responsible AI tools. Microsoft’s efforts not only have significant implications for its own technological development but also set a new benchmark for AI safety standards across the industry.
Source: https://www.ithome.com/0/759/553.htm