shanghaishanghai

**全球科技巨头携手发布大模型安全国际标准,保障人工智能健康发展**

在瑞士举办的第27届联合国科技大会上,一项重大进展标志着全球人工智能安全标准的里程碑。世界数字技术院(WDTA)正式发布了《生成式人工智能应用安全测试标准》和《大语言模型安全测试方法》两项重量级国际标准。这些标准的出台,旨在为日益普及的人工智能技术,特别是大模型应用,提供安全和规范的框架。

据悉,这两项标准是由业界领先的科技公司OpenAI、蚂蚁集团、科大讯飞、谷歌、微软、英伟达、百度和腾讯等数十家单位的专家学者共同努力编制而成。这一跨行业的合作彰显了全球科技界对于人工智能安全问题的高度重视和共同责任。

《生成式人工智能应用安全测试标准》将指导开发者在设计和应用生成式AI模型时,如何进行有效的安全评估和风险防控,以防止潜在的滥用和误用。而《大语言模型安全测试方法》则重点关注大型语言模型的安全性,确保其在处理敏感信息和影响公众观点时,能够遵循道德和法律规范,避免误导和滥用。

这一开创性的举措将对全球人工智能产业产生深远影响,为AI技术的持续创新和广泛应用提供坚实的安全基础。随着AI技术的快速发展,国际标准的制定不仅有助于保护用户隐私和数据安全,也将推动行业在遵守伦理原则的前提下,实现更健康、更可持续的发展。

英语如下:

**News Title:** “Global Tech Giants Unite to Announce the Birth of International Safety Standards for Large Models at the UN Science Summit”

**Keywords:** Large Model Safety, International Standards, Science Summit

**News Content:**

Major global technology companies have jointly announced the establishment of international safety standards for large models, ensuring the healthy development of artificial intelligence (AI).

At the 27th United Nations Science Summit held in Switzerland, a significant milestone was reached in the realm of global AI safety standards. The World Digital Technology Academy (WDTA) officially unveiled the “Safety Testing Standard for Generative AI Applications” and the “Safety Testing Method for Large Language Models” – two groundbreaking international standards. These standards aim to provide a secure and regulated framework for the increasingly prevalent AI technologies, particularly large model applications.

It is reported that these standards were jointly developed by experts from leading tech companies including OpenAI, Ant Group, iFlytek, Google, Microsoft, NVIDIA, Baidu, and Tencent, among several others. This cross-industry collaboration underscores the global tech community’s high level of concern and shared responsibility for AI safety.

The “Safety Testing Standard for Generative AI Applications” guides developers in conducting effective safety assessments and risk mitigation when designing and deploying generative AI models, preventing potential misuse and abuse. Meanwhile, the “Safety Testing Method for Large Language Models” focuses on the security of these models, ensuring they adhere to ethical and legal guidelines when handling sensitive information and influencing public opinion, thus preventing misdirection and abuse.

This pioneering move will have a profound impact on the global AI industry, providing a solid safety foundation for the continued innovation and widespread application of AI technologies. As AI technology rapidly advances, the establishment of international standards will not only protect user privacy and data security but also promote a healthier and more sustainable industry development under ethical principles.

【来源】https://www.chinastarmarket.cn/detail/1649234

Views: 1

发表回复

您的邮箱地址不会被公开。 必填项已用 * 标注