Title: Hot Debate on the Safety of Large Models
Keywords: AI safety, industry discussion, regulatory requirements
News content: From ChatGPT to Sora, a new generation of artificial intelligence has attracted wide attention while also raising safety concerns. As large models such as ChatGPT develop rapidly, the industry and regulators hold high hopes for their applications while remaining alert to potential risks. At the 2024 Shanghai Network Security Industry Innovation Conference, held in Shanghai on February 28, several industry experts said that the safety of large models is crucial and calls for joint industry efforts on risk assessment and governance.
Some experts pointed out that large models can produce erroneous outputs, reflect data biases, and exhibit algorithmic discrimination. They recommended implementing safety audits and risk controls to steer large models in a beneficial direction. The industry also called for industry norms and ethical standards to be formulated as guidance for the regulation of large models.
At the meeting, representatives of regulatory authorities revealed that they would closely monitor the deployment of large models and formulate targeted policies based on the risks observed. Enterprises in attendance also said they would take active measures to improve the transparency and explainability of their products.
Source: http://www.chinanews.com/cj/2024/02-28/10171597.shtml