The World Health Organization (WHO) recently released guidance on the ethics and governance of large multimodal models, emphasizing the broad application prospects of artificial intelligence (AI) in healthcare while warning of potential risks, such as over-reliance driven by "automation bias".

AI is gradually reshaping how care is delivered and improving the efficiency of healthcare services. As the technology advances, however, it has also exposed numerous ethical and safety problems. Automation bias can lead to unfair medical decision-making and exacerbate healthcare disparities; in addition, over-dependence on AI may cause healthcare workers to neglect clinical experience and professional expertise.

To ensure that AI develops in a healthy and orderly way in healthcare, governments need to strengthen regulation. The WHO guidance recommends that AI applications adhere to the principles of fairness, transparency, responsibility, and humanity. Governments, companies, and healthcare institutions must work together to ensure the appropriate use of AI in healthcare for the benefit of patients and society.

Title: AI in healthcare: Promising applications and necessary regulation
Keywords: AI, healthcare, regulation

News content:
The World Health Organization (WHO) recently released a guidance document on the ethics and governance of large multimodal models, emphasizing the widespread application prospects of artificial intelligence (AI) in the field of healthcare while warning of potential risks, such as over-reliance caused by "automation bias".

As AI technology transforms the healthcare industry, it also exposes many ethical and safety issues. Automation bias, for example, can lead to unfair medical decision-making and exacerbate healthcare disparities. Moreover, over-dependence on AI may cause clinicians to neglect clinical experience and professional knowledge.

To ensure the healthy and orderly development of AI applications in healthcare, governments worldwide should strengthen regulation. The WHO guidance suggests that AI applications should adhere to the principles of fairness, transparency, responsibility, and humanity. Governments, companies, and healthcare institutions need to work together to ensure the rational use of AI technology in healthcare, benefiting patients and society.

Source: https://www.cls.cn/detail/1575757
