
The European Union has taken a significant step in the realm of artificial intelligence (AI) by enacting the world’s first comprehensive AI legislation. The EU’s AI Act officially came into effect on Thursday, August 1, marking a pivotal moment in the global effort to regulate the rapidly evolving field of AI.

Background and Significance

The AI Act was approved by the European Council on May 21 and published in the EU Official Journal on July 12; twenty days after publication, it entered into force. This legislation is a testament to the EU’s commitment to ensuring that AI technologies are developed and deployed in a manner that respects fundamental rights and minimizes risks.

Phased Implementation

The rules outlined in the AI Act will be implemented in phases, giving businesses time to adjust their systems. This approach is designed to ensure a smooth transition while allowing companies to comply with the new regulations. Some provisions will take effect six months or a year after the law’s entry into force, while the majority will come into force on August 2, 2026.

Risk-Based Approach

The AI Act adopts a risk-based approach to AI regulation. This means that different applications of AI will be subject to varying degrees of oversight depending on the level of risk they pose to society. This approach aims to balance innovation with safety and ethical considerations.

Key Provisions

Ban on Certain AI Systems

Starting in February 2025, the act will prohibit AI systems that exploit personal vulnerabilities, the untargeted scraping of facial images from the internet or closed-circuit television (CCTV) footage, and the creation of facial recognition databases without consent.

Labeling Requirements

From August 2025, complex and widely used general-purpose AI models will be subject to new obligations. All AI-generated content, including images, audio, and video, must be clearly labeled to address concerns about disinformation and electoral interference.

Transparency Obligations

The act imposes strict transparency obligations on high-risk AI systems, such as autonomous vehicles, medical devices, loan decision systems, educational scoring, and remote biometric systems. These rules will come into effect in August 2026.

Regulatory Framework

The AI Act will be enforced through a robust and multifaceted regulatory framework. Each of the 27 EU member states will designate national regulatory authorities to oversee compliance. These authorities will have the power to conduct audits, request documents, and enforce corrective measures. The European AI Board will coordinate the work of these authorities to ensure consistent application across the EU.

Penalties for Non-Compliance

Companies found in violation of the AI Act may face severe penalties. The EU can impose fines of up to €35 million or 7% of a company’s global annual turnover, whichever is greater.
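The "whichever is greater" rule means the effective cap scales with company size. As a minimal illustrative sketch (the function name and figures below are for illustration only, not language from the act):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling of the fine for the most serious violations:
    the greater of EUR 35 million and 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the higher figure applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies the fixed EUR 35 million floor is the binding figure; for large multinationals the percentage dominates.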

Global Impact

The AI Act is not just a European affair; its impact is expected to resonate globally. Tanguy Van Overstraeten, head of the technology, media and telecommunications practice at law firm Linklaters, notes that the act is the world’s first of its kind. It is likely to affect many businesses, especially those developing AI systems, as well as those deploying or using AI systems in certain contexts.

Charlie Thompson, an executive at enterprise software company Appian, commented that the act’s influence extends beyond the EU, potentially affecting most global tech companies. This will bring more scrutiny to tech giants in the EU market and their use of EU citizens’ data.

In response to regulatory considerations, Meta has already restricted the availability of its AI models in Europe, although this move may not be directly related to the AI Act.

Eric Loeb, executive vice president for government affairs at Salesforce, praised the EU’s risk-based regulatory framework, stating that it encourages innovation while prioritizing safe development and deployment of technology. He suggested that other governments should consider these rules when crafting their own policy frameworks.

Jamil Jiva, global head of asset management at fintech company Linedata, pointed out that the EU understands the need for substantial fines to ensure regulatory impact, echoing the approach taken with the General Data Protection Regulation (GDPR).

The EU’s AI Act represents a significant milestone in the global regulation of AI. As other countries and regions look to develop their own AI policies, the EU’s approach is likely to serve as a benchmark for ensuring that AI is developed and used responsibly and ethically.

