
The Age of AI Authentication: Differentiating Humans from Robots

As AI models become increasingly adept at mimicking human behavior, distinguishing genuine internet users from sophisticated systems designed to imitate them has become ever more difficult. This poses a significant threat, especially when such systems are used for malicious purposes such as spreading misinformation or committing fraud, eroding trust in the content we encounter online.

A team of 32 researchers from OpenAI, Microsoft, MIT, Harvard, and other esteemed institutions has proposed a potential solution: a concept called personhood credentials. These credentials aim to prove that the holder is a real human without requiring any additional personal information.

The concept of personhood credentials relies on the fact that current AI systems still cannot defeat modern cryptography or pass as humans in the physical world. To obtain a credential, an individual would visit a trusted institution, such as a government office, and provide proof of their humanity, such as a passport or biometric data. Once approved, they would receive a single credential stored on their devices, much as credit and debit cards are stored in mobile wallet applications today.

When using these credentials online, users present them to third-party digital service providers, which can verify their authenticity with a cryptographic protocol called a zero-knowledge proof. This confirms that the holder possesses a valid personhood credential without disclosing any further personal information.
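A minimal sketch of the underlying idea, using a Schnorr-style zero-knowledge proof of knowledge — a standard textbook construction, not the specific protocol the researchers propose. The prover convinces a verifier that it knows a secret value (here standing in for the credential's private key) without revealing the secret itself. The tiny group parameters are for illustration only; real systems use standardized large groups or elliptic curves.

```python
import hashlib
import secrets

# Toy group parameters for illustration only.
p = 2039          # safe prime: p = 2q + 1
q = 1019          # prime order of the subgroup generated by g
g = 4             # generator of the order-q subgroup of Z_p*

def challenge(*values):
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    data = ",".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Prove knowledge of x with y = g^x mod p, revealing nothing about x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)          # one-time nonce
    t = pow(g, r, p)                  # commitment
    c = challenge(g, y, t)            # non-interactive challenge
    s = (r + c * x) % q               # response
    return y, t, s

def verify(y, t, s):
    """Check g^s == t * y^c (mod p) without ever seeing x."""
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = secrets.randbelow(q - 1) + 1          # the holder's private value
public, commitment, response = prove(secret)
print(verify(public, commitment, response))    # True
```

The verifier learns only that the holder knows the secret behind `public` — the secret itself never leaves the prover's device, which is the property the article's "without disclosing any unnecessary information" refers to.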

Filtering out users who have not been verified could be beneficial in many ways. For example, people could reject Tinder matches without personhood credentials or choose not to view any social media content that is not explicitly posted by a real person.

MIT doctoral student Tobin South, who participated in this project, expressed the team’s hope that governments, companies, and standard-setting organizations would consider adopting this system to prevent AI deception from spiraling out of control.

"We want to encourage governments, companies, and standard-setting organizations to consider this system in the future to prevent AI deception from spiraling out of control," said South. "Our goal is not to impose this solution on the world, but to explore why we need it and how to implement it."

Several existing technologies could potentially implement this concept. Idena, for example, claims to be the first blockchain-based proof-of-personhood system; it requires humans to solve, within a short time window, puzzles that are difficult for robots. The controversial Worldcoin project, which collects users' biometric data, claims to be the world's largest privacy-preserving human identity and financial network. It recently partnered with the Malaysian government to issue codes that prove personhood online, generated by scanning users' irises. The approach is similar to the personhood-credential concept, with each code protected by encryption.

However, the project has been criticized for deceptive marketing practices, collecting more personal data than it admits, and failing to obtain meaningful consent from users. Earlier this year, regulatory authorities in Hong Kong and Spain banned Worldcoin’s operations, and its businesses in Brazil, Kenya, and India have also been suspended.

New approaches are therefore still needed to address AI-driven deception. The rapid proliferation of AI tools has created a dangerous period in which internet users are increasingly skeptical of the authenticity of online content. Henry Ajder, an expert on AI and deepfakes who advises Meta and the British government, said that while the idea of verifying human identity online has been around for a while, personhood credentials seem to be one of the most realistic ways of combating this growing skepticism.

However, the biggest challenge facing these credentials is persuading enough platforms, digital services, and governments to accept them, since they may be uncomfortable conforming to a standard they do not control. "For this system to work effectively, it must be widely adopted," said Ajder. "In theory, this technology is very attractive, but in practice, in the messy real world, I think there will be considerable resistance."

Martin Tschammer, head of security at Synthesia, an AI company specializing in hyper-realistic deepfake generation, agrees with the principle behind personhood credentials: the need to verify humans online. However, he is unsure whether this is the right solution or whether it is practical to implement, and he questions who would run such a scheme.

"We may end up in a world where we centralize power even further, concentrating decisions about our digital lives in the hands of large internet platforms that determine who can exist online and for what purpose," said Tschammer. "Given some governments' poor track record in adopting digital services, and the growing tendency toward authoritarianism, is it realistic to expect this technology to be adopted widely and responsibly by the end of this century?"

Synthesia is currently evaluating how to integrate other personhood-credential mechanisms into its products rather than waiting for cross-industry collaboration. Tschammer said the company already has several safeguards in place, such as requiring businesses to prove they are legally registered companies and banning, without refunds, customers who violate its rules.

It is clear that we urgently need methods to distinguish humans from robots online, and encouraging discussion among stakeholders in technology and policy is a step in the right direction. "Unless we take some measures, we are not far from a future where we will be unable to distinguish the humans from the robots we interact with online," said Emilio Ferrara, a professor of computer science at the University of Southern California who was not involved in the project. "We cannot be as naive about technology as previous generations were."

