Personhood Credentials: Proving You Are Human in the Age of AI

As AI models become ever more adept at mimicking human behavior, telling genuine internet users apart from systems impersonating them is getting harder. The problem is especially worrying when such systems are deployed for malicious ends, such as spreading disinformation or committing fraud, because it undermines our ability to trust the content we encounter online.

A team of 32 researchers from institutions including OpenAI, Microsoft, MIT, and Harvard has proposed a potential solution: personhood credentials. A personhood credential verifies that its holder is a real person while revealing nothing else about them. The team explores the idea in a paper posted earlier this month to the arXiv preprint server, which has not yet undergone peer review.

The Concept of Personhood Credentials

Personhood credentials rest on two facts: AI systems still cannot break modern cryptography, and they still cannot pass as humans in the physical world. To obtain a credential, a person would visit an issuing institution, such as a government agency or another trusted organization, and present evidence like a passport or biometric data to prove they are a real human. Once approved, they would receive a single credential to store on a device, much as credit and debit cards can be stored in smartphone wallet apps today.
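
To make the issuance step concrete, here is a minimal sketch in Python, assuming the issuer simply signs the public half of a keypair generated on the holder's device. Ed25519 and the `cryptography` package are illustrative assumptions; the paper does not prescribe a particular signature scheme.

```python
# Sketch of credential issuance (illustrative; not the paper's protocol).
# Requires the third-party `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer (e.g., a government agency) holds a long-term signing key.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

# The applicant generates a keypair on their own device. Proving they are
# a real human (passport, biometrics) happens out of band, in person.
holder_key = Ed25519PrivateKey.generate()
holder_pub_bytes = holder_key.public_key().public_bytes_raw()

# The issuer signs the holder's public key: "this key belongs to a
# verified person," with nothing else about the holder recorded in it.
credential = issuer_key.sign(holder_pub_bytes)

# The device stores (holder_key, credential), like a card in a wallet app.
# Anyone who trusts the issuer can check the binding later:
issuer_pub.verify(credential, holder_pub_bytes)  # raises InvalidSignature on failure
print("credential verified against the issuer's public key")
```

Presenting this signed key directly to every website would make the holder linkable across services, which is exactly what the zero-knowledge layer described next is meant to avoid.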

To use the credential online, the holder presents it to a third-party digital service, which verifies it with a cryptographic technique known as a zero-knowledge proof. The proof confirms that the holder possesses a valid personhood credential without disclosing any other information.
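
As a concrete, if simplified, picture of proving possession without revealing anything else, the sketch below uses a Schnorr-style proof of knowledge made non-interactive with the Fiat-Shamir heuristic, a classic building block of zero-knowledge credential schemes. The tiny group parameters and the overall flow are assumptions for illustration only; real systems use vetted libraries and cryptographically sized groups.

```python
import hashlib
import secrets

# Toy Schnorr group (NOT secure; for illustration only). p is a safe
# prime and q = (p - 1) / 2 is the prime order of the subgroup spanned
# by the generator g.
p, q, g = 2039, 1019, 4

def challenge(*values):
    """Fiat-Shamir: derive the verifier's challenge by hashing the transcript."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# The credential holder's secret is x; the public value y = g^x mod p is
# what the issuer vouched for.
x = secrets.randbelow(q)          # stays on the user's device
y = pow(g, x, p)

# Prover: demonstrate knowledge of x without revealing it.
r = secrets.randbelow(q)          # fresh one-time nonce
t = pow(g, r, p)                  # commitment
c = challenge(g, y, t)            # non-interactive challenge
s = (r + c * x) % q               # response

# Verifier: accept iff g^s == t * y^c (mod p). The check succeeds only if
# the prover knew x, yet the transcript (t, c, s) leaks nothing about it.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted: holder knows the credential secret")
```

In a full scheme, the proof would typically also cover the issuer's signature on y, so that not even y itself needs to be disclosed to the service.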

The Utility of Personhood Credentials

Filtering out unverified users could benefit platforms in many ways. People could choose to reject matches on dating apps like Tinder that lack personhood credentials, or to hide social media content that was not explicitly posted by a verified human.

Tobin South, a Ph.D. student at MIT who participated in the project, said the research team hopes to encourage governments, companies, and standard-setting bodies to consider adopting this system in the future to prevent AI deception from getting out of control.

“Our goal is not to impose this solution on the world, but to explore why we need it and how to implement it,” South explained. “AI is everywhere. It will cause many problems, but at the same time many solutions will be invented.”

Existing Alternatives

Several alternative technologies already exist. Idena claims to be the first blockchain-based proof-of-humanity system; it works by having humans solve, within a short time limit, puzzles that are difficult for bots. The controversial Worldcoin project, which collects biometric data from users, bills itself as the world’s largest privacy-preserving human identity and financial network. It recently partnered with the Malaysian government to provide online proof of humanity by scanning users’ irises to generate a code. The approach is similar in spirit to personhood credentials, and each code is protected by encryption.

However, Worldcoin has faced criticism for deceptive marketing practices, for collecting more personal data than it acknowledged, and for failing to obtain meaningful consent from users. Earlier this year, regulators in Hong Kong and Spain banned Worldcoin from operating, and its operations have also been suspended in countries including Brazil, Kenya, and India.

Challenges and Concerns

The biggest challenge for personhood credentials is winning over a critical mass of platforms, digital services, and governments, which may be uneasy about conforming to a standard they do not control. “For this system to work effectively, it must be universally adopted,” said Henry Ajder, an AI and deepfake expert who advises Meta and the UK government. “Theoretically, this technology is very attractive, but in practice, in the complex real world, I think there will be considerable resistance.”

Martin Tschammer, head of security at Synthesia, an AI company focused on generating ultra-realistic deepfakes, agrees that humans need to be verified online, but he is unsure whether personhood credentials are the right solution or whether they are feasible at all. He also questions who would run such a scheme.

“We may end up in a world where we further centralize power, centralizing decisions about our digital lives and giving large internet platforms even more control over who can be online and for what purpose,” Tschammer said. “Given some governments’ poor track record with digital services and the growing trend toward authoritarianism, is it realistic to expect this technology to be adopted at scale and responsibly by the end of the century?”

Rather than waiting for cross-industry collaboration, Synthesia is already evaluating how to integrate other personhood-verifying mechanisms into its products. The company has several measures in place, such as requiring businesses to prove they are legally registered entities, and it bans clients who violate its rules and refuses to refund them.

The Urgent Need for Solutions

It is clear that there is an urgent need for methods to distinguish humans from bots, and encouraging discussion among the governments, companies, and standards bodies involved is a necessary first step.

