Tokyo, Japan – As DeepSeek challenges the AI landscape with its cost-effective models, OpenAI CEO Sam Altman’s recent visit to Tokyo has sparked considerable excitement within the artificial intelligence community. Prior to reportedly finalizing a staggering $500 billion investment deal with SoftBank’s Masayoshi Son, Altman, along with OpenAI CPO Kevin Weil, visited the University of Tokyo, where they dropped hints about the company’s progress in developing its next-generation AI models.

During a Q&A session with students and faculty, Altman revealed that OpenAI has already achieved GPT-4.5 internally. More significantly, he suggested that reaching GPT-5.5 wouldn't require a proportional increase in computational power. "Within OpenAI, we have already reached GPT-4.5, and reaching GPT-5.5 does not require 100 times more computing power," Altman stated.

This bold claim hinges on advancements in inference models and reinforcement learning (RL) techniques. Altman emphasized that these improvements are dramatically increasing computational efficiency: "Advances in inference models and reinforcement learning technologies have greatly improved computational efficiency – allowing smaller models to achieve GPT-6 level performance without requiring 100 times more computing power."

These advancements pave the way for more sophisticated and versatile AI models. Altman provided a glimpse into the future of OpenAI's models, describing a system capable of integrating multiple modalities: "We will integrate all modalities together. You can see it on the canvas; it will speak to you while writing and compiling code for you. It will be able to browse the internet."

The o series models, as Altman referred to them, will also boast visual recognition capabilities. He illustrated this with a practical example: "The 'o' series model will support visual recognition functions. For example, if a piece of hardware needs repair, take a photo, and the 'o' series model will be able to provide technical support."

Altman also alluded to the direction of OpenAI's research, referencing the o3-mini model: "o3-mini foreshadows the research direction for the next six to twelve months… I hope that by the end of this year, we can develop an intelligent agent model that can solve all difficult tasks except scientific discovery. It may need several hours to think, and may even need to call a bunch of tools, but it will eventually be able to complete the task for you." This suggests a move towards more autonomous, problem-solving AI agents.

The potential release of a fully autonomous intelligent agent by the end of the year, capable of tackling complex tasks and utilizing external tools, would mark a significant step towards more practical AI applications. While specific details remain under wraps, Altman's statements suggest that OpenAI is leveraging advancements in inference and reinforcement learning to create powerful AI models that are not solely dependent on massive computational resources. This shift could have profound implications for the future of AI development and deployment.

References:

  • Machine Heart (机器之心). (2024, February 9). 推理和RL加速GPT-5.5到来?奥特曼公开GPT-4.5已就绪,年底发布全自主智能体 [Inference and RL Accelerate the Arrival of GPT-5.5? Altman Announces GPT-4.5 is Ready, Fully Autonomous Intelligent Agent to be Released by the End of the Year].
