Tokyo, Japan – As DeepSeek shakes up the AI landscape with its cost-effective models, OpenAI CEO Sam Altman’s recent visit to Tokyo has sparked considerable excitement in the artificial intelligence community. Before reportedly finalizing a staggering $500 billion investment deal with SoftBank’s Masayoshi Son, Altman, accompanied by OpenAI CPO Kevin Weil, visited the University of Tokyo, where the pair dropped hints about the company’s progress on its next-generation AI models.
During a Q&A session with students and faculty, Altman revealed that OpenAI has already achieved GPT-4.5 internally. More significantly, he suggested that reaching GPT-5.5 wouldn’t require a proportional increase in computational power. “Within OpenAI, we have already reached GPT-4.5, and reaching GPT-5.5 does not require 100 times more computing power,” Altman stated.
This bold claim hinges on advancements in inference models and reinforcement learning (RL) techniques, which Altman said are dramatically increasing computational efficiency: “Advances in inference models and reinforcement learning technologies have greatly improved computational efficiency – allowing smaller models to achieve GPT-6 level performance without requiring 100 times more computing power.”
These advancements pave the way for more sophisticated and versatile AI models. Altman provided a glimpse into the future of OpenAI’s models, describing a system capable of integrating multiple modalities: “We will integrate all modalities together. You can see it on the canvas; it will speak to you while writing and compiling code for you. It will be able to browse the internet.”
The “o” series models, as Altman referred to them, will also boast visual recognition capabilities. He illustrated this with a practical example: “The ‘o’ series model will support visual recognition functions. For example, if a piece of hardware needs repair, take a photo, and the ‘o’ series model will be able to provide technical support.”
Altman also alluded to the direction of OpenAI’s research, referencing the o3-mini model: “o3-mini foreshadows the research direction for the next six to twelve months… I hope that by the end of this year, we can develop an intelligent agent model that can solve all difficult tasks except scientific discovery. It may need several hours to think, and may even need to call a bunch of tools, but it will eventually be able to complete the task for you.” This suggests a move towards more autonomous, problem-solving AI agents.
The potential release of a fully autonomous intelligent agent by the end of the year, capable of tackling complex tasks and using external tools, would mark a significant step towards more practical AI applications. While specific details remain under wraps, Altman’s statements suggest that OpenAI is leveraging advances in inference and reinforcement learning to build powerful AI models that are not solely dependent on massive computational resources – a shift that could have profound implications for how AI is developed and deployed.
References:
- Machine Heart (机器之心). (2024, February 9). 推理和RL加速GPT-5.5到来?奥特曼公开GPT-4.5已就绪,年底发布全自主智能体 [Inference and RL accelerate the arrival of GPT-5.5? Altman reveals GPT-4.5 is ready; fully autonomous intelligent agent to be released by year’s end].