How Did OpenAI’s o3 Model Reach Human-Level Performance on the ARC-AGI Benchmark?


OpenAI unveiled the reasoning-focused o3 series of artificial intelligence (AI) models last month. During a live stream, the company shared the model’s benchmark scores based on internal testing. While all of the shared scores were impressive and highlighted the improved capabilities of the successor to o1, one benchmark score stood out. On the ARC-AGI benchmark, the large language model (LLM) scored 85 percent, beating the previous best score by 30 percentage points. Interestingly, this score is also on par with the average human score on the test.

OpenAI’s o3 Scores 85 Percent on the ARC-AGI Benchmark

However, just because o3 achieved such a high score on the test, does that mean its intelligence equals that of an average human? This would be easier to answer if the AI model were publicly available and we could test it ourselves. Since OpenAI has not disclosed anything about the model’s architecture, training techniques, or datasets, it is difficult to claim anything conclusively.

There are, however, certain things we do know about the AI firm’s reasoning-focused models that can help us understand what to expect from the upcoming LLM. First, the o-series models released so far do not represent a major overhaul in architecture or framework; rather, they are fine-tuned to showcase enhanced capabilities.

For instance, the o1 series of AI models uses a technique called test-time compute: the models are given additional processing time to spend on a question, along with a workspace to test hypotheses and correct mistakes. Similarly, the GPT-4o model was essentially a fine-tuned version of GPT-4.
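OpenAI has not described exactly how o1 or o3 spend their extra inference-time budget. One widely used form of test-time compute, however, is self-consistency sampling: draw several independent reasoning attempts at a question and keep the consensus answer. The sketch below illustrates only that voting idea; `sample_answer` is a hypothetical stub standing in for a real model call, not OpenAI’s actual method.

```python
import random
from collections import Counter


def majority_vote(answers):
    """Return the most frequent answer among the sampled candidates."""
    return Counter(answers).most_common(1)[0][0]


def sample_answer(question, rng):
    """Hypothetical stand-in for one stochastic reasoning pass.

    A real system would query the model API here; we simulate a model
    that answers correctly about 80 percent of the time.
    """
    return "42" if rng.random() < 0.8 else rng.choice(["41", "43"])


def best_of_n(question, n=25, seed=0):
    """Spend extra inference-time compute: draw n independent samples
    and return the consensus answer instead of trusting one pass."""
    rng = random.Random(seed)
    return majority_vote([sample_answer(question, rng) for _ in range(n)])
```

The point of the sketch is that more samples (more compute at inference) raises the chance the consensus is correct, without retraining the model at all.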

It is unlikely that the company would have made major changes to the architecture with the o3 model, given that it is also rumoured to be working on the GPT-5 AI model, which could be launched later this year.

Coming to the ARC-AGI (Abstraction and Reasoning Corpus for Artificial General Intelligence) benchmark, it features a series of grid-based pattern-recognition puzzles that require reasoning and spatial-understanding capabilities to solve. In principle, a model could be prepared for such tasks with a large dataset of high-quality examples focused on reasoning and aptitude-based logic.
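To make the task format concrete: each ARC puzzle gives a few input-output grid pairs (grids of small integers representing colours), and the solver must infer the transformation and apply it to a new test grid. The toy example below, which is an assumption-laden sketch and not how o3 works, searches a tiny library of candidate transforms for one that explains all the training pairs:

```python
# Each grid is a list of rows of small ints (colours), as in ARC tasks.
def flip_h(grid):
    """Mirror the grid left-to-right."""
    return [row[::-1] for row in grid]


def flip_v(grid):
    """Mirror the grid top-to-bottom."""
    return grid[::-1]


def transpose(grid):
    """Swap rows and columns."""
    return [list(col) for col in zip(*grid)]


# A deliberately tiny library of candidate transformations.
CANDIDATES = [flip_h, flip_v, transpose]


def solve(train_pairs, test_input):
    """Minimal program search: return the result of the first candidate
    transform that explains every training pair, else None."""
    for fn in CANDIDATES:
        if all(fn(inp) == out for inp, out in train_pairs):
            return fn(test_input)
    return None
```

Real ARC solvers search vastly larger program spaces, but the example shows why the benchmark rewards reasoning over memorisation: the rule must be inferred fresh from a handful of examples.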

However, if it were that simple, older AI models would have scored high on the test as well. Notably, the previous highest score was 55 percent, compared with o3’s 85 percent. This suggests the developers added new refinement techniques and algorithms to enhance the model’s reasoning capabilities, though the full extent of the changes cannot be known unless OpenAI officially reveals the technical details.

That being said, it is unlikely that the o3 model has reached AGI or human-level intelligence. First, if that were the case, it would mark the end of the company’s partnership with Microsoft, which is slated to end once OpenAI’s models achieve AGI status. Second, many AI experts, including Geoffrey Hinton, often called the godfather of AI, have repeatedly said that we are still years away from AGI.

Finally, AGI would be such a big accomplishment that if OpenAI had reached that milestone, it would explicitly let people know instead of dropping subtle hints. What is far more likely is that OpenAI found a way to improve o3’s pattern-based reasoning capabilities, either by adding enough sampling data or by tweaking the training methods, as a PTI report also highlighted.

However, this improvement is likely isolated to such tasks and does not imply an increase in the model’s overall intelligence.