OpenAI Releases o3-pro Model
Compared to the o3 model, o3-pro uses more compute to think harder and provide consistently better answers.
OpenAI just announced two major updates. First, the price of the o3 model has been reduced by 80%. Second, they released a new model called o3-pro.
o3-pro is currently OpenAI’s most advanced multi-modal model with deep reasoning capability. It works through problems step by step, which makes it more reliable in domains like coding, math, science, and visual perception.
I know that OpenAI is terrible at naming its models. If you are confused about how o3 differs from the GPT-4.x series or GPT-4o, you are not alone. Here is a quick summary to help make sense of the mess:
GPT-4.x: Multi-modal model without advanced reasoning capability. It supports text and images.
GPT-4o: The “o” stands for “omni.” This model processes text, images, and audio.
o3 and o3-pro: Multi-modal models with reasoning capability. These are the “thinking” models, mostly text-based with some limited image support.
These reasoning models can agentically use and combine every tool within ChatGPT. This includes searching the web, analyzing uploaded files and data using Python, reasoning about visual inputs, and even generating images in some contexts.
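If you want to play with this kind of tool use outside of ChatGPT, the rough developer-side analogue is OpenAI’s Responses API, which can attach hosted tools like web search to a model call. Here is a minimal sketch in Python; exactly which hosted tools o3 supports at any given moment is an assumption on my part, so check the current docs before relying on it.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask o3 a question and let the model decide whether to call the
# hosted web-search tool while it reasons. Per-model tool availability
# is an assumption here; consult OpenAI's docs for the current list.
response = client.responses.create(
    model="o3",
    tools=[{"type": "web_search_preview"}],
    input="What did OpenAI announce about o3 pricing this week?",
)

print(response.output_text)  # convenience accessor for the final text
```

The key detail is that the model, not your code, decides when to search. That is what “agentic” tool use means in practice.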
What is o3-pro?
To understand o3-pro, you really have to understand what OpenAI’s o3 model is doing behind the scenes, because o3-pro is simply o3 given more time and more compute to think harder.
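As a hedged sketch of what “more time and more compute to think harder” looks like from the developer side: the o-series models in the Chat Completions API expose a reasoning_effort knob, while o3-pro is served as its own model ID through the Responses API. The example below is illustrative of the dial, not OpenAI’s documented implementation of o3-pro.

```python
from openai import OpenAI

client = OpenAI()

# o3 with the reasoning-effort dial turned up: the model spends more
# "thinking" tokens before committing to an answer. "low", "medium",
# and "high" are the documented values for o-series models.
completion = client.chat.completions.create(
    model="o3",
    reasoning_effort="high",
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

print(completion.choices[0].message.content)
```

o3-pro pushes further in the same direction: the same model family, a much larger thinking budget, and therefore slower but more consistently correct answers.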