OpenAI just launched its smartest AI yet that can think with images — here's how to try it
The AI autonomously chooses the best tool for the job

OpenAI just released two updated AI models — o3 and o4-mini — for ChatGPT Plus, Pro and Team users. Essentially two new, bigger and better brains, these models are said to be the smartest ones yet because they can tackle more advanced queries, understand the blurriest images, and solve problems like never before.
This release comes just a few days after OpenAI announced that ChatGPT is getting a major upgrade to its memory features, aimed at making conversations even more personal, seamless and context-aware.
With ChatGPT retiring GPT-4 at the end of this month, the release of these new models underscores OpenAI’s broader push to make ChatGPT feel less like a one-off assistant and more like a long-term, adaptable tool that evolves with its users.
More advanced multimodal capabilities
These models are the most advanced yet, capable of interpreting both text and images, including lower-quality visuals such as handwritten notes and blurry sketches. Users can upload diagrams or whiteboard photos, and the models will incorporate them into their responses.
The models also support real-time image manipulation, such as rotating or zooming, as part of the problem-solving process.
Greater autonomy with built-in tools
For the first time, the models can independently use all of ChatGPT’s tools, including the browser, Python code interpreter, image generation and image analysis. This means the AI can decide which tools to use based on the task given, potentially making it more effective for research, coding, and visual content creation.
As part of this launch, OpenAI is also unveiling Codex CLI, an open-source coding agent that runs locally in a terminal window. It’s designed to work with these new models and will soon support GPT-4.1. To encourage developers to test and build with these tools, OpenAI is offering $1 million in API credits, distributed in $25,000 increments.
Availability and other updates

The newly released o3 and o4-mini models are now available to ChatGPT Plus subscribers, with developers able to access them via the OpenAI API. A more advanced o3-pro model is expected to arrive in the coming weeks. In the meantime, users on the Pro plan can continue using the existing o1-pro model.
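For developers, calling the new models through the OpenAI API follows the familiar chat completions request shape. The sketch below only constructs the request payload as a plain dictionary and prints it, rather than sending it; actually making the call requires an API key and the OpenAI client library. The model name `o4-mini` comes from this release, while the prompt text is an arbitrary placeholder.

```python
import json

# Build a chat-completions-style request payload for one of the newly
# released models. This sketch only constructs and prints the payload;
# it does not contact the API.
payload = {
    "model": "o4-mini",  # or "o3" — model names from this release
    "messages": [
        {"role": "user", "content": "Summarize the key steps in this proof."}
    ],
}

print(json.dumps(payload, indent=2))
```

With the official `openai` Python package installed and an API key configured, the same dictionary could be unpacked into a `client.chat.completions.create(**payload)` call.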
These updates come at a time when OpenAI is no longer held back by limited computing power — a shift that could mark a major leap forward for AI. In a recent interview with Business Insider, CEO Sam Altman revealed that OpenAI is no longer “compute constrained,” meaning the company now has access to the kind of massive processing power needed to build more sophisticated models.
With this boost, it looks likely that OpenAI can accelerate development, roll out more powerful versions of ChatGPT, and create models capable of handling far more complex tasks. In short, the brakes are officially off.
This newfound capacity also signals OpenAI’s broader ambition to make its models more flexible, intelligent, and autonomous, particularly for users who rely on AI for research, content creation and coding.
As these tools evolve, so does the potential for AI to move beyond assistant-level support and become a true creative and analytical collaborator.