OpenAI just launched its smartest AI yet that can think with images — here's how to try it


OpenAI just released two new AI models, o3 and o4-mini, for ChatGPT Plus, Pro and Team users. The company bills them as its smartest models yet: they can tackle more advanced queries, interpret low-quality images, and reason through multi-step problems.

This release comes just a few days after OpenAI announced that ChatGPT is getting a major upgrade to its memory features, aimed at making conversations even more personal, seamless and context-aware.

With OpenAI retiring GPT-4 from ChatGPT at the end of this month, the release of these new models underscores OpenAI’s broader push to make ChatGPT feel less like a one-off assistant and more like a long-term, adaptable tool that evolves with its users.

More advanced multimodal capabilities

(Image: a person rendered as a plushie toy. Credit: ChatGPT)

These models are the most advanced yet, capable of interpreting both text and images, including lower-quality visuals such as handwritten notes and blurry sketches. Users can upload diagrams or whiteboard photos, and the models will incorporate them into their responses.

The models can also manipulate images as part of their reasoning, rotating or zooming in on a picture while working through a problem.
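For developers, the same image understanding is exposed through the OpenAI API. Here is a minimal sketch, assuming the official openai Python SDK, an OPENAI_API_KEY in the environment, and a local file named whiteboard.jpg (the filename and prompt are illustrative):

```python
# Minimal sketch: sending a whiteboard photo to o4-mini for analysis.
# Assumes the official openai Python SDK and OPENAI_API_KEY set in the environment.
import base64

from openai import OpenAI

client = OpenAI()

# Read and base64-encode a local image (the path is illustrative).
with open("whiteboard.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o4-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the diagram in this photo."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```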

Greater autonomy with built-in tools


For the first time, the models can independently use all of ChatGPT’s tools, including the browser, Python code interpreter, image generation and image analysis. This means the AI can decide which tools to use based on the task given, potentially making it more effective for research, coding, and visual content creation.
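ChatGPT handles this tool selection behind the scenes, but the API offers a rough analogue through function calling: you describe the tools available, and the model decides which, if any, to invoke. A minimal sketch, assuming the openai Python SDK; the tool names and schemas below are hypothetical:

```python
# Sketch of API-side tool selection: define tools and let the model
# decide whether (and which) to call. The tool names are hypothetical.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "run_python",  # hypothetical tool
            "description": "Execute a short Python snippet and return its output.",
            "parameters": {
                "type": "object",
                "properties": {"code": {"type": "string"}},
                "required": ["code"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "web_search",  # hypothetical tool
            "description": "Search the web for up-to-date information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "What is 2**38, and who won the 2022 World Cup?"}],
    tools=tools,
)

# The model may answer directly or request one or more tool calls.
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

In a real application, your code would execute each requested tool and send the result back in a follow-up message so the model can finish its answer.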

As part of this launch, OpenAI is also unveiling Codex CLI, an open-source coding agent that runs locally in a terminal window. It’s designed to work with these new models and will soon support GPT-4.1. To encourage developers to test and build with these tools, OpenAI is offering $1 million in API credits, distributed in $25,000 increments.

Availability and other updates

(Video: "OpenAI’s Sam Altman Talks ChatGPT, AI Agents and Superintelligence — Live at TED2025," via YouTube)

The newly released o3 and o4-mini models are now available to ChatGPT Plus subscribers, with developers able to access them via the OpenAI API. A more advanced o3-pro model is expected to arrive in the coming weeks. In the meantime, users on the Pro plan can continue using the existing o1-pro model.
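If you want to try the new models from code rather than the ChatGPT interface, a single API call is enough. A minimal sketch, again assuming the openai Python SDK and an API key with access to the new models:

```python
# Minimal sketch: a plain text request to the new o3 model via the API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",
    messages=[{"role": "user", "content": "In one sentence, what is a reasoning model?"}],
)

print(response.choices[0].message.content)
```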

These updates come at a time when OpenAI is no longer held back by limited computing power — a shift that could mark a major leap forward for AI. In a recent interview with Business Insider, CEO Sam Altman revealed that OpenAI is no longer “compute constrained,” meaning the company now has access to the kind of massive processing power needed to build more sophisticated models.

With this boost, it looks likely that OpenAI can accelerate development, roll out more powerful versions of ChatGPT, and build models capable of handling far more complex tasks. In short, the brakes are officially off.

This newfound capacity also signals OpenAI’s broader ambition to make its models more flexible, intelligent, and autonomous, particularly for users who rely on AI for research, content creation and coding.

As these tools evolve, so does the potential for AI to move beyond assistant-level support and become a true creative and analytical collaborator.

Amanda Caswell
AI Writer
