OpenAI Unveils GPT-4.5, Advancing Unsupervised Learning and Collaboration

Published February 28, 2025
OpenAI Introduces GPT-4.5

OpenAI has introduced GPT-4.5, a research preview of its most advanced GPT model to date. Available to ChatGPT Pro users and developers worldwide, the new model represents a significant step forward in scaling unsupervised learning. It is designed to recognize patterns, draw connections and generate creative insights without relying on step-by-step reasoning. The release marks a milestone in improving both knowledge depth and conversational quality, and early tests indicate stronger overall performance with a lower hallucination rate across topics.

GPT-4.5 was built by scaling up both compute and data on Microsoft Azure AI supercomputers. The model relies on unsupervised learning to achieve a broader knowledge base and a deeper understanding of the world. New training techniques that use data derived from smaller models improve its steerability and the naturalness of its conversation. Enhanced emotional intelligence and creativity make GPT-4.5 useful for writing, programming and problem-solving tasks across industries, and the same improvements reduce error rates while supporting reliable, safe operation.
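To make the idea of scaled unsupervised learning concrete, the sketch below illustrates the next-token prediction objective that this kind of pretraining rests on: the model learns patterns and associations from raw token sequences alone, with no labels or reasoning supervision. The tiny model and random data are illustrative stand-ins, not OpenAI's architecture or training setup.

```python
# Minimal sketch of the next-token prediction objective behind
# unsupervised pretraining. Toy model and random data are stand-ins,
# not OpenAI's architecture or training pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 1000, 64

# Toy stand-in for a language model: embedding -> projection to vocabulary.
# A real system would be a large transformer with causal attention.
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Random token ids stand in for a tokenized text corpus.
tokens = torch.randint(0, vocab_size, (8, 33))   # 8 sequences, 33 tokens each
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens up to t

# Cross-entropy on the next token: scaling this objective over more data
# and compute is what "scaling unsupervised learning" refers to here.
logits = model(inputs)                           # (batch, seq, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"next-token loss: {loss.item():.3f}")
```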

ChatGPT Pro users can select GPT-4.5 from the model picker on web, mobile and desktop, and the rollout will extend to Plus, Team, Enterprise and Edu users in the following weeks. For developers, GPT-4.5 is also available through the Chat Completions, Assistants and Batch APIs. Key features include search, file and image uploads, and canvas support for writing and code. In practical use, the model performs reliably and follows user guidance more consistently.
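For developers, a minimal request through the Chat Completions API using the official openai Python SDK might look like the sketch below. The model identifier "gpt-4.5-preview" and the example prompt are assumptions for the research preview; check the model list available to your account.

```python
# Minimal Chat Completions call with the official openai Python SDK (v1.x).
# The model name "gpt-4.5-preview" is an assumed research-preview identifier;
# confirm the identifier exposed to your account before use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.5-preview",
    messages=[
        {"role": "system", "content": "You are a concise writing assistant."},
        {"role": "user", "content": "Draft a two-sentence release note for a bug fix."},
    ],
)

print(response.choices[0].message.content)
```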

Safety remains a priority for GPT-4.5, which is trained with supervision techniques including supervised fine-tuning and reinforcement learning from human feedback. Extensive safety tests were conducted under OpenAI’s Preparedness Framework. In evaluations, GPT-4.5 scores 71.4% in science, 36.7% in math, 85.1% in multilingual tasks and 74.4% in multimodal tasks, with competitive coding performance of 32.6% on SWE-Lancer Diamond and 38.0% on SWE-bench Verified. Feedback from users and developers will guide future improvements and help ensure safe deployment in real-world settings, and OpenAI will continue to refine the model based on that testing and feedback.
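As a rough illustration of the supervised fine-tuning step mentioned above (not OpenAI's actual pipeline), the sketch below continues training a toy model on a (prompt, response) pair while scoring the loss only on response tokens, so the model is rewarded for imitating curated demonstrations rather than the prompt itself.

```python
# Sketch of supervised fine-tuning (SFT): train on (prompt, response) pairs,
# masking prompt positions so only response tokens contribute to the loss.
# Toy model and random data; not OpenAI's training pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 1000, 64
model = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR for fine-tuning

# One toy batch: the first 8 tokens of each sequence play the role of the
# prompt, the remaining tokens play the role of the demonstrated response.
tokens = torch.randint(0, vocab_size, (4, 25))
inputs, targets = tokens[:, :-1], tokens[:, 1:]
prompt_len = 8

# Mask out prompt positions so they contribute no loss.
loss_mask = torch.ones_like(targets, dtype=torch.bool)
loss_mask[:, :prompt_len] = False

logits = model(inputs)                                   # (batch, seq, vocab)
per_token = F.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1), reduction="none"
).reshape(targets.shape)
sft_loss = per_token[loss_mask].mean()                   # loss on response tokens only
sft_loss.backward()
optimizer.step()
print(f"SFT loss on response tokens: {sft_loss.item():.3f}")
```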