OpenAI has taken a significant step toward openness by releasing two open-weight language models, gpt-oss-120b and gpt-oss-20b, free for anyone to download and use. This is the first time OpenAI has released open-weight models since 2019, when it launched GPT-2. The move comes six years after OpenAI signed an exclusive cloud service agreement with Microsoft, and it is seen as a direct response to challenges in the open-weight domain from competitors such as Microsoft-backed Mistral AI and China's DeepSeek.

The new models are designed to be cost-effective options in the GPT family that developers, researchers, and enterprises can run and customize easily. gpt-oss-20b can run on a laptop with 16GB of memory, while gpt-oss-120b requires approximately 80GB. Both models support advanced reasoning, tool use, and chain-of-thought processing. OpenAI's CEO highlighted that the models can run locally on personal computers or even mobile devices without an internet connection, giving users full control and privacy.
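Since the article stresses that the smaller model can run locally on a 16GB machine, here is a minimal sketch of what that might look like with the Hugging Face transformers library. The model id "openai/gpt-oss-20b" and the memory figure are assumptions drawn from the coverage above; exact loading options will depend on your hardware.

```python
# Minimal local-inference sketch for the smaller model, assuming the weights
# are published on Hugging Face as "openai/gpt-oss-20b" and ~16GB of memory.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    device_map="auto",  # spread layers across GPU/CPU as memory allows (requires accelerate)
)

prompt = "Explain chain-of-thought reasoning in two sentences."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```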
Amazon has also announced that it will offer OpenAI's models to its customers for the first time, making the new open-weight models available on its Bedrock and SageMaker platforms. The release of gpt-oss-20b and gpt-oss-120b marks a strategic shift for OpenAI, which had previously delayed releasing open-weight models while conducting extensive safety testing and evaluation to ensure they would not be misused.
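To illustrate what availability on Bedrock would mean in practice, here is a rough sketch of invoking such a model through the standard Bedrock runtime Converse API with boto3. The model identifier shown is a placeholder, not a confirmed listing, and the region and inference settings are assumptions.

```python
# Hypothetical sketch: calling an open-weight gpt-oss model through Amazon Bedrock.
# The modelId below is a placeholder; the real identifier comes from Amazon's catalog.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="openai.gpt-oss-120b",  # placeholder id, check the Bedrock model catalog
    messages=[{"role": "user", "content": [{"text": "Give a one-line summary of MoE models."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.7},
)

print(response["output"]["message"]["content"][0]["text"])
```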
Open-weight models sit between open-source and closed-source: enterprises and government agencies can run them independently because the model weights can be inspected and modified, even though the full source code and training recipe are not necessarily published. Open-source models, by contrast, let users view the model's source code, sometimes including the training methodology and how the weights were produced. It had previously been suggested that an OpenAI release of open-weight models could weaken Microsoft's exclusive advantage in the AI business. On Tuesday, OpenAI said Microsoft would integrate a GPU-optimized version of gpt-oss-20b into Windows devices. Backed by ONNX Runtime, the model supports local inference and can be accessed through Foundry Local and the AI Toolkit for VS Code, making it easier for Windows developers to build with open models.
According to OpenAI, the two gpt-oss models use its most advanced pre-training and post-training techniques, with a particular focus on reasoning, efficiency, and practicality across deployment environments. Both are Transformer models built on a mixture-of-experts (MoE) architecture, which reduces the number of parameters that are active when processing each input. gpt-oss-120b activates 5.1 billion parameters per token out of 117 billion total, while gpt-oss-20b activates 3.6 billion out of 21 billion total. The models use alternating dense and locally banded sparse attention patterns and support context lengths of up to 128k tokens.
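To make the active-versus-total parameter distinction concrete, here is a toy sketch of top-k expert routing (not OpenAI's implementation, and the dimensions are illustrative only). Only the selected experts' weights participate in a given token's forward pass, which is how a 117-billion-parameter model can activate only about 5.1 billion parameters per token.

```python
import numpy as np

# Toy top-k mixture-of-experts layer: for each token, a router picks k experts,
# so only those experts' parameters are "active"; the rest stay idle.
d_model, n_experts, top_k = 64, 8, 2
rng = np.random.default_rng(0)
router_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """x: (d_model,) activations for a single token."""
    logits = x @ router_w                      # score every expert
    top = np.argsort(logits)[-top_k:]          # keep the k highest-scoring experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the chosen experts
    # Only the selected experts run; their outputs are mixed by the gate weights.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

out = moe_layer(rng.normal(size=d_model))
print(out.shape)  # (64,)
```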
OpenAI describes gpt-oss-20b as a medium-sized open model that can run on "most desktop and laptop computers," and gpt-oss-120b as its large open model. The smaller model needs at least 16GB of VRAM or unified memory to perform well and is well suited to high-end consumer GPUs and Apple silicon Macs; the larger, full-size model requires at least 60GB of VRAM or unified memory. In performance assessments, OpenAI reported that gpt-oss-120b comes close to OpenAI o4-mini on core reasoning benchmarks, while the smaller gpt-oss-20b matches or exceeds OpenAI o3-mini on the same evaluations. In competitive coding (Codeforces), general problem solving (MMLU and HLE), and tool calling (TauBench), gpt-oss-120b outperformed OpenAI o3-mini and matched or even surpassed OpenAI o4-mini; it also exceeded o4-mini on health-related queries (HealthBench) and competition mathematics (AIME 2024 and 2025). Despite its smaller size, gpt-oss-20b likewise outperformed o3-mini on competition mathematics and health-related queries.
OpenAI has placed strong emphasis on the safety of its open-weight models, since competitors or bad actors could fine-tune them maliciously once they are released. The company filtered harmful chemical, biological, radiological, and nuclear (CBRN) data out of pre-training. To assess the risk directly, OpenAI fine-tuned the models on specialized biological and cybersecurity data, creating domain-specific, non-refusing versions of the kind an attacker might produce. After internal and external testing, these maliciously fine-tuned models did not reach the "high capability" threshold under the company's Preparedness Framework. OpenAI also worked with three independent expert groups, who provided feedback on its malicious fine-tuning assessments. OpenAI's CEO emphasized, "We strive to mitigate the most serious security issues, especially in the area of biological safety. The gpt-oss models performed equivalently to our cutting-edge models in internal security benchmark tests."
The two models are freely available under the Apache 2.0 license and can be downloaded through platforms such as Hugging Face and GitHub. Cloud service providers including Amazon, Baseten, and Microsoft will also offer them, and OpenAI has partnered with leading deployment platforms such as Azure, vLLM, Ollama, AWS, and Fireworks so developers can use the models widely. The company is also hosting a red-teaming challenge with a $500,000 prize pool, encouraging researchers, developers, and enthusiasts around the world to help surface new safety issues. OpenAI's CEO said, "We believe this will bring more benefits than harm, and we look forward to this release enabling new research and the creation of new products."

OpenAI's decision to release open-weight models is driven largely by market competition, particularly the challenge posed by China's DeepSeek. Since 2019, the company has focused on selling access to proprietary models through APIs while competitors launched open-weight alternatives. OpenAI's president stated, "It is exciting to see the development of the ecosystem, and we are happy to contribute to truly driving frontier development and then see what happens." OpenAI has also worked with chip makers such as NVIDIA, AMD, Cerebras, and Groq to ensure the models run well on a range of hardware. NVIDIA's CEO said, "OpenAI has shown the world what can be built on NVIDIA AI—now they are driving innovation in open-source software."
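As a concrete illustration of the download-and-deploy path described above, here is a minimal sketch of serving one of the checkpoints with vLLM, one of the deployment engines the article names. It assumes the weights are published on Hugging Face under the id "openai/gpt-oss-20b" and that the local GPU or unified memory is sufficient for the 20b model; engine flags will vary by hardware.

```python
# Minimal sketch: serving the smaller open-weight model with vLLM.
# Assumes the checkpoint is available as "openai/gpt-oss-20b" on Hugging Face.
from vllm import LLM, SamplingParams

llm = LLM(model="openai/gpt-oss-20b")
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(
    ["Summarize the difference between open-weight and open-source models."],
    params,
)
print(outputs[0].outputs[0].text)
```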