OpenAI Drops Two Open Source AI Models That Run Locally and Match Premium Offerings
The post OpenAI Drops Two Open Source AI Models That Run Locally and Match Premium Offerings appeared on BitcoinEthereumNews.com.
In brief

- OpenAI open-sourced two mega-models under Apache 2.0, shattering licensing limits
- The 120B-parameter model runs on a single $17K GPU; the 20B-parameter version works on high-end gaming cards
- Performance rivals GPT-4o-mini and o3, beating rivals in math, code, and medical benchmarks

OpenAI released two open-weight language models Tuesday that deliver performance matching its commercial offerings while running on consumer hardware: gpt-oss-120b needs a single 80GB GPU, and gpt-oss-20b operates on devices with just 16GB of memory.

The models, available under the Apache 2.0 license, achieve near-parity with OpenAI's o4-mini on reasoning benchmarks. The 120-billion-parameter version activates only 5.1 billion parameters per token through its mixture-of-experts architecture, while the 20-billion-parameter model activates 3.6 billion. Both handle context lengths up to 128,000 tokens, the same as GPT-4o.

The fact that they are released under that specific license is a big deal. It means anyone can use, modify, and profit from the models without restriction, from individual users to OpenAI's competitors such as Chinese startup DeepSeek.

The release comes as speculation mounts about GPT-5's imminent arrival and competition intensifies in the open-source AI space. The OSS models are OpenAI's first open-weight language models since GPT-2 in 2019.

There is no confirmed release date for GPT-5, but Sam Altman has hinted it could arrive sooner rather than later. "We have a lot of new stuff for you over the next few days," he tweeted early today, promising "a big upgrade later this week."

"we have a lot of new stuff for you over the next few days! something big-but-small today. and then a big upgrade later this week."
— Sam Altman (@sama) August 5, 2025

The open-source models that dropped today are very powerful. "These models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for…
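The hardware figures above can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes the weights ship in a roughly 4-bit quantized format (an assumption, not a spec from this article) and ignores KV cache, activations, and runtime overhead, which also consume memory in practice:

```python
def approx_weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Rough memory footprint of model weights alone, in gigabytes.

    Ignores KV cache, activations, and framework overhead, so real
    requirements are somewhat higher than this estimate.
    """
    return n_params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB

# gpt-oss-20b at an assumed 4 bits/param: ~10 GB of weights,
# which is consistent with running on a 16GB device.
print(approx_weight_memory_gb(20e9, 4))   # 10.0

# gpt-oss-120b at the same assumed precision: ~60 GB of weights,
# consistent with fitting on a single 80GB GPU.
print(approx_weight_memory_gb(120e9, 4))  # 60.0
```

At full 16-bit precision the same arithmetic gives roughly 40 GB and 240 GB respectively, which is why reduced-precision weights are what make the consumer-hardware claim plausible.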
Filed under: News - @ August 6, 2025 2:28 am