December 14, 2025
Technology

OpenAI to give away some of the technology that powers ChatGPT


SAN FRANCISCO — In a move that will be met with both applause and hand-wringing from artificial intelligence experts, OpenAI said Tuesday that it was freely sharing two of its AI models used to power online chatbots.

Since OpenAI unveiled ChatGPT three years ago, sparking the AI boom, it has mostly kept its technology under wraps. But many other companies, looking to undercut OpenAI, have aggressively shared their technology through a process called open source.

Now, OpenAI hopes to level the playing field and ensure that businesses and other software developers stick with its technology.

OpenAI’s shift adds more fuel to a long-running debate between researchers who believe it is in every company’s interest to open-source their technology, and national security hawks and AI safety pessimists who believe American companies should not be sharing their technology.

The China hawks and AI worriers appear to be losing ground. In a notable reversal, the Trump administration recently allowed Nvidia, the world’s leading maker of the computer chips used to create AI systems, to sell a version of its chips in China.

Many of the San Francisco company’s biggest rivals, particularly Meta and Chinese startup DeepSeek, have already embraced open source, setting OpenAI up as one of the few AI companies not sharing what it was working on.

The models being offered by OpenAI, called gpt-oss-120b and gpt-oss-20b, do not match the performance of OpenAI’s most powerful AI technologies. But they still rank among the world’s leading models, according to benchmark test results shared by the company. If people use those newly open-source models, OpenAI hopes they will also pay for its more powerful products.

“If we are providing a model, people are using us,” Greg Brockman, OpenAI’s president and one of its founders, said in an interview with The New York Times. “They are dependent on us providing the next breakthrough. They are providing us with feedback and data and what it takes for us to improve that model. It helps us make further progress.”

Open source has been a common practice among software companies for decades. As OpenAI and other companies began developing the kind of technology that would eventually drive chatbots like ChatGPT nearly a decade ago, they often open-sourced it.

“If you lead in open source, it means you will soon lead in AI,” said Clément Delangue, chief executive of Hugging Face, a company that hosts many of the world’s open-source AI projects. “It accelerates progress.”

But after OpenAI shared a technology called GPT-2 in late 2019, it stopped open-sourcing its most powerful systems, citing safety concerns. Many of OpenAI’s rivals followed its lead. When OpenAI unveiled ChatGPT in late 2022, a growing chorus of AI experts argued that open-source technologies could cause serious harm.

This kind of technology can help spread disinformation, hate speech and other toxic language. Many researchers also worry that such systems could one day help people build bioweapons or wreak havoc as governments and businesses connect them to power grids, stock markets and weapons.

But the public conversation started to shift in 2023, when Meta shared an AI system called LLaMA. Meta’s decision to go against the grain fueled a growing open-source ecosystem in the United States and other parts of the world. By late 2024, when DeepSeek released a technology called V3, China had shown that its open-source systems could challenge many of the leading U.S. systems.

OpenAI said it released the open-source systems in part because some businesses and individuals prefer to run these kinds of technologies on their own computer hardware, rather than over the internet. One of the new systems, gpt-oss-20b, is designed to run on a laptop. The other, gpt-oss-120b, requires a more powerful machine equipped with the specialized computer chips used to build the leading AI systems.
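For developers curious what running such a model locally looks like, the sketch below shows one common way to load an open-weights chat model with the Hugging Face transformers library. It is a minimal illustration, not OpenAI's documented setup: the model ID openai/gpt-oss-20b, the prompt and the generation settings are assumptions for illustration, and the larger gpt-oss-120b would typically need server-class GPU hardware rather than a laptop.

    # Minimal sketch: loading an open-weights chat model locally with Hugging Face
    # transformers. The model ID "openai/gpt-oss-20b" is an assumption; check the
    # official model card for exact requirements and usage.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "openai/gpt-oss-20b"  # assumed Hugging Face model ID

    # Download the tokenizer and weights; device_map="auto" places layers on
    # whatever hardware (GPU, Apple silicon or CPU) is available.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",
        device_map="auto",
    )

    # Build a chat-style prompt and generate a short completion.
    messages = [{"role": "user", "content": "Explain open-source AI models in one sentence."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))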

Brockman acknowledged that AI can be used to both harm and empower people. But he said that the same is true of any powerful technology. He said that OpenAI has spent months — even years — building and testing its new open-source systems in an effort to reduce any harm.

The debate over open source is expected to continue as companies and regulators weigh the potential harm against the power of the time-tested tech method. Many companies have changed strategy over the years and will continue to do so.

After creating a new superintelligence lab, Mark Zuckerberg and his fellow executives at Meta are considering their own shift in strategy.

They might stop freely sharing the company’s most powerful AI technology, called Behemoth, with researchers and businesses, and move toward a more guarded strategy built on closed-source software.


