I tested OpenAI’s new open-source “thinking” models and had one running locally on my MacBook Pro in under 20 minutes. No cloud, no data leaving my laptop.
A few days ago, I decided to put OpenAI's new open-source "thinking" models to the test.
It took me less than 20 minutes.
I downloaded the model through Ollama, wrote a small script (ironically, with the help of another AI), and ran it locally on my MacBook Pro.
The result? The gpt-oss:20b model was up and running: fast, responsive, and entirely under my control, with no data leaving my laptop.
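For readers who want to reproduce the setup, here is a minimal sketch of the kind of script described above. It assumes Ollama is installed, running on its default local port (11434), and that the gpt-oss:20b model has already been pulled with `ollama pull gpt-oss:20b`; the prompt text is illustrative, not the one I used.

```python
# Minimal local-inference sketch against Ollama's HTTP API.
# Assumes: Ollama server on localhost:11434, gpt-oss:20b already pulled.
import json
import urllib.request

payload = {
    "model": "gpt-oss:20b",
    "prompt": "Summarize the trade-offs of running AI models locally.",
    "stream": False,  # return one complete response instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=120) as resp:
        print(json.loads(resp.read())["response"])
except OSError:
    # Server not running -- either way, nothing ever leaves the machine.
    print("Ollama server not reachable on localhost:11434")
```

Everything here talks to localhost only, which is the whole point: the prompt and the model's answer never touch the network.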
For years, businesses—especially in regulated industries—have raised the same objection:
“We can’t use AI because we can’t risk sending sensitive data to the cloud.”
That excuse is now gone.
Two weeks ago, OpenAI made two reasoning models openly available under an open-source license. That means anyone can download the weights and run them on their own hardware, with no data ever leaving the machine.
Of course, open source doesn’t mean “full package.”
So the trade-off is simple:
local = security and flexibility, but fewer features.
For SMBs, this is a breakthrough moment.
This levels the playing field. A small firm can now harness cutting-edge AI with nearly the same raw power as a Fortune 500 company.
For Enterprises, the story is more complex.
Large organizations thrive on integration, compliance, and scale. An open-source model sitting on a server may not check the boxes for monitoring, governance, or multi-departmental use.
Compared to OpenAI’s hosted platform, these models will feel bare-bones. Enterprises will need to invest significantly in building the missing pieces before they see real productivity gains.
Here’s the real shift: Data security is no longer a blocker.
The question for every business leader becomes: do you run AI locally for control, or in the cloud for convenience and features?
Either way, the ground has moved. AI adoption is no longer about whether it’s safe. It’s about how you’ll use it to move your business forward.
What are your thoughts on this topic? Reply to our newsletter or connect with us on LinkedIn.