A year ago, we went all in on AI at OSS Ventures, the leading venture builder for operations.
Twelve months later, we’ve launched four AI-native companies and helped implement AI in an existing portfolio company with a hands-on, months-long project. Across our broader portfolio of 20 companies, AI is now present in almost every product and operational workflow. We’ve shipped real systems into dozens of factories. And we’ve had our share of wins, dead ends, brutal rewrites, and joyful breakthroughs.
This article is not a victory lap. It’s not even advice. But it is a set of reflections — honest, messy, and grounded in real deployments. If you’re a builder or operator trying to make AI useful in production — this one’s for you.
1. The shallow AI trap: when “ChatGPT in the corner” changes nothing
We’ve seen a recurring anti-pattern: a SaaS product adds a chatbot on the side. Sometimes it’s called a “copilot.” Sometimes it’s “AI-powered search.”
And often, it does absolutely nothing.
Zero impact on activation, zero on retention, zero on revenue. Why? Because slapping AI on top of an existing flow without rethinking the core job-to-be-done is like painting a forklift neon yellow and expecting it to go faster.
The opportunity is not adding AI on top of your product. The opportunity is rethinking the product around what AI makes possible. That’s a different level of game. That’s also very difficult.
2. Start with a quantified outcome or die in the toy zone
The moment we stopped pitching features and started pitching outcomes, everything changed.
Instead of saying:
“We built an AI that parses supplier emails into a structured workflow”
we now say:
“We think we can get 3% more cost savings in your purchasing department with 50% fewer people. Do you buy that?”
The second statement sparks real conversation. Real discomfort, too — but that’s the point. We’ve found that about 15% of executives respond with clarity, energy, and ambition. The rest get nervous or deflect.
Which is fine. AI is expensive. Time-consuming. You want to work with believers.
3. You’re not building for the user — you’re building for the human in the loop
This is a subtle but critical shift.
In traditional SaaS, the user is the decision-maker. In AI systems, the user is often the supervisor of a decision-making engine. The interface must convey confidence, ambiguity, edge cases, and options. It must make failure visible and fixable.
This is especially true in manufacturing, where an “AI agent” is like an extra operator — except less predictable.
Designing for this “human-in-the-loop” paradigm is now half our product work. Think less “AI magic” and more “AI + pilot + instrument panel + brake pedal.”
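To make the supervisor role concrete, here is a minimal sketch of that routing logic. This is an illustration with hypothetical names and thresholds, not our production code: the AI proposes, and anything below a confidence bar is escalated to the human with its rationale attached.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str        # what the AI engine proposes
    confidence: float  # model confidence, 0.0 to 1.0
    rationale: str     # human-readable explanation shown to the supervisor

def route(suggestion: Suggestion, auto_threshold: float = 0.95) -> str:
    """Only high-confidence suggestions pass through automatically;
    everything else is surfaced to the human in the loop."""
    if suggestion.confidence >= auto_threshold:
        return "auto-applied"
    return "escalated-to-operator"
```

The instrument-panel part of the interface is then mostly about rendering `confidence` and `rationale` so the operator can accept, edit, or reject in seconds.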
4. Composable is mandatory. Especially when things look simple.
Take this spec:
“Help an operator run a machine correctly in any context.”
Looks simple.
Now build it.
You’ll need:
- A decision engine for contextual variations
- A scheduling engine
- Real-time sensor inputs
- Human-readable UI
- A body of knowledge with the correct format and cleaning
- Exception handling
- Versioned updates
- AI-driven anomaly detection
- Hardcoded safety logic
What looks like a simple “assistant” is often a composite of 5+ subsystems under the hood. We’ve learned (the hard way) that only a modular, composable architecture gives us the flexibility to deliver and iterate fast enough.
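One way to picture that composability is a pipeline of small, independently versioned stages that each enrich a shared context. The sketch below is illustrative (the stage names, fields, and thresholds are made up, not our real stack); the point is that any stage, including the AI-driven ones, can be swapped without touching the rest, while the hardcoded safety logic stays deterministic.

```python
from typing import Callable, List

# Each subsystem is a stage: it reads the shared context dict and enriches it.
Stage = Callable[[dict], dict]

def anomaly_flag(ctx: dict) -> dict:
    # Placeholder for the AI-driven anomaly detector.
    ctx["anomaly"] = ctx.get("vibration", 0.0) > 0.8
    return ctx

def safety_check(ctx: dict) -> dict:
    # Hardcoded safety logic: deterministic, never delegated to a model.
    ctx["safe"] = ctx.get("spindle_rpm", 0) <= ctx.get("max_rpm", 3000)
    return ctx

def run_assistant(stages: List[Stage], ctx: dict) -> dict:
    """Compose stages in order; each can be replaced or versioned alone."""
    for stage in stages:
        ctx = stage(ctx)
    return ctx
```

A decision engine, scheduler, or knowledge-base lookup slots in as just another stage, which is what lets us iterate on one subsystem without a full rewrite.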
5. After the demo, your buyers want control
AI demos are fun. For a while.
But when the demo is over and the pilot starts, the factory manager wants:
- Failure modes
- False-positive vs. false-negative rates (Type I vs. Type II errors)
- Edge case handling
- Performance in degraded environments
- Options to override
They don’t want magic. They want something they can trust, manage, and explain.
We now include these guardrails upfront. It’s part of the “industrial-grade AI” playbook: build for the skeptical operator, not the curious VP.
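As a sketch of what "override" and "guardrails" mean in code (hypothetical function and parameter names, not a real API): the AI proposes a setpoint, but a hardcoded safety envelope and the operator always have the last word.

```python
from typing import Optional, Tuple

def apply_with_guardrail(ai_setpoint: float, safe_min: float, safe_max: float,
                         operator_override: Optional[float] = None) -> Tuple[float, bool]:
    """Return (value_applied, was_clamped). A human override bypasses the AI
    entirely; otherwise the AI's proposal is clamped to the validated range."""
    if operator_override is not None:
        return operator_override, False
    clamped = min(max(ai_setpoint, safe_min), safe_max)
    return clamped, clamped != ai_setpoint
```

The `was_clamped` flag matters as much as the value: every clamp is a logged failure mode the factory manager can audit, which is exactly the kind of control they ask for after the demo.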
6. Pick big problems or don’t bother
AI is not cheap. Not in compute. Not in infra. Not in data. Not in complexity cost.
If your use case saves 1 hour per week or improves accuracy by 2%, it probably isn’t worth it. We’ve killed projects — ours and clients’ — because the ROI was just too thin.
By contrast, when you touch something like:
- Procurement negotiation
- CAPEX deployment
- Visual quality control
- Workforce planning
- ERP augmentation
… the potential uplift is in the millions. That’s the league where AI makes sense. That’s where it justifies the build.
7. Your AI model will be obsolete in six months. Plan for it.
One of our startups spent months tuning a custom model. It worked. Then a foundation model leapfrogged it in performance — overnight.
We now assume models will be replaced quarterly. We’ve made all stacks model-agnostic, retrain-friendly, and version-controlled by default.
If your architecture assumes “this model will be the one,” you’re in trouble.
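"Model-agnostic" in practice mostly means one narrow contract between the model and everything downstream. A minimal sketch, with illustrative names (this is the pattern, not our actual interfaces): the product code depends only on a small protocol, so a quarterly model swap is a constructor change plus a re-run of the eval suite.

```python
from typing import Protocol

class Model(Protocol):
    """The only contract the rest of the stack may depend on."""
    version: str
    def predict(self, prompt: str) -> str: ...

class EmailParser:
    """Downstream component: knows nothing about which model sits behind it."""
    def __init__(self, model: Model):
        self.model = model
    def parse(self, email: str) -> str:
        return self.model.predict(f"Extract order fields from: {email}")

class StubModel:
    # Stand-in for any foundation or fine-tuned model.
    version = "stub-v1"
    def predict(self, prompt: str) -> str:
        return f"parsed({len(prompt)} chars)"
```

The stub also makes retraining and regression testing cheap: the same eval harness runs against `StubModel`, last quarter's model, and whatever leapfrogs it next.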
8. There are no AI experts. You’re it.
We tried hiring model specialists. We tried consultants. Most failed.
Why? Because what you need isn’t theory. You need:
- People who can write tests for weird data
- People who can ship on edge devices
- People who can debug YAML and GPU drivers
- People who know your users
- People who can do all of the above, fast, and with purpose
That means your full-stack engineers must become AI-native. It’s the new literacy. It’s okay to not know — it’s not okay to not learn, and not okay to still not know the week after.
9. “Wrapper over GPT” is a surface-level critique
There’s a meme going around:
“You’re just a wrapper over OpenAI.”
It’s not entirely wrong. But it misses the point.
We are indeed using foundation models. But we’re also:
- Building pipelines
- Shipping to edge devices
- Cleaning proprietary data
- Crafting interfaces
- Tuning workflows
- Supporting operations
That’s not a wrapper. That’s a product.
Just like every SaaS company in 2010 used AWS. Nobody called them “wrappers over EC2.”
10. The nerd in the factory is your best friend
In nearly every factory, there is one person who runs 30 MB Excel sheets, scripts Arduino boards, and knows more than your product team.
We now build for that person.
When deploying in production environments, it’s not the headquarters that guarantees adoption. It’s the nerd who’s trusted on the floor.
In one case, a single technician deployed our system across five sites — with no training session. Find that person. Empower them.
11. The ERP won’t be a barrier for much longer
Here’s a hot take: within 18 months, data integration will be a solved problem.
LLMs are rapidly decoding the obscure schemas and brittle logic of legacy ERPs. We’ve already seen early wins connecting directly into SAP and bypassing costly APIs.
When that becomes standard?
- No more $500k “data lake” projects.
- No more three-month syncs for “integration.”
- No more vendor lock-in via data opacity.
That will open the floodgates for a new class of operational tools. We’re preparing for that moment.
Final thought: this is just the beginning
We’re still early. Every week something breaks. Every month something leaps forward. But the direction is clear.
AI in operations is not about replacing humans. It’s about radically amplifying their reach and simplifying their environments. It’s about enabling a new kind of work, where well-paying factory jobs are powered by a level of leverage never seen before.
So to our fellow builders: Don’t get lost in the tech. Pick real problems. Ship relentlessly. Talk to users. And prepare to rebuild everything twice or thrice.
We’re in the middle of something big. And we’re grateful to be building it, one factory at a time.
If you’re working on AI for the physical world — or want to — let’s talk.
OSS Ventures is always looking for builders with ambition and courage.