Databases answer questions. You ask for data, they give it to you. That’s the job.
We’ve spent decades optimizing this: faster queries, better indexes, smarter caching. The entire field of database engineering is built around one goal: answering the question as quickly and reliably as possible.
But the database doesn’t know why you’re asking.
When a query comes in, the database has no idea what happens next. It doesn’t know if you’re about to make a decision that affects millions of dollars, or if you’re populating a report that can wait until tomorrow. It doesn’t know if a slow response will cascade into a system-wide failure, or if the caller can wait. It doesn’t know if you need perfect consistency or if you’d be fine with data from five seconds ago.
It just answers. And because it doesn’t know any of this, it has to assume the worst about everything.
Consider an e-commerce checkout. A single user action triggers a cascade of database calls:
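A typical flow might look something like this. The step names and queries below are purely illustrative, not from any particular system:

```python
# Five round trips for one user action. Each arrives at the database
# in isolation; nothing marks them as parts of the same checkout.
checkout_steps = [
    ("check_stock",     "SELECT quantity FROM inventory WHERE sku = ?"),
    ("create_order",    "INSERT INTO orders (user_id, total) VALUES (?, ?)"),
    ("capture_payment", "UPDATE payments SET status = 'captured' WHERE order_id = ?"),
    ("decrement_stock", "UPDATE inventory SET quantity = quantity - 1 WHERE sku = ?"),
    ("queue_email",     "INSERT INTO outbox (order_id, template) VALUES (?, 'confirmation')"),
]

for name, query in checkout_steps:
    # Every query is answered with equal, anonymous urgency.
    print(f"{name}: {query}")
```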
The business knows this is one checkout. It knows the payment must complete before the order is confirmed, and that the email notification can wait. But to the database, each query arrives without that context. Every request is a fresh stranger. The database doesn’t see a checkout; it sees unrelated queries, and treats them all the same.
And if something breaks mid-flow? Best case: the whole process rolls back. Worst case: the system is left in a logically inconsistent state, and you’re writing compensating transactions to clean up the mess.
The Cost of Not Knowing
Think about what this isolation forces:
Every query is urgent. Querying a database is inherently time-sensitive. Something is always waiting for the answer, and waiting has costs. A slow response doesn’t just delay the answer; it blocks the entire code path and risks cascading failures upstream.
So you optimize for the worst case. You set aggressive timeouts, add retries, build circuit breakers. Blocking a suspicious transaction gets the same treatment as compiling month-end metrics. The database can’t tell which is which, so you treat everything as urgent.
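That defensive machinery tends to look the same everywhere. A minimal sketch of the pattern, with invented names and arbitrary thresholds:

```python
import time

class GuardedQuery:
    """Wrap every database call in the same worst-case armor:
    a latency budget, retries, and a crude circuit breaker."""

    def __init__(self, timeout=0.5, retries=2, trip_after=3):
        self.timeout = timeout        # aggressive: assume someone is blocked
        self.retries = retries
        self.trip_after = trip_after  # consecutive failures before failing fast
        self.failures = 0

    def run(self, query_fn):
        if self.failures >= self.trip_after:
            raise RuntimeError("circuit open: failing fast")
        last_err = None
        for _ in range(self.retries + 1):
            start = time.monotonic()
            try:
                result = query_fn()
                if time.monotonic() - start > self.timeout:
                    # Treat a slow success as a failure: retry it.
                    raise TimeoutError("query exceeded latency budget")
                self.failures = 0
                return result
            except Exception as err:
                self.failures += 1
                last_err = err
        raise last_err
```

Every caller pays for this armor, because the database can’t tell the fraud check from the nightly report.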
Always ready, just in case. Because you don’t know when a query will arrive or what it needs, you maintain consistency constantly. Maybe the caller would be fine with slightly stale data. Maybe they need perfect accuracy. You don’t know, so you pay the consistency tax on every write.
Consider indexes. Every database engineer knows the rule: add indexes, but not too many. Why? Because indexes slow down writes. Every insert or update has to maintain them. But why maintain them immediately? Because a query might arrive any moment needing that index. If you knew no query would need that data for the next hour, you could defer the indexing work. But you don’t know, so you pay upfront, every time.
Preemptively optimized. Buffers kept hot. Statistics gathered and query plans cached. All of this anticipates demand that may or may not come. You’re spending resources now for queries that might arrive later, based on your best guess about access patterns.
Every one of these is a reasonable response to the same problem: the database is isolated from the process it serves.
What Context Buys You
What if your data system knew what you were going to do with the answer?
Not in some magical way. Just: what if the system knew what process the data is part of? What if it knew this data feeds a time-sensitive decision, or that it’s needed for a batch job that can wait, or that the caller has a fallback if the data is slightly stale?
Fast where it matters. You can be fast when it matters and lazy where it doesn’t. Data feeding a human decision might need to arrive quickly. Data for a nightly report can take its time.
Consistency on demand. You don’t need to maintain perfect consistency for readers who don’t need it. You can be eventually consistent when that’s good enough, and strongly consistent when someone actually requires it.
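One way to picture this is a read path where the caller declares its staleness tolerance. A toy sketch, with a plain dict standing in for the authoritative store (all names here are invented):

```python
import time

class StalenessAwareCache:
    """Toy read path where the caller says how stale is acceptable."""

    def __init__(self, db):
        self.db = db      # any mapping standing in for the real store
        self.cache = {}   # key -> (value, fetched_at)

    def read(self, key, max_staleness=0.0):
        hit = self.cache.get(key)
        if hit is not None:
            value, fetched_at = hit
            if time.monotonic() - fetched_at <= max_staleness:
                return value      # good enough: no trip to the store
        value = self.db[key]      # freshness required: pay the full cost
        self.cache[key] = (value, time.monotonic())
        return value
```

A nightly report might read with `max_staleness=300`; a fraud check with `0`. Same data, different price, chosen by the reader who actually knows what it needs.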
Latency is about when you need it. Without context, data not in memory means the client waits. So you buffer aggressively, guessing what might be needed. But when you know what processes are running and when they’ll need data, you can buffer intentionally: preparing exactly what’s coming, not what might. Better yet, you can keep data in high-latency bottomless storage like S3 and prefetch it.
And the workloads that genuinely are unpredictable (ad-hoc analytics, exploratory queries) are usually the ones that can tolerate higher latency anyway. They don’t need the same treatment as transactional data.
From Answering to Participating
Yes, there are workarounds. Read replicas for analytics. Async queues for things that can wait. CQRS to separate concerns. But these are layers of complexity built around a database that doesn’t know what you need. Each one adds infrastructure to manage, edge cases to handle, and more ways to fail.
The traditional model treats databases as isolated services. They sit behind an API, answer questions, and have no visibility into what happens with those answers. They’re optimized for the query, not for the outcome.
The alternative is a data system that participates in the process. One that understands not just “what data do you need” but “why do you need it and what happens next.”
This isn’t about making databases smarter in the AI sense. It’s about making them aware, connecting them to the context they’ve been missing.
When your data system knows it’s part of a larger flow, it can make better tradeoffs: lazy about things that don’t matter yet, aggressive about things that do, adapting to what’s actually happening instead of preparing for every possible query that might theoretically arrive.
Now imagine the same checkout, conceptually reimagined:
No more blind queries. You declare what each outcome depends on: an order is placed when the request exists, payment is captured, and stock is available. The system knows the full dependency graph. Even if data is delayed, the system knows exactly where it should flow when it arrives. Data flows according to declared rules, not isolated requests.
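There is no established API for this; purely as a sketch, the declarations might read something like the following, where every name is hypothetical:

```python
# Hypothetical declarative rules. Nothing queries; an outcome fires
# once its dependencies hold. Anything not listed as a rule is a base
# fact: an external event the system waits for.
RULES = {
    "order_placed": ["checkout_request", "payment_captured", "stock_available"],
    "email_queued": ["order_placed"],  # downstream, and allowed to wait
}

def ready(outcome, facts):
    """True once `outcome` is derivable from the facts that have arrived."""
    needs = RULES.get(outcome)
    if needs is None:                  # a base fact, not a derived outcome
        return outcome in facts
    return all(ready(dep, facts) for dep in needs)
```

With the graph declared, a late `payment_captured` isn’t an error to retry blindly; the system simply knows `order_placed` is still waiting on it.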
Where This Led Us
We wanted a system where data and process aren’t separate. No client-initiated queries, just standing orders: declarations of what you need and why. The system knows what’s coming because the business logic is part of the system, not outside it.
At the core is a scheduler that drives entire processes, orchestrating compute and I/O rather than waiting to respond. Rules declare not just their dependencies but their priorities and constraints. The scheduler sees the full graph and allocates resources accordingly: aggressive for what’s blocking critical paths, lazy for what can wait, prefetching what it knows is coming.
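One way to picture such a scheduler, as a toy rather than anything resembling the actual implementation, is a priority queue over the declared graph (all rule names and priority values below are invented):

```python
import heapq

# Toy rules: each declares its dependencies and a priority
# (lower number = more critical path).
RULES = {
    "fraud_check":  {"needs": [], "priority": 0},
    "order_commit": {"needs": ["fraud_check"], "priority": 0},
    "email":        {"needs": ["order_commit"], "priority": 9},
    "analytics":    {"needs": ["order_commit"], "priority": 9},
}

def schedule(rules):
    """Return an execution order: dependencies first, critical work early."""
    done, order = set(), []
    ready = [(r["priority"], name) for name, r in rules.items() if not r["needs"]]
    heapq.heapify(ready)
    while ready:
        _, name = heapq.heappop(ready)
        done.add(name)
        order.append(name)
        queued = {n for _, n in ready}
        for cand, r in rules.items():
            if cand not in done and cand not in queued \
               and all(dep in done for dep in r["needs"]):
                heapq.heappush(ready, (r["priority"], cand))
    return order
```

The point isn’t the algorithm; it’s that the scheduler can see the whole graph and its priorities at once, instead of reacting to anonymous queries one at a time.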
A system that doesn’t know why you’re asking has to guess. One that does can actually help.
This is what we’re building at Inferal: a data system where business logic and data live together, so the system always knows what’s needed and why. We’re building toward data-native operating systems for complex, agent-augmented systems. If this resonates, let’s talk.