AI is a modeling problem


We have been trying to solve the “AI problem” for decades. But where exactly does the true problem lie? Most current AI breakthroughs focus on the large-scale training of neural networks. But what if the bottleneck isn’t the size of the network, but how we model the world?

When I say “modeling,” I don’t mean “Large Language Models.” I mean how we represent mental constructs in computer memory. Classes, structures, and schemas were our primitive attempts to solve this. Over time, this ever-present problem was set aside as we became infatuated with LLMs.

Currently, the de facto definition of AI is “LLM-based AI.” These systems use statistical approximations to model text. However, a true AI system should model facts—much like how banking software represents a bank, an account, a transaction, or a customer—rather than just a text representation of them.
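The difference between modeling text and modeling facts can be sketched in a few lines. This is my own illustration, not code from any real system; the `Account` class and `transfer` function are hypothetical names chosen to mirror the banking analogy:

```python
from dataclasses import dataclass

# A text-based system holds only the surface form of a fact:
text_fact = "Alice transferred $50 to Bob"

# A fact-based model, as banking software does, represents the entities
# themselves and can enforce their invariants (names are illustrative):
@dataclass
class Account:
    owner: str
    balance: float = 0.0

def transfer(src: Account, dst: Account, amount: float) -> None:
    """Move funds while enforcing that a balance never goes negative."""
    if amount > src.balance:
        raise ValueError("insufficient funds")
    src.balance -= amount
    dst.balance += amount

alice = Account("Alice", 100.0)
bob = Account("Bob", 0.0)
transfer(alice, bob, 50.0)
print(alice.balance, bob.balance)  # 50.0 50.0
```

The string can only be pattern-matched; the model can be queried, validated, and acted upon, which is the distinction the article is drawing.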

The problem is this: we cannot manually build generic software for the millions of concepts the human mind handles. This is the “Modeling Problem.” Specifically:

  1. How do we model the content of our minds in computer memory?

  2. How do we create these models automatically from natural language?

Once the modeling problem is solved, AI will not just generate code; it will behave as software.

Have you ever wondered why multibillion-dollar AI companies promote code generation when they claim to have “all-knowing” AI? Consider these points:

  • We write code because computers do not understand natural language.

  • Programming languages were invented to meticulously tell computers what to do.

  • We call the quest for generic software that simulates human intelligence “Artificial Intelligence.”

If AI is the ultimate form of making computers “understand,” then using AI to generate code (just to tell a computer what to do) defeats the purpose. It suggests that the AI doesn’t actually understand the task; it is merely translating it into another language it doesn’t understand either.

The reason AI exists is to provide a generic solution. Using AI to create a specific solution via code generation is a form of cognitive dissonance. While helping developers is fine, it is a demonstration of capability rather than a necessity for true AI. A true AI system would simply act as the software itself.

The intelligence embedded in typical software is negligible compared to the vast intelligence AI is expected to possess. To demonstrate this, I have developed a software builder PoC (Proof of Concept). This engine acts as the software itself.

In the demo, I show the creation of Sign-in/Sign-up modules, including login logic and role management—all created dynamically from natural language.

Key Technical Points:
  • Proprietary Engine: This is not LLM-based or Generative AI. It is a solution based on the “modeling problem.”

  • Dynamic Modeling: There is no back-end API, no DSL (Domain Specific Language), and no code generation. AI is the runtime.

  • Experience-Based Meaning: If you tell the system “a dog is an animal,” it creates a partial piece of information. If you later say “a dog has a tail,” the model enriches itself. It creates meaning from experience and context without the “scaling problem” found in classical symbolic modeling.
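The enrichment behavior described above can be sketched as a minimal toy model. This is my own illustration of the idea, not the proprietary engine; `ConceptModel`, `tell`, and `describe` are hypothetical names:

```python
from collections import defaultdict

class ConceptModel:
    """Toy store of partial, incrementally enriched concept information."""

    def __init__(self):
        # concept -> relation -> set of values, built up statement by statement
        self.facts = defaultdict(lambda: defaultdict(set))

    def tell(self, concept: str, relation: str, value: str) -> None:
        """Record one partial piece of information about a concept."""
        self.facts[concept][relation].add(value)

    def describe(self, concept: str) -> dict:
        """Return everything learned about the concept so far."""
        return {rel: sorted(vals) for rel, vals in self.facts[concept].items()}

m = ConceptModel()
m.tell("dog", "is_a", "animal")  # "a dog is an animal" -> partial info
m.tell("dog", "has", "tail")     # later: "a dog has a tail" -> enrichment
print(m.describe("dog"))         # {'is_a': ['animal'], 'has': ['tail']}
```

Each statement adds to the model rather than replacing it, which is the sense in which meaning accumulates from experience here.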

By solving the modeling problem, three critical factors can be integrated into AI:

  1. Time: How time is perceived and modeled within logic.

  2. Psychology: Decisions should be based on gratification levels (pleasure/pain points) relative to a specific psychological framework. Without psychology, there is no true AI.

  3. Reasoning: Human reasoning is often just an observation of experience. If I hear stones “singing” every morning at 6:30 AM, my “logical” reason for the noise is “the stones are singing.” This is observation-based logic, and a true AI must handle information the same way.
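The observation-based reasoning in point 3 can be illustrated with a toy sketch. This is my own example, not the author’s system; `Observer`, `observe`, and `explain` are hypothetical names, and the “explanation” is simply the most frequently co-observed context:

```python
from collections import Counter

class Observer:
    """Toy reasoner: explanations are just repeated observations."""

    def __init__(self):
        self.associations = Counter()

    def observe(self, event: str, context: str) -> None:
        """Record that `event` occurred together with `context`."""
        self.associations[(event, context)] += 1

    def explain(self, event: str) -> str:
        """Return the context most often observed alongside the event."""
        candidates = {ctx: n for (ev, ctx), n in self.associations.items()
                      if ev == event}
        if not candidates:
            return "no experience of this event"
        return max(candidates, key=candidates.get)

obs = Observer()
for _ in range(30):  # every morning at 6:30, the same co-occurrence
    obs.observe("noise at 6:30", "stones singing")
print(obs.explain("noise at 6:30"))  # stones singing
```

No causal theory is involved: the system’s “reason” is whatever experience has most often paired with the event, mirroring the stones example.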

As the knowledge base of such an AI develops, it will excel beyond human capability in most aspects. The only thing missing might be the “divine spark” of invention.

On a lighter note: there is no need for a “Sarah Connor” to protect humanity. A creator always has the upper hand unless they intentionally let go of the reins. There are multiple ways to solve the modeling problem, but solving it is far more important than aiming for the next trillion-parameter LLM.
