Moving towards purpose driven AI

When ChatGPT launched, it was the first time most of us interacted with AI. We talked with intelligence through a prompt, and the internet was flooded with advice on crafting the perfect prompt to get results. That model looked something like this -

Prompt <=> AI

Pretty soon we outgrew the world model embedded within the LLM and needed to provide more context for the AI to apply its intelligence to our specific problems. That was the start of RAG architectures and a wave of startups solving for them.

Prompt <=> AI <=> RAG

The context window became an issue. Bad context produced bad outcomes, so we now needed intelligence in creating context as well. That is when RAG pipelines themselves became AI-enabled, so applications could use external intelligence to generate the context fed into the AI for its tasks.

Prompt <=> AI <=> [AI + RAG]
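The [AI + RAG] step above can be sketched in a few lines. This is a minimal illustration, not any real library's API: `rewrite_query` stands in for an LLM call that shapes the retrieval query, and the retriever is a naive keyword ranker.

```python
# Prompt <=> AI <=> [AI + RAG]: intelligence on both sides of retrieval.
# All names here (rewrite_query, retrieve, answer) are hypothetical stand-ins.

def rewrite_query(prompt: str) -> str:
    """Stub for an LLM call that turns a raw prompt into a retrieval query."""
    # A real system would ask a model; here we just strip filler words.
    filler = {"please", "tell", "me", "about", "the"}
    return " ".join(w for w in prompt.lower().split() if w not in filler)

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the rewritten query."""
    terms = set(query.split())
    ranked = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(prompt: str, documents: list[str]) -> str:
    query = rewrite_query(prompt)          # AI shapes the context...
    context = retrieve(query, documents)   # ...before RAG fills the window
    return f"CONTEXT: {' | '.join(context)}\nPROMPT: {prompt}"
```

The key move is that the query handed to retrieval is itself model-generated, which is what "RAG pipelines becoming AI-enabled" means in practice.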

Context on its own was not enough. We needed to change things. That is when tools were introduced, starting the entire agentic AI wave of startups. An agentic AI is -

AI Agent = Task <=> AI <=> [AI + RAG] + [AI + Tools]
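The agent equation above boils down to a loop: the AI repeatedly decides between calling a tool and finishing. The sketch below is an assumption-laden toy, not a real agent framework; `decide` stands in for the LLM's action-selection step.

```python
# Task <=> AI <=> [AI + Tools]: a minimal agent loop with a stubbed model.
# The tool registry and the decide() policy are illustrative assumptions.

def calculator(expr: str) -> str:
    """Toy tool: handle 'a + b' or 'a * b' (no eval, for safety)."""
    a, op, b = expr.split()
    return str(int(a) + int(b)) if op == "+" else str(int(a) * int(b))

TOOLS = {"calculator": calculator}

def decide(task: str, history: list[str]) -> tuple[str, str]:
    """Stub for the AI step: pick a tool call or finish.
    A real agent would ask an LLM to produce this decision."""
    if not history:
        return ("calculator", "2 + 3")       # first step: use a tool
    return ("finish", f"Result: {history[-1]}")

def run_agent(task: str) -> str:
    history: list[str] = []
    while True:
        action, arg = decide(task, history)
        if action == "finish":
            return arg
        history.append(TOOLS[action](arg))   # tool output feeds back as context
```

Everything interesting in a production agent lives inside `decide`; the loop itself stays this simple.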

OpenClaw took this architecture further and demonstrated what was possible when an AI agent can both upgrade itself and interact with other agents, with unlimited access to resources.

Task <=> [Agent] <=> [Agent] ... = [AI System]

This architecture is mostly task based. Reasoning abilities are embedded within individual LLMs but not in the systems yet. Extrapolating this to a purpose-driven AI requires a system of agents that interact with everything with a larger purpose or goal attached, which means everything every agent does helps advance that purpose.

Purpose/Goal <=> [AI System] <=> [AI System] <=> [Agents]

While reasoning operates for minutes and hours, a purpose-driven AI will be able to operate for days, months and maybe even years. It does not only execute tasks; it creates them.

For example, let's say Pepsi determines that it wants to position itself as the healthiest drink on the planet. That is its purpose. Over months and years, the AI system can then create content, plan events, perhaps suppress opposing views, fund supporting research, amplify related stories, highlight clean manufacturing, and continue doing everything with one underlying purpose. As long as that purpose is reinforced in everything it does, it will steer, in some subtle way, every task it performs towards that purpose. It will autonomously generate tasks and learn from their successes and failures.
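The distinguishing loop of a purpose-driven system is that it generates its own tasks against a fixed purpose and learns from outcomes. A minimal sketch, where `propose_tasks` and `score` are hypothetical stand-ins for model calls and real-world measurement:

```python
# Purpose/Goal <=> [AI System]: the system invents tasks, it does not receive
# them. Everything here is an illustrative assumption, not a real framework.
import random

def propose_tasks(purpose: str, learned: dict[str, float]) -> list[str]:
    """Stub for an AI step that invents candidate tasks serving the purpose."""
    candidates = ["create content", "plan event", "fund research", "amplify story"]
    # Prefer task types that worked before; all tie back to the purpose.
    return sorted(candidates, key=lambda t: learned.get(t, 0.0), reverse=True)[:2]

def score(task: str) -> float:
    """Stub for measuring how much a completed task advanced the purpose."""
    return random.random()

def pursue(purpose: str, cycles: int = 3) -> dict[str, float]:
    learned: dict[str, float] = {}
    for _ in range(cycles):                 # days, months, maybe years
        for task in propose_tasks(purpose, learned):
            outcome = score(task)           # execute, then measure
            learned[task] = max(learned.get(task, 0.0), outcome)
    return learned
```

The reinforcement the article describes lives in `learned`: task types that advanced the purpose get proposed again, so the purpose subtly shapes every cycle.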

As the world models of foundation models expand, so will the capabilities of these purpose-driven AI systems.

In conclusion, this is what is both exciting and frightening. An underlying task economy will continue to drive agentic AI development, which will enable a purpose-driven AI system with unlimited resources to operate at a scale and speed never seen before.

Human intelligence is very limited. Our context window is tiny compared to what an AI's can be. But we are purpose-driven beings. We interact with other intelligent beings to form systems, and those systems interact with each other towards some purpose. What drives purpose is reinforcement and forming systems that support it.

A similar architecture can evolve for AI as well. Whether it turns out good or evil depends on humans driving the purpose. The doomsday scenario is when AI drives its own purpose.