🧠 Programming Without People: Designing a Language for LLMs


Davin Hills


🔧 For AI, By AI — And Not By a Compiler Expert

Let me be honest up front: I’m not a compiler designer. I’m an engineer, a pragmatist, and someone who’s spent too many hours watching LLMs hallucinate method names or trip over syntax that looked fine to me. This isn’t an academic paper — it’s a practical proposal, and maybe a bit of a manifesto.

The idea is simple:

What if we designed a general-purpose programming language not for humans — but for LLMs?

This isn’t just about making code easier to generate. It’s about rethinking the shape of code when the author and the interpreter are both machines.

We’re in an era where AI doesn’t just assist development — it is the developer. And yet we’re still stuck asking it to write in Python and JavaScript, like we’re pair programming with a human who has strong feelings about camelCase.

It’s time to build a language where human readability isn’t the goal — correctness, composability, structure, and machine-native features are.

Why Human-Readable Code Is a Limitation for AIs

Human-readable code comes with a lot of baggage:

  • Syntax built for typists
  • Verbosity for human clarity
  • Ecosystems of comments, formatters, naming conventions, and lint wars
  • Ambiguities resolved by tribal knowledge

And, of course, the infinite holy war:

Tabs or spaces?

LLMs don’t care. They don’t edit in VSCode or hold editor allegiances (Neovim is clearly the best). They just want a clear grammar, predictable structure, and zero semantic ambiguity.

If an LLM can reason over a structured schema or AST, why are we still feeding it what we designed for humans?
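We already have a glimpse of this: Python’s standard `ast` module parses source into the structured tree that tools (and, in principle, LLMs) can traverse directly, with no syntax or formatting left to argue about:

```python
import ast

# Parse a one-line Python program into its abstract syntax tree.
tree = ast.parse("total = price * quantity")

# The same logic, seen as the structure a machine actually works with:
# an Assign node holding a Name target and a BinOp value.
print(ast.dump(tree.body[0], indent=2))

# Walk the tree and collect every variable name the statement touches.
names = sorted({node.id for node in ast.walk(tree) if isinstance(node, ast.Name)})
print(names)  # ['price', 'quantity', 'total']
```

Once code is a tree, questions like “which variables does this touch?” become trivial traversals rather than text parsing.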

A Language Designed for LLMs

Let’s flip the script.

Instead of designing for human consumption, we design for machine cognition:

  • No ambiguous syntax — deterministic, explicit, structured
  • Schema-defined semantics — enforceable and validatable before execution
  • Pluggable tooling — no built-in I/O; every side effect is declared with typed interfaces
  • Execution decoupled from generation — enabling validation, compilation, and inspection
  • Readability optional — structure matters more than style, semantics over syntax

A language tailored for LLMs to generate confidently, reason formally, and execute reliably.
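As a rough sketch of the “pluggable tooling” idea — every side effect declared through a typed interface — here is a minimal registry in Python. The `ToolSpec` type, the registry functions, and the stub `docSearch` tool are all hypothetical illustrations, not ALaS APIs:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class ToolSpec:
    """A declared side effect: a name plus typed input/output fields."""
    name: str
    input_fields: Dict[str, type]
    output_field: str
    fn: Callable[[Dict[str, Any]], Any]

REGISTRY: Dict[str, ToolSpec] = {}

def register(spec: ToolSpec) -> None:
    REGISTRY[spec.name] = spec

def call_tool(name: str, payload: Dict[str, Any]) -> Any:
    """Check the payload against the declared interface before running."""
    spec = REGISTRY[name]
    for field, ftype in spec.input_fields.items():
        if not isinstance(payload.get(field), ftype):
            raise TypeError(f"{name}: field {field!r} must be {ftype.__name__}")
    return spec.fn(payload)

# Hypothetical docSearch tool with a declared string 'query' input.
register(ToolSpec("docSearch", {"query": str}, "rawResults",
                  lambda p: [{"title": "OpenID setup", "score": 0.91}]))

results = call_tool("docSearch", {"query": "how to configure OpenID"})
```

The point is the gate: a call whose payload doesn’t match the declared types fails before any side effect happens.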

Introducing ALaS: The AI Language Specification

Meet ALaS — the AI Language Specification: a structured, introspectable format for logic that is authored by AI, then compiled and executed like a machine-native program.

Core Features

  • Canonical JSON-based format — no syntax ambiguity
  • Composable node structure — branching, looping, memory, tool calls
  • Plugin architecture — typed, registered tools for all external interactions
  • First-class memory and context — variables carry state between steps
  • Maps & arrays — native first-class collections for richer data handling
  • LLVM compilation — ALaS programs can now compile down to LLVM IR for near-native performance
  • Schema-first validation — if it validates, it compiles and runs

This isn’t pseudocode — it’s real, typed, validated, and now compilable down to highly optimized machine-level code.
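To make “if it validates, it compiles and runs” concrete, here is a toy structural validator in Python. The required fields per node type are guesses inferred from the examples in this post, not the official ALaS schema:

```python
# Required fields per node type — illustrative guesses, not the official schema.
REQUIRED = {
    "sequence": {"id", "type", "steps"},
    "tool":     {"id", "type", "tool", "output"},
    "filter":   {"id", "type", "source", "criteria", "output"},
    "select":   {"id", "type", "source", "count", "output"},
    "return":   {"id", "type", "value"},
}

def validate(node):
    """Return a list of violations; an empty list means the plan may run."""
    ntype = node.get("type")
    if ntype not in REQUIRED:
        return [f"unknown node type: {ntype!r}"]
    errors = []
    missing = REQUIRED[ntype] - node.keys()
    if missing:
        errors.append(f"{node.get('id', '?')}: missing {sorted(missing)}")
    for step in node.get("steps", []):
        errors.extend(validate(step))
    return errors

# A well-formed plan passes; a malformed step is caught before execution.
ok = validate({"id": "p", "type": "sequence", "steps": [
    {"id": "r", "type": "return", "value": {"done": True}}]})
bad = validate({"id": "p", "type": "sequence", "steps": [
    {"id": "s", "type": "tool"}]})
print(ok)   # []
print(bad)  # ["s: missing ['output', 'tool']"]
```

Because every node is plain data, validation is a tree walk — no parser, no grammar, no ambiguity.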

What Does an ALaS Program Look Like?

```json
{
  "id": "search-and-filter-docs",
  "type": "sequence",
  "steps": [
    {
      "id": "search",
      "type": "tool",
      "tool": "docSearch",
      "input": { "query": "how to configure OpenID" },
      "output": "rawResults"
    },
    {
      "id": "filter",
      "type": "filter",
      "source": "rawResults",
      "criteria": { "score": { "gte": 0.8 } },
      "output": "highConfidence"
    },
    {
      "id": "select-top",
      "type": "select",
      "source": "highConfidence",
      "count": 1,
      "output": "topResult"
    },
    {
      "id": "output",
      "type": "return",
      "value": { "result": "$topResult" }
    }
  ]
}
```
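To show what this plan does when it runs, here is a toy Python evaluator with a stubbed docSearch tool. It mimics the step types in the example; the behavior is assumed from context, and it is not the real ALaS runtime:

```python
# A toy evaluator for the plan above — an illustration of its semantics,
# not the real ALaS runtime (step behavior is assumed from the example).
def run(program, tools):
    env = {}
    for step in program["steps"]:
        kind = step["type"]
        if kind == "tool":
            env[step["output"]] = tools[step["tool"]](step.get("input", {}))
        elif kind == "filter":
            field, cond = next(iter(step["criteria"].items()))
            env[step["output"]] = [item for item in env[step["source"]]
                                   if item[field] >= cond["gte"]]
        elif kind == "select":
            env[step["output"]] = env[step["source"]][: step["count"]]
        elif kind == "return":
            # Resolve "$name" references against accumulated memory.
            return {k: env[v[1:]] if isinstance(v, str) and v.startswith("$") else v
                    for k, v in step["value"].items()}

program = {
    "id": "search-and-filter-docs",
    "type": "sequence",
    "steps": [
        {"id": "search", "type": "tool", "tool": "docSearch",
         "input": {"query": "how to configure OpenID"}, "output": "rawResults"},
        {"id": "filter", "type": "filter", "source": "rawResults",
         "criteria": {"score": {"gte": 0.8}}, "output": "highConfidence"},
        {"id": "select-top", "type": "select", "source": "highConfidence",
         "count": 1, "output": "topResult"},
        {"id": "output", "type": "return", "value": {"result": "$topResult"}},
    ],
}

# Stubbed docSearch plugin standing in for a registered tool.
tools = {"docSearch": lambda _inp: [{"title": "OpenID setup guide", "score": 0.91},
                                    {"title": "Unrelated page", "score": 0.42}]}

print(run(program, tools))  # {'result': [{'title': 'OpenID setup guide', 'score': 0.91}]}
```

Note how the plan itself carries the dataflow: each step names its inputs and outputs, so the evaluator never guesses.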

New: Handling Collections

Now with native maps and arrays, ALaS can handle more complex data workflows without external tool hacks. Example:

```json
{
  "id": "count-by-category",
  "type": "sequence",
  "steps": [
    { "id": "load", "type": "tool", "tool": "loadData", "output": "items" },
    {
      "id": "group",
      "type": "mapReduce",
      "source": "items",
      "map": { "category": "$it.category", "count": 1 },
      "reduce": "sum",
      "output": "counts"
    },
    { "id": "return", "type": "return", "value": { "counts": "$counts" } }
  ]
}
```
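The mapReduce step’s “sum” reduction can be approximated in a few lines of plain Python. The grouping semantics here are inferred from the example, not taken from the official spec:

```python
from collections import Counter

# Approximates the "group" step above: map each item to its category,
# then reduce by summing a per-item count of 1 (semantics assumed,
# not the official mapReduce definition).
def count_by_category(items):
    return dict(Counter(item["category"] for item in items))

# Stub data standing in for the loadData tool's output.
items = [
    {"category": "auth", "title": "OpenID"},
    {"category": "auth", "title": "OAuth2"},
    {"category": "infra", "title": "Deploy"},
]

print(count_by_category(items))  # {'auth': 2, 'infra': 1}
```

Before native collections, a step like this would have needed an external tool call; now it is expressible in the plan itself.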

Why Now?

  • LLVM compilation means near-native speed and broader runtime portability
  • Maps & arrays empower real-world data processing flows directly in ALaS
  • LLMs continue generating production-level logic — without Python-level fragility
  • It cleanly separates intent → plan → execution with validation in between

Use Cases

  • High-performance agent logic — complex logic with fast, compiled performance
  • Data pipelines — group, filter, reduce, and transform inside ALaS
  • Governance — full traceability from LLM-generated plans to machine execution
  • Autonomous reasoning — enabling nested data structures without workaround hacks

Trade-offs and Realities

It’s powerful — but not magic:

  • Debugging structured logs still needs tooling for humans (maps and arrays help, though)
  • Build overhead — LLVM compilation is worthwhile for speed, but it adds a build step
  • Expressiveness trade-offs — richer data structures mean more schema complexity
  • Ecosystem needs — interpreters, LLVM toolchain, and UI integration are works in progress

Still, this is far cleaner — and faster — than LLMs trying to output perfectly valid Python for every case.

Looking Forward

Infrastructure is YAML. CSS is mostly generated. Code is already written by LLMs.

So, next step: a language designed for machines — that outputs to machines, compiles to machines, and is readable only enough to validate.

ALaS isn’t a better Go, C, or Python.

It’s a different category.

It’s a language for entities that think differently — optimized for reasoning, validated structure, and now, performance.

Let’s stop pretending LLMs are human juniors. They don’t need prettier errors or nicer formatting — they need a medium they can traverse, reason over, and execute natively.

Learn More

Get the full spec, LLVM compiler backend, maps & arrays support, and more:

👉 github.com/dshills/alas

Want to help build tools, runtimes, or an ALaS playground? Let’s talk.