AI sugar for regular programmers — the silent majority


Samuel Karani


1) Introducing AI Sugar

What if you could run code like this in TypeScript?

const result1 = await ai.sort({
  array: ["green", "red", "blue", "yellow"],
  prompt: "rainbow color order",
});
// ["red", "yellow", "green", "blue"]

With my new and free open-source library, now you can!

What? Lodash for AI

AI Sugar is a collection of utility functions powered by AI. This initial version is built on top of the Vercel AI SDK. “Sugar” because each function is a syntactic-sugar wrapper around an AI API call.
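
To give some intuition for what a sugar function looks like, here is a simplified sketch of how something like ai.knows could be built on the SDK’s generateObject. This is an illustrative sketch, not the library’s exact source, and the prompt wording is an assumption:

import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// sketch of a "sugar" function's internals: one narrow question,
// one structured AI call, one plain TypeScript return value
async function knows({ prompt }: { prompt: string }): Promise<boolean> {
  const { object } = await generateObject({
    model: openai("gpt-4o-mini"),
    schema: z.object({ knows: z.boolean() }),
    prompt: `Can you reliably answer the following from your training data, without searching the web?\n\n${prompt}`,
  });
  return object.knows;
}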

Why? More imperative, less declarative

Letting the AI make critical (biiig) control-flow decisions, like deciding when to call your tools/functions, can be unreliable, inconsistent, or unpredictable.

It might be better to let the AI make smaller decisions more reliably, so that we can structure our programs the way we are used to. This makes programming with AI less declarative and more imperative, which may well be the more natural and preferred pattern.

This allows us to use familiar syntax like if/else when dealing with AI requests and responses.

const prompt = "What happened in San Francisco last week?";
if (await ai.knows({ prompt: prompt })) {
  // continue prompting AI
} else {
  // use web search api
}

What’s more, it allows you to break out of the declarative model of Vercel’s AI SDK into an imperative one, particularly when building tools.

2) Declarative vs Imperative Tools

Example 1

Setup:

const model = openai("gpt-4o");
const prompt = "What happened in San Francisco last week?";
function searchExa(query: string) { ... } // web search api
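
For completeness, searchExa could be implemented with the exa-js client along these lines (a sketch; the exact fields returned are a choice, not part of the article’s setup):

import Exa from "exa-js";
const exa = new Exa(process.env.EXA_API_KEY);

async function searchExa(query: string) {
  // fetch search results together with page text for later summarization
  const { results } = await exa.searchAndContents(query, { text: true });
  return results.map((r) => ({ title: r.title, url: r.url, text: r.text }));
}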

Vercel’s AI SDK wants you to code like this:

const { text, sources } = await generateText({
  model: model,
  prompt: prompt,
  tools: {
    // each tool comes with a management and "orchestration" cost
    webSearch: tool({
      // more prompting
      description: "Search the web for up-to-date information",
      parameters: z.object({
        // could generate irrelevant queries
        query: z.string().min(1).max(100).describe("The search query"),
      }),
      execute: async ({ query }) => {
        return searchExa(query); // could be called multiple times
      },
    }),
  },
  maxSteps: 3, // requires guessing/approximating
  toolChoice: "required", // could still be ignored
});
// 20.196s
// often returns { text: '', sources: [] } despite tool being called

Instead you can code like this:

if (await ai.knows({ prompt: prompt })) {
  const { text, sources } = await generateText({
    model: model,
    prompt: prompt,
  });
} else {
  const results = await searchExa(prompt);
  const text = await ai.summarize({ data: results });
}
// 8.899s
// { text: 'Last week in San Francisco, two major events took place. Firstly, ...', sources: [] }

Example 2

Setup:

const model = google("gemini-2.0-flash");
const prompt = "who is the current president of the united states?";

Vercel way:

const { text, sources } = await generateText({
  model: model,
  prompt: prompt,
  tools: {
    webSearch: tool({
      description: "Search the web for up-to-date information",
      parameters: z.object({
        query: z.string().min(1).max(100).describe("The search query"),
      }),
      execute: async ({ query }: { query: string }) => {
        console.log("query:", query);
        // some of the (irrelevant) queries from testing
        // query: current president of the United States 2023
        // query: current president of the United States
        // query: current president of the United States 2023
        const { answer, citations } = await exa.answer(query); // web search grounding api
        return { text: answer, sources: citations.length };
      },
    }),
  },
  maxSteps: 3,
  toolChoice: "required",
});
// 9.439s
// in my testing, despite tool being called, always returns empty text and no sources

Our way:

if (await ai.knows({ prompt: prompt })) {
  const { text, sources } = await generateText({
    model: model,
    prompt: prompt,
  });
  return { text, sources: sources.length };
} else {
  const { answer, citations } = await exa.answer(prompt); // web search grounding api
  return { text: answer, sources: citations.length };
}
// 2.455s
// { text: 'Donald J. Trump is the current president of the United States...', sources: 8 }

Evaluating our approach:

Firstly, it is more accurate and more reliable in our testing: tools may return nothing, or may not be called at all, so the model returns older/stale information as a result.

Secondly, with the tool-based approach you have to deal with maintaining the tool itself: its parameter schemas, description prompts, and execution logic.

Lastly, composing tools in this way comes with an orchestration cost: you need to make sure all your prompts and tools work together harmoniously, and that one prompt/tool doesn’t override or clash with another. It forces you to (implicitly) keep the entire set of possibilities in mind, instead of partitioning the problem and dealing with each part independently, as we are used to.

Pros of our approach:

  • More control: execution becomes much less of a black box
  • More predictable flow: the model is not “overloaded” or “overwhelmed” with many instructions or prompts
  • Less prompting, hence less prone to misunderstandings or hallucinations from the LLM
  • Tools don’t work with structured or streaming object outputs (generateObject/streamObject); our approach can handle such cases (see the sketch after this list)
  • Simpler and more specific prompts instead of generic and complex ones
  • In my testing, has always been faster than using maxSteps (2.5–5X faster)
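
To illustrate the structured-output point above, here is a sketch of the imperative pattern combined with generateObject, which tool calling doesn’t compose with directly (reusing model, prompt, ai.knows, and searchExa from the earlier examples):

import { generateObject } from "ai";
import { z } from "zod";

const schema = z.object({ headline: z.string(), summary: z.string() });

if (await ai.knows({ prompt: prompt })) {
  // structured output straight from the model
  const { object } = await generateObject({ model: model, schema: schema, prompt: prompt });
} else {
  // search first, then structure the search results
  const results = await searchExa(prompt);
  const { object } = await generateObject({
    model: model,
    schema: schema,
    prompt: `Fill the schema from these search results:\n${JSON.stringify(results)}`,
  });
}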

Cons of our approach:

  • Could potentially have more (verbose) code — but this has not been the case in my testing
  • You lose the “magic” and delegation of the declarative approach
  • Probably not the best approach for building “agents” or programs with generic goals or wide scopes

maxSteps is kinda weird

When it comes to building tools, Vercel’s maxSteps forces you into a somewhat esoteric declarative model without recourse. (Can you imagine what kind of loop maxSteps is driving under the hood?)
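
Conceptually, maxSteps appears to drive a loop roughly like the following inside generateText. This is a sketch of the idea, not the SDK’s actual implementation, and callModel is a hypothetical helper:

// rough sketch of the implied loop (not the SDK's real code)
let step = 0;
const messages = [{ role: "user", content: prompt }];
while (step < maxSteps) {
  const response = await callModel(messages, tools); // hypothetical model call
  if (response.toolCalls.length === 0) break; // model answered directly, no tool requested
  for (const call of response.toolCalls) {
    const result = await tools[call.name].execute(call.args);
    messages.push({ role: "tool", content: JSON.stringify(result) }); // feed result back
  }
  step++; // each model round trip counts as one "step"
}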

As mentioned, this library aims to alleviate or complement this approach by facilitating an imperative one.

For example, this snippet from the official docs doesn’t even work at the time of this writing, i.e. the tool is not being called.

const { text, sources } = await generateText({
  model: openai.responses('gpt-4o-mini'),
  prompt: 'What happened in San Francisco last week?',
  tools: {
    web_search_preview: openai.tools.webSearchPreview(),
    // doesn't work in my testing at the time of writing
  },
});

Additionally, most of the top comments on the Vercel video that introduced composing tools with maxSteps expressed some confusion, critique, or disapproval of the API.

3) Inspiration for this library

Composing tools the “Vercel way” can work great for some cases. I can see it working well for assistants and other general use cases. But not everybody wants to use AI to build chat apps and voice bots. (Doesn’t the world already have enough of those?)

There are many people with plenty of small, narrow use cases, often at the level of individual function calls, who would benefit from injecting some useful AI into different parts of their applications without needing to accept an entire paradigm shift just to get started.

Being one of those people, I believe this library could be a path there. I think plenty of people would simply appreciate sprinkling AI into different parts of their programs without being forced to adopt a new programming model wholesale.

My approach is to make the AI do as little as possible, as specifically as possible: the opposite of agents and tools, which strive to make the AI do as much as possible, often at the cost of the quality of the output produced.

Lastly, if this library is well received, then local LLMs become even more important for making this style of AI programming cost-efficient.
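
As a sketch of what that could look like, the community ollama-ai-provider lets the Vercel AI SDK (which this library builds on) target a locally served model; whether ai-sugar exposes model configuration this way is a separate question:

import { generateText } from "ai";
import { ollama } from "ollama-ai-provider"; // community provider for local Ollama models

const { text } = await generateText({
  model: ollama("llama3.1"), // runs locally, so no per-token API cost
  prompt: "rainbow color order",
});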

That’s all

Thanks for reading. I welcome your input, suggestions, feedback.

Github: https://github.com/samuelkarani/ai-sugar

NPM: https://www.npmjs.com/package/ai-sugar