Show HN: TinyFn – Your agent sucks at math
tinyfn.io

I built TinyFn because I kept watching AI agents confidently get basic things wrong — math, string counting, unit conversions, validations. The classic "how many R's in strawberry" problem, but across hundreds of utility tasks.
TinyFn is a collection of 500+ deterministic utility endpoints (math, string ops, hashing, validation, conversions, etc.) that AI agents can call via MCP instead of guessing. (Also works as a plain REST API). Think of it as offloading the stuff you wouldn't ask a human to do in their head either.
I'd love feedback on which tool categories are most useful, and what's missing. Happy to answer any questions.
https://tinyfn.io

This is the right instinct. LLMs are great at proposing actions, but they're fundamentally unreliable at enforcing deterministic invariants. Math is the obvious example, but schema validation falls into the same bucket. If an agent outputs structured JSON, the question "does this conform to the declared schema?" should have exactly one answer. Same schema + same payload → same result, every time, regardless of runtime, language, or retry.

Once you treat that layer as deterministic infrastructure instead of model behavior, a few things get easier:

• retries stop producing inconsistent side effects
• downstream systems can trust that validation actually ran
• you can audit what passed structural checks independently of the model

Semantic correctness is still a separate problem (models or domain rules are needed there), but offloading the structural layer removes a lot of accidental complexity.

One example of what that deterministic validation layer looks like as a standalone API: https://docs.rapidtools.dev/openapi.yaml

LLMs hallucinate math, fumble conversions, and make up validations. So let's obscure (partially) the inadequacies of LLMs? This way, we can skip past the obvious and move on to making really big mistakes using LLMs.

Humans are also bad at math! So we created calculators. LLMs should have the same access to tools to give them the best opportunity to succeed.

LLMs have access to the same tools --- they run on a computer. The problem here is the basic implementation of LLMs. It is non-deterministic (i.e. probabilistic), which makes it inherently inadequate and unreliable for *a lot* of what people have come to expect from a computer. You can try to obscure the problem, but you can't totally eliminate it without redesigning LLMs. At this time, the only real cure is to verify everything --- which nullifies a lot of the incentive to use LLMs in the first place.

> LLMs have access to the same tools --- they run on a computer.

That doesn't give them access to anything. Tool access is provided either by the harness that runs the model or by downstream software, if it is provided at all, either to specific tools or to common standard interfaces like MCP that allow the user to provide tool definitions for tools external to the harness. Otherwise LLMs have no tools at all.

> The problem here is the basic implementation of LLMs. It is non-deterministic (i.e. probabilistic) which makes it inherently inadequate and unreliable for a lot of what people have come to expect from a computer.
LLMs, run with the usual software, are deterministic [ignoring hardware errors and cosmic-ray bit flips, which, if considered, make all software non-deterministic], having only pseudorandomness if non-zero temperature is used, but hard to predict. And because implementations can allow interference from separate queries processed in the same batch, and the end user doesn't know what other queries are in that batch, typical hosted models are non-deterministic when considered from the perspective of the known input being only what is sent by one user. But your problem is probably actually that the results of untested combinations of configuration and input are not analytically predictable because of complexity, not that they are non-deterministic.

LLMs absolutely do not have access to the same tools unless they're explicitly given access to them. Running on a computer means nothing.

It sounds like you don't like LLMs! In that case, you may be more interested in our REST API. All the same functions, but designed for edge computing, where dependency bloat is a real issue: https://tinyfn.io/edge

> It sounds like you don't like LLMs!

I tend to prefer predictability/consistency and reliability. I fail to understand why anyone would prefer otherwise.

same --- this is exactly why I built this

I don't really get it. While these functions sound useful, the logic seems simple enough that you could just prompt an LLM with the documentation and get a working codebase without actually subscribing to the hosted service. Where is the commercial value of it?
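A minimal sketch of the "deterministic structural layer" idea discussed upthread: same schema plus same payload yields the same verdict every time, with no model in the loop. The toy schema dialect and the example payloads below are invented for illustration --- a real service would use a full JSON Schema implementation, not this hand-rolled subset.

```python
def conforms(schema: dict, payload: object) -> bool:
    """Return the single correct answer for a (schema, payload) pair."""
    t = schema.get("type")
    if t == "object":
        if not isinstance(payload, dict):
            return False
        # Every required key must be present...
        if not all(k in payload for k in schema.get("required", [])):
            return False
        # ...and every declared property that is present must conform.
        props = schema.get("properties", {})
        return all(conforms(props[k], payload[k]) for k in props if k in payload)
    if t == "number":
        return (isinstance(payload, (int, float))
                and not isinstance(payload, bool)
                and payload >= schema.get("minimum", float("-inf")))
    if t == "string":
        enum = schema.get("enum")
        return isinstance(payload, str) and (enum is None or payload in enum)
    return False  # unknown type: reject rather than guess

# Hypothetical schema, made up for this example.
schema = {
    "type": "object",
    "properties": {
        "amount": {"type": "number", "minimum": 0},
        "currency": {"type": "string", "enum": ["USD", "EUR"]},
    },
    "required": ["amount", "currency"],
}

print(conforms(schema, {"amount": 12.5, "currency": "USD"}))  # True
print(conforms(schema, {"amount": -1, "currency": "USD"}))    # False
```

Because the check is a pure function of its inputs, retries are idempotent and the verdict can be logged and audited independently of whatever model produced the payload.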