Intents: The Endgame for Computing Interfaces?
http200.substack.com

Since coding with Cursor, I’ve spent a lot of time thinking about how much computing interfaces have changed in such a short period of time.
Just as we saw a paradigm shift from imperative to declarative interfaces, LLMs have opened the door to new kinds of interfaces.
Intent-driven interfaces are already everywhere - from AI assistants processing raw data, to Arazzo for defining how APIs map to human-scale workflows, to agents capable of handling complex tasks.
While the doomer view is “robots will take all our jobs”, my optimistic take is that we finally have a human-centric way of interacting with machines, and it’s going to supercharge our abilities.
If there are two partners and one is a million times faster than the other, are they both necessary?
Who is going to pay a thousand times more for a thousand times less?
If the faster one creates naive, hard-to-discover bugs, is it really faster? I don't think we understand the long-term consequences (or maintenance burden) of LLM-generated code. So far the anecdotal results haven't been great.
I've become increasingly frustrated with having to work with other people's AI-"assisted" code. I can tell when I depart from reading sensible human-written code and enter the land of Copilot, where all bets are off. Just yesterday I discovered that some environment variables someone had configured on a service weren't actually doing anything, because the service didn't support being configured via environment variables, let alone those particular ones. It's stuff like that which really gets me about all this: you can no longer assume a coherent theory of mind behind what you're reading. You can no longer trust that because something looks specific and intentional (like environment variable names), it actually came from reality and not a confabulation. It's breaking the social contract that makes collaboration work.
I don’t support stupidly applied AI-generated code either, but this isn’t a new thing at all. Before, it was code pasted from Stack Overflow and hammered at until it sort of worked.
Kitbashed SO code at least had some kind of grounding to it. The AI stuff is fully unmoored.
We are having two different conversations. My question was of course based on the invention of AIs that are actually competent.
So largely fictional, then.
LLMs seem good at replacing off-shore and very junior devs. That’s about it so far. We’ll see if they can get over this current stagnation.
Do they?
They just seem to replace stack overflow.
Stack Overflow rarely surfaced a solution to your particular problem, but rather solutions to tangential or otherwise very similar problems. You still had to reason about the poster's problem context and try to see if it really matched your own. You still had to have some understanding.
LLMs present a solution as if it is 100% in the context of your problem. You arrive at a solution without being forced to think at all about whether the solution applies to your problem.
Without being forced to, many will not. I teach programming and I can tell you it is so so obvious when students are just blasting shit from ChatGPT into their submissions without thought.
Stack Overflow for better or worse would let you know when you're being an idiot and would shoot your foot off. AI will gladly load the gun for you.
Gave me a good laugh, it's like when they were envisioning flying cars 50 years ago.
https://www.paradigm.xyz/2023/06/intents
Check this out
If I understand correctly, this means mapping a chat to a choice of clear API calls. I buy it.

The only issue with it that I can think of is the lack of reporting without asking. When I log in to a site, I already want to see the most applicable information without having to ask for it. If I exclusively have to start asking a chatbot for basic info, I may miss out on a lot of what I need to know.
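The mapping described above can be sketched roughly like this: a free-text message gets resolved to one of a fixed set of well-defined API calls. All names and endpoints here are hypothetical illustrations (with naive keyword matching standing in for an LLM), not any real intent framework.

```typescript
// A structured call the backend actually understands.
type ApiCall =
  | { endpoint: "GET /balance"; params: Record<string, never> }
  | { endpoint: "POST /transfer"; params: { to: string; amount: number } };

// Stand-in for an LLM: classify the user's intent with naive keyword rules.
function resolveIntent(message: string): ApiCall | null {
  const m = message.toLowerCase();
  if (m.includes("balance")) {
    return { endpoint: "GET /balance", params: {} };
  }
  const transfer = m.match(/send \$?(\d+) to (\w+)/);
  if (transfer) {
    return {
      endpoint: "POST /transfer",
      params: { to: transfer[2], amount: Number(transfer[1]) },
    };
  }
  return null; // unrecognized intent: fall back to asking the user
}

console.log(resolveIntent("what's my balance?"));
console.log(resolveIntent("send $50 to alice"));
```

The point is that the chat layer only *chooses* among calls the system already exposes; it never invents new capabilities, which is what keeps the interface trustworthy.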
Is this basically how to do GUIs with functional programming, and someone saw the light and now it’s the future?
People are always getting hung up on the "how to make a GUI if variables are immutable??".
It takes a bit of getting used to, but once over the hump, it seems like GUIs are easier.

See TypeScript, Fabulous, F#, Elm, etc...
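The pattern being alluded to is the Elm-style model/update/view loop: state is immutable, and every event produces a fresh model rather than mutating the old one. Here is a minimal, illustrative sketch (a string stands in for real widgets):

```typescript
type Model = { count: number };
type Msg = { kind: "increment" } | { kind: "decrement" };

const init: Model = { count: 0 };

// update returns a brand-new model instead of mutating the old one
function update(model: Model, msg: Msg): Model {
  switch (msg.kind) {
    case "increment": return { count: model.count + 1 };
    case "decrement": return { count: model.count - 1 };
  }
}

// view is a pure function of the model
function view(model: Model): string {
  return `Count: ${model.count}`;
}

// The runtime folds incoming messages over update, re-rendering each time.
const msgs: Msg[] = [
  { kind: "increment" },
  { kind: "increment" },
  { kind: "decrement" },
];
const final = msgs.reduce(update, init);
console.log(view(final)); // → "Count: 1"
```

Because `update` is pure, the whole UI history is just a fold over messages, which is what makes features like time-travel debugging cheap in these frameworks.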
Oh we’re playing with fire alright
Another way of saying we're experimenting.
>The gradual shift from imperative to declarative interfaces
Prolog, SQL, and ChatGPT come to mind.