LLM tech comes to Wolfram Language
writings.stephenwolfram.com

Symbolic AI could never become great because it was missing the connection to the intricacies of the real world that you can only get from data. I think a symbiosis of symbolic techniques with LLMs, or more generally multimodal autoregressive foundation models, will lead to our first legitimately "AGI-ish" agents. The LLM takes the role of a little gremlin inside the machine that provides the magic sauce: that tiny bit of general intelligence or common sense necessary to connect arbitrary interfaces.
I suspect symbolic AI might become a part of the larger LLM-based AI. IMO symbolic AI is really a representation of "hard truths" where there is no room for fuzziness, like physical laws. But the real world is fuzzy, so the AI needs to develop discretion about where to apply symbolic AI and how much.
One example: an AI can look at a car's video feed and use it to determine the car's speed, but to recognize the mile/road markers it may have to use traditional CNN-type modules, kind of like what we do.
Overall I feel this has the potential of ballooning into something really interesting; for instance, theoretical physics research could be largely automated using this type of combo (in a not-too-distant future).
Yes, I agree. Symbolic AI is for hardness, which we can use for the skeleton of the agent. LLMs are the fuzziness that surrounds and interfaces the skeleton. Everyone who writes some simple code that assembles a prompt is already doing that.
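A minimal sketch of this "symbolic skeleton, LLM fuzziness" split: the control flow and the template are deterministic code (the hard part), and only the natural-language step would go to a model. `call_llm` here is a hypothetical stand-in for any chat-completion API, not a real library call.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an OpenAI or local-model API)."""
    raise NotImplementedError

def assemble_prompt(task: str, facts: list[str]) -> str:
    # Symbolic part: a deterministic template over verified inputs.
    fact_block = "\n".join(f"- {f}" for f in facts)
    return (
        "Use ONLY the facts below to answer.\n"
        f"Facts:\n{fact_block}\n\n"
        f"Task: {task}\n"
    )

prompt = assemble_prompt(
    task="Summarize the melting point of iron.",
    facts=["Iron melts at 1538 C."],
)
```

The prompt assembly is the "skeleton"; swapping in a real model behind `call_llm` adds the fuzz.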
This is, I think, quite insightful. Looking at the current AI hype cycle, I haven't actually been that impressed with GPT and the image generation models. They just get things too wrong, too often, and too convincingly, to the point that they become dangerous/harmful. There's just no way to evaluate quality.
I strongly believe the first person that figures out how to interface these language models with some kind of knowledge model or source of truth will win a lot of the pot for AI technology. Human feedback clearly helped GPT get to where it is now, but it's clearly optimized for answers that appear good rather than ones that are good and based on known facts.
It's already been done with LangChain / Plugins / function calling by LLMs.
I don’t think you’ve understood the difference between what you can do with Wolfram vs. Python.
Have a read of https://www.wolfram.com/language/elementary-introduction/3rd... and play with some examples maybe?
The tl;dr is “yes, but…”, and the “but” is that, no, generating/invoking Python functions from an LLM is nowhere even remotely near the capabilities exposed by their online systems.
Yeah yeah, “but there’s a Python library for that…” — no, really. Have a play with it. There isn’t, for lots of the stuff it can do.
You can ground models better with retrieval-augmented generation, Toolformer-style models can actually call functions and whatnot, and it is also entirely possible to orchestrate things with "agents".
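A toy sketch of the retrieval-augmented-generation idea: retrieve the documents most relevant to a query, then ground the model's prompt in them. Real systems rank by embedding similarity; plain word overlap is used here only to keep the example self-contained.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Score each document by how many query words it shares (a crude
    # stand-in for embedding-based similarity search).
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    # The retrieved passages become the context the model must cite.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The Wolfram Language has built-in knowledge of chemical elements.",
    "Python is a general-purpose programming language.",
    "LLMs can hallucinate facts when not grounded.",
]
```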
By far, symbiosis with other LLMs, not traditional programs, will lead to more powerful AGIs.
If you connect a bunch of LLMs with a hard-coded structure (like sensory module -> planning module -> action agent, or something like that), then for me that is like assembling a symbolic AI system out of non-symbolic components.
I think the magic of LLMs, if any, is the fact that they can make these kinds of "symbolic AI" systems work.
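The hard-coded structure being described can be sketched as ordinary function composition: the wiring is symbolic, and each stage would be an LLM call in a real system. The `*_llm` functions below are hypothetical stand-ins, not real APIs.

```python
def sensory_llm(raw: str) -> str:
    # In a real system: "describe this input" sent to a model.
    return f"observation({raw})"

def planning_llm(observation: str) -> str:
    # In a real system: "given this observation, what next?"
    return f"plan_for({observation})"

def action_llm(plan: str) -> str:
    # In a real system: emit a tool call or final answer.
    return f"executed({plan})"

def agent_step(raw_input: str) -> str:
    # The pipeline itself is fixed, symbolic structure; only the
    # stages are fuzzy.
    return action_llm(planning_llm(sensory_llm(raw_input)))
```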
AutoGPT / AgentGPT / TeenageGPT will inevitably start borrowing ideas from Marvin Minsky and other symbolic AI / cognitive science researchers.
The space of different configurations is much larger than simple connections; it may in fact be larger than our imagination. It's hard to comprehend all the inputs to a single GPT-4 instance, and the space of LLM multi-agents is essentially the space of graphs (in the graph-theoretical sense) of GPT-4 instances, practically in four dimensions (if one needs to reason about the signals strictly).
This is quite impressive, especially the auto-correcting and error reporting. I always thought of Mathematica as cool, but not worth the price. This LLM integration into the notebooks really increases the value and makes me consider giving it a try.
I've heard, but never confirmed, that you get Mathematica for free when you purchase a Raspberry Pi.
You do. I used it like that for many years, until I could afford the PC version. The Raspberry Pi version is version-capped, though, to whatever they have released up to that point for that platform.
Are there restrictions against running it on a more powerful arm64 server?
This is an interesting question (which I don't know the answer to) because the Raspberry Pi hardware does not itself come with a license for Mathematica or the software in question. Rather, it's part of the Raspbian distribution. I think there's a decent chance that you agree to an EULA after installing the software that would prohibit running it on other hardware, but it would be interesting to hear something definite.
At one time, it would test for the existence of Pi-specific devices, but it’s been a long while since I ran it there.
I didn't know that, but apparently yes: https://www.wolfram.com/raspberry-pi/
Yes, but if you download Raspbian to run it on VirtualBox or similar, it is not there. I guess they have different licenses.
Nice, now we just need to integrate an AR headset and we can have a proper Tony Stark-style dictated 'research laboratory'.
Edit: I spoke too soon. It actually has a module to publish an AR model object of the design you created, so this could potentially be directly usable with AR glasses.
Relies on GPT-4. Maybe there could be a multistage way to get similar automated API usage with smaller open source models?
I wonder if it would be possible to train a 7B or 13B model to generate code in just one specific programming language. Train it with example pairs of problem inputs and program outputs. Then train another small model to translate natural language in a specific domain into an input for the coder model. And maybe a third to translate that into a different real programming language.
The point of this being that you can use smaller GPU instances and dedicate all of the limited capacity of each model to a narrower domain that may be more tractable for it.
I tried it: you get to choose between GPT-3.5 and GPT-4.
Wow, I just learned about this ARObject thing. You can long-press a QR code on an iPhone and see a 3D object in AR. Or without a QR code: if you're on an iPhone, try following this link [0] and then tapping the body of the page.
[0] https://www.wolframcloud.com/obj/yanz/Base/Temp/AR/201a538b-...
All I actually want is an LLM that can adhere to a set of rules (a language spec, library/framework code) and then help me make stuff, rather than hallucinating arbitrary version numbers and leaving me in an infinite loop of trying every iteration of code myself in an IDE.
I used to subscribe to Wolfram Cloud/Desktop. I liked it, but never really loved the language itself.
I tried a Chat Notebook on Wolfram Cloud this morning, and asked it to write a script to fetch data from DBPedia and present it. It generated Wolfram Language code, so that was very cool.
It's a bit unclear from this post: is this a locally running LLM or one in the cloud, and if so, on whose servers? Does it use an existing service like OpenAI or a completely new model specific to Wolfram?
It looks like it's calling OpenAI's GPT-4 with the function-calling API.
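For context, this is roughly the shape of an OpenAI function-calling request as of the 2023 Chat Completions API: the caller describes tools as JSON-Schema "functions" and the model can reply with a structured call instead of free text. This only builds the payload; the tool name `evaluate_wolfram` is hypothetical, and actually sending it requires the `openai` client and an API key.

```python
# Request payload shape for OpenAI chat completions with function calling.
request = {
    "model": "gpt-4",
    "messages": [
        {"role": "user", "content": "Plot sin(x) from 0 to 2 pi"}
    ],
    "functions": [
        {
            "name": "evaluate_wolfram",  # hypothetical tool name
            "description": "Evaluate Wolfram Language code",
            "parameters": {
                "type": "object",
                "properties": {
                    "code": {"type": "string"}
                },
                "required": ["code"],
            },
        }
    ],
}
```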
"Perfect third"?
A perfect third is a music term where two periodic signals (sounds) have periods in the ratio 4:5. Including this shows the LLM can integrate knowledge from different domains: music, math, flags of the world, and Mathematica.
There's no such thing as a perfect third in music theory, only perfect 4ths and 5ths, which explains the 4:5 ratio. There are major/minor (imperfect) thirds.
Interesting test of LLMs, this one failed.
While "perfect third" may not exist as a name for an interval, it does make sense to apply the adjective "perfect" to the interval "third" in order to distinguish a just-intonation third (period ratio 4/5) from an equal-tempered third (period ratio (1/2)^(4/12)).
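The two thirds being distinguished, written as frequency ratios (the reciprocals of the period ratios in the comment above):

```python
just_third = 5 / 4              # just-intonation major third, exactly 1.25
tempered_third = 2 ** (4 / 12)  # equal-tempered major third, ~1.2599

# They differ by roughly 0.8%, which is why the distinction is audible
# and worth naming.
diff_percent = (tempered_third / just_third - 1) * 100
```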
You can construct a system under which the phrase makes sense but musicians and musicologists will look at you weird if you say it. The LLM just forges bravely on, however.
Example usage here:
https://archive.org/details/onsensationston01helmgoog/page/3...
(by Helmholtz, of the Helmholtz equation)
I'm very impressed that you were able to find this.
I don't think of it as a failure. The LLM figured out a reasonable interpretation for a term it did not know. It might be better to come back with "do you mean a pure Ptolemaic major third?" instead of assuming, but the way it does it now might be better for programmatic requests.
4:5 is a major third. "Perfect third" isn't really a thing in western music theory. Just found it odd the LLM didn't pick up on that.