“AI” is a misnomer. There's no ability to reason. It's just pattern matching

12 points by jonthepirate 3 years ago · 18 comments


I think GPT is really neat. However, it cannot solve even the most basic reasoning problems I tried.

It feels like it understands what I'm asking for and it provides good answers, but so does Google.

I think calling this "Artificial Intelligence" creates a misunderstanding of what's going on because it's pattern matching.

Sure, the input and output is way better than Google, but if it can't reason, where's the intelligence? The whole thing seems like a hype train that I'm evidently not on.
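The "pattern matching" framing can be made concrete with a toy next-word predictor. This is a minimal sketch, not how GPT actually works (real LLMs condition on long contexts with learned representations), but it illustrates what "picking the most likely next word" means at its simplest:

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a tiny corpus,
# then "predict" by picking the most frequent follower.
# (Hypothetical corpus; purely illustrative.)

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count occurrences of each (word, next_word) pair.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

Whether scaling this idea up by many orders of magnitude yields something deserving the word "intelligence" is, of course, exactly what the thread below argues about.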

PaulHoule 3 years ago

Certain people always say that "art" isn't "art", but it's a consistent way to embarrass yourself, whether the object is Duchamp's "Readymades" or video games (in the case of a famous movie reviewer).

In particular, any kind of "A.I." is always considered by some to not be "A.I.": a chess-playing program is just searching moves, an expert system is just applying rules, software that lays out microchips is just solving an optimization problem, etc.

This is moving the goalposts, and it is a form of ignorance that leaves the field wide open to the likes of Eliezer Yudkowsky. In particular, like the aphorism that "an LLM can't create anything new", it distracts people from the serious task of figuring out what specific things these systems can and cannot do.

  • smoldesu 3 years ago

    > the serious task of figuring out what specific things these things can and cannot do.

    Is it rude or reductive to suggest that we already know what they can and cannot do? It's just text. Text can be interpreted in meaningful ways, and it's cool that this text can react to user input, but it's still... starkly limited. I'm honestly not sure what serious things are left to figure out with AI. It's like the Library of Babel in a way: it contains both everything and nothing. But it's also just entropy limited to text, which in and of itself is not that powerful.

    It's heady stuff and I don't know if there are any right answers. I feel like its capabilities are not as strong as others profess, though.

    • PaulHoule 3 years ago

      No, it is an active research area to discover what they can do. A typical paper is something like this:

      https://arxiv.org/abs/2305.18618

      My opinion is as follows:

      Frankly, the thing they are best at is bullshitting, and some people are particularly good at falling for it; many of those people think they are talented "prompt engineers". I think there is something about picking the "most likely" next word that stops people from perceiving incongruities, which gives chatbots a hypnotic power.

      There are certain kinds of problems chatbots seem to do well on, but research usually shows they struggle to get anything right more than 80% of the time. For some problems ("is this about astrophysics or psychology?") you can put the same LLM into a supervised training situation and get the right answer 99% of the time.
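The supervised setup described above can be sketched with a toy bag-of-words classifier. This is a hypothetical illustration with made-up training data, not the method from any cited paper; a real setup would fine-tune the LLM or train a classifier on its embeddings, but the structural point is the same: the model only has to pick one of a few labels rather than generate free-form text.

```python
from collections import Counter

# Toy supervised text classifier: score each label by how often its
# training data contained the words of the input text.
# (Hypothetical training examples, purely illustrative.)

train = [
    ("black hole merger emits gravitational waves", "astrophysics"),
    ("the telescope observed a distant supernova", "astrophysics"),
    ("cognitive therapy reduces anxiety symptoms", "psychology"),
    ("the study measured memory and attention", "psychology"),
]

# Count word frequencies per label.
counts = {}
for text, label in train:
    counts.setdefault(label, Counter()).update(text.split())

def classify(text):
    """Return the label whose training words best overlap the input."""
    words = text.split()
    return max(counts, key=lambda lbl: sum(counts[lbl][w] for w in words))

print(classify("a supernova near a black hole"))  # -> astrophysics
```

Constraining the output space this way is one reason narrow supervised tasks report much higher accuracy than open-ended chat.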

      Chatbots are a big pile of biases and frequently arrive at correct answers by following shortcuts. The real-life criminal justice system is biased in various ways, but a chatbot would reach a verdict based on things it has read, like "Tyrone is a thug." In a conventional situation where the usual assumptions apply it looks like a genius, but if you go off the usual rails you find that it stays on them.

      I've been interested in NLP, both in side projects and at work, since 2004 or so, and for a long time I believed the Chomskyan idea that the "language instinct" is a peripheral that bolts onto an animal, and that most of your human intelligence is really intelligence you share with mammals and birds at the very least. Computers have struggled to process language because they lack the groundedness in the world that animals have. I'd contrast that with this discredited trend in philosophy

      https://en.wikipedia.org/wiki/Structuralism

      which held that language has intrinsic meaning and is a model for understanding other things in the social sphere. Chatbots really do accomplish a lot more with text alone than I and many other people thought was possible, and will actually revive structuralism. Yet boy do they bullshit, and it is scary to see how giddy people get when they are seduced by them, and to think about how they'll soon be pressed into service running "pig butchering" and other romance scams.

      Personally I feel really jealous because they seem to elicit the neurotypical privilege I never had.

proc0 3 years ago

I think the distinction you're drawing, that it's not real intelligence, is exactly what "artificial" captures: artificial lemon flavoring is not the same as lemon juice.

Also, and maybe more importantly, it can be argued that reasoning is a form of pattern matching. All that brains do is pattern match, they just do it in a complicated way that we have no clue about yet, and therefore all the side effects and intricacies of the brain's architecture are not seen in the relatively more simple algorithms that we have now with artificial neural networks.

That said, maybe a better term could be "Algorithmic Cognitive Tools", or something similar, to point out that it just extends our own intelligence. However, I think most agree that eventually we will have proper AI, and machines will be doing some form of reasoning, whether human-like or not. I just don't think that "cognitive architectures" (another misnomer abusing biological terms) are there yet.

  • effed3 3 years ago

    "All that brains do is pattern match, they just do it in a complicated way that we have no clue about yet"

    Probably the brain does something far more complex than pattern matching. The question is: how much of the brain (or of a truly intelligent system) can we emulate using only pattern matching? I don't know; maybe not enough.

    • mindcrime 3 years ago

      Probably the brain does something far more complex than pattern matching.

      I could agree with this statement if it began "Possibly the brain does..."

      I'm curious... aside from human chauvinism, do we have any particular reason to believe that it's "probable" (as opposed to merely "possible") that brains require something beyond pattern matching at the lowest level?

      • effed3 3 years ago

        Surely (or quite probably...) you can rephrase it with "possibly" with no change in the overall sense, at least to me. Possible or probable is a different point of view, but the same unknown.

wsgeorge 3 years ago

"It's just pattern matching"

Is there a name for when someone says "X is not really Y, but it is rather <mentions lower-level mechanism employed by X to achieve Y>"?

Because that's what I'm seeing in this post, and I don't think it makes your argument strong.

  • mindcrime 3 years ago

    In this specific context, this phenomenon is so common that it literally has the name "The AI Effect".[1]

    Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"

    [1]: https://en.wikipedia.org/wiki/AI_effect

  • ttctciyf 3 years ago

    "Nothing buttery" or, more recently, https://en.wikipedia.org/wiki/Greedy_reductionism

    The gulf between the notion of machine intelligence fostered by unthinking use of "AI" terminology and the reality of the rudderless productions of generative AI, which provide only a semblance of intelligence (when the stars are aligned), is real enough, though, IMO.

effed3 3 years ago

I agree. Probably a big part of the "intelligence" here comes from the reader, who sees some sense or quality in the textual output of a probabilistic model. The intelligence is hidden in the training data, I suppose, and the output is probably little more than a mirage.

Building these AI systems, now and in past years, has proven useful in narrow and specialized fields: chess play, chemical classification, planning, scientific data analysis, etc. But when the field is human language per se, and its significance, I feel something big and deep is still missing.

As for "intelligence": so far we have no good idea of what it really is, how it works, or how it relates to mind and brain.

mindcrime 3 years ago

Please stop. These discussions never lead anywhere, and all they reveal in the end is that $SOME_AI_PROJECT doesn't meet $YOUR_IDIOSYNCRATIC_DEFINITION_OF_AI.

For the n'thousandth time: AI does not (necessarily) mean "perfect fidelity with human intelligence". It's many things including a field of research, a body of knowledge, a suite of technologies that display behavior which could be classified as "intelligent" in some sense, and an aspiration, among others. Current "AI" systems absolutely fall under this rubric, even if they aren't functionally equivalent to human intelligence.

Never mind that we don't know for sure that human intelligence doesn't ultimately reduce to "just pattern matching" at some level. And never mind the "AI Effect"[1] where the public at large quits considering anything "AI" once it works. Usually by saying things like "it's just computation" or "it's just pattern matching." :-)

[1]: https://en.wikipedia.org/wiki/AI_effect

  • effed3 3 years ago

    Maybe "Artificial Ability" (AA) would be a better term than AI, just as "Cybernetics" is better than "Artificial Life".

    It seems that talking about intelligence is like talking about the soul or life: too many differing opinions. And using precise terminology is important in scientific fields.

jstx1 3 years ago

This is pointless yelling at clouds.

For most things there's a generally accepted mapping <thing> : <term for the thing> which evolves naturally over time as part of language and culture.

That's what AI is - a name for a thing, not a promise or a contractual obligation to perfectly match the preexisting dictionary meaning of the words that compose it.

  • sigstoat 3 years ago

    this is the same crowd that is angry because openai has “open” in their name, but isn’t “open”.

theonemind 3 years ago

AI could mean a lot of things. For the sake of clarity, I generally won't refer to GPT as AI. I usually think of the way Richard Stallman wrote about "intellectual property" meaning copyright, patents, and trademarks: https://www.gnu.org/philosophy/not-ipr.en.html

Using grossly generic terms without distinctions seems to benefit those who want to misrepresent things: they apply concepts that hold only for their self-interested corner of the category to the whole category, for hype or manipulation.

Currently, people want GPT to take on the luster of a mythical AGI by using the categorical term "AI", so I just call it GPT/LLM. I'll consider "AI" a field of research, not an adjective or noun suitable for products based on research from the field.

johntiger1 3 years ago

Not all AI is pattern matching - look up "symbolic reasoning"
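A minimal sketch of what symbolic reasoning looks like, as a contrast with statistical pattern matching: a forward-chaining rule engine that derives new facts by applying explicit if-then rules until nothing changes. (The facts and rules here are a hypothetical toy knowledge base, purely for illustration.)

```python
# Forward chaining: start from known facts, repeatedly fire any rule
# whose premises are all satisfied, adding its conclusion as a new fact.

facts = {"socrates_is_human"}
rules = [
    # (set of premises, conclusion)
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire the rule if all premises hold and the conclusion is new.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

Every derived fact here follows deductively from the rules, which is what distinguishes this style of system from the statistical next-token prediction discussed elsewhere in the thread.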

throwawayadvsec 3 years ago

You don't understand what intelligence is. Intelligence does not mean human intelligence, and intelligence does not mean genius.
