Language may rely less on complex grammar than previously thought: study

scitechdaily.com

25 points by mikhael 9 days ago · 21 comments

burnt-resistor 8 days ago

Verbal language isn't strictly necessary, since it's often possible to express simple concepts with nonverbal cues, so gesturing might be considered a less precise subset of sign language. It probably even predated vocal language in H. sapiens sapiens, and it exists in other hominids.

  • 0928374082 8 days ago

    nonverbal cues go all the way down to canines, etc.

    • burnt-resistor 7 days ago

      C. familiaris probably co-evolved more than C. lupus. Primitive stuff.

      Give it another 15ky, and "dogs" will be having conversations with "humans". I also think one or more bird species will evolve human conversation ability.

readthenotes1 8 days ago

There's something funny in linguists seeing complex grammar behind language instead of custom.

I would be interested in a study that compares breaches of adjective word order in English (e.g., "red big ball") vs. noun-verb number disagreement (e.g., "the dogs barks")...

CyberDildonics 8 days ago

What did I previously think?

giardini 9 days ago

Paywall.

In any case, the short answer is "No!". There is a LOT written about language and I find it difficult to believe that most ANY idea presented is really new.

For example, have these guys run their ideas past Schank's "conceptual dependency" theory?

  • alew1 8 days ago

    The article takes the fact that we appear to treat non-constituents (e.g. “in the middle of the”) as “units” to mean that language is more like “snapping legos together” than “building trees.”

    But linguists have proposed the possibility that we store “fragments” to facilitate reuse—essentially trees with holes, or equivalently, functions that take in tree arguments and produce tree results. “In the middle of the” could take in a noun-shaped tree as an argument and produce a prepositional phrase-shaped tree as a result, for instance. Furthermore, this accounts for the way we store idioms that are not just contiguous “Lego block” sequences of words (like “a ____ and a half” or “the more ___, the more ____”). See e.g. work on “fragment grammars.”
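
    To make the “trees with holes” idea concrete, here’s a toy sketch (entirely my own, and only a sketch; the Tree class and the category labels are illustrative assumptions, not notation from the fragment-grammars literature):

      from dataclasses import dataclass

      # Toy constituency tree: a category label plus children, where a
      # child is either another Tree or a literal word.
      @dataclass
      class Tree:
          label: str
          children: list

      # A stored fragment is a tree with a hole: a function from a tree
      # argument to a tree result. "in the middle of the ___" takes a
      # noun-shaped tree and returns a PP-shaped tree.
      def in_the_middle_of_the(noun: Tree) -> Tree:
          inner = Tree("NP", ["the", noun])
          return Tree("PP", ["in", Tree("NP", ["the", Tree("N", ["middle"]),
                                               Tree("PP", ["of", inner])])])

      # Reusing the fragment is one call, so a frequent non-constituent
      # string can act like a single "unit" while still yielding a tree.
      pp = in_the_middle_of_the(Tree("N", ["night"]))
      print(pp)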

    Can’t access the actual Nature Human Behaviour article, so perhaps it discusses the connections.

    • lupire 8 days ago

      There's no reason to assume that a human word begins and ends with a space. Compound words exist. The existence of both "aforementioned" and "previously spoken of" isn't based on a deep neurological construct of compound words.

      • mcswell 8 days ago

        Sorry, I'm not following. What do spaces have to do with this? Grammar is dependent on concepts like lexemes (sort of like words), but there aren't any spaces between lexemes in spoken language.

        • Izkata 7 days ago

          Probably slight confusion over the description, which is what I was thinking at first with the "in the middle of" example: that English has compound nouns, so the existence of spaces doesn't necessarily work as a delimiter.

          What it seems to be getting at instead is that language works more like madlibs than previously thought, just on a smaller scale. Which to me isn't that surprising: it seems extremely close to "set phrases", and is explicitly how we learn language in a structured way when not immersed in it.

          I also suspect most people don't even know about tree-style sentence mapping. I've mentioned it a handful of times at work when languages come up and even after describing it no one knew what I was talking about. I only remember it being covered in one class in middle school.

          • mcswell 4 days ago

            I had to look up 'madlibs', but I see now what you mean. We do indeed have lots of canned phrases, some of which are idioms. They may indeed come with blanks at the end (your example) or even in the middle ("pull __'s leg"). The issue I have with the original paper is that these canned phrases need to fit in with the language's overall grammar, e.g. "What did the fire alarm come in the middle of __?"

            "Tree-style sentence mapping": I assume you mean the old sentence diagramming where the main part (more or less) of the sentence was on a line, and adjuncts (like prepositional phrases) were shown as branching off the bottom of the line on a diagonal. But there are also tree diagrams of the sort made popular by Chomsky and the generativists who followed. In fact I was once employed doing more or less that, just before the AI bubble in the late 80s. Fun!

  • akst 8 days ago

    Unless you’re referring to the academic paper, I’m not getting a paywall.

    I read the article (but not the paper), and it doesn’t sound like a no. But I also don’t find the claim that surprising, given that in other languages word order matters a lot less.

    • mcswell 8 days ago

      In languages where word order matters a lot less, the grammar is still there---it just relies more on things like case markers and agreement markers (i.e. morphology).

      • akst 8 days ago

        The paper is basically saying “we have evidence that supports language comprehension working independently of structural hierarchy” [1] (or at least that’s my read of it).

        However, I imagine linguists have a more precise definition than most of us, so instead of speculating, I’ve decided to read the paper.

        Something they explain early on is a concept called multi-words (one example of this is an idiom), which tend to communicate meaning without any meaningful grammatical structure, and they say this:

        > “… multiword chunks challenge the traditional separation between lexicon and grammar associated with generativist accounts … However, some types of multiword chunks may likewise challenge the constructionist account.”

        I’m an amateur language nerd with a piecemeal understanding of linguistics, but I’m no linguist, so I don’t know what half of this means. It really sounds like they have a very specific definition here, one that neither of us is talking about and that possibly hasn’t been well communicated in the article.

        That said, I’m out of my depth here, and I have a feeling most people replying to this article probably are too, if they’re going off the title and the article that linked to the paper. But I would be interested to hear the opinion of a linguist or someone more familiar with this field and its experimental methods.

        ----------

        [1] With the hypothesis testing typically done in science, you can’t really accept an alternative hypothesis, only reject a null one given your evidence, so you get words like “may” or “might” or “evidence supporting x, y, z”, and noncommittal titles like this one. In the social sciences or other non-natural sciences I feel this is even more the case, given the difficulty of forming definitive experiments without crossing some ethical boundary. In natural science you can put two elements together, control the variables, and see what happens; in the social sciences that’s really hard.
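
        A minimal sketch of that reject-the-null logic, with made-up reading-time numbers and the conventional 0.05 threshold (everything below is an illustrative assumption of mine, nothing from the paper):

          import random

          # Hypothetical reading times (ms) inside vs. outside multiword chunks.
          chunk = [310, 295, 300, 288, 305, 292]
          control = [340, 352, 330, 345, 338, 348]

          def mean(xs):
              return sum(xs) / len(xs)

          observed = mean(control) - mean(chunk)

          # Permutation test: under the null hypothesis the group labels are
          # exchangeable, so shuffle them and count how often a difference at
          # least this large arises by chance.
          pooled = chunk + control
          hits, trials = 0, 10_000
          for _ in range(trials):
              random.shuffle(pooled)
              if mean(pooled[len(chunk):]) - mean(pooled[:len(chunk)]) >= observed:
                  hits += 1

          p = hits / trials
          # We reject the null or fail to reject it; we never "accept" the
          # alternative, hence the hedged titles ("may", "might").
          print("p =", p, "reject H0" if p < 0.05 else "fail to reject H0")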

        • foldr 7 days ago

          >multiword chunks challenge the traditional separation between lexicon and grammar associated with generativist accounts

          This is just silly (the paper, not your comment). Do these folks really think they're the first people to think of associating meanings with multi-word units? Every conceivable idea about what the primes of linguistic meaning might be has been explored in the existing literature. You might be able to find new evidence supporting one of these ideas over another, but you are not going to come up with a fundamentally new idea in that domain.

          As another commenter has pointed out, many of the sequences of words they identify correspond rather obviously to chunks of structure with gaps in certain argument positions. No-one would be surprised to find that 'trees with gaps to be filled in' are the sort of thing that might be involved in online processing of language.

          On top of that, the authors seem to think that any evidence for the importance of linear sequencing is somehow evidence against the existence of hierarchical structure. But rather obviously, sentences have both a linear sequence of words and a hierarchical structure. No-one has ever suggested that only the latter is relevant to how a sentence is processed. Any linguist could give you examples of grammatical processes governed primarily by linear sequence rather than structure (e.g. various forms of contraction and cliticization).
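
          To make that concrete, a throwaway sketch (my own simplified bracketing, not anything from the paper): the same sentence carries both representations, and flattening the tree recovers the linear order.

            # The same sentence as a (simplified) hierarchical structure and,
            # via flattening, as a linear sequence of words. Both coexist.
            sentence = ("S",
                ("NP", ("Det", "the"), ("N", "dog")),
                ("VP", ("V", "bit"),
                       ("NP", ("Det", "the"), ("N", "mailman"))))

            def leaves(node):
                # A bare string is a leaf; otherwise skip the category label
                # at node[0] and collect the leaves of each child in order.
                if isinstance(node, str):
                    return [node]
                words = []
                for child in node[1:]:
                    words.extend(leaves(child))
                return words

            print(leaves(sentence))  # ['the', 'dog', 'bit', 'the', 'mailman']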

          • akst 6 days ago

            I think their point was that the meaning of multi-words isn't the result of structure or word order; many idioms, for example, aren't interpreted literally, and their grammar isn't too important.

            But this is also academia: they want to have evidence behind claims even if those claims feel intuitive. In the social sciences you'll have models and theories that are largely true in a lot of cases but fail to explain variance from the models. The constructionist account and similar sound like one of those larger models, and they are pointing out where it falls short, not to entirely invalidate it but to show the model has limitations.

            I have a feeling the authors are well aware they aren't the first people to consider this, but they did the leg work to provide some empirical evidence for the claim, which is something you want to have when challenging the orthodoxy of a field. It's entirely possible they're working on a larger piece of work and are being asked to first demonstrate this fact that the larger work rests on. But I'm largely speculating there.

            > On top of that, the authors seem to think that any evidence for the importance of linear sequencing is somehow evidence against the existence of hierarchical structure

            The way I see it, if you can demonstrate comprehension in the absence of this structure, I think you can make the case that it is optional and therefore may not rely on it. Which is a different claim from saying it provides no benefit whatsoever, and I don't think their evidence necessarily supports that stronger claim (based on my read).

            My view is that when a language depends a lot on complex grammar, what's happening is it's trying to resolve ambiguity, but languages can address this problem in a number of ways. Languages like Russian (and many non-English Indo-European languages) handle more of this ambiguity through inflection; in tonal languages, to some extent, tone creates a greater possible combination of sounds, which could provide other ways of resolving ambiguity. That's my guess at least; I also accept I have no idea what I'm talking about here.

            • mcswell 4 days ago

              > if you can demonstrate comprehension in the absence of this structure, I think you can make the case that it is optional and therefore may not rely on it.

              One kind of example demonstrating the importance of structure is wh-movement (the appearance of a word like 'who' or 'what' at the beginning of a sentence, when the argument it is asking about would be somewhere deeper inside the structure). For instance "Who did John say that Mary had a fight with __?" (I've represented the position of the argument with the __.) It's been known since the 60s that there are lots of constraints on wh-movement, e.g. *"Who did John say he knew the person who had a fight with __?" (vs. the non-wh-movement sentence "John said he knew the person who had a fight with Bill.")

            • foldr 4 days ago

              >the meaning of multi-words isn't the result of structure or word order

              Surely the 'word order' part must be a mistake here? Clearly word order influences the interpretation of sequences of English words. As for structure, the paper presents no evidence whatever that structure is not involved in the interpretation.

              >many idioms for example aren't interpreted literally

              This is just the definition of what an idiom is, not any kind of insight.

  • antonvs 8 days ago

    > In any case, the short answer is "No!".

    If the question you're answering is the one posed by the Scitechdaily headline, "Have We Been Wrong About Language for 70 Years?", you might want to work a bit on resistance to clickbait headlines.

    The strongest claim that the paper in question makes, at least in the abstract (since the Nature article is paywalled), is "This poses a challenge for accounts of linguistic representation, including generative and constructionist approaches." That's certainly plausible.

    Conceptual dependency focuses more on semantics than grammar, so isn't really a competing theory to this one. Both theories do challenge how language is represented, but in different ways that don't really overlap that much.

    It's also not as if conceptual dependency is some sort of last word on the subject when it comes to natural language in humans - after all, it was developed for computational language representation, and LLMs have made it essentially obsolete for that purpose.

    Meanwhile, the way LLMs do what they do isn't well understood, so we're back to needing work like the OP to try to understand it better, in both humans and machines.

  • mcswell 8 days ago

    Not sure why you bring up Schank's conceptual dependency theory. That was back in the late 60s, and I don't think anybody has worked in that theory for many decades.
