
NG6. A transformation for grammar

Are any real syntacticians following Network Grammar?  If so, most are about to become alienated.  I hope that, for at least a few, the alienation is not from the blog but from their previous beliefs.

Movement

The assumption that structures can explain human language has led to widespread adherence to implausible ideas.  ‘Movement’ is a good example, showing theorists’ tendency to account for more and more varied phenomena with more and more complicated processes.

(1)     John kissed Lucy

(2)    Lucy John kissed

Sentence (2) is grammatical and has the same relations between predicate and arguments as (1).  But for (2) the binary-branching structure shown earlier cannot be built incrementally in a single pass.  Phrase-structure grammars therefore claim that the object Lucy has been moved from its canonical position after the verb.

In production, do some parts of the speaker’s intended meaning create a structure and then other parts modify the structure?  In comprehension, is that process reversed?  Chomskyans would say YES: distinguishing deep/D and surface/S structures has been a key feature of their theories.  Whatever, the implication is that steps A and D (see LanguidSlog 2) do occur.

Movement is said to leave a ‘trace’ at the word’s original position in the structure.  That would need either a change to the syntactic relation in the junction item that includes the word or else an additional junction item for the trace.
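
To make that bookkeeping concrete, here is a minimal Python sketch of the two options.  The class, field and relation names are mine, invented for illustration; they are not taken from any theory.

```python
# Hypothetical sketch: a 'junction item' as a record joining two parts
# of a sentence with a syntactic relation.  All names are illustrative.
from dataclasses import dataclass

@dataclass
class Junction:
    relation: str   # e.g. 'object'
    head: str       # e.g. 'kissed'
    dependent: str  # e.g. 'Lucy'

# Option 1: change the relation in the existing junction item.
moved = Junction(relation="object-moved", head="kissed", dependent="Lucy")

# Option 2: keep the original junction and add a second one for the trace.
original = Junction(relation="object", head="kissed", dependent="t1")
trace = Junction(relation="antecedent", head="t1", dependent="Lucy")
```

Either way, something extra must be stored or altered beyond the plain junction items for (1).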

In (2) Lucy is topicalised, perhaps implying additionally that John kissed Lucy but not Mary.  Grammars ought to explain why the meaning differs from (1), not merely point out a correlation between syntax and semantics.

These possibilities would need more branching and looping in the sequence of instructions, but branching and looping require addressability.  Unless theorists show that mental storage is in fact addressable, they must accept that structure can play no part in sentence processing.
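
To spell the point out: any conventional walk over a stored structure presupposes addressable memory, because each step follows a stored reference to reach the next node.  A toy Python sketch, purely illustrative:

```python
# Each node stores references (in effect, addresses) to its children;
# the traversal branches and recurses by following those addresses.
class Node:
    def __init__(self, label, left=None, right=None):
        self.label = label
        self.left = left    # a stored address of another node
        self.right = right  # likewise

def walk(node):
    if node is None:        # branching on what an address points to
        return []
    return walk(node.left) + [node.label] + walk(node.right)

tree = Node("kissed", Node("John"), Node("Lucy"))
print(walk(tree))  # ['John', 'kissed', 'Lucy']
```

Take away the addresses and neither the branching nor the recursion has anything to work with.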

Without structure

Sentence processing cannot be as shown in LanguidSlog 2.  Instead it is direct:

meaning intended by speaker

=X=> phonological output

===> phonological input

=Y=> meaning comprehended by hearer

There is nowhere to hide the difficulties.  The real problem for syntax is explaining the connection between phonology and meaning, not between phonology and putative structure.

We are still concentrating on comprehension rather than production, so our focus is now step Y instead of step C.

Understanding Y

There are three strong clues.

The first is that, without addressability, there can be no branching and looping in the logic of the process.  The only possibility is the least complicated: the same operation is repeated for every piece of input, with language knowledge acting as both program and data.
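
A minimal sketch of what that least-complicated process might look like.  The lexicon entries and meaning labels below are invented placeholders, not a claim about what is actually stored:

```python
# Language knowledge as data: a hypothetical lexicon consulted by one
# fixed operation.  The only repetition is over the input itself.
lexicon = {
    "john":   {"JOHN"},
    "kissed": {"KISS"},
    "lucy":   {"LUCY"},
}

def step(state, word):
    # One and the same operation for every input word: look it up and
    # combine what is found with the state built so far.
    return state | lexicon.get(word, {"UNKNOWN"})

state = set()
for word in "john kissed lucy".split():
    state = step(state, word)
print(state)
```

No branching or looping in the logic beyond the single pass over the input; all the variation comes from the stored knowledge.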

The second clue is that, although merely artefacts, structures drawn by syntacticians correctly show regularities in the effect of words within sentences, and every theory uses junction items.  (Note that ‘junction item’ is a generalisation for the purposes of this blog, not a term widely used in linguistics.  Also, some theories allow junctions with more than two branches or with only one; but such structures can usually be redrawn as binary without contradicting anything.)
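
As a purely notational illustration of that parenthetical, here is a junction item written as a tuple (relation, branch, branch) of my own devising, and a three-branch junction recast as nested binary ones:

```python
# Purely notational: 'junction item' in this blog's generalised sense.
binary = ("object", "kissed", "Lucy")           # two branches

ternary = ("clause", "John", "kissed", "Lucy")  # three branches

# The same information redrawn as nested binary junctions:
redrawn = ("subject", "John", ("object", "kissed", "Lucy"))
```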

Third is the many-to-many relationship between words and meanings.  The diagrams in LS3 imply a one-to-one relationship between phonological word and conceptual meaning, but homonymy/polysemy (same sound, different meaning) and synonymy (different sound, same meaning) are so common that it’s clear the general relationship is many-to-many.  The phoneme inventory of, for example, English is such that the number of distinct one- or two-syllable words that could be formed far exceeds the number actually needed.  But human language processing has no problem with phenomena such as there / their / they’re and furze / gorse.
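
A minimal sketch of that many-to-many relationship, using the post’s own examples.  The pseudo-phonological keys and MEANING labels are placeholders, not real phonemic transcriptions:

```python
# One sound with several meanings, and several sounds with one meaning.
sound_to_meanings = {
    "DHEHR": {"THERE", "THEIR", "THEY_ARE"},  # homophony: one sound, three meanings
    "FURZ":  {"GORSE_PLANT"},
    "GORS":  {"GORSE_PLANT"},                 # synonymy: two sounds, one meaning
}

# Invert the mapping to see the other direction of the many-to-many.
meaning_to_sounds = {}
for sound, meanings in sound_to_meanings.items():
    for meaning in meanings:
        meaning_to_sounds.setdefault(meaning, set()).add(sound)

print(meaning_to_sounds["GORSE_PLANT"])  # both 'FURZ' and 'GORS'
```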

Opportunity

It appears that no one has previously acted on the third and most obvious clue.  That’s naughty but it provides our opportunity.  The next piece should be exciting.

4 comments

  1. SH says:

    It’s a worry that “real syntacticians” are assumed to be Chomskyans!

    I think it’s a long-standing problem of syntactic arguments that examples like “John kissed Lucy” and “Lucy John kissed” are presented as equal because both can be parsed. The latter is readily parsable only in a conversational environment through use of prosodic cues; in text it requires a triple-take, at least for me.
    There are plenty of tagged corpora out there – why not use frequency counts to compare grammaticality rather than invented sentences and intuition?

    I don’t disagree that “the real problem for syntax is explaining the connection between phonology and meaning, not between phonology and putative structure”, but I certainly think structures are a helpful tool for understanding the restricted set of word orders used in this transfer of meaning.

    There is plenty of work out there on polysemy – are you saying that it’s not incorporated into work on the syntax/semantics interplay?

    I agree that this post is the natural conclusion to your views on the computational storage of mental reference models. I’m not convinced it’s what’s going on in human communication! Eager for the next, as ever.

    • Mr Nice-Guy says:

      Thanks again, SH. Good points!

      I used ‘real syntacticians’ to distinguish them from amateurs like me. LanguidSlog takes issue with all of them, not just Chomskyans. OK, in the coming weeks I show a leaning towards dependency grammar, but advocates of that are no better at revealing how their ideas relate to how language is processed in the human mind.

      A sentence is grammatical if it can be comprehended by a native speaker without conscious effort. A native speaker can comprehend much more – e.g. poor non-native speech – but with effort. I think I would be OK reading “Lucy John kissed” (which I chose simply as a variant of “John kissed Lucy” that allegedly has movement). But my to-be-revealed theory allows another idiolect (e.g. yours) to require conscious effort in reading the sentence. It also allows prosodic cues to help when the sentence is heard.

      Tagged corpora … Yes – why not? The professionals like to trust their own intuition about grammaticality and thereby make syntax black-and-white, not shades of grey. Once I was coerced into abandoning a long essay after ~80 man-hours when I used the British National Corpus to show that ‘garden-path’ sentences are a myth.

      Yes, structures are fine for linguists talking about sentences – to each other and to students. My point is that structure/constituency can play no part in human language processing. (Did I succeed in actually proving that?)

      On polysemy etc, my point is that the many-to-many relationship between sound and meaning is a huge clue about how language really works. In my time, I’ve read a small but representative subset of the literature – and no one else is making the same point.

      No, it’s not ‘the natural conclusion to [my] views’: see next week. Also ‘human communication’ is too wide: I’m focussing on sound==>meaning; later on we’ll look at meaning==>sound.

      • HK says:

        The type of movement you illustrate here is very well supported empirically. In fact it is so well supported that all serious linguistic theories that I know of have mechanisms to represent the equivalent of a trace in Chomskyan approaches. Consider, for example, the c-command condition on reflexive binding. It is satisfied in:

        HimSELF John admires most.

        This is easily accounted for if there is a trace of movement (or similar). Without it, I’m not sure what you’d have to say about facts like these. Other examples show that the trace of this kind of movement is active for agreement:

        MARY John says is/*are coming to the party.

        The problem with your conclusion is that it flies in the face of masses of evidence that structure DOES play a role in sentence processing. The implication is that the alternative neuroscience-based approach must succeed in capturing the data that linguists explain using constituency.

  2. Mr Nice-Guy says:

    Thanks, HK. Which arm do you prefer for wrestling?

    Re movement … PSG trees describe sentences well – for teaching purposes. It is understandable that structures are widely assumed to participate in or to represent sentence processing. LanguidSlog 2 to 6 show there’s a problem with that assumption. One consequence is that ‘movement’ cannot happen. Your examples of anaphor binding and agreement can be analysed – without movement – using the approach that will be defined in LanguidSlog 7 onwards. I’ll expand this response to demonstrate that once my notation has been properly explained.

    Re structure generally … The putative evidence is indirect. It might be more plausible if it had ever been presented with a definition of the mental architecture that it assumes. ‘Constituency’ and ‘structure’ are two sides of the same coin. If the architecture doesn’t allow one, it can’t allow the other. So there can be no participation in production/comprehension by a constituent encompassing one or two other constituents (non-terminal). That would require either complicated processing in real time or an infinite number of pre-stored constituents. Mr Ockham is surely on my side.
