Are any real syntacticians following Network Grammar? If so, most are about to become alienated. I hope that, for at least a few, the alienation is not from the blog but from their previous beliefs.
The assumption that structures can explain human language has led to widespread adherence to implausible ideas. ‘Movement’ is a good example, showing theorists’ tendency to account for more and more varied phenomena with more and more complicated processes.
(1) John kissed Lucy
(2) Lucy John kissed
Sentence (2) is grammatical and has the same relations between predicate and arguments as (1). But the binary-branching structure shown earlier cannot be built incrementally in a single pass. Phrase-structure grammars claim that the object Lucy is moved from its canonical position after the verb.
In production, do some parts of the speaker’s intended meaning create a structure and then other parts modify it? In comprehension, is that process reversed? Chomskyans would say YES: distinguishing deep/D and surface/S structures has been a key feature of their theories. Whatever the answer, the implication is that steps A and D (see LanguidSlog 2) do occur.
Movement is said to leave a ‘trace’ at the word’s initial position in the structure. That would need either a change to the syntactic relation in the junction item that includes the word or else an additional junction item for the trace.
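To make the two options concrete, here is a minimal sketch of a ‘junction item’ in this blog’s sense (not a standard linguistic term, and all names and fields here are hypothetical illustrations, not any real grammar formalism). A trace could be accommodated either by altering the syntactic relation in the existing junction item or by adding a second junction item for the trace:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class JunctionItem:
    relation: str                   # e.g. 'object' — the syntactic relation at this junction
    head: str                       # e.g. 'kissed'
    dependent: str                  # e.g. 'Lucy'
    trace_of: Optional[str] = None  # set if this junction stands in for a moved word

# Option A: change the syntactic relation in the junction that includes the word
moved = JunctionItem(relation="object-moved", head="kissed", dependent="Lucy")

# Option B: keep the original junction (now holding a trace, 't1')
# and add a further junction item for the fronted word
original = JunctionItem(relation="object", head="kissed", dependent="t1", trace_of="Lucy")
fronted = JunctionItem(relation="topic", head="kissed", dependent="Lucy")
```

Either option adds machinery to the structure purely to record where the word used to be, which is the cost the blog is pointing at.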
In (2) Lucy is topicalised, perhaps implying additionally that John kissed Lucy but not Mary. Grammars ought to explain why the meaning is different from (1), not merely point out correlation between syntax and semantics.
These possibilities would need more branching and looping in the sequence of instructions but branching and looping need addressability. Unless theorists show that mental storage is in fact addressable, they must accept that structure can play no part in sentence processing.
Sentence processing cannot be as shown in LanguidSlog 2. Instead it is direct:
meaning intended by speaker
=X=> phonological output
===> phonological input
=Y=> meaning comprehended by hearer
There is nowhere to hide the difficulties. The real problem for syntax is explaining the connection between phonology and meaning, not between phonology and putative structure.
Still concentrating on comprehension rather than production, our focus now is step Y instead of C.
There are three strong clues.
The first is that, without addressability, there can be no branching and looping in the logic of the process. The only possibility is the least complicated: the same operation is repeated for every piece of input, language knowledge acting as both program and data.
The second clue is that, although merely artefacts, structures drawn by syntacticians correctly show regularities in the effect of words within sentences, and every theory uses junction items. (Note that ‘junction item’ is a generalisation for the purposes of this blog, not a term widely used in linguistics. Also, some theories allow junctions with more than two branches or with only one; but such structures can usually be redrawn as binary without contradicting anything.)
Third is the many-to-many relationship between words and meanings. The diagrams in LS3 imply a one-to-one relationship between phonological word and conceptual meaning, but homonymy/polysemy (same sound, different meaning) and synonymy (different sound, same meaning) are so common that it’s clear the general relationship is many-to-many. The phoneme inventory of, for example, English is such that the number of distinct one- or two-syllable words that could be formed far exceeds what is actually needed. Yet human language processing has no problem with phenomena such as there / their / they’re and furze / gorse.
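The many-to-many relationship can be made explicit with the blog’s own examples (the phonological transcriptions and meaning labels below are illustrative choices, not claims about any lexicon format). One sound maps to several meanings (homophony), and several sounds map to one meaning (synonymy):

```python
# One sound -> many meanings (there / their / they're),
# and many sounds -> one meaning (furze / gorse).
sound_to_meanings = {
    "/ðɛər/": ["THERE", "THEIR", "THEY'RE"],  # homophony
    "/fɜːz/": ["GORSE_PLANT"],
    "/ɡɔːs/": ["GORSE_PLANT"],                # synonymy
}

# Invert the mapping to see the other direction of the relationship.
meaning_to_sounds = {}
for sound, meanings in sound_to_meanings.items():
    for m in meanings:
        meaning_to_sounds.setdefault(m, []).append(sound)

print(meaning_to_sounds["GORSE_PLANT"])  # ['/fɜːz/', '/ɡɔːs/']
```

Neither direction of the mapping is a function, which is exactly why a one-to-one word-meaning diagram cannot be the general case.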
It appears no one has previously acted on the third and most obvious clue. That’s naughty but provides our opportunity. The next piece should be exciting.