At last we start deploying NG systematically to analyse sentences. In this piece we look at NG’s first big assumption. A sentence is processed in one pass from left to right; as each word is encountered, possible junctions between it and words to its left are identified by reference to stored language knowledge.
We’re keeping it simple. Any unconvinced linguists should ask themselves ‘Why should language theory be more complicated when, with less, we can explain how language is so richly expressive?’
Let’s discuss those first-century Romans again.
(5) Nero is giving Olivia to Poppaea
For each word the possible pairings are with words to its left, starting with the nearest and working leftwards. Sentence (5) yields: Nero__is; is__giving; Nero__giving; giving__Olivia; is__Olivia; Nero__Olivia; Olivia__to; giving__to; is__to; Nero__to; to__Poppaea; Olivia__Poppaea; giving__Poppaea; is__Poppaea; Nero__Poppaea.
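The pairing order can be sketched in a few lines of Python. This is only an illustration of the enumeration, not of NG itself; the `dependent__parent`-style string notation simply mirrors the pairs listed above.

```python
def leftward_pairings(words):
    """For each word in turn, yield its candidate pairings with every
    word to its left, starting with the nearest and working leftwards."""
    for i in range(len(words)):
        for j in range(i - 1, -1, -1):  # nearest left neighbour first
            yield f"{words[j]}__{words[i]}"

pairs = list(leftward_pairings(["Nero", "is", "giving", "Olivia", "to", "Poppaea"]))
print(pairs[:3])   # ['Nero__is', 'is__giving', 'Nero__giving']
print(len(pairs))  # 15
```

The fifteen pairs come out in exactly the order given for sentence (5), ending with Nero__Poppaea.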
Language knowledge includes valid junctions in the form P / M / C / R / C / M / P as in LS7. I’ll repeat the diagram but remember that a concept represented by one of the circles is actually distributed across the network, not magically held in the node itself.
For (5) the relevant junctions, with the parent in bold, are:
All the other possibilities are ruled out because they’re not part of language knowledge (e.g. Olivia__to), or they are pre-empted by a more local junction (e.g. to__Poppaea rather than giving__Poppaea).
Each phonological word (P) must be in the same P / C / M lexical entry for every junction in which it participates. All but one of the words participate in exactly one junction as dependent, and every word participates in zero or more junctions as parent.
Allocating the dependent role to a word limits its participation in any other junction to the parent role. Thus to__Poppaea, which makes Poppaea a dependent, would still allow beautiful__Poppaea, in which Poppaea is the parent.
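That constraint is easy to state as a toy checker. The pair representation and the examples are mine, for illustration only: a word may be dependent in at most one junction but parent in any number.

```python
def consistent(junctions):
    """junctions: list of (dependent, parent) word pairs.
    Return True if no word takes the dependent role twice."""
    dependents = set()
    for dep, _parent in junctions:
        if dep in dependents:   # already allocated the dependent role
            return False
        dependents.add(dep)
    return True

# Poppaea as dependent of 'to', then as parent of 'beautiful': allowed.
print(consistent([("Poppaea", "to"), ("beautiful", "Poppaea")]))  # True
# Poppaea as dependent in two junctions: ruled out.
print(consistent([("Poppaea", "to"), ("Poppaea", "giving")]))     # False
```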
The aim is to deliver to cognition the correct M / R / M propositions for a sentence. In general, an M / R / M is formed by replacing the Cs in a C / R / C rule with the Ms (meanings) for the Ps (phonological words) from a junction in the sentence. A particular C may be for a specific word or else be a generic representing a whole class of words that behave similarly.
Where the dependent is an adjunct, the proposition can be completely formed irrespective of anything else in the sentence. For example, beautiful__Poppaea would immediately deliver POPPAEA / HAS PROPERTY / BEAUTIFUL.
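The substitution of meanings for concepts can be sketched as a simple lookup. The two-entry lexicon and the HAS PROPERTY relation label are stand-ins for stored language knowledge, not part of NG's machinery:

```python
# Toy P -> M lexicon, invented for the sketch.
MEANING = {"beautiful": "BEAUTIFUL", "Poppaea": "POPPAEA"}

def proposition(parent, dependent, relation):
    """Form an M / R / M by replacing each C in a C / R / C rule
    with the meaning (M) of the corresponding phonological word (P)."""
    return (MEANING[parent], relation, MEANING[dependent])

print(proposition("Poppaea", "beautiful", "HAS PROPERTY"))
# ('POPPAEA', 'HAS PROPERTY', 'BEAUTIFUL')
```

For an adjunct junction like beautiful__Poppaea nothing else in the sentence is needed, so the proposition can be delivered at once.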
But comprehension is not simply a matter of delivering to cognition a proposition for each junction, one at a time. A junction may need to create more than one proposition; initially these lack either a full set of concepts or full activation. Incomplete propositions from successive junctions must be merged until something useful can be delivered.
Sentence (5) is a good example. At Nero__is, is could be ‘copular’ and followed by a complement such as mad or emperor. Or is could be an ‘auxiliary’ followed by an active form such as giving, which delivers AGENT / NERO. Or is could be an auxiliary followed by a passive form such as given, which delivers THEME / NERO or GOAL / NERO and needs still more input to resolve.
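One way to picture this narrowing, as a rough sketch only: the three readings of is stay open as candidates until the next word's form rules some of them out. The candidate labels and form tags below are my inventions for illustration.

```python
# After Nero__is, three candidate analyses remain open (labels invented).
candidates = {"copular", "auxiliary+active", "auxiliary+passive"}

def narrow(candidates, next_word_form):
    """Discard candidates the incoming word cannot continue.
    The form tags are illustrative, not NG categories."""
    if next_word_form == "active-participle":   # e.g. 'giving'
        return candidates & {"auxiliary+active"}
    if next_word_form == "passive-participle":  # e.g. 'given'
        return candidates & {"auxiliary+passive"}
    return candidates & {"copular"}             # e.g. 'mad', 'emperor'

print(narrow(candidates, "active-participle"))  # {'auxiliary+active'}
```

Only when a single candidate survives can the merged proposition, such as AGENT / NERO with GIVE, be delivered.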
I’m using AGENT, THEME and GOAL. The more explicit GIVER, GIFT and RECEIVER are unnecessary because the semantics are already in GIVE. Also the textbook thematic relations can be generic for a whole class of verbs.
A language user has to be able to deal with problems. One type is ungrammatical speech from infants, non-native speakers and aphasics. Another is disfluency – the ums and ahs and incompletenesses – in the speech of every competent native speaker.
In such a case, does the process restart itself following an impasse in order to try allocating different junctions amongst the words earlier in the sentence? Reprocessing would occur rarely because it is inefficient and a language ought to evolve its C / R / C rules and P / C / M words to avoid it.
But no, reprocessing doesn’t occur at all. A stored program would be needed to provide branching and looping in the logic to handle these exceptions.
A real possibility is that we don’t deal with ungrammatical sentences automatically but make a conscious effort to comprehend – much like we do as learners of a foreign language.
However, it’s obvious that we deal unconsciously with the disfluency that pervades everyone’s spontaneous speech. NG forces me to conclude that the ums and ahs are somehow lexicalised. And we do pick up meaning from incomplete sentences – which is what NG predicts. Sadly, exploring the possibilities will not feature in LanguidSlog until much later.
Some academics say we also have to deal with ‘garden paths’. They contrive sentences that are certainly difficult to handle and allegedly grammatical. Thomas Bever’s The horse raced past the barn fell is often cited. I’ll ignore GPs. They’re vanishingly rare in real speech. And GPs are not actually grammatical according to NG: unconscious processing cannot wait until the sentence is complete and then go back and disentangle it.
I’ve not actually shown an analysis of sentence (5). It would have made this piece too long. I should manage it next time – unless there are lots of questions about LanguidSlog up to this point.