
NG11. First principles of Network Grammar

At last we start deploying NG systematically to analyse sentences.  In this piece we look at NG’s first big assumption.  A sentence is processed in one pass from left to right; as each word is encountered, possible junctions between it and words to its left are identified by reference to stored language knowledge.

We’re keeping it simple.  Any unconvinced linguists should ask themselves ‘Why should language theory have more complication when, with less, we can explain how language is so richly expressive?’


Let’s discuss those first-century Romans again.

(5) Nero is giving Olivia to Poppaea

For each word the possible pairings are with words to its left, starting with the nearest and working leftwards.  Sentence (5) yields:  Nero__is;  is__giving, Nero__giving;  giving__Olivia, is__Olivia, Nero__Olivia;  Olivia__to, giving__to, is__to, Nero__to;  to__Poppaea, Olivia__Poppaea, giving__Poppaea, is__Poppaea, Nero__Poppaea.
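The enumeration above is mechanical enough to sketch in code.  This is my own toy illustration of the pairing step, not anything NG specifies; the function name and the use of a word list are my assumptions.

```python
def candidate_pairings(sentence):
    """Yield (left-word, current-word) pairs in the order NG would
    consider them: for each word, pair it with every word to its
    left, starting with the nearest and working leftwards."""
    words = sentence.split()
    pairs = []
    for i, word in enumerate(words):
        for j in range(i - 1, -1, -1):  # nearest neighbour first
            pairs.append((words[j], word))
    return pairs

for left, right in candidate_pairings("Nero is giving Olivia to Poppaea"):
    print(f"{left}__{right}")
```

Run on sentence (5) this prints the fifteen pairings in exactly the order listed above, from Nero__is through to Nero__Poppaea.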

Language knowledge includes valid junctions in the form P / M / C / R / C / M / P as in LS7.  I’ll repeat the diagram but remember that a concept represented by one of the circles is actually distributed across the network, not magically held in the node itself.

Language knowledge - junction

For (5) the relevant junctions, with the parent marked, are:

Nero__is (parent: is)
is__giving (parent: giving)
giving__Olivia (parent: giving)
giving__to (parent: giving)
to__Poppaea (parent: to)
All the other possibilities are ruled out because they’re not part of language knowledge (e.g. Olivia__to), or they are pre-empted by a more local junction (e.g. to__Poppaea rather than giving__Poppaea).
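The two filters just described can be sketched together: a pairing survives only if it is in stored language knowledge, and the most local surviving pairing pre-empts any with a word further left.  The contents of KNOWN below are my reading of which junctions (5) licenses; everything here is a toy illustration, not NG’s machinery.

```python
# Toy 'language knowledge': the valid junctions for sentence (5),
# written in word order. This set is my own assumption.
KNOWN = {
    ("Nero", "is"), ("is", "giving"), ("giving", "Olivia"),
    ("giving", "to"), ("to", "Poppaea"),
}

def select_junctions(words):
    """For each word, scan leftwards and keep the first (most local)
    pairing found in language knowledge, pre-empting any valid
    pairing with a word further left."""
    junctions = []
    for i, word in enumerate(words):
        for j in range(i - 1, -1, -1):
            if (words[j], word) in KNOWN:
                junctions.append((words[j], word))
                break
    return junctions

print(select_junctions("Nero is giving Olivia to Poppaea".split()))
```

Note how ‘to’ skips the invalid Olivia__to and settles on giving__to, and how to__Poppaea pre-empts giving__Poppaea, exactly as described above.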

Each phonological word (P) must be in the same P / C / M lexical entry for every junction in which it participates.  All but one of the words participate as dependent in exactly one junction, and every word participates as parent in zero, one or more junctions.

Allocating the dependent role to a word limits its participation in any other junction to the parent role.  Thus to__Poppaea (Poppaea as dependent) would still allow beautiful__Poppaea (Poppaea as parent).

The aim is to deliver to cognition the correct M / R / M propositions for a sentence.  In general, an M / R / M is formed by replacing the Cs in a C / R / C rule with the Ms (meanings) for the Ps (phonological words) from a junction in the sentence.  A particular C may be for a specific word or else be a generic representing a whole class of words that behave similarly.


Where the dependent is an adjunct, the proposition can be completely formed irrespective of anything else in the sentence.  For example, beautiful__Poppaea would immediately deliver POPPAEA / HAS PROPERTY / BEAUTIFUL.
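That substitution step can be sketched minimally.  The lexicon and rule below are toy stand-ins for P / C / M entries and a C / R / C rule; the names NOUN and ADJ are my own labels, not NG’s.

```python
# Toy stand-in for P / C / M entries: each phonological word (P)
# mapped straight to its meaning (M), with the C left implicit.
LEXICON = {"beautiful": "BEAUTIFUL", "Poppaea": "POPPAEA"}

# Toy C / R / C rule for an adjunct junction.
RULE = ("NOUN", "HAS PROPERTY", "ADJ")

def proposition(parent, dependent):
    """Form an M / R / M by replacing the rule's Cs with the Ms
    for the junction's Ps."""
    return (LEXICON[parent], RULE[1], LEXICON[dependent])

print(proposition("Poppaea", "beautiful"))
# -> ('POPPAEA', 'HAS PROPERTY', 'BEAUTIFUL')
```

Because the dependent is an adjunct, nothing else in the sentence is needed: the proposition is complete as soon as the junction is made.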

But comprehension is not simply a matter of delivering to cognition a proposition for each junction, one at a time. A junction may need to create more than one proposition; initially these lack either a full set of concepts or full activation.  Incomplete propositions from successive junctions must be merged until something useful can be delivered.

Sentence (5) is a good example.  At Nero__is, is could be ‘copular’ and followed by a complement such as mad or emperor.  Or is could be an ‘auxiliary’ followed by an active form such as giving, which delivers AGENT / NERO.  Or is could be an auxiliary followed by a passive form such as given, which delivers THEME / NERO or GOAL / NERO and needs still more input to resolve.
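One way to picture this merging, as a speculative sketch only: the junction Nero__is opens several partial analyses, and the next word keeps just the ones it is compatible with.  The class labels and dictionary shapes are entirely my own illustration.

```python
# Partial propositions opened by Nero__is, keyed by what 'is'
# turns out to be once the next word arrives.
CANDIDATES = {
    "copular":     {"needs": "complement", "partial": ("NERO", "IS", None)},
    "aux+active":  {"needs": "active",     "partial": ("AGENT", "NERO")},
    "aux+passive": {"needs": "passive",    "partial": ("THEME-or-GOAL", "NERO")},
}

# Toy classification of possible next words.
NEXT_WORD_CLASS = {"mad": "complement", "emperor": "complement",
                   "giving": "active", "given": "passive"}

def merge(next_word):
    """Keep only the partial analyses compatible with the next word."""
    cls = NEXT_WORD_CLASS[next_word]
    return {k: v for k, v in CANDIDATES.items() if v["needs"] == cls}

print(merge("giving"))  # only the auxiliary + active reading survives
```

With ‘giving’ the active reading survives and AGENT / NERO can be delivered; with ‘given’ the passive reading survives but, as noted above, still needs more input to choose between THEME and GOAL.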

I’m using AGENT, THEME and GOAL.  The more explicit GIVER, GIFT and RECEIVER are unnecessary because the semantics are already in GIVE.  Also, the textbook thematic relations can be generic for a whole class of verbs.


A language user has to be able to deal with problems.  One type is ungrammatical speech from infants, non-native speakers and aphasics.  Another is disfluency – the ums and ahs and incompletenesses – in the speech of every competent native speaker.

In such a case, does the process restart itself following an impasse in order to try allocating different junctions amongst the words earlier in the sentence?  Reprocessing would occur rarely because it is inefficient and a language ought to evolve its C / R / C rules and P / C / M words to avoid it.

But no, reprocessing doesn’t occur at all: handling these exceptions through branching and looping logic would need a stored program, and NG rules that out (LS5).

A real possibility is that we don’t deal with ungrammatical sentences automatically but make a conscious effort to comprehend – much like we do as learners of a foreign language.

However it’s obvious that we deal unconsciously with the disfluency that pervades everyone’s spontaneous speech.  NG forces me to conclude that the ums and ahs are somehow lexicalised.  And we do pick up meaning from incomplete sentences – which is what NG predicts.  Sadly, exploring the possibilities will not feature in LanguidSlog until much later.

Some academics say we also have to deal with ‘garden paths’.  They contrive sentences that are certainly difficult to handle and allegedly grammatical.  Thomas Bever’s ‘The horse raced past the barn fell’ is often cited.  I’ll ignore GPs.  They’re vanishingly rare in real speech.  And GPs are not actually grammatical according to NG: unconscious processing cannot wait until the sentence is complete and then go back and disentangle it.


I’ve not actually shown an analysis of sentence (5).  It would have made this piece too long.  I should manage it next time – unless there are lots of questions about LanguidSlog up to this point.


  1. SH says:

    ‘Why would language have more complication when it can be richly expressive without?’
    Because in the first instance, languages are not constructs, but rather emerge through a process of cultural evolution, which may lead to local maxima which are not optimal systems. And in the second instance, communication channels are noisy and may require complication in the form of redundancy to achieve expression!

    What is your model of language processing for speakers of languages with free word order? Does this networked processing happen with more information packed into each lexeme due to more complex morphology?

  2. Mr Nice-Guy says:

    Thanks again, SH.

    Yes, my sentence is horrible. It should be ‘Why should language theory have more complication when, with less, we can explain how language is so richly expressive?’ I think my revised wording no longer conflicts with your ‘first instance’. Ditto the ‘second instance’, but isn’t the complication/redundancy at discourse level rather than sentence level?

    Re your second para … I haven’t given a lot of thought to this yet. English is keeping me busy and I’m assuming that if I can crack English, then German, for instance, with plenty of audible inflection ought to be possible too. Where in English the lexicon contains one rule like (implicitly nominative noun) __ (verb), in German it will contain two rules (explicitly nominative noun) __ (verb) and (verb) __ (explicitly nominative noun).

    Your last sentence is a bit tricky because of ‘lexeme’. I tend to use that to mean the word abstracted away from its inflected forms. Thinking about German, I can’t be sure you’re not asking about those notorious compounds – which have ‘more information packed in…complex morphology’. But anyway I guess we’re talking about one lexical morpheme – or several concatenated – plus an inflectional morpheme. In the terms established in Languid Slog 7, the question is: How granular are the P / C / M words and the C / R / C rules? I’m not sure but suspect that granularity is preferred: less to learn/more to process is the trade-off.

  3. N H says:

    Languidslog is not so easy to follow and maybe I missed something. The distinction between “parent” and “dependent” is like in dependency grammar and I understand about TO-poppaea and beautiful-POPPAEA. What I don’t understand is how the distinction is in a lexical “rule”. Should one of the C nodes be marked as parent? Should the incoming word also be marked? How do you ignore an incoming pair of words with both (or neither) marked as parent?

  4. Mr Nice-Guy says:

    Thanks, NH. Good question – one that I intend to address later on, but let’s make the main point now.

    To be consistent with ‘no stored program’ as in LS5, there can be no marking of parent (or dependent) status in junctions as input or as stored in the lexicon. That means the status of each of the two words in a junction must be implicit.

    In English there are word pairs where the word-sequence can be reversed regardless of status-sequence; for example start__to and to__start, rice__pudding and pudding__rice, John__gave and gave__John. However I can’t think of any pairs where the status-sequence can be reversed but not the word-sequence; i.e. aaa__bbb (parent aaa) alongside aaa__bbb (parent bbb) seems to be impossible. I’m not sure that’s true cross-linguistically; so if you’re a native speaker of something other than English, please think about it and let me know true or false.

    You might say that the impossibility of aaa__bbb with either word as parent is obvious: how could the two junctions be distinguished? Well, that’s the whole point. They could be distinguished if words were marked as parent (or as dependent). The conclusion is likely to be that there is no such marking.

    Pending cross-linguistic confirmation, I’ll continue my convention of showing dependents in italic and parents in bold/italic. That’s to keep me honest regarding the principle that a word can only occur as dependent in one junction in a sentence. Yes, I need an explanation of how that works. I haven’t come up with one yet but suspect it’s something to do with activation.

