
NG3. What would be in a sentence structure?

Last time we looked at how sentence structure could participate in production or comprehension.  For that to happen it must exist somehow in the mental architecture.  This piece outlines what the structure must contain during its brief existence.

Words and relations

It is surely uncontroversial to assert that the structure must include items corresponding to the phonological units in the sentence.  For convenience these units are called ‘words’ here.  As well as something printed as a string of alphabetic characters between spaces (e.g. fish), a word may be a subdivision of such a string (fish-es), or even a string of such strings (big fish, used idiomatically).

It’s safe to assume that words are part of the language knowledge in your mental lexicon.

The words in a sentence are interrelated syntactically; for example, (verb)__(object).  Clearly the relation between one word and another is not always so predictable that it could be left implicit.  So the structure must also include items specifying how words relate to each other.

We can also assume that relations, generically, are part of language knowledge.  But permanently storing each specific pair of words that can enter a relation is problematic: how would infants acquire all that knowledge so quickly?
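Purely as an illustration of these two kinds of item, here is a minimal Python sketch; the class names and fields (WordItem, RelationItem, and so on) are assumptions of mine, not anything proposed here or stored in any actual lexicon.

```python
from dataclasses import dataclass

@dataclass
class WordItem:
    """An item corresponding to one phonological 'word' in the sentence."""
    form: str        # the phonological shape, e.g. "fish"
    category: str    # a syntactic category drawn from stored language knowledge

@dataclass
class RelationItem:
    """An item specifying how two word items relate, e.g. (verb)__(object)."""
    relation: str    # the generic relation, part of stored language knowledge
    first: WordItem
    second: WordItem

# The generic verb__object relation applied to two specific words.
kissed = WordItem(form="kissed", category="V")
lucy = WordItem(form="Lucy", category="N")
kissed_lucy = RelationItem(relation="verb__object", first=kissed, second=lucy)
```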

Simple example

The three words in sentence (1) are sufficient for the discussion.  The issues identified here apply equally to any longer sentence.

(1)     John kissed Lucy

Most orthodox grammars assume the structure of (1) to be a binary-branching tree:

[Figure: binary-branching tree structure for ‘John kissed Lucy’]

To participate in sentence processing this structure needs the following items to be stored somewhere:

a     lexical material accessed for phonological word John

b     lexical material accessed for phonological word kissed

d     lexical material accessed for phonological word Lucy

e     junction of b and d, with b as head (the dominant partner in the relation)

f      junction of a and e, with e as head

The lexical material at a, b and d must determine how the two junction items are formed.  Spurious alternatives might have the ‘head’ role differently assigned, or have a junction of a and b with that and d forming a second junction.
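To make the bookkeeping concrete, here is a minimal Python sketch of items a to f.  The Lexical and Junction classes and their fields are illustrative assumptions of mine; nothing here is prescribed by phrase-structure grammars themselves.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Lexical:
    """Lexical material accessed for one phonological word."""
    word: str

@dataclass
class Junction:
    """A junction of two items, one marked as head and the other as dependent."""
    head: Union["Lexical", "Junction"]
    dependent: Union["Lexical", "Junction"]

# Items a, b and d: lexical material for the three words.
a = Lexical("John")
b = Lexical("kissed")
d = Lexical("Lucy")

# Item e: junction of b and d, with b as head.
e = Junction(head=b, dependent=d)

# Item f: junction of a and e, with e as head.
f = Junction(head=e, dependent=a)
```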

There is a lot of evidence that sentences are processed incrementally.  That being so, the structure for (1) must be built in stages.  First:

a     lexical material accessed for phonological word John

Then:

b     lexical material accessed for phonological word kissed

c     junction of a and b, with b as head

The structure built at this point, for the incomplete string John kissed, happens to be that of a grammatical sentence, but that is untypical and irrelevant.  The final stage (the whole staged build is sketched after the list below) is:

d     lexical material accessed for phonological word Lucy

e     junction of b and d, with b as head

f     junction of a and e, with e as head

c     junction of a and b, with b as head (replaced by f)
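A minimal sketch of that staged build, reusing the same illustrative Lexical and Junction classes as above; the structure list is just a stand-in for whatever transient store the mind actually uses.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Lexical:
    word: str

@dataclass
class Junction:
    head: Union["Lexical", "Junction"]
    dependent: Union["Lexical", "Junction"]

structure = []                          # transient items built so far

# Stage 1: 'John' arrives.
a = Lexical("John")
structure.append(a)

# Stage 2: 'kissed' arrives and the provisional junction c is formed.
b = Lexical("kissed")
c = Junction(head=b, dependent=a)
structure.extend([b, c])

# Stage 3: 'Lucy' arrives; e and f are formed and c is replaced by f.
d = Lexical("Lucy")
e = Junction(head=b, dependent=d)
f = Junction(head=e, dependent=a)
structure[structure.index(c)] = f       # c can no longer be part of the structure
structure.extend([d, e])
```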

Another structure

Items a to f are the set that would be involved if the structure of (1) is as assumed in the class of grammars called phrase-structure grammars.  Another, smaller class is that of dependency grammars; in these the structure of (1) is viewed thus:

[Figure: dependency structure for ‘John kissed Lucy’]

In this context, a dependency grammar is distinguished by having junction items that join only words, never other junctions (a sketch follows the list below):

v     lexical material accessed for phonological word John

w    lexical material accessed for phonological word kissed

x     junction of v and w, with w as head

y     lexical material accessed for phonological word Lucy

z     junction of w and y, with w as head
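A comparable Python sketch for items v to z; again the class names are illustrative assumptions of mine, and the only substantive difference from the earlier sketch is that a dependency junction’s two slots hold words only.

```python
from dataclasses import dataclass

@dataclass
class Lexical:
    """Lexical material accessed for one phonological word."""
    word: str

@dataclass
class DependencyJunction:
    """A junction joining two words only, never another junction."""
    head: Lexical
    dependent: Lexical

# Items v, w and y: lexical material for the three words.
v = Lexical("John")
w = Lexical("kissed")
y = Lexical("Lucy")

# Item x: junction of v and w, with w as head.
x = DependencyJunction(head=w, dependent=v)

# Item z: junction of w and y, with w as head.
z = DependencyJunction(head=w, dependent=y)
```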

Words

The items described above as ‘lexical material…’ present few problems.  Language knowledge must pre-exist in the minds of speaker and hearer and be sufficiently similar in each to avoid confusion.  It’s safe to assume that for each word a permanently-stored item links its phonological shape and its meaning, and also categorises the word according to how it interacts syntactically with other words:

[Figure: permanently-stored lexical item for ‘John’, linking phonological shape, meaning and syntactic category]
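A rough sketch of such a permanently-stored item; the field names, and the representation of meaning as a plain string, are assumptions of mine rather than claims about the actual lexicon.

```python
from dataclasses import dataclass

@dataclass(frozen=True)        # permanently stored, so treated as immutable here
class LexicalEntry:
    phonology: str             # the word's phonological shape
    meaning: str               # a crude stand-in for whatever the stored meaning really is
    category: str              # codes how the word interacts syntactically with other words

john = LexicalEntry(phonology="John", meaning="the person John", category="N")
kissed = LexicalEntry(phonology="kissed", meaning="KISS + past", category="V")
```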

Junctions

The items described as ‘junction of…’ are more problematic.  They must be ad hoc, existing just long enough for the sentence to be processed.  A junction item must be linked somehow to the items it joins.  Two permanently-stored items could be used in forming a transient item for the junction:

[Figure: transient junction item for ‘John kissed’, formed from two permanently-stored items]

The syntactic relation is given by the combined categories of the words it joins. The role of each word within the relation (‘head’ or ‘dependent’) is given by its individual category.
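One rough way to picture this: a table of combinable category codes decides both whether two words can form a junction and which of them is head.  The table contents and the form_junction function below are purely illustrative assumptions, not a proposal about how the codes are actually held.

```python
from dataclasses import dataclass

@dataclass
class Word:
    form: str
    category: str

# Hypothetical table of combinable category codes:
# (category of first word, category of second word) -> (relation, index of the head).
COMBINABLE = {
    ("N", "V"): ("subject__verb", 1),   # noun then verb: the verb (index 1) is head
    ("V", "N"): ("verb__object", 0),    # verb then noun: the verb (index 0) is head
}

def form_junction(first, second):
    """Form a transient junction item from two categorised words, if their categories combine."""
    key = (first.category, second.category)
    if key not in COMBINABLE:
        return None                     # the categories do not combine
    relation, head_index = COMBINABLE[key]
    pair = (first, second)
    return {"relation": relation,
            "head": pair[head_index],
            "dependent": pair[1 - head_index]}

# The junction for 'John kissed': the verb ends up as head, as in item c above.
print(form_junction(Word("John", "N"), Word("kissed", "V")))
```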

If it’s also possible for an incoming word to combine with an already-formed junction to form a new junction, the whole process can be modelled.  For sentence (1) assuming a phrase-structure grammar:

[Figure: junctions formed incrementally for ‘John kissed Lucy’, with the earlier John–kissed junction greyed out]

The junction formed at kissed is greyed out because it can no longer be part of the structure once Lucy is encountered.  Here the rule appears to be that an item can participate as head or dependent in only one other item.  In a dependency grammar, the same word can act as ‘dependent’ in only one relation, but as ‘head’ in more than one.

Allocation of the word and the existing junction to ‘head’ and ‘dependent’ roles must work as before.  Categories and syntactic relations must be held as parts of a single scheme of combinable codes.
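That rule could be sketched as a simple check over the junctions already formed; the dictionary representation and the can_join function are illustrative assumptions only.

```python
def can_join(item, role, junctions, dependency_grammar=False):
    """
    Decide whether `item` may take `role` ('head' or 'dependent') in a new junction,
    given the junctions already formed (each a dict with 'head' and 'dependent' keys).
    Phrase-structure rule: an item may appear in at most one other item, in either role.
    Dependency rule: a word may be dependent only once, but head more than once.
    """
    if dependency_grammar:
        if role == "head":
            return True                 # a word may head any number of relations
        return all(j["dependent"] is not item for j in junctions)
    # Phrase-structure case: the item must not already appear in any junction.
    return all(j["head"] is not item and j["dependent"] is not item
               for j in junctions)
```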

No surprises yet

Structure must somehow exist for it to be part of sentence processing.  Thus far that looks perfectly feasible.  But all is not well, as we’ll see next time.

Mr Nice-Guy
