
LS8. Cognitive effect

Bubbles rising

Last time we looked at how a word-word junction delivers an atom of meaning to cognition.  This time we’ll see how the set of junctions in a sentence builds up a whole molecule of meaning.

Sentences

A grammatical sentence consists of junctions allowed by the lexicon.  A word shared by more than one junction must use the same P / C / M in each.  A two-junction sentence:

Two-junction sentence

There is no need for an overall structure (as shown) to be built.  The meaning of the whole sentence emerges piecemeal.  Ultimately sentence meaning is a bundle of simple propositions.  This conclusion comes from reasoning, but one piece of empirical evidence supports it: what interlocutors remember from discourse is the gist, not the sentences.

So John kissed Lucy delivers KISS / AGENT / JOHN (M2 / RX / M1), KISS / PATIENT / LUCY (M2 / RY / M3), plus whatever there is in the mind that surfaces in language as tense, focus etc.
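To make the ‘bundle of simple propositions’ idea concrete, here is a minimal sketch.  The Proposition type and its field names are illustrative assumptions of the sketch, not part of the approach itself:

```python
from typing import NamedTuple

class Proposition(NamedTuple):
    """An M / R / M triple: head concept, thematic relation, dependent concept."""
    head: str      # M2, the verb's meaning
    relation: str  # RX / RY, the thematic relation
    dep: str       # M1 / M3, the argument's meaning

# 'John kissed Lucy' delivers its meaning piecemeal, one junction at a time.
bundle = {
    Proposition("KISS", "AGENT", "JOHN"),    # from the junction John__kissed
    Proposition("KISS", "PATIENT", "LUCY"),  # from the junction kissed__Lucy
}

# A set, because the propositions in a bundle are linked only by
# simultaneity, not by sequence or hierarchy.
for p in sorted(bundle):
    print(p)
```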

Structure shows means, not meaning

A sentence-as-language can be depicted as a tree or as nested brackets.  What determines that structure – word order, inflection and so on – is present in the sentence only to make the phonological string as brief as possible.

Hominids that communicated by somehow articulating individually the propositions KISS / AGENT / JOHN, KISS / PATIENT / LUCY etc would surely have lost out to others who developed the ability to communicate the same thing much faster with John kissed Lucy.  The mental architecture that evolution had already put in place for other purposes could support that.

No stored program

With language knowledge held as in LS7, activation via phonological words leads to delivery of meaning with no separate ‘program’.  Supporting evidence comes from impaired speech.

The mental architecture has to be plausible in relation to aphasias.  Broca’s is associated with damage in one physical area of the brain, Wernicke’s in another.  Any damage to a ‘stored program’ would surely stop language production entirely, not cause the partial disabling characteristic of these aphasias.

More speculatively, a problem with linking the sequence of Cs for a sentence could cause agrammatism (as in Broca’s?).  Similarly a problem with linking the sequence of Ms could deliver inappropriate phonological words in a grammatical sequence (as in Wernicke’s?).

Delayed delivery

Another difference from LS4 is that, in the LS7 approach, cognitive effect is delivered as soon as possible.  Delivery may be immediate for a word joined to a preceding word (for example kissed__Lucy).  But a junction might not be fully determined until following words have been encountered.

To illustrate that, let’s start using a scenario with the emperor Nero, his wife Poppaea and the slave-girl Olivia.  Each of these nouns denotes a person, so semantics can’t suggest their syntactic roles.

(3) Nero gave Olivia …

A verb-object junction gave__Olivia is evident, but it’s not yet clear whether Olivia has the thematic relation THEME (if the sentence continues …to Poppaea) or GOAL (if it continues …a puppy).

Delivery must be delayed where there is uncertainty, as in (3).  When there is no uncertainty, an M / R / M proposition is fully activated and delivered.  Otherwise the possible outcomes have to be covered by multiple M / R / M propositions.  Initially the available activation must be shared among them and nothing can be delivered.  When the uncertainty is resolved, all the activation goes to the correct proposition, which is then delivered.

At Olivia in (3):

Unresolved junction

A is the activation level needed for a proposition to be delivered to cognition.  When a fully active GOAL / POPPAEA proposition is created, it displaces the half-activation of GOAL / OLIVIA on to THEME / OLIVIA enabling delivery of the latter:

Resolution 1

Similarly THEME / PUPPY would force delivery of GOAL / OLIVIA:

Resolution 2
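The split-activation tactic can be sketched in code.  This is only a toy model of the description above: the numeric activation values, the tuple representation and the function name are all assumptions of the sketch.

```python
A = 1.0  # activation level needed for a proposition to be delivered

# While gave__Olivia is unresolved, the available activation is shared
# between the two candidate propositions; neither reaches A, so nothing
# can be delivered yet.
candidates = {
    ("GIVE", "THEME", "OLIVIA"): A / 2,
    ("GIVE", "GOAL", "OLIVIA"): A / 2,
}

def resolve(winning_relation):
    """All the activation goes to the proposition with the winning
    relation, which reaches A and is delivered; the rival is dropped."""
    delivered = [p for p in candidates if p[1] == winning_relation]
    candidates.clear()
    return delivered

# Continuation '...to Poppaea' creates a fully active GOAL / POPPAEA
# proposition, displacing GOAL / OLIVIA's half-activation on to
# THEME / OLIVIA and forcing its delivery.
delivered = resolve("THEME")
print(delivered)  # [('GIVE', 'THEME', 'OLIVIA')]
```

The continuation …a puppy would instead call resolve with GOAL, forcing delivery of GOAL / OLIVIA.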

The ways a ditransitive like give can be used often include this two-way uncertainty about the role of an argument.  Three-way uncertainty is also possible.  For instance, at given in (4), Who can be theme, goal or agent, and X can be theme or goal:

(4) Who is X given … ?
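Under the same illustrative model, the three-way case just splits the activation further:

```python
A = 1.0  # delivery threshold, as before

# At 'given' in (4), Who could be theme, goal or agent, so its available
# activation is split three ways; X's is split two ways.
who_candidates = {relation: A / 3 for relation in ("THEME", "GOAL", "AGENT")}
x_candidates = {relation: A / 2 for relation in ("THEME", "GOAL")}

# No candidate reaches the threshold, so nothing is delivered until
# later words resolve the uncertainty.
assert all(level < A for level in who_candidates.values())
assert all(level < A for level in x_candidates.values())
```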

Still to come

In (3), the treatment of the preposition to in one continuation and the determiner a in the other still has to be accounted for.  This is not a problem, but it will not be discussed yet.  The purpose of LanguidSlog is to show that structure-in-the-mind is a fallacy.  The purpose of this piece is simply to show that a solution without structure is possible, not to reveal it entirely.

The approach has nonetheless been tried out on a wide range of syntax – actives, passives, wh-interrogatives, polar interrogatives, imperatives, coordination (of modifiers, arguments and clauses), subordinate clauses, and verb-plus-particle forms (including disambiguation between particle and preposition).

The split-activation tactic used for sentence (3) is widely applicable.  The limits on what can be done with it and the limits on grammatical acceptability appear to be connected.

Three crucial points on which the whole approach depends can also be explained.  One is how junctions are correctly identified in a one-pass, left-to-right process.  Another is how a concept is formed, with parts of its content shared with many other concepts.  The third is how language knowledge is acquired.

Watch this space!

Mr Nice-Guy

4 comments

  1. SH says:

    I may have entirely missed the thread (I’m afraid LS6 rather lost me), but does this post not just show another notation for the kinds of dependencies that form linguistic structure? By ‘a solution without structure’, do you mean a notation that doesn’t feature hierarchy?
    And can you explain a bit more the notion that ‘what determines that structure – word order, inflection etc – is only in the sentence to make the phonological string as brief as possible’? Is this speaking in terms of linguistic evolution, meaning that syntax evolved to reduce the load on phonology?

    • Mr Nice-Guy says:

      Thanks for highlighting the confusing bits, SH. Please continue! Where was LS6 difficult?
      First of all, there’s a little terminology problem. LS1-6 show structure can’t participate in sentence processing and therefore can’t be explanatory. But structure is useful to describe a sentence in some graphical way. Let’s call descriptive structure Sd and explanatory structure Se.
      Your first question (‘…form linguistic structure?’). The set of dependencies (I call them ‘junctions’) would likely be the same in my approach as in any other dependency grammar. In that sense, mine is ‘another notation’. But I don’t link together the junctions in a sentence to form an overall Sd (and of course not an Se). The first diagram in LS8 may appear to contradict that; but the intention was simply to show that where a phonological word is shared by two junctions, the same P / C / M must apply in each. The crucial point in my approach is that meaning is delivered to cognition for junctions, not for the complete sentence. Analysis, not synthesis.
      Your second (‘…doesn’t feature hierarchy?’). I mean ‘a solution without Se’. Furthermore the notation is not an Sd. Actually the notation hasn’t been revealed yet. The triangles are unwieldy and I use a tabular format (to be revealed around LS12). A table shows the steps in the one-pass/left-to-right processing of a sentence and the points at which meaning is delivered. What is depicted is a process, not an end-product.
      Your third (‘…brief as possible?’). In the mind, the meaning of a sentence is a bundle of three-concept propositions like KISS / AGENT / JOHN. The propositions in a bundle are only linked by simultaneity, not by sequence or hierarchical relations. Hence ‘analysis’ above. However, speed of articulation/audition is optimised by omitting as many as possible of the concept-occurrences that are in the bundle. For example in ‘John kissed Lucy’ only one token for KISS is needed and no tokens for AGENT or PATIENT – because (in this case) the canonical word-order of English does the necessary. Sure, the sentence can neatly be depicted as an Sd but that doesn’t entail an Se being involved in sentence processing.
      Last question (‘…load on phonology?’). I think so.

  2. SH says:

    I’m sorry, that was a typo – it was LS7 that I found difficult, and I suspect that’s just because I’m not parsing the diagrams in the right way. I’ve never been much one for visualisations.
    An interesting take on how propositions are held in the mind! It hadn’t occurred to me to regard functional grammar concepts as fundamental cognitive units…
    Regarding the role of syntax as against phonology etc, I think there are other pressures on communication than cognitive load (such as noisy environments or competition for a listener’s attention) which may be more salient in that regard.

  3. Mr Nice-Guy says:

    Thanks, SH. In LS7 the first diagram is the most important. Perhaps I should have shown progression with some arrows: TO each of the P concepts and FROM the M / R / M proposition. LS10, out next week, explains what I mean by ‘progression’ through the mental network and perhaps that will be enough for understanding the output side. As for input, I’m assuming that speech sounds and other noise are processed upstream, extracting sets of phonemes, each set converging on its own P node. I need to work that out properly but my plan is first to validate the mental architecture by developing a robust theory of syntax and only then to think about how the architecture could support phonology.
    I hope that covers your second para as well as the first sentence.
    Your ‘interesting take…’ is gratifying. LS8 (just out) reveals where I’m coming from.
    Re ‘functional grammar concepts’, again LS10 should help. Until then, just be careful about ‘fundamental cognitive units’: I’m proposing that a concept is not AT a node in the mental network, but is given by progression FROM the node. You could consider the node as a fundamental unit but in isolation it has no content and is not differentiated from other nodes.
