We’ve said that the sharing of a concept between multiple propositions means propositions exist in a network. After illustrating the idea, this piece argues that unbroken progression through the network could answer the question ‘How can knowledge stored in the mind be both program and data?’
All of Network Grammar comes from introspection by an IT man. This piece is even more speculative than the others. I could omit it and dive straight into syntax using the ideas in NG7 and NG8. But if you can accept – provisionally – the ideas here, you’ll find my sentence analyses more plausible.
If they work for a representative sample of English sentences, the view of mental architecture here and elsewhere in the blog should be worthy of attention from neuroscience people. Could this logical architecture have a physical realisation in neurons and synapses?
At least LanguidSlog has some methodology. Few other writers on language are upfront with their assumptions about what supports it.
So far triangles have only been used to illustrate language knowledge. The simple taxonomy in NG4 can be re-used to show triangles as what I’ve been calling ‘conscious knowledge’:
In every proposition here, the relation is HAS PROPERTY. The relation is directional, as shown by the arrowheads: DOG / HAS PROPERTY / MAMMAL, not MAMMAL / HAS PROPERTY / DOG. To work in the other direction the proposition would need a different relation: MAMMAL / INSTANTIATED / DOG.
A dependency grammarian like Richard Hudson would use ISA in place of HAS PROPERTY. That would work fine in the diagram as it stands but not when the diagram is elaborated. For example, given that not all mammals have tails, DOG / HAS PROPERTY / TAIL is OK but DOG / ISA / TAIL is not. ISA can build a hierarchy of propositions – for example, a taxonomy for zoologists – but it does not tell the full story:
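To make the directionality concrete, here is a minimal sketch of propositions as directed, labelled triples. The data structure and function names are my own illustration, not part of Network Grammar; the triples are taken from the examples above.

```python
# A proposition is a directed triple: (subject, relation, object).
# Storing them in a set makes direction matter automatically.
propositions = {
    ("DOG", "HAS PROPERTY", "MAMMAL"),
    ("DOG", "HAS PROPERTY", "TAIL"),
    ("MAMMAL", "INSTANTIATED", "DOG"),
    ("MAMMAL", "HAS PROPERTY", "WARM BLOODED"),
    ("BIRD", "HAS PROPERTY", "WARM BLOODED"),
}

def holds(subject, relation, obj):
    """A proposition holds only in the stated direction."""
    return (subject, relation, obj) in propositions

print(holds("DOG", "HAS PROPERTY", "MAMMAL"))   # True
print(holds("MAMMAL", "HAS PROPERTY", "DOG"))   # False: wrong direction
print(holds("MAMMAL", "INSTANTIATED", "DOG"))   # True: the reverse relation
```

Reversing a proposition needs a different relation, not a reversed arrow – exactly the DOG/MAMMAL point made above.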
(I’ve slimmed down the symbols and represented HAS PROPERTY with a simple H in the relevant concept circles.)
For none of the added properties would ISA be appropriate. Most would be shared with other concepts; for example, BIRD / HAS PROPERTY / WARM BLOODED. One that is not shared is SUCKLER, which, by definition, is specific to MAMMAL but is nonetheless only one of many characteristics of mammals.
I’ve left out the properties of LABRADOODLE. Please have a go at defining a few. That’s not so easy because, being a crossbreed, it has some properties from LABRADOR and others from POODLE – each of which HAS PROPERTY / DOG.
I’ve no idea how the network is implemented physically, yet my logical model seems to rely on the possibility of many, many connections at a node. What if neurophysiology says a node can have only a few connections?
That objection can be overcome by recognising that the properties of a concept need not all be connected directly by HAS PROPERTY to the node for that concept. For example:
This might actually be useful because there could be some optimal way to arrange the attachment of the properties of MAMMAL to allow other concepts to attach intermediately. For example, BIRD could attach (directly or indirectly) to the node that HAS PROPERTY / WARM BLOODED.
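This indirect attachment can be sketched in code. Everything here is an assumption for illustration: the limit of three links per node, the intermediate node names M1 and M2, and the particular properties of MAMMAL. The point is only that chained links leave every property reachable.

```python
MAX_LINKS = 3  # an assumed physical limit on connections per node

# Properties chained through intermediate nodes (M1, M2 are hypothetical).
edges = {
    "MAMMAL": ["WARM BLOODED", "M1"],
    "M1": ["SUCKLER", "M2"],
    "M2": ["HAIRY", "FOUR LIMBS"],
}

def properties(node):
    """Collect all properties reachable via direct or chained links."""
    found, stack = [], list(edges.get(node, []))
    while stack:
        n = stack.pop()
        if n in edges:          # intermediate node: follow its links too
            stack.extend(edges[n])
        else:
            found.append(n)
    return sorted(found)

# No node exceeds the assumed limit, yet MAMMAL keeps all four properties.
assert all(len(links) <= MAX_LINKS for links in edges.values())
print(properties("MAMMAL"))
# ['FOUR LIMBS', 'HAIRY', 'SUCKLER', 'WARM BLOODED']
```

Other concepts could then attach at whichever intermediate node is convenient – BIRD at the link to WARM BLOODED, for instance.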
It should now be clear that, starting at any node in the network, the concept there is given by following the paths radiating from it. Each path splits into two or more at each step. The process is essentially divergent (although it’s possible for two paths to converge on some other concept, e.g. from LABRADOR and POODLE to DOG).
Each concept is defined by a subset of all the rest. Although a node is the locus of a concept, it has no content. Each one is unique because of how it is connected to all the others. There must be progression along all these paths. (I’m avoiding ‘spreading activation’ because that phrase is used in other quite specific ways. The idea of activation is however crucial in NG sentence analysis as hinted in LS8.)
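The divergent progression can be sketched as a step-by-step walk outward from a node. The graph below is invented for the sketch (using the LABRADOODLE example), and the walk is a plain breadth-first traversal, not a claim about how the brain does it.

```python
# Each node lists the nodes its outgoing paths reach (invented example).
graph = {
    "LABRADOODLE": ["LABRADOR", "POODLE"],
    "LABRADOR": ["DOG"],
    "POODLE": ["DOG"],          # two paths converge on DOG
    "DOG": ["MAMMAL", "TAIL"],
    "MAMMAL": ["WARM BLOODED", "SUCKLER"],
}

def radiate(start, steps):
    """Return the new concepts reached at each step outward from start."""
    frontier, seen, layers = {start}, {start}, []
    for _ in range(steps):
        frontier = {n for f in frontier for n in graph.get(f, [])} - seen
        seen |= frontier
        layers.append(sorted(frontier))
    return layers

print(radiate("LABRADOODLE", 3))
# [['LABRADOR', 'POODLE'], ['DOG'], ['MAMMAL', 'TAIL']]
```

Note how the paths from LABRADOR and POODLE converge: DOG appears once in the second layer, while the overall frontier otherwise widens – the divergence described above.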
What happens as paths diverge? There’s little cognitive value in sequences where HAS PROPERTY means ISA, because the concepts become increasingly general. DOG → MAMMAL → ANIMAL continues to LIVING THING → THING. THING is vacuous: it can connect via HAS PROPERTY to nothing and via INSTANTIATED to anything.
More typically, HAS PROPERTY (≠ ISA) leads to increasingly granular concepts. For example, CARNIVOROUS could lead to very many conceptually indivisible end-points. When CARNIVOROUS is encountered in context, we probably need only a small subset of end-points to get the gist, not necessarily the gory details.
There must be end-points but what are they? I imagine them as the ‘pixels’ on a ‘screen’ that forms the interface between the network and consciousness. My overall model of cognition is:
Perception, introspection and action all require progression along many paths through the network. In ‘acquired knowledge’, nodes may be linked or unlinked to reflect these experiences. This part includes phonological patterns and enables recognition of those patterns from new tokens the subject experiences through sound or sight. The patterns link to the concepts needed to make sense of language input. Most of the concepts are in ‘acquired knowledge’ but some, like LOVE or RED, are ‘hardwired’. The phonological patterns are also linked through to the motor channels, so it’s possible to follow a reverse path from the concepts in order to speak, sign or write.
Rough and ready
Yes, the last two paras took me well out of my comfort zone. But this stuff is not the focus of Network Grammar. The intention here is simply to reveal enough of my mindset to facilitate discussion of NG sentence analysis in upcoming pieces.