
NG10. Knowledge in the mind is both program and data

[Figure: Mind the network]

We’ve said that the sharing of a concept between multiple propositions means that propositions exist in a network.  After illustrating the idea, this piece shows that progression through the network, with no gaps, could be the answer to ‘How can knowledge stored in the mind be both program and data?’

Caveat emptor!

All of Network Grammar comes from introspection by an IT man.  This piece is even more speculative than the others.  I could omit it and dive straight into syntax using the ideas in NG7 and NG8.  But if you can accept – provisionally – the ideas here, you’ll find my sentence analyses more plausible.

If they work for a representative sample of English sentences, the view of mental architecture here and elsewhere in the blog should be worthy of attention from neuroscience people.  Could this logical architecture have a physical realisation in neurons and synapses?

At least LanguidSlog has some methodology.  Few other writers on language are upfront with their assumptions about what supports it.

Zooming out

So far triangles have only been used to illustrate language knowledge. The simple taxonomy in NG4 can be re-used to show triangles as what I’ve been calling ‘conscious knowledge’:


[Figure: the taxonomy from NG4 redrawn as HAS PROPERTY propositions]

In every proposition here, the relation is HAS PROPERTY.  The relation is directional, as shown by the arrowheads: DOG / HAS PROPERTY / MAMMAL, not MAMMAL / HAS PROPERTY / DOG.  To work in the other direction the proposition would need a different relation: MAMMAL / INSTANTIATED / DOG.

A dependency grammarian like Richard Hudson would use ISA in place of HAS PROPERTY.  That would work fine in the diagram as it stands but not when the diagram is elaborated.  For example, given that not all mammals have tails, DOG / HAS PROPERTY / TAIL is OK but DOG / ISA / TAIL is not.  ISA can build a hierarchy of propositions – for example, a taxonomy for zoologists – but it does not tell the full story:
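For the programmers among you, the distinction can be made concrete in a toy sketch – Python purely for illustration, with relation names and triples of my own invention, not a fixed NG inventory:

```python
# Each proposition is a directed, labelled edge: (subject, relation, object).
# Direction matters: DOG / HAS PROPERTY / MAMMAL, never the reverse.
propositions = {
    ("DOG", "HAS PROPERTY", "MAMMAL"),
    ("DOG", "HAS PROPERTY", "TAIL"),
    ("MAMMAL", "HAS PROPERTY", "ANIMAL"),
    ("MAMMAL", "HAS PROPERTY", "SUCKLER"),
    ("BIRD", "HAS PROPERTY", "WARM BLOODED"),
    ("MAMMAL", "INSTANTIATED", "DOG"),  # the other direction needs its own relation
}

def properties_of(concept):
    """Follow HAS PROPERTY edges outward from one node."""
    return {obj for subj, rel, obj in propositions
            if subj == concept and rel == "HAS PROPERTY"}

print(sorted(properties_of("DOG")))  # ['MAMMAL', 'TAIL']
```

Note that DOG / HAS PROPERTY / TAIL sits happily alongside DOG / HAS PROPERTY / MAMMAL here, whereas a single ISA relation could only carry the taxonomic links.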

[Figure: the taxonomy elaborated with further properties]


(I’ve slimmed down the symbols and represented HAS PROPERTY with a simple H in the relevant concept circles.)

For none of the added properties would ISA be appropriate.  Most would be shared with other concepts; for example, BIRD / HAS PROPERTY / WARM BLOODED.  One that is not shared is SUCKLER, which, by definition, is specific to MAMMAL but is nonetheless only one of many characteristics of mammals.

I’ve left out the properties of LABRADOODLE.  Please have a go at defining a few.  Not so easy because, being a crossbreed, it has some properties from LABRADOR and others from POODLE – each of which HAS PROPERTY / DOG.

Physically possible?

I’ve no idea how the network is implemented physically, yet my logical model seems to rely on the possibility of many, many connections at a node.  What if neurophysiology says a node can have only a few connections?

That objection can be overcome by recognising that the properties of a concept need not all be connected directly by HAS PROPERTY to the node for that concept.  For example:

[Figure: properties of MAMMAL attached via intermediate nodes]


This might actually be useful because there could be some optimal way to arrange the attachment of the properties of MAMMAL, allowing other concepts to attach at intermediate nodes.  For example, BIRD could attach (directly or indirectly) to the node that HAS PROPERTY / WARM BLOODED.
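Here is a rough sketch of that escape route.  The limit of three links per node and the ‘spill-over’ naming are my own assumptions, chosen only to show the shape of the idea:

```python
# Sketch under a fan-out assumption: a node may carry at most MAX_LINKS
# outgoing connections, so extra properties spill onto chained nodes.
MAX_LINKS = 3

def attach(concept, properties):
    """Return (subject, relation, object) triples respecting the fan-out limit."""
    triples, node = [], concept
    props = list(properties)
    while props:
        # Reserve one slot for a possible onward link to a spill-over node.
        batch, props = props[:MAX_LINKS - 1], props[MAX_LINKS - 1:]
        for p in batch:
            triples.append((node, "HAS PROPERTY", p))
        if props:                    # more properties remain: chain another node
            nxt = node + "+"         # hypothetical spill-over node name
            triples.append((node, "LINK", nxt))
            node = nxt
    return triples
```

With four properties for MAMMAL, the first two attach directly and the rest hang off a chained node – and BIRD could attach at whichever node carries WARM BLOODED.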

Zooming in

It should now be clear that, starting at any node in the network, the concept there is given by following the paths radiating from it.  Each path splits into two or more at each step.  The process is essentially divergent (although it’s possible for two paths to converge on some other concept, e.g. from LABRADOR and POODLE to DOG).

Each concept is defined by a subset of all the rest.  Although a node is the locus of a concept, it has no content.  Each one is unique because of how it is connected to all the others.  There must be progression along all these paths.  (I’m avoiding ‘spreading activation’ because that phrase is used in other quite specific ways.  The idea of activation is however crucial in NG sentence analysis as hinted in LS8.)
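A toy traversal can illustrate a concept being ‘given by’ everything its paths reach.  The adjacency list and names here are my invention, not NG doctrine:

```python
from collections import deque

# Hypothetical one-step HAS PROPERTY links.
links = {
    "LABRADOR": ["DOG"], "POODLE": ["DOG"],
    "DOG": ["MAMMAL", "TAIL"],
    "MAMMAL": ["ANIMAL", "WARM BLOODED", "SUCKLER"],
    "BIRD": ["ANIMAL", "WARM BLOODED"],
}

def radiate(start):
    """Follow every outward path; the node 'is' whatever its paths reach."""
    reached, frontier = set(), deque([start])
    while frontier:
        node = frontier.popleft()
        for nxt in links.get(node, []):
            if nxt not in reached:   # paths may converge, e.g. LABRADOR and POODLE on DOG
                reached.add(nxt)
                frontier.append(nxt)
    return reached
```

The starting node itself carries no content: delete the labels and LABRADOR would still be distinguished from BIRD purely by the set of nodes it reaches.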

What happens as paths diverge?  There’s little cognitive value in sequences where HAS PROPERTY means ISA because the concepts become increasingly general.  DOG → MAMMAL → ANIMAL continues → LIVING THING → THING.  THING is vacuous.  It can connect via HAS PROPERTY to nothing and via INSTANTIATED to anything.

More typically HAS PROPERTY (≠ ISA) leads to increasingly granular concepts.  For example, CARNIVOROUS could lead to very many conceptually indivisible end-points.  When CARNIVOROUS is encountered in context, we probably only need a small subset of end-points to get the gist, not necessarily the gory details.

There must be end-points but what are they?  I imagine them as the ‘pixels’ on a ‘screen’ that forms the interface between the network and consciousness.  My overall model of cognition is:

[Figure: overall model of cognition]


Perception, introspection and action all require progression along many paths through the network.  In ‘acquired knowledge’, nodes may be linked or unlinked in order to reflect these experiences.  This part includes phonological patterns and enables recognition of those patterns from new tokens the subject experiences through sound or sight.  The patterns link to the concepts needed in order to make sense of language input.  Most of the concepts are in ‘acquired knowledge’ but some, like LOVE or RED, are ‘hardwired’.  The phonological patterns are also linked through to the motor channels so it’s possible to follow a reverse path from the concepts in order to speak, sign or write.
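The two-way linkage between phonological patterns and concepts could be sketched like this – the patterns and pairings are toy examples of mine, not claims about real phonology:

```python
# Sketch: the same store of pattern-concept links serves both directions,
# recognition (hearing a pattern) and production (speaking a concept).
pattern_to_concept = {"/dog/": "DOG", "/kat/": "CAT"}
concept_to_pattern = {c: p for p, c in pattern_to_concept.items()}

def recognise(pattern):
    """Sound or sight in: follow the link from pattern to concept."""
    return pattern_to_concept.get(pattern)

def produce(concept):
    """Concept in: follow the reverse path out to the motor channels."""
    return concept_to_pattern.get(concept)
```

The point of the sketch is only that no second store is needed: the reverse path is the same set of links traversed the other way.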

Rough and ready

Yes, the last two paras took me well out of my comfort zone.  But this stuff is not the focus of Network Grammar.  The intention here is simply to reveal enough of my mindset to facilitate discussion of NG sentence analysis in upcoming pieces.


  1. SH says:

    Fun ideas, taken in the speculative spirit laid out in the intro and outro! I’ve been trying to think of a falsifiable hypothesis that a neuropsych person could test out, and haven’t been coming up with much. What would you say to an honours student, say, who wanted to explore your ideas here?

  2. Mr Nice-Guy says:

    I read an allusion to the Bonzo Dog Doo-Dah Band, but did you mean it, SH?

    No, I can’t think of a hypothesis either. Perhaps language is the best hope for those guys. Broadly I’m hypothesising a mental architecture (starting from ‘no addressability’) and trying to develop a grammar that fits both my hypothesis and the empirical facts. If successful, neuropsych should try to reconcile the hypothesis to what they know about what’s inside our skulls. They could do that sooner but if I fail, they’ll have wasted their time.

    As for the question, is your honours student doing linguistics or neuropsych or what? To a linguistics student I would say: ‘Follow LanguidSlog if you have the time but don’t try to persuade your teachers. They’ll hate it and you too, probably. When you’re comfortably PostDoc – and if NG hasn’t caught on by then – you can revisit the theory and consider whether it could be a way to distinguish yourself.’

    To a neuropsych student: ‘Please read up to LS10 carefully. Let Mr Nice-Guy know if he’s gone seriously wrong. No need to go on into sentence analysis – but you may find it interesting.’

  3. JCS says:

    Greetings! I’m hoping you still monitor comments as I seem to have come to this late. I was wondering if you had considered relative meaning as an alternative to the sequential “has property” structure you describe here. Relative meaning, essentially defining a concept by what it is NOT, seems more logical to me because positive “has property” definitions can break down. For example, a zebra might have “stripes” as a property, but if some kind of genetic mutation occurred, and a zebra gave birth to a stripe-free animal, would this offspring cease to be a zebra? One of the properties of humans is that we walk upright, but those who have lost their legs are still human. Mammals were all thought to give birth to live offspring (rather than eggs) until the discovery of the duck-billed platypus, if I’m not mistaken. Defining something according to what it is, rather than what it is not, is tricky. Best regards, JCS

    • Network Grammar says:

      Hi JCS. Yes, still monitoring but a bit slow this time – sorry!
      Three objections to your ‘what it is NOT’ idea come to mind. One, how would an infant start acquiring concepts? Two, a concept would need more storage if defined negatively. Three, the properties that something lacks would need to be defined somehow.
      But the stripe-less zebra is still there! NG would handle this by holding one concept for the type of thing in general and further concepts for specific instances of the thing. The generalised zebra would have stripes. That’s unavoidable – at least for English speakers – because every alphabet book has ‘Z is for zebra’ with a picture of a striped, pony-sized animal.
      A specific instance might include properties such as the animal’s name, age, sex, time and place of the encounter etc, as well as having the property ‘zebra (general)’. If the instance lacked stripes, it would also have the property ‘no stripes’.
      Each individual’s first encounter with some type of thing will be unique in some way (even for ‘zebra’ with everyone using the same alphabet book). It follows that one individual’s generalised concept for a type of thing may be a bit different from another’s. Of course there is a lot of overlap between different individuals’ concepts for a particular type of thing, otherwise they could not communicate usefully.

      • JCS says:

        Hi RH. Thanks for your reply. You must be familiar with the vectors used by Google Translate and others (Word2vec)? That’s a good example of the relative (and negative) definition of concepts. The model makes no attempt to define concepts by inherent properties; they cluster together in “word-space” based on statistical relationships. One of the potential applications of your Network Grammar is natural language processing, and (in my humble opinion) your “has property” structure could struggle because words have different properties in different languages. For example, in English we have “sheep” (the animal) and “mutton” (the meat), whereas in French both terms are covered by “mouton”. I appreciate my example would be more convincing if people still ate mutton, but I’ve taken it from Saussure’s 100-yr-old text! If you persist with “has property”, you’re going to have to draw up a separate concept structure for every language, I reckon. That’s quite a lot of work, could make translation and interpretation more problematic, and comes with storage implications for multi-lingual speakers (your point two). Your first point, on language acquisition, is fascinating. My daughter called me “mummy” for at least six months. I kept reminding her “I’m not your mother”, and eventually she got it: daddy is not mummy! Is this an example of a child learning negative, relative relationships? I’d like to think so. Her third word was “cow-cow” (cuddly toy). Cow-cow is not mummy and not daddy! Makes sense to me! (though I appreciate that the first concept, mummy, would have to be positively defined) I look forward to your comments, JCS

