56 years is a long time for doing something. Beethoven’s age at death. And Hitler’s. But modern linguistics, born of Chomsky’s Syntactic Structures, is now 57. Is it also dead, or just languishing? Anyway, it hasn’t delivered the killer theory. OK, there’s a lot else to linguistics but, as Rutherford said, that’s stamp-collecting, not physics.
To be explanatory and valid generally, shouldn’t the theory of language – the physics – be based on mental architecture?
‘Architecture’ means the way data is stored and processed at micro-level – much as a computer might be described as having ‘16-bit architecture’. ‘Structure’ is also significant here. Together the two terms could mislead because, in a literal building, it is the structure that enables the architecture. Here it’s better to think of structure as the model and architecture as the Lego® bricks used to make it.
Mental architecture should accommodate (a) cognitively valuable stuff and (b) phonologically usable stuff, encoding (a) into (b) and decoding (b) into (a). Of course, (a) would include much besides what is shared using language.
There are many theories. Multiple descriptive theories could logically coexist if each were drawn from a different subset of language use. But no more than one explanatory, mental architecture-based theory can be true. So, of Minimalism and CG, HPSG, SFG, TAG, WG and all the other ~G spots – which is the one?
Probably none of them. Wouldn’t one true theory have led computational linguists to mimic mental processes in their software? They still prefer their clever maths.
You could agree with the above and then object that, while mental architecture is practically unknown, even the best language theory must be provisional. But research in mental architecture and in language should be synergistic. A bit of speculation on one side could lead to better speculation on the other, and so on. Since language is the part of cognition that has the most detailed data, linguists shouldn’t wait for neuroscience but should grab the initiative.
We should start again, avoiding a pitfall that makes theories implausible. Specifically, we shouldn’t model human language as in a stored-program computer – with complex processes manipulating complex data structures. A computer may perform a human-like task but that doesn’t mean a human could perform that task in the same way. That would need addressable storage – for which neuroscience provides no evidence.
Presume addressability and you can postulate fantastical properties for lexical items and elaborate rules about the interplay of those properties. That is why there are so many theories endlessly competing.
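To make the objection concrete, here is a minimal sketch of the kind of model being criticized – not anyone’s actual theory, and every feature name in it is hypothetical. A lexicon sits in addressable storage (a dictionary keyed by word form), each entry carries a bundle of postulated properties, and a rule, entirely separate from the data, looks items up and checks the interplay of those properties:

```python
# Toy illustration of the stored-program style under criticism:
# complex data (feature bundles in an addressable lexicon) manipulated
# by a separate process (an agreement-checking rule). The feature
# names ("cat", "num") are hypothetical, chosen only for illustration.

LEXICON = {                          # addressable storage: items fetched by key
    "dog":   {"cat": "N", "num": "sg"},
    "dogs":  {"cat": "N", "num": "pl"},
    "barks": {"cat": "V", "num": "sg"},
    "bark":  {"cat": "V", "num": "pl"},
}

def agrees(subject: str, verb: str) -> bool:
    """A rule, separate from the data, inspecting stored properties."""
    s, v = LEXICON[subject], LEXICON[verb]
    return s["cat"] == "N" and v["cat"] == "V" and s["num"] == v["num"]

print(agrees("dog", "barks"))   # True  – singular noun, singular verb
print(agrees("dogs", "barks"))  # False – number mismatch
```

Nothing stops a programmer from elaborating such a scheme indefinitely – which is exactly the point: once storage is addressable, any property and any rule can be postulated, and theories multiply without constraint.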
Instead, we should look for an autonomous, homogeneous architecture – no separation of process from data, no ‘ghost in the machine’. You may wonder how there could then be, for example, movement of sentence constituents. That persistent feature of Chomsky’s evolving position is indeed impossible to reconcile with such an architecture. My hunch is that ‘movement’ is at best a misleading metaphor.
I have more ideas and will detail them later in this blog. If you agree, disagree or have other ideas, please post your thoughts here. Together we can get to the theory that academia has somehow avoided (perhaps because orthodoxy is the safest way to get funding and build a career). A slog – but for 57 weeks, not years.
The blog will not routinely cite academic literature. This is partly to achieve a less formal style. But it is mainly because there is little out there to support the radical ideas.
The arguments are built from quite simple premises. Jargon cannot be avoided altogether, but readers will find that Wikipedia gives enough support.
Network Grammar will reflect the mindset of a 1960s programmer, not a linguistics prof. I spent 39 years building software. Having never screwed up, and having done some innovative things, I looked around for an amusing computer project to occupy my retirement. ‘Computable meaning’ resonated nicely with Turing (1936) and promised to keep me busy for a while. No, I didn’t know what I was talking about … meaning that can be computed from natural language and then computed with?
As a start, I went back to uni to find out about language. Fun but frustrating. Linguistics doesn’t have all the answers and what it does offer lacks plausibility. My work since has therefore been on the basic issues that have consumed thousands of man-years since Chomsky (1957).
The ideas that have emerged seem promising. I have approached several scholars but sparked no interest. That would be less vexing if one of them had said ‘Your ideas will not work because A, B and C’. Then, if A, B or C were good, I could get on with something completely different.
Likely my problem is that I am implying to these profs ‘Your ideas will not work because X, Y and Z’. But I am still …