Bad Metaphors Make for Bad Theories

Imagine, for a moment, that you have been thrown back into the Ellisonesque world of the 1980s, with a delightful perm and even better trousers.  One fragile Monday morning, you are sitting innocently enough at your cubicle, when your boss comes to you with the summary of a report you have never read, on a topic you know nothing about.  “I’ve read the précis and I’d love to take a peek at the report,” he intones, leaning in.  “Apparently, they reference some fairly intriguing numbers on page 76.”  You stare blankly at him, wondering where this is going.  “Yess—so I’d love it if you could generate the report for me.”  He smirks at you expectantly.  You blink, twice, then begin to stutter a reply.  But your boss is already out the door.  “On my desk by five, Susie!” he whistles (as bosses are wont to do) and scampers off to terrorize another underling.

You would be forgiven if, at that moment, you decided it was time to knock a swig or two off the old bourbon bottle and line up some Rick Astley on the tapedeck.

Because the task is, in a word, impossible.

Certainly you might, on a slightly less surreal morning, tidily summarize a long research article into a brief abstract. But the reverse process simply doesn’t work.  When an abstract is created, information is discarded for that purpose.  Which means that it can’t be recovered.  Which means that you, sweet Susie, have a problem.

The insight that this gives rise to – that abstraction can only work in one direction – becomes critically important when we study language, because words are abstractions of what they mean.  Think about the difference between, say, the word ‘dog’ and a real-life dog.  The word is a sound symbol: a brief sequence of phonemes.  The dog is a complex perceptual entity: visually, it has quite a number of different discernible features (its size, its coat, its snout); it has a distinctive woof and a distinctive gait; it smells in a particular way when it comes in out of the rain.  This reality is not—and cannot be—fully captured by the word.  You simply cannot code all of the complexity of dog into ‘dog.’

This means that while the world can get abstracted into words, we can’t get all of the ‘world’ back out of words.

That may seem blindingly obvious.

But it might not appear that way if you were a trained philosopher.

In analytic philosophy – and to some extent in linguistics and cognitive psychology – there are those who believe that words get their meaning by reference to a set (or class) of things in the world.  Take the following example, from the Stanford Encyclopedia of Philosophy:

How do I manage to talk about George W. Bush and thereby say meaningful and true things about him? In a word: Reference. More picturesquely, we are able to use language to talk about the world because words, at least certain types of words, somehow ‘hook on to’ things in the world — things like George W. Bush.

If you aren’t intimately familiar with this idea, not to worry.  In the history of ideas, 'reference' isn't going to have much of a shelf life.  Luckily, the basics are fairly easy to grasp.

Just think of the way we use the word ‘refer’ in everyday language.  In the course of a conversation, for example, a friend might confide to you that Todd was ‘referring’ to his mother-in-law in salacious terms, and you might decide you’d rather he didn’t ‘refer’ to your mother that way.

‘Refer’ is quite a handy word conversationally, because it allows us to specify “Oh, but I meant this, not that.”  The problem arises when it is understood literally by theorists of various persuasions, who want to say that words get their meaning by ‘referring’ to things in the world.  A theory of reference is inconsonant with the idea that words are abstractions.  Words cannot ‘latch onto’ the world and feed us back meaning any more than an abstract can feed us back a much longer paper; the metaphor simply doesn’t work.  To build a theory around the idea that they can and do is to deal in fictions.

As you may remember from an earlier post, our understanding of the mind and of language is governed by the metaphors we use to guide our study.  There are better and worse metaphors—subject to empirical scrutiny—and some that make no sense at all.  It would be hard to say how language is ‘like a toaster,’ for example, or how using a ‘toaster model’ of language would add anything useful to our methods of study.  Of course, positing a 'referential' model of language is far more tempting than elaborating a toaster model; so long as we don't push on the analogy too hard, it seems that language just might work that way.

The problem is that when we do push on the metaphor, and think about the problem in cognitive terms -- instead of 'hand wavy' ones -- it becomes both logically incoherent and computationally intractable.  Indeed, Wittgenstein famously mocked the concept of reference in the Philosophical Investigations:

"This is connected with the conception of naming as, so to speak, an occult process.  Naming appears as a queer connexion of a word with an object.  --And you really get such a queer connexion when the philosopher tries to bring out the relation between name and thing by staring at an object in front of him and repeating a name or even the word "this" innumerable times.  For philosophical problems arise when language goes on holiday.  And here we may indeed fancy naming to be some remarkable act of mind, as it were a baptism of an object.  And we can also say the word "this" to the object, as it were address the object as "this"--a queer use of of the word, which doubtless only occurs in doing philosophy."

Yet here we are, sixty years later, still recycling the same bad ideas.  I admit -- it puzzles me.

For a technical discussion of some of the problems with 'referential' models, see Ramscar et al. (2010), Section 1 (pp. 910-912).

Ramscar, M., Yarlett, D., Dye, M., Denny, K., & Thorpe, K. (2010). The effects of feature-label-order and their implications for symbolic learning. Cognitive Science, 34(6), 909-957. doi:10.1111/j.1551-6709.2009.01092.x

21 responses so far

  • "Shecky R." says:

    This reminds me a lot of the 'General Semantics' movement of Korzybski/Hayakawa et al. (should it?), which I always liked a lot more than Wittgenstein... but again, I'm going back decades, and haven't really kept up.

  • I think you are doing the notion a disservice. Sure, words do not automatically or inherently refer. They are merely signs. But the act of using signs refers, because the language community refers using those signs. While sometimes people get a little overheated about the magical nature of words, they aren't often philosophers of language. The most likely cause of reference is something akin to a process of natural selection (in culture), which goes by the term "teleosemantics" and was proposed by Ruth Garrett Millikan in her Language, Thought, and Other Biological Categories.

    Some aspects of language, possibly most, are conventional. But terms, like names and nouns, can refer in the usage of the linguistic community. "Dog" refers to members of Canis lupus domesticus because that's how we use it. And when an individual - say my five year old son - uses it wrongly we can say to him that he is wrong, because "dog" refers to dogs.

    The problem of denotation in philosophy is an old one. It goes back at least to the medievals. It is the problem of how words, having intensions (meanings), can denote the world (extensions). You imply we philosophers do not know this, when in fact we have talked about it extensively for over a thousand years, recently almost to the exclusion of anything else. I suggest you look at the entries in the Stanford Encyclopedia of Philosophy on language and intension.

    And reference is rather different to metaphor. In fact, if you use a metaphor, by definition you are not referring. A bad metaphor that leads to a bad theory is "selfish gene"; not a philosophical error, I think.

    • melodye says:

      Sr. Wilkins,

      It's heartening to know that philosophers occasionally find their way here to read my scribblings! I took a degree in philosophy and my thesis work was within philosophy of language (specifically, how metaphors mean). While this post is intended for a broad audience, I hope you won't take me for a sometime Wikipedia scholar 😉 Now, as to the content of your comment...

      It's all fine and good for a philosopher to say:

      "Words do not automatically or inherently refer. They are merely signs. But the act of using signs refers, because the language community refers using those signs."

      And at first blush, what you have said is trivially true, because that is how we use the word 'refer' in English. But as a cognitive scientist, you are then faced with the question of how you would neurally instantiate such a system and how it would work, in biologically real terms. At that point, 'reference' stops being a useful construct. Indeed, it fast becomes computationally intractable. This is what the paper I referenced above aims to show, in fifty-odd pages.

      (As the esteemed British philosopher P.M.S. Hacker has often noted, philosophers have long been free to make claims that have no basis in reality, because they are working to describe how reality should be understood; unfortunately, I think much of contemporary analytic philosophy has gone rather far afield in this regard, and psychology has followed suit.)

      Now, back to reference: As likely you know, Wittgenstein was at pains, in his later work, to make clear that our colloquial use of words can mislead us into queer forms of thinking about language. He wrote, "The confusions which occupy us arise when language is like an engine idling, not when it is doing work."

      In this vein, when I say that we use 'reference' as a metaphor, what I mean is that we take our everyday use of the word 'refer' and then use it as a metaphor for understanding how language works. For example, we say "words get their meaning by reference to..." In this, we are trying to make a claim about how language works by understanding it (metaphorically) in plain terms. Like Wittgenstein before me, I think this is a misleading metaphor.

      Let me underscore that nowhere in this piece was I equating reference to metaphor. I am saying that, in philosophy, its use is metaphorical. Moreover, I'm arguing (or beginning to, in this brief post) that referential models of language -- while intuitively appealing -- are inherently the wrong approach to take to language. (Donald Davidson has some wonderful essays problematizing the approach as well; his work on malapropisms, in particular, is what led me to Wittgenstein and then into cognitive science).

      Of course, I have not begun (here) to lay out alternatives, so this post may be unsatisfying in that regard. There is more to come.

  • js says:

    I don't even get what the problem with THE analytic philosophical notion of reference (as if there was one dominant account within the various traditions that might plausibly be organized under that label) is supposed to be, according to you.

    Is it supposed to be this: "This means that while the world can get abstracted into words, we can’t get all of the ‘world’ back out of words"?

    If so, then you know nothing about the notion of reference as employed by any of the philosophers that discuss such a notion.

    Indeed, the notion of reference as it is usually employed is motivated precisely by the idea that words themselves cannot magically contain their semantics, but that the semantic features of words must depend on their relational properties (the relations between words, speakers, utterance contexts, and things, for starters).

    If you want to talk about bad metaphors, start with this one: "words are abstractions of what they mean." Nope, they're not. "Dog" is not an abstraction of dogs, not hardly. It's a way to speak of dogs, but that's not the same as an abstraction.

  • You cannot sever a brilliant woman from her words, nor an ignorant man. Or woman.

  • Bob O'H says:

    The word is a sound symbol: a brief sequence of phonemes. The dog is a complex perceptual entity: visually, it has quite a number of different discernible features (its size, its coat, its snout); it has a distinctive woof and a distinctive gait; it smells in a particular way when it comes in out of the rain. This reality is not—and cannot be—fully captured by the word. You simply cannot code all of the complexity of dog into ‘dog.’
    ...
    In analytic philosophy – and to some extent in linguistics and cognitive psychology – there are those who believe that words get their meaning by reference to a set (or class) of things in the world.

    I am not a philosopher, and have never even played one on TV, but I can't see where these two are different. If the word "dog" refers to Canis lupus domesticus (as John wrote), then we can fill in the rest from our understanding of what a dog is. Thus "dog" doesn't need to code all of those properties of a dog, as we fill it in with our background knowledge. I don't see it as any different to writing "x=144". That doesn't code all of the information about x (e.g. that it has the prime factorization 2⁴·3², and all in all is pretty gross), but it doesn't need to.

    I guess the point is that the set of things that the word latches on to has properties, and we can infer these properties onto a thing when we label it as a member of that set.

    • melodye says:

      Bob: Like Talkative Stranger, I agree with you. I think TS absolutely nailed the response below, but I'm going to try to flesh out the account you're giving and contrast it with what standard philosophical theories of meaning imply.

      ...

      When we learn a verbal category, we discard information for a purpose. Nietzsche writes about this in "On Truth and Lies in a Nonmoral Sense":

      “Just as it is certain that one leaf is never totally the same as another, so it is certain that the concept ‘leaf’ is formed by discarding these individual differences and by forgetting the distinguishing aspects.”

      Work on category learning suggests that Nietzsche was right: in the process of learning categories, we learn to discard featural information that is irrelevant to the category, and home in on aspects that are useful and informative. For example, we remember a lion's mane and golden colour because those features distinguish it from other, potentially similar big cats; at the same time, we need not remember the coloration of its snout or the peculiarities of its gait, which are not germane or obviously distinguishing features.

      If we think of the perceptual information we encounter as continuous, then, category learning imposes discontinuities on that continuous input. As we divvy up the world into categories, we lose and distort the information we're encountering for the purposes of communication. This is why I said that words are "abstractions" of what they mean.
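
      To make the "discarding" idea concrete, here is a minimal sketch of an error-driven (Rescorla-Wagner-style) learner, in the spirit of the work cited in the post. The animals, features, and numbers below are invented purely for illustration -- this is not the paper's simulation -- but the qualitative pattern is the point: the feature that distinguishes lions from a similar big cat ends up carrying most of the predictive weight, while the features the two categories share end up contributing very little.

      ```python
      # Toy error-driven (delta-rule / Rescorla-Wagner-style) learner.
      # Illustration only: exemplars and numbers are invented, not data.
      import random
      from collections import defaultdict

      random.seed(1)

      # Each exemplar is a set of perceptual features plus a label.
      EXEMPLARS = [
          ({"mane", "golden", "four_legs", "tail"}, "lion"),
          ({"spots", "golden", "four_legs", "tail"}, "leopard"),
      ]
      LABELS = ["lion", "leopard"]

      weights = defaultdict(float)   # weights[(feature, label)]
      RATE = 0.05

      for _ in range(5000):
          features, label = random.choice(EXEMPLARS)
          for target_label in LABELS:
              # Prediction for this label from the features that are present.
              prediction = sum(weights[(f, target_label)] for f in features)
              error = (1.0 if target_label == label else 0.0) - prediction
              # Present features share the credit/blame for the error, so
              # features common to both cats end up predicting neither strongly.
              for f in features:
                  weights[(f, target_label)] += RATE * error

      for f in ["mane", "spots", "golden", "four_legs", "tail"]:
          print(f"{f:10s} -> lion: {weights[(f, 'lion')]:+.2f}")
      ```

      Run it and the weight from "mane" to "lion" dwarfs the weights from "golden," "four_legs," and "tail": in effect, the learner has discarded the features that don't discriminate.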

      What does this mean for language? Well, this goes to what you said about "x=144" and to the "abstract" analogy I used. Imagine again we're in Susie's shoes, with an abstract in front of us. While we can't rewrite the paper from the abstract, we can make predictions about what the paper might contain. Indeed, depending on how well we know the paper's topic, we can make better or worse predictions about its content. (This ties in with what you said about "background knowledge.")

      If we conceptualize language in this way, then we start thinking about language -- both comprehension and production -- as a predictive process. Words don't "refer" to concrete, established categories out there in the world, and they don't give us access to the myriad properties that the numerous exemplars in those categories contain. Rather, using the linguistic norms of our language (or culture), we form verbal categories that pick out and emphasize certain features of the world. Notably, the exact contents of these categories will vary somewhat by individual, because the usage of words is not homogeneous (and we do not all learn words with the same exemplars, in the same contexts). Moreover, our predictions will vary by linguistic and social context, by speaker, and so on.
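
      Here is the predictive side of the same toy picture -- again, the exemplars and features are invented for illustration. Two listeners who learned "dog" from different experiences generate graded, overlapping, but not identical expectations from the very same word; and neither can reconstruct any particular dog from it.

      ```python
      # Toy sketch: a word licenses predictions about features, and those
      # predictions depend on each listener's own history with the word.
      # All exemplars here are invented for illustration.
      from collections import Counter

      def learn(exemplars):
          """Estimate P(feature | word) from the exemplars a listener happened to meet."""
          counts = Counter()
          for features in exemplars:
              counts.update(features)
          return {f: counts[f] / len(exemplars) for f in counts}

      # Two listeners, two different histories with the word "dog".
      listener_a = learn([
          {"four_legs", "barks", "small", "curly_coat"},   # poodle-ish
          {"four_legs", "barks", "small", "long_ears"},    # spaniel-ish
      ])
      listener_b = learn([
          {"four_legs", "barks", "large", "thick_coat"},   # husky-ish
          {"four_legs", "barks", "large", "herds_sheep"},  # sheepdog-ish
      ])

      # Same word, different graded expectations; no individual dog is recoverable.
      print("A expects:", sorted(listener_a.items(), key=lambda kv: -kv[1]))
      print("B expects:", sorted(listener_b.items(), key=lambda kv: -kv[1]))
      ```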

      Now, you may wonder how this relates to the kinds of things philosophers have proposed.

      To be fair to @JS, above, in the post I lumped together a number of philosophical theories of meaning, most of which I find intolerable. If you want a lengthy overview that separates out some of the camps, you can read "Theories of Meaning" over at that trusty standby, the Stanford Encyclopedia of Philosophy. In general, philosophers of language -- like, e.g., Max Black, Donald Davidson, and so on -- have wanted to say that what sentences mean is determinate (whether in terms of truth-conditions or in terms of semantics), and that you get that determinacy by reference to things in the world and the peculiar syntactic workings of the sentence. The motivation for this (generally) is to establish that there can be logical truths to arguments. Philosophers, as always, are bent on the idea that "truth" is more than just a word we use in particular ways; it's a real thing, gosh darnit, and we can generate it in language, too.

      Many of these overlap with 'decoder' theories of language -- the idea that as we speak, there is a certain determinate cognitive content which we wish to convey to our listeners (and which they must 'unspool' as they listen, as if through a secret decoder ring). All of this is a completely misdirected way of imagining language, in my view. If you want to be convinced of this, spend the next six months reading analytic philosophers of language trying to explain how metaphors work (as I did, perplexedly, when I was 21). You'll read a lot of jolly good stuff about the "ineffable" and the "indeterminate," which is philosopher-speak for "and then we ran out of a theory."

      To get more specific, though: the basic premise underlying most semantic theories is that meaning is something concrete, deterministic, 'real'; that categories are definable (or at least, prototypes of categories are); that language is somehow like mathematics, and we can evaluate it in terms of logical forms. The later Wittgenstein roundly denounced all of these ideas as patently absurd. What referential theories want to get out of language simply isn't *in* language.

      "We want to say that there can't be any vagueness in logic. The idea now absorbs us, that the ideal 'must' be found in reality. Meanwhile we do not as yet see how it occurs there, nor do we understand the nature of this 'must.' We think it must be in reality; for we think we already see it there. ...It is like a pair of glasses on our nose through which we see whatever we look at. It never occurs to us to take them off."

      Having just rambled on, the bottom line is: There's nothing wrong with using the word 'refer' colloquially. It's when you try to establish direct lines between words and the world, as analytic philosophers have, that things start getting dicey.

  • Talkative Stranger says:

    I strongly disagree with some of the previous comments. Our very way of thinking about the world requires abstraction.

    js says:

    "If you want to talk about bad metaphors, start with this one: “words are abstractions of what they mean.” Nope, they’re not. “Dog” is not an abstraction of dogs, not hardly. It’s a way to speak of dogs, but that’s not the same as an abstraction."

    "Dog" may not be an abstraction of the class of things-in-the-world that we know as "dogs", but classifying these things as "dogs" in the first place requires abstraction. I mean, if you didn't know what a dog was, you might think that a doberman was a different type of animal from maltese. But we, as English speakers, see these two things as members of the same abstract class because we call them both "dog". Similarly, European and American robins are totally different species of birds that look very different (although they both have red bellies), but we call them both "robin" and thus they get stuck in the same class.

    Bob O'H says:

    "I am not a philosopher, and have never even played one on TV, but I can’t see where these two are different. If the word “dog” refer to Canis lupus domesticus (as John wrote), then we can fill in the rest from our understanding of what a dog is."

    But "canis lupus domesticus" is another word (er, well, it's three words, but same idea). Scientific names try to make the categorization of animals more systematic, precise, and purposeful, but they are still a means of abstraction. This is because they, like any word, are taking a disparate bunch of objects in the world, which each have their own particular characteristics, and grouping them together based on the characteristics that they share. In the process of describing an individual dog as "dog", we are necessarily throwing out the things that make that dog different from all the other dogs in the world. This is the essence of abstraction. It is a matter of stripping away the unnecessary details and leaving only the parts that are perceived as most important. Furthermore, it is a categorization of objects based on those aspects we think are significant.

    So categories are abstractions of individuals, groupings of individual objects based on their shared properties. But one could perhaps claim that a word is a direct representation of a category, and therefore not an abstraction. I disagree with this viewpoint, because if we didn't have a word for the category, then the category would not exist in our minds as a single entity. The very fact that we have a word "cat" means that we classify cats as a single group based on their "catness", and that this classification is important to us as English speakers. If it weren't, we wouldn't have a word for it.

    Then there's the chicken-and-egg problem: if we didn't have the words, we wouldn't have the categories, but if we didn't have the categories, then we wouldn't have the words. I see this not as a paradox but as more evidence that these two things are inextricably interconnected.

    The world is indescribably complex. We could not even begin to describe all of it, so we don't try - instead, we abstract the world into words.

    • melodye says:

      Wait!! Someone who agrees? =) Thank you, marvelous stranger! It's very happy-making to know that there are others out there thinking along these lines (and particularly, others who can write and explain much better than I can manage).

  • Talkative Stranger says:

    Oops, I meant to indicate that I was agreeing with Bob O'H - not disagreeing. Sorry!

  • Avery Andrews says:

    Regardless of the fate of reference as such, I think that the notion of 'co-reference' in discourse, i.e. two expressions referring to 'the same entity', or to a familiar one, is quite central to language, and there for example doesn't seem to be any problem in identifying nominal material in texts that is supposed to be doing this, as opposed to referring to a 'new entity' or referring 'generically', e.g.:

    We went hunting kangaroo (generic)
    He speared a kangaroo (novel)
    They cooked it (familiar)

    This accords with my own experiences poking around in the texts in the back of descriptive grammars, and, on Wednesday, I made a claim to this effect in front of a goodly number of real field linguists, some of them highly seasoned, and nobody disputed it, so I think it can be taken as true.

    Of course, then we ask what the relationship is between 'reference' and this kind of 'co-reference in discourse', to which I don't know the answer.

    • melodye says:

      Avery, you're right, but I would describe this in a slightly different way, which is to say that two or three (or five) different expressions can lead us to highly similar sets of predictions.

      My view is that there's a difference between what we're doing for descriptive purposes -- just the usual human activity of categorizing and making sense of the world -- and what we're doing for theoretical purposes, as when we try to make sense of how something works.

      For example, I think attempting to describe a universal grammar is an ambitious and interesting project. There's something inherently compelling about trying to systematically catalog the structural similarities between languages (and also, to make generalizations about verbal distributions even within one language). At the same time, I think a "universal grammar" is a theoretically empty construct: i.e., I do not think that the description we might arrive at can tell us much about how language works (mechanistically). (I know you might disagree, but I hope the example at least makes sense; just because we have a possible description doesn't mean it's the right one.)

      I think problems arise when philosophers and theorists conflate descriptions with mechanistic theories.

      Does that make sense?

      • Avery Andrews says:

        I'm not sure. One point I'd want to put forth is that a substantiated and substantial UG (one that, mathematically, says something definite about language (not an easy standard to attain, as Pullum enjoys discussing) and also substantiated (good typological base, all putative known counterexamples dealt with without 'letting all the air out of the theory' (phrase cribbed from Haj Ross))) should be regarded as a *problem* for psychology. I.e. what on earth could possibly be the real reason that this thing seems to work???!!? Whereas linguists have tended to view themselves as providers of solutions rather than problems, which is OK internally to linguistics, but not, I suspect, in a wider view.

        But anyway, the salience of generic vs novel vs familiar reference in texts from all sorts of languages indicates some sort of universal, whatever the true reason/explanation for it might be.

  • The referential model is problematic in mathematics education (and likely all education). What we find is a group of people who understand language (i.e., instruction) not as a flexible medium, the manipulation of which might induce understanding, but as something that must conform to an imagined reality--fixed and cold; to be accepted but not toyed with.

  • "Shecky R." says:

    I may be mixing apples and oranges here (not sure) but I think part of the problem in all this is that virtually all words are ambiguous, resolving their specific meaning only in the larger context of sentences... but sentences are generally subtly ambiguous as well, requiring still larger context to clarify them. I can think of a half-dozen-or-so meanings for a simple sentence like "Mary had a little lamb" depending on which word is emphasized and the overall context.
    This is why science tends toward reductionism (because mathematical terms are about as precise as we can get with language)... a term like "evolution," or "dog" for that matter, is really very amorphous and difficult to define and apply consistently, but a term like "Higgs boson" can virtually be reduced to mathematics for a truly standardized (or referential) meaning.

  • Ariel says:

    "We also speak of attention as noticing. To notice is to select, to regard some bits of perception, or some features of the world, as more noteworthy, more significant, than others. To these we attend, and the rest we ignore—for which reason conscious attention is at the same time ignoreance (i.e., ignorance) despite the fact that it gives us a vividly clear picture of whatever we choose to notice. Physically, we see, hear, smell, taste, and touch innumerable features that we never notice. You can drive thirty miles, talking all the time to a friend. What you noticed, and remembered, was the conversation, but somehow you responded to the road, the other cars, the traffic lights, and heaven knows what else, without really noticing, or focussing your mental spotlight upon them. So too, you can talk to someone at a party without remembering, for immediate recall, what clothes he or she was wearing, because they were not noteworthy or significant to you. Yet certainly your eyes and
    nerves responded to those clothes. You saw, but did not really look.

    It seems that we notice through a double process in which the first factor is a choice of what is interesting or important. The second factor, working simultaneously with the first, is that we need a notation for almost anything that can be noticed. Notation is a system of symbols— words, numbers, signs, simple images (like squares and triangles), musical notes, letters, ideographs (as in Chinese), and scales for dividing and distinguishing variations of color or of tones. Such symbols enable us to classify our bits of perception. They are the labels on the pigeonholes into which memory sorts them, but it is most difficult to notice any bit for which there is no label. Eskimos have five words for different kinds of snow, because they live with it and it is
    important to them. But the Aztec language has but one word for snow, rain, and hail.

    What governs what we choose to notice? The first (which we shall have to qualify later) is whatever seems advantageous or disadvantageous for our survival, our social status, and the security of our egos. The second, again working simultaneously with the first, is the pattern and the logic of all the notation symbols which we have learned from others, from our society and our culture. It is hard indeed to notice anything for which the languages available to us (whether verbal, mathematical, or musical) have no description. This is why we borrow words from foreign languages. There is no English word for a type of feeling which the Japanese call yugen, and we can only understand byopening our minds to situations in which Japanese people use the word. There must then be numberless features and dimensions of the world
    to which our senses respond without our conscious attention, let alone vibrations (such as cosmic rays) having wave-lengths to which our senses are not tuned at all. To perceive all vibrations at once would be pandemonium, as when someone slams down all the keys of the piano at the same time.

    ...For what we mean by "understanding" or "comprehension" is seeing how parts fit into a whole, and then realizing that they don't compose the whole, as one assembles a jigsaw puzzle, but that the whole is a pattern, a complex wiggliness, which has no separate parts. Parts are fictions of language, of the calculus of looking at the world through a net which seems to chop it up into bits. Parts exist only for purposes of figuring and describing, and as we figure the world out we become confused if we do not remember this all the time.

    ...We are forced, therefore, to speak of it through myth—that is, through special metaphors, analogies, and images which say what it is like as distinct from what it is. At one extreme of its meaning, "myth" is fable, falsehood, or superstition. But at another, "myth" is a useful and fruitful image by which we make sense of life in somewhat the same way that we can explain electrical forces by comparing them with the behavior of water or air. Yet "myth," in this second sense, is not to be taken literally, just as electricity is not to be confused with air or water. Thus in using
    myth one must take care not to confuse image with fact, which would be like climbing up the signpost instead of following the road." ~Alan Watts

    The takeaway, and the point I think you are making quite fairly: don't mistake the map for the territory.

  • AK says:

    Then there’s the chicken-and-egg problem: if we didn’t have the words, we wouldn’t have the categories, but if we didn’t have the categories, then we wouldn’t have the words. I see this not as a paradox but as more evidence that these two things are inextricably interconnected.

    This is (IMO) a totally unjustified assumption. It seems perfectly plausible that our ancestors had primitive versions of concepts/categories before our lineage split off from the great apes, perhaps even the old-world monkeys. Both words and concepts/categories are actually phenomena within the brain (and perhaps the CNS), and in studying them we must always distinguish between the categories we form as observers/researchers, and the process of categorization that actually takes place within the brain.

    I've taken a shot at discussing this, but the uncertainties seem to me to explode in all directions. Still, it seems intuitively obvious to me that there would be very strong selective incentives for any mammal (or other active animal) to be able to form effective categories and manipulate abstract symbols for them in the brain. Although I'm not going to track down references, I recall several papers dealing with differences in vocalizations by monkeys responding to predator categories (e.g., birds vs. cats).

    Bottom line: this cannot be assumed to be a chicken/egg problem, as it's completely plausible that concepts/categories were already there before words (as arbitrary symbols) were invented/evolved.

  • John says:

    interesting take on things, but seems like you're caricaturing your foes.

    • melodye says:

      Psychologists before me have made a caricature of philosophy (and philosophical notions of reference). My response is very much aimed at that.
