A Pure Physiological Basis for the Life of the Psyche

(by melody) Mar 31 2011

It isn't the sort of argument Pointsman relishes either. But he glances sharply at this young anarchist in his red scarf. "Pavlov believed that the ideal, the end we all struggle toward in science, is the true mechanical explanation. He was realistic enough not to expect it in his lifetime. Or in several lifetimes more. But his hope was for a long chain of better and better approximations. His faith ultimately lay in a pure physiological basis for the life of the psyche. No effect without cause, and a clear train of linkages.

"It's not my forte, of course," Mexico honestly wishing not to offend the man, but really, "but there's a feeling about that cause-and-effect may have been taken as far as it will go. That for science to carry on at all, it must look for a less narrow, a less . . . sterile set of assumptions. The next great breakthrough may come when we have the courage to junk cause-and-effect entirely, and strike off at some other angle."

"No--not 'strike off.' Regress. You're 30 years old, man. There are no 'other angles.' There is only forward--into it--or backward."

(Pynchon winkingly giving cognitive science a what-for. cc: "levels of analysis")

10 responses so far

The Observer's Paradox

(by melody) Mar 12 2011

If you have been following the debate on acceptability judgments and other linguistic methods, you may want to check out computational linguist Mark Liberman's mini-argument against self-observation on Language Log.  The tongue-in-cheek response is in reply to comments on Bill Poser's post about the pronunciation of the word "tsunami."  Mark writes:

"This gap between phonetic intuition and phonetic fact is a special form of the observer's paradox. Just as we behave differently when we're aware of being observed by others, we also behave differently when we imagine observing ourselves."

In doing some follow-up reading on the history of the paradox, I found a brilliant essay by the famous sociolinguist William Labov on declining methodological standards in linguistics research (see pp. 105-8 for what he thinks of the role of 'intuition').  The essay was published in 1972.  Labov wrote then:

"If new data has to be introduced, we usually find that it has been barred for ideological reasons, or not even been recognized as data at all, and the new methodology must do more than develop techniques.  It must demolish the beliefs and assumptions which rule its data out of the picture.  Since many of these beliefs are held as a matter of deep personal conviction, and spring from the well-established habits of a lifetime, this kind of criticism is seldom accomplished without hard feelings and polemics, until the old guard gradually dissolves into academic security and scientific limbo."

He could well have been writing about corpora.

For those of you who read my post last week about language change, you may be amused to note that in the same essay, Labov wrote: "We are forced to ask whether the growth of literacy and mass media are new factors affecting the course of linguistic change that did not operate in the past."  Well, Mr. Greene and I certainly never claimed to be the first to articulate these ideas!

Cheers, William.

On a different note, I have been extremely troubled by the reports about what is going on in Japan.  One of my best friends left the country a day before the earthquake.  To everyone there - or with friends and family there - my heart goes out to you.  If any readers are interested in giving to the disaster relief fund, you can find more information here.

9 responses so far

Why LOLCats ruined my English

(by melody) Mar 09 2011

The talented writer and polyglot Robert Lane Greene has a short guest post in yesterday's NY Times suggesting that (as Emily Anthes recapped on Twitter): "Perhaps we're seeing more grammatical mistakes because literacy is on the rise."

In the post, Greene scrutinizes the prescriptivist rallying cry that language is in a perpetual decline and must be enshrined (quick!) before it's too late.  He trots out the usual counter-arguments: linguistic change is constant and inevitable; linguistic change is not necessarily bad ("when a good thing changes it can become another good thing"); and so on.

The most interesting claim Greene makes comes at the end of the post, when he notes that illiteracy rates have plummeted over the last century, to virtually zero.  However, as he is quick to point out:

Literacy is a continuum of skills. Basic education now reaches virtually all Americans.  But many among the poorest have the weakest skills in formal English.

This, he thinks, is to blame for the rise of misplaced apostrophes and teen-text speak.  It's a truly interesting observation, but one I think he squanders in his conclusion.

Continue Reading »

15 responses so far

The Ashtray Argument

(by melody) Mar 07 2011

"I call Kuhn’s reply “The Ashtray Argument.” If someone says something you don’t like, you throw something at him. Preferably something large, heavy, and with sharp edges. Perhaps we were engaged in a debate on the nature of language, meaning and truth. But maybe we just wanted to kill each other."

--Errol Morris in the opinion pages of yesterday's NY Times, aptly capturing that spirit of wonder academics bring to the world... Or something.

 

20 responses so far

Let's stop mistaking 'thought experiments' for science

(by melody) Mar 04 2011

Trends in Cognitive Sciences recently published a provocative letter by a pair of MIT researchers, Ted Gibson and Ev Fedorenko, which has been causing a bit of a stir in the language camps.  The letter - "Weak Quantitative Standards in Linguistic Research" - and its companion article have incited controversy for asserting that much of linguistic research into syntax is little more than - to borrow Dan Jurafsky's unmistakable phrase - a bit of "bathtub theorizing."  (You know: you soak in your bathtub for a couple of hours, reinventing the wheel).  It's a (gently) defiant piece of work: Gibson and Fedorenko argue that the methods typically employed in much of linguistic research are not scientific, and that if certain camps of linguists want to be taken seriously, they need to adopt more rigorous methods.

I found the response, by Ray Jackendoff and Peter Culicover, a little underwhelming, to say the least.  One of the more amusing lines cites William James:

"Subjective judgments," they claim, "are often sufficient for theory development. The great psychologist William James offered few experimental results."

Yes, but so did "the great psychologist" Sigmund Freud, and it's not clear whether he was doing literary theory or "science"...  More trivially, James was one of the pioneers of the field and didn't have access to the methods we now have at our disposal.  That was his handicap - not ours.

We can contrast that (rather lame) response with what computational linguist Mark Liberman said about corpus research last week in the New York Times:

"The vast and growing archives of digital text and speech, along with new analysis techniques and inexpensive computation, are a modern equivalent of the 17th-century invention of the telescope and microscope."

Hear, hear, Mr. Liberman.  I couldn't agree more.
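Liberman's point is easy to demonstrate in miniature.  Here is a toy sketch, in plain Python, of the kind of check a corpus makes possible: comparing the frequencies of two competing phrasings in a text sample.  The five-line "corpus" and the example phrasings are my own inventions for illustration; real work would query something on the scale of COCA or the Google Books n-grams.

```python
# Toy corpus-frequency check: how often does each of two competing
# phrasings occur in a text sample?  The "corpus" below is invented
# for illustration; a real study would query millions of tokens.
import re

corpus = """
I could care less about the score.
I couldn't care less what they think.
He said he couldn't care less.
She could care less, honestly.
I couldn't care less about fashion.
""".lower()

def phrase_count(text: str, phrase: str) -> int:
    """Count non-overlapping literal occurrences of `phrase` in `text`."""
    return len(re.findall(re.escape(phrase), text))

standard = phrase_count(corpus, "couldn't care less")  # 3
variant = phrase_count(corpus, "could care less")      # 2

print(f"couldn't care less: {standard}, could care less: {variant}")
```

Scaled up to a real corpus, this is the shape of the check that lets a simple search contradict an armchair intuition about what people "never say."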

Last month, Michael Ramscar and I published a seven-experiment Cognitive Psychology article, which uses careful experimentation and extensive corpus research to make something of a mockery of one piece of "intuitive" linguistic theorizing that has frequently been cited as evidence for innate constraints.  Near the end of the piece, we take up a famous Steve Pinker quote and show how a simple Google search contradicts him.  After roundly (and amusingly) trouncing him, Michael writes - in what must be my favorite line in the whole paper -

"Thought-experiments, by their very nature, run into serious problems when it comes to making hypothesis blind observations, and because of this, their results should be afforded less credence in considering [linguistic] phenomena."

No doubt, this one-liner owes some credit to a brilliant P.M.S. Hacker quote (actually a footnote to one of his papers!):

"Philosophers sometimes engage in what they misleadingly call 'thought-experiments.'  But a thought experiment is no more an experiment than monopoly money is money."

Let's stop mistaking 'thought experiments' for science.

87 responses so far

Peer review favors vanilla

(by melody) Mar 01 2011

In high school, I had a history teacher who would berate any student who gave a ‘vanilla’ response in class – code for a ‘middle of the road’ or ‘safe to all ears’ answer.  As Mr. Toy explained, vanilla may be the most consumed ice-cream flavor in the western world, and by that token the most ‘popular,’ but it isn't the best loved – people tend to feel much more strongly about chocolate, butter pecan, and strawberry.  Vanilla is simply the flavor that's most passable to the greatest number of people.

Peer review can, at times, reward papers for being merely ‘vanilla,’ particularly at the high-impact commercial journals.  Last year, we had a paper rejected at one of the top journals on a 4:2 split (4 reviewers in favor; 2, added in the 2nd and 3rd rounds, against).  We've since had 1:1 (reject), 1:2 (reject), and 2:1 (reject).  It's never going to be the case that there isn't someone out there who doesn't hate our work.*  We take a very definite theoretical stand, which means we elicit bimodal responses at every single journal we try to publish in, no matter the impact factor.  Does that mean that our work is somehow less worthy of publication than papers that receive a chorus of middling votes?

[Oh, you already know what I think!]

But the issue isn't personal - we're certainly not the only lab to face this problem.  The question is: Is science supposed to be controversy-free?  Or, dare I say, vanilla?  Are commercial journals really just out to preserve the status quo?

*Um, did that triple-negation work?  I've tried rereading it three times and I'm still not sure.  Late night.  At least Charlie Sheen would approve.  ("You can't process me with a normal brain.")

See also: Graphene.

19 responses so far

Polysemy has never been so lovely

(by melody) Feb 21 2011

Yesterday, a famous linguist in construction grammar sent me the following Radiolab video:

Words are Beautiful

...Which was a powerful and joyful experience.

This prompted me to send her an excerpt of a paper I wrote with Prof Plum on how words mean (and also on metaphor):

"The kinds of expectations that people build up about words in listening and reading may have an important part to play in their conceptualisation of those words. Words are often thought of as being abstractions of objects and events in the world, but defining a simple relation between the thing being represented and the label that represents it is problematic (Murphy, 2002). Indeed, it has been argued that the meanings of words are better understood in relation to their patterns of use, rather than to the things in the world they appear to represent (Wittgenstein, 1953). When we talk about names, for example, we say things like ‘did you catch her name’, ‘his name is mud’, ‘they were called by name’, ‘she made a name for herself’, and so on. From this perspective, a ‘name’ is not only a word ‘by which something is called or known’, as the dictionary designates, but also a thing to be had, caught, muddied, cleared, called, and made. People ‘go by names’, they ‘throw names around’, they hope to see their ‘name in lights’, and on this view, the meaning of ‘name’ is inextricable from its patterns of use: from the words it co-occurs with and the words that modify it, and the effect that these have on the way that people think about ‘name’.

These kinds of co-occurrence patterns offer a rich and readily available source of information for anyone learning to understand the world and the way that language relates to it, and there is considerable evidence to support the idea that people are sensitive to this information. Our suggestion is that people’s understanding of the patterns of use associated with motion words actually plays an important part in shaping their understanding of them. For instance, saying that time can ‘run out’ or ‘fly by’ influences what we understand time to be in the first place, because thinking about time in this way involves processes shared with other things that ‘run out’, ‘fly by’, or ‘stand still’ (Slobin, 1996). In a sense, the mind works metaphorically, associating words with other words that are used in similar ways.

If understanding results (at least in part) from predictive processes and the expectations produced by patterns of co-occurrence, then when we use words in similar ways they ought to become more closely aligned in meaning. This would suggest that saying literally ‘the man runs by’, fictively ‘the road runs along the river’, and figuratively ‘time runs out’, should, as a result of this common pattern of usage, more closely align our notions of how space and time operate. Accordingly, we suggest that the similar ways in which people talk about motion through space and motion through time is an important part of their common underlying conceptualisation." (Ramscar, Matlock & Dye, 2010)

...The writing of which was inspired by Wittgenstein, and also by the great Ira Allen, who has some wonderful leads to literary science writing.  Thanks also to Seth, who looks just like a young Foucault (and is almost certainly just as clever).
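For the curious, the co-occurrence idea in that excerpt can be sketched in a few lines of Python.  This is only a toy: the sentences (borrowed from the 'name' examples above) and the two-word context window are my own illustrative choices, not the method of the paper.

```python
# Build a toy co-occurrence profile for 'name': count the words that
# appear within two positions of it in a handful of sentences.  The
# sentences and window size are illustrative choices only.
from collections import Counter

sentences = [
    "did you catch her name",
    "his name is mud",
    "they were called by name",
    "she made a name for herself",
    "he hoped to see his name in lights",
]

def cooccurrence_profile(sentences, target, window=2):
    """Count tokens appearing within `window` positions of `target`."""
    profile = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, token in enumerate(tokens):
            if token == target:
                left = tokens[max(0, i - window):i]
                right = tokens[i + 1:i + 1 + window]
                profile.update(left + right)
    return profile

profile = cooccurrence_profile(sentences, "name")
print(profile.most_common(3))
```

Comparing profiles like this one across different words is, in caricature, how distributional approaches measure whether two words are used in similar ways.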

Hope you are having a nice winter, folks.

6 responses so far

Bad Metaphors Make for Bad Theories

(by melody) Dec 22 2010

Imagine for a moment, that you have been thrown back into the Ellisonesque world of the 1980’s, with a delightful perm and even better trousers.  One fragile Monday morning, you are sitting innocently enough at your cubicle, when your boss comes to you with the summary of a report you have never read, on a topic you know nothing about.  “I’ve read the précis and I’d love to take a peek at the report," he intones, leaning in.  "Apparently, they reference some fairly intriguing numbers on page 76.”  You stare blankly at him, wondering where this is going.  “Yess—so I’d love if you could generate the report for me.”  He smirks at you expectantly.  You blink, twice, then begin to stutter a reply.  But your boss is already out the door.  “On my desk by five, Susie!” he whistles (as bosses are wont to do) and scampers off to terrorize another underling.

You would be forgiven if, at that moment, you decided it was time to knock a swig or two off the old bourbon bottle and line up some Rick Astley on the tapedeck.

Because the task is, in a word, impossible.

Continue Reading »

21 responses so far

From across the reaches of an Internet...

(by melody) Dec 20 2010

Here, dear readers, is a hilarious review of Geoffrey Sampson's "The Language Instinct Debate" from humorist and Amazon "top 1000" reviewer Olly Buxton.  There's politics; there's drama; and it is delightfully droll!  (Steve Pinker and Noam Chomsky also make appearances).

Olly gave the book a five-star rating and titled this review, "The sound of leather on willow floats across the village green."

Continue Reading »

14 responses so far

Language doesn't feature much at the top

(by melody) Dec 16 2010

There is currently a debate raging at The Economist over whether language shapes thought.  In the latest rebuttal, the whimsical L.B. makes the claim that:

"These days, scientists do not just make claims, they make measurements. The scientific study of how language shapes thinking comprises decades' worth of empirical discoveries, published in premier academic journals like Science and Nature..."

L.B. makes it sound like the two 'premier' journals regularly devote their pages to the sundry and subtle workings of the Whorfian question.  --Which is a misleading way to make it sound, I assure you.  (If by "like Science and Nature" she means PNAS, there may be slightly more credibility to the claim, but I'll take her at face value for the moment).

In the past year, I've had a number of manuscripts peer-reviewed at these journals, and in the interim, I've spent the time to actually dig through the archives of both journals to find out what work on language they've published over the last decade.  While L.B.'s claim is technically correct, it's also misleading.  The only Whorfian topics that have been published in either journal have to do with numerical cognition in the Pirahã.  'Decades' worth' of discoveries have been published in other journals, no doubt, but Nature and Science have devoted relatively few pages to the question of how language shapes thought, or to any other topic in language, period.

To give you a flavor: this year, Nature published 0 original research articles on language.  The best they did was a news brief on the genetic basis for stuttering and a feature on speed reading.  Science published 1.  On average, Nature publishes fewer than 2 articles a year on language; Science publishes a little over 3.  For Nature, that's 2 out of more than 800 articles published a year.  Clearly a hot topic.

Continue Reading »

21 responses so far
