Archive for the 'Forget What You’ve Read!' category

The Observer's Paradox

Mar 12 2011 Published by under Forget What You've Read!

If you have been following the debate on acceptability judgments and other linguistic methods, you may want to check out computational linguist Mark Liberman's (mini)-argument against self-observation on Language Log.  The tongue-in-cheek response is in reply to comments on Bill Poser's post about the pronunciation of the word "tsunami."  Mark writes:

"This gap between phonetic intuition and phonetic fact is a special form of the observer's paradox. Just as we behave differently when we're aware of being observed by others, we also behave differently when we imagine observing ourselves."

In doing some follow-up reading on the history of the paradox, I found a brilliant essay by the famous sociolinguist William Labov on the declining methodological standards in linguistics research (see pp. 105-8 for what he thinks of the role of 'intuition').  The essay was published in 1972.  Labov wrote then:

"If new data has to be introduced, we usually find that is has been barred for ideological reasons, or not even been recognized as data at all, and the new methodology must do more than develop techniques.  It must demolish the beliefs and assumptions which rules its data out of the picture.  Since many of these beliefs are held as a matter of deep personal conviction, and spring from the well-established habits of a lifetime, this kind of criticism is seldom accomplished without hard feelings and polemics, until the old guard gradually dissolves into academic security and scientific limbo."

He could well have been writing about corpora.

Those of you who read my post last week about language change may be amused to note that, in the same essay, Labov wrote: "We are forced to ask whether the growth of literacy and mass media are new factors affecting the course of linguistic change that did not operate in the past."  Well, Mr. Greene and I certainly never claimed to be the first to articulate these ideas!

Cheers, William.

On a different note, I have been extremely troubled by the reports about what is going on in Japan.  One of my best friends left the country a day before the earthquake.  To everyone there - or with friends and family there - my heart goes out to you.  If any readers are interested in giving to the disaster relief fund, you can find more information here.

9 responses so far

Let's stop mistaking 'thought experiments' for science

Trends in Cognitive Sciences recently published a provocative letter by a pair of MIT researchers, Ted Gibson and Ev Fedorenko, which has been causing a bit of a stir in the language camps.  The letter, "Weak Quantitative Standards in Linguistic Research," and its companion article have incited controversy for asserting that much of linguistic research into syntax is little more than - to borrow Dan Jurafsky's unmistakable phrase - a bit of "bathtub theorizing."  (You know: you soak in your bathtub for a couple of hours, reinventing the wheel.)  It's a (gently) defiant piece of work: Gibson and Fedorenko argue that the methods typically employed in much of linguistic research are not scientific, and that if certain camps of linguists want to be taken seriously, they need to adopt more rigorous methods.

I found the response, by Ray Jackendoff and Peter Culicover, underwhelming, to say the least.  One of the more amusing lines cites William James:

"Subjective judgments," they claim, "are often sufficient for theory development. The great psychologist William James offered few experimental results."

Yes, but so did "the great psychologist" Sigmund Freud, and it's not clear whether he was doing literary theory or "science"...  More trivially, James was one of the pioneers of the field and didn't have access to the methods we now have at our disposal.  That was his handicap - not ours.

We can contrast that (rather lame) response with what computational linguist Mark Liberman said about corpus research last week in the New York Times:

"The vast and growing archives of digital text and speech, along with new analysis techniques and inexpensive computation, are a modern equivalent of the 17th-century invention of the telescope and microscope."

Hear, hear, Mr. Liberman.  I couldn't agree more.

Last month, Michael Ramscar and I published a seven-experiment Cognitive Psychology article, which uses careful experimentation and extensive corpus research to make something of a mockery of one piece of "intuitive" linguistic theorizing that has frequently been cited as evidence for innate constraints.  Near the end of the piece, we take up a famous Steve Pinker quote and show how a simple Google search contradicts him.  After roundly (and amusingly) trouncing him, Michael writes - in what must be my favorite line in the whole paper -

"Thought-experiments, by their very nature, run into serious problems when it comes to making hypothesis blind observations, and because of this, their results should be afforded less credence in considering [linguistic] phenomena.”

No doubt, this one-liner owes some credit to a brilliant P.M.S. Hacker quote (actually a footnote to one of his papers!):

"Philosophers sometimes engage in what they misleadingly call 'thought-experiments.'  But a thought experiment is no more an experiment than monopoly money is money."

Let's stop mistaking 'thought experiments' for science.

87 responses so far

Peer review favors vanilla

Mar 01 2011 Published by under Forget What You've Read!

In high school, I had a history teacher who would berate any student who gave a ‘vanilla’ response in class – code for a ‘middle of the road’ or ‘safe to all ears’ answer.  As Mr. Toy explained, while vanilla is the most consumed ice-cream flavor in the Western world – and therefore, nominally, the most ‘popular’ – it isn’t the best-loved flavor; people tend to feel much more strongly about chocolate, butter pecan, and strawberry.  Vanilla is simply the flavor that’s most passable to the greatest number of people.

Peer review can, at times, reward papers for being merely ‘vanilla,’ particularly at the high-impact commercial journals.  Last year, we had a paper rejected at one of the top journals on a 4:2 split (four reviewers in favor; two, added in the second and third rounds, against).  We've since had 1:1 (reject), 1:2 (reject), and 2:1 (reject).  It's never going to be the case that there isn't someone out there who doesn't hate our work.*  We take a very definite theoretical stand, which means we elicit bimodal responses at every single journal we try to publish in, whatever the impact factor.  Does that mean our work is somehow less worthy of publication than papers that receive a chorus of middling votes?

[Oh, you already know what I think!]

But the issue isn't personal - we're certainly not the only lab to face this problem.  The question is: Is science supposed to be controversy-free?  Or, dare I say, vanilla?  Are commercial journals really just out to preserve the status quo?

*Um, did that triple-negation work?  I've tried rereading it three times and I'm still not sure.  Late night.  At least Charlie Sheen would approve.  ("You can't process me with a normal brain.")

See also: Graphene.

19 responses so far

Bad Metaphors Make for Bad Theories

Imagine, for a moment, that you have been thrown back into the Ellisonesque world of the 1980s, with a delightful perm and even better trousers.  One fragile Monday morning, you are sitting innocently enough at your cubicle, when your boss comes to you with the summary of a report you have never read, on a topic you know nothing about.  “I’ve read the précis and I’d love to take a peek at the report," he intones, leaning in.  "Apparently, they reference some fairly intriguing numbers on page 76.”  You stare blankly at him, wondering where this is going.  “Yess—so I’d love it if you could generate the report for me.”  He smirks at you expectantly.  You blink, twice, then begin to stutter a reply.  But your boss is already out the door.  “On my desk by five, Susie!” he whistles (as bosses are wont to do) and scampers off to terrorize another underling.

You would be forgiven if, at that moment, you decided it was time to knock a swig or two off the old bourbon bottle and line up some Rick Astley on the tapedeck.

Because the task is, in a word, impossible.

Continue Reading »

21 responses so far

From across the reaches of an Internet...

Dec 20 2010 Published by under Forget What You've Read!

Here, dear readers, is a hilarious review of Geoffrey Sampson's "The Language Instinct Debate" from humorist and Amazon "top 1000" reviewer Olly Buxton.  There's politics; there's drama; and it is delightfully droll!  (Steve Pinker and Noam Chomsky also make appearances).

Olly gave the book a five-star rating and titled this review, "The sound of leather on willow floats across the village green."

Continue Reading »

14 responses so far

On philosophical confusion

Dec 06 2010 Published by under Forget What You've Read!

"Any decent philosophical problem is held in place not by one mistake or confusion but by a whole range. Wittgenstein has a wonderful metaphor: if you shine strong light on one side of a problem, it casts long shadows on the other. Every deep philosophical confusion is held in place by numerous struts, and one cannot demolish the confusion merely by knocking one strut away. One has to circle around the problem again and again to illuminate all the misconceptions that hold it in place."

--P.M.S. Hacker, quoted in The Philosophers' Magazine

One response so far

The Knobe Effect

As an avid reader of Language Log, my interest was recently piqued by a commenter asking for a linguist's eye-view on the "Knobe Effect":

"Speaking of Joshua Knobe, has any linguist looked into the Knobe Effect? The questionnaire findings are always passed off as evidence for some special philosophical character inherent in certain concepts like intentionality or happiness. I'd be interested in a linguist's take. If I had to guess, I'd say the experimenters have merely found some (elegant and) subtle polysemic distinctions that some words have. As in, 'intend' could mean different things depending on whether the questionnaire-taker believes blameworthiness or praiseworthiness to be the salient question. Or 'happy' could mean 'glad' in one context but 'wholesome' in another, etc…"

Asking for an opinion, eh?  When do I not have an opinion?  (To be fair, it happens more than you might expect).

But of course, I do have an opinion on this, and it's not quite the same as the one articulated by Edge.  This post is a long one, so let me offer a teaser by saying that the questions at stake here are: What is experimental philosophy, and is it new?  How does the language we speak both encode and subsequently shape our moral understanding?  How can manipulating someone's linguistic expectations change their reasoning?  And what can we learn about all these questions by productively plumbing the archives of everyday speech?

Continue Reading »

24 responses so far

What is ADHD? Paradigm Shifts in Psychopathology

Wow. Lots of psycholinguists around lately, huh? How about a change of pace? Think you guys can handle something not about Lord Chomsky?

Over the last one hundred years, paradigm shifts in the study of psychopathology have altered our conceptualization of attention deficit/hyperactivity disorder (ADHD) as a construct and as a diagnostic category. With few exceptions, it has generally been accepted that there is a brain-based neurological cause for the set of behaviors associated with ADHD. However, as technology has progressed and our understanding of the brain and central nervous system has improved, the proposed neurological etiology of ADHD has changed dramatically. The diagnostic category itself has also undergone many changes as the field of psychopathology has changed.

In the 1920s, a disorder referred to as minimal brain dysfunction described the symptoms now associated with ADHD. Researchers thought that encephalitis caused some subtle neurological deficit that could not be medically detected. Encephalitis is an acute inflammation of the brain that can be caused by a bacterial infection, or arise as a complication of another disease such as rabies, syphilis, or Lyme disease. Indeed, during an outbreak of encephalitis in the United States in 1917-1918, children presented in hospitals with a set of symptoms that would now be described within the construct of ADHD.

In the 1950s and 1960s, new descriptions of ADHD emerged due to the split between the neo-Kraepelinian biological psychiatrists and the Freudian psychodynamic theorists. The term hyperkinetic impulse disorder, used in the medical literature, referred to the impulsive behaviors associated with ADHD. At the same time, the Freudian psychodynamic researchers (who seem to have won the battle in the DSM-II) described a hyperkinetic reaction of childhood, in which unresolved childhood conflicts manifested in disruptive behavior. The term "hyperkinetic," which appears in both diagnoses, describes the set of behaviors that would later be known as hyperactive – despite the fact that medical and psychological professionals were aware that many children presented without hyperactivity. In either case, the focus was on the presenting behavior, as one would expect given the behavioral paradigm that guided the field.

When the cognitive paradigm became dominant, inattention became the focus of ADHD, and the disorder was renamed attention deficit disorder (ADD). Two subtypes would later appear in the literature, corresponding to ADD with and without hyperactivity. The diagnostic nomenclature reflects the notion that the primary problem was an attentional (and thus cognitive) one, not a behavioral one. The attentional problems had to do with the ability to shift attention from one stimulus to another (something that Jonah Lehrer has called an attention-allocation disorder, since it isn't really a deficit of attention). The hyperactivity symptoms were also reformulated as cognitive: connected with an executive processing deficit termed “freedom from distractibility.”

In DSM-IV, published in 1994, the subtypes were made standard. The diagnostic criteria themselves changed little, but the name of the disorder did change, reflecting shifts in how the literature understood its etiology. The term ADD did not hold up, and the disorder became known as ADHD, with three subtypes: a predominantly hyperactive/impulsive subtype, a predominantly inattentive subtype, and a combined subtype in which patients have both hyperactive and attention-related symptoms. Improved neuroimaging technology suggests that these subtypes reflect structural and functional abnormalities in the frontal lobe, and in its connections with the basal ganglia and cerebellum.

The set of symptoms associated with ADHD seems not to have changed much in the last one hundred years. However, paradigm shifts within the field of psychopathology have changed the way in which researchers understand the underlying causal factors, as well as which of the symptoms are thought to be primary.

6 responses so far

The reality of a universal language faculty?

Oct 05 2010 Published by under Forget What You've Read!

An argument is often made that similarities between languages (so-called "linguistic universals") provide strong evidence for the existence of an innate, universal grammar (UG) that is shared by all humans, regardless of language spoken.  If language were not underpinned by such a grammar, it is argued, there would be endless (and extreme) variation, of the kind that has never been documented.  Therefore -- the reasoning goes -- there simply must be design biases that shape how children learn language from the input they receive.

There are several potentially convincing arguments made in favor of innateness in language, but this, I think, is not one of them.

Why?  Let me explain by way of evolutionary biology:

Both bats and pterodactyls have wings, and both humans and squid have eyes, but neither pair shares a common ancestor that had these traits.  This is because wings and eyes are classic examples of 'convergent' evolution -- traits that arose in separate species as 'optimal' (convergent) solutions to the surrounding environment.  Convergent evolution has always struck me as a subversive evolutionary trick, because it demonstrates how external constraints can produce markedly similar adaptations from utterly different genetic stock.  Not only that, but it upends our commonsense intuitions about how to classify the world around us, by revealing just how powerfully surface similarities can mask differences in origin.

When we get to language, then, it need not be surprising that many human languages have evolved similar means of efficiently communicating information. From an evolutionary perspective, this would simply suggest that various languages have, over time, 'converged' on many of the same solutions.  This is made even more plausible by the fact that every competent human speaker, regardless of language spoken, shares roughly the same physical and cognitive machinery, which dictates a shared set of drives, instincts, and sensory faculties, and a certain range of temperaments, response-patterns, learning facilities and so on.  In large part, we also share fairly similar environments -- indeed, the languages that linguists have found hardest to document are typically those of societies at the farthest remove from our own (take the Piraha as a case in point).

Continue Reading »

11 responses so far

Intelligent Nihilism

Sep 30 2010 Published by under Forget What You've Read!

I wanted to register a quick reply to some of the comments on last week's post, "The question is: are you dumber than a rat?"  In the comments there, and in posts on other blogs, our research program has been accused of intelligent nihilism.  By one such characterization, our position is that "we don't know how the brain could give rise to a particular type of behavior, so humans must not be capable of it."  Though I think the label is quite witty -- and would love to have badges made for the lab! -- I think this misrepresents our stance rather badly; our argument is that many of the properties that linguists have attributed to language are either empty theoretical constructs (hypotheses that are not supported by the empirical evidence) or are conceptually confused (and have been shown to be so by Wittgenstein, Quine, and many others).  We are not denying that language -- and linguistic behavior -- are complex; rather, we are rejecting a particular stance towards language that we think is theoretically and empirically vacuous.  This does not lead us to nihilism, but rather to a different conception of language and how language is learned.

In any case, the comments on last week's post have proved to be fertile ground for discussion, so I've posted them (in pared-down fashion) along with a brief response.  The full comment thread can be found at the original post.

Continue Reading »

19 responses so far
