Archive for the 'Downloads from Jason’s Brain' category

We've been drawn!

Frequent commenter and excellent artist Joseph Hewitt has immortalized us in colored pencil. Apparently I play the role of "pseudoscientific dogmatist." wut?

(click to see the full page)

While we're at it, here are a few more links to enjoy:

From the NY Times:

When a 12-year-old’s mother asks him “How many times do I have to tell you to stop?” he will understand that the answer, if any is required, had better not include a number.

But that insight requires a sophisticated understanding of ironic language that develops long after fluent speech. At what age do children begin to sense the meaning of such a question, and to what degree can they respond appropriately to other kinds of irony?

In laboratory research on the subject, children demonstrate almost no comprehension of ironic speech before they are 6 years old, and little before they are 10 or 11. When asked, younger children generally interpret rhetorical questions as literal, deliberate exaggeration as a mistake and sarcasm as a lie.

Also from the NYT:

Child Protective Services investigated more than three million cases of suspected child abuse in 2007, but a new study suggests that the investigations did little or nothing to improve the lives of those children.

Feel free to share other interesting links in the comments.

2 responses so far

What is ADHD? Paradigm Shifts in Psychopathology

Wow. Lots of psycholinguists around lately, huh? How about a change of pace? Think you guys can handle something not about Lord Chomsky?

Over the last one hundred years, paradigm shifts in the study of psychopathology have altered our conceptualization of attention deficit/hyperactivity disorder (ADHD), both as a construct and as a diagnostic category. With few exceptions, it has generally been accepted that there is a brain-based neurological cause for the set of behaviors associated with ADHD. However, as technology has progressed and our understanding of the brain and central nervous system has improved, the presumed neurological etiology of ADHD has changed dramatically. The diagnostic category itself has also undergone many changes as the field of psychopathology has changed.

In the 1920s, a disorder referred to as minimal brain dysfunction described the symptoms now associated with ADHD. Researchers thought that encephalitis caused some subtle neurological deficit that could not be medically detected. Encephalitis is an acute inflammation of the brain that can be caused by a viral or bacterial infection, or arise as a complication of another disease such as rabies, syphilis, or Lyme disease. Indeed, during an outbreak of encephalitis in the United States in 1917-1918, children presented in hospitals with a set of symptoms that would now be described within the construct of ADHD.

In the 1950s and 1960s, new descriptions of ADHD emerged due to the split between the neo-Kraepelinian biological psychiatrists and the Freudian psychodynamic theorists. The term hyperkinetic impulse disorder, used in the medical literature, referred to the impulsive behaviors associated with ADHD. At the same time, the Freudian psychodynamic researchers (who seem to have won the battle in the DSM-II) described a hyperkinetic reaction of childhood, in which unresolved childhood conflicts manifested as disruptive behavior. The term "hyperkinetic," which appears in both diagnoses, describes the set of behaviors that would later be known as hyperactive, even though medical and psychological professionals were aware that many children presented without hyperactivity. In either case, the focus was on the presenting behavior, an emphasis implicit in the behavioral paradigm that guided the field.

When the cognitive paradigm became dominant, inattention became the focus, and the disorder was renamed attention deficit disorder (ADD). Two subtypes would later appear in the literature, corresponding to ADD with and without hyperactivity. The diagnostic nomenclature reflected the notion that the primary problem was attentional (and thus cognitive) rather than behavioral. The attentional problems had to do with the ability to shift attention from one stimulus to another (something that Jonah Lehrer has called an attention-allocation disorder, since it isn't really a deficit of attention). The hyperactivity symptoms were also reformulated in cognitive terms: they were linked to a deficit in an executive process termed "freedom from distractibility."

In DSM-IV, published in 1994, the subtypes were made standard. The diagnostic criteria themselves changed little, but the disorder was renamed once more, reflecting shifts in the literature's understanding of its etiology. The term ADD did not hold up, and the disorder became known as ADHD, with three subtypes: a predominantly hyperactive/impulsive subtype, a predominantly inattentive subtype, and a combined subtype in which patients have both hyperactive and attention-related symptoms. Thanks to improved neuroimaging technology, these subtypes now appear to reflect structural and functional abnormalities in the frontal lobe and in its connections with the basal ganglia and cerebellum.

The set of symptoms associated with ADHD seems not to have changed much in the last one hundred years. However, paradigm shifts within the field of psychopathology have changed the way researchers understand the underlying causal factors, as well as which symptoms are thought to be primary.

6 responses so far

Your Humble Narrators on Bloggingheads


Jason and Melody are the subjects of today's Bloggingheads.tv Science Saturday program. Watch us chat with each other for about an hour about how we became scientists and science bloggers, our thoughts on the state of psychology as a field, peer review and the journal system, how the study of language learning and comparative cognition may not be so different, and a smattering of other topics.

Comments are off for this post

Is The Child The Father of the Man?

One of the fundamental themes (and a continuing debate) in developmental psychology concerns the continuity or discontinuity of temperament and personality from infancy through the rest of a child’s life and into adulthood.

Some researchers believe that they have found evidence for the continuity of relatively stable personality traits through development. Despite the clear importance of environmental stressors and other chance events, the evidence suggests that the personality traits that shape how adults respond to such life events are fairly predictable from early childhood temperament.

Schwartz and colleagues, in 2003, investigated amygdalar responses to novelty in adults who had been classified as inhibited or uninhibited at age two. (The amygdala, a part of the limbic system, has been shown to be involved in the processing of emotional information.) Children classified as inhibited tend to be shy around unfamiliar people, objects, or situations, while uninhibited children tend to approach or even seek out novel people, objects, or situations. The researchers hypothesized that there would be neural differences between the two groups, particularly in response to novel versus familiar faces. The hypothesis was confirmed for this sample: the two groups had different responses to the stimuli in this fMRI study. One interpretation of these results is that there is continuity in temperament at least into early adulthood, although only a longitudinal study could truly address that question. This study found a correlation between early temperament categorization and adult amygdala activity, which leaves open several possible alternative interpretations.

Caspi, in 2000, provided somewhat more convincing evidence of developmental continuity in temperament, using data from the Dunedin longitudinal study. In general terms, he found that undercontrolled (uninhibited) three-year-olds grew up to become impulsive, unreliable, and antisocial, while inhibited three-year-olds became unassertive and depressed and had less social support.

More specifically, undercontrolled toddlers were rated by teachers and parents as having more externalizing problems at ages 5, 7, 9, and 11. In adolescence (ages 13 and 15), the undercontrolled toddlers continued to have externalizing behavior problems, and they showed more internalizing problems as well. The inhibited children had significantly more internalizing problems than the undercontrolled or control groups.

By age 18, the undercontrolled children scored low on measures of constraint. In self-descriptions, they used terms such as "reckless" and "careless", and they indicated low harm avoidance. They scored high on negative emotionality measures, and they also reported high aggression and alienation. These findings are largely consistent with those from early adolescence. The inhibited children scored high on the constraint measures and low on positive emotionality. They self-reported high self-control, high harm avoidance, and low aggression. They also reported low social potency – that is, they shied away from leadership roles. Informant ratings at age 21 were consistent with these self-ratings at age 18. Finally, by age 21, the undercontrolled children were involved in conflicted relationships, while the inhibited children had significantly higher social support. Similar significant patterns were also found by age 21 for employment, psychopathology, and criminal behavior.

Ultimately, a considerable amount of data from these studies and others suggests that adult personality is indeed predictable from childhood temperament, but that still does not explain why this is so. A more comprehensive view, accounting for biological, cognitive, emotional, social, and environmental factors, is necessary. Although random life encounters cannot be predicted, stable differences in personality likely influence how such events are subjectively experienced.


Schwartz, C. (2003). Inhibited and uninhibited infants "grown up": Adult amygdalar response to novelty. Science, 300(5627), 1952-1953. DOI: 10.1126/science.1083703

Caspi, A. (2000). The child is father of the man: Personality continuities from childhood to adulthood. Journal of Personality and Social Psychology, 78(1), 158-172. DOI: 10.1037//0022-3514.78.1.158

3 responses so far

Historical Perspectives on Social Development


What are the key assumptions of the major theoretical perspectives on social development? In future posts, I will refer back to several of the major constructs that have dominated the field of psychology, at one point or another. Here, then, is a short glossary.

Six major theoretical perspectives have guided research and thought on social development: psychoanalytic, behavior genetic, social learning, ecological, cultural, and ethological.

The psychoanalysts (e.g. Freud, Erikson) took a dynamic and structural approach to the self. They believed that all human behaviors emerge from the interaction of the major structures of the psyche, most of which are unavailable to consciousness. Erikson built on Freud’s original theories and focused on the social context in which a child is raised. At each of Erikson's psychosocial stages, the self evolves in its social context.

The behavior genetic approach is mainly concerned with the relative influence of heredity and environment in producing behavior. Historically, behavior geneticists (and, currently, much of the lay public) operated under a deterministic assumption: that behavior proceeds from biology without any interaction with the environment (social or otherwise). Modern behavior genetics takes a more sophisticated approach and attempts to understand gene-gene interactions, gene-brain-behavior processes, and gene-environment interactions.

Social learning theorists believe that all behavior (social and otherwise) arises from learned associations between stimuli. In this case, almost all the “blame” for behavior is placed on environmental and social context.

The ecological perspective is another context-centric approach, best captured by Bronfenbrenner's model of concentric circles, which places the individual at the center of a nested set of spheres of influence, each exerting a different degree of influence on behavior. One of the main problems with these environment-centric perspectives is that they generally ignore biology, or give it only a cursory treatment.

The cultural perspective claims that an individual cannot be separated from the sociocultural environment, since each individual subjectively interprets the environment in a culturally bound manner.

All these perspectives have their merits and are useful for thinking about certain phenomena, but it is the ethological perspective that speaks most strongly to me (if you are a reader of my other blog, The Thoughtful Animal, this should come as no surprise).

The ethological perspective proceeds directly from Darwin's theory of evolution by natural selection. Ethologists study behavior in the context of what is known about anatomy, physiology, neurobiology, and phylogenetic history. Nikolaas Tinbergen developed four questions that should be asked about any behavior, focusing on causation, ontogeny, phylogeny, and adaptation. It is critical to understand that, in some sense, humans and their social context evolved alongside each other. It is because social behavior is adaptive that it is so pervasive throughout the animal kingdom.

For example, children spontaneously respond in predictable ways to their mothers' voices and faces. While the other perspectives offer interpretations that can provide insight, ultimately a series of anatomical and physiological responses underlies this behavior. Humans evolved such big brains (relative to body size) that they are born well before they are able to adequately care for themselves – unlike chickens, for example, which have excellent visual and motor skills from the day they hatch and need little parental care to survive. Because human infants are delivered so early in development, it became important for them to act in ways that evoke nurturing and protective behaviors from their parents. Thus the infant smile and cry.

It is the evolutionary perspective that reminds us that social behaviors do not exist for their own sake, or because anything is good or right about social behavior in any universal sense. Instead, they evolved because they probably conferred some sort of evolutionary advantage on the species (or, at least, they emerged and did not confer an adaptive disadvantage) and ensured that individuals would mature to an age at which they could procreate.

This is not to say that social behavior should not be studied in its own right, nor do I mean to explain away social behavior as entirely bound up in anatomy and physiology. On the contrary, I believe that a full understanding of the biological and phylogenetic origins of complex social behavior can only add to the richness of the social experience.

One response so far