Academically Adrift’s Methodological Shipwreck

On Tuesday we had a university-wide faculty meeting on revising the general education requirements at Morgan State, and predictably President Wilson held up a copy of Arum and Roksa’s Academically Adrift and made some comments about how we had to do better while horribly mangling the actual findings of the book. Though there’s a lot going on in the text, which the group blog In Socrates Wake took up early last year, there are also serious methodological issues to consider before using it as a guide.

Let me start with the good news. Arum and Roksa’s conclusions are a triumph for the liberal arts: the only classes that guarantee an increase in “critical thinking, analytic reasoning, problem solving, and written communication” are classes that assign at least forty pages of reading per week and twenty pages of writing per semester, and that come with high expectations from the professor. As a result, students with majors outside of the traditional liberal arts and sciences did not appear to be developing these skills. It also helps a lot for a student to start with high abilities in those areas, which is perhaps a problem when, like 65% of Morgan’s first-year class, your students arrive poorly prepared.

As much as I love the conclusions of this text, there are a number of important methodological concerns that academics and administrators ought to consider before adopting policies on the basis of the book. Alexander Astin explains the statistical problem in his Chronicle piece “In ‘Academically Adrift,’ Data Don’t Back Up Sweeping Claim” and there are many other critical reviews in this pdf from the journal College Composition and Communication. Let me see if I can summarize the issues.

Arum and Roksa depended on a voluntary standardized test called the Collegiate Learning Assessment, which they administered in the first and fourth semesters of undergraduates’ education. Right off the bat, then, we know that this is not a test of a full college education, but rather of the general education supplied during the first two years.

But even then, the methodological problems are threefold. First, the authors use a 95% confidence interval, which allows us to say that there is only a 5% chance that the students identified as increasing in ability didn’t actually do so. But by controlling for “false positives” so stringently, we’ve left the door open to “false negatives.” In other words, the data analysis does not justify the “Adrift” claim that students who fall outside of that group haven’t increased in ability. We can be sure that reading- and writing-intensive courses work; we can’t be sure that other things don’t work. This is simply a flaw in the way that the authors present their findings.
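
To make the trade-off concrete, here is a minimal simulation sketch of my own (this is not Arum and Roksa’s actual model; the gain size, noise level, and sample size are all invented): every simulated student genuinely improves, yet a strict 95%-confidence cutoff certifies only the students whose measured gains are large relative to the test’s noise.

    # A minimal sketch, not Arum and Roksa's analysis: every simulated student
    # really improves, but a strict 95%-confidence cutoff only flags the ones
    # whose measured gain is large relative to the test's measurement noise.
    import random

    random.seed(0)

    TRUE_GAIN = 0.3      # every student's real improvement, in SD units (invented)
    NOISE_SD = 1.0       # measurement error on each sitting of the test (invented)
    N_STUDENTS = 2000

    flagged = 0
    for _ in range(N_STUDENTS):
        ability = random.gauss(0, 1)
        first_sitting = ability + random.gauss(0, NOISE_SD)
        second_sitting = ability + TRUE_GAIN + random.gauss(0, NOISE_SD)
        observed_gain = second_sitting - first_sitting
        se_of_gain = (2 ** 0.5) * NOISE_SD       # SE of a difference of two noisy scores
        if observed_gain > 1.96 * se_of_gain:    # roughly the 95%-confidence criterion
            flagged += 1

    print(f"Students who truly improved:               {N_STUDENTS}")
    print(f"Students certified as improving (95% CI):  {flagged}")

With these invented numbers only a few percent of the genuinely improving students clear the bar; failing to clear it is evidence of a noisy test, not of a failure to learn.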

Second, and perhaps more dangerously, this test is not very reliable for measuring the change in performance of individual students (i.e., the difference between two tests taken years apart). As researchers for the CLA’s maker, the Council for Aid to Education, themselves note:

The CLA focuses on the institution (rather than the student) as the unit of analysis. Its goal is to provide a summative assessment of the value added by the school’s instructional and other programs (taken as a whole) with respect to certain important learning outcomes. […C]orrelations are about 0.35 higher (and explain three times as much variance) when the college rather than the student is used as the unit of analysis. This huge increase stems from the much higher reliability of the school level scores.

This raises substantial questions about the value of an analysis that takes the student as the unit of analysis and thus relies on the least reliable scores available. Sometimes we have to admit that we do not have the data to say anything at all with certainty, even when we wish we could and even when our own experience suggests a tempting explanation.
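
To see why the unit of analysis matters so much, here is another toy simulation (all of the variances are invented; this is not the CLA’s actual psychometric model): each school has a true “value added,” each student a true gain around it, and every sitting of the test adds measurement noise. Correlating two parallel sittings shows how much more reliable the school averages are than the individual gain scores.

    # A minimal sketch with invented numbers, not the CLA's actual psychometrics:
    # averaging over a school's students cancels most of the per-student noise,
    # so school-level scores agree across parallel measurements far better than
    # individual scores do.
    import random

    random.seed(0)

    N_SCHOOLS, STUDENTS_PER_SCHOOL = 100, 100
    SCHOOL_SD, STUDENT_SD, NOISE_SD = 0.2, 0.3, 1.0   # all three values are made up

    def correlation(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    student_a, student_b, school_a, school_b = [], [], [], []
    for _ in range(N_SCHOOLS):
        school_effect = random.gauss(0, SCHOOL_SD)
        a_sitting, b_sitting = [], []
        for _ in range(STUDENTS_PER_SCHOOL):
            true_gain = school_effect + random.gauss(0, STUDENT_SD)
            a_sitting.append(true_gain + random.gauss(0, NOISE_SD))  # first measurement
            b_sitting.append(true_gain + random.gauss(0, NOISE_SD))  # second measurement
        student_a += a_sitting
        student_b += b_sitting
        school_a.append(sum(a_sitting) / STUDENTS_PER_SCHOOL)
        school_b.append(sum(b_sitting) / STUDENTS_PER_SCHOOL)

    print(f"Reliability of individual gain scores: {correlation(student_a, student_b):.2f}")
    print(f"Reliability of school-average scores:  {correlation(school_a, school_b):.2f}")

The exact figures depend entirely on the invented variances; the size and direction of the gap is the point.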

Third, Arum and Roksa ran more than a thousand different statistical tests looking for correlations, but “with a confidence level of .05, for every 1,000 tests run odds are that 50 are false positives.” That means that even my cherished findings in favor of the liberal arts could simply be a mistake.
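
The arithmetic behind that worry is simple: at a .05 significance level, the expected number of false positives from 1,000 independent tests of true null hypotheses is 1,000 × 0.05 = 50. A toy simulation (my own, run on pure noise rather than their data) makes the same point:

    # A minimal sketch of the multiple-comparisons worry: run 1,000 significance
    # tests on pure noise (no real effects anywhere) at alpha = .05 and count how
    # many come out "significant" anyway. The expected count is 1000 * 0.05 = 50.
    import random

    random.seed(0)

    N_TESTS = 1000
    SAMPLE_SIZE = 50
    CRITICAL_Z = 1.96   # two-sided 5% threshold for a z-test

    false_positives = 0
    for _ in range(N_TESTS):
        # Two groups drawn from the *same* distribution: any "effect" is noise.
        group_1 = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]
        group_2 = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]
        diff = sum(group_1) / SAMPLE_SIZE - sum(group_2) / SAMPLE_SIZE
        se = (2 / SAMPLE_SIZE) ** 0.5   # standard error, known unit variance
        if abs(diff / se) > CRITICAL_Z:
            false_positives += 1

    print(f"Spurious 'findings' out of {N_TESTS} tests on noise: {false_positives}")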

It’s one thing to say that their standard of evidence is overly demanding, but another entirely to suggest that the seemingly solid foundation they offer could equally well rest on statistical happenstance, on overly keen researchers desperate to find any fit at all, and on an over-reliance on faulty data.

We have to be honest with ourselves about the limits of certainty within educational assessments. I still believe that high expectations, high reading loads, and a lot of writing are the key to teaching students to become critical and complex reasoners. But I have to admit that the evidence for my position thus far is not as reliable as I wish it were: the practice of critical thinking and complex reasoning depends on just such acknowledgments.
