
12 September 2013

Figlio responds to Criticism

Comments


Ed Kazarian
1.

Isn't this interesting! So definitely *not* 'adjuncts' in the 'freeway flyer' sense, and folks who generally stick around.

John Protevi
2.

There are interesting ethical questions about framing and authors' responsibility for the predictable uptake of a study in the current climate. In other words, given all the "school reform" bullshit that tries to blame teachers for systemic problems in K-12, and the parallel, ongoing adjunctification in HE (with its subtext of disengaged TT profs), I'm not sure we want to let these guys off the hook because they put in some plausible-deniability safeguards. In still other words, they had to know what headline writers would make of this study, and how many folks read only the headlines.

John Protevi
3.

See also this debate on the topic, relative to the Regnerus study of 2012: http://www.newappsblog.com/2012/06/john-corvino-article-in-the-new-republic.html

Alan Nelson
4.

Does this study use some measure of "learning outcomes" that is not complete bullshit? Let us grant that entry/exit exams are a good measure for introductory courses making use of simple technical methods: remedial math, or even intro calculus for example. I'd like to see someone propose a "learning outcome" metric for Philosophy 101. Obviously, if outcomes are measured by standard letter grades, that would require some careful controls.

Eric Schliesser
5.

Well, one interesting fact is that the authors did not consider re-evaluating Northwestern's internal ranking/scoring of incoming students. For one thing the study seems (unintentionally) to reveal is that there is a class of students who 'benefit' 'more' from teaching than others.

Ed Kazarian
6.

I discuss this a bit in my post. Their 'outcome' is a grade in a subsequent course in the same discipline. There are more details about how the comparisons work than that, but their suggestion is that this is a good measure of longer term, stable learning. I have to admit to punting on the question of how valuable that is as a measure -- though I can't see a good reason why it wouldn't, in a sufficiently large study (which this seems to be), at least be statistically reliable.

Ed Kazarian
7.

I'm not sure that'd be a reason to re-evaluate. As I understand it, there are a number of studies showing that lower performing students and students from less 'advantaged' backgrounds respond much better to classroom instruction than more 'self-directed' learning, a la the MOOC model. One good thing that may come out of this, again contingent on learning a lot more about what the 'difference' between the classes of faculty really amounted to, is that it provides more evidence that *classroom teaching* isn't something we should be looking to dispense with, especially on 'democratic' grounds.

Jonathan Kaplan
8.

Ed - but what I found odd was that the (small) effect on future grades was largest not for the weakest students (as identified by NU) but the 'middling' students...

In any event, the effect, while highly statistically significant, is objectively fairly small; that's the nice thing about giant samples! But for all the talk of the results being so "significant," it is important to keep in mind that an average increase of .06 on a 4-point scale may, or may not, be academically significant, even if it is statistically very significant.
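Kaplan's point can be made concrete with a back-of-the-envelope calculation. This sketch uses purely illustrative numbers (the mean difference of 0.06 matches the figure discussed above, but the standard deviation and sample size are assumptions, not figures from the study): with a large enough sample, even a tiny difference in means produces an enormous z-statistic, while the effect size in standard-deviation units (Cohen's d) stays small.

```python
import math

# Illustrative assumptions: a 0.06 GPA difference, with an assumed
# grade SD of 0.8 and an assumed 15,000 students per group.
diff, sd, n = 0.06, 0.8, 15000

# Two-sample z-statistic for a difference in means:
# standard error shrinks with sqrt(n), so large n inflates z.
se = sd * math.sqrt(2 / n)
z = diff / se

# Cohen's d: the effect measured against the spread of the data,
# which does not depend on sample size at all.
d = diff / sd

print(f"z = {z:.1f}")   # well past 1.96, so "highly significant"
print(f"d = {d:.3f}")   # small by any conventional benchmark
```

The same 0.06 difference would be statistically invisible with n = 100 per group; only the sample size changes between "not significant" and "highly significant," while the practical magnitude of the effect is identical.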

Ed Kazarian
9.

"the effect, while highly statistically significant, is objectively fairly small"

YES! This is really key. When statisticians say "significant," they mean the result is unlikely to be an artifact of sampling error, not that it is large. This is a marginal gain, at most.

George Gale
10.

Exactly. I can't think of an instance when a science journalist took the time to point this out. They slide over the distinction between ordinary "significance" and statistical "significance" effortlessly. It's a terribly unprofessional thing to do, if they do so knowingly. Of course, some might not even know the difference... :(
