salvēte, amīcī et sodālēs! This is the second in a series of posts about “ways to encourage more qualitative learning in a quantitative online environment,” as I said at the end of yesterday’s post. Of course, we should probably pause and define both qualitative and quantitative before we go any further! As I wrote the first draft of this post yesterday evening, I’d just come from a weekly book group where we were discussing, among other things, the ideas of forgiveness and reconciliation. It occurred to several of the long-time members of the group that the two words had clearly been defined very differently in the books they’d read together: what one author might call forgiveness, another might call reconciliation, and vice versa. Of course, I’m not sure we’ll ever agree completely on a definition for any word, but I do think it’s important to clarify just a bit before we go on.
By qualitative, I mean a more globally focused sense of the overall quality of one’s learning. A qualitative focus is certainly not hostile to numbers, but it isn’t bound by them or obsessed with them either.
By contrast, when I say quantitative, I imply a focus on, if not a preoccupation with, types of learning that can easily be measured or expressed with numbers: scores on a quiz, for example, or percentage of work completed – things like that.
I don’t mean to imply that qualitative measures are better than quantitative ones, or vice versa; I just want to point out that they’re very different things. It’s certainly been the case over the past few decades that American education, in particular, has focused hard on quantitative measures of learning and teaching – test scores, of course, but also all the other statistics that educators love to collect. In so doing, I’m afraid, we’ve discounted the things that are harder to count. And so, with the Tres Columnae Project and with the work I do with my face-to-face students, I want to restore a bit of balance. In fact, I even want to use some well-chosen numbers in a qualitative way. For example, with the self-assessments I mentioned in several posts last week, our learners are rating their perceived performance and comfort level on a numeric scale from 1 to 5 … but the point is not to “average some grades” or to determine a mean, median, mode, or any other statistic about the numbers themselves. Rather, the numbers are a tool that the learners (and their teacher) can use to observe their performance … especially their performance over time.
It may even be appropriate to produce charts, graphs, and statistics about the changes in those numbers over time … but the numbers, charts, and statistics are a tool for learning, not an end in themselves. Too often, when the focus is excessively quantitative, we educators forget that the numbers are a tool and start elevating them into a goal. When we do that, the results are too often disastrous – not just for the learning that we’re supposed to be measuring, but also for the accuracy and validity of the numbers themselves.
We’ve all read sad stories of students, teachers, and school leaders who respond to number pressure in wrong or unethical ways. When students cheat, they’re usually motivated by two factors: a desire to do well (which is commendable) and an inability (real or perceived) to do well “the right way” (which is not). How do we teachers respond in such cases? Too often, I’m afraid, we’re angry at what we perceive as an offense against ourselves, or against the purity of our academic discipline, or something like that. We see the fault, but we fail to acknowledge the underlying desire to do well … and we don’t help our learners channel that desire in a more positive direction. You may have seen a recent discussion on a textbook-specific listserv about a “sample test” that the publisher had made available on its website. No one came right out and said it, but it was evident that much of the ire stemmed from a fear that “students might find it and cheat.”
As a young teacher, I’m afraid I took pleasure in “catching cheaters” – and I’m sure I wasn’t alone. I sometimes forgot, though, that the purpose of catching them isn’t to punish them so much as it is to correct the problem and keep it from happening again! But then, we educators often seem to have trouble remembering that. We love to catch and punish, but then we’re surprised when our students repeat the problem behaviors – and we’re quick to blame them, or their parents, or society, or television, or computers, or video games, or whatever. Unfortunately, we’re slow to examine what part, if any, we and our methods of catching and punishing might have played in the problem! We’re also quite slow to consider such factors as
- whether the measures we’re using (the tests, quizzes, and such) actually measure what we’ve taught;
- whether we’ve adequately prepared our students for the measures;
- whether our students actually know and believe that they can be successful on the measures; and
- whether we’ve established an environment where “the numbers” are seen as a helpful tool for learners, not just a punishment (or a sorting method) imposed by teachers.
quid respondētis, amīcī?
- What do you think of my qualitative vs. quantitative distinction?
- What, to you, is the purpose of assessment? Is it a tool for learners or a sorting method for teachers? Or is it a combination of these?
- What would you say is the proper response when students take improper shortcuts?
- What do you think about my idea that catching and punishing are sometimes contributing factors in students’ occasional dishonesty?
- And how can the Tres Columnae Project, a self-paced online learning environment, avoid or minimize the possibility of widespread cheating?
Tune in next time, when we’ll explore these ideas more fully and look at some specific examples. intereā, grātiās maximās omnibus iam legentibus et respondentibus.