More Quality and Quantity, II

salvēte, amīcī et sodālēs! As I mentioned on Saturday, our posts this week will focus on two main themes:

continuing to explore the ideas of qualitative and quantitative approaches to assessment, and

thinking more about the idea of assessment as conversation, with many thanks to my colleague who mentioned this idea in the assignment she submitted as part of that staff-development course I teach for my face-to-face school district.

Ironically, as I write this post, it’s midterm exam week in my face-to-face teaching world … not a time when assessment usually feels like a conversation to students. Indeed, it sometimes feels more like a punishment, both to the students who have to take the exams and to the teachers who have to grade them.

But why is that? Any time that I find myself avoiding a task, I assume there’s some kind of a mismatch going on. Perhaps the task is too hard, or I’m not well-prepared for it. Perhaps it’s too easy and I find it insulting. Perhaps it’s just tedious because it doesn’t match my personality. Perhaps I’m avoiding it because it was an imposed task rather than a chosen one. And, of course, all of those factors can be involved when teachers procrastinate about writing midterm exams, or when students procrastinate about studying for them! 🙂

As it happens, my midterm exams are all written; I just need to take a quick look at them, make a few minor revisions, and get them copied before Tuesday (for my Latin III students) and Wednesday (for the I’s). I also need to deal with a small pile of papers generated over the past few days – one set from that period when I was first sick, and another from the middle of this week, as well as some last-minute makeup assignments that my students have been turning in. I haven’t been consciously avoiding these, but I realized I wasn’t as eager to look at them as I typically would be. I suppose it’s partly because I’ve been doing so much work on the assessment part of the Tres Columnae Project recently. Once you see the power of instantaneous corrective feedback, it’s hard to go back to “the old-fashioned way” of hand-grading things and the inevitable time lag that results. Fortunately, that small stack consists of summative rather than formative tasks, and they were mostly small-group collaborative efforts. So my students know how they’re doing with these tasks even if I don’t have “official” numbers yet.

And I think that’s really important. Even before I had articulated the distinction between qualitative and quantitative approaches to assessment, I was moving toward the qualitative approach. I’m a lot less interested in “official” numbers than I am in students’ learning … and if I had to choose, I’d rather that they knew how they were doing than that I did. Of course, I don’t want to choose: I obviously need to know how my students are doing, if only so that I can plan appropriate activities for them, and so do they, if only so they can figure out whether they need extra practice or are ready to move on. And if we all know, then assessment as conversation must be happening, at least to some degree.

But too often, in too many schools and classes, it isn’t happening. Assessment is still being used as a club rather than a conversation, a weapon rather than a window into greater understanding. If I wait more than a day to look at assessment results – unless it’s a pre-test for something that we’ll be doing in a couple of weeks – I’m obviously not going to be able to respond to any weaknesses or deficiencies revealed by those assessments. At best, they’ve become a snapshot of my students’ performance; at worst, they’re completely useless to everybody.

I suppose that lengthy delays in delivering assessment results to those who need them are probably a legacy of the factory-model approach that has governed American public education for such a long time. After all, if you’re running a factory, the cars, radios, and washing machines really don’t need to know how well they’re being built … and, in fact, they obviously can’t know such things! In a mid-twentieth-century factory, even the production workers probably don’t need to have much of an idea about the overall quality of the product; they just need to make sure to do their step correctly. For that matter, even the foremen and supervisors need not be concerned with the overall quality of the product; they could just focus on the work done by the workers under their supervision. And that model, where no one involved in the production is all that concerned with quality, continues to influence the operation of schools to this day.

Of course, factories can’t work that way anymore, and there’s a lot of pressure on schools to change their approach, too. But old habits die hard. Just the other day I heard a colleague mention her belief that students “have to have the right to fail” and the choice not to do what’s expected of them. Now, on one level, that’s true: in the end, no one can truly compel anyone else to do anything. But hidden under that truth was an expectation that lots of learners probably would choose to exercise this “right” – and that such a choice was perfectly OK with her. That’s where I part company with her – just as I would disagree with a manufacturing company that found it acceptable to ship 10% or even 5% of its products with significant defects. I wouldn’t buy stock in that company, and I definitely wouldn’t buy its products – especially if I needed 10 or 20 of them! In the same way, I can’t see how, as a society, we can possibly accept a 10% or even 5% failure rate on the part of our schools … let alone the 40-50% or more that seems to be routine in some large urban school districts.

quid respondētis, amīcī?

Tune in next time, when we’ll continue to explore these ideas … and begin to look at ways that the Tres Columnae Project and other online resources can make a real difference. intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on October 25, 2010 at 9:53 am

More Quality and Quantity, I

salvēte, amīcī et sodālēs! I’m sorry this post is a few hours later than usual today. It was a quiet, peaceful start to the weekend in my world, but it had been a very busy and tiring week … and it’s also the weekend before midterm exams in my face-to-face teaching world. And it was “Spirit Week” at school – always a tiring, if enjoyable, time – and an utterly beautiful Fall day today.

I was intrigued by all the connections with our conversation about qualitative and quantitative approaches to assessment that I noticed over the past few days … and by the connections to students’ Ownership of their learning. For example, I spent a good bit of Friday afternoon on the phone with a very concerned parent of one of my Latin I students, who’s apparently been struggling with all of his classes this year. I had hoped to hear from this set of parents, as I’d been very concerned about their son as well: he’s one of those quiet, very respectful, but very disengaged kids who would “fall through the cracks” at many large schools … and, from talking with his mom, he had apparently been hoping to fall through the cracks with us, too. Fortunately for him (but unfortunately for his desire), he has very caring parents and a small school with caring teachers, so we’re now working on creating conditions where taking Ownership will be less painful for him than his current practice of refusing Ownership.

I had a real shock, though, when I looked up D’s current grades in his other classes and discovered just how badly he’d been doing there. If, as a profession, educators had embraced the idea of qualitative assessment as we’ve defined it, all kinds of warning bells would have gone off weeks ago, when his grades began to decline. Think about it! If the purpose of assessment is to help teachers and learners, wouldn’t it have helped both D and his parents to know as soon as he started struggling? If we really lived in a qualitative world, I would have been in touch with them early this month, right around the time I got sick … or at least when I had recovered from that horrible, draining virus. But if we really lived in a qualitative world, I suppose there would be systems and procedures in place that allowed students, parents, and teachers to monitor progress much more easily … and that notified everyone when students’ performance began to slip.

Unfortunately, American public education usually takes a quantitative approach, as we’ve defined the term, when it comes to assessment. We’re much more interested in crunching numbers – in seeing statistical patterns, on the macro level, and “averaging grades” on the micro level – than we are in using the information to help individual struggling learners. The more I think about that, the less I understand it. Even if we fully embraced the factory model, the purpose of quality assurance in a factory is to improve the production process, thus lowering costs and decreasing production defects. So, if an inspector at the local plant discovered that a significant number of widgets had a defect that could be traced to Step 43 on the production line, most companies would be paying some significant attention to Step 43, if only for economic reasons.

And yet, in the “education industry,” we develop all kinds of statistics – statistics about student performance, about the number of students proficient with a given objective, about the number of students who miss a particular question on tests we administer in our own classroom. And then we stop. We don’t change the data into information by acting on it! For example, I’ve noticed this year that my Latin I classes complete less homework on Wednesday nights than they do on other nights … and I stopped there, influenced by decades of a quantitative approach to such information.

In a qualitative world, I would have acted on this discovery somehow:

  • Perhaps I would have asked my students if they had a lot of outside commitments on Wednesdays.
  • Given their responses, I might have adjusted the amount of homework assigned on Wednesday evenings, or I might have worked with them on time-management skills.
  • I might have gone to my colleagues and seen if they were noticing a similar pattern.
  • I might even have contacted colleagues at other schools in the district to see if they were experiencing a similar issue.
  • But no … I just observed the information and recorded it!
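
For what it’s worth, even that observing step – and the nudge toward action that a qualitative world would add – is easy to mechanize. Here’s a toy sketch in Python; the records, dates, and 25-point threshold are all invented for illustration, not drawn from any real gradebook.

```python
# Toy illustration: turning raw homework records into a weekday pattern
# worth acting on. All data below are invented.
from collections import defaultdict
from datetime import date

records = [
    # (date assigned, completed?)
    (date(2010, 10, 4), True),
    (date(2010, 10, 6), False),   # a Wednesday
    (date(2010, 10, 7), True),
    (date(2010, 10, 13), False),  # another Wednesday
    (date(2010, 10, 14), True),
]

done = defaultdict(int)
total = defaultdict(int)
for assigned, completed in records:
    weekday = assigned.strftime("%A")
    total[weekday] += 1
    done[weekday] += completed

overall = sum(done.values()) / len(records)
for weekday, count in total.items():
    rate = done[weekday] / count
    # Data becomes information when it prompts a question or an action.
    if rate < overall - 0.25:
        print(f"{weekday}: {rate:.0%} completion vs {overall:.0%} overall -- ask the class why")
```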

The participants in that online staff-development course I teach have mostly reached our unit about “Assessing Your Assessment Approaches” as I write this post. It’s always an eye-opener for them. We don’t use the terms qualitative and quantitative, but we do stress the idea that the results of both formal and informal assessments aren’t a goal in themselves. Instead, the purpose of assessment is to find out how our learners are doing so that we can make changes, if necessary, in our instruction. We may need to speed up, slow down, divide into different groups, or whatever … but the purpose of assessment is to have a basis for our future actions. Anyway, one of “my” participants made the best comment in an assignment I just finished reading. She said she’d always resented the time it takes to develop, grade, and record tests and quizzes, but she now realizes that assessment is a “conversation” (her term) between the teacher and the learners.

A conversation between teacher and learners! What a great definition for assessment … and for education in general! I’m still pondering all the implications of that … and how we can build such a conversation into the heart of the Tres Columnae Project.

quid respondētis, amīcī?

Tune in next time, when we’ll continue to look at the implications of qualitative and quantitative assessment approaches … and we’ll also think more about assessment as conversation. intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on October 23, 2010 at 7:15 pm

Returning to Life, II

salvēte, amīcī et sodālēs!  Since it’s been such a long time, I wanted to look back at the questions and issues I left you with at the end of our last “normal” post … back before the crazy period of sickness and ultra-busy times that intervened over the past couple of weeks.  You may recall that we were talking about a distinction between qualitative and quantitative approaches to assessment.  I had been working on how to phrase the distinction more clearly – and it finally came to me as I was responding to something that someone sent me as part of that online staff-development course I teach for my face-to-face school district.  She had made a comment about a set of district-wide benchmark assessments that we formerly used, but have since abandoned for a variety of reasons; her point was that the information from these was often helpful, but the assessments themselves took such a long time to give – and it took such a long time to get the information back – that the usefulness was compromised.  I thought that perfectly encapsulated the distinction between what I’m calling the qualitative and the quantitative approaches to assessment:

  • With a qualitative approach, the focus is on the quality of the learners’ learning.  Numbers may well be involved, but they’re seen as the means to an end of improving learning – for example, if a child consistently misses questions about Objective 3.2 (whatever that may be), she clearly needs help with the knowledge or skills involved.  But there’s not necessarily a focus on the bigger picture.
  • With a quantitative approach, on the other hand, the focus is on the numbers themselves.  One might note that 55% of the learners in a given class struggled with Objective 3.2, or that 73% of 4th-graders were proficient with Objective 3.3.  One might even look at trends over time to see whether these proficiency levels had increased or decreased, and consider how they compared to the levels in other schools or school districts or nations.  But there’s probably not a focus on how to help the individual children, or on the specific teaching strategies a teacher might employ with a child who struggles with Objective 3.2.  (The sketch just after this list tries to make the contrast concrete in code.)
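
To be clear about that sketch: it’s my illustration, with an invented objective label, invented scores, and an invented proficiency cutoff – nothing from any real testing system.  The point is that the very same response data can feed either lens; the difference lies entirely in what happens next.

```python
# Hypothetical per-student results on questions tagged "Objective 3.2".
results = {
    "Ana":   [True, True, True, True],
    "Ben":   [False, False, True, False],
    "Clara": [True, False, True, True],
    "David": [False, False, False, True],
}

CUTOFF = 0.75  # invented proficiency threshold for the example

# Quantitative lens: one number that describes the group.
proficient = sum(1 for answers in results.values()
                 if sum(answers) / len(answers) >= CUTOFF)
print(f"{proficient / len(results):.0%} of the class is proficient with Objective 3.2")

# Qualitative lens: the same data, read as a to-do list for the teacher.
for student, answers in results.items():
    if sum(answers) / len(answers) < CUTOFF:
        print(f"{student} needs help with Objective 3.2 -- plan some reteaching")
```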

If you’re a long-time lēctor fidēlissimus, you can probably guess that I’ll be arguing for a creative synthesis of these two approaches.  Each, after all, has some strengths that the other lacks, and neither, by itself, will improve both the big picture and the small picture of students’ learning.  You’d be right … but I think the quantitative approach has been significantly over-emphasized in factory-model schools!  And it hasn’t been emphasized in a way that led to improvements in “production quality,” either.  I’ll have more to say about that in tomorrow’s (or Saturday’s) post.

Anyway, here are the questions I left us with a couple of weeks ago:

  • What do you think of the redefinition of qualitative and quantitative approaches in this post?
  • What types of information do you want to collect about your students?
  • What are some ways that we can take raw, unprocessed data and transform it into helpful information?
  • And how can we use such information – whether we get it from the Tres Columnae Project or from another source – to help our students grow in specific areas?

I’d also like to add a couple of new questions:

  • How might we work toward a creative synthesis of the qualitative and quantitative approaches to assessment?
  • Do you think it’s even possible for a quantitative approach to help teachers teach – and learners learn – more effectively?  Or do you think a quantitative approach, by its very nature, can only measure, but never improve teaching and learning?

quid respondētis, amīcī?

It’s good to return to life, and I look forward to hearing from you as this conversation develops.  intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on October 21, 2010 at 10:13 am

Quality and Quantity, III

salvēte, amīcī et sodālēs!  As I looked back over yesterday’s post, I realized I left out one very important distinction in my definitions of qualitative and quantitative approaches to teaching and learning.  Both can certainly use numbers, but a quantitative approach is all about manipulating those numbers – producing an average, for example – while a qualitative approach is more concerned with what the numbers represent.

Of course, as a teacher in an American public school, I find that I use elements of both approaches.  One important part of my job is to report an “overall grade” – a single number that somehow represents my students’ overall performance with five distinct curricular strands, work habits, “percentage of correct responses” (to quote part of a policy about grades that I read somewhere), and whatever other factors I, as the teacher, find important enough to include.  If you’re a long-time lēctor fidēlissimus, you know that I’m a bit skeptical of that single number, and you’ve probably read some of my prior posts about ways that I try to give Ownership of that number to my students.  I’m actually much more interested in the kinds of numbers that a qualitative approach can give:

  • My face-to-face Latin I students took a test yesterday, and many of them were struggling with singular and plural verb forms.  I’m curious to compare each student’s number of correct responses from that test with the number of correct responses on a quiz we took today … after we had some extra practice with the difficult verb forms.
  • At the start of each grading period, I try to give a diagnostic reading assessment.  There’s not a “grade” per se, but I want to know how many details my students can find in a Latin passage in a fairly short amount of time.  Then, as we continue to work on reading speed and fluency, I’m curious to see if that number increases over time.
  • My Latin I students also did a rather complicated, collaborative vocabulary review activity today.  I’ll be curious to see if they can match more verbs with their meanings when we do a similar activity next week.

I realize that all of these examples are focusing not on individual numbers, nor even on calculations involving those numbers, but on trends in those numbers over time.  Is that the biggest difference between a qualitative and a quantitative approach?  I’m not sure … I’ll have to ponder that myself!
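
To make the trend idea concrete with invented numbers: a before-and-after comparison like the first example above might look something like this in code – no averages, just each learner’s own direction of change.

```python
# Invented counts of correct responses: the test, then the follow-up
# quiz after extra practice with the difficult verb forms.
before = {"Ana": 4, "Ben": 2, "Clara": 6}
after = {"Ana": 7, "Ben": 6, "Clara": 6}

for student in before:
    change = after[student] - before[student]
    trend = "improving" if change > 0 else "holding steady" if change == 0 else "slipping"
    print(f"{student}: {before[student]} -> {after[student]} correct ({trend})")
```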

One of the great benefits of an online learning environment like the Tres Columnae Project is that it can very easily automate the record-keeping needed for both qualitative and quantitative approaches. As soon as a student completes an activity, his or her work can be scored immediately, and the system can capture all kinds of numeric data (sketched, hypothetically, in code just after this list):

  • how long the student took to answer each question;
  • which questions were answered correctly;
  • what specific Knowledge, Skills, or Understandings were tested by each question;
  • how the student has progressed – or failed to progress – in Knowledge, Skill, and Understanding over time.
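
Here’s the hypothetical sketch promised above – emphatically not the actual Tres Columnae data model, just one plausible shape for a captured response, with a simple progress query over a log of them. Every field name is my assumption.

```python
# A minimal, hypothetical per-response record -- not the real
# Tres Columnae schema. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Response:
    student: str
    question_id: str
    objective: str       # the Knowledge, Skill, or Understanding tested
    correct: bool
    seconds_taken: float
    attempted_on: str    # ISO date string, kept simple for the sketch

log = [
    Response("Ana", "q01", "verb endings", False, 42.0, "2010-10-05"),
    Response("Ana", "q07", "verb endings", True, 18.5, "2010-10-19"),
]

# Progress -- or failure to progress -- over time on one objective:
history = sorted(
    (r for r in log if r.student == "Ana" and r.objective == "verb endings"),
    key=lambda r: r.attempted_on,
)
for r in history:
    print(r.attempted_on, "correct" if r.correct else "missed", f"{r.seconds_taken}s")
```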

As I reflect on the kinds of data that teachers often receive about students – things like their “overall score” or “proficiency level” on a standardized test – it seems to me that more specific information is much more helpful.  Little Johnny or Suzie scored a “Level II” on the 8th grade Language Arts Exam … but what were the areas of strength and weakness?  And what progress has Johnny or Suzie made, or failed to make, in particular Language Arts skills over the past few years?  Score reports are often silent in these areas, but I think we need to break the silence if we really want to help Johnny or Suzie progress as a learner.

quid respondētis, amīcī?

  • What do you think of the redefinition of qualitative and quantitative approaches in this post?
  • What types of information do you want to collect about your students?
  • What are some ways that we can take raw, unprocessed data and transform it into helpful information?
  • And how can we use such information – whether we get it from the Tres Columnae Project or from another source – to help our students grow in specific areas?

If all goes well, we’ll address these questions in our next post … and I sincerely hope that next post will happen tomorrow.  Unfortunately, this is the beginning of that crazy period I mentioned in yesterday’s post, so it may be Friday or even Saturday … and I apologize in advance.  If it does take a few days, I hope you lectōrēs fidēlissimī will continue the conversation, either by email or by comments here.

intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on October 6, 2010 at 10:16 am

Quality and Quantity, II

salvēte, amīcī et sodālēs! This is the second in a series of posts about “ways to encourage more qualitative learning in a quantitative online environment,” as I said at the end of yesterday’s post. Of course, we should probably pause and define both qualitative and quantitative before we go any further! As I wrote the first draft of this post yesterday evening, I’d just come from a weekly book group where we were discussing, among other things, the ideas of forgiveness and reconciliation. It occurred to several of the long-time members of the group that, in the various books they’d read together, those words had clearly been defined very differently: what one author might call forgiveness, another might call reconciliation, and vice versa. Of course, I’m not sure we’ll ever agree completely on a definition for any word, but I do think it’s important to clarify just a bit before we go on.

By qualitative, I mean a more globally focused sense of the overall quality of one’s learning – a qualitative focus isn’t exactly hostile to numbers, but it isn’t bound by them or obsessed with them either.

By contrast, when I say quantitative, I imply a focus on, if not a preoccupation with, types of learning that can easily be measured or expressed with numbers: scores on a quiz, for example, or percentage of work completed – things like that.

I don’t mean to imply that qualitative measures are better than quantitative ones, or vice versa; I just want to point out that they’re very different things. It’s certainly been the case over the past few decades that American education, in particular, has focused hard on quantitative measures of learning and teaching – test scores, of course, but also all the other statistics that educators love to collect. In so doing, I’m afraid we’ve discounted the things that are harder to count. And so, with the Tres Columnae Project and with the work I do with my face-to-face students, I want to restore a bit of balance. In fact, I even want to use some well-chosen numbers in a qualitative way. For example, with the self-assessments I mentioned in several posts last week, our learners are rating their perceived performance and comfort level on a numeric scale from 1-5 … but the point is not to “average some grades” or to determine a mean, median, mode, or any other statistic about the numbers themselves. Rather, the numbers are a tool that the learners (and their teacher) can use to observe their performance … especially their performance over time.
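
As a toy illustration of that point (the weekly ratings below are invented): two learners with identical averages can be on opposite paths, and only the sequence of numbers – the observation over time – reveals it.

```python
# Invented weekly self-ratings on the 1-5 comfort scale described above.
ratings = {
    "learner A": [1, 2, 3, 4, 5],   # steadily gaining confidence
    "learner B": [5, 4, 3, 2, 1],   # steadily losing it
}

for learner, weekly in ratings.items():
    mean = sum(weekly) / len(weekly)
    direction = "rising" if weekly[-1] > weekly[0] else "falling"
    # Both means are 3.0 -- the average hides exactly what a teacher needs to see.
    print(f"{learner}: mean {mean:.1f}, trend {direction}")
```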

It may even be appropriate to produce charts, graphs, and statistics about the changes in those numbers over time … but the numbers, charts, and statistics are a tool for learning, not an end in themselves. Too often, when the focus is excessively quantitative, we educators forget that the numbers are a tool and start elevating them into a goal. When we do that, the results are too often disastrous – not just for the learning that we’re supposed to be measuring, but also for the accuracy and validity of the numbers themselves.

We’ve all read sad stories of students, teachers, and school leaders who respond to number pressure in wrong or unethical ways. When students cheat, they’re usually motivated by a couple of factors: a desire to do well (which is commendable) and an inability (real or perceived) to do well “the right way” (which is not commendable). How do we teachers respond in such cases? Too often, I’m afraid, we’re angry at what we perceive as an offense against ourselves, or against the purity of our academic discipline or something. We see the fault, but we fail to acknowledge the underlying desire to do well … and we don’t help our learners channel that desire into a more positive direction. You may have seen this recent discussion on a textbook-specific listserv about a “sample test” that the publisher had made available on its website. No one came right out and said it, but it was evident that a lot of the ire stemmed from a fear that “students might find it and cheat.”

As a young teacher, I’m afraid I took pleasure in “catching cheaters” – and I’m sure I wasn’t alone. I sometimes forgot, though, that the purpose of catching them isn’t to punish them so much as it is to correct the problem and keep it from happening again! But then, we educators often seem to have trouble remembering that. We love to catch and punish, but then we’re surprised when our students repeat the problem behaviors – and we’re quick to blame them, or their parents, or society, or television, or computers, or video games, or whatever. Unfortunately, we’re slow to examine what part, if any, we and our methods of catching and punishing might have played in creating the problem! We’re also quite slow to consider such factors as

  • whether the measures we’re using (the tests, quizzes, and such) actually measure what we’ve taught;
  • whether we’ve adequately prepared our students for the measures;
  • whether our students actually know and believe that they can be successful on the measures; and
  • whether we’ve established an environment where “the numbers” are seen as a helpful tool for learners, not just a punishment (or a sorting method) imposed by teachers.

quid respondētis, amīcī?

  • What do you think of my qualitative vs. quantitative distinction?
  • What, to you, is the purpose of assessment? Is it a tool for learners or a sorting method for teachers? Or is it a combination of these?
  • What would you say is the proper response when students take improper shortcuts?
  • What do you think about my idea that catching and punishing are sometimes contributing factors in students’ occasional dishonesty?
  • And how can the Tres Columnae Project, a self-paced online learning environment, avoid or minimize the possibility of widespread cheating?

Tune in next time, when we’ll explore these ideas more fully and look at some specific examples. intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on October 5, 2010 at 10:20 am

Quality and Quantity, I

salvēte, amīcī et sodālēs! In today’s post we’ll develop some preliminary answers to an important question I asked on Friday. After describing some of the ways in which I’ve moved away from numeric grades to constructive feedback on certain assignments, I asked:

How do you suppose these qualitative measures could be adapted to the quantitative, number- and data-driven format of an online environment?

In other words, even though computers are so good at numbers, how might we get away from a number focus for assessments in the Tres Columnae Project?

If you’ve looked at the sample assignments in the Instructure Demo Course for Lectiō Prīma, you’ve probably noticed that they’re all set up as “practice quizzes” rather than “graded quizzes.” There are a couple of good reasons for that:

First, if we set them up as “graded quizzes” in the Instructure system, only enrolled students would be able to see them … which certainly makes sense when you stop and think about it. But the whole purpose of the demo course is to demonstrate some of the assessments that our subscribers will be able to use (and create for each other) when Version Beta is available. Since we wanted everyone to be able to see them, the only viable solution was to create “practice quiz” versions.

Once I had made the “practice quizzes,” though, I realized that I liked the idea of a low-stress, low-stakes assessment, especially for newer or more difficult material. You may have seen this New York Times article, which I’ve mentioned a couple of times in previous posts, or you may have even read the underlying study about the positive effects of practice quizzes and practice tests on learning and retention. I’ve noticed with my face-to-face Latin students that they really benefit from low-stress, low-stakes assessments … even if those assessments are just a reconfigured version of an ungraded practice activity I might have used in the past.

Somehow the idea that someone will be looking at the assignments – or, in the case of an online exercise, that you’ll get some form of instantaneous feedback from the assignment itself – helps you, as a learner, focus on what you’re doing. In my own life, I find that I do a better job of lesson planning when I know that someone besides me will actually look at the plans … and I’m certainly more consistent at writing for you lectōrēs fidēlissimī than I ever was when I maintained a private, “just for me” journal. Apparently the idea of an audience is a big help … and of course we’re probably all aware of the research about the positive effects on student writing when there’s an authentic audience, not just an “audience of one” armed with a red pen! 🙂

As you know, one of the driving forces behind the Tres Columnae Project is the idea of providing a “real” audience for our learners’ Latin writings, illustrations, audio clips, video clips, and other creative efforts. I just heard from the teacher at one of our piloting schools; her students are very excited at the idea of creating additional characters (more animals, for example, and grandparents for familia Valeria), and I’m eager to see what they develop. They’ve truly taken Ownership of the stories and characters, just as I hoped they would! She also mentions that they love to take and retake the practice quizzes until they have perfect scores … then proudly share their perfect scores with her. I wonder if they’d be equally engaged if they had to take “real” quizzes and have a “permanently” recorded score?

So one way to make a quantitative, computer-based learning system more qualitative is to de-emphasize the importance and permanence of the numbers, and another is to emphasize the virtual community over the individual numbers. But what else can we do to encourage our learners, especially the ones who may struggle with reading, or with grammatical concepts, or (as one of my favorite former students used to say) “with everything – but I love Latin anyway”?

quid respondētis, amīcī?

This is a difficult and extremely full week in my face-to-face world … a lengthy faculty meeting tonight followed by an evening function; a possible rushed trip out of town Tuesday afternoon for dealer service on one of the family cars; my daughter’s track meet Wednesday; Parent-Teacher Conferences at school on Thursday; and the wedding of dear friends Friday evening. And of course I’m also busy with “normal” face-to-face teaching responsibilities, as well as with the beginning of that online professional-development class I teach. I hope to maintain a somewhat normal schedule of posts, but I hope you’ll forgive me if they’re a bit short … or a bit infrequent, for that matter! Next time, if all goes well, we’ll continue to look at ways to encourage more qualitative learning in a quantitative online environment. intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on October 4, 2010 at 10:07 am

Testing, Testing, V

salvēte, amīcī et sodālēs! As I promised in yesterday’s post, we’ll wrap up that list of assignments and assessments I’ve been using in my face-to-face classes this year, and we’ll also take a look at ways that such assignments might be adapted to an online environment like the Tres Columnae Project.

As I write the first draft of this post on Thursday evening, I’ve just sent a welcome message to the participants in the fall session of that online professional-development course about Differentiated Instruction that I teach for my face-to-face school district. Even though it can be time-consuming, I really enjoy working with the participants in the course. Over a six-week period, we typically move from a group of strangers (many of whom are “just fulfilling a requirement”) to something very much like the Joyful Learning Community that the Tres Columnae Project hopes to build. I’ve never known exactly how that happens, but I think it’s because we form a metaphorical circle around a truly interesting, engaging Subject (to borrow a term from the work of Parker J. Palmer that will be familiar to truly long-time lectōrēs fidēlissimī). As teachers and learners ourselves, we all want our students to be successful, and the course is all about what successful learning looks like and how to make it happen in a face-to-face classroom.

Perhaps that’s another reason why I enjoy teaching the course so much: it gives me an opportunity to learn from the participants in the course, just as I get to learn every day from my face-to-face students. As teachers, we sometimes forget how much we learn – and how much we need to learn – from our students. Obviously we have to learn something about them as learners so that we know best how to reach them, and sometimes we learn about connections between their lives and the subjects we teach. Sometimes we even get great strategies or lesson ideas from our students – if they trust us, they’ll suggest that we try something that worked well in Mrs. X’s or Mr. Y’s class. Just the other day, one of my Latin I students asked if there was a song we could use to help her remember “how verbs work.” I can’t think of an existing song, but developing one is going to be an option for her class as they review verbs over the next few days.

Allowing and encouraging students to develop their own assignments and assessments is a “growth area” for me at the moment. I’ve always been committed to the idea in theory, and as you know, I’ve sought student input and suggestions for a long time. But only this year have I really started letting go of my Ownership of my Latin I classes in particular. For the past few years, I had been striving to develop the “perfect” set of Latin learning materials – and then the idea for the Tres Columnae Project came to me. As I’ve worked on it, and as I’ve seen the implications for my face-to-face teaching, I’ve realized that “perfect” learning materials are an elusive goal. Every class is different, every student is different, every day is different, so the “perfect” materials, even if they could be developed, would immediately be imperfect for the next group that worked with them.

Instead of striving for timeless, unchanging perfection, I’ve been learning to seek a good balance or fit between students and materials, and I’ve re-learned and re-learned the importance of learners’ Ownership of the process as well as the outcomes. Hence the song idea for my Latin I classes … and hence a very directed, closed-ended review of subjunctive verbs for my Latin III’s on Thursday. We’d done more open-ended work, but they were struggling with too much freedom and too many choices, and they were delighted by a more structured, less open-ended task today. My Latin I students are actually more comfortable with open-ended tasks than the III’s at the moment, but even they needed and wanted a more structured, closed-ended task today. And all the groups have been asking for specific work with vocabulary during class, a request which surprised and confused me at first! For the last several years, most of my classes had not needed or wanted to do vocabulary work in class; they liked studying by themselves. But for whatever reason, the current groups love to practice and check vocabulary in class … and the more work we do with it, the better their reading-comprehension skills. That has often not been true in the past, which is another reason I’d been avoiding vocabulary work in class for a while. It was very frustrating to see and hear students who could perfectly define a word in isolation, but would look at me in utter confusion when that same word appeared in the context of a reading or listening passage!

Aside from student-driven work in general, and vocabulary practice in particular, the other area where I’ve been challenging myself this year has to do with formative and informal assessments. I have used these for a long time, but for most of that time I moved too quickly to “put a number” and check for accuracy … which is sad and ironic, I suppose, given my publicly stated disapproval of teachers who “check homework for accuracy.” But there I was, checking classwork for accuracy before my students were ready! 😦 I’ve started listening to them, paying closer attention to the informal self-assessments I described earlier this week, and giving feedback without numeric grades more often, especially when we’re in the early stages of working with a new concept.

quid respondētis, amīcī?

  • What do you think of my list of assignments and/or assessments?
  • What do you think about formative and informal types of assessment?
  • And how do you suppose these qualitative measures could be adapted to the quantitative, number- and data-driven format of an online environment?

Tune in next time for some preliminary ideas … and for any responses you’re willing to share. intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on October 1, 2010 at 10:00 am

Testing, Testing, IV

salvēte, amīcī et sodālēs! Today we’ll continue to wrap up our series of posts about testing and assessment with the remaining items on that list I referred to in Tuesday’s post – a list that began with various types of self-assessment, both formal and informal, that help my students build a sense of Ownership of their learning.  I had written a draft of this post on Tuesday evening, but then life intervened in the form of a nasty cold and the bad weather that’s been affecting much of the eastern United States for the past few days.  I’m still battling the cold, but it’s not any worse than it was.  As for the remnants of Tropical Storm Nicole, they led to a two-hour weather delay for most school districts in this part of the world, but it’s still too dark as I write this post to see what else they’ve done.  When my favorite-and-only dog and I went out to get the newspaper just now, there were a lot of big puddles in yards, but no sign of street flooding in our neighborhood.

Before we go on to the rest of that list I started Tuesday, I should probably say that some of the assessments I’ll describe here – and some of their electronic equivalents in the Tres Columnae Project – may blur the line between assessments and assignments that some teachers rigidly maintain. I’ve always been a bit skeptical of that distinction myself, but some teachers (and some experts in the field of assessment) would argue that an assignment or activity allows learners to practice a new skill, while an assessment (whether formal or informal, formative or summative) allows them to demonstrate what they’ve learned. It seems to me that any learning activity will necessarily involve both things: some additional practice of the “new skill” or “new knowledge” or “new understanding” as well as an opportunity for the learners and their teachers to see how well the learners have mastered that “new thing.”

If you’re among the long-time lectōrēs fidēlissimī of this blog, you know that I’m very skeptical of neat distinctions and simple dichotomies. I have a tendency to look for a creative synthesis, a “Third Alternative” in Stephen Covey’s memorable term. That’s certainly involved in my reluctance to draw simple distinctions between assignments and assessments. But after nearly two decades leading a face-to-face Latin classroom, I’ve found that the neat assignment-assessment distinction often breaks down in the real interactions among me, my students, learning materials, and the learning goals we set.

Anyway, here are some more of the assessments (or assignments, or hybrid assignment-assessments) that have been working well in my face-to-face classroom this year:

We’ve been doing a lot of random practice of grammatical forms using multi-colored dice and a key that translates the roll of the dice into a form to be generated. For example, my Latin I students have “officially” learned eight Latin verb forms so far:

  • first, second, and third-person singular present tense verbs;
  • third-person plural present tense verbs;
  • third-person singular and plural imperfect tense verbs; and
  • third-person singular and plural perfect tense verbs.

As you know, the order of presentation in the Tres Columnae Project is somewhat different, but we’ve held ourselves responsible only for the forms introduced in our “official” textbook. Of these eight, the third-person singular present and perfect forms are used as the dictionary entry for the moment – of course, we’ve also seen the “real” dictionary entry for a Latin verb, and we’ve learned how the two listings correlate, but we’re not yet “officially” responsible for standard dictionary listings. Anyway, that leaves six other verb forms that my students can generate, so it’s simple to use a single die to determine which form they’ll make: 1 = first-person singular present tense, 2 = second-person singular present tense, etc.
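
Spelled out in code, the key might look like this. The first two entries are the ones given above; the order of the remaining four is my own guess at the “etc.”

```python
import random

# One die selects among the six forms students can generate. The
# ordering of entries 3-6 is assumed, not quoted from the classroom key.
FORM_KEY = {
    1: "first-person singular present",
    2: "second-person singular present",
    3: "third-person plural present",
    4: "third-person singular imperfect",
    5: "third-person plural imperfect",
    6: "third-person plural perfect",
}

roll = random.randint(1, 6)  # one roll of a six-sided die
print(f"You rolled {roll}: give the {FORM_KEY[roll]} form of your verb")
```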

As my students work in pairs or small groups to make verbs this way, I have a wonderful opportunity to observe both their thought processes and the actual products, the conjugated verbs … and they have a much more engaging, meaningful way to work with verb endings than a “traditional” conjugation drill. I’m reminded of the excellent point my colleague made in an email this weekend – the one I mentioned the other day about games as “fun tests.” My students don’t really feel like they’re being tested, since the activity is game-like and engaging, but they produced a large number of well-made verb forms in a short time today – and they actually begged for more time with the activity, too!

Another game-like assignment-and-assessment that’s been very successful this year involves small groups or pairs working together to find as many details as possible in a reading selection. One could obviously use Tres Columnae Project stories for this, and we’ll be doing that later in the week, but one can also use textbook stories, fables, or other types of texts … and one can ask questions about the passage in English, Latin, or some combination. For my Latin I’s, the game is simple: they read one or two stories, I keep track of the total number of details they find, and three sets of winners receive a small prize – the first group to finish, the group with the most right answers, and the group with the greatest improvement over the last time we played. My Latin III students have a more complicated, longer-term game with an actual (paper) game board; they’ve been playing on and off for about three weeks, but no one has yet advanced all the way up the six-page CVRSVS HONORVM to become consul and ultimately Emperor. That will probably happen tomorrow, as one group is quite close to completion.
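
If it helps to see it concretely, the three-winner scoring for the Latin I version of the game is simple enough to sketch; the group names and all the numbers here are invented.

```python
# Hypothetical results for one round of the detail-finding game.
groups = {
    # group: (minutes to finish, details found, details found last time)
    "group 1": (18, 22, 15),
    "group 2": (15, 19, 18),
    "group 3": (20, 25, 24),
}

fastest = min(groups, key=lambda g: groups[g][0])
most_correct = max(groups, key=lambda g: groups[g][1])
most_improved = max(groups, key=lambda g: groups[g][1] - groups[g][2])

print("First to finish:", fastest)
print("Most right answers:", most_correct)
print("Greatest improvement:", most_improved)
```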

In any case, with all the different classes, I’m able to watch my students’ reading strategies, see how much vocabulary assistance they need, and steer them to closer examination of the passages they’re reading – all without that sense of drudgery and dread with which so many students greet the idea of reading in any language. I’ve used versions of the game for years, but I think the secret to its success this year is that I’ve found the right balance (or at least the right balance for my current groups of students) between the intrinsic rewards of the task itself and the extrinsic rewards of winning the game. Until you find that balance, it’s easy for learning games to falter – they can lose their emphasis on the learning, of course, but they can also deteriorate from a game into a boring activity that students dread.

quid respondētis, amīcī?

  • What do you think of the blending of assignments and assessments I’ve described?
  • How do you think it would work in your face-to-face teaching and learning environment?
  • What about the use of learning games as assignment and assessment?
  • Do you see any pitfalls I haven’t mentioned?

Tune in next time, when we’ll continue to focus on that list of assessments and assignments that have been working well in my face-to-face classroom this year … and we’ll also consider how they might be adapted for an asynchronous online environment like the Tres Columnae Project. intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on September 30, 2010 at 9:28 am

Testing, Testing, III

salvēte, amīcī et sodālēs! Today we’ll start to wrap up our series of posts about testing and assessment with a brief description of some alternative assessment strategies I’ve been using with my face-to-face students this year. Several of these are “old favorites” that I’ve used for years – and a few are really old favorites that I used years ago but had stopped using for various reasons. Collectively, their purpose is to build the Joy, the Community, and the sense of Ownership in my classes while also giving me (and my students) a good sense of how they’re doing with the Knowledge, Skills, and Understandings we’ve been working on together.

The longer I work with students, the more convinced I am that the primary customer of assessment results ought to be the students themselves. After all, it’s their learning at stake, not mine; their high-school transcripts, not mine; their future plans, not mine. Doesn’t it make sense that they, not I, should be most interested in the results of any assessments I use with them? After all, if I’ve done my job at all, I probably have a pretty good sense of how my students will perform on a given measure even before I give that measure to them – but depending on their maturity level and how well they’ve developed their ability to self-assess, they probably don’t know … or at least they probably don’t know as well as I do.

And yet I know so many teachers who want to keep students’ overall grades – and even their individual test scores – secret from the students who have, presumably, done the work that earned those grades. What’s up with that? Those same colleagues, when they go to the doctor for a medical test, would be outraged if the doctor refused to tell them the results – after all, they’d say, it’s my body and my health! So tell me the results! And they’d be quite right … and yet they wouldn’t see any contradiction in returning to school the next day and refusing to answer a student’s question about how he or she was doing in class!

Unlike those inconsistent colleagues of mine, I’m firmly convinced that my students need to know how they’re doing – and they really need to have Ownership of how they’re doing as well as of what they’ve been learning. So I’ve gradually been redesigning my system of assessments – and the ways I give feedback on assessments – to put the focus more squarely on students’ Ownership of the results. Here are a few of the critical elements of the new system … and if you’re a long-time lēctor fidēlissimus (or fidēlissima), I’m sure you’ll see obvious connections with the assessments we’ve developed for the Tres Columnae Project.

One of the biggest changes I’ve made is the incorporation of a lot more self-assessment by students. Sometimes this is very informal (on a scale from 1-5, where 5 is “quite well,” hold up the number of fingers that represents how well you understand the new concept), and sometimes it’s more formal. The more I use self-assessment, the better my students tend to do … so I’ve become a big believer in it. If you’ve looked at the assessment components of the Tres Columnae Project, and especially at the items on display in the Instructure Demo Course for Lectiō Prīma, you’ve probably seen how much self-assessment we ask our participants to do. In a perfect world, I think I’d ask for a self-assessment after each explanation and each practice exercise, and we’ve come pretty close to that … but not so close that self-assessment becomes a tedious chore!

In addition to the informal self-assessments, I also ask my face-to-face students to do a more formal, journal-type self-assessment after they take each “formal” test but before they see their scores. Part of this “Self-Assessment of Preparation” is a chart where my students rate their comfort level with each new (or familiar) concept or skill, using a similar scale to the five-finger one I described above, but part is a series of open-ended prompts:

  • My greatest strength as a Latin student is ….
  • My area of greatest concern is ….
  • My area of greatest improvement over the past few weeks has been ….
  • I need to ….
  • My group needs to ….
  • I would like Mr. S. to ….

That last question has been extraordinarily helpful, and extraordinarily humbling, for me as a teacher. Sometimes I get really good, specific suggestions (“practice vocabulary with us,” for example, or “re-explain how verbs work,” which I’ll be doing as you read this post today); sometimes I get silly suggestions; and sometimes I’m asked to “continue what he is doing” or “change nothing.” In any case, I find that my students do take increasing amounts of Ownership of the whole learning process when they have these chances not only to assess their own performance, but to give feedback to me and (anonymously) to their classmates.

In the interest of time, I think I’ll save the other items on my list of assessments for tomorrow’s post.

quid respondētis, amīcī?

  • What forms of assessment work well in your face-to-face teaching and learning situation?
  • Are there forms that used to work well but have stopped being effective?
  • Have you been experimenting with anything new and different?
  • What role for technology in the assessment process do you see?
  • Are there any technological pitfalls you’d like to avoid?
  • And what forms of assessment would you like to see – or not see – in Version Beta of the Tres Columnae Project and beyond?

Tune in next time, when we’ll look at some other assessments on my list and finish wrapping up this series of posts about Testing. intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on September 28, 2010 at 8:36 am

Testing, Testing, II

salvēte, amīcī et sodālēs! I hope I didn’t test your patience too much by stopping Saturday’s post where I did … right before describing an alternative to a traditional test that allows me (and my students, too) to observe students’ thought processes as well as the product of their thinking. In prior posts, I’ve described a strategy I call the Relaxed Rotating Review, in which my face-to-face students rotate, as groups of four or five, through a series of different stations in preparation for a “traditional” pen-and-paper test. They have one last opportunity to ask me questions about concepts that are difficult, and they also have one additional opportunity to watch their friends and classmates interact with the concepts. In a well-structured group, one where everyone has taken Ownership of his/her learning, the Rotating Review can be amazingly helpful. On lots of occasions, I’ve seen students suddenly grasp an idea, a strategy, or even a vocabulary item that had eluded them for days or weeks.

Of course, for students who haven’t yet taken Ownership of their learning – and for those who are convinced that they can’t succeed academically – the Rotating Review can be pretty frustrating. But it does give me – and their classmates who have taken Ownership of their own learning – another chance to show them that success is possible and that the risk of Ownership is worth the rewards. (When I stop and think about it, I find it amazing that our factory-model schools have managed to remove any idea of Ownership of learning in only nine or ten short years. I look at the four-year-olds through fourth-graders in the children’s Sunday School classes I work with each week, and I find that they all still have both Joy and Ownership in the learning we do together. I wonder how many of them will lose the Joy and the Ownership by the time they’re my “regular” students’ age … and what I, or anyone else, can do to prevent such a loss.)

Anyway, given the benefits of the Rotating Review for my students, I’ve experimented with small-group collaborative work on summative tasks, and the current experiment seems to have worked quite well. I told my students on Wednesday that, depending on how things went for the rest of the week, we could select among three different summative tasks on Friday (for the Latin I students) and Monday (for the Latin III’s):

  1. A “traditional,” individual cumulative examination;
  2. A paired activity in which they worked together to answer questions from a prior version of a cumulative exam; or
  3. A paired or small-group task in which they created and analyzed an original Latin story.

I was actually hoping that most groups would choose the third option, but they overwhelmingly voted for Option 2 – it had been a long, tiring week for them, and they all said they didn’t want to think as hard as they’d have to for the third option. So Option 2 it was.

At the beginning of class on Friday, my Latin I students received a self-assessment rubric for the task, which focused their attention on three critical factors:

  • Their level of engagement in each section of the task;
  • Their level of collaboration with their partner; and
  • Their own sense of the accuracy of their responses.

As they worked through the old exam, which has five distinct sections, I asked them to pause at the end of each section and use the rubric to assess their own performance and that of their partner. I also reminded them that I, too, would be using the rubric to assess everyone’s performance, and that I’d be looking at the accuracy of the completed product (the questions from the old exam) as well.

The morning Latin I class did a fantastic job – they were all engaged in the process, did an excellent job with the product, and were thoughtful and accurate in their self-assessment … except for the one group that forgot to turn in their product! Fortunately for them, the reporting period doesn’t end until today, so by the time you read this, they will have found and turned in their product. The afternoon class, which has struggled a bit, got off to a slower start with the task, but they also did well overall. I was especially pleased with the level of meaningful self-assessment they displayed – a bit less pleased with their reading comprehension, but then it was Friday afternoon at the end of a long, exhausting week for them.

Over the weekend I had a wonderful email exchange with a colleague about tests and games. Her opinion is that games (well-designed ones) are “fun tests” – that is, they’re intrinsically engaging and motivating, but they also require you, the learner, to apply the Knowledge, Skills, and Understandings you’ve developed. I don’t think my little task was exactly a “fun test” as she’d define it, but it was a lot more fun both to take and to grade than a traditional test would have been. It also gave me a great opportunity to observe where my students were still struggling and where they were feeling comfortable – information that will be very helpful as we start the new grading period this week. I’m looking forward to a similarly enlightening experience with my Latin III class as they do theirs on Monday. I also look forward to the amazingly creative tasks that Tres Columnae Project subscribers and their teachers will develop in the next few years!

quid respondētis, amīcī?

  • How do you feel about testing and assessment in your face-to-face teaching and learning situation?
  • How do you feel about observing process as well as product?
  • What alternative ways to observe process and product have you found?
  • And what about the idea of “games as fun tests”?

Tune in next time, when we’ll explore some other types of assessments I’ve been experimenting with in my face-to-face classes and see how they might be adapted to the Tres Columnae Project. intereā, grātiās maximās omnibus iam legentibus et respondentibus.

Published on September 27, 2010 at 8:51 am