Winding Down the Year
It’s been almost a month since my last post and I feel like I should update this with something. This is, after all, supposed to be something where I can look back on it a year from now and remember what I was doing and thinking at this stage in my career. So, here are some things I’ve been thinking about.
Assessment: I’ve been thinking a lot about assessment – in particular, how I create and grade my tests: how I write them, how I grade them, how I distribute points. I think I’ve spent this whole year gradually drifting towards a standards-based grading approach but I haven’t been calling it that. I like the idea of targeted assessments – being able to look at a low score and say ‘This directly correlates to a weakness in this skill or concept’. I like reassessments – this is something I’ve done all year and it has been one of the greatest assets of my classroom. I’m starting to shift towards minimal grading systems – grading on a scale of 1-5 instead of 0%-100%.
I started to notice this with my last few units – I started giving smaller quizzes instead of larger tests and the quizzes happened more frequently (I think I will have done 3 quizzes in 4 weeks). They target a small, specific set of skills and I don’t give a lot of partial credit – I want the impression to be ‘either you know it or you don’t’. I’ve also been analyzing my students’ scores by grouping them into 4 categories: exceeds expectations, meets expectations, approaches expectations, and falls far below (these labels are the same ones that the state gives to students’ performance on the high-stakes exit exam). This has been a really useful exercise to see (1) a realistic, data-driven analysis of where my students stand, and (2) an interesting analysis of how I grade and how much weight I give certain problems. This exercise has led me to think that the continuum of 0%-100% is too wide for what I need. When I did this analysis on my last few quizzes, here are the percentages where my students tended to fall:
Falls Far Below: 0-25%
What I mean by this is: if one of my students is completely missing a skill/concept, they tend to score at the very low end of the continuum – no more than 30%. Which is so low that any student who cares about their grade will feel compelled to retake the test. Then I’ve somehow tweaked my problems/weighting/grading so that a student who’s almost got it will almost be passing and students who know what they’re doing are in the high C, low B range. Very few students fell in the 25-50 range – if someone got the hang of it, they usually knew enough to get into the 50-60 range, which is what I wanted.
I don’t know if any of this makes sense without seeing one of my assessments, but I find myself giving a small number of problems (my last test had 15 problems for a total of 30 points) and weighting them such that students’ scores are clearly organized into certain ranges that I’m comfortable with. I guess I’m trying to avoid a dense test out of 100 points where the percentage is the same as the score – how can you see what a student doesn’t understand with a test so broad? And what does that grade really mean in that situation anyway?
UPDATE: I was perusing older blog posts and found this one from Christopher Danielson regarding the rubric for standards based grading and how to translate those into percentages: http://christopherdanielson.wordpress.com/2011/10/09/those-arent-numbers-so-dont-treat-them-as-though-they-were/. It was actually really interesting to read after unpacking all of my thoughts in the paragraph above, because I think I’m struggling with the same issue but from the opposite direction. Instead of wanting to convert an SBG 0-4 rubric into percentages, I’m trying to pigeonhole my percentages so they fit in an SBG rubric. The process that Christopher talks about – converting 1-5 into percentages – that’s essentially what I did (from the other direction) a few paragraphs above. It also sorta validates my feelings that the 0-100 scale is too broad for what I need. Anyway, rest of the original post continues below.
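The score-binning exercise described a few paragraphs up – collapsing the 0-100% continuum into the four AIMS-style bands – can be sketched in a few lines of code. Note that the cut points below are illustrative assumptions on my part (the post only pins down ‘Falls Far Below’ at roughly 0-25%), and the scores are made-up:

```python
from collections import Counter

# Illustrative cut points only -- the post fixes "Falls Far Below" at
# roughly 0-25%; the other thresholds here are assumed for the sketch.
CUTS = [(25, "Falls Far Below"),
        (60, "Approaches Expectations"),
        (85, "Meets Expectations")]

def categorize(percent):
    """Map a quiz percentage (0-100) to one of the four AIMS-style labels."""
    for cut, label in CUTS:
        if percent < cut:
            return label
    return "Exceeds Expectations"

# Hypothetical class scores; Counter gives the distribution across bands.
scores = [12, 55, 48, 88, 30, 72]
print(Counter(categorize(s) for s in scores))
```

Reading off the distribution this way makes the point in the post concrete: once scores cluster into four bands, the extra resolution of a 0-100 scale isn’t carrying much information, which is essentially the 1-5 rubric argument from the other direction.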
Right now, I’m thinking that next year I’ll break my units into 4-5 essential skills and design a quiz for each skill/concept. Then instead of giving a large unit test, I’ll give the 4-5 quizzes throughout the unit and let them reassess as necessary. Which sounds more and more like standards-based grading. It also sounds like something I can do over the summer that I’m much less likely to throw away in the middle of next year (like I did with a lot of things I made this last summer).
A Post-AIMS Classroom: I teach sophomores and sophomore year is when my students take the high-stakes Arizona exit exam called the AIMS test. For about 2 weeks, it engulfed my classroom and stressed out my students. It’s been a big deal at my school because last year only 33% of the students passed the test – this means 67% still needed to pass before they could graduate. And yes, the test is designed to be passed by the end of the sophomore curriculum – Algebra I and Geometry standards only – no Algebra II or trig or any of that. Anyway – these low numbers had lots of consequences, which led to a school-wide focus on passing this test. I mentioned it at least once a week for the whole semester. The results come in next Tuesday.
Anyway – the test date is in the middle of April, which means there’s still about a month of school left. But with this test so ingrained in the school’s culture and in my classroom culture, it was hard for me to adjust to a classroom that was post-AIMS. In January or February, I could get away with certain review topics by appealing to their motivation to pass the test. After the AIMS, a lot of that motivation disappeared and many students lost the will to be successful in my class. I also had a hard time adjusting to a classroom where I wasn’t limited by the AIMS test – I had the freedom to try to develop habits of mind and assign more complex problems than what is on the test. The problem I found, though, is that (1) I was still exhausted from preparing for the test, and (2) my students don’t have the same buy-in that they did earlier in the year, which has made some of my lessons harder to teach. It was actually a little scary to find the right things to teach after the AIMS Test – they’re so used to having AIMS as a motivation (both from me and the school as a whole) that it’s been hard to adjust to anything else. I think I need to be more mentally prepared for this backlash, both from me and my students, when I teach geometry again next year.
So, note to future-Dan: have something low-stakes/fun prepared for the time after AIMS. Be prepared to have a pep-talk about where the class is going and how we no longer have to confine ourselves to the AIMS test. Be prepared for students to give up, so have some sort of quick quiz to zonk them back into the swing of things right away. Don’t let yourself get complacent either. And take at least 1 day off shortly after the AIMS Test.
Less Geometry, More Algebra: With the year winding down and me teaching less explicit algebra, I’m getting a real sense of what my students have picked up from my class. And it hasn’t been a lot of geometry. They’ve learned algebra, they’ve learned arithmetic, and they’ve learned hard work. Students used to loathe multi-digit multiplication or long division – now they do it without a second thought. They know how to work with fractions and negative numbers. They don’t give up as much with procedural problems – they check their answers and find their mistakes. They persevere. These are good habits, but they’re limited to the realm of algebra and arithmetic. Or actually, a better description is: they’re limited to the realm of process and practice. All the practice they should have had over the last few years – they got all of it at once from their one year in my classroom.
This isn’t what I planned when I mapped out my curriculum over the summer. I wanted more investigation problems – more hands-on – more problem solving. I collected a ton of resources and activities for it. And I’ve hardly done any of it. I had in my syllabus that we were going to keep a math journal – I threw that out by the end of 1st semester. I wanted to do some sort of problem of the week – nobody knew how to start them. Geometry is rich with situations where students can develop patterns, can write explanations, can solve problems that require complex thinking instead of complex calculation – and I never really had the chance to develop that. Which has made me sort of sad now that I’m at the end of the year and my students can finally handle it but don’t want to.
This wasn’t the type of year that I wanted to teach, but it was the type of year that my students needed. I don’t think very many of them realize that now – they’re frustrated and overwhelmed and don’t always understand why we’re ‘reviewing’ – but maybe they will one of these days.
10,000 Views: Here’s the last thing on my mind. Earlier this week, this blog got over 10,000 hits. I don’t know how this compares in the world of blogging, but to a 1st year teacher who makes mistakes and is figuring stuff out, it seems sort of overwhelming. Definitely not where I imagined this blog would go – I thought I’d still be the only one reading it these days. I’ve only recently started telling friends and colleagues about this site, which means most of those 10,000 are from people I’ve never met. I’m surprised by how many people have found me and I’m glad people find my ideas meaningful. I guess I’m just amazed at how easy it’s been to have a voice in the semi-anonymity of the internet, especially after hearing so many disheartening stories from 1st year teachers whose voices are ignored by colleagues who have become complacent with mediocrity (thankfully, there are no teachers like this in my department).
Well… so much for this being a short post…