# Winding Down the Year

It’s been almost a month since my last post and I feel like I should update this with *something*. This is, after all, supposed to be a place where I can look back a year from now and remember what I was doing and thinking at this stage in my career. So, here are some things I’ve been thinking about:

**Assessment**: I’ve been thinking *a lot* about assessment – in particular, how I write my tests, how I grade them, and how I distribute points. I think I’ve spent this whole year gradually drifting towards a standards-based grading approach without calling it that. I like the idea of targeted assessments – being able to look at a low score and say ‘This directly correlates to a weakness in this skill or concept’. I like reassessments – this is something I’ve done all year and it has been one of the greatest assets of my classroom. I’m also starting to shift towards minimal grading systems – grading on a scale of 1-5 instead of 0%-100%.

I started to notice this with my last few units – I started giving smaller quizzes instead of larger tests and the quizzes happened more frequently (I think I will have done 3 quizzes in 4 weeks). They target a small, specific set of skills and I don’t give a lot of partial credit – I want the impression to be ‘either you know it or you don’t’. I’ve also been analyzing my students’ scores by grouping them into 4 categories: exceeds expectations, meets expectations, approaches expectations, and falls far below (these labels are the same ones that the state gives to students’ performance on the high-stakes exit exam). This has been a really useful exercise to see (1) a realistic, data-driven analysis of where my students stand, and (2) an interesting analysis of how I grade and how much weight I give certain problems. This exercise has led me to think that the continuum of 0%-100% is too wide for what I need. When I did this analysis on my last few quizzes, here are the percentages where my students tended to fall:

- Falls Far Below: 0-25%
- Approaches: 50-65%
- Meets: 70-85%
- Exceeds: 90-100%

What I mean by this is: if one of my students is completely missing a skill/concept, they tend to score in the very very low end of the continuum – no more than 30%. Which is so low that any student who cares about their grade will feel compelled to retake the test. Then I’ve somehow tweaked my problems/weight/grading so that a student who’s almost got it will almost be passing and students who know what they’re doing are in the high C, low B range. Very few students fell in the 25-50 range – if someone got the hang of it, they usually knew enough to get into the 50-60 range, which is what I wanted.
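The bucketing above can be written out as a tiny function – a hedged sketch only, since the cutoffs are the rough ranges observed in the post, and the function name and exact boundary handling are my own assumptions:

```python
# Sketch: map a 0-100 quiz percentage to one of the four AIMS-style
# performance labels described above. The cutoffs are the rough ranges
# from the post; how to treat the gap between 25% and 50% (and the
# exact boundaries) is an assumption on my part.

def performance_label(percent: float) -> str:
    """Bucket a quiz percentage into a performance category."""
    if percent >= 90:
        return "Exceeds"
    elif percent >= 70:
        return "Meets"
    elif percent >= 50:
        return "Approaches"
    else:
        return "Falls Far Below"

# A few sample scores from across the continuum:
scores = [12, 55, 78, 95]
print([performance_label(s) for s in scores])
# → ['Falls Far Below', 'Approaches', 'Meets', 'Exceeds']
```

The point of the exercise isn’t the code, of course – it’s that four labels carry almost all the information the 0%-100% continuum was carrying.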

I don’t know if any of this makes sense without seeing one of my assessments, but I find myself giving a small number of problems (my last test had 15 problems for a total of 30 points) and weighting them such that students’ scores are clearly organized into certain ranges that I’m comfortable with. I guess I’m trying to avoid a dense test out of 100 points where the percentage is the same as the score – how can you see what a student doesn’t understand with a test so broad? And what does that grade really *mean* in that situation anyway?

**UPDATE**: I was perusing older blog posts and found this one from Christopher Danielson regarding the rubric for standards-based grading and how to translate it into percentages: http://christopherdanielson.wordpress.com/2011/10/09/those-arent-numbers-so-dont-treat-them-as-though-they-were/. It was actually really interesting to read after unpacking all of my thoughts in the paragraph above, because I think I’m struggling with the same issue but from the **opposite direction**. Instead of wanting to convert an SBG 0-4 rubric into percentages, I’m trying to pigeonhole my percentages so they fit in an SBG rubric. The process that Christopher talks about – converting 1-5 into percentages – is essentially what I did (from the other direction) a few paragraphs above. It also sorta validates my feeling that the 0-100 scale is too broad for what I need. Anyway, the rest of the original post continues below.

Right now, I’m thinking that next year I’ll break my units into 4-5 essential skills and design a quiz for each skill/concept. Then instead of giving a large unit test, I’ll give the 4-5 quizzes throughout the unit and let them reassess as necessary. Which sounds more and more like standards-based grading. It also sounds like something I can do over the summer that I’m much less likely to throw away in the middle of next year (like I did with a lot of things I made this last summer).

**A Post-AIMS Classroom**: I teach sophomores, and sophomore year is when my students take the high-stakes Arizona exit exam called the AIMS test. For about 2 weeks, it engulfed my classroom and stressed out my students. It’s been a big deal at my school because last year only 33% of the students passed the test – meaning the other 67% still needed to pass before they could graduate. And yes, the test is designed to be passable by the end of the sophomore curriculum – Algebra I and Geometry standards only – no Algebra II or trig or any of that. Anyway – these low numbers had lots of consequences, which led to a school-wide focus on passing this test. I mentioned it at least once a week for the whole semester. The results come in next Tuesday.

Anyway – the test date is in the middle of April, which means there’s still about a month of school left. But with this test so ingrained in the school’s culture and in my classroom culture, it was hard for me to adjust to a classroom that was post-AIMS. In January or February, I could get away with certain review topics by appealing to my students’ motivation to pass the test. After the AIMS, a lot of that motivation disappeared and many students lost the will to be successful in my class. I also had a hard time adjusting to a classroom where I wasn’t limited by the AIMS test – I had the freedom to try to develop habits of mind and assign more complex problems than what is on the test. The problem I found, though, is that (1) I was still exhausted from preparing for the test, and (2) my students didn’t have the same buy-in that they did earlier in the year, which made some of my lessons harder to teach. It was actually a little scary to find the right things to teach after the AIMS test – my students are so used to having AIMS as a motivation (both from me and the school as a whole) that it’s been hard to adjust to anything else. I think I need to be more mentally prepared for this backlash, both from me and my students, when I teach geometry again next year.

So, note to future-Dan: have something low-stakes/fun prepared for the time after AIMS. Be prepared to have a pep-talk about where the class is going and how we no longer have to confine ourselves to the AIMS test. Be prepared for students to give up, so have some sort of quick quiz to zonk them back into the swing of things right away. Don’t let yourself get complacent either. And take at least 1 day off shortly after the AIMS Test.

**Less Geometry, More Algebra**: With the year winding down and me teaching less explicit algebra, I’m getting a real sense of what my students have picked up from my class. And it hasn’t been a lot of geometry. They’ve learned algebra, they’ve learned arithmetic, and they’ve learned hard work. Students used to loathe multi-digit multiplication or long division – now they do it without a second thought. They know how to work with fractions and negative numbers. They don’t give up as much on procedural problems – they check their answers and find their mistakes. They persevere. These are good habits, but they’re limited to the realm of algebra and arithmetic. Or actually, a better description is: they’re limited to the realm of process and practice. All the practice they should have had over the last few years – they got all of it at once from their one year in my classroom.

This isn’t what I planned when I mapped out my curriculum over the summer. I wanted more investigation problems – more hands-on – more problem solving. I collected a ton of resources and activities for it. And I’ve hardly done any of it. I had in my syllabus that we were going to keep a math journal – I threw that out by the end of 1st semester. I wanted to do some sort of problem of the week – nobody knew how to start them. Geometry is rich with situations where students can develop patterns, can write explanations, can solve problems that require complex *thinking* instead of complex *calculation* – and I never really had the chance to develop that. Which has made me sort of sad now that I’m at the end of the year and my students can finally handle it but don’t want to.

This wasn’t the type of year that I wanted to teach, but it was the type of year that my students needed. I don’t think very many of them realize that now – they’re frustrated and overwhelmed and don’t always understand why we’re ‘reviewing’ – but maybe they will one of these days.

**10,000 Views**: Here’s the last thing on my mind. Earlier this week, this blog got over 10,000 hits. I don’t know how this compares in the world of blogging, but to a 1st year teacher who makes mistakes and is figuring stuff out, it seems sort of overwhelming. Definitely not where I imagined this blog would go – I thought I’d still be the only one reading it these days. I’ve only recently started telling friends and colleagues about this site, which means most of those 10,000 are from people I’ve never met. I’m surprised by how many people have found me and I’m glad people find my ideas meaningful. I guess I’m just amazed at how easy it’s been to have a voice in the semi-anonymity of the internet, especially after hearing so many disheartening stories from 1st year teachers whose voices are ignored by colleagues who have become complacent with mediocrity (thankfully, there are no teachers like this in my department).

Well… so much for this being a short post…

Hi. I use similar categories for my classes this year, though instead of weighting the points and scaling them, I give feedback and give students a 1 (beginning), 2 (developing), 3 (proficient), or 4 (exceeds). I also have them keep a chart in their notebook so they have a visual of their own progress.

I’m happy to be one of the 10,000+ visitors who read your blog and follow you on Twitter. I’m also currently twice your age, Daniel. But it’s important for me to NOT “become complacent with mediocrity.” I so value and appreciate the fresh ideas and voices from my younger colleagues and from my blogosphere. Your “assessment” reviews just gave me more to think about shifting to testing essential skill sets instead of these large end-of-chapter tests.

Hi,

Amazing! How do you know you got 10,000 hits? Do you use a conventional HTML based counter?

I am not a new teacher, but I always feel like one. I have adopted a similar SBG program that sounds like yours. I am curious about how your students did on the AIMS. (I am also ending my geometry and algebra classes realizing a whole lot of review was done, and not a whole lot of investigating.)

Dan – How many times do you let your students retake the quizzes? Are they allowed to re-test until they get the score they’re happy with?

Julia! Students can retake quizzes/tests as many times as they want. I always take the retest grade, which means if they score lower, I keep that lower score. However…

I have something that I call a ‘retake ticket’, which are a bunch of problems from the homeworks/other supplements/the original test which students *must* complete correctly before I let them take the test. If a student comes in with the ticket, I look over it and make sure it looks correct – if there are some glaring mistakes, we talk about them, I give them another assignment, and they come back another day to take the test (in other words, if I have to do any tutoring/teaching, then they have to take the test another day). This process goes on as many times as it takes for a student to get the grade they want on the test.

These posts really informed my initial feelings on this kind of assessment: http://blog.mrmeyer.com/?p=811 and http://samjshah.com/2010/09/04/my-sbg-system/

This post & a few other ideas have started to inform how I’m thinking about assessment for next year: http://blog.mrmeyer.com/?p=346

Cheers!

Because I know you and your school and some of the awesome things you’re doing, you might consider using Khan as part of the retake tickets. That magic spreadsheet you made would be *perfect* for this.

Ok so here’s how I do it now:

Students take quizzes about every week or week and a half. Then they take a test; if they score better on the test than they did on the quiz, I’ll replace their quiz percentage with their test percentage.

The problem is that I have no idea if they’re actually learning what they didn’t know on the quizzes or if the test just has bad proportions of the information on the quizzes and they still only know the same things they knew on the quizzes but scored better because my test stinks. Did that make sense?

How many different versions of every quiz do you end up making?

I also fear that students will quiz well because we just learned that concept, then forget it all after that. (I mean, I guess that can happen with tests also.) Ahh, my brain is going crazy.

Also I REALLY like the retake tickets. Especially if I stop or cut down the amount of homework I give. That can be incentive for students to do something on their own.

Seriously – retake tickets are one of the best things I’ve done all year. I read your recent blog post – I think you’re on track to this kind of idea on your own. You gave supplemental material to prevent a poor grade – retake tickets are supplemental material to replace a poor grade.

I usually make 2 versions of every quiz – the original one, then the retake one. With tutoring and a retake ticket, students don’t usually need more than one retake.

Seriously – assessment SUCKS. I stress about it all the time. Something that has been helping me is considering what a grade _really_ means in my class. Does a 60% mean a student knows 60% of a certain selection of topics? Or does it mean that they are performing at a ‘poor but passable’ level on a certain selection of topics? I’ve decided that I want the latter representation for my grades, but it means that my assessments have to be HIGHLY targeted – otherwise, the grade starts to lose its meaning. Maybe your tests are too broad? Maybe a ‘test’ is really 3 quizzes spread throughout a unit?

Actually – one thing I did for one of my tests is I let students retake only certain _parts_ of the test. This test broke up very easily into 3 distinct sections – dilations, similar polygons & algebra, and real-world similarity word problems. These three types of problems fall under the heading ‘similarity’, but mastery of one type of problem is almost completely separate from mastering another type of problem. I realized that if a student aced the first 2 sections but bombed the last one, why should they have to retake all three sections? Maybe you could section off your tests and keep that agreement so if a student gets 100% on that *section*, then it will replace their quiz grade. Or something like that. I dunno – maybe consider having certain rules for certain sections of the test?