Creating Assessments: Three Types of Standards
I’m proud to be on the ground floor of a possible ‘better assessments’ movement here on the math Blogotwittersphere. I’m excited to see people talking about their assessment process and admitting that our assessments aren’t all they could be – which is reassuring, because I’ve been thinking that for a long time. See this and this for some background of where my head is at. This post is a reflection on the things I choose to assess and how I choose to assess them.
One thing I’ve really begun to understand is how much assessment is guided by curriculum, and how choices about assessment can have powerful impacts on curriculum choices. I’m a believer that the things we assess and the way we assess them are how we send a message to our students: “HEY! This stuff is important! And you need to be able to do it if you want to be successful in this class!”
The organization of these individual skills and knowledge usually falls under the header of a ‘standard’. The way I assess, each page of my test is a separate standard that is graded independently from the other pages on the test. My underlying philosophy for these assessments:

- It should be clear both to me and to my students which standard I am assessing.
- It should be clear both to me and to my students what the expectations of ‘mastery’ are for that standard.
- Assessments should make it clear both to me and to my students where their gaps in knowledge are, as well as their strengths in understanding.
- Assessments should promote student-directed remediation.
- Assessments should provide accurate data for a teacher about the level of understanding of his or her students.

That’s a lot of pressure for an assessment.
This means it’s a big deal when we choose to assess something, and it’s a big deal when we choose not to assess something. I take this choice seriously, which means I need to examine the curriculum for each of my units and decide what it is that I want to assess and how I want to assess it. After doing this for a year, I’ve come to the following realization: Not all standards are created equally, which means not all standards should be assessed equally. When I look through my units and decide what I want to assess or how I want to assess it, I’ve started to group skills and concepts into three types of standards: Procedural Standards, Conceptual Standards, and Synthesis Standards.
Optional Reading: The terms Procedural and Conceptual come from Adding It Up: Helping Children Learn Mathematics, a report I read in college while becoming a teacher – ‘procedural fluency’ and ‘conceptual understanding’ are two of its five strands of mathematical proficiency. As I searched for words to describe what I was noticing in my curriculum, I reread the sections on those two strands and felt they matched up pretty well with what I was observing in my standards.
Procedural Standards
As I looked through my curriculum for individual standards to assess, I realized there are certain skills that are simply foundational for everything we do for the rest of the unit (or, in some cases, for the rest of the year). I need to make sure my students understand these things at the level of ‘consistently computationally correct answers’. These are usually skills from previous courses or the foundational skills for a particular unit. My assessments for these skills are bare-bones: here are several problems you need to know, all at roughly the same level of difficulty, and you need to be able to do them consistently. I grade these harshly because they’re the foundation – no question here is higher than the level of ‘identify’ or ‘apply a procedure correctly’. These are the skills that usually appear embedded in my Conceptual and Synthesis skills later on. They’re the skills where, once a student understands them, their success in the other skills suddenly skyrockets.
Outside Influence: Kelly O’Shea’s post about A and B objectives in Standards-Based Grading. I’ve taken her ideas about A objectives (what I’m calling Procedural) and B objectives (what I’m calling Conceptual) and added a third one: Synthesis. This post really helped cement a lot of the ideas I presented above, so I’m really grateful that I found it. If you don’t read her post now, I highly recommend returning to it and reading her bullets at the bottom of the post describing the benefit of separating objectives.
Procedural Skills that come from Algebra: Solving Algebra Equations, Integer Arithmetic (Assessment Below), Graphing Lines
Procedural Skills that come from Geometry: Applying the Pythagorean Theorem, Angle Identification (Assessment Below), Trigonometry Ratio Identification
For these skills, I don’t trust one solitary question to let a student demonstrate understanding – I make sure I have several so students can demonstrate consistency. I also grade these pages very harshly. On the integer test, if a student misses more than 2 problems, they’ve failed that page. Most of my students aren’t used to this – they’re used to ‘slipping by’ on tests from other classes because it’s several skills collected together, so their little mistakes get lost in the mess of the 100-point test they’re taking. This system doesn’t let them hide anymore – I’m making a statement with my assessment: consistency and computational correctness are important. The entire point of this page is to get these very specific problems correct. If you miss them on a different skill later in the year, I’ll cut you some slack – but for this procedural skill, I’ve purposefully created very little gray area: you either know it or you don’t.
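The pass/fail rule above is strict enough that it can be written down in a few lines. Here is a minimal sketch of that rule – the function name and the shape of the inputs are my own invention, and the miss threshold of 2 is taken from the integer-test example:

```python
def grade_procedural_page(responses, answer_key, max_misses=2):
    """Grade one procedural page: more than `max_misses` wrong answers fails it.

    `responses` and `answer_key` are parallel lists of answers.
    Returns a ('pass' or 'fail', miss_count) pair.
    """
    misses = sum(1 for given, correct in zip(responses, answer_key)
                 if given != correct)
    return ("pass" if misses <= max_misses else "fail"), misses


# A student who misses three problems fails the page, even though
# most answers were right -- small mistakes can't hide here.
print(grade_procedural_page([1, 2, 0, 0, 0], [1, 2, 3, 4, 5]))  # ('fail', 3)
```

Note there is no partial credit and no weighting – the design choice is that the page answers exactly one question, “can this student do this procedure consistently?”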
This was one of the toughest things for me at the beginning of the year – explaining to students that they failed one page of the test because of several small mistakes. They’re not used to this, so they’re not happy about it, and it created some friction early in the year. But I’ve gotten better at this conversation as the year’s progressed and as my students have started to understand how high I’ve set my standards for these procedural skills. The conversation I’ve started having is: “Let’s say I asked you to spell your name 8 times. Should be easy, right? You know your name – no big deal. So you go to spell it and you give it to me, but I tell you that on the 5th line, you misspelled your name. Even though it’s just one time, what am I supposed to think? Spelling your name is something I would expect everyone to do no matter how many times – if you can’t, we need to have a serious talk, or you’d better do it 8 more times and prove to me that you really do know it. That’s what integers are for me: you need to be able to do it every single time. And if you can’t, either we need to talk, or you need to try again and prove to me that you can.”
Teaching Note: Procedural Skills almost assume that students will need to reassess. Multiple times. And the work they do to reassess is not at the same level as the more conceptual skills in your course – some students with very low skills will need to relearn these basics from scratch, but many students will just need lots and lots of practice. I’ve solved this problem with my Wall of Remediation and by having assessment templates that I can use to create reassessments quickly and on the fly. But, in my experience, my high standards make earning a 100% on these pages sooooooo satisfying.
Conceptual Standards
These are the meat of my unit – the central conceptual understanding that I need my students to walk away with. These are skills that I imagine as scaffolded – there is a basic understanding, a strong understanding, and a mastery understanding. These are skills that usually have a problem-solving component or an ‘explain’/’justify’/’analyze’/’sketch’ component embedded in them. They’re the ones where I really spend time trying to think about how to assess: “What’s the right question to ask so that I can tell that they truly understand what they’re doing? How do I know they’re not mindlessly applying a procedure?” When I think of a ‘bad’ test question I’ve written, it’s usually a question trying to assess one of these standards.
Outside Influence: I think this post by Jason Buell does a great job of emphasizing part of what I’m talking about: on a traditional test, how do you handle a student who nails all the trivial application of skills/vocabulary questions, but falls short on the application and synthesis questions? The resulting conversation about grading is worth reading too.
I’ve started creating Tiered Assessments for these skills, which I first read about at the It’s All Math blog but have since rediscovered a few other places. The basic idea is: You make a decision about what types of problems/prompts demonstrate ‘Mastery’ versus ‘Strong Understanding’ versus ‘Weak Understanding’ versus ‘No Understanding’. Or, for students who think purely in terms of grades, what an “A” student can do, what a “B” student can do, a “C” student, and a “D” student (and if you miss all of them, you’re an “F” student). I use numbers to communicate these ideas:
1 = Weak Understanding, 2 = Basic Understanding, 3 = Strong Understanding, 4 = Mastery with Small Mistakes, 5 = Mastery
This satisfies pretty much all of my goals for an assessment: it clearly communicates my expectations, it informs students about the remediation that they need, and it helps me collect data about my class. Creating one of these assessments involves deciding what my level 2, 3, and 5 problems look like.
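One way to picture the scale is as a decision rule over the tiered problems. This is only a sketch of how the mapping might work – the post doesn’t spell out exactly how small mistakes at each level interact, so the judgments (“correct”, “minor_error”, “wrong”) and the order of the checks are my assumptions:

```python
def tier_score(level2, level3, level5):
    """Map a student's performance on the tiered problems to the 1-5 scale.

    Each argument is a judgment on that tier's problem:
    "correct", "minor_error", or "wrong".
    """
    if level5 == "correct":
        return 5  # Mastery
    if level5 == "minor_error":
        return 4  # Mastery with small mistakes
    if level3 == "correct":
        return 3  # Strong understanding
    if level2 == "correct":
        return 2  # Basic understanding
    return 1      # Weak understanding


# A student who handles levels 2 and 3 but can't crack the level 5
# problem lands at a 3 -- strong, but not yet mastery.
print(tier_score("correct", "correct", "wrong"))  # 3
```

The point of the rule’s shape is that the level 5 problem dominates: nailing it is what separates a 4 or 5 from everything below, which matches the idea that mastery means succeeding on the hardest, most novel prompt.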
I design my level 2 problems with the same philosophy as my Procedural standards: “These are problems everyone should be able to do consistently. Low-level Blooms. If you can’t get this right, we need to have a serious talk about these ideas”.
I design level 3 problems with the idea “What questions can I ask that require you to make a choice about how to apply what you know? That may be multi-step or rely on some foundational procedural skill in addition to the current conceptual skill?” I usually use released items from the state assessment to gauge where these problems should be.
I design level 5 problems with the idea “Okay – prove to me that you really know what you’re doing. You’ll either have to apply this skill to a slightly new context, or decide how to apply it multiple times, or explain your thoughts in a way that proves to me that you know what you’re doing”. I’ve told my students: “my assessments are like an argument from you to me: it’s your job to convince me that you really understand what you’re doing. You can do this with your scratch work, with your explanations, or with your pictures – but whatever you do, it’s your job to be clear and correct so I believe you”. I think this especially applies with level 5 problems: I want to design a problem that really requires a student to do some leg-work to show me that they understand what they’re doing. For really conceptual problems, I want them to really explain what they know for me to judge. For procedural problems, I want there to be some sort of problem-solving or ‘habits of mind’ aspect to the problem that they’ll need to apply. When considering these problems, I usually look at Common Core resources, the Park Math curriculum, or any set of problems grounded in problem-solving strategies or habits of mind.
Here are some tiered assessments I’ve made that I’m proud of:
Analysis: I feel like the level 2 question gives me an immediate entrance into how the student thinks – the shapes are simple and the questions are simple. If a student misses this, we definitely need to have a talk, although I’ve debated giving them the areas as well rather than having them calculate it. The level 3 questions are straightforward if you know what you’re doing and build on a foundational skill (calculating area). But the level 5 question really gets to the heart of the student’s understanding – it requires explanation, analysis, and reasoning, and reveals how a student understands probability and how it relates to area.
Analysis: For the Level 5 Problem: I’ve written about Parallel Line Mazes before, but the gist is: a student has to ‘jump’ from angle to angle using the different parallel line relationships (Alternate Interior, Vertical, Corresponding, etc.) while meeting a certain set of criteria. This problem challenges the student to know not just the names of the relationships but how to apply them in a novel situation, and it demands comfort with certain problem-solving strategies and perseverance.
See Also: Sam Shah’s Favorite Test Question is a Level 5 question – novel, gets to the heart of a student’s understanding, and requires explanation.
I can tell already – whenever I make an assessment I’m proud of, it’ll be because I’ve found the perfect Level 5 Question and the right transition from Level 2 to Level 3 to Level 5. I’m not there yet with all my assessments, but I think this is a good start. I feel extremely confident about the labels of ‘Mastery’ vs ‘Strong Understanding’ vs ‘Weak Understanding’ with the way this test is broken up, especially since I haven’t padded my test with extra questions just to hide what students do or don’t know.
Creating Better Assessments
Michael Fenton has written about his frustration with SBG assessments being purely application of skills. This is something I can absolutely relate to – when I first started implementing SBG and following the guides that I read online, I began feeling that the only type of assessment I could write was one that acted as a checklist of skills for my students to complete. I struggled to strike that balance: promoting problem-solving skills and ‘habits of mind’ while still holding students accountable for basic application of skills. This is the struggle that led to this blog post and my curiosity about assessments – I haven’t had very long to implement these types of assessments, but I feel pretty good about the direction this is going.
I’m still curious how other people write assessments. Michael Fenton is leading the charge and I highly recommend reading and responding to his post over at his blog. Tina C has written about her process and Lisa Henry is asking for feedback on a test question of her own. I think this endeavor is related to the question of “How do we create opportunities for our students to exceed our expectations”, and I’m excited to see these conversations continue and grow so that we’re all searching for these Level 5 Questions to give our students.
Some Parting Words from Sam Shah: (If there’s one thing I’m good at, it’s aggregating posts from the Blogotwittersphere with a similar theme, even if they’re from ages ago). Here’s Sam from when he gave a test that really asked students to express their thoughts:
“For me the obvious corollary is that: we need to start rethinking what our assessments ought to look like. If we want kids to truly understand concepts deeply, why don’t we actually make assessments that require students to demonstrate deep understanding of concepts?”