# Some Reflections: How Assessment Impacts Curriculum

**Some Foundational Ideas**: Assessments are how I communicate to my students “These are the important mathematical ideas of my course – you are responsible for them”. When I tinker with my assessments, there is collateral damage to my curriculum (the *order* that I present mathematical ideas) and my lessons/activities (the *depth* with which we explore mathematical ideas). It all has to be aligned.

This post is building up to a realization I had earlier today. It comes from two key ideas I stole from **Standards Based Grading**:

1) I dissected my course into discrete concepts and skills that I could assess individually. When my students see my tests, each page is a separate skill and goes into the gradebook as a separate grade. This gives me a way to isolate particular skills (such as solving an algebraic equation, or performing integer arithmetic) away from other mathematical ideas that build on them (finding missing angles with parallel lines, or finding the slope of a line given two points). This makes remediation easier, **but it also makes it more explicit to my students which skills are ‘foundational’ and are needed to solve more complex problems**.

2) I assess certain skills multiple times. If my class still struggles with integer arithmetic (-2 + 5, etc.), that skill appears on later assessments as its own page. Because each page is designated as a separate skill, students are aware that this is the explicit ‘integers’ page. I also include this page **when it’s the building block for a skill we’ve been working on recently** (for example: when I teach distance and midpoint in the coordinate plane, I also reassess integer arithmetic because **you need integers in order to do distance and midpoint calculations**).
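To make the dependency concrete, here is the kind of signed-integer arithmetic buried inside a single distance-and-midpoint problem (the points here are my own illustrative choice, not from any particular assessment):

```latex
% Distance and midpoint for A(-2, 3) and B(4, -5):
d(A,B) = \sqrt{\big(4-(-2)\big)^2 + (-5-3)^2}
       = \sqrt{6^2 + (-8)^2} = \sqrt{100} = 10
\qquad
M = \left( \frac{-2+4}{2},\; \frac{3+(-5)}{2} \right) = (1, -1)
```

Every step above is an integer operation with signs: a student who cannot compute 4 − (−2) or 3 + (−5) cannot even start the geometry.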

I’m realizing that these choices have **fundamentally impacted some of my curriculum choices**. Here’s what I mean:

**Typical Situation from Last Year (Before SBG)**: I assess basic algebra skills at the beginning of the year, including integer operations (-3 + 4) and solving algebraic equations (2x – 14 = 26). For the purpose of this post, let’s say the skill of choice is *solving algebra equations* (two other foundational skills students usually take time to master at the beginning of the year are integer arithmetic and drawing geometric figures).

My test has several skills on it, so the grade is more of a ‘summary’ than an itemized analysis – we lose information in a purely numerical grade. Because of this, many of my students get an ‘acceptable’ grade on my test (for some students, a 61% is acceptable), so they stuff it in their backpack and don’t think of it again – they passed, so it doesn’t bother them that they missed **every single algebra question**. However, I as the teacher can see that most of my class doesn’t know their algebra, even if each individual student doesn’t really care that they don’t know their algebra (remember: they still passed my test, so they’ve moved on to think about other things). I need to figure out a way to revisit algebra so my students realize that they need this skill for work we’re going to do later. **Therefore, I adjust my curriculum so that algebra magically reappears a few weeks later in a different context, forcing my students to again confront the fact that they don’t understand this skill**. So we spend a few more days on algebra ‘wrapped’ in a geometry concept, and then several problems like this appear on the test at the end of the unit. This gives me a chance to stealthily remediate and reassess their algebra skills without it seeming like we haven’t moved forward in the curriculum. Few things kill motivation faster than lingering on a topic for too long, which is why I need to create the illusion that this is actually a ‘new skill’ and we’re moving forward with our year.

So, we test again. Several of the problems on the test are these algebra problems ‘wrapped’ in a geometric context. After the test, more of my students understand algebra but still not as many as I would like. **So I repeat this process**. Before long, half of my curriculum has some sort of algebra component because I know that’s how long it will take for me to stealthily remediate and teach this skill.


**Real-Life Examples of This from My 1st Year:**

Last year, at any given time, about one-third of my tests were old skills ‘wrapped’ in a new geometry context. When designing the test, the assumption was never “My students will use their skills with (triangles/bisecting angles/quadrilaterals) to solve these problems” – or, as I’m thinking this year, “These problems will give me an accurate picture of how well a student understands (triangles/similarity/congruence)”. Instead, the assumption was always: **These problems are an excuse to make my students do algebra because they still need to learn it and these problems will force them to do so.**

**Reflection 1:** This is fundamentally **dishonest** – I’m ‘tricking’ my students into learning algebra by making it reappear throughout the whole year. Because it’s a trick, when my students do start to understand, they rarely (if ever) realize that it’s because of their weak foundation. I would try to tell them “You need to learn your algebra! Once you know algebra, everything else will click!”, but they would usually respond with “Just teach me what I need to learn for the test!” (which is the whole reason I put algebra on my future tests in the first place).

**Reflection 2**: This practice kept the cognitive demand of my classroom at a continually low level. An example: I’m teaching triangle congruency and how to write congruency statements. Then I throw in algebra problems because I need to hit algebra again because they didn’t master it the first time. Now my students are struggling with mastering the new skill (triangle congruency) and the old skill (solving algebra equations). If I want to be fair to my students, I need to keep my expectations simple and concrete: if they can just *solve* these problems completely and correctly, that’s enough to show that they’ve mastered both of these skills. My idea of a higher-level question was one that incorporated several procedural skills rather than one that required a deeper exploration of a singular skill. Again: this is because in any particular unit, I’m usually teaching to one or two *new* skills and one or two *previous* skills that my students never mastered.

**To Summarize**: Last year, because of how I graded and how I assessed, I was adjusting my **curriculum** to account for when students failed a particular skill and still needed to master it. I remember having this thought a lot: “I want to do problems like (this) because most of the class failed these types of problems on the last assessment, so we better hit them again”. **I was using my curriculum to solve an assessment problem.**

**Now That I Do SBG**: Separating my skills makes it **completely apparent** when a student doesn’t understand a particular skill, which encourages more immediate attention. They can’t hide it anymore. It also makes remediation on that skill **meaningful** since they know it will appear on later assessments anyway, which means there is always the incentive to raise their grade. Students don’t like getting the same questions wrong week after week after week, which is part of the motivation to work on skills that we first learned at the beginning of the year. And since I make it apparent that these skills **build** on each other because of how I layer my assessments, my students now **appreciate** how a weak foundation can affect **everything else we do in my class**. Separating and layering my assessments makes this conversation more meaningful: “Whoa… I see it now. Integers are everywhere!” (real comment from a student earlier today, which may be another catalyst for this post).

**Back to Curriculum**: There will always be the problem of “How do you motivate students to go back and master a skill that they need to know?” Last year, I solved that problem by adjusting my curriculum so students were *forced* to face these skills that they didn’t know. This year: that motivation is built into the very foundation of how I assess. I don’t need to think ‘what problems do I need to talk about so my students will be forced to learn (this)’. Instead, I can rely on the very nature of my assessments to make it apparent when a student doesn’t understand something and needs to remediate.

And just to be clear: I still keep these types of problems in my curriculum and still expect my students to solve them. But my mentality towards them is different – I now approach them as a way to *apply* conceptual knowledge rather than the catalyst to revisit algebra for a few days. They appear on my assessments *embedded* in a concept rather than the primary *focus* of the assessment.

Lastly, I think this also explains some of the disappointment I felt with some of my lessons from last year. For example, I remember my unit on Quadrilaterals being *excellent* last year – my students were very successful with pretty much everything we did. But this year, it was very mediocre. I’ve been trying to figure out why that was, when it hit me: my unit on Quadrilaterals last year was almost entirely grounded in algebra problems (like the parallelogram one in the document above) and in integer arithmetic (here are 4 points – use slope and distance to determine what kind of shape this is). And last year, this was the unit where both of these concepts *finally* clicked for most of my classes, which is why it was so successful. But this year, when I taught these same lessons, my students saw **straight through** these problems and recognized them for what they were: an excuse to do algebra and an excuse to do slope & distance problems. And since I didn’t raise the level of rigor beyond “Connect all these different skills together”, my students felt like they were spinning their wheels. And they realized it because they had learned to see through my dishonest curriculum tricks which my SBG assessment system has brutally exposed.
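For readers unfamiliar with the “here are 4 points” problems mentioned above, here is a hypothetical version (these coordinates are my own illustration, not from the original unit), showing how slope and distance combine to classify the shape:

```latex
% Classify the quadrilateral A(-2,1), B(2,4), C(5,0), D(1,-3).
m_{AB} = \frac{4-1}{2-(-2)} = \tfrac{3}{4}, \quad
m_{BC} = \frac{0-4}{5-2} = -\tfrac{4}{3}, \quad
m_{CD} = \frac{-3-0}{1-5} = \tfrac{3}{4}, \quad
m_{DA} = \frac{1-(-3)}{-2-1} = -\tfrac{4}{3}
% Opposite sides are parallel, adjacent slopes are negative reciprocals
% (so the sides are perpendicular), and every side has length
\sqrt{3^2 + 4^2} = 5
% Hence ABCD is a square.
```

Notice that the geometry content (“parallel means equal slopes, perpendicular means negative-reciprocal slopes”) is thin compared to the sheer volume of signed-integer arithmetic the problem demands, which is exactly the disguise described above.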

So… lots of stuff in this post. I’m still exploring this idea of how assessment choices affect curriculum choices. Any ideas or comments are absolutely appreciated in the comments.

Could be missing something here since my 3 yr old woke me up so early, and I admit that I have done some thinking and reading about SBG but am a novice at best in thinking about this. If your page-by-page breakdown of problems allows you to focus clearly on one skill at a time, does that get in the way of having problems that blend skills or that call on the students to really put things together in a new context? I am thinking about conversations I have had with parents where they complain (because their child did) that a problem on the test does not look like one that has been explicitly explained and/or practiced. My response is – of course, there are problems that look a little different; I want to see how my students can apply the ideas they’ve wrestled with to novel problems so I can peek in their brain a bit and try to gauge their level of understanding.

What am I missing about this level of problem-solving in SBG style assessment?

Mr Dardy,

This is something I’ve wrestled with too – the question of: how do you assess synthesis or problem-solving on an SBG-style assessment?

What I’m realizing is: I don’t think most people do. Most SBG assessments I’ve looked at are more like a checklist of individual skills – ‘Can they do this? and this? and this? Okay – we can check this off in our SBG gradebook’. The assessments don’t have a way to measure synthesis of skills or the application of skills to new situations. Personally, I’m not convinced a pen-and-paper assessment is the place to measure these things, which is why I think most teachers who do SBG try to supplement their pen-and-paper assessments with projects or tasks that measure how well a student can synthesize or problem-solve. But in terms of the actual assessment: most of it is low-level and simplistic. I’ve started calling them ‘checklist’ assessments.

Evidence: http://www.mrmeyer.com/blog/wp-content/uploads/070830_5.pdf

This is why I’m asking questions about how we create assessments. I can’t find very many people who share the process they go through when they create their assessments. I don’t know how other teachers write their SBG assessments. I don’t know if they really are as bare-bones and simplistic as I found myself doing, or if there’s a way to write an SBG assessment that synthesizes skills. Right now, my thoughts are: a strict SBG implementation, following things that Dan Meyer or Shawn Cornally have written about, _requires_ an assessment without synthesis or novel problem-solving.

But maybe I’m wrong. I just don’t know. This is why I’m asking questions – I want more people to talk about how they write their assessments.

I am a bit of an outlier in my department – even though I am the chair – and that is one of the reasons I have not taken the SBG plunge yet. I did have an interesting conversation with my department that may be meaningful for you. I related a story about a grumbling student in Calc AB years ago who said ‘You must think that AP means All Problems.’ She went on to discuss the difference between exercises and problems. I think I want to adapt this to my benefit and tell my students when an assessment is an exercise assessment (show me the particular skills we’ve been working on) or when it is a problem assessment (novel situation – show me how you can piece things together).

I think that if I can figure out a smart way to balance these (I am thinking of about a 3:1 point ratio) then I can reward the diligent kid who learns what s/he needs to from practice while also recognizing the clever kid who can make connections. Does this paradigm make any sense? Can it be implemented in an SBG environment?

I think part of the issue is whether or not you care about students being able to synthesize these particular skills or their ability to synthesize skills in general. One possible avenue that I have thought about but not yet tried is to use extra standards, similar to the CCSS Standards for Mathematical Practice or, in my school, the components of the IB learner profile, to assess students’ ability to attack novel problems with a variety of resources. If the synthesis of these particular skills is that important, it should be included in your list of standards; if not, then you are mainly worried about their mathematical practice or their learner profile.

Yes! I’ve thought about trying this too! I wanted to have standards related to Habits of Mind – I chose to look at the Park School Habits of Mind descriptions to influence my thoughts (details here: http://parkmath.org/mathematical-habits-of-mind/). And I agree with you – if synthesis and problem solving and habits of mind are things I want to hold my students accountable for, then I need to make that explicit in my standards. And the thing I’ve struggled with is: how, then, do I assess it? Is it possible to assess in the same way as the purely mathematical content standards? Or do these assessments exist purely in the realm of projects and deep mathematical tasks?

I spent half of my first semester trying to emphasize mathematical habits of mind – I would tie each unit to a particular habit and had them hanging on the wall. I dropped it later because I couldn’t figure out a way to assess it, so my students didn’t know why I spent so much time harping on ‘tinkering’ or ‘representing symbolically’ or ‘simplify the problem’. But, on the plus side, I _feel_ my curriculum became more rigorous because I knew I wanted these habits of mind to be a part of my standards, and these habits exist at a higher cognitive level than purely applying procedural knowledge. So again – the choice to make habits of mind explicit and try to assess them started to affect my curriculum, but since I couldn’t figure out an honest way to assess them, it started to fall apart. It’s something I need to focus on _a lot more_ for next year. Especially with the Common Core coming.

I think it should be pretty straightforward if it is kept simple. For each “Habit/Practice/Trait” create a rubric, and then for each assignment let the students know the habit you’ll be looking for and assess where in the rubric they fall. For example, in the checkerboard squares problem from IMP, students are asked to find how many squares of any size there are on an 8-by-8 checkerboard. Tell the students you are checking “Taking things apart” and “Change representations”. Students then begin to know that in order to get full credit they have to break the problem down into a smaller problem to solve first, and then once they have the answer they have to create their own problem for a different-sized board or for a bigger or smaller board. You could also choose to assess “Visualize” and “Represent symbolically”, and students should begin to understand that you want a detailed picture drawn and in the end they should have a formula that describes the scenario.

I think that by creating a rubric for each habit and then assessing a limited number of habits per assignment it should be possible to end up with written assessments that address habits of mind.
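For reference, the “break it into a smaller problem” decomposition the rubric rewards leads to a clean count for the checkerboard problem (this worked solution is mine, not part of the IMP materials):

```latex
% On an n x n board there are (n-k+1)^2 squares of size k x k.
% For n = 8, substituting m = n - k + 1:
\sum_{k=1}^{8} (9-k)^2 = 1^2 + 2^2 + \cdots + 8^2
  = \frac{8 \cdot 9 \cdot 17}{6} = 204
% In general: \sum_{m=1}^{n} m^2 = \frac{n(n+1)(2n+1)}{6}
```

The “Represent symbolically” habit is exactly the jump from counting squares on small boards to the closed-form sum at the end.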

Here is another strong link for a discussion of habits of mind in the math classroom

http://www.withoutgeometry.com/2010/09/habits-of-mind.html

Yep – Avery Pickford (without geometry), the Park Math team (that I linked above), Bryan Meyer (http://www.doingmathematics.com/2/post/2012/03/habits-of-a-mathematician-take-two.html), and Al Cuoco (http://www2.edc.org/cme/showcase/HabitsOfMind.pdf) are my go-to resources for thoughts on Habits of Mind and how to build those hard-to-pinpoint problem-solving skills.

I have been trying to get my department on board with this type of approach rather than the Atlas curriculum maps approach to thinking about what we want to accomplish. I think that the Park Math statement is really well written.

When you put algebra skills on assessments throughout the year, what do you do about the kids who already “mastered” the skill earlier in the year? Do you exempt them from the algebra page? Do you have a problem with kids who had previously mastered a skill forgetting it?

The way I do things this year, if a student has already demonstrated mastery, they can cross that page off of the test. I haven’t had issues with students forgetting skills because I grade so harshly. This has been one of the challenges of SBG that I wasn’t prepared for – allowing students to demonstrate mastery only once means that I need to set high standards for what mastery means. Because I set my standards so high, I’ve rarely had issues with students forgetting something that they’ve mastered. This was always something I was skeptical about with SBG – how do you ensure retention? – but I’ve been pleasantly surprised by how much of a non-issue this is. I think it’s related to the rigor with which you grade the assessments and how high you set your standards.

Time for some meta-reflection here. I have comments (after thinking about this for awhile) about each of your two reflections above.

Reflection on Reflection One – You are not being dishonest at all if you constantly emphasize the connectedness of their learning. Algebra and geometry have links. Physics and algebra and Calculus are linked. Chemistry and algebra are linked. And so on. If we tell our students that learning is a cumulative process AND we are clear about what from the recent past is especially troubling or important – then I don’t think it’s dishonest or ‘tricky’ to revisit skills on a new assessment. My students have come to expect that that problem we had a long talk about from the last test is likely to reappear in some form.

Reflection on Reflection Two – I think that keeping the ties to the past (both recent and not so recent) allows you to up the cognitive level of conversation. Kids are looking for clues, they’re tying up loose ends, they are making connections. This can be powerful stuff.