Outcomes-based assessment: Planning for assessment tasks – Connect Charter School


– a research project by Kevin Sonico and Louis Cheng (grade 8/9 Math teachers)
Terms such as benchmarks, competencies, standards, and outcomes are used interchangeably (Brindley, 2001) to indicate objectives that students must achieve. Hereafter referred to as outcomes-based assessment, this approach compares student learning and progress against the intended targets. These outcomes are determined by the Education Ministry and, as such, are universal among all schools in the province. They are described in the Alberta Program of Studies, which serves as a guide for teachers. Although there are ongoing conversations around the primacy and utility of certain objectives over others, we accept and acknowledge the comprehensive nature of the outcomes.
For the purposes of this action research, we do not intend to contribute to the divisive debate surrounding the complexity of the learner objectives. Rather, we used the outcomes to reinforce the focus of our learning activities – from discussions to assignments and tests. Although the use of outcomes as a basis for reporting learning may sound clear, teachers implement them in classrooms in varied ways. Some may place emphasis on standardized assessments, such as provincial exams. For others, it may look like the use of multiple sources of evidence, such as observations, portfolios, and conversations (Brown & Hudson, 1998, as cited in Brindley, 2001; Davies, 2011). This practice, known as triangulation, makes assessment of student learning through different assessment practices more reliable (Lincoln & Guba, 1984, as cited in Brindley, 2001).
For us, outcomes-based assessment means making the objectives more apparent not only to us, but also to the students. This includes identifying the skills that we want students to develop and/or to assess prior to an activity. For this action research, we wanted to find out how outcomes-based assessment impacted three parts of our practice: planning learning activities, recording student achievement, and reporting progress. We collected qualitative feedback from students via a survey and used our observations and reflections during the research process.

In planning learning activities, the outcomes should be known to teachers. Although it can be tempting to plan and implement lessons that appear engaging while aiming at a nebulous goal, clarity in the objective of an assignment is key. In using this approach to designing learning activities, the outcomes rather than the products become the focus. For this action research, I used outcomes-based assessment in Math 8, as it allowed me to generate alternative questions or tasks more easily if or when students did not meet grade-level expectations in each of the outcomes. Here are the ways that I planned using outcomes-based assessment in Math 8.
The Math Program of Studies is hierarchical and encompassing, in that it moves from overarching targets (strands and general outcomes) to more distinct ones (specific outcomes and achievement indicators). In using the Program of Studies, the general outcomes are shared among all grade levels and are, therefore, too vague. Rather, I found the specific outcomes fairly straightforward. These statements often became directions for students. For example, “draw and construct nets for 3-D objects” is a specific outcome of the Shape and Space strand. When specific outcomes were too comprehensive, such as “demonstrate an understanding of ratio and rate,” achievement indicators were used as targets. There were also instances when outcomes were laden with jargon. Modifying the wording of an objective made it more accessible to students. For example, “Model multiplication of a positive fraction by a whole number concretely or pictorially, using an area model” (as an outcome) became “Draw a diagram to answer these multiplication sentences” (as a task). Regardless of what part of the program of studies was used to dictate the task, it was done with an outcome or outcomes in mind. And assigning outcomes to each task allows two things: a task can accomplish multiple specific outcomes, and outcomes deemed more significant by the teacher can be assessed multiple times.
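In record-keeping terms, "one task can address several outcomes, and one outcome can be assessed by several tasks" is a many-to-many mapping. As a minimal sketch of what that might look like (the task names and outcome codes below are invented for illustration, not drawn from the Program of Studies):

```python
# Sketch of a many-to-many mapping between tasks and curricular outcomes.
# Task names and outcome codes are hypothetical examples.
task_outcomes = {
    "Nets assignment": ["SS-3.5"],        # draw and construct nets for 3-D objects
    "Ratio project": ["N-4", "N-5"],      # one task can target multiple outcomes
    "Fractions quiz": ["N-6"],
    "Fractions word problems": ["N-6"],   # the same outcome assessed a second time
}

# Invert the mapping to see how often each outcome is assessed.
outcome_tasks = {}
for task, outcomes in task_outcomes.items():
    for code in outcomes:
        outcome_tasks.setdefault(code, []).append(task)

print(outcome_tasks["N-6"])  # both fraction tasks assess this outcome
```

Inverting the mapping makes it easy to confirm that the outcomes a teacher deems most significant really are assessed multiple times.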
Further to using the outcomes as instructions on assignments, they were also used as topics to study in test outlines. My students became accustomed to me providing them with a list of topics for their review, and when I forgot, students were quick to remind me. In structuring tests, the outcomes became instructions. By making the content of the test, that is, the outcomes, less mysterious, I observed a positive difference in students' approach to test preparation and test taking. [More on student behaviours/responses to outcomes-based assessment in a later post.] Tests were structured with an outcome for each section. Although tests are typically scored on a student's performance against the total points, students' test marks during this action research were scored per section, that is, per outcome. Because each section of the test represented an outcome, this itemized method of assessment allowed us to know which particular outcomes were their strengths and which were areas of need. Although it was difficult to wean students from the practice of tallying a cumulative score or total to indicate their performance on a given task, itemizing by outcome provides more specific feedback on their learning.
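Per-outcome scoring can be sketched in a few lines. The section titles, point values, and the cut-offs used for "strength" and "area of need" below are hypothetical illustrations, not our actual marking scheme:

```python
# Sketch: scoring a test by outcome (one section per outcome) rather than as
# one cumulative total. Sections, points, and cut-offs are invented examples.
def per_outcome_report(sections):
    """Map each outcome to a percentage and a rough descriptor."""
    report = {}
    for outcome, (earned, possible) in sections.items():
        pct = 100 * earned / possible
        if pct >= 80:
            flag = "strength"
        elif pct >= 60:
            flag = "developing"
        else:
            flag = "area of need"
        report[outcome] = (pct, flag)
    return report

sample_test = {
    "Demonstrates an understanding of ratio and rate": (7, 10),
    "Draws and constructs nets for 3-D objects": (9, 10),
    "Models fraction multiplication pictorially": (4, 10),
}
for outcome, (pct, flag) in per_outcome_report(sample_test).items():
    print(f"{outcome}: {pct:.0f}% ({flag})")
```

The same totals that would have produced one cumulative mark of 20/30 instead identify one clear strength and one clear area of need.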
Setting up a system such as this is more laborious than merely installing software or entering formulas into a spreadsheet; it requires a fundamental shift in the way teaching and learning occur. First and foremost, the use of the “Understanding by Design” or “Backwards Design” model is key to success – starting with the curriculum and then designing activities to learn, practice, and demonstrate. This will have you immersed in the curriculum; the next logical step is to share it with students, which may involve combining, consolidating, and translating curricular outcomes into student- and parent-friendly language. It is only reasonable that students and parents should know ahead of time what they specifically need to accomplish. Furthermore, focusing on these discipline-specific outcomes rather than the ability to make a poster, fill in a worksheet, film a movie, or complete any other form of product is not only the general trend in education, but also avoids filtering student ability through unrelated criteria such as “neatness” or “penmanship.” Once these curricular outcomes are sorted, I design assessments to fit them using three types of activities: “Big Questions” or inquiry activities for students to explore, group tasks where students can work together to build or complete a challenge, and individual tasks to check for student understanding. This portion might be altered depending on the teacher's comfort level with different types of assessment activities, but I have found that checking skills a number of times in different ways has worked for me. Creativity is still possible with this model – developing projects with multiple outcomes is not only possible but encouraged, as many objectives are very closely linked.
On this note, some may choose to use each curricular objective verbatim from the curriculum, while others, like me, might choose to reword or rework it. In either case, during the creation of assessment activities, it is vital to think ahead. If a student were to get a question wrong, would the cause be a misunderstanding of the curricular objective, or something else? Some skills rely on prerequisite skills, and deciding whether to include these within your outcomes is up to you; one of the advantages of outcomes-based assessment is finding patterns of student strengths and weaknesses, and those patterns will emerge whether or not you outline the prerequisite skills separately.
To me, this type of assessment is more work and takes a great deal of foresight, but it is educationally logical. In addition, for me, it is the basis of how I differentiate as a teacher. Below are two fictitious students' results and their progress for the first few weeks of a sample class:

Each graph represents a student; along the right-hand side is a collection of their quantitative assessment feedback, and each assignment contains qualitative feedback as well. Conferencing with students regularly allows them to see specifically where they are strong and where they are weak within a subject discipline. This helps students see that every student has something they can improve upon, and that every student can celebrate what success looks like for them. Moreover, it helps teachers adjust their teaching to cater specifically to individual student needs. While many teachers can point out some of the weaker students, knowing what each and every student needs to improve upon can best help improve student engagement and learning.
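The per-outcome record behind graphs like these can be sketched as follows; the student names, outcomes, and scores are fictitious, as in the graphs above:

```python
# Sketch: aggregating per-outcome scores across several assignments to find
# each student's area of need. All names, outcomes, and scores are fictitious.
from collections import defaultdict

records = [
    # (student, outcome, score out of 4)
    ("Student A", "ratio and rate", 3), ("Student A", "ratio and rate", 4),
    ("Student A", "nets of 3-D objects", 2),
    ("Student B", "ratio and rate", 2),
    ("Student B", "nets of 3-D objects", 4), ("Student B", "nets of 3-D objects", 3),
]

# Collect every score for each (student, outcome) pair, then average.
totals = defaultdict(list)
for student, outcome, score in records:
    totals[(student, outcome)].append(score)
averages = {key: sum(v) / len(v) for key, v in totals.items()}

def weakest_outcome(student):
    """Return the outcome with the lowest average for one student."""
    mine = {o: avg for (s, o), avg in averages.items() if s == student}
    return min(mine, key=mine.get)

print(weakest_outcome("Student A"))  # nets of 3-D objects
print(weakest_outcome("Student B"))  # ratio and rate
```

Because the record keeps one row per outcome per assessment, the same data supports both a progress-over-time graph for one student and a class-wide view of which outcomes need reteaching.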
Lastly, setting up assessments for my students can be a daunting task. How can I clearly separate skills that are so closely related, while not creating work that is too isolated from context? When I was a student, I sometimes struggled with word problems. I quickly learned that if the calculation problems were (for example) multiplying fractions, then the word problems would most likely be the same – thus I would take the numbers given in the problem and thoughtlessly multiply them. Working in an inquiry-based school, I try very hard to integrate meaningful work in my classes that gives students freedom to explore within a framework of Mathematics and Science. While these may seem like conflicting ideas, I believe that tasks both large and small can be chunked into sub-tasks and eventually skills. It does take a good amount of pre-planning and a good understanding of the curriculum, but I feel that it yields many benefits for students, parents, and teachers.
Brindley, G. (2001). Outcomes-based assessment in practice: Some examples and emerging insights. Language Testing, 18(4), 393–407.
Davies, A. (2011). Making classroom assessment work. Bloomington, IN: Solution Tree Press.
