Improving the evaluation of learning in a project-based class

I’ve been struggling for a few years with providing rich, authentic tasks for my computer science students and then having to evaluate their work.

My students learn a lot of skills quickly when solving problems they’re interested in solving. That’s wonderful.

I can’t conceive of a problem they will all be interested in solving. That’s frustrating.

In the past, I have assigned a specific task to my entire CS class. I tried to design a problem that I felt would be compelling, and that my students would readily engage with and overcome. The point has always been to develop broadly applicable skills, good code hygiene, and deep conceptual understanding of software design. The point is not to write the next great 2D platformer or the most complete scatterplot-generating utility.

Unfortunately, I could never quite get it right. It’s not because my tasks were inherently weak; rather it’s that my students were inherently different from one another. They don’t all like the same things.

I believe that students sometimes need to do things that are good for them but that they don’t like to do. They sometimes need the Brussels sprouts of learning until they acquire the taste for them. But if they can get the same value from the kohlrabi of learning and enjoy it, why wouldn’t we allow for that instead?

So I’ve tried giving a pretty broad guideline and asking students to decide what they want to write. They choose and they complete a lot of great learning along the way. Their code for some methods is surprisingly intricate, which is wonderful to see. They encounter problems while pursuing a goal that captures them, and they overcome those problems by learning.

Sounds good, eh?

Of course, they don’t perform independently: they learn from each other, from experts on the Internet, and from me. They get all kinds of help to accomplish their goals, as you would expect of anyone learning a new skill. And then I evaluate their learning on a 101-point scale based on a product that is an amalgam of resources, support, and learning.

Seems a bit unfair and inaccurate.

I asked for suggestions from some other teachers about how to make this work better:

  • ask students to help design the evaluation protocols
  • use learning goals and success criteria to develop levels instead of percentage grades
  • determine the goals for the task and then have students explain how they have demonstrated each expectation
  • determine the goals for the task and then have students design the task based on the expectations
  • find out each student’s personal goals for learning and then determine the criteria for the task individually based on each student’s goals

I’m not sure what to do moving forward, and I’d like some more feedback from the community.

Thanks, everyone!

Learn-practise-perform cycle limits learning in CS

Like many courses, the beginning of my current computer science e-Learning class looked like this:

  • Teach small skill
  • Teach small skill
  • Give feedback on practice work
  • Teach small skill
  • Teach small skill
  • Give feedback on practice work
  • Evaluate performance task

This separation of learning from graded performance is intended to give students time to practise before we assign a numerical grade. This sounds like a good move on the surface. It’s certainly well-intentioned.

But this process is broken. It limits learning significantly.

If the performance task is complex enough to be meaningful, it requires a synthesis of skills and understandings that the students haven’t had time to practise. In this case I’m evaluating each student’s ability to accomplish something truly useful when they’ve only had the opportunity to practise small skills.

If instead the performance task has many small components which aren’t interdependent, students never develop the deeper understanding that comes from seeing the relationships between concepts. In this case I’m evaluating each student’s small skills without evaluating their ability to accomplish something truly useful, which isn’t acceptable either.

And there isn’t time to do both. I can’t offer them the time to complete a large, meaningful practice task and then evaluate another large, meaningful performance task.

The barrier here is the evaluation of performance. It requires a high level of independence on the part of the student so that I can accurately assign a numerical grade.

So I’m trying something different.

Instead of using tiny, “real-world” examples (that I make up) to develop tiny, discrete skills, I started teaching through large, student-driven projects. I got rid of the little lessons building up to the performance task, and I stopped worrying about whether they had practised everything in advance.

The process looks more like this:

  • Develop project ideas with students and provide focus
  • Support students as they design
  • Provide feedback through periodic check-ins
  • Teach mini-lessons as needed for incidental learning (design, skills, etc.)
  • Summarize learning with students to consolidate

I couldn’t design a sequence of learning tasks that would be as effective as my students’ current projects are. They’re working hard to accomplish goals they chose, and they’re solving hundreds of small and large problems along the way.

They couldn’t appreciate the small, discrete lessons I was teaching with the small, artificial stories. They didn’t have the context to fit the ideas into. It was only when the project was large and meaningful that my students truly began to grasp the big concepts which the small skills support.

And now I don’t have a practise/perform cycle. It’s all practice, and it’s all performance. It’s more like real life, less like school, and it’s dramatically more effective. It’s much richer, much faster learning than the old “complete activity 2.4” approach.

Evaluation is very difficult, though.

My students told me what’s going on in my class

I talked to my data management kids today about the not-so-great class we had yesterday. We pushed all the desks aside and put our chairs into a (sort of) circle for this conversation. I explained how frustrated I was with the lack of feedback I was getting during class, and that I was concerned that my goals did not align with their goals for the course.

I asked them why they were taking the course, and what they were hoping to get out of it. My speculation last night was partly on target: their primary goals are to get a high school diploma, with a good mark in this course, so that they can get into “the next thing” (university programs for most of them). Some mentioned that they thought statistics would be helpful for their planned program. Overwhelmingly the course is seen as a means to an end. It’s not 110+ hours of learning; it’s more like a long tunnel they must pass through to get on with life.

This is what I was afraid of, and yet sitting there with my students I can’t blame them. Our school system (through post-secondary as well) trains them to focus on achievement, which is measured by task completion and marks. Our system doesn’t (can’t?) train them to value learning over these other goals, because the system itself doesn’t value learning over task completion and marks.

We had an honest conversation about what really matters in a math class. We talked about how they all learn exactly the same things even though they don’t all have exactly the same plans for the future. We talked about how we have a “just-in-case” curriculum: you must learn these skills just in case you need them someday.

And the most frustrating part for me was that they all know that a lot of what we do in class doesn’t really matter in the sense that it doesn’t really change them. They haven’t been improved by learning how to use the hypergeometric probability distribution. They will forget it when the exam is over because it doesn’t matter much to them. It’s not something that they’ll use, likely. And if they need it, it’ll be because they’re steeped in all the math that goes along with it.

But not everything we do is like that in my class. Some things do matter. And I’m feeling a bit guilty tonight because I think I should have focused the course a bit differently, spending more time on the parts that will change my students. We’re only a few weeks from the end of the course and we don’t have the luxury of the slow, thoughtful pace the remaining topics deserve. I can’t fix that now, but I can work on it for next year.

I grabbed the Chromebook cart and sent my kids to a Google Form with three paragraph-response prompts:

  • Start
  • Stop
  • Continue

They each wrote anonymously about what they think we should start to do in our class (perhaps an approach they like from another class), stop doing (approaches I’m taking that aren’t working for them), and continue doing (class components they don’t want to lose if I change things). Their responses were fascinating, and I’m going to read them over a few more times to make sure I get it all. It was pretty clear they don’t want any more audio clips, though :)

Our conversation also revealed that I misinterpreted their silence as a lack of interest or understanding. What I learned from them today was that there were portions of yesterday’s class that they did enjoy, but I couldn’t see it. They didn’t provide the feedback I was expecting, and I didn’t adjust my teaching to suit their needs. It was a difficult conversation for me (and probably them), and it took some time, but it was worth it. I understand my students better now, and I think I can be a better teacher.

It’s not all fixed, but I don’t feel quite like I did yesterday. I’m going to go to class tomorrow with a plan to improve my teaching and their learning at the same time.

We have to stop pretending… #MakeSchoolDifferent

I’m responding to Sue Dunlop’s challenge (which is the result of a series of challenges stretching back to Scott McLeod). I’ve only read a few of the other posts that this challenge has generated, so I apologize to anyone who already expressed these same thoughts.

  1. We have to stop pretending that it’s okay to complain about someone else instead of offering them support.
  2. We have to stop pretending that telling people to learn how to cope is an effective strategy for dealing with mental health challenges.
  3. We have to stop pretending that evaluation can be both objective and accurate when implemented by a single human.
  4. We have to stop pretending it’s acceptable and reasonable for reporting periods to dictate the pace of learning in our classrooms.
  5. We have to stop pretending that there is a single, correct solution to any one of these complex problems.
  6. We have to stop pretending that we can do this on our own.

Oops, that’s 6. Ah well.

The tagged? David Jaremy, Peter Anello, Tim Robinson, Eva Thompson, and Doug Peterson. Additional apologies if you’ve already been tagged.

Summative Task for Quadratics – #MCF3M

My Grade 11 e-Learning math class is completing a unit on quadratic equations. I have a few things happening for their summative assessment, but the part I find most interesting is the following “experiment”. It’s heavily based on the Leaky Tower task from TIPS4RM at EduGAINS.ca. I’m going to test it out tonight with my kids before I finalize the evaluation criteria and post the task. If you have feedback, I’d love to hear it. I’ll be adding photos to help explain the setup.

Leaking Bottle – Summative Task – Part 1

You’ll be completing a short experiment and writing a report to go with it. You can get help from a classmate, family member, etc. while running the experiment, but just as an extra set of hands. No one should be helping you with the math part.

Preparation

Gather the supplies you’ll need:

  • a clear, disposable, empty, plastic bottle
  • a ruler
  • a watch, phone, or other time-keeping device OR a video-recording device.

—photo here—

Carefully poke a hole in the bottle about 3 cm from the bottom. Seriously, be careful here. You might try using something sharp, like a pin or a nail, to start the hole, then widen it with a pencil. You want the final hole to have a diameter of 3–7 mm. Don’t worry about being super-precise.

—photo here—

Hold a ruler next to your bottle, or tape a ruler to your bottle if you need both of your hands free. You want to be able to measure the water level, so put the “zero” end of the ruler at the bottom.

—photo here—

Cover the hole and fill the bottle with water. If your bottle has a tapered top (like the one pictured here), only fill it up in the cylindrical section (i.e. before it starts to narrow). You can cover the hole with your finger, or you might try a piece of tape (if you use tape, fold the end on itself so it’s easier to remove).

—photo here—

Data Collection

If you’re recording video (easier, I think), start recording. If you’re just using a watch or other timing device, wait for a “good” time, like a whole minute, for a starting point.

Uncover the hole, letting the water in the bottle flow out into a sink or another container. Don’t make a mess; nobody wants a mess.

—photo here—

If you’re using a watch, use the ruler to record the water level every 5 or 10 seconds or so. Pick an easy time to keep track of. Record measurements until the flow of water stops.

If you’re recording a video, let the water finish flowing out, then stop the video. Play the video back, noting the height of the water every 5 or 10 seconds or so.

Analysis

You now have a table of values: time (independent variable) and height measurements (dependent variable). If you didn’t get good data (you lost track of time, the video didn’t work, etc.), perform the experiment again. It doesn’t take long.

  1. Using Desmos, create a scatter plot for your measurements.
  2. Find an equation to fit the data as best you can.
  3. Identify the key points on the graph.
  4. How should the equation you found be restricted? i.e. what should the domain and range be?
  5. Write the equation you found in Standard Form and Vertex Form.
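
Desmos is the tool the task asks for, but if you’d like to sanity-check the curve it gives you, here’s a rough sketch in Python using NumPy’s `polyfit`. The numbers below are invented sample data, not real measurements; substitute your own table of values.

```python
import numpy as np

# Hypothetical measurements: time in seconds, water height in cm.
# Replace these with your own table of values from the experiment.
t = np.array([0, 10, 20, 30, 40, 50, 60])
h = np.array([12.0, 9.1, 6.7, 4.8, 3.4, 2.6, 2.3])

# Fit a quadratic h = a*t^2 + b*t + c (Standard Form).
a, b, c = np.polyfit(t, h, 2)
print(f"Standard form: h = {a:.5f}t^2 + {b:.4f}t + {c:.2f}")

# Convert to Vertex Form h = a*(t - p)^2 + q.
# The vertex is at t = -b/(2a), height q = c - b^2/(4a).
p = -b / (2 * a)
q = c - b**2 / (4 * a)
print(f"Vertex form:   h = {a:.5f}(t - {p:.1f})^2 + {q:.2f}")
```

The vertex-form conversion is the same completing-the-square step you’d do by hand, so this doubles as a way to check your algebra against your fit.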

Leaking Bottle – Summative Task – Part 2

One small change

Repeat the above experiment, but this time put another hole about 7-10cm above the first one. Uncover them at the same time, so water will flow out of both holes.

—photo here—

Your analysis will be a little more complex, because you won’t have a single, nice equation that can accurately model the data.

  1. Using Desmos, create a scatter plot for your measurements.
  2. Find an equation (or equations!) to fit the data as best you can.
  3. Identify the key points on the graph.
  4. How should the equation(s) you found be restricted? i.e. what should the domain(s) and range(s) be?
  5. Write the equation(s) you found in Standard Form and Vertex Form.
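
One way to handle the “more than one equation” analysis is to split the data at the moment the water level reaches the upper hole and fit a separate quadratic to each phase. Here’s a sketch in Python with invented numbers; your data and breakpoint will differ, and in Desmos you’d get the same effect with domain restrictions instead.

```python
import numpy as np

# Hypothetical two-hole data (height in cm vs. time in seconds):
# the bottle drains quickly while both holes are below the water line,
# then slows once the level falls past the upper hole
# (around t = 30 s in this made-up data set).
t = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60])
h = np.array([15.0, 12.9, 11.0, 9.4, 8.1, 7.1, 6.5,
              5.7, 5.0, 4.4, 3.9, 3.6, 3.5])

# Pick the breakpoint by eye from your scatter plot: the moment the
# water level reaches the height of the upper hole.
split = 30  # seconds

# Fit a separate quadratic to each phase.
fast = np.polyfit(t[t <= split], h[t <= split], 2)  # both holes draining
slow = np.polyfit(t[t >= split], h[t >= split], 2)  # lower hole only

print("both holes (a, b, c):", np.round(fast, 5))
print("lower only (a, b, c):", np.round(slow, 5))
```

In Desmos you could restrict each curve’s domain to match, e.g. one equation for {0 ≤ x ≤ 30} and another for {30 ≤ x ≤ 60}, which answers the “how should the equations be restricted?” question directly.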

Improving report card comments with a checklist

It’s report card season in Ontario, and I don’t know too many people who are happy about it.

I don’t love evaluating student performance in general, and the persistent and poisonous focus on MARKS by most stakeholders in student learning is infuriating. Marks are a huge loss of information about student performance, in my rarely-humble opinion. Along with those percentage marks we have a much-less-valued-but-more-valuable evaluation of Learning Skills. My students mostly ignored those, I think.

In truth, the hero of the report card is The Mighty Comment. It has the superpowers of Explanation and Recommendation. It’s here that I can talk about what’s going on, why, and how to improve.

After all, assessment is for improving learning. Reporting a mark of 68% doesn’t do that.

So The Mighty Comment is our hope for the future, the only power that can save our students and their parents from receiving an all-but-useless document.

Let’s do it right.

I’m teaching in a high school, and we have both a provided comment bank and the latitude to write our own comments. The only rules are that we need to follow the guidelines in Growing Success and we have to keep it under 458 characters.

I read an interesting article at rs.io called The Unreasonable Effectiveness of Checklists.

Fireworks blazed across my brain. I need a checklist to make sure I’m doing what I want to do with every comment.

So I made one

The Report Card Comment Checklist (catchy name, eh?) is now live. I also included The Verbose Report Card Comment Checklist immediately after it to help explain what I mean. Please leave comments here on the blog if you can help me to improve it.

I sat with each of my students this term to review their marks, learning skills, and comments before I submitted them to my school admin team. I wanted them to know that I tried to write what I thought and that I cared about their improvement. I articulated their strengths and what I need them to do next. I asked them each to reflect on their comment (most of them needed to be prompted) and to tell me whether they thought it was fair, accurate, etc. One student found a typo (yay!) and two asked me to clarify what I meant. About five students said their comments sounded exactly like them, which makes me proud.

I have to admit that I made the checklist this evening; I may have to edit my comments a bit next week before they’re published.

You should just click the link for the complete version, but here it is anyway:

The Report Card Comment Checklist

Check each student’s report card comment and ask yourself these questions:

Strengths

  • does it include at least one strength?
  • are the strengths related to the course?
  • are the strengths worded positively?
  • do the strengths stand alone?

Next Steps

  • does it include at least one next step?
  • are the next steps related to improvement in the course?
  • if a student reads the next steps, will they know what to do to improve?
  • are the next steps worded positively?
  • do the next steps stand alone?

Language and Tone

  • did I check for spelling, grammar, etc.?
  • did I read it out loud?
  • did I listen for sarcasm and negative feeling in my voice?

The Point

  • will the student feel that I care about their success?
  • will the student “see themselves” in the comment?
  • will the student want to continue to improve?
  • will the parent understand how to help their child improve?


Different kinds of Thinking: Ontario Math Achievement Chart

I’m evaluating some student work today and I’m struggling with the Achievement Chart for Mathematics (see page 28). In particular, this part of the Thinking category is bothering me:

—image here: an excerpt from the math achievement chart for Ontario—

Take a look at the first point in “Use of planning skills”, called “understanding the problem”, which includes “formulating and interpreting the problem” as an example of that skill.

Now look at “Use of processing skills” point “carrying out a plan”, which includes “modelling” as an example of that skill.

Are these different? In my mind (up until now, at least), “formulating and interpreting the problem” has meant representing a situation mathematically so that we can apply our other math skills to solving it. Isn’t “modelling” in the context of “carrying out the plan” sort of the same thing? Representing components of the problem mathematically? Is the difference just when it happens (i.e. formulating/interpreting is initial planning, and modelling is during the act of solving)?

I’m not trying to be pedantic here; I’m having trouble distinguishing between the different components of Thinking when I’m trying to assess and evaluate my students’ work. I could use some external thinking on this issue (and math evaluation in general, I suppose).

Please comment; I’d love to talk to you if you have ideas about this stuff.

Credit Recovery is a Good Idea

There is a lot of controversy surrounding the idea and implementation of Credit Recovery in Ontario high schools. Opponents I have spoken with usually say that CR:

  • gives credits to students who haven’t earned them
  • allows students to have poor work habits/work ethic/organization/etc. but still succeed (i.e. if they don’t fail they’ll never learn resilience)
  • takes jobs from teachers by preventing students from repeating courses

There may be other concerns as well, but these are the big three. I don’t think the last one merits much discussion, since it presumes that teacher jobs are more important than student success, which I feel is an unethical perspective. The other two are worth looking at.

How Credit Recovery is supposed to work

Here’s the Credit Recovery process from a student’s perspective:

  1. Fail a course
  2. Repeat parts of the course

Here’s the Credit Recovery process from the school’s perspective after the student fails the first course attempt:

  1. Identify the student’s level of success on each overall expectation
  2. Create/select a custom set of lessons and activities for the student based on their success in the first course attempt
  3. Guide the student through re-learning key concepts from the failed course
  4. Re-evaluate the student’s new work, including a new final summative task/exam (30%)
  5. Base the student’s new grade on their new performance, with the 70% term work possibly including information from the first course attempt

Credit Validity

I agree that this is a concern, for a couple of reasons. First of all, many schools will create a “Credit Recovery Package” for a course, but not for a student. For example, there will be an “AVI1O Credit Recovery Package” which will include 10 lessons about grade 9 visual arts, 4 assignments and a short exam/culminating task. The school will “administer” the package to every student who needs to recover AVI1O; if they pass each assignment and the exam, they get 50% in the course.

This is not okay for two reasons:

  1. The course package is not customized to the students’ needs. If a student performed well on half of the expectations but very poorly on the other half, what guarantee is there that the package is not mostly testing the same items they were originally successful on?
  2. The student gets a maximum grade of 50% (this isn’t part of Credit Recovery, and not necessarily part of this scenario, but it’s a common practice).

A pre-made, standard Credit Recovery “Package” may not be what the student really needs to learn/demonstrate.

Aren’t we teaching them to not work hard?

I understand this argument too. After all, if students never fail, never have to persevere, are we preparing them for “the real world”?

Let me give a few scenarios, and think about whether you believe the student should be able to recover the credit involved.

Scenario 1

A student is taking a full course load, including three grade 12 Science courses. The student is working hard throughout the semester. About two-thirds of the way through the semester, he sits at 70% in Grade 12 Biology. Then he starts to have some personal issues, including a difficult struggle with depression. He stops handing in assignments, disengages from the lessons in class, and enters into a spiral of failure. He fails the exam miserably, and the teacher assigns a final grade of 42%; the student basically flunked the whole last strand of the course.

Scenario 2

A student is taking a full course load, including three grade 12 Science courses. The student is working hard throughout the semester. About two-thirds of the way through the semester, she sits at 70% in Grade 12 Biology. Then she starts to spend a little too much time partying, and not enough time on homework. She stops handing in assignments, is often absent, and is disruptive enough to be sent to the office and suspended. She shows up to the exam but barely writes anything on it. The teacher assigns a final grade of 42%; the student basically flunked the whole last strand of the course.

Scenario 3

A student is taking Grade 12 Biology. He works hard, but consistently performs below Level 1 throughout the semester and achieves 42% in the course.

Who should be eligible for Credit Recovery?

These three scenarios are different, but they have some similarities. First, they’re all in the same class. Second, they all achieve 42% overall. Should they all (or any of them) be eligible to recover their credit?

Scenario 1 and 2

These are the same for me. The reason that the student struggled and was unsuccessful is not important to me. If they are both willing to learn the content, to demonstrate their understanding, and to recover their credits, great. Why would we insist that the students work through the first two-thirds of the course again when they have already demonstrated their understanding?

Create a custom package of material for each student. Base it on the expectations they did not successfully demonstrate and the ones they wish to improve upon. This also lets them improve on overall expectations they didn’t fail outright but could demonstrate more strongly given the opportunity (i.e. fix the downward spiral, not just the part that was below 50%).

Wait, what about teaching them a lesson? No thanks. I can think of better ways to teach a student to be more responsible than making them “suffer” through 13 unnecessary weeks of a course they already know. For example, make them fix their mistakes by redoing the parts they did poorly.

Scenario 3

This student should not recover the credit. He’s that nice kid, the one who always puts in the effort, but he failed consistently. The only way to recover the course would be to demonstrate an improved understanding of all of the overall expectations. That’s called repeating a course.

Is this a good idea?

Let’s say you’re learning to make wooden cupboard doors. You study under a master carpenter for 6 weeks, learning to select the wood, prepare it, coax it into the shape you want. When you finally produce a finished door for your teacher, he says that you need to improve your finishing technique. So, naturally, he wants you to take 6 more weeks to relearn all of the components of door construction that you have already mastered, right?

No. He’d work with you on your finish, helping you to improve by providing timely, relevant feedback. When you produce another door that is perfect in his eyes, you start working on the next type of project (table legs, in this case).

Not the best idea, but the best idea we have

Credit Recovery is a way for us to sort of fix the problem of 20-week courses. Students learn at different paces, and life gets in the way. Ideally, we wouldn’t give students the opportunity to fail. Everyone would be enrolled in the right courses, they would all be 100% engaged at all times, they would have the extra two weeks they need to master the topic.

Since that’s not going to happen, I think it’s reasonable to give students more chances, and to be careful to keep those opportunities relevant to their past performance to ensure the validity of the credits they achieve. It’s a good idea; we just need to do it right.