
Conjunctive Standards-Based Grading

I’ve had a lot of requests lately to explain my grading system, so I thought I would outline it in as much detail as possible here. I learned this summer that what I do is apparently called “conjunctive scoring”. That is, students cannot compensate for a low score in one area (say, their understanding of forces) with a high score in another area (say, conservation of energy). Rather, it requires at least a minimum amount of mastery in every area to earn a passing grade overall.

A minimum amount of mastery?

My physics objectives are split into “A” and “B” flavors. At minimum, each student has to demonstrate consistent mastery on the A level objectives. So grades between 70 and 90 depend on how many of the B level objectives a student has mastered. More on how I get the number grade below. The important part for now is that it is clear to students whether each skill is an A or a B skill from the start, and that A skills are the required, more basic (though not necessarily easier) objectives.

Even if a student has shown mastery on some B’s, if they are missing an A objective, they are not yet in a position to earn a passing grade at the end of the term. Luckily, their grade doesn’t exist until the end of the term, so they have plenty of time to do corrections and pick up skills that they missed along the way. During the final week or two of the quarter, I remind students which A objectives they have yet to demonstrate, and I start making sure (writing comments to advisors, etc) that they come in for help and then to assess.

Quick note on my naming convention: I didn’t want the names (A, B, etc) to connote letter grades, I wanted to make it easy to have as many levels as needed (turned out 2 is right for now, though I started out trying three last year), and there are too many numbers involved already. Of course other naming conventions would be possible (colors? people? objects? etc).

How I grade assessments: Yes/No and Feedback

A scanned copy of a grading sheet from last year. This is a typical example of what the front page of a returned test looked like.

First, I work through every test, marking them up with comments, corrections, etc. Since I don’t have to worry about points or grades, this process goes pretty quickly for me, even when I’m writing feedback on their work as I go.

I use a binary system to score each objective for each student on each question (where it is relevant). In the picture, I have a scanned copy of the scoring sheet I attach when handing back a test. In the rows going down are objectives, in the columns going across are question numbers. The final column shows the overall recorded score for each objective. I only record one score per objective per assessment, even though I usually measure each skill multiple times on the test.

The markings that I use:

2 = Mastery shown (this is a “yes”)

1 = Developing mastery— could be an error in process, arithmetic, units, etc, but something about the approach was correct. (this is a “no”)

0 = No mastery shown— so many errors or confusions that the student does not seem at all close to mastering this skill. (this is a “no”)

– = No data— student misinterpreted a question so much that the skill I’m trying to test is not observable in their response, or I don’t see their response as good evidence either way, or their response simply did not involve the skill. It is sometimes possible to have a completely correct solution without showing a particular skill that I was expecting to see.

In the final column, I put their overall score. If I recorded a 1 for a particular skill on any question, then the overall score is a 1. Even if the majority of scores are 2’s, I still record it as a 1 because I am looking for any evidence that a student has not mastered the skill. If I were to “do them a favor” and ignore the 1 in favor of two or three 2’s, then I would actually be setting them up to fail down the line because I’m letting them ignore a problem (even if it is, or they think it is, a small one) that they will likely continue to have.
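In code, the recording rule above amounts to taking the lowest observed score for each objective while ignoring the no-data marks. Here is a minimal sketch of that logic; the function name and the list representation are just illustrative, not part of my actual process:

```python
def overall_score(question_scores):
    """Combine one objective's per-question scores (2, 1, 0, or "-")
    into the single recorded score. Any evidence of incomplete mastery
    pulls the recorded score down, so a lone 1 among 2's records as a 1."""
    observed = [s for s in question_scores if s != "-"]  # drop no-data marks
    if not observed:
        return "-"        # no evidence either way on this assessment
    return min(observed)  # lowest observed score wins

# Three 2's and one 1 still record as a 1:
assert overall_score([2, 2, 1, 2]) == 1
```

The `min` here is exactly the "looking for any evidence that a student has not mastered the skill" idea: one shaky performance outweighs several clean ones.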

As shown in the scanned example, sometimes a student shows mastery (or in this case, inadequate mastery) on an objective that I didn’t anticipate. It is easy to simply write those in at the bottom of the sheet.

At the end of the marking period, I only look at the most recent score when determining a number grade. (This process gets a bit more nuanced when the end of the marking period includes an exam, but more on that later.) Scores can go up and down as more data is accrued, and the scores do tend to fluctuate (which I take to mean that one data point is not sufficient for measuring complete mastery on many skills). I think letting scores go back down is an especially important piece of this grading scheme, and I’ll talk more about that below.

Important note about a difference between A and B objectives: To get a 2 for an A objective, the student must show that skill perfectly in the problem. To get a 2 for a B objective, the student typically must both show that skill perfectly and get the problem completely correct. That is, you can get problems wrong but still get “credit” for the A objectives, but to get the B objectives, you must be able to do the entire problem consistently. So you will not be stopped from passing the class if you never learn to use your calculator in a proper and repeatable way, but you will not be able to get an A in physics if you can’t finish all of the problems correctly. This idea both seems reasonable to me, and also seems (after one year of trying it) to result in much more careful students who are much better at doing arithmetic and routine calculations.

What if I get a 1 or a 0? How additional assessments work.

On the first test for a topic, more than a couple of students get all or mostly 1’s (while a 0 on one problem is not uncommon, those students often still get a 1 overall because they have shown some developing mastery on another problem).

Now that they’ve gotten some feedback, it is time for students to start remediating and improving. This work comprises the majority of the out-of-class work, aka homework, that I’m giving them and is very self-directed. Still, I help them develop the tools and process to address their mistakes so they are not completely on their own.

The first step is to make corrections to their work. Often they will check in with me briefly once they have done that (usually at breakfast, a nice benefit of boarding school) to make sure their corrections are now right. Next, they need to do additional practice. The next assessment on the same skill will probably look very different to them, so they need to make sure that they learned the skill itself, not just how to answer the original question. This year, I am putting extra practice problems (plus answers) on a class website.

Then, when they are ready, I will have them apply for an additional, out-of-class assessment. I’ve blatantly stolen Sam’s application and modified it for my needs here. Thanks, Sam!

This year, I am only giving these opportunities once per week. I am also planning to give in-class assessments once per week instead of waiting until the end of each unit. I will vary the length (sometimes 10 minutes, sometimes full period tests). Everything that we’ve done so far in class will always be fair game. I hope this change will improve the data that I’m taking and help make students feel more comfortable with assessment as a part of the improvement process.

What does it mean to earn an “A”?

In my system, an end grade of “A” represents mastery of all the objectives detailed for each unit. To get beyond a 90, though, they need to move past the atomized skills and show synthesis. They must know when to use each model, must be able to use multiple models for different parts of one motion or problem, and must show creativity in their thinking.

I try to measure this depth of understanding on the semester exams in two ways: by seeing how they approach a series of comprehensive but traditional physics problems and by using goal-less problems. Before the exam, they might have shown mastery on skills in a somewhat isolated way. They apply for an additional assessment on specific objectives, then I give them questions that address those requests. On the exam, they must demonstrate most or all of the skills on problems that are not categorized for them as belonging to a particular model.

I wrote about my semester exam last school year, and I plan to write an update after going through the same process one more time this coming January.

Calculating the semester grade

In the end, I have to turn all of that rich information I collected into one final number.

Here is the basic plan:

I’ll start by assuming all students have demonstrated all A objectives going into the exam. I make it pretty difficult for them not to do this by the time exams start, and I have the advantage of students living here when I am tracking down those final few students.

On the exam, I give them a print-out of all of the objectives from the semester. Any that they have not yet demonstrated, I highlight for them. They turn this paper back in with their test and I use it for grading.

If they demonstrate a highlighted skill, I cross it off. If they falter on one they had already shown, I circle it. For the circled skills, I look back at their history over the semester (thank you, ActiveGrade). I then have to decide whether their mistake on the exam outweighs a consistent history of mastery. More often, though, a mistake on the exam corresponds with an inconsistent history on that skill. Sometimes they perform a skill correctly on one problem and incorrectly on another during the exam. After looking at their history and all of their work, I decide whether to count that skill as a yes or a no (or sometimes if they can do it correctly on an easier problem but not on a harder one, then I’ll count it as a 1/2 yes).

In the end, I count up the number of missing B objectives (counting any 1/2 yeses as 1/2 of a B objective) and subtract it from the total number. I use that percentage of B objectives to interpolate a score between 70 and 90. So mastering half of the B objectives would correspond to a final grade of 80.
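As a quick sketch, that interpolation works out like this (the function name and signature are just illustrative; half-yeses are passed in already counted as 0.5):

```python
def semester_grade(b_mastered, b_total):
    """Interpolate between 70 (all A objectives, no B's) and 90 (all B's),
    using the fraction of B objectives mastered."""
    fraction = b_mastered / b_total
    return 70 + 20 * fraction

# Mastering half of the B objectives corresponds to a final grade of 80:
assert semester_grade(5, 10) == 80.0
```

The goal-less problem work then decides where the grade lands in the full 70 to 100 range, which is a judgment call rather than a formula.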

To get the final 70 to 100 score, I also take into account their work on the goal-less problems. And in the event that their performance on the exam is much worse than what they were showing me on smaller assessments (this is very rare), I also have a means to take into account the earlier data that I collected. I describe that in my earlier post, so I’ll end the calculating grade discussion here.

Why I think this grading system rocks

I’m sure I will miss some of the reasons, but here are a few highlights of why I love grading this way. I’ll also focus more on why I especially love this particular flavor of SBG. For more reasons why the whole idea of grading in a feedback loop of formative assessment and remediation based on standards is amazing, read Shawn’s blog.

  • For the most basic skills in my class (about 9 out of 30 total in my regular class), it means there is no moving on. It’s not okay to never “get” a topic. It is a promise to my students that they are going to learn at least this much physics, and that I will keep working with them on it all year.
  • It helps struggling students see where to focus their energy first (start with the A objectives). It rewards starting problems even when you are sure you won’t be able to finish them (and often students find that once they start, they actually CAN finish).
  • It raises the expectations of learning for everyone in the class. If 100 in a class means that you’ve mastered all of the skills, then the expectation is that not everyone will do that. No school is going to be okay with everyone in the class getting 100 (or even with many in the class getting 100). More on that in a minute. So students are essentially encouraged to be satisfied with (or even happy about) a very incomplete understanding of the content covered in the class. Moving that pin down to a 90 means that there is an expectation that many (I really hope ALL) students will master the skills that we learn in class. The 90 to 100 range now represents doing something extra with those skills once you have them.
  • It exposes cheating as the pointless, silly exercise that it is. There is little point to cheating on a test when your score on that test will not affect your final grade (since all of the future tests of the same skills will bury that score).
  • It devalues “cramming” because while that may work for one test, it won’t keep working, and it definitely won’t work on the exam.
  • Students feel strong and powerful with their skills. They have demonstrated them over and over. There is no way to explain their success away as luck. So even though the exam is still a bit intimidating, they are able to have a more substantial faith in their tried and true abilities.

Tangent time: can you think of a practical class that people would want to take where we would be unhappy to have a 100% average? Example: swimming class, CPR class, driver’s ed, cooking class, etc. So why, for a class like physics that many are perhaps not taking in a completely voluntary way, do we not want them to all learn everything? Why is our expectation that they not all be completely successful (in fact, our expectation is even worse: that no one will be completely successful)? Perhaps this is a post for another time, though.


About Kelly O'Shea

I teach high school kids physics at an independent day school in NYC. Less homework, more thinking. Follow @kellyoshea

Discussion

42 thoughts on “Conjunctive Standards-Based Grading”

  1. Hi Kelly,

    I love this post, and I hope to steal some of these ideas when I go back to SBG. I have a couple of clarification questions, though: it seems like students mainly demonstrate objectives by taking tests, right? How often do they take them, and how long is each test?

    Also, I get the sense that a “test” is a recurring assessment throughout the semester, whereas an “exam” only occurs at the end of the semester. Is this a correct interpretation?

    Thanks!
    Bret

    Posted by bretbenesh | August 2, 2011, 10:52 AM
    • Thanks, Bret. I’ve found tests to be the most reliable way to assess in my physics classes. I want to base my data on work that belongs only to one person. I don’t want to get misled into thinking that someone understands something and fail to prepare them properly for the exam. And yes, exams are at the end of the semesters, and I tend to call any other written assessment a test. I used to give tests about every two or three weeks. This year I’m going to give them every week. They were usually 40 minute tests in the past, with an occasional (in Honors Physics) 70 or 80 minute test during a double-period. This year I will try for 15 minute tests as the norm with a 40 minute test every three or four weeks. I’ll probably write up a post about how that goes and whether I stick to that plan.

      I’ve sometimes used conversations with students to assess very conceptual objectives. A few times I gave them a tricky question, telling them they could talk to any students in any class about it and that they should come back and find me when they are ready (the most productive approach was for them to go straight to my current or former Honors Physics students, which many of them realized). When they came back, I listened carefully to their response. If it was correct, I had some prepared (but not known to them) follow up questions to see whether they had just memorized an answer to give me or whether they really understood it. Usually the Honors kids did a great job of really explaining it to them, not just giving an answer, and the current student was able to answer the follow up questions, too.

      As far as projects, etc, I just worry that it would be much easier to get some information about things they don’t understand than things they do, and I wouldn’t want to create a grading task that they couldn’t use to demonstrate mastery. I think projects are great for learning, but if they truly did learn, they should be able to follow through with it on a written assessment.

      Oh, and a guiding principle for choosing out-of-class assessment questions was that it would look like a very different question to the kid, but basically the exact same question to me. If that makes sense. I wanted to make sure they really got the skill so that I knew they would be able to do it again on a future test. Some students saw this as me making it harder each time they assessed, though the difficulty of the question didn’t really increase. I think the once-per-week assessment will make this aspect much easier this year as opposed to the whenever-you-want-as-chaos-ensues version that I used last year. They won’t be expecting the exact same question in the way that they did when they came back the next day for another chance.

      Posted by Kelly O'Shea | August 2, 2011, 11:30 AM
  2. Thanks, Kelly. I look forward to digging in

    Posted by Dorrie Bright | August 3, 2011, 9:36 AM
  3. Ohhhhhh this is so helpful! Thanks for posting, and explaining the particulars of how you do SBG. I muddled through it last year, with varying degrees of success — I think your method is much better. I will be, um, borrowing heavily from your materials. :)

    Posted by Jennifer Whalen | August 3, 2011, 10:13 AM
  4. This totally rocks Kelly! Do many teachers at your school grade this way? I was asked to wait a year (at least) while my admin looks into SBG more before they allow me to do just the standard SBG. I really like what you have done. What do you do for progress reports? We have 3 per semester and online grading program that we must use. Do you have this too?

    Posted by Chija Bauer | August 3, 2011, 5:42 PM
    • The other two people teaching physics do. And one of those two teaches math and uses it there, too. Oh, and the Choral Scholars teacher essentially uses SBG, too, just a different flavor of it.

      We just have quarter grades and semester grades. I use ActiveGrade and the kids can log in there, but I luckily have relatively few requirements on what I’m doing, so it hasn’t been too tricky to fit it into the fold.

      Posted by Kelly O'Shea | August 7, 2011, 2:56 PM
  5. Hi again! I’ve got a question about how you handle re-assessments.

    In the graded example shown above, the quiz had 6 problems, and (for example) 7.2 showed up in 4 of the 6 problems. The student must have scored a 2 in each instance of standard 7.2, and thus earned an overall score of 2 for that standard. …this part I get.

    But take standard 7.3… it shows up in 5 of 6 problems, the student gets an overall score of 1. Later, they want to re-assess that standard. Do you give them another 6-problem quiz? How many times do they need to show mastery of a standard upon re-assessment to get “bumped up” to a 2?

    Also, if they want to re-assess 7.3, but 7.2 shows up as well, do you “grade” the 7.2 and drop their grade down (if warranted), or do you ignore standard 7.2 on a student-initiated reassessment of standard 7.3?

    Thanks,
    Jennifer

    Posted by Jennifer Whalen | August 9, 2011, 12:11 AM
    • Great questions!

      Last year, I usually just gave them one more problem. I didn’t like that at all. This year, with the applications for assessments and only one day per week, I’ll know what they want further ahead of time. So I’ll probably try and get them twice on that extra test. Also, I have their earlier work (I scan everything), so I can make sure I hit the aspect that they had trouble with the first time (but from a different angle, probably). I want to make sure they totally have it before I let them stop thinking about it in such a focused way.

      And on your last question, absolutely yes. It is clear to them that everything is fair game on any assessment, whether they asked for it or not. That can be frustrating for them, but if you can get the buy-in, then they also realize that you would only be marking them down so that you can make sure they are absolutely prepared for the exam in the end. Looking the other way when you know that they need help with something would be setting them up to fail later. So they aren’t happy about it at the time, but it isn’t that big of a deal in the long run.

      Of course this is just what I do, and there are lots of different ways to get there. :)

      Posted by Kelly O'Shea | August 9, 2011, 6:26 AM
  6. My goodness – I’m trying to figure out how to do assessments (what I think you call tests here) and this was a lot for me to digest. I’ll probably have to start much simpler than what you have, but it was fascinating trying to dissect (can you tell I’m a Bio teacher) each part of the assessment. I’m real nervous about creating the assessments as I have no idea where to begin.

    Posted by Harry Wood | August 18, 2011, 11:38 PM
    • Take it one step at a time! :) Start with your objectives. Decide what you want to assess and make an outline. Then fill it in with questions that hit the objectives. And get student feedback on how they think the whole process of assessment is working. They usually have great ideas. Even if they don’t always know exactly what it is that they want, if you can get them to describe what isn’t working, you can usually come up with a plan that will meet both their goals and your goals. You could put an (obviously ungraded) extra question at the end of a test every once in a while asking for that kind of feedback from them.

      Posted by Kelly O'Shea | August 19, 2011, 7:05 AM
  7. Kelly,

    I really like your use of the checklist with the test and am starting to see how I could make use of your binary system. At the intro college physics level I am used to using a lot of short conceptual questions (multiple-choice and explain your reasoning) and it seems like your system would let me continue to use a lot of those same types of questions in an SBG implementation.

    Posted by Joss Ives | August 20, 2011, 1:56 AM
    • Thanks. I’m not sure whether I’ll use that grading sheet as often this year since I’m doing (at least) weekly shorter assessments instead of unit tests. I’m sure I’ll post about it once I get into the groove of my new assessment strategy. Also, I wouldn’t mind giving them the sheet with the test (instead of stapling it on after I’ve graded it) except that it calls out which questions hit which objectives, and sets it up for kids who get by being good at testing to get by being good at testing (and therefore obscures some of the data you’re trying to take).

      Posted by Kelly O'Shea | August 21, 2011, 1:05 PM
  8. This is great information Kelly, your website has been a great resource for me.

    Perhaps I missed it, but how does a student obtain a “2”? Is it that the re-assessment must be done with the objective having only “2”s? In which case the “1”s from a previous quiz no longer count? That seems to make sense to me.

    Consider 1.3 B CVPM. Suppose a student is at a “1” and wishes to re-assess. How many questions would the student be asked on the re-assessment?

    cheers,
    Doug

    Posted by bcphysics | October 8, 2011, 7:01 PM
    • Doug asked the question I was thinking yesterday. Do you reassess at all levels 0, 1, and 2 or just at level 2? I am also curious as to what is typically done.

      Posted by Harry Wood | October 8, 2011, 8:24 PM
      • Hey Harry,

        I’m not 100% sure what you’re asking here. If a kid has a 0 or a 1, then they are looking to get another chance. If they have a 2, then they are temporarily in good stead (I’m trying to get them to think about it this way: once you have three 2’s in a row for the same objective, then you can start to consider yourself at the “mastery” level… before then, we just don’t have enough data yet— I’m trying to get them to stop thinking that when they get a 0/1 after having a 2 that it means they “lost” an objective).

        When they ask for an extra test, I make one up for them that hits what they ask to show. But it will almost definitely also hit other objectives, too, that they didn’t specifically ask to show. I always take data on anything I can, so I will score those other objectives, too (for better or for worse).

        Am I answering what you’re asking?

        Posted by Kelly O'Shea | October 8, 2011, 9:10 PM
        • Yes and no.

          For example, you use a scoring rubric from 0-2. I assume you have at least one question at level 1 and at least one question at level 2 for your first time assessing a particular skill. Suppose a student scores only a 1 on that first assessment. When that student asks to take a reassessment, do you again have a question at level 1 or since they already showed they can do level 1, do you just provide a question or two at level 2?

          Does that help?

          Posted by Harry Wood | October 8, 2011, 9:17 PM
          • Aha, now I think I’m getting your question. I don’t use different “levels” of questions. That is, on any question, I will give a 0 for not showing any understanding, a 1 for showing that they are developing mastery (they know something about it, but they don’t have a completely correct solution), or a 2 for a perfect solution.

            So, just one type of question. But I’ll probably hit them a couple of times on the same thing even on one assessment to try and get a better read on what they can/can’t yet do.

            Posted by Kelly O'Shea | October 8, 2011, 9:27 PM
            • I see! Wow…I’ve probably been putting too much work into my assessments then. I’ve been developing roughly 1 or 2 questions per level (I use a 0-5 rubric currently). You’ve given me a different take on developing questions to gauge their understanding.

              This does bring me to another question on a tangent. How much does a student knowing key vocab words figure into your scoring? I struggle with this sometimes because my students usually have low English skills. So when they need to know a vocab word, that is usually one of the lower level questions. And doing the actual skill is a higher level. Sometimes I get students who can’t explain adequately to me what a key vocab means but they can show me how to solve the problem. Basically, they can’t answer a level 1 question, but they can answer a level 4 question. What would your suggestion be?

              For now, I can’t justify giving a 4 or 5 because of the inability to explain a key vocabulary word.

              Posted by Harry Wood | October 8, 2011, 9:36 PM
    • A student can get a score of 2 (mastery) on an objective on any quiz/test. Only the most recent score counts. If they have a 1 or a 0 on something they are ready to demonstrate, then they can ask for an extra test (we’re ditching the word “reassessment” this year because it doesn’t connote what we want it to connote) or wait for the objective to come up again on another quiz in class.

      On an extra test (a personalized test that I make for the kid based on which objectives they ask to show), I try to get a couple of data points so that I feel good about a solid 2 (if they get one). It’s not fun for them to get a 2 using a single question on one quiz, then back to a 1 on the next quiz because they were still a little shaky. So if they want to show that they can draw FBDs, I’ll probably give them at least two or three to draw (at least one with angles, one without, so I can give more directed feedback on what they’re still missing).

      Another thing is that sometimes they come in for an extra test and aren’t 100% ready for it. But by the end of that extra test, they have totally figured it out. Then next time it comes up on an in-class quiz, they are ready for it and get a 2 then. So coming in for an extra test can be really useful, even if it doesn’t result in an immediate bump up in a score.

      But all of these scores are just temporary scores. Once we get to a time when the school wants me to take a snapshot (that is, put a number grade in), only the most recent score is going to count.

      I’m not sure if I totally understood your question, so I might have been answering something else here. If that doesn’t help, please try asking again! :)

      Posted by Kelly O'Shea | October 8, 2011, 9:07 PM
  9. Hi Kelly,

    Considering the process required for determining a grade at the end, what do you think we should tell students that continually want to know their grade as the term progresses? My students know where they are at in terms of achieving goals on their learning objectives, but they still want to know a “grade.” My only idea at this point is to take a running tally of all their learning objectives (last one counts) and calculate it. So a kid that has been tested on 6 learning objectives and got 3, 4, 4, 4, 4, 3 would be sitting at 22/24 or 92%. I’m not sure how to handle this…

    Posted by bcphysics | November 3, 2011, 4:42 PM
    • In my case, I tell them the truth: Their grade simply does not exist yet. I never do averages or use points (I don’t like the “out of 4” system… I think the binary system does a better job of getting them to stop thinking of the numbers as percentages and also is better at encouraging a fuller mastery). When I have to put down a number at the quarter, it is just a snapshot. It has no numerical effect on the semester grade. Before and after the quarter ends, their grade just doesn’t exist at all. I don’t have a secret grade in my head. I don’t have a secret gradebook. Their current scores on each objective are only temporary scores as they will all certainly be replaced again and again before the end of the semester.

      I think the only thing to do is to start working on getting them to shift their paradigm about grades. The feedback that I get from the kids here is that the biggest downside of SBG is how everyone “freaks out” when you have to actually translate to a number grade. And of course, that “freaking out” is nothing compared to the constant panic of physics students back when we had points and number grades on every test. But relative to the pretty serene calm of every other week, it seems like a big deal. :)

      Posted by Kelly O'Shea | November 3, 2011, 5:21 PM
      • Kelly, I think you are absolutely right in the mindset. And this is what I’ve been telling the kids so far. I ask them to review their learning objectives progress and that will tell them where they are at.

        This year is quite strange for me, the school I’m at is very different from other schools. This one is considered to be quite “academic.” There are a lot of kids who are used to getting very high grades, and their parents expect the same. The grades have been very misleading though imo, as many grade 11 physics students cannot do simple equation manipulation even though they always get 97% in math. But I digress. 1/2 of my students are writing SATs (I’m in Canada). They all want to go to a prestigious university in the US, or I should say that their parents want them to go. Many students have been here for only a year, coming from China. They see this as a stepping stone to going to Stanford et al. From what I understand, parents are quite willing to pull kids from classes or school if their grades are not high enough, and send them to distance learning programs where frankly speaking, high grades are easy to get. So SBG here isn’t necessarily about getting the students on-board, but more about a cultural shift “in progress” and somehow reaching out to parents that have a completely different attitude about learning. To many families, the school’s job (ie my job) is to get their kids into a US university. It is very strange. Many of the kids really like the SBG method, although I can see the stress of grades is very real for them, even if it isn’t their doing.

        I really enjoy your blog, sorry about the little bit of OT ramble!

        Posted by slugga | November 4, 2011, 9:28 PM
  10. Kelly,
    Thank you so much for this post. After attending my first modeling workshop this summer, my head is literally swimming with thoughts of how I am going to implement modeling while at the same time implementing SBG for the first time. This helps to clarify a systematic method for SBG. I cannot wait to start!

    Posted by Chuck White | August 13, 2013, 1:57 PM
  11. Hi Kelly, I was trying to access your class website to look at the extra practice you post for students, but it keeps saying I have insufficient privileges. Any chance you can post your extra practice? Would be great to use with my physics students during RTI.

    Posted by mathwithmoxie | September 9, 2013, 9:37 PM
    • Hi. My personal class websites are not meant to be available to anyone but the students in those classes. I’ve also had problems with others using them and confusing students. I’m sorry, but I can’t make them available to anyone else. I’m also not sure what RTI is.

      Posted by Kelly O'Shea | September 10, 2013, 6:06 AM
      • Sorry Kelly, I didn’t mean to seem intrusive or nosy. I just assumed that since you had the link up to your website, that you had intended to give people access to the extra practice you use. I completely understand why you want to keep it private, though!

        RTI is *supposed* to stand for “response to intervention.” However, at my school, it’s really just a time when teachers can make struggling students come into their class for 30 minutes to work on things. I use it for remediation and reassessment. I put together extra practice for all of my classes that they have to complete and check before I allow them to reassess.

        Posted by mathwithmoxie | September 15, 2013, 3:33 PM
  12. Have you found most of your students are having success with this format? In other words, what is your average breakdown of grades per class (percentage of A’s, B’s, etc.)?

    Posted by Madison | October 10, 2013, 10:31 AM

Trackbacks/Pingbacks

  1. Pingback: Assessment, Feedback, and Grading « Teach. Brian. Teach. - August 8, 2011

  2. Pingback: Raising the bar for an ‘A’—Capstones « Quantum Progress - August 9, 2011

  3. Pingback: Feedback to a former you | laid-back science - September 16, 2011

  4. Pingback: A Culture of Do | jaytheteacher - January 30, 2012

  5. Pingback: Beoordelen van proefwerken | Bernard Blogt - July 24, 2012

  6. Pingback: Technology Integration for Math Engagement » Standards Based Grading Rubric for Math – Part 2 - August 15, 2012

  7. Pingback: Syllabus, Tweeting, and Conjunctive SBG - August 26, 2012

  8. Pingback: Day 5: Hippie grading and first quizzes « O'Shea Physics 180 - September 8, 2012

  9. Pingback: Day 18: Weekly Quiz « O'Shea Physics 180 - September 22, 2012

  10. Pingback: Putting Together a Standards-based Assessment System for Physics (part 2) - February 14, 2013

  11. Pingback: Creating Assessments: Three Types of Standards | Mathy McMatherson - March 23, 2013

  12. Pingback: Balancing Assessment/Pedagogy and SBG | Right Brained Math - April 7, 2013

  13. Pingback: First Year into SBG: a Summary | Hilbert's Hotel - May 20, 2014
