A few weeks back, I took my paper prototype for the Rubric Management control and did some UI testing. All went well until last week, when I found a small flaw in my test and design.
The problem was that I tested the wrong user base – I tested my paper prototype on other computer science students, when I really should have been testing it on computer science profs.
It all comes down to weights. As a student, I assumed that a professor takes a rubric, and distributes the weight among the criteria however they see fit: Style is worth 5%, Creativity is worth 10%, etc. My fellow CS students seemed to have the same impression.
Unfortunately, this is not the case. CS profs, after creating the criteria for the rubric (where there are usually at least 10 criteria), add weight multipliers to the criteria that they deem more important. So, for example, Style could get a weight multiplier of 2, and Creativity a weight multiplier of 1 (where 1 is the default weight).
So it’s a difference in orientation of thought: I assumed that 100% of the weight was distributed amongst the criteria. Instead, the criteria are created first, and the weights are applied as professors see fit. Once the weights are applied, the 100% has been ‘created’, rather than the other way around.
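To make the two mental models concrete, here’s a minimal sketch (the function and criterion names are mine, purely for illustration – not from any actual implementation) of how per-criterion multipliers turn into percentages after the fact, instead of the percentages being handed out up front:

```python
# Hypothetical sketch: each criterion gets a weight multiplier (default 1),
# and the percentage a criterion is "worth" is only derived afterwards.
def derive_percentages(multipliers):
    """Convert per-criterion weight multipliers into percentages of the total."""
    total = sum(multipliers.values())
    return {name: 100 * w / total for name, w in multipliers.items()}

# Style deemed twice as important; the others stay at the default of 1.
weights = {"Style": 2, "Creativity": 1, "Correctness": 1}
print(derive_percentages(weights))  # Style works out to 50% here
```

Note that the professor never types “50%” anywhere – the percentage is an emergent property of the multipliers, which is exactly why a slider pinned to a 0–100 scale is the wrong control.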
So this means that the slider widget I had originally planned for adjusting criteria weights no longer makes sense. Instead, I plan to use a simple text input for entering weight multipliers, with increment and decrement buttons on each side for quick bumps.
Just thought I’d share that.