MarkUs Blog

MarkUs Developers Blog About Their Project

Archive for October, 2009

Meeting minutes – October 30th 2009

without comments

The meeting started at 12:02 and ended at 13:02.

Meeting Agenda

  • Evaluation schemes
  • Current progress
  • Start evaluating how much work is left on each feature and plan accordingly

Joel Spolsky’s Blog Post about Student Projects

  • Karen was disappointed to hear him praising Greg’s initiative in one breath and then complaining about how students don’t produce anything in the next.
  • Karen also said that MarkUs wouldn’t exist without the work done by students on course projects.
  • Karen is confident that we will be using everything (or almost everything) that we are working on this term.

Evaluation Schemes

  • Karen wanted to see the ULaval students’ evaluation scheme.
  • The ULaval students posted it on the blog two minutes before the meeting.
  • Karen was wondering whether Tara, Fernando, and Farah’s evaluation scheme had a team component or whether it was meant to be a team mark.
  • Tara said that since she and Fernando will be working on the notes system, a team mark for this feature is appropriate.
  • Severin said that the team means the whole team, not only the people you are working with on your individual projects.
  • Farah proposed that the “Overall Process” section should be a team mark.
  • Karen said the features could also be treated as a single mark for the group that worked on them.
  • Karen is inclined to treat the maintenance marks as a separate feature.
  • Severin had the idea that team evaluation could be done by your peers rather than by Karen.
  • Gabriel proposed that it could be both (peer evaluation and Karen’s).
  • Karen will always reserve final say (except when she has to defer to another supervisor).
  • Karen would like everyone to write up answers to the questions posed by Severin (or an altered list if we decide) and she will use that as one component of the evaluation.
  • Severin said (in his blog post) that Mike should come up with a team mark after each individual answered questions about team performance.
  • Mike said he knew about that part, but he hadn’t yet spoken with Greg or Karen about it.
  • Karen is extremely happy with the work that everyone has been doing.
  • Karen will be consulting with Mike on how to assign the grades, because he has been more closely monitoring weekly progress.
  • Karen said that the evaluation plan is now what everybody has proposed.
  • Severin said that anyone with other questions for peer evaluation should comment on his blog post.

Current Progress

  • Tara and Fernando met, and they will make mock-ups by Tuesday so they can get feedback by next week’s meeting.
  • Farah is currently working on finishing the creation form and it’s going well so far.
  • Tara proposed a good idea for ReviewBoard: all interface-related review requests should include at least one screenshot.
  • Everyone liked the idea.
  • Gabriel, Mélanie and Simon found solutions to our design problems before the meeting and everything is great now.
  • Severin said that we should work on a backup scenario for markusproject.org, because the server is becoming increasingly important.
  • Fernando was wondering what is used to generate documentation; he said an error occurs when he runs “rdoc”.
  • Severin believes there is an option to generate the documentation into a directory outside the MarkUs source tree, which should resolve the problem (see the Rake sketch after this list).
  • Mélanie, who is good with databases, proposed looking at the database indexes to improve MarkUs performance, because she found some missing indexes (a sample migration also follows this list).
  • RDoc is used to generate the API documentation.
  • Other documentation should be posted on the blog or wiki.
  • Mike said that MarkUs’ in-application help/documentation could work like ReviewBoard’s: help pages are hosted on markusproject.org and linked to from the application.
  • Karen proposed that this be a major component for next term’s students.
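
A minimal sketch of the external-output idea, assuming the docs are generated through a Rake task; the task name and output path are examples, not the actual MarkUs setup:

    require 'rake/rdoctask'

    Rake::RDocTask.new(:markus_api_doc) do |rd|
      rd.rdoc_dir = '../markus-api-doc'   # directory outside the checkout
      rd.rdoc_files.include('app/**/*.rb', 'lib/**/*.rb')
    end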
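
And a sketch of the kind of migration Mélanie is suggesting; the columns shown are assumptions for illustration, the real candidates being whichever foreign keys she found unindexed:

    class AddMissingIndexes < ActiveRecord::Migration
      def self.up
        add_index :marks, :result_id            # illustrative columns only
        add_index :submissions, :grouping_id
      end

      def self.down
        remove_index :marks, :result_id
        remove_index :submissions, :grouping_id
      end
    end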

Evaluate how much work is left on each feature and plan accordingly

  • Simon said that the ULaval students will evaluate what they plan to do and the time needed for each ticket immediately after the meeting.
  • Karen wants to see an estimate of how much work is left for each feature in the punchlines.
  • Fernando had some problems with unit testing the logger.
  • Simon said it is harder because the logger uses environment constants, and he hasn’t found a way to mock them.
  • Gabriel said a solution could be a configuration object.
  • mikeg1a found a way to mock constants and gave us a link (a generic version is sketched after this list): http://www.danielcadenas.com/2008/09/stubbingmocking-constants-with-mocha.html
  • Mike told Simon to check out the rake tasks (lib/tasks/demo.rake, populate.rake, and results.rake) because there could be some fix-ups to do for the “marking_scheme_type” column in the assignment table.
  • Simon said he will also write a migration to set “marking_scheme_type” to “rubric” for all existing assignments (also sketched below).
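
The linked post covers doing this with Mocha; a generic pure-Ruby version of the same idea (not necessarily identical to the post’s technique) is to swap the constant in setup and restore it in teardown. SomeConfig::LOG_DIR is a hypothetical constant standing in for the environment constants the logger reads:

    require 'test/unit'

    class MarkusLoggerTest < Test::Unit::TestCase
      def setup
        # Remember the real value, then replace it for the test run.
        @original_log_dir = SomeConfig::LOG_DIR
        SomeConfig.send(:remove_const, :LOG_DIR)
        SomeConfig.const_set(:LOG_DIR, '/tmp/markus_test_logs')
      end

      def teardown
        # Restore the real constant so other tests are unaffected.
        SomeConfig.send(:remove_const, :LOG_DIR)
        SomeConfig.const_set(:LOG_DIR, @original_log_dir)
      end
    end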
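And a sketch of the migration Simon describes; the class name is illustrative, while the table and column names are the ones discussed above:

    class SetMarkingSchemeTypeToRubric < ActiveRecord::Migration
      def self.up
        execute "UPDATE assignments SET marking_scheme_type = 'rubric'"
      end

      def self.down
        # nothing to undo: every existing assignment was rubric-based anyway
      end
    end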

Other questions

  • Simon was wondering whether there is a weight property on a flexible criterion.
  • Karen said no; there is only a max value (see the sketch below).
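
A sketch of what that answer implies for the flexible criteria table; the column names here are assumptions, not the final schema:

    class CreateFlexibleCriteria < ActiveRecord::Migration
      def self.up
        create_table :flexible_criteria do |t|
          t.integer :assignment_id, :null => false
          t.string  :name,          :null => false
          t.decimal :max,           :null => false  # the only bound on a mark; no weight column
          t.timestamps
        end
      end

      def self.down
        drop_table :flexible_criteria
      end
    end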

Written by simonlg

October 31st, 2009 at 4:05 pm

Posted in Meetings, Minutes

New marking scheme proposal: Last iteration

without comments

This is the last iteration of our new marking scheme proposal. This report answers the discussions we had about the first iteration. It contains the decisions we have made on each of the following points:

  • User stories (modified the stories about old marks, which will not remain in the DB if the user changes marking schemes)
  • Prototype (same as in the previous iteration)
  • Chosen DB schema (Scenario 4)
  • Class diagram (showing the polymorphic associations; sketched below)
  • Tickets (almost the same as in the previous iteration, but with some added to fit our DB schema choice)
  • Grep related to our DB choice
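
For readers who haven’t met Rails’ polymorphic associations, here is a minimal sketch of the pattern the class diagram relies on (the model names follow our discussions, but this is not the final MarkUs code):

    # marks carries markable_id (integer) and markable_type (string)
    # columns, so one table can reference either kind of criterion.
    class Mark < ActiveRecord::Base
      belongs_to :markable, :polymorphic => true
    end

    class RubricCriterion < ActiveRecord::Base
      has_many :marks, :as => :markable
    end

    class FlexibleCriterion < ActiveRecord::Base
      has_many :marks, :as => :markable
    end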

Written by melagaud

October 31st, 2009 at 12:28 pm

Posted in Uncategorized

Marking Scheme for Gabriel, Mélanie and Simon

with one comment

Since Gabriel, Mélanie, and Simon are focusing on the development of the same feature, they have decided to use a team evaluation scheme.

1) Flexible marking scheme feature completion: 30%

  • An instructor can choose/change which marking scheme an assignment should be graded with.
  • An instructor can add/edit flexible criteria for an assignment.
  • An instructor or TA can grade a submission with the flexible marking scheme view.
  • A student can consult his results with the flexible marking scheme view.
  • User story (create or edit assignments): As an instructor, I want to be given the choice of which marking scheme an assignment should be graded with.

2) Automated testing: 20%

  • Unit testing
    • Tool integration (e.g. Mocha)
    • Use of mocks to isolate units (see the sketch after this list)
    • Code coverage (average of 85%)
    • Meaningful test cases
    • No test failure
  • Functionality testing
    • Tool integration (e.g. Selenium)
    • Meaningful test cases
    • No test failure
  • Acceptance testing
    • Tool integration (e.g. Cucumber)
    • Meaningful test cases (the most important acceptance criteria for each user story)
    • No test failure
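
As an illustration of the “mocks to isolate units” point above, a minimal Mocha sketch; the capped_mark helper and the criterion interface are invented for the example:

    require 'test/unit'
    require 'mocha'

    # The unit under test: caps a raw mark at the criterion's maximum.
    def capped_mark(raw, criterion)
      [raw, criterion.max].min
    end

    class CappedMarkTest < Test::Unit::TestCase
      def test_mark_is_capped_at_the_criterion_max
        criterion = mock('criterion')        # stands in for a real record
        criterion.expects(:max).returns(10)  # fails if :max is never called
        assert_equal 10, capped_mark(12, criterion)
      end
    end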

3) Documentation: 20%

  • The code should be documented
  • The tasks for this feature should be split up into tickets with clear descriptions
  • There should be a document posted under “MarkUs Component Descriptions” on the DrProject site that explains the current state of the feature, future plans, etc.
  • There should be a short screencast of the feature posted on the DrProject site

4) Overall Process: 30%

  • Blog posts, status reports, or review requests indicated steady progress throughout the term
  • The new marking scheme respects the client’s request.
  • Good programming practice (clear code, small methods, use of constants, OO architecture)
  • Consulted with other team members about design/implementation decisions and/or helped other team members (e.g. through blog posts, review requests, etc.)
  • Contribution to the existing tests.

Written by simonlg

October 30th, 2009 at 11:58 am

Marking Scheme for Tara, Fernando, and Farah

with 2 comments

Since Tara, Fernando, and Farah have been focusing on the development of features, they have decided to use a similar evaluation scheme.

Specifically, Tara, Fernando, and Farah will each be graded individually based on the following marking scheme:

1) Feature completion: 30%

  • For simple grade entry, this means:
    • An instructor can create a grade entry form and can assign TAs to enter the marks
    • An instructor and the appropriate TAs can enter grades into the form
    • An instructor can upload/download a grade entry form in CSV format (a CSV sketch follows this list)
    • Students can view their grades for a grade entry form
  • For logging, this means:
    • A developer can log a message
    • A developer can specify the log level that he wants
    • An administrator can specify the desired type of rotation
    • An administrator can specify the desired file names and path of the log files
    • Messages are localized using I18n
  • For the notes system, this means:
    • Students cannot see anything related to the notes
    • An instructor or a TA can create new notes on groupings
    • An instructor or a TA can view existing notes on a grouping when creating a new note
    • A user can edit or delete notes created by him/herself
    • An admin can edit or delete any notes
    • An instructor or a TA can see an aggregate view of notes on the new Notes tab
    • The groupings object designation for notes should be extended to also work for assignments and students
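
A minimal sketch of the upload half of the CSV story above, assuming the FasterCSV library commonly used with Ruby 1.8 Rails applications; the model lookup and column order are illustrative:

    require 'fastercsv'

    # csv_data is the uploaded file's contents, one "user_name,grade" row
    # per student.
    FasterCSV.parse(csv_data) do |row|
      user_name, grade = row[0], row[1]
      student = Student.find_by_user_name(user_name)
      # ... record `grade` for `student` on the current grade entry form
    end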

2) Feature testing: 20%

  • For simple grade entry, this means:
    • There should be tests for instructors, TAs, and students to ensure the above functionality exists as desired
  • For logging, this means:
    • Messages are logged properly
    • Log Files are rotated properly
    • Log levels are differentiated, and messages are sent to the corresponding log file
  • For the notes system, this means:
    • There should be tests for instructors, TAs, and students to ensure that the functionality does or doesn’t exist as desired

3) Documentation: 20%

  • The code should be documented
  • The tasks for this feature should be split up into tickets with clear descriptions
  • There should be a document posted under “MarkUs Component Descriptions” on the DrProject site that explains the current state of the feature, future plans, etc.
  • There should be a short screencast of the feature posted on the DrProject site if applicable

4) Overall Process: 30%

  • Blog posts, status reports, or review requests indicated steady progress throughout the term
  • A thorough design process was followed
  • Demonstrated good programming practice
  • Consulted with other team members about design/implementation decisions and/or helped other team members (e.g. through blog posts, review requests, etc.)
  • For Tara, Overall Process also includes maintenance (i.e. other tickets)

Fernando would also like the following breakdown for these categories:

  1. Feature completion (30%)
    • Logging – 15%
    • Notes System – 15%
  2. Feature testing (20%)
    • Logging – 10%
    • Notes System – 10%
  3. Documentation (20%)
    • Logging – 10%
    • Notes System – 10%

Tara would also like the following breakdown for these categories:

  1. Feature completion (30%)
    • Maintenance – 10%
    • Notes system – 20%
  2. Feature testing (20%)
    • Maintenance – 5%
    • Notes system – 15%
  3. Documentation (20%)
    • Maintenance – 5%
    • Notes System – 15%

Written by Farah Juma

October 30th, 2009 at 2:25 am

MarkUs developers’ status report, October 30th 2009

with one comment

Farah

Fernando

  • Status
    • Added test for MarkusLogger
    • Added Documentation for MarkusLogger
    • Met with Tara to discuss the message feature
  • Next Steps
    • Add localized standard messages for the logger (i.e. I18n)
    • Ship the logger by next week
    • Start working on the message feature
  • Roadblocks
    • The Singleton pattern is not very flexible for testing (see the sketch below)
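
One way to soften that in tests is to drop the cached instance between test runs; this is a sketch, and the instance variable name depends on the Ruby version (@__instance__ with the 1.8 Singleton module, @singleton__instance__ on 1.9):

    require 'test/unit'

    class MarkusLoggerTest < Test::Unit::TestCase
      def teardown
        # Forget the cached singleton so each test starts fresh.
        MarkusLogger.instance_variable_set(:@__instance__, nil)
      end
    end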

Gabriel

  • Status
  • Next Steps
    • Get my hands dirty in developing new code
    • I still have my blog post about the need for a fixtures replacement as a draft on my HDD; I should just unleash it on the blog.
  • Roadblocks
    • None, other than forgetting that I had to produce last week’s meeting minutes

Mélanie

  • Status
    • Published a 500-word comment on the Basie blog in response to a question about the development process (Greg Wilson’s writing requirement)
    • Wrote a document that presents the last iteration of the new marking scheme; it contains:
      • User stories (modified the stories about old marks, which will not remain in the DB if the user changes marking schemes)
      • Prototype (same as in the previous iteration)
      • Chosen DB schema (Scenario 4)
      • Class diagram (showing the polymorphic associations)
      • Tickets (almost the same as in the previous iteration, but with some added to fit our DB schema choice)
      • Grep related to our DB choice
    • Read “A Guide to Active Record Associations” (http://guides.rails.info/association_basics.html#polymorphic-associations)
  • Next Steps
    • Publish the last iteration of the new marking scheme document
    • Plan to work on the ticket “First marks table migration” (new marking scheme)
    • Identify how our work in the course should be evaluated
    • Continue with ticket #309
  • Roadblocks
    • Need to meet with Gabriel, Simon, and our teacher to discuss how our work in the course should be evaluated.
      We are in an unusual situation; it feels like following two different courses at the same time to reach the same goal, which is a little ambiguous to me.

Mike

  • Status
    • Met with Karen and Severin to figure out RIA showcase logistics
    • Reviewed a bunch of code
    • Doled out some tickets, filed a new ticket, answered a bunch of email
    • Missed last week’s meeting, but caught up by reading the IRC log
  • Next Steps
    • Do my portion of the RIA showcase set up
    • Prepare for more feedback/emergencies from CSC108 when they try marking their A2 assignment
    • Review more code
  • Roadblocks
    • The usual

Severin

  • Status
    • Wrote some reviews, emails, blog posts
    • Crafted a blog post about a possible grading scheme for me
    • Was reading some Ruby code to get inspired for configuration of
      the repository library
    • Helped Tara in getting her testing environment up and running
  • Next Steps
    • Finalize the started merge (fix tests)
    • More work on automated testing
    • Work on one of my tickets
  • Roadblocks
    • None

Tara

  • Status
    • Submitted Ticket #428: Marking state icons are too similar
    • Investigated the accordion expansion problem – the if statement was important
    • Code done for Ticket #358: When adding members to a group as an Instructor, we cannot add several students at a time
    • Met with Fernando to discuss the notes system and broke up the to do items into more specific tasks (created in DrProject)
  • Next Steps
    • Tests for ticket #358 (groups controller; see unit/group_test.rb and functional/groups_controller_test.rb)
    • Submit and close tickets #358 and #405
    • Mock-ups for the Notes tab
    • Start meeting regularly with Fernando to work on MarkUs
  • Roadblocks
    • I somehow managed to erase my entire development database contents earlier in the week, but Fernando helped me fix that today.
    • I should be stepping up the MarkUs time now since the last couple of weeks of the term will be hectic with paper writing and research for said papers.

Simon

  • Status
  • Next Steps
    • Finish the FlexibleCriterion table and model.
    • Work on another ticket for the flexible marking scheme.
    • Finish the grading scheme.
  • Roadblocks
    • None

Written by gabrielrl

October 30th, 2009 at 2:15 am

Meeting minutes – October 23rd 2009

with 3 comments

Minutes from the last MarkUs developers’ meeting, which took place on October 23rd, 2009.

Gabriel was absent.
Mike was on and off due to network problems while attending a conference.

  1. Meeting opening at 12:03
  2. The new flexible marking scheme:
    • Karen raises the question of whether it is important to reduce the number of tables;
    • To Mélanie it isn’t important; what matters is reducing query and record-creation times while optimizing storage use. She votes for scenario 1;
    • Karen states that from a maintenance point of view, “we want to make sure that changes don’t have to be made in more than one place, and that we aren’t adding unnecessary pieces”;
    • Mélanie checks with Farah that she is comfortable with scenario 1, since Farah is going to use those tables (criteria and marks);
    • Farah likes scenario 1 and she’ll only need the marks table;
    • Karen says that keeping the code clean and simple is more important than worrying about the number of tables;
    • Severin asks about the specific differences between scenarios 1 and 4;
    • Mélanie answers that scenario 4 is interesting but: “we will have to keep the marking_scheme_type in the mark table, to know which marking scheme each mark is associated with”;
    • Severin asks Mélanie about the comment he made;
    • Mélanie answers: “yes! For me this suggestion fits, almost entirely, with scenario 3. Maybe with some variation in keys, but in the end we need a way to tell the marks table which marking scheme each record is associated with. I have designed a schema based on your idea; I can post it. In scenario 1, the marks associated with each marking scheme are in their own table, so it is easy to figure out which marking scheme they are related to”;
    • Mike Gunderloy pops in and tells us about “Rails’ polymorphic relations”, stating that someone should look into that (Simon takes note);
    • Karen says she isn’t worried about keeping previous marking results in the DB and that we should focus on the simplest solution. In that regard, scenario 1 seems the simplest to her;
    • Mélanie thinks that DB matters shouldn’t influence the development choices. She sees the application as a layer above the DB layer;
    • Severin disagrees with having two marks tables (rubric_marks and flexible_marks); if we drop previous results once the scheme changes, we’re fine;
    • Karen says that works for her, because it is unlikely that an instructor will want to keep previous results once the marking scheme has changed, though there is an argument for keeping the data trail;
    • Mélanie fears what might happen if the marking scheme is changed by mistake, though we could give the instructor a warning;
    • Simon, Tara, Severin, and Fernando all vote for the warning, while Karen reminds the others that we do that in other contexts, like uploading a group file;
    • Simon is back from some fast reading on polymorphic associations and says it should do the trick for keeping the marks in one table (he’ll look into it further);
    • Karen asks for a closer look at polymorphic associations and the disadvantages of scenario 4; she’s OK with making the decision now;
    • Severin reminds Simon that we’re doing something similar for users – single table inheritance;
    • Simon asks if the name “flexible” is still ok and Karen says that it is, for now;
    • Severin points out we should keep that documented (Rubric based marking vs. Flexible marking);
  3. The new icon for “ready to mark”:
    • Karen is happier with the blue square, just for visual distinction;
    • Mélanie states that the blue square doesn’t mean much to her, though;
    • Severin points out that Mike had suggested a green flag;
    • Karen is not sure whether the green flag really represents “ready but not started” but the blue square stands out from the other icons;
    • To Tara, the flag means “problematic” rather than “started”;
    • Severin is quite indifferent about this (after all, it’s one line of code to change);
    • Karen states we will stick with the blue square for now;
  4. Messaging system:
    • Karen congratulates Tara for the document she wrote and raises what is the “big question” to her: what will be the mechanism for adding comments?;
    • Simon proposes a new tab;
    • Tara answers that it is definitely not the way to go for creating comments on assignments/groupings;
    • Karen says it might be a button;
    • Tara thinks of something similar to the way notes are added in ReviewBoard, but not as comments on individual lines. She adds: “So you click on a button and then a dialog pops up, without greying out the rest of the page, and you can see previous notes on the grouping and add a new one”;
    • Simon proposes expanding a div instead of a dialog;
    • Severin asks what a note sticks to: a line of code, a submission, a grouping?;
    • One of the things Karen liked about the proposal is that notes can attach to a variety of different components. She wonders if we could have a higher-level button (or div) that would know about the context, while reminding us that we want to make it simple;
    • Severin asks whether the grouping should be the common denominator;
    • Tara answers that: “A grouping is linked to an assignment, so using the grouping is the best idea for talking about a group and an assignment combination. I see there being a button, say on the line with the list of submission files, saying “Notes (x)” where x is the number of notes so far, if there are some and then clicking on that button would pop up the div of some sort showing the notes and a form”;
    • Karen’s original idea was that notes would be applied to submissions or groupings.
    • Severin thinks the common denominator might be the user, then;
    • Tara answers that there could be multiple people within a group and the TA would most likely want to comment on the submission as a whole;
    • To which Severin answers: “right, so a note on a submission would be attached to every student member of that grouping. That duplicates things, but it would make it most universally applicable such as notes about individual users, submissions, even labs (once simple grade entry is in place). filtering notes appropriately might be a challenge”;
    • Karen thinks she and Tara will have to take this offline and fill in this part of the proposal (which sounds good to Tara). She also thinks that the more general approach is appealing, but she’s concerned about making it too complicated, and wondering about how useful the generality will be;
  5. Logging:
    • Fernando uploaded a review request for v2 of the logging proposal;
    • Severin is not sure about the class name, Logger4Markus, anymore (although he suggested it);
    • Fernando likes the name and Karen smiles; Severin backs off, saying that it’s not important anyway;
    • Karen asks about the next steps;
    • Fernando thinks he just needs a “ship it”. He’s waiting on the reviews;
    • Karen asks if he has thought about the basic log messages that we should be putting into the code;
    • Fernando hadn’t thought about that, but notes it is pretty straightforward to add things; Severin would like other people to weigh in on this;
    • Simon asks that unit tests be written for the logger class;
    • Karen asks Fernando to spend a bit of time adding the basic messages when the structure is set and asks him for some testing too;
    • Fernando will add the messages to the locales file (Severin thinks it is a good idea to use I18n right away; a locale-file sketch follows these minutes);
    • Karen says that once a bunch of messages have been added, and we have it up on markusproject.org, we should be able to let it run for a bit and then see what info we can get from the logs (Severin states that, once committed, this should happen automatically [on markusproject.org]);
    • Severin thinks we should be really sparing with error/fatal messages;
    • Fernando points out that he did separate the errors from the info messages (Severin knows, but still…);
  6. Grade entry:
    • Karen asks Farah if she has any comments/questions about the grade entry form;
    • Farah had a busy week, but she’ll be working a lot more on grade entry this weekend and next week. She has no questions at the moment;
  7. Quick word on tests:
    • Karen makes sure that everyone is still keeping up on writing tests.
    • Severin notes that he sees a lot of improvement;
    • Karen encourages everyone to keep working on it;
    • Simon says: “Just to make you think about it : Writing tests in ruby is really necessary because there isn’t a compiler checking errors. You have to run your tests to ensure your refactoring didn’t break anything”;
    • Severin wonders what a compiler checks, other than syntax (to which Simon answers: whether a method exists, for one);
    • Melanie asks about emphasis on TDD;
    • Severin says that if we do it as Adam Goucher has suggested, it might not be very painful. Little switches back and forth: red – green – refactor;
    • Adam Goucher pops in to tell us that: “The hardest part about following someone else’s (poorly documented) test suite is figuring out their assumptions and style so please document these things”;
    • Karen suggests a good blog post topic: “given that you have been told that TDD is a good idea, are you doing it, and how hard is it to follow?”;
  8. Evaluation scheme:
    • Karen starts by probing the group for any questions about the evaluation scheme (stating that the ULaval students are somewhat bound by their course requirements);
    • Simon (from UL) wonders if they have to write an evaluation scheme;
    • Karen replies that they could send their course’s scheme and chip in on what methods should be used to evaluate them (if they weren’t bound);
    • Mélanie and Simon reply that they can do that;
  9. Other news:
    • Karen says that Diane is planning to try again to use MarkUs for the next assignment, so she has her fingers crossed that we won’t find any new showstoppers. On top of that, the patches are working fine for her in her course so far. There will also be a demo at the department’s Research In Action Showcase on Nov 17. It’s a pretty big affair with lots of industry people invited. She, Mike, and Severin will be working on a poster and demo for it, which they will undoubtedly seek feedback on;
  10. Meeting closing at 13:06.
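
As a footnote to item 5, a quick illustration of the locale-file idea; the key and message are invented for the example, while I18n.t itself is the standard Rails API:

    # config/locales/en.yml:
    #
    #   en:
    #     markus_logger:
    #       out_of_space: "MarkUs logger ran out of disk space"
    #
    # and at the call site:
    I18n.t('markus_logger.out_of_space')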

Written by gabrielrl

October 29th, 2009 at 11:31 am

Posted in Meetings, Minutes

Proposal: Grading Scheme for Severin

without comments

Here is a possible way my grade for this course could be determined. If the criteria I have included are too vague, or if this wouldn’t work for other reasons, I’d be happy to get feedback (particularly feedback from faculty).

  • Team performance (worth 40%). I would evaluate the team grade by peer evaluation. There would be a small questionnaire to be answered by each member of our team. Questions I’d ask: Did anybody in the team help you when you hit a roadblock (if ever)? If yes, who? How often? Did you get feedback when you needed it? For example, did you get good feedback on your design proposal? If you could redo this term working on MarkUs, what would you do differently? Overall, how would you rate the team performance (1-10, where 10 is the highest mark)? How did reviews work out for you? Did they help improve your Rails skills? Did you get good feedback? Individual answers will be sent to Mike, who will then set the final grade for the team performance part. When answering the questionnaire, everybody should support his or her answers with the specific facts that led to them.
  • Individual evaluation (worth 50%). For my individual evaluation I would expect that the following criteria would be considered:
    • Given that I have been working on MarkUs before the start of the term, was I approachable and helpful in getting new developers up to speed/solving roadblocks?
    • A focal point for me this term is to create a paper on automated testing. This should provide a solid basis for the future developers who get to implement this feature; i.e., the expected outcome is a document that specifies requirements, possible problem points, and the advantages/disadvantages of possible scenarios (this accounts for 20% of the grade).
    • Progress on and completion of the tickets that have been assigned to me. This includes complete test suites for submitted code (where possible), with the corresponding reviews posted on markusproject.org. Code has then been integrated into the main source code branch and tested for regressions. Each chunk of submitted code included appropriate documentation. Tickets have been used appropriately throughout the development process, and any necessary documentation has been created on the wiki. Standards for testing and code quality are met if the code passes the review process.
    • A significant chunk of my work has been maintenance and providing support for the MarkUs instances installed at the University of Toronto (fixing and testing bugs in branches/release_0.5; creating and testing patches).
    • General participation and administrative tasks (participation in IRC meetings, provided meaningful feedback, wrote blog posts, took meeting minutes as arranged, collected status reports, etc.).
    • Has the agreed-upon process been followed when working on MarkUs? Did I make meaningful progress in working with the team?
  • Writing requirement (worth 10%). Have the assigned writing tasks been completed as requested? How much effort did I put into carrying out the writing tasks?

What do you think? Please feel free to drop me a comment.

Written by Severin

October 28th, 2009 at 9:45 am

Grade Entry Form: Database Schema

with 4 comments

Yesterday’s meeting got me thinking more about the database schema for the grade entry feature. Here’s what I originally had in mind: Original Grade Entry Database Schema

I had originally considered a separate Grades table, as shown in the diagram. However, it would probably be better if we could combine the Grades table and the Marks table. There are two fields in the Marks table from Scenario 4 for the new marking scheme that do not really apply to simple grade entry: criterion_id and result_id. In addition, a table which stores grades for simple grade entry would require two fields which the existing Marks table does not need: grade_entry_item_id and grade_entry_student_id. The fields both tables have in common are: grade/mark, created_at, and updated_at. Any thoughts on how we could go about combining these tables?
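
One possibility, echoing the polymorphic-association idea from the marking scheme discussions (a sketch only, not a decision): give the combined table a polymorphic owner, so that criterion_id and grade_entry_item_id collapse into a single pair of columns; result_id and grade_entry_student_id would need an analogous treatment:

    class MakeMarksPolymorphic < ActiveRecord::Migration
      def self.up
        add_column :marks, :markable_id,   :integer  # a criterion or a grade entry item
        add_column :marks, :markable_type, :string
      end

      def self.down
        remove_column :marks, :markable_id
        remove_column :marks, :markable_type
      end
    end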

Written by Farah Juma

October 24th, 2009 at 5:21 pm

How should your work in the course be evaluated?

with one comment

Greg has asked everyone to design their own evaluation plan.   There are two separate goals here:

  1. Get you thinking about how to define the success of your project.  How will you know if you have succeeded?
  2. Get you thinking about the things that you are doing that could be evaluated.  (More along the lines of a performance evaluation at a job.)

Perhaps the most obvious definition of success is that the feature you were asked to implement is complete and works.  Now define “complete” and “works”.  Think about the components of the feature and what you are doing to give the users/customers confidence that the feature works.

One aspect of “complete” that I would like to highlight is the documentation left behind.  Could a new developer go to our web sites and find information on the design decisions that were made, on the state of each feature in terms of known problems, or on future enhancements?  The tickets should be up to date and clear.  There should be some kind of document that describes the current state and future plans (probably a short one).

Since this is a course and I care about not only the end product, but also the process we used to get there, and the learning experience(*), other things you might think would contribute to your grade include: participation level, willingness to help other students, willingness to participate in reviewing code, demonstrating “good” programming practice, consultation and design process, demonstrating steady progress.  I’m sure you can think of others.

I think it is probably appropriate to have at least 3 different evaluation mechanisms.  Mélanie, Simon, and Gabriel fall under the evaluation scheme of their course, so it is probably appropriate for them to submit primarily that scheme.  Tara, Fernando, and Farah have been focusing on the development of their features, and may want to use a similar evaluation scheme.  Severin has been doing more maintenance work and team-lead kind of work, so his evaluation scheme should include those components.

A final tip.  Please don’t try to make the evaluation scheme too fine-grained.

* Can you tell I’ve been going to curriculum and teaching evaluation meetings?

Written by Karen Reid

October 23rd, 2009 at 10:43 am

Status Reports, Oct. 22

with one comment

Mike Conley

Status

  • Reviewed a tiny bit of code here and there

Next Steps

  • Dole out some new tickets to willing volunteers

Roadblocks

  • School

Mélanie Gaudet

Status

  • Worked on ticket #309 (Write test for annotationControler)
  • Wrote a document highlighting details for scenario 1 and created a document detailing scenario 4 and 5 for the flexible marking scheme

Next Steps

  • Continue with ticket #309
  • Plan to work on Greg Wilson’s writing requirement
  • Plan to work on the new marking scheme

Roadblocks

  • We really need to identify the best DB scenario in order to start development for the new marking scheme

Fernando Garces

Status

  • V2 of logging will be on ReviewBoard by Thursday night
  • Commented on Severin’s post about automated testing
  • Started working on bug #377

Next Steps

  • Finalize the logging feature
  • Close ticket #377 by next week
  • Meet with Tara to start working on the messaging system

Roadblocks

  • Having 3 midterms, a presentation and an assignment in 3 days is not good for my mental health ^_^

Severin Gehwolf

Status

  • Prepared patches for MarkUs 0.5; helped Alan with the patches
  • Wrote blog posts and comments
  • Merged code from the release_0.5 branch into trunk, but hit a roadblock (testing didn’t work as expected)

Next Steps

  • Rethink changes to the repository library (it’s no longer as cleanly separate from MarkUs; I think this makes testing a little hard, hence the merge problem)
  • Automated testing document

Roadblocks

  • Adapting tests to reflect changes in the repository library (Simon and Gabriel helped, but it’s not resolved yet). Thanks, Simon and Gabriel!

Tara Clark

Status

  • Fixed Ticket #435 (https://stanley.cdf.toronto.edu/drproject/csc49x/olm_rails/ticket/435) – defect found in Assignments Controller
  • Completed Ticket #434 (https://stanley.cdf.toronto.edu/drproject/csc49x/olm_rails/ticket/434) (analysis for the messaging system). Blog post on this: http://blog.markusproject.org/?p=607
  • Commented on Greg’s blog post here: http://ucosp.wordpress.com/2009/10/21/a-lesson-from-coders-at-work/ as per his request
  • Commented on Flexible marking scheme
  • Ticket #427 (https://stanley.cdf.toronto.edu/drproject/csc49x/olm_rails/ticket/427): The area for clicking on a rubric level is too small (code complete, will submit soon)

Next Steps

  • Submit Ticket #428 (https://stanley.cdf.toronto.edu/drproject/csc49x/olm_rails/ticket/428): Marking state icons are too similar
  • Finish up Ticket #358 (https://stanley.cdf.toronto.edu/drproject/csc49x/olm_rails/ticket/358): When adding members to a group as an Instructor, we cannot add several students at a time
  • Writing a more technical version of the notes system proposal and breaking it up into sub-tickets
  • Picking up some more tickets

Roadblocks

  • Waiting on comments on the notes system prior to being able to examine it technically

Simon Lavigne-Giroux

Status

  • Finished ticket #326
  • I’ve read the blog
  • I created tickets for each task in the new marking scheme
  • I finished my exams, so I can now work on MarkUs

Next Steps

  • Work on the new marking scheme
  • Work on ticket #439

Roadblocks

  • I can’t work on the new marking scheme before a final decision is made about the database model.

Farah Juma

Status

  • Didn’t get much time to work on MarkUs this week because I had 2 assignments due and 2 midterms

Next Steps

  • Work on Ticket #429 (https://stanley.cdf.toronto.edu/drproject/csc49x/olm_rails/ticket/429): Create grade entry creation form

Roadblocks

  • My other courses took up a lot of time this week
  • Will spend more time on MarkUs this weekend and next week

Gabriel Roy-Lortie

Status

  • Completed ticket #416 – Making student_test.rb pass [https://stanley.cdf.toronto.edu/drproject/csc49x/olm_rails/ticket/416]. The code is still on ReviewBoard.
  • Tried to help Severin with his merging by providing a new version of admin_test.rb (which, unfortunately, still doesn’t completely do the trick)
  • That’s it (big exam week, like everyone else)

Next steps

  • I guess we will have enough feedback to start working on (I mean implementing) the flexible marking scheme.
  • I’m still looking forward to writing a “fixture replacement” proposal blog post.

Roadblocks

  • Getting the GO on the flexible marking scheme.

Karen Reid

Status

  • Worked with Severin, Mike and Alan to get patches installed on the production version
  • Spent some time checking to see if the patches solve the problems.
  • Wrote a script to deduce grace day usage from the repositories, to get around both the problem of TAs adding results files to the repos after the fact and a bug in the CSV download
  • Diane is planning to use MarkUs for A2.  Fingers are crossed that we don’t encounter another show-stopper.

Next steps

  • Write a proposal for a CRIF (Curriculum Renewal Initiatives Fund) grant to work on MarkUs, especially the automated testing component.
  • Start preparing a poster for the Research in Action day on Nov 17.
  • Write a blog post about the problem of adding results files to the repositories.

Roadblocks

  • Meetings, interruptions, and overcommitment.

Written by Severin

October 22nd, 2009 at 10:16 pm