Archive for November, 2010
- put up my graphing caching code for review
- finish the review
- finish issue 106 which I started working on
- I had a lot of trouble putting up my diff for the review on RB. Thanks for helping me out with that, Mike!
- Worked on Waterloo’s problem of duplicate groups
- Wrote tests and finished smaller details of the test framework work from last week, and put everything on ReviewBoard
- Fix up any remaining bugs that get noticed in the framework and hopefully collaborate with Benjamin on the documentation aspect
- Try to get everything that I have on ReviewBoard committed
- Was swamped with PEY interviews, so had a bit of a productivity decrease
- implementing instructor mark summaries for remarks
- finish soon and get it up on review board
- figure out what else needs to be done with Misa
- screen cast
- updated code for the pending review for submissions table
- updated code for the pending review for remark request tab
- pushed the 2 branches for the reviews to git so that people can test it out
- get submissions table code approved
- wait for feedback from review board
- continue implementing on the remark request tab
- waiting for feedback on review board
- had to clear out the master branch and replace it with a fresh copy due to my stupidity with git… Thank you so much, Mike and Severin!
- put up new dashboard for review
- internationalized the dashboard; someone who actually knows French should look at the review
- worked through a few suggestions on dashboard
- complete the dashboard upgrade
- add graphs to the dashboard
- lots of other assignments due in the next two weeks
- The branch Hora and I share got a little messy; I had to spend an hour fixing it up.
The MarkUs team is proud to announce MarkUs 0.9.2!
What happened to 0.9.1, you ask? Don’t worry, it’s tucked inside the 0.9.2 release. I suppose we’re a bit like the Google Chrome team: we get excited about version numbers.
You can patch up your 0.9.x release with the patches available here.
Here’s a list of all the stuff we fixed:
Changes for MarkUs 0.9.1:
- Submission collection problem due to erroneous eager loading expression
Changes for MarkUs 0.9.2:
- Issue #180: Infinite redirect loop caused by duplicate group records in the database, possibly caused in turn by races in a multi-Mongrel setup.
- Issue #158: Default for Students page shows all Students, and bulk actions renamed.
- Issue #143: Fixing penalty calculation for PenaltyPeriodSubmissionRule.
- Issue #129: Uploaded criteria ordering preserved for flexible and rubric criteria
- Issues #34, #133: Don’t use i18n for MarkusLogger and ensure_config_helper.rb
- Issue #693: Fixing confirm dialog for cloning groups
- Issue #691: Adding Grace Credits using the bulk actions gets stuck in “processing”
- Fixed INSTALL file due to switch to Github
- I18n fixes
Meeting log here
Graphing and Dashboard
- Kurtis had to leave early
- Horatiu started work on caching, first review should be up soon
- Evan is almost finished with the tokens management
- Benjamin is writing the tutorial for the test framework
Problem with Groups
- Waterloo is having problems with duplicate groups
- We might get Evan to help out on this issue
- Misa posted submission table for status to review board
- Misa also posted template for remark request tab
- Vivien’s code got approved
- Vivien is working on the marks summary tab
- Misa and Vivien both had trouble with the db and migration
- Karen has contacted Jiahui, hope to hear from her soon
- Horatiu blogged about migration in rails
- Horatiu is also working on issue 106 in github
- Benjamin finished implementing the database schema and the model for ShapeAnnotations
- Benjamin is working on an easy way to include annotations on an SVG file, overlaid on the image
- Benjamin will refactor Anton’s work and include it in our work
- Finished first draft of Dashboard
- Add multi-lingual support for dashboard
- Add Graphs to dashboard
- Possibly add a notes section to the dashboard
- Look into periodic grade deductions for late assignments again
- wrote a blog post about caching and started working on it
- wrote the migration and figured out where to update the cached data
- I found out I claimed Issue 106 back in October, I figure I may as well fix that
- continue working on the caching
- had some TestDrive issues, worked them out with Mike (hopefully, I’m waiting for the next update which should fix the problems I’ve had)
- Wrote a bug fix to make the number of tokens remaining decrease on the screen when a student runs a test (through AJAX call)
- Implemented new functionality to allow an instructor to choose to have test tokens refreshed every day, every hour, or never
- Still have to write test cases for the new functionality
- Work on testing examples
- Having a bit of a git workflow issue, am working it out with Severin
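As a rough sketch of the token-refresh options described above ("every day, every hour, or never"), here is some plain Ruby with hypothetical names; this is illustrative only, not the actual MarkUs implementation:

```ruby
# Compute when tokens should next be refreshed, given the instructor's
# chosen period (:hourly, :daily, or :never). Names are hypothetical.
def next_refresh(last_refresh, period)
  case period
  when :hourly then last_refresh + 3600        # one hour in seconds
  when :daily  then last_refresh + 24 * 3600   # one day in seconds
  when :never  then nil                        # tokens never refresh
  end
end

# True if a refresh is overdue at time `now`.
def refresh_due?(last_refresh, period, now)
  due = next_refresh(last_refresh, period)
  !due.nil? && now >= due
end
```

A check on student test runs could then call `refresh_due?` before decrementing tokens and top them up when it returns true.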
- icon for remark request committed
- waiting for submissions table code to be reviewed
- also submitted initial template for remark request tab (instructor and student) to review board
- get both pieces of code reviewed
- continue implementation of remark request tab
- rake db:migrate didn’t do everything it was supposed to, and I had trouble with the database
- manually trying to fix this broke my MarkUs build twice
- a lot of time was spent getting things working again (twice)
- code finally reviewed and approved (though no one commented on my French)
- implementing instructor mark summaries for remarks
- continue implementing
- screencast with Misa
- was sick for a couple of days the past week
I studied 3 frameworks in particular:
- Prototype, as it is already bundled in RoR
- jQuery, which is a toolkit, rather than a framework
- Mootools, a lightweight but powerful framework
jQuery is a popular toolkit, mainly oriented towards fast and concise DOM manipulation. jQuery is easy to learn and quick to become productive with. It also has a big, active community and many plugins.
jQuery is designed to be self-contained and does not “pollute” the global namespace. (jQuery is a monad.)
MooTools can be a bit more difficult to learn, but its consistency is really a plus.
On the downside, MooTools modifies the native types. While this aids readability, it can cause problems with third-party chunks of code.
Prototype and MooTools share the same approach of extending native types, which is why it’s impossible to combine the two frameworks.
Prototype is a bit slower than MooTools, but provides more functionality through the use of script.aculo.us.
In this case, the most relevant advantage is the fact that it’s already bundled in RoR.
Choosing a framework
The annotation feature needs good event handling, as new devices are leaving the mouse paradigm behind. The framework also needs to interact smoothly with the canvas element.
RoR already uses Prototype for all its JavaScript (mostly AJAX), and the cost of using several frameworks side by side is not acceptable. In addition, MooTools and Prototype can’t be used at the same time. So choosing a framework for the annotation feature implies choosing it for the whole application.
So far, the best option is to use Prototype, as it does not require heavy changes, and its syntax is clear and consistent. I personally prefer MooTools, which shares the same design philosophy but is smaller and faster.
I’m going to start working on creating a caching system for the grade distribution (for now – we’re likely to have more things to cache in the future). I’m going to create a model called assignment_stats with the following columns for now:
t.column :assignment_id, :integer
t.column :grade_distribution_percentage, :text,
  :default => "0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\n"
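As a quick illustration of how that cached CSV string could be read back and written (plain Ruby, no Rails; the helper names are hypothetical, not MarkUs code):

```ruby
# The default stores 20 zeroes: one bucket per 5% grade interval.
DEFAULT_DISTRIBUTION = (["0"] * 20).join(",") + "\n"

# Parse the cached CSV string back into an array of bucket counts.
def parse_distribution(csv)
  csv.strip.split(",").map { |n| n.to_i }
end

# Serialize a freshly computed distribution for caching.
def serialize_distribution(buckets)
  buckets.join(",") + "\n"
end

buckets = parse_distribution(DEFAULT_DISTRIBUTION)
buckets[19] += 3  # e.g. three students scored in the 95-100% bucket
```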
Meeting log here
Misa uploaded the new remark schema to the repository.
She and Vivien settled on adding 2 columns to the submission table:
- remark_result_id which links to a remark result object
- remark_request (a text column)
However, Mike pointed out that there may be some problems with the new schema. Another review is currently up to solve the problems.
Graphing and Dashboard
The first version of the dashboard is nearing completion. A sample can be seen here.
Hora has been looking into adding a caching model for graphs. The team has decided that it would be best to simply begin with a cache model instead of adding one in later. So he will be working on that next week.
The test framework was committed at the beginning of the week. Everyone can now start experimenting and report bugs.
Evan has been working on getting tokens to update correctly every day. There are still some problems so Evan will continue to work on this through the next week.
Term Wrapping Up
We have 4 weeks left before we have to wrap up the term. Before the next meeting, everyone should provide a breakdown by week of how they plan on wrapping up their project(s). Additionally, by the end of term, Karen would like to see a blog post (with screenshots) or a webcast showing off the work everyone has completed. If you are going to leave any work unfinished (and don’t plan on continuing with MarkUs) then please document this in the blog post.
- Nearly completed Dashboard work. Should be in tomorrow.
- Looked into having ranges for mark deduction. However, it may require a lot of changes to the period class.
- Add the dashboard changes.
- Generate the needed graphs for the dashboard.
- Try to get range deductions working.
- Period ranges may require a fair number of changes to the models
- Some trouble figuring out how to get information from the assignments.
- finished up my 2 open reviews
- started a conversation about caching grade distribution data
- blog about grade distribution data to get more input from the community
- start working on graphing the grade distribution using the function I wrote
- Got tokens per day to work properly for the test framework
- Worked on bug fixes for test framework
- Worked on writing test cases for one of my current reviews
- Keep working on Test Framework bug fixes and features
- Had a lot of issues with the switch to bundler, but they are fixed now
- Having trouble getting my test cases to work for review 771
- issue 46 is now fixed
- 2 more blog entries about the DB schema for remarks
- committed new schema to the repository
- get icon to show on submissions table
- discuss with Vivien what needs to be done next
- script/generate migration required installing a few things to get working (including altering the Gemfile)
- waiting for code to get reviewed and approved
- implementing student and instructor views for remarks
- get my code approved
- continue implementing
- Installed Bundler
- Seeking help from teammates on the Test Framework. Tried running Ant from the console; Ant doesn’t work when testing Java code on my machine (the console says JUnit not found), but works when testing C.
- Got into a ‘NoMethodError’ after updating to the newest repository code, just sent Evan an email for help
- Either keep trying to set up the Test Framework or switching to other tasks
- Test Framework still not working 🙁
- ‘NoMethodError’ as mentioned above.
This is a summary and continuation of the discussion from this review. I (Hora) created a function which returns the grade distribution as a percentage for a given assignment, a function which even with eager loading can make a lot of database queries. Since the graphs will tentatively be displayed in the dashboard, often the first page students will see, the question is whether we should cache grade distribution data and how.
Severin suggested creating a new model (for example, AssignmentStats) with a column for each type of graph required. In the case of the grade distribution, we could have a column called ‘grade_distribution’ which stores a CSV list of the grade distribution data for the assignment. He further suggested that the graphs initially load as empty container boxes with spinners to indicate loading, and use XHRs to fetch the data needed to generate each graph. Each graph could also have a Refresh link that re-calculates (and re-caches) the distribution data if the instructor wishes to see the most up-to-date information.
Karen had suggested that we cache the actual generated graph, but Bluff (the graphing API we selected) uses HTML canvas tags to display the graphs and it doesn’t look like there’s a way to save the graphs.
I think Severin’s suggestion is great, and I think it would make sense to update the cached data every time a mark is denoted as complete. I also like the ideas of having a Refresh link that an instructor can use to update the data and of using XHRs to fetch it. My suggested plan of attack is for Kurtis and me to write the code for generating some of the graphs to see exactly how that would fit into the existing code, then to add caching and make the appropriate modifications to take advantage of it. As long as we don’t release a stable version of MarkUs before we finish the caching, there probably won’t be any problems with slow-loading graphs. Does that sound like a good idea? Or should I work on caching first, and then on generating graphs? Any other comments or suggestions?
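To make the plan concrete, here is a minimal plain-Ruby sketch of Severin's idea, assuming a 20-bucket (5% interval) distribution; the class and method names are illustrative only, not MarkUs's actual API:

```ruby
# Hypothetical sketch of the proposed caching model: recompute the
# grade distribution on demand and keep it cached as a CSV string.
class AssignmentStats
  BUCKETS = 20  # one bucket per 5% grade interval

  def initialize
    @cached = nil
  end

  # Recompute the distribution from raw marks (percentages 0..100)
  # and cache it; this would run when a mark is denoted as complete,
  # or when the instructor clicks the Refresh link.
  def refresh!(marks)
    counts = Array.new(BUCKETS, 0)
    marks.each do |m|
      bucket = [(m / (100.0 / BUCKETS)).floor, BUCKETS - 1].min
      counts[bucket] += 1
    end
    @cached = counts.join(",")
  end

  # Serve the cached CSV; an XHR endpoint could return this directly
  # without re-running the expensive queries.
  def grade_distribution
    @cached
  end
end

stats = AssignmentStats.new
stats.refresh!([12, 55, 57, 99, 100])
```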
Edit: the final decision was to add 2 columns to the Submission table: “remark_result_id”, which links to a remark result, and “remark_request”, which keeps the request in text form.
So, after a discussion during last Friday’s meeting, it seems that the best way to do this is to add a column to the Submission table and create a new Results object per remark. I think this then leads to either adding two columns to the SubmissionFile table to keep track of remark request files, OR adding a blob column to hold remark requests.
Please see previous post for the previous blog discussion.
From what I understand of the discussion, the new schema suggested is as follows. There is a remark_results column added to the Submission table, which is the id of the new result object created for the remark.
0) Only 1 remark_result ID may exist for one submission (no multiple remarks)
1) If this column is NULL, then it means there is no remark requested.
2) If there is an id there, it means there is a remark requested.
3) If the remark result object’s marking state is “unmarked” it means that the request has still not been reviewed by the professor, and the student can still cancel his or her request. The original mark is still released to the student.
4) (If the student cancels, the result object will be removed and the remark_results column in the Submission table will be NULL again).
5) If the remark result object’s marking state is “partial” it means that the professor has started looking into the request. At this point, the student can no longer cancel the request. Both the original and remarked grade are unreleased to the student.
6) If the remark result object’s marking state is “complete” then both the original and remarked grades are released to the student.
The remark request would be saved in a text file on the server, and the file name and path saved in the SubmissionFile table under the columns “remark_filename” and “remark_path.” We could also standardize the filename so that we only need path. The other option is to save it directly into a “remark_request” column in the SubmissionFile table as a blob.
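The marking-state rules above (points 0 to 6) can be sketched in plain Ruby; the Remark struct and helper names here are hypothetical, purely to illustrate the state logic:

```ruby
# A remark's state mirrors the remark result object's marking state;
# nil stands for "no remark requested" (a NULL remark_result id).
Remark = Struct.new(:marking_state)

def remark_requested?(remark)
  !remark.nil?
end

# Rule 3/4: the student may cancel only while the request is unmarked.
def can_cancel?(remark)
  remark_requested?(remark) && remark.marking_state == :unmarked
end

# Which results are visible to the student for each state.
def released_to_student(remark)
  case remark && remark.marking_state
  when nil, :unmarked then [:original]            # rules 1-3
  when :partial       then []                     # rule 5
  when :complete      then [:original, :remark]   # rule 6
  end
end
```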
Quick question… Do we want to save a timestamp of when the remark request was made? I guess this could be saved in the remark text file.