MarkUs Blog

MarkUs Developers Blog About Their Project

Archive for September, 2010

Meeting Minutes: September 24, 2010


The IRC log for this meeting can be found here.


  • Status report
  • SVN to Git conversion
  • Code sprint questions

Status report:

  • Everyone has MarkUs up and running
  • If you have time, start looking at a few tickets
  • A few bug reports are being created, so there will be a bunch of small things to fix over the next few weeks

SVN to Git conversion:

  • Switching version control system to Git
  • Using GitHub to host MarkUs source code
    • GitHub provides nicer exposure
  • A starting place to learn Git was suggested
  • Git is a distributed version control system (good for MarkUs’ distributed development style)
  • Need to close pending reviews or have someone open new reviews after the switch
  • Reviews are going to be frozen before the switch
  • Switch is scheduled for the day of the 29th
  • Install documentation will need to be updated to account for git

Code sprint questions:

Q. Will food costs be covered in Toronto?
A. Lunch and dinner are provided on Friday, all other costs are left to the individual.

Q. Where is the code sprint taking place?
A. The Bahen Centre. Someone will meet the team at the hotel on Friday morning to guide us.

Q. What hotel are we staying at?
A. Holiday Inn, Bloor Yorkville

Q. How do I get from the Airport to the Hotel?
A. Cab or bus seem to be the options.

Q. Is there anything going on during the weekend?
A. Nuit Blanche

Q. What time are we planning to start Friday?
A. 9:30am

Written by kurtis

September 24th, 2010 at 10:50 pm

Posted in Minutes

*Status Update* September 23rd, 2010




  • Upgraded to Rails 2.3.9 to test what in MarkUs has been deprecated in Rails 3.0.0
  • Continued learning rails
  • Explored MarkUs

Next Steps

  • Explore issues and attempt to fix some small bugs in MarkUs
  • Update OSX setup guide to get around problems found when setting up MarkUs
  • Familiarize myself with the MarkUs development tools (ReviewBoard, bug tracking, etc.)


Roadblocks

  • None this week




  • Successfully installed RoR and set up MarkUs
  • Played with the Demo version of MarkUs
  • Started reading up on Rails

Next Steps

  • Learn framework and do some tutorials
  • Pick an easy ticket to complete
  • Familiarize myself with the MarkUs development tools (ReviewBoard, bug tracking, etc.)


Roadblocks

  • None




  • Worked on bug fixes, in particular fixed #691
  • Posted blog article with everyone’s bios
  • Responded to emails, ReviewBoard stuff

Next Steps

  • Start working on functionality to assume the role of a different user
  • Bugfixes as necessary
  • Help out over email/irc


Roadblocks

  • None




  • finally got MarkUs set up, along with Eclipse with RadRails and Subclipse

Next Steps

  • testing it out, learning about Rails and MarkUs


Roadblocks

  • A fresh Ubuntu install needed lots of updates, the slow connection at the CDF labs greatly slowed the process, plus PostgreSQL woes




  • getting up to speed with Ruby and Rails

Next Steps

  • start reading the MarkUs code, start understanding how it works
  • get ready for the code sprint next weekend!


Roadblocks

  • None




  • checked out the MarkUs repository and got it running on my machine
  • Learning about Ruby on Rails and Ruby

Next Steps

  • Keep learning about RoR
  • Play around with MarkUs on my machine, get familiar with the MarkUs UI and data models, etc. 


Roadblocks

  • None

Written by jiahui

September 23rd, 2010 at 11:12 pm

Posted in Status Reports

Automated Test Framework – Part 3: Student’s View


To complete the testing framework implementation, the following post describes the student’s ability to execute the tests.

Implementation Overview:

As a starting point, this implementation allows a student to log in, run the tests defined by the instructor against their own code, and view the results.  The current implementation has been kept quite simple for now, since further design decisions based on feedback may change the behaviour.  In addition, most of the student view had already been implemented by Benjamin, so little was added or modified in this round.  The current design is as follows:

  1. Students can log in and select an assignment they have submitted a file for; from the Assignment view, if tests were enabled by the instructor for this assignment, a separate ‘Test’ area will be enabled and visible.
  2. In this ‘Test’ area, the students will be able to execute tests (similar to how graders/instructors execute the tests from the Submissions view by clicking on the ‘Test Code’ button) and press ‘Load’ to view the results.
  3. A few things to note:
  • Students can only run non-private tests. In other words, any test files that were marked ‘is_private’ by the instructor WILL NOT be executed by the student.  This gives the instructor a little bit more control over which tests they allow the students to run.
  • Each time the student executes the tests, a token is used. Therefore, students can only execute the tests if they have tokens available.
  • Students can only view the test results if the marks have been released to the students.
  • Students can only view the test results if they belong to a group.
  • Students can only view their own test runs. Any test runs for this assignment made by any other user will not be visible to the current student.
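The access rules above can be sketched as a couple of plain predicates. This is a rough illustration only: the Struct shapes and method names below are assumptions for the sketch, not the actual MarkUs models.

```ruby
# Illustrative sketch of the student-side rules listed above; the data
# shapes and method names are assumptions, not MarkUs code.
Student = Struct.new(:id, :in_group, :tokens)
TestRun = Struct.new(:user_id)

# A student may execute tests only while tokens remain; each run costs one.
def run_tests!(student)
  return false if student.tokens <= 0
  student.tokens -= 1
  true
end

# Results are visible only when marks are released, the student belongs to
# a group, and the run was made by the student themselves.
def visible_runs(runs, student, marks_released)
  return [] unless marks_released && student.in_group
  runs.select { |run| run.user_id == student.id }
end

student = Student.new(1, true, 1)
run_tests!(student)  # => true (last token consumed)
run_tests!(student)  # => false (no tokens left)
```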
Screenshot: Students Test View

Related Posts:

Automated Test Framework Overview
Automated Test Framework – Part 1: Assignment View
Automated Test Framework – Part 2: Submission View

Written by diane

September 19th, 2010 at 7:00 pm

Automated Test Framework – Part 2: Submission View


As a continuation from Part 1 of the testing framework, Part 2 of the testing framework consists of the actual execution of the tests from the Submission view.

Implementation Overview:

As a starting point, this implementation allows a grader to execute a set of tests using Ant, a Java-based build tool, against students’ code.  The files used will be those specified from the Testing Framework form as detailed in Part 1 of this post.  The results will be logged and saved to the ‘Test Results’ table in the database and made available for the grader to view upon execution.

Three possible states may occur upon running the tests:

Ant Builds Successfully – This means Ant was able to build and execute all the test and other related files successfully.  It does not mean that all the tests passed.  It simply means that Ant was able to run successfully.

  • This state will most likely occur when students (more about this later) and graders run the tests, because by the time they run them, we can assume the build files and all other Ant-related configuration have already been tested and work properly.

Ant Build Failed – This means Ant was not able to build and/or execute the tests and related files successfully.  There could be a problem with how the build.xml is configured or some other issue.  However, the build log is available so the user can diagnose the problem appropriately.

Ant Build Error – This means some other unknown error with Ant has occurred.  Perhaps Ant was not set up and installed properly.  However, the build log is available so the user can diagnose the problem appropriately.

  • The above two cases are mainly made available to help the instructor with the initial setup of the Ant tests.  Along with the build logs, this will make diagnosis easier when they are setting up their tests for the first time.

**Note: In the current implementation, there is no feature indicating whether all the tests passed or failed; it simply tells you that Ant executed successfully, and you must look at the logs to see which tests passed and which failed.  The reason is that Ant can support multiple languages, and instructors may want to parse their test output and display their own customized results, so the test output is not consistent.  This could change if there were a consistent way to tell whether tests passed or failed.  For example, we could require users to always provide an easily parsed Test Results summary at the end, or allow users to specify a Test Results expression in the Testing Framework form; when the tests are executed, the expression would be searched for in the test output and the numbers of tests passed and failed reported.
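To make the ‘Test Results expression’ idea concrete, here is a rough sketch; the method and the named-capture pattern format are hypothetical, not part of the current implementation:

```ruby
# Hypothetical sketch: count passed/failed tests by matching an
# instructor-supplied "Test Results expression" (a regexp with named
# captures :passed and :failed) against each line of the raw Ant output.
def summarize_test_output(output, results_pattern)
  passed = failed = 0
  output.each_line do |line|
    next unless (m = line.match(results_pattern))
    passed += m[:passed].to_i
    failed += m[:failed].to_i
  end
  { passed: passed, failed: failed }
end

# Usage, assuming the instructor's tests print a summary line like this:
pattern = /PASSED: (?<passed>\d+)\s+FAILED: (?<failed>\d+)/
log = "compile ok\nPASSED: 7 FAILED: 2\n"
summarize_test_output(log, pattern)  # => { passed: 7, failed: 2 }
```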

ANT Overview:

Part 1 of the testing framework gave details as to how to upload the necessary Ant files for the Testing Framework.  Here is now an overview of how Ant uses those files and how they can be customized:

  1. Here is a zip containing a sample A1 folder and the necessary Ant files used by the Testing Framework:  A1
  2. As you can see, the zip maintains the same folder structure as previously detailed in Part 1 of this post.
  3. Open up the ‘build.xml’ file and you can see all the various targets and tasks defined for Ant.  The properties that it uses are defined in the ‘’ file.
  4. Once Ant is installed on your computer, you can try testing it out by simply running ‘ant’ from this A1 folder.
  5. Installation instructions, further documentation, and help can be found on the Apache Ant website.
  6. <optional> Parse Feature: The parse feature allows users to parse the test output for additional items if needed and display a more customized presentation of the results.  This feature is optional, so if no parsers are specified (in the Testing Framework form), the original test output from the test execution will simply be displayed.  Similarly, if the Ant build fails or errors out, the original test output will be displayed.  However, if the Ant build is successful AND at least one parser file has been specified, the test output will be parsed accordingly and the resulting output is what will be displayed.

    To support the parse feature, users must define a ‘parse’ target in their build.xml.  They can have as many parsers as needed, but the main target must be called ‘parse’.  For example, if the user has 2 consecutive parsers they wish to run against the test output, the ‘parse’ target can depend on a ‘preparser’ target.  The ‘preparser’ target performs the initial parsing so that the resulting output is fed into the ‘parse’ target to do the remaining parsing.  Users can add the following to their ‘build.xml’ to achieve this:

<!-- Target 'parse' depends on target 'preparser' -->

<target name="parse" depends="preparser">
  <java classname="Parser" classpath="${build.dir}" fork="true">
    <arg value="${testoutput}"/>
  </java>
</target>

<target name="preparser">
  <java classname="PreParser" classpath="${build.dir}" outputproperty="testoutput" fork="true">
    <arg value="${output}"/>
  </java>
</target>

Explanation: The above takes the test output from ${output}, which is the test output collected from the test execution, and feeds it into the PreParser class for parsing.  The resulting output from this parse is stored in ‘testoutput’, which is then fed into the ‘Parser’ class.  The important items here are the target name="parse" and the <arg value="${output}"/>.  These two items MUST BE defined in the build.xml in order for the parsing to work, because the Testing Framework code looks for a ‘parse’ target when it executes Ant, and all the initial test output (before any parsing is done) is collected and stored in an ‘output’ variable.

Here is a more simplified version of what the parse target could look like:

<target name="parse">
  <java classname="Parser" classpath="${build.dir}" fork="true">
    <arg value="${output}"/>
  </java>
</target>

Running Tests:

*The following assumes you have uploaded the necessary test files into the Testing Framework form as detailed in Part 1 of this post.

  1. From the ‘Assignment View’, click on the sub menu ‘Submissions’.
  2. Select a group / repository to grade and go through the process of collecting and grading the revision.
  3. Once you are on the results page, you can run the tests against the code by clicking on the ‘Test Code’ icon in the ‘Test Results’ section.
  4. Screenshot: Submission View - Test Results Feature

  5. As the tests are running, you will see a ‘Loading…’ message.
  6. Once the tests have completed, you will see either:
    • Tests Completed. View ‘<date_time>.log’ for details.  (If Ant was successful)
    • Build Failed. View ‘<date_time>.log’ to investigate.  (If Ant failed)
    • Unknown error with Ant. Is Ant correctly installed? View ‘<date_time>.log’ to investigate.  (If Ant errored out)
  7. To view the results, choose a log in the drop-down list beside ‘Test Results’ (by default, the latest test result log is selected) and click ‘Load’. As the results load, ‘Loading results…’ will be displayed.
  8. The results will open in a separate window to be viewed.

Test Results Table Changes:

  1. Status column added:  This column stores the status of the Ant execution (i.e. success, failed, error)
  2. User Id column added: This column stores the id of the test executor so that a user may only view test runs executed by themselves.

Test Framework Repository:

*Ensure you have set your ‘TEST_FRAMEWORK_REPOSITORY’ value

Behind the scenes, when you click ‘Test Code’, everything in your assignment folder under the directory you specified for ‘TEST_FRAMEWORK_REPOSITORY’ will be copied over into a group repository folder.  All the assignment related repository files will be copied over as well so the folder structure looks something like this:



– A1 (folder):

  • build.xml
  • … <other files needed for Ant>…

– src (folder)

– test (folder)

– lib (folder):

  • junit-4.8.2.jar

– parse (folder)
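The copy step described above could look something like the following FileUtils sketch; the method name and argument layout are assumptions for illustration, not the actual MarkUs code:

```ruby
require 'fileutils'

# Hypothetical sketch: stage the instructor's test framework files for one
# assignment into a group's repository folder before Ant is run.
def stage_test_framework(framework_repo, assignment, group_dir)
  src = File.join(framework_repo, assignment)   # e.g. <TEST_FRAMEWORK_REPOSITORY>/A1
  FileUtils.mkdir_p(group_dir)
  FileUtils.cp_r(Dir.glob(File.join(src, '*')), group_dir)
end
```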


Comments and Thoughts on the Current Implementation:

For now, this is a simple implementation that introduces the ability to test code with Ant and displays the results.  This design can obviously be refined as needed, but Benjamin and I just wanted to get something working out there for us to build on.  However, thinking ahead, here are some possible next steps:

  • Fork the Ant Execution – Currently the Ant execution is processed inline.  Need a clean way to handle both Windows and non-Windows OS cases, since Windows does not support fork.
  • Test Results reporting – Display the number of tests passed and failed as mentioned earlier
  • Implement this feature for bulk collection and grading of submissions (more than one at a time), so that the grader does not have to click ‘Test Code’ for each submission; instead, the test results are already ready to view when they go in to grade each submission.
  • Figure out a clean way to purge old test result files from the ‘Test Results’ table. If the tests are executed numerous times, more and more entries will be added to the table, and after a while old test runs will no longer be needed.
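As a sketch of the fork-versus-inline handling mentioned in the first bullet (the method name is illustrative; this is not current MarkUs code):

```ruby
# Illustrative sketch: fork the Ant run where the platform supports fork,
# and fall back to running it inline on Windows, which does not.
def launch_tests(&run_ant)
  if Process.respond_to?(:fork)
    pid = fork { run_ant.call }   # parent returns immediately; child runs Ant
    Process.detach(pid)
    :forked
  else
    run_ant.call                  # no fork available: run inline
    :inline
  end
end

launch_tests { system('ant') }    # stand-in for the real Ant invocation
```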

Related Posts:

Automated Test Framework Overview
Automated Test Framework – Part 1: Assignment View
Automated Test Framework – Part 3: Student’s View

Written by diane

September 19th, 2010 at 7:00 pm

Meeting Minutes: September 17, 2010


The IRC log for this meeting can be found here.


  • Status report
  • Round table

Status report:

  • Karen Reid will remind Alan Rosenthat about DrProject accounts for non-UofT students
  • Horatiu has checked out the MarkUs repository
  • Misa has played around with the demo version a bit
  • Kurtis set up MarkUs on OS X a week ago

Round table:

  • The DrProject account gives you commit access to the repo, wiki, and tickets
  • Evan said we’ve been getting spam tickets recently, and Karen has turned off anonymous ticket filing to try to stop it.
  • The first steps are to get rails going on your development machine, try out a small tutorial or two so that you understand the basic framework, and then get the MarkUs development environment installed. The next step is to find a couple of tickets that look pretty easy.
  • Horatiu asked about remotely accessing his desktop in Vancouver from Toronto; Mike suggested bringing a laptop with the development environment set up, and then choosing whatever is most comfortable after arriving
  • Karen talked about two big projects in mind:
    1. Automated Test System for students to submit their work for testing and get nearly immediate feedback.
    2. Take a serious look at report generation and what kinds of summary data to show to whom.

    Several other small ones:

    1. being able to assume the view of a particular student or TA
    2. some modifications to late penalties
    3. the ability (possibly) to delete assignments and users
    4. some UI issues that might make marking easier for the TAs
    5. occasional bug reports that are urgent that he may ask someone to work on

    Evan said we should also add work on the accessibility side

  • Karen said we should do punchlines: what you did this week, any roadblocks, what is next. Everyone should read the punchlines. Mike said we also need to do meeting minutes.
  • Marks allocation depends on participation in the meetings, asking questions when you get stuck, working about 8-10 hours/week (it’s OK to fall short of the hours one week and make up for it later), and producing something roughly complete by the end of the term
  • Karen said she’d like everyone to predict, at the start of each week, what they can get done that week, but to be realistic.

Written by jiahui

September 19th, 2010 at 12:55 am

Posted in Minutes

Introducing our Fall 2010 Team


We are happy to have 7 team members working on MarkUs for the fall 2010 term!  Since we are currently spread across the country, this blog post will hopefully let us get to know each other before our first opportunity to meet in person.

I’d also like to mention Mike Conley and Severin Gehwolf, two MarkUs alumni who continue to provide an enormous amount of support and advice!

Here we are, in alphabetical order:

Evan Browning
U of T

My name is Evan Browning, and I’m a third year Computer Science student at U of T.  I worked on MarkUs over the summer months, and previously I worked at the Ontario Ministry of Natural Resources and for the Assistive Technology Resource Centre at U of T (now the Inclusive Design Institute).  Besides programming I love to do film and video stuff, and especially things related to visual effects.  I also love to explore new places, both in the city and in the wilderness 🙂

Horatiu Halmaghi


My name is Horatiu Halmaghi and I am nearing the completion of my Computing Science degree from Simon Fraser University. I currently live in Burnaby, Canada but at different times in my life I’ve also called Montreal, Sibiu (Romania), and Prague (Czech Republic) my home.

I started programming in my first year of university, but in that time I’ve come to enjoy a number of different languages and technologies. I have a great deal of experience working on web projects, having worked on a few applications in Django as well as some more fun work in JavaScript and AJAX. The most fun project I’ve worked on was creating a Live Draft system for a hockey pool application.

Which reminds me: one of my greatest passions is hockey! I love to watch the sport and I play regularly on a roller hockey team in Vancouver. The only two other things I love as much as hockey are photography and traveling the globe.

Victor Ivri

My name’s Victor, I’m in the last year of a Computer Science and Cognitive Science double major here at York U. Been working in the industry as a C++ and C# developer on and off for a while, did a Google Summer of Code in ’08 (greatest experience ever, highly recommended!). Lately my interests have brought me to web development, so here I am :).

Besides that I do all sorts of things – especially if it’s outdoors, and especially if it’s far away from the city.

Misa Sakamoto
U of T

My name is Misa and I’m a 4th year in CS: Software Engineering at UofT.  This summer term, I worked on a web-based repository browser project for Basie, using Pinax/Django.  I also just finished a 16-month internship at IBM. Aside from programming, I’m also interested in psychology, teaching, and dogs 🙂

Kurtis Schmidt
U of M

My name is Kurtis Schmidt.  I am currently completing an Honours Degree in Computer Science at the University of Manitoba in Winnipeg.

I began University working towards a degree in Computer Engineering.  However, after two years I realized I preferred writing software to designing hardware so I changed majors to Computer Science.  Since then I have worked at three different software companies.  At Frantic Films VFX (now Prime Focus VFX) I was a 3D software developer.  My specific work involved mesh manipulation and implicit surface mathematics.  My second position was at Research in Motion in Waterloo, ON.  I worked on the Network and Protocol Analysis team creating automated test suites.  Finally, my last position was at Electronic Arts in Burnaby, BC.  I worked for an internal tools team who created and maintained a procedural animation tool used across EA.

In my spare time I develop OS X/iOS software using the Cocoa frameworks. As well, I enjoy film, television, and video games; playing hockey; and swimming.

Vivien Suen
U of T

My name is Vivien Suen, and I’m a fourth year Computer Science student  specializing in Software Engineering here at the University of  Toronto. Previously, I did a bit of research on reversible computing  at the University of Lethbridge, and I helped with research on social  networking as well as develop a Facebook application this past summer  at the University of Toronto.

As for interests, I really enjoy web programming and web design, I  love music and my piano, and playing badminton and baking are two of  my favourite hobbies. One of my life goals is to open a cake shop of  my own one day. Last but not least, I also enjoy my occasional dose of  WoW and SC2.

I’m excited to be working on MarkUs, and I’m looking forward to  working with you this term!

Jiahui Xu

My name is Jiahui (Sherry) Xu, and I’m in the fourth year of my Computing Science degree at SFU. I’m from mainland China, and I live in Vancouver, close to the Metrotown SkyTrain station. I spent the first two years of this degree at Zhejiang University (China), then joined a Dual Degree Program which transferred me to SFU.

In terms of web development experience, I’m most familiar with the Django framework and jQuery. I usually work on my Mac but am also comfortable with other OSes. I’ve also learnt PHP and CodeIgniter on my own and have done a project based on them. Ruby on Rails is pretty new to me, but that’s also why I chose this project: I would like to learn some cool new tools. Other than the above, I’m good at C and C++ and have done a bunch of projects for my courses.

I like to sit in cafes and program; that’s my ideal spot for programming. I used to love shopping for clothes online in China because they offer clothes at very low prices. Now I like swimming to stay healthy, since life as a programmer is stressful and sometimes tiring. Hopefully, if I have time, I can explore the swimming pools in Toronto 😀

Written by Evan

September 18th, 2010 at 3:23 pm

Posted in Uncategorized

Who is Doing What? Fall 2010 Edition


The MarkUs team meets weekly on #markus on Fridays at 1:00 PM (EST).  Every Thursday, each member of the team must come up with a “punchline” status update.  These updates are short, bulleted, straight-to-the-point reports that tell us how everybody is doing.  They follow a very simple format: see these three examples.  The punchlines need to be published on this blog every Thursday, and it is every team member’s responsibility to give them a read before coming into the meeting.

But instead of everybody logging in and editing a single blog post for the status updates, we’ll rotate responsibility for collecting/publishing punchlines every week.  Similarly, we will rotate the duty of converting our IRC meeting logs into notes.

Here’s the schedule outlining who is doing what each week. Teammates:  I highly suggest bookmarking this page.

  • Sept 17:  punchlines:  Nobody, minutes:  Jiahui
  • Sept 24:  punchlines:  Jiahui, minutes:  Kurtis
  • [Oct 1 – 3 is the sprint, so meetings will happen in person]:  punchlines:  Horatiu
  • Oct 8:  punchlines:  Kurtis, minutes: Misa
  • Oct 15:  punchlines:  Misa, minutes: Victor
  • Oct 22:  punchlines: Victor, minutes: Vivien
  • Oct 29:  punchlines: Vivien, minutes: Horatiu
  • Nov 5:  punchlines: Horatiu, minutes: Jiahui
  • Nov 12:  punchlines: Jiahui, minutes: Kurtis
  • Nov 19:  punchlines: Kurtis, minutes: Misa
  • Nov 26: punchlines: Misa, minutes: Victor
  • Dec 3: punchlines: Victor, minutes: Vivien
  • Post-mortem (date TBD):  punchlines:  Vivien, minutes:  Horatiu

Note that this schedule may be subject to change.  Check back frequently.

Written by m_conley

September 18th, 2010 at 2:00 pm

And another term begins…


I’d like to start by thanking the summer students for their hard work.  I’ve been working with the new version, and I can definitely see the improvements.  I’d also like to welcome the new students who will be working on MarkUs this term.  While I was setting up the DrProject account, I realized that at least 36 people have been affiliated with the project over the last two years!

Now it is time to start planning what everyone will be working on this fall.  There are some smaller things that I’d like to work on, as well as some larger projects.

Reporting
Dina made a start this summer with some work on a dashboard. I’d like to take this in a bigger direction and start to look at reporting on a bigger scale.  All three types of users could use better information about grades, annotations, and submissions.

For students, I’d like the option of showing them a histogram of the grade distribution, and possibly other information like how many students have already submitted something.

For TAs it would be useful to see the progress that other TAs are making and even see more useful information about their own progress.

For the instructors there is a whole range of data that I’d like to be able to at least download, if not also see a nice representation online.  This includes information on grades (mean, mode, median, distribution, for rubric criteria, or for all assignments), on submissions (essentially svn repository activity, but also summary information across assignments), on graders (number of annotations applied, average grade given, grade distribution), and on annotations (number of each given, students who receive annotations).

It would be nice to be able to choose what gets displayed to the students and what kind of report can be generated, or downloaded.
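As a small taste of the grade statistics mentioned above (mean, mode, median), here is a plain-Ruby sketch; it is not tied to any MarkUs model, and the method name is illustrative:

```ruby
# Minimal sketch of per-assignment grade summary statistics computed over a
# plain array of grades; illustrative only, not actual MarkUs code.
def grade_summary(grades)
  sorted = grades.sort
  n = sorted.size
  mean = sorted.sum.to_f / n
  # Median: middle value, or the average of the two middle values.
  median = (sorted[n / 2] + sorted[(n - 1) / 2]) / 2.0
  # Mode: the most frequent grade.
  mode = sorted.group_by { |g| g }.max_by { |_, occurrences| occurrences.size }.first
  { mean: mean, median: median, mode: mode }
end

grade_summary([70, 80, 80, 90])  # => { mean: 80.0, median: 80.0, mode: 80 }
```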

Testing Framework

Benjamin and Diane made a great start on the test framework this summer. The next steps are to see if we can fit an actual assignment and tests into this framework. I would personally like to see how we can fit some C exercises into the framework, but even getting something more realistic in there for Java would be interesting.

There are two big pieces to work on.  One is displaying the results to the student and collecting the results for the instructor/TA.  The output of the tests will have to pass through some kind of filter so that the instructor can choose what information gets back to the students.

The other big piece is on the security side.  An intermediate step is to run the student’s code as a different user on the server, but while this protects the web server and applications from corruptions (mostly), it doesn’t prevent a malicious user from getting at data he or she is not supposed to have, like the test code or the test infrastructure code.  The ultimate goal is to run the student’s code inside a locked down VM.  We have to lock down a VM and  figure out a protocol for transferring data from MarkUs to the VM and back.

Remark Requests

I haven’t thought about this one enough, but it would be nice to integrate remark requests into MarkUs.  This might turn out to be a medium size project because it would involve creating a new tab in the grader view for the student to make their case.  We might allow students to add annotations.  Then the marking status of the student’s work would change so the instructor can see that a remark has been requested.  (I’m a little worried about making it too easy to request remarks.)

Then there are a bunch of small things:

I’d like to be able to assume the role of a particular student or TA.  I’d also like to be able to change the role of a user (e.g. promote a TA to admin, or student to TA).

We need to be able to delete assignments and possibly users.

Support for a repeating pattern of late penalties.

I’d like to take more of a look at the Grader UI again to see if we can improve things.  For example, it would be nice to make it clear when a rubric element has been graded or is still left to be graded.


I’m always open to suggestions. We aren’t going to implement all the suggested features, but we do want to keep MarkUs usable.

Written by Karen Reid

September 17th, 2010 at 3:20 pm