MarkUs Blog

MarkUs Developers Blog About Their Project

Automated Testing: Mockups and some more ideas

with 5 comments

The more that I read up on this and think about it, the more I believe this will be a huge project. There is really a lot to consider. In an ideal world, this is what automated testing would do once implemented.

  1. Students have a system at hand which they can use to learn to write good tests. For instance, they could check whether their tests are correct by running them against a reference implementation that an instructor would put up at some point before the assignment due date.
  2. Students would learn which parts of the code they are testing (coverage). This would – again, we are in an ideal world – be reported visually in a Grader-View-like environment, with little boxes indicating where code is unchecked. See the mockup below.
  3. Students would also learn which areas of their code are particularly fragile. This could be achieved by running a static analysis tool on the code.
  4. Students would have the opportunity to see the results of public tests, i.e. tests the instructor has created and marked public. The instructor uploads a public test file and could additionally specify a filter file, which is run on the test output. I can imagine these filters being shell scripts or Python scripts.
  5. Students also have the possibility to run some more tests on their code: the restricted test suite as specified by the instructor. Say the instructor specifies that a student or group can run these tests at most once or twice a day. The idea is to encourage students to start their assignment early and to have plenty of opportunities to run those restricted tests. For each restricted test, students would have to log into MarkUs and trigger its execution. We store in the database how many restricted test runs are left for each grouping. Restricted tests can be filtered similarly to public tests, using some sort of script run on the test output.
  6. An instructor could specify a file which has to run successfully in order for the submission to be graded. This would be useful if an instructor does not allow submissions that do not compile. Such failing “acceptance tests” would be displayed visually in the submissions view and the grader view.
  7. Graders have a visual report ready when they start grading. They would see the percentage of failing and passing tests, respectively. See again the mockups for more information.
  8. Tests created by students (for instance, to run them against a reference implementation) would be submitted via the web interface in a separate section, say “Tests”. The reason for this is to keep tests and code separate. I can imagine these tests going into a different folder in the grouping’s repository (e.g. /tests/a1). Alternatively, students could submit these tests via an SVN command-line client into the appropriate folder. Tests would be triggered via MarkUs’ interface in any case. This makes tracking of students’ testing activity and restricted test runs easier.
  9. Students control their test runs via MarkUs. They trigger runs and get immediate feedback if they are not allowed to run the desired test, or feedback as to where in the testing queue their run is scheduled. Students can wait a while and check back later to see whether test results are available for them. Maybe we could plan for the option of email notifications for students once the test has been run and results are available.
  10. Automated testing would support graders in many respects. All tests the instructor provided for the assignment would be run. This includes public, restricted, private and basic acceptance tests. If students are required to write tests for their code too, MarkUs could be configured to also run tests in a specific folder in the grouping’s repository (e.g. /tests/a1). All these test results would be available to the grader in the grader view. Ideally, failures are already pre-annotated in the Grader View. If basic tests fail (e.g. the program does not compile), automatic deductions could be triggered.
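To make point 5 a bit more concrete, here is a minimal, framework-free sketch of how a per-grouping allowance of restricted test runs might be tracked and consumed. The class and method names are hypothetical, not actual MarkUs code; in MarkUs the bookkeeping would of course live in the database rather than in memory.

```ruby
# Hypothetical sketch: per-grouping quota for restricted test runs.
# An in-memory Hash stands in for the database table MarkUs would use.
class RestrictedTestQuota
  def initialize(runs_per_day)
    @runs_per_day = runs_per_day
    # grouping_id => runs left today; defaults to the full allowance
    @remaining = Hash.new { |h, k| h[k] = runs_per_day }
  end

  # Consumes one run and returns true if the grouping still has runs
  # left today; returns false (and runs nothing) otherwise.
  def request_run(grouping_id)
    return false if @remaining[grouping_id] <= 0
    @remaining[grouping_id] -= 1
    true
  end

  def runs_left(grouping_id)
    @remaining[grouping_id]
  end

  # A daily job would call this to reset everyone's allowance.
  def reset_all
    @remaining.clear
  end
end
```

With a limit of 2, a grouping’s third `request_run` in a day would return false, and MarkUs would simply refuse to queue the run.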
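The filter files mentioned in points 4 and 5 could be any small script that rewrites raw test output before students see it. As an illustration only (the redaction rules below are invented for the example, not a proposed MarkUs policy), a filter might drop backtrace lines and redact absolute paths from the grading machine:

```ruby
# Illustrative output filter: cleans raw test output before it is
# shown to students. The specific rules are examples only.
def filter_test_output(raw)
  raw.each_line
     .reject { |l| l =~ /\A\s+from / }         # drop backtrace lines
     .map    { |l| l.gsub(%r{/[\w./-]*/}, "") } # redact absolute paths
     .join
end
```

In the architecture sketched above, MarkUs would run the instructor-supplied filter (shell, Python, or otherwise) on the captured output and store only the filtered version for students.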

I’ll stop for now. Here are some mockups:

assignment_creation_overview

assignment_creation_detail

grader_view_integration

If you have thoughts, please feel free to chime in. As I mentioned previously, I think this whole topic would be enough material for a master’s thesis. The big question remains: how would you make sure that you can integrate test runs into the Grader View and the other points mentioned above? It would be nice if test tools had a standard output format. I can imagine some built-in integration points in MarkUs, though. That’s it for now, stay tuned 🙂
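On the standard-output-format question: one existing candidate is the Test Anything Protocol (TAP), which many test tools can emit or be adapted to emit. As a hedged sketch of the kind of integration point I have in mind (the function is illustrative, not existing MarkUs code), turning TAP output into the pass/fail summary a Grader View could display is a few lines:

```ruby
# Illustrative integration point: summarize TAP (Test Anything
# Protocol) output into counts a Grader View could display.
# TAP result lines start with "ok" or "not ok".
def summarize_tap(output)
  passed = failed = 0
  output.each_line do |line|
    case line
    when /\Anot ok\b/ then failed += 1
    when /\Aok\b/     then passed += 1
    end
  end
  total = passed + failed
  { passed: passed,
    failed: failed,
    pass_rate: total.zero? ? 0.0 : (100.0 * passed / total) }
end
```

Any tool whose output can be converted to such a format would then plug into the same Grader View display without tool-specific code.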

Written by Severin

November 22nd, 2009 at 2:13 am

Posted in Automated Testing

5 Responses to 'Automated Testing: Mockups and some more ideas'


  1. Nice work! It’s great to see this in writing.

    A couple of thoughts… I think that one of the first things we need to do is to identify the “simplest thing that can work”. In other words, we need to distill out of all of these great features (and I’d like to eventually have all of them) the one that will have the most impact in the short term and implement that.

    One of the things that I liked about WebCAT is the way they specified the tests. The plugin designer (possibly the instructor) wrote a single (Perl) script that would both identify the tests to be run (via Ant) and provide the filters on the output. It wasn’t modular enough for my taste, but it had the advantage of extreme flexibility.

    Karen

    22 Nov 09 at 9:50 pm

  2. Thanks for your feedback. In order to identify the simplest possible thing that can work, I’d like to have a question answered: is it more important that students learn how to write proper tests, or that students can run tests provided by the instructor against their code? I mean, should MarkUs initially help teach TDD, or should MarkUs be used to provide more feedback to students? I think this is a matter of priority. Which feature is more desired?

    I think we could go in mainly two directions now. Which one should it be? Thanks!

    Severin

    22 Nov 09 at 10:57 pm

  3. I think that the simplest feature would be to have automated testing implemented just for the instructors/graders to be able to see. Students being able to see the test results adds an additional layer of complexity that isn’t necessary right away – we should build that on top of the back-end and the actual testing framework.

    Tara Clark

    23 Nov 09 at 8:35 pm

  4. Severin and I talked about feedback being the most important focus right now.

    Tara, I like your idea. The first step might be to build a test system that is used only after the due date has passed so we have only one set of results to display.

    Karen

    27 Nov 09 at 12:49 pm

  5. […] User Interface: I refer you to this excellent article on the blog dedicated to MarkUs. There you will find mockups of the graphical interface once […]
