MarkUs Blog

MarkUs Developers Blog About Their Project

Ideas for passing data to the Test Runner

with 10 comments

The test runner is the program that would be called to run the instructor-provided test scripts, and would return the data to MarkUs, which would then parse the results.

The test runner is supposed to return the following data as XML:

– Test id – The name of the test

– Input – The input used for the test

– Expected Output – The correct output

– Actual Output – The actual output (from the submitted code)

– Result – Pass or Fail, depending on the output, or Error if there was an error during testing

– Marks – The marks earned by the student
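
For concreteness, one test's record under this scheme might be built with Python's standard xml.etree — keeping in mind that the element names below are placeholders of mine, not a settled schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical element names -- the actual schema is still undecided.
test = ET.Element("test", id="count_words")
ET.SubElement(test, "input").text = "the quick brown fox"
ET.SubElement(test, "expected_output").text = "4"
ET.SubElement(test, "actual_output").text = "3"
ET.SubElement(test, "result").text = "fail"
ET.SubElement(test, "marks").text = "0"

xml_string = ET.tostring(test, encoding="unicode")
print(xml_string)
```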

The input for the test runner is currently a file (or input on stdin) that contains the name of each test followed by a flag (to determine whether the program should halt if that test fails), and so on for each subsequent test.
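
A minimal parser for that input format might look like the following sketch (the exact field separator and flag spelling are assumptions on my part):

```python
def parse_test_list(text):
    """Parse lines of the form "test_name, halt_on_fail" into tuples."""
    tests = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        name, halt_flag = [part.strip() for part in line.split(",")]
        tests.append((name, halt_flag.lower() == "true"))
    return tests

tests = parse_test_list("test_sort, true\ntest_search, false\n")
print(tests)  # [('test_sort', True), ('test_search', False)]
```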

The issue that arises from this is that there is more data returned than is sent to the test runner in the input file, and the test runner needs some way of receiving that data.

As a result, we must find another way to send that data to the test runner.

 

There are 3 solutions that I can see, and I’d like to ask for feedback.

1. Return less data

This is the simplest solution, although arguably the worst (I’m just including it here on the off chance that it’s actually a viable solution).

Instead of returning all of the above information, the test runner could return a subset of it; specifically, it could return the test id, the actual output, and the exit status of the test script.

The advantage of this approach is that the input for the test runner would remain the same.

The disadvantage is that MarkUs would need to compare the results and determine the marks received. This means that the instructor would still need to submit the input and correct answer to MarkUs, which raises the question of why that information wouldn’t be sent to the test runner.

 

2. Include more data in the input file

Instead of simply passing a file where each line is “test_name, halt_on_fail”, more data could be included. For instance, “test_name, halt_on_fail, input, expected_output, marks” could be passed instead.

By passing this data, the test runner could compare the actual and expected output, return the appropriate status, and determine the number of marks to be awarded. The “input” field could likely be omitted, since the test runner doesn’t use it, so the only real changes would be the addition of the “expected output” and “marks” fields.

The main advantage to this approach is that the input would remain almost the same.

The disadvantage is that the testing interface would need to be updated to allow the instructor to specify the input, target output, and number of marks, which can be quite tedious to enter if there are a large number of tests.
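
A sketch of what parsing the extended line format could look like — note that a naive comma-separated format would break if a field itself contains a comma, so a real format would need escaping or a different delimiter:

```python
def parse_extended_line(line):
    """Split a "test_name, halt_on_fail, input, expected_output, marks" line.
    Assumes no field contains a comma (a real format would need escaping)."""
    name, halt, test_input, expected, marks = [p.strip() for p in line.split(",")]
    return {
        "name": name,
        "halt_on_fail": halt.lower() == "true",
        "input": test_input,
        "expected_output": expected,
        "marks": int(marks),
    }

entry = parse_extended_line("test_sort, false, 3 1 2, 1 2 3, 5")
print(entry["expected_output"], entry["marks"])  # prints: 1 2 3 5
```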

 

3. Have the test script output the data

Rather than change the data that is passed to the test runner, or change the testing interface, we can simply require that the first 3 lines output by each testing script are the input, target output, and number of marks that test is worth.

This seems to be the simplest of the 3 approaches.

Even without this approach, the test script would already need to define the input for the student code, as well as the target output, so it would be trivial to add a print statement writing those to stdout. The only real addition to the script would be a line printing the number of marks that test is worth.
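
As a rough sketch, a test script following this convention could look like the following (the task and values are invented for illustration):

```python
# Hypothetical test script under option 3: the first three printed lines
# are the input, the target output, and the marks this test is worth;
# everything after that is the actual output from running the student code.
test_input = "hello"
expected = "HELLO"
marks = 2

print(test_input)
print(expected)
print(marks)

# A real script would invoke the student's code here; this stands in for it.
actual = test_input.upper()
print(actual)
```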

The main advantage of this approach is that the interface does not need to be changed, and the instructor will not be required to submit additional information; they can simply put the information in the test script. As well, no extra input is needed for the test runner.

The main disadvantage, though, is that the instructor must output the data in every script, and in the correct order; if the print statements are ordered incorrectly, the results for that test will be wrong.

I believe that this may be the best solution to this problem.

 

Feedback would be appreciated.

-Mike

Written by mmargel

October 20th, 2012 at 4:18 am

Posted in Uncategorized

10 Responses to 'Ideas for passing data to the Test Runner'


  1. 1. Figuring out the marks earned is part of running a test. I think we want the test runner to handle everything about running the test – MarkUs only passes the necessary information back and forth.
    2. I think some instructors do not want to submit input and expected-output files, because that can be too much work. If the input to a question is one integer, I would really like to just put that in the test script, not in a separate data file.
    3. On the other hand, the input and output of a question can be very long. Moreover, the student may be asked to write a program that reads from a file, in which case a data file becomes a necessary component of the testing.

    What I’m trying to say here is that we shouldn’t limit the way the instructor runs the test script. Instead, we can ask the instructor to do a bit more: format the output in a way that the test runner understands, for example XML. Each test script returns the following content:

    The remaining job for the test runner is to concatenate the outputs from all the test scripts and return them to MarkUs in XML format.

    Brian

    23 Oct 12 at 10:08 am
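
Brian's scheme above boils down to simple concatenation in the runner. A rough Python sketch, using the tag names he lists later in the thread (the `<testrun>` wrapper element is my own invention, not part of the proposal):

```python
# Each test script is assumed to have already emitted an XML fragment;
# the runner only joins them under one root element for MarkUs.
fragments = [
    "<test><input>1 2</input><expected_output>3</expected_output>"
    "<output_or_error>3</output_or_error>"
    "<marks_earned>1</marks_earned><marks_total>1</marks_total></test>",
    "<test><input>4 5</input><expected_output>9</expected_output>"
    "<output_or_error>8</output_or_error>"
    "<marks_earned>0</marks_earned><marks_total>1</marks_total></test>",
]

report = "<testrun>" + "".join(fragments) + "</testrun>"
print(report)
```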

  3. Sorry for making multiple comments, but my XML tags disappear (treated as HTML tags I guess?)

    <input>
    </input>
    <expected_output>
    </expected_output>
    <output_or_error>
    </output_or_error>
    <marks_earned>
    </marks_earned>
    <marks_total>
    </marks_total>

    Brian

    23 Oct 12 at 10:26 am

  4. I think I’d rather just have the test script return everything as plain text.

    What I want to do right now is this:
    The first line printed by the test script is the number of marks.
    The second line printed is the expected output.
    The third line printed is the actual output.

    Then the test runner can get that information and pass it back to MarkUs.

    I’m not entirely sure what to do about file I/O assignments. It might be better to let the instructor submit a copy of the input file and then just give the students a link to it (in the results) instead of printing the contents; we could likely do the same with the expected output.
    The problem is going to be how we deal with the actual output for assignments like this, unless we just pass it back the way we do for all other tests.

    mmargel

    23 Oct 12 at 2:26 pm
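
The three-line convention described in this comment could be consumed by the runner roughly as follows (a sketch; as noted later in the thread, it assumes each value fits on a single line, which breaks down for multi-line outputs):

```python
def parse_script_output(output):
    """Interpret a test script's stdout under the proposed convention:
    line 1 = marks, line 2 = expected output, line 3 = actual output.
    Assumes each value fits on a single line."""
    lines = output.splitlines()
    marks, expected, actual = lines[0], lines[1], lines[2]
    return {
        "marks": int(marks),
        "expected_output": expected,
        "actual_output": actual,
        "result": "pass" if expected == actual else "fail",
    }

result = parse_script_output("3\nHELLO\nHELLO\n")
print(result["result"])  # prints: pass
```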

    The database schema we have right now allows the instructor to upload helper files, such as input data, but they are not visible to the student.

    Also, the expected output and actual output may contain multiple lines.

    Brian

    23 Oct 12 at 4:04 pm

    Try to make this as simple as possible. I suggest always reading input from a file, even if the student’s code reads from stdin. If the latter is the case, redirect stdin to the file descriptor of the file.

    Also, keep in mind that input may be binary (an image). HTH.

    Severin

    23 Oct 12 at 5:41 pm
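
Severin's stdin-redirection suggestion could be sketched with Python's subprocess module like this (the `cat` command stands in for the student's program, and the file contents are invented):

```python
import os
import subprocess
import tempfile

# Write the test input to a file; if the student's code reads from stdin,
# redirect stdin to that file's descriptor, as suggested above.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello\n")
    input_path = f.name

with open(input_path, "rb") as input_file:
    # "cat" is a placeholder for invoking the student's program.
    completed = subprocess.run(
        ["cat"], stdin=input_file, capture_output=True, text=True
    )

os.unlink(input_path)
print(completed.stdout)  # prints: hello
```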

    I don’t see why passing this info to the test runner would be a problem. Isn’t that just a matter of protocol and serialization? In an earlier incarnation, a specific directory-layout naming scheme was used, and serialization was just a plain file copy. You could consider other options such as JSON.

    Severin

    23 Oct 12 at 5:43 pm
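
For comparison with the XML proposals, the JSON option mentioned above could serialize a result record with the standard library like so (field names invented to mirror the fields in the original post):

```python
import json

# Hypothetical result record for one test.
result = {
    "test_id": "test_sort",
    "input": "3 1 2",
    "expected_output": "1 2 3",
    "actual_output": "1 2 3",
    "result": "pass",
    "marks": 5,
}

payload = json.dumps(result)      # serialize for the wire
restored = json.loads(payload)    # MarkUs would parse it back
print(restored["result"])  # prints: pass
```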

  8. The idea is that the test runner would just execute whatever script the instructor provides, and the instructor’s script would execute the student code. That means that there’s no real reason to pass the input to the test runner (because it would be part of the instructor’s script).

    The only reason the test runner would ever need the input is so it can send it back to MarkUs, so it could store the complete results from the test (which includes all the fields I listed above).

    mmargel

    23 Oct 12 at 10:41 pm

  9. I think there are some wrong assumptions here. Specifically, in the original post:

    The issue that arises from this is that there is more data returned than is sent to the test runner in the input file, and the test runner needs some way of receiving that data.

    As a result, we must find another way to send that data to the test runner.

    Reading and writing data via a pipe or socket doesn’t have such limitations. If you had a specific buffer in memory set aside for the data, you’d have to be careful about overflow, but that’s not the situation for a pipe/socket. So we can pass arbitrary amounts of information back and forth.

    I think the code for the test runner should be conceptually quite simple:

    testoutput = "<testrun>"
    while (stdin has lines)
        read testscriptName, stopOnError
        testoutput += "<test name=" + testscriptName + ">"
        testoutput += execute testscriptName
        testoutput += "</test>"
        if (test resulted in an error and stopOnError) break

    testoutput += "</testrun>"
    print testoutput to stdout or a socket

    The assumption here is that MarkUs (Brian!) copies the following to the test directory:
    * the student’s code
    * the test script
    * auxiliary files provided by the instructor (data, code)

    The test script runs and provides output in a standard format for the testrunner to concatenate onto testoutput. Some sample output from the test script:

    <result>fail</result>
    <input>Upper Case 349!</input>
    <actual_output>uPPER cASE 349!</actual_output>
    <expected_output>UPPER CASE 349!</expected_output>
    <marks>1</marks>

    (Hope my in-line HTML works!)

    Byron Weber Becker

    24 Oct 12 at 8:56 am

  10. It ate my in-line HTML 🙁 There should be tags included in my testrunner code as well as the sample output from the test script.

    Byron Weber Becker

    24 Oct 12 at 8:57 am
