The test runner is the program that MarkUs calls to run the instructor-provided test scripts; it returns the results to MarkUs, which then parses them.
The test runner is supposed to return the following data as XML:
– Test id – The name of the test
– Input – The input used for the test
– Expected Output – The correct output
– Actual Output – The actual output (from the submitted code)
– Result – Pass or Fail, depending on the output, or Error if there was an error during testing
– Marks – The marks earned by the student
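For concreteness, the result for a single test might be serialized along these lines (the element names here are illustrative only; the actual schema would still need to be agreed upon):

```xml
<test>
  <id>test_addition</id>
  <input>3 4</input>
  <expected_output>7</expected_output>
  <actual_output>7</actual_output>
  <result>Pass</result>
  <marks>2</marks>
</test>
```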
The input for the test runner is currently a file (or input on stdin) that contains the name of each test followed by a flag (to determine whether the program should halt if that test fails), followed by another test, etc.
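A hypothetical input file in the current format might look like the following (test names are made up; here the flag is written as true/false, with true meaning "halt if this test fails"):

```
test_addition, true
test_subtraction, false
```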
The issue is that the test runner returns more data than it is sent in the input file, so we need some other way of getting that extra data to it.
There are 3 solutions that I can see, and I’d like to ask for feedback.
1. Return less data
This is the simplest solution, although arguably the worst (I’m just including it here in the off chance that it’s actually a viable solution).
Instead of returning all of the above information, the test runner could return a subset of it; specifically, it could return the test id, the actual output, and the exit status of the test script.
The advantage of this approach is that the input for the test runner would remain the same.
The disadvantage is that MarkUs would need to compare the results and determine the marks received. This means that the instructor would still need to submit the input and correct answer to MarkUs, which raises the question of why that information wouldn’t be sent to the test runner.
2. Include more data in the input file
Instead of simply passing a file where each line is “test_name, halt_on_fail”, more data could be included. For instance, “test_name, halt_on_fail, input, expected_output, marks” could be passed instead.
By passing this data, the test runner could compare the actual and expected output, return the appropriate status, and determine the number of marks to be awarded. The “input” field could likely be omitted, since the test runner doesn’t use it, so the only real changes would be the addition of the “expected output” and “marks” fields.
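As a sketch of what the runner-side logic might look like under this extended format (the function name and dictionary keys are mine, and the “input” field is omitted as suggested above):

```python
def grade(line, actual_output):
    """Grade one test from a hypothetical extended input line.

    Assumes the 'test_name, halt_on_fail, expected_output, marks'
    layout sketched above. Note: a naive comma split like this
    breaks if the expected output itself contains commas, so a
    real format would need quoting/escaping or another delimiter.
    """
    name, halt_on_fail, expected, marks = [f.strip() for f in line.split(",")]
    passed = actual_output.strip() == expected
    return {
        "test_id": name,
        "result": "Pass" if passed else "Fail",
        "marks": int(marks) if passed else 0,
        # Signal the runner to stop only when the flag is set AND the test failed.
        "halt": halt_on_fail == "true" and not passed,
    }
```

For example, `grade("test_addition, true, 7, 2", "7\n")` would report a Pass worth 2 marks.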
The main advantage to this approach is that the input would remain almost the same.
The disadvantage, though, is that the testing interface would need to be updated to let the instructor specify the input, target output, and number of marks, which can be quite tedious to enter when there are a large number of tests.
3. Have the test script output the data
Rather than change the data that is passed to the test runner, or change the testing interface, we can simply require that the first 3 lines output by each test script are the input, the target output, and the number of marks that test is worth.
This seems to be the simplest of the 3 approaches.
Even without this requirement, the test script already needs to define the input for the student code, as well as the target output, so it would be trivial to add print statements for those. The only real addition to the script would be a line printing the number of marks the test is worth.
The main advantage of this approach is that the interface does not need to be changed, and the instructor will not be required to submit additional information; they can simply put the information in the test script. As well, no extra input is needed for the test runner.
The main disadvantage, though, is that the instructor must output the data in every script, and in the correct order; if the print statements are ordered incorrectly, the results for that test will be wrong.
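On the runner side, parsing under this convention could be as simple as the following sketch (the function name, and the assumption that everything after the three header lines is the student code's actual output, are mine):

```python
def parse_script_output(raw):
    """Split a test script's stdout under the proposed convention:
    the first three lines are the input, the expected output, and
    the marks; everything after is taken as the actual output."""
    test_input, expected, marks, *rest = raw.splitlines()
    return {
        "input": test_input,
        "expected_output": expected,
        "marks": int(marks),  # raises ValueError if the header lines are reordered
        "actual_output": "\n".join(rest),
    }
```

Note that `int(marks)` failing on a reordered header is the best case; if two misplaced fields happen to both be numeric or both be text, the mistake is silent, which is exactly the ordering risk described above.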
I believe that this may be the best solution to this problem.
Feedback would be appreciated.