Automated Test Framework – Part 2: Submission View

Continuing from Part 1 of the testing framework, Part 2 covers the actual execution of the tests from the Submission view.

Implementation Overview:

As a starting point, this implementation allows a grader to execute a set of tests, using Ant (a Java-based build tool), against students’ code.  The files used are those specified in the Testing Framework form, as detailed in Part 1 of this post.  The results are logged, saved to the ‘Test Results’ table in the database, and made available for the grader to view upon execution.

Three possible states may occur upon running the tests:

Ant Builds Successfully – This means Ant was able to build and execute all of the test and related files successfully.  It does not mean that all of the tests passed; it simply means that Ant itself ran successfully.

  • This is the state students (more about this later) and graders will most likely see, because by the time they run the tests, the build files and all other Ant-related configuration should already have been tested and shown to work properly.

Ant Build Failed – This means Ant was not able to build and/or execute the tests and related files successfully.  There could be a problem with how the build.xml is configured or some other issue.  However, the build log is available so the user can diagnose the problem appropriately.

Ant Build Error – This means some other unknown error with Ant has occurred; perhaps Ant was not set up and installed properly.  However, the build log is available so the user can diagnose the problem appropriately.

  • The above two states exist mainly to help the instructor during the initial setup of the Ant tests.  Along with the build logs, they make diagnosis easier when instructors are setting up their tests for the first time.
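
To make the three states concrete, here is a minimal sketch (in Java, for illustration only; this is not the actual MarkUs code) of how an Ant invocation could be mapped onto them by running ‘ant’ as a subprocess and inspecting its exit code and build log:

import java.io.File;
import java.io.IOException;

// Illustrative sketch only: maps an 'ant' subprocess run onto the three
// states described above. The real MarkUs implementation may differ.
public class AntRunner {

    public enum Status { SUCCESS, FAILED, ERROR }

    public static Status runAnt(File assignmentDir) {
        try {
            Process p = new ProcessBuilder("ant")
                    .directory(assignmentDir)   // e.g. the A1 folder
                    .redirectErrorStream(true)  // merge stderr into the log
                    .start();
            String log = new String(p.getInputStream().readAllBytes());
            int exitCode = p.waitFor();
            // Ant prints "BUILD SUCCESSFUL" or "BUILD FAILED" at the end of its log.
            if (exitCode == 0 && log.contains("BUILD SUCCESSFUL")) {
                return Status.SUCCESS;  // Ant ran; individual tests may still have failed
            }
            return Status.FAILED;       // e.g. a build.xml misconfiguration
        } catch (IOException | InterruptedException e) {
            return Status.ERROR;        // e.g. Ant not installed or not on the PATH
        }
    }
}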

**Note: In the current implementation, there is no feature indicating whether all the tests passed or failed, because this gives instructors more flexibility in how they execute their tests.  The framework simply tells you that Ant executed successfully; you must look at the logs to see which tests passed and failed.  The reason there is no such feature yet is that Ant can support multiple languages, and instructors may want to parse their test output and display their own customized results, so the test output is not consistent.  This could change if there were a consistent way to tell whether tests passed or failed.  For example, we could require users to always produce a Test Results Summary at the end that can be easily parsed, or allow users to specify a Test Results expression in the Testing Framework form; when the tests are executed, the expression would be searched for in the test output, and the numbers of tests passed and failed could be reported.
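
As a rough illustration of the Test Results expression idea (a hypothetical design, not an existing feature), an instructor-supplied regular expression with two capture groups could be matched against the test output to extract the counts:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: count passed/failed tests by matching an
// instructor-supplied expression against the raw test output. The
// summary format shown in main() is invented for illustration.
public class ResultExpression {

    // Returns {passed, failed}, or null if the expression never matches.
    public static int[] countResults(String testOutput, String expression) {
        Matcher matcher = Pattern.compile(expression).matcher(testOutput);
        if (matcher.find()) {
            return new int[] {
                Integer.parseInt(matcher.group(1)),
                Integer.parseInt(matcher.group(2))
            };
        }
        return null; // no summary line found; fall back to showing the raw log
    }

    public static void main(String[] args) {
        String output = "...\nTests passed: 8, Tests failed: 2\n...";
        int[] counts = countResults(output, "Tests passed: (\\d+), Tests failed: (\\d+)");
        if (counts != null) {
            System.out.println(counts[0] + " passed, " + counts[1] + " failed");
        }
    }
}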

Ant Overview:

Part 1 of the testing framework gave details on how to upload the necessary Ant files for the Testing Framework.  Here is an overview of how Ant uses those files and how they can be customized:

  1. Here is a zip containing a sample A1 folder and the necessary Ant files used by the Testing Framework:  A1
  2. As you can see, the zip maintains the same folder structure as previously detailed in Part 1 of this post.
  3. Open up the ‘build.xml’ file and you can see all the various targets and tasks defined for Ant.  The properties that it uses are defined in the ‘build.properties’ file.
  4. Once Ant is installed on your computer, you can try testing it out by simply running ‘ant’ from this A1 folder.
  5. Installation instructions, further documentation, and help can be found here:  http://ant.apache.org/
Parse Feature (optional):

The parse feature allows users to parse the test output for additional items if needed and display a more customized presentation of the results.  This feature is optional: if no parsers are specified (in the Testing Framework form), the original test output from the test execution will simply be displayed.  Similarly, if the Ant build fails or errors out, the original test output will be displayed.  However, if the Ant build is successful AND at least one parser file has been specified, the test output will be parsed accordingly, and the resulting output is what will be displayed.

To support the parse feature, users must define a ‘parse’ target in their build.xml.  They can have as many parsers as needed, but the main target invoked must be named ‘parse’.  For example, if a user has two consecutive parsers they wish to run against the test output, the ‘parse’ target can depend on a ‘preparser’ target.  The ‘preparser’ target performs the initial parsing so that its output is fed into the ‘parse’ target to do the remaining parsing.  Users can add the following to their build.xml in order to achieve this:

<!-- Target 'parse' depends on target 'preparser' -->

<target name="parse" depends="preparser">
  <java classname="Parser" classpath="${build.dir}" fork="true">
    <arg value="${testoutput}"/>
  </java>
</target>

<target name="preparser">
  <java classname="PreParser" classpath="${build.dir}" outputproperty="testoutput" fork="true">
    <arg value="${output}"/>
  </java>
</target>

Explanation: The above takes the test output from ${output}, which is the test output collected from the test execution, and feeds it into the PreParser class for parsing.  The resulting output from this parse is stored in ‘testoutput’, which is then fed into the Parser class.  The important items here are the target name="parse" and the <arg value="${output}"/>.  These two items MUST be defined in the build.xml for the parsing to work, because the Testing Framework code looks for a ‘parse’ target when it executes Ant, and all of the initial test output (before any parsing is done) is collected and stored in an ‘output’ variable.

Here is a simpler version of what the ‘parse’ target could look like:

<target name="parse">
  <java classname="Parser" classpath="${build.dir}" fork="true">
    <arg value="${output}"/>
  </java>
</target>
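
For reference, here is a minimal sketch of what a parser class invoked by the target above might look like.  The filtering logic is purely illustrative; what a parser actually does is entirely up to the instructor.  Note that the full test output arrives as the first command-line argument, matching the <arg value="${output}"/> usage above, and whatever the parser prints to standard output becomes the displayed (or, via outputproperty, the next parser’s) test output:

// Illustrative parser sketch: keeps only the lines of interest from the
// raw test output. The filter below is an assumption for demonstration.
public class Parser {
    public static void main(String[] args) {
        String testOutput = args.length > 0 ? args[0] : "";
        StringBuilder result = new StringBuilder();
        for (String line : testOutput.split("\n")) {
            // Hypothetical filter: keep testcase lines and failures only.
            if (line.contains("Testcase") || line.contains("FAILED")) {
                result.append(line).append('\n');
            }
        }
        // Whatever is printed here becomes the parsed test output.
        System.out.print(result);
    }
}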

Running Tests:

*The following assumes you have uploaded the necessary test files into the Testing Framework form as detailed in Part 1 of this post.

  1. From the ‘Assignment View’, click on the sub menu ‘Submissions’.
  2. Select a group / repository to grade and go through the process of collecting and grading the revision.
  3. Once you are on the results page, you can run the tests against the code by clicking on the ‘Test Code’ icon in the ‘Test Results’ section.
  4. [Screenshot: Submission View – Test Results feature]

  5. As the tests are running, you will see a ‘Loading…’ message.
  6. Once the tests have completed, you will see either:
    • Tests Completed. View ‘<date_time>.log’ for details.  (If Ant was successful)
    • Build Failed. View ‘<date_time>.log’ to investigate.  (If Ant failed)
    • Unknown error with Ant. Is Ant correctly installed? View ‘<date_time>.log’ to investigate.  (If Ant errored out)
  7. To view the results, choose the desired log from the drop-down list beside ‘Test Results’ (by default, the latest test result log is selected) and click ‘Load’.  As the results load, ‘Loading results…’ will be displayed.
  8. The results will open in a separate window to be viewed.

Test Results Table Changes:

  1. Status column added:  This column stores the status of the Ant execution (i.e. success, failed, or error).
  2. User Id column added:  This column stores the id of the user who executed the tests, so that users may only view test runs they executed themselves.

Test Framework Repository:

*Ensure you have set your ‘TEST_FRAMEWORK_REPOSITORY’ value

Behind the scenes, when you click ‘Test Code’, everything in your assignment folder under the directory you specified for ‘TEST_FRAMEWORK_REPOSITORY’ is copied into a group repository folder.  All the assignment-related repository files are copied over as well, so the folder structure looks something like this:

TEST_FRAMEWORK_REPOSITORY/
  group_0001/
    A1/
      build.properties
      build.xml
      … <other files needed for Ant> …
      src/
        Math.java
      test/
        TestMath.java
      lib/
        junit-4.8.2.jar
      parse/
        Parser.java

Comments and Thoughts on the Current Implementation:

For now, this is a simple implementation that introduces the ability to test code with Ant and display the results.  This design can obviously be refined as needed, but Benjamin and I just wanted to get something working for us to build on.  Thinking ahead, here are some possible next steps:

  • Fork the Ant execution – Currently the Ant execution is processed inline.  We need a clean way to handle both Windows and non-Windows OS cases, since Windows does not support fork.
  • Test Results reporting – Display the number of tests passed and failed, as mentioned earlier.
  • Bulk testing – Implement this feature for when submissions are collected and graded in bulk (more than one submission at a time), so that the grader does not have to click ‘Test Code’ for each submission; instead, the test results are already available when they go in to grade each submission.
  • Purging old results – Figure out a clean way to purge old test result files from the ‘Test Results’ table.  If the tests are executed numerous times, more and more entries are added to the table, and after a while the old test runs are no longer needed.

Related Posts:

Automated Test Framework Overview
Automated Test Framework – Part 1: Assignment View
Automated Test Framework – Part 3: Student’s View

Written by diane

September 19th, 2010 at 7:00 pm
