MarkUs Blog

MarkUs Developers Blog About Their Project

Archive for the ‘test’ tag

Introducing MarkUs 1.0-alpha!

without comments

The team is pleased to announce MarkUs 1.0-alpha! This release is tagged "alpha" to allow for further testing and feedback, and all input is welcome. The release may nevertheless be used in production.

MarkUs 1.0-alpha contains many fixes and some features:

  • We are now using Rails 3.0
  • We fully support Ruby 1.9.x
  • PDF conversion now uses Ghostscript, for faster conversion
  • We introduced a new REST API
  • Users can now import and export assignment settings
  • Improvements have been made to section management
  • Tests no longer use fixtures

For a list of all fixed issues, refer to https://github.com/MarkUsProject/Markus/issues?milestone=8&state=closed. The change log can be found at https://github.com/MarkUsProject/Markus/blob/master/Changelog.

Go to http://www.markusproject.org to download it!

We suggest not upgrading existing instances: due to the significant number of changes introduced to the application, data loss is possible.

Thanks to all the contributors who made this release possible. Keep up the good work!

Written by Benjamin Vialle

September 11th, 2013 at 1:09 pm

Posted in Releases


Automated Test Framework – Part 1: Assignment View

without comments

Testing Framework Overview:

When grading a student's assignment, an instructor may want to execute a set of tests against the student's code to validate its correctness, and may have various ways of handling pass/fail cases. The testing framework has been implemented to support such operations. With Ant, a Java-based build tool, as the core of this feature, instructors can upload their test files, any library files the tests need to run, and any parsers they wish to execute against the output, and they can control compilation, testing, and output parsing through a simple build file.

Part 1 of the testing framework consists of the Test Framework form from the Assignment view which allows a user to upload the Ant, test, library and parser files required to execute tests against students’ code.  Part 2 of the testing framework consists of the actual execution of the tests from the Submission view.  Part 3 is the student’s Assignment view where the student will be allowed (assuming they have tokens available to them) to execute the tests against their code before submitting their assignments.

The following documents how to create new test files associated with an assignment, how to update and delete those files, and the expected error messages that will appear in response to invalid input.

Creating new test files:
[UI]
1) Create a new assignment and click ‘Submit’
2) Click on ‘Testing Framework’ from the sub menu and you will be redirected to the Test Framework view (by default, the test form is disabled)
3) Check ‘Enable tests for this assignment’ and the test form will be enabled
4) Fill in 'Tokens available per group' as needed.  This value is the number of times a student is allowed to execute the tests against their code before submitting their assignment.
5) Add the required Ant files, build.xml and build.properties
6) Click on 'Add Test File', 'Add Library File', or 'Add Parser File' to add the test, library, and parser files needed to run the tests.  Test files contain the test cases that will be executed against the code.  Library files are any files the tests require in order to execute properly.  Parser files (if specified) will be executed against the tests' output to extract and manipulate it as needed.
7) Click ‘Submit’ to save the changes

*Optional* You can check a test file's 'private' box to indicate that the test cannot be executed by the student.
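To give a feel for what the two required Ant files drive, here is a minimal build.xml sketch for such an assignment. The target names, paths, and JUnit wiring are illustrative assumptions, not the exact files MarkUs ships or requires:

```xml
<!-- Hypothetical minimal build file; targets and paths are illustrative. -->
<project name="A1" default="test">
  <property file="build.properties"/>

  <target name="compile">
    <mkdir dir="bin"/>
    <!-- Compile the student's code, then the instructor's tests -->
    <javac srcdir="src" destdir="bin" classpath="lib/junit-4.8.2.jar"/>
    <javac srcdir="test" destdir="bin" classpath="bin:lib/junit-4.8.2.jar"/>
  </target>

  <target name="test" depends="compile">
    <junit printsummary="yes">
      <classpath path="bin:lib/junit-4.8.2.jar"/>
      <formatter type="xml"/>
      <test name="TestMath" outfile="test_output"/>
    </junit>
  </target>
</project>
```

The accompanying build.properties would then hold site-specific values (source and output directories, for instance) that the instructor can tweak without editing the build file itself.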

Testing Framework

[DATABASE]
1) Verify table ‘test_files’ to ensure the files were saved correctly
2) Verify table ‘assignments’ to ensure ‘enable_test’ and ‘tokens_per_day’ were saved correctly

[FILESYSTEM]
1) Verify the following folder structure in the TEST_FRAMEWORK_REPOSITORY:

A1 (folder)
  - build.properties
  - build.xml
  - test/TestMath.java
  - lib/junit-4.8.2.jar
  - parse/Parser.java

Updating test files:
[UI]
1) Update any test, library, or parser file already uploaded and click 'Submit' (you can replace a file using the same filename or overwrite it with a different filename)

[DATABASE]
1) Verify updated file has been saved successfully

[FILESYSTEM]
1) Verify updated file has been overwritten successfully

(If the replacement file has a different filename, the old file will be deleted from the directory.  If it has the same filename, the old file will simply be overwritten.)

Deleting test files:
[UI]
1) Delete any test or library file by checking its 'Remove' box and clicking 'Submit'

[DATABASE]
1) Verify file has been removed from the database successfully

[FILESYSTEM]
1) Verify file has been removed from the filesystem successfully

Negative Test Cases:

Filename Validation –

build.properties:

  • cannot be named anything but ‘build.properties’
Invalid Filename

build.xml:

  • cannot be named anything but ‘build.xml’
Invalid Filename

test, library and parser files:

  • cannot be named ‘build.xml’ or ‘build.properties’
Invalid Filename

any file:

  • cannot have the same filename as any other existing test file belonging to that assignment
Duplicate Filename

  • if you attempt to upload two files simultaneously with the same filename, an error will be displayed
Duplicate File Entry

  • cannot be blank (the file entry will simply be ignored)

Assignment Validation –

tokens:

  • cannot be negative
Invalid Token Value

Other Scenarios:

  • if you change the test form in any way but then disable tests for the assignment and click 'Submit', those new changes will not be saved

Related Posts:

Automated Test Framework Overview
Automated Test Framework – Part 2: Submission View
Automated Test Framework – Part 3: Student’s View

Written by diane

August 16th, 2010 at 2:34 am

Posted in Automated Testing


Automated Testing Framework

with 3 comments

Talks concerning MarkUs Automated Testing Framework

We defined some specifications for the Automated Testing Framework we wanted to implement. First, the framework must add as few dependencies as possible, and not disturb the core of MarkUs (MarkUs must not be overcrowded by other programs for testing student’s code). The Automated Testing Framework must also support as many languages as possible, allowing users to test any language regardless of the languages MarkUs has been implemented in.

Technical side

Today, C, Java, Python, and Scheme are used at the universities where MarkUs is deployed (Department of Computer Science, University of Toronto; School of Computer Science, University of Waterloo; École Centrale de Nantes).
C++ should be easy to support, building on the work done for C.

Diane and I are going to build our framework using examples from these languages. When we speak of an Automated Testing Framework, the idea is not to build a new unit-testing framework. We aim to build an "abstraction layer" between MarkUs and the most widely used unit-testing frameworks for the languages most commonly taught at universities.

A picture is far better than words:

Automated Testing Framework - version 1

MarkUs will call an Ant script written by the TA. This script, and all necessary files for the test environment (student code, data and tests) will be sent to a “working environment”. Later, the working environment could be moved to a dedicated server.

MarkUs will ask the Ant script to perform various tasks (see Ant tasks) asynchronously. The Ant script, customizable by each instructor, will then compile, test, and run the student's code, producing XML or text output for the build and tests. Ant's output will be saved in MarkUs and filtered for instructors and students.

The student will be able to see the result of the compilation and some test results will be made available by the instructor.

Written by Benjamin Vialle

June 9th, 2010 at 10:29 am

Posted in Automated Testing


Fixture replacement

with 4 comments

One of the things I've done since I joined the project was to look at how we could make the test suite cleaner and more up to date.

That task implied some test debugging, and one annoying thing I bumped into was fixtures. A little internet roaming showed that I'm not alone in being uneasy with them.

An interesting definition of "fixture" can be found if you follow the first link:

Accessories fixed to structures or land in such a way that they can’t be independently moved without damage to themselves, or the property housing them.

Which applies strangely well to how I feel about fixtures when I use them.

Some Bad Smells

While writing tests, how many times have you asked yourself questions like:

  • Which student was part of group_1?
  • Was assignment_1 submitted?
  • Was student_3 a member of group_3 or group3? (this one is a good example of the maintainability issues)
  • Am I going to break dozens of tests if I modify that fixture field?

To me, fixtures feature the following problems:

  • They make it necessary to explore multiple files to understand a particular test;
  • It's even more complicated to figure out the links between records;
  • They generate more data than any test case needs;
  • They open the way to tests that pass alone but fail in the suite;
  • Editing a particular fixture may have side-effects in many tests;
  • It is commonly claimed that none of the above are problems if you maintain your fixtures adequately. But I found fixtures hard to maintain.

The alternatives

Exploring the alternatives, I stumbled upon the following projects (I reduced the list to those that looked appealing):

I read about them all, but have not tried them all. A short report on my experiences follows.

First attempt – FactoryGirl

My first tries were with FactoryGirl, influenced by the fact that we were already satisfied with Shoulda and that it got the best rating on the Ruby Toolbox.

I did face some problems, though. For instance, "building" an assignment also "built" a SubmissionRule that couldn't be saved because its assignment_id field was nil. I did not manage to make it build in the correct order.

I am certainly not stating this is impossible. Maybe I just did not catch the FactoryGirl philosophy quickly enough. I was probably missing something.

There were also issues with the way we generate fake in-memory repositories in the test_helper, but I believe we will run into those whichever fixture replacement we opt to use.

Second attempt – Machinist

Then I went with Machinist. The first good thing I noticed was its syntax: it is much lighter and more fun to use than FactoryGirl's. So I applied myself to writing blueprints. Blueprints are the Machinist artefact that lets us tell it how we want our objects to be generated, supplying default values that we can override later according to our particular needs, without having to mention data we do not care about when writing a test.

The whole process is really neat. I created a review request so that you can all take a peek at how I made things work. There are not yet blueprints for every model object, but I managed to (easily) write working blueprints for many of our classes: roughly twice the number of classes, without hassle, in half the time compared to FactoryGirl.
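The blueprint idea itself is simple enough to sketch in a few lines of plain Ruby. The Blueprint class and the student fields below are a hypothetical stand-in for illustration; they are not Machinist's real implementation, nor MarkUs's actual blueprints:

```ruby
# A minimal sketch of the blueprint pattern: declare defaults once,
# then let each test override only the fields it cares about.
class Blueprint
  def initialize(&defaults)
    @defaults = defaults
  end

  # Build a fresh object from the defaults, applying any overrides.
  def make(overrides = {})
    object = @defaults.call
    overrides.each { |key, value| object[key] = value }
    object
  end
end

# Hypothetical student blueprint (fields invented for the example).
student_blueprint = Blueprint.new do
  { user_name: "student1", first_name: "Ada", last_name: "Lovelace" }
end

default_student = student_blueprint.make
alice           = student_blueprint.make(user_name: "alice")
```

Because the defaults block is re-evaluated on every call to make, tests stay independent of one another, which is exactly the property fixtures lack.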

Once more, I do not want to sound like I am bashing FactoryGirl, which definitely looks like a great tool that a lot of people out there are using. I am simply reporting on my experience, and, to me, the bottom line is that Machinist came a lot more naturally.

Like FactoryGirl, Machinist does not happily coexist with fixtures (partly because of the fake test repositories), reinforcing the idea that if we replace the fixtures, we will have to migrate the test suite completely, which makes sense since the whole suite depends on fixtures[1]. That is a lot of work ahead.

What is to be Gained?

First things that come to mind:

  • An easier-to-maintain test suite;
  • Fewer side-effects when modifying tests;
  • Better test readability. Tremendous enhancements have been made in that area since the beginning of the semester, but there is always room to do better.

What is to be lost?

We can expect some performance loss (during test execution) if we switch to a data generator, whichever one we pick. This is bad. But if we want our unit tests to run fast, we should consider letting them be real unit tests and not hit the database at all. Then we could have, say, model tests that verify the relations between our objects using generated data.

Temporary Final Word

An eventual transition from fixtures to, say, Machinist, would take quite some time and would introduce even more inconsistencies in the test suite until completed. That definitely has to be kept in mind. This is no piece of cake. But the final product would most probably be a better, stronger, easier-to-maintain, easier-to-read test suite, which is not something we should overlook.

My vote? Let’s pack all the courage we got[1] and sail toward a fixture-less world for the benefit of the generations to come!

——–

  1. An Agile development valued practice.

Written by gabrielrl

December 4th, 2009 at 6:51 pm

Quick Word on Test Taxonomy

with one comment

One thing (among many others) I learned about tests last semester is that, depending on the environment (or the culture you're developing in), tests get different names.

Here’s an example where Rails’ naming conventions differ from what I have learned in my QA course[1]:

Rails’ Unit Tests are not Unit Tests

Considering that a unit test should test an object in complete isolation, it is a breach of that rule to have a "unit" test access the database. Rails' unit tests are really functional tests, where we make sure that the model behaves accordingly (for one) and that its mapping to the database is correct (for two). Another thing about unit tests is that they should be fast to run. Under 30 seconds is a generally good standard (varying with the environment). Let me guess: you need somewhat more than 30 seconds to run rake test:units on MarkUs. On my machine I get a run time of approximately 31 seconds, which would be good, were it not for the fact that this does not include the database setup time, nor (I believe) the setup and teardown time between tests[2].

Side note: I also get 1 error and 1 failure. That makes me want to emphasize that fast-to-run tests get run more often. If the tests are not run, they lose their ability to help find bugs early, get deprecated, and become more of a burden than a great tool.

On the other hand, every model object is so tightly coupled with ActiveRecord that one could argue we do not really want to test our model classes without ActiveRecord participating.

Anyway, this is an example of what I was stating in the first paragraph. In this case it is commonly accepted, in Rails culture, that what I could refer to as "open-box integration testing of models and ActiveRecord"[3] should be called "a unit test". The latter has some clear advantages over the former, and the most important thing is to make sure that everyone on the team is speaking the same language.
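To make the distinction concrete, here is a sketch of a unit test in the strict sense, written against a made-up value object. GradeCalculator is invented for illustration and is not a MarkUs model; I use Minitest here, but the shape is the same with Test::Unit:

```ruby
require 'minitest/autorun'

# A made-up value object: pure logic, no ActiveRecord, no database.
class GradeCalculator
  def initialize(mark, out_of)
    @mark = mark
    @out_of = out_of
  end

  def percentage
    (@mark.to_f / @out_of * 100).round
  end
end

# A unit test in the strict sense: it runs in milliseconds and
# touches nothing outside the object under test.
class GradeCalculatorTest < Minitest::Test
  def test_percentage_rounds_to_nearest_integer
    assert_equal 83, GradeCalculator.new(25, 30).percentage
  end

  def test_full_marks_give_100
    assert_equal 100, GradeCalculator.new(30, 30).percentage
  end
end
```

A suite made of tests like this stays comfortably under the 30-second bar, because nothing ever waits on the database.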

Refactoring the test/ folder

As Adam told us during the code sprint, it is common practice among Rails developer teams to refactor the test folder at the very start of a new project. One way of doing this could give something like:

  • test/ remains test/
  • unit/ becomes model/
  • functional/ becomes controller/
  • unit/ gets added (and contains model unit tests that do not hit the database)
  • functional/ gets added (and contains selenium tests)
  • acceptance/ gets added (and contains cucumber tests)

Being loose

I have myself been pretty loose all those times I referred to Cucumber tests simply as acceptance tests. If you dig a little, you'll find that I was more specifically talking about "user acceptance" tests.

Convention Over Elite Correctness

It’s written up there, but the key thing is to speak the same language within the team. You’ve experienced it: Rails is all about conventions. Keeping the usual layout may help newcomers to the project, as well as experienced Rails developers, to understand exactly what is supposed to go where (and what they can expect to find where) when it comes to tests.

On the other hand, from what I can tell, newcomers to the project are also newcomers to Ruby and Rails, which, I believe, leaves the way open for a little structure refactoring.

——–

  1. Which, in turn, did not propose an absolute answer to "how should a specific test be named?" but rather focused on providing us with the necessary tools (read: knowledge) to test the software we develop efficiently and intelligently.
  2. Does anybody know?
  3. and I am no authority on the question. I also intentionally made it too long for its own good 😉

Written by gabrielrl

December 4th, 2009 at 5:03 pm

Cucumber reporting in

without comments

About Cucumber

Cucumber is now integrated in MarkUs’ trunk. Hurray!

I greatly encourage you to try it; it's pretty simple. First, update to the latest revision (just like you do every morning before breakfast, right?) and then visit the wiki page. After reading the requirements and running-the-tests sections, you're ready for a test run.

I’ll make sure to update the wiki/post/code with any feedback you provide.

About Acceptance Tests

Here are some guidelines to keep in mind when you write acceptance tests.

Speak the Client’s Language

Acceptance tests are directed at the client. This is why they should speak the client's mother tongue and use domain-specific language. There should be no reference to the code, and the tests should conceptually stay at the user level.

You Don’t Have to Test Everything

You probably know by now that it's impossible to test everything. But unlike a unit or functional test, which you want to be as thorough as you can, an acceptance test aims at asserting that a particular feature is actually there. So, most of the time, one case per feature's functionality is enough. Also, you don't want to test for error cases. This is just not the place: unless explicitly requested by the client, error-case tests belong in unit/functional tests.

An acceptance test's output should not get convoluted with tons of borderline special cases. The output should be just what the client needs to assert that all of the requested functionality is implemented and working; that is, it should give the client some insight into whether or not to accept the software.
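As an example of what "speaking the client's language" looks like in practice, here is a Cucumber feature sketch. The feature and its steps are invented for illustration and are not taken from MarkUs's actual suite:

```gherkin
Feature: Submitting an assignment
  As a student
  I want to submit files for an assignment
  So that my work can be graded

  Scenario: Successful submission before the due date
    Given I am logged in as a student
    And the assignment "A1" is open for submission
    When I submit the file "Math.java"
    Then I should see "Math.java" in my list of submitted files
```

Note that nothing here mentions controllers, models, or the database: every line reads at the user level, which is exactly the point.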


Written by gabrielrl

December 4th, 2009 at 4:41 pm

Single Table Inheritance, and Testing Fixtures

without comments

I’m still trying to get our unit tests up and running, and I ran into a snag a few hours ago.

I tried to run a single unit test, on the Admin model (which is a subclass of type User). I kept getting this error message:


ActiveRecord::StatementInvalid: PGError: ERROR: relation "students" does not exist
: DELETE FROM "students"

Hrmph. We don’t have a “students” table, so of course that won’t work. We have Students (which, like Admins, subclass from User), but we certainly don’t have a “students” table.

So how come it’s trying to access that table?

It turns out that Rails test fixtures don't deal with Single Table Inheritance: a Rails fixture is simply a YAML file that populates one particular table with its contents.

And it turns out I had a fixture called “students.yml” in my test/fixtures folder. So, Rails tried to connect to the “students” table, insert some records, and clear them out afterwards.

The solution was to remove the students.yml, tas.yml, and admins.yml files, and simply have a users.yml file. The same goes for student_memberships.yml and ta_memberships.yml. Replace those with memberships.yml. Boom. Tests run.
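For illustration, the merged users.yml might look like the sketch below. The record names and fields are guesses rather than MarkUs's actual fixture data, but the key point is the type column, which tells Rails which subclass each row belongs to:

```yaml
# users.yml: one fixture file for the single `users` table.
# The `type` column drives Single Table Inheritance.
olm_admin:
  user_name: olm
  type: Admin
alice:
  user_name: alice
  type: Student
bob:
  user_name: bob
  type: Ta
```

With all rows in one file, Rails only ever touches the `users` table, and the "relation does not exist" error goes away.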

Now it’s just a matter of getting some good content in the fixtures, and getting some solid tests written…

Written by m_conley

May 26th, 2009 at 1:54 pm

Migrate Your Tests

without comments

Rails has a nice way of maintaining versions of your database schema through ActiveRecord::Migration.  This lets you modify your existing schema without the hassle of manually copying the same schema to all of your deployed instances; it happens automatically for you.

When I change the schema, I usually drop all the tables and recreate them using:

rake db:migrate VERSION=0
rake db:migrate

and then repopulate my development DB environment using a script I wrote.  However, it turns out that this doesn't automatically migrate your test environment as well. I tried creating a simple test to make sure that my ActiveRecord validation works, but I ended up getting this error:

test_numericality_group_limit(AssignmentTest):
NoMethodError: undefined method `group_limit=' for #<Assignment:0xb7242018>
/var/lib/gems/1.8/gems/activerecord-2.1.0/lib/active_record/attribute_methods.rb:251:in `method_missing'
test/unit/assignment_test.rb:11:in `test_numericality_group_limit'
/var/lib/gems/1.8/gems/activesupport-2.1.0/lib/active_support/testing/setup_and_teardown.rb:33:in `__send__'
/var/lib/gems/1.8/gems/activesupport-2.1.0/lib/active_support/testing/setup_and_teardown.rb:33:in `run'

After hours of hunting for the bug, I found out that you have to migrate your test DB environment as well, by executing the following:

rake db:schema:dump
rake db:test:prepare

This takes the schema you currently have in your development environment and copies it to your test environment.

Written by Geofrey

August 6th, 2008 at 10:49 am