This is the sixth in a series of blog posts documenting the development of an XCRI-CAP 1.2 validator. The entire series can be found by visiting the validator blog post list. My aim is to post a new blog post every week until the development is complete, then at least once a month to document both the usage of the validator and any community-highlighted modifications or issues.

Last week's blog post concentrated on the feedback that I've received so far for the validator, how useful it has been, and how it has been integrated into the validator code-base. This week, though, I have been concentrating purely on testing: both automated testing and testing by humans.

Automated testing allows individual sections of code to be run automatically whenever the code changes. Each test puts the validator into a known state, runs some code, then checks that the validator behaved as expected. Some of these tests exercise the underlying validation code itself, some check the expected vocabulary values (e.g. languages, studyMode, etc.), and some check the rules held within the rule-base against XML fragments. Every time code on the validator is changed, these tests are re-run; knowing whether they pass or fail gives us more confidence in the quality of the validation being performed. In last week's blog post I stated that there were 212 automated tests. As of the time of writing there are now 620. The good news is that this automated testing has uncovered a couple of issues with the rule-base, typically around the XPath selectors that were being used. These issues were resolved as the tests were written.
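To illustrate the shape of these tests, here is a minimal sketch in Python's unittest style. To be clear, this is not the validator's actual code: the select_titles helper and the test names are hypothetical stand-ins for a rule-base selector, though the XCRI-CAP 1.2 and Dublin Core namespaces are the real ones. The pattern is the same as described above: put things into a known state (build an XML fragment), run the selector, then assert on the outcome.

```python
import unittest
import xml.etree.ElementTree as ET

# Namespaces used by XCRI-CAP 1.2 and the Dublin Core elements it reuses.
XCRI_NS = "http://xcri.org/profiles/1.2/catalog"
DC_NS = "http://purl.org/dc/elements/1.1/"


def select_titles(fragment):
    """Hypothetical selector: find every dc:title beneath the fragment root."""
    root = ET.fromstring(fragment)
    return [el.text for el in root.findall(f".//{{{DC_NS}}}title")]


class RuleSelectorTests(unittest.TestCase):
    def test_selector_finds_title(self):
        # Known state: a minimal course fragment containing one title.
        fragment = (
            f'<course xmlns="{XCRI_NS}" xmlns:dc="{DC_NS}">'
            "<dc:title>Introduction to Widgets</dc:title>"
            "</course>"
        )
        # The selector should pick out exactly that title.
        self.assertEqual(select_titles(fragment), ["Introduction to Widgets"])

    def test_selector_reports_missing_title(self):
        # A course with no title yields an empty result, which a
        # validation rule could then report as an error.
        self.assertEqual(select_titles(f'<course xmlns="{XCRI_NS}"/>'), [])


if __name__ == "__main__":
    unittest.main()
```

Tests of this shape are cheap to write but catch exactly the class of problem mentioned above: an XPath selector that silently matches nothing against a fragment it should match.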

In addition to the automated testing, Alan and Jennifer Paull have also found some time in their hectic schedules to manually test the validator with test files. They have identified additional rules from the Data Definitions Document that also needed to be run, as well as a number of really useful usability/readability issues. All of the additional rules have been implemented (and automated tests written!), and most of the usability/readability items have also been resolved. This process has been exceptionally useful, and the quality of the end product should be much improved as a direct result.

As a final comment, the more eagle-eyed of you may have noticed that the validator now has a new home. This will be its permanent address from now on, and I recommend that you update any links you may have. Anyone going to the old address will, of course, be automatically redirected across.

Please feel free to highlight any issues you encounter with the validator either via Twitter, as a comment to this post, or on the XCRI Forum.