A test is a special kind of assessment that allows the respondee's
answers to be graded immediately. Unless otherwise stated, all pages
described are viewable by admins only.
- Settings
- Assessment: a selectbox containing all the assessments in the
same subnode as the test package, so the administrator knows for which
assessment the test will be created.
- Valid Results: select (first, last, highest, median).
Describes which try's points to use if a user has the option to take a
test multiple times; median is the median over all tries (see the
sketch after this list).
- Publication of Results: radio. Publish results once the test is
submitted, once the test has been evaluated by a TA, or never. This
always refers to the respondee; admins can view the results at any
time.
- What to show in publication: checkboxes.
- Question. The question that has been asked
- Answer. The answer the respondee has given
- Points. The number of points the respondee has received for
the answer
- Evaluation comment. The evaluation comment given by the
evaluator
- Correct Answer. The correct answer to the question
- Feedback. The feedback that is stored with the given
answer
- Feedback for question. The feedback that is stored with
the question
- Total result. The total result of the test (at the
bottom)
- Submit each answer separately: boolean (yes/no). Whether the
user has to submit each answer separately.
- Answer changeable: boolean (yes/no). Whether the user can
change a submitted answer.
- Finish button: boolean (yes/no). Allow the respondee to
finish the test early, despite not having answered all the questions.
The respondee will not have the option to go back and continue the test
afterwards (like handing in your written test to the TA in an on-site
exam).
- Allow answers to be seen: boolean (yes/no). If set to no, the
respondee is not allowed to revisit his answers.
- Evaluation overview. This is a page with a table of all
respondees with
- Smart display (to limit the number of respondees per page)
- Name. Name of the respondee, possibly with email address
- Test result (with the maximum number of points in the header).
Number of points the respondee has achieved in this test
- All tries with
- Points. Number of points for this try (taken from the
scoring system)
- Time. Time taken for a try (yes, we will have to store
the time needed for a try)
- Status. Status of the try (not finished, finished,
auto-graded, manually graded)
- Link to evaluate single response (human grading in
test-processing.html)
- The try that is used for scoring the respondee is
displayed with a green background. If the median of all tries is
used, all of them are marked green.
- Furthermore, links to the test details, reports, and
summary are given.
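The selection of the scoring try could work roughly as in the sketch
below. This only illustrates the "Valid Results" setting (first, last,
highest, median); the function and field names are assumptions, not
the actual data model.

    # Minimal sketch of the "Valid Results" selection; tries are assumed
    # to be ordered by the time they were taken.
    from statistics import median

    def scoring_points(tries, valid_results):
        points = [t["points"] for t in tries]
        if valid_results == "first":
            return points[0]
        if valid_results == "last":
            return points[-1]
        if valid_results == "highest":
            return max(points)
        if valid_results == "median":
            return median(points)  # median over all tries
        raise ValueError("unknown Valid Results setting: " + valid_results)

With the median setting no single try is "the" scoring try, which is
why the evaluation overview marks all tries green in that case.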
Test processing happens in a multi-stage process:
- The system evaluates the test as well as it can.
- The results of this auto-grading are displayed in the evaluation
table for the admin (see test specifications)
- The test result is stored in the scoring system.
- Staff can manually human-grade the test. For open questions this
is mandatory for the test to be completely graded.
- The result of the human grading overwrites the auto-grading in
the scoring system (a sketch of this follows the list).
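A minimal sketch of the overwrite behaviour in the last two steps,
assuming a simple keyed store; all names are made up for illustration,
and the rule that an auto grade never replaces an existing human grade
is an assumption, not part of the spec.

    # Sketch: the scoring system keeps one entry per (respondee, test, try).
    scores = {}

    def record_grade(respondee_id, test_id, try_id, points, source):
        # source is "auto" or "human"; a human grade overwrites the auto
        # grade, while an auto grade is assumed not to replace a human one.
        key = (respondee_id, test_id, try_id)
        current = scores.get(key)
        if (source == "human" or current is None
                or current["source"] != "human"):
            scores[key] = {"points": points, "source": source}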
Autograding differs between the question types the test contains. For
future versions it should be possible to easily add other types of
information that will be autograded. All autograding follows this
scheme (a generic sketch follows the list):
- The answer is taken from the respondee's response
- It is compared with the correct answer
- A percentage value is returned
- The percentage value is multiplied by the points for the
question in the section (see the assessment document for more info).
- The result will be stored together with a link to the response
for the particular question in the scoring system.
- Once all questions are processed, the result for the test
is computed and stored, with a link to the response, in the scoring
system.
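As a rough illustration of this scheme (all names are assumptions and
the storage calls are only hinted at in comments):

    # Sketch of the generic autograding flow. grade_fn is a per-question-type
    # function that compares the correct answer with the given answer and
    # returns a percentage between -100 and 100 (see the list below);
    # question["points"] are the points defined for the question in its
    # section.
    def autograde_question(question, answer, grade_fn):
        percentage = grade_fn(question, answer)
        points = percentage / 100.0 * question["points"]
        # store points together with a link to the response for this question
        return points

    def autograde_test(questions, answers, grade_fns):
        total = 0.0
        for question, answer in zip(questions, answers):
            total += autograde_question(question, answer,
                                        grade_fns[question["type"]])
        # store the test result with a link to the response
        return total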
Autograding is different for each type of question; a sketch of the
per-type scoring follows the list below.
- Multiple Choice
- All or nothing. In this scenario the system checks whether the
respondee has chosen all correct answers and none of the incorrect
ones. If this is the case, the respondee gets 100%, otherwise nothing.
- Cumulative. Each answer has a certain percentage associated
with it, which can also be negative. For each option the user chooses
he gets the corresponding percentage. If negative points are allowed,
the user can end up with a negative percentage. In any case, a user can
never get more than 100% or less than -100%.
- Matching question
- All or nothing: The user gets 100% if all matches are correct,
0% otherwise.
- Equally weighted: Each match is worth 100/{number of matches}
percent. Each correct match gives the corresponding percentage, and
the end result is the sum over all correct matches.
- Allow negative: If the matches are equally weighted, each
correct match adds the match percentage (see above) to the total, and
each wrong match subtracts the match percentage from the total.
- Obviously it is only possible to get up to 100% and not less
than -100%.
- Short answer question
- For each answerbox the possible answers are looked up.
- The response is matched against each of the possible answers:
- Equals: Only award the percentage if the strings
match exactly (case sensitivity depends on the setting for the
question).
- Contains: If the answer contains the exact string, the
percentage is granted. To give percentages for multiple words, add
another answer to the answerbox (so instead of one answer containing
"rugby soccer football", have three, one for each word).
- Regexp: A regular expression is run on the answer.
If the result is 1 (the expression matches), grant the percentage.
- The sum of all answerbox percentages is granted to the
response. If allow negative is true, even a negative percentage can be
the result.
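The per-type percentage computations could be sketched as follows for
choice and matching questions; the data shapes (sets of option ids,
dicts of per-option percentages and matches) and the mode names are
assumptions, not the actual implementation.

    # Multiple choice, all or nothing: 100% only if exactly the correct
    # options were picked.
    def mc_all_or_nothing(correct_options, picked_options):
        return 100.0 if set(picked_options) == set(correct_options) else 0.0

    # Multiple choice, cumulative: sum the (possibly negative) percentages
    # of the picked options; clamp to [-100, 100], or to [0, 100] if
    # negative results are not allowed.
    def mc_cumulative(option_percentages, picked_options, allow_negative):
        total = sum(option_percentages[o] for o in picked_options)
        lower = -100.0 if allow_negative else 0.0
        return max(lower, min(100.0, total))

    # Matching question: all or nothing, equally weighted, or equally
    # weighted with wrong matches subtracting their share.
    def matching(correct_matches, picked_matches, mode):
        n = len(correct_matches)
        per_match = 100.0 / n
        correct = sum(1 for left, right in picked_matches.items()
                      if correct_matches.get(left) == right)
        if mode == "all_or_nothing":
            return 100.0 if correct == n else 0.0
        total = correct * per_match
        if mode == "allow_negative":
            wrong = len(picked_matches) - correct
            total -= wrong * per_match
        return max(-100.0, min(100.0, total))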
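For the short answer type, matching a response against the possible
answers of an answerbox could look like this; re.search stands in for
whatever regexp facility the system actually uses, the field names are
assumptions, and whether more than one possible answer per answerbox
may score is left open by the spec.

    import re

    def match_answer(possible, given, case_sensitive):
        # possible: {"kind": "equals"|"contains"|"regexp",
        #            "text": ..., "percentage": ...}
        text, target = given, possible["text"]
        if not case_sensitive:
            text, target = text.lower(), target.lower()
        if possible["kind"] == "equals":
            return text == target
        if possible["kind"] == "contains":
            return target in text
        if possible["kind"] == "regexp":
            flags = 0 if case_sensitive else re.IGNORECASE
            return re.search(possible["text"], given, flags) is not None
        return False

    def short_answer(answerboxes, responses, case_sensitive, allow_negative):
        total = 0.0
        for box, given in zip(answerboxes, responses):
            # assumption: every matching possible answer contributes its
            # percentage to the answerbox
            total += sum(p["percentage"] for p in box["answers"]
                         if match_answer(p, given, case_sensitive))
        return total if allow_negative else max(0.0, total)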
Human grading will display all the questions and answers of the
response, along with the possibility to re-evaluate the points and give
comments. The header will display the following information:
- Title of the test
- Name of the respondee
- Number of the try / total number of tries
- Status of the try (finished, unfinished, autograded, human graded
(by whom))
- Start and end time for this try
- Time needed for the try
- Total number of points for this test: integer. Prefilled with the
current value for the response.
- Comment: richtext. Comment for the number of points given.
Prefilled with the current version of the comment.
For each question the following will be displayed:
- Question Title.
- Maximum number of points for this question.
- Question.
- New Points: integer. Prefilled with the current value for the
response. This gives staff the possibility to award a different number
of points for whatever reason.
- Comment: richtext. Comment for the number of points given.
Prefilled with the current version of the comment.
- Answer. The answer depends on the question type.
- Multiple Choice: The answer is made up of all the options, each
with a correct/wrong statement (in case of an all-or-nothing multiple
choice) or a percentage in front of it (depending on the response),
plus a small marker that shows which option the respondee picked.
Correct/wrong depends on whether the respondee handled this option
correctly (if he picked an option that he should not have picked, this
option gets a wrong in front); a small sketch of this follows at the
end of this section.
- Matching question: The item on the left side and the picked
item are displayed in a connecting manner. A correct/wrong statement
is added depending on whether the displayed (and responded) match is
correct.
- Open Question: The answer is displayed as written by the
user; furthermore, the correct answer is displayed. This should allow
the TA to easily come to a conclusion concerning the number of
points.
- Short Answer: For each answerbox the response will be
displayed along with the percentage it got and all the correct answers
for this answerbox (with their percentages). It might be interesting to
display regexps here :-).
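The correct/wrong marker per option in the multiple choice display
could be derived as in this small sketch (names assumed): an option is
marked correct when the respondee handled it correctly, i.e. he picked
a correct option or left an incorrect one unpicked.

    def option_marker(option_is_correct, picked):
        return "correct" if picked == option_is_correct else "wrong"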