provide an analysis method for cleared input (callable for SWAs via the method blank-inputs)

provide means to show submissions of students per test item

added type per question to exam overview

provide a nicer exam-overview

provide more detailed test-item descriptions for exam/question overviews

added exam-overview

make grading checks configurable via URL, make calculations more robust


added policy for supporting view of revisions, used more detailed message key values for supporting rounding by points or revisions

Improve further: flush the object only when the state changes

Improve fix:

In FormPage www-edit, just after the save_data operation, we render the FormPage to refresh the references. We should flush the form object cache here, as otherwise any formfield spec will come from the form in its previous state.

Make sure that the form object is flushed whenever the state might have changed

If code executed after the state change accesses the form object again, it could get the object from a previous state and e.g. hold the wrong form definition. This happens in practice downstream, where the submission of activities also involves accessing the form definition, e.g. to compute the grade based on the questions. Downstream we also cache the formfield specs, so if a spec depends on the state, it might also be wrong for future requests.
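
For illustration, here is a minimal, self-contained Tcl sketch of the stale-cache hazard and the flush-on-state-change pattern described above; a plain array stands in for the real xowiki object cache, so all names are illustrative assumptions rather than the actual API:

---
# Sketch only: a plain array stands in for the real object cache.

array set form_cache {}

proc load_form {form_id} {
    global form_cache forms
    if {![info exists form_cache($form_id)]} {
        set form_cache($form_id) $forms($form_id)  ;# expensive fetch in reality
    }
    return $form_cache($form_id)
}

proc flush_form {form_id} {
    global form_cache
    unset -nocomplain form_cache($form_id)
}

set forms(42) {state initial fields {q1 q2}}
puts [load_form 42]                ;# the cache now holds the "initial" state

# State change (as in www-edit after save_data): without a flush, any later
# reader would still get the old definition from the cache.
set forms(42) {state published fields {q1 q2 q3}}
flush_form 42                      ;# flush whenever the state might have changed
puts [load_form 42]                ;# now reflects the new state
---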

improve listing of covered methods

added support for pagination buttons, visited buttons and flagged buttons

fix regression test which was broken due to the last change (move of links to answer panel)

moved answer status into answer_panel (similar to downstream),

made sensitivity of inspect links update automatically (like downstream)

made templating easier and refactored code

Plug the proctoring-display include into the inclass-exam via a web-callable method

call 'next' to ensure file attachments are stored in the database and content repository

fix attachments for short_text_interaction

spelling improvements and whitespace changes

added support to "Answer_manager.get_answers" to also return non-exercise-specific attributes

extend role manager to include rel_type based roles (especially useful for DotLRN)

include recent site-wide pages, modernize code

whitespace changes

perform proper login for all tests

remove unneeded line

added functionality to prevent opening the same exam in multiple tabs

Perform proper login with the test user such that session_ids and cookies are correctly set up

Added substitution values for short text answers

This change adds the possibility to provide randomized substitution values to short text questions via value sets.

Value sets are a means for a content developer to provide multiple matching answers which are inserted into the text before an exercise is shown to the end user. For a calculation exercise, one can e.g. provide several input values and some output values, such that students get different calculation exercises. These values can also be used for the correct-when clauses.

The content developer can use percent-code delimited elements when defining the exercise:

---

Assume, you want to download a %x.what% with the size of %x.size% over a %x.type% connection with a rate of %x.rate% from %univ%.

---

and also in "correct when"

---

%x.secs%

---

The value sets can be provided via an extra field for the short-text questions and have the form of a Tcl dict:

---
univ {WU-Vienna TU-Vienna "University of Vienna"}
x {
  {type "ADSL" rate "256 kbit/s" size "235 MB" secs 5300 what "Powerpoint file"}
  {type "ADSL" rate "512 kbit/s" size "5.6 MB" secs 91 what "PDF file"}
  {type "4G" rate "80 Mbit/s" size "270 MB" secs 27 what "PDF file"}
  {type "4G" rate "40 Mbit/s" size "650 MB" secs 32 what "Lecturecast Video"}
  {type "5G" rate "1 Gbit/s" size "520 MB" secs 4 what "Powerpoint file"}
  {type "5G" rate "1 Gbit/s" size "1.5 GB" secs 12 what "Lecturecast Video"}
}
---

In this example, every student will get a randomly chosen value for the university (%univ%) and matching elements containing the answer (e.g. download time of "270 MB" over "80 Mbit/s" is 27 seconds).

The download time is used in the correct when part, such that auto-correction can be applied.

When a student answers this exercise, the system provides random choices that are substituted in the text.

For every variable ("univ", "x", ...) a different random value is used for the student. Other students will, of course, get other numbers and results.

Note that these value sets can be used for numeric and non-numeric exercises.

Current limitations:

- only defined for short-text questions (could in general also be used for other question types)

- no elaborate user interface for entering value sets (or a thorough validator) is provided.
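
For illustration, here is a minimal, self-contained Tcl sketch of how such a substitution could work; the helper proc and the overall flow are illustrative assumptions, not the actual xowf implementation:

---
# Pick one random alternative per variable and substitute the %...% codes.
# "pick_random" and the dict layout below are illustrative only.

set value_sets {
    univ {WU-Vienna TU-Vienna "University of Vienna"}
    x {
        {type "ADSL" rate "256 kbit/s" size "235 MB" secs 5300 what "Powerpoint file"}
        {type "4G" rate "80 Mbit/s" size "270 MB" secs 27 what "PDF file"}
    }
}
set text {Assume, you want to download a %x.what% with the size of %x.size% over a %x.type% connection with a rate of %x.rate% from %univ%.}

proc pick_random {candidates} {
    lindex $candidates [expr {int(rand() * [llength $candidates])}]
}

set map {}
dict for {var alternatives} $value_sets {
    set choice [pick_random $alternatives]
    if {[llength $choice] > 1 && [llength $choice] % 2 == 0} {
        # structured alternative: expose its keys as %var.key%
        dict for {k v} $choice { lappend map %$var.$k% $v }
    } else {
        lappend map %$var% $choice
    }
}
puts [string map $map $text]
---

In the real system the chosen values would of course also have to be available to the correct-when evaluation; this sketch only covers the text substitution.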

use test user_id instead of current user for running tests

Added two types of grading schemes (in addition to "exact") to ordering exercises:

- "position": count elements as correct, when these are on the correct position

- "relative": count elements as correct, if the neighboring element is correctly before the actual element

The results are adjusted by the same guessing correction as in the "ggw" scheme for MC exercises.

Example:

- desired order: 1,2,3,4

- provided answer: 3,1,2,4

- scheme "exact": 0%

- scheme "position": 0 0 0 1

- scheme "relative": 0 1 1 (correctly ordered 1<2 and 2<4)

A minor refactoring was also performed to ease code reuse.
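
For illustration, a minimal self-contained Tcl sketch of how the two new schemes could be computed; the proc names are hypothetical, and the guessing correction mentioned above is omitted:

---
# Hypothetical helpers illustrating the "position" and "relative" schemes.

proc score_position {desired answer} {
    # an element counts as correct when it sits at its desired position
    set hits 0
    foreach d $desired a $answer {
        if {$d eq $a} { incr hits }
    }
    expr {double($hits) / [llength $desired]}
}

proc score_relative {desired answer} {
    # a neighboring pair counts as correct when its first element is also
    # ordered before the second one in the desired order
    set hits 0
    set pairs [expr {[llength $answer] - 1}]
    for {set i 0} {$i < $pairs} {incr i} {
        set a [lindex $answer $i]
        set b [lindex $answer [expr {$i + 1}]]
        if {[lsearch -exact $desired $a] < [lsearch -exact $desired $b]} {
            incr hits
        }
    }
    expr {double($hits) / $pairs}
}

# example from above: desired order 1 2 3 4, provided answer 3 1 2 4
puts [score_position {1 2 3 4} {3 1 2 4}]   ;# 0.25  (only "4" is in place)
puts [score_relative {1 2 3 4} {3 1 2 4}]   ;# ~0.67 (pairs 1<2 and 2<4 correct)
---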
