Added option to download results of an exam

The download is a CSV file and consists of one line per question/sub-question, including comments. This feature can be selected after the exam has been evaluated once via the exam protocol.

  1. … 4 more files in changeset.
Added support for manual grading and individual feedback

- Lecturers can provide points and feedback comments directly via the exam protocol
- Grading is allowed when the student has submitted the exam or the exam is not open
- Composite questions are graded at the sub-question level
- Manual grades have a higher priority than automatic grades (see the sketch after this list)
- Manual grades/comments can be undone by clearing the field (the missing points, automatically computed points, ... are then shown again)
- Grades and comments are included in the exam review for students ("Einsicht")
- Grading interactions are implemented as AJAX calls (no need to redraw the exam protocol, immediate feedback)
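
A minimal sketch (not the actual xowf code) of how a manual grade could take precedence over automatically computed points; the field names manualPoints and autoPoints are assumptions for illustration only:

    // Hypothetical grade resolution: a manual grade, when present, overrides
    // the automatically computed points; clearing the manual field falls back
    // to the automatic result again.
    interface SubQuestionGrade {
      autoPoints: number;      // points computed by the automatic grader
      manualPoints?: number;   // points entered manually by the lecturer
      comment?: string;        // individual feedback comment
    }

    function effectivePoints(g: SubQuestionGrade): number {
      return g.manualPoints ?? g.autoPoints;
    }

    // Example: the manual grade wins; a cleared field falls back to the
    // automatically computed points.
    console.log(effectivePoints({ autoPoints: 3, manualPoints: 5 })); // 5
    console.log(effectivePoints({ autoPoints: 3 }));                  // 3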

  1. … 5 more files in changeset.
added support for optionally turning off proctoring recordings

This option is useful, e.g., for mock exams to avoid privacy issues.

  1. … 10 more files in changeset.
remove obsolete statements

added title for "inclass-exam-submit"

  1. … 2 more files in changeset.
add titles to first/next/previous question buttons

  1. … 2 more files in changeset.
added tailored title for "logout" button, fixed typos in message catalog

  1. … 2 more files in changeset.
Support for pool questions in the test-item family

Features:

- select random questions from some folder (currently siblings, i.e., folders of the same package instance)
- one exam can have multiple pool questions, potentially from different pools
- pools can be links to other folders (which are not siblings)
- the current folder can be used as a pool folder as well; in this case, other unused items can be selected as replacement items, given these match the specified filter characteristics
- Question filtering (see the sketch after this list):
  * filter by item type: allow one to select all/some item types (mc, sc, short text, reordering, composite, ...); use as replacement items only items of these types
  * filter by minutes and/or points: use as replacement items only items with matching points/minutes
  * filter by language: use as replacement items only items in a certain language
  * filter by item name pattern: use as replacement items only items matching a name pattern. When (short) names are used systematically, one can, e.g., use the date in the name and specify only items from one year ago via "*2020*", or from some chapter via "*ch01", ... Certainly, it is also possible to use different item folders for this purpose.
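
A rough sketch (illustrative only, not the actual xowf implementation) of how a replacement item could be matched against the filter characteristics listed above; all field and parameter names are assumptions:

    // Hypothetical pool item and filter shapes; the real data model lives in
    // the Tcl-based test-item infrastructure.
    interface PoolItem {
      name: string;        // (short) name of the item, e.g. "exam-2020-ch01-q3"
      itemType: string;    // "mc", "sc", "short_text", "reordering", "composite", ...
      points: number;
      minutes: number;
      language: string;    // e.g. "en", "de"
    }

    interface PoolFilter {
      itemTypes?: string[];     // accept only these item types
      points?: number;          // accept only items with exactly these points
      minutes?: number;         // accept only items with exactly these minutes
      language?: string;        // accept only items in this language
      namePattern?: RegExp;     // e.g. /2020/ or /ch01$/ derived from "*2020*" / "*ch01"
    }

    function matchesFilter(item: PoolItem, f: PoolFilter): boolean {
      if (f.itemTypes && !f.itemTypes.includes(item.itemType)) return false;
      if (f.points !== undefined && item.points !== f.points) return false;
      if (f.minutes !== undefined && item.minutes !== f.minutes) return false;
      if (f.language && item.language !== f.language) return false;
      if (f.namePattern && !f.namePattern.test(item.name)) return false;
      return true;
    }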

Potential further steps:

- Currently, exams containing pool questions are treated as auto-correctable (which implies an automated exam review (Einsicht)) when only questions of strictly closed types (MC, SC, Ordering) are selected from the question pools. Depending on the detailed settings, other item types could be possible as well (via correct-when), but this requires a deeper analysis of every question, which is so far not performed.

- Categorized items: technically, the infrastructure is mostly here to allow also filtering by categories. This would allow one to select, e.g., from technical questions, case examples, knowledge-transfer questions, research-oriented questions, ... which are orthogonal to the filtering currently available. Every lecturer can define own categories depending on their needs, we could provide university-wide category trees, etc. Of course, one could also define separate pools for these purposes, but categorizing is probably more convenient and more flexible.

- Performance: when question pools become large (500+ questions) and the cohorts as well (500+), the current version might require more tuning. The only critical time is the exam start, where the random question placeholders have to be resolved for every student. The approach from this weekend uses basic caching, but maybe this has to be extended.

- Protecting selected questions: question pools are more detached from an exam than the single exercises a lecturer might have in mind. One should not allow deleting questions/question pools while these are in use. Probably deletion should be a move to a trash can, which is actually an issue for all exams, but it becomes more important with pool questions.

- Handling of potential duplicates: when items are pulled from a question pool, a replacement item is selected by making sure that this item does not occur already in the query selection. Therefore, one can safely draw two questions from the same question pool without fearing that a student gets the same question twice. This duplicate checking might require some fine-tuning (see the sketch after this list):
  * the system checks for duplicates in an exam via POOL/NAME.
  * if a lecturer uses two pool questions in an exam pointing to the same pool, the system makes sure the same question is not used twice.
  * however, if a teacher adds a question q1 to pool1 and the same question to pool2, these two instances have different item ids and are regarded as two different questions. One could check only by NAME (without POOL), but then it might be the case that certain questions are not accepted although they are different, just because they have the same name.

- Statistics: so far, I have not provided any special answer statistics for treating pool questions.
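
A small illustrative sketch of the POOL/NAME duplicate check described above; the data shapes are assumptions, not the actual implementation:

    // Hypothetical duplicate check: an item is rejected as replacement when an
    // item with the same pool/name key is already part of the exam selection.
    interface SelectedItem {
      pool: string;   // folder/pool the item was drawn from
      name: string;   // item name within that pool
    }

    function duplicateKey(item: SelectedItem): string {
      return `${item.pool}/${item.name}`;
    }

    function pickReplacement(
      candidates: SelectedItem[],
      alreadySelected: SelectedItem[]
    ): SelectedItem | undefined {
      const used = new Set(alreadySelected.map(duplicateKey));
      return candidates.find((c) => !used.has(duplicateKey(c)));
    }

    // Note: keying only on NAME (without POOL) would also catch the "same
    // question in two pools" case, at the price of rejecting different
    // questions that happen to share a name.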

  1. … 8 more files in changeset.
provide tailored labels in table of contents for instances of the edit-interaction workflow

Added support for tailored messages when autosave is rejected

Optionally, the AJAX call from autosave can be answered with a JSON structure containing a "feedback" field. If this is provided, it is presented to the user. The mechanism can be extended in the future to include some reason code, etc., for further automatic processing in JavaScript.

With this change, the inclass-exam-answer workflow will use this to communicate the reason for rejected autosave operations in situations where the exam time is up.
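
A minimal client-side sketch of how such a response could be handled; the autosave endpoint and the alert-based presentation are assumptions, only the "feedback" field is taken from the description above:

    // Hypothetical autosave call: when the server rejects the autosave and
    // returns a JSON body with a "feedback" string, show it to the user.
    async function autosave(url: string, formData: FormData): Promise<void> {
      const response = await fetch(url, { method: "POST", body: formData });
      if (!response.ok) {
        const payload = await response.json().catch(() => ({}));
        if (typeof payload.feedback === "string") {
          // e.g. "Your exam time is up; this change was not saved."
          alert(payload.feedback);
        }
        // A future extension could inspect a reason code here, e.g.
        // payload.reason === "time-up", for further automatic processing.
      }
    }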

  1. … 4 more files in changeset.
provide means for omitting the audio alarm control in the countdown timer

  1. … 4 more files in changeset.
fix typo

better control of browser built-in spellcheck

- xowiki: added property "spellcheck" to formfield classes "textarea" and "text_fields"
- xowf: allow activating/deactivating spellcheck per exam in these widget classes (see the sketch below)
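
For reference, browser built-in spellcheck is controlled via the standard DOM spellcheck attribute; a small sketch of toggling it on rendered answer fields (independent of the actual xowiki formfield implementation, the selector is an assumption):

    // Toggle the standard HTML spellcheck attribute on all textareas of an
    // exam form; ".exam-answer textarea" is only an assumed selector.
    function setSpellcheck(enabled: boolean): void {
      document
        .querySelectorAll<HTMLTextAreaElement>(".exam-answer textarea")
        .forEach((el) => {
          el.spellcheck = enabled;   // reflected as spellcheck="true"/"false"
        });
    }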

  1. … 6 more files in changeset.
added missing line after refactoring

improve robustness for earlier created exams

Make inclass-exam more foolproof

In case the randomization is NOT suited for a real exam, do not allow the user to publish this exam. In cases where creating exams with randomization=always is OK, pass "p.realexam=0" to the create link (and maybe call the entry "training exam").

  1. … 1 more file in changeset.
improved robustness by stronger HTML quoting

minor styling improvements

  1. … 1 more file in changeset.
allow user to open exam answering in multiple tabs in try-out mode

improve comment

allow sorting of submissions via query parameter

  1. … 1 more file in changeset.
add a 10-second grace period to allow typing until the last second (but not longer)

Improved handling of autosaved revisions

- reject autosave attempts when the time for a student is up (works for synchronized and non-synchronized exams; see the sketch below)
- include submissions with autosaved content in the exam protocol (even when the state is "initial")
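
A tiny sketch of the time check implied above, with the 10-second grace period as a constant; the names are assumptions:

    // Hypothetical check: accept an autosave only while the student's exam
    // time, plus a 10-second grace period, has not elapsed.
    const GRACE_PERIOD_MS = 10_000;

    function autosaveAllowed(nowMs: number, examEndMs: number): boolean {
      return nowMs <= examEndMs + GRACE_PERIOD_MS;
    }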

  1. … 3 more files in changeset.
avoid potential loss of submission data on "publish" operations

  1. … 1 more file in changeset.
no need to ask answer_manager for package_id

add missing variable which was deleted during the refactoring of the exam protocol

add bulk-notification functionality to participants_table

  1. … 1 more file in changeset.
strengthen parameter checking

  1. … 3 more files in changeset.
Handling of mutual overwrites in answer workflows

Mutual overwrites occur in answer workflows when a user manages to open multiple browser instances or tabs on the same exam.

In case there was a mutual overwrite, the position as provided by the instance attributes might deviate from the position based on which the actual form data was generated. So, for validating and updating, one has to change the position to the one from the form data (when this differs). Note that the randomizer depends on the property "position" as well.

The new version avoids that the user might accidentally overwrite his data and, on a mutual overwrite, automatically closes the older instance window.
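
An illustrative sketch of the position reconciliation described above; the attribute names are assumptions, not the workflow's actual variables:

    // When the position stored in the instance attributes deviates from the
    // position the submitted form was generated for, validate and update
    // against the position carried by the form data.
    interface AnswerInstance { position: number; }
    interface SubmittedForm  { position: number; answers: Record<string, string>; }

    function reconcilePosition(instance: AnswerInstance, form: SubmittedForm): number {
      if (form.position !== instance.position) {
        // Mutual overwrite detected: another tab advanced the instance in the
        // meantime; trust the position the form data was rendered for.
        instance.position = form.position;
      }
      return instance.position;
    }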

Bumped version number to 5.10.0d38

  1. … 2 more files in changeset.
Refactored exam protocol renderers

The new code reduces the size of inclass-exam.wf significantly by moving the exam protocol into test-item-procs.tcl. In the same step, the largish function was split up, the rendering functions for submissions are now named consistently, and there are now different functions for rendering single items vs. many items, making the single-item rendering reusable. Additionally, the answer manager is documented in a more eye-friendly and modular way.

  1. … 1 more file in changeset.