Index: openacs-4/packages/assessment/www/doc/as_items.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/as_items.adp,v diff -u -r1.1.2.4 -r1.1.2.5 --- openacs-4/packages/assessment/www/doc/as_items.adp 9 Jun 2016 13:03:12 -0000 1.1.2.4 +++ openacs-4/packages/assessment/www/doc/as_items.adp 4 Jul 2016 11:33:12 -0000 1.1.2.5 @@ -16,17 +16,18 @@ discuss the design implications for as_items. Green colored tables have to be internationalized.

Each as_item consists of a specific -as_item Type like "Multiple Choice Question" or "Free Text". Each -as_item Type adds additional -Attributes to the as_item, thereby making it pretty flexible. -Additionally each as_item has a related display type storing information on how to -display this as_item. This way we can create an adp-snippet which -we can include to display a certain as_item (the snippet is stored -de-normalized in the as_items table and update on every change to -the as_item or the as_item_type).
+as_item Type like "Multiple Choice Question" or +"Free Text". Each as_item +Type adds additional Attributes to the as_item, thereby making +it pretty flexible. Additionally each as_item has a related +display type storing information +on how to display this as_item. This way we can create an +adp-snippet which we can include to display a certain as_item (the +snippet is stored de-normalized in the as_items table and updated on +every change to the as_item or the as_item_type).

How is this achieved concretely? Each -as_item Type has it's own table with attributes useful for this +as_item Type has its own table with attributes useful for this as_item type. All tables (as_items, as_item_type_*, as_item_display_*) are controlled by the content repository. Each as_item is linked using acs-relationships to the specific items of @@ -43,9 +44,9 @@ Assessment authors flexibility in adapting as_item defaults, help messages, etc for use in different Assessments, we abstract out a number of attributes from as_items into mapping tables where -"override" values for these attributes can optionally be set by -authors. If they choose not to set overrides, then the values -originally created in the as_item supercede.

+"override" values for these attributes can optionally be +set by authors. If they choose not to set overrides, then the +values originally created in the as_item apply.
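The override mechanism described above can be sketched in Python. This is a hypothetical illustration only (the package itself is Tcl/SQL); the dict shapes and the function name are assumptions, not the real schema:

```python
# Illustrative sketch: resolve an as_item attribute against an optional
# per-Assessment "override" from a mapping table. If no override was set
# by the author, the value originally created on the as_item applies.
# NOTE: an override explicitly set to None counts as "not set" here.

def resolve_attribute(item_defaults: dict, overrides: dict, attr: str):
    """Return the override value if present, else the as_item default."""
    value = overrides.get(attr)
    return value if value is not None else item_defaults.get(attr)

item = {"help_text": "Pick one option", "required_p": False}
mapping = {"required_p": True}   # hypothetical author override

print(resolve_attribute(item, mapping, "required_p"))  # True
print(resolve_attribute(item, mapping, "help_text"))   # Pick one option
```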

Separately we will deal with Checks on as_items. These will allow us to make checks on the input (is the value given by the user actually a valid value?), branches (if we

    -
  • Short Answer -Answers +
  • Short +Answer Answers (as_item_sa_answers):
    • answer_id
    • cr::name - Identifier
    • cr::title - Answer string that will be matched against the response
    • data_type - Integer vs. real number vs. text
    • case_sensitive_p - Shall the match be case sensitive
    • percent_score - Percentage a correct match gives
    • compare_by - How is the comparison done (equal, contains, -regexp)
    • regexp_text: If the compare_by is a "regexp", this field -contains the actual regexp.
    • allowed_answerbox_list - list with all answerbox ids (1 2 3 ... +regexp)
    • regexp_text: If the compare_by is a "regexp", this +field contains the actual regexp.
    • allowed_answerbox_list - list with all answerbox ids (1 2 3 ... n) whose response will be matched against this answer. An empty field indicates the answer will be matched against all -answers
    • NOTE: These answers are reusable, that's why we have a +answers
    • NOTE: These answers are reusable, that's why we have a relationship.
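The matching attributes above (case_sensitive_p, compare_by, percent_score, regexp_text) can be sketched in Python. This is a hypothetical illustration, not the package's Tcl code; the dict shape is an assumption based on the attribute list:

```python
import re

# Sketch of applying one as_item_sa_answers row to a user response.
# Field names follow the attribute list above.

def score_response(response: str, answer: dict) -> float:
    """Return percent_score when the response matches this answer, else 0."""
    target = answer["title"]              # cr::title - string to match
    response_cmp = response
    if not answer["case_sensitive_p"]:
        response_cmp, target = response.lower(), target.lower()
    compare_by = answer["compare_by"]
    if compare_by == "equal":
        matched = response_cmp == target
    elif compare_by == "contains":
        matched = target in response_cmp
    elif compare_by == "regexp":
        flags = 0 if answer["case_sensitive_p"] else re.IGNORECASE
        matched = re.search(answer["regexp_text"], response, flags) is not None
    else:
        raise ValueError(f"unknown compare_by: {compare_by}")
    return answer["percent_score"] if matched else 0.0

answer = {"title": "Paris", "case_sensitive_p": False,
          "compare_by": "equal", "percent_score": 100.0, "regexp_text": None}
print(score_response("paris", answer))   # 100.0
```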
    -
  • Multiple Choice -Item +
  • Multiple +Choice Item (as_item_type_mc)
    • cr::name - Identifier
      @@ -178,25 +181,28 @@
      • -Multiple Choices -(as_item_choices) contain additional information for all -multiple choice as_item_types. Obvious examples are radiobutton and -checkbox as_items, but pop-up_date, typed_date and image_map -as_items also are constructed via as_item Choices. Each choice is a -child to an as_item_type Object. Note the difference. A choice does not belong to an as_item, but to -the instance of the as_item_type! This way we can reuse -multiple choice answers easier. It is debatable if we should allow -n:m relationships between choices and as_item_types (thereby -allowing the same choice been reused). In my opinion this is not -necessary, therefore we relate this using the parent_id (which will -be treated as a relationship in cr_child_rels by the content -repository internally). Following the Lars Skinny Table approach of -conflating all the different potential data types into one table, -we provide columns to hold values of the different types and -another field to determine which of them is used. as_item Choices -have these attributes:
          +Multiple +Choices (as_item_choices) contain additional +information for all multiple choice as_item_types. Obvious examples +are radiobutton and checkbox as_items, but pop-up_date, typed_date +and image_map as_items also are constructed via as_item Choices. +Each choice is a child to an as_item_type Object. Note the +difference. A choice does not +belong to an as_item, but to the instance of the +as_item_type! This way we can reuse multiple choice answers +more easily. It is debatable if we should allow n:m relationships +between choices and as_item_types (thereby allowing the same choice +to be reused). In my opinion this is not necessary, therefore we +relate this using the parent_id (which will be treated as a +relationship in cr_child_rels by the content repository +internally). Following the Lars Skinny Table approach of conflating +all the different potential data types into one table, we provide +columns to hold values of the different types and another field to +determine which of them is used. as_item Choices have these +attributes:
          • choice_id
          • cr::parent_id (belonging to an as_item_type_mc object).
          • cr::name - Identifier
            -
          • cr::title - what is displayed in the choice's "label"
          • data_type - which of the value columns has the information this +
          • cr::title - what is displayed in the choice's +"label"
          • data_type - which of the value columns has the information this Choice conveys
          • numeric_value - we can stuff both integers and real numbers here
          • text_value
          • boolean_value
          • timestamp_value
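The Skinny Table idea just listed (several typed value columns plus a data_type discriminator) can be sketched in Python. A hypothetical illustration; the row dict mirrors the attribute names above but is not the real schema:

```python
# Sketch of reading the single value an as_item Choice conveys:
# data_type names which of the typed value columns is live.

VALUE_COLUMNS = {
    "numeric": "numeric_value",      # integers and real numbers
    "text": "text_value",
    "boolean": "boolean_value",
    "timestamp": "timestamp_value",
}

def choice_value(row: dict):
    """Return the value from the column selected by data_type."""
    return row[VALUE_COLUMNS[row["data_type"]]]

row = {"data_type": "text", "numeric_value": None,
       "text_value": "Strongly agree", "boolean_value": None,
       "timestamp_value": None}
print(choice_value(row))   # Strongly agree
```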
            @@ -235,7 +241,8 @@ (as_item_image_choices):
            • choice_id
            • cr::parent_id (belonging to an as_item_type_im object).
            • cr::name - Identifier
              -
            • cr::title - what is displayed in the choice's "label"
            • data_type - which of the value columns has the information this +
            • cr::title - what is displayed in the choice's +"label"
            • data_type - which of the value columns has the information this Choice conveys
            • numeric_value - we can stuff both integers and real numbers here
            • text_value
            • boolean_value
            • content_value - references an as_item in the CR -- for an @@ -261,8 +268,9 @@

              Each item_display_type has a couple of attributes in common.

                -
              • item_display_id
              • cr::name - name like "Select box, aligned -right", stored in the name field of CR.
                +
              • item_display_id
              • cr::name - name like "Select box, +aligned right", stored in the name field of +CR.
              • html_display_options - field to specify other stuff like textarea dimensions ("rows=10 cols=50" eg)
              • @@ -272,7 +280,7 @@ attributes) come into play (are added as attributes to the CR item type) (mark: this is not feature complete. It really is up to the coder to decide what attributes each widget should have, -down here are only *suggestions*). Additionally we're not +down here are only *suggestions*). Additionally we're not mentioning all HTML possibilities associated with each type (e.g. a textarea has width and height...) as these can be parsed in via the html_display_options.
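Since html_display_options is described as a free-form string like "rows=10 cols=50", a renderer could parse it into attributes. A minimal Python sketch under that assumption (the real templating is ADP/Tcl, and the function name is hypothetical):

```python
# Sketch: split an html_display_options string such as "rows=10 cols=50"
# into a dict of HTML attribute name/value pairs.

def parse_display_options(options: str) -> dict:
    attrs = {}
    for token in options.split():
        key, _, value = token.partition("=")
        attrs[key] = value
    return attrs

print(parse_display_options("rows=10 cols=50"))
# {'rows': '10', 'cols': '50'}
```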
                @@ -282,32 +290,34 @@ textbox (as_item_display_tb) - single-line typed entry
                • abs_size - An abstraction of the real -size value in "small","medium","large". Up to the developer how -this translates.
                • +size value in +"small","medium","large". Up to the +developer how this translates.
item_answer_alignment -- the orientation between the "question part" of the Item (the -title/subtext) and the "answer part" -- the native Item widget (eg -the textbox) or the 1..n choices. Alternatives accommodate L->R -and R->L alphabets (or is this handled automagically be -Internationalization?) and +- the orientation between the "question part" of the Item +(the title/subtext) and the "answer part" -- the native +Item widget (eg the textbox) or the 1..n choices. Alternatives +accommodate L->R and R->L alphabets (or is this handled +automagically by Internationalization?) and include:
                    -
                  1. beside_left - the "answers" are left of -the "question"
                  2. beside_right - the "answers" are right of -the "question"
                  3. below - the "answers" are below the -"question"
                  4. above - the "answers" are above the -"question"
                  5. +
                  6. beside_left - the "answers" are +left of the "question"
                  7. beside_right - the "answers" +are right of the "question"
                  8. below - the "answers" are below +the "question"
                  9. above - the "answers" are above +the "question"

              • short_answer (as_item_display_sa) - Multiple textboxes in one item.
                • abs_size - An -abstraction of the real size value in "small","medium","large". Up -to the developer how this translates.
                • box_orientation - the pattern by which 2..n answer boxes are -laid out when displayed. Note that this isn't a purely stylistic -issue better left to the .adp templates or css; the patterns have -semantic implications that the Assessment author appropriately -should control here. +abstraction of the real size value in +"small","medium","large". Up to the +developer how this translates.
                • box_orientation - the pattern by which 2..n answer boxes are +laid out when displayed. Note that this isn't a purely +stylistic issue better left to the .adp templates or css; the +patterns have semantic implications that the Assessment author +appropriately should control here.
                  1. horizontal - all answerboxes are in one continuous line.
                    @@ -317,32 +327,37 @@
              • text area (as_item_display_ta) - multiple-line typed entry
                • abs_size - An abstraction of the real size value in -"small","medium","large". Up to the developer how this -translates.
                • acs_widget - the type of "widget" displayed when the Item is -output in html. There are many types we should support beyond the -stock html types. We are talking ACS Templating +"small","medium","large". Up to the +developer how this translates.
                • acs_widget - the type of "widget" displayed when the +Item is output in html. There are many types we should support +beyond the stock html types. We are talking ACS Templating widgets here.
                  -
                • item_answer_alignment - the orientation between the "question -part" of the Item (the title/subtext) and the "answer part" -- the -native Item widget (eg the textbox) or the 1..n choices. -Alternatives accommodate L->R and R->L alphabets (or is this -handled automagically be Internationalization?) and include: +
                • item_answer_alignment - the orientation between the +"question part" of the Item (the title/subtext) and the +"answer part" -- the native Item widget (eg the textbox) +or the 1..n choices. Alternatives accommodate L->R and R->L +alphabets (or is this handled automagically by +Internationalization?) and include:
                    -
                  1. beside_left - the "answers" are left of the "question"
                  2. beside_right - the "answers" are right of the "question"
                  3. below - the "answers" are below the "question"
                  4. above - the "answers" are above the "question"
                  5. +
                  6. beside_left - the "answers" are left of the +"question"
                  7. beside_right - the "answers" are right of the +"question"
                  8. below - the "answers" are below the +"question"
                  9. above - the "answers" are above the +"question"

              • radiobutton (as_item_display_rb) - single-choice multiple-option
                • choice_orientation - the pattern by which 2..n Item Choices are -laid out when displayed. Note that this isn't a purely stylistic -issue better left to the .adp templates or css; the patterns have -semantic implications that the Assessment author appropriately -should control here. Note also that Items with no Choices (eg a -simple textbox Item) has no choice_orientation, but handles the -location of that textbox relative to the Item label by the -item_alignment option (discussed below). +laid out when displayed. Note that this isn't a purely +stylistic issue better left to the .adp templates or css; the +patterns have semantic implications that the Assessment author +appropriately should control here. Note also that Items with no +Choices (eg a simple textbox Item) have no choice_orientation, but +handle the location of that textbox relative to the Item label by +the item_alignment option (discussed below).
                  1. horizontal - all Choices are in one line
                  2. vertical - all Choices are in one column
                  @@ -351,16 +366,17 @@
                • sort_order_type: Numerical, alphabetic, randomized or by order of entry (sort_order field).
                • item_answer_alignment - the orientation -between the "question part" of the Item (the title/subtext) and the -"answer part" -- the native Item widget (eg the textbox) or the -1..n choices. Alternatives accommodate L->R and R->L -alphabets (or is this handled automagically be -Internationalization?) and include:
                    -
                  1. beside_left - the "answers" are left of -the "question"
                  2. beside_right - the "answers" are right of -the "question"
                  3. below - the "answers" are below the -"question"
                  4. above - the "answers" are above the -"question"
                  5. +between the "question part" of the Item (the +title/subtext) and the "answer part" -- the native Item +widget (eg the textbox) or the 1..n choices. Alternatives +accommodate L->R and R->L alphabets (or is this handled +automagically by Internationalization?) and +include:
                      +
                    1. beside_left - the "answers" are +left of the "question"
                    2. beside_right - the "answers" +are right of the "question"
                    3. below - the "answers" are below +the "question"
                    4. above - the "answers" are above +the "question"

                @@ -370,16 +386,17 @@
              • allow_multiple_p - Is it allowed to select multiple values?
              • sort_order_type: Numerical, alphabetic, randomized or by order of entry (sort_order field).
              • item_answer_alignment - the orientation -between the "question part" of the Item (the title/subtext) and the -"answer part" -- the native Item widget (eg the textbox) or the -1..n choices. Alternatives accommodate L->R and R->L -alphabets (or is this handled automagically be -Internationalization?) and include:
                  -
                1. beside_left - the "answers" are left of -the "question"
                2. beside_right - the "answers" are right of -the "question"
                3. below - the "answers" are below the -"question"
                4. above - the "answers" are above the -"question"
                5. +between the "question part" of the Item (the +title/subtext) and the "answer part" -- the native Item +widget (eg the textbox) or the 1..n choices. Alternatives +accommodate L->R and R->L alphabets (or is this handled +automagically by Internationalization?) and +include:
                    +
                  1. beside_left - the "answers" are +left of the "question"
                  2. beside_right - the "answers" +are right of the "question"
                  3. below - the "answers" are below +the "question"
                  4. above - the "answers" are above +the "question"
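The sort_order_type alternatives named for these choice widgets (numerical, alphabetic, randomized, order of entry) can be sketched in Python. A hypothetical illustration; the choice dicts and key names are assumptions, not the real data model:

```python
import random

# Sketch of ordering Item Choices by sort_order_type.

def order_choices(choices: list, sort_order_type: str) -> list:
    if sort_order_type == "numerical":
        return sorted(choices, key=lambda c: c["numeric_value"])
    if sort_order_type == "alphabetic":
        return sorted(choices, key=lambda c: c["label"])
    if sort_order_type == "randomized":
        shuffled = choices[:]
        random.shuffle(shuffled)
        return shuffled
    # default: order of entry, via the sort_order field
    return sorted(choices, key=lambda c: c["sort_order"])

choices = [{"label": "b", "numeric_value": 2, "sort_order": 1},
           {"label": "a", "numeric_value": 1, "sort_order": 2}]
print([c["label"] for c in order_choices(choices, "alphabetic")])  # ['a', 'b']
```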

              @@ -390,58 +407,61 @@ of entry (sort_order field).
            • allow_multiple_p - Is it allowed to select multiple values?
            • item_answer_alignment - the orientation -between the "question part" of the Item (the title/subtext) and the -"answer part" -- the native Item widget (eg the textbox) or the -1..n choices. Alternatives accommodate L->R and R->L -alphabets (or is this handled automagically be -Internationalization?) and include:
                -
              1. beside_left - the "answers" are left of -the "question"
              2. beside_right - the "answers" are right of -the "question"
              3. below - the "answers" are below the -"question"
              4. above - the "answers" are above the -"question"
              5. +between the "question part" of the Item (the +title/subtext) and the "answer part" -- the native Item +widget (eg the textbox) or the 1..n choices. Alternatives +accommodate L->R and R->L alphabets (or is this handled +automagically by Internationalization?) and +include:
                  +
                1. beside_left - the "answers" are +left of the "question"
                2. beside_right - the "answers" +are right of the "question"
                3. below - the "answers" are below +the "question"
                4. above - the "answers" are above +the "question"

          • image map (as_item_display_im) - Title with picture
            • allow_multiple_p - Is it allowed to select multiple values?
            • -item_answer_alignment - the orientation between the "question -part" of the Item (the title/subtext) and the "answer part" -- the -native Item widget (eg the textbox) or the 1..n choices. -Alternatives accommodate L->R and R->L alphabets (or is this -handled automagically be Internationalization?) and +item_answer_alignment - the orientation between the +"question part" of the Item (the title/subtext) and the +"answer part" -- the native Item widget (eg the textbox) +or the 1..n choices. Alternatives accommodate L->R and R->L +alphabets (or is this handled automagically by +Internationalization?) and include:
                -
              1. beside_left - the "answers" are left of -the "question"
              2. beside_right - the "answers" are right of -the "question"
              3. below - the "answers" are below the -"question"
              4. above - the "answers" are above the -"question"
              5. +
              6. beside_left - the "answers" are +left of the "question"
              7. beside_right - the "answers" +are right of the "question"
              8. below - the "answers" are below +the "question"
              9. above - the "answers" are above +the "question"

          • multiple-choice-other (as_item_display_mco): Consider, for instance, a combo box that consists of a radiobutton plus a textbox --- used for instance when you need a check "other" and then fill in -what that "other" datum is. In effect this is a single Item but it -has two different forms: a radiobutton and a textbox. The answer -will NOT be stored in the answer choice table. There is no -item_type "multiple-choice-other". +-- used for instance when you need a check "other" and +then fill in what that "other" datum is. In effect this +is a single Item but it has two different forms: a radiobutton and +a textbox. The answer will NOT be stored in the answer choice +table. There is no item_type "multiple-choice-other".
            • widget_choice - Type of the widget for the multiple choice part
            • sort_order_type: Numerical, alphabetic, randomized or by order of entry (sort_order field).
            • other_size: size of the other text field.
            • other_label: label (instead of "other").
            • -item_answer_alignment - the orientation between the "question -part" of the Item (the title/subtext) and the "answer part" -- the -native Item widget (eg the textbox) or the 1..n choices. -Alternatives accommodate L->R and R->L alphabets (or is this -handled automagically be Internationalization?) and +item_answer_alignment - the orientation between the +"question part" of the Item (the title/subtext) and the +"answer part" -- the native Item widget (eg the textbox) +or the 1..n choices. Alternatives accommodate L->R and R->L +alphabets (or is this handled automagically by +Internationalization?) and include:
                -
              1. beside_left - the "answers" are left of -the "question"
              2. beside_right - the "answers" are right of -the "question"
              3. below - the "answers" are below the -"question"
              4. above - the "answers" are above the -"question"
              5. +
              6. beside_left - the "answers" are +left of the "question"
              7. beside_right - the "answers" +are right of the "question"
              8. below - the "answers" are below +the "question"
              9. above - the "answers" are above +the "question"

            @@ -453,25 +473,26 @@ textbox, and submit button together) so user can upload a file

          Help System

          -The help system should allow a small "?" -appear next to an object's title that has a help text identified -with it. Help texts are to be displayed in the nice bar that Lars -created for OpenACS in the header. Each object can have multiple -help texts associated with it (which will be displayed in sort -order with each hit to the "?".) and we can reuse the help text, -making this an n:m relationship (using cr_rels). E.g. you might -want to have a default help text for certain cr_as_item_types, -that's why I was thinking about reuse... +The help system should allow a small +"?" appear next to an object's title that has a help +text identified with it. Help texts are to be displayed in the nice +bar that Lars created for OpenACS in the header. Each object can +have multiple help texts associated with it (which will be +displayed in sort order with each hit to the "?".) and we +can reuse the help text, making this an n:m relationship (using +cr_rels). E.g. you might want to have a default help text for +certain cr_as_item_types, that's why I was thinking about +reuse...

          Relationship attributes:

          • as_item_id
          • message_id - references as_messages
          • sort_order (in which order do the messages appear)
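The n:m help-text mapping just listed (as_item_id, message_id, sort_order) can be sketched in Python. A hypothetical illustration of the lookup only; the dict/list shapes are assumptions, not the cr_rels implementation:

```python
# Sketch: fetch the help texts mapped to one as_item, in sort_order,
# as they would cycle with each hit on the "?".

def help_texts_for_item(rels: list, messages: dict, as_item_id: int) -> list:
    rows = [r for r in rels if r["as_item_id"] == as_item_id]
    rows.sort(key=lambda r: r["sort_order"])
    return [messages[r["message_id"]] for r in rows]

messages = {10: "Enter your date of birth.", 11: "Use YYYY-MM-DD."}
rels = [{"as_item_id": 1, "message_id": 11, "sort_order": 2},
        {"as_item_id": 1, "message_id": 10, "sort_order": 1}]
print(help_texts_for_item(rels, messages, 1))
# ['Enter your date of birth.', 'Use YYYY-MM-DD.']
```

Because the messages live in their own table and are joined through the relationship, the same message row can be reused by several items, which is the point of the n:m design discussed above.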

          -Messages (as_messages) abstracts -out help messages (and other types of messages) for use in this -package. Attributes include:

          +Messages (as_messages) +abstracts out help messages (and other types of messages) for use +in this package. Attributes include:

          • message_id
          • message
          Index: openacs-4/packages/assessment/www/doc/data-modell.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/Attic/data-modell.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/data-modell.adp 25 Aug 2015 18:02:18 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/data-modell.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -4,10 +4,10 @@

          Overview

          At its core, the Assessment package defines a hierarchical -container model of a "survey", "questionnaire" or "form". This -approach not only follows the precedent of existing work; it also -makes excellent sense and no one has come up with a better -idea.

          +container model of a "survey", "questionnaire" +or "form". This approach not only follows the precedent +of existing work; it also makes excellent sense and no one has come +up with a better idea.

          • One Assessment consists of
          • One or more Sections which each consist of
          • One or more Items which have
          • Zero or more Choices
          @@ -17,43 +17,44 @@ mostly because these terms are more general and thus suit the broader applicability intended for this package.

          As is the custom in the OpenACS framework, all RDBMS tables in -the package will be prepended with "as_" to prevent further prefent -naming clashes. Judicious use of namespaces will also be made in -keeping with current OpenACS best practice.

          +the package will be prepended with "as_" to prevent +naming clashes. Judicious use of namespaces will +also be made in keeping with current OpenACS best practice.

          Several of the Metadata entities have direct counterparts in the Data-related partition of the data model. Some standards (notably -CDISC) rigorously name all metadata entities with a "_def" suffix -and all data entities with a "_data" suffix -- thus "as_item_def" -and "as_item_data" tables in our case. We think this is overkill -since there are far more metadata entities than data entities and -in only a few cases do distinctions between the two become -important. In those cases, we will add the "_data" suffix to data -entities to make this difference clear.

          +CDISC) rigorously name all metadata entities with a +"_def" suffix and all data entities with a +"_data" suffix -- thus "as_item_def" and +"as_item_data" tables in our case. We think this is +overkill since there are far more metadata entities than data +entities and in only a few cases do distinctions between the two +become important. In those cases, we will add the "_data" +suffix to data entities to make this difference clear.

          A final general point (that we revisit for specific entities below): the Assessment package data model exercises the Content Repository (CR) in the OpenACS framework heavily. In fact, this use of the CR for most important entities represents one of the main advances of this package compared to the earlier versions. The decision to use the CR is partly driven by the universal need for versioning and reuse within the functional requirements, and partly -by the fact that the CR has become "the Right Way" to build OpenACS -systems. Note that one implication of this is that we can't use a -couple column names in our derived tables because of naming clashes -with columns in cr_items and cr_revisions: title and description. -Furthermore we can handle versioning -and internationalization through the CR.

          +by the fact that the CR has become "the Right Way" to +build OpenACS systems. Note that one implication of this is that we +can't use a couple of column names in our derived tables because +of naming clashes with columns in cr_items and cr_revisions: title +and description. Furthermore we can handle versioning and internationalization through +the CR.

          Synopsis of The Data Model

          -

          Here's a detailed summary view of the entities in the Assessment -package. Note that in addition to the partitioning of the entities -between Metadata Elements and Collected Data Elements, we identify -the various subsystems in the package that perform basic -functions.

          +

          Here's a detailed summary view of the entities in the +Assessment package. Note that in addition to the partitioning of +the entities between Metadata Elements and Collected Data Elements, +we identify the various subsystems in the package that perform +basic functions.

          We discuss the following stuff in detail through the subsequent -pages, and we use a sort of "bird's eye view" of this global -graphic to keep the schema for each subsystem in perspective while -homing in on the relevent detail. Here's a brief introduction to -each of these section
pages, and we use a sort of "bird's eye view" of this +global graphic to keep the schema for each subsystem in perspective +while homing in on the relevant detail. Here's a brief +introduction to each of these sections
          • core - items entities (purple) @@ -65,9 +66,10 @@ sequencing entities (yellow-orange) handle data validation steps and conditional navigation derived from user responses
          • -scoring ("grading") entities -(yellow-green) define how raw user responses are to be processed -into calculated numeric values for a given Assessment
          • +scoring ("grading") +entities (yellow-green) define how raw user responses are to be +processed into calculated numeric values for a given +Assessment
          • display entities (light blue) define constructs that handle how Items are output into the actual html forms returned to users for completion -- including page Index: openacs-4/packages/assessment/www/doc/data_collection.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/data_collection.adp,v diff -u -r1.1.2.3 -r1.1.2.4 --- openacs-4/packages/assessment/www/doc/data_collection.adp 9 Jun 2016 13:03:12 -0000 1.1.2.3 +++ openacs-4/packages/assessment/www/doc/data_collection.adp 4 Jul 2016 11:33:12 -0000 1.1.2.4 @@ -6,93 +6,97 @@

            The schema for the entities that actually collect, store and retrieve Assessment data parallels the hierarchical structure of the Metadata Data Model. In the -antecedent "complex survey" and "questionnaire" systems, this -schema was simple two-level structure:

+antecedent "complex survey" and "questionnaire" +systems, this schema was a simple two-level structure:

            • -survey_responses which capture information about which -survey was completed, by whom, when, etc
            • -survey_question_responses which capture the actual user -data in a "long skinny table" mechanism
            • +survey_responses which capture information +about which survey was completed, by whom, when, etc
            • +survey_question_responses which capture the +actual user data in a "long skinny table" mechanism
            -

            This suffices for one-shot surveys but doesn't support the fine -granularity of user-action tracking, "save&resume" -capabilities, and other requirements identified for the enhanced -Assessment package. Consequently, we use a more extended -hierarchy:

            +

            This suffices for one-shot surveys but doesn't support the +fine granularity of user-action tracking, +"save&resume" capabilities, and other requirements +identified for the enhanced Assessment package. Consequently, we +use a more extended hierarchy:

            • -Assessment Session which captures information about -which Assessment, which Subject, when, etc
            • -Section Data which holds information about the status of -each Section
            • -Item Data which holds the actual data extracted from the -Assessment's html forms; this is the "long skinny table"
            • +Assessment Session which captures information +about which Assessment, which Subject, when, etc
            • +Section Data which holds information about the +status of each Section
            • +Item Data which holds the actual data +extracted from the Assessment's html forms; this is the +"long skinny table"

            To support user modification of submitted data (of which -"store&resume" is a special case), we base all these entities -in the CR. In fact, we use both cr_items and cr_revisions in our -schema, since for any given user's Assessment submission, there -indeed is a "final" or "live" version. In contrast, recall that for -any Assessment itself, different authors may be using different -versions of the Assessment. While this situation may be unusual, -the fact that it must be supported means that the semantics of -cr_items don't fit the Assessment itself. They do fit the -semantics of a given user's Assessment "session" however.

            -

            We distinguish here between "subjects" which are users whose -information is the primary source of the Assessment's responses, -and "users" which are real OpenACS users who can log into the -system. Subjects may be completing the Assessment themselves or may -have completed some paper form that is being transcribed by staff -people who are users. We thus account for both the "real" and one -or more "proxy" respondents via this mechanism. Note that subjects -may or may not be OpenACS users who can log into the system running -Assessment. Thus subject_id will be a foreign key to -persons not users. If the responding user is -completing the assessment for herself, the staff_id will be -identical to the subject_id. But if the user completing the -assessment is doing it by proxy for the "real" subject, then the -staff_id will be hers while the subject_id will belong to the -"real" subject.

            -

            We've simplified this subsection of Assessment considerably from -earlier versions, and here is how and why:

            +"store&resume" is a special case), we base all these +entities in the CR. In fact, we use both cr_items and cr_revisions +in our schema, since for any given user's Assessment +submission, there indeed is a "final" or "live" +version. In contrast, recall that for any Assessment itself, +different authors may be using different versions of the +Assessment. While this situation may be unusual, the fact that it +must be supported means that the semantics of cr_items don't +fit the Assessment itself. They do fit the semantics of a +given user's Assessment "session" however.

            +

            We distinguish here between "subjects" which are users +whose information is the primary source of the Assessment's +responses, and "users" which are real OpenACS users who +can log into the system. Subjects may be completing the Assessment +themselves or may have completed some paper form that is being +transcribed by staff people who are users. We thus account for both +the "real" and one or more "proxy" respondents +via this mechanism. Note that subjects may or may not be OpenACS +users who can log into the system running Assessment. Thus +subject_id will be a foreign key to +persons not users. If the +responding user is completing the assessment for herself, the +staff_id will be identical to the subject_id. But if the user +completing the assessment is doing it by proxy for the +"real" subject, then the staff_id will be hers while the +subject_id will belong to the "real" subject.

            +

            We've simplified this subsection of Assessment considerably +from earlier versions, and here is how and why:

            • -Annotations: We previously had a separate table to -capture any type of ad hoc explanations/descriptions/etc that a -user would need to attach to a given data element (either an item -or section). Instead, we will use the OpenACS General Comments +Annotations: We previously had a separate +table to capture any type of ad hoc explanations/descriptions/etc +that a user would need to attach to a given data element (either an +item or section). Instead, we will use the OpenACS General Comments package, which is based on the CR and thus can support multiple comments attached to a given revision of a data element. The integration between Assessment and GC thus will need to be at the UI level, not the data model level. Using GC will support post-test -"discussions" between student and teacher, for example, about -inidividual items, sections or sessions.
            • -Scoring-grading: This has been a rather controversial -area because of the wide range of needs for derived +"discussions" between student and teacher, for example, +about individual items, sections or sessions.
            • +Scoring-grading: This has been a rather +controversial area because of the wide range of needs for derived calculations/evaluations that different applications need to perform on the raw submitted data. In many cases, no calculations -are needed at all; only frequency reports ("74% of responders chose -this option") are needed. In other cases, a given item response may -itself have some measure of "correctness" ("Your answer was 35% -right.") or a section may be the relevant scope of scoring ("You -got six of ten items correct -- 60%.). At the other extreme, -complex scoring algorithms may be defined to include multiple -scales consisting of arbitrary combinations of items among -different sections or even consisting of arithmetic means of -already calculated scale scores. +are needed at all; only frequency reports ("74% of responders +chose this option") are needed. In other cases, a given item +response may itself have some measure of "correctness" +("Your answer was 35% right.") or a section may be the +relevant scope of scoring ("You got six of ten items correct +-- 60%."). At the other extreme, complex scoring algorithms may be +defined to include multiple scales consisting of arbitrary +combinations of items among different sections or even consisting +of arithmetic means of already calculated scale scores.

              Because of this variability as well as the recognition that Assessment should be primarily a data collection package, -we've decided to abstract all scoring-grading functions to one or -more additional packages. A grading package (evaluation) +we've decided to abstract all scoring-grading functions to one +or more additional packages. A grading package (evaluation) is under development now by part of our group, but no documentation is yet available about it. How such client packages will interface with Assessment has not yet been worked out, but this is a crucial issue to work through. Presumably something to do with service contracts. Such a package will need to interact both -with Assessment metadata (to define what items are to be "scored" -and how they are to be scored -- and with Assessment collected data -(to do the actual calculations and mappings-to-grades.

              +with Assessment metadata (to define what items are to be +"scored" and how they are to be scored) -- and with +Assessment collected data (to do the actual calculations and +mappings-to-grades).

            • Signatures: The purpose of this is to provide identification and non-repudiability during data submission. An @@ -106,72 +110,75 @@ certification exams (for instance) or for clinical data or financial applications, this kind of auditing is essential.

              We previously used a separate table for this since probably most -assessments won't use this (at least, that is the opinion of most -of the educational folks here). However, since we're generating -separate revisions of each of these collected data types, we -decided it would be far simpler and more appropriate to include the -signed_data field directly in the as_item_data table. Note -that for complex applications, the need to "sign the entire form" -or "sign the section" could be performed by concatenating all the -items contained by the section or assessment and storing that in a -"signed_data" field in as_section_data or as_sessions. However, -this would presumably result in duplicate hashing of the data -- -once for the individual items and then collectively. Instead, we'll -only "sign" the data at the atomic, as_item level, and procedurally -sign all as_item_data at once if the assessment author requires -only a section-level or assessment-level signature.

              +assessments won't use this (at least, that is the opinion of +most of the educational folks here). However, since we're +generating separate revisions of each of these collected data +types, we decided it would be far simpler and more appropriate to +include the signed_data field directly in the +as_item_data table. Note that for complex applications, the need to +"sign the entire form" or "sign the section" +could be performed by concatenating all the items contained by the +section or assessment and storing that in a "signed_data" +field in as_section_data or as_sessions. However, this would +presumably result in duplicate hashing of the data -- once for the +individual items and then collectively. Instead, we'll only +"sign" the data at the atomic, as_item level, and +procedurally sign all as_item_data at once if the assessment author +requires only a section-level or assessment-level signature.

            • -"Events" related to assessments In some applications -(like clinical trials), it is important to define a series of -"named" assessment events (like "baseline" "one month" "six months" +"Events" related to assessments In +some applications (like clinical trials), it is important to define +a series of "named" assessment events (like +"baseline" "one month" "six months" etc) at which time assessments are to be performed. Earlier we -included an "event_id" attribute in data collection entities -(notably as_item_data) to make mapping of these events to their -data easy. This denormalization makes some sense for efficiency -considerations, but it doesn't prove to be generally applicable -enough to most contexts, so we've removed it. Instead, any client -package using Assessment in this fashion should implement its own -relationships (presumably with acs_rels).
            • -"Status" of data collection entities An assessment -author may specify different allowable steps for her assessment -- -such as whether a user can "save&resume" between sessions, -whether a second user needs to "review&confirm" entered data -before it becomes "final", etc etc. Rather than try to anticipate +included an "event_id" attribute in data collection +entities (notably as_item_data) to make mapping of these events to +their data easy. This denormalization makes some sense for +efficiency considerations, but it doesn't prove to be generally +applicable enough to most contexts, so we've removed it. +Instead, any client package using Assessment in this fashion should +implement its own relationships (presumably with acs_rels).
            • +"Status" of data collection entities +An assessment author may specify different allowable steps for her +assessment -- such as whether a user can +"save&resume" between sessions, whether a second user +needs to "review&confirm" entered data before it +becomes "final", etc etc. Rather than try to anticipate these kinds of workflow options (and considering that many uses of -Assessment won't want to track any such status), we've decided to -move this out of the data model for Assessment per se and into -Workflow. Assessment authors will have a UI through which they can -configure an applicable workflow (defining states, roles, actions) -for the assessment.
            • +Assessment won't want to track any such status), we've +decided to move this out of the data model for Assessment per se +and into Workflow. Assessment authors will have a UI through which +they can configure an applicable workflow (defining states, roles, +actions) for the assessment.
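The scoring discussion above singles out the pure frequency-report case ("74% of responders chose this option"). Although the document deliberately leaves literal SQL to the CVS web interface, a purely illustrative sketch of such a report, assuming PostgreSQL and the as_item_data "long skinny table" described later in this document, could look like:

```sql
-- Illustrative only: per-choice response frequencies for one Item,
-- computed from the "long skinny" as_item_data table.
select choice_id_answer,
       count(*) as n_responses,
       round(100.0 * count(*) / sum(count(*)) over (), 1) as pct
from as_item_data
where item_id = :item_id        -- bind variable: the Item being reported on
group by choice_id_answer
order by n_responses desc;
```

An external grading package could build exactly this kind of query through a service contract, without Assessment itself storing any derived scores.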

            Synopsis of Data-Collection Datamodel

            -

            Here's the schema for this subsystem:
            +

            Here's the schema for this subsystem:

            Data Model

            Specific Entities

            This section addresses the attributes the most important entities have in the data-collection data model -- principally the -various design issues and choices we've made. We omit here literal -SQL snippets since that's what the web interface to CVS is for. -;-)

            +various design issues and choices we've made. We omit here +literal SQL snippets since that's what the web interface to CVS +is for. ;-)

            • -Assessment Sessions (as_sessions) are the top of the -data-collection entity hierarchy. They provide the central -definition of a given subject's performance of an Assessment. +Assessment Sessions (as_sessions) are the top +of the data-collection entity hierarchy. They provide the central +definition of a given subject's performance of an Assessment. Attributes include:
              • session_id
              • cr::name - Identifier, format "$session_id-$last_mod_datetime"
                -
              • assessment_id (note that this is actually a revision_id)
              • subject_id - references a Subjects entity that we don't define -in this package. Should reference the parties table as there is no -concept of storing persons in OpenACS in general. Note: this -cannot reference users, since in many cases, subjects will not be -able (or should not be able) to log into the system. The users -table requires email addresses. Subjects in Assessment cannot be -required to have email addresses. If they can't be "persons" then -Assessment will have to define an as_subjects table for its own -use. +
              • assessment_id (note that this is actually a revision_id)
              • subject_id - references a Subjects entity that we don't +define in this package. Should reference the parties table as there +is no concept of storing persons in OpenACS in general. +Note: this cannot reference users, since in many cases, +subjects will not be able (or should not be able) to log into the +system. The users table requires email addresses. Subjects in +Assessment cannot be required to have email addresses. If they +can't be "persons" then Assessment will have to +define an as_subjects table for its own use.
              • staff_id - references Users if someone is doing the Assessment as a proxy for the real subject
              • target_datetime - when the subject should do the Assessment
              • creation_datetime - when the subject initiated the @@ -182,36 +189,39 @@ far
              • consent_timestamp - Time when the consent has been given.
                Note, this is a denormalization introduced for the educational application. For clinical trials apps, in contrast, a complete, -separate "Enrollment" package will be necessary and would capture -consent information. Actually, it's not clear that even for -education apps that this belongs here, since a consent will happen -only once for a given assessment while the user may complete the -assessment during multiple sessions (if save&resume is enabled -for instance). In fact, I've removed this from the graffle -(SK).
              • +separate "Enrollment" package will be necessary and would +capture consent information. Actually, it's not clear that even +for education apps that this belongs here, since a consent will +happen only once for a given assessment while the user may complete +the assessment during multiple sessions (if save&resume is +enabled for instance). In fact, I've removed this from the +graffle (SK).
            • -Assessment Section Data (as_section_data) tracks the -state of each Section in the Assessment. Attributes include: +Assessment Section Data (as_section_data) +tracks the state of each Section in the Assessment. Attributes +include:
              • section_data_id
              • cr::name - Identifier, format "$session_id-$last_mod_datetime"
              • session_id
              • section_id
              • subject_id
              • staff_id
            • -Assessment Item Data (as_item_data) is the heart of the -data collection piece. This is the "long skinny table" where all -the primary data go -- everything other than "scale" data ie -calculated scoring results derived from these primary responses -from subjects. Attributes include: +Assessment Item Data (as_item_data) is the +heart of the data collection piece. This is the "long skinny +table" where all the primary data go -- everything other than +"scale" data ie calculated scoring results derived from +these primary responses from subjects. Attributes include:
                -
              • item_data_id
              • session_id
              • cr::name - identifier in the format "$item_id-$subject_id"
              • event_id - this is a foreign key to the "event" during which -this assessment is being performed -- eg "second term final" or -"six-month follow-up visit" or "Q3 report". Note: adding this here -is a denormalization justified by the fact that lots of queries -will depend on this key, and not joining against as_sessions will -be a Very Good Thing since if a given data submission occurs -through multiple sessions (the save&resume situation).
              • subject_id
              • staff_id
              • item_id
              • choice_id_answer - references as_item_choices
              • boolean_answer
              • numeric_answer
              • integer_answer
              • text_answer -- presumably can store both varchar and text +
              • item_data_id
              • session_id
              • cr::name - identifier in the format +"$item_id-$subject_id"
              • event_id - this is a foreign key to the "event" +during which this assessment is being performed -- eg "second +term final" or "six-month follow-up visit" or +"Q3 report". Note: adding this here is a denormalization +justified by the fact that lots of queries will depend on this key, +and not joining against as_sessions will be a Very Good Thing since +if a given data submission occurs through multiple sessions (the +save&resume situation).
              • subject_id
              • staff_id
              • item_id
              • choice_id_answer - references as_item_choices
              • boolean_answer
              • numeric_answer
              • integer_answer
              • text_answer -- presumably can store both varchar and text datatypes -- or do we want to separate these as we previously did?
              • timestamp_answer
              • content_answer - references cr_revisions
              • signed_data - This field stores the signed entered data, see above and below for explanations
              • percent_score
                @@ -221,21 +231,22 @@ need of considerable help. Can we rely on it here?
            • -Assessment Scales : As discussed above, this will for -the time being be handled by external grading-scoring-evaluation -packages. Assessment will only work with percentages internally. It -might be necessary to add scales into assessment as well, but we -will think about this once the time arrives, but we think that a -more elegant (and appropriate, given the OpenACS toolkit design) -approach will be to define service contracts to interface these -packages.
            • -Assessment Annotations provides a flexible way to handle -a variety of ways that we need to be able to "mark up" an -Assessment. Subjects may modify a response they've already made and -need to provide a reason for making that change. Teachers may want -to attach a reply to a student's answer to a specific Item or make -a global comment about the entire Assessment. This will be achieved -by using the General Comments System of OpenACS
            • +Assessment Scales: As discussed above, this +will for the time being be handled by external +grading-scoring-evaluation packages. Assessment will only work with +percentages internally. It might be necessary to add scales into +assessment as well, but we will think about this once the time +arrives; we think that a more elegant (and appropriate, given +the OpenACS toolkit design) approach will be to define service +contracts to interface these packages.
            • +Assessment Annotations provides a flexible way +to handle a variety of ways that we need to be able to "mark +up" an Assessment. Subjects may modify a response they've +already made and need to provide a reason for making that change. +Teachers may want to attach a reply to a student's answer to a +specific Item or make a global comment about the entire Assessment. +This will be achieved by using the General Comments System of +OpenACS.
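To make the three-level hierarchy concrete, here is a rough sketch of the data-collection tables in plain SQL. This is not the package's literal DDL -- the real tables are managed through the content repository (cr_items/cr_revisions), which this sketch deliberately omits -- but the columns follow the attribute lists above.

```sql
-- Hedged sketch of the data-collection hierarchy; CR integration omitted.
create table as_sessions (
    session_id        integer primary key,
    assessment_id     integer not null,   -- actually a revision_id of the Assessment
    subject_id        integer not null,   -- the person whose data this is; not necessarily a user
    staff_id          integer,            -- set when a proxy transcribes for the subject
    target_datetime   timestamptz,
    creation_datetime timestamptz
);

create table as_section_data (
    section_data_id integer primary key,
    session_id      integer references as_sessions,
    section_id      integer not null,
    subject_id      integer,
    staff_id        integer
);

-- the "long skinny table": one row per answered Item
create table as_item_data (
    item_data_id     integer primary key,
    session_id       integer references as_sessions,
    item_id          integer not null,
    choice_id_answer integer,            -- references as_item_choices
    boolean_answer   boolean,
    numeric_answer   numeric,
    integer_answer   integer,
    text_answer      text,
    timestamp_answer timestamptz,
    content_answer   integer,            -- references cr_revisions
    signed_data      text,               -- per-item signature, as discussed above
    percent_score    numeric
);
```

The one-column-per-datatype answer slots are what make the table "long and skinny": each response fills exactly one answer column, and reporting queries pivot or aggregate across rows rather than across columns.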
            • Signing of content Index: openacs-4/packages/assessment/www/doc/display_types.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/display_types.adp,v diff -u -r1.1.2.4 -r1.1.2.5 --- openacs-4/packages/assessment/www/doc/display_types.adp 9 Jun 2016 13:03:12 -0000 1.1.2.4 +++ openacs-4/packages/assessment/www/doc/display_types.adp 4 Jul 2016 11:33:12 -0000 1.1.2.5 @@ -10,11 +10,11 @@ assessment to assessment. Last but not least, the whole assessment might be displayed differently depending on attributes and the type of assessment we are talking about. -

              Note: please refer to the discussion -of Items here. That discussion -complements the discussion here, and the data model graphic -pertaining to the Item Display Types system is available there -also.

              +

              Note: please refer to the +discussion of Items here. That +discussion complements the discussion here, and the data model +graphic pertaining to the Item Display Types system is available +there also.

              Item Display Types

              Each item has an item_display_type object associated with it, that defines how to display the item. Each @@ -28,60 +28,63 @@

              Each item_display_type has a couple of attributes in common.

              Depending on the presentation_types additional attributes (presentation_type attributes) come into play (are added as attributes to the CR item type) (mark: this is not feature complete. It really is up to the coder to decide what attributes each widget should have, down here are only -*suggestions*). Additionally we're not mentioning all HTML +*suggestions*). Additionally we're not mentioning all HTML possibilities associated with each type (e.g. a textarea has width and height..).

              • textbox - single-line typed entry
                • abs_size - An abstraction of the real size value in -"small","medium","large". Up to the developer how this -translates.
                +"small","medium","large". Up to the +developer how this translates.
            • text area - multiple-line typed entry
              • abs_size - An abstraction of the real size value in -"small","medium","large". Up to the developer how this -translates.
              +"small","medium","large". Up to the +developer how this translates.
          • radiobutton - single-choice multiple-option
            • choice_orientation - the pattern by which 2..n Item Choices are -laid out when displayed. Note that this isn't a purely stylistic -issue better left to the .adp templates or css; the patterns have -semantic implications that the Assessment author appropriately -should control here. Note also that Items with no Choices (eg a -simple textbox Item) has no choice_orientation, but handles the -location of that textbox relative to the Item label by the -item_alignment option (discussed below). +laid out when displayed. Note that this isn't a purely +stylistic issue better left to the .adp templates or css; the +patterns have semantic implications that the Assessment author +appropriately should control here. Note also that an Item with no +Choices (eg a simple textbox Item) has no choice_orientation, but +handles the location of that textbox relative to the Item label by +the item_alignment option (discussed below).
              1. horizontal - all Choices are in one line
              2. vertical - all Choices are in one column
              3. matrix_col-row - Choices are laid out in matrix, filling first col then row
              4. matrix_row-col -Choices are laid out in matrix, filling first @@ -95,15 +98,16 @@
              5. choice_orientation (see above)
              6. allow_multiple_p - Is it allowed to select multiple values?
              7. sort_order: Numerical, alphabetic, randomized or by order of entry (sort_order field).
            -
          • select - multiple-option displayed in "popup menu"
            • +
            • select - multiple-option displayed in "popup +menu"
              • sort_order: Numerical, alphabetic, randomized or by order of entry (sort_order field).
              • allow_multiple_p - Is it allowed to select multiple values?
            • multiple-choice-other: Consider, for instance, a combo box that consists of a radiobutton plus a textbox -- used for instance when -you need a check "other" and then fill in what that "other" datum -is. In effect this is a single Item but it has two different forms: -a radiobutton and a textbox. +you need to check "other" and then fill in what that +"other" datum is. In effect this is a single Item but it +has two different forms: a radiobutton and a textbox.
              • other_size: size of the other text field.
              • other_label: label (instead of "other").
              • display_type: What display type should be used for the multiple-choice part?
              • @@ -121,11 +125,11 @@ Items:

                • ranking - a set of alternatives each need to be assigned an -exclusive rank ("Indicate the order of US Presidents from bad to -worse"). Is this one Item with multiple Item Choices? Actually, -not, since each alternative has a value that must be separately -stored (the tester would want to know that the testee ranked GWB -last, for instance).
                • ...
                • +exclusive rank ("Indicate the order of US Presidents from bad +to worse"). Is this one Item with multiple Item Choices? +Actually, not, since each alternative has a value that must be +separately stored (the tester would want to know that the testee +ranked GWB last, for instance).
                • ...
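The per-widget attributes discussed in this section (abs_size, choice_orientation, allow_multiple_p, sort_order) could be collected roughly as follows. This is a hypothetical sketch: the table and column names are taken from the discussion above, not from the package's actual schema, which attaches these attributes through the content repository.

```sql
-- Hypothetical sketch only: one flat-table view of the display-type
-- attributes suggested above. The real attributes live on CR item types.
create table as_item_display_types (
    item_display_type_id integer primary key,
    presentation_type    text not null,  -- 'textbox', 'textarea', 'radiobutton', 'checkbox', 'select'
    abs_size             text check (abs_size in ('small', 'medium', 'large')),
    choice_orientation   text,           -- 'horizontal', 'vertical', 'matrix_col-row', 'matrix_row-col'
    allow_multiple_p     boolean default false,
    sort_order           text            -- 'numerical', 'alphabetic', 'randomized', 'entry'
);
```

Keeping these as data rather than hard-coding them in templates is what lets one adp snippet per Item be generated and cached, as described in the Items discussion.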

                Section display

                @@ -138,12 +142,13 @@ The section display page will be made up of the following attributes:
                  -
                • Name: text. Name of the section like "test view sorted"
                • Number of questions per page: integer. THIS HAS TO BE CHANGED +
                • Name: text. Name of the section like "test view +sorted"
                • Number of questions per page: integer. THIS HAS TO BE CHANGED IN THE DATAMODEL FROM PAGINATION_STYLE. How many questions shall be displayed per page in this section. Usually the answer would be -"" for all questions on one page (default), or "1" for one question -per page (aka one question at a time), but any number is -imagineable.
                • +"" for all questions on one page (default), or +"1" for one question per page (aka one question at a +time), but any number is imagineable.
                • ADP style: ADP to choose from that will control the makeup of the section along with the option to create a new one and a link to edit existing ones @@ -159,11 +164,12 @@ broken on purpose and result in an error.
                • Submit Answer separately: boolean. Shall each answer be answered separately, even if we display multiple answers? If yes, -display a "save" button next to each answer along with green "V" if -the answer has been already submitted. To finish the section, you -still have to click on the OK button at the buttom. Once the -section is finished all answers that have not been seperatly -submitted will be treated as not being submitted at all.
                  +display a "save" button next to each answer along with +green "V" if the answer has been already submitted. To +finish the section, you still have to click on the OK button at the +buttom. Once the section is finished all answers that have not been +seperatly submitted will be treated as not being submitted at +all.
                Index: openacs-4/packages/assessment/www/doc/grouping.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/grouping.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/grouping.adp 25 Aug 2015 18:02:18 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/grouping.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -16,23 +16,23 @@

                The primary key assessment_id is a revision_id inherited from cr_revisions. Note, the CR provides two main types of entities -- cr_items and cr_revisions. The latter are where sequential versions -of the former go, while cr_items is where the "current" version of -an entity can be stored, where unchanging elements of an entity are -kept, or where data can be cached. This is particularly useful if -the system needs a single "live" version, but it isn't appropriate -in situations where all versions potentially are equally-important -siblings. In the case of the Assessment package, it seems likely -that in some applications, users would indeed want to designate a -single "live" version, while in many others, they -wouldn't. 

                Attributes of Assessments will include those previously included +of the former go, while cr_items is where the "current" +version of an entity can be stored, where unchanging elements of an +entity are kept, or where data can be cached. This is particularly +useful if the system needs a single "live" version, but +it isn't appropriate in situations where all versions +potentially are equally-important siblings. In the case of the +Assessment package, it seems likely that in some applications, +users would indeed want to designate a single "live" +version, while in many others, they wouldn't. 

                Attributes of Assessments will include those previously included in Surveys plus some others:

                • assessment_id
                • cr:name - a curt name appropriate for urls
                  -
                • cr:title - a formal title to use in page layouts etc
                • creator_id - Who is the "main" author and creator of this -assessment
                • cr:description - text that can appear in introductory web +
                • cr:title - a formal title to use in page layouts etc
                • creator_id - Who is the "main" author and creator of +this assessment
                • cr:description - text that can appear in introductory web pages
                • instructions - text that explains any specific steps the subject needs to follow
                • mode - whether this is a standalone assessment (like current -surveys), or if it provides an "assessment service" to another -OpenACS app, or a "web service" via SOAP etc
                • editable_p - whether the response to the assessment is editable +surveys), or if it provides an "assessment service" to +another OpenACS app, or a "web service" via SOAP etc
                • editable_p - whether the response to the assessment is editable once an item has been responded to by the user.
                • anonymous_p - This shows whether the creator of the assessment will have the possibility to see the personal details of the respondent or not. In @@ -51,7 +51,8 @@
                • return_url - URL the respondent will be redirected to after finishing the assessment. Should be redirected directly if no Thank you page is there. Otherwise the return_url should be set in the -thank you page context, so we can have a "continue" URL.
                • start_time - At what time shall the assessment become available +thank you page context, so we can have a "continue" +URL.
                • start_time - At what time shall the assessment become available to the users (remark: It will only become available to the users who have at least the "respond" privilege).
                • end_time - At what time the assessment becomes unavailable. This is a hard date, any response given after this time will be @@ -132,8 +133,9 @@ admin pages, not for end-user pages
                • instructions - text displayed on user pages
                • enabled_p - good to go?
• required_p - probably not as useful as per-Item required_p, but maybe worth having here; what should it mean, though? That all Items in a required section need to be required? At least one? Maybe this isn't really useful.
• content_value - references cr_revisions: for an image, audio file or video file
• numeric_value - optional "number of points" for the section
• feedback_text - optional preset text to show the user
• max_time_to_complete - optional max number of seconds to perform the Section

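As a concrete illustration of the start_time/end_time and "respond" privilege rules described above, an availability check might look like the following minimal sketch (Python pseudocode for this document only; the function name and parameters are illustrative, not part of the package):

```python
from datetime import datetime

def assessment_available(now: datetime,
                         start_time: datetime,
                         end_time: datetime,
                         has_respond_privilege: bool) -> bool:
    """Sketch of the availability rule: an assessment is open only
    between start_time and end_time (a hard cutoff), and only to
    users holding at least the "respond" privilege."""
    if not has_respond_privilege:
        return False
    return start_time <= now <= end_time
```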
Permissions / Scope: Control of reuse previously was through a shareable_p boolean. As with Items and Assessments, we instead will control this through permissions.

• Section Display Types (as_section_display_types) define types of display for groups of Items. Examples are a "compound question" such as "What is your height", where the response needs to include a textbox for "feet" and one for "inches". Other examples are "grids" of radiobutton multiple-choice Items in which each row is another Item and each column is a shared radiobutton, with the labels for the radiobutton options only displayed at the top of the grid (see the SAQ for an illustration of this).

              This entity is directly analogous in purpose and design to as_item_display_types.

• section_display_type_id
• section_type_name - name like "Vertical Column" or "Depth-first Grid" or "Combo Box"
              • pagination_style - all-items; one-item-per-page; variable (get item groups from mapping table)
• branched_p - whether this Section defines a branch point (so that the navigation procs should look for the next step) or whether this Section simply transitions to the next Section in the sort_order (it may be better not to use this denormalization and instead always look into the Sequencing mechanism for navigation -- we're still fuzzy on this)
• item_orientation - the pattern by which 2..n Items are laid out when displayed. Note that this isn't a purely stylistic issue better left to the .adp templates or css; the patterns have semantic implications that the Assessment author appropriately should control here.
• item_labels_as_headers_p - whether to display labels of the Items; if not, a "grid of radiobuttons" gets displayed. See discussion of Items and Item Choices here. There are contexts where a Section of Items all share the same Choices and should be laid out with the Items' item_subtexts as row headers and the radiobuttons (or checkboxes) only -- without their labels -- displayed in a grid (see this example).
• presentation_type - may actually be superfluous... gotta think more about this, but there's at least one example:
1. ranking - a set of alternatives, each of which needs to be assigned an exclusive rank ("Indicate the order of US Presidents from bad to worse"). Is this one Item with multiple Item Choices? Actually not, since each alternative has a value that must be separately stored (the tester would want to know that the testee ranked GWB last, for instance).
2. what others?
• item_alignment - the orientation between the "section description part" of the Section (if any) and the group of Items. Alternatives accommodate L->R and R->L alphabets (or is this handled automagically by internationalization?) and include:
1. beside_left - the Items are left of the "heading"
2. beside_right - the Items are right of the "heading"
3. below - the Items are below the "heading"
4. above - the Items are above the "heading"
• display_options - field to specify other stuff like the grid dimensions (e.g. "rows=10 cols=50")
• the Item-Section map links each Item to its Section, caches display code, and contains optional overrides for Section and Item attributes:
• item_id
• section_id
• enabled_p
• required_p - whether Item must be answered
• item_default
• content_value - references CR
• numeric_value - where optionally the "points" for the Item can be stored
• feedback_text
• max_time_to_complete
• adp_chunk - display code
• sort_order
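The optional-override semantics of this mapping (a value set in the map wins; an unset override falls back to the value originally authored on the Item) might be sketched like this, with Python dicts standing in for the as_items and mapping table rows (names are illustrative, not the package's API):

```python
def effective_attributes(item_row: dict, map_row: dict) -> dict:
    """Merge an Item's authored attributes with its item-section
    mapping row: any override set (non-None) in the map takes
    precedence; unset overrides fall back to the Item's values."""
    merged = dict(item_row)
    for attr, override in map_row.items():
        if override is not None:
            merged[attr] = override
    return merged
```

For example, a map row of `{"required_p": "t", "feedback_text": None}` makes the Item required within this Section while keeping its originally authored feedback text.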
• Section Assessment Map (as_assessment_section_map) basically is a standard map, though we can override a few Section attributes

Index: openacs-4/packages/assessment/www/doc/index.adp

                Introduction

The Assessment Package unites the work and needs of various members of the OpenACS community for data collection functionality within the OpenACS framework. We're using the term "Assessment" instead of "Survey" or "Questionnaire" (or "Case Report Form" aka CRF, the term used in clinical trials) because it is a term used by IMS and because it connotes the more generic nature of the data collection system we're focusing on.

                There has been considerable recent interest in expanding the capabilities of generic data collection packages within OpenACS. Identified applications include:

                @@ -35,11 +36,11 @@ Interchange Standards Consortium) formed a few years ago and is developing data models for clinical trials data derived from schema contributed primarily by Phase Forward and PHT. These vendors -provide "electronic data capture" (EDC) services at considerable -cost -- a 18 month study of 2500 patients including about 500 data -elements costs nearly $500,000. There is clearly interest and -opportunity to craft systems that bring such costs "in house" for -organizations doing clinical research.

                +provide "electronic data capture" (EDC) services at +considerable cost -- a 18 month study of 2500 patients including +about 500 data elements costs nearly $500,000. There is clearly +interest and opportunity to craft systems that bring such costs +"in house" for organizations doing clinical research.

• Data collection services for other OpenACS packages. Most other OpenACS packages invoke some form of data collection from users. While developments such as ad_form and the templating system in

                Several OpenACS efforts form the context for any future work. These include:


Index: openacs-4/packages/assessment/www/doc/item_types.adp
              • Open Question

Open questions are text input questions for free text. For obvious reasons they cannot be auto-corrected. The difference between an "Open Question" and a "Short Answer" Item is that Open Questions accept alphanumeric data from a user and only undergo manual "grading" by an admin user through comparison with "correct" values configured during Assessment authoring. Open Questions can either be short (textbox) or long (textarea) elements in the HTML form. Here are several configuration options the authoring environment will support (in addition to many others, such as alignment/orientation, required/not required, etc.):

                  • Size of the reply box: Radio buttons to set size of textbox: small/medium/large; or text area
• Prefilled Answer Box: richtext widget. The content of this field will be prefilled in the response of the user taking the assessment.

the correct answer box. This would be shown to the manual corrector to quickly choose from when manually scoring the answer. What kind of comments would these be? Should they be categorized entries in the "message" system that admin users would populate over time, that would be stuck into the authoring UI dynamically during Assessment creation?
• Short Answer Item:

compare functions to compare the output. The creation of a short answer question will trigger entries into the as_item check tables. In addition to supporting automated validation/grading, this item type differs from "Open Questions" in that only textboxes are supported -- meaning short answers, no textarea essays.

                    • Intro_label: textarea. This contains the leading text that will be presented before the first answerbox.
                    • Extro_label: textarea. This contains the trailing text.
                    • Number of Answerboxes: integer. Number of answerboxes presented to the user.
                    • Size: Integer Select: size of the input box (small, medium, large)
• Compare by: Select (equal, contains, regexp). This defines how the comparison between the answer string and the response shall happen.
• Regexp: Textarea. If the compare-by is "regexp", this field contains the actual regexp.
• sort_order
• Allow in answerbox: This defines which answerbox the answer check of the short_answer is compared to. If we have four answerboxes and the question was "Name four European Capitals of EU members", then you would have 25 correct answers which could be given in any answerbox. If the question was "Name four European Capitals of EU members ordered by Name", then you'd only have four answers "Athens, Berlin, Copenhagen,
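The three compare-by modes described above (equal, contains, regexp) can be sketched as follows. This is a minimal Python illustration, not the package's Tcl implementation; case-insensitive matching is a design choice of this sketch, not something the document specifies:

```python
import re

def matches(response: str, correct: str, compare_by: str) -> bool:
    """Check a short-answer response against one configured
    "correct" value using the selected compare mode."""
    resp = response.strip().lower()
    corr = correct.strip().lower()
    if compare_by == "equal":
        return resp == corr
    if compare_by == "contains":
        return corr in resp
    if compare_by == "regexp":
        return re.search(corr, resp) is not None
    raise ValueError(f"unknown compare mode: {compare_by}")
```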
left with pull-down menus on the right hand side of a survey. The number of the items is identical to the number of items on the right hand side. This also appears to be a Section of Items; each Item consists of a single "phrase" against which it is to be associated with one of a set of potential choices (displayed via a select widget; could be a radiobutton too). If there are several such matchings (three phrases <-> three items in the popup select) then this is a Section with three Items. The UI for this needs to be in section-edit, not item-edit.

options. This would add a couple of fields:
• To each answer: Fixed position: Select Box. Choose the mandatory position when displaying the option (e.g. "none of the above").
• Number of correct answers: integer, defining how many correct options have to be displayed. Check if enough correct answers have been defined.
                      • Number of answers: integer, defining how many options shall be displayed in total (correct and incorrect). Check if enough answers are available.
• Display of options: numerical, alphabetic, randomized, or by order of entry.
• All radio button Items must have a "clear" button that unsets all the radiobuttons for the item. (For that matter, every Section and every Assessment also must have "clear" buttons. Fairly trivial with JavaScript.)
Note that one special type of "multiple choice" question consists of choices that are created by a database select. For instance: a question like "Indicate your state" will have a select widget that displays all state names obtained from the states table in OpenACS.
• Rank question:

Rank questions ask for the answers to be ranked. This appears to me to be a special case of the "matching question" in which the select options are ordinal values, not arbitrary strings.

                      • Rank Type: Boolean (alphabetic, numeric). Shall the rank be from a to z or from 1 to n.
                      • Only unique rank: Boolean (yes/no). Shall the ranking only allow unique ranks (like 1,2,3,5,6 instead of 1,2,2,4,5)
• Straight order: Boolean (alphabetic, numeric). Shall the rank
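The "only unique rank" option above amounts to a simple validation on the submitted ranks. A minimal sketch (the function name is illustrative, not part of the package):

```python
def ranks_valid(ranks: list[int], unique_only: bool) -> bool:
    """Validate a submitted ranking: when unique_only is set,
    no rank value may occur twice (1,2,3,5,6 is acceptable;
    1,2,2,4,5 is not)."""
    if unique_only:
        return len(set(ranks)) == len(ranks)
    return True
```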

                        The idea here is a "question" consisting of a group of questions. We include it here because to many users, this does appear to be a "single" question.

However, it is actually more appropriately recognized to be a "section" because it is a group of questions, a response to each of which will need to be separately stored by the system. Further, this is in fact a display option for the section that could reasonably be used for any Item Type. For instance, there are situations where an Assessment author may want to group a set of selects, or radiobuttons, or small textboxes, etc.

• Composite matrix-based multiple response item:

Index: openacs-4/packages/assessment/www/doc/page_flow.adp

                        Overview

Through the OpenACS templating system, the UI look&feel will be modifiable by specific sites, so we needn't address page layout and graphical design issues here, other than to mention that the Assessment package will use these OpenACS standards:

                        • "trail of breadcrumb" navigational links
• context-aware (via user identity => permissions) menu options (whether those "menus" are literally menus or some other interface widget like toolbars)
• in-place, within-form user feedback (e.g. error messages about a form field directly next to that field, not in an "error page")
package. We need to be able to create, edit and delete all the constituent entities in the Package. The boundary between the pages belonging specifically to Assessment and those belonging to "calling" packages (e.g. dotLRN, clinical trials packages, financial management packages, etc.) will necessarily be somewhat blurred.

                        Proposed Page Flow

Nevertheless, here is a proposed set of pages along with very brief descriptions of what happens in each. This organization is actually derived mostly from the existing Questionnaire module, which can be examined here in the "Bay Area OpenACS Users Group" (add yourself to the group and have a look).

                        The UI for Assessment divides into a number of primary functional areas, as diagrammed below. These include:

• the "Home" area (for lack of a better term). These are the main index pages for the user and admin sections
                        • Assessment Authoring: all the pages involved in creating, editing, and deleting the Assessments themselves; these are all admin pages
                        • Assessment Delivery: all the pages involved in deploying a given Assessment to users for completion, processing those results, etc; these are user pages
• Assessment Review: all the pages involved in selecting data extracts and displaying them in whatever formats are indicated; this includes "grading" of an Assessment -- a special case of data review; these are admin pages, though there also needs to be some access to data displays for general users as well (e.g. for anonymous surveys etc.). Also, this is where mechanisms that return information to "client" packages that embed an Assessment would run.
• Session Management: pages that set up the timing and other "policies" of an Assessment. This area needs to interact with the next one in some fashion, though exactly how this occurs needs to be further thought through, depending on where the Site Management mechanisms reside.
• Site Management: pages involved in setting up who does Assessments. These are admin pages and actually fall outside the Assessment package per se. How dotLRN wants to interact with Assessment is probably going to be different from how a Clinical

Index: openacs-4/packages/assessment/www/doc/policies.adp

perform Assessment
                        • interruptable_p - whether user can "save&resume" session
• data_entry_mode - (presumes that the necessary UI output procs are implemented in the APIs) to produce different deployment formats: standard web page, handheld gizmo, kiosk "one question at a time", AVR over phone, etc.
• consent_required_p - whether subjects must give formal consent before doing Assessment
                        • consent - optional text to which the subject needs to agree before doing the Assessment (this may be more appropriate to abstract to Assessment-Events)
                        • logo - optional graphic that can appear on each page
• electronic_signature_p - whether subject must check an "attestation box" and provide password to "sign"
• digital_signature_p - whether, in addition to the electronic signature, the response must be hashed and encrypted
• shareable_p - whether Policy is shareable; defaults to 't' since this is the whole intent of this "repository" approach, but authors should have the option to prevent reuse
• feedback_text - where optionally some preset feedback can be specified by the author
• double_entry_p - do two staff need to enter data before it's accepted?
                        • require_annotations_with_rev_p - is an annotation required if a user modifies a submitted response?
• Assessment Events (as_assessment_events) define a planned, scheduled or intended "data collection event". It abstracts out from Assessment Policies details that define specific instances of an Assessment's deployment. Attributes include:
• event_id
• name
• description
• instructions
• target_days_post_enroll - an interval after the "enrollment" date, which could be the time a subject is enrolled in a trial or the beginning of a term
                        • optimal_days_pre - along with the next attribute, defines a range of dates when the Assessment should be performed (if zero, then the date must be exact)
                        • optimal_days_post
                        • required_days_pre - as above, only the range within which the Assessment must be performed
                        • required_days_post
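Taken together, these day-offset attributes define a target date plus optimal and required windows around it. A sketch of how a scheduler might derive the concrete dates (Python; function and parameter names are illustrative, not the package's API):

```python
from datetime import date, timedelta

def event_dates(enrollment: date,
                target_days_post_enroll: int,
                optimal_days_pre: int, optimal_days_post: int,
                required_days_pre: int, required_days_post: int):
    """Derive the target date and the optimal/required date ranges
    for an Assessment Event from its day-offset attributes. A pre-
    and post-offset of zero means the date must be exact."""
    target = enrollment + timedelta(days=target_days_post_enroll)
    optimal = (target - timedelta(days=optimal_days_pre),
               target + timedelta(days=optimal_days_post))
    required = (target - timedelta(days=required_days_pre),
                target + timedelta(days=required_days_post))
    return target, optimal, required
```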
• Data model

Index: openacs-4/packages/assessment/www/doc/requirements.adp

system under development.

                          Use Cases

The assessment module in its simplest form is a dynamic information gathering tool. This can be clearly seen in the first group of use cases, which deal with surveys (one form of assessment, e.g. for quality assurance or clinical trials). An extension of this information gathering is the possibility to conduct an evaluation of the information given, as we show in the second group of use cases (testing scenarios). Last but not least, the assessment tool should be able to provide its information gathering features to other packages within the OpenACS framework as well.

It is very important to note that not all parameters and

with all the questions the author added to the survey.

                          Quality Assurance

A company wants to get feedback from users about its product. It creates a survey which offers branching (to prevent users from filling out unnecessary data; e.g. if you answered that you have never been to Europe, the question "Have you seen Rome?" should not show up) and multi-dimensional Likert scales (to ask about the quality and importance of a part of the product in conjunction).

                          Professional data entry

A clinic wants to conduct a trial. For this, research assistants are

                          Multiple languages

The quality assurance team of the company mentioned above realizes that the majority of its user base is not native English speakers. This is why they want to add additional translations of the questions to broaden the response base. For consistency, the assessment may only be shown to the subject if all questions used have been translated. Furthermore it is necessary to store the language used along with the response (as a translation might not

immediately as a percentage score in a table comparing that score to other users. Users should be able to answer only a part of the possible questions each time. If the user is in the top 2%, offer him the contact address of "Mensa"; other percentages should give encouraging text.

                          Scoring

The computer science department has a final exam for the students.

two sections only 30% towards the total score. Each section consists of multiple questions that have a different weight (in percent) for the total score of the section. The sum of the weights has to be 100%, otherwise the author of the section gets a warning. Some of the questions are multiple choice questions that get different percentages for each answer. As the computer science department wants to discourage students from giving wrong answers,
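The weighted scoring rule in this use case (per-question weights in percent that must sum to 100%) might be sketched as follows. This is an illustrative Python sketch only; it raises an error where the authoring UI would merely warn:

```python
def section_score(weighted_scores: list[tuple[float, float]]) -> float:
    """Compute a section's total from (weight_percent, question_score)
    pairs; the weights must sum to 100%, mirroring the authoring-time
    check described in the use case above."""
    weights = [w for w, _ in weighted_scores]
    if abs(sum(weights) - 100.0) > 1e-9:
        raise ValueError("question weights must sum to 100%")
    return sum(w / 100.0 * score for w, score in weighted_scores)
```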

                          Action driven questions

The company conducting the QA wants to get more participants to -its survey by recommendation. For this each respondent is asked at -the end of the survey if he would recommend this survey to other +its survey by recommendation. For this each respondent is asked +at the end of the survey if he would recommend this survey to other users (with the option to give the email addresses of these users). The answer will be processed and an email sent out to all given emails inviting them to take the survey. @@ -320,8 +321,8 @@ Assessment or set of Assessments to a specific set of subjects, students or other data entry personnel. These actions potentially will involve interfacing with other Workflow management tools (e.g. -an "Enrollment" package that would handle creation of new Parties -(aka clinical trial subjects) in the database.

                          +an "Enrollment" package that would handle creation of new +Parties (aka clinical trial subjects) in the database.

                          Schedulers could also be teachers, curriculum designers, site coordinators in clinical trials, etc.

                          Analyst

                          @@ -336,35 +337,35 @@ completing a health-related quality-of-life instrument to track her health status. Subjects need appropriate UIs depending on Item formats and technological prowess of the Subject -- kiosk -"one-question-at-a-time" formats, for example. May or may not get -immediate feedback about data submitted.

                          +"one-question-at-a-time" formats, for example. May or may +not get immediate feedback about data submitted.

                          Subjects could be students, consumers, or patients.

                          Data Entry Staff

Has permissions to create, edit and delete data for or about the -"real" Subject. Needs UIs to speed the actions of this trained -individual and support "save and resume" operations. Data entry -procedures used by Staff must capture the identity of both the -"real" subject and the Staff person entering the data -- for audit -trails and other data security and authentication functions. Data -entry staff need robust data validation and integrity checks with -optional, immediate data verification steps and electronic -signatures at final submission. (Many of the tight-sphinctered -requirements for FDA submissions center around mechanisms -encountered here: to prove exactly who created any datum, when, -whether it is a correct value, whether anyone has looked at it or -edited it and when, etc etc...)

+&quot;real&quot; Subject. Needs UIs to speed the actions of this +trained individual and support &quot;save and resume&quot; +operations. Data entry procedures used by Staff must capture the +identity of both the &quot;real&quot; subject and the Staff person +entering the data -- for audit trails and other data security and +authentication functions. Data entry staff need robust data +validation and integrity checks with optional, immediate data +verification steps and electronic signatures at final submission. +(Many of the tight-sphinctered requirements for FDA submissions +center around mechanisms encountered here: to prove exactly who +created any datum, when, whether it is a correct value, whether +anyone has looked at it or edited it and when, etc etc...)

                          Staff could be site coordinators in clinical trials, insurance adjustors, accountants, tax preparation staff, etc.

                          System / Application Overview

                          Editing of Assessments

• Manage the structure of Assessments -- the organization of -series of questions (called "Items") into Sections (defined -logically in terms of branch points and literally in terms of -"Items presented together on a page"), along with all other -parameters that define the nature and function of all Assessment -components.
• Create, edit and delete Assessments, the highest level in the +series of questions (called &quot;Items&quot;) into Sections +(defined logically in terms of branch points and literally in terms +of &quot;Items presented together on a page&quot;), along with all +other parameters that define the nature and function of all +Assessment components.
                          • Create, edit and delete Assessments, the highest level in the structure hierarchy. Configure Assessment attributes:
                            • Assessment name, description, version notes, instructions, @@ -395,10 +396,11 @@ Assessment, including editing of the Assessment itself, access to collected Assessment data, and control of scheduling procedures.
                            • A "clear" button to wipe all user input from an -Assessment.
                            • A "printer-friendly" version of the Assessment so that it can -be printed out for contexts in which users need to complete it on -paper and then staff people transcribe the answers into the web -system (yes, this actually is an important feature).
                            • +Assessment.
                            • A "printer-friendly" version of the Assessment so +that it can be printed out for contexts in which users need to +complete it on paper and then staff people transcribe the answers +into the web system (yes, this actually is an important +feature).
                          • Create, edit, clone and delete Sections -- the atomic grouping unit for Items. Configure Section attributes: @@ -410,46 +412,50 @@ Items.
                          • Item data integrity checks: rules for checking for expected relationships among data submitted from two or more Items. These define what are consistent and acceptable responses (ie if Item A -is "zero" then Item B must be "zero" as well for example).
                          • Navigation criteria among Items within a Section -- including +is "zero" then Item B must be "zero" as well +for example).
                          • Navigation criteria among Items within a Section -- including default paths, randomized paths, rule-based branching paths responding to user-submitted data, including possibly looping paths.
                          • Any time-based attributes (max time allowed for Section, -minimum time allowed)
                          • A "clear" button to clear all user values in a Section.
                          • +minimum time allowed)
                          • A "clear" button to clear all user values in a +Section.
                        • Create, edit, clone and delete Items -- the individual "questions" themselves. Configure Item attributes:
                          • Item data types: integer, numeric, text, boolean, date, or uploaded file
                          • Item formats: radio buttons, checkboxes, textfields, textareas, selects, file boxes.
                          • Item values: the label, instructions, feedback text (for use -during "grading") etc displayed with the Item either during the -subject's performance of the Assessment or the.
                          • Item designation (a "field code") to include in data +during "grading") etc displayed with the Item either +during the subject's performance of the Assessment or the.
                          • Item designation (a "field code") to include in data reporting
                          • Item defaults: configure a radio button choice that will be checked when the Assessment first displays, a text that will appear, a date that will be set, etc.
                          • Item data validation checks: correct data type; range checks for integer and numeric types; regexp matching for text types (eg accept only valid phone numbers) along with optional case-sensitivity during text validation; valid file formats for -uploaded files. Note: the designation of "the correct answer" in -the educational context of testing is a special case of data -validation checks. +uploaded files. Note: the designation of "the correct +answer" in the educational context of testing is a special +case of data validation checks.

Note also: need to support three-value logic regarding the existence of any single Item datum: null value means the Item -hasn't been dealt with by the responder; "unknown" value means that the -Item has been answered but the responder doesn't know the value; actual -value (of proper type) means that the responder has found and -submitted a value.

                            -
                          • Database-derived stock Items (eg, "country widgets", "state -widgets", etc).
• Item-specific feedback: configurable text/sound/image that can +hasn't been dealt with by the responder; &quot;unknown&quot; value +means that the Item has been answered but the responder doesn't +know the value; actual value (of proper type) means that the responder +has found and submitted a value.

                            +
                          • Database-derived stock Items (eg, "country widgets", +"state widgets", etc).
                          • Item-specific feedback: configurable text/sound/image that can be returned to user based on user response to Item.
                          • Any time-based attributes (max time allowed for Item, minimum -time allowed).
                          • Support of combo-box "other" choice in multiple-choice Items -(ie, if user selects a radiobutton or checkbox option of "other" -then the textbox for typed entry gets read; if user doesn't select -that choice, then the textbox is ignored).
                          • A "clear Item" button for each Item type that can't be directly -edited by user.
                          • +time allowed).
                          • Support of combo-box "other" choice in +multiple-choice Items (ie, if user selects a radiobutton or +checkbox option of "other" then the textbox for typed +entry gets read; if user doesn't select that choice, then the +textbox is ignored).
                          • A "clear Item" button for each Item type that +can't be directly edited by user.
                          -
                        • Create, edit, clone and delete Item Choices -- the "multiple -choices" for radiobutton and checkbox type Items: +
                        • Create, edit, clone and delete Item Choices -- the +"multiple choices" for radiobutton and checkbox type +Items:
                          • Choice data types: integer, numeric, text, boolean
                          • Choice formats: horizontal, vertical, grid
                          • Choice values: labels, instructions, numeric/text encoded values
                          • Choice-specific feedback: configurable text/sound/image that @@ -462,54 +468,57 @@
                            • Scoring Algorithms: names and arithmetic calculation formulae to operate on submitted data when the form returns to the server. -These include standard "percent correct -> letter grade" grading -schemes as well as formal algorithms like Likert scoring -(conversion of ordinal responses to 0-100 scale scores).
                            • Names and descriptions of Scales -- the output of Algorithm +These include standard "percent correct -> letter +grade" grading schemes as well as formal algorithms like +Likert scoring (conversion of ordinal responses to 0-100 scale +scores).
                            • Names and descriptions of Scales -- the output of Algorithm calculations.
                            • Mapping of Items (and/or other Scales) to calculate a given Scale Scores.
                            • Define data retrieval and display alternatives: tabular display in web page tables; tab-delimited (or CSV etc) formats; graphical -displays (when appropriate).
                            • Note: manual "grading by the teacher" is a special case of -post-submission Assessment Processing in that no automated +displays (when appropriate).
                            • Note: manual "grading by the teacher" is a special +case of post-submission Assessment Processing in that no automated processing occurs at all; rather, an admin user (the teacher) -retrieves the subject's responses and interacts with the subject's -data by in effect annotating it ("This answer is wrong" "You are -half right here" etc). Such annotations could be via free text or -via choices configured during editing of Items and Choices (as -described above).
                            • +retrieves the subject's responses and interacts with the +subject's data by in effect annotating it ("This answer is +wrong" "You are half right here" etc). Such +annotations could be via free text or via choices configured during +editing of Items and Choices (as described above).

                            Note that there are at least three semantically distinct concepts of scoring, each of which the Assessment package should support and have varying levels of importance in different contexts. Consider:

                              -
• Questions may have a "correct" answer against which a subject's -response should be compared, yielding some measure of a "score" for -that question varying from completely "wrong" to completely -"correct". The package should allow Editors to specify the nature -of the scoring continuum for the question, whether it's a -percentage scale ("Your response is 62% correct") or a nominal -scale ("Your response is Spot-on" "Close but No Cigar" "How did you -get into this class??")
                            • Raw responses to questions may be arithmetically compiled into +
• Questions may have a &quot;correct&quot; answer against which a +subject's response should be compared, yielding some measure of +a &quot;score&quot; for that question varying from completely +&quot;wrong&quot; to completely &quot;correct&quot;. The package +should allow Editors to specify the nature of the scoring continuum +for the question, whether it's a percentage scale (&quot;Your +response is 62% correct&quot;) or a nominal scale (&quot;Your +response is Spot-on&quot; &quot;Close but No Cigar&quot; &quot;How +did you get into this class??&quot;)
                            • Raw responses to questions may be arithmetically compiled into some form of Scale, which is the real output of the Assessment. This is the case in the health-related quality-of-life measures -demo'd here. There is -no "correct" answer as such for any subject's responses, but all -responses are combined and normalized into a 0-100 scale.
• Scoring may involve summary statistics over multiple responses -(one subject's over time; many subjects' at a single time; etc). -Such "scoring" output from the Assessment package pertains to -either of the two above notions. This is particularly important in -educational settings.
                            • +demo'd here. +There is no "correct" answer as such for any +subject's responses, but all responses are combined and +normalized into a 0-100 scale.
• Scoring may involve summary statistics over multiple responses +(one subject's over time; many subjects' at a single +time; etc). Such &quot;scoring&quot; output from the Assessment package +pertains to either of the two above notions. This is particularly +important in educational settings.
                          • Create, edit, clone and delete Repositories of Assessments, Sections and Items. Configure:
                              -
                            • Whether a Repository is shareable, and how/with whom.
                            • Whether a Repository is cloneable, and how/with whom.
                            • Note: this is the concept of a "Question Catalog" taken to its -logical end -- catalogs of all the organizational components in an -Assessment. In essence, the Assessment package is an Assessment -Catalog. (The CR is our friend here ;-)
                            • Versioning is a central feature of this repository; multiple -"live" versions of any entity should be supported, with attributes -(name, version notes, version creation dates, version author, scope --- eg subsite/group/etc) to make it possible to identify, track and -select which version of any entity an Assessment editor wants to -use.
                            • +
                            • Whether a Repository is shareable, and how/with whom.
                            • Whether a Repository is cloneable, and how/with whom.
                            • Note: this is the concept of a "Question Catalog" +taken to its logical end -- catalogs of all the organizational +components in an Assessment. In essence, the Assessment package is +an Assessment Catalog. (The CR is our friend here ;-)
                            • Versioning is a central feature of this repository; multiple +"live" versions of any entity should be supported, with +attributes (name, version notes, version creation dates, version +author, scope -- eg subsite/group/etc) to make it possible to +identify, track and select which version of any entity an +Assessment editor wants to use.
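The Likert scoring mentioned earlier (conversion of ordinal responses to 0-100 scale scores) is, in its simplest form, a linear rescaling; a sketch under the assumption that responses are coded 1..n:

```python
def likert_scale_score(responses, n_choices=5):
    """Linearly rescale ordinal responses (each coded 1..n_choices)
    onto a 0-100 scale score, as Likert-style instruments typically do."""
    raw = sum(responses)
    lowest = len(responses)                # every item answered 1
    highest = len(responses) * n_choices   # every item answered n_choices
    return 100.0 * (raw - lowest) / (highest - lowest)
```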
                          @@ -525,17 +534,17 @@
                      • Provide these additional functions:
                          -
                        • Support optional "electronic signatures" consisting simply of -an additional password field on the form along with an "I attest -this is my response" checkbox that the user completes on submission -(rejected without the correct password) -- ie authentication -only.
                        • Support optional "digital signatures" consisting of a hash of -the user's submitted data, encrypted along with the user's password --- ie authentication + nonrepudiation.
                        • Perform daily scheduled procedures to look for Subjects and +
                        • Support optional "electronic signatures" consisting +simply of an additional password field on the form along with an +"I attest this is my response" checkbox that the user +completes on submission (rejected without the correct password) -- +ie authentication only.
                        • Support optional "digital signatures" consisting of a +hash of the user's submitted data, encrypted along with the +user's password -- ie authentication + nonrepudiation.
                        • Perform daily scheduled procedures to look for Subjects and Staff who need to be Invited/Instructed or Reminded to participate.
                        • Incorporate procedures to send Thanks notifications upon completion of Assessment
                        • Provide UIs for Subjects and for Staff to show the status of -the Assessments they're scheduled to perform -- eg a table that +the Assessments they're scheduled to perform -- eg a table that shows expected dates, actual completion dates, etc.
                      • @@ -557,16 +566,17 @@
                      • Handle user Login (for non-anonymous studies)
                      • Determine and display correct UI for type of user (eg kiosk format for patients; keyboard-centric UI for data entry Staff)
                      • Deliver Section forms to user
                      • Perform data validation and data integrity checks on form submission, and return any errors flagged within form
                      • Display confirmation page showing submitted data (if -appropriate) along with "Edit this again" or "Yes, Save Data" -buttons
                      • Display additional "electronic signature" field for password -and "I certify these data" checkbox if indicated for -Assessment
                      • Process sequence navigation rules based on submitted data and +appropriate) along with "Edit this again" or "Yes, +Save Data" buttons
                      • Display additional "electronic signature" field for +password and "I certify these data" checkbox if indicated +for Assessment
                      • Process sequence navigation rules based on submitted data and deliver next Section or terminate event as indicated
                      • Track elapsed time user spends on Assessment tasks -- answering a given question, a section of questions, or the entire Assessment --- and do something with this (we're not entirely sure yet what +-- and do something with this (we're not entirely sure yet what this should be -- merely record the elapsed time for subsequent analysis, reject over-time submissions, or even forcibly refresh a -laggard user's page to "grab the Assessment back")
                      • Insert appropriate audit records for each data submission, if +laggard user's page to "grab the Assessment +back")
                      • Insert appropriate audit records for each data submission, if indicated for Assessment
                      • Handle indicated email notifications at end of Assessment (to Subject, Staff, Scheduler, or Editor)
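The &quot;electronic/digital signature&quot; ideas above can be sketched with a keyed hash over the submitted data. This is an illustration only: real nonrepudiation would require asymmetric keys rather than the user's password, and the field names here are hypothetical.

```python
import hashlib
import hmac

def sign_submission(password, fields):
    """Keyed hash over submitted form data, keyed by the user's password.
    Fields are sorted so the digest is stable regardless of form order."""
    canonical = "&".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return hmac.new(password.encode(), canonical.encode(),
                    hashlib.sha256).hexdigest()

def verify_submission(password, fields, signature):
    """Reject when the password (or the data itself) does not match."""
    return hmac.compare_digest(sign_submission(password, fields), signature)
```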
                      Index: openacs-4/packages/assessment/www/doc/sequencing.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/sequencing.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/sequencing.adp 25 Aug 2015 18:02:19 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/sequencing.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -7,26 +7,27 @@ vexing problem confronting the Assessment package is how to handle conditional navigation through an Assessment guided by user input. Simple branching has already been accomplished in the "complex -survey" package via hinge points defined by responses to single -items. But what if branching/skipping needs to depend on +survey" package via hinge points defined by responses to +single items. But what if branching/skipping needs to depend on combinations of user responses to multiple items? And how does this relate to management of data validation steps? If branching/skipping depends not merely on what combination of -"correct" or "in range" data the user submits, but also on -combinations of "incorrect" or "out of range" data, how the heck do -we do this?

                      +"correct" or "in range" data the user submits, +but also on combinations of "incorrect" or "out of +range" data, how the heck do we do this?

One basic conceptual question is whether Data Validation is a distinct process from Navigation Control or not. Initially we thought it was and that there should be a datamodel and set of procedures for checking user input, the output of which would pipe to a separate navigation datamodel and set of procedures for -determining the user's next action. This separation is made (along -with quite a few other distinctions/complexities) in the IMS -"simple sequencing" model diagrammed below. But to jump the gun a -bit, we think that actually it makes sense to combine these two -processes into a common "post-submission user input processing" -step we'll refer to here as Sequencing. (Note: we reviewed several -alternatives in the archived prior discussions +determining the user's next action. This separation is made +(along with quite a few other distinctions/complexities) in the IMS +&quot;simple sequencing&quot; model diagrammed below. But to jump +the gun a bit, we think that actually it makes sense to combine +these two processes into a common &quot;post-submission user input +processing&quot; step we'll refer to here as Sequencing. (Note: +we reviewed several alternatives in the archived prior discussions + here.)

                      So here is our current approach. We note that there are two scopes @@ -44,21 +45,22 @@

                      Specific Entities

• Item-checks (as_item_checks) define 1..n ordered evaluations of -a user's response to a single Item. These can occur either via +a user's response to a single Item. These can occur either via client-side JavaScript when the user moves focus from the Item, or server-side once the entire HTML form comes back. They are associated (related) to as_items.

The goal is to have a flexible, expressive grammar for these checks to support arbitrary types of checks, such as input -validation ("Is the user's number within bounds?"; "Is that a -properly formatted phone number?"). One notion centers on check_sql. -Instead of using comparators we store the whole SQL command that -makes up this check with a predefined variable "value" that -contains the response of the user to the item the item_check is -related to. If we want to make sure the value is between 0 and 1 we -store "0 < :value < 1" with the check. Once an item is -submitted, the system looks up the related checks for this item and -replaces in each of them ":value" with the actual response.
                        +validation (&quot;Is the user's number within bounds?&quot;; +&quot;Is that a properly formatted phone number?&quot;). One notion +centers on check_sql. Instead of using comparators we store the whole SQL +command that makes up this check with a predefined variable +&quot;value&quot; that contains the response of the user to the +item the item_check is related to. If we want to make sure the +value is between 0 and 1 we store &quot;0 &lt; :value &lt; 1&quot; +with the check. Once an item is submitted, the system looks up the +related checks for this item and replaces in each of them +&quot;:value&quot; with the actual response.
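The :value substitution could look roughly like this sketch (hypothetical names; a real implementation would bind the value as a database query parameter, and note that a chained comparison such as 0 &lt; :value &lt; 1 happens to be valid Python but is not standard SQL):

```python
def run_item_check(check_sql, response):
    """Replace the predefined :value placeholder with the user's
    response and evaluate the resulting expression."""
    expression = check_sql.replace(":value", repr(response))
    # eval() is for illustration only -- never eval untrusted input.
    return bool(eval(expression))
```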

                        Item Checks thus will have these attributes:

                        • item_check_id
                        • cr:name - identifier
                        • cr:description - Explanation what this check does
                        • check_location - client-side or server-side
• javascript_function - name of function that gets called when @@ -74,18 +76,20 @@ The goal is to have a way of telling if a section (or an item within a section) should be displayed or not depending on the section-checks. This way you could say that you only display this -section if the response to item(1234) "Color of your eye" was -"blue" and the response to item(4231) "Color of your hair" was -"red". Sadly we can't use such an easy way of checking the ":value" -as we do with item_checks, as we do not know which item this refers -to. Instead we store the item_id like this ":item_1234". This way -the check_sql would look like ":item_1234 == 'blue' AND :item_4231 -== 'red'". Additionally other variables might be defined by the API -at a later stage,&nbsp; e.g. ":percent_score", which would be -replaced by the current percentage value (aka score) that the subject -had in the test so far (taken from the as_session_table). It might -be interesting to pass these variables along in the API; this -remains to be seen when actually implementing the system.

+section if the response to item(1234) &quot;Color of your eye&quot; +was &quot;blue&quot; and the response to item(4231) &quot;Color of +your hair&quot; was &quot;red&quot;. Sadly we can't use such an +easy way of checking the &quot;:value&quot; as we do with +item_checks, as we do not know which item this refers to. Instead +we store the item_id like this &quot;:item_1234&quot;. This way the +check_sql would look like &quot;:item_1234 == 'blue' AND +:item_4231 == 'red'&quot;. Additionally other variables +might be defined by the API at a later stage,&nbsp; e.g. +&quot;:percent_score&quot;, which would be replaced by the current +percentage value (aka score) that the subject had in the test so far +(taken from the as_session_table). It might be interesting to pass +these variables along in the API; this remains to be seen when +actually implementing the system.
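The :item_&lt;id&gt; substitution might be sketched like this (hypothetical names; SQL's AND/OR are mapped to Python's operators purely so the expression can be evaluated in this illustration):

```python
import re

def run_section_check(check_sql, responses):
    """responses maps item_id -> the subject's answer, e.g. {1234: "blue"}.
    Each :item_<id> placeholder is replaced before evaluation."""
    expression = re.sub(r":item_(\d+)",
                        lambda m: repr(responses[int(m.group(1))]),
                        check_sql)
    expression = expression.replace(" AND ", " and ").replace(" OR ", " or ")
    # eval() is for illustration only -- never eval untrusted input.
    return bool(eval(expression))
```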

                          The Inter Item Checks also allow post section navigation (in contrast to the pre section / item navigation mentioned above). If post_check_p is true, the check will be done *after* the user has @@ -104,8 +108,9 @@
                        • section_id - Section to call if we are in a section mode (all items will be displayed in sections) and it is a post_check.
                          -
• item_id - Item to call if we are in "per item" mode (all items -will be displayed on a separate page) and it is a post_check.
                        • Potential extension: item_list - list of item_ids that are used +
• item_id - Item to call if we are in &quot;per item&quot; mode +(all items will be displayed on a separate page) and it is a +post_check.
                        • Potential extension: item_list - list of item_ids that are used in the check_sql to speed up the check.

                        Index: openacs-4/packages/assessment/www/doc/versioning.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/versioning.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/versioning.adp 25 Aug 2015 18:02:19 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/versioning.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -5,9 +5,10 @@

                        Overview

                        This topic requires special mention because it is centrally important to Assessment and one of the most radical departures from -the current packages (in which "surveys" or "questionnaires" are -all one-shot affairs that at best can be cloned but not readily -modified in a controlled fashion).

                        +the current packages (in which "surveys" or +"questionnaires" are all one-shot affairs that at best +can be cloned but not readily modified in a controlled +fashion).

During its lifetime, an Assessment may undergo revisions in the midst of data collection. These revisions may be minor (change of a label on an Item or addition of a new Choice to an Item) or major @@ -18,22 +19,23 @@ protocols change; teachers alter their exams from term to term. And still, there is a crucial need to be able to assemble and interpret data collected across all these changes.

                        -

                        Another type of "revision" occurs when a component (an Item -Choice, Item, Section, or the entire Assessment) needs to be +

                        Another type of "revision" occurs when a component (an +Item Choice, Item, Section, or the entire Assessment) needs to be translated into another language. Even if the semantics of the component are identical (and they should be or you need a better translator ;-), the Assessment package needs to handle this -situation correctly: an admin user needs to be able to "assign" the -right language version to a set of subjects, and the returned user -data need to be assembled into trans-language data sets.

                        +situation correctly: an admin user needs to be able to +"assign" the right language version to a set of subjects, +and the returned user data need to be assembled into trans-language +data sets.

                        Note that two orthogonal constructs are in play here:

                        • Many-many relationships: a given Section may be reused in many different Assessments (eg if it contains commonly-needed Items such as questions about demographic details)
                        • Multiple versions: that same Section may exist in different versions in those different Assessments (eg if different Assessment -authors add or subtract an Item, change wording of an Item's label, -etc). This includes different translations of semantically +authors add or subtract an Item, change wording of an Item's +label, etc). This includes different translations of semantically identical text.

                        Approach

                        @@ -48,8 +50,8 @@ cr_revisions. Thus we actually have, for instance, two tables for Items:

                          -
                        • as_items (a cr_item) for whatever "immutable" attributes there -are
                        • as_items_revs (a cr_revision) for all mutable attributes +
                        • as_items (a cr_item) for whatever "immutable" +attributes there are
                        • as_items_revs (a cr_revision) for all mutable attributes including translations

                        This pattern of dual tables is used for all components that need @@ -62,35 +64,35 @@ the entire hierarchy. Data collected after this change will be collected with a semantically different instrument. Whether the difference is large or small is immaterial; it is different, and -Assessment must handle this. And the CR doesn't do this for us +Assessment must handle this. And the CR doesn't do this for us automagically.

                        So what the package must do is version both the individual entities and also all the relationships over which we join when -we're assembling the entire Assessment (whether to send out to a -requesting user, to stuff the database when the form comes back, or -to pull collected data into a report).

                        -

                        This doesn't involve merely creating triggers to insert new +we're assembling the entire Assessment (whether to send out to +a requesting user, to stuff the database when the form comes back, +or to pull collected data into a report).

                        +

                        This doesn't involve merely creating triggers to insert new mapping table rows that point to the new components. We also need to insert new revisions for all components higher up the hierarchy -than the component we've just revised. Thus:

                        +than the component we've just revised. Thus:

• If we change the text displayed with a Section, then we need to insert a new as_section_revs and a new as_section_assessment_map row. But we also need to insert a new as_assessment_revs, since if the Section is different, so is the Assessment. However, -we don't need to insert any new as_item_revs for Items in the Section, though we do need to insert new as_section_item_map rows.
                        • If we change the text of an Item Choice, then we need to insert new stuff all the way up the hierarchy.
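The cascade described in these bullets can be sketched as a small walk up the component hierarchy. This is a minimal illustration in Python; the parent map and function names are hypothetical stand-ins for the real database triggers over the as_*_revs tables:

```python
# Sketch of the revision cascade described above: revising one
# component forces new revisions for every ancestor in the
# hierarchy, but not for unchanged children. The PARENTS map and
# function are illustrative; the package would implement this with
# triggers against the as_*_revs and mapping tables.

PARENTS = {
    "item_choice": "item",
    "item": "section",
    "section": "assessment",
    "assessment": None,
}

def components_to_revise(changed: str) -> list[str]:
    """Return the changed component plus all ancestors needing new revisions."""
    chain = []
    node = changed
    while node is not None:
        chain.append(node)
        node = PARENTS[node]
    return chain

# Changing an Item Choice revises everything up to the Assessment;
# changing a Section revises only the Section and the Assessment.
print(components_to_revise("item_choice"))  # ['item_choice', 'item', 'section', 'assessment']
print(components_to_revise("section"))      # ['section', 'assessment']
```

Note that the sketch captures only the upward cascade; the corresponding new mapping-table rows (e.g. as_section_item_map) would be inserted alongside each new revision.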

                        Another key issue, discussed in this thread, involves the semantics of versioning. How big of a modification in some Assessment package entity needs to happen -before that entity is now a "new item" instead of a "new version of -an existing item"? If a typo in a single Item Choice is corrected, -one can reasonably assume that is merely a new version. But if an -Item of multiple choice options is given a new choice, is this Item -now a new one?

                        +before that entity is now a "new item" instead of a +"new version of an existing item"? If a typo in a single +Item Choice is corrected, one can reasonably assume that is merely +a new version. But if an Item of multiple choice options is given a +new choice, is this Item now a new one?

                        One possible way this could be defined would derive from the hierarchy model in the CR: cr_items -- but not cr_revisions -- can contain other entities; the parent_id column is only in cr_items. @@ -109,16 +111,16 @@

                        Specific Versionable Entities

                        Within each subsystem of the Assessment package, the following entities will inherit from the CR. We list them here now, and once -we've confirmed this selection, we'll move the information out to -each of the subsystems' pages.

                        +we've confirmed this selection, we'll move the information +out to each of the subsystems' pages.

                        • Core - Items:
                          • Items: as_items; as_items_revs
                          • Item Choices: as_item_choices; as_item_choices_revs
                          • Localized Items: as_item_localized; as_item_localized_revs
                            -Note: we're not yet entirely sure what we gain by this when Items -themselves are versioned; we haven't yet settled on whether -different translations of the same Items should be different -versions or not.
                          • Messages: as_messages; as_messages_revs
                          • +Note: we're not yet entirely sure what we gain by this when +Items themselves are versioned; we haven't yet settled on +whether different translations of the same Items should be +different versions or not.
                          • Messages: as_messages; as_messages_revs
                        • Core - Grouping:
                            Index: openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s04.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s04.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s04.adp 25 Aug 2015 18:02:19 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s04.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -9,9 +9,9 @@ rightLink="ch02s05" rightLabel="Next">

                            -4. Branch Triggers

                            To define a Branch Trigger, the field "Type" in the form must be -checked as branch. Is necessary that at least one section is -created after the one that is being evaluated.

                            The condition field shows the question and its possible answers, +4. Branch Triggers

To define a Branch Trigger, the field "Type" in the +form must be checked as branch. It is necessary that at least one +section is created after the one that is being evaluated.

The condition field shows the question and its possible answers; this means that if a user responding to the assessment chooses that response, the trigger will be activated and the section sequence will change.

                            After the trigger is defined as branch, the section that will be Index: openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s05.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s05.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s05.adp 25 Aug 2015 18:02:19 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s05.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -9,8 +9,8 @@ rightLink="ch02s06" rightLabel="Next">

                            -5. Action Triggers

                            To define an Action Trigger, the field "Type" in the form must -be checked as "Action".

                            The condition field shows the question and its possible anwers, +5. Action Triggers

                            To define an Action Trigger, the field "Type" in the +form must be checked as "Action".

The condition field shows the question and its possible answers; it means that when the user is responding to the assessment, if this answer is given for this question, the action will be executed.

After the trigger is created, the related action must be chosen, as well as the time when the action will be executed, and the message Index: openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s06.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s06.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s06.adp 25 Aug 2015 18:02:19 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/asm_trigger_doc/ch02s06.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -10,9 +10,9 @@

                            6. Trigger Administration

                            The trigger administration page can be reached from two -different links, the link "Administer Triggers" in the action bar -of each section, or from the link that show the number of triggers -related to an item.

If the trigger administration page is reached from this link of +different links, the link "Administer Triggers" in the +action bar of each section, or from the link that shows the number +of triggers related to an item.

If the trigger administration page is reached from this link of the section, all the triggers related to the items of the section will be displayed; this allows managing the order of the execution of the actions when they are executed immediately or at the end or @@ -21,9 +21,10 @@ that shows the number of triggers of each item, then the row will not be shown. Through this interface, the trigger can be edited, deleted, or its notifications can be managed.

When a trigger is deleted, a confirmation message will be displayed -showing all the information related to it.

                            The link "Notify User" leads to a page a user can request -notifications when this trigger is executed. It also allowst to -search and register another users to the notifications.

                            +showing all the information related to it.

The link "Notify User" leads to a page where a user can +request notifications when this trigger is executed. It also +allows searching for and registering other users for the +notifications.

An administrator can select the requests they want to approve and -click in the button "Approve", and also can send mail to several -users that requested the action. Through this interface the +click the "Approve" button, and can also send mail to +several users that requested the action. Through this interface the notifications can also be managed.

+the option "None" is selected, it means that the +registration process will be the same as it has always been; if any +other option is selected, the assessment will be displayed when a +user creates a new account.

                            An assessment author should be able optionally to specify some consent statement that a user must agree to in order to proceed with the assessment. The datamodel needs to -store the user's response positive response with a timestamp (in -as_sessions). This isn't relevant in educational testing, but it is -an important feature to include for other settings, notably medical -and financial ones.
+store the user's positive response with a timestamp +(in as_sessions). This isn't relevant in educational testing, +but it is an important feature to include for other settings, +notably medical and financial ones.
                          • Progress bar: select. (no progress bar, different styles). What kind of progress bar shall be displayed to the respondee while @@ -59,7 +59,8 @@ using the style?)
                          • Customizable thank you page: richtext.
• Return_URL: text. URL the respondee will be redirected to after finishing the assessment. The respondee should be redirected there directly if there is no Thank you page. Otherwise the return_url should be set in the -thank you page context, so we can have a "continue" URL.
                          • +thank you page context, so we can have a "continue" +URL.
                        • Times
@@ -89,10 +90,10 @@ allow access to the assessment. Add users to the system if not already part of it. Notify users via email that they should take the assessment.
                          • Password: short_text. Password that has to be typed in before -the respondee get's access to the accessment. This should be done -by creating a registered filter that returns a 401 to popup an HTTP -auth box. look in oacs_dav::authenticate for an example of how to -check the username/password
+the respondee gets access to the assessment. This should be +done by creating a registered filter that returns a 401 to popup an +HTTP auth box. Look in oacs_dav::authenticate for an example of how +to check the username/password
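The intended 401 behavior can be sketched briefly; this is a hedged illustration in Python only (the real check would live in an AOLserver registered filter written in Tcl, along the lines of oacs_dav::authenticate, and the password setting here is invented):

```python
# Minimal sketch of the password gate described above: a request
# check that returns 401 with a WWW-Authenticate header (which makes
# the browser pop up its HTTP auth box) until correct credentials
# arrive. Illustrative only; not the package's actual filter.
import base64

ASSESSMENT_PASSWORD = "s3cret"  # hypothetical per-assessment setting

def check_auth(headers: dict) -> tuple[int, dict]:
    """Return (HTTP status, extra response headers) for a request."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Basic "):
        decoded = base64.b64decode(auth[6:]).decode()
        user, _, password = decoded.partition(":")
        if password == ASSESSMENT_PASSWORD:
            return 200, {}
    # Missing or wrong credentials: ask the browser for them.
    return 401, {"WWW-Authenticate": 'Basic realm="assessment"'}

status, hdrs = check_auth({})
print(status)  # 401
token = base64.b64encode(b"respondee:s3cret").decode()
status, hdrs = check_auth({"Authorization": "Basic " + token})
print(status)  # 200
```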
• IP Netmask. short_text. Netmask that will be matched against the IP address of the respondee. If it does not match, the user will not be given access. Again this should be handled by the creation @@ -169,10 +170,10 @@ included is a consent form; an assessment author should be able optionally to specify some consent statement that a user must agree to in order to proceed with the assessment. The datamodel needs to -store the user's response whether it is positive or negative, along -with a timestamp. This isn't relevant in educational testing, but -it is an important feature to include for other settings, notably -medical and financial ones.
                          • +store the user's response whether it is positive or negative, +along with a timestamp. This isn't relevant in educational +testing, but it is an important feature to include for other +settings, notably medical and financial ones.
                          \ No newline at end of file Index: openacs-4/packages/assessment/www/doc/user_interface/index.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/index.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/user_interface/index.adp 25 Aug 2015 18:02:20 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/user_interface/index.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -4,10 +4,10 @@

                          Introduction

In recent times the survey system has -expanded beyond it's initial scope of providing a quick and easy -solution to conduct surveys. Due to it's flexibility it has already -expanded in the area of storing user information and provide a tool -for feedback for quality assurance. +expanded beyond its initial scope of providing a quick and +easy solution to conduct surveys. Due to its flexibility it +has already expanded into the area of storing user information and +provides a tool for feedback for quality assurance.

On the other hand the need has arisen for dotLRN to provide an assessment solution that allows (automated) tests as well, with the possibility to score the test results and @@ -83,10 +83,10 @@

Furthermore the grading package offers to transfer scores (which are stored as integer values) into a grade (e.g. the American A-F scheme, or the German 1-6). This is where it -gets the name from I'd say ;). Grading schemes are flexible and can -be created on the fly. This allows us to support any grading scheme -across the world's universities. In addition in the area of -Knowledge Management, grades could be transfered into incentive +gets the name from I'd say ;). Grading schemes are flexible and +can be created on the fly. This allows us to support any grading +scheme across the world's universities. In addition in the area +of Knowledge Management, grades could be transferred into incentive points that can be reused to reward employees for good work (for which they received good ratings).

Last but not least, maybe embedded with the @@ -118,7 +118,8 @@ This way a quick view can be given about the user (aggregating user information in a flexible way). The best explanation would be to treat the /pvt/home page as a collection of assessment data and the -"change basic information" as one assessment among many.

                          +"change basic information" as one assessment among +many.

With a little bit of tweaking and the possibility to add instant gratification, aka aggregated result display, it could include the poll package and make it

                          -

                          The grading system on it's own would be -usefull for the OpenACS community as it would allow the handing out -of "zorkmints" along with any benefits the collection of mints -gives to the users. As mentioned earlier, this is also very -important in a Knowledge Management environment, where you want to -give rated feedback to users.

                          +

The grading system on it's own would +be useful for the OpenACS community as it would allow the handing +out of "zorkmints" along with any benefits the collection +of mints gives to the users. As mentioned earlier, this is also +very important in a Knowledge Management environment, where you +want to give rated feedback to users.

                          -











                          +Question Catalogue

                          Assessment +Creation

                          Sections

                          Tests

                          Test Processing

                          User Experience


                          Index: openacs-4/packages/assessment/www/doc/user_interface/item_creation.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/item_creation.adp,v diff -u -r1.1.2.3 -r1.1.2.4 --- openacs-4/packages/assessment/www/doc/user_interface/item_creation.adp 9 Jun 2016 13:03:12 -0000 1.1.2.3 +++ openacs-4/packages/assessment/www/doc/user_interface/item_creation.adp 4 Jul 2016 11:33:12 -0000 1.1.2.4 @@ -51,16 +51,17 @@ Data validation steps are fairly complex because we need two layers of data validation checks:
                          • -Intra-item checks: the user input -{ exactly matches | falls within narrow "target" bounds | falls -within broader "acceptable" bounds with explanation}
                          • +Intra-item checks: the user +input { exactly matches | falls within narrow "target" +bounds | falls within broader "acceptable" bounds with +explanation}
                          • Inter-item checks: if { a user input for item a is A, item b is B, ... item n is N } then { user input for item z is Z }

                          Both levels involve stringing together multiple binary comparisons (eg 0 < input < 3 means checks -that 0 < input and input < 3), so we need to express a -grammar consisting of

                            +that 0 < input and input < 3), so we need to express +a grammar consisting of

                            • comparison1 conjunction comparison2 conjunction ... comparison n
                            • appropriate grouping to define precedence order (or simply agree to evaluate left to right)
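The grammar above (comparisons joined left-to-right by conjunctions) can be illustrated with a tiny evaluator. The tuple-based check format below is a hypothetical encoding invented for this sketch, not the package's actual representation:

```python
# Small evaluator for the validation grammar sketched above:
# comparison1 conjunction comparison2 ..., evaluated left to right.
# Note how a chained bound like 0 < input < 3 decomposes into the
# two checks (input > 0) and (input < 3), exactly as the text says.
import operator

OPS = {"<": operator.lt, "<=": operator.le, "==": operator.eq,
       ">": operator.gt, ">=": operator.ge}

def check(value, comparisons, conjunctions):
    """comparisons: list of (op, bound); conjunctions: 'and'/'or' between them."""
    result = OPS[comparisons[0][0]](value, comparisons[0][1])
    for conj, (op, bound) in zip(conjunctions, comparisons[1:]):
        nxt = OPS[op](value, bound)
        result = (result and nxt) if conj == "and" else (result or nxt)
    return result

# 0 < input < 3  becomes  input > 0 and input < 3
print(check(2, [(">", 0), ("<", 3)], ["and"]))  # True
print(check(5, [(">", 0), ("<", 3)], ["and"]))  # False
```

Strict left-to-right evaluation keeps the sketch simple, matching the "simply agree to evaluate left to right" option; grouping for precedence would need a small parse tree instead of a flat list.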
                            • @@ -108,10 +109,10 @@ box (small, medium, large)
                            • Compare by: Select (equal, contains, regexp). This defines how the comparison between the answer string and the response shall happen.
                            • Allow in answerbox: (multiple select box -with "All" and the numbers from 1 to x where x is the number of -answerboxes from above. For sure this only works with JS enabled -:)). Defines the answerboxes the user can fill out that shall be -matched with this answer. w
• +with "All" and the numbers from 1 to x where x is the +number of answerboxes from above. For sure this only works with JS +enabled :)). Defines the answerboxes the user can fill out that +shall be matched with this answer.
                          @@ -174,9 +175,9 @@
                      • In addition to submit, there is another button to allow further answers to be filled in. Typed in values -shall be remembered and 4 more answerboxes be shown.
                      • Additionally there is a button "copy", -which copies the contents of this question to a new question, after -you gave it a new title.
                      • +shall be remembered and 4 more answerboxes be shown.
• Additionally there is a button +"copy", which copies the contents of this question to a +new question, after you give it a new title.
                      • [FE]: Possibility to randomly choose from the options. This would add a couple of fields:
                        • To each answer: Fixed position: Select @@ -213,7 +214,7 @@ displayed in one block. At the moment this is done in the section setup (if all questions in a section have the same answers they would be shown in a matrix). One could think about making this a -special question type on it's own.
                        • +special question type on it's own.
                        Only site wide admins will get to see the following question types: @@ -275,7 +276,7 @@ question.
                      • Mail all current survey administrators using this question about the update.
                      • Include a link which allows the administrators to update their survey to the latest revision of the -question.
                      • Don't relink the survey to the latest +question.
                      • Don't relink the survey to the latest revision if not explicitly asked for by the survey administrator.
                      @@ -287,7 +288,7 @@ sophisticated system which links to a media database is thinkable, once the media database is ready.
-For the future we'd like to see a more +For the future we'd like to see a more sophisticated way to include images in questions. Currently this can be done using HTML linking, but a media database would be considerably more helpful and could be reused for the CMS as @@ -297,23 +298,25 @@ I'm not clear from your description what these are. If by Calculation questions you mean questions that produce some -calculated result from the user's raw response, then IMHO this is -an important type of question to support now and not defer. This is -the main type of question we use in quality-of-life measures (see -demo +calculated result from the user's raw response, then IMHO this +is an important type of question to support now and not defer. This +is the main type of question we use in quality-of-life measures +(see demo here ). These are questions scored by the Likert scale algorithm. If there are five potential responses (1,2,3,4, and 5) -for a question, and the user choose "1" then the "score" is -calculated as 0; if "5" then 100; if "3" then 50, and so on -- a -mapping from raw responses to a 0-100 scale. Is this what you mean -by a "calculation" question? +for a question, and the user chooses "1" then the +"score" is calculated as 0; if "5" then 100; if +"3" then 50, and so on -- a mapping from raw responses to +a 0-100 scale. Is this what you mean by a "calculation" +question?
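The Likert mapping described here is simple enough to state as code. A minimal sketch, assuming a linear mapping of N ordered responses onto 0-100 (the function name is invented for illustration; this is not the package's actual scoring code):

```python
# Sketch of the Likert-scale mapping described above: with five
# ordered responses, "1" scores 0, "3" scores 50 and "5" scores 100,
# i.e. raw responses are mapped linearly onto a 0-100 scale.

def likert_score(response: int, n_choices: int = 5) -> float:
    """Map a raw response in 1..n_choices to a 0-100 score."""
    if not 1 <= response <= n_choices:
        raise ValueError("response out of range")
    return 100 * (response - 1) / (n_choices - 1)

print([likert_score(r) for r in (1, 2, 3, 4, 5)])  # [0.0, 25.0, 50.0, 75.0, 100.0]
```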

                    By Database questions, do you mean free text input (via -textboxes or textareas) questions for which there is a "correct" -answer that needs to be stored during question creation? Then when -the teacher is reviewing the student's response, she can inspect -the student's response against the stored answer and determine what -degree of correctness to assign the response?

                    +textboxes or textareas) questions for which there is a +"correct" answer that needs to be stored during question +creation? Then when the teacher is reviewing the student's +response, she can inspect the student's response against the +stored answer and determine what degree of correctness to assign +the response?

                    -- Stan Kaufman on November 09, 2003 06:29 PM (view details)

                    Index: openacs-4/packages/assessment/www/doc/user_interface/section_creation.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/section_creation.adp,v diff -u -r1.1.2.3 -r1.1.2.4 --- openacs-4/packages/assessment/www/doc/user_interface/section_creation.adp 1 Dec 2015 11:18:05 -0000 1.1.2.3 +++ openacs-4/packages/assessment/www/doc/user_interface/section_creation.adp 4 Jul 2016 11:33:12 -0000 1.1.2.4 @@ -17,8 +17,8 @@ pages describing the user how to fill out the section.
                  • Display Type: section display type to use. Select box of display types in use by this -user, as well as "new display type" and "display type from -catalogue".
                    +user, as well as "new display type" and "display +type from catalogue".
                  • Seconds allowed for completion: integer. Seconds allowed for completing the section.
                  • Feedback Text: textarea. Feedback given to the user after finishing the section.
                  • @@ -77,7 +77,7 @@ Branch by result. Instead of relying on one or multiple answers we check for a result in a previous section. This can only work in a test -environment (so don't display this option if we are not dealing +environment (so don't display this option if we are not dealing with a test).
                    • Section: select. This will display a list of all previous sections. The selected section will be used for the Index: openacs-4/packages/assessment/www/doc/user_interface/tests.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/tests.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/user_interface/tests.adp 25 Aug 2015 18:02:20 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/user_interface/tests.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -3,8 +3,8 @@ Tests A test -is a special kind of accessment that allows the respondee's answers -to be rated immediatly. Unless otherwise stated, all pages +is a special kind of accessment that allows the respondee's +answers to be rated immediatly. Unless otherwise stated, all pages described are admin viewable only.
                      • @@ -100,7 +100,7 @@
• All or nothing. In this scenario it will be checked whether all correct answers have been chosen by the respondee and none of the incorrect ones. If this is the case, the respondee -get's 100%, otherwise nothing.
                      • Cumultative. Each answer has a certain +get's 100%, otherwise nothing.
• Cumulative. Each answer has a certain percentage associated with it. This can also be negative. For each option the user chooses, he will get the corresponding percentage. If negative points are allowed, the user will get a negative @@ -109,8 +109,8 @@
                    • Matching question
                        -
                      • All or nothing: User get's 100% if all -matches are correct, 0% otherwise.
                      • Equally weigthed: Each match is worth +
                      • All or nothing: User get's 100% if +all matches are correct, 0% otherwise.
• Equally weighted: Each match is worth 100/{number of matches} percent. Each correct match will give the corresponding percentage, and the end result will be the sum of all correct matches.
                      • Allow negative: If we have equally @@ -130,8 +130,8 @@ for the question).
                      • Contains: If the answer contains exactly the string, points are granted. If you want to give percentages for multiple words, add another answer to the answerbox (so instead of -having one answerbox containing "rugby soccer football", have -three, one for each word).
                      • Regexp: A regular expression will be run +having one answerbox containing "rugby soccer football", +have three, one for each word).
                      • Regexp: A regular expression will be run on the answer. If the result is 1, grant the percentage.
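The three comparison modes above (exact match, contains, regexp) can be sketched as one scoring helper. A hedged illustration in Python; the function name and percentages are invented for the example and do not come from the package:

```python
# Sketch of the three comparison modes described above for scoring a
# short-text answer: exact match, substring containment, and regular
# expression. If the mode matches, the answer's percentage is granted.
import re

def score_answer(response: str, stored: str, mode: str, percentage: int) -> int:
    """Grant `percentage` if `response` matches `stored` under `mode`."""
    if mode == "equal":
        matched = response == stored
    elif mode == "contains":
        matched = stored in response
    elif mode == "regexp":
        matched = re.search(stored, response) is not None
    else:
        raise ValueError(mode)
    return percentage if matched else 0

print(score_answer("soccer", "soccer", "equal", 100))                   # 100
print(score_answer("rugby soccer football", "soccer", "contains", 33))  # 33
print(score_answer("Football!", r"(?i)foot", "regexp", 50))             # 50
```

As the "rugby soccer football" example in the text suggests, per-word partial credit falls out naturally: store each word as its own answer with its own percentage and score them independently with the "contains" mode.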
                      Index: openacs-4/packages/assessment/www/doc/user_interface/user_experience.adp =================================================================== RCS file: /usr/local/cvsroot/openacs-4/packages/assessment/www/doc/user_interface/user_experience.adp,v diff -u -r1.1.2.2 -r1.1.2.3 --- openacs-4/packages/assessment/www/doc/user_interface/user_experience.adp 25 Aug 2015 18:02:20 -0000 1.1.2.2 +++ openacs-4/packages/assessment/www/doc/user_interface/user_experience.adp 4 Jul 2016 11:33:12 -0000 1.1.2.3 @@ -33,7 +33,7 @@ instead of the submit button. Otherwise only the answer will be displayed. In either case the answer is displayed as text on the page, not within a form element.
                    • If the respondee cannot see his old -answers, don't display them once submitted. Make sure the +answers, don't display them once submitted. Make sure the backbutton does not work (e.g. using Postdata ?). Not sure how much sense it makes to display an edit button :).
                    • If we have a time in which the respondee has to answer the assessment, display a bar with time @@ -64,14 +64,15 @@ Once the assessment has been finished
                      • Display optional electronic signature -file upload with an "I certify this test and state it is mine" -checkbox. This will be stored in addition to the test.
                      • Notifications shall be send to the admin, +file upload with an "I certify this test and state it is +mine" checkbox. This will be stored in addition to the +test.
                      • Notifications shall be send to the admin, staff and respondee.
                      • If we shall display the results to the respondee immediatly after finishing the assessment, show it to him / her. Display the comments along depending on the settings.
                      • If we have a special score, show this -result to the user (e.g. if 90% means "you are a dream husband", -display this along with the 90%).
                      • Display a link with the possibility to +result to the user (e.g. if 90% means "you are a dream +husband", display this along with the 90%).
                      • Display a link with the possibility to show all the questions and answers for printout.
                      • Store the endtime with the response.