Appendix A: RFC for Assessment Specs

Introduction

In recent times the survey system has expanded beyond its initial scope of providing a quick and easy way to conduct surveys. Due to its flexibility it has already grown into the areas of storing user information and providing a feedback tool for quality assurance.

At the same time, dotLRN needs an assessment solution that supports (automated) tests as well, with the ability to score the test results and store them in an easily retrievable way.

Last but not least, a new demand has arisen for the ability to give and store ratings on objects within the system as part of a knowledge management solution.

The documents on these pages describe a solution that is flexible enough to meet the above needs but still focused enough to satisfy specific client demands.

Assessments

The current survey system will form the basis for a new assessment package, which will consist of the following areas:

Question Catalogue

The question catalogue stores all the questions of the assessment package. It is a pool in which the questions from all assessments are kept. This makes questions reusable, allows for statistics across surveys, and spares the respondee from having to answer a question he has already answered. Furthermore, special administrators can add questions whose results are not stored within the assessment package but in other database tables (e.g. the name of the user), or that trigger other events (e.g. sending an email to the address filled out by the respondee). A detailed description can be found here.
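
To make the idea concrete, here is a minimal sketch of what such a shared catalogue could look like. All table and column names (as_questions, external_table, on_answer_event, ...) are illustrative assumptions, not the package's actual schema.

    -- Hypothetical question catalogue: one pool shared by all assessments.
    create table as_questions (
        question_id     integer primary key,
        question_text   text not null,
        question_type   varchar(50) not null,   -- e.g. 'radio', 'checkbox', 'textarea'
        -- optional redirection of the answer into another table/column
        -- (e.g. a user-profile table) instead of the assessment's own tables
        external_table  varchar(100),
        external_column varchar(100),
        -- optional event to trigger when the question is answered
        -- (e.g. 'send_email')
        on_answer_event varchar(100)
    );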

Assessment creation

An assessment is either a survey or a test. The functionality for both is nearly identical though a test needs additional work to allow for automated grading. A detailed description of the options given to the creator of an assessment can be found here.

Each assessment consists of various sections, which allow the assessment to be split up (so it is displayed to the respondee over multiple pages) and make branching possible depending on the respondee's previous answers. Questions are always added to the question catalogue first, then added to a specific section and thus made available to the assessment. A detailed description of the sections can be found here.
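
The following sketch shows one possible way to model assessments, sections, and branching, reusing the hypothetical as_questions table from above. Again, all names and the exact branching rule are assumptions for illustration only.

    -- Hypothetical assessment structure: sections group questions from the
    -- catalogue; branch rules jump to another section based on an answer.
    create table as_assessments (
        assessment_id   integer primary key,
        title           varchar(200) not null,
        assessment_type varchar(20) not null
            check (assessment_type in ('survey', 'test'))
    );

    create table as_sections (
        section_id      integer primary key,
        assessment_id   integer not null references as_assessments,
        sort_order      integer not null        -- one section per page
    );

    -- questions are created in the catalogue first, then mapped to a section
    create table as_section_questions (
        section_id      integer not null references as_sections,
        question_id     integer not null references as_questions,
        sort_order      integer not null,
        primary key (section_id, question_id)
    );

    -- branch to another section depending on an earlier answer
    create table as_branch_rules (
        rule_id         integer primary key,
        section_id      integer not null references as_sections,
        question_id     integer not null references as_questions,
        answer_value    text not null,
        goto_section_id integer not null references as_sections
    );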

Tests

Tests are a special kind of assessment in that they allow for automatic processing of the answers and storage of the result in the grading system. They have a couple of additional settings, as well as the possibility to get an overview of the evaluation (what the respondees answered and how many points they scored in total). A description of this can be found here.

The backend for test processing, which enables automatic tests, is described in a separate document, as it runs while the respondee answers the test rather than being invoked manually. That document also describes how the grades are calculated (automatically or manually) for each question. The result is stored in the grading package.
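
As a rough illustration of the automatic-scoring idea under the hypothetical schema above: each test question stores a correct answer and a point value, and a submitted response set is scored by comparing answers against it. The tables and the :assessment_id bind variable are assumptions, not the real backend.

    -- Hypothetical correct-answer store and response log for tests.
    create table as_correct_answers (
        question_id   integer primary key references as_questions,
        correct_value text not null,
        points        integer not null
    );

    create table as_responses (
        response_id   integer primary key,
        assessment_id integer not null references as_assessments,
        respondee_id  integer not null,
        question_id   integer not null references as_questions,
        answer_value  text
    );

    -- score a submitted test automatically, per respondee
    select r.respondee_id,
           sum(case when r.answer_value = c.correct_value
                    then c.points else 0 end) as total_points
      from as_responses r
      join as_correct_answers c on c.question_id = r.question_id
     where r.assessment_id = :assessment_id
     group by r.respondee_id;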

Scoring/Grading

The grading package will be designed first of all to allow the storing of test results. In addition, it will provide functionality to other packages so they can have their contents rated (one example would be Lars' rating package, which could be used as a basis for this). In general, it should provide a very flexible way of adding scores to the system, either automatically (as described above) or manually (e.g. this student did a good oral exam).
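
A generic score store along these lines could be as simple as the following sketch; the grading_scores name and its columns are illustrative assumptions.

    -- Hypothetical score store: any object in the system (a test, a piece
    -- of content, an oral exam) can receive a score for a respondee,
    -- entered either automatically or by hand.
    create table grading_scores (
        score_id     integer primary key,
        object_id    integer not null,           -- the graded object
        respondee_id integer not null,
        grader_id    integer,                    -- null for automatic scores
        score        integer not null,
        entered_date timestamptz default current_timestamp
    );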

In addition to entering scores/ratings, the grading package allows for automatic aggregation of scores. This holds especially true for tests and classes: a test result depends on the (aggregated) results of all the answers, and a class result depends on the results of all the tests a respondee took, in addition to any manual grades the professor comes up with. Providing a clean UI for this is going to be the challenge.
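
Aggregation itself is straightforward on the hypothetical grading_scores table; the class_objects mapping table used below is an assumption standing in for however tests and manual exams get associated with a class.

    -- Hypothetical class result: aggregate all scores a respondee collected
    -- for the tests and manual grades belonging to one class.
    select respondee_id,
           avg(score) as class_score
      from grading_scores
     where object_id in (
               -- assumed mapping of tests/exams to a class
               select object_id from class_objects where class_id = :class_id)
     group by respondee_id;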

Furthermore, the grading package can translate scores (which are stored as integer values) into a grade (e.g. the American A-F scheme or the German 1-6). This is where it gets its name from, I'd say ;). Grading schemes are flexible and can be created on the fly, which allows us to support any grading scheme used by the world's universities. In the area of Knowledge Management, grades could also be translated into incentive points, which can be used to reward employees for work that received good ratings.
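
One way to keep schemes creatable on the fly is to store them as data rather than code, as in this assumed sketch: a scheme is just a set of score ranges mapped to grade labels.

    -- Hypothetical on-the-fly grading schemes: score ranges map to labels
    -- (A-F, 1-6, incentive points, ...).
    create table grading_schemes (
        scheme_id integer primary key,
        name      varchar(100) not null          -- e.g. 'US A-F', 'German 1-6'
    );

    create table grading_scheme_ranges (
        scheme_id integer not null references grading_schemes,
        min_score integer not null,
        max_score integer not null,
        grade     varchar(20) not null           -- 'A', '1', '50 points', ...
    );

    -- translate a raw score into a grade under a chosen scheme
    select grade
      from grading_scheme_ranges
     where scheme_id = :scheme_id
       and :score between min_score and max_score;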

Last but not least, and perhaps integrated with the workflow system, is the possibility to execute actions based on the grade. An example would be adding a student to the advanced class once his grade or score reaches a certain level. Alternatively, this looks like a good task for the curriculum module to take on.
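
Whether the workflow system or the curriculum module ends up owning this is an open question; purely as a low-level illustration, a database trigger on the hypothetical grading_scores table could do something like the following (the threshold and the advanced_class_members table are made up).

    -- Hypothetical grade-driven action: enrol the respondee in an advanced
    -- class once a stored score passes a threshold.
    create or replace function grading_score_action() returns trigger as $$
    begin
        if new.score >= 90 then                  -- illustrative threshold
            insert into advanced_class_members (respondee_id)
            values (new.respondee_id);
        end if;
        return new;
    end;
    $$ language plpgsql;

    create trigger grading_score_action_tr
        after insert on grading_scores
        for each row execute procedure grading_score_action();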

User Experience

So far we have only talked about the administrator's point of view. A respondee will be directed to an assessment from various possible entry points. Depending on the settings for the assessment, the respondee will be presented with the assessment he is allowed to answer. Though much of it is redundant, a special page has been created to describe this. For the implementation, though, there might be additional requirements depending on the specifications of the various administrator settings.

Use Cases

The most obvious use case would be a class in a school or university that offers automated tests to the students and wants them graded automatically. The description of the assessment system has been written mainly with this in mind.

Additionally, the assessment system can be used to collect user information. When signing up to a site, the user could be asked to fill out an assessment where some of the questions are stored in the acs_users table, others in other tables, and the rest in the assessment package. This gives a quick overview of the user (aggregating user information in a flexible way). The best way to explain it is to treat the /pvt/home page as a collection of assessment data and the "change basic information" form as one assessment among many.
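
Under the hypothetical catalogue sketched earlier, such a question would simply be registered with an external storage target instead of being kept in the assessment tables. The target table and column here are made up for illustration, not real core tables.

    -- Hypothetical sign-up question whose answer is written into a
    -- user-profile table instead of the assessment package's own tables.
    insert into as_questions
        (question_id, question_text, question_type,
         external_table, external_column)
    values
        (1001, 'What is your job title?', 'textbox',
         'users_extra_info', 'job_title');       -- assumed target table/column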

With a little tweaking and the possibility of instant gratification, a.k.a. aggregated result display, the assessment system could subsume the poll package and make it redundant.
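
The "instant gratification" part is essentially a running tally over the hypothetical as_responses table, e.g.:

    -- Hypothetical poll-style result display for a single question,
    -- shown to the respondee right after submission.
    select answer_value,
           count(*) as votes
      from as_responses
     where question_id = :question_id
     group by answer_value
     order by votes desc;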

Last but not least, with the ability to display questions to the user in a multi-dimensional way, the assessment system is useful for quality assurance (how important is this feature / how well do you think we implemented it). And, as you might have guessed, it covers anything the current survey module has been used for as well (e.g. plain and simple surveys).

The grading system on its own would be useful for the OpenACS community, as it would allow the handing out of "zorkmints" along with whatever benefits the collection of mints gives to users. As mentioned earlier, this is also very important in a Knowledge Management environment, where you want to give rated feedback to users.

Question Catalogue

Assessment Creation

Sections

Tests

Test Processing

User Experience