Design Decisions

Scope
The xolp package is considered to be a "backend" service that provides a storage infrastructure for indicators with an accompanying API for retrieval and analysis.
The following aspects are considered in scope:
(Durable) Storage of Indicators and Activities
The persistence of indicators and activities. Although it is possible to update and delete all data via the API, indicators are typically simply imported/pushed into the database without the need for further modification. Changes to the activities related to these indicators should typically create new activity versions, i.e. activity information is historicized. An activity should remain in the xolp storage even if the source objects (e.g. an assessment object) are deleted. (For pragmatic reasons, it is possible to delete an activity together with its associated indicators, though.)
Retrieval and Evaluation of Indicators
The xolp API provides means for simple retrieval and filtering of indicators, and their evaluation with respect to evaluation schemas/scales.
The following aspects should be managed by the client (e.g. a "gradebook" package, or an "e-portfolio" package) and are therefore considered out of scope:
User Interface
The xolp package is considered a backend service without a UI.
Triggering Activity Historization
The xolp API provides means to create a new version of an activity. However, the point in time when this should happen must be decided by the client application.
Permissions Management
Access control (e.g. to the indicators) is considered to be implemented at the application layer, i.e. within an application using the xolp service.
Dimensional Data Model
In order to gain high query performance and a flexible basis for various analyses, we decided to follow a dimensional modeling approach (in contrast to an entity-relationship approach). There is a central fact table (xolp_indicator_facts) that stores all actual measures (indicators), surrounded by a range of dimension tables that provide context for these measured values.
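The star-schema idea can be sketched with a small in-memory database. Note that, apart from the fact table name xolp_indicator_facts mentioned above, all column and table names in this sketch are illustrative assumptions, not the actual xolp schema:

```python
import sqlite3

# Illustrative star-schema sketch: a central fact table referencing a
# dimension table. Column names and the dimension table name are
# assumptions for illustration only.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE xolp_activity_dimension (
        activity_version_id INTEGER PRIMARY KEY,
        title TEXT
    );
    CREATE TABLE xolp_indicator_facts (
        indicator_id INTEGER PRIMARY KEY,
        activity_version_id INTEGER REFERENCES xolp_activity_dimension,
        user_id INTEGER,
        result_percentage REAL,
        timestamp TEXT
    );
""")
con.execute("INSERT INTO xolp_activity_dimension VALUES (1, 'Final Exam')")
con.execute("INSERT INTO xolp_indicator_facts VALUES (1, 1, 42, 87.5, '2016-06-01')")

# Typical dimensional query: select facts and enrich them with
# contextual attributes from the dimension table.
row = con.execute("""
    SELECT d.title, f.result_percentage
    FROM xolp_indicator_facts f
    JOIN xolp_activity_dimension d USING (activity_version_id)
    WHERE f.user_id = 42
""").fetchone()
print(row)  # ('Final Exam', 87.5)
```

Analyses then typically reduce to joins between the fact table and one or more dimension tables, which is what the dimensional model optimizes for.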
Identification for (several) Entities via Schematic String-based Resource Identifiers
The architecture of the WWW promotes the notion of global identifiers for resource identification, such as Internationalized Resource Identifiers (IRIs), Uniform Resource Identifiers (URIs), or Uniform Resource Names (URNs).
Identification of Sparse Entities via Strings (EvaluationScale, EvaluationSchema, ActivityVerb)
There are several entities in xolp for which we expect only a few instances. For example, we will only need a handful of ActivityVerbs and EvaluationSchemas, and in practice probably not even too many EvaluationScales (if the client application cares about deduplication). By referring to these resources via human-readable identifiers, we are able to write readable code such as ::xolp::EvaluationSchema require -iri "https://dotlrn.org/xolp/evaluation-schemas/at-five-to-one" or ::xolp::ActivityVerb require -iri "http://adlnet.gov/expapi/verbs/experienced".
Activity Identification
At the point in time at which an activity is initially registered within the xolp activity dimension table, this activity might (a) already exist as an ACS Object in the system, (b) exist as a tuple of a table that does not inherit from acs_objects, or (c) not be represented in the system at all. An example of (a) would be an xowf-based test, of (b) a (manual) gradebook entry (tlf-gradebook), and of (c) a not-yet-registered activity (e.g. a presentation) for which indicators are imported via a CSV file. There are at least two approaches for handling these cases: On the one hand, we could create ACS Objects for all activities that do not already have one (such as (b) and (c)). Then, a column "object_id" in the activity dimension table could be used to group the activity versions. This, however, would require a separate table (such as xolp_activities) that (unintuitively) stores only a subset of the relevant activities. On the other hand, by using IRIs (URIs/URNs), one can identify arbitrary activities without the need for a separate table. We use the scheme openacs:<table>:<id> for ACS Objects (a) and other internal tuples (b), but basically allow for arbitrary IRIs. The client system that stores the indicators and activities is required and trusted to use unambiguous IRIs.
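The openacs:<table>:<id> scheme described above can be sketched as a pair of helper functions. These helpers are illustrative only and not part of the xolp API:

```python
def openacs_iri(table: str, obj_id: int) -> str:
    """Build an IRI following the openacs:<table>:<id> scheme
    for internal ACS Objects and other internal tuples."""
    return f"openacs:{table}:{obj_id}"

def parse_openacs_iri(iri: str) -> tuple[str, int]:
    """Split an openacs:<table>:<id> IRI back into its parts."""
    scheme, table, obj_id = iri.split(":", 2)
    assert scheme == "openacs"
    return table, int(obj_id)

# Internal tuple, e.g. a row of an assumed table name:
print(openacs_iri("acs_objects", 4711))  # openacs:acs_objects:4711

# External activities (case (c)) can use any other IRI unchanged,
# e.g. "https://example.com/activities/presentation-2016".
```

The table and id values shown are, of course, made up; the point is that one string column suffices to identify activities from all three cases (a)-(c).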
Percentage-based Indicators
Indicators are the "facts" of our star/snowflake schema and are merely "value objects". Each indicator is a time-stamped percentage value. Therefore, and because we expect a huge number of entries in this table, we did not design indicators as full-fledged ACS Objects.
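The "value object" character of an indicator can be sketched as a plain immutable record. The field names are assumptions for illustration, not the actual xolp columns:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Indicator:
    """A plain, immutable value object (not a full-fledged ACS Object):
    a time-stamped percentage plus references into the dimensions.
    Field names are illustrative assumptions."""
    user_iri: str
    activity_iri: str
    percentage: float   # 0.0 .. 100.0
    timestamp: datetime

grade = Indicator("openacs:users:42",
                  "openacs:xowf:123",
                  87.5,
                  datetime(2016, 6, 1))
```

Keeping indicators this lean avoids the per-row overhead of full object machinery, which matters given the expected table size.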
A "Slowly Changing Dimension" for Activities
For any indicator (e.g. a grade), the system must persist the context permanently and in a historically valid manner. For example, the deletion of a test question in the system must not cascade to the associated students' grades. Therefore, we implemented the Activity Dimension as a Slowly Changing Dimension (of Type 2).
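The essence of a Type-2 slowly changing dimension is that a change appends a new version row instead of overwriting the old one, so existing indicators keep pointing at the context that was valid when they were recorded. A minimal sketch (function and field names are assumptions, not the xolp API):

```python
# Sketch of a Type-2 slowly changing dimension: each change to an
# activity appends a new version row; nothing is overwritten.
activity_versions: list[dict] = []

def new_activity_version(iri: str, title: str) -> int:
    """Append a new version for the activity identified by iri
    and return the new version number."""
    version = sum(1 for v in activity_versions if v["iri"] == iri) + 1
    activity_versions.append({"iri": iri, "version": version, "title": title})
    return version

new_activity_version("openacs:xowf:123", "Test (draft)")
new_activity_version("openacs:xowf:123", "Test (final)")
# Both rows remain; indicators referencing version 1 stay historically valid.
```

As noted under "Triggering Activity Historization", deciding *when* to create such a new version is up to the client application.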
Activity Hierarchy
It is natural to think of activities at different granularity levels, or of an activity comprising several sub-activities. Therefore, xolp models activities in the form of a hierarchical tree, and treats a course, a test, a group work, etc. as activities. Within a context (super-activity), each (sub-)activity has a certain weight, the sum of which is typically 100%. (Exceptions are special cases such as the virtual activity "Group work" below, where we assume that only one of the two forks is possible for a particular user.) Although one activity is not prevented from having multiple parents (which makes the hierarchy a polyhierarchy), one would typically model activities as a hierarchical tree of contextualized activities.

Figure 2: Example of a simple contextualized activity hierarchy.
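Given such per-context weights, a super-activity's result can be computed as a weighted sum of its sub-activities' results. A minimal sketch (not the xolp API; the example weights are made up):

```python
# Sketch: aggregate sub-activity results into a super-activity score
# via the per-context weights (in percent, typically summing to 100).
def weighted_score(children: list[tuple[float, float]]) -> float:
    """children: list of (weight_percent, result_percent) tuples."""
    return sum(w * r for w, r in children) / 100.0

# E.g. a course consisting of a test (weight 60%) and a group work (40%):
score = weighted_score([(60, 80.0), (40, 90.0)])
print(score)  # 84.0
```

For forked special cases like the virtual "Group work" activity, only the fork actually taken by the user would contribute a child tuple, so the weights seen per user can still sum to 100%.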

Competency Graph
Similar to the activity hierarchy, xolp models competencies in the form of a directed acyclic graph. One or more activities can prove one or more competencies. Currently, the activities that book onto the same competency "overlap", i.e. the lowest/average/highest percentage takes precedence, depending on the result policy (i.e. whether to count the best result, the worst result, or the average result). (Example: If one exam shows you are a mediocre software developer, and another one shows you are a good one, we assume you are a good one.)

Figure 3: Example of a simple competency graph (result policy: best).
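The three result policies for overlapping activities reduce to simple aggregations over the available percentages. A sketch (the function and policy names are illustrative, not the xolp API):

```python
# Sketch of the result policies for activities that book onto the same
# competency: the best, worst, or average percentage takes precedence.
POLICIES = {
    "best": max,
    "worst": min,
    "average": lambda results: sum(results) / len(results),
}

def competency_score(results: list[float], policy: str = "best") -> float:
    """Merge overlapping results for one competency under a policy."""
    return POLICIES[policy](results)

# One mediocre and one good exam result for the same competency:
print(competency_score([55.0, 85.0], "best"))     # 85.0
print(competency_score([55.0, 85.0], "average"))  # 70.0
```

Under the "best" policy this reproduces the example from the text: the good exam result wins over the mediocre one.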