ADR 0005: Assessment-Centric K/O/E Modelling (Knowledge, Observation, Evidence)

Date: 2025-12-04
Status: Accepted

Context

Evalium started life as an Assessment Management System (AMS) focused on online knowledge tests.

As the product vision expanded, we identified a broader K/O/E (Knowledge, Observation, Evidence) scope:

  • Knowledge – classic online tests and exams.
  • Observation – on-the-job / practical assessments performed by an assessor.
  • Evidence – artefacts (files, videos, documents) demonstrating performance.

There was a risk that introducing “Observation” and “Evidence” could push Evalium towards:

  • a generic forms/inspection tool, or
  • a heavy “competency management” / HR suite with separate modules and domain models.

We needed to decide whether K/O/E would be modelled as new top-level concepts (Observations, Evidence, Competencies) or expressed inside the existing AMS lifecycle.

Decision

Evalium will remain an Assessment Management System at its core:

  • Evaluations are the single design-time blueprint for all assessment activities.
  • K/O/E are implemented as flavours of Evaluations, not separate domain roots:
    • Knowledge Evaluations – candidate-driven tests/exams.
    • Observation Evaluations – assessor-driven practical assessments.
  • Evidence Evaluations – upload-and-review assessments focused on artefacts.
  • Programmes are the only container for multi-step journeys, bundling any mix of K/O/E evaluations and defining completion/competence rules.

There will be no separate top-level “Observation”, “Evidence” or “Competency” aggregate in v1. All K/O/E behaviour is expressed through:

Evaluation → Assignment → Session → Submission → Programme aggregation.
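The decision above can be sketched in code. The following is a minimal, hypothetical illustration of the chosen shape: K/O/E as flavours of a single Evaluation aggregate, with a Programme as the only container that combines them and applies a completion rule. All names here (`EvaluationKind`, `Programme.is_complete`, the `passed` field, etc.) are illustrative assumptions, not Evalium's actual API; the real lifecycle (Assignment, Session) is collapsed to its terminal Submission for brevity.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class EvaluationKind(Enum):
    KNOWLEDGE = auto()    # candidate-driven tests/exams
    OBSERVATION = auto()  # assessor-driven practical assessments
    EVIDENCE = auto()     # upload-and-review artefact assessments


@dataclass(frozen=True)
class Evaluation:
    """Single design-time blueprint; `kind` selects the K/O/E flavour."""
    id: str
    kind: EvaluationKind
    title: str


@dataclass(frozen=True)
class Submission:
    """Terminal state of the Evaluation → Assignment → Session → Submission chain."""
    evaluation_id: str
    passed: bool


@dataclass
class Programme:
    """Only container for multi-step journeys; bundles any mix of K/O/E evaluations."""
    evaluations: list[Evaluation] = field(default_factory=list)

    def is_complete(self, submissions: list[Submission]) -> bool:
        # Simplest possible completion rule, assumed for illustration:
        # every component evaluation has at least one passing submission.
        passed_ids = {s.evaluation_id for s in submissions if s.passed}
        return all(e.id in passed_ids for e in self.evaluations)


# Usage: one programme bundling all three flavours.
prog = Programme([
    Evaluation("k1", EvaluationKind.KNOWLEDGE, "Safety theory exam"),
    Evaluation("o1", EvaluationKind.OBSERVATION, "On-site practical"),
    Evaluation("e1", EvaluationKind.EVIDENCE, "Portfolio upload"),
])
subs = [Submission("k1", True), Submission("o1", True), Submission("e1", True)]
print(prog.is_complete(subs))      # True
print(prog.is_complete(subs[:2]))  # False: evidence submission missing
```

Note how the flavour is a field on one aggregate rather than three separate domain roots, which is exactly what lets RLS, snapshots and scoring machinery apply uniformly.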

Options Considered

1. Separate K/O/E Modules (rejected)

Introduce new domain roots:

  • Observation with its own lifecycle, storage and APIs.
  • EvidenceArtifact at top level, managed separately from submissions.
  • CompetencyFramework with its own engine and UI.

Pros:

  • Clear conceptual separation on paper.
  • Easier to bolt on new modules for specific verticals (apprenticeships, HR suites).

Cons:

  • Fragments the core data model (different stores for K vs O vs E).
  • Increases complexity for RLS, snapshots, audit trails.
  • Pushes Evalium towards LMS/HR/portfolio territory instead of AMS.
  • Slows iteration and raises integration complexity for the frontend.

2. Minimal “Tests Only” Model (rejected)

Keep Evaluations strictly as online knowledge tests (K only), treating:

  • Observations as generic checklists outside the assessment engine.
  • Evidence as external file storage (e.g. object storage keyed by user).

Pros:

  • Very simple core model.
  • Fast to ship an MVP focused only on tests.

Cons:

  • Cannot support observation or evidence workflows cleanly.
  • Forces customers into parallel tools for real-world skills.
  • Fails to address a validated market gap (an integrated K/O/E competence view).

3. Assessment-Centric K/O/E (chosen)

Express K/O/E as variations of Evaluations:

  • One consistent lifecycle and storage model.
  • Programmes as the single place where components are combined.

Pros:

  • Keeps domain coherent and AMS-focused.
  • Reuses RLS, snapshots, TxManager and scoring machinery for K/O/E.
  • Enables a unified competence view without a new “competency engine”.
  • Easier for devs and customers to reason about: “everything is an Evaluation”.

Cons:

  • Some K/O/E UX flows must be carefully designed to avoid feeling “forced” into the Assessment model.
  • Future full competency/HR features (if ever needed) will need to be built on top of the assessment data rather than owning it.

Consequences

Positive:

  • Stable conceptual model: teams can think in Evaluations + Programmes everywhere.
  • Easier to implement and test K/O/E features under existing invariants (RLS, snapshots).
  • Clear product positioning: Evalium remains an AMS that happens to cover K/O/E, not a generic competency/HR suite.
  • Simplifies analytics and reporting: all assessment data is derived from the same lifecycle.

Negative:

  • Some future verticals (e.g. deep HR competency frameworks) may require additional layers built on top rather than separate modules.
  • Requires good naming and UX to avoid confusing authors with multiple “kinds” of Evaluations.

Notes

  • This ADR underpins KOE-ams-concept.md and future specs for Knowledge, Observation and Evidence evaluations.
  • Any new assessment type should be modelled as an Evaluation flavour unless there is a strong justification to introduce a new root aggregate.
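The rule above can be made concrete with a small, self-contained sketch: under this ADR, a hypothetical new assessment type (say, a structured interview) would be a new flavour on the existing Evaluation root, not a new top-level aggregate. `EvaluationKind` is an illustrative name, not Evalium's actual code.

```python
from enum import Enum, auto


class EvaluationKind(Enum):
    KNOWLEDGE = auto()
    OBSERVATION = auto()
    EVIDENCE = auto()
    # A future assessment type, e.g. a structured interview, would be
    # added here as another flavour:
    #   INTERVIEW = auto()
    # ...rather than introducing a separate "Interview" domain root with
    # its own lifecycle, storage and APIs.


print([k.name for k in EvaluationKind])  # ['KNOWLEDGE', 'OBSERVATION', 'EVIDENCE']
```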