
Reporting & Feedback Roadmap (Backend)

Goal: make results reporting-ready while letting authors control candidate-facing feedback tags, without losing full-tag flexibility for reporting. Builds on the current snapshot + archival foundation.

Phase 0 (done)

  • Immutable version_snapshot on submissions (sections/items/questions).
  • Submissions endpoint /api/v1/submissions/{id} returns snapshot + items.
  • Archive-on-delete with usage counts; session creation blocked on archived evals.

Phase 1 (ready to build now)

  • Snapshot tagging: include tags (and optional dimensions) on question versions in version_snapshot.questions[...]; copy item-level tags if applicable.
  • Feedback tag config: add evaluation/version-level config for feedback-visible tags/groups; results rendering produces both the full tag set and a feedbackTags subset filtered per that config for candidate views (see the first sketch after this list).
  • Submission metrics: add completed_at, score, max_score, and passed/outcome to submissions; add is_correct, score, max_score, and time_spent_ms to submission_items (or a projection). Populate these when scoring runs (second sketch below).
  • Enrich /submissions/{id}: include the new metrics and full snapshot tags; expose feedbackTags in the candidate-facing view. Admin/reporting views always receive the full tag set.
  • Cohort hook: start storing assignmentId and optional runLabel (from assignment) on session/submission (no-op until assignments ship) so reporting can group by cohort.
  • Multi-level reporting readiness: ensure tags/metadata and metrics can be reported at item, section, passage, and tag/dimension level by carrying them in snapshots and emitting per-level metrics (scores/time/correctness); see the rollup sketch below.
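
As a first sketch, here is one way tags and dimensions could sit on snapshot questions and be filtered for candidates. All names (QuestionSnapshot, FeedbackTagConfig, toFeedbackTags) are illustrative, not the actual schema:

```ts
// Hypothetical shapes -- field names are illustrative, not the real schema.
interface QuestionSnapshot {
  id: string;
  prompt: string;
  tags: string[];                        // full tag set, always kept for reporting
  dimensions?: Record<string, string>;   // e.g. { skill: "reading", difficulty: "hard" }
}

interface FeedbackTagConfig {
  visibleTags: Set<string>;              // authored per evaluation/version
}

// Candidate views get only the author-approved subset; admin/reporting
// views keep reading `tags` directly.
function toFeedbackTags(q: QuestionSnapshot, cfg: FeedbackTagConfig): string[] {
  return q.tags.filter((t) => cfg.visibleTags.has(t));
}
```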
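
A second sketch of the additive metric fields, with the Phase 1 cohort hook folded in; field names are assumptions, not the final migration:

```ts
// Illustrative additive fields on submissions; not the final column names.
interface SubmissionMetrics {
  completedAt: string | null;   // ISO timestamp, set when scoring runs
  score: number | null;
  maxScore: number | null;
  outcome: "passed" | "failed" | "pending";
  // Cohort hook: written from the assignment once assignments ship.
  assignmentId?: string;
  runLabel?: string;
}

// Illustrative additive fields on submission_items (or a projection).
interface SubmissionItemMetrics {
  isCorrect: boolean | null;
  score: number | null;
  maxScore: number | null;
  timeSpentMs: number | null;
}
```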
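
And a rollup sketch: aggregating item metrics per tag, assuming per-item scores and times are already populated. Section and passage rollups would follow the same pattern keyed by sectionId/passageId:

```ts
// Roll item-level metrics up to tag level for multi-level reporting.
function rollupByTag(
  items: { tags: string[]; score: number; maxScore: number; timeSpentMs: number }[],
): Map<string, { score: number; maxScore: number; timeMs: number }> {
  const byTag = new Map<string, { score: number; maxScore: number; timeMs: number }>();
  for (const item of items) {
    for (const tag of item.tags) {
      const agg = byTag.get(tag) ?? { score: 0, maxScore: 0, timeMs: 0 };
      agg.score += item.score;
      agg.maxScore += item.maxScore;
      agg.timeMs += item.timeSpentMs;
      byTag.set(tag, agg);
    }
  }
  return byTag;
}
```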

Phase 2 (as scoring/assignments land)

  • Scoring pipeline: compute per-item correctness/score, section scores, overall score, and duration; set the submission's passed/outcome (sketch after this list).
  • Snapshot enrichment: add evaluationId, evaluationTitle, passScore/grading config to version_snapshot.
  • Tag semantics: extend tags with structured dimensions (skills/domains/difficulty) if needed; keep full sets in snapshots for reporting.
  • Assignment/cohort context: once assignments exist, persist assignmentId/teamId/runLabel on submissions/snapshots for cohort reporting.
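
A hedged sketch of the scoring pass, assuming per-item scores are computed upstream; passScore would come from the grading config captured in the snapshot:

```ts
// Per-item scores feed section totals and the overall total, then the
// pass/fail outcome. Duration would be completedAt minus startedAt and
// is omitted here.
function scoreSubmission(
  items: { sectionId: string; score: number; maxScore: number }[],
  passScore: number,
) {
  const sections = new Map<string, { score: number; maxScore: number }>();
  let score = 0;
  let maxScore = 0;
  for (const item of items) {
    const s = sections.get(item.sectionId) ?? { score: 0, maxScore: 0 };
    s.score += item.score;
    s.maxScore += item.maxScore;
    sections.set(item.sectionId, s);
    score += item.score;
    maxScore += item.maxScore;
  }
  const outcome = score >= passScore ? "passed" : "failed";
  return { sections, score, maxScore, outcome };
}
```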

Phase 3 (reporting projections)

  • Add a reporting-friendly projection table or materialized view with common measures/dimensions: submission/section/item scores, tags/dimensions, cohorts, evaluation metadata, and user demographics (if collected). Add indexes on commonly filtered dimensions (evaluationId, assignmentId, tag keys, teamId); see the row sketch after this list.
  • Optional: xAPI/LRS export built from the snapshot and computed metrics (statement sketch below).
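
An illustrative row shape for the projection, one row per submission item, denormalised for filtering; all field names are assumptions:

```ts
// Hypothetical projection row. Indexes would go on evaluationId,
// assignmentId, teamId, and the tag keys (e.g. a GIN index if tags are
// stored as JSONB in Postgres).
interface ReportingRow {
  submissionId: string;
  evaluationId: string;
  assignmentId?: string;
  teamId?: string;
  sectionId: string;
  itemId: string;
  tags: string[];
  dimensions?: Record<string, string>;
  isCorrect: boolean | null;
  score: number;
  maxScore: number;
  timeSpentMs: number;
  completedAt: string;
}
```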
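
And a minimal xAPI statement sketch. The verb IRI is the standard ADL "completed" verb from the xAPI spec; the activity IRI scheme and the field mapping are assumptions:

```ts
// Assemble a minimal xAPI statement from a scored submission.
// Assumes maxScore > 0.
function toXapiStatement(sub: {
  userEmail: string;
  evaluationId: string;
  evaluationTitle: string;
  score: number;
  maxScore: number;
  passed: boolean;
}) {
  return {
    actor: { objectType: "Agent", mbox: `mailto:${sub.userEmail}` },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/completed",
      display: { "en-US": "completed" },
    },
    object: {
      // Hypothetical activity IRI scheme.
      id: `https://example.org/evaluations/${sub.evaluationId}`,
      definition: { name: { "en-US": sub.evaluationTitle } },
    },
    result: {
      score: { raw: sub.score, max: sub.maxScore, scaled: sub.score / sub.maxScore },
      success: sub.passed,
      completion: true,
    },
  };
}
```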

Phase 4 (analytics/UX polish)

  • Admin/reporting endpoints that accept arbitrary tag/dimension filters over snapshot data (filter sketch after this list).
  • Candidate feedback uses feedbackTags; admins see full tags.
  • Optional psychometrics (item difficulty/discrimination) derived from submission_items (item-stats sketch below).
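
A sketch of the filter shape, evaluated here in memory for clarity; a SQL-backed version would compile the same object into WHERE clauses against the projection. TagFilter and matches are hypothetical names:

```ts
// Hypothetical filter shape for the admin/reporting endpoints.
interface TagFilter {
  anyTags?: string[];                    // match rows carrying at least one of these
  dimensions?: Record<string, string>;   // exact match per dimension key
}

function matches(
  row: { tags: string[]; dimensions?: Record<string, string> },
  f: TagFilter,
): boolean {
  if (f.anyTags && !f.anyTags.some((t) => row.tags.includes(t))) return false;
  for (const [k, v] of Object.entries(f.dimensions ?? {})) {
    if (row.dimensions?.[k] !== v) return false;
  }
  return true;
}
```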
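
And a sketch of the classical item statistics: difficulty as the proportion correct, discrimination as the point-biserial correlation between item correctness and each candidate's total score. It assumes every item has at least one correct and one incorrect response:

```ts
// Classical item statistics over submission_items, one call per item.
function itemStats(responses: { correct: boolean; totalScore: number }[]) {
  const n = responses.length;
  const p = responses.filter((r) => r.correct).length / n; // difficulty (0..1)
  const mean = responses.reduce((s, r) => s + r.totalScore, 0) / n;
  const sd = Math.sqrt(
    responses.reduce((s, r) => s + (r.totalScore - mean) ** 2, 0) / n,
  );
  const meanCorrect =
    responses.filter((r) => r.correct).reduce((s, r) => s + r.totalScore, 0) /
    (p * n);
  // Point-biserial: (M1 - M) / sd * sqrt(p / (1 - p)); assumes 0 < p < 1.
  const discrimination = ((meanCorrect - mean) / sd) * Math.sqrt(p / (1 - p));
  return { difficulty: p, discrimination };
}
```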

Notes

  • All additions are additive: JSON snapshots can grow fields without breaking old data.
  • Keep snapshots as the source of truth for rendering historical results; fall back to live authoring data only where no snapshot exists.
  • Reporting projections and snapshots should respect tenant-level data retention and allow deletion/anonymisation of user-identifying fields while preserving aggregate stats where required.