Reporting & Feedback Implementation Plan (Backend)

We’re structurally ready (snapshots, structured tags, feedbackTagKeys, frozen feedbackTags, item responses, archive/gating). The remaining work is additive: outcomes, feedback visibility, metrics, cohort hooks, and admin reporting surfaces.

Schema Changes (step 1)

  • evaluation_versions: add feedback_mode text CHECK (feedback_mode IN ('none','overall','tags','items')), pass_mark_percent numeric CHECK (pass_mark_percent BETWEEN 0 AND 100) (nullable).
  • submissions: add completed_at timestamptz, score numeric, max_score numeric, outcome_code text CHECK (outcome_code IN ('pass','fail','incomplete')), outcome_label text, assignment_id uuid NULL, run_label text NULL.
  • submission_items: add is_correct boolean, score numeric, max_score numeric, time_spent_ms bigint NULL.
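As a sketch, the step 1 changes above could be applied with a Postgres migration like the following (column names and types are taken from the plan; everything is additive and nullable, so it is safe on existing rows):

```sql
-- Step 1 schema changes (Postgres). All columns are nullable additions.
ALTER TABLE evaluation_versions
  ADD COLUMN feedback_mode text
    CHECK (feedback_mode IN ('none', 'overall', 'tags', 'items')),
  ADD COLUMN pass_mark_percent numeric
    CHECK (pass_mark_percent BETWEEN 0 AND 100);  -- NULL = no pass mark set

ALTER TABLE submissions
  ADD COLUMN completed_at timestamptz,
  ADD COLUMN score numeric,
  ADD COLUMN max_score numeric,
  ADD COLUMN outcome_code text
    CHECK (outcome_code IN ('pass', 'fail', 'incomplete')),
  ADD COLUMN outcome_label text,
  ADD COLUMN assignment_id uuid,   -- nullable until assignments land
  ADD COLUMN run_label text;       -- nullable until schedules land

ALTER TABLE submission_items
  ADD COLUMN is_correct boolean,
  ADD COLUMN score numeric,
  ADD COLUMN max_score numeric,
  ADD COLUMN time_spent_ms bigint; -- left NULL for now
```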

Metrics & Outcomes (step 2)

  • In SubmitSession, compute for MCQ:
    • per item: is_correct, score (the item's weight if correct, else 0), max_score (start with 1); leave time_spent_ms NULL for now.
    • per submission: sum item scores into score/max_score, set completed_at, derive the percentage, set outcome_code by comparing it against pass_mark_percent on the evaluation version (pass/fail, falling back to the default when no pass mark applies), and set outcome_label (Pass/Fail/Incomplete defaults).
  • Freeze feedbackTagKeys and feedbackTags per submission as now.
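The roll-up and outcome derivation above could be expressed in SQL roughly as follows (a sketch, not the final SubmitSession code: the join column `evaluation_version_id`, the `'incomplete'` default when no pass mark is set, and the `initcap` label mapping are all assumptions):

```sql
-- Roll item scores up to the submission and derive the outcome.
UPDATE submissions s
SET score         = agg.score,
    max_score     = agg.max_score,
    completed_at  = now(),
    outcome_code  = o.code,
    outcome_label = initcap(o.code)   -- 'Pass' / 'Fail' / 'Incomplete'
FROM (SELECT submission_id,
             SUM(score)     AS score,
             SUM(max_score) AS max_score
      FROM submission_items
      GROUP BY submission_id) agg,
     evaluation_versions ev,
     LATERAL (SELECT CASE
                       WHEN ev.pass_mark_percent IS NULL
                         OR agg.max_score = 0 THEN 'incomplete'
                       WHEN 100 * agg.score / agg.max_score
                            >= ev.pass_mark_percent THEN 'pass'
                       ELSE 'fail'
                     END AS code) o
WHERE agg.submission_id = s.id
  AND ev.id = s.evaluation_version_id
  AND s.id = $1;
```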

Feedback Visibility (step 3)

  • Store feedback_mode on evaluation_versions and include in snapshot.
  • Candidate views obey feedback_mode:
    • none: no outcome/feedback.
    • overall: outcome + score only.
    • tags: outcome + tag-level feedback (feedbackTags).
    • items: outcome + tag-level + per-item feedback.
  • Admin views: always full structured tags and all metrics.
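One way to enforce the candidate-facing modes above at the query level (a sketch; `feedback_tags` on submissions and the join column are assumptions, and the `items` mode would additionally join submission_items for per-item feedback):

```sql
-- Candidate projection: reveal progressively more by feedback_mode.
SELECT s.id,
       CASE WHEN ev.feedback_mode <> 'none'
            THEN s.outcome_label END AS outcome_label,
       CASE WHEN ev.feedback_mode <> 'none'
            THEN s.score END         AS score,
       CASE WHEN ev.feedback_mode IN ('tags', 'items')
            THEN s.feedback_tags END AS feedback_tags
       -- 'items' mode: also join submission_items for per-item feedback
FROM submissions s
JOIN evaluation_versions ev ON ev.id = s.evaluation_version_id
WHERE s.id = $1;
```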

Cohort Hooks (step 3)

  • Populate assignment_id / run_label on sessions/submissions when assignments/schedules land; nullable until then.

Admin Reporting Endpoints (step 4)

  • Implemented: GET /api/v1/evaluations/{evalId}/submissions with page/limit plus status/outcome filters; returns submission summaries (id, userId, completedAt, score/maxScore, outcome_code/label, frozen feedbackTags). Snapshots are omitted from the list for brevity; fetch /submissions/{id} for detail.
  • Implemented: GET /api/v1/users/{userId}/submissions with the same pagination/filters and summary shape.
  • Candidate vs admin enforcement relies on auth/roles; the candidate detail view continues to use view=candidate on /submissions/{id}, which respects feedback_mode.
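The backing query for the list endpoints could look like this (a sketch with assumed column names `evaluation_id` and `status`; optional filters are passed as nullable parameters):

```sql
-- Admin submissions list: status/outcome filters + page/limit.
-- $1 eval_id, $2 status (nullable), $3 outcome_code (nullable),
-- $4 limit, $5 offset (computed from page).
SELECT id, user_id, completed_at, score, max_score,
       outcome_code, outcome_label, feedback_tags
FROM submissions
WHERE evaluation_id = $1
  AND ($2::text IS NULL OR status = $2)
  AND ($3::text IS NULL OR outcome_code = $3)
ORDER BY completed_at DESC NULLS LAST
LIMIT $4 OFFSET $5;
```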

Reporting View (optional, step 5)

  • Pending: add a DB view (or materialized view if needed) flattening submission + item metrics + structured tags + feedbackTags + cohort fields for BI/exports. Start with a plain VIEW; materialize only if performance requires it.
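A plain VIEW along these lines would cover the flattening described above (a sketch; the view name and the `feedback_tags` column are assumptions, and structured tags could be added from the snapshot once the shape is settled):

```sql
-- Flat reporting view for BI/exports: one row per submission item,
-- with submission-level metrics and cohort fields repeated on each row.
CREATE VIEW reporting_submission_items AS
SELECT s.id            AS submission_id,
       s.user_id,
       s.assignment_id,
       s.run_label,
       s.completed_at,
       s.score         AS submission_score,
       s.max_score     AS submission_max_score,
       s.outcome_code,
       s.outcome_label,
       s.feedback_tags,
       si.id           AS item_id,
       si.is_correct,
       si.score        AS item_score,
       si.max_score    AS item_max_score,
       si.time_spent_ms
FROM submissions s
LEFT JOIN submission_items si ON si.submission_id = s.id;
```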

Already Done (no changes)

  • Immutable versionSnapshot with sections/items/questions/answers/navigation.
  • Structured tags per question version captured in snapshot.
  • Feedback config endpoint PATCH /evaluations/{evalId}/versions/{versionId}/feedback freezes feedbackTagKeys per submission.
  • feedbackTags derived from structured tags + frozen keys.
  • Admin vs candidate tags: admin sees full tags; candidate sees feedbackTags.
  • Archive-on-delete and session gating.

Implementation Order

  1. Apply schema changes (evaluation_versions, submissions, submission_items).
  2. Implement metrics/outcomes in SubmitSession.
  3. Apply feedback_mode filtering for candidate views.
  4. Add admin reporting endpoints.
  5. (Optional) Add the reporting view.