Observations — Commercial Inspection & Verification Model
Status
Active – Commercial-Native Implementation (core flows implemented; UI/offline pending)
Purpose
Observations in Evalium are a verification mechanism, not an educational assessment.
They are designed to support professional service delivery where a human actor must verify that work, behaviour, or conditions meet a defined standard — and where the output must be defensible months or years later.
This includes (but is not limited to):
- Health & Safety inspections
- ISO / compliance audits
- Site walkthroughs
- Practical competence verification
- Internal quality checks
- Consultant-led or client-performed inspections
Observations are optimized for speed in the field, clarity of findings, and immutability after sign-off.
Core Concept
An Observation is a structured inspection consisting of:
- A Rubric (what to check)
- One or more Subjects (who or what is being checked)
- Findings (issues identified)
- Evidence (photos, video, notes)
- A Review & Approval workflow
- An immutable snapshot once approved
Observations do not primarily produce scores. They produce defensible findings.
Implementation Status (Current)
- Findings enforcement is live: `requiresComment` and `requiresEvidence` are enforced during answer/evidence capture, and `finding.detected` ledger events are emitted on submit.
- Evidence handling reuses the shared evidence ledger flows (standalone, inline, mixed).
- Submission review/approval workflow is implemented at the API layer (approval status + approve/reject + request-changes + reopen endpoints); four-eyes enforcement is live via `reviewPolicy` on evaluation versions (`self_approve` default) and optional auto-approve via `autoApprove`.
- Subject/asset model (Option A) is implemented with org/tenant scoping; assignment-based visibility is implemented via `assignment_subjects` plus an assignment-target read policy for external inspectors.
- Findings projection is live: `reporting.report_findings` is populated by the reporting projection worker for dashboards and filters, and reporting exposes findings list APIs with `ledgerEventId` for claims/disputes linkage.
- Batch mode UX is planned; execution currently remains one session + one submission per subject.
Technical Implementation Notes (How It Works)
- Findings enforcement: rubric metadata is validated at answer/evidence capture; on submit, findings emit `finding.detected` ledger events and are projected into `reporting.report_findings`.
- Evidence handling: evidence metadata and decisions flow through the shared evidence ledger (`evidence.*` events) for standalone, inline, and mixed paths.
- Review/approval: submission approval status is persisted; `reviewPolicy` (`self_approve` | `four_eyes`) and `autoApprove` are stored on evaluation versions and enforced at approval time. Review actions (`request_changes`, `reopen`) update approval status and emit ledger events.
- Subjects/assets: `subjects`, `subject_users`, `assets`, `assignment_subjects`, and `submission_subjects` are tenant/org scoped and protected by RLS, with assignment-based visibility for inspectors (an assignment target can read their assignments across org scope).
- Batch grouping: `batch_id` is canonical on assignments and copied to submissions at submit-time; submissions expose `batchId` for reporting without joining scaffolding.
- Subject auto-link: when an assignment carries a `subjectId`, the submission inherits it at submit-time (no manual attach required).
- Findings list API: reporting exposes `/reporting/findings?submissionId=...` with `ledgerEventId` to anchor claims/disputes to a specific finding event.
Functional Flow (What Users Experience)
- Inspector completes checks; any finding with `requiresComment` / `requiresEvidence` blocks progress until satisfied.
- Evidence upload and approval reuse the same ledger-based evidence flows used elsewhere.
- Submissions enter `pending_review` unless `autoApprove` is enabled; four-eyes prevents self-approval when configured.
- Subjects are visible only when scoped by org or linked via assignments/submissions the inspector can access; external inspectors see subjects once an assignment is issued.
- Batch UI groups multiple assignments; each subject still yields its own session + submission, with a shared `batchId`.
Terminology Mapping
| Educational Term | Commercial Meaning |
|---|---|
| Student / Candidate | Subject (Person, Asset, Location) |
| Group Session | Batch / Site / Audit Mode |
| Observer | Inspector / Auditor |
| Criterion | Check / Control |
| Score | Status / Finding |
| Finalise | Submit for Review |
| Result | Outcome |
| Portfolio | Evidence Pack |
Observation Structure
1. Observation Template
Defines what is inspected.
Includes:
- Title & description
- Rubric items (checks)
- Optional severity metadata
- Required evidence rules
- Review configuration (self-approve vs four-eyes, optional auto-approve)
Templates are reusable across clients and sites.
2. Subjects (Who or What Is Observed)
A Subject represents the entity being inspected.
Examples:
- A person (employee, trainee)
- A location (room, site, warehouse)
- An asset (vehicle, machine, extinguisher)
Important: Subjects are treated generically in the system. They may be users today and assets later without changing the Observation model.
Actor vs Subject (Non-Negotiable)
Observations must separate:
- Actor = the person who performed/logged/approved the work (a user)
- Subject = the entity being inspected (person, asset, or location)
This keeps the ledger clean when a record is about a non-person entity (e.g., a fire extinguisher or a room).
Subject / Entity Layer (Implemented)
To support assets as first-class subjects without overloading users, Evalium introduces a Subject abstraction.
Recommended model (Option A)
- `subjects` (core, tenant/org scoped): `id`, `tenant_id`, `org_unit_id`, `subject_type` (`user`, `asset`, later `location`, etc.), `display_name`, `external_ref`, `status`, `created_at`
- `subject_users` (optional 1:1 mapping when `subject_type` is `user`): `subject_id`, `user_id`
- `assets` (typed extension): `subject_id`, `asset_type_id`, optional identifiers/metadata
- `assignment_subjects` (planning link): `assignment_id`, `subject_id`, `role`, `relationship`
- `submission_subjects` (ledger link): `submission_id`, `subject_id`, `role` (primary/secondary)
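As a rough shape, the Option A tables and the submit-time subject link can be sketched in TypeScript. Names follow the tables above; the function shape and the rule that the first planned subject becomes `primary` are assumptions, not the real API:

```typescript
// Sketch of the Option A model as plain TypeScript shapes, plus the
// submit-time auto-link. Illustrative only; the real tables live in SQL.

type SubjectType = "user" | "asset" | "location";

interface Subject {
  id: string;
  tenantId: string;
  orgUnitId: string;
  subjectType: SubjectType;
  displayName: string;
  externalRef?: string;
  status: string;
}

interface AssignmentSubject { assignmentId: string; subjectId: string; role: string }
interface SubmissionSubject { submissionId: string; subjectId: string; role: "primary" | "secondary" }

// At submit-time the submission inherits the assignment's subjects,
// preserving execution attribution on the ledger link table.
function linkSubmissionSubjects(
  submissionId: string,
  planned: AssignmentSubject[],
): SubmissionSubject[] {
  return planned.map((p, i) => ({
    submissionId,
    subjectId: p.subjectId,
    role: i === 0 ? "primary" : "secondary", // assumption: first planned subject is primary
  }));
}
```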
Why this fits Evalium:
- Submissions remain the WORM truth; subjects are joinable context.
- Assets can be mutable scaffolding without compromising ledger integrity.
- Enables asset timelines, Glass Box views, and multi-subject audits.
Current state: Subjects, assets, assignment links, and submission links are implemented with RLS-scoped tables (org/tenant scope + assignment-based visibility for inspectors).
Technical Implementation
- Tables: `subjects`, `subject_users`, `assets`, `assignment_subjects`, `submission_subjects`.
- RLS: subject visibility checks org scope or assignment linkage; submission subject access mirrors submission visibility.
- Assignments can attach a subject via `assignment_subjects`, enabling just-in-time read access for inspectors.
Functional Flow
- Admins create subjects (users or assets) in their org.
- Assignments link to subjects for inspection tasks.
- Inspectors can only see subjects tied to assignments they can access or submissions they can view; assignment targets can read assignments even when org scope differs.
- Submissions inherit assignment subjects automatically on submit to preserve execution attribution.
Batch / Audit Mode (Formerly Group Sessions)
Batch Mode allows one inspector to apply the same rubric across many subjects in one flow.
Examples
- 30 fire extinguishers
- 20 desks
- 12 delivery vehicles
- 10 staff members
Key Properties
- Grid-based UI
- Fast sequential completion
- Shared context (same site, same visit)
- Single submission per subject
- Unified review & approval layer
This is a core differentiator versus tools that require opening a new report per item.
Implementation Note (Current)
Batch mode is UI orchestration rather than a multi-submit DB primitive. Each subject still maps to its own session + submission to preserve the execution ledger invariant.
Technical Implementation
- `assignments.batch_id` is the canonical planning handle.
- `submissions.batch_id` is copied at submit-time for immutable execution context.
- `submission.batchId` is exposed in API responses for reporting and Glass Box filters.
Functional Flow
- A batch action creates multiple assignments with a shared `batchId`.
- Each assignment produces a separate session/submission.
- Post-execution reporting uses `batchId` from submissions without joining assignments.
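The batch flow above reduces to a submit-time copy. A minimal sketch, assuming assignment and submission records shaped roughly as below (the `submitAssignment` helper is illustrative, not Evalium's actual API):

```typescript
// Sketch: batch_id is canonical on the assignment and copied onto the
// submission at submit-time, so reporting never needs to join assignments.

interface Assignment { id: string; batchId?: string; subjectId?: string }
interface Submission { id: string; assignmentId: string; batchId?: string; subjectId?: string }

function submitAssignment(assignment: Assignment, submissionId: string): Submission {
  return {
    id: submissionId,
    assignmentId: assignment.id,
    batchId: assignment.batchId,     // copied once, then immutable on the ledger
    subjectId: assignment.subjectId, // subject auto-link at submit-time
  };
}
```

Copying at submit-time (rather than joining at read-time) keeps each submission a self-contained WORM record, which is the invariant the batch UI must not break.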
Findings (First-Class Concept)
Definition
A Finding is a rubric outcome that indicates:
- Non-compliance
- Risk
- Required action
Findings are recorded as rubric outcomes enriched with structured metadata. Ledger truth remains on submissions/items, while reporting uses a projection for filtering.
Implementation
Each rubric option may include metadata such as:
```json
{
  "severity": "critical",
  "requires_evidence": true,
  "requires_comment": true
}
```
Behaviour Rules
When a Finding is triggered:
- UI visually escalates (colour/state)
- Evidence upload is forced
- Comment / remediation note is forced
- Item is added to the Action Required Summary
Implementation note: Evidence and comment requirements are enforced at answer/evidence capture.
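A minimal sketch of that capture-time gate, assuming rubric option metadata shaped like the JSON above (the `validateAnswer` helper and its return convention are illustrative):

```typescript
// Sketch of the answer-capture gate: a finding option that requires
// evidence or a comment blocks saving until both are satisfied.

interface FindingMeta { severity?: string; requiresEvidence?: boolean; requiresComment?: boolean }
interface Answer { optionMeta: FindingMeta; comment?: string; evidenceIds: string[] }

// Returns the list of blocking problems; empty means the answer may be saved.
function validateAnswer(a: Answer): string[] {
  const problems: string[] = [];
  if (a.optionMeta.requiresEvidence && a.evidenceIds.length === 0) {
    problems.push("evidence_required");
  }
  if (a.optionMeta.requiresComment && !a.comment?.trim()) {
    problems.push("comment_required");
  }
  return problems;
}
```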
Findings are surfaced prominently in:
- Reviewer dashboards
- Client-facing summaries
- Exported reports
Implementation note: Findings remain ledger-truth JSON on submissions/items; the reporting worker projects them into `reporting.report_findings` (including `ledger_event_id`) and reporting exposes a findings list API for claims/disputes linkage.
Technical Implementation
- Findings metadata lives in item payload/rubric and is enforced during answer/evidence capture.
- On submit, `finding.detected` ledger events are emitted for projection.
- The reporting worker projects findings into `reporting.report_findings` for dashboards/filters (with `ledger_event_id`).
- The reporting API exposes `/reporting/findings?submissionId=...` for downstream claims/disputes.
Functional Flow
- A failed check with severity produces a visible finding and blocks approval without required evidence/comment.
- Reviewers see findings surfaced in summaries and Glass Box views.
Integration Risk Controls (Mitigations)
1) Subject RLS & External Inspectors
Subjects use org/tenant scoping plus assignment-based, just-in-time visibility: if an inspector has an assignment tied to a subject (active or completed), or a submission they can see, they can read the subject even when the inspector is not in the subject's org unit.
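The visibility rule above can be sketched as a single predicate. Inputs are illustrative; in Evalium the real check lives in RLS policies, not application code:

```typescript
// Sketch of subject visibility: org scope OR assignment linkage OR a
// visible submission grants read access, matching the rule described above.

interface VisibilityCtx {
  inspectorOrgUnits: string[];           // org units the inspector belongs to
  assignmentSubjectIds: string[];        // subjects on assignments targeted at the inspector
  visibleSubmissionSubjectIds: string[]; // subjects on submissions the inspector can see
}

function canReadSubject(subjectId: string, subjectOrgUnit: string, ctx: VisibilityCtx): boolean {
  return (
    ctx.inspectorOrgUnits.includes(subjectOrgUnit) ||
    ctx.assignmentSubjectIds.includes(subjectId) ||
    ctx.visibleSubmissionSubjectIds.includes(subjectId)
  );
}
```

This is what makes access just-in-time for external inspectors: issuing an assignment adds the subject to the second branch without widening org scope.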
2) Batch Mode vs Session Invariant
Batch mode will not create many submissions inside one session by default. If bulk commit is ever required, it will be wrapped in a single TxManager transaction and each submission will remain an independent ledger record.
3) Findings Projections
Findings remain ledger-truth JSON, but reporting is driven by a projection: `finding.detected` ledger events are emitted and `reporting.report_findings` is populated for dashboards and filtering.
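The projection step is a pure flattening of a `finding.detected` event into a `reporting.report_findings` row. A sketch under assumed shapes (the event payload fields beyond the ledger event id are assumptions):

```typescript
// Sketch of the reporting projection: one finding.detected ledger event
// becomes one report_findings row, keyed back to the event for disputes.

interface LedgerEvent {
  id: string;
  type: "finding.detected";
  submissionId: string;
  payload: { itemId: string; severity: string };
}

interface ReportFindingRow {
  ledgerEventId: string; // anchors claims/disputes to this exact event
  submissionId: string;
  itemId: string;
  severity: string;
}

function projectFinding(e: LedgerEvent): ReportFindingRow {
  return {
    ledgerEventId: e.id,
    submissionId: e.submissionId,
    itemId: e.payload.itemId,
    severity: e.payload.severity,
  };
}
```

Because the projection is derived, it can be rebuilt from the ledger at any time; the row is never the source of truth.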
Evidence Capture (Speed Is Non-Negotiable)
Evidence is an attribute of any rubric item.
Supported Evidence
- Photos (multi-shot)
- Video clips
- Files
- Free-text notes
Field Ergonomics (Required)
- Tap → Snap → Snap → Done
- No file browser dependency
- Offline-first capture where possible
Forensic Enhancements
- Automatic timestamp watermarking
- Optional GPS stamping (if available)
- Evidence bound to rubric + subject + visit
Evidence is immutable once the submission enters review.
Submission Lifecycle
Observations produce a Submission, shared with Knowledge and Evidence workflows.
Status Fields (Orthogonal)
| Field | Purpose |
|---|---|
| `lifecycle_status` | `in_progress` → `completed` |
| `outcome` | pass / fail / pending |
| `approval_status` | `approved` / `pending_review` / `rejected` / `changes_requested` |
Implementation note: `approval_status` is persisted and updated by review endpoints. Approve and request-changes are only allowed from `pending_review`; reopen is allowed from `approved` / `rejected` / `changes_requested`. `reviewPolicy` enforces four-eyes when configured.
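Those allowed transitions can be sketched as a small state machine. The note above does not state which statuses permit reject, so treating it like approve (only from `pending_review`) is an assumption:

```typescript
// Sketch of the approval-status state machine; any transition not listed
// in the table is rejected outright.

type ApprovalStatus = "pending_review" | "approved" | "rejected" | "changes_requested";
type ReviewAction = "approve" | "reject" | "request_changes" | "reopen";

const allowed: Record<ReviewAction, { from: ApprovalStatus[]; to: ApprovalStatus }> = {
  approve:         { from: ["pending_review"], to: "approved" },
  reject:          { from: ["pending_review"], to: "rejected" }, // assumption, see lead-in
  request_changes: { from: ["pending_review"], to: "changes_requested" },
  reopen:          { from: ["approved", "rejected", "changes_requested"], to: "pending_review" },
};

function transition(status: ApprovalStatus, action: ReviewAction): ApprovalStatus {
  const rule = allowed[action];
  if (!rule.from.includes(status)) {
    throw new Error(`invalid transition: ${action} from ${status}`);
  }
  return rule.to;
}
```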
Technical Implementation
- `approval_status` is stored on submissions and updated via review handlers.
- `reviewPolicy` + `autoApprove` are stored on evaluation versions and enforced during submission approval.
Functional Flow
- Inspectors submit; submissions land in `pending_review` unless auto-approve is enabled.
- Reviewers approve/reject; status transitions are immutable and auditable.
Review & Approval (Universal Workflow)
Observations default to pending review unless configured otherwise.
Implementation note: Approval APIs are live (approve/reject/request-changes/reopen); four-eyes enforcement is implemented via `reviewPolicy`. Auto-approve is supported via `autoApprove` on evaluation versions.
Technical Implementation
- Approval endpoints write ledger events and update `approval_status`.
- Four-eyes enforcement rejects self-approval when `reviewPolicy=four_eyes`.
- Auto-approve runs at submit-time and emits `submission.approved`.
- Request-changes/reopen emit `submission.changes.requested` and `submission.reopened`.
Functional Flow
- SMB flow: self-approve or auto-approve for low friction.
- Regulated flow: approval requires a distinct reviewer identity.
Standard Workflow
1. Inspector completes observation
2. Submission enters `pending_review`
3. Reviewer is notified
4. Reviewer:
   - Views immutable snapshot
   - Reviews findings & evidence
5. Reviewer approves → `approved`
6. Submission becomes client-visible
Rejection / Change Request
If rejected or changes are requested:
- Submission status updates to `rejected` or `changes_requested`
- The review action emits a ledger event with optional reason metadata
- Review can be reopened to return the submission to `pending_review`
- If rework is needed, a new session/submission is created (no edits to the original)
Four-Eyes Principle (Configurable)
Evalium supports both:
SMB Mode
- Inspector can self-approve
- Zero friction
Enterprise / Regulated Mode
- Inspector cannot approve own work
- Reviewer role enforced via capabilities
- Full audit trail of approvals
This is controlled via:
- Capabilities (`submissions.approve`)
- Evaluation version `reviewPolicy` (`self_approve` | `four_eyes`)
- Evaluation version `autoApprove` (default `false`)
Technical Implementation
- `reviewPolicy` and `autoApprove` are persisted on evaluation versions.
- The approval handler validates actor vs submission owner when `reviewPolicy=four_eyes`.
- Auto-approve sets `approval_status=approved` on submit and emits ledger events.
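The four-eyes check itself reduces to a single comparison. A minimal sketch, assuming owner and approver are identified by user id (the function shape is illustrative):

```typescript
// Sketch of the four-eyes guard: under four_eyes, the approver must be a
// different identity from the submission owner; self_approve allows either.

type ReviewPolicy = "self_approve" | "four_eyes";

function canApprove(policy: ReviewPolicy, ownerId: string, approverId: string): boolean {
  if (policy === "four_eyes" && ownerId === approverId) return false;
  return true;
}
```

In practice this runs alongside the capability check (`submissions.approve`), so a distinct identity without the capability is still rejected.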
Client Visibility Rules
Clients:
- Only see `approved` submissions
- Never see drafts or rejected work
- See findings + evidence + timestamps
- Receive a clean, defensible view
This protects consultant credibility.
Immutability & Audit Readiness
Once approved:
- Submission content is immutable
- Evidence cannot be altered or removed
- Approval metadata is preserved:
  - Who approved
  - When
  - From which role
This forms the Chain of Custody required for:
- Regulatory audits
- Client disputes
- Insurance reviews
Non-Goals (Explicit)
Observations are not:
- Journals
- Reflective diaries
- Narrative learning logs
Those patterns belong to education and are intentionally out of scope for commercial inspections.
Outstanding / Planned
- Batch UI orchestration (grid capture + navigation across many subjects). Assignment batch helper exists; UI orchestration pending.
- Offline capture + sync (separate macro-level initiative).
- Subject bulk import UI + richer search/filtering in UI (API supports assetType/tags/identifiers and returns asset fields on list; UI pending).
- Claims/disputes UX flows that consume findings + review actions (backend linkage is ready; UI pending).
Proof Runs (Implemented)
- `backend/tests/test_observations_all.sh` — observation end-to-end suite.
- `backend/tests/observation_subject_visibility_assignment.sh` — assignment-based subject visibility (external inspector).
- `backend/tests/observation_subjects_import.sh` — subject import preview/commit (API).
- `backend/tests/observation_subjects_filtering.sh` — subject list filters (assetType/tags/identifier fields).
- `backend/tests/observation_findings_evidence_required.sh` — findings evidence gating + findings projection.
- `backend/tests/observation_submission_approval.sh` — approval lifecycle (approve/reject/request changes/reopen).
- `backend/tests/observation_submission_auto_approve.sh` — auto-approve path.
- `backend/tests/observation_submission_four_eyes.sh` — four-eyes enforcement.
- `backend/tests/observation_verification_context_required.sh` — L4 verification context + step-up enforcement.
- `backend/tests/observation_batch_assignments.sh` — batch assignment to submissions with `batchId`.
- `backend/tests/observation_batch_assignments_bulk.sh` — assignment batch helper (one assignment per subject, shared `batchId`).
- `backend/tests/observation_assignment_multi_subjects.sh` — multi-subject submission linkage.
- `backend/tests/observation_assignment_subject_auto.sh` — auto-link submission subjects from assignment scope.
Summary
Observations in Evalium are:
- Inspection-native
- Finding-driven
- Evidence-first
- Review-gated
- Legally defensible
They transform:
“I checked this” into “I can prove I checked this, correctly, on this date, with this evidence.”
This is the foundation that supports premium pricing, professional trust, and scalable service delivery.