Normal Use & Depth Discovery
Purpose
This document describes how Evalium is experienced in everyday use — when work is proceeding normally, nothing is disputed, and users simply want to get their job done.
It intentionally avoids architectural detail. Its role is to make clear that Evalium is simple to use by default, while offering depth and seriousness only when it matters.
This document is referenced by FOUNDATION and architecture to clarify market intent and user experience.
Capability Baseline (Validated 2026-02-25)
This narrative reflects current backend maturity plus target-state intent.
Live now:
- authoring + assignments + operations runtime + reporting/remediation.
- Submission-level explainability (koeStatus, proofReadiness) and defensibility exception queues.
Target-state/future:
- Skills profile hubs and competence projection UX.
- First-class compliance case/timeline orchestration.
- Dedicated proctor command and grading queue projection surfaces.
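To make the live explainability capability concrete, here is a minimal sketch of what a submission-level record carrying koeStatus and proofReadiness might look like. Only those two field names come from this document; the data structure, the status values, and the queueing rule are illustrative assumptions, not Evalium's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubmissionExplainability:
    """Hypothetical submission-level explainability record (field names
    koeStatus / proofReadiness are from the document; the rest is assumed)."""
    submission_id: str
    koe_status: str                       # illustrative, e.g. "verified"
    proof_readiness: str                  # illustrative, e.g. "ready" or "exception"
    exception_reason: Optional[str] = None  # set when queued for review

def needs_exception_review(record: SubmissionExplainability) -> bool:
    """A submission joins the defensibility exception queue when its
    proof readiness is flagged as an exception."""
    return record.proof_readiness == "exception"

flagged = SubmissionExplainability("sub-001", "verified", "exception", "missing evidence")
assert needs_exception_review(flagged)
```

The point of the sketch is the separation: readiness status travels with the submission, while the exception queue is derived from it rather than maintained by hand.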
Core Principle: Calm by Default, Deliberate by Need
Most of the time, Evalium behaves like a familiar professional tool:
- Users save work normally
- Results appear quickly
- Navigation is predictable
- Nothing feels heavy or legalistic
Depth is revealed only when:
- an action has downstream impact
- a result must be corrected
- work is shared externally
- an explanation is requested
The system slows down to protect the user, not to burden them.
Scenario 1: Authoring & Small Changes (Everyday Work)
Situation
A consultant or assessor edits an evaluation or checklist to fix wording or improve clarity.
What the user does
- Opens the evaluation
- Makes changes
- Clicks Save
What the user experiences
- A simple confirmation: “Saved”
- No prompts about versions
- No warnings unless the change has impact
Why it feels simple
- Saving behaves as users expect
- There is no decision fatigue
- The user is not asked to understand internal mechanics
Where depth exists (if needed)
- If the content is already in use, a clear indicator appears later at Publish, not at Save
- The system quietly preserves history without exposing it
Scenario 2: Issuing Work to Clients or Participants
Situation
A professional service firm issues an assessment, observation, or review to a client or cohort.
What the user does
- Clicks Assign
- Selects recipients
- Accepts default limits
- Clicks Send
What the user experiences
- A short summary of who, what, and when
- Defaults that make sense
- Clear confirmation that work has been issued
Why it feels simple
- Most options are pre-filled
- Advanced policies (including verification rigor) are hidden by default
- The user can complete the task quickly
Where depth exists (if needed)
- The system defaults to basic verification for everyday work
- Higher verification levels (e.g. context-verified delivery) are surfaced only when the selected template or engagement requires it
- All issuance details are still recorded automatically for later reference
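The defaulting behaviour above can be sketched in a few lines: basic verification for everyday work, escalated only when the selected template or engagement demands it. The level names and function are illustrative assumptions, not Evalium's actual API.

```python
from typing import Optional

def effective_verification(template_required_level: Optional[str] = None) -> str:
    """Return the verification level applied at issuance.

    Defaults to everyday "basic" verification; a higher level (such as
    "context-verified") applies only when the template requires it.
    Level names are hypothetical.
    """
    if template_required_level is None:
        return "basic"
    return template_required_level

# Everyday issuance: the user never sees a verification decision.
assert effective_verification() == "basic"
# A template with stricter requirements surfaces the higher level.
assert effective_verification("context-verified") == "context-verified"
```

The design choice the sketch illustrates: the escalation decision lives in the template, so the issuing user is never asked to choose a rigor level themselves.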
Scenario 3: Reviewing Results (Normal Outcomes)
Situation
A manager or consultant checks results after work is completed.
What the user does
- Opens the results view
- Scans scores or outcomes
What the user experiences
- Clear results
- Familiar tables and summaries
- Fast-loading views
Why it feels simple
- Results appear without explanation unless requested
- No background processing is visible
- The system feels responsive and predictable
Where depth exists (if needed)
- An Explain link is available on any derived value
- Clicking it reveals why a result exists, without changing the main workflow
- For live submission defensibility, users can also open readiness/exception context without leaving their workflow
Scenario 4: Sharing Results with a Client (Professional Confidence)
Situation
A firm shares outcomes or evidence with a client.
What the user does
- Shares a Client Link or export
What the user experiences
- A clean, professional, read-only view
- No editing controls
- Clear presentation of outcomes
Why it feels simple
- Sharing does not require duplicating data
- Clients see only what they are permitted to see
Where depth exists (if needed)
- The Client Link always reflects the current, authorised view of the results
- History and explanations are available but not forced
- Sensitive readiness reasons may be redacted based on link capabilities
- The shared view carries inherent credibility without added explanation
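Capability-based redaction of readiness reasons, as described above, can be sketched as a filter applied when rendering a Client Link view. The field and capability names here are assumptions for illustration only.

```python
def render_for_link(result: dict, capabilities: set) -> dict:
    """Build the read-only view a Client Link exposes.

    Sensitive readiness reasons are included only when the link carries
    the (hypothetical) "view_readiness_reasons" capability; everything
    else is redacted by default.
    """
    view = {"outcome": result["outcome"]}
    if "view_readiness_reasons" in capabilities:
        view["readiness_reason"] = result["readiness_reason"]
    return view

result = {"outcome": "pass", "readiness_reason": "identity re-check pending"}

# An everyday link shows only the outcome.
assert render_for_link(result, set()) == {"outcome": "pass"}
# A link granted the capability also sees the reason.
assert "readiness_reason" in render_for_link(result, {"view_readiness_reasons"})
```

Because the view is computed from the live record rather than copied, the link always reflects the current authorised state without duplicating data.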
Scenario 5: Correcting Results (Serious Moment)
Situation
A mistake is discovered that affects multiple results.
What the user does
- Opens Correct Results
What the user experiences
- A clear explanation that this action is significant
- An impact preview showing what will change
- Before/after counts and outcomes
Why this feels reassuring, not stressful
- Nothing changes immediately
- The system shows consequences before action
- The user remains in control
Where seriousness is felt
- The preview makes consequences explicit
- Applying the change requires confirmation and a reason
- The system records what changed and why
This is a deliberate pause — not friction — designed to protect the user.
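The preview-then-confirm flow above can be sketched as two separate operations: a preview that mutates nothing, and an apply step that requires a reason and records what changed. All names and the score-adjustment example are illustrative assumptions, not the product's implementation.

```python
results = {"r1": 62, "r2": 58}   # hypothetical result scores
audit_log = []

def preview_correction(adjustment: int) -> dict:
    """Show before/after values without changing anything."""
    return {rid: {"before": s, "after": s + adjustment}
            for rid, s in results.items()}

def apply_correction(adjustment: int, reason: str) -> None:
    """Apply the change only with a stated reason, and record it."""
    if not reason:
        raise ValueError("A reason is required to apply a correction")
    for rid in results:
        results[rid] += adjustment
    audit_log.append({"adjustment": adjustment, "reason": reason})

preview = preview_correction(5)
assert results == {"r1": 62, "r2": 58}   # previewing changed nothing

apply_correction(5, "scoring key error on item 3")
assert results == {"r1": 67, "r2": 63}
assert len(audit_log) == 1
```

The deliberate pause lives in the gap between the two functions: the user sees the preview, then explicitly confirms with a reason before anything is mutated.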
Scenario 6: Explaining or Defending an Outcome (Depth on Demand)
Situation
A user is asked: “Why is this the result?”
What the user does
- Clicks Explain
What the user experiences
- A focused view showing contributing evidence
- Clear scope indicators (what version, when)
Why this feels controlled
- Explanations are precise, not overwhelming
- The user can answer questions without preparing separate documentation
Where depth exists
- The explanation is always tied to the exact version of the rules and content used at the time of execution
- Changes made after the work was completed do not affect what is shown
- Full history is available if required, but most users never need to go this far
- Skill-level explainability follows the same pattern once skills projections are activated
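The version-pinning guarantee above — later edits never change what an explanation shows — can be sketched by resolving explanations against the rules version captured at execution time. The data shapes, the pass-threshold rule, and all names are illustrative assumptions.

```python
# Hypothetical versioned rule store: version 2 was published after the
# work below was completed, so it must not affect its explanation.
rule_versions = {
    ("eval-7", 1): {"pass_threshold": 70},
    ("eval-7", 2): {"pass_threshold": 80},
}

# Execution record pins the rules version in effect when the work ran.
execution_record = {"evaluation": "eval-7", "rules_version": 1, "score": 75}

def explain(record: dict) -> dict:
    """Resolve the outcome against the pinned rules version, never the latest."""
    rules = rule_versions[(record["evaluation"], record["rules_version"])]
    outcome = "pass" if record["score"] >= rules["pass_threshold"] else "fail"
    return {
        "outcome": outcome,
        "rules_version": record["rules_version"],
        "threshold": rules["pass_threshold"],
    }

# Score 75 passes under the pinned version 1 (threshold 70), even though
# the current version 2 would have failed it.
assert explain(execution_record) == {"outcome": "pass", "rules_version": 1, "threshold": 70}
```

The same pinning pattern would extend naturally to skill-level explainability once skills projections are activated, since a projection can carry its own captured version reference.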