# User Feedback Loops

The Technovation Girls rubric rewards projects that demonstrate iterative testing with real users and documented UI changes based on feedback. This page documents SisterShield's user feedback loops: iterative testing cycles, each producing specific, evidence-backed design changes.
## Testing Methodology Overview

SisterShield follows a structured iterative testing approach:

1. **Define tasks** — Select specific user flows to test (e.g., “Find and activate Quick Exit,” “Start a course,” “Switch language”).
2. **Recruit participants** — Target users from the intended audience (women and girls at risk of or learning about TF-VAWG).
3. **Observe and record** — Capture task completion rates, time-on-task, errors, and qualitative feedback.
4. **Analyze and decide** — Identify patterns, prioritize changes, and document the reasoning.
5. **Implement and re-test** — Apply changes and validate improvements in the next cycle.
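The observe-and-record step can be sketched as a small helper that turns raw session notes into the completion-rate and time-on-task figures used in the cycle tables below. This is a minimal illustration; the `TaskObservation` shape is hypothetical, not part of the SisterShield codebase.

```typescript
// Illustrative shape for raw observations captured during a moderated session.
interface TaskObservation {
  participantId: string;
  taskId: number;
  completed: boolean;
  seconds: number; // time-on-task
}

// Summarize one task's results for the "Tasks Tested" table.
// Avg. time is computed over successful completions only.
function summarizeTask(obs: TaskObservation[], taskId: number) {
  const rows = obs.filter((o) => o.taskId === taskId);
  const done = rows.filter((o) => o.completed);
  return {
    completionRate: rows.length ? done.length / rows.length : 0,
    avgSeconds: done.length
      ? done.reduce((sum, o) => sum + o.seconds, 0) / done.length
      : 0,
  };
}

// Example: 3 of 4 participants completed task 1.
const cycle1: TaskObservation[] = [
  { participantId: "P1", taskId: 1, completed: true, seconds: 12 },
  { participantId: "P2", taskId: 1, completed: true, seconds: 8 },
  { participantId: "P3", taskId: 1, completed: false, seconds: 45 },
  { participantId: "P4", taskId: 1, completed: true, seconds: 10 },
];
console.log(summarizeTask(cycle1, 1)); // { completionRate: 0.75, avgSeconds: 10 }
```

Keeping the raw observations (rather than only the summary) makes it possible to re-check findings when a later cycle contradicts an earlier one.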
## Testing Cycle 1
| Field | Detail |
|---|---|
| Cycle Number | 1 |
| Date | TODO: Date of testing |
| Participant Count | TODO: Number of participants |
| Demographics | TODO: Age range, gender, relevant context (e.g., familiarity with TF-VAWG, tech comfort level) |
### Tasks Tested
| # | Task | Completion Rate | Avg. Time | Notes |
|---|---|---|---|---|
| 1 | TODO: Task description | TODO | TODO | TODO |
| 2 | TODO: Task description | TODO | TODO | TODO |
| 3 | TODO: Task description | TODO | TODO | TODO |
### Key Findings
- TODO: Finding 1 (e.g., “3 of 5 participants did not notice the Quick Exit button within 10 seconds”)
- TODO: Finding 2
- TODO: Finding 3
### Participant Quotes
> “TODO: Direct quote from a participant illustrating a key insight.” — Participant TODO: ID

> “TODO: Another quote.” — Participant TODO: ID
### UI Change Justification
**Change 1:** Based on Cycle 1 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.

**Change 2:** Based on Cycle 1 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.
### Outcome Measured
- TODO: How will we measure whether this change was effective in Cycle 2?
## Testing Cycle 2
| Field | Detail |
|---|---|
| Cycle Number | 2 |
| Date | TODO: Date of testing |
| Participant Count | TODO: Number of participants |
| Demographics | TODO: Age range, gender, relevant context |
### Tasks Tested
| # | Task | Completion Rate | Avg. Time | Notes |
|---|---|---|---|---|
| 1 | TODO: Re-test changed tasks from Cycle 1 | TODO | TODO | TODO |
| 2 | TODO: New task | TODO | TODO | TODO |
| 3 | TODO: New task | TODO | TODO | TODO |
### Key Findings
- TODO: Finding 1 (ideally showing improvement from Cycle 1 changes)
- TODO: Finding 2
- TODO: Finding 3
### Participant Quotes
> “TODO: Quote showing response to changes from Cycle 1.” — Participant TODO: ID

> “TODO: Quote revealing new insight.” — Participant TODO: ID
### UI Change Justification
**Change 1:** Based on Cycle 2 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.

**Change 2:** Based on Cycle 2 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.
### Outcome Measured
- TODO: Metric to validate in Cycle 3
## Testing Cycle 3
| Field | Detail |
|---|---|
| Cycle Number | 3 |
| Date | TODO: Date of testing |
| Participant Count | TODO: Number of participants |
| Demographics | TODO: Age range, gender, relevant context |
### Tasks Tested
| # | Task | Completion Rate | Avg. Time | Notes |
|---|---|---|---|---|
| 1 | TODO: Re-test changed tasks from Cycle 2 | TODO | TODO | TODO |
| 2 | TODO: Task description | TODO | TODO | TODO |
| 3 | TODO: Task description | TODO | TODO | TODO |
### Key Findings
- TODO: Finding 1 (demonstrate cumulative improvement across cycles)
- TODO: Finding 2
- TODO: Finding 3
### Participant Quotes
> “TODO: Quote showing overall satisfaction or remaining friction.” — Participant TODO: ID
### UI Change Justification
**Change 1:** Based on Cycle 3 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.
### Outcome Measured
- TODO: Final validation metric
## Next Iteration Notes
- TODO: What would Cycle 4 focus on if there were more time?
## Evidence Summary
| Cycle | Evidence Type | Description | Link/Location |
|---|---|---|---|
| 1 | TODO: e.g., Screen recording | TODO: Description | TODO: File path or URL |
| 1 | TODO: e.g., Survey responses | TODO: Description | TODO: File path or URL |
| 2 | TODO: e.g., Before/after screenshots | TODO: Description | TODO: File path or URL |
| 2 | TODO: e.g., Task completion data | TODO: Description | TODO: File path or URL |
| 3 | TODO: e.g., Participant notes | TODO: Description | TODO: File path or URL |
| 3 | TODO: e.g., Final usability scores | TODO: Description | TODO: File path or URL |
## Pivot Log: When Feedback Contradicts Assumptions
A pivot log documents moments where user feedback or technical evidence contradicted initial design assumptions, leading to significant architectural or UX changes. These are not failures — they are evidence of responsive, evidence-driven development.
### Pivot 1: Class-Based Assignments → Student Self-Serve

**Initial Assumption:** The platform should follow a traditional LMS model where teachers create classes, enroll students, and assign specific courses. This mirrors familiar educational software like Google Classroom.

**Contradicting Evidence:**

- **User research insight:** Target users (women and girls at risk of TF-VAWG) may not be in a formal classroom setting. Many potential learners are self-directed — they find resources through social media, crisis hotlines, or peer recommendations.
- **Safety concern:** Requiring class enrollment creates an administrative trail that could be visible to an abuser who has access to the victim’s email or school accounts. A self-serve model reduces the data footprint.
- **Technical complexity vs. value:** The class/assignment system added 3 database models (`Class`, `ClassStudent`, `Assignment`), a `classId` foreign key on `Submission`, and an impersonation system for teachers to preview student views. This complexity served only one use case (formal classroom deployment) while creating barriers for the primary use case (self-directed learning).
**Technical Changes:**

- Removed the `Class`, `ClassStudent`, and `Assignment` models from the Prisma schema
- Removed `classId` from the `Submission` model
- Removed the teacher impersonation system (`useImpersonation` hook, impersonation API routes)
- Simplified the teacher workflow to: create course → publish → students discover and self-serve
- Implemented a global review queue replacing per-class submission review
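The global review queue can be sketched as a pure function over submissions, with no per-class filtering because classes no longer exist. The `Submission` shape here (status, `submittedAt`) is an assumption for illustration, not the actual Prisma model.

```typescript
// Assumed shape; the real Prisma Submission model may differ.
interface Submission {
  id: string;
  studentName: string;
  status: "pending" | "approved" | "returned";
  submittedAt: Date;
}

// Global review queue: every pending submission across the whole
// platform, oldest first — no classId filter, no permission layer.
function reviewQueue(all: Submission[]): Submission[] {
  return all
    .filter((s) => s.status === "pending")
    .sort((a, b) => a.submittedAt.getTime() - b.submittedAt.getTime());
}

const queue = reviewQueue([
  { id: "s2", studentName: "B", status: "pending", submittedAt: new Date("2025-02-01") },
  { id: "s1", studentName: "A", status: "approved", submittedAt: new Date("2025-01-15") },
  { id: "s3", studentName: "C", status: "pending", submittedAt: new Date("2025-01-20") },
]);
console.log(queue.map((s) => s.id)); // [ 's3', 's2' ]
```

Because the queue is a filter-and-sort over one table, removing the class models simplifies both the query and the permission check it replaced.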
**UX Changes:**
- Students browse all published courses directly from the dashboard (no enrollment required)
- Teachers see all submissions in a single review queue (no class filtering)
- Removed class management UI entirely
**Impact:** Reduced database schema complexity by 3 models, eliminated an entire permission layer, and aligned the platform with its core safety principle: minimize data collection, maximize accessibility.

**Rubric Connection:** This pivot demonstrates Growth & Perseverance (willingness to discard working code when evidence contradicts assumptions) and User Experience (design decisions driven by target audience needs, not convention).
### Pivot 2: TODO

**Initial Assumption:** TODO

**Contradicting Evidence:** TODO

**Technical Changes:** TODO

**UX Changes:** TODO

**Impact:** TODO

**Rubric Connection:** TODO
## Measurable Success Criteria
The following criteria define success for the next round of testing:
| Criterion | Target | Current Status |
|---|---|---|
| Quick Exit discoverability | 100% of users find it within 5 seconds | TODO |
| Course start task completion | 90%+ unassisted completion | TODO |
| Language switching | 100% success, no confusion | TODO |
| Overall satisfaction (1-5 scale) | 4.0+ average | TODO |
| Safety perception (“I felt safe using this app”) | 4.5+ average | TODO |
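These criteria can be checked mechanically against session data at the end of each cycle. The sketch below assumes a simple per-participant record; the field names (`foundQuickExitSeconds`, `satisfaction`, `safety`) are illustrative, not taken from the SisterShield codebase.

```typescript
// Assumed per-participant record from one testing session.
interface Session {
  foundQuickExitSeconds: number | null; // null = never found Quick Exit
  satisfaction: number; // 1–5 Likert
  safety: number;       // 1–5 Likert ("I felt safe using this app")
}

// Check the measurable success criteria against a cycle's sessions.
function meetsCriteria(sessions: Session[]) {
  const avg = (pick: (s: Session) => number) =>
    sessions.reduce((sum, s) => sum + pick(s), 0) / sessions.length;
  return {
    // Target: 100% of users find Quick Exit within 5 seconds.
    quickExitOk: sessions.every(
      (s) => s.foundQuickExitSeconds !== null && s.foundQuickExitSeconds <= 5
    ),
    satisfactionOk: avg((s) => s.satisfaction) >= 4.0, // target: 4.0+ average
    safetyOk: avg((s) => s.safety) >= 4.5,             // target: 4.5+ average
  };
}

const result = meetsCriteria([
  { foundQuickExitSeconds: 3, satisfaction: 4, safety: 5 },
  { foundQuickExitSeconds: 4, satisfaction: 5, safety: 4.5 },
]);
console.log(result); // { quickExitOk: true, satisfactionOk: true, safetyOk: true }
```

Encoding the targets as code makes "Current Status" in the table above a mechanical fill-in rather than a judgment call.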
## Rubric Mapping
| Rubric Category | How This Page Contributes |
|---|---|
| User Experience | Demonstrates iterative feedback loops with documented, evidence-based UI changes |
| Growth & Perseverance | Shows willingness to iterate and adapt based on real user needs |
| Avoid Harm | Testing with target audience validates that design choices actually reduce harm |