User Feedback Loops

The Technovation Girls rubric rewards projects that demonstrate iterative testing with real users and documented UI changes based on feedback. This page provides the framework for SisterShield’s user feedback loops — iterative testing cycles, each producing specific, evidence-backed design changes.

Testing Methodology Overview

SisterShield follows a structured iterative testing approach:

  1. Define tasks — Select specific user flows to test (e.g., “Find and activate Quick Exit,” “Start a course,” “Switch language”).
  2. Recruit participants — Target users from the intended audience (women and girls at risk of or learning about TF-VAWG).
  3. Observe and record — Capture task completion rates, time-on-task, errors, and qualitative feedback.
  4. Analyze and decide — Identify patterns, prioritize changes, and document the reasoning.
  5. Implement and re-test — Apply changes and validate improvements in the next cycle.
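The metrics captured in step 3 can be aggregated mechanically. A minimal sketch in TypeScript (all names here — `TaskObservation`, `summarizeTask`, the field names — are illustrative, not part of the SisterShield codebase):

```typescript
// Hypothetical record of one observed task attempt (step 3).
interface TaskObservation {
  taskId: number;
  participantId: string;
  completed: boolean;
  seconds: number; // time-on-task
  errors: number;
}

// Aggregate raw observations into the per-task metrics
// (completion rate, average time, error count) recorded in each cycle table.
function summarizeTask(taskId: number, obs: TaskObservation[]) {
  const rows = obs.filter((o) => o.taskId === taskId);
  const done = rows.filter((o) => o.completed);
  return {
    taskId,
    completionRate: rows.length ? done.length / rows.length : 0,
    avgSeconds: done.length
      ? done.reduce((sum, o) => sum + o.seconds, 0) / done.length
      : 0,
    totalErrors: rows.reduce((sum, o) => sum + o.errors, 0),
  };
}
```

Average time is computed over completed attempts only, so abandoned attempts do not distort the timing figure.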

Testing Cycle 1

| Field | Detail |
| --- | --- |
| Cycle Number | 1 |
| Date | TODO: Date of testing |
| Participant Count | TODO: Number of participants |
| Demographics | TODO: Age range, gender, relevant context (e.g., familiarity with TF-VAWG, tech comfort level) |

Tasks Tested

| # | Task | Completion Rate | Avg. Time | Notes |
| --- | --- | --- | --- | --- |
| 1 | TODO: Task description | TODO | TODO | TODO |
| 2 | TODO: Task description | TODO | TODO | TODO |
| 3 | TODO: Task description | TODO | TODO | TODO |

Key Findings

  • TODO: Finding 1 (e.g., “3 of 5 participants did not notice the Quick Exit button within 10 seconds”)
  • TODO: Finding 2
  • TODO: Finding 3

Participant Quotes

“TODO: Direct quote from a participant illustrating a key insight.” — Participant TODO: ID

“TODO: Another quote.” — Participant TODO: ID

UI Change Justification

Change 1: Based on Cycle 1 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.

Change 2: Based on Cycle 1 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.

Outcome Measured

  • TODO: How will we measure whether this change was effective in Cycle 2?

Testing Cycle 2

| Field | Detail |
| --- | --- |
| Cycle Number | 2 |
| Date | TODO: Date of testing |
| Participant Count | TODO: Number of participants |
| Demographics | TODO: Age range, gender, relevant context |

Tasks Tested

| # | Task | Completion Rate | Avg. Time | Notes |
| --- | --- | --- | --- | --- |
| 1 | TODO: Re-test changed tasks from Cycle 1 | TODO | TODO | TODO |
| 2 | TODO: New task | TODO | TODO | TODO |
| 3 | TODO: New task | TODO | TODO | TODO |

Key Findings

  • TODO: Finding 1 (ideally showing improvement from Cycle 1 changes)
  • TODO: Finding 2
  • TODO: Finding 3

Participant Quotes

“TODO: Quote showing response to changes from Cycle 1.” — Participant TODO: ID

“TODO: Quote revealing new insight.” — Participant TODO: ID

UI Change Justification

Change 1: Based on Cycle 2 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.

Change 2: Based on Cycle 2 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.

Outcome Measured

  • TODO: Metric to validate in Cycle 3

Testing Cycle 3

| Field | Detail |
| --- | --- |
| Cycle Number | 3 |
| Date | TODO: Date of testing |
| Participant Count | TODO: Number of participants |
| Demographics | TODO: Age range, gender, relevant context |

Tasks Tested

| # | Task | Completion Rate | Avg. Time | Notes |
| --- | --- | --- | --- | --- |
| 1 | TODO: Re-test changed tasks from Cycle 2 | TODO | TODO | TODO |
| 2 | TODO: Task description | TODO | TODO | TODO |
| 3 | TODO: Task description | TODO | TODO | TODO |

Key Findings

  • TODO: Finding 1 (demonstrate cumulative improvement across cycles)
  • TODO: Finding 2
  • TODO: Finding 3

Participant Quotes

“TODO: Quote showing overall satisfaction or remaining friction.” — Participant TODO: ID

UI Change Justification

Change 1: Based on Cycle 3 feedback (TODO: data/quote), we changed TODO: UI element from TODO: before to TODO: after to achieve TODO: goal.

Outcome Measured

  • TODO: Final validation metric

Next Iteration Notes

  • TODO: What would Cycle 4 focus on if there were more time?

Evidence Summary

| Cycle | Evidence Type | Description | Link/Location |
| --- | --- | --- | --- |
| 1 | TODO: e.g., Screen recording | TODO: Description | TODO: File path or URL |
| 1 | TODO: e.g., Survey responses | TODO: Description | TODO: File path or URL |
| 2 | TODO: e.g., Before/after screenshots | TODO: Description | TODO: File path or URL |
| 2 | TODO: e.g., Task completion data | TODO: Description | TODO: File path or URL |
| 3 | TODO: e.g., Participant notes | TODO: Description | TODO: File path or URL |
| 3 | TODO: e.g., Final usability scores | TODO: Description | TODO: File path or URL |

Pivot Log: When Feedback Contradicts Assumptions

A pivot log documents moments where user feedback or technical evidence contradicted initial design assumptions, leading to significant architectural or UX changes. These are not failures — they are evidence of responsive, evidence-driven development.

Pivot 1: Class-Based Assignments → Student Self-Serve

Initial Assumption: The platform should follow a traditional LMS model where teachers create classes, enroll students, and assign specific courses. This mirrors familiar educational software like Google Classroom.

Contradicting Evidence:

  1. User research insight: Target users (women and girls at risk of TF-VAWG) may not be in a formal classroom setting. Many potential learners are self-directed — they find resources through social media, crisis hotlines, or peer recommendations.
  2. Safety concern: Requiring class enrollment creates an administrative trail that could be visible to an abuser who has access to the victim’s email or school accounts. A self-serve model reduces the data footprint.
  3. Technical complexity vs. value: The class/assignment system added 3 database models (Class, ClassStudent, Assignment), a classId foreign key on Submission, and an impersonation system for teachers to preview student views. This complexity served only one use case (formal classroom deployment) while creating barriers for the primary use case (self-directed learning).

Technical Changes:

  • Removed Class, ClassStudent, and Assignment models from the Prisma schema
  • Removed classId from the Submission model
  • Removed the teacher impersonation system (useImpersonation hook, impersonation API routes)
  • Simplified the teacher workflow to: create course → publish → students discover and self-serve
  • Implemented a global review queue replacing per-class submission review
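The global review queue in the last bullet can be sketched as a single filter over all submissions with no class scoping. This is an illustration only — the `Submission` fields shown (`status`, `submittedAt`) are assumed for the sketch, not taken from the actual Prisma schema; the real model is known only to lack `classId` after the pivot:

```typescript
// Hypothetical submission shape after the pivot: note there is no classId field.
interface Submission {
  id: string;
  courseId: string;
  studentId: string;
  status: "pending" | "approved" | "changes_requested";
  submittedAt: Date;
}

// One global review queue: every pending submission across all courses,
// oldest first, replacing the old per-class submission review.
function reviewQueue(all: Submission[]): Submission[] {
  return all
    .filter((s) => s.status === "pending")
    .sort((a, b) => a.submittedAt.getTime() - b.submittedAt.getTime());
}
```

Because the queue is a plain filter rather than a class-scoped join, removing the Class/Assignment models shrinks both the schema and the permission checks the queue needs.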

UX Changes:

  • Students browse all published courses directly from the dashboard (no enrollment required)
  • Teachers see all submissions in a single review queue (no class filtering)
  • Removed class management UI entirely

Impact: Reduced database schema complexity by 3 models, eliminated an entire permission layer, and aligned the platform with its core safety principle: minimize data collection, maximize accessibility.

Rubric Connection: This pivot demonstrates Growth & Perseverance (willingness to discard working code when evidence contradicts assumptions) and User Experience (design decisions driven by target audience needs, not convention).


Pivot 2: TODO

Initial Assumption: TODO

Contradicting Evidence: TODO

Technical Changes: TODO

UX Changes: TODO

Impact: TODO

Rubric Connection: TODO


Measurable Success Criteria

The following criteria define success for the next round of testing:

| Criterion | Target | Current Status |
| --- | --- | --- |
| Quick Exit discoverability | 100% of users find it within 5 seconds | TODO |
| Course start task completion | 90%+ unassisted completion | TODO |
| Language switching | 100% success, no confusion | TODO |
| Overall satisfaction (1-5 scale) | 4.0+ average | TODO |
| Safety perception ("I felt safe using this app") | 4.5+ average | TODO |
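Once measurements are collected, the pass/fail status of each criterion can be checked mechanically rather than by eye. A minimal TypeScript sketch (the `Criterion` shape and names are hypothetical, introduced only for illustration):

```typescript
// Hypothetical record pairing each success criterion with its target and measurement.
interface Criterion {
  name: string;
  target: number; // threshold to meet or exceed
  measured: number | null; // null until testing data is collected
}

// A criterion counts as unmet when it is unmeasured or below target,
// so TODO rows stay visible instead of silently passing.
function unmetCriteria(criteria: Criterion[]): string[] {
  return criteria
    .filter((c) => c.measured === null || c.measured < c.target)
    .map((c) => c.name);
}
```

Treating a missing measurement as "unmet" keeps the checklist honest: a criterion only passes once there is real data behind it.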

Rubric Mapping

| Rubric Category | How This Page Contributes |
| --- | --- |
| User Experience | Demonstrates iterative feedback loops with documented, evidence-based UI changes |
| Growth & Perseverance | Shows willingness to iterate and adapt based on real user needs |
| Avoid Harm | Testing with target audience validates that design choices actually reduce harm |