Crafted Design Solutions
faysalrabby.com
We test real tasks with users, review screens against usability rules, and run WCAG AA checks for contrast, keyboard, forms, and focus to show where people struggle and why. We translate findings into a ranked issue list, design changes, and a retest plan so teams ship improvements with confidence.
Purpose: confirm users can complete key tasks and find friction fast
What we do: set objectives, recruit participants, write a task script, run moderated or unmoderated sessions, and record task success rate, time on task, and errors (see the metrics sketch after this list)
Methods: think-aloud tests, rounds of five users, benchmark tests on a prototype or the live build
Deliverables: highlight reel, issue list with severity, prioritized fixes, updated designs
Typical scope: about one week to plan and recruit, one week to run and synthesize
Success signals: higher task success, shorter time to first value, fewer support tickets
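Below is a minimal sketch of how we might roll session recordings up into the numbers we report. The SessionResult shape, the "checkout" task, and the five-participant round are illustrative assumptions, not a fixed format.

```typescript
// Sketch: rolling up per-task usability metrics after a round of sessions.
// The record shape and example data below are assumptions for illustration.
interface SessionResult {
  participant: string;
  taskId: string;
  completed: boolean;
  timeSeconds: number;
  errorCount: number;
}

function summarizeTask(taskId: string, results: SessionResult[]) {
  const runs = results.filter(r => r.taskId === taskId);
  const successes = runs.filter(r => r.completed);
  const times = successes.map(r => r.timeSeconds).sort((a, b) => a - b);
  // Median time on task for completed runs (upper-middle value for even counts).
  const median = times.length ? times[Math.floor(times.length / 2)] : null;
  return {
    taskId,
    successRate: runs.length ? successes.length / runs.length : 0,
    medianTimeSeconds: median,
    totalErrors: runs.reduce((sum, r) => sum + r.errorCount, 0),
  };
}

// Example: a five-participant round on a hypothetical "checkout" task.
const round: SessionResult[] = [
  { participant: "P1", taskId: "checkout", completed: true,  timeSeconds: 95,  errorCount: 0 },
  { participant: "P2", taskId: "checkout", completed: true,  timeSeconds: 140, errorCount: 1 },
  { participant: "P3", taskId: "checkout", completed: false, timeSeconds: 300, errorCount: 3 },
  { participant: "P4", taskId: "checkout", completed: true,  timeSeconds: 110, errorCount: 0 },
  { participant: "P5", taskId: "checkout", completed: true,  timeSeconds: 85,  errorCount: 1 },
];
console.log(summarizeTask("checkout", round)); // successRate 0.8, median time 110s, 5 errors
```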
Purpose: expert review to surface issues quickly before or between tests
What we do: assess flows and screens against usability principles and platform guidelines, and weigh content clarity and analytics signals
Methods: checklist-based scoring, annotated screenshots and examples, separating quick wins from systemic issues
Deliverables: audit report with severity ratings and supporting evidence, design recommendations, fix roadmap (see the prioritization sketch after this list)
Typical scope: three to seven days per product area
Success signals: fewer known issues before release, faster iteration, better satisfaction
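As a rough illustration of how audit findings can become an ordered fix roadmap, the sketch below sorts findings by severity, then quick-win potential, then effort. The Finding shape, severity levels, and effort scale are assumptions for the example.

```typescript
// Sketch: turning audit findings into an ordered fix roadmap.
// Field names, severity levels, and the effort scale are illustrative assumptions.
type Severity = "blocker" | "major" | "minor" | "cosmetic";

interface Finding {
  id: string;
  heuristic: string;   // which usability principle the issue violates
  severity: Severity;
  effortDays: number;  // rough estimate to design and ship the fix
  quickWin: boolean;   // small change with visible user impact
}

const severityRank: Record<Severity, number> = {
  blocker: 0,
  major: 1,
  minor: 2,
  cosmetic: 3,
};

// Order: most severe first, quick wins ahead of peers, cheapest fixes first.
function buildRoadmap(findings: Finding[]): Finding[] {
  return [...findings].sort(
    (a, b) =>
      severityRank[a.severity] - severityRank[b.severity] ||
      Number(b.quickWin) - Number(a.quickWin) ||
      a.effortDays - b.effortDays
  );
}

// Example: a blocker on checkout ranks ahead of a cosmetic quick win.
buildRoadmap([
  { id: "A-07", heuristic: "consistency", severity: "cosmetic", effortDays: 0.5, quickWin: true },
  { id: "A-12", heuristic: "error prevention", severity: "blocker", effortDays: 3, quickWin: false },
]);
```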
Purpose: ensure inclusive use and meet buyer and legal accessibility requirements
What we do: automated scans plus manual checks for keyboard access, focus, forms, contrast, and screen reader paths
Methods: Axe or Lighthouse scans, NVDA and VoiceOver passes, contrast tools, tests for reduced motion and high contrast modes (a scan sketch follows this list)
Deliverables: WCAG gap list by severity, annotated screenshots, design and code fixes, retest plan
Typical scope: one to two weeks depending on size and platforms
Success signals: critical issues resolved, strong automated scores, readiness for procurement reviews
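Here is a minimal sketch of the automated part of that workflow, assuming Playwright and the @axe-core/playwright package are installed. The URL and tag list are placeholders, and the manual keyboard and screen reader passes still cover what automation cannot.

```typescript
// Sketch: an automated WCAG A/AA scan with axe-core driven through Playwright.
// The scanned URL is a placeholder; manual checks remain part of the audit.
import { chromium } from "playwright";
import AxeBuilder from "@axe-core/playwright";

async function scan(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Limit the run to WCAG 2.x Level A and AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa", "wcag21aa"])
    .analyze();

  // Print violations grouped by impact so the gap list can be sorted by severity.
  for (const violation of results.violations) {
    console.log(`${violation.impact}: ${violation.id} (${violation.help})`);
    // Node targets point at the exact elements that need fixes.
    violation.nodes.forEach(n => console.log("  ", n.target.join(" ")));
  }

  await browser.close();
  return results.violations;
}

scan("https://example.com").catch(console.error); // placeholder URL
```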
Purpose: find where users drop off and why, then lift conversion with evidence
What we do: audit events, build funnels and cohorts, review data quality, watch session replays, propose experiments
Methods: GA4 or Amplitude setup, UTM hygiene, heatmaps, short surveys, A/B test plan
Deliverables: funnel map with drop-off points, metric baselines and targets, experiment backlog, event naming standard (see the tracking sketch after this list)
Typical scope: one to two weeks for first audit and setup, ongoing monthly reviews
Success signals: clear dashboards, higher activation or conversion, lower time to key action, better decision speed
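As an example of what an event naming standard can look like in practice, the sketch below wraps GA4's gtag("event", ...) call in a small helper that enforces snake_case object_action names from an allowed list. The event and parameter names are illustrative, not a prescribed schema.

```typescript
// Sketch: one tracking wrapper so every call site follows the naming standard.
// gtag is the global added by the GA4 snippet; event names here are examples.
declare function gtag(command: "event", eventName: string, params?: Record<string, unknown>): void;

const ALLOWED_EVENTS = new Set([
  "signup_started",
  "signup_completed",
  "checkout_started",
  "checkout_completed",
]);

const SNAKE_CASE = /^[a-z][a-z0-9]*(_[a-z0-9]+)*$/;

function track(eventName: string, params: Record<string, unknown> = {}) {
  if (!SNAKE_CASE.test(eventName) || !ALLOWED_EVENTS.has(eventName)) {
    // Surface naming drift early instead of letting it pollute reports.
    console.warn(`track: "${eventName}" is not in the event naming standard`);
    return;
  }
  gtag("event", eventName, params);
}

// Funnel steps instrumented with the same convention.
track("checkout_started", { plan: "pro" });
track("checkout_completed", { plan: "pro", value: 49 });
```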