Trusted by NASA, NIH, DHCS, Fiserv, Uber
What UX research services include
UX research services cover the qualitative and quantitative methods an agency deploys to study how real users interact with a digital product, including interviews, usability testing, field studies, analytics analysis, and heuristic evaluations conducted before, during, and after the design phase. Fuselab Creative has led UX research engagements for NASA, NIH, the California Department of Health Care Services, Fiserv, and Uber since 2017, with the largest share of its portfolio concentrated in healthcare and government products.
Testing with the wrong user population
The most common research failure Fuselab encounters in new client engagements is testing with the wrong user population. A healthcare product tested only with physicians misses how nurses, pharmacists, and administrative staff interact with the same interface under different time pressures. The DHCS Medi-Cal project required separate research tracks for caseworkers and applicants because their goals, literacy levels, and error tolerances had almost nothing in common.
Testing on prototypes instead of production environments
Testing on prototypes catches layout and flow problems. Testing on production systems catches performance, latency, and real-data problems that prototypes cannot simulate. A dashboard prototype loaded with sample data behaves differently from the same dashboard pulling live records across three API sources with inconsistent response times. Fuselab's research on the Fiserv Small Business Index included production-environment testing for exactly this reason.
Delivering findings too late to change direction
Research delivered as a PDF report three weeks after testing ends is research the product team will not use. Findings lose value every day they sit unread. Fuselab shares session recordings and preliminary patterns within 48 hours of each testing round so the design team can adjust direction while the research is still running.
Get the Insights You Need to Make Your Product a Success
UX research methods we deploy
The six methods most frequently deployed in Fuselab research engagements are moderated and unmoderated usability testing, in-depth user interviews, field studies and contextual inquiry, prototype testing with interactive prototypes, surveys and structured customer feedback, and card sorting paired with tree testing for information architecture validation. Method selection depends on what the research needs to answer, not on a fixed checklist.
Moderated and unmoderated usability testing
Moderated testing sessions, where a facilitator guides a participant through tasks, produce the deepest qualitative insights because the facilitator can follow up on unexpected behaviors in real time. Unmoderated testing scales better for validating specific hypotheses across a larger participant pool without the scheduling overhead of one-to-one sessions. The choice between them depends on whether the research needs depth or breadth at that stage of the project.
User interviews
In-depth interviews are foundational in every Fuselab research engagement because they reveal the reasoning behind user decisions that behavioral data alone cannot explain. A user who abandons a checkout flow and a user who completes it reluctantly look identical in analytics. An interview distinguishes between the two and identifies what the reluctant user almost gave up on, which is where the highest value design improvements hide.
Field studies and contextual inquiry
Field studies move research outside the lab and into the environment where users actually work. Observing how someone uses a product at their desk, under interruption, with three other tools open alongside it reveals constraints that controlled testing cannot simulate. For the DHCS project, watching caseworkers navigate eligibility systems in their actual offices showed workflow friction that lab-based usability testing had missed entirely.
Prototype testing
Testing interactive prototypes with real users before development begins catches structural problems when changes are still inexpensive. The critical distinction is testing task flows, not visual design. A prototype that looks rough but lets users complete real tasks produces more useful findings than a polished mockup that only demonstrates appearance. Fuselab builds clickable task-flow prototypes for every major engagement and tests them before any design enters the development pipeline.
Surveys and structured feedback
Every Fuselab research engagement includes surveys and a structured customer feedback mechanism. Feedback gathered from users at each stage of the work is what tells us how far the product has drifted from user needs, and it is invaluable for understanding what users like, what they dislike, and why.
Card sorting and tree testing
Card sorting reveals how users mentally organize categories and labels. Tree testing confirms whether users can find specific content within a proposed navigation structure. Both methods take days rather than weeks to complete and prevent costly information architecture restructuring after development starts. Fuselab uses both on every project involving navigation design or content reorganization, because architecture errors discovered after launch require a full structural rebuild rather than a simple content edit.
What a UX research engagement looks like
A typical Fuselab UX research engagement runs six to ten weeks from kickoff to final deliverables. The first week focuses on goal alignment and existing data review with the client team. Testing and data collection run in the middle weeks with live client access throughout. Analysis, recommendations, and implementation support close the engagement.
A UX research services engagement begins by defining research objectives, reviewing existing analytics, user feedback, and design documentation, and identifying the specific questions the research must answer. Testing should not begin until the team agrees on what a successful outcome looks like. Research without agreed success criteria produces interesting findings that connect to no decision the product team needs to make, which is where most engagements lose their return on investment.
Research sessions run on recorded platforms where the product team can observe live or review recordings within hours. The methods deployed, whether moderated testing, field observation, interviews, or surveys, depend on the questions established at kickoff. Preliminary patterns should surface continuously rather than arriving in a final report, because the design team needs to adjust direction while testing is still running to get the full value of the engagement.
In parallel with user testing, healthcare, government, and fintech products require a regulatory review to identify where compliance constraints like HIPAA, Section 508, WCAG, and KYC shape interface decisions that general usability testing cannot surface. Accessibility issues that users silently work around instead of reporting are flagged at this stage rather than discovered during a post-launch audit, where remediation costs multiply.
Analysis uses affinity mapping and thematic analysis to identify patterns across the collected data. The product team participates directly in synthesis rather than waiting for a finished report. UX research services deliverables include a prioritized recommendation list, journey maps, personas where relevant, and testable prototypes demonstrating the changes. Every recommendation traces to a specific observation in the data, not to a general best practice the team could have read online.
Implementation work extends past the deliverable handoff. A UX research services engagement translates research evidence into interface changes, information architecture adjustments, and testing protocols in direct collaboration with the development team. Engineering review of the design direction before build begins prevents mid-development revisions that slow delivery and compound technical debt across the product.
Four to eight weeks after implementation, a follow-up evaluation measures whether the design changes achieved what the research predicted. User behavior data, task completion rates, and support ticket volume are compared against the baseline captured at kickoff. Research insights have a shelf life, and products evolve continuously, which is why follow-up confirms which observations still hold and which have been overtaken by later product changes.
UX research project case studies
UX research by industry
UX research requirements vary by industry because the regulatory context, user populations, and task complexity differ at a structural level. A healthcare research protocol cannot be applied to a fintech product without adjustment. Fuselab's UX research services concentrate on industries where domain knowledge directly shapes methodology: healthcare, data visualization and dashboards, fintech, AI and machine learning, transportation, and enterprise SaaS.
Healthcare
Clinicians switch user roles mid-task, patients reviewing results have ten to thirty seconds of attention, and administrators process sensitive data while managing interruptions. Commercial UX testing methodology does not transfer to these conditions, which is why healthcare engagements require custom research protocols. HIPAA and Section 508 are the baseline, not the differentiator. The harder question is whether the interface supports clinical decision-making when the user has seven seconds to choose, which is where most healthcare usability testing stops short.
Data visualization and dashboards
Users reading charts, filtering large datasets, and drilling into anomalies face a different research problem than users completing transactional tasks. The testing is about pattern interpretation rather than task completion, and most UX research frameworks built around transactional flows do not handle information density well. Dashboard research covers chart-type selection logic, information density thresholds, and how the interface handles edge cases in the underlying data.
Fintech
A user entering bank account details abandons the task at the first sign of interface uncertainty. Financial research has to test trust and transaction confidence alongside standard usability, because the stakes change what users notice and what they tolerate. KYC sequencing, transaction-state communication, and error recovery patterns decide whether users complete onboarding or drop at verification. The hardest fintech UX research problem is not usability. It is how the interface behaves when the transaction fails and the user cannot tell why.
AI and machine learning
General UX testing does not cover the three things that matter most for AI products: users must understand what the model is doing, calibrate trust in its outputs, and know when to override recommendations. Research here tests how the interface communicates confidence levels, handles failure gracefully, and lets users provide corrections without requiring them to understand the underlying system architecture.
Transportation
Interfaces used under motion, time pressure, and environmental distraction cannot be evaluated in a controlled testing lab. Field research is essential because in-vehicle and warehouse environments introduce variables that prototypes cannot simulate. A telematics dashboard that tests flawlessly on a laptop can fail within minutes when mounted in a moving vehicle under vibration and changing light conditions, which is why Fuselab tested the Automatize Platform in actual fleet environments rather than simulated workloads.
Enterprise SaaS
Trained operators, power users, and administrators perform the same tasks hundreds of times per week. They tolerate friction differently than first-time users because friction that feels minor in onboarding compounds into real cost when repeated daily. UX research for enterprise products focuses on keyboard shortcuts, bulk operations, and error recovery patterns rather than discovery flows or visual appeal.
Who leads the research team
Our team
Fuselab's UX research team is led by Marc Caposino, CEO and Founder, who has directed research engagements for NASA, Fiserv, DHCS, NIH, and Uber across more than 15 years in enterprise UX. The senior research staff includes practitioners with experience across healthcare, government, fintech, and AI interface design, reporting directly to Marc on every engagement.
Don't Listen to Us, Read What Our Clients Are Saying.
We know that trusting an outsider with your vision can be scary. That's why, if you're not satisfied with us after the first two weeks, you can walk away owing us nothing.
"We went from prototype to usable software lightning fast, and our customer reviews have never been better."
"Their creativity and mastery of UX UI design has made our years of working together enjoyable and incredibly successful!"
"If you need to re-think your product and need some truly unique design talent, Fuselab Creative's design team is your answer."
"We needed a nimble team of UI UX designers to work with our development team and they quickly became one of our most vital resources and far exceeded our expectations."
Ready to have a conversation?
Contact our UX Design team by filling out the form below!
Frequently Asked Questions
Fuselab Creative has been creating user-friendly and visually appealing digital interfaces for over a decade, and we still feel like we've only scratched the surface of our potential.
What is the difference between UX research and UX design?
UX research studies how users actually behave with a product through observation, testing, and data analysis. UX design applies those findings to create or improve the interface. Research happens before and during design, not after. An agency that designs without researching first is making decisions based on assumptions, and an agency that researches without designing is producing reports that never ship.
What is the difference between UX research and market research?
Market research studies what people say they want through surveys, focus groups, and demographic analysis. UX research studies what people actually do when they use a product through direct observation, task-based testing, and behavioral data. Market research answers whether demand exists. UX research answers whether the product works for the people using it. Both are valuable, but they answer fundamentally different questions.
How does UX research work alongside our internal design team?
UX research integrates with an internal design team by providing evidence that informs design decisions rather than replacing the team’s judgment. Research sessions are observable by the internal team in real time. Synthesis happens collaboratively, not behind closed doors. The deliverables are structured so the internal team can apply findings independently after the engagement ends, which means the research investment continues producing value without ongoing agency involvement.
Do we still need UX research if we already have product analytics?
Product analytics show what users do but not why they do it. Analytics can identify that 40% of users drop off at step three of a workflow, but they cannot explain whether the problem is confusing labels, a missing confirmation step, or a performance issue that only appears on certain devices. UX research answers the why behind the analytics data, which is what the design team needs to fix the problem correctly.
How much do UX research services cost?
UX research services from US-based specialist agencies typically range from $25,000 to $75,000 for a full engagement, with hourly rates between $100 and $250 depending on scope and regulatory complexity. Healthcare, government, and fintech projects cost more because compliance review adds structural work to every phase. Offshore generalist agencies charge less but rarely have the domain expertise that regulated-industry products require.
How long does a UX research project take?
A full-scope UX research engagement runs 6 to 10 weeks from kickoff to final deliverables. Rapid validation projects with a narrow scope can complete in 2 to 3 weeks. The variable that most affects timeline is participant recruitment, not analysis. Products with specialized user populations like clinicians, compliance officers, or logistics operators take longer to recruit than products with general consumer users.
Can UX research be done on a product that is already live?
UX research on a live product is often more valuable than research during a redesign because testing on production systems captures real performance, real data, and real user behavior that prototypes cannot simulate. A dashboard loaded with sample data behaves differently from the same dashboard pulling live records across multiple API sources. Research on live products identifies the problems users actually encounter, not the problems a prototype predicts.
How do you measure whether UX research actually worked?
Measurement happens 4 to 8 weeks after implementation by comparing three metrics against the baseline captured at kickoff: task completion rates, support ticket volume related to usability complaints, and user retention or adoption rates for the redesigned workflows. If the baseline was not established before research began, there is nothing to measure against, which is why defining success metrics during the first week of the engagement matters more than most teams realize.
What should we prepare before a UX research engagement starts?
Three things accelerate the first week: existing analytics data showing where users currently struggle, access to 3 to 5 real users from each distinct role the product serves, and a list of the product decisions the team needs research to inform. Teams that arrive with opinions about what is broken but no data confirming it get the most value from research, because the findings either validate or redirect those assumptions quickly.
Read Our Blogs
UX Research is the Foundation for all UX/UI Design