User researchers often conduct research toward the beginning of the design journey, before new products and features are developed. At big companies, the delay between early research and product launch can stretch to two or more years, which means I often never learn the full impact of my research: the metrics never make it back to me, the feature never launches, or I leave the team before it does. This lack of visible impact, combined with skeptical stakeholders, has at times sown doubt about the value of qualitative data. Healthy skepticism can sharpen individual studies and guard against bias, but even justified stakeholder doubt can seep into the psyche of a qualitative researcher.
After a decade of research in tech, there have been a couple of moments when I was undeniably validated. These weren't always the splashiest projects, but they are the ones I look back on to ground myself.
One of those moments was helping the Alexa team define the minimum valuable product (MVP) for public transit commutes. Alexa had launched its commuting feature for cars months earlier, but because of the complexity of public transit systems, there was no public transit version. When the team decided to launch in Japan, though, the need for public transit became unavoidable. For those who aren't intimately familiar with transit systems: Japan has the most expansive and successful train system in the world, with more than 30 billion riders a year, and was home to 45 of the world's 51 busiest train stations (at least as of 2013).
I flew to Japan to conduct foundational field research to help the team understand the public transit needs of Japanese users. We did home visits across Tokyo, which forced the team both to ride the trains and to talk to geographically distributed residents.
I presented my research to a small group of stakeholders, including the Director and VP. Before we had even gotten through the findings, the VP announced:
"You only talked to 10 participants. This doesn't provide any level of certainty."
In one fell swoop, he dismissed the entire premise of the research as conjecture based on the sample size. There was no way, in that moment, to prove that qualitative research was effective at gathering these sorts of contextual insights. This wasn't the first time I had received this critique from him, and it wouldn't be the last. Fortunately, he wasn't the one making the decision, and the product manager who was had gone on the trip with me.
Based on this research, I helped the team narrow the MVP scope from many possible features down to a single critical use case, with additional features to be added later. Almost exactly a year after the foundational research, we started getting behavioral data back from beta testers about public transit requests. It turned out my research had not only accurately predicted the most popular request, but also the relative frequency of one type of request compared to another.
As a result, the team had prioritized the right feature to build well before the behavioral data came through to confirm it. I left the team not long after, so I never heard how successful the feature was once it launched to the general public and to other countries.
I couldn't believe how definitive the evidence was. This validation was exactly the reminder I needed that what I was doing was valuable.