Establishing an effective user research practice can prove to be a huge unlock in the maturation of a product design team and the larger product organization. It not only helps you evaluate the effectiveness and desirability of new features you launch, or tell you what’s not working with the current experience, but it can also point you towards fertile ground in developing new services and solutions that help users in meaningful ways.
At Drizly we set up a very effective user research practice that supported and influenced every part of the organization. It took a lot of effort from a lot of people - not just the research team. Ultimately we transformed how the organization planned and built features from beginning to end, and research became a valuable tool in quarterly planning and in defining our North Star.
The obstacles
I see two obstacles to establishing a robust user research practice. The first and biggest obstacle to overcome is the set of misconceptions others in the org have about user research - what it is, how it’s conducted and (most importantly) how much time is needed to do it effectively.
Most people have a concept of user research. They’ve witnessed user interviews. They’ve heard of prototype testing. They may know that there are quantitative studies and qualitative studies. What most people don’t fully grasp is that there are many types of user research tasks that can be conducted. The trick is understanding the type of question the team wants to ask (or the thing they hope to learn) and then applying the most relevant task to get at that answer.
A good way to understand this is the NCredible Framework, which helps identify what you’re trying to learn; that, in turn, determines the types of tasks best suited to answer those questions. Most product teams are not familiar with this, so they piecemeal their understanding of user research, not appreciating the methods and rigor that go into effective research.
Another aspect of this first obstacle is the perception of how much time user research takes. There is typically an assumption that it takes a lot of time - meanwhile, squads are on hold and falling behind. Better to rush forward than pause or delay, so the thinking goes. These time assumptions are usually incorrect (you’d be surprised how fast an effective research team can move) and are also a result of not planning up front to include user research.
The second obstacle I find is on the research/design team itself. It’s important to understand where a company is in its maturation and the pressures it’s under to get work done. Introducing a new method (user research) into a process that’s under immense pressure to deliver is asking a lot; PMs and engineers need to see the manifest value of a new process before they adopt it. Quick wins and positive experiences are critical to adoption and should be the primary focus early on. Too often I’ve seen research get bogged down in process and poorly communicated timelines. That leads to frustration on the part of product teams, and they will not buy in.
I want to stress here that while it’s important for researchers to conduct their studies quickly and deliver quick wins, I am not advocating for slipshod work or cutting corners. Researchers should never compromise results with improper methodology.
Building User Research at Drizly
When I joined Drizly, we did not have a formal user research practice. Designers would conduct their own ad hoc research when they could find the time - maybe an unmoderated online prototype test or occasional retailer interviews - but no dedicated researcher with extensive training and expertise. To Drizly’s credit, they had budgeted for hiring a user researcher, a role I would take responsibility for filling.
To begin, I needed to define the role and its responsibilities. This was an important hire and ideally they would be responsible for setting up the foundation for the research discipline. I identified the areas that this hire would need to cover, three different hats they would need to wear:
User Researcher. Obvious, but important. They would need to be an excellent user researcher, versed in all the different methods and approaches, able to identify the right approach to whatever situation. They would be the only researcher at first. They needed to be able to do the job.
Educator. With many teams working on different aspects of the Drizly platform - one that serves three distinct audiences - the reality is that one researcher wasn’t going to be enough. But if the researcher could teach others to conduct research, impart the basics and advise on methods and tactics based on the situation, then we could extend research’s reach. We would enlist others to conduct research, starting with the rest of the design team and then extending to PMs, engineers, etc.
Promoter. With a new - and important - function being developed, we needed to make the company aware of it. We wanted other teams to get excited about the prospect of leveraging user research, incorporating it into their product team’s process. We needed someone to highlight all of the good and impactful work we would be doing.
When starting with a single researcher, it’s important that they can effectively serve each of these roles. I was blessed to find an amazing candidate, Kat Rutledge, a talented researcher who checked all three boxes. She is empathetic and entrepreneurial in her approach to building our research practice, and an incredible partner.
Iterative learning
At the time we had a directive from executive leadership to make meaningful improvements to our overly complicated PDP (product detail page). We already had multiple stages of development planned, and I wanted to get user insights into the project. But things were already planned and moving, and I got significant pushback when I attempted to conduct research. We could not slow down.
We were still in the design phase for several components, with a couple of weeks before engineers would begin working on them. I knew we could get initial studies conducted in that timeframe, so I proposed two-tracking it: design would continue while research ran in parallel. Learnings could influence designs in real time as they came in - no need for delay - and as we learned more, we could adjust designs and builds on the fly.
This changed people’s perception of user research: it was not a drag on timelines. And as learnings informed design, we began to realize the efficiency of building what research backed, as opposed to guessing and learning after launch.
Answering seemingly unanswerable questions
A feature idea that frequently came up at Drizly was in-store pickup. It was considered by executive and product leadership; we would discuss it in meetings, debate pros and cons, work up estimates, and then punt on making a decision. We repeated this cycle several times. It seemed to be an unanswerable question: “would users like and use this feature?” The pro side was that once built, it was an additional fulfillment option for users - it could be a passive win. The con side was that it would require a not insignificant amount of front-end work and a lot of training for retailers to set it up in stores. It never got prioritized, but it was always being considered.
I should say at this point I was for building the feature. My assumption was that in cities where people live closer to retailers, they would be more likely to use it.
More than my assumptions, though, this was something I felt we could leverage user research for. Kat developed a study leveraging the Kano model, which evaluates users’ excitement about or interest in various features. The study looked at different last-mile features. In-store pickup did not rate highly: consumers were “indifferent” to it, meaning it would not move the needle.
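For readers unfamiliar with the method: a Kano study asks two questions per feature - a “functional” one (“How would you feel if you could pick up your order in store?”) and a “dysfunctional” one (“How would you feel if you could not?”) - and maps each answer pair through a standard evaluation table to a category such as Attractive, Must-be, or Indifferent. Here’s a minimal sketch of that classification step; the response data is invented for illustration, not from the Drizly study:

```python
from collections import Counter

# Standard Kano evaluation table: (functional, dysfunctional) answer -> category.
# Answers: "like", "must-be", "neutral", "live-with", "dislike"
# Categories: A=Attractive, O=One-dimensional (performance), M=Must-be,
#             I=Indifferent, R=Reverse, Q=Questionable
KANO_TABLE = {
    ("like", "like"): "Q", ("like", "must-be"): "A", ("like", "neutral"): "A",
    ("like", "live-with"): "A", ("like", "dislike"): "O",
    ("must-be", "like"): "R", ("must-be", "must-be"): "I", ("must-be", "neutral"): "I",
    ("must-be", "live-with"): "I", ("must-be", "dislike"): "M",
    ("neutral", "like"): "R", ("neutral", "must-be"): "I", ("neutral", "neutral"): "I",
    ("neutral", "live-with"): "I", ("neutral", "dislike"): "M",
    ("live-with", "like"): "R", ("live-with", "must-be"): "I", ("live-with", "neutral"): "I",
    ("live-with", "live-with"): "I", ("live-with", "dislike"): "M",
    ("dislike", "like"): "R", ("dislike", "must-be"): "R", ("dislike", "neutral"): "R",
    ("dislike", "live-with"): "R", ("dislike", "dislike"): "Q",
}

def classify_feature(responses):
    """Classify a feature from (functional, dysfunctional) answer pairs,
    taking the most common category across respondents."""
    counts = Counter(KANO_TABLE[pair] for pair in responses)
    return counts.most_common(1)[0][0]

# Hypothetical respondent answers for an in-store pickup question:
pickup = [("neutral", "neutral"), ("must-be", "live-with"), ("like", "neutral"),
          ("neutral", "live-with"), ("live-with", "neutral")]
print(classify_feature(pickup))  # "I" -> Indifferent: users don't care either way
```

An “Indifferent” result like the one above is exactly the signal that tells a team a feature won’t move the needle, no matter how reasonable it sounds in planning meetings.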
What did generate enthusiasm, though, was live order tracking. This was a feature we had not prioritized because of the complexity inherent in Drizly’s operating model: deliveries are not carried out by Drizly drivers - retailers are required to provide the drivers - so Drizly has almost no visibility into an order being delivered. This created a number of challenges to doing it well. Nonetheless, the study gave us the confidence to set aside in-store pickup and pick up live order tracking. We built and launched it a few months later to great success.
Building research momentum
As part of Kat’s onboarding, I had her set up regular check-ins with the PMs. I met with leaders throughout the company. We were gathering information about work planned or needing to be scoped, looking for opportunities to set up a research study. It was slow going but we built a backlog of opportunities. We also presented findings at Product team meetings and company town halls. We set up learning shareouts where people were invited to listen to the readout of a recent study and ask questions. Kat put together a monthly newsletter of research activities and built a repository of study readouts.
This had a flywheel effect: the more research was shared and discussed, the more people became interested and brought potential research opportunities to us. We quickly went from an outreach approach - can we conduct research? - to an intake process - fielding incoming requests. Calendars were created. We hired another researcher. We invested in a research platform (Usertesting.com).
Research for roadmap planning
One thing every product manager I’ve worked with has in common is not enough time. Not enough time to think about future experiences. Not enough time to work out what a new and improved version of the flow they’re working on might look like. Not enough time to think more than 10 steps down the road. They’re swamped with the myriad challenges and issues that pop up day to day on the squad, with keeping things on track sprint to sprint, and so on. Before they realize it, it’s time for another quarterly planning session, and it’s a scramble to build out the roadmap.
We saw this as an opportunity for research to lean in. Rather than have research be purely reactive - how is this feature working, or how could the current experience improve just a bit - we wanted research to run far ahead of product: look further down the road and explore bigger opportunities. As part of our weekly check-ins with PMs, we started asking about quarterly planning and beyond. What would you like to explore if you had the time to think about it? What answers would help you most moving into the next quarter?
We began to tackle these open-ended questions, developing research studies to go deeper into them. The purpose was to enter planning with good signal on what users might want. This is a big and important shift: instead of committing to a feature in a quarter and then conducting research to validate the idea within that same quarter, we would do exploratory research ahead of quarterly planning so that we felt good about the features we wanted to plan for.
Final thoughts
Building a robust user research practice makes a huge difference in a product org. Integrating research into the fold - making it part of the process rather than keeping it separate - can lead to huge success.
In starting out, it’s important to build relationships and demonstrate real business impact. We made sure research ran quickly and in parallel with ongoing work so it did not become a blocker. We worked with product managers and engineers to understand their concerns and struggles and looked for ways we could help. As we built trust and demonstrated results, we worked proactively to include research as part of the planning process to the point where we could run ahead of the squads, and ultimately inform roadmap planning.