A framework for user research analysis that works for everyone on a team


We often hear that “user research is a team sport”. But if some of the team are new to digital ways of working, how do you make it all make sense for everyone?

By Siobhan Harris

Siobhan is a user researcher who has worked in private sector and government services, with a focus on research that is inclusive for both users and teams.


User research on multi-disciplinary teams - make it make sense!

When you’re designing services, the best teams are truly multidisciplinary. This means that you might have people from roles such as policy, operations and business management working alongside designers, researchers and developers.

The benefit of this team set-up is that you get access to so much breadth of experience and knowledge about all aspects of the service.

But it can cause problems. If you’re new to Agile, it can be very hard to adapt to working in this way. User-centred design processes that we use every day can seem risky and strange if you’ve not experienced them before.

This is especially true of user research. As a user researcher on this kind of team, you need to be able to explain what you’re doing in a way that makes sense to everyone, so you can bring everyone along with you.

User research methods can be hard to understand

User research methods - even well-established ways of working - can be baffling for some stakeholders. You’re not writing an official report on your findings? You’re not getting that approved by some senior leader somewhere? You’ve already started iterating? Wait, what - you ONLY ASKED 8 PEOPLE????

This is also true of playing back research findings.

As an agile UCD team made up of a researcher, content designer, and interaction designer, we’re happy to work on the fly; it’s what we’re used to. Playing back our user research findings can be scrappy - we’re used to picking out the key details and focusing on making progress fast.

Other team members work in a more regulated environment. They probably have their day job workloads to prioritise and might not have the time to sit in every design meeting.

Because of this, there are vital process steps that they might miss. While it’s all clear in our UCD heads, we can’t assume that our usual ways of playing back user research are enough to make it clear for everyone.

Discovery can feel like chaos

In a recent discovery project, I had the privilege of working with a team made up of people from all sorts of roles in the organisation.

Our first round of research was testing a concept solution with 3 groups of users. We gained a huge amount of insight and the research findings were played back to the team in an informal way. We were eager to use what we’d learned to start doing the exciting bit - building our first prototype to test in the next round.

But things quickly started to fragment on the team. Our methods were being questioned and work started to slow down.

We put a lot of effort into documenting our process, clearly showing decisions, processes and designs. However, for our stakeholders, there was not a clear enough link between what we had tested, why we’d tested it and what we wanted to learn next, based on the research findings.

We all knew what we were doing, but not everyone was clear on why or how. We needed to make the research findings work harder to add more value for everyone in the team.

Making it make sense

To deal with this problem, we got everyone on the team along to a ‘checkpoint’ session. This session was intended to recap what we had learned, and agree on what we wanted to learn next.

For this initial session, we used a Miro board that everyone had access to. We had a screengrab of each screen we had tested, with post-its for the research findings for each screen, and different coloured post-its for research recommendations.

We invited the whole team to add post-its for each research recommendation, with notes and suggestions about what to learn next. Working through this board allowed us to explain our research findings very clearly, and, based on the findings, agree as a team what we would test next.

Crucially, it also allowed us to properly access the deep knowledge and experience of our team members who work in policy, operations and other parts of the organisation.

It felt that, for the first time in a while, the whole team had a clear idea about why and how we were iterating and designing new things to test.

A new framework for playing back user research

The output of that initial checkpoint session became the beginning of a research playback framework with a clearly defined process.

Before each round of research, we agree:

  • what we want to explore in the next round - for example, a new way to help users make a decision

  • what we would need to learn to explore it, so we could then design and test - for example, any data on how users currently make that decision

Then, in order to play back the research, we set up a Miro board showing:

  • what we tested in this round of research, shown as ‘how might we’ statements

  • research learnings for each ‘how might we’, grouped in themes

  • recommendations from the research for next steps

Here’s an example of what the board looks like:

This format presents research findings in a structure where anyone, from the team or elsewhere, can look at the board and understand what we learned in research, how we got to that learning and why. It allows non-UCD team members to jump onto a research playback call without having seen the full UCD process or workshops, and it still makes sense.

Tying it all together

The beauty of everything being in one place is being able to build as you go, developing the bigger picture as you progress.

In this case, you’re building a view of:

  • the starting point of what you wanted to learn

  • what you designed to learn this

  • insight you gained from users

  • recommendations for what to learn next

This is working well for us

The benefits of using this framework have been evident. It’s clear, it’s informative, and the value of the research findings is obvious when they paint a cohesive picture of what you’ve learned and why.

It makes sense for the whole team; those who are time-poor can dip into research findings in their own time, without needing guidance.

Making sure this framework is applied to all rounds of research going forward encourages familiarity with the research process. It also helps with stakeholder buy-in - demonstrating the importance of doing research and the value in user insight.

For our team, we find that we’re working together faster and have more trust and understanding. Everyone has a voice, no matter their role, and we really feel like one team.


