The first lesson for a user experience designer is to go out and talk to your users. But what if your users aren’t easy to access? Or the subject is a person’s mental health? In the last few years, there has been a lot of awareness-raising around mental health, encouraging people to talk and be open about their experiences. However, it can still be difficult to find participants willing to talk about their experience of getting help with their mental health.
The world of healthcare is slow-moving, and with good reason: any healthcare product needs rigorous research and must comply with standards for safety, security and quality. This is somewhat at odds with the world of agile software development, which preaches early testing and iterative development. To quote Sara Holoubek, founder and CEO of Luminary Labs:
“It’s one thing to move fast and break things with a consumer internet app. It’s another thing when tech is used to improve human life.”
For a design team working in an agile process, continually testing with users is not always possible when the context is healthcare. Even more so when the context is mental healthcare. To combat these challenges, we spread our bets and get user feedback in multiple formats.
The simplest way to get input from users is through a survey. We can add these to the platform at various stages of the user journey, although they come with some caveats. These surveys are completely optional and non-intrusive: a user can simply close the notification and never see it again. Surveys also don’t give the same context and insight as an interview, and these limitations are why surveys form only one part of the strategy.
However, surveys do allow us to hear many more voices and can give us a sense of what the big issues are. For example, a question asking for open feedback drew a lot of comments about the app login experience, particularly a bug around saving login details. This helped us prioritise work on that part of the app, as it was a big frustration for users.
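As a rough illustration of how such an optional, dismissible prompt can work, here is a minimal TypeScript sketch. The types, storage key and function names are hypothetical, chosen for the example; this is not SilverCloud’s actual implementation.

```typescript
// Hypothetical sketch: surface a survey prompt for the user's current
// journey stage, and never show it again once dismissed.

type JourneyStage = "onboarding" | "mid-program" | "completion";

interface SurveyPrompt {
  id: string;
  stage: JourneyStage;
  question: string;
}

// Illustrative storage key; dismissals persist across sessions.
const DISMISSED_KEY = "dismissedSurveys";

function getDismissed(): Set<string> {
  const raw = localStorage.getItem(DISMISSED_KEY);
  return new Set(raw ? (JSON.parse(raw) as string[]) : []);
}

// Closing the notification records the survey id permanently.
function dismissSurvey(id: string): void {
  const dismissed = getDismissed();
  dismissed.add(id);
  localStorage.setItem(DISMISSED_KEY, JSON.stringify([...dismissed]));
}

// Only show surveys that match the current stage and were never dismissed.
function surveysToShow(all: SurveyPrompt[], stage: JourneyStage): SurveyPrompt[] {
  const dismissed = getDismissed();
  return all.filter((s) => s.stage === stage && !dismissed.has(s.id));
}
```

The important design choice here is that a dismissal is a one-way door: the user is never nagged with the same survey twice, which keeps the prompts genuinely non-intrusive.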
It’s not just the design team that wants to talk to our users; the clinical research team, the product team, the marketing team, and the service provider also want feedback on how the product is working. Often, we need to combine efforts: if one team is able to get an interview or a survey out, it is cross-checked with the other teams to see if it can answer multiple questions. One way we do this is through a questionnaire asked when a user has completed the program. It asks about their experience of the program, whether they would recommend it to others (Net Promoter Score), and whether they consent to being contacted for follow-up research. This information is useful to the service provider, the marketing team, and the design team. We now have a bank of users who have agreed to be contacted for a follow-up interview, which can be done by any team. Even though these users have consented, we still follow a strict protocol for how to contact them, with templates that had to be approved for clinical safety and data security. This ensures that users understand the process and are comfortable with sharing their experiences.
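For reference, the Net Promoter Score itself is a simple calculation: respondents rate, on a 0–10 scale, how likely they are to recommend the product, and the score is the percentage of promoters (9–10) minus the percentage of detractors (0–6). The sketch below shows that calculation alongside filtering on the consent flag; the response shape is hypothetical, not our actual questionnaire schema.

```typescript
// Hypothetical shape of one completion-questionnaire response.
interface CompletionResponse {
  userId: string;
  recommendScore: number;      // 0-10 "would you recommend?" rating
  consentToFollowUp: boolean;  // may we contact you for an interview?
}

// Standard NPS: % promoters (9-10) minus % detractors (0-6), so the
// result ranges from -100 to +100.
function netPromoterScore(responses: CompletionResponse[]): number {
  const n = responses.length;
  if (n === 0) return 0;
  const promoters = responses.filter((r) => r.recommendScore >= 9).length;
  const detractors = responses.filter((r) => r.recommendScore <= 6).length;
  return Math.round(((promoters - detractors) / n) * 100);
}

// The "bank" of potential interviewees: only users who explicitly consented.
function followUpCandidates(responses: CompletionResponse[]): string[] {
  return responses.filter((r) => r.consentToFollowUp).map((r) => r.userId);
}
```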
One bias of this approach is that the users who agree to an interview are often the ones who have had a positive experience with the program. Users who had a negative experience or dropped out early could provide the most useful feedback about what isn’t working, but it can be extremely difficult to talk to this cohort: if a user hasn’t found the program useful and didn’t continue to use it, they are unlikely to be invested in giving feedback. This is an ongoing challenge that we are aware of but haven’t fully overcome. Another drawback of this process is that the interviews happen after the user has completed the program, so we don’t get early impressions or in-the-moment feedback; it is reflective rather than immediate.
The main benefit of these interviews is the depth of conversation they allow. In some recent interviews, we were able to talk to users about their readiness not just for a mental health intervention, but for a digital one. An interview lets a researcher put questions about such a sensitive subject across in the right tone, which can result in rich conversations. Topics such as readiness for change help us to understand the broader picture. We can see from the data that someone signed up, but it’s all but impossible to understand their thoughts and feelings about it from the data alone. Interviews give us a way to get an insight into this.
Clinical research trials are one of the strengths of SilverCloud. The platform and the content are based on years of clinical trials, and there are always new trials starting to evaluate new programs, features or service settings. These can be a great opportunity for the design team to get access to user feedback through the research, surveys or interviews done as part of the trial. As these have been set up with ethical approval and participant consent, they give us access to a wealth of data. While the purpose of a trial may not be what the design team is researching, it can help us answer questions. For example, as part of a naturalistic randomised controlled trial (RCT) on the use of SilverCloud in the NHS, a set of surveys was added that looked at users’ expectations, experience, and usage of the platform across three time points. The outcomes of these surveys are analysed in this paper.
Sometimes there are also smaller pilots, which allow us to follow more of a co-creation process. For the creation of some programs, we were able to partner with a service to set up a trial. This allowed us to meet users face to face to talk about their ideas for the specific program, create a minimum viable product (MVP) and get in-depth feedback on it. The key to making this process work in mental healthcare is setting up a good partnership with a service provider. We can then work alongside the provider to recruit users for the co-creation process while maintaining high standards of clinical safety and data protection.
A good example of this process was the programs created for parents of anxious children and teens. We created an MVP version and partnered with Northpoint Wellbeing in the UK to test it with a group of parents. Feedback was collected over six weeks of program use and concluded with an in-depth interview. This feedback led to many changes and customisations, the biggest of which was splitting the program into separate child and teen versions.
Healthcare products and services have two primary users: the end-user or patient, and the clinician. For a patient to use a product, it must usually be recommended or referred by a clinician. The first port of call in healthcare research is therefore often the clinician: they are also a user, they influence how the patient understands the product, and they understand the typical mindset of patients with low mood or anxiety. Talking to clinicians gives the design team the surrounding context of how and when a user is introduced to the product. Though it is not first-hand feedback, it helps us understand the mindset and framing a user might have when first interacting with the platform.
User research in mental healthcare involves balancing a number of standards. On the product side, we want to ensure we are designing the right thing and keeping the user at the centre. On the clinical side, we need to conduct the research ethically, ensuring a high standard of clinical safety for all users. Using multiple research methods keeps this balance right and informs the design of the product: surveys give us breadth, interviews give us depth, trials give us rigour and service providers give us context. By combining these methods, we can get the full picture of the experience.