
Listening with our eyes and ears (remotely)

Many of us still experience a sense of discomfort or unease when communicating online, yet remote user testing remains an effective way to gain actionable insight.



Challenges of communication in remote user research

Many of us think of language as simply ‘text’ or ‘speech’. But in reality, language is thought to have evolved, and to be used and learned, largely through face-to-face communication. During these interactions, we use a variety of cues, specifically multimodal cues, which go beyond speech and include gestures, vocal intonation, eye gaze, and many more. This is why you may have been able to find your way around a foreign country without speaking the language: gestures, pointing and appropriate vocal intonation let you communicate with a helpful local.

Therefore, it’s unsurprising that a lack of these cues can make communication much more difficult. Think back to your last Zoom call or MS Teams meeting – were there moments where you kept talking over each other and ended up awkwardly apologising for it? Or have you perhaps found it difficult to gauge when it’s your turn to talk because everyone else in the meeting had turned their cameras off?

Despite having worked remotely for some time now, many of us still experience a sense of discomfort or unease when communicating online. Arguably, this is because many conversational elements, such as timely multimodal cues present in face-to-face interactions, are disrupted or missing.

COVID-19 has forced us, as user researchers, to conduct more usability tests and in-depth interviews remotely. This can make picking up on the multimodal cues present in normal conversation extremely difficult, which in turn can make communication and research sessions feel a lot more effortful and disjointed.

In the case of turn-taking – where one person speaks after another finishes their ‘turn’ – studies have shown that transmission delays (i.e., connection issues) of even just 0.57 seconds have a significant impact on our ability to take turns. Furthermore, these delays can disrupt our ability to initiate or carry out ‘repair’ – which is when we flag trouble in conversation by saying things like “huh?” or raising our eyebrows in a quizzical way to show that we didn’t understand what the other person just said. In normal conversation, these cues often encourage the other person to self-repair, meaning that they correct themselves (e.g., by rewording what they had said) without needing to be explicitly prompted. These mechanisms normally allow for seamless conversation. In remote settings, however, difficulties with each of these conversational elements can make communication noticeably more troublesome.

On top of this, the lack of context about where each participant is being tested can be a challenge. In a remote setting, the researcher has little or no control over the participant’s environment, and may not even know where they are until the meeting starts. Furthermore, the lack of standardisation in the settings in which testing takes place is not ideal, as more confounding variables are present (e.g., a sudden drilling noise) that could have a profound effect on the participant’s performance and concentration.

Moreover, conducting remote research with participants who are less technologically literate can draw our attention away from these multimodal cues and towards the more administrative parts of the research, such as giving clear verbal instructions for the participant to start sharing their screen. As a result, researchers dedicate more time and energy during preparation (e.g., writing up and sending ‘to do’ guides on how to use remote testing platforms) and testing (e.g., giving verbal guidance on how to interact with the prototype) so that the various technical and digital aspects of the research can run as smoothly as possible.

A combination of these issues often leaves participants and researchers frustrated and increases cognitive load – a term from psychology describing the limit to how much information the brain can store and process at once, beyond which it starts to struggle. In remote research, this is reflected in how participants, particularly older participants, often apologise for “being bad with tech”, feel like they are “wasting [the researcher’s] time”, or say that the researcher “would’ve been better off testing someone else”. In turn, researchers spend more time reassuring participants and letting them know that their difficult experiences are valid.

Therefore, perhaps a solution to all this is to first acknowledge that communication simply is more difficult in remote settings, and that it is perfectly fine to feel overwhelmed and/or fatigued. We should actively communicate to our participants that it is normal to feel a bit frustrated, demonstrate empathy and be patient when things go wrong.

In addition, perhaps we could extend our idea of multimodal cues to include more digital ones. For instance, we could try to be more sensitive to the participant’s cursor movements (thinking of them as the participant’s hand movements) when they interact with a prototype. This can help researchers ask better probing questions (e.g., “I noticed that you have been hovering over X – what’s your thought process behind that?”) and better uncover the customer’s pain points, wants and needs. Finally, we should keep challenging ourselves to find better ways to conduct remote research. It’s easy to get complacent and fall back on tried and trusted methods, but making the effort to explore new options every now and then, such as new user research platforms, could prove extremely helpful.
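To make the idea of digital multimodal cues a little more concrete, here is a minimal TypeScript sketch of how a web prototype could log how long the cursor dwells over areas of interest, so the moderator has something specific to probe. The data-prototype-element attribute and the 1.5-second threshold are illustrative assumptions, not features of any particular testing platform.

```typescript
// Minimal sketch: report cursor "dwell" over labelled prototype areas so a
// moderator can ask about them. Selector and threshold are assumptions.
const dwellThresholdMs = 1500; // only surface hovers longer than ~1.5 seconds
const hoverStart = new Map<HTMLElement, number>();

document.querySelectorAll<HTMLElement>("[data-prototype-element]").forEach((el) => {
  el.addEventListener("mouseenter", () => {
    hoverStart.set(el, performance.now());
  });

  el.addEventListener("mouseleave", () => {
    const start = hoverStart.get(el);
    if (start === undefined) return;
    hoverStart.delete(el);
    const dwellMs = performance.now() - start;
    if (dwellMs >= dwellThresholdMs) {
      // In a live session this could feed the moderator's notes; here we just log it.
      console.log(`Hovered over "${el.dataset.prototypeElement}" for ${Math.round(dwellMs)} ms`);
    }
  });
});
```

Even a simple log like this can prompt follow-up questions such as “what were you expecting to find there?” without interrupting the participant’s flow.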

With the vaccine rollout, many of us are looking forward to resuming usability tests in person, especially to try out the new state-of-the-art labs offered by EY Seren. Although there are limitations to remote user testing, it should be emphasised that it is still an effective way to gain actionable insight and design better human-centred solutions. In fact, various remote user testing technologies allow researchers to gather more sophisticated insights. Facial coding, for instance, lets researchers track eye movements and facial expressions in real time using the participant’s webcam, noting instances such as the eyes widening or the lips tightening while the participant interacts with a product or service, and giving further insight into the user’s emotional experience. More generally, remote user research can also be more cost-effective (e.g., lower travel costs) and provide opportunities to test a wider audience, particularly geographically.
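As a rough illustration of how such webcam-based tools work, the sketch below shows the capture loop they typically build on: request camera access, then hand each frame to an analysis step. The analyseFrame function here is a hypothetical placeholder – real facial coding relies on a dedicated computer-vision model or vendor SDK, which is not shown.

```typescript
// Minimal sketch of a webcam capture loop, assuming it runs in a browser page
// with a <video> element. analyseFrame is a hypothetical placeholder; real
// facial coding would pass each frame to a computer-vision model or SDK.
async function startCaptureLoop(video: HTMLVideoElement): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();

  const analyseFrame = (source: HTMLVideoElement): void => {
    // Placeholder: a real implementation would estimate gaze and expressions here.
    console.log(`Frame at ${Math.round(source.currentTime * 1000)} ms ready for analysis`);
  };

  const loop = (): void => {
    analyseFrame(video);
    requestAnimationFrame(loop);
  };
  requestAnimationFrame(loop);
}
```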

Moving forward, I am sure we will continue to develop better ways of remote testing and utilise it alongside traditional in-person methods, as we shift towards more sustainable and hybrid ways of working.


References: