UX DESIGN

The Fidelity Dilemma of Usability Analysis

What works best: a lo-fi, mid-fi, or hi-fi prototype?

Arijit
Muzli - Design Inspiration
7 min read · Aug 3, 2019


Tyler Tate, in his article “Concerning Fidelity in Design”, writes:

Design methods are not mutually exclusive. Rather, each method exists on a continuum of fidelity, ranging from low fidelity sketches to high fidelity HTML prototypes. Each method is well-suited for a particular phase of the design process, with one level of fidelity often leading into the next.

As most digital UX designers will agree, the fidelity of our designs is ever evolving: it begins with primitive paper sketches, moves to wireframes and digital mockups, and gradually transforms into full-fledged visual design screens, often accompanied by interactive prototypes ready for development. But in the early stages, we must adopt a design fidelity for our usability studies that is efficient, effective, and relevant.

To temper your expectations right at the outset: there is no single way, and no one specific prototyping fidelity, for usability analysis. This story, however, highlights how certain methodologies and circumstances helped me explore new realms of usability testing and decide the right prototyping fidelity for the user group in focus.

Context

Usability testing is a widely accepted technique in the industry for validating the solutions designers come up with, though it is arguably less practised in most new-age start-ups owing to business deadlines, costs, participant compensation, and often plain unawareness. But research like this shows that the ROI of usability analysis is considerably high, and it deserves priority in our product cycles. While there are several techniques for conducting a usability test, two broad umbrellas are the formative test (performed early in the design process to gain quick insights, often termed a low-fidelity test) and the summative test (performed at a later stage to capture metrics, often termed a high-fidelity test).

The Norm

It is a widely accepted and propagated practice to undertake formative tests with low-fidelity screens, mostly paper prototypes and lo-fi wireframes, which are perfect for validating early-stage ideas and concepts. The approach goes: create a task for the user, ask them to think aloud, observe their behaviour, and look for clear errors, basically to understand what works, what doesn’t, and why. It is an iterative process, repeated several times during the early phases of design.

As is often evident, low-fidelity prototypes provide a quick and easy way to gather user feedback on our initial ideas and flows, and they prevent users from getting distracted by the detailed visual motifs of high-fidelity prototypes. But is there an exception to this theory?

The Outliers

During my internship at Microsoft, I was part of their Cloud & Enterprise (CnE) business unit. While designing for the SaaS product Visual Studio Team Services (VSTS), since rebranded as Azure DevOps, I was introduced to Rapid Iterative Testing & Evaluation, commonly known as the RITE method, which is an adopted design methodology in VSTS.

The RITE method is essentially an iterative usability method in which participants are taken through the same perform-task and think-aloud protocol. One major difference is that instead of executing the entire study plan and then gathering the findings to suggest improvements, we iterate on the designs as soon as an issue is discovered by two or three participants. This way, we can quickly test and get feedback on new solutions and ideas. However, the biggest difference we incorporated as part of the RITE method was:

Rather than testing low-fidelity wireframes, we always created high-fidelity mockups, at times with a simple clickable prototype.

Why?

SaaS and ERP-based products, like VSTS in our context, are data intensive, and the amount of information any user is subject to is often overwhelming. It is therefore paramount to create an experience that seems “real” and “relevant” to those users. While this makes the process more labour intensive, it results in rich, insightful feedback. Also, with a well-defined design system in place, executing hi-fidelity mockups proved somewhat easier.

Here, then, was an exception that challenged the generic notion that formative tests are conducted with low-fidelity prototypes: the high-fidelity prototypes of our RITE-method approach proved genuinely beneficial for my project.

Fast forward a few months: I started working at Ola Cabs, India’s leading ride-hailing aggregator, as part of their premier product, Ola Play, launched in 2016 as the world’s first connected-car platform for the ride-sharing economy. It mainly comprises two terminals: the driver’s console in the front seat and the passenger’s console in the back seat. We were responsible for defining a design system and, accordingly, designing the interface for the driver’s terminal, the Driver eXperience Console (DXC). With no predefined process, methodology, or usability lab, it was a fresh beginning, with the pre-defined console dimensions as the only constraint.

“Design for driving” is a very special domain. In an ideal scenario, usability analysis is best conducted in a simulation lab as part of experiential prototyping with prototypes of some fidelity, but with no such practice or facilities at our disposal, we naturally adopted “The Norm”, i.e., formative tests with low-fidelity prototypes, run under an iterative process like the RITE method. While card sorting and tree tests were easily conducted in the early phases of defining the structure, our formative test scenarios proved challenging. When we interacted with the drivers in a non-driving environment and presented our low-fidelity mockups, it proved quite difficult to obtain any valuable feedback, and we eventually ended up leading them in the conversations, which is always an undesirable situation in user studies. It was evident that the drivers were unable to connect with the product.

We quickly shifted to mid-fidelity mockups, not clickable prototypes yet, and when we presented these mocks to the drivers, there was a drastic improvement in their feedback. We started iterating, but the feedback, though useful, still felt insufficient to iterate effectively, and with no design system in place the process proved quite inefficient. Meanwhile, it was important to undertake usability testing in driving environments as well, to better understand the flows.

After further research on automotive UX and “design for driving”, drawing on existing leaders like Android Auto and Apple CarPlay, we gained some clarity on UI principles for this domain. We eventually decided on high-fidelity mockups, not necessarily with finalised colours, typography, and iconography, but with more detailed designs and clickable prototypes.

Why?

A car is a critical and sensitive environment, especially for a driver. Driving is very different from being at home or in an office, and any distraction at the wheel can have serious consequences for the driver, the passenger, and the car. Safety is paramount, so it was important to conduct our usability tests with the utmost care, especially when we were out in the real world, driving on the roads. It was time to create an experience that would seem “real” and “relevant” to the drivers, so that it would yield richer, more insightful feedback. We followed the usual perform-task and think-aloud protocol, conducting our usability analysis with our hi-fi clickable prototypes and recording the drivers’ feedback. At times, even our hypotheses about certain driver behaviours were overturned during the tests. Long story short, it proved quite fruitful, and our iterations were faster too.

Yet another exception that challenged the generic notion that formative tests are conducted with low-fidelity prototypes. While I discovered another realm where high-fidelity prototypes are crucial, the experience also highlighted the importance of design guidelines for any product: especially under an iterative usability method where hi-fi mockups are essential, predefined design guidelines help you execute efficiently and effectively.

Obviously, formative usability tests will continue to be conducted primarily with low-fidelity prototypes, and there is no strong reason why they shouldn’t be. But I’d like to reiterate: there is no single way, and no one specific prototyping fidelity, for usability analysis. It all comes back to every UXer’s favourite answer:

Well, it depends.😅

Nonetheless, there are surely a few domains that make for interesting exceptions, and I feel we are often not exposed to such challenges in most consumer-facing products or in our small academic UX stints. I have worked on a few other products since, but I haven’t faced a new fidelity dilemma during usability analysis.

One of my exceptions was a web UI and the other a tablet app UI. Does this happen with mobile app UIs as well? If the more experienced UX folks here have worked in such outlier or exceptional domains and faced this during usability testing, please feel free to highlight them; it would surely be interesting to hear about such scenarios. I hope to explore more of the UX challenges of domains that come with a fidelity exception. More importantly, though, I hope the industry keeps enough bandwidth for usability analysis in its product development cycles, because that is turning into the bigger challenge today.

Don’t hesitate to share your feedback and views, or shoot me an email here. Cheers! 🙌
