No, gestures aren’t a failed usability experiment.

Pineapple
Muzli - Design Inspiration
9 min read · Apr 25, 2021


Project Soli

Many of us believe that smartphones have accelerated the speed of our interactions with the world of technology. But if you really think about it, we've moved from typing with all ten fingers on a computer keyboard to using just two fingers on our smartphone screens. I won't dwell on this bandwidth issue, as Elon Musk likes to call it, or on his futuristic plans with Neuralink. Instead, this article will focus on a very specific topic: gestures. Let's explore how gesture interactions shape our digital experiences.

For the purpose of this article, we can divide gestures into two categories, moving gradually from the first to the second.

A. Touchscreen Gestures
B. Touchless Gestures

A. Touchscreen Gestures

When we transitioned to touch interfaces, the biggest concern was that interactions apart from taps did not have any signifiers.

Signifiers indicate the availability of an interaction by providing strong clues.

Since gestures have no visual representation, they rely on the user's memory and learning. Hence, they had to be designed to feel natural to the user. The most common gestures that emerged were double-tap, pinch, spread, drag, and flick.
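
Under the hood, most of these gestures are distinguished by just a few measurements: how far the pointer moved, how fast, and how recently the last tap happened. Here is a minimal sketch of such a classifier; the type names and thresholds are illustrative assumptions, not any platform's real values.

```typescript
// Hypothetical sketch: classifying a single-pointer gesture from its
// start and end samples. Thresholds are illustrative, not platform values.
interface PointerSample {
  x: number;
  y: number;
  t: number; // timestamp in milliseconds
}

type Gesture = "tap" | "double-tap" | "drag" | "flick";

const TAP_RADIUS = 10;      // max px of movement still counted as a tap
const DOUBLE_TAP_GAP = 300; // max ms between two taps for a double-tap
const FLICK_SPEED = 0.5;    // px per ms separating a flick from a drag

function classify(
  start: PointerSample,
  end: PointerSample,
  previousTapAt?: number // timestamp of the last completed tap, if any
): Gesture {
  const dist = Math.hypot(end.x - start.x, end.y - start.y);
  const dt = Math.max(end.t - start.t, 1);

  if (dist < TAP_RADIUS) {
    // A tap shortly after another tap becomes a double-tap.
    return previousTapAt !== undefined &&
      start.t - previousTapAt < DOUBLE_TAP_GAP
      ? "double-tap"
      : "tap";
  }
  // Fast movement reads as a flick, slower movement as a drag.
  return dist / dt >= FLICK_SPEED ? "flick" : "drag";
}
```

Pinch and spread would follow the same pattern, only comparing the distance between two pointers over time instead of a single pointer's travel.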

With visual cues and tutorials, these gestures can easily be taught to users. When introducing a new gesture, techniques like push notifications, walkthroughs, and animated or haptic feedback on success can onboard users to the new idea and eventually turn it into an almost unforgettable habit.

Credits: Paul van Oijen

Let’s take a look at some of the most widely adopted gestures that we see today.

1. Pull to Refresh

The pull-to-refresh gesture is patented by Twitter. It first appeared in Tweetie, the Twitter client developed by Loren Brichter and later acquired by Twitter. Although he initially planned a refresh mechanism that followed Apple's platform conventions, Brichter's work resulted in an interaction that was new to Apple's platform at the time, and it is now one of the most common gestures we see.
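
The mechanics are simple: the list rubber-bands as you drag past the top, and a refresh fires only if the pull crosses a threshold before you release. A minimal sketch, with purely illustrative damping and threshold values:

```typescript
// Hypothetical sketch of pull-to-refresh logic. The damping curve and
// threshold are illustrative, not any real app's values.
const REFRESH_THRESHOLD = 80; // px the content must be pulled past the top

// Damp the raw drag so the list "resists" the further you pull.
function pullOffset(dragDistance: number): number {
  return Math.sqrt(Math.max(dragDistance, 0)) * 6;
}

// On release, decide whether to trigger a refresh or snap back.
function shouldRefresh(dragDistance: number): boolean {
  return pullOffset(dragDistance) >= REFRESH_THRESHOLD;
}
```

The damping is part of what makes the gesture discoverable: the growing resistance, plus a spinner that fills as you pull, signals that something will happen at the threshold.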

2. Swipe to Delete

This gesture was introduced by Apple in the iOS Mail app. Plenty of apps today use versions of it, from removing an item from a list to exposing an entire set of contextual actions. The ones we probably use the most are WhatsApp's and Instagram's swipe to reply.

3. Double Tap to Like

Every Instagram post has a heart button to like it. If someone asked you today how you learned the alternative of double-tapping the post to do the same, chances are you'd say you discovered it by accident or a friend told you.

4. Tinder Cards

Tinder's UI is one of the most interesting uses of a card-based approach. Swipe gestures on the cards make the interaction quick for the user, and almost addictive. This gamification has made the swipe gesture so popular that "swipe right" has become slang for liking someone on Tinder.

All of these examples show that users can be more than willing to adopt gestures when they are designed well.

Apple Home Screen Swipe Up

As the race began to make smartphone bezels thinner and maximize screen real estate, Apple decided to remove the classic home button from the iPhone X. To reach the Home Screen, users now swipe up from the bottom edge of the screen. Apple also added other gestural functionality, replacing some gestures users had previously learned. The "home line" that appears at the bottom of the screen, where the home button used to be, serves as a reminder to swipe up.
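
What distinguishes this from an ordinary scroll is that the swipe must begin at the very edge of the screen. A sketch of that check, with made-up threshold values rather than Apple's actual ones:

```typescript
// Hypothetical sketch: recognizing a "go home" swipe that must start in a
// narrow strip at the bottom edge and travel upward far enough.
// Thresholds are illustrative assumptions.
interface Touch {
  x: number;
  y: number; // y grows downward; 0 is the top of the screen
}

const EDGE_ZONE = 20;   // px strip at the bottom where the swipe must start
const MIN_TRAVEL = 100; // px of upward travel required

function isHomeSwipe(start: Touch, end: Touch, screenHeight: number): boolean {
  const startsAtEdge = start.y >= screenHeight - EDGE_ZONE;
  const travelsUp = start.y - end.y >= MIN_TRAVEL;
  return startsAtEdge && travelsUp;
}
```

Anchoring the gesture to the edge is what lets it coexist with in-app scrolling: the same motion started mid-screen is delivered to the app instead.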


This bold move by Apple taught the industry that gestures can be experimented with. Gestures need not be just "quicker alternatives" for enthusiasts and power users; they can be the only interaction available for an action, like reaching the home screen of an iPhone.

Speaking of enthusiast gestures, some Android phones have had really cool ones for a long time: drawing letters on the lock screen, like O to open the camera or V to turn on the flashlight; squeezing the phone to launch the voice assistant; answering calls by raising the phone to your ear; and so on. These gestures have absolutely no signifiers, and they need none. The users who want these features won't find them difficult to learn.

B. Touchless Gestures

Up until now, we have seen how gestures make a wide range of functionality possible on touch screens even when there isn't much space to represent those interactions visually. On smaller devices like watches, there is less space even for the touch gestures themselves; that is where touchless gestures can play a huge role.

This is Google's Project Soli, whose latest implementation appeared in the Pixel 4 smartphone. It creates an imaginary bubble around the phone within which it can detect the presence, absence, or movement of a hand with high precision using radar.

You are the only interface you need.

Screen real estate is not the only problem touchless gestures solve. We've been interacting with technology in this 3D world mainly through 2D interfaces. There have been several attempts in the past to detect gestures with cameras, but even when the hands are within the camera's range under good lighting, cameras provide comparatively poor positional accuracy.

3D touchless gestures open up a whole new dimension of ways for users to interact with devices, irrespective of type or size. These gestures blend our physical and digital realities.

Carsten Schwesig, the design lead at Project Soli, has beautifully explained this idea —

We arrived at this idea of virtual tools because we recognized that there are certain archetypes of controls like a volume knob or a physical slider. Imagine a button between your thumb and index finger. The button is not there but pressing it is a very clear action and there’s an actual physical haptic feedback that occurs as you perform that action. The hand can both embody a virtual tool, and it can also be on that virtual tool at the same time. So if we can recognize an action we have an interesting direction for interfacing with technology.

Google Project Soli

There are endless creative ways in which these gestures could be of great benefit. Let's take a look at a few.

Quicker Actions

Working at your desk when your phone rings? No problem: you could simply wave left to decline or wave right to answer the call, without reaching for the phone.

Home Automation

Every task we can automate with a voice assistant or smart home device can be complemented with gestures, in an ecosystem flexible to the user's choice of input. For example, you could change your air conditioner's temperature with your voice assistant when you are away, and by simply waving your hand in front of the unit when you are in the room. Along similar lines, physical remotes for devices like televisions would become redundant. Gone will be the times when you'd get annoyed over a lost remote.

The COVID Push

Touchless tech has already been adopted in places like faucets and hand dryers in mall washrooms. The coronavirus has only made this care for hygiene more important. Just as the virus has accelerated the adoption of other future tech, we will see the same with contactless technology. Airports and ATMs are areas where it could be given a widespread push.

Driving

Other interesting and important use cases arise when you are driving. Want to change music tracks or turn up the volume? You could simply wave to change tracks, or rotate your fingers just as you would a physical circular knob, to the right to increase the volume and to the left to decrease it. Gestures could also set window heights. Interactions like these could reduce driver distraction and the need for physical attention, improving road safety.
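
The virtual-knob idea maps naturally onto code: measure the rotation of the hand and convert it into volume steps, clockwise up and counter-clockwise down. A sketch under assumed values (the degrees-per-step resolution is invented for illustration):

```typescript
// Hypothetical sketch: mapping an air "knob" rotation to a volume change.
// Positive degrees = clockwise = louder. Resolution is an assumption.
const DEGREES_PER_STEP = 15; // one volume step per 15 degrees of rotation

function adjustVolume(current: number, rotationDegrees: number): number {
  const steps = Math.trunc(rotationDegrees / DEGREES_PER_STEP);
  // Clamp the result to a 0-100 volume range.
  return Math.min(100, Math.max(0, current + steps));
}
```

Quantizing into discrete steps, rather than tracking the angle continuously, gives the gesture the detent-like feel of a real knob and makes it tolerant of small hand tremors.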

Personal Computers

For us designers in particular, it would be a delight to zoom in and out of our Adobe XD, Figma, and Sketch artboards by making pinching and spreading air gestures in front of our large screens. Touch screen displays for computers have existed for a long time, but they do not provide a good usability experience: it is not convenient to constantly switch between reaching the screen to touch and the keyboard to type. Air gestures solve this issue. Vicara, a startup based in Bangalore, India, has pretty much achieved this with their wearable device, Kai.

Kai Gesture Controller

Industry Level Uses

In a video titled "The Future of Design" uploaded in 2013, Elon Musk shows how he could manipulate the perspective and viewing angles of a SpaceX rocket engine's 3D design with air gestures, and later even manipulate the components and the design itself.

Elon Musk, The Future of Design.

Yes, this is from 2013, eight years ago!

Role of Design

The example above shows that this tech is not just a sci-fi fantasy; it is already being used in various industries. So I believe gestures aren't a failed usability stunt. They are essential to push human-machine interaction to greater heights, and we should expect this tech in our everyday devices soon.

To make all of this possible, design will play a very important role in ensuring that our next generation of gesture-based interactions feels human and natural. Consumer use cases cannot rely on heavy training the way industrial use cases can; users should not have a hard time catching up with this tech.

The introduction of this tech to the masses is something we at Pineapple studio are very excited about and are really looking forward to. We have always liked to think differently, and the stories of the designers behind these revolutionary gestures inspire us to experiment and test rather than look for usability excuses.

Unless you do it, you don’t know.

— Ivan Poupyrev, Project Soli Founder

If you like what you read, do clap for us and check out the articles recommended by us below :)

This is how Netflix, Snapchat, and Microsoft break UX Design principles

The future of social media experience

The business value of UX/UI designs

Want to say hi? Drop us a line at hello@pineapplestudio.in

Check out our work and our website Pineapple


We design holistic digital experiences that enrich human lives and help businesses grow. Let’s connect at hello@pineapple.design