Integrating Gestures Into Mobile Applications


In the oft-referenced Minority Report, Tom Cruise flips through crime files composed of complex data sets and videos in a fully gesture-driven, immersive computing experience. Marvel's Iron Man films echo this, with Tony Stark manipulating 3-D models through mere gestures and voice control.

A more real-world example of the power of gesture-based interfaces could be found first in the explosive adoption of the Wii, and today in the burgeoning popularity of the Kinect interface for the Xbox. So why the current focus on gestures as a method of interacting with everything from smartphones and tablets to video games and all manner of industrial computing?

As computing environments become more complex, interactions are becoming more difficult to translate into keystrokes or button combinations; gestures can bridge this gap. Consider the countless videos on the Internet of babies interacting with iPads and iPhones before they can even talk, demonstrating the simplicity and near-universal intelligibility of gestures as an interaction medium. However, while gestures are very easy to consume as part of an application interface, they are extremely difficult to integrate into applications. As device screens and the size of the interaction space evolve, gestures will need to transform with them. Apple demonstrated this successfully by adding three-finger and four-finger swipe gestures to its operating system.
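To see why gestures are harder to build than to use, consider what even a basic single-finger swipe recognizer must reason about: travel distance, which axis dominates, and timing. The sketch below is purely illustrative; the function name and thresholds are assumptions for this example, not any platform's API, and real platforms tune these values per device and screen size.

```typescript
// Minimal single-finger swipe classifier (illustrative sketch).
// MIN_DISTANCE and MAX_DURATION are assumed values, not platform constants.

type Point = { x: number; y: number; t: number }; // position in px, time in ms

type Swipe = "left" | "right" | "up" | "down" | null;

const MIN_DISTANCE = 50;   // px the finger must travel to count as a swipe
const MAX_DURATION = 500;  // ms after which the motion is a drag, not a swipe

function classifySwipe(start: Point, end: Point): Swipe {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  const dt = end.t - start.t;
  if (dt > MAX_DURATION) return null;           // too slow: treat as a drag
  if (Math.abs(dx) >= Math.abs(dy)) {           // horizontal axis dominates
    if (Math.abs(dx) < MIN_DISTANCE) return null;
    return dx > 0 ? "right" : "left";
  }
  if (Math.abs(dy) < MIN_DISTANCE) return null; // vertical but too short
  return dy > 0 ? "down" : "up";
}

// A quick right-to-left flick across the screen:
console.log(classifySwipe({ x: 300, y: 100, t: 0 }, { x: 120, y: 110, t: 200 })); // "left"
```

Multi-finger variants like Apple's three- and four-finger swipes add further state tracking on top of this, which is part of why integration is so much harder than consumption.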

This all points to requirements that vary by:

  • Devices
  • Screen sizes
  • Operating systems

For example, a 7-inch tablet is primarily held in one hand. The non-dominant hand holds the tablet and is almost never involved in gestures, while the dominant hand often sweeps across the body; a right-handed user will rarely swipe right to left. A larger tablet (10 or 12 inches), by contrast, typically lies on a table, tray, or lap, and both hands are often involved in the interaction. Finally, operating systems have their own gestures that signify specific actions, such as pinch-to-zoom, double-tap, or expand. Overall, we expect gestures to be used increasingly across all manner of mobile applications and computing interfaces. Trying to enforce a one-size-fits-all gesture interface will lead to frustration for the application user.
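Some of these operating-system gestures reduce to simple geometry. Pinch-to-zoom, for instance, is commonly modeled as the ratio of the current distance between two fingers to their initial distance. The sketch below shows that idea only; the names and shapes here are assumptions for illustration, not any OS's gesture API.

```typescript
// Geometry behind pinch-to-zoom (illustrative, not a platform API):
// the zoom factor is current finger spread divided by initial spread.

type TouchPoint = { x: number; y: number };

// Distance between two touch points.
function spread(a: TouchPoint, b: TouchPoint): number {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

function pinchScale(start: [TouchPoint, TouchPoint], now: [TouchPoint, TouchPoint]): number {
  return spread(now[0], now[1]) / spread(start[0], start[1]);
}

// Fingers start 100 px apart and spread to 200 px apart: a 2x zoom.
console.log(pinchScale(
  [{ x: 0, y: 0 }, { x: 100, y: 0 }],
  [{ x: -50, y: 0 }, { x: 150, y: 0 }],
)); // 2
```

A value above 1 zooms in, below 1 zooms out; the same two-finger tracking, read for rotation instead of distance, underlies rotate gestures as well.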

If you are in the early days of mobile application exploration, or just beginning to build mobile apps for your customers and employees, look to gestures to provide more immersive interfaces. Start slowly and understand how the differences manifest across devices, device classes, and operating systems. Find applications that use gestures well and study those interfaces closely. Finally, as gestures move beyond touchscreens, look to apply the lessons learned from touchscreens to additional computing paradigms.
