Touch Monitors Are On My Wishlist

I think I want touch screen monitors for my work and home desktops. I might be one of the few who still uses a desktop, but my quirky thoughts on the coming obsolescence of laptops as a form factor can be saved for another time.

The appeal of touch monitors is not to replace my mouse or keyboard, but rather to do the things they aren’t terribly great at. For example, when I want to reposition my cursor from my browser on one monitor to a specific location in my text editor on another, neither the mouse nor the keyboard can do this with the efficiency a touch screen enables. To illustrate what I mean, it helps to detail the steps a bit.

With the mouse I first have to find the pointer. This is normally not so difficult, and there are even tools to help, but it is a task nonetheless. It gets a little worse if the window currently in focus doesn’t contain the pointer. Usually I wiggle the mouse or my finger on the trackpad and catch the movement with my eye; with a dual-monitor setup this is even less trivial. Step two is navigating the pointer to the new location. Step three is clicking to place the cursor. Step four is putting my hands back on the keyboard to start typing.
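As a rough illustration of why the pointing step is the expensive one, Fitts’s law (a classic HCI result) estimates pointing time from the distance to a target and its size. A minimal TypeScript sketch, where the constants a and b are assumed for illustration rather than measured for any real device:

```typescript
// Fitts's law: pointing time grows with the index of difficulty,
// ID = log2(distance / targetWidth + 1), measured in bits.
// The constants a and b below are illustrative assumptions, not measurements.
const a = 0.2; // fixed overhead in seconds (assumed)
const b = 0.1; // seconds per bit of difficulty (assumed)

function fittsTime(distancePx: number, targetWidthPx: number): number {
  const indexOfDifficulty = Math.log2(distancePx / targetWidthPx + 1);
  return a + b * indexOfDifficulty;
}

// A text caret is a tiny target (~10 px) two monitors away (~3000 px);
// compare with a large nearby button (100 px wide, 300 px away).
console.log(fittsTime(3000, 10).toFixed(2)); // ~1.02 s
console.log(fittsTime(300, 100).toFixed(2)); // ~0.40 s
```

Under these assumed constants, the far-away, tiny target costs more than twice as long to hit, which matches how cross-monitor pointing feels in practice.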

With a keyboard there are so many shortcuts and paths to accomplish the task that whatever steps I chose, someone could say, ‘well my [secret hidden] way is much simpler’ and probably be right. But a typical person like myself, in a typical scenario, might use alt+tab to pick a window from the list of open ones, pressing it repeatedly (or using the arrow keys) until landing on the right one. In Ubuntu this switcher shows icons, so I have to remember which icon belongs to the window I want (the text editor), and there are two more little steps if I happen to have two text editor windows open and need to pick the right one. Once the right window is in focus, I hunt down the blinking cursor with my eye and move it to the desired location with the arrow keys (or, in my case, nimble well-trained fingers using key combinations that are faster than arrow keys alone).

With a touchscreen I could simply reach up and touch the place on the screen where I want the cursor to be, put my hand back on the keyboard, and start typing.
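To put rough numbers on the three paths, the Keystroke-Level Model from HCI assigns standard times to primitive operators (keystroke, point, hand movement, mental step). The sketch below is a back-of-the-envelope comparison; the operator sequences are my own assumptions about the steps described above, not measurements:

```typescript
// Keystroke-Level Model operator times (Card, Moran & Newell):
const KLM = {
  K: 0.2,  // press a key or button
  P: 1.1,  // point at a target with a pointing device
  H: 0.4,  // move a hand between keyboard and pointing device
  M: 1.35, // mental preparation
};

type Op = keyof typeof KLM;

function taskTime(ops: Op[]): number {
  return ops.reduce((total, op) => total + KLM[op], 0);
}

// The operator sequences below are my assumptions about the steps
// described in the post, not measured traces.
const mousePath: Op[] = ['H', 'M', 'P', 'K', 'H'];       // find, point, click, return
const keyboardPath: Op[] = ['M', 'K', 'K', 'K', 'M',
                            'K', 'K', 'K', 'K'];         // alt+tab, then arrows
const touchPath: Op[] = ['H', 'P', 'H'];                 // reach, tap, return

console.log('mouse:   ', taskTime(mousePath).toFixed(2), 's');    // 3.45 s
console.log('keyboard:', taskTime(keyboardPath).toFixed(2), 's'); // 4.10 s
console.log('touch:   ', taskTime(touchPath).toFixed(2), 's');    // 1.90 s
```

Under these admittedly hand-wavy assumptions the touch path comes out well ahead, mostly because it skips the find-the-pointer and window-switching steps entirely.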

And there are probably countless combinations of all three interaction avenues for accomplishing the task, depending on whether it involves multiple windows of the same application, minimized or hidden windows, scrolling to window content that is out of view, tabs within the application, windows on other virtual desktops, etc.

Anyway, that was a long, boring bit about HCI, but the point of it is that I think touch is here to stay because it is an intuitive and useful way of interacting with computers. It has been overplayed so much that I kinda hate bringing up how effortlessly my kids use a tablet, but seriously, they do, and observing them on it is part of why I am thinking about this. Personally I don’t think touch interaction will replace keyboards, mice, or trackpads on platforms where those already dominate; touch screens will merely complement them very nicely. And as cool as other interaction methods such as eye tracking, voice recognition, or body-gesture readers are conceptually, for the near future at least I see those as practical only for niche cases, whereas touch seems beneficial in many more scenarios.

My takeaway from all these thoughts is probably nothing revelatory or novel. It is simply that it no longer matters what sort of device you are designing your application GUI for: you need to consider if, where, and how to make it touch friendly, because even if your particular platform doesn’t have touch capabilities now, the odds that it will in the future are increasing rapidly.
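For what that means in practice on the web (my corner of it, anyway), the Pointer Events API is one concrete way to handle mouse, touch, and pen input with a single code path. A minimal sketch; the element id and the handler logic are made up for illustration:

```typescript
// Pointer Events fold mouse, touch, and pen into one event model, so a
// single listener covers all three. The element id "editor" and the
// handler body are illustrative assumptions, not a real application.
const target = document.getElementById('editor');

if (target) {
  // Stop the browser's built-in touch gestures (scroll/zoom) from
  // competing with the handler on this element.
  target.style.touchAction = 'none';

  target.addEventListener('pointerdown', (e: PointerEvent) => {
    // e.pointerType is "mouse", "touch", or "pen".
    if (e.pointerType === 'touch') {
      // Fingers are less precise than a mouse pointer, so a touch-aware
      // UI might enlarge hit targets or snap the caret to the nearest word.
      console.log(`touch at ${e.clientX}, ${e.clientY}`);
    } else {
      console.log(`${e.pointerType} at ${e.clientX}, ${e.clientY}`);
    }
  });
}
```

The nice part of this approach is that a GUI written this way degrades gracefully: on a platform with no touch screen the same listener simply only ever sees mouse events.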

Published by Aaron
Web developer/designer with a strong preference for open source software

One thought on “Touch Monitors Are On My Wishlist”

  1. I have long dreamed (more or less ever since I was let down by the “Nintendo Power Glove”) of creating what I generally refer to as a “glove interface”: essentially sensors around each segment of at least the thumb, index, and middle fingers and the palm, giving you a fully functional virtual representation of your hand in any computer interface you can get it working with. Highly programmable gestures would be the key: “grabbing” by pinching with your index finger and thumb, rubbing the two fingers together in one direction for one action and the other direction for another, tapping your thumb against your index or middle finger a different number of times to perform different immediate actions, and maybe another gesture to bring up a “virtual keyboard” for you to type on. And if you are following my train of thought here, with a “heads-up display” (an evolution beyond Google Glass, perhaps with some luck embedded into a contact lens in 20-30 years) you could literally control the HUD with your hand in your pocket while walking down the street.
    Recently a friend pointed out one or two projects that appear to be moving in that general direction (one on Kickstarter, a “gesture ring”, and an India-based start-up working on a neat project relevant to this end result as well). Unfortunately I’m neither a mechanical engineer nor a programmer, and not very ambitious either, so most of my ideas exist in my mind alone (and in the minds of anyone I can adequately explain them to, assuming they don’t fall asleep before I finish).
    I too believe that while the keyboard and mouse are very effective and will be with us for a long time, alternative interface technologies are key to finding new and interesting ways to interact more efficiently with our computer systems. With this “glove interface” one could have the advantages of a touch interface without actually having to reach up to the screen, perhaps while sitting on a couch, or in some cases without needing a physical screen at all (in the case of an integrated HUD in glasses, a contact lens, or a cybernetic eye).
    To take it slightly further, the “glove interface” technology could later be used in games or simulators, perhaps with a pair of gloves made of a “smart” fabric or electroactive polymer that changes shape when a charge is applied, so that when you pick something up in your game or simulation it actually feels as though you are holding something (visually represented in your HUD or on screen). This would certainly be a nice step past a “Wiimote”.
    I mainly use Linux as my primary workstation OS, running Gnome-Fallback with Compiz to give me a lot more control over the desktop via keyboard shortcuts. One extremely handy feature is that you can scroll whatever your mouse is hovering over without bringing that window into focus, allowing you to type in one window and scroll in another without switching focus between them. Little optimizations like this can go a long way, depending on your preferred workflow. To achieve a similar effect under Windows I have in the past used a utility called “KatMouse”.
    I hope I was able to convey some of these ideas well enough to follow.
