Progressive Place

Tuesday, June 15, 2010

Extending the Handheld Device

Core concepts here:
Handheld devices are weakest at helping the user maintain a sense of context for the small amount of information shown on the device’s small screen. A helpful adaptation would be a larger display space outside the device; say, a printed illustration showing the layout of the options in a particular app. Until we become familiar with the conceptual space inside an app, our memories are not good at supplying that context on their own.
The device screens could provide more contextual information by using visual layering to simulate a 3-dimensional image space. Color coding and varying depth of shade could further enhance the precision of the display. For whatever reason, pocket video games are the only mobile devices that have gone this route.

Background: see http://audiknow.com, and the paper that spawned the idea, at http://home.comcast.net/~writebrain1/WB0502/330_Audiknow.htm.

Blended media could greatly enhance the experience of using a handheld device. In a research paper I wrote back in 2001, I pointed out that the small screen of a mobile device doesn’t provide the context a person needs to make full sense of what appears on it. In situations where the content is known and fairly predictable, the handheld could be laid on a printed chart that physically represents that content: a site map that shows the whole subject domain, or any conceptual space laid out like a mind map, with each sub-topic/page node and relationship briefly described and illustrated.

Viewing a visual application on a handheld’s screen is like viewing real life through a paper towel tube or a rolled-up newspaper. Set the device down on a sheet of paper on a clipboard, or even a table, and your visual space is limited only by the size of the paper. If the paper shows the whole “map” of the app you’re working with, you can see how the detail on the screen fits into the bigger picture. To see the detail at a different point in the app, just find that spot on the large chart and tell the device to go to that point, where you’ll see a zoomed-in view. The proximity sensor in the iPhone, iPod touch, and iPad could provide this positioning automatically.
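To make that concrete, here is a rough Python sketch of the lookup involved: sense where the device sits on the chart, find the nearest node in the app’s map, and jump the screen there. The node names, coordinates, and function names are all made up for illustration, and the position source (proximity sensor or otherwise) is simply assumed.

# Hypothetical sketch: map the device's sensed (x, y) position on a printed
# chart to the nearest node in the app's "site map", then jump the screen there.
from dataclasses import dataclass

@dataclass
class ChartNode:
    name: str      # sub-topic / page as labeled on the printed chart
    x: float       # node position on the chart, in centimeters
    y: float

CHART = [
    ChartNode("Overview", 2.0, 2.0),
    ChartNode("Schedule", 10.5, 2.0),
    ChartNode("Contacts", 2.0, 12.0),
    ChartNode("Settings", 10.5, 12.0),
]

def nearest_node(x, y):
    """Return the chart node closest to where the device was set down."""
    return min(CHART, key=lambda n: (n.x - x) ** 2 + (n.y - y) ** 2)

def go_to(x, y):
    """Stand-in for navigation: the real device would open the zoomed-in view."""
    node = nearest_node(x, y)
    return "Zooming in on '%s'" % node.name

# The user places the handheld near the upper-right region of the chart:
print(go_to(9.8, 2.3))   # -> Zooming in on 'Schedule'

The mapping itself is trivial; the interesting engineering would be in getting a reliable (x, y) reading from the device sitting on the paper.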

This was a primitive form of blended learning that would have been exceptionally useful in all kinds of applications. I put it aside because it didn’t apply to my paper, which focused on mobile digital audio. By the time I finished the paper, I’d forgotten the visual idea, and was working to make a business of the more familiar audio part. It would have been simple to create a pilot, but it didn’t happen.

Another spin on the idea was to extend the image by creating a 3-D visualization within the handheld, and using it on a drafting table with a set of roll-up charts. The absence in handhelds of so obvious an enhancement as the 3-D image continues to amaze me. People are very skilled at thinking in 3-D perspective. Treating the handheld device screen as a flat surface makes as much sense as declaring that all cities should have only one-story buildings. In the size-constrained land of the handheld, 3-D should be the rule, not the exception.

Incidentally, I also continue to be appalled that the creators of mind-mapping software don’t use depth, shading, and perceived near-far proximity to enhance the visual and conceptual richness of their products.
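If I were prototyping that, the depth cue could be as simple as a function from a node’s distance-from-focus to its size and shade. A toy Python sketch follows; the scale and gray factors are invented for illustration and don’t reflect any real mind-mapping product.

# Rough sketch: give mind-map nodes a sense of depth by shrinking and fading
# them as they sit farther from the focused node.
def depth_style(depth, base_size=24.0):
    """Return (font_size, gray_level) for a node `depth` hops from the focus.

    Nearer nodes are larger and darker; farther ones shrink and fade,
    simulating near-far proximity on a flat screen.
    """
    scale = 0.8 ** depth                 # each hop away shrinks the node by 20%
    gray = min(0.15 + 0.2 * depth, 0.8)  # and washes it out toward light gray
    return base_size * scale, gray

for d in range(4):
    size, gray = depth_style(d)
    print("depth %d: font %.1fpt, gray %.2f" % (d, size, gray))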

It occurred to me 11 years ago that my kids’ Game Boys were vastly better at representing a complex domain than my PDA. Once you learned the terrain, a game was graphically navigable and used several metaphors to simulate a 3-D physical space. While it’s sad that I didn’t act on that observation back then, knowing how my mind works, it probably would have kept me from finishing my research paper and getting the damn degree.

Separate topics:
The Golden-i is a Bluetooth video headset, mainly intended as a mobile interface to a computer or the Internet. Voice activation makes it a totally hands-free workstation. However, adapting it for eye-monitoring would let the user (a rough sketch follows this list):
1. Use blinks as button clicks, to navigate among layers in a 3-D space. Since single blinks are mostly automatic, deliberate signals would have to start at a double blink, plus additional signals such as close-and-open, etc.
2. Use quick left-right-up-down eye motions to move an oversize image correspondingly, as with an iPhone, or like moving objects around on a physical magnet board.
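Here is a toy Python sketch of how that eye-control logic might look, assuming the headset streams simple timestamped blink events and gaze-motion events. None of this reflects the Golden-i’s actual interface; it only illustrates the two behaviors above.

# Toy sketch: a double blink counts as a "click", quick gaze motion pans the image.
DOUBLE_BLINK_WINDOW = 0.4   # seconds: two blinks this close together count as a click
PAN_GAIN = 3.0              # how far the oversize image moves per unit of gaze motion

class EyeController:
    def __init__(self):
        self.last_blink = None
        self.view_x = 0.0   # current pan offset of the oversize image
        self.view_y = 0.0

    def on_blink(self, t):
        """Return True when a deliberate double blink (a 'click') is seen."""
        is_click = self.last_blink is not None and (t - self.last_blink) <= DOUBLE_BLINK_WINDOW
        self.last_blink = None if is_click else t
        return is_click

    def on_gaze(self, dx, dy):
        """Quick left-right-up-down eye motion pans the image, magnet-board style."""
        self.view_x += dx * PAN_GAIN
        self.view_y += dy * PAN_GAIN

eyes = EyeController()
eyes.on_blink(0.00)            # first blink: probably automatic, no action
print(eyes.on_blink(0.25))     # second blink inside the window -> True (a click)
eyes.on_gaze(-1.0, 0.0)        # glance left: the image slides left
print(eyes.view_x, eyes.view_y)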

Mobile technology in the work setting can be a real boon to green initiatives, in that it saves travel, time, office space, carting around of materials, and more. Also, I’ve noticed that most large-scale sustainability initiatives depend so heavily on automation that they could say “IT Inside”, the way most PCs say “Intel Inside”.
