Organic Interaction Technologies: From Stone to Skin

May 31st, 2008

by Jun Rekimoto

Introduction
There is no doubt that the mouse is the most successful input device in the history of computing. However, it is also true that the mouse is not the ultimate input device, because it does not fully exploit the sophisticated manipulation skills of humans. With a mouse, we can only control a single (x, y) position at a time, with additional button press (on/off) actions. Feedback for the input is normally only available as visual information. In natural manipulation, on the other hand, we can easily control multiple points and continuous parameters (e.g., pressure) at the same time. Feedback is not limited to sight; it often involves touch, sound, temperature, or even air movement. Feedback itself is also more tightly unified with input than in traditional GUIs, where input and output are often spatially separated. The body part used for interaction is not limited to the fingers; the palm, arm, or even the entire body can be used. Several recent approaches have attempted to bring these human manipulation skills to human-computer interaction. We use the term “organic” or “organic interaction” for these kinds of interfaces, because they more closely resemble natural human-physical or human-human interactions, such as shaking hands or making gestures.

Table 1. Comparison between traditional GUI and organic interaction.

Table 1 summarizes the features of organic interactions by comparing them with the features of traditional user interfaces. A number of novel interaction methods using various sensing technologies have been introduced, but, until recently, they were mainly used for special purposes, such as interactive art. Myron Krueger’s “Videoplace” is one of the earliest examples of such systems. A video camera is used to capture a user’s body silhouette, and the full-body shape, not just finger positions, can be used as an input to the computer system. In the very near future, as the cost of sensing and computation becomes negligible, such organic interaction technologies could become viable alternatives to traditional mouse-based interactions. In this article, we look at selected examples of these technologies and discuss future research topics for organic user interfaces.

Figure 1. The “HoloWall” interactive surface system [5]. A camera with an infrared (IR) filter captures the image on the back surface of a rear-projection sheet, which is illuminated by IR lights installed behind the sheet. (a) Sensor configuration, (b) examples of multi-touch interaction, (c) using physical instruments for interaction, (d) captured image for recognition, and (e) an interactive table using hand shapes as inputs.

Our first example is HoloWall [5], a camera-based interactive wall/table system. HoloWall uses a combination of an infrared (IR) camera and an array of IR lights installed behind the wall (Figure 1). The camera captures the images on the back surface of the wall, which is illuminated by the IR lights. An IR-blocking filter built into the LCD projector ensures that the camera is not affected by the projected image.

Since the rear-projection panel is semi-opaque and diffusive, the user’s hand shape in front of the screen is not visible to the camera when the hand is far from the screen. When the user moves a finger close enough to the screen, the finger reflects IR light and thus becomes visible to the camera. With a simple image processing technique such as frame subtraction, the finger shape can easily be separated from the background.
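As a rough illustration of this step, the following Python sketch segments fingers from the IR camera image by subtracting a stored background frame and thresholding the result; the array handling and the threshold value are assumptions, not values from the HoloWall implementation.

```python
# Minimal sketch of HoloWall-style finger segmentation (NumPy arrays assumed).
# 'background' is a frame captured with nothing near the screen; 'frame' is the
# current IR camera image. The threshold value is only a tuning guess.
import numpy as np

def segment_fingers(frame: np.ndarray, background: np.ndarray,
                    threshold: int = 30) -> np.ndarray:
    """Return a binary mask of IR-reflecting regions close to the surface."""
    # Objects near the diffusive sheet reflect more IR than the stored background,
    # so the frame difference highlights finger and hand contact regions.
    diff = frame.astype(np.int16) - background.astype(np.int16)
    return (diff > threshold).astype(np.uint8)
```

A connected-component pass over the resulting mask would then separate individual fingers, hands, or objects.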

Shape as Input
Using this sensing principle, HoloWall distinguishes multiple hand and finger contact points, which enables typical multi-touch interactions, such as zooming with two hands (Figure 1(b)). Moreover, it can also recognize the human hand, arm, or body, as well as physical objects such as rods and visual patterns such as two-dimensional barcodes attached to objects (Figure 1(c,d)).

Figure 1(c) shows two users playing a ping-pong game using HoloWall, as demonstrated at SIGGRAPH in 1998. Although the system was originally designed for hand and body gestures, some participants decided to use other physical objects as instruments for interaction, because any light-reflecting object can be recognized by the system. This kind of dynamic expandability is also an interesting feature of organic user interfaces. Note that a sensing principle similar to HoloWall’s is also used in other interactive-surface systems, such as the Microsoft Surface. “Perceptive Pixel” [2] is another optical multi-touch input system, though it is based on a different sensing principle.

Figure 2. SmartSkin, an interactive surface system based on capacitive sensing [7]: (a) a collaborative table system allowing multi-hand, multi-person interaction, (b) object movement using arm motion, (c) and (d) results of sensing showing hand shape and multiple finger points, and (e) as-rigid-as-possible shape manipulation [4] featuring SmartSkin multi-touch interaction.

SmartSkin (Figure 2) is a multi-touch interactive surface system based on capacitive sensing [7]. It uses a grid-shaped antenna to measure hand and finger proximity. The antenna consists of transmitter and receiver electrodes (copper wires): the vertical wires are transmitter electrodes, and the horizontal wires are receiver electrodes. When one of the transmitters is excited by a wave signal (typically of several hundred kilohertz), the receiver receives this wave signal because each crossing point (a transmitter/receiver pair) acts as a capacitor. The magnitude of the received signal is proportional to the frequency and voltage of the transmitted signal, as well as to the capacitance between the two electrodes. When a conductive, grounded object approaches a crossing point, it capacitively couples to the electrodes and drains the wave signal. As a result, the received signal amplitude becomes weak. By measuring this effect, it is possible to detect the proximity of a conductive object, such as a human hand. Since hand detection is done by means of capacitive sensing, all the necessary sensing elements can be completely embedded in the surface. In contrast to camera-based systems, the SmartSkin sensor is not affected by changes in environmental lighting conditions. The surface is also not limited to a flat one; the surfaces of any objects, including furniture and robots, can potentially provide such interactivity, acting like the skin of a living creature.
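The scan loop below is a minimal sketch of how such a grid could be read out; the functions excite_transmitter and read_receivers stand in for hardware access and are hypothetical, as is the grid size.

```python
# Sketch of one SmartSkin scan cycle (hardware access functions are hypothetical).
import numpy as np

NUM_TX = 8   # number of vertical transmitter wires (an assumption)

def scan_grid(excite_transmitter, read_receivers, baseline: np.ndarray) -> np.ndarray:
    """Return a proximity map: larger values mean a hand is closer to that crossing."""
    amplitudes = np.zeros_like(baseline)
    for tx in range(NUM_TX):
        excite_transmitter(tx)             # drive one transmitter with the wave signal
        amplitudes[tx] = read_receivers()  # received amplitude on every receiver wire
    # A grounded hand drains the signal, so received amplitudes drop; subtracting
    # from the no-hand baseline turns that drop into a positive proximity value.
    return np.clip(baseline - amplitudes, 0.0, None)
```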

Proximity sensing
When the user’s hand is placed within 5 to 10 cm of the table surface, the system recognizes the effect of the capacitance change. A potential field is created when the hand is close to the table surface. To accurately determine the hand position, which lies at the peak of the potential field, a bicubic interpolation method is applied to the sensed data. The position of the hand can be determined by finding the peak of the interpolated surface. The precision of the calculated position is much finer than the size of a grid cell: the current implementation has an accuracy of 1 cm, while the size of a grid cell is 10 cm.
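A minimal sketch of this peak search follows, using SciPy’s cubic spline zoom as a stand-in for the bicubic interpolation; the upsampling factor and the coordinate convention are assumptions.

```python
# Sub-cell hand localization on the sensed proximity map (values illustrative).
import numpy as np
from scipy.ndimage import zoom

CELL_SIZE_CM = 10.0   # grid pitch from the text
UPSAMPLE = 10         # interpolate each cell into 10 steps (~1 cm resolution)

def hand_position_cm(proximity: np.ndarray) -> tuple[float, float]:
    """Estimate the (x, y) hand position as the peak of the interpolated field."""
    fine = zoom(proximity, UPSAMPLE, order=3)   # cubic spline interpolation
    iy, ix = np.unravel_index(np.argmax(fine), fine.shape)
    return ix / UPSAMPLE * CELL_SIZE_CM, iy / UPSAMPLE * CELL_SIZE_CM
```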

Shape-based manipulation
SmartSkin’s sensor configuration also enables “shape-based manipulation”, which does not explicitly use the 2-D position of the hand. Instead, a potential field created by the sensor inputs is used to move objects. As the hand approaches the table surface, each intersection of the sensor grid measures the capacitance between itself and the hand. By using this field, various rules of object manipulation can be defined. For example, if objects are made to descend toward lower-potential areas, they are repelled from the human hand. By changing the hand’s position around the object, the direction and speed of the object’s motion can be controlled.
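The following sketch shows one way such a rule could be written: the object moves one step down the gradient of the sensed field, so it slides away from the hand. The gain constant and the coordinate handling are assumptions.

```python
# Sketch of potential-field object motion: objects descend toward lower potential,
# i.e., away from the hand. 'proximity' is the sensed map; 'pos' is (row, col) in cells.
import numpy as np

def step_object(pos: np.ndarray, proximity: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Move an object one step down the potential gradient (away from the hand)."""
    grad_y, grad_x = np.gradient(proximity)                # field slope at every cell
    iy, ix = np.clip(pos.round().astype(int), 0, np.array(proximity.shape) - 1)
    velocity = -gain * np.array([grad_y[iy, ix], grad_x[iy, ix]])
    return pos + velocity
```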

In our user tests with SmartSkin, many people were quickly able to use this interface even though they did not fully understand the underlying dynamics. Many users naturally used two hands, or even their arms. For example, to move a group of objects, one can sweep the table surface with one’s arm. Two arms can be used to trap and move objects (Figure 2(b)).

Multi-finger recognition
Using the same sensing principle with a denser grid antenna layout, SmartSkin can determine the human hand shape, as shown in Figure 2(c) and Figure 2(d). The same peak-detection algorithm can be used here; instead of tracking just one hand position, it tracks the positions of multiple fingertips.
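A minimal sketch of such multi-peak detection treats fingertips as local maxima of the proximity map above a noise floor; the window size and threshold are assumptions.

```python
# Sketch of multi-fingertip detection on a denser SmartSkin grid.
import numpy as np
from scipy.ndimage import maximum_filter

def find_fingertips(proximity: np.ndarray, threshold: float = 0.2,
                    window: int = 3) -> list[tuple[int, int]]:
    """Return grid coordinates of local proximity peaks (one per fingertip)."""
    is_peak = proximity == maximum_filter(proximity, size=window)
    peaks = np.argwhere(is_peak & (proximity > threshold))
    return [tuple(p) for p in peaks]
```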

Interacting with deformable shapes using multi-touch interface
Igarashi et al.’s “As-Rigid-As-Possible Shape Manipulation” is an algorithm for deforming objects with multiple control points [4]. Figure 2(e) shows how this algorithm is implemented using SmartSkin. A user of this system can directly manipulate graphics objects with multiple finger control points. The algorithm deforms a graphics object according to the position change of the control points.
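The deformation solver itself is beyond the scope of a short sketch; the snippet below only illustrates the interface side, snapping each control handle to the nearest detected fingertip before handing the updated handles to a placeholder deformation routine. The grabbing radius and the arap_deform name are assumptions.

```python
# Sketch of feeding multi-touch points into a deformation step. 'arap_deform'
# is a placeholder for the as-rigid-as-possible solver, which is not shown here.
import numpy as np

def update_handles(handles: np.ndarray, touches: np.ndarray,
                   grab_radius: float = 30.0) -> np.ndarray:
    """Move each control handle to its nearest touch point, if one is close enough."""
    updated = handles.copy()
    if len(touches) == 0:
        return updated
    for i, handle in enumerate(handles):
        distances = np.linalg.norm(touches - handle, axis=1)
        nearest = int(np.argmin(distances))
        if distances[nearest] < grab_radius:   # grabbing radius is an assumption
            updated[i] = touches[nearest]
    return updated

# mesh = arap_deform(mesh, handles, update_handles(handles, touches))  # placeholder call
```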

DiamondTouch
DiamondTouch [1] is another interactive table system based on capacitive sensing. The unique feature of DiamondTouch is its capability to distinguish multiple users. The grid-shaped antenna embedded in the DiamondTouch table transmits a time-modulated signal. Users of DiamondTouch sit in special chairs with signal-receiving electrodes. When a user’s finger contacts the surface, a capacitive connection from the grid antenna to the signal-receiving chair is established through the user’s body. This information is used to determine the user’s finger position on the surface, as well as who is manipulating the surface. Since the DiamondTouch table simply transmits modulated signals, multiple users can operate the same surface simultaneously without the system losing track of each user’s identity. DiamondTouch also supports semi-multi-touch operation, where “semi” means it can detect multiple points with some ambiguity. For instance, when a user touches two points, (100, 200) and (300, 400), the system cannot distinguish this situation from that of another two points, (100, 400) and (300, 200). However, for simple multi-touch interactions such as pinching (controlling scale by the distance between two fingers), this ambiguity is not a problem.
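The short example below reproduces this ambiguity: from the touched columns and rows alone, four candidate points are possible, yet the spread needed for a pinch gesture can still be computed.

```python
# Illustration of DiamondTouch's row/column ambiguity for two simultaneous touches.
from itertools import product
import math

touched_x = [100, 300]   # columns reporting contact
touched_y = [200, 400]   # rows reporting contact

candidates = list(product(touched_x, touched_y))
# [(100, 200), (100, 400), (300, 200), (300, 400)] -- the true pairing is unknown.

# A pinch gesture only needs the spread between the two fingers, which is
# unambiguous: use the diagonal of the touched bounding box.
spread = math.hypot(max(touched_x) - min(touched_x),
                    max(touched_y) - min(touched_y))
print(candidates, spread)
```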

PreSense: Touch and Pressure Sensing Interaction
Touch Sensing Input [3] extends the usability of the mouse by introducing a touch sensor. While the buttons of a normal mouse have only two states (non-press and press), a touch-sensitive button provides three states (non-touch, touching, and press). This additional state allows more precise control of the system. For example, the toolbox of a GUI application automatically expands when a user moves the cursor to the toolbar region while touching, but not pressing, the button.
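A minimal sketch of the resulting three-state button follows, combining a touch sensor reading with the ordinary mechanical switch; the state names and the toolbar policy are my own illustration, not the paper’s API.

```python
# Sketch of a three-state (non-touch / touching / pressed) button.
from enum import Enum, auto

class ButtonState(Enum):
    NO_TOUCH = auto()
    TOUCHING = auto()   # finger resting on the button without pressing it
    PRESSED = auto()

def button_state(touch_sensed: bool, switch_closed: bool) -> ButtonState:
    """Combine the touch sensor and the mechanical switch into one state."""
    if switch_closed:
        return ButtonState.PRESSED
    return ButtonState.TOUCHING if touch_sensed else ButtonState.NO_TOUCH

# Example policy: expand the toolbox on TOUCHING, activate a tool only on PRESSED.
```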

Pressure is another interesting input parameter for organic interaction. We naturally use and control pressure in natural communication, such as when we shake hands. With a simple pressure sensor embedded in a normal mouse or touch-pad, the device can easily sense finger pressure.

Figure 3. PreSense is a 2D input device enhanced with pressure sensors. A user can add pressure to control analog parameters such as scaling. A user can also specify “positive” and “negative” pressures by changing the size of the finger contact area on the touch-pad surface. To emulate a discrete button press with a “click” feeling, PreSense can also be combined with tactile feedback.

PreSense [8] is a touch and pressure sensing input device that uses finger pressure as well as finger positions for operation (Figure 3). It consists of a capacitive touch-pad, force-sensitive resistor (FSR) pressure sensors, and an actuator for tactile feedback. It also recognizes finger contact by measuring the capacitance change on the touch-pad surface. With the combination of pressure sensing and tactile feedback, it can emulate various types of buttons (e.g., one-level and two-level buttons) by setting thresholds on the pressure parameters. For example, a user can “soft press” a target to select it, and “hard press” it to display a pop-up menu.
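A minimal sketch of such threshold-based button emulation maps an analog pressure value to no-press / soft-press / hard-press levels; the thresholds and the hysteresis margin are assumptions, not values from PreSense.

```python
# Sketch of emulating a two-level button from an analog pressure value in [0, 1].
SOFT, HARD, HYSTERESIS = 0.25, 0.65, 0.05   # assumed thresholds

def press_level(pressure: float, previous: int) -> int:
    """Return 0 (no press), 1 (soft press), or 2 (hard press)."""
    # Falling thresholds sit slightly below rising ones so the level does not
    # flicker near a boundary; each transition can be paired with a tactile click.
    if pressure >= HARD or (previous == 2 and pressure > HARD - HYSTERESIS):
        return 2
    if pressure >= SOFT or (previous >= 1 and pressure > SOFT - HYSTERESIS):
        return 1
    return 0
```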

Using analog pressure sensing, a user can control continuous parameters, such as the scale of a displayed image. To distinguish between scaling directions (scale-up and scale-down), the finger contact area can be used. As shown in Figure 3, by slightly changing the position of the finger, one can control both zooming in and zooming out with a single finger. Pressure can be used for explicit parameter control, such as scaling, but it also offers the possibility of sensing implicit or emotional states of the user. When a user feels frustrated with the system, his or her mouse-button pressure might deviate from its normal level, and the system could then react to the user’s frustration. Finger input with pressure, combined with tactile feedback, is one of the most natural forms of interaction. Like “Shiatsu” (Japanese finger-pressure therapy), users of PreSense can directly feel and control the status of computer systems.
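The sketch below captures this single-finger zoom idea: the contact area chooses the zoom direction and the pressure sets the rate. The neutral-area value, the maximum rate, and the direction mapping are assumptions.

```python
# Sketch of single-finger zoom: contact area picks the direction, pressure the rate.
AREA_NEUTRAL = 80.0     # touch-pad cells covered by a "neutral" contact (assumed)
MAX_ZOOM_RATE = 0.05    # scale change per update at full pressure (assumed)

def zoom_step(contact_area: float, pressure: float) -> float:
    """Return a multiplicative scale step: >1 zooms in, <1 zooms out."""
    direction = 1.0 if contact_area >= AREA_NEUTRAL else -1.0   # flat finger vs. fingertip
    rate = MAX_ZOOM_RATE * max(0.0, min(pressure, 1.0))
    return 1.0 + direction * rate

scale = 1.0
for area, press in [(95, 0.8), (95, 0.8), (60, 0.5)]:   # two zoom-ins, then a zoom-out
    scale *= zoom_step(area, press)
```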

Research Issues for Organic User Interfaces
Because organic user interfaces represent a new and emerging research field, there are still many research issues that require further study. In what follows, we cover four such research topics.

Interaction techniques for OUI
GUIs have a long history, and a large number of interaction techniques have been developed for them. When the mouse was invented, it was only used to point at onscreen objects. It took some time for mouse-based interaction techniques, such as “pop-up menus” or “scroll bars”, to be developed. The current development level of organic user interfaces is at the same stage as the mouse was when it was first invented. For multi-touch interaction, only a simple set of techniques, such as zooming, has been introduced, but there should be many more possibilities. Interaction techniques such as those used in [4] may be candidates.

Stone (Tool) vs. skin: Comparison between tangible and organic UIs
It is also interesting and important to consider the similarities and differences between tangible UIs and organic UIs. Although there are large overlaps between the two types of UIs, the conceptual differences are easy to see. Tangible UI systems often use multiple physical objects as tools for manipulation. Each object is graspable so that users can manipulate it physically. These objects often have a concrete meaning in the application (i.e., they act as phicons, or physical icons), and thus many tangible systems are domain specific (tuned for a particular application). In organic UI systems, users directly interact with, possibly curved, interactive surfaces (walls, tables, electronic paper, etc.), and no intermediate objects are used. Interactions are more generic and less application-oriented. This situation can be compared to real-world interaction. In the real world, we also use physical instruments (tools) for manipulating things, but we prefer direct contact for human-human communication, such as shaking hands. It might be said that tangible UIs are more logical or manipulation-oriented, whereas organic UIs are more emotional or communication-oriented, but more real-world experience must be evaluated to make a valid comparison.

Other modalities for interaction
For organic UIs, we still mainly use our hands as the primary body part for interaction. We should also be able to use other parts, because we do so in natural communication. Eye gaze is of course one possibility. Another interesting possibility is the use of blowing. Blowing can be used for manipulation because it is controllable, but it also conveys emotion during interaction. Patel and Abowd developed a technique to determine the direction of a blow based on acoustic analysis [6]. The BYU-BYU-View system tries to transmit wind to add reality to telecommunication [9].

Connection to physical environment
In the context of traditional HCI, the term interaction generally means some information exchange between a human and a computer. In the near future, interaction will also involve more physical substances, such as illumination, air, temperature, humidity, or even energy. The interaction concept is no longer limited to interactions between humans and computers, but can be expanded to cover interactions between the real world and computers. For example, future interactive wall systems will react to human gestures, but will also be aware of the air in the room and be able to stabilize conditions such as temperature and humidity, in the same way that a cell membrane keeps a cell’s environment stable. Interactive walls may also be able to control sound energy to dynamically create silent areas. Even ceilings may someday act as information displays. In this way, future interactive systems may more seamlessly interact with, or control, our physical environments.

References
1. Paul Dietz and Darren Leigh. DiamondTouch: A multi-user touch technology. In Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology (UIST 2001), pages 219–226, 2001.
2. J. Y. Han. Low-cost multi-touch sensing through frustrated total internal reflection. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (UIST 2005), pages 115–118, 2005.
3. Ken Hinckley and Mike Sinclair. Touch-sensing input devices. In CHI ’99 Proceedings, pages 223–230, 1999.
4. Takeo Igarashi, Tomer Moscovich, and John F. Hughes. As-rigid-as-possible shape manipulation. ACM Transactions on Graphics (SIGGRAPH 2005), pages 1134–1141, 2005.
5. Nobuyuki Matsushita and Jun Rekimoto. HoloWall: Designing a finger, hand, body, and object sensitive wall. In Proceedings of UIST ’97, October 1997.
6. Shwetak Patel and Gregory Abowd. BLUI: Low-cost localized blowable user interfaces. In Proceedings of UIST 2007, 2007.
7. Jun Rekimoto. SmartSkin: An infrastructure for freehand manipulation on interactive surfaces. In CHI 2002 Proceedings, pages 113–120, 2002.
8. Jun Rekimoto, Takaaki Ishizawa, Carsten Schwesig, and Haruo Oba. PreSense: Interaction techniques for finger sensing input devices. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology (UIST 2003), pages 203–212, 2003.
9. Erika Sawada, Shinya Ida, Tatsuhito Awaji, Keisuke Morishita, Tomohisa Aruga, Ryuta Takeichi, Tomoko Fujii, Hidetoshi Kimura, Toshinari Nakamura, Masahiro Furukawa, Noriyoshi Shimizu, Takuji Tokiwa, Hideaki Nii, Maki Sugimoto, and Masahiko Inami. BYU-BYU-View: A wind communication interface. In SIGGRAPH 2007 Emerging Technologies, 2007.

Bio
Jun Rekimoto (http://rkmt.net) is a professor at the Interfaculty Initiative in Information Studies at The University of Tokyo, and the director of the Interaction Laboratory at Sony Computer Science Laboratories, Inc. He received his B.A.Sc., M.Sc., and Ph.D. in Information Science from Tokyo Institute of Technology in 1984, 1986, and 1996, respectively. His research interests include real-world user interfaces, perceptive environments, novel input devices, and large-scale sensing systems. In 2007, he was elected to the ACM SIGCHI Academy.

