Tangible User Interface and Its Evolution
by Hiroshi Ishii
People have developed sophisticated skills for sensing and manipulating their physical environments. However, most of these skills are not employed in interaction with the digital world today. Interactions with digital information are now largely confined to Graphical User Interfaces (GUIs). With the commercial success of the Apple Macintosh and Microsoft Windows, the GUI has become the standard paradigm for Human-Computer Interaction (HCI) today. GUIs represent information (bits) with pixels on a bit-mapped display. Those graphical representations can be manipulated with generic remote controllers such as mice and keyboards. By decoupling representation (pixels) from control (input devices) in this way, GUIs provide the malleability to emulate a variety of media graphically. However, when we interact with the GUI world, we cannot take advantage of our dexterity or apply our skills for manipulating physical objects, such as building with blocks or shaping models out of clay.
Figure 1. Tangible User Interface. By giving a tangible (physical) representation to digital information, a TUI makes the information directly graspable and manipulable with haptic feedback. An intangible representation (e.g., video projection) may complement the tangible representation by synchronizing with it.
In the mid-1990s, we moved from the GUI to Tangible User Interfaces. TUIs demonstrated a new way to materialize Mark Weiser's vision of Ubiquitous Computing: weaving digital technology into the fabric of the physical environment and making it invisible [9]. Instead of melting pixels into an assortment of different interfaces, a TUI uses tangible physical forms that fit seamlessly into the user's physical environment. Tangible User Interfaces (TUIs) aim to take advantage of these haptic interaction skills, an approach significantly different from that of the GUI. The key idea of TUIs is to give physical form to digital information [3]. The physical forms serve as both representations of and controls for their digital counterparts. By physically embodying digital information, a TUI makes it directly manipulable with our hands and perceptible through our peripheral senses. Figure 1 illustrates the model of TUI.
Figure 2. Urp and shadow simulation. Physical building models cast digital shadows, and a clock tool controls the time of day (the position of the sun).
Urp: 1st generation TUI
To illustrate basic TUI concepts, we introduce "Urp" (Urban Planning Workbench) as an example of TUI [8]. Urp uses scaled physical models of architectural buildings to configure and control an underlying urban simulation of shadow, light reflection, wind flow, etc. (Fig. 2). In addition to a set of building models, Urp provides a variety of interactive tools for querying and controlling the parameters of the urban simulation. These tools include a clock tool to change the position of the sun, a material wand to change the building surface between bricks and glass (with light reflection), a wind tool to change the wind direction, and an anemometer to measure wind speed. The physical building models in Urp cast digital shadows onto the workbench surface (via video projection), corresponding to solar shadows at a particular time of day. The time of day, representing the position of the sun, can be controlled by turning the physical hands of a "clock tool" (Fig. 2). The building models can be moved and rotated, with their corresponding shadows transforming according to their positions and the time of day. Correspondingly, moving the hands of the clock tool causes Urp to simulate a day of shadow movement among the situated buildings. Urban planners can identify and isolate inter-shadowing problems (shadows cast on adjacent buildings), and reposition buildings to avoid needlessly dark areas or to maximize light between buildings.
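To make this mapping concrete, here is a minimal sketch of how a tracked building pose and a clock-tool time could drive a projected shadow. This is an illustration only, not Urp's actual implementation; the simplified sun model, the box-shaped building footprint, and all function names are assumptions.

```python
import math

def sun_angles(hour):
    """Very rough sun model (an assumption, not Urp's): elevation and azimuth
    in radians for a given hour, assuming sunrise at 6:00 and sunset at 18:00."""
    day_fraction = (hour - 6.0) / 12.0             # 0 at sunrise, 1 at sunset
    elevation = math.sin(math.pi * day_fraction) * math.radians(75)
    azimuth = math.pi * day_fraction               # sweeps from east (0) to west (pi)
    return elevation, azimuth

def shadow_footprint(pose, width, depth, height, hour):
    """Project the four roof corners of a box-shaped building model onto the
    ground plane along the sun direction; the resulting quadrilateral
    approximates the cast shadow."""
    x, y, theta = pose                             # tracked position and rotation of the model
    elevation, azimuth = sun_angles(hour)
    if elevation <= 0:                             # sun below the horizon: no shadow
        return []
    length = height / math.tan(elevation)          # horizontal length of the shadow
    dx = -length * math.cos(azimuth)               # shadow points away from the sun
    dy = -length * math.sin(azimuth)
    corners = [(-width / 2, -depth / 2), (width / 2, -depth / 2),
               (width / 2, depth / 2), (-width / 2, depth / 2)]
    shadow = []
    for cx, cy in corners:
        # rotate each corner by the model's orientation, then translate to its position
        wx = x + cx * math.cos(theta) - cy * math.sin(theta)
        wy = y + cx * math.sin(theta) + cy * math.cos(theta)
        shadow.append((wx + dx, wy + dy))          # roof corner cast onto the ground plane
    return shadow

# Turning the clock tool (hour) or moving the model (pose) immediately
# changes the shadow polygon that would be projected onto the workbench.
print(shadow_footprint(pose=(0.0, 0.0, 0.3), width=2, depth=1, height=5, hour=9.5))
```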
In "Urp," physical models of buildings are used as tangible representations of the digital models of those buildings. To change the location and orientation of a building, users simply grab and move the physical model instead of pointing and dragging a graphical representation on a screen with a mouse. The physical forms of Urp's building models, and the information associated with their position and orientation upon the workbench, represent and control the state of the urban simulation. Although standard interface devices for GUIs such as keyboards, mice, and screens are also physical in form, the role of the physical representation in TUI provides an important distinction. Physically embodying the buildings to represent the computation involving building dimensions and locations allows a tight coupling between control of the object and manipulation of its parameters in the underlying digital simulation. In Urp, the building models and interactive tools are both physical representations of digital information (shadow dimensions and wind speed) and computational functions (shadow interplay). The physical artifacts also serve as controls of the underlying computational simulation (specifying the locations of objects). The specific physical embodiment thus does double duty, representing the digital model and allowing control of the digital representation. However, Urp does not provide the capability to change the forms of the tangible representations during interaction. Users had to work with a predefined, finite set of fixed-form objects (building models in this case), changing only the spatial relationships among them, not the form of each individual object. All the tangible objects in Urp had to be predefined (both physically and digitally), and the system did not provide the means to change their forms on the fly. That is why we designed the second generation of "organic" TUIs.
SandScape: 2nd generation TUI
With the advent of new sensing and display technologies, it became possible to introduce dynamic form giving into Tangible User Interfaces. This "organic" TUI points toward new digital/physical materials that seamlessly couple sensing and display capabilities.
Instead of using predefined discrete objects with fixed forms, we developed new types of organic TUIs that utilize continuous tangible materials such as clay and sand for rapid form giving and sculpting in landscape design. Examples are Illuminating Clay [6] and SandScape [2], which we discuss in more detail below. With the advent of flexible materials capable of integrating fully flexible sensors and displays, this category of organic TUI shows tremendous potential.
Figure 3. SandScape. Users can alter the form of the landscape model by manipulating sand while seeing the resultant effects of computational analysis generated and projected onto the surface of the sand in real time.
SandScape [2] is an organic tangible interface for designing and understanding landscapes through a variety of computational simulations using sand. Users view these simulations as they are projected onto the surface of a sand model that represents the terrain. Users can choose from a variety of simulations that highlight the height, slope, contours, shadows, drainage, or aspect of the landscape model (Fig. 3). They can alter the form of the landscape model by manipulating sand while seeing the resultant effects of computational analysis generated and projected onto the surface of the sand in real time. The project demonstrates how TUI takes advantage of our natural ability to understand and manipulate physical forms while still harnessing the power of computational simulation to help in our understanding of a model representation. SandScape uses an optical technique to capture the geometry of the landscape model. SandScape is less accurate than its predecessor, Illuminating Clay, which used a laser range finder to capture the geometry of a clay model [6]. SandScape and Illuminating Clay show the potential advantages of combining physical and digital representations for landscape modeling and analysis. The physical clay and sand models convey spatial relationships that can be intuitively and directly manipulated by hand. Users can also insert any found physical objects directly under the camera. This approach allows users to quickly create and understand highly complex topographies that would be difficult and time-consuming to produce with conventional CAD tools. We believe that this "Continuous and Organic TUI" approach makes better use of our natural abilities to discover solutions through the manipulation of physical objects and materials.
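As an illustration of this sense-compute-project loop, the following sketch derives slope and aspect maps from a captured height grid, the kind of analysis that could be colour-mapped and projected back onto the sand. It is not SandScape's actual code; the function name, the sensing assumption (a 2-D array of surface heights), and the NumPy-based formulation are assumptions made for illustration.

```python
import numpy as np

def landscape_analysis(height_map, cell_size=1.0):
    """Given a height map captured from the sand surface (assumed here to be a
    2-D array of heights on a regular grid), derive per-cell slope and aspect
    maps that could be colour-mapped and projected back onto the sand."""
    dz_dy, dz_dx = np.gradient(height_map, cell_size)       # surface gradients
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))   # steepness in degrees
    aspect = np.degrees(np.arctan2(dz_dy, -dz_dx)) % 360.0  # downhill direction (one common convention)
    return slope, aspect

# Each time the sand is reshaped, a fresh height map is captured and the
# analysis is recomputed, closing the sense-compute-project loop.
heights = np.random.rand(48, 64) * 0.2   # stand-in for a captured sand surface
slope, aspect = landscape_analysis(heights)
```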
Summary
Tangible user interfaces (TUIs) give physical form to digital information and computation, facilitating the direct manipulation of bits. Our goal in TUI development is to empower collaboration, learning, and decision-making through digital technology while taking advantage of human abilities to grasp and manipulate physical objects and materials. This article introduced the evolution of TUIs from rigid, discrete interfaces toward organic, malleable materials that enable dynamic sculpting and computational analysis using digitally augmented continuous physical materials. This new type of TUI offers rapid form giving in combination with computational feedback.
In addition to rapid form giving, actuation technology will also play a critical role in making interfaces more organic and dynamic. We have been exploring a new genre of TUIs that incorporate actuation mechanisms to realize kinetic memory in educational toys such as curlybot [1] and Topobo [7]. We are also designing a new generation of tabletop TUIs in which actuation makes tangible objects behave more actively, dynamically representing their internal computational state. Examples are the Actuated Workbench [4] and PICO [5].
I hope the evolution of TUIs introduced in this article will contribute to the future discussion of malleable, dynamic, and organic interfaces that seamlessly integrate sensing and display into soft and hard digital/physical materials.
References
1. Frei, P., Su, V., Mikhak, B. and Ishii, H. (2000). curlybot: designing a new class of computational toys. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2000) (The Hague, The Netherlands, April 1-6, 2000), ACM Press, pp. 129-136.
2. Ishii, H., Ratti, C., Piper, B., Wang, Y., Biderman, A. and Ben-Joseph, E. (2004). Bringing clay and sand into digital design – continuous tangible user interfaces. BT Technology Journal 22, 4, 287-299.
3. Ishii, H. and Ullmer, B. (1997). Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Conference on Human Factors in Computing Systems (CHI ’97) (Atlanta, March 1997), ACM, pp. 234-241.
4. Pangaro, G., Maynes-Aminzade, D. and Ishii, H. (2002). The actuated workbench: computer-controlled actuation in tabletop tangible interfaces, Proceedings of the 15th annual ACM symposium on User Interface Software and Technology (UIST 2002), ACM Press, pp. 181-190.
5. Patten, J. and Ishii, H. (2007). Mechanical constraints as computational constraints in tabletop tangible interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2007), ACM Press, pp. 809-818.
6. Piper, B., Ratti, C. and Ishii, H. (2002). Illuminating clay: a 3-D tangible interface for landscape analysis, Proceedings of the SIGCHI conference on Human factors in computing systems: Changing our world, changing ourselves, ACM Press, pp. 355-362.
7. Raffle, H. S., Parkes, A. J. and Ishii, H. (2004). Topobo: a constructive assembly system with kinetic memory, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2004), ACM Press, pp. 647-654.
8. Underkoffler, J. and Ishii, H. (1999). Urp: a luminous-tangible workbench for urban planning and design, Proceedings of the SIGCHI conference on Human factors in computing systems: the CHI is the limit, ACM Press, pp. 386-393.
9. Weiser, M. (1991). The computer for the 21st Century. Scientific American 265, 3, 94-104.
Bio
Hiroshi Ishii is the Muriel R. Cooper Professor of Media Arts and Sciences at the MIT Media Lab, where he heads the Tangible Media Group and co-directs the Things That Think (TTT) consortium. Prof. Ishii's research focuses on the design of seamless interfaces between humans, digital information, and the physical environment. He received his BE degree in electronic engineering, and ME and PhD degrees in computer engineering, from Hokkaido University, Japan. In 2006, ACM SIGCHI elected Prof. Ishii to the CHI Academy, recognizing his substantial contributions to the field of Human-Computer Interaction through the creation of Tangible User Interfaces (TUIs).