How People Make Things is an exhibition that helps families talk together and learn about the making of everyday objects. The goal of the project was to create a learning environment that mediates difficult manufacturing concepts for parents, and scaffolds the development of family conversations about the processes of making both inside and outside the museum. A visit to the exhibition would be deemed successful if visitors demonstrated changes in what they knew and how they talked about objects and manufacturing processes. A model of change describing how families might build such an
Finger-based touch input has become a major interaction modality for mobile user interfaces. However, due to the low precision of finger input, small user interface components are often difficult to acquire and operate on a mobile device. It is even harder when the user is on the go and unable to pay close attention to the interface. In this paper, we present Gesture Avatar, a novel interaction technique that allows users to operate existing arbitrary user interfaces using gestures. It leverages the visibility of graphical user interfaces and the casual interaction of gestures. Gesture Avatar
Multi-touch technology provides a successful gesture-based human-computer interface. The contact and gesture recognition algorithms of this interface assume full hand function and, therefore, are not accessible to many people with physical disability. In this paper, we design a set of command-like gestures for users with limited range and function in their digits and wrist. Trajectory and angle features are extracted from these gestures and passed to a recurrent neural network for recognition. Experiments are performed to test the feasibility of the gesture recognition system and determine
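The abstract does not give implementation details for the feature extraction step. As a minimal sketch, assuming gestures arrive as sampled 2D point paths, per-segment trajectory (length) and angle (heading) features of the kind described might be computed like this before being fed to a recurrent classifier; the function name and tuple layout are illustrative, not from the paper:

```python
import math

def extract_features(points):
    """Per-segment trajectory and angle features from a sampled 2D gesture path.

    `points` is a list of (x, y) tuples sampled along the gesture.
    Returns a list of (segment_length, heading_angle) pairs, one per
    consecutive point pair, suitable as a per-timestep input sequence
    for a recurrent network.
    """
    features = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)   # trajectory feature: segment length
        angle = math.atan2(dy, dx)    # angle feature: heading in radians
        features.append((length, angle))
    return features

# A straight horizontal stroke yields constant heading 0 and unit lengths.
stroke = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(extract_features(stroke))
```

Sequences of such (length, angle) pairs are a common input representation for trajectory classifiers because they are invariant to where on the surface the gesture starts.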
Collaborative Information Retrieval (CIR) is the process by which people working together can collaboratively search for, share and navigate through information. Computer support for CIR currently makes use of single-user systems. CIR systems could benefit from the use of multi-user interaction to enable more than one person to collaborate using the same data sources, at the same time and in the same place. Multi-touch interaction has provided the ability for multiple users to interact simultaneously with a multi-touch surface. This paper presents a generalised architecture for multi-touch CIR
Creating multiple prototypes facilitates comparative reasoning, grounds team discussion, and enables situated exploration. However, current interface design tools focus on creating single artifacts. This paper introduces the Juxtapose code editor and runtime environment for designing multiple alternatives of both application logic and interface parameters. For rapidly comparing code alternatives, Juxtapose introduces selectively parallel source editing and execution.
TEAM MEMBERS:
Björn Hartmann, Loren Yu, Abel Allison, Yeonsoo Yang, Scott R. Klemmer
This report describes and discusses the findings from a field study that was conducted at the Vancouver Aquarium to investigate how visitors explore and experience large horizontal multi-touch tables as part of public exhibition spaces. The study investigated visitors’ use of two different tabletop applications—the Collection Viewer and the Arctic Choices table—that are part of the Canada’s Arctic exhibition at the Vancouver Aquarium. Our findings show that both tabletop exhibits enhanced the exhibition in different ways. The Collection Viewer table evoked visitors’ curiosity by presenting
TEAM MEMBERS:
Jim Spadaccini, Jeff Heywood, Uta Hinrichs, Sheelagh Carpendale
Touch-sensitive devices are becoming more and more common. Many people use touch interaction, especially on handheld devices like iPhones or other mobile phones. But the question is, do people really understand the different gestures? That is, do they know which gesture is the correct one for the intended action, and do they know how to transfer the gestures to bigger devices and surfaces? This paper reports the results of usability tests which were carried out in a semi-public space to explore people’s ability to find gestures to navigate on a virtual globe. The globe is presented on a multi-touch
TEAM MEMBERS:
Jim Spadaccini, Markus Jokisch, Thomas Bartoschek, Angela Schwering
Museums are shifting from being object- and collection-centered towards a focus on space, affect, and audience by producing multi-dimensional, spatial, non-linear experiences. Interactivity is often invoked, unquestioningly, as evidence of this shift. Through the findings of a case study of ‘High Arctic’, a temporary exhibition at the National Maritime Museum, the paper discusses how the museum interprets and practices the notion of interactivity. By examining the multiplicity of the museum, with the focus on process, the possibility of opening and creating new models of experience can be evaluated
For the past twenty years there has been a slow trickle of research disseminated through a variety of channels on the nature and use of computer interactives within museum and gallery environments. This research has yet to be consolidated into a robust and coherent evidence base for considering and understanding the continued investment in such interactives by institutions. Simultaneously, however, the technology has changed almost beyond recognition, from early kiosk-based computer exhibits featuring mostly film and audio content through to the newer generation of multi-touch interfaces being
This paper explores the interactive possibilities enabled when the barrel of a digital pen is augmented with a multi-touch sensor. We present a novel multi-touch pen (MTPen) prototype and discuss its alternate uses beyond those of a standard stylus, such as allowing new touch gestures to be performed using the index finger or thumb and detecting how users grip the device as a mechanism for mode switching. We also discuss the hardware and software implementation challenges in realizing our prototype, and showcase how one can combine different grips (tripod, relaxed tripod, sketch, wrap) and
TEAM MEMBERS:
Jim Spadaccini, Hyunyoung Song, Hrvoje Benko, Francois Guimbretiere, Shahram Izadi, Xiang Cao, Ken Hinckley
Creating and editing large graphs and node-link diagrams are crucial activities in many application areas. For them, we consider multi-touch and pen input on interactive surfaces as very promising. This fundamental work presents a user study investigating how people edit node-link diagrams on an interactive tabletop. The study covers a set of basic operations, such as creating, moving, and deleting diagram elements. Participants were asked to perform spontaneous gestures for 14 given tasks. They could interact in three different ways: using one hand, both hands, as well as pen and hand
TEAM MEMBERS:
Mathias Frisch, Jens Heydekorn, Raimund Dachselt
In this paper we describe two projects that utilize reality-based interaction to advance collaborative scientific inquiry and discovery. We discuss the relation between reality-based and embodied interaction, and present findings from an experimental study that illustrate benefits of reality-based tabletop interaction for collaborative inquiry-based learning.