We introduce our view of the relation between symbolic gestures and manipulations in multi-touch Natural User Interfaces (NUI). We identify manipulations, not gestures, as the key to truly natural interfaces. We therefore suggest that future NUI research focus more on designing visual workspaces and model-world interfaces that are especially appropriate for multi-touch manipulations.
DATE:
TEAM MEMBERS:
Hans-Christian Jetter, Jens Gerken, Harald Reiterer
Though many tabletop applications allow users to interact with the application using complex multi-touch gestures, automated tool support for testing such gestures is limited. As a result, gesture-based interactions with an application are often tested manually, which is an expensive and error-prone process. In this paper, we present TouchToolkit, a tool designed to help developers automate their testing of gestures by incorporating recorded gestures into unit tests. The design of TouchToolkit was informed by a small interview study conducted to explore the challenges software developers face
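The record-and-replay idea the abstract describes can be sketched as a unit test that replays a captured gesture against the application and asserts on the resulting state. The `App` class and `play_recording` helper below are hypothetical stand-ins for illustration only; they are not TouchToolkit's actual (C#/.NET) API.

```python
import unittest

class App:
    """Toy application that doubles its zoom level on a pinch gesture."""
    def __init__(self):
        self.zoom = 1.0

    def on_gesture(self, name):
        if name == "pinch":
            self.zoom *= 2.0

def play_recording(app, recorded_events):
    # In the real tool a recording is a timed stream of raw touch events;
    # here each entry is simply a recognized gesture name.
    for gesture in recorded_events:
        app.on_gesture(gesture)

class PinchZoomTest(unittest.TestCase):
    def test_recorded_pinch_doubles_zoom(self):
        app = App()
        play_recording(app, ["pinch"])  # replay the captured gesture
        self.assertEqual(app.zoom, 2.0)
```

Run with `python -m unittest` as usual; the point is that the gesture is captured once and then replayed deterministically, instead of being re-performed by hand for every test run.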
DATE:
TEAM MEMBERS:
Shahedul Huq Khandkar, S. M. Sohan, Jonathan Sillito, Frank Maurer
Proton is a novel framework that addresses both of these problems. Using Proton, the application developer declaratively specifies each gesture as a regular expression over a stream of touch events. Proton statically analyzes the set of gestures to report conflicts, and it automatically creates gesture recognizers for the entire set. To simplify the creation of complex multitouch gestures, Proton introduces gesture tablature, a graphical notation that concisely describes the sequencing of multiple interleaved touch actions over time.
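The core idea of declaring a gesture as a regular expression over a touch-event stream can be sketched as follows. The event encoding (D = down, M = move, U = up, each suffixed with a touch ID) follows the spirit of the paper, but the symbols and the `recognize` function here are an illustrative simplification, not Proton's actual API.

```python
import re

# One-finger drag: touch down, one or more moves, touch up.
DRAG = re.compile(r"D1(M1)+U1")

# Two-finger pinch: both fingers down, interleaved moves, both up.
PINCH = re.compile(r"D1D2(M1|M2)+U1U2")

def recognize(event_stream: str) -> str:
    """Return the name of the first gesture whose expression matches
    the full event stream, or 'unknown'."""
    for name, pattern in [("drag", DRAG), ("pinch", PINCH)]:
        if pattern.fullmatch(event_stream):
            return name
    return "unknown"

print(recognize("D1M1M1M1U1"))      # drag
print(recognize("D1D2M1M2M1U1U2"))  # pinch
```

Because gestures are ordinary regular expressions, checking two gestures for conflicts reduces to asking whether their languages overlap, which is what makes Proton's static conflict analysis possible.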
DATE:
TEAM MEMBERS:
Jim Spadaccini, Kenrick Kin, Björn Hartmann, Tony DeRose, Maneesh Agrawala
This article introduces a new interaction model called Instrumental Interaction that extends and generalizes the principles of direct manipulation. It covers existing interaction styles, including traditional WIMP interfaces, as well as new interaction styles such as two-handed input and augmented reality. It defines a design space for new interaction techniques and a set of properties for comparing them.
DATE:
TEAM MEMBERS:
Michael Beaudouin-Lafon
For document visualization, folding techniques provide a focus-plus-context approach with fairly high legibility on flat sections. To enable richer interaction, we explore the design space of multi-touch document folding. We discuss several design considerations for simple modeless gesturing and compatibility with standard Drag and Pinch gestures. We categorize gesture models along the characteristics of Symmetric/Asymmetric and Serial/Parallel, which yields three gesture models. We built a prototype document workspace application that integrates folding and standard gestures, and a system for
DATE:
TEAM MEMBERS:
Patrick Chiu, Chunyuan Liao, Francine Chen
Delimiters are useful for using gestures to override application or OS commands. This paper investigates whether the DoubleFlip gesture is easy to learn and practical to use as an effective delimiter.
Many tasks in graphical user interfaces require users to interact with elements at various levels of precision. We present FingerGlass, a bimanual technique designed to improve the precision of graphical tasks on multitouch screens. It enables users to quickly navigate to different locations and across multiple scales of a scene using a single hand. The other hand can simultaneously interact with objects in the scene. Unlike traditional pan-zoom interfaces, FingerGlass retains contextual information during the interaction. We evaluated our technique in the context of precise object selection
DATE:
TEAM MEMBERS:
Dominik Käser, Maneesh Agrawala, Mark Pauly
Modern smartphones contain sophisticated sensors to monitor three-dimensional movement of the device. These sensors permit devices to recognize motion gestures: deliberate movements of the device by end-users to invoke commands. However, little is known about best practices in motion gesture design for the mobile computing paradigm. To address this issue, we present the results of a guessability study that elicits end-user motion gestures to invoke commands on a smartphone device. We demonstrate that consensus exists among our participants on parameters of movement and on mappings of motion
DATE:
TEAM MEMBERS:
Jim Spadaccini, Jaime Ruiz, Yang Li, Edward Lank
WNET is producing "The Human Spark," a multimedia project that includes a four-part television series (4 x 60 min) for national primetime broadcast on PBS, innovative outreach partnerships with museums, an extensive Web site and outreach activities, including a Spanish-language version and companion book. Hosted by Alan Alda, "The Human Spark" will explore the intriguing questions: What makes us human? Can the human spark be found in the differences between us and our closest genetic relatives -- the great apes? Is there some place or process unique to the human brain where the human spark resides? And if we can identify it, could we transfer it to machines? The programs will explore these questions by presenting cutting-edge research in a number of scientific disciplines including evolution, genetics, cognitive neuroscience, behavioral science, anthropology, linguistics, AI, robotics and computing. The series will highlight opposing views within each field, and the interdisciplinary nature of science, including its intersection with the humanities. The series will develop an innovative new format, the "muse concept," which involves pairing the host with a different scientific expert throughout each program. The outreach plan is being developed with a consortium of four leading science museums -- the American Museum of Natural History in New York, the Museum of Science in Boston, the Exploratorium in San Francisco, and the Fort Worth Museum of Science and History -- paired with their respective local public television stations. An additional six museums and local broadcasters will be chosen through an RFP process to develop local initiatives around the series. Multimedia Research and Leflein Associates will conduct formative as well as summative evaluations of the series and website.
DATE:
-
TEAM MEMBERS:
William Grant, Jared Lipworth, Graham Chedd, Barbara Flagg
This is one of three focus point presentations delivered as part of the session titled "Technology and Cyberinfrastructure," delivered on day two of the Citizen Science Toolkit Conference at the Cornell Lab of Ornithology in Ithaca, New York on June 20-23, 2007. Josh Knauer, Director of Advanced Development, Information Commons at MAYA Design, discusses the problem of "information liquidity" and how to make data available to the intended audiences and in a way that makes the data available at all times. Knauer applies the original model of the public library to the digital age and makes
DATE:
TEAM MEMBERS:
Josh Knauer
These reports were delivered on day three at the conclusion of the Citizen Science Toolkit Conference at the Cornell Lab of Ornithology in Ithaca, New York on June 20-23, 2007. The reports summarize the discussions that took place in five separate breakout groups, which met periodically throughout the conference to focus on key Citizen Science themes and topics that emerged during conference presentations and plenary discussions.
DATE:
TEAM MEMBERS:
Cornell Lab of Ornithology, Catherine McEver, Nolan Doesken, Geoff LeBaron, Sarah Kirn, Rebecca Jordan, Maureen McConnell
This discussion was held during the final plenary session on day three of the Citizen Science Toolkit Conference at the Cornell Lab of Ornithology in Ithaca, New York on June 20-23, 2007. Topics discussed include citizen science as a new field or discipline, the science role that citizen scientists play, next steps, issues to consider, suggestions, and developing (or not) a shared data infrastructure.
DATE:
TEAM MEMBERS:
Cathy McEver, Cornell Lab of Ornithology