HCI-IS extended deadline

The deadlines for submission to the HCI-IS conference have been extended. The new deadlines are:

  • Paper Submission: September 9, 2013
  • Notification of Acceptance: September 13, 2013
  • Camera-Ready Submission: September 20, 2013

We invite everybody interested to submit a paper and join us in early October in Ljubljana.

Interactive Visual Analysis – IVA

Just a few notes from Helwig Hauser's keynote at SouthCHI 2013, titled “Integrating interactive and computational analysis in visualization.”

First, he defined visualization as a computer-assisted means to enable insight into data. In research, visual analytics has been a hot topic since 2004. Based on the level of integration of visualization and interaction, visual analytics tools can be divided into the following 3 (or is it 4?) categories:

  • level 0: no integration,
  • level 1a: visualization of results,
  • level 1b: making computational analysis interactive,
  • level 2: tight integration.

The last level is the one with the most potential for research. He continued by presenting the IVA methodology and the IVA loop. Some remarks about the IVA methodology (and tools for interactive visual analysis):

  • it is needed when the user is faced with too much or too complex data;
  • it should support data exploration, data analysis, hypothesis generation and sense-making;
  • it should take into account the user's interests and the task at hand;
  • it should support ‘information drill-down’ (i.e. going from overview to details);
  • it should offer an interactive and iterative visual dialog.

The basic IVA loop consists of two steps: visualization (the computer shows the data to the user) and interaction (the user tells the computer what he/she is interested in). It sounds simple, but the execution of these two steps can quickly get complicated – keep in mind that the process must run in real time to be interactive.
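To make the loop concrete, here is a minimal sketch of one visualization–interaction round trip, assuming matplotlib (my choice of toolkit, not the keynote's): brushing a region in one scatter plot immediately highlights the selected points in a linked view.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import RectangleSelector

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))                     # toy data set

fig, (ax_brush, ax_link) = plt.subplots(1, 2, figsize=(9, 4))
ax_brush.set_title("brush here")
ax_link.set_title("linked view")
ax_brush.scatter(data[:, 0], data[:, 1], s=10, c="grey")
linked = ax_link.scatter(data[:, 0], data[:, 1], s=10, c="grey")

def on_select(press, release):
    # interaction step: the user tells the computer what he/she is interested in
    x0, x1 = sorted((press.xdata, release.xdata))
    y0, y1 = sorted((press.ydata, release.ydata))
    inside = ((data[:, 0] >= x0) & (data[:, 0] <= x1) &
              (data[:, 1] >= y0) & (data[:, 1] <= y1))
    # visualization step: the computer shows the user's focus in the linked view
    linked.set_color(np.where(inside, "red", "grey"))
    fig.canvas.draw_idle()

selector = RectangleSelector(ax_brush, on_select, useblit=True)  # keep a reference
plt.show()
```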

For more on the topic, see Helwig Hauser's bibliography.

Multitouch – not only gestures

Multitouch interaction is usually associated with gestures, but the richness of multitouch data can also be exploited in other ways. This post provides a few examples taken from recent research literature.

In “MTi: A method for user identification for multitouch displays”, we provide an overview of the literature concerned with user identification and user distinction on multitouch multi-user displays. State-of-the-art methods are presented by considering three key aspects: user identification, user distinction and user tracking. The paper then proposes a method for user identification called MTi, which identifies users based solely on the coordinates of touch points (and is thus applicable to all multitouch displays). The coordinates of the five touch points are first transformed into 29 features (distances, angles and areas), which are then used by an SVM model to perform the identification. The method achieved 94 % accuracy on a database with 100 users. Additionally, a usability study was performed to see how users react to MTi and to frame its scope.
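As a rough illustration of the pipeline – not the published method, since the paper defines the exact 29-feature set, while the distances and areas below as well as the synthetic data are stand-ins of my own – the sketch extracts geometric features from five touch points and trains a scikit-learn SVM on them:

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def hand_features(points):
    """Turn 5 touch-point coordinates into rotation-invariant features."""
    pts = np.asarray(points, dtype=float)              # shape (5, 2)
    feats = []
    for i, j in combinations(range(5), 2):             # 10 pairwise distances
        feats.append(np.linalg.norm(pts[i] - pts[j]))
    for i, j, k in combinations(range(5), 3):          # 10 triangle areas
        a, b = pts[j] - pts[i], pts[k] - pts[i]
        feats.append(abs(a[0] * b[1] - a[1] * b[0]) / 2.0)
    return np.array(feats)

# hypothetical data set: 20 noisy 5-touch samples for each of 3 users
rng = np.random.default_rng(0)
hands = rng.uniform(0, 100, size=(3, 5, 2))            # one hand shape per user
samples = hands[:, None] + rng.normal(0, 1.5, size=(3, 20, 5, 2))
X = np.array([hand_features(s) for s in samples.reshape(-1, 5, 2)])
y = np.repeat(np.arange(3), 20)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([hand_features(hands[1])]))          # expected: [1]
```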

In “Design and Validation of Two-Handed Multi-Touch Tabletop Controllers for Robot Teleoperation”, Micire et al. describe the DREAM (Dynamically Resizing, Ergonomic And Multitouch) controller. The controller is designed for robot teleoperation, a task currently performed with specialized joysticks that allow “chording” – the use of multiple fingers on the same hand to manage complex and coordinated movements (of the robot). Due to the lack of physical feedback, multitouch displays have been regarded as inappropriate for such tasks. The authors agree that simply emulating the physical 3D world (and its controls) on a flat 2D display is doomed to failure, but at the same time they offer an alternative: multitouch controls should be designed around the biomechanical characteristics of each individual's hand. The point here is that, because multitouch controls are soft/programmable, they can adapt to each user individually rather than to an average user, as physical controls have to. This approach is demonstrated with the DREAM controller (a PlayStation controller split in half – each half appears under one of the user's hands). The position of the user's fingers determines the location of the controller as well as its size and functions. In the paper the authors describe how they detect the presence of a user's hand (hand detection/registration), how they determine which hand (left/right) it is, why their approach does not rely on Cartesian coordinates (rotation insensitivity), etc.
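A hedged, minimal sketch of this kind of hand registration – not the authors' actual algorithm, which the paper describes in detail – to show the flavour of the idea: take the thumb to be the touch farthest from the centroid, the index finger to be its nearest neighbour, and read handedness off the turn direction between the two. Because the test uses relative vectors rather than raw Cartesian coordinates, it is insensitive to hand rotation.

```python
import numpy as np

def register_hand(points):
    """Guess the thumb and the handedness of five touch points."""
    pts = np.asarray(points, dtype=float)              # shape (5, 2)
    c = pts.mean(axis=0)
    # heuristic: the thumb is the touch farthest from the centroid
    thumb = int(np.argmax(np.linalg.norm(pts - c, axis=1)))
    others = [i for i in range(5) if i != thumb]
    # heuristic: the index finger is the thumb's nearest neighbour
    index = min(others, key=lambda i: np.linalg.norm(pts[i] - pts[thumb]))
    v_t, v_i = pts[thumb] - c, pts[index] - c
    cross = v_t[0] * v_i[1] - v_t[1] * v_i[0]
    # sign convention assumes screen coordinates (x right, y down)
    return thumb, ("right" if cross > 0 else "left")

# toy right hand, palm down, in screen coordinates
print(register_hand([(0, 60), (25, 10), (45, 0), (65, 5), (85, 20)]))
# -> (0, 'right')
```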

The next article that explores multitouch data from a non-gesture perspective is “See Me, See You: A Lightweight Method for Discriminating User Touches on Tabletop Displays.” Here, Zhang et al. describe how to discriminate users (i.e. determine a user's position around a tabletop) based on the orientation of the touch. With data from 8 participants (3072 samples) they build an SVM model with 97.9 % accuracy. For details, see the CHI paper above, the video below or this MSc thesis.
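To illustrate the underlying intuition only – Zhang et al. train an SVM rather than using a fixed rule – here is a toy function that bins a touch's orientation angle into the four sides of the table where the touching user would stand:

```python
def side_from_orientation(theta_deg):
    """Map a touch orientation to a table side.

    theta_deg is a compass-style bearing in degrees (0 = pointing 'up' the
    display, clockwise positive). Fingers point away from the user, so a
    touch pointing 'up' belongs to a user standing at the bottom edge.
    """
    sides = ["bottom", "left", "top", "right"]
    return sides[int(((theta_deg % 360) + 45) % 360 // 90)]

print(side_from_orientation(10))    # -> 'bottom'
print(side_from_orientation(95))    # -> 'left'
```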

Ewerling et al. suggest a processing pipeline for multitouch detection on large touch screens that combines maximally stable extremal regions (MSER) with agglomerative clustering in order to detect finger touches, group finger touches into hands, and distinguish the left hand from the right (when all fingers of a single hand touch the display). Their motivation was the fact that existing hardware platforms only detect single touches and assume that all of them belong to the same gesture, which limits the design space of multitouch interaction. The presented solution was evaluated on a diffused illumination display (97 % finger registration accuracy, 92 % hand registration accuracy), but it is applicable to all multitouch displays that provide a depth map of the region above the display. For details see the paper “Finger and Hand Detection for Multi-Touch Interfaces Based on Maximally Stable Extremal Regions” or this MSc thesis.
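A condensed sketch of the two stages, assuming OpenCV's MSER implementation and scikit-learn's agglomerative clustering; the threshold and the synthetic depth map are illustrative stand-ins, not the paper's tuned parameters (MSER may also report nested regions, which a real pipeline would deduplicate):

```python
import numpy as np
import cv2
from sklearn.cluster import AgglomerativeClustering

def fingers_and_hands(depth_image, hand_radius_px=120):
    """Detect fingertip blobs with MSER, then group them into hands."""
    gray = cv2.normalize(depth_image, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    regions, _ = cv2.MSER_create().detectRegions(gray)
    # one centroid per maximally stable extremal region (candidate fingertip)
    centroids = np.array([r.mean(axis=0) for r in regions])
    if len(centroids) < 2:
        return centroids, np.zeros(len(centroids), dtype=int)
    # single-linkage clustering: touches within a hand's span share a label
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=hand_radius_px,
        linkage="single").fit_predict(centroids)
    return centroids, labels

# synthetic depth map with two "hands" of two fingertips each
yy, xx = np.mgrid[0:240, 0:320]
img = np.zeros((240, 320), np.float32)
for cx, cy in [(60, 80), (90, 85), (220, 150), (250, 160)]:
    img += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 50.0)
pts, labels = fingers_and_hands(img)
print(len(pts), "regions grouped into", len(set(labels)), "hands")
```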

While the above papers present hand and finger registration techniques as part of a broader context, Au and Tai in “Multitouch Finger Registration and Its Applications” provide two concrete use cases: a palm menu and a virtual mouse (for details see the video below). Their method for hand and finger registration depends only on touch coordinates and is thus hardware independent.

Translation of the SUS questionnaire

I would like to validate the Slovenian translation of the System Usability Scale – a questionnaire for measuring the usability of various systems – for which I need a good number of responses, at least 200, to the survey below:

http://goo.gl/yvksG

Please fill out the survey in as great numbers as possible and, if you can, forward it to others. Anyone whose native language is Slovenian and who uses Gmail can respond. The whole thing takes a couple of minutes.

Once the validation is complete, the translation, together with short instructions for use, will be available on this page.
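For reference, the standard SUS scoring rule (Brooke, 1996) applies unchanged to a translated questionnaire; a short sketch:

```python
def sus_score(responses):
    """Score one completed SUS questionnaire.

    responses: the ten answers, item 1 first, each on a 1-5 Likert scale.
    Odd items contribute (answer - 1), even items (5 - answer); the sum is
    scaled by 2.5 onto a 0-100 range.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))     # -> 85.0
```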