This paper describes our approach to hand detection on a multitouch surface, i.e. detecting how many hands are currently on the surface and associating each touch point with its corresponding hand. Our goal was to find a general, software-based solution to this problem, applicable to all multitouch surfaces regardless of their construction. We therefore approached hand detection with a limited amount of information: the position of each touch point. We propose HDCMD (Hand Detection with Clustering on Multitouch Displays), a simple clustering algorithm based on heuristics that exploit knowledge of the anatomy of the human hand. Evaluated on synthetic data, the proposed hand detection algorithm's accuracy (97%) significantly outperformed X-means (21%) and DBSCAN (67%).
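As a rough illustration of the idea (not the algorithm from the paper, whose anatomical heuristics are more elaborate), touches can be greedily grouped by whether they fall within a plausible hand span; the `hand_span` threshold below is an assumed value:

```python
import numpy as np

def cluster_touches(points, hand_span=180.0):
    """Greedily group touch points into hands.

    points: (N, 2) array of touch coordinates; hand_span is an assumed
    maximum distance (here in pixels) between touches of the same hand.
    """
    points = np.asarray(points, dtype=float)
    labels = [-1] * len(points)
    next_hand = 0
    for i in range(len(points)):
        for j in range(i):  # join the first already-labelled touch in reach
            if np.linalg.norm(points[i] - points[j]) < hand_span:
                labels[i] = labels[j]
                break
        if labels[i] < 0:   # no touch close enough: start a new hand
            labels[i] = next_hand
            next_hand += 1
    return labels
```

A real implementation would at least cap each cluster at five touches; HDCMD additionally applies the anatomical heuristics described in the paper.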
Peter Novak, Franc Novak & Barbara Koroušić Seljak
Abstract
In this paper, an enhancement of a web application design is presented. The aim was to improve the visual quality of the application in order to make it simpler and visually easier to understand, which consequently leads to an improved user experience. The enhancement of the user interface of the web application's home page and subsequent pages is based on different methods for establishing a clear visual hierarchy of the presented information, among them the methods of reduction, regularization and leverage. The web application to which the above principles were applied is the Open Platform for Clinical Nutrition, which offers users effective means for identifying their nutritional state and adjusting diet plans to their way of life and clinical state. Currently, the application has about 3000 active users.
Just a few notes from Helwig Hauser's keynote at SouthCHI2013, titled »Integrating interactive and computational analysis in visualization.«
First, he defined visualization as a computer-assisted means to enable insight into data. In research, visual analytics has been a hot topic since 2004. Based on the level of integration of visualization and interaction, visual analytics tools can be divided into the following 3 (or is it 4?) categories:
level 0: no integration,
level 1a: visualization of results,
level 1b: making computational analysis interactive,
level 2: tight integration.
The last level is the one with the most potential for research. He continued by presenting the IVA methodology and the IVA loop. Some remarks about the IVA methodology (and tools for interactive visual analytics): it is needed when the user is faced with too much or too complex data; it should support data exploration, data analysis, hypothesis generation and sense making; it should take into account the user's interests and the task at hand; it should support 'information drill-down' (i.e. going from overview to details); and it should offer an interactive and iterative visual dialog. The basic IVA loop consists of two steps: visualization (the computer shows the data to the user) and interaction (the user tells the computer what he or she is interested in). It sounds simple, but the execution of these two steps can quickly become complicated – keep in mind that the process must run in real time to remain interactive.
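Purely as a structural sketch of that two-step loop (the function names below are illustrative placeholders, not from the talk):

```python
# A minimal sketch of the IVA loop: the computer shows the data, the user
# narrows the focus, and the loop repeats; render, get_user_selection and
# analyze are assumed placeholders for the application's own machinery.
def iva_loop(data, render, get_user_selection, analyze):
    view = data
    while True:
        render(view)                      # step 1: visualization
        selection = get_user_selection()  # step 2: interaction
        if selection is None:             # the user has finished exploring
            break
        view = analyze(data, selection)   # drill down before the next pass
```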
In this paper we present a web kiosk framework based on the Kinect sensor. The main idea is to use the framework to create simple interactive presentations for informing, advertising and presenting knowledge to the public. The use of such a framework simplifies the adaptation of existing web materials for presentation on the kiosk. We can also make use of touchless interaction for browsing through the interactive content, to engage users and encourage them to spend more time browsing the presented content. We present the structure of the framework and a simple case study on using the framework as an interactive presentation platform and as an educational resource. The developed framework has been used for presenting information on educational programs at the Faculty of Computer and Information Science, University of Ljubljana.
The first event organised by hci.si was successfully concluded as a workshop at the SouthCHI conference in Maribor. The conference took place at Hotel Habakuk from 1 to 3 July 2013 and had broad international participation from the HCI community.
Three approaches from community members were presented at the workshop:
Multitouch interaction is usually associated with gestures, but the richness of multitouch data can also be exploited in other ways. This post provides a few examples taken from recent research literature.
In “MTi: A method for user identification for multitouch displays”, we provide an overview of the literature concerned with user identification and user distinction on multitouch multi-user displays. State-of-the-art methods are presented by considering three key aspects: user identification, user distinction and user tracking. Next, the paper proposes a method for user identification called MTi, which identifies users based solely on the coordinates of touch points (and is thus applicable to all multitouch displays). The coordinates of the five touch points are first transformed into 29 features (distances, angles and areas), which are then used by an SVM model to perform identification. The method reported 94 % accuracy on a database of 100 users. Additionally, a usability study was performed to see how users react to MTi and to frame its scope.
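As an illustration of this pipeline (the exact 29 features are defined in the paper; the ones below are rough stand-ins), a feature vector of distances, angles and an area can be derived from the five touch coordinates and fed to an SVM:

```python
from itertools import combinations

import numpy as np
from sklearn.svm import SVC

def touch_features(points):
    """points: (5, 2) array with the touch coordinates of one hand."""
    p = np.asarray(points, dtype=float)
    c = p.mean(axis=0)                       # centroid of the five touches
    feats = [np.linalg.norm(p[i] - p[j])     # 10 pairwise distances
             for i, j in combinations(range(5), 2)]
    feats += [np.linalg.norm(q - c) for q in p]   # 5 centroid distances
    # angular gaps between consecutive touches as seen from the centroid
    ang = np.sort(np.arctan2(p[:, 1] - c[1], p[:, 0] - c[0]))
    feats += list(np.diff(ang, append=ang[0] + 2 * np.pi))
    # area of the polygon traced by the touches, ordered around the centroid
    q = p[np.argsort(np.arctan2(p[:, 1] - c[1], p[:, 0] - c[0]))]
    feats.append(0.5 * abs(np.dot(q[:, 0], np.roll(q[:, 1], -1))
                           - np.dot(q[:, 1], np.roll(q[:, 0], -1))))
    return np.array(feats)

# Identification is then a multi-class problem, one class per enrolled user:
# clf = SVC(kernel="rbf").fit([touch_features(s) for s in samples], labels)
```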
In “Design and Validation of Two-Handed Multi-Touch Tabletop Controllers for Robot Teleoperation”, Micire et al. describe the DREAM (Dynamically Resizing, Ergonomic And Multitouch) controller. The controller is designed for robot teleoperation, a task currently performed with specialized joysticks that allow “chording” – the use of multiple fingers on the same hand to manage complex, coordinated movements (of the robot). Due to the lack of physical feedback, multitouch displays have been regarded as inappropriate for such tasks. The authors agree that simply emulating the physical 3D world (and controls) on a flat 2D display is doomed to failure, but at the same time they provide an alternative: multitouch controls should be designed around the biomechanical characteristics of each individual’s hand. The point here is that, because multitouch controls are soft, programmable controls, they can adapt to each user individually and not to an average user, as physical controls have to. This approach is demonstrated with the DREAM controller (a PlayStation controller split in half – each half appears under one of the user’s hands). The position of the user’s fingers determines the location of the controller as well as its size and functions. In the paper the authors describe how they determine the presence of a user’s hand (hand detection/registration), how they determine which hand (left/right) it is, why their approach does not rely on Cartesian coordinates (rotation insensitivity), etc.
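The paper should be consulted for the actual procedure; purely as an illustration of a rotation-invariant handedness test (the thumb heuristic and the sign convention below are assumptions, not the authors' method), one can locate the thumb geometrically and check which side of the finger chain it falls on:

```python
import numpy as np

def handedness(points):
    """Guess left vs. right hand from the five touch points of one hand."""
    p = np.asarray(points, dtype=float)
    # assume the thumb is the touch farthest from the centroid of the rest
    thumb = int(np.argmax([np.linalg.norm(p[i] - np.delete(p, i, 0).mean(0))
                           for i in range(5)]))
    fingers = np.delete(p, thumb, axis=0)
    v = fingers - fingers.mean(axis=0)
    axis = np.linalg.svd(v, full_matrices=False)[2][0]  # finger-chain axis
    order = np.argsort(v @ axis)
    a, b = fingers[order[0]], fingers[order[-1]]        # chain end points
    # side of the end-to-end chord the thumb falls on; only relative
    # geometry is used, so the test is rotation invariant (the left/right
    # mapping depends on the display's coordinate system and is assumed)
    d1, d2 = b - a, p[thumb] - a
    return "right" if d1[0] * d2[1] - d1[1] * d2[0] > 0 else "left"
```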
The next article that explores multitouch data from a non-gesture perspective is “See Me, See You: A Lightweight Method for Discriminating User Touches on Tabletop Displays”. Here, Zhang et al. describe how to discriminate users (i.e. determine each user's position around a tabletop) based on the orientation of their touches. With data from 8 participants (3072 samples) they build an SVM model with 97.9 % accuracy. For details, see the CHI paper above, the video below or this MSc thesis.
Ewerling et al. suggested a processing pipeline for multitouch detection on large touch screens that combines maximally stable extremal regions (MSER) and agglomerative clustering in order to detect finger touches, group finger touches into hands, and distinguish the left from the right hand (when all fingers of a single hand touch the display). Their motivation was the fact that existing hardware platforms only detect single touches and assume they all belong to the same gesture, which limits the design space of multitouch interaction. The presented solution was evaluated on a diffused illumination display (97 % finger registration accuracy, 92 % hand registration accuracy), but it is applicable to all multitouch displays that provide a depth map of the region above the display. For details see the paper “Finger and Hand Detection for Multi-Touch Interfaces Based on Maximally Stable Extremal Regions” or this MSc thesis.
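A condensed sketch of the two stages, assuming a grayscale depth or IR image of the region above the display (OpenCV's default MSER and SciPy's single-linkage clustering stand in for the paper's tuned pipeline, and `hand_span_px` is an assumed threshold):

```python
import cv2
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def detect_fingers_and_hands(gray, hand_span_px=300.0):
    # stage 1: maximally stable extremal regions as fingertip candidates
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    centers = np.array([r.mean(axis=0) for r in regions])
    if len(centers) < 2:
        return centers, np.ones(len(centers), dtype=int)
    # stage 2: agglomerative (single-linkage) clustering groups fingertips
    # lying closer together than an assumed hand span into one hand
    hands = fcluster(linkage(centers, method="single"),
                     t=hand_span_px, criterion="distance")
    return centers, hands
```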
If the above papers present hand and finger registration techniques as part of a broader context, Au and Tai, in “Multitouch Finger Registration and Its Applications”, provide two use cases: the palm menu and the virtual mouse (for details, see the video below). Their method for hand and finger registration depends only on touch coordinates and is thus hardware independent.
Intuitive human-computer interaction (HCI) is becoming an increasingly popular topic in computer science. The need for intuitive HCI is fueled by the growing complexity of software and the increasing amounts of data that users must master. Devices on which one or more users can intuitively (e.g. with their fingers) and simultaneously manipulate content are currently rare and expensive (e.g. Microsoft Surface), and are usually provided only as technology showcases.
An important advantage of such devices is the possibility of a multi-user experience that enables additional forms of inter-user interaction. This makes such systems well suited for applications such as viewing and browsing multimedia content (e.g. in museums, galleries and exhibitions) and visualizing large quantities of information.
The goal of the project is the creation of an open-source platform that will ease the development of multi-touch multi-user (MTMU) applications. No such freely available complete solution exists at this time. We believe that such a platform will increase the popularity and production of multi-touch systems, as well as enable more rapid development of MTMU-enabled applications.
Hardware
As a first step towards our goal, we have set up an FTIR-based (frustrated total internal reflection) multitouch table to provide a hardware basis for the development of the platform.
The video below shows the existing demo software (developed by the NUI Group) running on our table as a demonstration of the concept.
I would like to validate a translation of the questionnaire for measuring the usability of various systems – the System Usability Scale – for which I need a good number of responses, at least 200, to the survey below:
Please fill out the survey in as large numbers as possible and, if you can, forward it to others. Anyone whose native language is Slovenian and who uses Gmail can respond. The whole thing takes a couple of minutes.
Once the validation is complete, the translation, with short instructions for use, will be available on this page.
The call for papers for the HCI-SEE Workshop is out. The full call for papers document is accessible here.