Multimodal Desktop Interaction: The Face–Object–Gesture–Voice Example

Nikolas Vidakis, Anastasios Vlasopoulos, Tsampikos Kounalakis, Petros Varchalamas, Michalis Dimitriou, Gregory Kalliatakis, Efthimios Syntychakis, John Christofakis, Georgios Triantafyllidis

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceedings › Research › peer-review


Abstract

This paper presents a natural user interface system based on multimodal human–computer interaction, which operates as an intermediate module between the user and the operating system. The aim of this work is to demonstrate a multimodal system that gives users the ability to interact with desktop applications using face, objects, voice, and gestures. These human behaviors constitute the input qualifiers to the system. The Microsoft Kinect multi-sensor was used as the input device in order to achieve natural user interaction, mainly due to the multimodal capabilities this device offers. We demonstrate scenarios that exercise all the functions and capabilities of our system from the perspective of natural user interaction.
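
The paper itself is the authoritative description of the system; as a rough illustration only, the sketch below shows one way such an intermediate module could map recognized input qualifiers (face, object, gesture, voice) to desktop-level commands. All names here (ModalityEvent, CommandDispatcher, the example bindings) are hypothetical assumptions for illustration and are not taken from the paper or the Kinect SDK.

```python
# Hypothetical sketch of a multimodal dispatcher sitting between
# sensor input (e.g. Kinect recognition streams) and the operating
# system. None of these names come from the paper; they only
# illustrate mapping input qualifiers to desktop actions.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass(frozen=True)
class ModalityEvent:
    modality: str  # "face", "object", "gesture", or "voice"
    value: str     # recognized label, e.g. "swipe_left" or "open"


class CommandDispatcher:
    """Routes recognized multimodal events to desktop commands."""

    def __init__(self) -> None:
        self._bindings: Dict[Tuple[str, str], Callable[[], None]] = {}

    def bind(self, modality: str, value: str,
             action: Callable[[], None]) -> None:
        # Register a desktop action for a (modality, label) pair.
        self._bindings[(modality, value)] = action

    def dispatch(self, event: ModalityEvent) -> None:
        # Look up and run the bound action, ignoring unknown events.
        action = self._bindings.get((event.modality, event.value))
        if action is not None:
            action()


# Example bindings (illustrative only): a voice command and a
# gesture, each mapped to a desktop-level action.
dispatcher = CommandDispatcher()
dispatcher.bind("voice", "open", lambda: print("launching application"))
dispatcher.bind("gesture", "swipe_left", lambda: print("previous window"))

# In a real system these events would come from the Kinect's
# recognition pipelines; here we simulate two of them.
dispatcher.dispatch(ModalityEvent("voice", "open"))
dispatcher.dispatch(ModalityEvent("gesture", "swipe_left"))
```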
Original language: English
Title of host publication: 18th International Conference on Digital Signal Processing (DSP)
Editors: Athanasios Skodras
Number of pages: 8
Publisher: Wiley-IEEE Press
Publication date: 2013
ISBN (Print): 978-1-4673-5807-1
DOIs
Publication status: Published - 2013
Event: International Conference on Digital Signal Processing - Fira, Greece
Duration: 1 Jul 2013 – 3 Jul 2013
Conference number: 18
Internet address: http://dsp2013.dspconferences.org/

Conference

Conference: International Conference on Digital Signal Processing
Number: 18
Country/Territory: Greece
City: Fira
Period: 01/07/2013 – 03/07/2013
Internet address: http://dsp2013.dspconferences.org/
Series: International Conference on Digital Signal Processing proceedings
ISSN: 1546-1874
