Combining Texture-Derived Vibrotactile Feedback, Concatenative Synthesis and Photogrammetry for Virtual Reality Rendering

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review


Abstract

This paper describes a novel framework for real-time sonification of surface textures in virtual reality (VR), aimed at realistically representing the experience of driving over a virtual surface. A combination of techniques for capturing real-world surfaces is used to map 3D geometry, texture maps, and attributes for auditory (aural and vibrotactile) feedback. For the sonification rendering, we propose using information primarily from graphical texture features to define target units in concatenative sound synthesis. To foster models that go beyond the current generation of simple sound textures (e.g., wind, rain, fire), towards highly “synchronized” and expressive scenarios, our contribution outlines a framework for higher-level modeling of a bicycle’s kinematic rolling on ground contact, with enhanced perceptual symbiosis between auditory, visual and vibrotactile stimuli. We scanned two surfaces, represented as texture maps, with different features, morphology and matching navigation. We define target trajectories in a two-dimensional audio feature space, according to a temporal model and the morphological attributes of the surfaces. This synthesis method serves two purposes: real-time auditory feedback, and vibrotactile feedback induced by playing back the concatenated sound samples through a vibrotactile inducer speaker.
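As a concrete illustration of the unit-selection step described in the abstract, the following minimal Python sketch shows how a target trajectory in a two-dimensional audio feature space could drive grain selection from a pre-analysed corpus by nearest-neighbour matching. The feature pair, the texture-to-target mapping, and all identifiers here are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical corpus: 500 short sound grains, each pre-analysed into
# two audio features normalised to [0, 1] (e.g., loudness, brightness).
corpus_features = rng.random((500, 2))

def texture_to_target(roughness: float, speed: float) -> np.ndarray:
    """Map a graphical texture value and riding speed to a target point
    in the 2-D audio feature space (an assumed, simplified mapping)."""
    loudness = np.clip(roughness * speed, 0.0, 1.0)
    brightness = np.clip(0.3 + 0.7 * roughness, 0.0, 1.0)
    return np.array([loudness, brightness])

def select_unit(target: np.ndarray) -> int:
    """Return the index of the corpus grain closest to the target
    (Euclidean distance), i.e., the unit to concatenate next."""
    distances = np.linalg.norm(corpus_features - target, axis=1)
    return int(np.argmin(distances))

# Target trajectory: texture roughness sampled along the wheel's path.
trajectory = [texture_to_target(r, speed=0.8) for r in (0.2, 0.5, 0.9)]
print([select_unit(t) for t in trajectory])  # grain indices to play back

In the paper's pipeline the selected grains would be concatenated and played back both as audible feedback and through a vibrotactile inducer speaker; this sketch covers only the selection logic.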
Original language: English
Title of host publication: Proceedings of 2019 Sound and Music Computing Conference
Editors: Lorenzo J. Tardón, Isabel Barbancho, Ana M. Barbancho, Alberto Peinado
Number of pages: 8
Publisher: Sound and Music Computing Network
Publication date: May 2019
Pages: 348
ISBN (Electronic): 978-84-09-11120-6
Publication status: Published - May 2019
Event: 16th Sound and Music Computing Conference, University of Malaga (UMA), Malaga, Spain
Duration: 28 May 2019 – 31 May 2019
http://smc2019.uma.es/index.html

Conference

Conference: 16th Sound and Music Computing Conference
Location: University of Malaga (UMA)
Country: Spain
City: Malaga
Period: 28/05/2019 – 31/05/2019
Internet address: http://smc2019.uma.es/index.html

Fingerprint

Photogrammetry
Virtual reality
Textures
Feedback
Acoustic waves
Bicycles
Rain
Kinematics
Fires
Navigation
Trajectories
Rendering (computer graphics)
Geometry

Cite this

Magalhaes, E., Høeg, E. R., Bernardes, G., Bruun-Pedersen, J. R., Serafin, S., & Nordahl, R. (2019). Combining Texture-Derived Vibrotactile Feedback, Concatenative Synthesis and Photogrammetry for Virtual Reality Rendering. In L. J. Tardón, I. Barbancho, A. M. Barbancho, & A. Peinado (Eds.), Proceedings of 2019 Sound and Music Computing Conference (pp. 348). Sound and Music Computing Network.
Magalhaes, Eduardo ; Høeg, Emil Rosenlund ; Bernardes, Gilberto ; Bruun-Pedersen, Jon Ram ; Serafin, Stefania ; Nordahl, Rolf. / Combining Texture-Derived Vibrotactile Feedback, Concatenative Synthesis and Photogrammetry for Virtual Reality Rendering. Proceedings of 2019 Sound and Music Computing Conference. editor / Lorenzo J. Tardón ; Isabel Barbancho ; Ana M. Barbancho ; Alberto Peinado. Sound and Music Computing Network, 2019. pp. 348
@inproceedings{e6d60e16007e40ac93defb2165fbf476,
title = "Combining Texture-Derived Vibrotactile Feedback, Concatenative Synthesis and Photogrammetry for Virtual Reality Rendering",
abstract = "This paper describes a novel framework for real-time sonification of surface textures in virtual reality (VR), aimed at realistically representing the experience of driving over a virtual surface. A combination of techniques for capturing real-world surfaces is used to map 3D geometry, texture maps, and attributes for auditory (aural and vibrotactile) feedback. For the sonification rendering, we propose using information primarily from graphical texture features to define target units in concatenative sound synthesis. To foster models that go beyond the current generation of simple sound textures (e.g., wind, rain, fire), towards highly “synchronized” and expressive scenarios, our contribution outlines a framework for higher-level modeling of a bicycle’s kinematic rolling on ground contact, with enhanced perceptual symbiosis between auditory, visual and vibrotactile stimuli. We scanned two surfaces, represented as texture maps, with different features, morphology and matching navigation. We define target trajectories in a two-dimensional audio feature space, according to a temporal model and the morphological attributes of the surfaces. This synthesis method serves two purposes: real-time auditory feedback, and vibrotactile feedback induced by playing back the concatenated sound samples through a vibrotactile inducer speaker.",
author = "Eduardo Magalhaes and H{\o}eg, {Emil Rosenlund} and Gilberto Bernardes and Bruun-Pedersen, {Jon Ram} and Stefania Serafin and Rolf Nordahl",
year = "2019",
month = "5",
language = "English",
pages = "348",
editor = "Tard{\'o}n, {Lorenzo J.} and Isabel Barbancho and Barbancho, {Ana M.} and Alberto Peinado",
booktitle = "Proceedings of 2019 Sound and Music Computing Conference",
publisher = "Sound and Music Computing Network",
}

Magalhaes, E, Høeg, ER, Bernardes, G, Bruun-Pedersen, JR, Serafin, S & Nordahl, R 2019, Combining Texture-Derived Vibrotactile Feedback, Concatenative Synthesis and Photogrammetry for Virtual Reality Rendering. in LJ Tardón, I Barbancho, AM Barbancho & A Peinado (eds), Proceedings of 2019 Sound and Music Computing Conference. Sound and Music Computing Network, pp. 348, 16th Sound and Music Computing Conference, Malaga, Spain, 28/05/2019.

Combining Texture-Derived Vibrotactile Feedback, Concatenative Synthesis and Photogrammetry for Virtual Reality Rendering. / Magalhaes, Eduardo; Høeg, Emil Rosenlund; Bernardes, Gilberto; Bruun-Pedersen, Jon Ram; Serafin, Stefania; Nordahl, Rolf.

Proceedings of 2019 Sound and Music Computing Conference. ed. / Lorenzo J. Tardón; Isabel Barbancho; Ana M. Barbancho; Alberto Peinado. Sound and Music Computing Network, 2019. p. 348.

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

TY - GEN
T1 - Combining Texture-Derived Vibrotactile Feedback, Concatenative Synthesis and Photogrammetry for Virtual Reality Rendering
AU - Magalhaes, Eduardo
AU - Høeg, Emil Rosenlund
AU - Bernardes, Gilberto
AU - Bruun-Pedersen, Jon Ram
AU - Serafin, Stefania
AU - Nordahl, Rolf
PY - 2019/5
Y1 - 2019/5
N2 - This paper describes a novel framework for real-time sonification of surface textures in virtual reality (VR), aimed at realistically representing the experience of driving over a virtual surface. A combination of techniques for capturing real-world surfaces is used to map 3D geometry, texture maps, and attributes for auditory (aural and vibrotactile) feedback. For the sonification rendering, we propose using information primarily from graphical texture features to define target units in concatenative sound synthesis. To foster models that go beyond the current generation of simple sound textures (e.g., wind, rain, fire), towards highly “synchronized” and expressive scenarios, our contribution outlines a framework for higher-level modeling of a bicycle’s kinematic rolling on ground contact, with enhanced perceptual symbiosis between auditory, visual and vibrotactile stimuli. We scanned two surfaces, represented as texture maps, with different features, morphology and matching navigation. We define target trajectories in a two-dimensional audio feature space, according to a temporal model and the morphological attributes of the surfaces. This synthesis method serves two purposes: real-time auditory feedback, and vibrotactile feedback induced by playing back the concatenated sound samples through a vibrotactile inducer speaker.
AB - This paper describes a novel framework for real-time sonification of surface textures in virtual reality (VR), aimed at realistically representing the experience of driving over a virtual surface. A combination of techniques for capturing real-world surfaces is used to map 3D geometry, texture maps, and attributes for auditory (aural and vibrotactile) feedback. For the sonification rendering, we propose using information primarily from graphical texture features to define target units in concatenative sound synthesis. To foster models that go beyond the current generation of simple sound textures (e.g., wind, rain, fire), towards highly “synchronized” and expressive scenarios, our contribution outlines a framework for higher-level modeling of a bicycle’s kinematic rolling on ground contact, with enhanced perceptual symbiosis between auditory, visual and vibrotactile stimuli. We scanned two surfaces, represented as texture maps, with different features, morphology and matching navigation. We define target trajectories in a two-dimensional audio feature space, according to a temporal model and the morphological attributes of the surfaces. This synthesis method serves two purposes: real-time auditory feedback, and vibrotactile feedback induced by playing back the concatenated sound samples through a vibrotactile inducer speaker.
UR - http://smc2019.uma.es/docs/SMC2019_Proceedings.pdf
UR - http://smc2019.uma.es/
M3 - Article in proceeding
SP - 348
BT - Proceedings of 2019 Sound and Music Computing Conference
A2 - Tardón, Lorenzo J.
A2 - Barbancho, Isabel
A2 - Barbancho, Ana M.
A2 - Peinado, Alberto
PB - Sound and Music Computing Network
ER -

Magalhaes E, Høeg ER, Bernardes G, Bruun-Pedersen JR, Serafin S, Nordahl R. Combining Texture-Derived Vibrotactile Feedback, Concatenative Synthesis and Photogrammetry for Virtual Reality Rendering. In Tardón LJ, Barbancho I, Barbancho AM, Peinado A, editors, Proceedings of 2019 Sound and Music Computing Conference. Sound and Music Computing Network. 2019. p. 348