Automatically Generated Exploded View Animations in VR: A Deep Learning Approach

Jesper Gaarsdal*, Sune Wolff, Kiyoshi Kiyokawa, Claus Brøndgaard Madsen


Publication: Contribution to journal › Journal article › Research › peer review


Exploded view animations are widely used to communicate the structure of complex mechanical assemblies in fields such as manufacturing and education. Creating these animations is largely a manual process that requires expert knowledge. In this paper, we introduce a novel tool for creating exploded views within a virtual reality (VR) environment, offering a human-in-the-loop process that allows users to adjust the final animation. Our approach combines principles from traditional assembly-by-disassembly with modern machine learning techniques to determine the order and direction of part disassembly. The core of our methodology is a novel point cloud classification network, PointDAN, for predicting the disassembly of parts. Another key contribution is the development of a public point cloud dataset, facilitating the training of models to predict whether parts can be disassembled. We report the performance of our trained networks, along with the performance of the full assembly-by-disassembly process. Furthermore, we report on an expert user study with participants spanning various industries, which demonstrates the industrial applicability and potential of the tool. Our findings show that integrating assembly-by-disassembly and machine learning not only simplifies the automatic generation of exploded layouts but also has the potential to create highly accurate animations. A project page containing the dataset and code will be made available upon acceptance of the manuscript at
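To make the classification step concrete, the sketch below shows a minimal PointNet-style scorer for a single part's point cloud: a shared per-point MLP, an order-invariant max-pool, and a sigmoid head producing the probability that the part can be disassembled next. This is an illustrative assumption, not the paper's actual PointDAN architecture; the layer sizes, function names, and random (untrained) weights are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def shared_mlp(points, weights, biases):
    """Apply the same MLP to every point: (N, d_in) -> (N, d_out)."""
    h = points
    for W, b in zip(weights, biases):
        h = relu(h @ W + b)
    return h

def classify_part(points):
    """Score one part's point cloud: probability it can be disassembled next.

    Sketch only: per-point shared MLP -> symmetric max-pool -> dense head
    with sigmoid. Weights are random here; a real model would be trained
    on labelled (dis)assembly examples, as the paper's dataset provides.
    """
    # Hypothetical layer sizes; the published architecture may differ.
    W1, b1 = rng.normal(size=(3, 64)) * 0.1, np.zeros(64)
    W2, b2 = rng.normal(size=(64, 128)) * 0.1, np.zeros(128)
    per_point = shared_mlp(points, [W1, W2], [b1, b2])   # (N, 128)
    global_feat = per_point.max(axis=0)                  # order-invariant pooling
    W3, b3 = rng.normal(size=(128, 1)) * 0.1, np.zeros(1)
    logit = global_feat @ W3 + b3
    return 1.0 / (1.0 + np.exp(-logit[0]))               # sigmoid -> probability

cloud = rng.normal(size=(1024, 3))  # 1024 sampled surface points (x, y, z)
p = classify_part(cloud)
```

In an assembly-by-disassembly loop, such a scorer would be evaluated on every remaining part, and the highest-scoring part removed first, repeating until the assembly is fully exploded.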
Journal: IEEE Transactions on Visualization and Computer Graphics
Number of pages: 11
Status: Submitted - 21 Dec. 2023
