AssemblyNet: A Point Cloud Dataset and Benchmark for Predicting Part Directions in an Exploded Layout

Publication: Contribution to book/anthology/report/conference proceeding › Conference article in proceedings › Research › peer-reviewed

Abstract

Exploded views are powerful tools for visualizing the assembly and disassembly of complex objects, widely used in technical illustrations, assembly instructions, and product presentations. Previous methods for automating the creation of exploded views are either slow and computationally costly or compromise on accuracy. Therefore, the construction of exploded views is typically a manual process. In this paper, we propose a novel approach for automatically predicting the direction of parts in an exploded view using deep learning. To achieve this, we introduce a new dataset, AssemblyNet, which contains point cloud data sampled from 3D models of real-world assemblies, including water pumps, mixed industrial assemblies, and LEGO models. The AssemblyNet dataset includes a total of 44 assemblies, separated into 495 subassemblies with a total of 5420 parts. We provide ground truth labels for regression and classification, representing the directions in which the parts are moved in the exploded views. We also provide performance benchmarks using various state-of-the-art models for shape classification on point clouds and propose a novel two-path network architecture. Project page available at https://github.com/jgaarsdal/AssemblyNet
Original language: English
Title: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
Number of pages: 9
Publisher: IEEE
Pages: 5836-5845
Status: E-pub ahead of print, 27 Jan 2024
Series: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
ISSN: 2642-9381
