Conference paper, Year: 2025

AG-MAE: Anatomically Guided Spatio-Temporal Masked Auto-Encoder for Online Hand Gesture Recognition

Abstract

Hand gesture recognition plays a crucial role in computer vision, as it enhances human-computer interaction by enabling intuitive, touch-free control and communication. While offline methods have made significant advances in isolated gesture recognition, real-world applications demand online and continuous processing. Skeleton-based methods, though effective, face challenges due to the intricate structure of hand joints and the diverse 3D motions they produce. This paper introduces AG-MAE, a novel approach that integrates anatomical constraints to guide the self-supervised training of a spatio-temporal masked autoencoder, enhancing the learning of 3D keypoint representations. By incorporating anatomical knowledge, AG-MAE learns more discriminative features for hand poses and movements, subsequently improving online gesture recognition. Evaluation on standard datasets demonstrates the superiority of our approach and its potential for real-world applications.
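
To make the core idea concrete, the sketch below shows, at a high level, how an anatomically guided masked-autoencoder objective over 3D hand keypoints could be set up in PyTorch. It is not the authors' implementation: the module names, the 21-joint bone list, the random masking strategy, and the bone-length consistency term used as the anatomical constraint are all illustrative assumptions.

# Minimal sketch (not the authors' implementation): a spatio-temporal masked
# autoencoder over 3D hand keypoints whose reconstruction loss is augmented
# with an anatomical bone-length consistency term. Names, the bone list and
# all hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn

# Hypothetical parent->child connectivity of a 21-joint hand skeleton.
HAND_BONES = [(0, 1), (1, 2), (2, 3), (3, 4),        # thumb
              (0, 5), (5, 6), (6, 7), (7, 8),        # index
              (0, 9), (9, 10), (10, 11), (11, 12),   # middle
              (0, 13), (13, 14), (14, 15), (15, 16), # ring
              (0, 17), (17, 18), (18, 19), (19, 20)] # little

class TinySTMae(nn.Module):
    """Tokenize each (frame, joint) 3D point, mask a subset, reconstruct all.
    Positional/temporal embeddings are omitted for brevity."""
    def __init__(self, dim=64, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Linear(3, dim)                       # per-joint tokenizer
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.decoder = nn.Linear(dim, 3)                     # regress xyz back

    def forward(self, x, mask):
        # x: (B, T, J, 3) keypoints; mask: (B, T, J) booleans, True = hidden
        B, T, J, _ = x.shape
        tok = self.embed(x).reshape(B, T * J, -1)
        tok = torch.where(mask.reshape(B, T * J, 1), self.mask_token, tok)
        return self.decoder(self.encoder(tok)).reshape(B, T, J, 3)

def bone_lengths(x, bones=HAND_BONES):
    # x: (B, T, J, 3) -> (B, T, len(bones)) Euclidean bone lengths
    src = x[:, :, [i for i, _ in bones]]
    dst = x[:, :, [j for _, j in bones]]
    return (src - dst).norm(dim=-1)

def ag_mae_loss(model, x, mask_ratio=0.6, lambda_anat=0.1):
    # Randomly hide a fraction of space-time tokens and reconstruct them.
    B, T, J, _ = x.shape
    mask = torch.rand(B, T, J, device=x.device) < mask_ratio
    recon = model(x, mask)
    rec_loss = ((recon - x) ** 2)[mask].mean()
    # Anatomical guidance: reconstructed bone lengths should match the input's.
    anat_loss = (bone_lengths(recon) - bone_lengths(x)).abs().mean()
    return rec_loss + lambda_anat * anat_loss

if __name__ == "__main__":
    model = TinySTMae()
    clip = torch.randn(2, 16, 21, 3)   # (batch, frames, joints, xyz)
    loss = ag_mae_loss(model, clip)
    loss.backward()
    print(float(loss))

In this sketch the anatomical guidance is a single regularizer weighted by lambda_anat; the actual formulation of AG-MAE's anatomical constraints is described in the paper itself.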
Main file: 3DV_2025___Omar.pdf (3.94 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04793721, version 1 (20-11-2024)

Identifiers

  • HAL Id: hal-04793721, version 1

Cite

Omar Ikne, Benjamin Allaert, Hazem Wannous. AG-MAE: Anatomically Guided Spatio-Temporal Masked Auto-Encoder for Online Hand Gesture Recognition. International Conference on 3D Vision, Mar 2025, Singapore. ⟨hal-04793721⟩