Conference Paper, Year: 2024

MGRFormer: A Multimodal Transformer Approach for Surgical Gesture Recognition

Abstract

Automatic surgical gesture recognition has the potential to revolutionize the field of surgery by enhancing patient care, surgical training, and our understanding of surgical skills. By integrating kinematic data, which precisely captures hand movements, with video data, which provides contextual understanding, multimodal machine learning can greatly improve the accuracy of surgical gesture recognition systems by capturing complementary information. Recent research has highlighted the capabilities of Transformer-based models for temporal action segmentation. A key component of these models is the iterative refinement module, which enhances predictions using contextual data. In this study, we propose MGRFormer, a novel multimodal framework that leverages the interaction between kinematic and visual data at the refinement stage for the task of surgical gesture recognition. We evaluated MGRFormer on the VTS dataset, and the results demonstrate that our approach outperforms unimodal and multimodal state-of-the-art methods by a large margin.
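To illustrate the core idea of fusing modalities at the refinement stage, the sketch below shows one plausible way a refinement module could let each modality's frame-wise predictions be corrected with context from the other. This is a minimal, hypothetical PyTorch sketch: the module name (CrossModalRefinement), layer sizes, and cross-attention fusion scheme are assumptions made for illustration, not the architecture described in the paper.

import torch
import torch.nn as nn

class CrossModalRefinement(nn.Module):
    """Hypothetical refinement stage in which kinematic and visual
    prediction streams exchange context via cross-attention.
    Dimensions and fusion scheme are illustrative assumptions,
    not MGRFormer's actual design."""

    def __init__(self, num_classes: int, dim: int = 64, heads: int = 4):
        super().__init__()
        # Project per-frame class logits of each modality into a shared space.
        self.kin_proj = nn.Linear(num_classes, dim)
        self.vid_proj = nn.Linear(num_classes, dim)
        # Each stream attends to the other modality's features.
        self.kin_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.vid_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Fused features are mapped back to refined gesture logits.
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, kin_logits, vid_logits):
        # kin_logits, vid_logits: (batch, time, num_classes) initial predictions.
        k = self.kin_proj(kin_logits)
        v = self.vid_proj(vid_logits)
        k_ref, _ = self.kin_attn(k, v, v)  # kinematics queries visual context
        v_ref, _ = self.vid_attn(v, k, k)  # video queries kinematic context
        fused = torch.cat([k_ref, v_ref], dim=-1)
        return self.head(fused)            # refined per-frame gesture logits

if __name__ == "__main__":
    refine = CrossModalRefinement(num_classes=6)
    kin = torch.randn(2, 100, 6)  # e.g. 100 frames, 6 gesture classes
    vid = torch.randn(2, 100, 6)
    print(refine(kin, vid).shape)  # torch.Size([2, 100, 6])

In this toy setup, each stream queries the other modality via cross-attention before a shared head produces refined per-frame gesture logits; the paper's actual refinement module may differ in depth, fusion order, and training objective.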
Main file: MGRFormer_FG24.pdf (1.4 MB)

Dates and versions

hal-04603132, version 1 (10-06-2024)

Identifiers

  • HAL Id: hal-04603132, version 1

Cite

Kevin Feghoul, Deise Santana Maia, Mehdi El Amrani, Mohamed Daoudi, Ali Amad. MGRFormer: A Multimodal Transformer Approach for Surgical Gesture Recognition. 18th International Conference on Automatic Face and Gesture Recognition (FG), May 2024, Istanbul, Turkey. ⟨hal-04603132⟩