
The function of the central nervous system is to control the activities of the mind and the human body. The medical field is undergoing a transformation driven by the growing use of medical information systems, electronic records, and smart, wearable, and handheld devices. Rapid medical and computational advances related to the central nervous system enable practitioners and researchers to extract and visualize insights from these systems. The function of augmented reality is to combine virtual and real objects, running interactively in real time within the real environment. The role of augmented reality in the central nervous system therefore becomes a thought-provoking task: gesture-interaction-based augmented reality in the central nervous system has enormous potential for reducing the cost of care, improving its quality, and cutting waste and errors. To make this process smooth, it is useful to present a comprehensive study of the available state-of-the-art work, so that doctors and practitioners can easily use it in the decision-making process.

Arm-and-hand tracking by technological means allows gathering data that can be processed to determine the meaning of a gesture. To this aim, machine learning (ML) algorithms have mostly been investigated in search of a balance between the highest recognition rate and the lowest recognition time. However, this balance comes mainly from statistical models, which are challenging to interpret. We therefore consider two geometric model-based approaches to gesture recognition, which support the visualization and geometric interpretation of the recognition process. They are compared with two classical ML algorithms, k-nearest neighbor (k-NN) and support vector machine (SVM), and two state-of-the-art (SotA) deep learning (DL) models, bidirectional long short-term memory (BiLSTM) and gated recurrent unit (GRU), on an experimental dataset of ten gesture classes from the Italian Sign Language (LIS), each repeated 100 times by five inexperienced non-native signers and gathered with wearable technology (a sensory glove and inertial measurement units). As a result, we achieve a compromise between high recognition rates and recognition times that is adequate for human–computer interaction. Moreover, we elaborate on the algorithms' geometric interpretation based on geometric algebra, which supports some understanding of the recognition process.
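The text above does not include implementation details for the classical ML baselines. Purely as an illustrative sketch, the following Python snippet shows how a k-NN and an SVM classifier could be compared on fixed-length windows of glove and IMU channels; the data shapes, the random stand-in arrays, and all variable names are assumptions, not the authors' setup.

```python
# Hypothetical sketch of a classical ML baseline (k-NN vs. SVM) for gesture
# classification from wearable-sensor windows. Shapes and data are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: 10 gesture classes x 100 repetitions, each a window of
# 50 time steps x 20 sensor channels (glove flex sensors + IMU readings).
n_classes, n_reps, n_steps, n_channels = 10, 100, 50, 20
X = rng.normal(size=(n_classes * n_reps, n_steps, n_channels))
y = np.repeat(np.arange(n_classes), n_reps)

# Flatten each window into a single feature vector for the classical models.
X_flat = X.reshape(len(X), -1)
X_tr, X_te, y_tr, y_te = train_test_split(
    X_flat, y, test_size=0.2, stratify=y, random_state=0
)

models = {
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", model.score(X_te, y_te))
```

With real recordings, the random arrays would be replaced by per-gesture sensor windows resampled to a common length and normalized per channel.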

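The geometric-algebra interpretation itself is not spelled out in this section. As a hypothetical illustration only, the sketch below builds the simplest object such interpretations rest on: a rotor of 3D geometric algebra, written through its scalar and vector (quaternion) components, that carries one measured hand-direction vector onto another, so a gesture segment can be read as an explicit rotation rather than an opaque statistical score.

```python
# Hypothetical illustration: the rotor of 3D geometric algebra that rotates one
# unit vector onto another, represented by its scalar + vector (quaternion) parts.
import numpy as np

def rotor_between(a, b):
    """Return the normalized rotor rotating unit vector a onto unit vector b.

    Antiparallel vectors (a == -b) would need special handling; omitted here.
    """
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    q = np.concatenate(([1.0 + a @ b], np.cross(a, b)))  # unnormalized rotor
    return q / np.linalg.norm(q)

def apply_rotor(q, v):
    """Rotate vector v by rotor q (sandwich product, in quaternion form)."""
    w, u = q[0], q[1:]
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Example: direction of the index finger at the start and end of a gesture segment.
start = np.array([1.0, 0.0, 0.0])
end = np.array([0.0, 1.0, 0.0])
R = rotor_between(start, end)
print(apply_rotor(R, start))  # ~ [0, 1, 0]: the rotor encodes the hand motion
```

Chaining such rotors over the frames of a recording yields a sequence of explicit rotations, which is one way a recognition process can be visualized and interpreted geometrically.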