Significance: Functional near-infrared spectroscopy (fNIRS), a well-established neuroimaging technique, enables monitoring of cortical activation in unconstrained subjects. However, motion artifacts are a common type of noise that can hamper the interpretation of fNIRS data. Current methods proposed to mitigate motion artifacts in fNIRS data still depend on expert knowledge and post hoc parameter tuning.
Aim: Here, we report a deep learning method that aims at assumption-free motion artifact removal from fNIRS data. To the best of our knowledge, this is the first investigation to report on the use of a denoising autoencoder (DAE) architecture for motion artifact removal.
Approach: To facilitate the training of this deep learning architecture, we (i) designed a specific loss function and (ii) generated data to mimic the properties of recorded fNIRS sequences.
Results: The DAE model outperformed conventional methods in lowering residual motion artifacts, decreasing mean squared error, and increasing computational efficiency.
Conclusion: Overall, this work demonstrates the potential of deep learning models for accurate and fast motion artifact removal in fNIRS data.
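To make the approach concrete, the following is a minimal sketch, in PyTorch, of a 1D convolutional denoising autoencoder trained on synthetic fNIRS-like sequences corrupted by spike and baseline-shift artifacts. The layer sizes, the synthetic artifact generator, and the composite reconstruction-plus-smoothness loss are illustrative assumptions, not the specific architecture, training data, or loss function reported above.

# Minimal sketch of a 1D convolutional denoising autoencoder (DAE) for
# fNIRS-like sequences. Layer sizes, the synthetic artifact model, and the
# composite loss are illustrative assumptions, not the published design.
import torch
import torch.nn as nn

class DAE(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=9, padding=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):            # x: (batch, 1, time)
        return self.decoder(self.encoder(x))

def synthetic_batch(batch=32, length=512):
    """Clean slow oscillations plus spike- and baseline-shift-like artifacts."""
    t = torch.linspace(0, 1, length)
    clean = 0.5 * torch.sin(2 * torch.pi * torch.rand(batch, 1) * 5 * t)
    noisy = clean.clone()
    for i in range(batch):
        pos = torch.randint(0, length - 20, (1,)).item()
        noisy[i, pos:pos + 20] += 2.0 * torch.randn(1)   # spike artifact
        noisy[i, pos:] += 0.5 * torch.randn(1)           # baseline shift
    return noisy.unsqueeze(1), clean.unsqueeze(1)

def loss_fn(pred, target, smooth_weight=0.1):
    """Assumed composite loss: reconstruction MSE plus a smoothness penalty."""
    mse = nn.functional.mse_loss(pred, target)
    smooth = (pred[..., 1:] - pred[..., :-1]).pow(2).mean()
    return mse + smooth_weight * smooth

model = DAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    noisy, clean = synthetic_batch()
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()

In practice, a trained model of this kind would be applied channel-wise to recorded fNIRS sequences and compared against conventional correction approaches on held-out data.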
Perception-action cycle-based motor learning theory postulates coupled action and perception for visuomotor learning. Based on this theory, we hypothesized that perception-action-related brain connectivity underpins visuomotor skill levels in a complex motor task. We tested our hypothesis using multimodal brain imaging of healthy human subjects (N = 6 experts, N = 8 novices, all right-handed) during performance of the Fundamentals of Laparoscopic Surgery (FLS) "suturing and intracorporeal knot-tying" task. We investigated dynamic directed brain networks using nonoverlapping sliding-window-based spectral Granger causality (GC) from simultaneously acquired electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) signals. Our GC analysis of the EEG signals showed a flow of information from the supplementary motor area complex (SMA) to the left primary motor cortex (LM1) that was statistically different (p < 0.05) between experts and novices. This result aligns with perception-action cycle theory, in which the SMA is central to the orderly descent from the prefrontal to the motor cortex in Fuster's perception-action processing stages. The GC analysis of the fNIRS oxyhemoglobin signal revealed connectivity from the left to the right primary motor cortex (LM1 to RM1) and from LM1 to the left prefrontal cortex (LPFC) that was significantly different (p < 0.05) between the cohorts. Our preliminary results thus support the involvement of perception-action-related directed brain connectivity in distinguishing skill levels during a complex laparoscopic task, measured with portable brain imaging during task performance. Future studies need to investigate the fusion of the EEG and fNIRS networks for causal brain-behavior analysis of complex motor skill acquisition.
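As a rough illustration of the windowed connectivity analysis, the sketch below computes nonoverlapping sliding-window Granger causality between two signals (e.g., an SMA and an LM1 channel), using the time-domain GC test from statsmodels as a stand-in for the spectral GC used above; the window length, model order, and synthetic test signals are illustrative assumptions.

# Minimal sketch of nonoverlapping sliding-window Granger causality (GC)
# between two cortical signals (e.g., SMA -> LM1). Uses the time-domain GC
# test from statsmodels as a stand-in for spectral GC; window length, model
# order, and the synthetic signals are illustrative assumptions.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def sliding_window_gc(source, target, win_len=256, max_lag=5):
    """F-statistic of 'source Granger-causes target' per nonoverlapping window."""
    n_windows = len(source) // win_len
    stats = []
    for w in range(n_windows):
        seg = slice(w * win_len, (w + 1) * win_len)
        # statsmodels tests whether column 2 Granger-causes column 1
        data = np.column_stack([target[seg], source[seg]])
        res = grangercausalitytests(data, maxlag=max_lag, verbose=False)
        stats.append(res[max_lag][0]["ssr_ftest"][0])
    return np.asarray(stats)

# Toy example: 'target' is a lagged, noisy copy of 'source'
rng = np.random.default_rng(0)
source = rng.standard_normal(2048)
target = np.roll(source, 3) + 0.5 * rng.standard_normal(2048)
print(sliding_window_gc(source, target))

Window-wise statistics of this kind could then be compared between expert and novice cohorts with a standard significance test.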
Optical neuroimaging is a promising tool for assessing motor skill execution. In particular, functional near-infrared spectroscopy (fNIRS) enables the monitoring of cortical activations in scenarios such as surgical task execution. fNIRS data sets are typically preprocessed to derive a few biomarkers that are used to correlate cortical activations with behavior. Meanwhile, deep learning methodologies have found great utility in processing complex spatiotemporal data for classification or prediction tasks. Here, we report on a deep convolutional model that takes spatiotemporal fNIRS data sets as input to classify subjects performing a Fundamentals of Laparoscopic Surgery (FLS) task used in the board certification of general surgeons in the United States. This convolutional neural network (CNN) uses dilated kernels paired with multiple stacks of convolutions to capture long-range dependencies in the fNIRS time sequence. The model is trained in a supervised manner on 474 FLS trials obtained from seven subjects and assessed by stratified 10-fold cross-validation (CV). Results demonstrate that the model can learn discriminatory features between passed and failed trials, attaining areas of 0.99 and 0.95 under the receiver operating characteristic (ROC) and precision-recall curves, respectively. The reported accuracy, sensitivity, and specificity are 97.7%, 81%, and 98.9%, respectively.
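A minimal sketch of such a dilated 1D CNN, assuming multichannel fNIRS time series as input, is given below; the channel counts, dilation schedule, and classification head are illustrative assumptions rather than the reported architecture.

# Minimal sketch of a dilated 1D CNN for pass/fail classification of
# multichannel fNIRS trials. Channel counts, dilation schedule, and head are
# illustrative assumptions, not the reported architecture.
import torch
import torch.nn as nn

class DilatedFNIRSClassifier(nn.Module):
    def __init__(self, in_channels=20, hidden=32, n_blocks=4):
        super().__init__()
        blocks = []
        ch = in_channels
        for b in range(n_blocks):
            d = 2 ** b                                   # dilation 1, 2, 4, 8
            blocks += [
                nn.Conv1d(ch, hidden, kernel_size=3, dilation=d, padding=d),
                nn.BatchNorm1d(hidden),
                nn.ReLU(),
            ]
            ch = hidden
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(hidden, 1)
        )

    def forward(self, x):              # x: (batch, channels, time)
        return self.head(self.features(x)).squeeze(-1)   # one logit per trial

# Toy forward pass: 8 trials, 20 fNIRS channels, 1000 time points
model = DilatedFNIRSClassifier()
logits = model(torch.randn(8, 20, 1000))
probs = torch.sigmoid(logits)          # probability of a "passed" trial

Stratified 10-fold cross-validation over trial labels could then be organized with sklearn.model_selection.StratifiedKFold, training one such model per fold.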