Convolutional neural networks (CNNs) have significantly advanced the analysis of cellular movements. However, CNN-based networks suffer from information loss caused by the intrinsic characteristics of convolution operators, which degrades the performance of cell segmentation and tracking. Researchers have proposed consecutive CNNs to overcome these limitations, although such models are still at an early stage. In this study, we present a novel approach that utilizes cumulative CNNs to segment and track cells in fluorescence videos. Our method incorporates the Vision Transformer (ViT) and a Bayesian network to improve accuracy and performance. By leveraging the ViT architecture together with the Bayesian network, we aim to mitigate information loss and enhance the precision of cell segmentation and tracking.
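As a rough illustration of why the ViT component helps, the sketch below (a minimal NumPy toy, not the paper's implementation; the patch size, embedding width, and random weights are illustrative assumptions) shows single-head self-attention over image patches. Unlike a convolution, whose receptive field is local, every patch token attends to every other patch in a single layer, which is the mechanism ViT uses to avoid locality-induced information loss.

```python
import numpy as np

def patchify(img, p):
    # Split an H x W image into non-overlapping p x p patches,
    # each flattened into a token vector (as in ViT's patch embedding).
    h, w = img.shape
    return img.reshape(h // p, p, w // p, p).swapaxes(1, 2).reshape(-1, p * p)

def self_attention(tokens, d, seed=0):
    # Single-head self-attention with random (illustrative) projections.
    # Each output token is a weighted mix of ALL input tokens: a global
    # receptive field in one layer, unlike a local convolution kernel.
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.normal(size=(tokens.shape[1], d)) for _ in range(3))
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)   # rows are softmax weights
    return attn @ v

img = np.arange(64.0).reshape(8, 8)   # toy 8x8 "frame"
tokens = patchify(img, 4)             # 4 patches, 16 pixels each
out = self_attention(tokens, d=8)
print(tokens.shape, out.shape)        # (4, 16) (4, 8)
```

The Bayesian-network half of the pipeline is not sketched here; in this setting it would sit downstream of the per-frame features to link detections across frames probabilistically.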