Multi-source unsupervised domain adaptation (MUDA), which leverages knowledge from multiple relevant source domains with different distributions to improve learning performance on the target domain, has received increasing attention. The most common approach to MUDA is to perform pairwise distribution alignment between the target and each source domain. However, existing methods usually treat each source domain identically in source-source and source-target alignment, which ignores the differences among the source domains and may lead to imperfect alignment. In addition, these methods often neglect samples near the classification boundaries during the adaptation process, resulting in misalignment of these samples. In this paper, we propose a new framework for MUDA, named Joint Alignment and Compactness Learning (JACL). We design an adaptive weighting network that automatically adjusts the importance of marginal and conditional distribution alignment, and these weights are used to adaptively align each source-target domain pair. We further propose learning intra-class compact features for target samples that lie near classification boundaries to reduce the domain shift. Extensive experiments demonstrate that our method achieves remarkable results on three datasets (Digit-five, Office-31, and Office-Home) compared to recent strong baselines.
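The adaptive weighting idea can be illustrated with a minimal sketch: each source domain contributes a marginal alignment term (here a linear-kernel MMD, one common choice) and a conditional alignment term, combined by a per-domain weight. The function names, the choice of MMD, and the fixed weights below are assumptions for illustration; in JACL the weights are produced by a learned network that is not detailed in the abstract.

```python
import numpy as np

def mmd_linear(x, y):
    # Linear-kernel maximum mean discrepancy between two feature batches:
    # squared distance between the batch means.
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

def weighted_alignment_loss(src_feats, tgt_feat, cond_losses, weights):
    # Combine marginal (MMD) and conditional alignment terms per source
    # domain; weights[k] in [0, 1] trades off the two terms for source k.
    # (Illustrative only: JACL predicts these weights with a network.)
    total = 0.0
    for xs, l_cond, w in zip(src_feats, cond_losses, weights):
        l_marg = mmd_linear(xs, tgt_feat)
        total += w * l_marg + (1.0 - w) * l_cond
    return total

# Two synthetic source domains with different shifts from the target.
rng = np.random.default_rng(0)
sources = [rng.normal(m, 1.0, size=(64, 16)) for m in (0.0, 0.5)]
target = rng.normal(0.0, 1.0, size=(64, 16))
loss = weighted_alignment_loss(sources, target,
                               cond_losses=[0.2, 0.3],
                               weights=[0.6, 0.4])
```

Setting a weight to 1 recovers a purely marginal alignment for that source, and 0 a purely conditional one, which is the trade-off the weighting network adjusts per source-target pair.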
Significance: Reducing the bit depth is an effective approach to lower the cost of an optical coherence tomography (OCT) imaging device and increase the transmission efficiency in data acquisition and telemedicine. However, a low bit depth will lead to the degradation of the detection sensitivity, thus reducing the signal-to-noise ratio (SNR) of OCT images.
Aim: We propose using deep learning to reconstruct high SNR OCT images from low bit-depth acquisition.
Approach: The feasibility of our approach is evaluated by applying this approach to the quantized 3- to 8-bit data from native 12-bit interference fringes. We employ a pixel-to-pixel generative adversarial network (pix2pixGAN) architecture in the low-to-high bit-depth OCT image transition.
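The quantization step described above can be sketched as follows: native 12-bit fringe values are re-quantized to a lower bit depth by discarding low-order bits. Uniform quantization via bit-shifting is an assumption here; the paper's exact quantization scheme is not specified in the abstract.

```python
import numpy as np

def quantize_fringes(fringes_12bit, target_bits):
    # Uniformly re-quantize native 12-bit fringe data to a lower bit depth
    # by dropping low-order bits, then shift back to the 12-bit scale.
    # (A minimal sketch under an assumed uniform quantization scheme.)
    shift = 12 - target_bits
    return (fringes_12bit >> shift) << shift

# Simulated 12-bit interference fringe samples.
rng = np.random.default_rng(0)
native = rng.integers(0, 2**12, size=2048, dtype=np.uint16)
low4 = quantize_fringes(native, 4)  # 4-bit acquisition: at most 16 levels
```

The quantized signal keeps the coarse fringe structure but loses fine amplitude detail, which is the SNR degradation the pix2pixGAN is trained to restore.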
Results: Extensive qualitative and quantitative results show that our method can significantly improve the SNR of low bit-depth OCT images. The adopted pix2pixGAN is superior to other candidate deep learning and compressed sensing solutions.
Conclusions: Our work demonstrates that the proper integration of OCT and deep learning could benefit the development of healthcare in low-resource settings.