Image enhancement based on curvature filtering and gradient transforms can effectively suppress noise and enhance image edges. However, the algorithm is difficult to run in real time because of its large computational load. To address this problem, a GPU-based parallel implementation is proposed in this paper. First, a numerical implementation based on central differences is designed to suit the characteristics of the algorithm. Then, a domain decomposition scheme is applied to the parallel Gaussian curvature filter to remove the dependence between neighboring pixels and guarantee convergence. Finally, the multiprocessor warp occupancy is raised to 100% by optimizing the thread grid and register usage. Experimental results demonstrate that the parallel method runs 200-300 times faster than the serial CPU method and processes 4096×4096 images in real time, indicating great potential for practical application.
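As a rough illustration of the domain decomposition idea, the CUDA sketch below updates one of four pixel-parity subsets per kernel launch: pixels whose (row, column) parities match never read each other's neighborhoods, so each subset can be updated fully in parallel while the serial filter's convergence behavior is preserved. The kernel name, the simplified four-distance projection set (the full Gaussian curvature filter uses a larger set of tangent-plane distances), and the launch scheme are assumptions for illustration, not the authors' implementation.

```cuda
// Minimal sketch of one domain-decomposed Gaussian curvature filter pass.
// The kernel name and the reduced four-distance projection set are
// illustrative assumptions, not the paper's exact implementation.
#include <cuda_runtime.h>

__device__ inline float minAbs(float a, float b) {
    return fabsf(a) < fabsf(b) ? a : b;
}

// Update every interior pixel whose (row % 2, col % 2) parity matches
// (rowPar, colPar). Pixels within one parity subset share no neighbors,
// so they can be updated concurrently without read-after-write hazards.
__global__ void gcFilterSubset(float* u, int width, int height,
                               int rowPar, int colPar) {
    int x = 2 * (blockIdx.x * blockDim.x + threadIdx.x) + colPar;
    int y = 2 * (blockIdx.y * blockDim.y + threadIdx.y) + rowPar;
    if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1) return;

    int c = y * width + x;
    float uc = u[c];
    // Central-difference distances to local tangent planes.
    float d1 = 0.5f * (u[c - width] + u[c + width]) - uc;          // vertical
    float d2 = 0.5f * (u[c - 1] + u[c + 1]) - uc;                  // horizontal
    float d3 = 0.5f * (u[c - width - 1] + u[c + width + 1]) - uc;  // diagonal
    float d4 = 0.5f * (u[c - width + 1] + u[c + width - 1]) - uc;  // anti-diagonal

    // Move the pixel by the minimal projection distance, the core
    // curvature-minimizing step of the Gaussian curvature filter.
    u[c] += minAbs(minAbs(d1, d2), minAbs(d3, d4));
}
```

One full iteration would launch this kernel four times, once per (rowPar, colPar) in {0,1}×{0,1}; launches issued on the same stream serialize automatically, so the four subsets are processed in sequence as the decomposition requires.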