Image super-resolution is one of the key problems of image restoration. While many methods focus on known degradations (such as bicubic downsampling), in real-world use cases the degradation models are complex and unknown and therefore difficult to prepare for. Training such a model in a supervised manner requires paired data with real-world degradations. Since usually only unpaired high- and low-resolution data is available, owing to the high cost of collecting and aligning real paired data, methods that tackle the blind super-resolution task face the problem of choosing degradations for training data generation. Some existing methods provide a degradation pipeline that includes noise injection, JPEG compression, downsampling with different kernels, etc. These methods may be effective in some cases, but they offer no mechanism for the pipeline to adapt to a particular real-world scenario and therefore fall short in performance. The approach presented in this paper instead simulates the degradation directly, without trying to construct it from a predefined list of operations. This can be done with modern generative models such as diffusion models, which have strong generalization capabilities and are known to model data distributions well. The proposed method uses a diffusion model trained for low-resolution image generation to simulate the degradations and construct paired data from high-resolution data. We compare the proposed diffusion-based method with existing paired data generation techniques and show that it yields a performance boost.
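A minimal sketch of this paired-data construction idea is shown below, assuming a DDPM-style conditional sampler; `eps_model`, the shortened noise schedule, and the bicubic conditioning input are illustrative placeholders rather than the authors' actual pipeline.

```python
import torch
import torch.nn.functional as F

# Toy noise schedule (shortened to keep the sketch fast); a trained model
# would use its own schedule and many more steps.
T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Placeholder for a diffusion model trained on real-world LR images,
# conditioned on a clean (bicubic) LR guide of the HR input.
eps_model = torch.nn.Sequential(
    torch.nn.Conv2d(6, 64, 3, padding=1),   # input: noisy LR + bicubic-LR condition
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 3, 3, padding=1),   # predicted noise
)

@torch.no_grad()
def simulate_lr(hr: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Sample a realistically degraded LR counterpart of an HR batch."""
    cond = F.interpolate(hr, scale_factor=1 / scale, mode="bicubic")  # clean LR guide
    x = torch.randn_like(cond)                                        # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(torch.cat([x, cond], dim=1))                  # predict noise
        a, ab = alphas[t], alpha_bars[t]
        x = (x - (1 - a) / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)  # DDPM mean step
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)        # re-inject noise except at t=0
    return x

hr_batch = torch.rand(2, 3, 128, 128)   # stand-in for real HR crops
lr_batch = simulate_lr(hr_batch)        # paired LR with simulated real-world degradation
print(lr_batch.shape)                   # torch.Size([2, 3, 32, 32])
```

The resulting (HR, simulated LR) pairs can then be used to train a super-resolution network in the usual supervised fashion.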