Adversarial attacks exploit an instability phenomenon that arises generically in inverse problems, e.g., image classification and reconstruction, independently of the computational scheme or method used to solve the problem. We prove mathematically and show empirically that machine learning denoisers (MLDs) are not exempt: there exist adversarial attacks, given by specific noise patterns, that drive an MLD into instability, i.e., the MLD amplifies the noise instead of removing it. We further demonstrate that neither adversarial retraining nor classical filtering provides an exit strategy from this dilemma. Instead, we show that adversarial attacks can be inferred by polynomial regression. Removing the inferred polynomial component from the total noise distribution yields an efficient technique for obtaining robust MLDs, which in turn makes subsequent computer vision tasks such as image segmentation or classification more reliable.
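
The abstract describes inferring the adversarial component by polynomial regression and subtracting it from the total noise before denoising. Below is a minimal illustrative sketch of that idea, not the authors' implementation: it fits a low-degree 2D polynomial surface to a noise map by least squares and removes it from the noisy input. The function names, the polynomial degree, and the use of a known clean image to form the noise map in the toy example are assumptions for illustration only; in practice the noise map would have to be estimated, e.g., from a preliminary denoising pass.

```python
# Sketch: remove a smooth, polynomial-shaped perturbation from a noisy image
# before denoising. Assumption: the adversarial component varies slowly across
# the image and is well approximated by a low-degree 2D polynomial in the
# normalized pixel coordinates.

import numpy as np

def polynomial_design_matrix(h, w, degree):
    """Monomials x**i * y**j with i + j <= degree on a normalized pixel grid."""
    ys, xs = np.mgrid[0:h, 0:w]
    xs = xs.ravel() / max(w - 1, 1)
    ys = ys.ravel() / max(h - 1, 1)
    cols = [xs**i * ys**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    return np.stack(cols, axis=1)            # shape: (h*w, n_terms)

def fit_polynomial_noise(noise_map, degree=3):
    """Least-squares fit of a smooth polynomial surface to a noise map."""
    h, w = noise_map.shape
    A = polynomial_design_matrix(h, w, degree)
    coeffs, *_ = np.linalg.lstsq(A, noise_map.ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)         # inferred polynomial component

if __name__ == "__main__":
    # Toy demonstration with a known clean image and a hypothetical smooth
    # adversarial pattern; real use would rely on an estimated noise map.
    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))
    ys, xs = np.mgrid[0:64, 0:64] / 63.0
    attack = 0.5 * (xs**2 - xs * ys)          # hypothetical adversarial surface
    noisy = clean + attack + 0.05 * rng.standard_normal((64, 64))

    inferred = fit_polynomial_noise(noisy - clean, degree=2)
    denoiser_input = noisy - inferred         # hand this to the MLD
```

The design choice here is that the adversarial perturbation is treated as the smooth, low-order part of the total noise, so subtracting the fitted surface leaves the remaining (e.g., Gaussian) noise for the denoiser to handle as in ordinary operation.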