In recent years, advances in machine learning methods such as deep learning have led to significant improvements in our ability to track people and vehicles and to recognise specific individuals. Such technology has enormous potential to enhance the performance of image-based security systems. However, widespread use of such technology has important legal and ethical implications, not least for individuals' right to privacy. In this paper, we describe a technological approach to balancing the two competing goals of system efficacy and privacy. We describe a methodology for constructing a "goal function" that reflects the operator's preferences for detection performance and anonymity. This goal function is combined with an image-processing system that provides tracking and threat-assessment functionality, and with a decision-making framework that assesses the potential value gained by providing the operator with de-anonymized images. The framework takes a probabilistic approach, combining user preferences, a world-state model, the possible user actions and the effectiveness of threat mitigation, and suggests the user action with the largest estimated utility. We show results of operating the system in a perimeter-protection scenario.
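The paper itself does not spell out the goal function here, but the abstract describes an expected-utility selection over user actions that weighs detection benefit against loss of anonymity. The following is a minimal sketch of that kind of decision rule; all names, weights and probabilities (e.g. `p_threat`, `w_detection`, `w_privacy`, the two candidate actions) are hypothetical placeholders, not the authors' actual model.

```python
# Illustrative sketch only: the actual goal function and world-state model used
# in the paper are not given here. Values below are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    detection_gain: float   # assumed benefit to threat mitigation if a threat is present
    privacy_cost: float     # assumed loss of anonymity incurred by taking this action


def expected_utility(action: Action, p_threat: float,
                     w_detection: float, w_privacy: float) -> float:
    """Combine operator preferences (weights), the world-state estimate
    (threat probability) and the action's assumed effects into one utility value."""
    return w_detection * p_threat * action.detection_gain - w_privacy * action.privacy_cost


def recommend(actions: list[Action], p_threat: float,
              w_detection: float, w_privacy: float) -> Action:
    """Suggest the action with the largest estimated utility."""
    return max(actions, key=lambda a: expected_utility(a, p_threat, w_detection, w_privacy))


if __name__ == "__main__":
    actions = [
        Action("keep images anonymized", detection_gain=0.2, privacy_cost=0.0),
        Action("show de-anonymized images", detection_gain=0.9, privacy_cost=0.6),
    ]
    # p_threat stands in for the output of the tracking / threat-assessment stage.
    best = recommend(actions, p_threat=0.7, w_detection=1.0, w_privacy=0.5)
    print(best.name)
```

Under these placeholder numbers, de-anonymization is recommended only when the estimated threat probability is high enough for the weighted detection gain to outweigh the privacy cost, which mirrors the trade-off the abstract describes.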