Explaining Face Presentation Attack Detection Using Natural Language

December 15, 2021

Introduction

Face recognition-based authentication systems are widely used due to the convenience they offer and their relatively low cost. Nevertheless, these systems are vulnerable to different types of presentation attacks, such as printed face images and plastic masks. To mitigate this vulnerability, face recognition systems are augmented with presentation attack detection modules. However, these modules are typically opaque and do not provide adequate explanations for their decisions, which limits their suitability for wide deployment. In this blog post, we introduce a solution to this problem: it produces natural language explanations for the decisions of already trained face presentation attack detection systems.

While deep learning methods have recently dominated many tasks, including the task of face presentation attack detection (FPAD), their opacity and lack of interpretability have ignited a whole area of research on explaining their behavior, commonly referred to as explainable AI (XAI). Orthogonally, deep learning methods have excelled in natural language generation (NLG) tasks, including image captioning and visual question answering. The solution we introduce in this blog post combines, for the first time, research in FPAD, XAI, and NLG. XAI methods typically use activation maps or semantic attributes as a proxy for the desired explanation. Using natural language instead has the advantage of being more expressive. It also enables tapping into the wealth of information embedded in NLG models, which could be useful in explaining new, unseen scenarios.

Approach

Our solution uses human-generated explanations for presentation attack samples and trains a language generation model to learn a mapping from the FPAD model’s decision and internal representation to a natural language explanation. While the NLG module is trained, the FPAD module is frozen so that its performance is not affected by the explanation task. Note that the decision of the FPAD module, along with its internal representation, is used as input to the NLG module. In this way, the NLG module learns to explain the FPAD module’s decision based on how the FPAD module “sees” the input.
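
To make this setup concrete, below is a minimal PyTorch sketch of the wiring, assuming a generic CNN backbone for the FPAD module and an LSTM decoder for the NLG module. The class names (FPADBackbone, ExplanationDecoder), the layer sizes, and the decoder architecture are illustrative assumptions and not the exact networks used in our system.

import torch
import torch.nn as nn

class FPADBackbone(nn.Module):
    """Stand-in for a pre-trained FPAD classifier (frozen while the NLG module trains)."""
    def __init__(self, feat_dim=512, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.features(x)          # internal representation
        logits = self.classifier(feats)   # bona fide vs. attack decision
        return feats, logits

class ExplanationDecoder(nn.Module):
    """LSTM decoder mapping (features, decision) to an explanation sentence."""
    def __init__(self, feat_dim=512, num_classes=2, vocab_size=5000, hidden=256):
        super().__init__()
        self.init_h = nn.Linear(feat_dim + num_classes, hidden)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, feats, logits, captions):
        # Condition the decoder on the FPAD features and its (softmaxed) decision.
        ctx = torch.cat([feats, logits.softmax(dim=-1)], dim=-1)
        h0 = torch.tanh(self.init_h(ctx)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        states, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.out(states)           # per-step word logits

fpad = FPADBackbone()
for p in fpad.parameters():               # freeze the FPAD module
    p.requires_grad = False
fpad.eval()

decoder = ExplanationDecoder()
images = torch.randn(4, 3, 224, 224)              # toy batch of face images
captions = torch.randint(0, 5000, (4, 12))        # tokenized ground-truth explanations
with torch.no_grad():
    feats, logits = fpad(images)
word_logits = decoder(feats, logits, captions)    # shape: (4, 12, 5000)

The key design choice reflected here is that the decoder is conditioned on both the frozen features and the decision, so the generated explanation describes what the FPAD module actually perceived rather than forming an independent judgment of the input.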

Our approach is illustrated in the figure below. Three different types of losses are used to train the NLG module: (1) A word-wise loss, which measures the mismatch between the words of the generated explanation and the words of the ground truth explanation. (2) A sentence semantic loss, which measures the mismatch between the semantic meaning of the entire generated explanation and that of the corresponding ground truth. This loss is insensitive to word order and to exact word-to-word matches; instead, it relies on a semantic sentence embedding to capture mismatches in meaning. (3) A sentence discriminativeness loss, which measures how indicative the generated explanation is of the specific type of attack detected in the input sample.
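
The sketch below shows one way these three losses could be combined into a single training objective. The concrete choices here, namely cross-entropy for the word-wise loss, a cosine distance between sentence embeddings for the semantic loss, an auxiliary attack-type classifier for the discriminativeness loss, and the loss weights, are assumptions made for illustration rather than the exact formulations used in our experiments.

import torch
import torch.nn as nn
import torch.nn.functional as F

# (1) Word-wise loss: per-word cross-entropy against the ground-truth explanation.
word_loss_fn = nn.CrossEntropyLoss()

# (2) Sentence semantic loss: distance between sentence-level embeddings,
#     insensitive to word order and exact word matches.
def sentence_semantic_loss(gen_emb, gt_emb):
    return 1.0 - F.cosine_similarity(gen_emb, gt_emb, dim=-1).mean()

# (3) Sentence discriminativeness loss: an auxiliary classifier predicts the
#     attack type from the generated explanation's embedding.
class AttackTypeClassifier(nn.Module):
    def __init__(self, emb_dim=384, num_attack_types=5):
        super().__init__()
        self.fc = nn.Linear(emb_dim, num_attack_types)

    def forward(self, sent_emb):
        return self.fc(sent_emb)

# Toy tensors standing in for a real batch.
B, T, V, E = 4, 12, 5000, 384
word_logits = torch.randn(B, T, V, requires_grad=True)  # decoder output
gt_tokens = torch.randint(0, V, (B, T))                 # ground-truth word indices
gen_sent_emb = torch.randn(B, E)                        # embedding of generated sentence
gt_sent_emb = torch.randn(B, E)                         # embedding of ground-truth sentence
attack_labels = torch.randint(0, 5, (B,))               # attack-type labels

disc_clf = AttackTypeClassifier()
loss_word = word_loss_fn(word_logits.reshape(-1, V), gt_tokens.reshape(-1))
loss_sem = sentence_semantic_loss(gen_sent_emb, gt_sent_emb)
loss_disc = F.cross_entropy(disc_clf(gen_sent_emb), attack_labels)

# Weighted sum; the weights below are placeholders, not tuned values.
total_loss = loss_word + 0.5 * loss_sem + 0.5 * loss_disc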

Experimental Results

The figure below shows some qualitative results from our model. The top row corresponds to correct explanations. The second row shows some types of errors our model makes. The most common cause of an erroneous explanation is a mistaken FPAD decision, in which case the NLG module is trying to explain a wrong decision. Other explanation errors include declaring the wrong material for the attack or making a grammatical error.