Learning Robust Representations (LR2)

Description

Sponsored by DARPA’s Guaranteeing AI Robustness against Deception (GARD) program, LR2 develops deep learning models that are robust to adversarial attacks. Our work covers models for numerous tasks spanning different modalities, including image classification, object detection, action recognition, object tracking, speaker identification, and automatic speech recognition. Our general approach is layered defenses: inner layers deploy modality-agnostic techniques based on robust representation learning, while outer layers deploy modality-specific techniques, such as attack detection and removal.
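The layered-defense idea above can be sketched as a simple pipeline. Everything below is an illustrative stand-in, not the project's actual implementation: the threshold detector, the median-filter "removal" step, and the sign-of-mean classifier are all hypothetical placeholders for the modality-specific outer layers and the robust inner model.

```python
# Hypothetical sketch of a layered defense: outer layers detect and remove
# perturbations; the inner layer is a stand-in for a robustly trained model.

def detect_attack(x, threshold=5.0):
    # Outer layer (modality-specific, hypothetical): flag inputs whose
    # magnitude is implausible for the expected data range.
    return max(abs(v) for v in x) > threshold

def remove_perturbation(x):
    # Outer layer (hypothetical): 3-tap median smoothing as a stand-in for
    # attack-removal preprocessing.
    padded = [x[0]] + list(x) + [x[-1]]
    return [sorted(padded[i:i + 3])[1] for i in range(len(x))]

def robust_classify(x):
    # Inner layer (stand-in for a model with robust representations):
    # classify by the sign of the input mean.
    return 1 if sum(x) / len(x) > 0 else 0

def layered_defense(x):
    if detect_attack(x):
        return None  # reject inputs flagged by the outer detection layer
    return robust_classify(remove_perturbation(x))
```

In this arrangement the outer layers can be swapped per modality (e.g., audio vs. image preprocessing) while the inner robust model stays fixed.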

Team

Collaborating Institutions

Publications

September, 2021

arXiv preprint

In this paper, we introduce a novel non-linear activation function that spontaneously induces class-compactness and regularization in the embedding space of neural networks. The function is dubbed DOME for Difference Of Mirrored Exponential terms. The basic form of the function can replace the sigmoid or the hyperbolic tangent functions as an output activation function for binary classification problems. The function can also be extended to the case of multi-class classification and used as an alternative to the standard softmax function.
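The abstract does not give DOME's closed form, so the function below is only a hedged illustration of the naming idea: a bounded, tanh-like output activation built literally as a difference of two mirrored exponential terms. The specific formula is our assumption, not the paper's definition.

```python
import math

def dome_like(x):
    # Hypothetical "difference of mirrored exponentials" activation
    # (illustrative only; not the DOME function from the paper).
    # The two terms are mirror images of each other under x -> -x,
    # giving an odd, monotone function bounded in (-1, 1).
    return math.exp(-math.exp(-x)) - math.exp(-math.exp(x))
```

Like tanh, such a function maps large positive inputs toward +1 and large negative inputs toward -1, so it could serve as an output activation for binary classification in the way the abstract describes.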

July, 2021

Computer Speech & Language

June, 2021

IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Sponsors

Defense Advanced Research Projects Agency

SPONSOR PROGRAM: DARPA – GARD