Combined Representations for Adept Learning (CORAL)

Description

Sponsored by DARPA’s Learning with Less Labeling (LwLL) program, CORAL develops machine learning algorithms that require significantly smaller amounts of labeled training data. Target applications include computer vision tasks, such as image classification, object detection, and semantic image segmentation, and natural language processing tasks, such as machine translation and named entity recognition.

Team

Collaborating Institutions

Publications

October 2021

IEEE/CVF International Conference on Computer Vision (ICCV)

When solving zero-shot semantic segmentation problems, the need for pixel-level prediction with surrounding context motivates us to incorporate spatial information using positional encoding. We improve standard positional encoding by introducing the concept of Relative Positional Encoding, which integrates spatial information at the feature level and can handle arbitrary image sizes. Furthermore, we propose a new knowledge-distillation-inspired self-training strategy, namely Annealed Self-Training, which automatically assigns different importance to pseudo-labels to improve performance. We systematically study the proposed Relative Positional Encoding and Annealed Self-Training in an extensive experimental evaluation, and our empirical results confirm the effectiveness of our method on three benchmark datasets.
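The annealing idea lends itself to a short illustration. The PyTorch sketch below shows a knowledge-distillation-style self-training loss in which per-pixel pseudo-label importance comes from teacher confidence and a softmax temperature is annealed over training; the linear schedule, the confidence weighting, and all names (`annealed_self_training_loss`, `t_start`, `t_end`) are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def annealed_self_training_loss(student_logits, teacher_logits,
                                step, total_steps,
                                t_start=2.0, t_end=0.5):
    """Sketch of a KD-style self-training loss with annealed pseudo-label
    importance. Shapes: logits are (B, C, H, W). The schedule and the
    confidence-based weighting are illustrative assumptions."""
    # Linearly anneal the softmax temperature from t_start to t_end.
    t = t_start + (t_end - t_start) * (step / total_steps)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher_logits / t, dim=1)  # soft pseudo-labels
        # Per-pixel teacher confidence acts as an importance weight.
        weight, _ = teacher_probs.max(dim=1)                  # (B, H, W)
    log_student = F.log_softmax(student_logits, dim=1)
    # Per-pixel cross-entropy against the soft pseudo-labels.
    pixel_loss = -(teacher_probs * log_student).sum(dim=1)    # (B, H, W)
    return (weight * pixel_loss).mean()
```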

October 2021

IEEE/CVF International Conference on Computer Vision (ICCV)

Few-shot learning has been studied to mimic human visual capabilities and learn effective models without the need for exhaustive human annotation. Even though the idea of meta-learning for adaptation has dominated few-shot learning methods, how to train a feature extractor is still a challenge. In this paper, we focus on the design of a training strategy that yields an elemental representation such that the prototype of each novel class can be estimated from a few labeled samples. We propose a two-stage training scheme, Partner-Assisted Learning (PAL), which first trains a partner encoder to model pair-wise similarities and extract features serving as soft-anchors, and then trains a main encoder by aligning its outputs with the soft-anchors while attempting to maximize classification performance.
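The second stage can be sketched as a two-term objective. The PyTorch snippet below is a minimal illustration, assuming a cosine-similarity alignment between the main encoder's features and the frozen partner's soft-anchors plus a standard classification loss; the function name `pal_stage2_loss`, the cosine formulation, and `align_weight` are hypothetical choices, not necessarily the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def pal_stage2_loss(main_feats, partner_feats, logits, labels,
                    align_weight=1.0):
    """Sketch of PAL's second stage: align the main encoder's features with
    the frozen partner's soft-anchors while training the classifier.
    The cosine alignment term and its weight are illustrative assumptions."""
    with torch.no_grad():
        anchors = F.normalize(partner_feats, dim=-1)   # frozen soft-anchors
    feats = F.normalize(main_feats, dim=-1)
    # Pull each main-encoder feature toward its partner soft-anchor.
    align_loss = (1.0 - (feats * anchors).sum(dim=-1)).mean()
    # Standard classification objective on the main encoder's head.
    cls_loss = F.cross_entropy(logits, labels)
    return cls_loss + align_weight * align_loss
```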

January 2021

IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)

Several semi-supervised learning approaches have been proposed to train neural networks using small amounts of labeled data together with a large amount of unlabeled data. The performance of semi-supervised methods degrades significantly as the amount of labeled data decreases. We introduce Mutual-information-based Unsupervised & Semi-supervised Concurrent LEarning (MUSCLE), a hybrid learning approach that uses mutual information to combine both unsupervised and semi-supervised learning. MUSCLE can be used as a stand-alone training scheme for neural networks, and can also be incorporated into other learning approaches.
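To make the hybrid objective concrete, the sketch below combines a supervised cross-entropy term on the labeled batch with a mutual-information term between the class distributions predicted for two augmented views of the same unlabeled images, in the style of IIC-type objectives. MUSCLE's actual estimator may differ; the functions `mutual_information_loss` and `muscle_loss` and the `mi_weight` parameter are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mutual_information_loss(probs_a, probs_b, eps=1e-8):
    """Negative mutual information between the predicted class distributions
    of two views (an IIC-style estimator, assumed here for illustration).
    Shapes: probs_a, probs_b are (B, C) softmax outputs."""
    # Empirical joint distribution over class pairs, symmetrized.
    joint = probs_a.t() @ probs_b / probs_a.size(0)   # (C, C)
    joint = ((joint + joint.t()) / 2).clamp_min(eps)
    pa = joint.sum(dim=1, keepdim=True)               # marginal of view A
    pb = joint.sum(dim=0, keepdim=True)               # marginal of view B
    # I(A; B) = sum_{a,b} p(a,b) log p(a,b) / (p(a) p(b)); return its negative.
    return -(joint * (joint.log() - pa.log() - pb.log())).sum()

def muscle_loss(logits_a, logits_b, logits_lab, labels, mi_weight=1.0):
    """Combine the unsupervised MI term on unlabeled views with a supervised
    term on the labeled batch (the weighting is an assumption)."""
    mi = mutual_information_loss(F.softmax(logits_a, dim=1),
                                 F.softmax(logits_b, dim=1))
    sup = F.cross_entropy(logits_lab, labels)
    return sup + mi_weight * mi
```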

Sponsors

Defense Advanced Research Projects Agency

SPONSOR PROGRAM: DARPA – LwLL