Divide, Share, and Conquer: Multi-Task Attribute Learning With Selective Sharing

Abstract

Existing methods to learn visual attributes are plagued by two common issues: (i) they are prone to confusion by properties that are correlated with the attribute of interest among training samples, and (ii) they often learn generic, imprecise "lowest common denominator" attribute models in an attempt to generalize across classes where a single attribute may have very different visual manifestations. Yet, many proposed applications of attributes rely on being able to learn the precise and correct semantic concept corresponding to each attribute. We argue that both issues are largely due to indiscriminate "oversharing" among attribute classifiers along two axes: (i) visual features and (ii) classifier parameters. To address both issues, we introduce the general idea of selective sharing during multi-task learning of attributes. First, we show how selective sharing helps learn decorrelated models for each attribute in a vocabulary. Second, we show how selective sharing permits a new form of transfer learning between attributes, yielding a specialized attribute model for each individual object category. We validate both instantiations of our selective sharing idea through extensive experiments on multiple datasets. We show how they help preserve semantics in learned attribute models, benefiting various downstream applications such as image retrieval and zero-shot learning.
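To make the "selective sharing" idea concrete, below is a minimal sketch (not the authors' implementation) of jointly training several attribute classifiers while an exclusive-lasso-style penalty makes attributes compete for feature dimensions, so correlated attributes are discouraged from relying on the same features. All names and hyperparameters (`train_selective_sharing`, `lam`, `lr`, etc.) are illustrative assumptions, and the toy data at the bottom is synthetic.

```python
# Sketch: multi-task attribute learning with selective feature sharing.
# Each attribute gets its own linear logistic classifier; a per-feature
# penalty (sum_a |W[a, f]|)^2 discourages many attributes from claiming
# the same feature, which loosely decorrelates the learned models.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train_selective_sharing(X, Y, lam=0.1, lr=0.05, n_iters=500):
    """X: (n_samples, n_feats); Y: (n_samples, n_attrs) binary attribute labels."""
    n_samples, n_feats = X.shape
    n_attrs = Y.shape[1]
    W = np.zeros((n_attrs, n_feats))   # one weight vector per attribute
    b = np.zeros(n_attrs)
    for _ in range(n_iters):
        probs = sigmoid(X @ W.T + b)           # (n_samples, n_attrs)
        err = probs - Y                        # logistic-loss gradient term
        grad_W = err.T @ X / n_samples
        grad_b = err.mean(axis=0)
        # Feature-competition penalty: for each feature f, (sum_a |W[a, f]|)^2.
        # Its subgradient pushes attributes toward disjoint feature subsets.
        col_l1 = np.abs(W).sum(axis=0)         # (n_feats,)
        grad_W += lam * 2.0 * col_l1 * np.sign(W)
        W -= lr * grad_W
        b -= lr * grad_b
    return W, b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    # Two toy attributes driven mainly by different features, but correlated.
    Y = np.stack([(X[:, 0] + 0.1 * X[:, 1] > 0).astype(float),
                  (X[:, 1] + 0.1 * X[:, 0] > 0).astype(float)], axis=1)
    W, b = train_selective_sharing(X, Y)
    print("attribute 0 top feature:", np.argmax(np.abs(W[0])))
    print("attribute 1 top feature:", np.argmax(np.abs(W[1])))
```

In this toy setup, the penalty encourages each attribute classifier to concentrate its weight on its own feature rather than on the shared, correlated one; the paper's actual formulation of selective sharing over features and classifier parameters is more involved than this sketch.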

Publication
In Visual Attributes (Springer)

Related