Attributes for Classifier Feedback
People
Amar Parkash, Arijit Biswas, Devi Parikh
Abstract
Active learning provides useful tools to reduce annotation costs without compromising classifier performance. However, it traditionally views the supervisor simply as a labeling machine. We propose a new interactive learning paradigm that allows the supervisor to additionally convey useful domain knowledge using attributes. The learner first conveys its belief about an actively chosen image, e.g., "I think this is a forest, what do you think?". If the learner is wrong, the supervisor provides an explanation, e.g., "No, this is too open to be a forest". With access to a pre-trained set of relative attribute predictors, the learner fetches all unlabeled images more open than the query image and uses them as negative examples of forests to update its classifier. This rich human-machine communication leads to better classification performance.
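As a minimal sketch of this update step in Python (the names propagate_attribute_feedback, query_score, and unlabeled_scores are our own, not from the authors' code):

import numpy as np

def propagate_attribute_feedback(query_score, unlabeled_scores):
    # The supervisor said the query is "too open" to be a forest, so any
    # unlabeled image at least as open as the query cannot be a forest
    # either; return those images' indices as implied negative examples.
    # query_score: openness score of the query image from a pre-trained
    #              relative attribute predictor
    # unlabeled_scores: (N,) array of openness scores on the unlabeled pool
    return np.where(unlabeled_scores >= query_score)[0]

# Hypothetical usage on a pool of four unlabeled images:
openness = np.array([0.2, 0.9, 0.5, 1.3])
negatives = propagate_attribute_feedback(0.8, openness)  # -> indices 1 and 3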
We also propose three enhancements to this basic framework. First, we incorporate a weighting scheme that, instead of making a hard decision, reasons about the likelihood of an image being a negative example. Second, we do away with pre-trained attributes and instead learn the attribute models on the fly, alleviating the overhead and restrictions of a pre-determined attribute vocabulary. Finally, we propose an active learning framework that accounts for not just the label-based but also the attribute-based feedback while selecting the next query image. We demonstrate significant improvements in classification accuracy on faces and shoes. We also collect and make available the largest relative attributes dataset, containing 29 attributes of faces from 60 categories.
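One plausible instantiation of the first enhancement is a logistic weight on the attribute-score margin; this is a sketch under that assumption (the sharpness parameter is hypothetical, and the paper's exact weighting may differ):

import numpy as np

def negative_example_weights(query_score, unlabeled_scores, sharpness=1.0):
    # Instead of a hard cutoff, weight each unlabeled image by the
    # likelihood that it is a negative example: images far more open
    # than the query get weights near 1, borderline ones near 0.5.
    return 1.0 / (1.0 + np.exp(-sharpness * (unlabeled_scores - query_score)))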
Papers
Amar Parkash, Devi Parikh.
Attributes for Classifier Feedback.
In European Conference on Computer Vision (ECCV), 2012 (Oral).
PDF bibtex
Arijit Biswas, Devi Parikh.
Simultaneous Active Learning of Classifiers & Attributes via Relative Feedback.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
PDF bibtex
Presentations
ECCV 2012 Oral presentation: Slides, Talk (video)
CVPR 2013 Poster presentation: Poster
Data Set
We have collected a relative attributes dataset for 60 face categories (a subset of PubFig) and 29 attributes using Amazon Mechanical Turk. For each pair of categories, we showed example images to 10 Mechanical Turk users and asked them which category has a stronger presence of each attribute. We then trained relative attribute predictors for these 29 attributes. The dataset, including the annotations, trained attribute predictors, and outputs of the predictors on 1800 images, can be downloaded here: Relative Face Attributes Dataset. If you use this dataset, please cite: bibtex
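For readers who want to train similar predictors from the pairwise annotations, here is a minimal sketch using the standard RankSVM reduction to classification on feature differences; the variable names (features, ordered_pairs) are assumptions for illustration, and this is not the code used to produce the released predictors:

import numpy as np
from sklearn.svm import LinearSVC

def train_relative_attribute(features, ordered_pairs):
    # features: (N, D) image descriptors
    # ordered_pairs: list of (i, j) where item i was judged to show the
    #                attribute more strongly than item j (e.g., by
    #                majority vote over the annotators)
    # Returns a weight vector w; w @ x scores an image's attribute strength.
    diffs = np.array([features[i] - features[j] for i, j in ordered_pairs])
    X = np.vstack([diffs, -diffs])  # each pair yields a +/- training example
    y = np.hstack([np.ones(len(diffs)), -np.ones(len(diffs))])
    return LinearSVC(fit_intercept=False).fit(X, y).coef_.ravel()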