<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2018-01-15T18:41:19Z</responseDate>
  <request identifier="oai:HAL:hal-00664606v1" verb="GetRecord" metadataPrefix="oai_dc">http://api.archives-ouvertes.fr/oai/hal/</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:HAL:hal-00664606v1</identifier>
        <datestamp>2018-01-11</datestamp>
        <setSpec>type:COMM</setSpec>
        <setSpec>subject:info</setSpec>
        <setSpec>collection:CNRS</setSpec>
        <setSpec>collection:I3S</setSpec>
        <setSpec>collection:UNICE</setSpec>
        <setSpec>collection:BNRMI</setSpec>
        <setSpec>collection:UNIV-AG</setSpec>
        <setSpec>collection:CEREGMIA</setSpec>
        <setSpec>collection:UCA-TEST</setSpec>
        <setSpec>collection:UNIV-COTEDAZUR</setSpec>
      </header>
      <metadata>
        <dc>
          <publisher>HAL CCSD</publisher>
          <title lang="en">Multi-Class Leveraged $k$-NN for Image Classification</title>
          <creator>Piro, Paolo</creator>
          <creator>Nock, Richard</creator>
          <creator>Nielsen, Frank</creator>
          <creator>Barlaud, Michel</creator>
          <contributor>Laboratoire d'Informatique, Signaux, et Systèmes de Sophia Antipolis (I3S) ; Université Nice Sophia Antipolis (UNS) ; Université Côte d'Azur (UCA) - Université Côte d'Azur (UCA) - Centre National de la Recherche Scientifique (CNRS)</contributor>
          <contributor>Centre de Recherche en Economie, Gestion, Modélisation et Informatique Appliquée (CEREGMIA) ; Université des Antilles et de la Guyane (UAG)</contributor>
          <contributor>Sony Corporation ; Sony Corporation</contributor>
          <contributor>Laboratoire d'Informatique, Signaux, et Systèmes de Sophia-Antipolis (I3S) / Equipe IMAGES-CREATIVE ; Signal, Images et Systèmes (SIS) ; Laboratoire d'Informatique, Signaux, et Systèmes de Sophia Antipolis (I3S) ; Université Nice Sophia Antipolis (UNS) ; Université Côte d'Azur (UCA) - Université Côte d'Azur (UCA) - Centre National de la Recherche Scientifique (CNRS) - Université Nice Sophia Antipolis (UNS) ; Université Côte d'Azur (UCA) - Université Côte d'Azur (UCA) - Centre National de la Recherche Scientifique (CNRS) - Laboratoire d'Informatique, Signaux, et Systèmes de Sophia Antipolis (I3S) ; Université Nice Sophia Antipolis (UNS) ; Université Côte d'Azur (UCA) - Université Côte d'Azur (UCA) - Centre National de la Recherche Scientifique (CNRS) - Université Nice Sophia Antipolis (UNS) ; Université Côte d'Azur (UCA) - Université Côte d'Azur (UCA) - Centre National de la Recherche Scientifique (CNRS)</contributor>
          <description>International audience</description>
          <source>Proceedings of the 10th Asian Conference on Computer Vision, ACCV 2010, November 8-12, 2010, Queenstown, New Zealand</source>
          <coverage>Queenstown, New Zealand</coverage>
          <identifier>hal-00664606</identifier>
          <identifier>https://hal.inria.fr/hal-00664606</identifier>
          <identifier>https://hal.inria.fr/hal-00664606/document</identifier>
          <identifier>https://hal.inria.fr/hal-00664606/file/mlnn_accvfinal.pdf</identifier>
          <source>https://hal.inria.fr/hal-00664606</source>
          <source>Proceedings of the 10th Asian Conference on Computer Vision, ACCV 2010, November 8-12, 2010, Queenstown, New Zealand, 2010, Queenstown, New Zealand. 2010</source>
          <language>en</language>
          <subject>[INFO.INFO-TI] Computer Science [cs]/Image Processing</subject>
          <type>info:eu-repo/semantics/conferenceObject</type>
          <type>Conference papers</type>
          <description lang="en">The k-nearest neighbors (k-NN) classification rule is still an essential tool for computer vision applications, such as scene recognition. However, k-NN still features some major drawbacks, which mainly reside in the uniform voting among the nearest prototypes in the feature space. In this paper, we propose a new method that is able to learn the "relevance" of prototypes, thus classifying test data using a weighted k-NN rule. In particular, our algorithm, called Multi-class Leveraged k-nearest neighbor (MLNN), learns the prototype weights in a boosting framework, by minimizing a surrogate exponential risk over training data. We propose two main contributions for improving computational speed and accuracy. On the one hand, we implement learning in an inherently multiclass way, thus providing significant computation time reduction over one-versus-all approaches. Furthermore, the leveraging weights enable effective data selection, thus reducing the cost of k-NN search at classification time. On the other hand, we propose a kernel generalization of our approach to take into account real-valued similarities between data in the feature space, thus enabling more accurate estimation of the local class density. We tested MLNN on three datasets of natural images. Results show that MLNN significantly outperforms classic k-NN and weighted k-NN voting. Furthermore, using an adaptive Gaussian kernel provides significant performance improvement. Finally, the best results are obtained when using MLNN with an appropriate learned metric distance.</description>
          <date>2010</date>
        </dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>