<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2018-01-15T18:32:41Z</responseDate>
  <request identifier="oai:HAL:hal-01089628v1" verb="GetRecord" metadataPrefix="oai_dc">http://api.archives-ouvertes.fr/oai/hal/</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:HAL:hal-01089628v1</identifier>
        <datestamp>2018-01-11</datestamp>
        <setSpec>type:POSTER</setSpec>
        <setSpec>subject:info</setSpec>
        <setSpec>collection:IHM-2014</setSpec>
        <setSpec>collection:UPMC</setSpec>
        <setSpec>collection:UNIV-AG</setSpec>
        <setSpec>collection:CNRS</setSpec>
        <setSpec>collection:ISIR</setSpec>
        <setSpec>collection:BNRMI</setSpec>
        <setSpec>collection:UPMC_POLE_1</setSpec>
      </header>
      <metadata>
        <dc>
          <publisher>HAL CCSD</publisher>
          <title lang="fr">Audio-visual emotion recognition: A dynamic, multimodal approach</title>
          <creator>Jeremie, Nicole</creator>
          <creator>Vincent, Rapp</creator>
          <creator>Kevin, Bailly</creator>
          <creator>Prevost, Lionel</creator>
          <creator>Mohamed, Chetouani</creator>
          <contributor>Institut des Systèmes Intelligents et de Robotique (ISIR) ; Université Pierre et Marie Curie - Paris 6 (UPMC) - Centre National de la Recherche Scientifique (CNRS)</contributor>
          <contributor>Laboratoire de Mathématiques Informatique et Applications (LAMIA) ; Université des Antilles et de la Guyane (UAG)</contributor>
          <description>National audience</description>
          <source>IHM'14, 26e conférence francophone sur l'Interaction Homme-Machine</source>
          <coverage>Lille, France</coverage>
          <identifier>hal-01089628</identifier>
          <identifier>https://hal.archives-ouvertes.fr/hal-01089628</identifier>
          <identifier>https://hal.archives-ouvertes.fr/hal-01089628/document</identifier>
          <identifier>https://hal.archives-ouvertes.fr/hal-01089628/file/p44-nicole.pdf</identifier>
          <source>https://hal.archives-ouvertes.fr/hal-01089628</source>
          <source>IHM'14, 26e conférence francophone sur l'Interaction Homme-Machine, Oct 2014, Lille, France. pp.44-51, 2014</source>
          <language>fr</language>
          <subject lang="fr">Affective computing</subject>
          <subject lang="fr">Dynamic features</subject>
          <subject lang="fr">Multimodal fusion</subject>
          <subject lang="fr">Feature selection</subject>
          <subject lang="fr">Facial expressions</subject>
          <subject>[INFO.INFO-HC] Computer Science [cs]/Human-Computer Interaction [cs.HC]</subject>
          <type>info:eu-repo/semantics/conferenceObject</type>
          <type>Poster communications</type>
          <description lang="fr">Designing systems able to interact with students in a natural manner is a complex and far from solved problem. A key aspect of natural interaction is the ability to understand and appropriately respond to human emotions. This paper details our response to the continuous Audio/Visual Emotion Challenge (AVEC'12) whose goal is to predict four affective signals describing human emotions. The proposed method uses Fourier spectra to extract multi-scale dynamic descriptions of signals characterizing face appearance, head movements and voice. We perform a kernel regression with very few representative samples selected via a supervised weighted-distance-based clustering, that leads to a high generalization power. We also propose a particularly fast regressor-level fusion framework to merge systems based on different modalities. Experiments have proven the efficiency of each key point of the proposed method and our results on challenge data were the highest among 10 international research teams.</description>
          <date>2014-10-28</date>
        </dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
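
The record above is the response to an OAI-PMH GetRecord request against the HAL endpoint named in the <request> element. As a minimal sketch of how such a record can be fetched and its Dublin Core fields read, the following Python snippet uses only the standard library; the endpoint URL and record identifier are taken from the record itself, while the function name, namespace handling, and the choice to print every dc:* field are illustrative assumptions rather than part of any official HAL client.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Endpoint and identifier copied from the record above.
OAI_ENDPOINT = "http://api.archives-ouvertes.fr/oai/hal/"
# Dublin Core element namespace used by the standard oai_dc serialization (assumed here).
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def get_record(identifier: str, metadata_prefix: str = "oai_dc") -> ET.Element:
    """Issue an OAI-PMH GetRecord request and return the parsed XML root."""
    params = urllib.parse.urlencode({
        "verb": "GetRecord",
        "identifier": identifier,
        "metadataPrefix": metadata_prefix,
    })
    with urllib.request.urlopen(f"{OAI_ENDPOINT}?{params}") as response:
        return ET.fromstring(response.read())

if __name__ == "__main__":
    root = get_record("oai:HAL:hal-01089628v1")
    # Walk the whole response and print any element in the Dublin Core namespace,
    # e.g. title, creator, subject, description, date.
    for element in root.iter():
        if element.tag.startswith(DC_NS):
            field = element.tag[len(DC_NS):]
            print(f"{field}: {element.text}")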