000 09203nam a2200745 i 4500
001 011453107990
003 MOCL
005 20221101134703.0
007 cr cn |||m|||a
008 220304s2019 nyua fob 001 0deng d
020 _a9781970001693
020 _z9781970001709
020 _z9781970001686
020 _z9781970001716
035 _a(CaBNVSL)swl000408782
035 _a(OCoLC)1062373656
040 _aCaBNVSL
_beng
_cAPU
_dSF
050 4 _aQA76.9.U83
_bO95 2019eb
082 0 4 _a005.437
_223
100 1 _aOviatt, Sharon,
_eauthor.
_947419
245 1 4 _aThe handbook of multimodal-multisensor interfaces.
_nVolume 2,
_pSignal processing, architectures, and detection of emotion and cognition
_h[electronic resource] /
_cSharon Oviatt, Bjorn Schuller, Philip R. Cohen, Daniel Sonntag, Gerasimos Potamianos, Antonio Kruger.
246 3 0 _aSignal processing, architectures, and detection of emotion and cognition
250 _aFirst edition.
260 _a[New York],
_bAssociation for Computing Machinery
260 _a[San Rafael, California],
_bMorgan & Claypool,
_cc2019.
300 _a1 online resource (xxiii, 515 pages) :
_billustrations.
490 1 _aACM books,
_x2374-6777 ;
_v#21
504 _aIncludes bibliographical references and index.
505 0 _aIntroduction: Trends in intelligent multimodal-multisensorial interfaces: cognition, emotion, social signals, deep learning, and more --
505 8 _aPart I. Multimodal signal processing and architectures
505 8 _a1. Challenges and applications in multimodal machine learning / Tadas Baltrusaitis, Chaitanya Ahuja, Louis-Philippe Morency -- 1.1 Introduction -- 1.2 Multimodal Applications -- 1.3 Multimodal Representations -- 1.4 Co-learning -- 1.5 Conclusion -- Focus questions -- References --
505 8 _a2. Classifying multimodal data / Ethem Alpaydin -- 2.1 Introduction -- 2.2 Classifying multimodal data -- 2.3 Early, late, and intermediate integration -- 2.4 Multiple kernel learning -- 2.5 Multimodal deep learning -- 2.6 Conclusions and future work -- Acknowledgments -- Focus questions -- References --
505 8 _a3. Learning for multimodal and affect-sensitive interfaces / Yannis Panagakis, Ognjen Rudovic, Maja Pantic -- 3.1 Introduction -- 3.2 Correlation analysis methods -- 3.3 Temporal modeling of facial expressions -- 3.4 Context dependency -- 3.5 Model adaptation -- 3.6 Conclusion -- Focus questions -- References --
505 8 _a4. Deep learning for multisensorial and multimodal interaction / Gil Keren, Amr El-desoky Mousa, Olivier Pietquin, Stefanos Zafeiriou, Björn Schuller -- 4.1 Introduction -- 4.2 Fusion models -- 4.3 Encoder-decoder models -- 4.4 Multimodal embedding models -- 4.5 Perspectives -- Focus questions -- References --
505 8 _aPart II. Multimodal processing of social and emotional states --
505 8 _a5. Multimodal user state and trait recognition: an overview / Björn Schuller -- 5.1 Introduction -- 5.2 Modeling -- 5.3 An overview on attempted multimodal state and trait recognition -- 5.4 Architectures -- 5.5 A modern architecture perspective -- 5.6 Modalities -- 5.7 Walk-through of an example state -- 5.8 Emerging trends and future directions -- Focus questions -- References --
505 8 _a6. Multimodal-multisensor affect detection / Sidney K. D'Mello, Nigel Bosch, Huili Chen -- 6.1 Introduction -- 6.2 Background from affective sciences -- 6.3 Modality fusion for multimodal-multisensor affect detection -- 6.4 Walk-throughs of sample multisensor-multimodal affect detection systems -- 6.5 General trends and state of the art in multisensor-multimodal affect detection -- 6.6 Discussion -- Acknowledgments -- Focus questions -- References --
505 8 _a7. Multimodal analysis of social signals / Alessandro Vinciarelli, Anna Esposito -- 7.1 Introduction -- 7.2 Multimodal communication in life and human sciences -- 7.3 Multimodal analysis of social signals -- 7.4 Next steps -- 7.5 Conclusions -- Focus questions -- References --
505 8 _a8. Real-time sensing of affect and social signals in a multimodal framework: a practical approach / Johannes Wagner, Elisabeth Andre -- 8.1 Introduction -- 8.2 Database collection -- 8.3 Multimodal fusion -- 8.4 Online recognition -- 8.5 Requirements for a multimodal framework -- 8.6 The social signal interpretation framework -- 8.7 Conclusion -- Focus questions -- References --
505 8 _a9. How do users perceive multimodal expressions of affects? / Jean-Claude Martin, Celine Clavel, Matthieu Courgeon, Mehdi Ammi, Michel-Ange Amorim, Yacine Tsalamlal, Yoren Gaffary -- 9.1 Introduction -- 9.2 Emotions and their expressions -- 9.3 How humans perceive combinations of expressions of affects in several modalities -- 9.4 Impact of context on the perception of expressions of affects -- 9.5 Conclusion -- Focus Questions -- References --
505 8 _aPart III. Multimodal processing of cognitive states --
505 8 _a10. Multimodal behavioral and physiological signals as indicators of cognitive load / Jianlong Zhou, Kun Yu, Fang Chen, Yang Wang, Syed Z. Arshad -- 10.1 Introduction -- 10.2 State-of-the-art -- 10.3 Behavioral measures for cognitive load -- 10.4 Physiological measures for cognitive load -- 10.5 Multimodal signals and data fusion -- 10.6 Conclusion -- Funding -- Focus questions -- References --
505 8 _a11. Multimodal learning analytics: assessing learners' mental state during the process of learning / Sharon Oviatt, Joseph Grafsgaard, Lei Chen, Xavier Ochoa --11.1 Introduction -- 11.2 What is multimodal learning analytics? -- 11.3 What data resources are available on multimodal learning analytics? -- 11.4 What are the main themes from research findings on multimodal learning analytics? -- 11.5 What is the theoretical basis of multimodal learning analytics? -- 11.6 What are the main challenges and limitations of multimodal learning analytics? -- 11.7 Conclusions and future directions -- Focus questions -- References --
505 8 _a12. Multimodal assessment of depression from behavioral signals / Jeffrey F. Cohn, Nicholas Cummins, Julien Epps, Roland Goecke, Jyoti Joshi, Stefan Scherer -- 12.1 Introduction -- 12.2 Depression -- 12.3 Multimodal behavioral signal processing systems -- 12.4 Facial analysis -- 12.5 Speech analysis -- 12.6 Body movement and other behavior analysis -- 12.7 Analysis using other sensor signals -- 12.8 Multimodal fusion -- 12.9 Implementation-related considerations and elicitation approaches -- 12.10 Conclusion and current challenges -- Acknowledgments -- Focus questions -- References --
505 8 _a13. Multimodal deception detection / Mihai Burzo, Mohamed Abouelenien, Veronica Perez-Rosas, Rada Mihalcea -- 13.1 Introduction and motivation -- 13.2 Deception detection with individual modalities -- 13.3 Deception detection with multiple modalities -- 13.4 The way forward -- Acknowledgments -- Focus questions -- References --
505 8 _aPart IV. Multidisciplinary challenge topic --
505 8 _a14. Perspectives on predictive power of multimodal deep learning: surprises and future directions / Samy Bengio, Li Deng, Louis-Philippe Morency, Björn Schuller -- 14.1 Deep learning as catalyst for scientific discovery -- 14.2 Deep learning in relation to conventional machine learning -- 14.3 Expected surprises of deep learning -- 14.4 The future of deep learning -- 14.5 Responsibility in deep learning -- 14.6 Conclusion -- References --
505 8 _aIndex -- Biographies -- Volume 2 Glossary.
520 3 _aThe content of this handbook is most appropriate for graduate students and of primary interest to students studying computer science and information technology, human-computer interfaces, mobile and ubiquitous interfaces, affective and behavioral computing, machine learning, and related multidisciplinary majors. When teaching graduate classes with this book, whether in a quarter- or semester-long course, we recommend initially requiring that students spend two weeks reading the introductory textbook, The Paradigm Shift to Multimodality in Contemporary Interfaces (Morgan & Claypool Publishers, Human-Centered Interfaces Synthesis Series, 2015). With this orientation, a graduate class providing an overview of multimodal-multisensor interfaces then could select chapters from the current handbook, distributed across topics in the different sections.
530 _aAlso available in print.
538 _aMode of access: World Wide Web.
538 _aSystem requirements: Adobe Acrobat Reader.
650 0 _aMultimodal user interfaces (Computer systems)
_947417
650 0 _aHuman-computer interaction.
650 0 _aSignal processing.
655 0 _aElectronic books.
700 1 _aSchuller, Bjorn,
_947869
700 1 _aCohen, Philip R.,
_947421
700 1 _aSonntag, Daniel,
_947422
700 1 _aPotamianos, Gerasimos,
_947423
700 1 _aKruger, Antonio,
_947424
830 0 _aACM books ;
_v#21.
_947379
856 4 8 _uhttps://dl-acm-org.ezproxy.apu.edu.my/doi/book/10.1145/3107990
_yAvailable in the ACM Digital Library. Requires login to view full text.
942 _2lcc
_cE-Book
999 _c383706
_d383706