Open Access
ITM Web Conf., Volume 44, 2022
International Conference on Automation, Computing and Communication 2022 (ICACC-2022)
Article Number: 03071
Number of pages: 9
Section: Computing
DOI: https://doi.org/10.1051/itmconf/20224403071
Published online: 05 May 2022