Open Access
ITM Web Conf., Volume 40, 2021
International Conference on Automation, Computing and Communication 2021 (ICACC-2021)
Article Number: 03006
Number of pages: 6
Section: Computing
DOI: https://doi.org/10.1051/itmconf/20214003006
Published online: 09 August 2021
  1. S. Basu, J. Chakraborty and M. Aftabuddin, “Emotion recognition from speech using convolutional neural network with recurrent neural network architecture,” 2017 2nd International Conference on Communication and Electronics Systems (ICCES), 2017, pp. 333–336, doi: 10.1109/CESYS.2017.8321292.
  2. J. Umamaheswari and A. Akila, “An Enhanced Human Speech Emotion Recognition Using Hybrid of PRNN and KNN,” 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), 2019, pp. 177–183, doi: 10.1109/COMITCon.2019.8862221.
  3. B. Puterka and J. Kacur, “Time Window Analysis for Automatic Speech Emotion Recognition,” 2018 International Symposium ELMAR, Zadar, 2018, pp. 143–146, doi: 10.23919/ELMAR.2018.85.
  4. S. K. Pandey, H. S. Shekhawat and S. R. M. Prasanna, “Deep Learning Techniques for Speech Emotion Recognition: A Review,” 2019 29th International Conference Radioelektronika (RADIOELEKTRONIKA), 2019, pp. 1–6, doi: 10.1109/RADIOELEK.2019.8733432.
  5. A. B. Abdul Qayyum, A. Arefeen and C. Shahnaz, “Convolutional Neural Network (CNN) Based Speech-Emotion Recognition,” 2019 IEEE International Conference on Signal Processing, Information, Communication Systems (SPICSCON), 2019, pp. 122–125, doi: 10.1109/SPICSCON48833.2019.9065172.
  6. M. S. Likitha, S. R. R. Gupta, K. Hasitha and A. U. Raju, “Speech based human emotion recognition using MFCC,” 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), 2017, pp. 2257–2260, doi: 10.1109/WiSPNET.2017.8300161.
  7. B. Puterka, J. Kacur and J. Pavlovicova, “Windowing for Speech Emotion Recognition,” 2019 International Symposium ELMAR, 2019, pp. 147–150, doi: 10.1109/ELMAR.2019.8918885.
  8. M. K. Pichora-Fuller and K. Dupuis, “Toronto emotional speech set (TESS),” 2010.
  9. S. R. Livingstone and F. A. Russo, “The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English,” PLoS ONE, vol. 13, no. 5, e0196391, 2018, doi: 10.1371/journal.pone.0196391.
  10. B. Vlasenko, B. Schuller, A. Wendemuth and G. Rigoll, “Combining frame and turn-level information for robust recognition of emotions within speech,” Proceedings of Interspeech, pp. 2249–2252, 2007.
