Open Access
ITM Web Conf. Volume 40, 2021
International Conference on Automation, Computing and Communication 2021 (ICACC-2021)
Article Number: 03008
Number of pages: 6
Section: Computing
DOI: https://doi.org/10.1051/itmconf/20214003008
Published online: 09 August 2021
  1. Sharma, Gyanendra & Mala, Shuchi, "Framework for gender recognition using voice", 2020 10th International Conference on Cloud Computing, Data Science & Engineering (Confluence), 32–37 (IEEE, India, 2020).
  2. Koduru, Anusha, Hima Bindu Valiveti, and Anil Kumar Budati, International Journal of Speech Technology 23, 1–11 (2020).
  3. Sundar, K., Sadagopan, E. N., Chandran, M., and Aswin Raja, S., "Emotion Recognition Using Support Vector Machine" (2020).
  4. Gumelar, Agustinus Bimo, et al., "Human Voice Emotion Identification Using Prosodic and Spectral Feature Extraction Based on Deep Neural Networks", 2019 IEEE 7th International Conference on Serious Games and Applications for Health (SeGAH) (IEEE, Japan, 2019).
  5. Jiang, Wei & Wang, Zheng & Jin, Jesse & Han, Xian-feng & Li, Chunguang, "Speech Emotion Recognition with Heterogeneous Feature Unification of Deep Neural Network", Sensors (2019).
  6. Aggarwal, Gaurav, and Rekha Vig, "Acoustic Methodologies for Classifying Gender and Emotions using Machine Learning Algorithms", 2019 Amity International Conference on Artificial Intelligence (AICAI), 672–677 (IEEE, United Arab Emirates, 2019).
  7. Jain, Manas & Narayan, Shruthi & Balaji, Pratibha & Bhowmick, Abhijit & Muthu, Rajesh, "Speech Emotion Recognition using Support Vector Machine", 45–55 (2018).
  8. Poonam Rani and Geeta, International Journal of Electronics Engineering (ISSN: 0973-7383) 10(1), 165–174 (2018).
  9. Hossain, Nazia & Jahan, Rifat & Tunka, Tanjila, International Journal of Software Engineering & Applications 9, 37–44 (2018).
  10. Kerkeni, Leila & Serrestou, Youssef & Mbarki, Mohamed & Raoof, Kosai & Mahjoub, Mohamed, 10th International Conference on Agents and Artificial Intelligence, 175–182 (2018).
  11. Alshamsi, Humaid & Kepuska, Veton & Alshamsi, Hazza & Meng, Hongying, "Automated Speech Emotion Recognition on Smart Phones", 2018 9th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), 44–50 (IEEE, USA, 2018).
  12. Wang, Zhong-Qiu & Tashev, Ivan, "Learning utterance-level representations for speech emotion and age/gender recognition using deep neural networks" (2017).
  13. Sengupta, Saptarshi & Yasmin, Ghazaala & Ghosal, Arijit, "Classification of Male and Female Speech Using Perceptual Features", International Conference on Computing, Communication and Networking Technologies (ICCCNT 2017) (2017).
  14. Pahwa, Anjali & Aggarwal, Gaurav, International Journal of Image, Graphics and Signal Processing 8, 16–25 (2016).
  15. Xavier, Arputha Rathina, International Journal of Computer Science, Engineering and Applications 2, 99–107 (2012).
  16. Paulraj, M. P., et al., "A speech recognition system for Malaysian English pronunciation using Neural Network" (2009).
  17. Rong, Jia & Li, Gang & Chen, Yi-Ping Phoebe, Information Processing & Management 45, 315–328 (2009).
  18. Rosenberg, Aaron & Sambur, Marvin, IEEE Transactions on Acoustics, Speech, and Signal Processing 23, 169–176 (1975).
