Open Access

ITM Web Conf., Volume 45, 2022
2021 3rd International Conference on Computer Science Communication and Network Security (CSCNS2021)

Article Number: 01039
Number of pages: 6
Section: Computer Technology and System Design
DOI: https://doi.org/10.1051/itmconf/20224501039
Published online: 19 May 2022