Open Access
ITM Web Conf.
Volume 56, 2023
First International Conference on Data Science and Advanced Computing (ICDSAC 2023)
Article Number 03004
Number of page(s) 10
Section Deep Learning
Published online 09 August 2023
  1. D. Bruckner, H. Zeilinger and D. Dietrich, “Cognitive Automation – Survey of Novel Artificial General Intelligence Methods for the Automation of Human Technical Environments,” IEEE Trans. Industr. Inform., vol. 8, no. 2, pp. 206–215, 2012. [CrossRef] [Google Scholar]
  2. X. Zhang et al., “Fatigue Detection With Covariance Manifolds of Electroencephalography in Transportation Industry,” IEEE Trans. Industr. Inform., vol. 17, no. 5, pp. 3497–3507, 2021. [CrossRef] [Google Scholar]
  3. A. Adikari, D. De Silva, D. Alahakoon and X. Yu, “A Cognitive Model for Emotion Awareness in Industrial Chatbots,” in Proc. IEEE Int. Conf. Ind. Informatics (INDIN), 2019, pp. 183–186. [Google Scholar]
  4. Q. Wei, T. Li and D. Liu, “Learning Control for Air Conditioning Systems via Human Expressions,” IEEE Trans. Ind. Electron., vol. 68, no. 8, pp. 7662–7671, 2021. [CrossRef] [Google Scholar]
  5. B. Li and D. Lima, “Facial expression recognition via ResNet-50,” Int. J. Artif. Intell. T., vol. 2, pp. 57–64, 2021. [Google Scholar]
  6. M. D. Putro, D.-L. Nguyen and K.-H. Jo, “A Fast CPU Real-time Facial Expression Detector using Sequential Attention Network for Human-robot Interaction,” IEEE Trans. Industr. Inform., Early Access Article, DOI: 10.1109/TII.2022.3145862, 2022. [Google Scholar]
  7. Z. Xi, Y. Niu, J. Chen, X. Kan and H. Liu, “Facial Expression Recognition of Industrial Internet of Things by Parallel Neural Networks Combining Texture Features,” IEEE Trans. Industr. Inform., vol. 17, no. 4, pp. 2784–2793, 2021. [CrossRef] [Google Scholar]
  8. J. Hu, L. Shen, S. Albanie, G. Sun and E. Wu, “Squeeze-and-Excitation Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 8, pp. 2011–2023, 2020. [CrossRef] [Google Scholar]
  9. H. K. Bhuyan and C. Chakraborty, “Explainable machine learning for data extraction across computational social system,” IEEE Trans. Comput. Soc. Syst., pp. 1–15, 2022. [CrossRef] [Google Scholar]
  10. S. Woo, J. Park, J.-Y. Lee and I. So Kweon, “CBAM: Convolutional block attention module,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 3–19. [Google Scholar]
  11. Z. Zhao, Q. Liu and S. Wang, “Learning Deep Global Multi-Scale and Local Attention Features for Facial Expression Recognition in the Wild,” IEEE Trans. Image Process., vol. 30, pp. 6544–6556, 2021. [CrossRef] [Google Scholar]
  12. T. Ko and H. Kim, “Fault Classification in High-Dimensional Complex Processes Using Semi-Supervised Deep Convolutional Generative Models,” IEEE Trans. Industr. Inform., vol. 16, no. 4, pp. 2868–2877, 2020. [CrossRef] [Google Scholar]
  13. H. K. Bhuyan, V. Ravi and M. S. Yadav, “Multi-objective optimization-based privacy in data mining,” Cluster Comput., vol. 25, no. 6, pp. 4275–4287, 2022. [Google Scholar]
  14. Y. Lee, J.-W. Hwang, S. Lee, Y. Bae and J. Park, “An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recogn. Workshops (CVPRW), 2019, pp. 752–760. [Google Scholar]
  15. H. K. Bhuyan and V. Ravi, “An Integrated Framework with Deep Learning for Segmentation and Classification of Cancer Disease,” Int. J. Artif. Intell. Tools (IJAIT), vol. 32, no. 2, 2340002, 2023. [CrossRef] [Google Scholar]
  16. H. K. Bhuyan, A. Vijayaraj and V. Ravi, “Development of Secrete Images in Image Transferring System,” Multimed. Tools Appl., vol. 82, no. 5, pp. 7529–7552, 2023. [CrossRef] [Google Scholar]
  17. L. Yang, R. Y. Zhang, L. Li and X. Xie, “Simam: A simple, parameter-free attention module for convolutional neural networks,” in Proc. Int. Conf. Mach. Learn. (ICML), 2021, pp. 11863–11874. [Google Scholar]
  18. H. K. Bhuyan, N. K. Kamila and S. K. Pani, “Individual privacy in data mining using fuzzy optimization,” Eng. Optim., vol. 54, no. 8, pp. 1305–1323, 2022. [CrossRef] [MathSciNet] [Google Scholar]
  19. C. Chakraborty, K. Mishra, S. K. Majhi and H. K. Bhuyan, “Intelligent Latency-aware tasks prioritization and offloading strategy in Distributed Fog-Cloud of Things,” IEEE Trans. Industr. Inform., vol. 19, no. 2, 2023. [Google Scholar]
  20. A. Mollahosseini, B. Hasani, and M. H. Mahoor, “AffectNet: A database for facial expression, valence, and arousal computing in the wild,” IEEE Trans. Affect. Comput., vol. 10, no. 1, pp. 18–31, 2019. [CrossRef] [Google Scholar]
  21. A. Vijayaraj, H. K. Bhuyan, P. T. Vasanth Raj and M. Vijay Anand, “Congestion Avoidance Using Enhanced Blue Algorithm,” Wireless Pers. Commun., vol. 128, no. 3, pp. 1963–1984, 2023. [CrossRef] [Google Scholar]
  22. L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao and J. Han, “On the Variance of the Adaptive Learning Rate and Beyond,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2020, pp. 1–13. [Google Scholar]
  23. P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar and I. Matthews, “The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recogn. Workshops (CVPRW), 2010, pp. 94–101. [Google Scholar]
  24. O. Langner, R. Dotsch, G. Bijlstra, D. H. Wigboldus, S. T. Hawk, and A. Van Knippenberg, “Presentation and validation of the radboud faces database,” Cogn. Emotion, vol. 24, no. 8, pp. 1377–1388, 2010. [CrossRef] [Google Scholar]
  25. S. Li and W. Deng, “Reliable crowdsourcing and deep locality preserving learning for unconstrained facial expression recognition,” IEEE Trans. Image Process., vol. 28, no. 1, pp. 356–370, 2019. [CrossRef] [MathSciNet] [Google Scholar]
  26. E. Barsoum, C. Zhang, C. C. Ferrer, and Z. Zhang, “Training deep networks for facial expression recognition with crowd-sourced label distribution,” in Proc. ACM Int. Conf. Multimodal Interact. (ICMI), 2016, pp. 279–283. [Google Scholar]
  27. Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo and Q. Hu, “ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks,” Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2020, pp. 11531–11539. [Google Scholar]
  28. H. K. Bhuyan, C. Chakraborty, S. K. Pani and V. K. Ravi, “Feature and Sub-Feature Selection for Classification using Correlation Coefficient and Fuzzy model,” IEEE Trans. Eng. Manage., vol. 70, no. 5, 2023. [Google Scholar]
  29. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020. [Google Scholar]
  30. H. K. Bhuyan and V. Ravi, “Analysis of Sub-feature for Classification in Data Mining,” IEEE Trans. Eng. Manage., 2021. [Google Scholar]
  31. M. Fuyan, S. Bin and L. Shutao, “Facial Expression Recognition with Visual Transformers and Attentional Selective Fusion,” IEEE Trans. Affective Comput., Early Access Article, DOI: 10.1109/TAFFC.2021.3122146, 2021. [Google Scholar]
  32. H. K. Bhuyan, M. Saikiran, M. Tripathy and V. Ravi, “Wide-ranging approach-based feature selection for classification,” Multimed. Tools Appl., pp. 1–28, 2022. [Google Scholar]
  33. Q. Huang, C. Huang, X. Wang and F. Jiang, “Facial expression recognition with grid-wise attention and visual transformer,” Inf. Sci., vol. 580, pp. 35–54, 2021. [CrossRef] [Google Scholar]
  34. I. Cugu, E. Sener and E. Akbas, “MicroExpNet: An Extremely Small and Fast Model For Expression Recognition From Face Images,” in Proc. Int. Conf. Image Process. Theory, Tools Appl. (IPTA), 2019, pp. 1–6. [Google Scholar]
  35. H. K. Bhuyan, V. Ravi, B. Brahma and N. K. Kamila, “Disease analysis using machine learning approaches in healthcare system,” Health Technol., vol. 12, no. 5, pp. 987–1005, 2022. [CrossRef] [Google Scholar]
