Open Access
ITM Web Conf., Volume 56, 2023
First International Conference on Data Science and Advanced Computing (ICDSAC 2023)
Article Number: 02002
Number of pages: 10
Section: Data Science
DOI: https://doi.org/10.1051/itmconf/20235602002
Published online: 09 August 2023
