Open Access

ITM Web Conf., Volume 81, 2026
International Conference on Emerging Technologies for Multidisciplinary Innovation and Sustainability (ETMIS 2025)

Article Number: 01023
Number of page(s): 7
DOI: https://doi.org/10.1051/itmconf/20268101023
Published online: 23 January 2026
- Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–297. https://doi.org/10.1007/BF00994018
- Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery in databases. AI Magazine, 17(3), 37–54. https://www.kdnuggets.com/gpspubs/aimag-kdd-overview-1996-Fayyad.pdf
- Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67–82. https://doi.org/10.1109/4235.585893
- Angeline, P. J. (1997). Evolutionary computation: An overview. In Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (pp. 443–448). IEEE. https://doi.org/10.1109/ICEC.1997.592330
- Chapman, P., et al. (2000). CRISP-DM 1.0: Step-by-step data mining guide. SPSS Inc.
- https://www.sciencedirect.com/science/article/pii/S1877050921002416
- Mariscal, G., Marbán, Ó., & Fernández, C. (2002). KDD, SEMMA and CRISP-DM: A parallel overview. In Proceedings of the IADIS European Conference on Data Mining (pp. 1–5). IADIS Press. https://www.researchgate.net/publication/220969845
- Liu, H., & Yu, L. (2003). Feature selection: Evaluation, application, and small sample performance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(6), 712–718.
- Pfahringer, B., Bensusan, H., & Giraud-Carrier, C. G. (2004). Meta-learning by landmarking various learning algorithms. In Proceedings of the 21st International Conference on Machine Learning (ICML '04) (pp. 743–750). ACM. https://www.researchgate.net/publication/221345088
- Bernstein, A., et al. (2008). Toward intelligent assistance for a data mining process. IEEE Transactions on Knowledge and Data Engineering, 21(6), 839–852. https://www.researchgate.net/publication/3297390
- Serban, F., Truemper, K., & Jamil, H. (2009). Intelligent discovery assistants. In E. S. Corchado, A. Abraham, & W. Pedrycz (Eds.), Handbook of Research on Machine Learning Applications (pp. 1–23). IGI Global. https://doi.org/10.4018/978-1-60566-766-9.ch001
- Kotsiantis, S. B. (2009). Particle swarm model selection. In International Conference on Enterprise Information Systems (pp. 53–59). Springer. https://www.researchgate.net/publication/220320363
- Kitchenham, B., & Charters, S. (2009). Guidelines for performing systematic literature reviews in software engineering (EBSE Technical Report EBSE-2007-01). Keele University. https://www.researchgate.net/publication/258968007
- Gama, J., et al. (2010). Ensemble learning for data stream analysis. Data Mining and Knowledge Discovery, 21(1), 6–27. https://www.sciencedirect.com/science/article/abs/pii/S1566253516302329
- Brazdil, P., et al. (2010). On learning algorithm selection for classification. Applied Intelligence, 33(2), 155–165. https://sci2s.ugr.es/keel/pdf/classification.pdf
- Rice, J. R. (2011). The algorithm selection problem. In Advances in Computers (Vol. 15, pp. 65–118). Elsevier. https://doi.org/10.1016/S0065-2458(08)60520-3
- Bergstra, J., Bardenet, R., Bengio, Y., & Kégl, B. (2011). Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems 24 (NIPS 2011) (pp. 2546–2554).
- Bergstra, J., & Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb), 281–305. https://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf
- Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2012). Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization (LION 5) (pp. 507–523). Springer. https://ml.informatik.uni-freiburg.de/wp-content/uploads/papers/11-LION5-SMAC.pdf
- Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems 25 (NIPS 2012) (pp. 2951–2959). https://doi.org/10.48550/arXiv.1206.2944
- Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25. https://www.researchgate.net/publication/319770183
- Xu, L., Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2013). Algorithm selection for combinatorial search problems. AI Magazine, 34(4), 37–50. https://doi.org/10.48550/arXiv.1210.7959
- Thornton, C., Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 847–855). https://doi.org/10.48550/arXiv.1208.3719
- Vilalta, R., & Drissi, Y. (2014). A perspective view and survey of meta-learning. arXiv preprint.
- Hannun, A., Case, C., Casper, J., Catanzaro, B., Diamos, G., Elsen, E., & Ng, A. Y. (2014). Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567. https://doi.org/10.48550/arXiv.1412.5567
- LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444. https://www.researchgate.net/publication/277411157_Deep_Learning
- Davenport, T. H., & Patil, D. J. (2012). Data scientist: The sexiest job of the 21st century. Harvard Business Review, 90(10), 70–76. https://www.researchgate.net/publication/232279315_Data_Scientist_The_Sexiest_Job_of_the_21st_Century
- Guyon, I., Bennett, K., Cawley, G., Escalante, H. J., Escalera, S., Ho, T. K., & Viegas, E. (2015). Design of the 2015 ChaLearn AutoML challenge. JMLR: Workshop and Conference Proceedings, 40, 1–9. https://www.researchgate.net/publication/328824061
- He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 770–778). https://doi.org/10.1109/CVPR.2016.90
- Reif, M., Shafait, F., & Dengel, A. (2016). Automated data pre-processing via meta-learning. In IEEE International Conference on Tools with Artificial Intelligence (ICTAI) (pp. 854–861). https://www.researchgate.net/publication/30787276
- Kotthoff, L., Thornton, C., Hoos, H. H., Hutter, F., & Leyton-Brown, K. (2016). Auto-WEKA 2.0: Automatic model selection and hyperparameter optimization in WEKA. Journal of Machine Learning Research, 17(1), 1–5. https://www.jmlr.org/papers/volume17/15-261/15-261.pdf
- Falkner, S., Klein, A., & Hutter, F. (2016). Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) (pp. 477–492). https://www.cl.cam.ac.uk
- Amershi, S., Chickering, M., Drucker, S. M., Lee, B., Simard, P., & Suh, J. (2015). ModelTracker: Redesigning performance analysis tools for machine learning. In Proceedings of the 2015 CHI Conference on Human Factors in Computing Systems (pp. 337–348). https://dl.acm.org/doi/10.1145/2702123.2702509
- Ali, S., & Smith, K. A. (2017). A review of automatic selection of machine learning algorithms. Neurocomputing, 256, 3–15. https://www.researchgate.net/publication/303498946_A_review_of_automatic_selection_methods_for_machine_learning_algorithms_and_hyper-parameter_values
- Zoph, B., & Le, Q. V. (2017). Neural architecture search with reinforcement learning. In International Conference on Learning Representations (ICLR). https://doi.org/10.48550/arXiv.1611.01578
- Mendoza, H., Klein, A., Feurer, M., Springenberg, J. T., & Hutter, F. (2017). Towards automatically tuned neural networks. In AutoML Workshop at ICML. https://link.springer.com/chapter/10.1007/978-3-030-0531 7
- Vanschoren, J., et al. (2018). On the predictive power of meta-features in OpenML. International Journal of Applied Mathematics and Computer Science. https://doi.org/10.1515/amcs-2017-0048
- Wamba, S. F., et al. (2018). How can SMEs benefit from big data? Journal of Strategic Information Systems, 27(1), 53–67. https://doi.org/10.1002/qre.2008
- Liu, H., et al. (2018). DARTS: Differentiable architecture search. In ICLR 2018. https://doi.org/10.48550/arXiv.1806.09055
- Zoph, B., et al. (2018). Learning transferable architectures for scalable image recognition. In CVPR 2018. https://doi.org/10.48550/arXiv.1707.07012
- Liu, C., et al. (2018). Progressive neural architecture search. In ECCV 2018. https://doi.org/10.48550/arXiv.1712.00559

