Open Access
ITM Web Conf., Volume 43, 2022
The International Conference on Artificial Intelligence and Engineering 2022 (ICAIE’2022)
Article Number 01017, 7 pages
Published online 14 March 2022
  1. F. Anowar and S. Sadaoui, “Incremental Neural-Network Learning for Big Fraud Data,” in 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020, pp. 3551–3557, doi: 10.1109/SMC42975.2020.9283136.
  2. L. J. P. van der Maaten, E. O. Postma, and H. J. van den Herik, “Dimensionality Reduction: A Comparative Review,” J. Mach. Learn. Res., vol. 10, pp. 1–41, 2009.
  3. K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. San Diego, CA, USA: Academic Press, 1990.
  4. L. O. Jimenez and D. A. Landgrebe, “Supervised classification in high-dimensional space: geometrical, statistical, and asymptotical properties of multivariate data,” IEEE Trans. Syst. Man Cybern. C, Appl. Rev., vol. 28, no. 1, pp. 39–54, 1998, doi: 10.1109/5326.661089.
  5. F. Salo, A. B. Nassif, and A. Essex, “Dimensionality reduction with IG-PCA and ensemble classifier for network intrusion detection,” Comput. Networks, vol. 148, pp. 164–175, Jan. 2019.
  6. M. Partridge and R. Calvo, “Fast dimensionality reduction and simple PCA,” Intell. Data Anal., vol. 2, no. 3, pp. 292–298, 1998, doi: 10.3233/IDA-1998-2304.
  7. P. Switzer, “Extensions of linear discriminant analysis for statistical classification of remotely sensed satellite imagery,” J. Int. Assoc. Math. Geol., vol. 12, no. 4, pp. 367–376, 1980, doi: 10.1007/BF01029421.
  8. K. V. Ravi Kanth, D. Agrawal, A. El Abbadi, and A. Singh, “Dimensionality Reduction for Similarity Searching in Dynamic Databases,” Comput. Vis. Image Underst., vol. 75, no. 1, pp. 59–72, 1999.
  9. I. Shahin and S. Hamsa, “Novel cascaded Gaussian mixture model-deep neural network classifier for speaker identification in emotional talking environments,” Neural Comput. Appl., pp. 1–13, Oct. 2018, doi: 10.1007/s00521-018-3760-2.
  10. A. B. Nassif, I. Shahin, S. Hamsa, N. Nemmour, and K. Hirose, “CASA-Based Speaker Identification Using Cascaded GMM-CNN Classifier in Noisy and Emotional Talking Conditions,” Appl. Soft Comput., vol. 103, pp. 1–24, 2021, doi: 10.1016/j.asoc.2021.107141.
  11. S. Khalid, T. Khalil, and S. Nasreen, “A survey of feature selection and feature extraction techniques in machine learning,” in 2014 Science and Information Conference, 2014, pp. 372–378, doi: 10.1109/SAI.2014.6918213.
  12. A. B. Nassif, M. Azzeh, L. F. Capretz, and D. Ho, “A comparison between decision trees and decision tree forest models for software development effort estimation,” in 2013 3rd International Conference on Communications and Information Technology (ICCIT), 2013, pp. 220–224, doi: 10.1109/ICCITechnology.2013.6579553.
  13. M. Azzeh, “Analogy-based effort estimation: a new method to discover set of analogies from dataset characteristics,” IET Softw., vol. 9, no. 2, pp. 39–50, 2015, doi: 10.1049/iet-sen.2013.0165.
  14. M. Azzeh, S. Banitaan, and F. Almasalha, “Pareto efficient multi-objective optimization for local tuning of analogy-based estimation,” Neural Comput. Appl., vol. 27, no. 8, pp. 2241–2265, 2016, doi: 10.1007/s00521-015-2004-y.
  15. A. K. Jain and B. Chandrasekaran, “Dimensionality and sample size considerations in pattern recognition practice,” in Classification, Pattern Recognition and Reduction of Dimensionality (Handbook of Statistics, vol. 2), Elsevier, 1982, pp. 835–855.
  16. H. Abdi and L. J. Williams, “Principal Component Analysis,” Wiley Interdiscip. Rev. Comput. Stat., vol. 2, no. 4, pp. 433–459, 2010.
  17. A. Rajaraman and J. D. Ullman, Mining of Massive Datasets. Cambridge, U.K.: Cambridge University Press, 2011.
  18. A. G. Akritas and G. I. Malaschonok, “Applications of singular-value decomposition (SVD),” Math. Comput. Simul., vol. 67, no. 1, pp. 15–31, 2004.
  19. H. Abdi, “Singular value decomposition (SVD) and generalized singular value decomposition,” in Encyclopedia of Measurement and Statistics, 2007.
  20. L. van der Maaten and G. Hinton, “Visualizing Data using t-SNE,” J. Mach. Learn. Res., vol. 9, pp. 2579–2605, 2008.
  21. B. Melit Devassy and S. George, “Dimensionality reduction and visualisation of hyperspectral ink data using t-SNE,” Forensic Sci. Int., vol. 311, p. 110194, 2020.
  22. F. H. M. Oliveira, A. R. P. Machado, and A. O. Andrade, “On the Use of t-Distributed Stochastic Neighbor Embedding for Data Visualization and Classification of Individuals with Parkinson’s Disease,” Comput. Math. Methods Med., vol. 2018, p. 8019232, 2018, doi: 10.1155/2018/8019232.
  23. L. J. Cao, K. S. Chua, W. K. Chong, H. P. Lee, and Q. M. Gu, “A comparison of PCA, KPCA and ICA for dimensionality reduction in support vector machine,” Neurocomputing, vol. 55, no. 1, pp. 321–336, 2003.
  24. C. O. S. Sorzano, J. Vargas, and A. P. Montano, “A survey of dimensionality reduction techniques,” pp. 1–35, 2014.
  25. A. Tharwat, “Independent component analysis: An introduction,” Appl. Comput. Informatics, vol. 17, no. 2, pp. 222–249, Jan. 2021, doi: 10.1016/j.aci.2018.08.006.
  26. R. T. Olszewski, “Generalized feature extraction for structural pattern recognition in time-series data,” Ph.D. dissertation, Carnegie Mellon University, Pittsburgh, PA, USA, 2001.
