Open Access
ITM Web Conf., Volume 44, 2022
International Conference on Automation, Computing and Communication 2022 (ICACC-2022)
Article Number: 03016
Number of pages: 7
Section: Computing
DOI: https://doi.org/10.1051/itmconf/20224403016
Published online: 05 May 2022
  1. M. Goto and R. B. Dannenberg, “Music Interfaces Based on Automatic Music Signal Analysis: New Ways to Create and Listen to Music,” IEEE Signal Processing Magazine, vol. 36, no. 1, pp. 74–81, Jan. 2019, doi: 10.1109/MSP.2018.2874360.
  2. Y. M. G. Costa, L. S. Oliveira, and C. N. Silla, “An evaluation of Convolutional Neural Networks for music classification using spectrograms,” Applied Soft Computing, vol. 52, pp. 28–38, 2017, doi: 10.1016/j.asoc.2016.12.024.
  3. S. Gollapudi, Practical Machine Learning. Birmingham, U.K.: Packt, 2016.
  4. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015.
  5. T. Kim, J. Lee, and J. Nam, “Comparison and analysis of SampleCNN architectures for audio classification,” IEEE Journal of Selected Topics in Signal Processing, vol. 13, no. 2, pp. 285–297, 2019.
  6. N. Pelchat and C. M. Gelowitz, “Neural Network Music Genre Classification,” Canadian Journal of Electrical and Computer Engineering, vol. 43, no. 3, pp. 170–173, Summer 2020, doi: 10.1109/CJECE.2020.2970144.
  7. J. R. Castillo and M. J. Flores, “Web-Based Music Genre Classification for Timeline Song Visualization and Analysis,” IEEE Access, vol. 9, pp. 18801–18816, 2021, doi: 10.1109/ACCESS.2021.3053864.
  8. W. W. Y. Ng, W. Zeng, and T. Wang, “Multi-Level Local Feature Coding Fusion for Music Genre Recognition,” IEEE Access, vol. 8, pp. 152713–152727, 2020, doi: 10.1109/ACCESS.2020.3017661.
  9. G. Peeters, 2021. Accessed: 15 Aug. 2021. [Online]. Available: https://ismir.net/resources/datasets/
  10. G. Tzanetakis, 2015. Accessed: 15 Aug. 2021. [Online]. Available: http://marsyas.info/downloads/datasets.html
  11. J. Thickstun, Z. Harchaoui, and S. M. Kakade, Nov. 30, 2016. Accessed: 15 Aug. 2021. [Online]. Available: https://zenodo.org/record/5120004#.YbnBRjNBzIV
  12. D. Yu, H. Duan, J. Fang, and B. Zeng, “Predominant Instrument Recognition Based on Deep Neural Network With Auxiliary Classification,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp. 852–861, 2020, doi: 10.1109/TASLP.2020.2971419.
  13. J. D. Deng, C. Simmermacher, and S. Cranefield, “A study on feature analysis for musical instrument classification,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 38, no. 2, pp. 429–438, Apr. 2008.
  14. E. Scheirer and M. Slaney, “Construction and Evaluation of a Robust Multifeature Speech/Music Discriminator,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 2, 1997, pp. 1221–1224.
