| | |
|---|---|
| Issue | ITM Web Conf., Volume 85 (2026): Intelligent Systems for a Sustainable Future (ISSF 2026) |
| Article Number | 03011 |
| Number of page(s) | 9 |
| Section | Data Science, IoT, Optimization & Predictive Analytics |
| DOI | https://doi.org/10.1051/itmconf/20268503011 |
| Published online | 09 April 2026 |
An Empirical Comparative Analysis of LSTM-Based Time Series Models for Skill Gap Forecasting
1 Research Scholar, School of Computing Sciences, Vels Institute of Science, Technology and Advanced Studies (VISTAS), Chennai, Tamil Nadu, India. & Dean of Placement, Shrimathi Devkunvar Nanalal Bhatt Vaishnav College for Women (Autonomous), Chennai, Tamil Nadu, India
2 Professor, Department of Applied Computing and Emerging Technologies, Vels Institute of Science, Technology and Advanced Studies (VISTAS), Chennai 600117, Tamil Nadu, India
Abstract
Ongoing skills mismatches between available talent pools and changing business needs remain an important obstacle to organisational productivity and country-level economic competitiveness. Predicting where emerging competency gaps may eventually translate into business process failures requires tools that can effectively extract temporal patterns from demand sequences. In this paper, we report the results of an empirical investigation into the performance of seven sequence architectures, namely Vanilla LSTM, Stacked LSTM, Bidirectional LSTM, CNN-LSTM, LSTM with Bahdanau Attention, GRU, and Transformer, on four synthetic demand sequences spanning ninety-six months for the Technology, Healthcare, Finance, and Manufacturing industries. Using identical evaluation protocols for all architectures, we show that the LSTM with Bahdanau Attention yields the minimum forecasting error on all four metrics: RMSE = 0.0643, MAE = 0.0472, MAPE = 5.09%, R² = 0.9142. Notably, the LSTM with Bahdanau Attention reduces RMSE by 21.9% relative to the baseline Vanilla LSTM. Using the Wilcoxon signed-rank test, we validate the statistical significance of the pairwise differences between architectures. Moreover, the standard deviation over five independent training runs remains below 0.001 for every architecture. These results offer evidence-based, metric-driven recommendations for the construction of the next generation of workforce intelligence forecasting systems.
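The four error metrics reported in the abstract have standard definitions; the paper does not include its evaluation code, so the following is a minimal illustrative sketch (pure Python, not the authors' implementation) of how RMSE, MAE, MAPE, and R² are typically computed from a sequence of true and predicted values:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error: sqrt of the mean squared residual."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the residuals."""
    n = len(y_true)
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n

def mape(y_true, y_pred):
    """Mean absolute percentage error (requires nonzero true values)."""
    n = len(y_true)
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

For the pairwise significance checks the abstract mentions, a common approach is `scipy.stats.wilcoxon` applied to the paired per-fold (or per-series) errors of two architectures; the details of the authors' test setup are not given on this page.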
© The Authors, published by EDP Sciences, 2026
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

