Open Access
ITM Web Conf., Volume 69, 2024
International Conference on Mobility, Artificial Intelligence and Health (MAIH2024)
Article Number: 01002
Number of page(s): 6
Section: Artificial Intelligence
DOI: https://doi.org/10.1051/itmconf/20246901002
Published online: 13 December 2024
- E. Glikson and A. W. Woolley, "Human trust in artificial intelligence: Review of empirical research", Academy of Management Annals, vol. 14, no. 2, pp. 627–660, 2020.
- K. A. Hoff and M. Bashir, "Trust in automation: Integrating empirical evidence on factors that influence trust", Human Factors, vol. 57, no. 3, pp. 407–434, 2015.
- "AI Index Report 2023 – Artificial Intelligence Index". [Online]. Available at: https://aiindex.stanford.edu/report/
- D. Gunning and D. Aha, "DARPA's explainable artificial intelligence (XAI) program", AI Magazine, vol. 40, no. 2, pp. 44–58, 2019.
- S. M. Lundberg and S.-I. Lee, "A unified approach to interpreting model predictions", in Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, pp. 4768–4777.
- M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why should I trust you?' Explaining the predictions of any classifier", in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
- E. Doumard, J. Aligon, E. Escriva, J.-B. Excoffier, P. Monsarrat, and C. Soulé-Dupuy, "A comparative study of additive local explanation methods based on feature influences", in 24th International Workshop on Design, Optimization, Languages and Analytical Processing of Big Data (DOLAP 2022), CEUR-WS.org, 2022, pp. 31–40. [Online]. Available at: https://hal.science/hal-03687554/
- A. B. Arrieta et al., "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI", Information Fusion, vol. 58, pp. 82–115, 2020.
- L. Longo, R. Goebel, F. Lecue, P. Kieseberg, and A. Holzinger, "Explainable artificial intelligence: Concepts, applications, research challenges and visions", in International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Springer, 2020, pp. 1–16.
- A. Adadi and M. Berrada, "Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)", IEEE Access, vol. 6, pp. 52138–52160, 2018.
- P. Hase and M. Bansal, "Evaluating explainable AI: Which algorithmic explanations help users predict model behavior?", arXiv preprint arXiv:2005.01831, 2020.
- Z. C. Lipton, "The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery", Queue, vol. 16, no. 3, pp. 31–57, 2018.
- T. Miller, "Explanation in artificial intelligence: Insights from the social sciences", Artificial Intelligence, vol. 267, pp. 1–38, 2019.
- F. Doshi-Velez and B. Kim, "Towards a rigorous science of interpretable machine learning", arXiv preprint arXiv:1702.08608, 2017.
- C. Rudin, "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead", Nature Machine Intelligence, vol. 1, no. 5, pp. 206–215, 2019.
- V. Sumanasena et al., "Artificial Intelligence for Electric Vehicle Infrastructure: Demand Profiling, Data Augmentation, Demand Forecasting, Demand Explainability and Charge Optimisation", Energies, vol. 16, no. 5, p. 2245, 2023.
- R. Yan and S. Wang, "Ship detention prediction using anomaly detection in port state control: model and explanation", Electron. Res. Arch., vol. 30, pp. 3679–3691, 2022.
- J. Hangl, S. Krause, and V. J. Behrens, "Drivers, barriers and social considerations for AI adoption in SCM", Technology in Society, vol. 74, p. 102299, 2023.
- E. Rajabi, S. Nowaczyk, S. Pashami, M. Bergquist, G. S. Ebby, and S. Wajid, "A Knowledge-Based AI Framework for Mobility as a Service", Sustainability, vol. 15, no. 3, p. 2717, Feb. 2023, DOI: 10.3390/su15032717.
- F. Olan, K. Spanaki, W. Ahmed, and G. Zhao, "Enabling explainable artificial intelligence capabilities in supply chain decision support making", Production Planning & Control, pp. 1–12, Feb. 2024, DOI: 10.1080/09537287.2024.2313514.
- S. Bhatia and A. S. Albarrak, "A blockchain-driven food supply chain management using QR code and XAI-faster RCNN architecture", Sustainability, vol. 15, no. 3, p. 2579, 2023.
- S. Verma, J. Dickerson, and K. Hines, "Counterfactual explanations for machine learning: A review", arXiv preprint arXiv:2010.10596, vol. 2, 2020. [Online]. Available at: https://ml-retrospectives.github.io/neurips2020/cameraready/5.pdf
- J. van der Waa, E. Nieuwburg, A. Cremers, and M. Neerincx, "Evaluating XAI: A comparison of rule-based and example-based explanations", Artificial Intelligence, vol. 291, p. 103404, 2021.
- M. A. Jahin, M. S. H. Shovon, M. S. Islam, J. Shin, M. F. Mridha, and Y. Okuyama, "QAmplifyNet: pushing the boundaries of supply chain backorder prediction using interpretable hybrid quantum-classical neural network", Scientific Reports, vol. 13, no. 1, p. 18246, 2023.
- S. Laato, M. Tiainen, A. K. M. Najmul Islam, and M. Mäntymäki, "How to explain AI systems to end users: a systematic literature review and research agenda", Internet Research, vol. 32, no. 7, pp. 1–31, Jan. 2022, DOI: 10.1108/INTR-08-2021-0600.
- W. H. DeLone and E. R. McLean, "Information Systems Success: The Quest for the Dependent Variable", Information Systems Research, vol. 3, no. 1, pp. 60–95, Mar. 1992, DOI: 10.1287/isre.3.1.60.
- W. H. DeLone and E. R. McLean, "The DeLone and McLean Model of Information Systems Success: A Ten-Year Update", Journal of Management Information Systems, vol. 19, no. 4, pp. 9–30, Apr. 2003, DOI: 10.1080/07421222.2003.11045748.
- T. El Oualidi and S. Assar, "Does AI Explainability Meet End-Users' Requirements? Insights From A Supply Chain Management Case Study", ECIS 2024 TREOS, June 2024. [Online]. Available at: https://aisel.aisnet.org/treos_ecis2024/43
- M. Polanyi, "The tacit dimension", in Knowledge in Organisations, Routledge, 2009, pp. 135–146.
- A. J. Karran, T. Demazure, A. Hudon, S. Senecal, and P.-M. Léger, "Designing for Confidence: The Impact of Visualizing Artificial Intelligence Decisions", Front. Neurosci., vol. 16, June 2022, DOI: 10.3389/fnins.2022.883385.
- K. Eberhard, "The effects of visualization on judgment and decision-making: a systematic literature review", Manag. Rev. Q., vol. 73, no. 1, pp. 167–214, Feb. 2023, DOI: 10.1007/s11301-021-00235-8.
- S. Lechler, A. Canzaniello, B. Roßmann, H. A. von der Gracht, and E. Hartmann, "Real-time data processing in supply chain management: revealing the uncertainty dilemma", International Journal of Physical Distribution & Logistics Management, vol. 49, no. 10, pp. 1003–1019, Jan. 2019, DOI: 10.1108/IJPDLM-12-2017-0398.
- "One Size Does Not Fit All", Center for Security and Emerging Technology. [Online]. Available at: https://cset.georgetown.edu/publication/one-size-does-not-fit-all/
- H. A. Simon, "Bounded rationality in social science: Today and tomorrow", Mind & Society, vol. 1, no. 1, pp. 25–39, Mar. 2000, DOI: 10.1007/BF02512227.
- L. Waardenburg and M. Huysman, "From coexistence to co-creation: Blurring boundaries in the age of AI", Information and Organization, vol. 32, no. 4, p. 100432, 2022.
- "Explicabilité (IA)" [Explainability (AI)], CNIL. [Online]. Available at: https://www.cnil.fr/fr/definition/explicabilite-ia
- O. Biran and C. Cotton, "Explanation and justification in machine learning: A survey", in IJCAI-17 Workshop on Explainable AI (XAI), 2017, pp. 8–13.
- U. Ehsan, Q. V. Liao, M. Muller, M. O. Riedl, and J. D. Weisz, "Expanding explainability: Towards social transparency in AI systems", in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021, pp. 1–19.
- Y. Li and J. Hahn, "Review of Research on Human Trust in Artificial Intelligence", 2022. Accessed: December 9, 2023. [Online]. Available at: https://aisel.aisnet.org/icis2022/aibusiness/aibusiness/8/
- K. Bauer, M. von Zahn, and O. Hinz, "Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users' Information Processing", Information Systems Research, vol. 34, no. 4, pp. 1582–1602, Dec. 2023, DOI: 10.1287/isre.2023.1199.
- "AI Act | Shaping Europe's digital future". [Online]. Available at: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai