Open Access
ITM Web Conf., Volume 29, 2019
1st International Conference on Computational Methods and Applications in Engineering (ICCMAE 2018)
Article Number: 03009
Number of pages: 8
Section: Applications in Information Technologies
DOI: https://doi.org/10.1051/itmconf/20192903009
Published online: 15 October 2019
- A. F. Bobick, J. W. Davis. The recognition of human movement using temporal templates. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(3), pp. 257–267 (2001). [CrossRef] [Google Scholar]
- I. Laptev. On space-time interest points. IJCV 64(2–3), pp. 107–123 (2005). [CrossRef] [Google Scholar]
- M. Andersson, L. Patino, G. J. Burghouts, A. Flizikowski, M. Evans, D. Gustafsson, H. Petersson, K. Schutte, J. Ferryman. Activity recognition and localization on a truck parking lot. Advanced Video and Signal Based Surveillance (2013). [Google Scholar]
- Z. Zivkovic. Improved adaptive Gaussian mixture model for background subtraction. ICPR (2004). [Google Scholar]
- S. S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems, Artech House (1999). [Google Scholar]
- D. Fortun, P. Bouthemy, C. Kervrann. Optical flow modeling and computation: a survey. CVIU 134, pp. 1–21 (2015). [Google Scholar]
- E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, T. Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. IEEE Conference on CVPR, Vol. 2 (2017). [Google Scholar]
- G. W. Taylor, R. Fergus, Y. LeCun, C. Bregler. Convolutional learning of spatio-temporal features. ECCV, Springer, Berlin, Heidelberg (2010). [Google Scholar]
- L. R. Medsker, L. C. Jain. Recurrent Neural Networks: Design and Applications 5 (2001). [Google Scholar]
- S. Herath, M. Harandi, F. Porikli. Going deeper into action recognition: A survey. Image and vision computing 60, pp. 4–21 (2017). [CrossRef] [Google Scholar]
- S. Ji, W. Xu, M. Yang, K. Yu. 3D convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(1), pp. 221–231 (2013). [CrossRef] [Google Scholar]
- A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, L. Fei-Fei. Large-scale video classification with convolutional neural networks. CVPR, pp. 1725–1732 (2014). [Google Scholar]
- J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proc. IEEE Conference on CVPR, pp. 2625–2634 (2015). [Google Scholar]
- S. Blunsden, R. B. Fisher. The BEHAVE video dataset: ground truthed video for multi-person behavior classification. Annals of the BMVA 4, pp. 1–12 (2010). [CrossRef] [Google Scholar]
- L. Patino, T. Cane, A. Vallee, J. Ferryman. PETS 2016: Dataset and challenge. Proceedings of the IEEE Conference on CVPR Workshops (2016). [Google Scholar]
- MPEG standard. Retrieved April 24, 2018, from the MPEG homepage: https://mpeg.chiariglione.org/. [Google Scholar]
- V. Kantorov, I. Laptev. Efficient feature extraction, encoding and classification for action recognition. Proceedings of the IEEE Conference on CVPR (2014). [Google Scholar]
- G. Varol, I. Laptev, C. Schmid. Long-term temporal convolutions for action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (2017). [Google Scholar]
- G. Farnebäck. Fast and accurate motion estimation using orientation tensors and parametric motion models. International Conference on Pattern Recognition, Vol. 1, pp. 135–139 (2000). [CrossRef] [Google Scholar]
- T. Zhang, W. Jia, B. Yang, J. Yang, X. He, Z. Zheng. MoWLD: a robust motion image descriptor for violence detection. Multimedia Tools and Applications 76(1), pp. 1419–1438 (2017). [Google Scholar]
- X. Cui, Q. Liu, M. Gao, D. N. Metaxas. Abnormal detection using interaction energy potentials. CVPR (2011). [Google Scholar]