Open Access
ITM Web Conf., Volume 12, 2017
The 4th Annual International Conference on Information Technology and Applications (ITA 2017)
Article Number: 03030
Number of pages: 5
Section: Session 3: Computer
DOI: https://doi.org/10.1051/itmconf/20171203030
Published online: 05 September 2017
