References | Type | Details
[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “LeNet,” Proc. IEEE, 1998 | Book/Journal | Title: “LeNet,” Proc. IEEE
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “2012 AlexNet,” Adv. Neural Inf. Process. Syst., 2012 | Book/Journal | Title: “2012 AlexNet,” Adv. Neural Inf. Process. Syst.
[3] C. Szegedy et al., “GoogLeNet,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 2014 | Book/Journal | Title: “GoogLeNet,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.
[4] K. Simonyan and A. Zisserman, “VGG-16,” arXiv Prepr., 2014 | Book/Journal | Title: “VGG-16,” arXiv Prepr.
[5] K. He, X. Zhang, S. Ren, and J. Sun, “ResNet,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 2016 | Book/Journal | Title: “ResNet,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.
[6] X. Zhang, X. Zhou, M. Lin, and J. Sun, “ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices,” 2018, doi: 10.1109/CVPR.2018.00716 | Book/Journal | Title: “ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices”
[7] T. B. Brown et al., “Language models are few-shot learners,” arXiv, 2020 | Book/Journal | Title: “Language models are few-shot learners,” arXiv
[8] A. Gajurel, S. J. Louis, and F. C. Harris, “GPU Acceleration of Sparse Neural Networks,” arXiv, 2020 | Book/Journal | Title: “GPU Acceleration of Sparse Neural Networks,” arXiv
[9] R. Rojas, “The Backpropagation Algorithm,” in Neural Networks, 1996 | Book/Journal | Title: “The Backpropagation Algorithm,” in Neural Networks
[10] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet,” Adv. Neural Inf. Process. Syst. 25, 2012 | Book/Journal | Title: “ImageNet,” Adv. Neural Inf. Process. Syst. 25
[11] M. H. Zhu and S. Gupta, “To prune, or not to prune: Exploring the efficacy of pruning for model compression,” arXiv, 2017 | Book/Journal | Title: “To prune, or not to prune: Exploring the efficacy of pruning for model compression,” arXiv
[12] S. Han, J. Pool, J. Tran, and W. J. Dally, “Learning both weights and connections for efficient neural networks,” 2015 | Book/Journal | Title: “Learning both weights and connections for efficient neural networks”
[13] B. Dai, C. Zhu, B. Guo, and D. Wipf, “Compressing neural networks using the variational information bottleneck,” 2018 | Book/Journal | Title: “Compressing neural networks using the variational information bottleneck”
[14] C. Louizos, K. Ullrich, and M. Welling, “Bayesian compression for deep learning,” 2017 | Book/Journal | Title: “Bayesian compression for deep learning”
[15] C. Louizos, M. Welling, and D. P. Kingma, “Learning sparse neural networks through L0 regularization,” arXiv, 2017 | Book/Journal | Title: “Learning sparse neural networks through L0 regularization,” arXiv
[16] D. Molchanov, A. Ashukha, and D. Vetrov, “Variational dropout sparsifies deep neural networks,” 2017 | Book/Journal | Title: “Variational dropout sparsifies deep neural networks”
[17] M. Liang and X. Hu, “Recurrent convolutional neural network for object recognition,” 2015, doi: 10.1109/CVPR.2015.7298958 | Book/Journal | Title: “Recurrent convolutional neural network for object recognition”
[18] R. Girshick, “Fast R-CNN,” 2015, doi: 10.1109/ICCV.2015.169 | Book/Journal
[19] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN,” IEEE Trans. Pattern Anal. Mach. Intell., 2017 | Book/Journal | Title: “Faster R-CNN,” IEEE Trans. Pattern Anal. Mach. Intell.
[20] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” 2017, doi: 10.1109/ICCV.2017.322 | Book/Journal