References
[1] “Mô hình ngôn ngữ - Xử lý ngôn ngữ tự nhiên (Trường đại học khoa học kỹ thuật Nagaoka)” [Language models - Natural language processing (Nagaoka University of Technology)]. https://sites.google.com/a/jnlp.org/viet/kien-thuc-co-ban-ve-xu-ly-ngon-ngu-tu-nhien/mo-hinh-ngon-ngu (accessed Nov. 25, 2020).
|
[2] A. Sherstinsky, “Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network,” Physica D: Nonlinear Phenomena, vol. 404, p. 132306, Mar. 2020, doi: 10.1016/j.physd.2019.132306.
|
[3] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, Nov. 1997, doi: 10.1162/neco.1997.9.8.1735.
|
[4] A. Vaswani et al., “Attention is All you Need,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 5998–6008.
|
[5] R. Paulus, C. Xiong, and R. Socher, “A Deep Reinforced Model for Abstractive Summarization,” presented at the International Conference on Learning Representations, Feb. 2018, Accessed: Oct. 22, 2020. [Online]. Available: https://openreview.net/forum?id=HkAClQgA-
|
[7] J. Cheng, L. Dong, and M. Lapata, “Long Short-Term Memory-Networks for Machine Reading,” in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, Nov. 2016, pp. 551–561, doi: 10.18653/v1/D16-1053.
|
[8] A. Parikh, O. Täckström, D. Das, and J. Uszkoreit, “A Decomposable Attention Model for Natural Language Inference,” in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, Nov. 2016, pp. 2249–2255, doi: 10.18653/v1/D16-1244.
|
[9] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, Jun. 2019, pp. 4171–4186, doi: 10.18653/v1/N19-1423.
|
[10] Y. Wu et al., “Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation,” arXiv:1609.08144, Sep. 2016.
|
[11] Y. Zhu et al., “Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books,” in Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), USA, Dec. 2015, pp. 19–27, doi: 10.1109/ICCV.2015.11.
|
[12] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. Bowman, “GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding,” in Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium, Nov. 2018, pp. 353–355, doi: 10.18653/v1/W18-5446.
|
[14] M. Mohri, “Weighted Finite-State Transducer Algorithms: An Overview,” in Formal Languages and Applications, vol. 148, Jan. 2004, doi: 10.1007/978-3-540-39886-8_29.
|
[15] “Maximum Entropy Fundamentals.” https://www.researchgate.net/publication/228873617_Maximum_Entropy_Fundamentals (accessed Nov. 24, 2020).
|
[17] D. D. Pham, G. B. Tran, and S. B. Pham, “A Hybrid Approach to Vietnamese Word Segmentation Using Part of Speech Tags,” in Proceedings of the 2009 International Conference on Knowledge and Systems Engineering, USA, Oct. 2009, pp. 154–161, doi: 10.1109/KSE.2009.44.
|
[18] P. Wang, Y. Qian, F. K. Soong, L. He, and H. Zhao, “Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Recurrent Neural Network,” arXiv:1510.06168 [cs], Oct. 2015, Accessed: Nov. 24, 2020. [Online]. Available: http://arxiv.org/abs/1510.06168
|
[19] M. Silfverberg, T. Ruokolainen, K. Lindén, and M. Kurimo, “Part-of-Speech Tagging using Conditional Random Fields: Exploiting Sub-Label Dependencies for Improved Accuracy,” in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Baltimore, Maryland, 2014, pp. 259–264, doi: 10.3115/v1/P14-2043.
|
[22] T. Mikolov, K. Chen, G. S. Corrado, and J. Dean, “Efficient Estimation of Word Representations in Vector Space,” arXiv:1301.3781, 2013.
|
[23] M. Tan, C. dos Santos, B. Xiang, and B. Zhou, “LSTM-based Deep Learning Models for Non-factoid Answer Selection,” arXiv:1511.04108 [cs], Mar. 2016, Accessed: Oct. 21, 2020. [Online]. Available: http://arxiv.org/abs/1511.04108
|
[24] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv:1409.0473, 2014.
|
[25] A. M. Rush, S. Chopra, and J. Weston, “A Neural Attention Model for Abstractive Sentence Summarization,” in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, Sep. 2015, pp. 379–389, doi: 10.18653/v1/D15-1044.
|