6.4 Practical potential
Based on the analysis results above, we plan to build an application that extracts structured data from resumes, which recruiters could use in practice.
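The envisioned recruiter-facing tool can be sketched in miniature. The snippet below is an illustrative, rule-based stand-in (the helper name `extract_contact_fields` and its regex patterns are hypothetical, not from the thesis): in a full system, these patterns would be replaced by the BERT-based extraction models discussed earlier, but the input/output shape of the application would be similar.

```python
import re

def extract_contact_fields(resume_text):
    """Naive rule-based extractor for a few common resume fields.

    Illustrative only: a production system would swap these regexes
    for a fine-tuned BERT token-classification model, keeping the
    same dict-of-fields output interface for the recruiter UI.
    """
    # Simple patterns for an email address and a phone number.
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", resume_text)
    phone = re.search(r"\+?\d[\d .-]{7,}\d", resume_text)
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

sample = "Tran Quoc Tinh - email: tinh.tran@example.com, phone: +84 912 345 678"
print(extract_contact_fields(sample))
```

The dictionary output makes it straightforward to feed the extracted fields into a candidate database or search index downstream.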
CURRICULUM VITAE
Full name: Trần Quốc Tính
Date of birth: 21/08/1997 Place of birth: Phú Yên
Contact address: 223 Nguyễn Xí, Ward 13, Bình Thạnh District, Ho Chi Minh City
EDUCATION
• Ho Chi Minh City University of Technology – undergraduate student, Faculty of Computer Science and Engineering (2015 – 2019)
• Ho Chi Minh City University of Technology – Master's student in Computer Science (2019 – present)
WORK EXPERIENCE
• From 06/2018 to 12/2019: worked at FPT Telecom.