... the individual learner, ensuring that they can adapt to social life. TECHNIQUES THAT SUPPORT ACTIVE LEARNING BRAINSTORMING Free writing Listing/bulleting Clustering/mapping/webbing ... downplaying the role of others; being overly submissive; making oneself invisible; indifference. Active learning. Some concrete signs of an active learner: the learner takes charge...
... directions. 2 Active Learning 2.1 Pool-based Active Learning Our base framework of active learning is based on the algorithm of (Lewis and Gale, 1994), which is called pool-based active learning. ... propose methods of improving active learning for parsing by using a smaller constituent than a sentence as the unit that is selected at each iteration of active learning. Typically in active learning for parsing ... important general framework of active learning, since the selection strategy with large-margin classifiers (Section 2.2) is much simpler and seems more practical for active learning in Japanese dependency...
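The pool-based framework described above can be sketched as a simple query loop. This is a hedged illustration, not the implementation from any of the cited papers: the `train_fn`, `certainty_fn`, and `oracle` callables and the toy 1-D data are all assumptions made for the example.

```python
import random

def pool_based_active_learning(pool, train_fn, certainty_fn, oracle,
                               seed_size=2, iterations=3, batch_size=1):
    """Generic pool-based active learning loop: train on the labeled set,
    score the unlabeled pool with the current model, and ask the oracle
    (a human annotator) to label the least certain examples."""
    rng = random.Random(0)
    seed = rng.sample(pool, seed_size)
    labeled = [(x, oracle(x)) for x in seed]
    unlabeled = [x for x in pool if x not in seed]
    for _ in range(iterations):
        model = train_fn(labeled)
        # sort so the least certain examples come first, then query them
        unlabeled.sort(key=lambda x: certainty_fn(model, x))
        queries, unlabeled = unlabeled[:batch_size], unlabeled[batch_size:]
        labeled.extend((x, oracle(x)) for x in queries)
    return train_fn(labeled), labeled

# Toy illustration: 1-D points, true label is x > 5; the "model" is a
# threshold, and certainty is the distance from that threshold.
pool = list(range(11))
oracle = lambda x: x > 5

def train_fn(labeled):
    pos = [x for x, y in labeled if y]
    neg = [x for x, y in labeled if not y]
    return (min(pos) + max(neg)) / 2 if pos and neg else 0.0

certainty_fn = lambda model, x: abs(x - model)
model, labeled = pool_based_active_learning(pool, train_fn, certainty_fn, oracle)
```

After the loop, `labeled` holds the seed examples plus one oracle-labeled query per iteration; any classifier and certainty measure can be plugged into the same skeleton.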
... machine active learning with applications to text classification. Journal of Machine Learning Research (JMLR), 2:45–66. David Vickrey, Oscar Kipersztok, and Daphne Koller. 2010. An active learning ... stopping active learning based on stabilizing predictions and the need for user-adjustable stopping. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), ... Vijay-Shanker. 2009b. Taking into account the differences between actively and passively acquired data: The case of active learning with support vector machines for imbalanced datasets. In...
... tags, parse trees, and named entities. In this paper, we introduce multi-task active learning (MTAL), an active learning paradigm for multiple annotation tasks. We propose a new AL framework ... http://www.cis.upenn.edu/˜dbikel/software.html. Leo Breiman. 1996. Bagging predictors. Machine Learning, 24(2):123–140. David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. 1996. Active learning with statistical models. Journal of Artificial ... 2004. Multi-criteria-based active learning for named entity recognition. In Proceedings of ACL'04, pages 589–596. Min Tang, Xiaoqiang Luo, and Salim Roukos. 2001. Active learning for statistical...
... each adaptation iteration. The adaptation process using active learning is represented by the curve a, while applying count-merging with active learning is represented by the curve a-c. Note that ... al. (2006), where active learning was used successfully to reduce the annotation effort for WSD of 5 English verbs using coarse-grained evaluation. In that work, the authors only used active learning ... BC training examples and 406 WSJ adaptation examples per noun. 3 Active Learning For our experiments, we use naive Bayes as the learning algorithm. The knowledge sources we use include parts-of-speech,...
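Count-merging for domain adaptation is commonly described as up-weighting target-domain counts before combining them with source-domain counts. The following is a minimal sketch under that assumption; the `beta` weight, the dictionary representation, and the example counts are illustrative, not taken from the paper:

```python
def merge_counts(source_counts, target_counts, beta=3):
    """Combine feature counts from the source domain with counts from the
    (smaller) target domain, up-weighting the target counts by beta so
    the adapted model leans toward in-domain evidence."""
    merged = dict(source_counts)
    for feature, count in target_counts.items():
        merged[feature] = merged.get(feature, 0) + beta * count
    return merged

# e.g. merging source-domain counts with a few adaptation-domain counts
merged = merge_counts({"noun": 10, "verb": 4}, {"noun": 2, "adj": 1}, beta=3)
# merged == {"noun": 16, "verb": 4, "adj": 3}
```

The merged counts can then feed a count-based learner such as naive Bayes, which is what makes this combination with active learning cheap per adaptation iteration.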
... through active learning, in which the next semantic input to annotate is determined by the current model. The probabilistic nature of BAGEL allows the use of certainty-based active learning ... certainty-based active learning, for different values of k. As our dataset only contains two paraphrases per dialogue act, the same dialogue act can only be queried twice during the active learning ... models trained on 40 utterances using active learning (p = .15 for naturalness and p = .41 for informativeness). These results suggest that certainty-based active learning is beneficial for training...
... slow and difficult. One popular solution is active learning, which maximizes learning accuracy while minimizing labeling effort. In active learning, the learning algorithm itself selects unlabeled ... selection criterion Active Confident Learning (ACL). 4 Evaluation To evaluate our active learning methods we used a similar experimental setup to Tong and Koller (2001). Each active learning algorithm ... and are fast to train — an important property for interactive learning. Experimental validation on a number of datasets shows that active learning with confidence can improve standard methods. 2...
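One common way to let the learning algorithm "select unlabeled examples" itself is the least-confidence criterion: rank pool items by how low the probability of their most likely label is. This sketch is a generic illustration of that criterion, not the ACL criterion named in the text:

```python
def rank_by_least_confidence(prob_dists):
    """Given per-example label probability distributions, return example
    indices ordered from least to most confident, i.e. examples whose
    top-label probability is lowest come first."""
    uncertainty = [(1.0 - max(probs), i) for i, probs in enumerate(prob_dists)]
    uncertainty.sort(reverse=True)
    return [i for _, i in uncertainty]

# Three pool examples with binary label distributions:
order = rank_by_least_confidence([[0.9, 0.1], [0.55, 0.45], [0.7, 0.3]])
# → [1, 2, 0]: the near-even 0.55/0.45 example is queried first
```

Any probabilistic classifier that exposes per-class probabilities can supply `prob_dists`, which is why confidence-based selection pairs naturally with fast-to-train models.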
... the test set. The effectiveness of active learning is measured by comparing learning curves (i.e., test accuracy vs. number of training sentences) of active learning and random selection. 4.1 ... clustering results in a better learning curve. 4.4 Summary Result Figure 8 shows the best active learning result compared with that of random selection. The learning curve for active learning is obtained ... time) required for a statistical parser to achieve a satisfactory performance using active learning. Active learning has been studied in the context of many natural language processing (NLP) applications...
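Comparing learning curves as above amounts to asking, for a target accuracy, how many labeled sentences each selection strategy needs. A small helper makes that concrete; the two curves below are invented for illustration and do not reproduce the paper's results:

```python
def samples_to_reach(curve, target):
    """curve: (training_set_size, test_accuracy) pairs in increasing
    size order. Return the smallest size whose accuracy meets the
    target, or None if the curve never reaches it."""
    for size, accuracy in curve:
        if accuracy >= target:
            return size
    return None

# Hypothetical learning curves for active vs. random selection:
active_curve = [(100, 0.80), (200, 0.84), (300, 0.86), (400, 0.87)]
random_curve = [(100, 0.78), (200, 0.81), (300, 0.84), (400, 0.86)]
# Annotation saved at the 0.86 accuracy level:
saving = samples_to_reach(random_curve, 0.86) - samples_to_reach(active_curve, 0.86)
# saving == 100
```

Reading the saving off at a fixed accuracy level is exactly how "reduction in labeled examples" claims in the surrounding fragments are computed.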
... from passive learning, in which a classifier gets labeled examples randomly. Active learning is a general framework and does not depend on tasks or domains. It is expected that active learning will reduce ... 2001). Although there are many active learning methods with various classifiers such as a probabilistic classifier (McCallum and Nigam, 1998), we focus on active learning with Support Vector ... that active learning works quite well and it significantly reduces the number of labeled examples required. Let us see how many labeled examples are required to achieve 96.0% accuracy. In active learning...
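With large-margin classifiers such as SVMs, a standard selection strategy (in the spirit of Tong and Koller, 2001) queries the unlabeled examples closest to the separating hyperplane, i.e. those with the smallest absolute decision value. A sketch, with invented decision values for illustration:

```python
def margin_based_selection(decision_values, k):
    """Select the indices of the k unlabeled examples whose decision
    values are closest to zero, i.e. nearest the SVM hyperplane and
    therefore least certain under the current model."""
    order = sorted(range(len(decision_values)),
                   key=lambda i: abs(decision_values[i]))
    return order[:k]

# Signed decision values for four unlabeled examples:
picked = margin_based_selection([2.0, -0.1, 0.5, -1.5], k=2)
# → [1, 2]: the examples at distances 0.1 and 0.5 from the hyperplane
```

Because the decision value is a byproduct of classification, this strategy adds almost no cost per active learning iteration.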
... future work, we plan to incorporate triangulation into our active learning approach. 7 Conclusion This paper introduced the novel active learning task of adding a new language to an existing multilingual ... for active learning in the single language-pair setting, which we then applied to the multilingual sentence selection protocols. In the multilingual setting, a novel co-training method for active ... pair. In contrast, our co-training approach uses consensus translations, and our setting for active learning is very different from their semi-supervised setting. A Ph.D. proposal by Chris Callison-Burch...
... sentences into active sentences. Our method separates the training data by input (source) particle and uses machine learning for each particle. We also used numerous rich features for learning. ... Passive/Causative Sentences into Active Sentences Using Machine Learning, pages 115–125. Springer. Masaki Murata, Qing Ma, and Hitoshi Isahara. 2002. Comparison of three machine-learning methods ... passive sentences into active sentences. It separates the training data by input particle and uses machine learning for each particle. We also used numerous rich features for learning. Our method...