
Lecture Notes in Computer Science - P32


…concept support-activity in LD. An assessment process model based on APS can be transformed into an executable model represented in LD and QTI. Thus, we should be able to use an integrated LD and QTI run-time environment to execute various forms of assessment based on APS. In addition, APS will be organized using the IMS Content Package specification. It can use IEEE Learning Object Metadata (LOM) to describe the metadata of elements in APS. Moreover, the IMS Reusable Definition of Competency or Educational Objectives can be used to specify traits and assessment objectives. The IMS ePortfolio can be used to model portfolios (coupled with artifacts in APS) and to integrate a portfolio editor. The IMS Learner Information Profile can be used to import global properties from a run-time environment and export them back to it. IMS Enterprise can be used for mapping roles when instantiating a UoA. Therefore, APS is compatible with most existing, relevant e-learning technical specifications.

5 Conclusions and Future Work

This paper addressed the problems one faces when attempting to use QTI and LD to support the management of assessment processes, in particular formative assessment and competence assessment. In order to support the sharing of assessment process information in an interoperable, abstract, and efficient way, we developed APS as a high-level, assessment-specific process modeling language. We developed the conceptual model of APS by adopting a domain-specific modeling approach. The conceptual model has been described by detailing the semantics aggregation model, the conceptual structure model, and the process structure model. A first validation study was conducted by investigating whether the conceptual model of APS meets the requirements of completeness, flexibility, adaptability, and compatibility. The results suggest that the model does indeed do so.

APS should meet additional requirements (e.g., reproducibility, formalization, and reusability), which we intend to investigate after the development of the information model and XML Schema binding. In order to enable practitioners to easily design and customize their own assessment process models, an authoring tool for modeling assessment processes with APS will be developed in the near future. In order to execute an instantiated model in existing LD- and QTI-compatible run-time environments, transformation functions have to be developed as well. We will then carry out experiments to investigate the feasibility and usability of APS and the corresponding authoring tool. Finally, we will propose APS as a candidate new open e-learning technical standard.

Acknowledgments. The work described in this paper has been fully supported by the European Commission under the TENCompetence project [project No: IST-2004-02787].

References

1. Almond, R.G., Steinberg, L., Mislevy, R.J.: A sample assessment using the four process framework. CSE Report 543, Center for the Study of Evaluation, University of California, Los Angeles (2001)
2. APIS: http://www.elearning.ac.uk/resources/1apisoverview
3. AQuRate: http://aqurate.kingston.ac.uk/index.htm
4. Biggs, J.B.: Teaching for Quality Learning at University. Society for Research into Higher Education & Open University Press, Buckingham (1999)
5. Black, P., Wiliam, D.: Assessment and classroom learning. Assessment in Education 5(1), 7–74 (1998)
6. Boud, D.: Enhancing Learning through Self-Assessment. Routledge (1995)
7. Boud, D., Cohen, R., et al.: Peer Learning and Assessment. Assessment and Evaluation in Higher Education 24(4), 413–426 (1999)
8. Bransford, J., Brown, A., Cocking, R.: How People Learn: Mind, Brain, Experience and School, Expanded Edition. National Academy Press, Washington (2000)
9. Brinke, D.J., Van Bruggen, J., Hermans, H., Latour, I., Burgers, J., Giesbers, B., Koper, R.: Modeling assessment for re-use of traditional and new types of assessment. Computers in Human Behavior 23, 2721–2741 (2007)
10. Brown, S., Knight, P.: Assessing Learners in Higher Education. Kogan Page, London (1994)
11. Freeman, M., McKenzie, J.: Implementing and evaluating SPARK, a confidential web-based template for self and peer assessment of student teamwork: benefits of evaluating across different subjects. British Journal of Educational Technology 33(5), 553–572 (2002)
12. Gehringer, E.F.: Electronic peer review and peer grading in computer-science courses. In: Proceedings of the 32nd ACM SIGCSE Technical Symposium on Computer Science Education, Charlotte, North Carolina (2001)
13. Gipps, C.: Socio-cultural perspective on assessment. Review of Research in Education 24, 355–392 (1999)
14. Koper, E.J.R.: Modelling Units of Study from a Pedagogical Perspective: the Pedagogical Meta-model behind EML (provided as input for the IMS Learning Design). Educational Technology Expertise Centre, Open University of the Netherlands (2001), http://hdl.handle.net/1820/36
15. Koper, R., Olivier, B.: Representing the Learning Design of Units of Learning. Journal of Educational Technology & Society 7(3), 97–111 (2004)
16. LD: http://www.imsglobal.org/learningdesign/index.cfm
17. Lockyer, J.: Multisource feedback in the assessment of physician competencies. Journal of Continuing Education in the Health Professions 23(1), 4–12 (2003)
18. Miao, Y., Koper, R.: An Efficient and Flexible Technical Approach to Develop and Deliver Online Peer Assessment. In: Proceedings of CSCL 2007, New Jersey, USA, pp. 502–511 (2007)
19. Miao, Y., Koper, R.: A Domain-specific Modeling Approach to the Development of Online Peer Assessment. In: Navarette, T., Blat, J., Koper, R. (eds.) Proceedings of the 3rd TENCompetence Open Workshop on Current Research on IMS Learning Design and Lifelong Competence Development Infrastructures, Barcelona, Spain, pp. 81–88 (2007), http://hdl.handle.net/1820/1098
20. QTI: http://www.imsglobal.org/question/index.html
21. QuestionMark: http://www.questionmark.com/uk/index.aspx
22. Wills, G., Davis, H., Chennupati, S., Gilbert, L., Howard, Y., Jam, E.R., Jeyes, S., Millard, D., Sherratt, R., Willingham, G.: R2Q2: Rendering and Responses Processing for QTIv2 Question Types. In: Danson, M. (ed.) Proceedings of the 10th International Computer Assisted Assessment Conference, pp. 515–522. Loughborough University, UK (2006)
23. Stiggins, R.J.: Het ontwerpen en ontwikkelen van performance-assessment toetsen [Design and development of performance assessments]. In: Kessels, J.W.M., Smit, C.A. (eds.) Opleiders in organisaties/Capita Selecta, afl. 10, pp. 75–91. Kluwer, Deventer (1992)
24. TENCompetence project: http://www.tencompetence.org
25. Topping, K.J.: Peer assessment between students in colleges and universities. Review of Educational Research 68, 249–276 (1998)
Computer-Aided Generation of Item Banks Based on Ontology and Bloom's Taxonomy

Ming-Hsiung Ying 1 and Heng-Li Yang 2

1 Department of MIS, Chung-Hua University, 707, Sec. 2, WuFu Rd., HsinChu, Taiwan
2 Department of MIS, National Cheng-Chi University, 64, Sec. 2, Chihnan Rd., Taipei, Taiwan
mhying@chu.edu.tw, yanh@nccu.edu.tw

Abstract. Online learning and testing are important topics in information education. Students can take online tests to assess their achievement of learning goals. However, the test results should not only assign student scores but also assess their achievement of knowledge and cognition levels. Teachers currently need to spend considerable time on producing and maintaining online test items. This study applied ontology, a Chinese semantic database, artificial intelligence, and Bloom's taxonomy to propose a CAGIS e-learning system architecture that assists teachers in creating test items. As a result, the computer assisted teachers in producing a large number of test items quickly. These test items covered three types of knowledge and five dimensions of cognitive skills, and they could meaningfully assess learning levels.

Keywords: Online Test, Test Item Bank, Bloom's Taxonomy, Ontology, Semantic Web.

1 Introduction and Related Works

Online learning and subsequent testing have been important topics in information education. Because education is intended to change students' behaviors, teachers must use tests well to assess student achievements. Computer-based testing has numerous benefits, including data-rich test results, immediate test feedback, convenient test times and locations, and so on [1].

Teaching goals should be considered when designing test items. According to education testing theory, educational goals can be classified into three different fields: the cognitive field, the emotional field, and movement ability [2]. Types of instruction assessment can be grounded in types of knowledge. Three distinct knowledge types require assessment: declarative (knowing what/knowing about), procedural (knowing how), and conditional (knowing why and when) [3]. Bloom identified six levels within the cognitive domain: knowledge, comprehension, application, analysis, synthesis, and evaluation [4]. Anderson and Krathwohl [5] revised the original taxonomy of Bloom by combining both the cognitive process and knowledge dimensions. The revised Bloom's taxonomy comprises a two-dimensional table. One dimension identifies the knowledge (the kind of knowledge to be learned), while the other identifies the cognitive process (the process used to learn). The knowledge dimension comprises four levels: factual, conceptual, procedural, and meta-cognitive. The cognitive process dimension comprises six levels: remember, understand, apply, analyze, evaluate, and create. This expanded taxonomy can help instructional designers and teachers set meaningful learning objectives, and it provides a measurement tool for thinking.

Creating and maintaining an item bank is time-consuming. When the item bank contains an insufficient number of items, the exposure frequencies of items may be too high and students may directly recall the answers [6]. Therefore, how to prepare sufficient items in the bank and how to generate items efficiently have become important research issues [7]. Deveszic [8] proposed developing Web-based educational applications with more theory and content-oriented intelligence.
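As a concrete illustration of the two-dimensional structure described above, the following Python sketch tags test items with a cell of the revised taxonomy table. The enumeration values follow Anderson and Krathwohl's revision as summarized here; the TestItem class and the sample item are hypothetical illustrations for this note, not part of any system described in the paper.

```python
from dataclasses import dataclass
from enum import Enum

class Knowledge(Enum):
    FACTUAL = "factual"
    CONCEPTUAL = "conceptual"
    PROCEDURAL = "procedural"
    METACOGNITIVE = "meta-cognitive"

class CognitiveProcess(Enum):
    REMEMBER = 1
    UNDERSTAND = 2
    APPLY = 3
    ANALYZE = 4
    EVALUATE = 5
    CREATE = 6

@dataclass
class TestItem:
    """A test item tagged with one cell of the two-dimensional taxonomy table."""
    stem: str
    item_type: str              # e.g. "true-false", "multiple-choice"
    knowledge: Knowledge        # which kind of knowledge is assessed
    process: CognitiveProcess   # which cognitive process is exercised

# Example: a simple recall question sits in the (factual, remember) cell.
item = TestItem(
    stem="An information system consists of hardware, software, data, "
         "procedures, and people. True or false?",
    item_type="true-false",
    knowledge=Knowledge.FACTUAL,
    process=CognitiveProcess.REMEMBER,
)
print(item.knowledge.value, item.process.name.lower())
```

Tagging items against both dimensions is what would allow a bank to report coverage by knowledge type and cognitive level rather than only a total score.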
To increase the effectiveness of testing systems, numerous researchers have applied artificial intelligence, fuzzy theory, and other techniques. If information techniques are properly applied, numerous complex issues can be addressed, such as test item selection, item generation, scoring, explanation, and test feedback, to enhance education and learning [9-15]. This study claims that computers can aid item generation in e-learning environments if the material is first stored according to a knowledge ontology structure and semantic relations. An intelligent online learning system has been proposed to resolve the above problems.

2 Proposed System Architecture

To propose a system architecture for computer-aided item bank generation, this study followed these steps: (1) conducting a pilot study to explore the difficulty faced by teachers in manually creating items, and analyzing the item types; (2) developing course material knowledge and item structure ontologies that incorporate the concepts of Bloom's taxonomy; (3) creating a knowledge base related to online course materials; (4) developing a prototype of the computer-aided generation of items system (CAGIS).

2.1 A Pilot Study Exploring the Difficulty of Manual Item Creation

Fifteen university teachers from 11 different universities, all of whom had taught "management information system" courses, participated in the pilot study. These teachers were given two weeks to create test items from specific chapters of a textbook. The test items were required to include four types: true-false, multiple-choice, multiple-response, and fill-in-the-blank. No upper limit constrained the quantity of test items. The teachers produced 440 items manually, with the average time taken to complete the task being 4.3 hours. After deleting duplicate items, 386 items remained, as shown in Table 1. The knowledge types of those items included factual, conceptual, and procedural knowledge, and their cognitive levels included remember, understand, analyze, and evaluate. The specific chapters contained no knowledge content suitable for generating items at the "apply" level. Some teachers indicated that it would be very difficult to generate "create"-level items using the true-false, multiple-choice, multiple-response, and fill-in-the-blank question types.
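The generation step itself can be sketched in a similar spirit. The fragment below is a hypothetical illustration of how true-false and fill-in-the-blank items might be derived from concept-relation triples taken from a course ontology; the triple format, sentence templates, and function names are assumptions made for this sketch and do not reproduce the actual CAGIS implementation or its Chinese semantic database.

```python
import random

# Hypothetical ontology fragment: (concept, relation, value) triples
# extracted from course material.
TRIPLES = [
    ("ERP system", "integrates", "core business processes"),
    ("data warehouse", "stores", "historical data for analysis"),
    ("decision support system", "supports", "semi-structured decisions"),
]

def true_false_item(triple, distort=False, pool=TRIPLES):
    """Build a true-false item; optionally swap in a wrong value to make it false."""
    concept, relation, value = triple
    if distort:
        wrong = random.choice([v for _, _, v in pool if v != value])
        return (f"{concept} {relation} {wrong}. (True/False)", False)
    return (f"{concept} {relation} {value}. (True/False)", True)

def fill_in_blank_item(triple):
    """Build a fill-in-the-blank item by blanking out the concept."""
    concept, relation, value = triple
    return (f"A(n) ______ {relation} {value}.", concept)

if __name__ == "__main__":
    for t in TRIPLES:
        print(true_false_item(t, distort=random.random() < 0.5))
        print(fill_in_blank_item(t))
```

A fuller generator would presumably also attach knowledge-type and cognitive-level tags to each generated item and choose distractors by semantic similarity; the sketch only shows the core idea of instantiating item templates over ontology relations.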
