Ebook Evaluation and testing in nursing education (5/E): Part 2


DOCUMENT INFORMATION

Basic information

Format
Number of pages: 227
File size: 4 MB
Contents

Part 2 of the ebook "Evaluation and Testing in Nursing Education" covers the following topics: testing and evaluation in online courses and programs; scoring and analyzing tests; clinical evaluation; clinical evaluation methods; program evaluation; grading; interpreting test scores; and social, ethical, and legal issues, among other contents.

ELEVEN

Testing and Evaluation in Online Courses and Programs

Contemporary nursing students expect educational institutions to provide flexible instructional methods that help them balance their academic, employment, family, and personal commitments (Jones & Wolf, 2010). Online education has rapidly developed as a potential solution to these demands. The growth rate of online student enrollment in all disciplines has far exceeded the growth rate of traditional course student enrollment in United States higher education (Allen & Seaman, 2015). Over 5.2 million students enrolled in at least one college-level online course during the fall 2013 academic term, with the proportion of all students taking at least one online course at an all-time high of 32.0% (Allen & Seaman, 2015). In nursing, the American Association of Colleges of Nursing (2016) reported that 173 registered nurse (RN)-to-master's degree programs and more than 400 RN-to-bachelor of science in nursing programs were offered at least partially online.

For the purposes of this chapter, online courses are those in which at least 80% of the course content is delivered online. Face-to-face courses are those in which 0% to 29% of the content is delivered online; this category includes both traditional and web-facilitated courses. Blended (sometimes called hybrid) courses have between 30% and 80% of the course content delivered online (Allen & Seaman, 2015). Examples of various course management systems used for online courses include Blackboard, Desire2Learn, and Moodle.
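To make the category boundaries above concrete, the following minimal Python sketch turns a course's percentage of online content into a classification rule. The function name and the treatment of a course at exactly 80% are assumptions for illustration, not part of the Allen and Seaman (2015) definitions.

```python
# Minimal sketch only. The thresholds follow the Allen & Seaman (2015)
# categories described above; everything else is illustrative.
def classify_course(percent_online: float) -> str:
    """Classify a course by the share of its content delivered online."""
    if not 0 <= percent_online <= 100:
        raise ValueError("percent_online must be between 0 and 100")
    if percent_online >= 80:      # at least 80% of content delivered online
        return "online"
    if percent_online >= 30:      # between 30% and 80% delivered online
        return "blended (hybrid)"
    return "face-to-face (traditional or web-facilitated)"  # 0% to 29%

print(classify_course(85))  # online
print(classify_course(50))  # blended (hybrid)
print(classify_course(10))  # face-to-face (traditional or web-facilitated)
```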
Along with the expansion of online delivery of courses and programs comes concern about how to evaluate their quality. Absent a widely accepted standard of evaluating these online offerings, "[t]he institution assumes the responsibility for establishing a means to assess student outcomes. This assessment includes overall program outcomes, in addition to specific course outcomes, and a process for using the results for continuous program improvement" (American Association of Colleges of Nursing, 2007). This chapter discusses recommendations for assessment of learning in online courses, including testing and appraising course assignments, to determine if course goals and outcomes have been met. It also suggests ways to assess online courses and programs, and to assess teaching effectiveness in online courses and programs.

ASSESSMENT OF LEARNING AT THE INDIVIDUAL LEARNER LEVEL

Online assessment and evaluation principles do not differ substantially from the approaches used in the traditional classroom environment. As with traditional format courses, assessment of individual achievement in online courses should involve multiple methods such as tests, written assignments, and contributions to online discussions. Technological advances in testing and assessment have made it possible to administer tests on a computer and assess other products of student thinking even in traditional courses (Miller, Linn, & Gronlund, 2013). But courses and programs that are offered only online or in a hybrid format depend heavily or entirely on technological methods to assess the degree to which students have met expected learning targets or outcomes.

Online Testing

The choice to use online testing inevitably raises concerns about academic dishonesty. How can the course instructor be confident that students who are enrolled in the course are the ones who are taking the tests? How can teachers prevent students from consulting unauthorized sources while taking tests or sharing information about tests with students who have not yet taken them?

To deter cheating and promote academic integrity, faculty members should incorporate a multifaceted approach to online testing. Educators can employ low- and high-technology solutions to address this problem. One example of a low-technology solution includes creating an atmosphere of academic integrity in the classroom by including a discussion of academic integrity expectations in the syllabus or student handbook (Conway-Klaassen & Keil, 2010; Hart & Morgan, 2009). When teachers have positive relationships with students, interact with them regularly about their learning, and convey a sense of confidence about students' performance on tests, they create an environment in which cheating is less likely to occur (Brookhart & Nitko, 2015; Miller et al., 2013). Faculty members should develop and communicate clear policies and expectations about cheating on online tests, plagiarism, and other examples of academic dishonesty (Morgan & Hart, 2013). Unfortunately, students do not always view cheating or sharing as academic dishonesty; they often believe it is just collaboration (Wideman, 2011).

Another low-technology option is administering a tightly timed examination (Kolitsky, 2008). This approach may deter students from looking up answers to test items for fear of running out of time to complete the assessment. Other suggestions to minimize cheating on online examinations include randomizing the test items and response options; displaying one item at a time and not allowing students to review previous items and responses; creating and using different versions of the test for the same group of learners; and developing open-book examinations (Conway-Klaassen & Keil, 2010). However, each of these approaches has disadvantages that teachers of online courses must take into consideration before implementing them.

Randomized Sequence of Test Items and Response Options

As discussed in Chapter 10, the sequence of test items may affect student performance and therefore assessment validity. Many testing experts recommend arranging items of each format in order of difficulty, from easiest to most difficult, to minimize test anxiety and allow students to respond quickly to the easy items and spend the majority of testing time on the more difficult ones. Another recommendation is to sequence test items of each format in the order in which the content was taught, allowing students to use the content sequence as a cognitive map by which they can more easily retrieve stored information. A combination of these approaches—content sequencing with difficulty progression within each content area—may be the ideal design for a test (Brookhart & Nitko, 2015). Many testing experts also recommend varying the position of the correct answer to multiple-choice and matching items in a random way to avoid a pattern that may help test-wise but uninformed students achieve higher scores than their knowledge warrants. A simple way to obtain sufficient variation of correct answer position is to arrange the responses in alphabetical or numerical order (Brookhart & Nitko, 2015; Gronlund, 2006). Therefore, scrambling the order of test items and response options on an online test may affect the validity of interpretation of the resulting scores, and there is no known scientific evidence to recommend this practice as a way of preventing cheating on online tests.
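For readers weighing the randomization option discussed above, the following minimal, hypothetical Python sketch shows the kind of per-student shuffling of item order and answer options that course management systems such as Blackboard or Moodle can apply. The item structure, function name, and seeding scheme are illustrative assumptions, not the actual implementation of any system; seeding the generator with the student identifier is simply one way to make each student's scrambled form reproducible for later review.

```python
import random

# Illustrative sketch only: a simplified stand-in for the item- and option-
# randomization feature of a course management system. All names and data
# structures here are hypothetical.

def shuffle_exam_for_student(items, student_id, seed_base="exam-2024"):
    """Return a per-student ordering of items and of each item's options."""
    rng = random.Random(f"{seed_base}-{student_id}")  # reproducible per student
    ordered = items[:]                  # copy so the master test is untouched
    rng.shuffle(ordered)
    personalized = []
    for item in ordered:
        options = item["options"][:]
        rng.shuffle(options)
        personalized.append({"stem": item["stem"],
                             "options": options,
                             "answer": item["answer"]})
    return personalized

master_test = [
    {"stem": "Which assessment is completed first in the primary survey?",
     "options": ["Airway", "Breathing", "Circulation", "Disability"],
     "answer": "Airway"},
    {"stem": "A normal resting adult heart rate is closest to:",
     "options": ["40/min", "70/min", "110/min", "140/min"],
     "answer": "70/min"},
]

for item in shuffle_exam_for_student(master_test, student_id="S001"):
    print(item["stem"], item["options"])
```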
Displaying One Item at a Time and Not Allowing Students to Review Previous Items

This tactic is appropriate for the computerized adaptive testing model in which each student's test is assembled interactively as the person is taking the test. Because the answer to one item (correct or incorrect) determines the selection of the next item, there is nothing to be gained by reviewing previous items. However, in teacher-constructed assessments for traditional or online testing, students should be permitted and encouraged to return to a previous item if they recall information that would prompt them to change their responses. While helping students develop test-taking skills to perform at the level at which they are capable, teachers should encourage students to bypass difficult items and return to them later to use the available time wisely (Brookhart & Nitko, 2015). Therefore, presenting only one item at a time and not permitting students to return to previous items may produce test scores that do not accurately reflect students' abilities.

Creating and Using Different Forms of an Examination With the Same Group of Students

As discussed in Chapter 2, alternate forms of a test are considered to be equivalent if they were developed from the same test blueprint or table of specifications, and if they produce highly correlated results. Equivalent test forms are widely used in standardized testing to assure test security, but alternate forms of teacher-constructed tests usually are not subjected to the rigorous process of obtaining empirical data to document their equivalence. Therefore, alternate forms of a test for the same group of students may produce results that are not comparable, leading to inaccurate interpretations of test scores.

Developing and Administering Open-Book Tests

Tests developed for use in traditional courses usually do not permit test-takers to consult references or other resources to arrive at correct responses, and most academic honesty codes and policies include expectations that students will not consult such resources during assessments without the teacher's permission. However, for online assessments, particularly at the graduate level, teachers may develop tests that permit or encourage students to make use of appropriate resources to select or supply correct answers. Commonly referred to as "open-book" or "take-home" tests, these assessments should gauge students' higher-order thinking abilities by requiring use of knowledge and skill in novel situations. One of the higher-order skills that may be important to assess is the ability to identify and use appropriate reference materials for problem solving, decision making, and clinical reasoning. Teachers can use test item formats such as essay and context-dependent item sets (interpretive exercises) to craft novel materials for students to analyze, synthesize, and evaluate. Because these item formats typically require more time than true–false, multiple-choice, matching, and completion items, teachers should allot sufficient time for online open-book testing. Therefore, administering an open-book assessment as a tightly timed examination to deter cheating will not only produce results that do not accurately reflect students' true abilities but will likely also engender unproductive feelings of anxiety and anger among students (Brookhart & Nitko, 2015).
An additional low-technology strategy to deter cheating may be the administration of tests in a timed synchronous manner, where students' test results are not revealed until after all students have finished the examination. While synchronous online testing may be inconvenient, adequate advance knowledge of test days and times could alleviate scheduling conflicts that some students may encounter.

High-technology solutions to prevent cheating on unproctored tests include browser security programs such as Respondus™ to keep students from searching the Internet while taking the examination (Hart & Morgan, 2009). However, this security feature does not prevent students from using a second computer or seeking assistance from other people during the test. For those wanting to use the best technology available to prevent academic dishonesty, faculty members could use remote proctoring to assure student identity and monitor student actions (Dunn, Meine, & McCarley, 2010). Remote proctors incorporate a web camera, biometric scanner, and microphone into a single device, which avoids students having to arrange for an approved proctor (Dunn et al., 2010, p. 4). Other sophisticated technological methods for preventing online cheating include using fingerprints to authenticate online learners and using computer-locking software to prevent Internet access for messaging and e-mailing (Stonecypher & Wilson, 2014). Students also may be required to use webcams to confirm their identities to the faculty member. Some course management systems have password-protected access and codes to prevent printing, copying, and pasting. Additional anticheating methods are requiring an online password that is different for each test and changing log-in codes just prior to testing (Stonecypher & Wilson, 2014). However, these methods do not prevent students from receiving help from other students.

Therefore, a reasonable compromise to these dilemmas may be the use of proctored testing centers (Krsak, 2007; Stonecypher & Wilson, 2014; Trenholm, 2007). Many universities and colleges around the country cooperate to offer students the opportunity to take proctored examinations close to their homes. Proctors should be approved by the faculty in advance to observe students taking the examination online (Hart & Morgan, 2009) and should sign an agreement to keep all test materials secure and maintain confidentiality. While the administration of proctored examinations is not as convenient as an asynchronous nonproctored test, it offers a greater level of assurance that students are taking examinations independently.

Course Assignments

Course assignments may require adjustment for online learning to suit the electronic medium. Online course assignments can be crafted to provide opportunities for students to develop and demonstrate cognitive, affective, and psychomotor abilities. Table 11.1 provides specific examples of learning products in the cognitive, affective, and psychomotor domains that can be used for formative and summative evaluation. Assignments such as analyses of cases and critical thinking vignettes, discussion boards, and classroom assessment techniques may be used for formative evaluation, while papers, debates, electronic presentations, portfolios, and tests are more frequently used to provide information for summative evaluation (O'Neil, Fisher, & Newbold, 2009). Online course assignments may be used for formative or summative evaluation of student learning outcomes. However, the teacher should make it clear to the students how the assignments are being used
for evaluation. No matter what type of assignment the faculty member assesses, the student must have clearly defined criteria for the assignment and its evaluation.

TABLE 11.1 Examples of Methods for Online Assessment of Learning

Cognitive domain: Discussion boards; Online chats; Case analysis; Term papers; Research or evidence-based practice papers; Short written assignments; Journals; Electronic portfolios

Affective domain: Discussion boards; Online chats; Case analysis; Debates; Role-play; Discussions of ethical issues; Interviews; Journals

Psychomotor domain: Developing blogs; Creating videos; Virtual simulations; Developing web pages; Web-page presentations; Interactive modules; Presentations

Feedback

As in traditional courses, feedback during the learning process and following teacher evaluation of assignments facilitates learning. Students need more feedback in online learning than in the traditional environment because of the lack of face-to-face interaction and subsequent lack of nonverbal communication. Teachers should give timely feedback about each assignment to verify that they are in the process of or have finished assessing it, or to inform the student when to expect more detailed feedback. O'Neil et al. (2009) suggested that feedback should be given within 24 to 48 hours, but it may not be reasonable to expect teachers to give detailed, meaningful feedback to a large group of students or on a lengthy assignment within that time frame. For this reason, the syllabus for an online or a hybrid course should include information about reasonable expectations regarding the timing of feedback from the teacher. For example, the syllabus might state, "I will acknowledge receipt of submitted assignments via e-mail within 24 hours, and I will e-mail [or post as a private message on the course management system, or other means] more detailed, specific feedback [along with a score or grade if appropriate] within [specify time frame]."

Feedback to students can occur through a variety of methods. Many faculty members provide electronic feedback on written assignments using the Track Changes feature of Microsoft Word (or similar feature of other word processing software) or by inserting comments into the document. Feedback also may occur through e-mail or orally using vodcasting, Skype, or scheduled phone conferences. As discussed in Chapter 9, the teacher may also incorporate peer critique within the process of completing an assignment. For example, for a lengthy written formal paper, the teacher may assign each student a peer-review partner, or each student may ask a peer to critique an early draft. The peer reviewer's written feedback and the resulting revision should then be submitted for the faculty member to assess. When an assignment involves participation in discussion using the course management system's discussion board, the teacher may also assign groups or partners to critique each other's posted responses to questions posed by the teacher or other students. Although peer feedback is important to identify areas in which a student's discussion contribution is unclear or incomplete, the course faculty member should also post summarized feedback to the student group periodically to identify gaps in knowledge, correct misinformation, and help students construct new knowledge.

No matter which types of feedback a teacher chooses to use in an online course, clear guidelines and expectations should be established and clearly communicated to the learners, including due dates
for peer feedback. Students should understand the overall purpose of feedback to effectively engage in these processes. Structured feedback forms may be used for individual or group work. O'Neil et al. (2009) recommended multidimensional feedback that:

■■ Addresses the content of the assignment, quality of the presentation, and grammar and other technical writing qualities
■■ Provides supportive statements highlighting the strengths and areas of improvement
■■ Conveys a clear, thorough, consistent, equitable, constructive, and professional message

Development of a scoring rubric provides an assessment tool that uses clearly defined criteria for the assignment and gauges student achievement. Rubrics enhance assessment reliability among multiple graders, communicate specific goals to students, describe behaviors that constitute a specific grade, and serve as a feedback tool. Table 11.2 provides a sample rubric for feedback about an online discussion board assignment.

TABLE 11.2 Example of Discussion Board Feedback Rubric

Frequency
Exemplary (3 points): Participates 4–5 times during a week.
Good (2 points): Participates 2–3 times during the week.
Satisfactory (1 point): Participates 1 time during the week.
Unsatisfactory (0 points): No participation on discussion board.

Initial assignment posting
Exemplary (3 points): Posts a well-developed discussion that addresses 3 or more concepts related to the topic.
Good (2 points): Posts a well-developed discussion addressing at least 1 or 2 key concepts related to the topic.
Satisfactory (1 point): Posts a summary with superficial preparation and unsupported discussion.
Unsatisfactory (0 points): No assignment posted.

Peer feedback postings
Exemplary (3 points): Posts an analysis of a peer's post extending the discussion with supporting references.
Good (2 points): Posts a response that elaborates on a peer's comments with references.
Satisfactory (1 point): Posts superficial responses such as "I agree" or "great idea."
Unsatisfactory (0 points): Does not post feedback to peers.

Content
Exemplary (3 points): Post provides a reflective contribution with evidence-based references extending the discussion.
Good (2 points): Post provides evidence-based facts supporting the topic.
Satisfactory (1 point): Post does not add substantive information to the discussion.
Unsatisfactory (0 points): Post does not apply to the related topic.

References
Exemplary (3 points): Provides personal experiences and reflection with 2 or more supporting references.
Good (2 points): Provides personal experiences and only 1 supporting reference.
Satisfactory (1 point): Provides personal experiences and no references.
Unsatisfactory (0 points): Provides no personal experience or references.

Grammar, clarity, writing style
Exemplary (3 points): Responses organized; no grammatical or spelling errors; correct style.
Good (2 points): Responses organized; 1–2 grammatical and spelling errors; uses correct style.
Satisfactory (1 point): Responses organized; 3–4 grammatical and spelling errors; 1–2 minor style errors.
Unsatisfactory (0 points): Responses are not organized; 5–6 grammatical and spelling errors; many style errors.
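As an illustration of how a rubric such as Table 11.2 converts criterion ratings into a score, the following minimal Python sketch totals one student's ratings across the six criteria. The data structures and function are hypothetical and not part of any course management system; the criterion names and 0 to 3 point levels mirror the table above.

```python
# Illustrative sketch only: totaling a discussion-board score against a
# rubric like Table 11.2. Names and structures are hypothetical.

RUBRIC_CRITERIA = [
    "Frequency", "Initial assignment posting", "Peer feedback postings",
    "Content", "References", "Grammar, clarity, writing style",
]
MAX_POINTS_PER_CRITERION = 3  # Exemplary = 3, Good = 2, Satisfactory = 1, Unsatisfactory = 0

def score_discussion(ratings):
    """Sum criterion ratings and report the result against the maximum."""
    missing = [c for c in RUBRIC_CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"Unrated criteria: {missing}")
    total = sum(ratings[c] for c in RUBRIC_CRITERIA)
    maximum = MAX_POINTS_PER_CRITERION * len(RUBRIC_CRITERIA)
    return total, maximum

ratings = {
    "Frequency": 2,
    "Initial assignment posting": 3,
    "Peer feedback postings": 2,
    "Content": 3,
    "References": 1,
    "Grammar, clarity, writing style": 3,
}
total, maximum = score_discussion(ratings)
print(f"Discussion board score: {total}/{maximum}")  # 14/18
```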
Assessing Student Clinical Performance

Clinical evaluation of students in online courses and programs presents challenges to faculty members and program administrators. When using an online delivery mode, it is critical to ensure the clinical competence of nursing students. Although the didactic component of nursing courses may lend itself well to online delivery, teaching and evaluating clinical skills can prove more challenging in an online context (Bouchoucha, Wikander, & Wilkin, 2013). Methods for evaluating student clinical performance in an online course format usually involve one or more of the following approaches:

■■ Use of preceptors to observe and evaluate performance
■■ The faculty member travels to the student's location to observe student performance directly
■■ On-campus or regional evaluation of skills in a simulated setting or with live models or standardized patients
■■ Use of teleconferencing, video recording, live streaming, or similar technologies (National Organization of Nurse Practitioner Faculties [NONPF], 2003)

Use of Preceptors

Students enrolled in online courses or programs usually work with preceptors for the clinical portion of nursing courses. Preceptors are responsible for guiding the students' learning in the clinical environment according to well-defined learning objectives. They are also responsible for evaluating students by giving them regular feedback about their performance and regularly communicating with faculty regarding students' progress. If students are not able to perform according to expectations, the faculty must be notified so that plans for correcting the deficiencies may be established (Gaberson, Oermann, & Shellenbarger, 2015).

Strategies should be implemented in the course for preceptors and other educators involved in the performance evaluation to discuss as a group the competencies to be rated, what each competency means, and the performance of those competencies at different levels on the rating scale. This is a critical activity to ensure reliability among preceptors and other evaluators. Activities can be provided in which preceptors observe video recordings of performances of students and rate their quality using the clinical evaluation tool. Preceptors and course faculty members then can discuss the performance and rating. Alternately, discussions about levels of performance and their characteristics and how those levels would be reflected in ratings of the performance can be held with preceptors and course faculty members. Preceptor development activities of this type should be done before the course begins and at least once during the course to ensure that evaluators are using the tool as intended and are consistent across student populations and clinical settings.

Even in clinical courses involving preceptors, faculty members may decide to evaluate clinical skills themselves by reviewing digital recordings of performance or observing students by using other technology with faculty at the receiving end. Digitally recording performance is valuable not only as a strategy for summative evaluation, to assess competencies at the end of a clinical course or another designated point in time, but also for review by students for self-assessment and by faculty to give feedback.

Faculty Observation and Evaluation

Even when preceptors are used to supplement the program faculty, it is the faculty's responsibility to summatively evaluate the student's performance. Many nurse practitioner programs perform on-site evaluations of students where the faculty member visits the site and observes the interaction of the student with patients and the preceptor (Distler, 2015). While some students may take both online and face-to-face courses at the same institution, most students enrolled in completely online programs are located at some geographical distance from the offering school. Because of this distance, the time and cost of travel for faculty members to observe each student more than once in the clinical setting during each clinical course may be prohibitive (NONPF, 2003). Another disadvantage to the on-site evaluation is that the face-to-face faculty and student evaluation can be an uncomfortable time for patients and preceptors (Distler, 2015).
In an issue statement on the clinical evaluation of advanced practice nurse and nurse practitioner students, the National Organization of Nurse Practitioner Faculties (NONPF) reaffirmed the need to "evaluate students cumulatively based on clinical observation of student performance by [nurse practitioner] faculty and the clinical preceptor's assessment" and stated that "[d]irect clinical observation of student performance is essential" (NONPF, 2003). According to the National Task Force on Quality Nurse Practitioner Education (2012), clinical observation may be accomplished using direct or indirect evaluation methods such as student-faculty conferences, computer simulation, videotaped sessions, clinical simulations, or other appropriate telecommunication technologies.

On-Campus or Regional Evaluation Sites

Many online nursing programs require students to attend an on-campus intensive study and evaluation period yearly or every academic term. In these settings, the nursing faculty can observe students to determine whether they have achieved a certain level of proficiency. Direct observation often is facilitated through the use of competency assessments such as the Objective Structured Clinical Assessment tools (Bouchoucha et al., 2013). Some online programs have designated regional evaluation sites where students can go to have their performance evaluated by a faculty member. In an on-campus or regional assessment setting, students may be required to demonstrate competency with real patients provided by the student or faculty, or with simulated or standardized patients. A standardized patient is a lay person or actor trained to play the role of a patient with specific needs. Standardized patients have the advantage of training to give specific, immediate feedback to students regarding their skill.

Use of Recording or Telecommunication Technologies

An online mode of course and program delivery affects the faculty's ability to personally verify the assessments that are made of students in geographically distant locations and increases reliance on the preceptor's assessment of the student's performance. Various alternative methods, such as virtual video chatting, e-mail, or phone calls, can serve as a method of student evaluation after the clinical partnership has been established (Distler, 2015). Personal video capture technology is an innovative solution to this need (Strand, Fox-Young, Long, & Bogossian, 2013). Small handheld battery-powered digital camera units or tripod-mounted cameras may be brought into the clinical environment for assessment, after obtaining consent from the students' patients, and their performance of clinical skills is recorded. An advantage to this technology is that students may view the recording along with their preceptors and faculty members, offering the opportunity to reflect on their own performance and receive feedback. Disadvantages include student anxiety about being recorded, technical difficulty with camera operation and digital file transfer, and difficulty getting permission to use a camera in clinical settings (Strand et al., 2013).

Clinical Evaluation Methods

The clinical evaluation methods presented in Chapter 14 can be used for evaluation in online courses. The critical decision for the teacher is to identify which clinical competencies and skills, if any, need to be observed and the performance rated, because that decision suggests different evaluation methods than if the focus of the evaluation is on the cognitive outcomes of the clinical course.
In programs in which students work with preceptors or adjunct faculty available on-site, any of the clinical evaluation methods presented in Chapter 14 can be used as long as they are congruent with the course outcomes and competencies to be developed by students. There should be consistency, though, in how the evaluation is done across preceptors.

Simulations and standardized patients are other strategies useful in assessing clinical performance in online courses. Performance with standardized patients can be digitally recorded, and students can submit their patient assessments and other written documentation that would commonly be done in practice in that situation. Students also can complete case analyses related to the standardized patient encounter for assessing their knowledge base and rationale for their decisions. Ballman, Garritano, and Beery (2016) described their use of virtual interactive cases in their distance-based nurse practitioner program. The interactive case is a virtual patient encounter with a standardized patient. The experience is comparable to the student being in an examination room interviewing, collecting data from, and assessing the standardized patient. Students can demonstrate clinical skills and perform procedures on manikins and models, with their performance digitally recorded and transmitted to faculty for evaluation.
online course, 178–179, 181–183 out-of-class, 113 asymmetric distribution, test scores, 286 at-risk students, 312 attitudes in integrated objectives framework, 18 student acceptance of, 16 audio clips, in high-level learning evaluation, 123 autonomic reactivity, 61 baccalaureate degree programs, accreditation of, 315, 316 behavioral techniques, test anxiety reduction, 62 belief system, evaluation and, 16, 217 benchmarks, 330–331 best-answer items, 80, 83, 89 best-work portfolios, 244 bias assessment, 270–272 sources of, 312 test, 281 in test construction, 204 bimodal distribution, test scores, 286 Bloom’s taxonomy, 14 of cognitive domain, 134 blueprints, in test construction process, 51–54, 63 browser security programs, 180 calculations, short-answer items, 91–93 Caputi’s approach, clinical evaluation, 362–363 carryover effect, 96, 154 case analysis, 120, 229 case method, 118–119, 242–243 examples, 119–121 case presentations, 232 case scenarios, 242 case study, 118–121, 146, 242–243 CAT See computerized adaptive testing C-CEI© See Creighton Competency Evaluation Instrument central tendency error, 237 score interpretation, 288–292 certification examinations, 28, 50 CET See clinical evaluation tool chart/exhibit item, NCLEX® examination, 114 cheating See also online testing low-technology forms of, 169 prevention strategies, 59, 167, 169–172, 178–181, 194 sanction for, 171 score reliability and, 36 cheat sheets, 59 checklists, 231–232 description of, 231 design of, 232 performance evaluation, 232 sample, 233 uses of, 231–232 “choice” items, in test construction, 50, 51 CIPP model See Context, Input, Process, Product model clarifying questions, 121 clarity, in written assignments, 146 classroom evaluation, 8, clerical errors, in test construction, 60 client needs framework, NCLEX examination care environment, safe and effective, 129 health promotion and maintenance, 129 physiological integrity, 130 psychosocial integrity, 129 clinical competencies, 222–223 See also clinical evaluation methods, rating scales clinical conferences, 246–247 clinical evaluation, 217 concept of, 216–218 fairness in, 218–219 feedback, 220–222 formative, 217 versus grading, 217 media clips, 247–248 simulations, 239 subjective process, 216 summative, 218 time factor, 230 tools See rating scales written assignments, 239–243 clinical evaluation methods, 184, 186–187 See also rating scales cases, 242–243 clinical conferences, 246–247 distance education, 240 group projects, 248–250 media clips, 247 observation, 229–238 for online nursing courses, 188 peer evaluation, 249, 250 Index portfolio, 244–246 rating scales, 232–239 selection factors, 227–229 self-assessment, 249–251 simulations, 227, 229, 230, 232, 255–263 standardized patients, 228, 261 written assignments, 239–244 clinical evaluation tool (CET) See also rating scales Caputi’s approach, 362–363 examples of, 339–368 guidelines for, 239, 353–354 for higher level course, 236, 348–352 with multiple levels for rating performance, 233, 355–361 with two levels for rating performance, 236, 339–354 clinical judgment evaluation of, 242 process, 110 clinical learning, written assignments for, 147 clinical observations, 147 clinical outcomes, 222–223 clinical performance, 184–187 faculty observation and evaluation, 185 preceptors, use of, 184–185 Clinical Performance Evaluation Tool (CPET), 236–237 clinical practice competency of teachers, 323 evaluation of, evaluation of students, 223 measurement of, outcomes in, 213–216, 222 student stress in, 219–220 clinical practice 
grading systems honors–pass–fail, 307 letter grades, 307 pass–fail, 307–309 satisfactory–unsatisfactory, 307 clinical problem-solving assessment, 26 clinical scenarios, 111, 114 clinical setting, critical thinking in, 109 clinical stations, objective structured clinical examinations (OSCE), 262 clinical teachers, questions for evaluating, 325 Code of Fair Testing Practices in Education, 277, 369–374 Code of Professional Responsibilities in Educational Measurement, 277, 379–386 coefficient alpha reliability estimate, 34 cognitive component of test anxiety, 61 391 cognitive domain Bloom’s taxonomy of, 134 sample verbs, 12–13, 15 taxonomy, 14–16 cognitive learning, 14 cognitive skills evaluation case method, 119, 242–243 case study, 118–121, 242–243 distance education courses, 240 multimedia, 111, 247 unfolding cases, 118–121, 242–243 collaborative testing, 173–174 Commission on Collegiate Nursing Education (CCNE) accreditation process for online programs, 316 communication skills debates, 122–123 development of, 214, 378 journals, 240–241 writing assignments, 145 competence/competency demonstration of, evaluation of, for nurses in practice, 223 completion items characteristics of, 89 directions for, 161 test construction, 50 comprehension, 134 in cognitive taxonomy, 14 computer software programs course management, 313 grading, 313 item analysis, 164, 199, 209 online examinations, 167 test item bank, 207–208 computer-generated item analysis report, 199 computer-generated test analysis report, 283 computerized adaptive testing (CAT), 132 computerized tests, 87 concept analysis, 143, 146 concept maps, 147, 241 conciseness, in writing test items, 55 concurrent validity evidence, 28, 39 conferences clinical evaluation, 246–247 criteria for evaluating, 247 distance education courses, 240 evaluation method, 246–247 form for evaluation, 248 learning plan development, 311 online, 242 post-clinical, 112, 148, 246, 247 392 Index confidentiality, 16, 276–277 construct validity evidence, 26–28, 38 constructed-response items, 65 completion (fill-in-the-blank), 93 defined, 50 content, in writing assignment, 149 content validity evidence, 25, 38 Context, Input, Process, Product (CIPP) model, 317 context-dependent item sets, 110–114, 160 advantage of, 111 examples, 113–118 interpretive items on NCLEX®, 111–112 layout, 112 purpose of, 113 writing guidelines, 112–114 context-dependent items, 58 Continuous Quality Improvement (CQI) model, 317 core competencies, health professionals, 214, 234, 236 correction formula, 198 course assignments, 181–183 course management systems, grading systems in, 313 courses evaluation, 321–322 cover page, 161, 162 CPET See Clinical Performance Evaluation Tool CQI model See Continuous Quality Improvement model credit-no credit grades, 298 Creighton Competency Evaluation Instrument (C-CEI©), 257–259 criterion-referenced clinical evaluation, 217 criterion-referenced grading, 300–304 composite score computation, 302–304 fixed-percent method, 302–303 total-points method, 303–304 criterion-referenced score interpretation, 7, 20, 283, 291, 292 criterion-referenced test results, 48 criterion-related validity evidence, 28, 39 critical thinking skills defined, 109–110 distance education courses, 240 in integrated objectives framework, 18 significance of, 213 writing assignments, 146 writing objectives, 11 critical thinking skills, evaluation of See also context-dependent item sets eight elements of reasoning, 109 crowding, avoidance of, 161–163 C-SEI holistic tool, 260 cultural awareness, 
367–368 cultural bias, 204, 271–272 cultural competence, 214 cultural differences, differential item functioning, 27, 271 curriculum, 333 assessment of, evaluation of, 319–322 curve, grading on, 305 data analysis, implications of, 73 data collection in assessment process, 20 in evaluation process, generally, in formative evaluation process, debate, 122–123 debriefing, 247 decision-making competency, 214 decision-making skills development of, 16 evaluation of, 239 decision-oriented program assessment models, 317, 334 delegation skills, 215 deliberate practice, 17 developing appropriate tests, 370 diagnostic assessment, 3, dictation, 166 DIF See differential item functioning differential item functioning (DIF), 27, 271 differential validity, 27, 271 difficulty index, test item analysis, 200–201, 209 difficulty level of tests, 48–49 directions, for writing assignments, 152 disabilities, students with, 280 discrimination function of tests level, in test construction, 48–49 selection decisions, 270 discussion, teaching format, 11, 121–122 distance education, 240 clinical component, benefits of, 240 clinical evaluation methods, 240 learning activities, 240 simulations in, standardized patients, 240 distractor analysis, 202 distractors, multiple-choice tests, 74, 85, 89 Index documentation course failure, 312 observations of clinical performance, 231 drafts, written assignments, 145, 148, 152, 153 educational assessment, responsibilities of educating about, 385 educational programs, responsibilities of evaluating, 386 effective evaluation, of program outcomes, 334 effective teaching administrator evaluation, 324, 335 clinical practice competency, 323, 335 evaluation of, 322–334 knowledge of subject matter, 323, 335 peer review, 192–193, 326–328 relationship with learners, 324–327, 335 student ratings, 324–326 teacher, personal characteristics of, 324, 335 teaching portfolio, 328–329 teaching skills, 323–324 electronic journals, 240 electronic portfolios, 244–246 See also portfolios emotionality, 61, 62 end-of-instruction evaluation, environmental conditions, 167–168 equivalent-forms reliability, 33, 40 error score, 30, 34 errors, in test construction, 29, 60 essay items, 93–106 analytic scoring, 103 carryover effects, 96 criteria for assessing, 104–105 directions for, 161 essay item development, 96 extended-response essay items, 98–99 holistic scoring, 102 limitations of, 95–97 organizing and outlining responses, 60 restricted-response essay items, 97–98 sample stems, 100 scoring, 102–103, 105 student choice of items, 97 use for, 94 writing guidelines, 87, 99–102 essay tests, issues with carryover effects, 96 effect of writing ability, 96 limited ability to sample content, 95 rater drift, 97 student choice of items, 97 393 time, 97 unreliability in scoring, 95–96 ethical issues Americans with Disabilities Act (ADA), 280 Code of Fair Testing Practices in Education, 277 Code of Professional Responsibilities in Educational Measurement, 277 importance of, 276, 277 privacy, 276 professional boundaries, violations of, 277 test results, 277, 281 testing standards, 277–278 ethnic bias, 204, 271 evaluation areas for, 330 cognitive taxonomy in, 14 of courses, 321 defined, formative, 8–9, 20, 119, 124, 181, 190–191, 217–218, 227, 228, 234, 295, 302, 321 instruction and, methods, 16 objectives for, 7–8 summaries, 311 summative, 9, 20, 119, 124, 181, 191, 217–218, 227, 228, 295, 302, 321 types of, 7–9 evidence-based practice, clinical evaluation, 341 examinations certification, 28, 50 different forms of, 179 
licensure, 50 proctored, 180–181 extended-response essay items, 98–99 external evaluators, 318, 334 face validity, 25 factual questions, 121 faculty, 333 failing grades clinical practice, 309–313 communication of, 305 documentation requirements, 312 effect of, 310 problem identification, 311 support services, 312 unsafe clinical performance policy, 312 failure, prediction of, 28 fair testing environment, recommendations for, 377 fairness, 218–219, 269, 281 394 Index Family Educational Rights and Privacy Act of 1974 (FERPA), 276 fatigue, impact on test-taking skills, 60 feedback cheat sheets, 59 clinical evaluation, 204, 206, 207, 220–222, 228, 229, 256, 261, 262 during conferences, 246–247 in core competencies of nurse educators, 323 distance education, 240 failing grades and, 311 5-step process, 222 in online courses, 181 performance evaluation, 236 posttest discussion, 204, 209 principles of, 221–222 specific, 220 teaching evaluation, 323, 324 on written assignments, 144–145, 151, 155, 243 feedback loop, in evaluation process, FERPA See Family Educational Rights and Privacy Act of 1974 fill-in-the-blank items, 92–93 See also short answer final grade, computation of, 197, 209, 302 fixed-percent grading method, criterionreferenced grading, 302–303 flawed test items, 201, 206, 207 font selection, 166 formal papers, in nursing course, 144, 146 formative evaluation, 295 See also feedback clinical, 217–218, 234 defined, discussion as, 121–122 documenting, 231 in grading, 302 implications of, 8–9, 190–191, 217, 234, 241, 326, 328 purpose of, rating scales for, 234 simulations, 247 standardized patients, 232, 261 Formula 20 (K-R20)/Formula 21 (K-R21), computation of, 34 Frequency distribution characteristics of, 286–287 score interpretation, 283–293 test scores, 283–286 frequency polygon, in test score distribution, 285, 286, 292 gender, in differential item functioning, 27 grade point average (GPA), 297 calculating, 298 grading/grades administrative purposes, 296 assessment bias, 270–272 clinical practice, 307 compression of, 274 consistent, 296 criterion-referenced, 300–304, 314 criticisms of, 296–297 on curve, 287, 305 defined, 295 distinguished from scoring, 182, 197 failing clinical practice, 309–313 group projects, 248–249 guidance and counseling purposes of, 296 importance of, 297 inflation of, 272–274, 297 instructional purposes, 295 learning contract, 309, 310 letter grades, assignment of, 299–300, 314 as motivator, 297 norm-referenced, 304–306, 314 pass-fail, 307–309 purposes of, 295–296, 313 scale, 207 self-evaluation and, 297 self-referenced, 306, 314 software programs, 313 spreadsheet application for, 313 summative evaluation, types of systems, 297–299, 307, 314 written assignments as component, 239–244 grammatical clues, 93 grammatical errors multiple-choice tests, 78 in test construction, 26 group mean and median scores, 38 group projects, 248–249 group writing exercises, 147–148 group-comparison techniques, 27 growth and learning-progress portfolios, 244 guessing answers, 66, 198–199, 209 half-length reliability estimate, 34, 40 halo effect, 96, 237 hand-scored tests, 51, 63, 166, 198, 209 health care professionals, core competencies, 214, 215 higher level learning, 107 Index higher level thinking assessment methods for, 118–121 context-dependent item sets, 110–114 problem solving skills, 108 high-stakes assessments, 278–279 high-technology solutions for academic dishonesty, 178, 180 histograms, test score distribution, 284–286, 292 holistic scoring, 102 rubric, 102 homogeneous 
content, matching exercises, 70 honors–pass–fail grades, 298, 307 hot-spot items, NCLEX® examination, 114, 132 human patient simulators, 235 imitation learning, 17, 18 iNACOL See International Association for K-12 Online Learning iNACOL national standards for quality online courses, 187, 189–192 in-class writing activities, 147–148 informal language, in test construction, 56 informatics, clinical evaluation, 342 information needs, student preparation for test, 58–60 informing test-takers, 373–374 instructional design in online programs, assessment of, 187–191, 316, 317 instructional process, evaluation skills and, integrated domains framework, 18 intelligence, as a normally distributed variable, 287 interactions, analysis of, 147 internal consistency reliability evidence, 33–34, 40 internal program evaluators, 318, 334 International Association for K-12 Online Learning (iNACOL), 187 interpretive items, 58 interrater reliability, 33, 34, 40 irrelevant data, in test construction, 58, 76 item analysis computer software programs for, 164, 199, 209 difficulty index, 200–201, 209 discrimination index, 201–202, 209 distractor analysis, 202 by hand, 199, 203–204, 209 reports, 199, 283 sample, 199 395 item arrangement in logical sequence, 159–160 item banks development of, 207–208 published, 208, 210 item bias, 27, 271–272 item formats in test construction constructed-response items, 50 objectively scored items, 50 selected-response items, 50, 73 selection criteria, 49–50 subjectively scored items, 50 item sequence, in test construction, 159–160 jargon, avoiding use, 56 Joint Committee on Testing Practices, 369–370 Code of Fair Testing Practices in Education, 369 journals, 143, 145, 147, 240–241 judgment about observations, 230–232, 237 clinical evaluation and, 213, 216, 221 cognitive taxonomy and, 14 evaluation skills and, 7, 9, 20 multiple-choice tests, 73 pass–fail grading, 307 in test construction process, 25, 48, 50, 51 keyed response, 75 knowledge acquisition, 14, 16 development, 14 in integrated objectives framework, 18 knowledge, assessment, multiple-choice items, 74 known-groups technique, 27 Kuder-Richardson formulae, 34 kurtosis, test score distribution, 286, 292 language, in test item writing, 55 learners, positive relationships with, 324 learning assessment of in online programs, 178–179 climate, as curriculum evaluation element, 320 disabilities, 165, 166, 172, 271, 272, 280 environment, significance of, 219, 249 management systems, 190 needs assessment, 323 learning contract, 309, 310 396 Index learning outcomes assessment of, 10, 11 in teaching students, 20 legal issues, 279–281 legibility, significance of, 166 length of test, 47–48, 50 leniency error, 237 letter grades, assignment of, 291 considerations in, 299–300 framework selection, 300 what to include in, 299–300 licensure examination, 28, 29, 50, 128 See also NCLEX® linguistic bias, 272 linguistic modification for non-native speakers, 55 literature reviews, 145, 228 logical error, 238 low-technology solutions to promote academic integrity, 178, 194 machine-scored tests, 63, 205, 209 manipulation, psychomotor skills, 18 matching exercises advantage of, 70 classification of, 50, 51 components of, 70 directions, 161 disadvantages of, 70 examples, 71, 72 scoring, 51 use of correction formula with, 198 writing guidelines, 70–71 mean, score interpretation, 289, 292 measurement criterion-referenced, defined, interpretation, types of, 6–7 norm-referenced, 6–7 validity, 23–30, 55, 63, 251, 279 measurement error, 30, 35, 39 flawed 
items, 207 test security, 167 media clips, 123, 247–248 See also video clips median, score interpretation, 38, 289, 292 memorization, 46, 74, 95 memory aids, 59 mesokurtic distribution, test scores, 286 miskeying, 202, 203 misspelled words, 207 modality, test score distribution, 286, 292 mode, score interpretation, 288, 292 motor skills, development of, 17 multimedia clinical evaluation methods, 247–248 context-dependent items, 112 distance education courses, 240 on NCLEX®, 111 multiple-choice items, 73–87 advantages of, 74 alternatives, 80–83 best-answer items, 80, 83, 89 construction of, 37, 49, 74–87 correct answer, 83–84 design factors, 160, 167 directions for, 161 distractors, 74, 80, 85–87 format, 79, 89 knowledge level, 134 negatively stated stems, 78 options arrangement, 164–174 parts of, 74 purpose of, 74 scoring procedures, 51 stem, 75–80 time factor, 49 use of correction formula with, 206, 209 variation of items, 87 writing guidelines, 75 multiple-choice tests alternatives, 74, 80–87 distractors, 85 samples, 81, 82 wording, 77, 78 multiple-response items, 87–88 alternatives, 88 computerized, 87 defined, 87 order of responses, 88 sample, 88 multiple true–false items, 69 National Council Licensure Examination (NCLEX®) ADA compliance, 281 administration of, 132–133 characteristics of, 50, 87 format, 92 grade inflation and, 274 item preparation, varied cognitive levels, 133–135 predictors of success on, 28, 29, 139 preparing students for, 139–140 test plan, 128–132 client-needs framework, 128–130 cognitive levels, 131, 133–135 Index integrated processes, 130 nursing process framework, 136–139 percentage of items -PN test plan, 131–132 -RN test plan, 129–130 types of items, 111, 132 National Council of State Boards of Nursing (NCSBN), 128, 256 National Council on Measurement in Education (NCME), 373, 379 National League for Nursing (NLN) Fair Testing Guidelines for Nursing Education, 375–378 National Organization of Nurse Practitioner Faculties (NONPF), 185 naturalization, psychomotor skills, 18 NCLEX® See National Council Licensure Examination NCLEX test plans, 128 NCME See National Council on Measurement in Education NCSBN See National Council of State Boards of Nursing needs assessment of learner, testing for, negative feedback, 218 negatively stated stems, 78 NLN See National League for Nursing No Child Left Behind Act, 269 “none of the above” answers, 86 NONPF See National Organization of Nurse Practitioner Faculties normal distribution, test scores, 287–289, 292, 305 norm-referenced clinical evaluation, 217 norm-referenced grading, 302 defined, 304 grading on the curve, 305 standard deviation method, 306 norm-referenced score interpretations, 6–7, 20, 291, 293 notes about performance, 230–231 nursing care plan, 146, 147, 241 nursing process, framework for test questions, 136–138 Objective Structured Clinical Examination (OSCE), 232, 262–263 objectives achievement of, for assessment and testing, 10–13, 19 development of, 15 performance, 18 397 taxonomies of, 14–18 writing guidelines, 11 observation See also rating scales in clinical evaluation, 229–238 pass–fail grading, 308 significance of, 235, 236 OCNE Classroom Teaching Fidelity Scale See Oregon Consortium for Nursing Education Classroom Teaching Fidelity Scale on-campus/regional evaluation sites, 185–186 online conferences, criteria for evaluating, 247 online courses assessment of, 187–191 assessment of teaching in, 191–194 assignments, assessment of, 178–179, 181–183, 193, 326 clinical evaluation methods, 186–187 clinical 
performance in, 184–187 evaluation of, 321 feedback in, 181–183 instructional design of, 187–191 online education, defined, 177 online education programs, assessing quality of, 193–194 online instruction, program evaluation, 328 online learning, 123, 177 online nursing program assessment of, 316–317 critical elements for assessment, 316 online teaching, assessment of, 191–194 online testing, 178–179, 194 cheating prevention, 172, 178, 179, 194 conditions for, 168 open-book tests, developing and administering, 179 open-ended questions, 122, 243 open-ended response, 95, 102 options multiple-choice tests, 82, 83, 86 multiple-response tests, 88 oral presentations, case analysis, 120 Oregon Consortium for Nursing Education (OCNE) Classroom Teaching Fidelity Scale, 320 organization in affective domain, 17 in writing assignment, 149 OSCE See Objective Structured Clinical Examination 398 Index outcome(s) assessment, 3–5 clinical evaluation, 213 of clinical practice, 213–216, 222 criterion-referenced clinical evaluation, evaluation, generally, taxonomies, 14–18 use in assessment, 19 writing, 11–13 papers, 243–244 See also written assignments pass–fail grades, 298, 307–309 “pass the writing” assignments, 148 patient needs, order of priority, 366 patient simulators, human, 235 patient-centered care, 364–367 peer evaluation, 192, 249, 250, 328 peer review, 182, 242, 272, 326–328 faculty development for, 328 of online teaching, 192–193 percentage-correct score, 291, 293 percentile rank, determination of, 291, 293 performance problems, 311 performance quality, personal bias, 238 philosophical approaches, 319 placement tests, planning for evaluation method, 227 platykurtic distribution, test scores, 286 “pop” tests, 58 population to be tested, 46–47, 62 portfolio as assignment, 147 clinical evaluation, 244–246 contents of, 244 defined, 244 distance education courses, 240 electronic, 244–246 evaluation of, 244 purpose of, 244 teaching, 328–334 time factor, 246 types of, 245 positive reinforcement, 221 posttest discussion, 204–207 importance of, 205 posttest discussions adding points, 206–207 eliminating items, 206–207 power test, 48 practicality, significance of, 37–38 preceptors, distance education, 247 precision, psychomotor skills, 18 preclinical conference, 241, 246 predictive validity, 28, 39 premises, matching exercises, 70–71 preparing students for tests, 58–62 See also NCLEX® presentation skills, 123, 246 printing guidelines for tests, 166 privacy issues, 276–277 problem-solving skills assessment of, 18, 108, 133 context-dependent items, 130 ill-structured problems, 108, 128–130 improvement of, 143 well-structured problems, 108, 128–130 process in writing assignment, 149 proctor, functions of, 168 proctored examinations in distance education, 172, 180–181 professional accreditation, 316 professional boundaries, violations of, 277 program admission, candidate selection, 270 program admission, examinations, program assessment See also program evaluation curriculum evaluation, 319–322 ethics, 276 online programs, 316–317 stakeholders, 29, 39, 317–318 standardized tests, use in, 25, 28, 318 teaching competencies, 322, 323 teaching effectiveness, 324–334 program development, evaluation and, program evaluation models, 317–321 accreditation, 315–317 program outcomes, effective evaluation of, 334 projects, evaluation of, proofread tests, 165 psychomotor domain development of, 17 objectives taxonomy, 17–18 writing objectives, 17–18 psychomotor skills clinical evaluation feedback, 221 development of, 214 
purchased tests, 38
purpose of test, 46, 62
QPA. See quality point average
QSEN. See Quality and Safety Education for Nurses
quality: of education, 3, 8, 20; improvement, 213, 215; improvement, clinical evaluation, 213, 215, 236, 341; of teaching, 230, 322–324; of tests, 204
Quality and Safety Education for Nurses (QSEN), 236, 237, 344–347
quality point average (QPA), 298
questioning, discussions, 121–122
questionnaires, 277
questions, during test, 161, 168–169
quizzes, 66, 174, 295, 300, 302, 304
racial bias, 204
rater drift, 97, 238
rating forms, 223, 236, 239, 311. See also clinical evaluation methods; rating scales
rating scales, 232–239, 262 (see also clinical evaluation tool (CET); rating forms): applications, 237; benefits of, 232, 311; and clinical evaluation, 232–238, 356–360; common errors of, 237; defined, 232; distance education, 240; in evaluation process; examples, 234, 235, 339–368; for final evaluation, 235 (see also summative evaluation); for formative evaluation, 234; guidelines for using, 239; issues with, 237–238; for summative evaluation, 235; types of, 233–240
raw score, 209, 283, 293: frequency distribution of, 285
reading ability, 60
reading papers, 152
recall: essay items and, 93–103; short answer, 91; test construction and, 127; testing of, 92; true–false, 66
receiving, in affective domain, 16
recording observations, methods for, 231. See also checklists
reflective journaling, 152
relaxation techniques, tests, 62
reliability: alternate-forms, 33, 40; assessment reliability, 30, 31; consistency of ratings, measure of, 34; decay, 238; defined, 30, 39; equivalence, measure of, 33; equivalent-forms, 33, 40; error, relationship to, 30; estimating, 31, 33–34; grading system and, 295; influences on, 29, 35; internal consistency, measure of, 33–34, 40; score, influential factors, 35–36, 40; scorer, 34; significance of, 30–36; stability, 33, 40; test–retest, 33, 40; validity, relationship with, 31–32
religious bias, 204
remedial learning, 5, 311
remediation, 140, 311
remembering (knowledge), 134
reproducing tests, 166–167
research papers, 146, 152
responding, in affective domain, 17
Respondus™, 180
restricted-response essay, test construction, 51, 97–98
review sessions, 62
rewrites, written assignments, 145
rubric: analytic scoring, 103–104; for assessing conferences, 247; for assessing group projects, 250; for assessing papers, 148–155, 155; for assessing portfolio, 249; benefits of, 148; holistic scoring, 102; sample scoring rubric, term paper, 150; scoring, 182; written assignments, 148–155, 155
"rule of C", 274
SAT scores, 139
satisfactory–unsatisfactory grades, 298, 307
scientific thinking skills, in integrated objectives framework, 18
score distribution: characteristics of, 286, 287; shape of, 287, 289; test (see test score distributions)
score interpretation: criterion-referenced, 7, 20; distributions, 283–289; norm-referenced, 6–7, 20; standardized tests, 292–293; teacher-made tests, 291–292
scoring: analytic, 103; components of, 6, 102; computerized, 87; correction formula, 198, 209; defined, 182, 197, 209; errors, 205; essay tests, 95–97; facilitation of, 160, 162, 163–164; holistic, 102; inconsistency of scores, 39; inflation of, 272–274; influential factors, 28; key, 50; measurement validity, 24; multiple-choice tests, 27, 37; objectively scored test items, 50, 209; procedures, 51, 63; reading papers, 152; recording, 197; relative, 102; rubric, 182; self-, 37; subjectively scored items, 50; suggestions for, 105; unreliability in, 95–96; weighting items, 198, 300
scoring tests, administering and, 370, 371
security issues: cheating prevention, 169–172; online testing, 179; in test reproduction, 167
selected-response test items: characteristics of, 50; effectiveness of, 65
selecting appropriate tests, 370
self-assessment, 215, 249–251
self-esteem, tests and grades effects, 274–275
self-evaluation, 251, 297
self-referenced grading, 300, 306, 314
self-study, 315, 334
SEM. See standard error of measurement
severity error, 237
short cases, advantages of, 243
short papers, 147, 148, 243
short written assignments, 123–124
short-answer essays, 91. See also restricted-response essay
short-answer items, 76, 87 (see also fill-in-the-blank): writing, 92–93
simulation-based assessment, guidelines for, 256–261
simulations: for assessment, 232, 233, 255–256, 256–261; characteristics of, 214, 228; clinical evaluation usage, 232; distance education courses, 240; objective structured clinical examination (OSCE), 232, 262–263; standardized patients, 232, 261; types of, 233–240
skewness, test score distribution, 292
skills development, in teaching, 323–324
slang, test writing guidelines, 56
small-group writing activities, 147–148
social issues, 269–276: assessment bias, 270–272; grade/test score inflation, 272–274; occupational roles, 270; self-esteem, influential factors, 274–275; testing as social control, 275; types of, 270
"social loafing", 174
spacing, in test design, 161
Spearman–Brown double length formula, 34
Spearman–Brown prophecy formula, 34
speeded test, 48
spelling errors, in test construction, 26
split-half reliability, 34, 40
stability, measure of, 33, 40
standard deviation: calculation of, 290; interpreting, 293; norm-referenced grading, 306
standard error of measurement (SEM), 35
standardized patients, 228, 232: clinical, 191
standardized tests: ACT, 139; characteristics of, 48, 80; equivalent-form estimates, 33; National League for Nursing, 375, 378; program assessment, 25, 318; SAT, 139; score interpretation, 292–293; scores of, 28; user's manual, 292
Standards for Educational and Psychological Testing, 24, 272
Standards for Teacher Competence in Educational Assessment of Students, 387
stems: essay items, 100; multiple-choice items, 75–80, 89
storage of, 167
stress: in clinical practice, 219–220; reduction strategies, 275
structural bias, 272
student: characteristics, 29–30; with disabilities, assessment of, 280; placement process; preparing for test, 58–62; ratings, 324–326; records, 276, 277; study skills, 275; teaching effectiveness, evaluation of, 324–326; test anxiety, 61–62, 140, 275; test-taking skill, 36, 60–61; test-taking strategies, 275
student achievement assessment methods, 319
student characteristics, assessment results, 29–30
student evaluation of teaching. See teacher, evaluation of
student–faculty interaction, 322
study skills, 275
summative assessment, 256, 262
summative evaluation: clinical, 217–218, 228, 232; defined; distance education courses, 240; in grading, 295, 302, 307, 308; tests
"supply" items, in test construction, 50, 51, 66, 72
support services, 312, 316
supportive environment, 219
syllabus, 302, 310
symmetric distribution, test score, 286
synthesis, in cognitive taxonomy, 14
systematic program evaluation (SPE): components of, 330–332; criteria, content of, 332–334; evaluation areas, content of, 332–334; evaluation framework, 330
systematic reviews, 146
systems-oriented models, 317, 321, 334
take-home tests, 179–180
taxonomies, 14–18
teacher: evaluation of, 322–324; personal characteristics of, 324, 335; relationship with students, 228
teacher-constructed test, 179: blueprint for, 63, 210; length of, 47; preparing students for, 40, 58–62; time constraints, 49
teacher-made assessment, 29: validation for, 26
teacher-made tests, score interpretation, 291–292
teacher–student relationship, 219, 228
teaching: assessment of, 322, 323; peer review of, 192–193; plan, 147; portfolio, 328–334; skills in, 323–324; student evaluation of, in online learning, 192
teaching–learning process, 9, 319, 328: evaluation of, 322
teaching to the test, 274
technological skills, development of, 17, 228
telecommunication technologies, 186
term papers, 146, 150, 243
test administration: answering questions during test, 168–169; cheating prevention, 169–172; collaborative testing, 173–174; collecting test materials, 172; conditions, 36; cost of, 37; directions, 37, 168; distributing test material, 168; environmental conditions, 167–168; ethical issues, 278; and scoring, 376; time factor, 37, 58, 62
test analysis report, 283
test anxiety, 61–62, 140, 160, 275
test blueprint, 51–54: body of, 53; column headings, 52; content areas, 52, 63; defined, 51–54; elements of, 52; example of, 52; functions of, 53; review of, 53; row headings, 52; significance of, 208, 210; for students, 63
test characteristics, 204
test construction: bias in, 204; checklist for, 46; content areas, 52; cost of, 37; difficulty level, 48–49; discrimination level, 48–49; flaws in, 209; item formats, 49–50, 62, 140; population factor, 46–47, 62; scoring procedures, 51; test blueprint, 51–54, 63, 131; test items, development of, 12; test length, 47–48, 50, 62; writing test items, guidelines for, 54–58
test contents, 25, 52, 63
test design rules: answer key, 166; answer patterns, 164; cover page, 161; crowding, avoidance of, 161–163; directions, writing guidelines, 160–161, 168; item arrangement in logical sequence, 159–160; number of items, 165; options arrangement, 165; proofreading, 165; related material, arrangement of, 163; scoring facilitation, 163–164
test developers, 369, 370, 372, 373
test development, and implementation, 376
test item banks, developing, 207–208
test length, 47–48, 50, 62
test materials: collecting, 172; confiscation of, 171; distribution of, 168
test planning: item formats, 49–50, 62; preparing students for tests, 58–62; purpose and population, 46–47, 62
test reproduction: duplication, 166; legibility, assurance of, 166; printing, 166
test results: communication of, 278; reporting and interpreting, 370, 372, 377
test score distributions, 283–289
test users, 369–372
test–retest reliability, 33, 40
test-taking: skill, 36, 60–61; strategies, 275
testing: concept of validity in, 24; definition of; objectives of, 10–13, 19; online, 178–179; purpose of, 5–6
tests: administering and scoring, 371; developing and selecting appropriate, 370
testwiseness, 60
time constraints, test construction process, 49
time frame for program evaluation, 332
total-points method, grading system, 303–304
true–false items: construction of, 48, 67–68; described, 66; item examples, 68, 163; limitation of, 66; multiple, 69; scoring procedures, 51, 163, 200, 207; use of correction formula with, 198; variations of, 68–69; writing guidelines, 67–68
true score, 30
"Truth in Testing" laws, 276
typographical errors, 159, 165, 174
unannounced tests, 275
understanding, in integrated objectives framework, 18
unfolding cases, 118–121, 242–243
unimodal distribution, test scores, 286, 287
unsafe clinical performance, 312
usability, significance of, 37
validity: assessment (see assessment validity); concept of, 23, 24; grading system and, 295; importance of, 25; influences on, 29–30; legal issues, 280; test blueprint and, 53; test construction and, 57
values/value system: clarification strategies, 228; determination of, 214; development, 16, 19; internalization of, 17; organization of, 17
valuing, in affective domain, 17
variability measures, score distributions, 290
video capture technology, 186
video clips/videotapes, 11, 123, 247. See also media clips
videoconferencing, 247
visual disabilities, 166
visual imagery, 62
weighting: grading software programs, 300; letter grade assignment, 300; in scoring process, 198
word choice, in test construction, 56
writing ability, effect of, 96
writing activities (see also written assignments): in-class and small-group, 147–148; for postclinical conferences, 148
writing in the discipline and writing-to-learn activities, 144–145
writing skills: development of, 143; improvement strategies, 243; of student, 26
writing structure checklist, 153
writing style, 145, 148, 149: and format, 151
writing test items, guidelines for (see also test construction): chart/exhibit, 111; essay, 99–102; within framework of clinical practice, 136–139; hot spot, 111, 132; matching exercises, 70–71; multiple-choice, 75; multiple-response, 88; ordered response, NCLEX® examination, 111; short answer, 91–93; test items, 54–58, 63; true–false items, 67–68; varied cognitive levels, 133–135
writing-to-learn activities, writing in discipline and, 144–145
written assignments: assessment, 148–155; case method, 242–243; case study, 242–243; characteristics of, generally, 11, 239; clinical evaluation, 239–244; concept maps, 241–242; distance education courses, 240; drafts, 145–146, 148, 153; evaluation/grading, 152–155; feedback, 144–145, 151, 155; formal papers, 144, 145, 153; journals, 143, 145, 147, 152, 240–241; nursing care plan, 241; papers, 243; peer review, 148; purposes of, 143–145; rewrites, 145; rubric, 150–151 (see also rubric); short written assignments, 123–124; types of, 146–148, 155, 240; unfolding cases, 118–121, 242–243

Date posted: 23/01/2020, 14:40
