
The Research Foundation for the GRE revised General Test: A Compendium of Studies


DOCUMENT INFORMATION

Basic information

Pages: 279
Size: 1.97 MB

Content


The Research Foundation for the GRE® revised General Test: A Compendium of Studies

Edited by Cathy Wendler and Brent Bridgeman, with assistance from Chelsea Ezzo

To view the PDF of The Research Foundation for the GRE® revised General Test: A Compendium of Studies, visit www.ets.org/gre/compendium.

Copyright © 2014 Educational Testing Service. All rights reserved. E-RATER, ETS, the ETS logo, GRADUATE RECORD EXAMINATIONS, GRE, LISTENING LEARNING LEADING, PRAXIS, TOEFL, TOEFL IBT, and TWE are registered trademarks of Educational Testing Service (ETS) in the United States and other countries. SAT is a registered trademark of the College Board.

July 2014

Dear Colleague:

Since its introduction in 1949, the GRE® General Test has been an important part of graduate admissions as a proven measure of applicants' readiness for graduate-level work. GRE scores are used by the graduate and business school community to supplement undergraduate records, including grades and recommendations, and other qualifications for graduate-level study.

The recent revision of the GRE General Test was thoughtful and careful, with consideration given to the needs and practices of score users and test takers. A number of goals guided our efforts, such as ensuring that the test was closely aligned with the skills needed to succeed in graduate and business school, making it simpler to distinguish performance differences between candidates, enhancing test security, and providing a more test-taker-friendly experience.

As with other ETS assessments, the GRE General Test has a solid research foundation, and this research-based tradition continued as part of the test revision. The Research Foundation for the GRE® revised General Test: A Compendium of Studies is a comprehensive collection of the extensive research efforts and other activities that led to the successful launch of the GRE revised General Test in August 2011. Summaries of nearly a decade of research, as well as previously unreleased information about the revised test, cover a variety of topics, including the rationale for revising the test, the development process, test design, pilot studies and field trials, changes to the score scale, the use of automated scoring, validity, and fairness and accessibility issues.

We hope you find this compendium useful and that it helps you understand the efforts that were critical in ensuring that the GRE revised General Test adheres to professional standards while making the most trusted assessment of graduate-level skills even better. We invite your comments and suggestions.

Sincerely,

David Payne
Vice President & COO, Global Education
Educational Testing Service

Ida Lawrence
Senior Vice President, Research and Development
Educational Testing Service

Acknowledgments

We thank Yigal Attali, Jackie Briel, Beth Brownstein, Jim Carlson, Neil Dorans, Rui Gao, Hongwen Guo, Eric Hansen, John Hawthorn, Jodi Krasna, Lauren Kotloff, Longjuan Liang, Mei Liu, Skip Livingston, Ruth Loew, Rochelle Michel, Maria Elena Oliveri, Donald Powers, MaraGale Reinecke, Sharon Slater, John Young, and Rebecca Zwick for their help with previous versions of the Compendium. In particular, we acknowledge the time and guidance given by Kim Fryer and Marna Golub-Smith and thank them for their assistance.

Contents

Overview to the Compendium (Cathy Wendler, Brent Bridgeman, and Chelsea Ezzo)
Section 1: Development of the GRE® revised General Test
1.1 Revisiting the GRE® General Test (Jacqueline Briel and Rochelle Michel)
1.2 A Chronology of the Development of the Verbal and Quantitative Measures on the GRE® revised General Test (Cathy Wendler)
1.3 Supporting Efficient, Evidence-Centered Question Development for the GRE® Verbal Measure (Kathleen Sheehan, Irene Kostin, and Yoko Futagi)
1.4 Transfer Between Variants of Quantitative Questions (Mary Morley, Brent Bridgeman, and René Lawless)
1.5 Effects of Calculator Availability on GRE® Quantitative Questions (Brent Bridgeman, Frederick Cline, and Jutta Levin)
1.6 Calculator Use on the GRE® revised General Test Quantitative Reasoning Measure (Yigal Attali)
1.7 Identifying the Writing Tasks Important for Academic Success at the Undergraduate and Graduate Levels (Michael Rosenfeld, Rosalea Courtney, and Mary Fowles)
1.8 Timing of the Analytical Writing Measure of the GRE® revised General Test (Frédéric Robin and J. Charles Zhao)
1.9 Psychometric Evaluation of the New GRE® Writing Measure (Gary Schaeffer, Jacqueline Briel, and Mary Fowles)
1.10 Comparability of Essay Question Variants (Brent Bridgeman, Catherine Trapani, and Jennifer Bivens-Tatum)

Section 2: Creating and Maintaining the Score Scales
2.1 Considerations in Choosing a Reporting Scale for the GRE® revised General Test (Marna Golub-Smith and Cathy Wendler)
2.2 How the Scales for the GRE® revised General Test Were Defined (Marna Golub-Smith and Tim Moses)
2.3 Evaluating and Maintaining the Psychometric and Scoring Characteristics of the Revised Analytical Writing Measure (Frédéric Robin and Sooyeon Kim)
2.4 Using Automated Scoring as a Trend Score: The Implications of Score Separation Over Time (Catherine Trapani, Brent Bridgeman, and F. Jay Breyer)

Section 3: Test Design and Delivery
3.1 Practical Considerations in Computer-Based Testing (Tim Davey)
3.2 Examining the Comparability of Paper-Based and Computer-Based Administrations of Novel Question Types: Verbal Text Completion and Quantitative Numeric Entry Questions (Elizabeth Stone, Teresa King, and Cara Cahalan Laitusis)
3.3 Test Design for the GRE® revised General Test (Frédéric Robin and Manfred Steffen)
3.4 Potential Impact of Context Effects on the Scoring and Equating of the Multistage GRE® revised General Test (Tim Davey and Yi-Hsuan Lee)

Section 4: Understanding Automated Scoring
4.1 Overview of Automated Scoring for the GRE® General Test (Chelsea Ezzo and Brent Bridgeman)
4.2 Comparing the Validity of Automated and Human Essay Scoring (Donald Powers, Jill Burstein, Martin Chodorow, Mary Fowles, and Karen Kukich)
4.3 Stumping e-rater®: Challenging the Validity of Automated Essay Scoring (Donald Powers, Jill Burstein, Martin Chodorow, Mary Fowles, and Karen Kukich)
4.4 Performance of a Generic Approach in Automated Essay Scoring (Yigal Attali, Brent Bridgeman, and Catherine Trapani)
4.5 Evaluation of the e-rater® Scoring Engine for the GRE® Issue and Argument Prompts (Chaitanya Ramineni, Catherine Trapani, David Williamson, Tim Davey, and Brent Bridgeman)
4.6 E-rater® Performance on GRE® Essay Variants (Yigal Attali, Brent Bridgeman, and Catherine Trapani)
4.7 E-rater® as a Quality Control on Human Scores (William Monaghan and Brent Bridgeman)
4.8 Comparison of Human and Machine Scoring of Essays: Differences by Gender, Ethnicity, and Country (Brent Bridgeman, Catherine Trapani, and Yigal Attali)
4.9 Understanding Average Score Differences Between e-rater® and Humans for Demographic-Based Groups in the GRE® General Test (Chaitanya Ramineni, David Williamson, and Vincent Weng)

Section 5: Validation Evidence
5.1 Understanding What the Numbers Mean: A Straightforward Approach to GRE® Predictive Validity (Brent Bridgeman, Nancy Burton, and Frederick Cline)
5.2 New Perspectives on the Validity of the GRE® General Test for Predicting Graduate School Grades (David Klieger, Frederick Cline, Steven Holtzman, Jennifer Minsky, and Florian Lorenz)
5.3 Likely Impact of the GRE® Writing Measure on Graduate Admission Decisions (Donald Powers and Mary Fowles)
5.4 A Comprehensive Meta-Analysis of the Predictive Validity of the GRE®: Implications for Graduate Student Selection and Performance (Nathan Kuncel, Sarah Hezlett, and Deniz Ones)
5.5 The Validity of the GRE® for Master's and Doctoral Programs: A Meta-Analytic Investigation (Nathan Kuncel, Serena Wee, Lauren Serafin, and Sarah Hezlett)
5.6 Predicting Long-Term Success in Graduate School: A Collaborative Validity Study (Nancy Burton and Ming-mei Wang)
5.7 Effects of Pre-Examination Disclosure of Essay Prompts for the GRE® Analytical Writing Measure (Donald Powers)
5.8 The Role of Noncognitive Constructs and Other Background Variables in Graduate Education (Patrick Kyllonen, Alyssa Walters, and James Kaufman)

Section 6: Ensuring Fairness and Accessibility
6.1 Test-Taker Perceptions of the Role of the GRE® General Test in Graduate Admissions: Preliminary Findings (Frederick Cline and Donald Powers)
6.2 Field Trial of Proposed GRE® Question Types for Test Takers With Disabilities (Cara Cahalan Laitusis, Lois Frankel, Ruth Loew, Emily Midouhas, and Jennifer Minsky)
6.3 Development of the Computer-Voiced GRE® revised General Test for Examinees Who Are Blind or Have Low Vision (Lois Frankel and Barbara Kirsh)
6.4 Ensuring the Fairness of GRE® Analytical Writing Measure Prompts: Assessing Differential Difficulty (Markus Broer, Yong-Won Lee, Saba Rizavi, and Donald Powers)
6.5 Effect of Extra Time on Verbal and Quantitative GRE® Scores (Brent Bridgeman, Frederick Cline, and James Hessinger)
6.6 Fairness and Group Performance on the GRE® revised General Test (Frédéric Robin)

Overview to the Compendium

The decision to revise a well-established test such as the GRE® General Test is made purposively and thoughtfully, because such a decision has major consequences for score users and test takers. Considerations as to changing the underlying constructs measured by the test, the question types used on the test, the method for delivering the test, and the scale used to report scores must be carefully evaluated (see Dorans & Walker, 2013; Wendler & Walker, 2006). Changes in the test-taking population, in the relationship of question types to the skills being measured, or in the uses of the test scores require that a careful examination of the test be undertaken.

For the GRE General Test, efforts to evaluate possible changes to the test systematically began with approval from the Graduate Record Examinations® (GRE) Board. What followed was a decade of extensive efforts and activities that examined multiple question types, test designs, and delivery issues related to the test revision.
Throughout the redesign process, the Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999) was used as guidance. The resulting GRE revised General Test is one that adheres to the Standards.

The Compendium provides chapters in the form of summaries as a way to describe the extensive development process. A number of these chapters are available in longer published documents, such as research reports, journal articles, or book chapters, and their original source is provided for convenience. The intention of the Compendium is to provide, in nontechnical language, an overview of specific efforts related to the GRE revised General Test. Other studies that were conducted during the development of the GRE revised General Test are not detailed here; while these studies are important, only those that in some way contributed to decisions about the GRE revised General Test or contribute to the validity argument for the revised test are included in the Compendium.

The Compendium is divided into six sections, each of which contains multiple chapters around a common theme. It is not expected that readers will read the entire Compendium. Instead, the Compendium is designed so that readers may choose to review (or print) specific chapters or sections. Each section begins with an overview that summarizes the chapters found within the section. Readers may find it helpful to read each section overview and to use it as a guide to determine which particular chapters to read.

Section 1: Development of the GRE revised General Test

A test should be revised using planned, documented processes that include, among others, gathering data on the functioning of question types, timing allotted for the test or test sections, and performance differences for subpopulations. The focus of the first section is to ...

Posted: 23/11/2022, 19:02
