Research Methodologies of School Psychology

Research Methodologies of School Psychology is a comprehensive, actionable resource that offers graduate students and school psychologists the knowledge and skills to apply key scientific techniques in practice. A volume in the Foundations of School Psychology Research and Practice Series, this book directly addresses the need for definitive resources on mastering research methodologies in the field. Covering topics such as the development and evaluation of measures, the application of various designs, and the drawing of inferences from data, Ryan J. Kettler provides rigorous yet accessible methodological guidance. Each chapter includes illustrative examples, summaries of essential learnings, and reflective concluding questions. Using these engaging and invaluable strategies, graduate students and school psychologists will be effectively prepared to apply the scientific method in their own professional contexts.

Ryan J. Kettler is Associate Professor in the Department of School Psychology, Graduate School of Applied and Professional Psychology, Rutgers University–New Brunswick, USA.

Foundations of School Psychology Research and Practice Series
Series Editor: Craig Albers

Competent school psychologists function simultaneously as scientists, scholars, and practitioners. Therefore, they need a foundation of evidence-based practices to inform their services to educators, parents, students, and others. The Foundations of School Psychology Research and Practice Series offers school psychologists the context and skills they need to be effective in both research and practice, integrating in-depth and up-to-date coverage of important theories, methodologies, applications, approaches, and issues across a variety of topics. These books are intended for trainers seeking comprehensive course readings, researchers looking to examine concepts from school psychology-specific perspectives, and practitioners ready to build on and apply their knowledge and preparation.
Research Methodologies of School Psychology: Critical Skills

Ryan J. Kettler

First published 2019
by Routledge
52 Vanderbilt Avenue, New York, NY 10017
and by Routledge
Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business.

© 2019 Taylor & Francis

The right of Ryan J. Kettler to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
Names: Kettler, Ryan J., author.
Title: Research methodologies of school psychology: critical skills / Ryan J. Kettler.
Description: New York, NY: Routledge, 2019. | Includes bibliographical references and index.
Identifiers: LCCN 2018055331 (print) | LCCN 2019001022 (ebook) | ISBN 9781315724072 (eBook) | ISBN 9781138851498 (hardback) | ISBN 9781138851504 (pbk.)
Subjects: LCSH: School psychology–Research–Methodology.
Classification: LCC LB1027.55 (ebook) | LCC LB1027.55.K46 2019 (print) | DDC 371.7/13–dc23
LC record available at https://lccn.loc.gov/2018055331

ISBN: 978-1-138-85149-8 (hbk)
ISBN: 978-1-138-85150-4 (pbk)
ISBN: 978-1-315-72407-2 (ebk)

Typeset in Bembo by Deanta Global Publishing Services, Chennai, India.

To my sometimes collaborators, often mentors, and always friends – Craig Albers, Margaret Garcia, and Steve Elliott – thank you for guiding me and supporting me in the career I love. To my family – Kelly and Austin – you are the whole world to me. – Ryan

Contents

About the Author ix
Acknowledgments x

SECTION: Research, Evaluation, and Our Field 1
1 School Psychology and the Scientific Method
2 Literature Review, Questions, and Hypotheses 24
KATHLEEN LYNNE LANE AND RYAN J. KETTLER

SECTION: Science Begins with Measurement 43
3 Developing Measures 45
4 The Reliability of Scores and the Validity of Inferences 67

SECTION: Research Designs in School Psychology 85
5 Randomized Control Trials and Quasi-Experimental Designs in School Psychology 87
6 Correlational and Causal Comparative Designs in School Psychology 112
7 Survey Designs in School Psychology 132
KATHLEEN LYNNE LANE AND RYAN J. KETTLER
8 Single-Case Research Designs in School Psychology 151
KATHLEEN LYNNE LANE AND RYAN J. KETTLER

SECTION: Drawing Inferences from Data 167
9 Statistics, Samples, and Populations 169
10 Research-Related Tasks: Developing Technical Writing Skills 193
KATHLEEN LYNNE LANE AND RYAN J. KETTLER
11 Moving Forward: Conducting Research in a Practicing Profession 216

References 231
Index 242

About the Author

Ryan J. Kettler earned his doctorate in Educational Psychology from the University of Wisconsin–Madison. Prior to joining Rutgers, Dr. Kettler held faculty positions at California State University, Los Angeles and at Vanderbilt University. Dr. Kettler's research focuses on data-based decision making in education. Active areas within his
research program include universal screening, inclusive assessment, socio-emotional competence, and educator effectiveness. Dr. Kettler has been a principal investigator or co-principal investigator on eight grant projects, including three funded by the US Department of Education (ED). He was previously a co-principal investigator of the School System Improvement (SSI) Project, a five-year federal grant funded through the Teacher Incentive Fund to modernize educator assessment connected to professional development. The SSI Project was a partnership between two universities and charter schools throughout the state of New Jersey, developed with the ultimate goal of improving educator performance and student achievement. In addition to writing Research Methodologies of School Psychology: Critical Skills, Dr. Kettler is the lead editor of Universal Screening in Educational Settings: Evidence-Based Decision Making for Schools, a co-editor of the Handbook of Accessible Instruction and Testing Practices: Issues, Innovations, and Applications (Second Edition), and an associate editor for School Psychology International. His expertise in assessment has led to collaborations with 12 state departments of education, as well as to peer review of state accountability systems for the ED. Dr. Kettler has published over 50 peer-reviewed journal articles and over 20 book chapters, encyclopedia entries, and test reviews.

References

Gresham, F.M., & Elliott, S.N. (2008). SSiS: Teacher Rating Scales. Bloomington, MN: Pearson Assessments.
Gutkin, T.B., & Conoley, J.C. (1990). Reconceptualizing school psychology from a service delivery perspective: Implications for practice, training, and research. Journal of School Psychology, 28, 203–223.
Haladyna, T.M., Downing, S.M., & Rodriguez, M.C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15, 309–334.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to
achievement. London, England: Routledge.
Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. London, England: Routledge.
Hattie, J. (2015). The applicability of visible learning to higher education. Scholarship of Teaching and Learning in Psychology, 1, 79–91. doi:10.1037/stl0000021
Haydon, T., Mancil, G.R., & Van Loan, C. (2009). Using opportunities to respond in a general education classroom: A case study. Education and Treatment of Children, 32, 267–278.
Hayes, S.C., Barlow, D.H., & Nelson-Gray, R.O. (1999). The scientist practitioner: Research and accountability in the age of managed care (2nd ed.). Boston, MA: Allyn and Bacon.
Hess, K., McDivitt, P., & Fincher, M. (2008, June). Who are the 2% students and how do we design items and assessments that provide greater access for them? Results from a pilot study with Georgia students. Paper presented at the 2008 CCSSO National Conference on Student Assessment, Orlando, FL. Available at http://www.nciea.org/publications/CCSSO_KHPMMF08.pdf
Hopkins, W.C. (2001). A scale of magnitudes for effect statistics. A New View of Statistics. Retrieved from http://www.sportsci.org/resource/stats/effectmag.html
Horner, R.H., Carr, E.G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179.
Horner, R.H., & Odom, S.L. (2014). Constructing single-case research designs: Logic and options. In T.R. Kratochwill and J.R. Levin (Eds.),
Single-case intervention research: Methodological and statistical advances (pp. 27–51). Washington, DC: APA. doi:10.1037/14376-002
Howell, D.C. (2004). Fundamental statistics for the behavioral sciences. Belmont, CA: Brooks/Cole Thomson Learning.
Hunsley, J., & Meyer, G.J. (2003). The incremental validity of psychological testing and assessment: Conceptual, methodological, and statistical issues. Psychological Assessment, 15, 446–455.
IBM Corp. (2015). IBM SPSS Statistics for Windows, Version 23.0. Armonk, NY: IBM Corp.
Individuals with Disabilities Education Improvement Act of 2004, Pub. L. No. 108–446, 118 Stat. 2647 (2004).
Institute of Education Sciences (2016). Request for applications: Partnerships and collaborations focus on problems of practice or policy (Research Collaborations Program; CFDA 84.305H). Washington, DC: US Department of Education.
Irwin, C.W., & Stafford, E.T. (2016). Survey methods for educators: Collaborative survey development (part of 3) (REL 2016-160). Washington, DC: US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Education Laboratory Northeast & Islands.
Isaac, S., & Michael, W.B. (1995). Handbook in research and evaluation (3rd ed.). San Diego, CA: Edits.
Kane, M. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50, 1–73.
Kazdin, A.E. (2011). Single case research designs (2nd ed.).
New York, NY: Oxford University Press.
Kratochwill, T.R., Horner, R.H., & Levin, J.R. (2017). Negative results: Conceptual and methodological dimensions in single-case intervention research. Remedial and Special Education, 39, 67–76.
Kennedy, C.H. (2005). Single-case designs for educational research. Boston, MA: Allyn & Bacon.
Kettler, R.J. (2011a). Computer-based screening for the new modified alternate assessment. Journal of Psychoeducational Assessment, 29, 3–13. doi:10.1177/0734282910370804
Kettler, R.J. (2011b). Holding modified assessments accountable: Applying a unified reliability and validity framework to the development and evaluation of AA-MASs. In M. Russell (Ed.), Assessing students in the margins: Challenges, strategies, and techniques (pp. 311–334). Charlotte, NC: Information Age Publishing.
Kettler, R.J., Elliott, S.N., & Beddow, P.A. (2009). Modifying achievement test items: A theory-guided and data-based approach for better measurement of what students with disabilities know. Peabody Journal of Education, 84, 529–551.
Kettler, R.J., & Feeney-Kettler, K.A. (2011). Screening systems and decision-making at the preschool level: Application of a comprehensive validity framework. Psychology in the Schools, 48, 430–441. doi:10.1002/pits.20565
Kettler, R.J., Glover, T.A., Albers, C.A., & Feeney-Kettler, K.A. (Eds.)
(2014). Universal screening in educational settings: Evidence-based decision making for schools. Washington, DC: American Psychological Association. doi:10.1037/14316-000
Kettler, R.J., Rodriguez, M.R., Bolt, D.M., Elliott, S.N., Beddow, P.A., & Kurz, A. (2011). Modified multiple-choice items for alternate assessments: Reliability, difficulty, and differential boost. Applied Measurement in Education, 24, 210–234. doi:10.1080/08957347.2011.580620
Kilgus, S.P., Chafouleas, S.M., & Riley-Tillman, T.C. (2013). Development and initial validation of the Social and Academic Behavior Risk Screener for elementary grades. School Psychology Quarterly, 28, 210–226.
Kilgus, S.P., Eklund, K., von der Embse, N.P., Taylor, C.N., & Sims, W.A. (2016). Psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) Teacher Rating Scale and multiple gating procedure within elementary and middle school samples. Journal of School Psychology, 58, 21–39.
Kilgus, S.P., Sims, W.A., von der Embse, N.P., & Riley-Tillman, T.C. (2015). Confirmation of models for interpretation and use of the Social and Academic Behavior Risk Screener (SABRS). School Psychology Quarterly, 30, 335–352.
Kratochwill, T.R., Hitchcock, J.H., Horner, R.H., Levin, J.R., Odom, S.L., Rindskopf, D.M., & Shadish, W.R. (2013). Single-case intervention research design: The What Works Clearinghouse standards. Remedial and Special Education, 34, 26–38.
Kuhn, M.R., & Stahl, S.A. (2003). Fluency: A review of developmental and remedial practices. Journal of Educational Psychology, 95, 3–22.
Lane, K.L. (2018). Building strong partnerships: Responsible inquiry to learn and grow together – TECBD-CCBD keynote address. Manuscript under review.
Lane, K.L., Bruhn, A.L., Crnobori, M.E., & Sewell, A.L. (2009). Designing functional assessment-based interventions using a systematic approach: A promising practice for supporting challenging behavior. In Advances in learning and behavior disabilities (Vol. 22, pp. 341–370). UK:
Emerald Group Publishing Limited.
Lane, K.L., Carter, E., Jenkins, A., Magill, L., & Germer, K. (2015). Supporting comprehensive, integrated, three-tiered models of prevention in schools: Administrators' perspectives. Journal of Positive Behavior Interventions, 17, 209–222. doi:10.1177/1098300715578916
Lane, K.L., Common, E.A., Royer, D.J., & Muller, K. (2014). Group comparison and single case research design quality indicator matrix using Council for Exceptional Children 2014 standards. Unpublished tool available at www.ci3t.org
Lane, K.L., Common, E.A., Royer, D.J., & Oakes, W.P. (2018). Conducting systematic reviews of the literature: Guidance for quality appraisals. Manuscript under review.
Lane, K.L., Kalberg, J.R., & Shepcaro, J.C. (2009). An examination of the evidence base for function-based interventions for students with emotional and/or behavioral disorders attending middle and high schools. Exceptional Children, 75, 321–340.
Lane, K.L., Royer, D.J., Messenger, M.L., Common, E.A., Ennis, R.P., & Swogger, E.D. (2015). Empowering teachers with low-intensity strategies to support academic engagement: Implementation and effects of instructional choice for elementary students in inclusive settings. Education and Treatment of Children, 38, 473–504. doi:10.1353/etc.2015.001
Lane, K., Wolery, M., Reichow, B., & Rogers, L. (2007). Describing baseline conditions: Suggestions from research reports. Journal of Behavioral Education, 16, 224–234.
Lane, S., Raymond, M.R., & Haladyna, T.M. (2016). Handbook of test development (2nd ed.). New York, NY: Routledge, Taylor and Francis.
Lane, S., Raymond, M.R., Haladyna, T.M., & Downing, S.M. (2016). Test development process. In S. Lane, M.R. Raymond, & T.M. Haladyna (Eds.), Handbook of test development (2nd ed.)
(pp. 3–18). New York, NY: Routledge, Taylor and Francis.
Lawshe, C.H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563–575.
Ledford, J.R., & Gast, D.L. (Eds.) (2018). Single case research methodology: Applications in special education and behavioral sciences (3rd ed.). New York, NY: Routledge.
Leitenberg, H. (1973). The use of single-case methodology in psychotherapy research. Journal of Abnormal Psychology, 82, 87–101. doi:10.1901/jaba.1985.18-3
Leong, F.T., & Austin, J.T. (2005). Psychology research handbook: A guide for graduate students and research assistants (2nd ed.). Thousand Oaks, CA: Sage Publications.
Lilienfeld, S.O., Ammirati, R., & David, M. (2012). Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error. Journal of School Psychology, 50, 7–36.
Lucas, S., & Cutspec, P. (2005). The role and process of literature searching in the preparation of a research synthesis. Retrieved from http://www.wbpress.com/shop/the-role-and-process-of-literature-searching-in-the-preparation-of-a-research-synthesis/
Maggin, D.M., & Chafouleas, S.M. (2013). Introduction to the special series: Issues and advances of synthesizing single-case research. Remedial and Special Education, 31, 3–8.
Maggin, D.M., Lane, K.L., & Pustejovsky, J.E. (2017). Introduction to the special issue on single-case systematic reviews and meta-analyses. Remedial and Special Education, 38, 323–330.
Majeika, C.E., Walder, J.P., Hubbard, J.P., Steeb, K.M., Ferris, G.J., Oakes, W.P., & Lane, K.L. (2011). Improving on-task behavior using a functional assessment-based intervention in an inclusive high school setting. Beyond Behavior, 20, 55–66.
McGrew, K.S., LaForte, E.M., & Schrank, F.A. (2014). Technical manual: Woodcock-Johnson IV. Rolling Meadows, IL: Riverside.
Mehrens, W.A. (1997). The consequences of consequential validity. Educational Measurement: Issues and Practice, 16(2), 16–18.
Mertens, D.M. (2015). Research and evaluation
in education and psychology (4th ed.). Los Angeles, CA: SAGE Publishing, Inc.
Mertler, C.A. (2016). Introduction to educational research. Washington, DC: SAGE Publishing, Inc.
Mertler, C.A., & Reinhart, R.V. (2017). Advanced and multivariate statistical methods. New York, NY: Routledge.
Messenger, M., Common, E.A., Lane, K.L., Oakes, W.P., Menzies, H.M., Cantwell, E.D., & Ennis, R.P. (2017). Increasing opportunities to respond for students with internalizing behaviors: The utility of choral and mixed responding. Behavioral Disorders, 42, 170–184. doi:10.1177/0198742917712968
Messick, S.M. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.
Miller, L.M., Dufrene, B.A., Olmi, D.J., Tingstrom, D., & Filce, H. (2015). Self-monitoring as a viable fading option in check-in/check-out. Journal of School Psychology, 53, 121–135.
Mills, G.E., & Gay, L.R. (2018). Educational research: Competencies for analysis and applications (12th ed.). United States of America: Pearson Education Limited.
Murphy, K.R., & Davidshofer, C.O. (2005). Psychological testing: Principles and applications (6th ed.). Upper Saddle River, NJ: Prentice Hall.
Naglieri, J.A., Das, J.P., & Goldstein, S. (2014). Cognitive Assessment System (2nd ed.)
Austin, TX: PRO-ED.
National Association of the Deaf (2016). National Association of the Deaf. Retrieved March 31, 2017 from www.nad.org
National Association of School Psychologists (2010a). Model for comprehensive and integrated school psychological services. Bethesda, MD: Author.
National Association of School Psychologists (2010b). NASP practice model 10 domains. Bethesda, MD: Author.
National Association of School Psychologists (2010c). Principles for professional ethics. Bethesda, MD: Author.
National Center for Education Statistics (2001). User's manual for the ECLS-K base year public-use data files and electronic codebook (NCES 2001–029 revised). Washington, DC: US Government Printing Office.
National Reading Panel (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: Author.
Newcomer, J.S., Hatry, H.P., & Wholey, J.S. (2010). Handbook of practical program evaluation (3rd ed.). San Francisco, CA: John Wiley & Sons, Inc.
Nicol, A.A.M., & Pexman, P.M. (2010a). Presenting your findings: A practical guide for creating figures, posters, and presentations (6th ed.). Washington, DC: American Psychological Association.
Nicol, A.A.M., & Pexman, P.M. (2010b). Presenting your findings: A practical guide for creating tables (6th ed.). Washington, DC: American Psychological Association.
No Child Left Behind Act of 2002, Pub. L. No. 107–110, 115 Stat. 1425 (2002).
Odom, S.L., & Lane, K.L. (2014). The applied science of special education: Quantitative approaches, the questions they address, and how they inform practice. In L. Florian (Ed.),
The SAGE handbook of special education (2nd ed., Vol. 1, pp. 369–388). Los Angeles, CA: Sage.
Pazzaglia, A.M., Stafford, E.T., & Rodriguez, S. (2016a). Survey methods for educators: Sampling respondents and survey administration (part of 3) (REL 2016-160). Washington, DC: US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Education Laboratory Northeast & Islands.
Pazzaglia, A.M., Stafford, E.T., & Rodriguez, S. (2016b). Survey methods for educators: Analysis and reporting of survey data (part of 3) (REL 2016-160). Washington, DC: US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Education Laboratory Northeast & Islands.
Predy, L., McIntosh, K., & Frank, J.L. (2014). Utility of number and type of office discipline referrals in predicting chronic problem behavior in middle schools. School Psychology Review, 43, 472–489.
Pyrczak, F., & Bruce, R.R. (2017). Writing empirical research reports: A basic guide for students of social and behavioral sciences (8th ed.). New York, NY: Routledge.
Reynolds, C.R., & Kamphaus, R.W. (2015). Behavior Assessment System for Children (3rd ed.)
Minneapolis, MN: Pearson Assessments.
Rhymer, K.N., Dittmer, K.I., Skinner, C.H., & Jackson, B. (2000). Effectiveness of a multi-component treatment for improving mathematics fluency. School Psychology Quarterly, 15, 40–51.
Roach, A.T., Lawton, K., & Elliott, S.N. (2014). Best practices in facilitating and evaluating the integrity of school-based interventions. In P.L. Harrison & A. Thomas (Eds.), Best practices in school psychology: Data-based and collaborative decision making (pp. 133–146). Bethesda, MD: National Association of School Psychologists.
Rodriguez, M.C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24(2), 3–13.
Rosenbaum, P.R. (1995). Observational studies. New York, NY: Springer-Verlag.
Rosenbaum, P.R., & Rubin, D.B. (1984). Reducing bias in observational studies using subclassification on the propensity score. Journal of the American Statistical Association, 79, 516–524.
Rosenbaum, P.R., & Rubin, D.B. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician, 39, 33–38.
Ross, K.A. (2005). Sample design for educational survey research. Paris, France: International Institute for Educational Planning/United Nations Educational, Scientific and Cultural Organization.
Royer, D.J., Lane, K.L., Cantwell, E.D., & Messenger, M.L. (2017). A systematic review of the evidence base for instructional choice in K–12 settings. Behavioral Disorders, 42, 89–107.
Salkind, N.J. (2014). Exploring research (8th ed.). Bloomington, MN: Pearson.
Sattler, J.M. (2008). Assessment of children: Cognitive foundations (5th ed.)
San Diego, CA: Jerome M. Sattler.
School-Wide Information System (2011). About SWIS. Retrieved from http://www.swis.org/
Schrank, F.A., Mather, N., & McGrew, K.S. (2014a). Woodcock-Johnson IV Tests of Achievement. Rolling Meadows, IL: Riverside Publishing.
Schrank, F.A., Mather, N., & McGrew, K.S. (2014b). Woodcock-Johnson IV Tests of Oral Language. Rolling Meadows, IL: Riverside Publishing.
Schrank, F.A., McGrew, K.S., & Mather, N. (2014). Woodcock-Johnson IV Tests of Cognitive Abilities. Rolling Meadows, IL: Riverside Publishing.
Schweinhart, L.J., & Weikart, D.P. (1997). The High/Scope preschool curriculum comparison study through age 23. Early Childhood Research Quarterly, 12, 117–143.
Shadish, W.R., Cook, T.D., & Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference (2nd ed.). New York, NY: CENGAGE Learning.
Shadish, W.R., Hedges, L.V., Horner, R.H., & Odom, S.L. (2015). The role of between-case effect size in conducting, interpreting, and summarizing single-case research. Washington, DC: National Center for Education Research, Institute of Education Sciences, US Department of Education.
Shah, H.K., Domitrovich, C.E., Morgan, N.R., Moore, J.E., Rhoades, B.L., Jacobson, L., & Greenberg, M.T. (2017). One or two years of participation: Is dosage of an enhanced publicly funded preschool program associated with the academic and executive function skills of low-income children in early elementary school?
Early Childhood Research Quarterly, 40, 123–137. doi:10.1016/j.ecresq.2017.03.004
Shavelson, R.J., & Towne, L. (2002). Scientific research in education. Washington, DC: National Academy Press.
Sheridan, S.M., Eagle, J.W., Cowan, R.J., & Mickelson, W. (2001). The effects of conjoint behavioral consultation: Results of a four-year investigation. Journal of School Psychology, 39, 361–385.
Sheridan, S.M., Eagle, J.W., & Doll, B. (2006). An examination of the efficacy of conjoint behavioral consultation with diverse clients. School Psychology Quarterly, 21, 396–417.
Sheridan, S.M., Ryoo, J.H., Garbacz, S.A., Kunz, G.M., & Chumney, F.L. (2013). The efficacy of conjoint behavioral consultation on parents and children in the home setting: Results of a randomized controlled trial. Journal of School Psychology, 51, 717–733.
Shogren, K.A., Faggella-Luby, M.N., Bae, S.J., & Wehmeyer, M.L. (2004). The effect of choice-making as an intervention for problem behavior: A meta-analysis. Journal of Positive Behavior Interventions, 6, 228–237. doi:10.1177/10983007040060040401
Sreckovic, M.A., Common, E.A., Knowles, M., & Lane, K.L. (2014). A review of self-regulated strategy development for writing for students with EBD. Behavioral Disorders, 39, 56–77. doi:10.18541/ser.2011.02.10.1.103
Strong, A.C., Wehby, J.H., Falk, K.B., & Lane, K.L. (2004). The impact of a structured reading curriculum and repeated reading on the performance of junior high students with emotional and behavioral disorders. School Psychology Review, 33, 561–581.
Torgerson, C.J., & Torgerson, D.J. (2002). The need for randomized controlled trials in educational research. British Journal of Educational Studies, 49, 316–328.
Trochim, W.M.K., & Cappelleri, J.C. (1992). Cutoff assignment strategies for enhancing randomized clinical trials. Controlled Clinical Trials, 13, 190–212.
Tucker, J.A. (1989). Basic flashcard technique when vocabulary is the goal (Unpublished teaching materials). School of Education, University of Chattanooga,
Chattanooga, TN.
Umbreit, J., Ferro, J., Liaupsin, C., & Lane, K.L. (2007). Functional behavioral assessment and function-based intervention: An effective, practical approach. Upper Saddle River, NJ: Prentice Hall.
US Department of Education, National Center for Education Statistics (2012). NCES statistical standards. Washington, DC: Author.
von der Embse, N.P., Pendergast, L., Kilgus, S.P., & Eklund, K.R. (2016). Evaluating the applied use of a mental health screener: Structural validity of the Social, Academic, and Emotional Behavior Risk Screener. Psychological Assessment, 28, 1265–1275.
Walker, H.M. (2012). Preparing fundable grant proposals. Washington, DC: Association for University Centers on Disability.
Walker, H.M., Forness, S.R., & Lane, K.L. (2014). Design and management of scientific research in applied school settings. In B. Cook, M. Tankersley, and T. Landrum (Eds.), Advances in learning and behavioral disabilities (Vol. 27, pp. 141–169). Bingley, UK: Emerald.
Walker, H.M., & Pasco, S.I. (2015). Foundations of grant writing: A systematic approach based on experience. Eugene, OR: University of Oregon.
Walker, H.M., Severson, H.H., & Feil, E.G. (2014). Systematic screening for behavior disorders (2nd ed.)
Eugene, OR: Pacific Northwest Publishing.
Wechsler, D., Raiford, S.E., & Holdnack, J.A. (2014). Wechsler Intelligence Scale for Children (5th ed.): Technical and interpretive manual supplement: Special group validity studies with other measures and additional tables. Bloomington, MN: Pearson.
What Works Clearinghouse (2014). Procedures and standards handbook (version 3.0). Retrieved from https://ies.ed.gov/ncee/wwc
Williams, S.V. (1990). Regression-discontinuity design in health evaluation. In L. Sechrest, B. Perrin, & J. Bunker (Eds.), Health science research methodology: Strengthening causal interpretations of nonexperimental data (DHHS Publication No. PHS 90-3454, pp. 145–149). Rockville, MD: US Department of Health and Human Services.
Wolery, M., Dunlap, G., & Ledford, J.R. (2011). Single-case experimental methods: Suggestions for reporting. Journal of Early Intervention, 33, 103–109.
Wolery, M., & Ezell, H.K. (1993). Subject descriptions and single subject research. Journal of Learning Disabilities, 26, 642–647.
Wolery, M., & Lane, K. (2014). Writing tasks: Literature reviews, research proposals, and final reports. In D.L. Gast and J. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed., pp. 50–84). New York, NY: Routledge.
Wolery, M., Lane, K., & Common, E.A. (2018). Writing tasks: Literature reviews, research proposals, and final reports. In D.L. Gast and J. Ledford (Eds.),
Single case research methodology: Applications in special education and behavioral sciences (3rd ed., pp. 43–76). New York, NY: Routledge.
Wood, B.K., Oakes, W.P., Fettig, A., & Lane, K.L. (2015). A review of the evidence base of functional assessment-based interventions for young students using one systematic approach. Behavioral Disorders, 40, 230–250.
Woodcock, R.W., & Johnson, M.B. (1977). Woodcock-Johnson Psycho-Educational Battery. Allen, TX: DLM.
Yeaton, W.H., & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49, 156–167.

Index

Note: Page numbers in italics indicate figures. Page numbers in bold indicate tables.

A1-B1-A2-B2 designs 155–158, 156, 160, 162–163
Academic Search Elite (search engine) 18
accessibility of tests 53
accessible population, determining (for surveys) 145
additional variables (affecting associations) 124–126
alternate form reliability estimate 70, 71, 77
alternating treatment designs (ALTs) 155, 160–163, 161
Amato, P.R. 122–123
American Educational Research Association 211
American Psychological Association (APA): Concise Rules of APA Style, Sixth Edition 194; conference presentations for 211, 212; Ethical Principles of Psychologists and Code of Conduct 226–227; Ethics Code Standards 8.12b, Publication Credit 207; Presenting Your Findings: A Practical Guide for Creating Figures, Posters, and Presentations 194; Publication Manual of the American Psychological Association, Sixth Edition 117, 194; and scientist-practitioner model 16; and tables and figures, use of 175; and violence against teachers 36
Ammirati, R.: Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error 217–218
analyses of variance (ANOVAs) 76, 186
ancestral searches (for literature reviews) 30
anonymous surveys 137
Anthony, C.J. 122–124, 127–129
anxiety levels, measuring 34, 69, 72, 225
assessments and assessment research 14–15, 45, 217, 219–220; see also measures, developing
assignment variables 108
attachment theory 224–225
attrition 135
author searches (for literature reviews) 30
authorship 32–33, 207–208
Baer, D.M. 193–194
Barlow, D.H.: The Scientist Practitioner 16
baseline phases 156, 200–201
Beddow, P.A. 53
behaviorism approach 224–225
bias issues 12, 28, 38, 92, 101, 218
biological variables 113, 117
Blair, J. 144
"blind" review process 210
Bliss, S.L. 226
Brewer, E. 158
Bruhn, A.L. 31
Campbell, D.T. 88; Experimental and Quasi-Experimental Designs 38
Cantwell, E.D. 30
Carroll, E.E. 226
Carter, D.R. 134–136, 140, 145
categorical (or nominal) scales and variables 172–173, 175, 181–183, 190
Cattell, J.M.
causal inquiry 151
causality, establishing 8–13
central tendency indicators 175–176
Chafouleas, S.M. 48
check-in/check-out (CICO) 156–158
Chenier, K.H. 158
chi-square tests 181–183, 182
choral responding 154–157, 156, 161
Christian, L.M. 147
Chumney, F.L. 9–10
Ci3T models see comprehensive, integrated, three-tiered (Ci3T) models of prevention
classical test theory (CTT) 46–47, 57, 60, 68
clinical psychology, goals of
cluster sampling 146
coding processes (for literature reviews) 27, 31–32
coefficient alpha 70, 71
cognitive interviews (in survey development) 143
cognitive labs 73–74, 77
cognitive load theory 53–54
Cohen, J.: A Power Primer 182, 187
Cohen's d 184–186
cohort surveys 134–135
Collins, T.A. 158
Common, E.A. 30–32, 193, 202
comparative questions 39, 154
compensatory equalization 12
component questions 38, 154
comprehensive, integrated, three-tiered (Ci3T) models of prevention 134–135, 140
conference presentations and proposals 28, 211–212
confidential surveys 136–137
confirmatory factor analysis (CFA) 75, 77
Conjoint Behavioral Consultation (CBC) 9–13
Conrad, F.G. 144
consequential validity 76–78, 77
constructs: and assessment techniques 46–47; and construct validity 12, 72, 77–78, 169; defined 45; developing 50; inadequate explication of 12
consultation and consultation research 15–16 content validity 56, 73, 77 control group 9–12, 76, 88–105, 107–111, 121 convenience sampling 171, 172 Cook, B.G 88; Experimental and Quasi-Experimental Designs 38 correlation see correlational and causal comparative designs; item-to-total (ITT) correlation; linear and curvilinear correlations; Pearson correlations; pointbiserial correlation; Spearman correlation correlational and causal comparative designs 112–131, 113–114; comparison of 113–115; defined 112; and descriptive statistics 169; designing 120–123; and Every Student Succeeds Act 87–88; limitations of 116–117; and precautions 117–120; and variation, explanations for 123–129 Council for Exceptional Children (CEC) 32, 33, 201, 211 criterion variables 189–190 Crnobori, M.E 31 cross-informant agreement 71 cross-sectional surveys 133–134 Dart, E.H 158, 159 data analytic plans 142, 148, 185, 201 data series (of MBDs) 158–160 David, M.: Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error 217–218 Davidshofer, C.O 70 Deaf persons 119 degrees of association 113–114, 113–114, 116–117 demographic information 81, 91, 101; see also specific types of variables demonstration questions 38, 154 dependent variables (DV): defined 9; and experimental designs 87, 88; interval 183–190; in literature reviews 34; and onegroup posttest-only design 104; and pretestposttest design 94–95; and randomized designs 93; in research questions 38–39; and single-case designs 152–153; and statistical conclusion validity 10 descriptive statistics 174–177 descriptive studies 25, 151 DeVellis, R.F.: Scale Development, Fourth Edition 49, 49, 50, 55, 56 dichotomous variables 173 differential item functioning (DIF) 59, 60 differential selection 121–122 differential treatment 101 Dillman, D.A 147 DiPerna, J.C 122–123 directional (one-tailed) predictions 184–185, 188 disability variables 63, 152, 172–174, 175, 181–183, 182
dispersion or spread indicators 176–177 dissertations 28, 198 divorce and academic achievement 122–124, 127–129 Downing, S.M 49 DuBois, M.R 90–93, 96–99 Dufrene, B.A 156–157 Dynamic Indicators of Basic Early Literacy Skills (DIBELS) 91 effect sizes 39, 126, 153, 163, 178, 180–181, 185; see also specific indicators electronic searches (for literature reviews) 29–30 Elliott, C.D 53 Elliott, S.N.: Best Practices in Facilitating and Evaluating the Integrity of School-Based Interventions 220 empirically supported programs 218–221 empirical studies 34–35 engagement, supporting student 154 ERIC (search engine) 18 error-score variance 68 Espelage, D 36 essay questions (in surveys) 141 eta squared (η2) indicator 186 Every Student Succeeds Act (2015) 87–88 evidence-based practices (EBPs) 27, 216–218, 217, 221 exclusion and inclusion criteria (for literature reviews) 28–31 experimental designs 87–88, 200–202; see also quasi-experimental designs; randomized control trials (RCTs) expert reviews 55, 73, 77 exploratory factor analysis (EFA) 75, 77 external validity 12, 88–89, 169, 174, 221 factor analysis 58–59, 74–75 Factorial Validity Index 56 FastBridge 62, 64 feasibility issues 37 feedback (during survey development process) 142–143 Ferro, J 31 Filce, H 156–157 filtered and unfiltered information 18 Floyd, R.G 17–18 follow-up surveys 135 formative instrument evaluation 79, 201–202, 202 Frank, J.L 120–121 Friedman test 186 full-scale intelligence quotient (FSIQ) 80, 173–174 functional assessment-based interventions (FABIs) 31 funnel approach 34 Furlow, C.M 158 Garbacz, S.A 9–10 Gast, D.L 155, 162 Gay, L.R 134 gender variables 63, 90–91, 117, 128, 174, 181–183, 182, 186 geographic regions (as variables) 63 Germer, K.A 134 Google Scholar (search engine) 18 grade point averages (GPAs) 174 grant writing 195, 212; see also technical writing Gresham, F.M 158 group design perspective 37 group membership (of participants) 59, 119–120; see also demographic information Haladyna, T.M.: Handbook of Test Development, Second Edition 49, 49 hand searches (for literature reviews) 30 Hatry, H.P 220–221 Hattie, J 178 Hayes, S.C.: The Scientist Practitioner 16, 17 Hemphill, E.M 90 hierarchical models (of explanations for variation) 128–129, 129 histograms 177, 177 Hopkins, W.C 182, 187 Horner, R.H 154, 160, 162 Houghton Mifflin Harcourt 62 hypothesis testing 37–39; see also questions and hypotheses, developing ideographic approach see single-case research design incentives (during sampling) 147 inclusion and exclusion criteria (for literature reviews) 28–31 incremental rehearsal (IR) 90, 96–100 independent variables (IV): categorical 183–186; and correlational and causal comparative studies 113; defined 9; and experimental designs 87, 88, 91; interval 187–190; in literature reviews 34; and pretest-posttest design 94–95; in research questions 38–39; and single-case designs 152–153, 158; and statistical conclusion validity 10 Individualized Education Programs (IEPs) 14, 138 Individuals with Disabilities Education Improvement Act (IDEIA) 14, 15 inferential statistics 178–190; see also statistical analyses Institute of Education Sciences (IES): National Center for Education Evaluation and Regional Assistance 132, 148, 149 Institutional Review Boards (IRBs) 136, 147, 148, 210 instructional choice 36–37 internal consistency (of measurement tools) 69–70, 77 internal validity 11–12, 33, 74–75, 77, 88–89, 169, 174, 221 inter-scorer (or inter-rater) reliability (IRR) 34, 70–72, 77 interval-level or better scales 174–177 intervention phases 156, 200–201 interventions 15, 217, 219–220 interviews (as survey format) 136 introspection see self-observation Irwin, C.W 133, 138–141, 143, 149 item analysis protocols 73–74, 77 item banking 56 item difficulty 57–58, 62 item discrimination and dispersion 57–58 item response theory (IRT) 57, 59, 60, 75, 77 item-to-total (ITT) correlation 57–60 item tryouts 56 item validity 58–59 Jenkins, A 134 journals,
peer-reviewed 26, 28–29, 208–211 Kane, M 81 Kennedy, C.H 38–39 Kettler, R.J 53 Kilgus, S.P 48, 52, 55–56, 60–61, 64; see also Social, Academic, and Emotional Behavior Risk Screener Kruskal-Wallis test 186 Kunz, G.M 9–10 LaForte, E.M.: Woodcock Johnson IV Technical Manual 48 Lane, K.L.: Ci3T model survey 134–136, 140, 145; and coding of sources 32; FABI literature review by 31; and formative evaluation considerations 202; Handbook of Test Development, Second Edition 49, 49, 50, 52–53, 56, 62–63; on results, sharing 62–63; on technical writing 193 language variables 63, 172–173 Lawton, K.: Best Practices in Facilitating and Evaluating the Integrity of School-Based Interventions 220 Ledford, J.R 155, 162 letter-sound expression (LSE) 96, 99 letter-sound fluency (LSF) 91, 93, 94, 96 Liaupsin, C 31 Lilienfeld, S.O.: Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error 217–219 linear and curvilinear correlations 188, 188 literature reviews: and empirically supported programs, selecting 218; functions of 25, 25–26; and meta-analysis 39; in research proposals 197–198; sources, locating 28–31; topics, determining and focusing 26–28; writing 32–35, 197–198 longitudinal designs 99, 99–100, 105, 124 longitudinal surveys 134–135 low-incidence populations 152, 163; see also disability variables Magill, L 134 Mann-Whitney U test 185 master’s theses 28, 198 maternal education level 63 McCleary, D.F 226 McGrew, K.S 51; Woodcock Johnson IV Technical Manual 48 McIntosh, K 120–121 mean, median, and mode 175–176 measures, developing 45–66, 46; and assessment techniques 46–48; and forms, completing 61–62; and items, developing and testing 52–61, 54; planning 49, 50–52; and reliability of 11; and results, sharing 62–64 mediation (statistical model) 126, 126–127 Mertens, D.M 101, 118–122, 125 Messick, S.M 72–73 meta-analysis 39 method section of research proposals 198–199 Miller, L.M 156–157 mixed responding
155, 159, 160, 161 moderation (statistical model) 127, 127–128 mono-operation bias 12 multi-element designs see alternating treatment designs multi-level models (of explanations for variation) 128–129, 129 multiple-baseline designs 158–160, 159, 162–163 multiple-choice tests 47, 47, 58 multiple-regression analyses 190 multiple-treatment interference or interaction 161 Murphy, K.R 70 National Association of School Psychologists (NASP) 211, 216; Comprehensive and Integrated School Psychological Services document 14; Principles for Professional Ethics 226 National Association of the Deaf 119 National Center for Education Statistics 138; Early Childhood Longitudinal Study – Kindergarten Class of 1998–99 122 Nelson-Gray, R.O.: The Scientist Practitioner 16 neutral language 38 Newcomer, J.S 220–221 Newton, I 212 non-directional (two-tailed) predictions 184–185 nonprobability sampling 146, 147 nonsense-word fluency (NWF) 91, 93, 96, 99 Norfolk, P.A 17–18 norming samples 63 Oakes, W 32 observations and developing research questions 36–37 Odom, S.L 154, 160, 162 office discipline referrals (ODRs) 120–121, 124, 126, 128, 129 Office for Human Research Protections (OHRP) 227 Olmi, D.J 156–157 Open Researcher and Contributor IDs (ORCIDs) 196 oral reading fluency (ORF) 18–20, 19–21, 161 order of authorship 207–208 order of entry (of additional variables) 125 ordinal scales 172–174 panel surveys 135 parametric questions 38, 154 participants (described in research proposals) 199 Pazzaglia, A.M 133, 144, 146, 148 Peabody Picture Vocabulary Test, Third Edition (PPVT-III) 103, 104, 106–107, 109 Pearson correlations 76, 187 peer-review process 208–211 person-first language 119 phi coefficient (Φ) 181–183 point-biserial correlation 57–60 posttest 93, 94–95, 95, 98–100, 99, 103–109, 108, 111, 185–186 power (of a study) 96 predictive language 38 predictor variables 189–190 Predy, L 120–121, 124, 126, 128, 129 Preschool Behavior Screening System (PBSS) 48 preschool dosage
102–107, 109 “prescriptions” for research and practice 217–218 pretest 93, 94–96, 95, 97–101, 103–111, 143, 185–186 probability sampling 146, 147 problem statements 196–197 process or mechanisms inquiry 151 program evaluation 220–221, 226, 227 ProQuest (search engine) 31 pseudoscientific practices 218–219, 219 PsycARTICLES (search engine) 31 PsychCorp (publisher) 67 psychological assessments 46 psychometric studies 25, 50, 58–59 PsycINFO (search engine) 18, 31 publication bias 28 quality indicators (QIs) 27, 32, 33 quasi-experimental designs 15, 87–89, 91, 100–103, 105, 107–111, 113, 116, 174; see also randomized control trials (RCTs) questionnaires 135–136 questions and hypotheses, developing 35–39 race and ethnicity variables 63, 117–119, 172–174, 186 randomized control trials (RCTs) 87–89, 91–92, 100–101, 103, 105, 108–111; and alternative-treatments design with pretest 97; and basic randomized design comparing treatment to control 92–93, 93, 96, 97, 104, 106; and basic randomized design comparing two treatments 96, 97; and basic randomized design comparing two treatments and control 97, 98; comparison of 101–103, 108; vs correlational and causal comparative studies 112, 113, 115, 116, 123; and crossover design 100; and descriptive statistics 169, 174; and exemplary quasi-experimental design 102–103; and factorial design 98; feasibility of 151–152; and longitudinal design 99, 99–100, 105; and multiple treatments and controls with pretest 97–98; and nonequivalent control group design 107; and nonrandom assignment 101–109; and one-group posttest-only design 103–104; and one-group pretest–posttest design 104–105; and posttest-only design with nonequivalent groups 106–107; and pretest–posttest control group design 94–95, 95, 98, 100, 105, 107; and random assignment 89–100; and regression-discontinuity design 107–108, 108; and school contexts 226; vs single-case designs 163; and Solomon four-group design 95–96; symbols of 91–92; and time series design 105–106 random 
sampling 13, 146, 170 range (indicator of dispersion) 176 Rasch analyses 60 rating scales 46, 48, 50–51, 55–56, 58–59, 78–79 raw scores 62–64; see also scores and inferences Raymond, M.R.: Handbook of Test Development, Second Edition 49 reading fluency 51, 157 re-consenting 148 reference lists (in research proposals) 202–203 Regional Educational Laboratory (REL) Northeast and Islands 138–139 regression analyses 189–190 regression and internal validity 11 regression discontinuity designs (RDDs) 151, 152, 163 related-samples t-tests 185 relations to other variables, validity evidence based on 75–76, 77 reliability evidence see scores and inferences replication research 36 research, conducting 226–227 Research Library (search engine) 31 research proposals 195–203; see also technical writing research questions, selection 35–36 research reports 203–207; see also technical writing response processes, validity based on 73–74, 77 response rates 137–138, 146–148 reversal designs 156 Riley-Tillman, T.C 48 Risley, T.R 193–194 Riverside Publishing 47; see also Woodcock Johnson IV Tests of Achievement Roach, A.T.: Best Practices in Facilitating and Evaluating the Integrity of School-Based Interventions 220 Rodriguez, M.R 58 Ross, K.A 146 Royer, D.J 32–35 r-squared (r2) indicator 187, 190 Ryoo, J.H 9–10 sample selection 144–147; see also specific types of sampling procedures school psychology and scientific method 3–23; and causality 8–13; gaps between 3–4; interconnection between 4–5; and professional practices 13–16; and research consumption 17–18; scientific method defined 6–8; and scientific thinking in clinical practice 18–21; and scientist-practitioner model 16–17 School-Wide Information System (SWIS) 121 Schrank, F.A.: Woodcock Johnson IV Technical Manual 48, 51, 55 scientific method, defined 6–8; see also school psychology and scientific method scientist-practitioner model 16, 218 scores and inferences 67–83; and accessibility 78–79; and argument-based approach to
test validation 81; and developmental stage of measures 79; and generalizability 80; and incremental validity 79–80; and reliability evidence 68–72, 77; and validity evidence 72–78, 77; and WISC-V 67–68, 71–75 search strategies (for literature reviews) 28–29 Sechrest, L 28 selection differences affecting internal validity 11 selective mutism 222–225, 223 self-observation (or introspection) 4–5, 156–157 settings (described in research proposals) 199–200 Sewell, A.L 31 Shadish, W.R 10–13, 88–90, 164, 169; Experimental and Quasi-Experimental Designs 38 Shah, H.K 102–107 shared variation between variables 115 Sharkow, D 16 Shavelson, R.J 151 Sheridan, S.M 9–13 Shogren, K.A 29 significance 34–35, 126, 178–181, 183–187, 189–191 single-case research design (SCRD) 151–165; A1-B1-A2-B2 designs 155–158, 156, 160, 162–163; alternating treatment designs (ALTs) 155, 160–163, 161; analysis of 153; benefits and challenges of 163–164; vs correlational studies 112–113, 115–116; in FABI literature review by Lane 31; features and functions of 151–155; multiple-baseline designs 158–160, 159, 162–163; and research proposals 200–202; research questions for 37, 38 Skinner, C.H 226 Smyth, J.D 147 Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) 48, 52, 55–56, 60–62, 64 Social Skills Improvement System (SSIS) 64 socio-economic variables 63, 174 Spearman-Brown correction 70 Spearman correlation 189 split-half reliability 70, 71 Stafford, E.T 133, 138–141, 143, 149 standard deviation (dispersion indicator) 177 standard differences 76 standardized assessments 219–220 Standards for Educational and Psychological Testing 69, 77–78 statistical analyses 169–192; categorical IVs and interval DVs 183–186; categorical variables and chi-square tests 181–183; descriptive 174–177; and hypothesis testing 178–180; interval IVs and interval DVs 187–190; overview of 169; and power analysis 180–181; and sampling 170–172; and scales of measurement 172–177, 190 statistical
conclusion validity 10–11, 169 Statistical Package for the Social Sciences (SPSS) 181 stratified sampling 146, 171–172 summative instrument evaluation 79 survey blueprints (or tables of specifications) 139–140 survey designs 132–150; and collaborative survey development 138–144; and data analysis and dissemination 148; defined 132; formats for 135–136, 140; and management approaches 138; and representativeness 137–138; and responses 136–137; and sample selection and administration approaches 144–147; types of 133–135 surveys of examinees (for validity evidence) 73–74, 77 systematic sampling 146, 171 systematic sources 69 tables of specifications (or survey blueprints) 139–140 target population, determining (for surveys) 145 technical writing skills and process 193–215; authorship 207–208; conference proposals 211–212; guidelines for 212–213; overview 193–195; peer-review process 208–211; research proposals 195–203; research reports 203–207 Test Accessibility and Modification Inventory (TAMI) 53, 54 Testing Standards see Standards for Educational and Psychological Testing test-retest stability 70–71, 77 tests 46, 47, 49; see also measures, developing; scores and inferences; specific types of tests; defined 46–47; and forms, completing 61–62; and internal validity 11; and items, developing 52–55; and items, testing 56–61; planning 50–52; and security 63; types of 47–48 theoretical approaches 221–226 timeframes and timing (for measurement tools) 51, 123–124 Tingstrom, D 156–157 topics of interest, determining and focusing 26–28, 139–142 Towne, L 151 traditional drill (TD) procedures 96–98, 100 treatment group 88–94, 95, 96, 99, 99, 102, 108–111, 108 treatment-outcome studies 25 trend surveys 134 true-score variance 68 t-tests 76, 183–186 Tutoring Buddy Program 90–91, 96 Type I errors (or alpha [α]) 179, 179–181, 187 Type II errors (or beta [β]) 179, 179, 180 Umbreit, J 31 universal design theory 53 universal screening systems 18–19; see also specific measures and 
tests University of Pennsylvania US Census (2010) 118 US Department of Education 88 US Department of Health and Human Services 227 validity of studies 10–13; see also scores and inferences; specific categories of validity measurement variance of distribution (indicator of dispersion) 176–177 violence against teachers 36 Volpe, R.J 90 von Mizener, B.H 226 Walker, H.M 35 Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) 67–69, 68, 173 Wholey, J.S 220–221 Wilcoxon signed-rank test 185 WISC-V Technical and Interpretive Manual 71–75, 80–81 withdrawal designs 156 “within subjects” designs 152 Witmer, L Wolery, M.: on authorship 207–208; on formative evaluation considerations for SCRDs 201, 202; on literature reviews, functions of 25–26; on literature reviews, steps for 26–35; on technical writing 193 Wolf, M.M 193–194 Woodcock Johnson IV Tests of Achievement (WJ IV ACH) 46, 47–48, 51, 60–64 writing skills see technical writing skills and process Wundt, W 4–5 Yeaton, W.H 28