Statistical Methods: An Introduction to Basic Statistical Concepts and Analysis, Cheryl Ann Willard, Routledge, 2020 (scan)




Document information

Statistical Methods: An Introduction to Basic Statistical Concepts and Analysis, Second Edition, is a textbook designed for students with no prior training in statistics. It provides a solid background in the core statistical concepts taught in most introductory statistics textbooks. Mathematical proofs are deemphasized in favor of careful explanations of statistical constructs. The text begins with coverage of descriptive statistics such as measures of central tendency and variability, then moves on to inferential statistics. Transitional chapters on z-scores, probability, and sampling distributions pave the way to understanding the logic of hypothesis testing and the inferential tests that follow. Hypothesis testing is taught through a four-step process. These same four steps are used throughout the text for the other statistical tests presented, including t tests, one- and two-way ANOVAs, chi-square, and correlation. A chapter on nonparametric tests is also provided as an alternative when the requirements for parametric tests cannot be met. Because the same logical framework and sequential steps are used throughout the text, a consistency is provided that allows students to gradually master the concepts. Their learning is enhanced further with the inclusion of "thought questions" and practice problems integrated throughout the chapters.

New to the second edition:
• Chapters on factorial analysis of variance and non-parametric techniques for all data
• Additional and updated chapter exercises for students to test and demonstrate their learning
• Full instructor resources: test bank questions, PowerPoint slides, and an Instructor Manual

Cheryl Ann Willard serves on the faculty of the Psychology Department at Lee College in the Houston, Texas area. She has been teaching courses in psychology and statistics for over 25 years and continues to take great joy in witnessing her students develop new skills and apply them in new and creative ways.

Statistical Methods: An Introduction to Basic Statistical Concepts and Analysis, Second Edition
Cheryl Ann Willard

Second edition published 2020 by Routledge, 52 Vanderbilt Avenue, New York, NY 10017, and by Routledge, Park Square, Milton Park, Abingdon, Oxon, OX14 4RN. Routledge is an imprint of the Taylor & Francis Group, an informa business.

© 2020 Taylor & Francis

The right of Cheryl Ann Willard to be identified as author of this work has been asserted by her in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

First edition published by Pyrczak Publishing 2010.

Library of Congress Cataloging-in-Publication Data: A catalog record for this book has been requested.

ISBN: 978-0-367-20351-1 (hbk)
ISBN: 978-0-367-20352-8 (pbk)
ISBN: 978-0-429-26103-9 (ebk)

Typeset in Minion by Apex CoVantage, LLC

Visit the eResources: www.routledge.com/9780367203528

To Jim, a constant source of loving support.

Contents
  List of Tables  ix
  Preface  x
  Acknowledgments  xi
  Introduction  xii
  1. Introduction to Statistics  1
  2. Organizing Data Using Tables and Graphs  22
  3. Measures of Central Tendency  44
  4. Measures of Variability  60
  5. z-Scores and Other Standard Scores  81
  6. Probability and the Normal Distribution  92
  7. Sampling Distribution of Means  107
  8. Hypothesis Testing  115
  9. One-Sample t Test  133
  10. Two-Sample t Test: Independent Samples Design  147
  11. Two-Sample t Test: Related Samples Design  160
  12. Confidence Interval Versus Point Estimation  172
  13. One-Way Analysis of Variance  190
  14. Factorial Analysis of Variance  209
  15. Correlation and Regression  233
  16. Chi-Square  254
  17. Nonparametric Statistics for Ordinal Data  272
  Appendix A. Glossary of Statistical Terms  287
  Appendix B. Glossary of Statistical Formulas  294
  Appendix C. Answers to Odd-Numbered End-of-Chapter Problems  300
  Tables  327
  Index  347

Tables
  Table 1. Table of the Normal Distribution Curve  327
  Table 2. Critical Values of the t-Distribution  331
  Table 3. Critical Values of F for the .05 Level  332
  Table 4. Critical Values of F for the .01 Level  336
  Table 5. Studentized Range Statistic (q) for the .05 Level  340
  Table 6. Studentized Range Statistic (q) for the .01 Level  341
  Table 7. Critical Values for the Pearson Correlation  342
  Table 8. Critical Values for Chi-Square and Kruskal-Wallis  343
  Table 9. Critical Values for the Mann-Whitney U Test  344
  Table 10. Critical Values for Wilcoxon's Signed-Ranks T Test  345
  Table 11. Critical Values for Spearman's Rank Correlation Coefficient  345

Factorial Analysis of Variance (excerpt, pp. 219-221)

MSA = SSA / dfbet(A) = 37 / 2 = 18.5
MSB = SSB / dfbet(B) = 32 / 1 = 32
MSAB = SSAB / dfbet(AB) = 7 / 2 = 3.5
MSwi = SSwi / dfwi = 12 / 12 = 1

F-Statistic

Finally, we are in a position to compute the F-statistics for this problem:

FA = MSA / MSwi = 18.5 / 1 = 18.5
FB = MSB / MSwi = 32 / 1 = 32
FAB = MSAB / MSwi = 3.5 / 1 = 3.5

We will need to compare each of our obtained F-values with the associated critical values of F in order to determine whether our null hypotheses should be rejected. To do so, we will consult the F-distribution table. We are using α = .05. dfwi will be the same for all F-values, but we must use the appropriate dfbet value associated with each F-statistic.

• For FA, dfwi = 12, dfbet(A) = 2. Thus, Fcrit = 3.88.
• For FB, dfwi = 12, dfbet(B) = 1. Thus, Fcrit = 4.75.
• For FAB, dfwi = 12, dfbet(AB) = 2. Thus, Fcrit = 3.88.
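As a quick software check (not part of the textbook's own procedure), the sketch below reproduces the mean squares, F-ratios, and critical values for this example in Python. The variable names are my own, the interaction sum of squares (7) is inferred from MSAB × dfAB, and scipy.stats.f.ppf stands in for the printed F table.

```python
# Sketch: verifying the two-way ANOVA F-ratios and critical values for the
# sleep deprivation example. SS/df values come from the worked example; the
# interaction SS (7) is inferred from MS_AB * df_AB.
from scipy import stats

alpha = 0.05
ss = {"A": 37, "B": 32, "AxB": 7, "within": 12}   # sums of squares
df = {"A": 2, "B": 1, "AxB": 2, "within": 12}     # degrees of freedom

ms_within = ss["within"] / df["within"]            # MSwi = 12 / 12 = 1

for source in ("A", "B", "AxB"):
    ms = ss[source] / df[source]                   # MS = SS / dfbet
    f_obt = ms / ms_within                         # F = MS / MSwi
    f_crit = stats.f.ppf(1 - alpha, df[source], df["within"])
    decision = "reject H0" if f_obt > f_crit else "fail to reject H0"
    print(f"{source}: MS = {ms:g}, F = {f_obt:g}, Fcrit = {f_crit:.2f} ({decision})")
```

The critical values obtained this way match the tabled 3.88 and 4.75 up to rounding.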
Interpreting Main Effects and Interactions

When interpreting the results of a two-way ANOVA, it is best to first look at the results for any interaction, because this result will determine how to proceed with interpreting the main effects. As mentioned earlier, this is facilitated by creating a graph of the cell means. In Graph 14.3, you can see that the lines tend towards being parallel, suggesting no significant interaction. If your calculations show a non-significant interaction and significant main effects, the main effects can be interpreted in the same way as with a one-way ANOVA. Each factor can be evaluated independently by means of post hoc tests. However, if there is a significant interaction, then the main effects cannot be interpreted directly, even if their obtained F-value exceeds Fcrit. This is because the interaction makes it necessary to qualify the main effects. Remember that if a significant interaction exists, this means that the effect that one independent variable has on the dependent variable depends on the level of the other independent variable. Thus, the factors cannot be interpreted independently, because they are creating a combined effect. This point will come up again with a later example where an interaction is present.

[Graph 14.3  Sleep Deprivation Study: line graph of mean number of errors by hours of sleep, with separate lines for teenagers (B1) and adults (B2).]

Table 14.4  Summary Table for Sleep Deprivation Study

Source        SS    df    MS     F      p
Factor A      37     2    18.5   18.5   < .05
Factor B      32     1    32     32     < .05
Interaction    7     2    3.5    3.5    > .05
Within        12    12    1
Total         88    17

As we did before with a one-way ANOVA, we will arrange our obtained values into a summary table, as shown in Table 14.4. In this case, the table includes values for SS, df, MS, Fobt, and p for each source of between-treatments variance (i.e., Factor A, Factor B, and the interaction between A and B). The p-values are those associated with our alpha level that will help us determine whether or not to reject the null hypothesis. Within-treatments variance, which is the variance associated with error, will only include values for SS, df, and MS.

For the current problem, our obtained FAB-value was 3.5 and Fcrit was 3.88. Since FAB does not exceed Fcrit, we can conclude that there was no significant interaction between Factors A and B, and we will fail to reject the null hypothesis associated with the interaction. Since that is the case, Factors A and B can each be interpreted separately. For any of the individual factors that have more than two levels and that show significance, researchers will usually follow up with a post hoc test similar to Tukey's HSD that you learned previously. For our purposes here, we will simply note the areas of significance. For Factor A, hours of sleep, our obtained F-value was 18.5 and Fcrit was 3.88. We can therefore reject the null hypothesis for Factor A and conclude that increasing the number of hours slept resulted in significantly fewer math errors. For Factor B, age group, our obtained F-value was 32 and Fcrit was 4.75. Thus, the null hypothesis for Factor B can also be rejected, and we can conclude that teenagers made significantly more math errors than adults.

Effect Size

In a factorial ANOVA, an effect size will be calculated for each of the significant results. The same statistic that we used for a one-way ANOVA, eta squared (η²), can be used here as well. In this case, however, three sources of between-treatments variance are possible, and so our eta squared formulas will be as follows:

η²A = SSA / SStot
η²B = SSB / SStot
η²AB = SSAB / SStot

For our problem, since only Factors A and B were significant, only the effect sizes for these variables need to be considered.

For Factor A: η²A = 37 / 88 = .42
For Factor B: η²B = 32 / 88 = .36

The effect size obtained by eta squared for Factor A suggests that 42% of the total variance in the number of errors made can be explained by the number of hours slept. Similarly, 36% of the total variance can be explained by age group (Factor B). In each case, the effect size is substantial.
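Continuing the same software sketch, eta squared is simply each effect's share of the total sum of squares; the snippet below assumes the SS values from the summary table above.

```python
# Eta squared: proportion of total SS accounted for by each significant factor.
ss_total = 88
for source, ss_effect in {"A": 37, "B": 32}.items():
    eta_sq = ss_effect / ss_total
    print(f"eta^2 for Factor {source}: {eta_sq:.2f}")   # A: 0.42, B: 0.36
```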
A drawback to using eta squared in a factorial ANOVA is that, as more variables are added, the proportion of variance accounted for by a single factor automatically decreases. This makes it difficult to interpret the contribution of one factor to the total amount of variability. This problem is sometimes addressed by computing a partial eta squared value, which involves subtracting sources of variability other than the one being considered from the total variability before calculating η².¹ However, another effect size value, called omega squared (ω²), is often recommended as a supplement to η², especially for small samples. The formulas for omega squared are:

ω²A = (SSA - dfA(MSwi)) / (SStot + MSwi)
ω²B = (SSB - dfB(MSwi)) / (SStot + MSwi)
ω²AB = (SSAB - dfAB(MSwi)) / (SStot + MSwi)

For our problem, again we need only consider the omega squared values for Factors A and B separately:

For Factor A: ω²A = (37 - (2)(1)) / (88 + 1) = 35 / 89 = .39
For Factor B: ω²B = (32 - (1)(1)) / (88 + 1) = 31 / 89 = .35
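A matching sketch for omega squared, again assuming the SS, df, and MSwi values from the summary table (names and rounding are my own); as expected, each ω² comes out slightly smaller than the corresponding η².

```python
# Omega squared: a less biased effect-size estimate, computed for the two
# significant factors using SS, df, and MSwi from the summary table.
ss_total, ms_within = 88, 1.0
effects = {"A": (37, 2), "B": (32, 1)}   # factor: (SS, df)

for source, (ss_effect, df_effect) in effects.items():
    omega_sq = (ss_effect - df_effect * ms_within) / (ss_total + ms_within)
    print(f"omega^2 for Factor {source}: {omega_sq:.2f}")   # A: 0.39, B: 0.35
```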


