Collecting, Managing, and Assessing Data Using Sample Surveys

Collecting, Managing, and Assessing Data Using Sample Surveys provides a thorough, step-by-step guide to the design and implementation of surveys. Beginning with a primer on basic statistics, the first half of the book takes readers on a comprehensive tour through the basics of survey design. Topics covered include the ethics of surveys, the design of survey procedures, the design of the survey instrument, how to write questions, and how to draw representative samples. Having shown readers how to design surveys, the second half of the book discusses a number of issues surrounding their implementation, including repetitive surveys, the economics of surveys, Web-based surveys, coding and data entry, data expansion and weighting, the issue of nonresponse, and the documenting and archiving of survey data. The book is an excellent introduction to the use of surveys for graduate students as well as a useful reference work for scholars and professionals.

Peter Stopher is Professor of Transport Planning at the Institute of Transport and Logistics Studies at the University of Sydney. He has also been a professor at Northwestern University, Cornell University, McMaster University, and Louisiana State University. Professor Stopher has developed a substantial reputation in the field of data collection, particularly for the support of travel forecasting and analysis. He pioneered the development of travel and activity diaries as a data collection mechanism, and has written extensively on issues of sample design, data expansion, nonresponse biases, and measurement issues.

Collecting, Managing, and Assessing Data Using Sample Surveys

Peter Stopher

CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Tokyo, Mexico City

Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521681872

© Peter Stopher 2012

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2012

Printed in the United Kingdom at the University Press, Cambridge

A catalogue record for this publication is available from the British Library

ISBN 978-0-521-86311-7 Hardback
ISBN 978-0-521-68187-2 Paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

To my wife, Carmen, with grateful thanks for your faith in me and your continuing support and encouragement.

Contents

List of figures
List of tables
Acknowledgements

1 Introduction
1.1 The purpose of this book
1.2 Scope of the book
1.3 Survey statistics

2 Basic statistics and probability
2.1 Some definitions in statistics
2.1.1 Censuses and surveys
2.2 Describing data
2.2.1 Types of scales
  Nominal scales
  Ordinal scales
  Interval scales
  Ratio scales
  Measurement scales
2.2.2 Data presentation: graphics
2.2.3 Data presentation: non-graphical
  Measures of magnitude
  Frequencies and proportions
  Central measures of data
  Measures of dispersion
  The normal distribution
  Some useful properties of variances and standard deviations
  Proportions or probabilities
  Data transformations
  Covariance and correlation
  Coefficient of variation
  Other measures of variability
  Alternatives to Sturges’ rule

3 Basic issues in surveys
3.1 Need for survey methods
3.1.1 A definition of sampling methodology
3.2 Surveys and censuses
3.2.1 Costs
3.2.2 Time
3.3 Representativeness
3.3.1 Randomness
3.3.2 Probability sampling
  Sources of random numbers
3.4 Errors and bias
3.4.1 Sample design and sampling error
3.4.2 Bias
3.4.3 Avoiding bias
3.5 Some important definitions

4 Ethics of surveys of human populations
4.1 Why ethics?
4.2 Codes of ethics or practice
4.3 Potential threats to confidentiality
4.3.1 Retaining detail and confidentiality
4.4 Informed consent
4.5 Conclusions

5 Designing a survey
5.1 Components of survey design
5.2 Defining the survey purpose
5.2.1 Components of survey purpose
  Data needs
  Comparability or innovation
  Defining data needs
  Data needs in human subject surveys
  Survey timing
  Geographic bounds for the survey
5.3 Trade-offs in survey design

6 Methods for conducting surveys of human populations
6.1 Overview
6.2 Face-to-face interviews
6.3 Postal surveys
6.4 Telephone surveys
6.5 Internet surveys
6.6 Compound survey methods
6.6.1 Pre-recruitment contact
6.6.2 Recruitment
  Random digit dialling
6.6.3 Survey delivery
6.6.4 Data collection
6.6.5 An example
6.7 Mixed-mode surveys
6.7.1 Increasing response and reducing bias
6.8 Observational surveys

7 Focus groups
7.1 Introduction
7.2 Definition of a focus group
7.2.1 The size and number of focus groups
7.2.2 How a focus group functions
7.2.3 Analysing the focus group discussions
7.2.4 Some disadvantages of focus groups
7.3 Using focus groups to design a survey
7.4 Using focus groups to evaluate a survey
7.5 Summary

8 Design of survey instruments
8.1 Scope of this chapter
8.2 Question type
8.2.1 Classification and behaviour questions
  Mitigating threatening questions
8.2.2 Memory or recall error
8.3 Question format
8.3.1 Open questions
8.3.2 Field-coded questions
8.3.3 Closed questions
8.4 Physical layout of the survey instrument
8.4.1 Introduction
8.4.2 Question ordering
  Opening questions
  Body of the survey
  The end of the questionnaire
8.4.3 Some general issues on question layout
  Overall format
Index

AAPOR, 436
accuracy, 66, 98, 125, 180, 228, 265, 287, 340, 356, 363, 469, 471, 489, 495
address
  listing, 257, 479
  reporting, 407
administrative
  costs, 358
  data, 249
  records, 487
affirmation, 138
  bias, 149, 348
age, 461
  nonresponse to, 461
age category, 461
analysis of variance, 285, 291, 496
animation in web surveys, 396
ANOVA. See analysis of variance
answer
  ready, 180, 360
  required, 179
answering machine(s), 121, 377, 378
appearance
  of survey materials, 377
  of the survey, 161
appropriateness of the medium, 248
archive, 499, 503, 506, 508, 509, 510
archived data, 503
archiving
arithmetic mean, 28
arrows, 108, 167
artificial neural networks, 458
attitude surveys, 492
attitudes, 199, 247, 347
  of others, 246
attitudinal questions, 157
attribute levels, 207
attrition, 341, 344, 350, 491, 492
authority principle, 153
average(s), 21, 369, 453
  rolling, 355
average imputation, 454
basic statistics and probability
benchmarking, 492
bias(es), 71, 74, 78, 123, 220, 233, 316, 379, 417, 418, 421, 430, 431, 444, 468, 512, 515, 516, 517
  avoidance of, 78
  coverage, 231
  from incentives, 442
  nonresponse, 444
  sources of, 74
bimodal, 25
blank in coding, 403
booklet, 160
  instruction, 173
box and whisker plot, 36
bracketing method, 462
branching, 157
burden, 81
call attempts, 219
  number of, 217
caller ID, 121, 479
  displays, 110
caller identification. See caller ID
CAPI, 106, 362, 381, 413
CASRO, 81, 435
categories
  layout, 184
  mixed ordered and unordered, 188
  ordered, 187
  ranges, 184
  respondent, 433
  response, 184, 200, 391, 405, 412, 415, 442, 450, 460
  unordered, 187
CATI, 109, 119, 143, 381, 413, 449
census, 7, 65, 66, 67, 73, 84, 92, 137, 292, 412, 493
  advantages of, 66
  bias, 468
  comparisons to, 186
  geography, 292, 306, 308, 407
change, 343
character recognition, 415
charitable donation(s), 237, 238, 239
checking responses, 382
choice situations
  number of, 207
choice-based sampling, 314, 333
classes
  equal, 19
  maximum number of, 18
classification of survey responses, 433
closed questions. See closed-ended questions
closed-ended questions, 140, 145, 154, 188, 413
cluster samples, 3, 420
cluster sampling, 314, 316, 321, 323, 420
clusters, 316, 323, 420
  equal size, 317, 324
  unequal size, 322, 324, 326, 327
  variance of, 321
code book, 501, 510
coding, 3, 169, 401, 412
  complex variables, 405
coding consistency, 404
coding manual, 477
coding the responses, 401
coefficient of determination, 51
coefficient of variation, 52
cold-deck imputation, 456
colour, 108, 166
  use of, 166
comments, 159, 175
comparability, 96
comparison
  head-to-head, 188
  to census, 418
complete household
  definition, 224
complete response
  definition, 225
completeness, 471, 489
  of a sampling frame, 266
complex variables
  coding, 405
computer-assisted personal interview. See CAPI
computer assisted surveys, 363
computer-assisted telephone interview. See CATI
computer-assisted telephone interviewing, 376. See CATI
conditioning, 132, 156, 348, 351
confidence limit of sampling error, 282
confidentiality, 82, 151, 395, 487, 488
  threats to, 84
conjoint analysis, 199
consent form, 86, 114
contacts
  number and type of, 211, 213
contamination, 132, 347
content information, 508
continuous data, 18
continuous scale, 11
continuous surveys, 3, 352
  advantages, 354
continuous variables, 468
control group, 97
convenience samples, 336
conversion of soft refusals, 444
correction factors, 476
correlation, 50, 341, 342, 497
cost(s), 66, 73, 356, 363
  of data archiving, 509
  of pilot surveys and pretests, 262
  of translation, 483
covariance, 50, 68, 280, 339
cover of the survey, 162
coverage, 122
coverage error, 464, 467, 477
creation of codes, 412
criterion variables, 421
cross-sectional sample, 351
cross-sectional surveys, 3, 491
cross-tabulation, 426
cross-tabulations. See pivot tables
cumulative frequencies, 18
cursor movements in web surveys, 398
data, 4, 6, 7, 8, 64
  administrative, 487
  archiving, 1, 477, 506
  central measures of, 21
  cleaning, 451
  cleaning costs, 363
  cleaning statistic(s), 464, 466
  coding, 91, 402
  collection, 2, 93, 118, 418
    prospective, 143
    retrospective, 143
  collection methods, 83
    design of, 211
  comparability, 96
  continuous, 62
  cross-checks, 382
  description of
  descriptive, 419
  discrete, 13
  documentation, 499
  electronic, 374
  encryption, 395
  entry, 3, 163, 174, 401, 413
  entry screens, 414
  expansion, 418, 419
  incomplete, 381
  inconsistency, 208
  inconsistent, 194, 209
  integrity, 507
  missing, 226, 228, 402
  needs, 93, 94, 98
    defining, 98
  quality, 4, 92, 387, 464, 503
    indicators, 218
    measures of, 464, 469
  raw, 19, 83, 417, 452, 510
  repair, 228, 363, 416, 450, 451, 466
    incorrect, 417
    statistics, 477
  secondary, 422, 492
  uses of, 64
  weighting, 418, 421, 430
database, 91, 169, 370
debrief of interviewers, supervisors, 255
decision parameter
demand characteristic, 149, 348
demographic questions, 158, 361
descriptive data, 419
descriptive information, 508
descriptive metadata
  creation of, 501
  purposes of, 500
design effect(s), 321, 339, 486, 491
design issues for web surveys, 389
design of survey instruments
devices
  cost, 358
diary
  prospective, 225
difference in the means, 339
differential incentives, 443
difficulty(ies), 243
  emotional, 245
  intellectual, 244
  physical, 243
  reducing, 246
discrete
  data, 18, 21
  scale, 11
  variables, 468
disposition codes, 438, 439
  for random-digit dialling, 438
disproportionate sampling, 295, 297, 303, 318, 419
‘Do not call’ registry, 110
documentation, 452, 499, 503
  standards, 503
documenting
don't know code, 403
donor observations, 457
double negatives, 196
dress rehearsal. See pilot survey
drop-down boxes, 399
drop-down list, 410
effect
  feel good, 247
elements of a population, 79
eligibility, 257, 433, 472
  criteria, 501
  known, 433
  rate, 218, 369, 437
  unknown, 371, 433, 435
eligible units, 371, 433
EPSEM, 70, 333
equal allocation, 294
equal probability of selection method. See EPSEM
equal probability sample, 223, 334
equal probability sampling, 70, 266, 268
error, 71, 277, 298, 334, 356
  maximum permitted sampling, 281
  normal law of, 282
  random, 73
  sampling, 73, 270
  systematic. See bias
ESOMAR, 81
ethical issues
ethics, 2, 3, 81, 131, 248, 480, 487, 488
  code of, 82
  definition of, 81
evaluate
  by focus groups, 134
evaluation, 127, 134, 460
exchange codes, 115
expansion, 3, 430
expansion factor(s), 79, 275, 296, 419
expectation maximisation, 457
  step, 457
extreme values, 23, 30, 41
face-to-face interview(s)/interviewing, 104, 105, 106, 213, 219, 220, 359, 362, 434, 443, 478, 479
face-to-face survey. See face-to-face interview
face-to-face surveys. See face-to-face interview
falsified information, 474
FAQs. See frequently asked questions
fatigue, 159, 210, 344
feelings, 199
field coding, 146
field sampling, 334, 479
field-coded questions, 145, 146
fieldwork, 263, 354
  procedures, 501
finite population correction factor, 279, 282, 321
first contact, 490
fixed costs, 357
flag(s), 404, 416, 451
  data repair, 451
focus group(s), 2, 127, 141, 176, 210, 252, 255, 347, 492
  analysis of, 131
  disadvantages of, 131
  function of, 129
  number of, 128
  selection of members, 128
  size of, 128
follow-up procedures, 213
foreign languages, 482
fractile, 35
frequencies, 17, 415
frequently asked questions, 368, 374
future directions, 478
fuzziness of locations, 86
gazetteer, 363, 409, 410
gazetteers, 413
geocoders, 410
geocoding, 363, 402, 406, 408, 410
geographic bounds, 100
geographic coding
geographic information in telephone numbers, 480
geographic information systems. See GISs
geographic referencing. See geocoding
geographical list. See gazetteer
geometric mean, 25
geospatial data, 508, 509
geospatial metadata, 503
gifts, 239
GISs, 411, 490
Global Positioning System devices. See GPS devices
globalisation, 481
GPS, 121, 123, 134, 141, 235, 396, 475, 493, 495
GPS devices, 142, 144, 475, 488, 493
graphical presentation, 11
graphics, 167, 198
  use of, 167
grid selection, 222
group interview, 128
haphazard, 69, 70
haphazard samples, 3, 335
harassment, 379
hard refusals, 444
harmonic mean, 27
high-resolution pictures, 395
histogram or bar chart, 13
historical imputation, 453
hot-deck imputation, 457, 459
identification numbers, 374
illegal aliens, 483
implementation, 365, 377
  of survey, 91
  of the survey, 354, 365
imputation, 228, 452, 466, 477, 502
  artificial neural network, 458
  average, 454
  cold-deck, 456
  evaluation of, 460
  expectation maximisation, 457
  historical, 453
  hot-deck, 249, 457
  multiple, 458
  ratio, 454
  regression, 455
incentives, 183, 235, 346, 360, 442, 445
  administration of, 238
  contingent, 239
  differential, 239
  monetary, 443
  prepaid, 236, 240
  promised, 237
income
  nonresponse to, 462
income categories, 462
incomplete responses, 381
increasing response. See reducing nonresponse
independence
  assumptions of, 495
independent measurement, 475
independent variables
  addition of, 49
  subtraction of, 49
in-depth, 128
  surveys, 128
ineligible units, 371, 433
inference, 228, 416, 452, 466, 502
informed consent, 86
initial contact, 150, 433, 434, 449
innovation, 96
instructions, 107, 112, 172, 183, 244, 399, 441, 484, 490
  coding, 501
  to interviewers, 368
  typeface, 166
intentional samples, 335
interactive capabilities of web surveys, 400
internet, 83, 104, 120, 124, 220, 385, 388, 490
  browser, 113
  penetration, 386, 387
  recruitment for, 115
  survey(s), 111, 220, 443, 490
interquartile range, 35
interval scale, 10
interview survey(s), 105, 371
  telephone, 217
interviewer(s), 3, 78, 83, 105, 255
  bias, 147
  characteristics, 502
  costs of, 357
  instructions to, 368
  monitoring, 369
  multi-lingual, 482
  selection of, 365
  training of, 368
interviewing costs, 357
interviews
  face-to-face, 219
item nonresponse, 2, 106, 111, 134, 218, 226, 403, 416, 431, 450, 460, 502
iterative proportional fitting procedure, 427
JavaScript®, 396
judgmental samples
key attributes, 281
key characteristics, 421
key data items, 228
key variables, 228, 369, 466, 502
  accuracy of, 495
kurtosis
  coefficient of, 54
language, 160, 366, 481, 483, 491, 494
  barrier, 366, 491
  level, 107
  survey, 189
  translation, 482
leaving messages on answering machines, 379
legitimate skip code, 403
leptokurtic, 55
line graph, 15
literacy, 226, 483, 491, 494
logic checks, 413
loss of interest, 344
lotteries, 239
lottery, 237
malicious software, 396
mark-sensing forms, 163, 414
maximisation step, 457
maximum likelihood estimate, 457
mean, 23, 28, 39, 274, 468
  absolute deviation of, 37, 39
  arithmetic, 23
  geometric, 25
  group, 285
  harmonic, 27
  imputation, 454
  overall, 286
  population, 40, 68, 275, 277, 291, 308, 317, 323
  quadratic, 28
  rolling, 354
  sample, 23, 40, 68, 275, 277, 291, 308, 317, 323
  standard error of, 296
  variance of, 290, 318
  weighted, 286, 287, 290, 293, 294, 296, 422
means
  difference in, 339
measurable, 70
measures of dispersion, 34
median, 24, 28, 35, 468
  of grouped data, 24
memory error, 142
metadata, 417, 477, 499
  descriptive, 500, 501
  geospatial, 503
  preservation, 500, 503, 507
  purposes of descriptive, 500
  standards, 500
missing data
  code, 416
  coding, 402
missing value statistic, 464, 465
missing values, 403, 450
mitigation of threatening questions, 139
mixed-mode surveys, 120, 123, 125, 486
mobile telephones, 480, 487, 488
mock-up of web survey, 397
mode, 24, 28
  comparison study, 464
  survey, 120, 124, 236, 449, 464, 486
model, 64
  ANN, 459
  open archival system, 508
  paired selections, 329
  simple random, 329
  statistical, 418
  stratified random, 329
  successive difference, 330
  survey costs, 358, 363
moderator, 129
moment
  appropriate, 242
monitoring costs, 357
motivation to answer, 183
MRA, 81
multi-lingual interviewers, 482
multinomial logit analysis, 418
multiple imputation, 458
multiple observations, 495
multistage samples
multistage sampling, 305, 362, 420, 490
multistage sampling units
  requirements, 308
nearest birthday method, 221
newsletter, 346
nominal scale
non-coverage, 79, 480
non-overlapping samples, 338, 342
nonparticipatory surveys, 269
nonrespondents, 422, 433
nonresponse, 4, 77, 112, 117, 344, 346, 431, 445
  bias, 446
  incentive effect on, 442
  item, 226, 431
  reasons for, 440, 446
  surveys, 445
  unit, 431
nonsalient, 148
normal distribution, 45, 74, 270
normal law of error, 269, 282
numeric value
  match to code, 405
objective repair, 417
observation, 7, 64, 226, 418, 429, 494
  donor, 457
  error, 79
  missing, 454
  multiple, 495
  valid, 23
observational survey, 79, 105, 125, 226, 269
observations
  valid, 19
ogive, 15
open-ended question(s), 99, 140, 145, 188, 412
opinion
  no, 201, 387
opinion surveys, 492
opinions, 153, 199, 246, 247
optimum allocation, 297, 300
order effects, 138
ordinal scale
outlier, 417
overall quality, 477
overlapping designs, 254
overlapping samples, 3, 345, 350
packaging information, 508
page break, 170
paired selection, 323, 327
paired selections model, 329
panel, 3, 134, 139, 342, 343, 453, 491, 495
  attrition, 344
  conditioning, 348
  disadvantages, 349
  refreshed, 350
  rotating, 351
  rotation, 491
  sample size of, 349
  split, 351
  subsample, 344, 350
  survey(s), 3, 241, 337, 344, 348, 491
  wave, 343
paper and pencil interview. See PAPI
PAPI, 106
parameter, 7, 79, 269
partial overlap, 339
participatory surveys, 269, 451
past experience, 241
perceived relevance, 242
percentile, 35
personal interview surveys, 435
pie chart, 13
pilot survey(s), 3, 90, 176, 195, 210, 234, 240, 251, 253, 282, 368, 409, 443, 460, 477
  definition, 251
  sample size of, 258
pivot tables, 415
platykurtic, 55
population, 6, 7, 65, 79, 273, 501
  census, 422
  covariance, 68
  elements, 79
  mean, 40, 68, 275, 277, 291, 308, 317, 323
  proportion, 47, 275
  representativeness, 65, 68
  sampling frame, 266
  statistics, 287, 418
  survey, 79, 306
  total, 275, 308
  totals, 422, 430
  unknown, 422
  value, 38, 50, 79
  values, 23, 273, 308, 322, 337
  variance, 68, 79, 278, 323
postal survey(s), 107, 108, 215, 216, 361, 434, 441
  materials, 360, 371, 377, 443, 485
post-stratification, 292, 293
  gains of, 293
precoding, 174
preferences, 199, 347
pre-filled response, 394
pre-notification letter, 211, 242, 374, 379
pre-recruitment contact, 112
preservation description information, 508
preservation metadata, 500, 503, 507
  standards, 500
pretest(s), 3, 90, 176, 210, 251, 253, 261, 368, 409, 460, 477
  definition, 251
  of incentives, 442
  sample sizes, 261
primacy, 138, 174, 187, 193
primacy-bound, 148
privacy, 81, 82
probability sampling, 70, 265
  weighted, 296
progress bar, 394
proportion(s), 17, 47, 274
  standard error, 279
  variance, 47
proportionate sampling, 290, 295, 318, 419
proportionate stratification effect, 329
prospective data collection, 143
proxy, 477, 489
  reporting, 3, 224, 471, 477, 488, 491, 502
    rules for, 471
    statistic, 472
  reports, 224, 471, 488
  responsible adult as, 224
publicity, 373, 442
purposes
  of a panel, 344, 346, 492
  of a pilot survey, 251, 253
  of a survey, 92, 152, 212, 368, 501
  of expansion and weighting, 418
  of stratification, 287
quadratic mean, 28
qualitative, 128, 199
  questions, 199
  research, 127
qualitative surveys, 128
quality, 66
  adherence, 477
  control, 101, 102, 369, 477
  of a survey, 476, 477, 502
  of data, 263, 354
  of response, 239
  of survey, 101, 371
  of the data, 387, 464, 469, 472, 503
    measures, 477
quartile, 35
question(s)
  agree/disagree, 205
  attitude, 137, 180
  attitudinal, 157, 201, 203, 386
  behaviour, 137, 138
  belief, 180
  biased, 74
  branch, 167
  branching, 157
  categorical, 186
  classification, 137, 159
  closed-ended, 140, 145, 147, 154, 174, 188, 413
  demographic, 158, 361
  design
  double-barrelled, 196
  field-coded, 145, 146
  first, 153
  focus group, 129
  follow-up, 473
  format, 122, 145, 398
  frequently-asked, 368, 374
  initial, 398
  introductory, 153
  layout, 159, 201
  long, 190, 244
  numbering, 169, 398
  open. See question: open-ended
  open-ended, 99, 145, 188, 412
  opening, 153
  opinion, 137
  ordering, 150, 153
  qualitative, 199
  ready answer to, 180
  refinement, 133
  repeated, 171
  repetitive, 245
  requiring answer, 178
  revealing, 182
  scaling, 200
  screening, 153
  sensitive, 186, 227, 381, 431
  splitting, 170
  stated response, 206
  threatening, 138, 157, 226, 408, 450
  type, 137
  vague, 192
  vagueness, 180
  wording, 2, 110, 130, 182, 254, 390, 450
  writing, 178, 188
questionnaire, 104
  design, 159
  end of, 159
  format, 160
  layout, 159, 163
  self-administered, 117
  self-report, 248
quota samples
quota sampling, 334
random-digit dialling, 115, 229, 361, 362, 467, 479, 481, 490
random error, 73
random numbers
  sources of, 71
random sampling, 69, 78, 268, 269, 270
randomness, 69, 71
  test for, 71
range, 34, 148
  interquartile, 35
ratio, 274
  imputation, 454
  scales, 10
rationalisation bias, 149
raw data, 83, 415, 466
RDD. See random-digit dialling
recall, 144, 182
recency, 138, 142, 148, 174, 188, 193, 202
recency-bound, 148
reciprocity, 152, 236, 447
record keeping, 370
recruitment, 113, 216, 433, 490
  response rate, 438
reference points, 207
reference values, 468
refreshed panel, 350
refusal(s), 422, 444
  conversion, 368
  hard, 230
  rate, 369
  soft, 230
refuse, 83
  code, 403
regression imputation, 455
relational data base, 509
relationship
  between variance and covariance, 50
  linear, 51
reliability, 180
reminders, 214, 371, 443, 446
  e-mail, 220
repair of data, 228, 363, 416, 450, 451, 466
repeated occasions, 337
repetitive questions, 245
repetitive surveys
replacement
  of panel members, 345
  sample, 229
representative sample, 265, 388
representativeness, 65, 68, 363, 485, 490
  of focus groups, 128
  of systematic samples, 331
request(s)
  for a call back, 377, 380
respondent bias, 146
respondent burden, 98, 144, 240, 244, 249, 381, 392, 445, 486, 491, 494
respondents, 433
response(s)
  categories, 184, 390
  codes, 174
  habitual, 203
  lexicographic, 208, 209
  random, 208, 209
  rate(s), 4, 110, 120, 123, 213, 369, 371, 373, 379, 385, 432, 435, 464, 477, 485, 486, 490, 491, 502
    how to calculate, 432
retrospective data collection, 143
role play, 368
rolling pilot survey, 254
rolling samples, 352
root mean square, 28
  error, 469
rotating panel, 351
safety, 83
salience, 142, 163, 181
salient, 148
sample, 7, 64, 65, 68
  bias, 464, 468
  continuous, 354
  convenience, 336
  cost, 65
  costs, 358
  covariance, 68
  design, 3, 73, 265, 501
    non-adherence, 421
  disposition, 502
  distribution, 79
  expansion, 419
  expert, 335
  frame, 479
  haphazard, 335
  intentional, 75, 335
  judgmental, 75, 335
  mean, 23, 40, 68, 275, 291
  non-compliance, 430
  non-overlapping, 338
  overlapping, 350
  pilot survey, 253
  pretest, 261
  probability, 70, 71
  proportion, 275
  purposeful, 69
  quota, 334
  replacement, 229, 233, 491
  representativeness, 65
  rolling, 352
  selection procedures, 502
  self-weighted, 291
  size(s), 3, 73, 79, 102, 228, 281, 476
    definition of, 281
    for pretests and pilot surveys, 260
  statistics, 273
  survey
    uses of, 65
  systematic, 328
  value, 79
  variance, 68, 497
  weighting, 421
sampling, 3, 257, 265, 269
  bias, 79, 387
  cost of, 314
  EPSEM, 70
  error(s), 3, 73, 79, 270, 277, 295, 450, 502
  fraction, 419
  frame, 79, 266, 268, 271, 362, 501
  methodology, 65, 90
  methods, 5, 270
    quasi-random, 314
  procedure, 314
  process
    testing, 257
  rate, 274, 419
  units, 80, 221
  with replacement, 268
  without replacement, 268
scale
scan, 414
scarcity, 152
scatter plot, 11, 51
screening questions, 153
selection
  sample, for pilot surveys and pretests, 255
selective memory, 144
self-administered, 111
self-administered questionnaire, 117
self-administered surveys, 392, 422
self-report, 225, 475, 486, 488, 493
self-weighted sample, 291
sensitive questions, 381, 431
show cards, 174
silent numbers, 110
silent telephone numbers, 481
simple random model, 329
simple random sample(s), 3, 362
simple random sampling, 70, 271, 419
situation
  hypothetical, 206
skewness
  coefficient of, 53
skirmish, 252
social desirability bias, 149, 348
soft refusal(s), 380, 444
split design, 249
split panel, 351
SRS. See simple random sampling
standard deviation, 41, 277
  for probability, 47
  of a proportion, 47
  of a proportion, maximum, 48
  properties of, 46
standard error, 277, 337
  maximum, 282
  of difference in means, 339
  of differences, 343
  of non-random samples, 334
  of panel, 343
  of proportion, 279
  of ratios, 279
  of the population total, 278
standards
  for metadata, 500