Inside Information: Making Sense of Marketing Data
D.V.L. Smith & J.H. Fletcher
John Wiley & Sons, Ltd
Chichester · New York · Weinheim · Brisbane · Singapore · Toronto

Copyright © 2001 by John Wiley & Sons, Ltd, Baffins Lane, Chichester, West Sussex, PO19 1UD, England
National 01243 779777; International (+44) 1243 779777
E-mail (for orders and customer service enquiries): cs-books@wiley.co.uk
Visit our Home Page on http://www.wiley.co.uk or http://www.wiley.com

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, including uploading, downloading, printing, recording or otherwise, except as permitted under the fair dealing provisions of the Copyright, Designs and Patents Act 1988, or under the terms of a licence issued by the Copyright Licensing Agency, 90 Tottenham Court Road, London, W1P 9HE, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons, Ltd, Baffins Lane, Chichester, West Sussex, PO19 1UD, UK, or e-mailed to permreq@wiley.co.uk, or faxed to (+44) 1243 770571.

Other Wiley Editorial Offices
John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012, USA
WILEY-VCH Verlag GmbH, Pappelallee 3, D-69469 Weinheim, Germany
John Wiley & Sons Australia, Ltd, 33 Park Road, Milton, Queensland 4064, Australia
John Wiley & Sons (Canada) Ltd, 22 Worcester Road, Rexdale, Ontario, M9W 1L1, Canada
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.
This title is also available in print as ISBN 0 471 49543 3 (Cloth).
Typeset in 11/15 pt Garamond by Mayhew Typesetting, Rhayader, Powys

Contents
Foreword by Andrew McIntosh
Preface
Acknowledgements
1 Mastering Twenty-First-Century Information
  The information paradox
  Twenty-first-century information craft skills
  A new holistic way of evaluating information
  About this book
2 Acquiring Effective Information Habits
  The seven pillars of information wisdom
  Understanding the evidence jigsaw
  Developing a personal information strategy
  Robustness checks
  Getting to the storyline
  Acting on information
3 A Primer in Qualitative Evidence
  Softer evidence here to stay
  Making 'faith' decisions
  The quality of qualitative research
  Understanding the overall analysis approach adopted
  Making judgements and decisions from qualitative evidence
  The safety of qualitative evidence for decision-making: a seven-point checklist
4 Understanding Survey Data
  A recap on the key characteristics of survey-based research
  Seven key checks
5 Designing Actionable Research
  Step 1: is formal research the answer?
  Step 2: defining and refining the problem
  Step 3: start at the end: clarify the decisions to be made
  Step 4: pinpointing the information gaps
  Step 5: developing a fitness-to-purpose design
  Step 6: deciding on the research design
  Step 7: choosing an agency
  Appendix A: An overview of the market research 'toolbag'
  Appendix B: A five-step guide to writing a market research brief
6 Holistic Data Analysis
  The key principles of holistic data analysis
  The main techniques underpinning holistic data analysis
  Putting it all together: holistic analysis summarised
  Ten-step guide to holistic data analysis
7 Information-Based Decision-Making

Using Time Cartograms for the Visual Representation of Free Movement Data
Rehmat Ullah and Menno-Jan Kraak
Rehmat Ullah and Menno-Jan Kraak are with the Department of Geo-Information Processing at the Faculty of Geo-Information Science and Earth Observation of the University of Twente, the Netherlands. E-mail: {r.ullah, m.j.kraak}@utwente.nl

A great amount of multivariable temporal data is available these days. Temporal data are often related to movement. This could be movement along fixed networks, such as rail or road networks, or free movement by animals or birds. Suitable visual representations need to be designed in order to analyze and synthesize these data and to produce useful insights about the phenomena and systems represented by the data [2]. Modern computer technologies make it possible to use alternative visualization methods. A time cartogram is one such alternative visual tool, well suited for representing temporal data related to movement along paths with stops. It visualizes travelling-times by replacing geographic distance with time distance, distorting the geography accordingly. Two types of time cartograms exist: centered and non-centered. A centered time cartogram shows travelling-times from a starting location to all other destinations in the region, while a non-centered time cartogram visualizes travelling-times between all pairs of locations.

In the literature, we find some examples of time cartograms applied mainly to network-based movement (e.g., [1], [3]). However, limited research has been done on time cartograms that represent temporal data associated with free movement. Hence, there are challenges in developing new algorithms and creating time cartograms for both network-based and, especially, free movement.

In our previous work [4], a two-step method for constructing centered time cartograms for the visual representation of scheduled movement data was presented. A case of the Dutch railways was used to illustrate the method. The method involved vector calculus (to displace the train stations based on travelling-times from a starting station) and moving-least-squares based affine deformation (to deform the map's boundaries and the railroads accordingly). An example output is given in Fig. 1. Fig. 1a shows the Dutch railway network in the province of Overijssel. Travelling-times (in minutes) between stations are indicated by numbers along the railroad segments. Fig. 1b is a centered time cartogram with Enschede as the starting station. This particular cartogram shows travelling-times (indicated by the concentric circles) from the city of Enschede to other parts of Overijssel.

In this research, a two-step method to construct non-centered time cartograms for the visual representation of free movement data is proposed (see Fig. 2). The first step uses vector calculus to distort the locations based on travelling-times between them. The second step applies moving-least-squares based similarity deformation to distort the background accordingly.
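To make the second step more concrete, the following is a minimal Python/NumPy sketch of a moving-least-squares similarity warp in the spirit of Schaefer et al. (2006). It is not the authors' implementation: the helper name mls_similarity_warp, the toy control points, and the background grid are illustrative assumptions. Step 1 (displacing the locations themselves from travelling-times via vector calculus) is assumed to have already produced the displaced positions q.

```python
# Hypothetical sketch: moving-least-squares (MLS) similarity deformation in 2D.
# Control points p (original locations) are displaced to q (time-scaled locations);
# every other background point v (map boundary, grid) is warped consistently.
import numpy as np

def mls_similarity_warp(v, p, q, alpha=1.0, eps=1e-9):
    """Warp one 2D point v given control points p -> q (both (n, 2) arrays)."""
    d2 = np.sum((p - v) ** 2, axis=1)        # squared distances to controls
    if np.any(d2 < eps):                     # v coincides with a control point
        return q[np.argmin(d2)].copy()
    w = 1.0 / d2 ** alpha                    # weights 1/|p_i - v|^(2*alpha)
    p_star = w @ p / w.sum()                 # weighted centroids
    q_star = w @ q / w.sum()
    p_hat, q_hat = p - p_star, q - q_star
    mu = np.sum(w * np.sum(p_hat ** 2, axis=1))
    # Best similarity transform M = [[a, b], [-b, a]] in the weighted LSQ sense
    a = np.sum(w * np.sum(p_hat * q_hat, axis=1)) / mu
    b = np.sum(w * (p_hat[:, 0] * q_hat[:, 1] - p_hat[:, 1] * q_hat[:, 0])) / mu
    M = np.array([[a, b], [-b, a]])
    return (v - p_star) @ M + q_star

# Toy illustration (hypothetical coordinates, not the Overijssel data):
if __name__ == "__main__":
    p = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # original locations
    q = np.array([[0.0, 0.0], [15.0, 0.0], [0.0, 6.0]])    # time-scaled locations
    grid = np.array([[5.0, 5.0], [2.0, 8.0]])               # background points
    warped = np.array([mls_similarity_warp(v, p, q) for v in grid])
    print(warped)
```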
The mathematical details of the method and the results will be presented during the workshop.

REFERENCES
[1] Bies, S., & van Kreveld, M. (2013). Time-space maps from triangulations. In W. Didimo & M. Patrignani (Eds.), Graph Drawing (Vol. 7704, pp. 511-516). Springer Berlin Heidelberg.
[2] Guo, D., Chen, J., MacEachren, A. M., & Liao, K. (2006). A visualization system for space-time and multivariate patterns (VIS-STAMP). IEEE Transactions on Visualization and Computer Graphics, 12(6), 1461-1474.
[3] Shimizu, E., & Inoue, R. (2009). A new algorithm for distance cartogram construction. International Journal of Geographical Information Science, 23(11), 1453-1470.
[4] Ullah, R., & Kraak, M.-J. (2014). An alternative method to constructing time cartograms for the visual representation of scheduled movement data. Journal of Maps (accepted for publication).

Fig. 1. (a) Overijssel's railways. (b) A centered time cartogram with the city of Enschede as the starting station. The concentric circles depict the travelling-times in steps of 10 minutes from the starting station.
Fig. 2. The proposed two-step method for the construction of non-centered time cartograms. The method involves vector calculus and moving-least-squares based similarity deformation. The vector calculus is used to distort the locations based on travelling-times, and the moving-least-squares based similarity deformation is applied to distort the background accordingly.

DEVELOPMENT OF AN AUTOMATIC DATA PROCESSING SYSTEM FOR THE TRIAXIAL COMPRESSION TEST
Pham Hong Thom (1), Le Minh Son (2), Phan Tan Tung (1), Nguyen Tan Tien (1)
(1) University of Technology, VNU-HCM
(2) H.A.I Survey & Construction Company, Hochiminh City
(Manuscript Received on November 01st, 2007; Manuscript Revised March 09th, 2008)

ABSTRACT: In the triaxial compression test used in soil mechanics, three parameters need to be monitored during the test: pressure, displacement and drainage volume. The volume drained during the test flows through a vertical pipe fitted with a ruler, while pressure and displacement are indicated by gauges. In manual operation, these parameters are recorded by the examiner at certain intervals. This paper develops an automatic data processing system for recording these parameters. A camera is used to track the drainage level, from which the drained volume is determined. A digital pressure sensor and a displacement sensor are used to measure pressure and displacement. Two PIC 18F458 microcontrollers receive the signals from the sensors and connect to a PC through RS232. The software generates the test results in the required forms. Experiments have been carried out to verify the proposed solution.

1. INTRODUCTION
A typical geotechnical engineering project begins with a site investigation of the soil and bedrock on and below an area of interest to determine their engineering properties, including how they will interact with, on or in a proposed construction. Examining soil properties, especially their shear behaviour, is indispensable to an understanding of the area in or on which the construction will take place. There are two main kinds of test: the direct shear test and the triaxial test. The direct shear test is used to find the shear strength parameters of soil quickly. In the direct shear test, only the stresses at failure are known, whereas in the triaxial test, the complete state of stress is assumed to be known at all stages during the test.
Therefore, the triaxial test is the most reliable test for determining soil properties, although it is quite complex and time-consuming. There are two types of test machine: the completely automatic machine and the semi-automatic machine. The first gives accurate experimental results and is convenient for the examiner, but its cost is very high. The second has a lower cost, but it is inconvenient for the examiner to obtain the test results while the test is in progress, because a test usually takes two or three days, or even a week, to perform. At the moment, there is a demand to upgrade the second type to the first by using simple, low-cost data acquisition systems built around a personal computer. Such a data acquisition system must offer several advantages: automatic recording of the test results during the test, the required accuracy, and easy manufacture at acceptable cost. Some commercial automatic testing machines are available on the domestic market; however, they cannot be used with the existing testing machines in companies. To be convenient for users, this study proposes an automatic system which acts like a "plug-in" part with easy operating functions. The three parameters to be monitored are the drainage volume, the pressure and the displacement of the specimen. A camera sensor, a pressure load cell and a displacement transducer are used. A controller is designed to drive the camera along the vertical drainage pipe to track the water level and hence the drainage volume. All signals from the sensors are recorded and sent to the PC over the widely used RS232 interface.
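The paper itself does not include source code. As a hedged illustration of the PC side of such a "plug-in" acquisition system, the sketch below polls a serial link (such as the RS232 connection to the PIC 18F458 boards described above) and appends timestamped pressure, displacement and drainage-level readings to a CSV file. The port name, baud rate, message format and function name are assumptions made for illustration, not the authors' protocol.

```python
# Hypothetical sketch of the PC-side logger: read "pressure,displacement,level"
# lines from a serial port and append them, timestamped, to a CSV file.
# Port name, baud rate and message format are illustrative assumptions.
import csv
import time

import serial  # pyserial

def log_triaxial_readings(port="COM1", baud=9600, outfile="triaxial_log.csv",
                          interval_s=1.0):
    with serial.Serial(port, baud, timeout=2) as link, \
         open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s", "pressure_kPa", "displacement_mm", "level_mm"])
        t0 = time.time()
        while True:
            raw = link.readline().decode("ascii", errors="ignore").strip()
            if not raw:
                continue  # read timed out, no data this cycle
            try:
                pressure, displacement, level = (float(x) for x in raw.split(","))
            except ValueError:
                continue  # skip malformed frames
            writer.writerow([round(time.time() - t0, 1), pressure, displacement, level])
            f.flush()
            time.sleep(interval_s)

if __name__ == "__main__":
    log_triaxial_readings()
```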
Hindawi Publishing Corporation
EURASIP Journal on Advances in Signal Processing
Volume 2010, Article ID 874592, 12 pages
doi:10.1155/2010/874592

Research Article
Scale Mixture of Gaussian Modelling of Polarimetric SAR Data
Anthony P. Doulgeris and Torbjørn Eltoft
Department of Physics and Technology, University of Tromsø, 9037 Tromsø, Norway
Correspondence should be addressed to Anthony P. Doulgeris, anthony.p.doulgeris@uit.no
Received 1 June 2009; Accepted 28 September 2009
Academic Editor: Carlos Lopez-Martinez
Copyright © 2010 A. P. Doulgeris and T. Eltoft. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper describes a flexible non-Gaussian statistical method used to model polarimetric synthetic aperture radar (POLSAR) data. We outline the theoretical basis of the well-known product model as described by the class of scale mixture models and discuss their appropriateness for modelling radar data. The statistical distributions of several scale mixture models are then described, including the commonly used Gaussian model, and techniques for model parameter estimation are given. Real-data evaluations are made using airborne fully polarimetric SAR studies for several distinct land cover types. Generic scale mixture of Gaussian features are extracted from the model parameters and a simple clustering example is presented.

1. Introduction
It is well known that POLSAR data can be non-Gaussian in nature and that various non-Gaussian models have been used to fit SAR images—firstly with single-channel amplitude distributions [1–3] and later extended into the polarimetric realm, where the multivariate K-distributions [4, 5] and G-distributions [6] have been successful. These polarimetric models are derived as stochastic product models [7, 8] of a non-Gaussian texture term and a multivariate Gaussian-based speckle term, and they can be described by the class of models known as Scale Mixture of Gaussian (SMoG) models. The assumed distribution of the texture term gives rise to different product distributions and to the parameters used to describe them. In this paper we investigate only the semisymmetric zero-mean case, which is expected for scattering in natural terrain; the more general scale mixture model includes a skewness term, to account for a dominant or coherent scatterer, and a mean value vector. Extension to the non-symmetric case, or expansion to a multitextural/nonscalar product, will be addressed in the future. It is worth noting that these methods are general multivariate statistical techniques for covariate product model analysis and can be applied to single, dual, quad, and combined (stacked) dual-frequency SAR images, or to any type of coherent imaging system. The significance and interpretation of the parameters, however, may be different in each case.

The scale mixture models essentially describe the probability density function giving rise to the measured complex scattering coefficients. They therefore model at the scattering-vector level, that is, Single-Look Complex (SLC) data sets, which contain 4-dimensional complex values. These complex vectors represent both magnitude and phase for the four combinations of transmitted and received signals in horizontal and vertical polarisation. Statistical modelling is achieved by looking at a small neighbourhood of pixels around each point, and the model parameters are estimated from this collection of data vectors. Parameter estimation, particularly …
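As an illustration of the product model described above, the following hedged NumPy sketch draws samples from a scale mixture of Gaussians: a positive scalar texture variable (here gamma-distributed with unit mean, which yields the multivariate K-distribution) multiplies a zero-mean circular complex Gaussian speckle vector with a given covariance. The dimension, covariance matrix and parameter values are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of the scale-mixture-of-Gaussian (product) model:
# scattering vector s = sqrt(tau) * z, with tau a positive scalar texture
# variable and z a zero-mean circular complex Gaussian speckle vector.
import numpy as np

rng = np.random.default_rng(0)

def sample_smog(n, cov, shape=2.0):
    """Draw n scattering vectors; unit-mean gamma texture gives K-distributed data."""
    d = cov.shape[0]
    # circular complex Gaussian speckle with covariance 'cov'
    L = np.linalg.cholesky(cov)
    z = (rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))) / np.sqrt(2)
    z = z @ L.conj().T
    # unit-mean gamma texture; shape -> infinity recovers the pure Gaussian model
    tau = rng.gamma(shape, 1.0 / shape, size=n)
    return np.sqrt(tau)[:, None] * z

# Toy use: a 2-dimensional dual-pol example with an assumed covariance matrix
cov = np.array([[1.0, 0.3], [0.3, 0.5]], dtype=complex)
samples = sample_smog(10000, cov, shape=2.0)
print(samples.shape, np.cov(samples.T))  # sample covariance ~ cov, since E[tau] = 1
```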
Does intensive insulin therapy really reduce mortality in critically ill surgical patients? A reanalysis of meta-analytic data
Jan O Friedrich 1,2,3*, …
© 2010 BioMed Central Ltd

Abstract
Two recent systematic reviews evaluating intensive insulin therapy (IIT) in critically ill patients grouped randomized controlled trials (RCTs) by type of intensive care unit (ICU). The more recent review found that IIT reduced mortality in patients admitted to a surgical ICU, but not in those admitted to medical ICUs or mixed medical–surgical ICUs, or in all patients combined. Our objective was to determine whether IIT saves lives in critically ill surgical patients regardless of the type of ICU. Pooling mortality data from surgical and medical subgroups in mixed-ICU RCTs (16 trials) with RCTs conducted exclusively in surgical ICUs (five trials) and in medical ICUs (five trials), respectively, showed no effect of IIT in the subgroups of surgical patients (risk ratio = 0.85, 95% confidence interval (CI) = 0.69 to 1.04, P = 0.11; I² = 51%, 95% CI = 1 to 75%) or of medical patients (risk ratio = 1.02, 95% CI = 0.95 to 1.09, P = 0.61; I² = 0%, 95% CI = 0 to 41%). There was no differential effect between subgroups (interaction P = 0.10). There was statistical heterogeneity in the surgical subgroup, with some trials demonstrating significant benefit and others demonstrating significant harm, but no surgical subgroup consistently benefited from IIT. Such a reanalysis suggests that IIT does not reduce mortality in critically ill surgical patients or medical patients. Further insights may come from individual patient data meta-analyses or from future large multicenter RCTs in more narrowly defined subgroups of surgical patients.

Introduction
Two recent systematic reviews that evaluated intensive insulin therapy (IIT) in critically ill patients grouped the included randomized controlled trials (RCTs) by type of intensive care unit (ICU): surgical versus medical versus mixed medical–surgical [1,2]. Both reviews found no mortality reduction among all critically ill patients. The more recent review by Griesdale and colleagues, however, found that IIT reduced mortality in patients admitted to surgical ICUs, but not in patients admitted to medical ICUs or mixed medical–surgical ICUs [2]. Potential explanations to support the beneficial effects of IIT among critically ill surgical patients were proposed in the accompanying editorial: a greater use of central and arterial lines in surgical ICUs, which allows for more accurate monitoring and correction of blood glucose; acute hyperglycemia in surgical patients, who are more likely to benefit from correction than medical patients with chronic elevations and adaptive responses; and better achievement of target glucose levels in surgical ICU studies compared with medical ICU or mixed ICU studies [3]. In contrast to the finding of the most recent review, however, the large NICE-SUGAR RCT, enrolling over 6,000 critically ill patients, suggested increased mortality both overall and among the subgroup of surgical patients [4]. (This largest trial to date was included in the most recent review but was analyzed among the mixed medical–surgical ICU group of trials [2].) These contrasting results between the meta-analyses [1,2] and the most recent trial [4] may stem from sensitivity of the meta-analytic results to methodologic decisions. In particular, the decision to group trials by type of ICU rather than by type of patient may not be intuitive for clinicians, for whom the important question is whether IIT saves lives in critically ill surgical patients regardless of the type of ICU in which they are treated, which depends on hospital organization. The objective of the present viewpoint article was therefore to determine whether IIT has a differential effect in surgical compared with medical critically ill patients by incorporating all available …
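As a hedged illustration of the kind of calculation behind the pooled risk ratios reported in the abstract above, the sketch below implements a standard fixed-effect inverse-variance meta-analysis on the log risk-ratio scale. The trial counts are purely hypothetical placeholders, not data from the reviewed RCTs, and the function name is an assumption.

```python
# Hypothetical sketch: fixed-effect inverse-variance pooling of risk ratios
# on the log scale, the standard construction behind a pooled RR and 95% CI.
import math

def pooled_risk_ratio(trials):
    """trials: list of (events_tx, n_tx, events_ctl, n_ctl) tuples."""
    num = den = 0.0
    for e1, n1, e0, n0 in trials:
        log_rr = math.log((e1 / n1) / (e0 / n0))
        var = 1 / e1 - 1 / n1 + 1 / e0 - 1 / n0   # variance of log RR
        w = 1 / var                                # inverse-variance weight
        num += w * log_rr
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci

# Purely illustrative counts (NOT data from the trials discussed above)
example = [(30, 200, 40, 200), (25, 150, 22, 150), (60, 400, 55, 400)]
rr, (lo, hi) = pooled_risk_ratio(example)
print(f"pooled RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```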
Genome Biology 2005, 6:P4
Deposited research article
A novel scheme to assess factors involved in the reproducibility of DNA-microarray data
Sacha AFT van Hijum 1, Anne de Jong 1, Richard JS Baerends 1, Harma A Karsens 1, Naomi E Kramer 1, Rasmus Larsen 1, Chris D den Hengst 1, Casper J Albers 2, Jan Kok 1 and Oscar P Kuipers 1
Addresses: 1 Department of Molecular Genetics, 2 Groningen Bioinformatics Centre, University of Groningen, Groningen Biomolecular Sciences and Biotechnology Institute, PO Box 14, 9750 AA Haren, the Netherlands.
Correspondence: Oscar P Kuipers. E-mail: o.p.kuipers@rug.nl

AS A SERVICE TO THE RESEARCH COMMUNITY, GENOME BIOLOGY PROVIDES A 'PREPRINT' DEPOSITORY TO WHICH ANY ORIGINAL RESEARCH CAN BE SUBMITTED AND WHICH ALL INDIVIDUALS CAN ACCESS FREE OF CHARGE. ANY ARTICLE CAN BE SUBMITTED BY AUTHORS, WHO HAVE SOLE RESPONSIBILITY FOR THE ARTICLE'S CONTENT. THE ONLY SCREENING IS TO ENSURE RELEVANCE OF THE PREPRINT TO GENOME BIOLOGY'S SCOPE AND TO AVOID ABUSIVE, LIBELLOUS OR INDECENT ARTICLES. ARTICLES IN THIS SECTION OF THE JOURNAL HAVE NOT BEEN PEER-REVIEWED. EACH PREPRINT HAS A PERMANENT URL, BY WHICH IT CAN BE CITED. RESEARCH SUBMITTED TO THE PREPRINT DEPOSITORY MAY BE SIMULTANEOUSLY OR SUBSEQUENTLY SUBMITTED TO GENOME BIOLOGY OR ANY OTHER PUBLICATION FOR PEER REVIEW; THE ONLY REQUIREMENT IS AN EXPLICIT CITATION OF, AND LINK TO, THE PREPRINT IN ANY VERSION OF THE ARTICLE THAT IS EVENTUALLY PUBLISHED. IF POSSIBLE, GENOME BIOLOGY WILL PROVIDE A RECIPROCAL LINK FROM THE PREPRINT TO THE PUBLISHED ARTICLE.

Posted: 3 March 2005
Genome Biology 2005, 6:P4
The electronic version of this article is the complete one and can be found online at http://genomebiology.com/2005/6/4/P4
© 2005 BioMed Central Ltd
Received: 3 March 2005
This is the first version of this article to be made available publicly. This information has not been peer-reviewed. Responsibility for the findings rests solely with the author(s).

A novel scheme to assess factors involved in the reproducibility of DNA-microarray data
Running title: a novel scheme to assess DNA-microarray data quality
Sacha A.F.T. van Hijum 1, Anne de Jong 1, Richard J.S. Baerends 1, Harma A. Karsens 1, Naomi E. Kramer 1, Rasmus Larsen 1, Chris D. den Hengst 1, Casper J. Albers 2, Jan Kok 1 and Oscar P. Kuipers 1,*
1 Department of Molecular Genetics, 2 Groningen Bioinformatics Centre, University of Groningen, Groningen Biomolecular Sciences and Biotechnology Institute, PO Box 14, 9750 AA Haren, the Netherlands.
* Corresponding author: o.p.kuipers@rug.nl

ABSTRACT
Background
In research laboratories using DNA-microarrays, usually a number of researchers perform experiments, each generating possible sources of error. There is a need for a quick and robust method to assess data quality and sources of error in DNA-microarray experiments. To this end, a novel and cost-effective validation scheme was devised, implemented, and employed.
Results
A number of validation experiments were performed on Lactococcus lactis IL1403 amplicon-based DNA-microarrays. Using the validation scheme and ANOVA, the factors contributing to the variance in normalized DNA-microarray data were estimated. Day-to-day as well as experimenter-dependent variances were shown to contribute strongly to the overall variance, while dye and culturing had a relatively modest contribution.
Conclusions
Even in cases where 90% of the data were kept for analysis and the experiments were performed under challenging conditions (e.g. on different days), the CV was at an acceptable 25%. Clustering experiments showed that trends can be reliably detected even for (very) lowly expressed genes. The validation scheme thus makes it possible to determine which conditions could be improved to yield even higher DNA-microarray data quality.
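The abstract describes apportioning the variance in normalized microarray data to factors such as day, experimenter and dye using ANOVA. The sketch below is a hedged illustration of that general idea, not the authors' validation scheme: it builds a toy, clearly invented table of log-ratios for a balanced design and reports the share of the total sum of squares attributable to each factor. Column names, factor levels and effect sizes are assumptions made for illustration.

```python
# Hypothetical sketch: apportioning variance in normalized log-ratios to
# experimental factors via simple (balanced-design) ANOVA sums of squares.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Toy, invented data: 2 days x 2 experimenters x 2 dyes, 10 spots each.
factors = pd.MultiIndex.from_product(
    [["day1", "day2"], ["exp1", "exp2"], ["Cy3", "Cy5"], range(10)],
    names=["day", "experimenter", "dye", "spot"],
).to_frame(index=False)
# Simulated log-ratios with a deliberately large day effect (illustration only)
factors["log_ratio"] = (
    rng.normal(0, 0.10, len(factors))
    + factors["day"].map({"day1": -0.15, "day2": 0.15})
    + factors["dye"].map({"Cy3": -0.03, "Cy5": 0.03})
)

grand_mean = factors["log_ratio"].mean()
ss_total = ((factors["log_ratio"] - grand_mean) ** 2).sum()
for factor in ["day", "experimenter", "dye"]:
    groups = factors.groupby(factor)["log_ratio"]
    ss = (groups.size() * (groups.mean() - grand_mean) ** 2).sum()
    print(f"{factor}: {100 * ss / ss_total:.1f}% of total variance")
```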