
Marcel Dekker, Inc. New York • Basel

Handbook of

Industrial Automation

edited by

Richard L. Shell
Ernest L. Hall

University of Cincinnati, Cincinnati, Ohio


This book is printed on acid-free paper.

Headquarters

Marcel Dekker, Inc

270 Madison Avenue, New York, NY 10016

Copyright © 2000 by Marcel Dekker, Inc. All Rights Reserved.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage and retrieval system, without permission in writing from the publisher.

Current printing (last digit):

10 9 8 7 6 5 4 3 2 1

PRINTED IN THE UNITED STATES OF AMERICA


This handbook is designed as a comprehensive reference for the industrial automation engineer. Whether in a small or large manufacturing plant, the industrial or manufacturing engineer is usually responsible for using the latest and best technology in the safest, most economic manner to build products. This responsibility requires an enormous knowledge base that, because of changing technology, can never be considered complete. The handbook will provide a handy starting reference covering technical, economic, certain legal standards, and guidelines that should be the first source for solutions to many problems. The book will also be useful to students in the field as it provides a single source for information on industrial automation.

The handbook is also designed to present a related and connected survey of engineering methods useful in a variety of industrial and factory automation applications. Each chapter is arranged to permit review of an entire subject, with illustrations to provide guideposts for the more complex topics. Numerous references are provided to other material for more detailed study.

The mathematical definitions, concepts, equations, principles, and application notes for the practicing industrial automation engineer have been carefully selected to provide broad coverage. Selected subjects from both undergraduate- and graduate-level topics from industrial, electrical, computer, and mechanical engineering as well as material science are included to provide continuity and depth on a variety of topics found useful in our work in teaching thousands of engineers who work in the factory environment. The topics are presented in a tutorial style, without detailed proofs, in order to incorporate a large number of topics in a single volume.

The handbook is organized into ten parts. Each part contains several chapters on important selected topics. Part 1 is devoted to the foundations of mathematical and numerical analysis. The rational thought process developed in the study of mathematics is vital in developing the ability to satisfy every concern in a manufacturing process. Chapters include: an introduction to probability theory, sets and relations, linear algebra, calculus, differential equations, Boolean algebra, and algebraic structures and applications. Part 2 provides background information on measurements and control engineering. Unless we measure we cannot control any process. The chapter topics include: an introduction to measurements and control instrumentation, digital motion control, and in-process measurement.

Part 3 provides background on automatic control. Using feedback control, in which a desired output is compared to a measured output, is essential in automated manufacturing. Chapter topics include distributed control systems, stability, digital signal processing, and sampled-data systems. Part 4 introduces modeling and operations research. Given a criterion or goal such as maximizing profit, using an overall model to determine the optimal solution subject to a variety of constraints is the essence of operations research. If an optimal goal cannot be obtained, then continually improving the process is necessary. Chapter topics include: regression, simulation and analysis of manufacturing systems, Petri nets, and decision analysis.

Part 5 deals with sensor systems. Sensors are used to provide the basic measurements necessary to control a manufacturing operation. Human senses are often used, but modern systems include important physical sensors. Chapter topics include: sensors for touch, force, and torque, fundamentals of machine vision, low-cost machine vision, and three-dimensional vision. Part 6 introduces the topic of manufacturing. Advanced manufacturing processes are continually improved in a search for faster and cheaper ways to produce parts. Chapter topics include: the future of manufacturing, manufacturing systems, intelligent manufacturing systems in industrial automation, measurements, intelligent industrial robots, industrial materials science, forming and shaping processes, and molding processes. Part 7 deals with material handling and storage systems. Material handling is often considered a necessary evil in manufacturing, but an efficient material handling system may also be the key to success. Topics include an introduction to material handling and storage systems, automated storage and retrieval systems, containerization, and robotic palletizing of fixed- and variable-size parcels.

Part 8 deals with safety and risk assessment. Safety is vitally important, and government programs monitor the manufacturing process to ensure the safety of the public. Chapter topics include: investigative programs, government regulation and OSHA, and standards. Part 9 introduces ergonomics. Even with advanced automation, humans are a vital part of the manufacturing process. Reducing risks to their safety and health is especially important. Topics include: human interface with automation, workstation design, and physical-strength assessment in ergonomics. Part 10 deals with economic analysis. Returns on investment are a driver to manufacturing systems. Chapter topics include: engineering economy and manufacturing cost recovery and estimating systems.

We believe that this handbook will give the reader an opportunity to quickly and thoroughly scan the field of industrial automation in sufficient depth to provide both specialized knowledge and a broad background of specific information required for industrial automation. Great care was taken to ensure the completeness and topical importance of each chapter.

We are grateful to the many authors, reviewers, readers, and support staff who helped to improve the manuscript. We earnestly solicit comments and suggestions for future improvements.

Richard L. Shell
Ernest L. Hall


Preface iii

Contributors ix

Part 1 Mathematics and Numerical Analysis

1.1 Some Probability Concepts for Engineers 1

Enrique Castillo and Ali S Hadi

1.2 Introduction to Sets and Relations

Part 2 Measurements and Computer Control

2.1 Measurement and Control Instrumentation Error-Modeled Performance

Patrick H. Garrett

2.2 Fundamentals of Digital Motion Control

Ernest L Hall, Krishnamohan Kola, and Ming Cao



2.3 In-Process Measurement

William E Barkman

Part 3 Automatic Control

3.1 Distributed Control Systems

Dobrivoje Popovic

3.2 Stability

Allen R Stubberud and Stephen C Stubberud

3.3 Digital Signal Processing

Richard Brook and Denny Meyer

4.2 A Brief Introduction to Linear and Dynamic Programming

Part 5 Sensor Systems

5.1 Sensors: Touch, Force, and Torque

Richard M Crowder

5.2 Machine Vision Fundamentals

Prasanthi Guda, Jin Cao, Jeannine Gailey, and Ernest L Hall


6.2 Manufacturing Systems

Jon Marvel and Ken Bloemer

6.3 Intelligent Manufacturing in Industrial Automation

George N Saridis

6.4 Measurements

John Mandel

6.5 Intelligent Industrial Robots

Wanek Golnazarian and Ernest L Hall

6.6 Industrial Materials Science and Engineering

Part 7 Material Handling and Storage

7.1 Material Handling and Storage Systems

William Wrennall and Herbert R Tuttle

7.2 Automated Storage and Retrieval Systems

Stephen L Parsley

7.3 Containerization

A Kader Mazouz and C P Han

7.4 Robotic Palletizing of Fixed- and Variable-Size/Content Parcels

Hyder Nihal Agha, William H DeCamp, Richard L Shell, and Ernest L Hall

Part 8 Safety, Risk Assessment, and Standards

9.1 Perspectives on Designing Human Interfaces for Automated Systems

Anil Mital and Arunkumar Pennathur

9.2 Workstation Design

Christin Shoaf and Ashraf M Genaidy


9.3 Physical Strength Assessment in Ergonomics

Sean Gallagher, J Steven Moore, Terrence J Stobbe, James D McGlothlin, and Amit Bhattacharya

Part 10 Economic Analysis

10.1 Engineering Economy

Thomas R Huston

10.2 Manufacturing-Cost Recovery and Estimating Systems

Eric M Malstrom and Terry R Collins

Index 863


Hyder Nihal Agha Research and Development, Motoman, Inc., West Carrollton, Ohio

C Ray Asfahl University of Arkansas, Fayetteville, Arkansas

William E Barkman Fabrication Systems Development, Lockheed Martin Energy Systems, Inc., Oak Ridge,Tennessee

Benita M. Beamon Department of Industrial Engineering, University of Washington, Seattle, Washington

Ludwig Benner, Jr. Events Analysis, Inc., Alexandria, Virginia

Amit Bhattacharya Environmental Health Department, University of Cincinnati, Cincinnati, Ohio

Ken Bloemer Ethicon Endo-Surgery Inc., Cincinnati, Ohio

Richard Brook Off Campus Ltd., Palmerston North, New Zealand

William C Brown Department of Mathematics, Michigan State University, East Lansing, Michigan

Jin Cao Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati, Cincinnati,Ohio

Ming Cao Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati, Cincinnati,Ohio

Enrique Castillo Applied Mathematics and Computational Sciences, University of Cantabria, Santander, Spain

Frank S. Cheng Industrial and Engineering Technology Department, Central Michigan University, Mount Pleasant, Michigan

Ron Collier Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati, Cincinnati,Ohio

Terry R Collins Department of Industrial Engineering, University of Arkansas, Fayetteville, Arkansas

Jane Cronin Department of Mathematics, Rutgers University, New Brunswick, New Jersey

Richard M Crowder Department of Electronics and Computer Science, University of Southampton,Southampton, England

Richard B Darst Department of Mathematics, Colorado State University, Fort Collins, Colorado



William H DeCamp Motoman, Inc., West Carrollton, Ohio

Steve Dickerson Department of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia

Verna Fitzsimmons Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati, Cincinnati, Ohio

Jeannine Gailey Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati,Cincinnati, Ohio

Sean Gallagher Pittsburgh Research Laboratory, National Institute for Occupational Safety and Health,Pittsburgh, Pennsylvania

Patrick H Garrett Department of Electrical and Computer Engineering and Computer Science, University ofCincinnati, Cincinnati, Ohio

Ashraf M Genaidy Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati,Cincinnati, Ohio

Wanek Golnazarian General Dynamics Armament Systems, Burlington, Vermont

Prasanthi Guda Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati,Cincinnati, Ohio

Ali S Hadi Department of Statistical Sciences, Cornell University, Ithaca, New York

Ernest L Hall Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati,Cincinnati, Ohio

C P Han Department of Mechanical Engineering, Florida Atlantic University, Boca Raton, Florida

Thomas R Huston Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati,Cincinnati, Ohio

Avraam I Isayev Department of Polymer Engineering, The University of Akron, Akron, Ohio

Ki Hang Kim Mathematics Research Group, Alabama State University, Montgomery, Alabama

Krishnamohan Kola Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati,Cincinnati, Ohio

Eric M. Malstrom† Department of Industrial Engineering, University of Arkansas, Fayetteville, Arkansas

John Mandel National Institute of Standards and Technology, Gaithersburg, Maryland

Jon Marvel Padnos School of Engineering, Grand Valley State University, Grand Rapids, Michigan

A. Kader Mazouz Department of Mechanical Engineering, Florida Atlantic University, Boca Raton, Florida

James D. McGlothlin Purdue University, West Lafayette, Indiana

M Eugene Merchant Institute of Advanced Manufacturing Sciences, Cincinnati, Ohio

Denny Meyer Institute of Information and Mathematical Sciences, Massey University–Albany, Palmerston North, New Zealand

Angelo B. Mingarelli School of Mathematics and Statistics, Carleton University, Ottawa, Ontario, Canada

Anil Mital Department of Industrial Engineering, University of Cincinnati, Cincinnati, Ohio

J. Steven Moore Department of Occupational and Environmental Medicine, The University of Texas Health Center, Tyler, Texas

*Retired

†Deceased


Diego A Murio Department of Mathematical Sciences, University of Cincinnati, Cincinnati, Ohio

Lawrence E Murr Department of Metallurgical and Materials Engineering, The University of Texas at El Paso, ElPaso, Texas

Joseph H. Nurre School of Electrical Engineering and Computer Science, Ohio University, Athens, Ohio

Stephen L. Parsley ESKAY Corporation, Salt Lake City, Utah

Arunkumar Pennathur University of Texas at El Paso, El Paso, Texas

Dobrivoje Popovic Institute of Automation Technology, University of Bremen, Bremen, Germany

Shivakumar Raman Department of Industrial Engineering, University of Oklahoma, Norman, Oklahoma

George N. Saridis Professor Emeritus, Electrical, Computer, and Systems Engineering Department, Rensselaer Polytechnic Institute, Troy, New York

Richard L Shell Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati,Cincinnati, Ohio

Christin Shoaf Department of Mechanical, Industrial, and Nuclear Engineering, University of Cincinnati,Cincinnati, Ohio

J B Srivastava Department of Mathematics, Indian Institute of Technology, Delhi, New Delhi, India

Terrence J. Stobbe Industrial Engineering Department, West Virginia University, Morgantown, West Virginia

Allen R. Stubberud Department of Electrical and Computer Engineering, University of California Irvine, Irvine, California

Stephen C Stubberud ORINCON Corporation, San Diego, California

Hiroyuki Tamura Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, Japan

Fred J. Taylor Department of Electrical and Computer Engineering and Department of Computer and Information Science Engineering, University of Florida, Gainesville, Florida

Herbert R Tuttle Graduate Engineering Management, University of Kansas, Lawrence, Kansas

William Wrennall The Leawood Group Ltd., Leawood, Kansas


Many engineering applications involve some element of uncertainty [1]. Probability is one of the most commonly used ways to measure and deal with uncertainty. In this chapter we present some of the most important probability concepts used in engineering applications.

The chapter is organized as follows. Section 1.2 first introduces some elementary concepts, such as random experiments, types of events, and sample spaces. Then it introduces the axioms of probability and some of the most important properties derived from them, as well as the concepts of conditional probability and independence. It also includes the product rule, the total probability theorem, and Bayes' theorem.

Section 1.3 deals with unidimensional random variables and introduces three types of variables (discrete, continuous, and mixed) and the corresponding probability mass, density, and distribution functions.

Sections 1.4 and 1.5 describe the most commonly used univariate discrete and continuous models, respectively.

Section 1.6 extends the above concepts of univariate models to the case of bivariate and multivariate models. Special attention is given to joint, marginal, and conditional probability distributions.

Section 1.7 discusses some characteristics of random variables, such as the moment-generating function and the characteristic function.

Section 1.8 treats the techniques of variable transformations, that is, how to obtain the probability distribution function of a set of transformed variables when the probability distribution function of the initial set of variables is known. Section 1.9 uses the transformation techniques of Sec. 1.8 to simulate univariate and multivariate data.

Section 1.10 is devoted to order statistics, giving methods for obtaining the joint distribution of any subset of order statistics. It also deals with the problem of limit or asymptotic distributions of maxima and minima.

Finally, Sec. 1.11 introduces probability plots and how to build and use them in making inferences from data.

1.2 BASIC PROBABILITY CONCEPTS

In this section we introduce some basic probability concepts and definitions. These are easily understood from examples. Classic examples include whether a machine will malfunction at least once during the first month of operation, whether a given structure will last for the next 20 years, or whether a flood will occur during the next year. Other examples include how many cars will cross a given intersection during a given rush hour, how long we will have to wait for a certain event to occur, how much stress a given structure can withstand, etc. We start our exposition with some definitions in the following subsection.

1.2.1 Random Experiment and Sample Space

Each of the above examples can be described as a random experiment because we cannot predict in advance the outcome at the end of the experiment. This leads to the following definition:

Definition 1. Random Experiment and Sample Space: Any activity that will result in one and only one of several well-defined outcomes, but does not allow us to tell in advance which one will occur, is called a random experiment. Each of these possible outcomes is called an elementary event. The set of all possible elementary events of a given random experiment is called the sample space, which we denote by Ω.

Therefore, for each random experiment there is an associated sample space. The following are examples of random experiments and their associated sample spaces:

Rolling a six-sided fair die once yields Ω = {1, 2, 3, 4, 5, 6}.

Waiting for a machine to malfunction yields Ω = {t : t ≥ 0}.

How many cars will cross a given intersection yields Ω = {0, 1, 2, ...}.

Definition 2. Union and Intersection: If C is a set containing all elementary events found in A or in B or in both, then we write C = (A ∪ B) to denote the union of A and B, whereas if C is a set containing all elementary events found in both A and B, then we write C = (A ∩ B) to denote the intersection of A and B.

Referring to the six-sided die, for example, if A = {1, 3, 5}, B = {2, 4, 6}, and C = {1, 2, 3}, then (A ∪ B) = {1, 2, 3, 4, 5, 6} = Ω, (A ∩ C) = {1, 3}, and (A ∩ B) = ∅, where ∅ denotes the empty set.

Random events in a sample space associated with a random experiment can be classified into several types:

1. Elementary vs. composite events. An event which contains more than one elementary event is called a composite event. Thus, for example, observing an odd number when rolling a six-sided die once is a composite event because it consists of three elementary events.

2. Compatible vs. mutually exclusive events. Two events A and B are said to be compatible if they can occur simultaneously; otherwise they are said to be mutually exclusive or incompatible events. For example, referring to rolling a six-sided die once, the events A = {1, 3, 5} and B = {2, 4, 6} are incompatible because if one event occurs, the other does not, whereas the events A and C = {1, 2, 3} are compatible because if we observe 1 or 3, then both A and C occur.

3. Collectively exhaustive events. If the union of several events is the sample space, then the events are said to be collectively exhaustive. For example, A = {1, 3, 5} and B = {2, 4, 6} are collectively exhaustive events, but A = {1, 3, 5} and C = {1, 2, 3} are not.

4. Complementary events. If two events A and B are mutually exclusive and collectively exhaustive, then A and B are said to be complementary events, or B is the complement of A (or vice versa). The complement of A is usually denoted by Ā. For example, in the six-sided die example, if A = {1, 2}, then Ā = {3, 4, 5, 6}. Note that an event and its complement are always defined with respect to the same sample space Ω; A and Ā are always mutually exclusive and collectively exhaustive events, hence (A ∩ Ā) = ∅ and (A ∪ Ā) = Ω.

1.2.2 Probability Measure

To measure uncertainty we start with a given sample space Ω, whose elements are the mutually exclusive and collectively exhaustive outcomes of a given experiment, and consider a class of subsets of Ω that is closed under the union, intersection, complementation, and limit operations. Such a class is called a σ-algebra. Then the aim is to assign to every subset in Ω a real value measuring the degree of uncertainty about its occurrence. In order to obtain measures with clear physical and practical meanings, some general and intuitive properties are used to define a class of measures known as probability measures.

Definition 3. Probability Measure: A function p mapping any subset A ⊆ Ω into the interval [0, 1] is called a probability measure if it satisfies the following axioms:

Axiom 1. Normalization: p(Ω) = 1.

Axiom 2. Additivity: For any (possibly infinite) sequence A1, A2, ... of disjoint subsets of Ω,

p(∪i Ai) = Σi p(Ai)

Axiom 1 states that, despite our degree of uncertainty, the sample space Ω (the sure event) has probability one. Axiom 2 is an aggregation formula that can be used to compute the probability of a union of disjoint subsets. It states that the uncertainty of a given subset is the sum of the uncertainties of its disjoint parts.

From the above axioms, many interesting properties of the probability measure can be derived. For example:

Property 1. Boundary: p(∅) = 0.

Property 2. Monotonicity: If A ⊆ B ⊆ Ω, then p(A) ≤ p(B).

Property 3. Continuity–Consistency: For every increasing sequence A1 ⊆ A2 ⊆ ... or decreasing sequence A1 ⊇ A2 ⊇ ... of subsets of Ω, we have

lim_{i→∞} p(Ai) = p(lim_{i→∞} Ai)

Property 4. Inclusion–Exclusion: Given any pair of subsets A and B of Ω, the following equality always holds:

p(A ∪ B) = p(A) + p(B) − p(A ∩ B)   (1)

Property 1 states that the evidence associated with a complete lack of information is defined to be zero. Property 2 shows that the evidence of the membership of an element in a set must be at least as great as the evidence that the element belongs to any of its subsets. In other words, the certainty of an element belonging to a given set A must not decrease with the addition of elements to A.

Property 3 can be viewed as a consistency or continuity property. If we choose two sequences converging to the same subset of Ω, we must get the same limit of uncertainty. Property 4 states that the probabilities of the sets A, B, A ∩ B, and A ∪ B are not independent; they are related by Eq. (1).

Note that these properties respond to the intuitive notion of probability that makes the mathematical model valid for dealing with uncertainty. Thus, for example, the fact that probabilities cannot be larger than one is not an axiom but a consequence of Axioms 1 and 2.

Definition 4. Conditional Probability: Let A and B be two events such that p(B) > 0. Then the conditional probability distribution (CPD) of A given B is given by

p(A | B) = p(A ∩ B) / p(B)   (2)

Definition 5. Independence of Two Events: Let A and B be two events. Then A is said to be independent of B if and only if

p(A | B) = p(A)   (5)

otherwise A is said to be dependent on B.

Equation (5) means that if A is independent of B, then our knowledge of B does not affect our knowledge about A; that is, B has no information about A. Also, if A is independent of B, we can combine Eqs. (2) and (5) and obtain

p(A ∩ B) = p(A) p(B)   (6)

Equation (6) indicates that if A is independent of B, then the probability of A ∩ B is equal to the product of their probabilities. Actually, Eq. (6) provides a definition of independence equivalent to that in Eq. (5). One important property of the independence relation is its symmetry: if A is independent of B, then B is independent of A. This is because

p(B | A) = p(A ∩ B)/p(A) = p(A) p(B)/p(A) = p(B)

Because of the symmetry property, we say that A and B are independent or mutually independent. The practical implication of symmetry is that if knowledge of B is relevant (irrelevant) to A, then knowledge of A is relevant (irrelevant) to B.

The concepts of dependence and independence of two events can be extended to the case of more than two events as follows:


Definition 6. Independence of a Set of Events: The events A1, ..., Am are said to be independent if and only if

p(A1 ∩ ... ∩ Am) = ∏_{i=1}^{m} p(Ai)   (7)

otherwise they are said to be dependent.

In other words, {A1, ..., Am} are said to be independent if and only if their intersection probability is equal to the product of their individual probabilities. Note that Eq. (7) is a generalization of Eq. (6).

An important implication of independence is that it is not worthwhile gathering information about independent (irrelevant) events. That is, independence means irrelevance.

From Eq. (3) we get

p(A1 ∩ A2) = p(A1 | A2) p(A2) = p(A2 | A1) p(A1)

This property can be generalized, leading to the so-called product or chain rule:

p(A1 ∩ ... ∩ An) = p(A1) p(A2 | A1) ··· p(An | A1 ∩ ... ∩ A_{n−1})

1.2.4 Total Probability Theorem

Theorem 1. Total Probability Theorem: Let {A1, ..., An} be a class of events which are mutually incompatible and such that A1 ∪ ... ∪ An = Ω. Then, for any event B,

p(B) = Σ_{i=1}^{n} p(B | Ai) p(Ai)

Theorem 2. Bayes' Theorem: Let {A1, ..., An} be a class of events which are mutually incompatible and collectively exhaustive. Then, for any event B with p(B) > 0,

p(Ai | B) = p(B | Ai) p(Ai) / Σ_{j=1}^{n} p(B | Aj) p(Aj)
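As a small numerical sketch of the two theorems (Python, with made-up machine-fault numbers; the handbook itself contains no code), suppose a defective part may come from one of three machines with assumed production shares and defect rates:

    # Hypothetical example: machines A1, A2, A3 produce 50%, 30%, 20% of all
    # parts, with defect probabilities 0.01, 0.02, 0.03.  B is "part is defective".
    p_A = [0.50, 0.30, 0.20]          # p(A_i): mutually incompatible, sum to 1
    p_B_given_A = [0.01, 0.02, 0.03]  # p(B | A_i)

    # Total probability theorem: p(B) = sum_i p(B | A_i) p(A_i)
    p_B = sum(pb * pa for pb, pa in zip(p_B_given_A, p_A))

    # Bayes' theorem: p(A_i | B) = p(B | A_i) p(A_i) / p(B)
    posterior = [pb * pa / p_B for pb, pa in zip(p_B_given_A, p_A)]

    print(p_B)        # 0.017
    print(posterior)  # [0.294..., 0.352..., 0.352...], sums to 1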

1.3 UNIDIMENSIONAL RANDOM VARIABLES

In this section we define random variables, distinguish among three of their types, and present various ways of representing their probability distributions.

Definition 7. Random Variable: A (possibly vector-valued) function X from the sample space Ω into R^n, which assigns to each elementary event ω ∈ Ω a value X(ω) = x, is called an n-dimensional random variable. The set of values that can be taken by a random variable X is also known as the support of X. When n = 1 in Definition 7, the random variable is said to be unidimensional, and when n > 1 it is said to be multidimensional. In this section and in Secs. 1.4 and 1.5 we deal with unidimensional random variables. Multidimensional random variables are treated in Sec. 1.6.

Example 1. Suppose we roll two dice once. Let A be the outcome of the first die and B be the outcome of the second. The sample space consists of 36 possible pairs (A, B), as shown in Fig. 2. Suppose we define a random variable X = A + B; that is, X is the sum of the two numbers observed when we roll two dice once. Then X is a unidimensional random variable. The support of this random variable is the set {2, 3, ..., 12}, consisting of 11 elements. This is also shown in Fig. 2.

1.3.1 Types of Random Variables

Random variables can be classified into three types: discrete, continuous, and mixed. We define and give examples of each type below.

Figure 1 Graphical illustration of the total probability rule


Definition 8. Discrete Random Variables: A random variable is said to be discrete if it can take a finite or countable set of real values.

As an example of a discrete random variable, let X denote the outcome of rolling a six-sided die once. Since the support of this random variable is the finite set {1, 2, 3, 4, 5, 6}, X is a discrete random variable. The random variable X = A + B in Fig. 2 is another example of a discrete random variable.

Definition 9. Continuous Random Variables: A random variable is said to be continuous if it can take an uncountable set of real values.

For example, let X denote the weight of an object; then X is a continuous random variable because it can take values in the set {x : x > 0}, which is an uncountable set.

Definition 10. Mixed Random Variables: A random variable is said to be mixed if it can take an uncountable set of values and the probability of at least one value x is positive.

Mixed random variables are encountered often in engineering applications which involve some type of censoring. Consider, for example, a life-testing situation where n machines are put to work for a given period of time, say 30 days. Let Xi denote the time at which the ith machine malfunctions. Then Xi is a random variable which can take the values {x : 0 < x ≤ 30}. This is clearly an uncountable set. But at the end of the 30-day period some machines may still be functioning. For each of these machines all we know is that Xi ≥ 30. Then the probability that Xi = 30 is positive. Hence the random variable Xi is of the mixed type. The data in this example are known as right-censored. Of course, there are situations where both right and left censoring are present.

Figure 2. Graphical illustration of an experiment consisting of rolling two dice once and an associated random variable which is defined as the sum of the two numbers observed.

1.3.2 Probability Distributions of Random Variables

So far we have defined random variables and their support. In this section we are interested in measuring the probability of each of these values and/or the probability of a subset of these values; in other words, we are interested in finding the probability distribution of a given random variable. Three equivalent ways of representing the probability distributions of these random variables are: tables, graphs, and mathematical functions (also known as mathematical models).

1.3.3 Probability Distribution Tables

As an example of a probability distribution that can be displayed in a table, let us flip a fair coin twice and let X be the number of heads observed. Then the sample space is {TT, TH, HT, HH}, where TH, for example, denotes the outcome: first coin turned up a tail and second a head. The sample space of the random variable X is then {0, 1, 2}. For example, X = 0 occurs when we observe TT. The probability of each of these possible values of X is found simply by counting how many elements of the sample space yield that value. We can see that X = 0 occurs when we observe the outcome TT, X = 1 occurs when we observe either HT or TH, and X = 2 occurs when we observe HH. Since there are four equally likely elementary events in the sample space, p(X = 0) = 1/4, p(X = 1) = 2/4, and p(X = 2) = 1/4. This probability distribution of X can be displayed in a table as in Table 1. For obvious reasons, such tables are called probability distribution tables. Note that to denote the random variable itself we use an uppercase letter (e.g., X), but for its realizations we use the corresponding lowercase letter (e.g., x).
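The counting argument above is easy to reproduce mechanically; the following short Python sketch (not part of the original text) enumerates the four equally likely outcomes and recovers the probabilities of Table 1:

    from itertools import product
    from collections import Counter

    # Sample space of two flips of a fair coin: TT, TH, HT, HH (all equally likely).
    outcomes = list(product("HT", repeat=2))

    # X = number of heads observed in each elementary event.
    counts = Counter(outcome.count("H") for outcome in outcomes)

    pmf = {x: counts[x] / len(outcomes) for x in sorted(counts)}
    print(pmf)  # {0: 0.25, 1: 0.5, 2: 0.25}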

Obviously, it is possible to use tables to display the probability distributions of only discrete random variables. For continuous random variables, we have to use one of the other two means: graphs or mathematical functions. Even for discrete random variables with a large number of elements in their support, tables are not the most efficient way of displaying the probability distribution.

1.3.4 Graphical Representation of Probabilities

The probability distribution of a random variable can equivalently be represented graphically by displaying the values in the support of X on a horizontal line and erecting a vertical line or bar on top of each of these values. The height of each line or bar represents the probability of the corresponding value of X. For example, Fig. 3 shows the probability distribution of the random variable X defined in Example 1.

For continuous random variables, we have infinitely many possible values in their support, each of which has probability equal to zero. To avoid this difficulty, we represent the probability of a subset of values by an area under a curve (known as the probability density curve) instead of by heights of vertical lines on top of each of the values in the subset.

For example, let X represent a number drawn randomly from the interval [0, 10]. The probability distribution of X can be displayed graphically as in Fig. 4. The area under the curve on top of the support of X has to equal 1 because it represents the total probability. Since all values of X are equally likely, the curve is a horizontal line with height equal to 1/10. The height of 1/10 will make the total area under the curve equal to 1. This type of random variable is called a continuous uniform random variable and is denoted by U(a, b), where in this example a = 0 and b = 10.

Table 1. The Probability Distribution of the Random Variable X Defined as the Number of Heads Resulting from Flipping a Fair Coin Twice

x      p(X = x)
0      0.25
1      0.50
2      0.25

Figure 3. Graphical representation of the probability distribution of the random variable X in Example 1.

If we wish, for example, to find the probability that X is between 2 and 6, this probability is represented by the shaded area on top of the interval (2, 6). Note here that the heights of the curve do not represent probabilities as in the discrete case; they represent the density of the random variable on top of each value of X.
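For the U(0, 10) density the shaded area can be computed directly, since probabilities of a continuous random variable are areas under the density curve; the sketch below (an illustration only, not from the text) approximates the integral of the constant density 1/10 over (2, 6):

    def uniform_pdf(x, a=0.0, b=10.0):
        """Density of the continuous uniform U(a, b) random variable."""
        return 1.0 / (b - a) if a <= x <= b else 0.0

    # p(2 < X < 6) as the area under f(x) over (2, 6), approximated by a Riemann sum.
    n = 100_000
    width = (6.0 - 2.0) / n
    area = sum(uniform_pdf(2.0 + (i + 0.5) * width) * width for i in range(n))
    print(round(area, 4))  # 0.4, i.e. (6 - 2) / 10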

1.3.5 Probability Mass and Density Functions

Alternatively to tables and graphs, a probability distribution can be displayed using a mathematical function. For example, the probability distribution of the random variable X in Table 1 can be written as

p(X = x) = C(2, x)(1/2)²,   x = 0, 1, 2   (8)

where C(n, x) = n!/[x!(n − x)!] denotes the binomial coefficient. A function like the one in Eq. (8) is known as a probability mass function (pmf). Examples of the pmf of other popular discrete random variables are given in Sec. 1.4. Sometimes we write p(X = x) as p(x) for simplicity of notation. Every pmf p(x) must satisfy

p(x) > 0, ∀x ∈ A;   p(x) = 0, ∀x ∉ A;   Σ_{x∈A} p(x) = 1

where A is the support of X.

As an example of representing a continuous random variable using a mathematical function, the graph of the continuous random variable X in Fig. 4 can be represented by the function

f(x) = 1/10 if 0 ≤ x ≤ 10,   f(x) = 0 otherwise   (9)

A function like the one in Eq. (9) is known as a probability density function (pdf). To distinguish the two types of functions, the former is denoted by p(x) (because it represents the probability that X = x) and the latter by f(x) (because it represents the height of the curve on top of x).

Note that every pdf f(x) must satisfy the following conditions:

f(x) > 0, ∀x ∈ A;   f(x) = 0, ∀x ∉ A;   ∫_{x∈A} f(x) dx = 1

where A is the support of X.

Probability distributions of mixed random variables can also be represented graphically and using probability mass–density functions (pmdf). The pmdf of a mixed random variable X is a pair of functions p(x) and f(x) such that they allow determining the probabilities of X taking given values and of X belonging to given intervals, respectively. Thus, the probability of X taking values in the interval (a, b) is given by

Σ_{a < x < b} p(x) + ∫_a^b f(x) dx

The interpretation of each of these functions coincides with that for discrete and continuous random variables. The pmdf has to satisfy the conditions p(x) ≥ 0, f(x) ≥ 0, and Σ_x p(x) + ∫ f(x) dx = 1.

1.3.6 Cumulative Distribution Function

An alternative way of defining the probability mass–density function of a random variable is by means of the cumulative distribution function (cdf). The cdf of a random variable X is a function that assigns to each real value x the probability of X taking values less than or equal to x. Thus, the cdf for the discrete case is

P(x) = p(X ≤ x) = Σ_{a ≤ x} p(a)

and for the continuous case is

F(x) = p(X ≤ x) = ∫_{−∞}^{x} f(x) dx

Note that the cdfs are denoted by the uppercase letters P(x) and F(x) to distinguish them from the pmf p(x) and the pdf f(x). Note also that since p(X = x) = 0 for the continuous case, p(X ≤ x) = p(X < x). The cdf has the following properties as a direct consequence of the definitions:

Every distribution function can be written as a linear convex combination of continuous distributions and step functions.

Figure 4. Graphical representation of the pdf of the U(0, 10) random variable X.

1.3.7 Moments of Random Variables

The pmf or pdf of a random variable contains all the information about the random variable. For example, given the pmf or the pdf of a given random variable, we can find the mean, the variance, and other moments of the random variable. The results in this section are presented for continuous random variables using the pdf and cdf, f(x) and F(x), respectively. For discrete random variables, the results are obtained by replacing f(x), F(x), and the integration symbol by p(x), P(x), and the summation symbol, respectively.

Definition 11. Moments of Order k: Let X be a random variable with pdf f(x), cdf F(x), and support A. Then the kth moment m_k around a ∈ A is the real number

m_k = ∫_A (x − a)^k f(x) dx = ∫_A (x − a)^k dF(x)   (10)

Note that the Stieltjes–Lebesgue integral, Eq. (10), does not always exist. In such a case we say that the corresponding moment does not exist. However, the existence of Eq. (10) implies the existence of

∫_A |x − a|^k f(x) dx

which leads to the following theorem:

Theorem 3. Existence of Moments of Lower Order: If the tth moment around a of a random variable X exists, then the sth moment around a also exists for 0 < s ≤ t.

The first moment about the origin is called the mean or the expected value of the random variable X, and is denoted by μ or E[X]. Let X and Y be random variables; then the expectation operator has the following important properties:

E[c] = c, where c is a constant.
E[aX + bY + c] = aE[X] + bE[Y] + c, for all constants a, b, c.
a ≤ Y ≤ b ⇒ a ≤ E[Y] ≤ b.
|E[Y]| ≤ E[|Y|].

The second moment around the mean is called the variance of the random variable, and is denoted by Var(X) or σ². The square root of the variance, σ, is called the standard deviation of the random variable. The physical meanings of the mean and the variance are similar to the center of gravity and the moment of inertia used in mechanics. They are the central and dispersion measures, respectively. Using the above properties we can write

E[(X − a)²] = σ² + (μ − a)²
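The identity E[(X − a)²] = σ² + (μ − a)² is easy to check numerically for a discrete random variable; the sketch below (illustrative only) uses the distribution of Table 1:

    # pmf of X = number of heads in two flips of a fair coin (Table 1).
    pmf = {0: 0.25, 1: 0.50, 2: 0.25}

    mean = sum(x * p for x, p in pmf.items())               # mu = E[X] = 1.0
    var = sum((x - mean) ** 2 * p for x, p in pmf.items())  # sigma^2 = 0.5

    a = 3.0
    lhs = sum((x - a) ** 2 * p for x, p in pmf.items())     # E[(X - a)^2]
    rhs = var + (mean - a) ** 2                             # sigma^2 + (mu - a)^2
    print(lhs, rhs)  # both 4.5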

1.4 UNIVARIATE DISCRETE MODELS

In this section we present several important discrete probability distributions that often arise in engineering applications. Table 2 shows the pmf of these distributions. For additional probability distributions, see Christensen [2] and Johnson et al. [3].

1.4.1 The Bernoulli Distribution

The Bernoulli distribution arises in the following situation. Assume that we have a random experiment with two possible mutually exclusive outcomes: success, with probability p, and failure, with probability 1 − p. This experiment is called a Bernoulli trial. Define a random variable X by

X = 1 if we obtain success,   X = 0 if we obtain failure

Then the pmf of X is as given in Table 2 under the Bernoulli distribution. It can be shown that the mean and variance of X are p and p(1 − p), respectively.

1.4.2 The Discrete Uniform Distribution

The discrete uniform random variable U(n) is a random variable which takes n equally likely values. These values are given by its support A. Its pmf is

p(X = x) = 1/n if x ∈ A,   0 otherwise

Table 2. Some Discrete Probability Mass Functions that Arise in Engineering Applications

Distribution          pmf p(x)                                            Parameters                      Support
Bernoulli             p if x = 1;  1 − p if x = 0                         0 < p < 1                       x ∈ {0, 1}
Binomial              C(n, x) p^x (1 − p)^(n−x)                           n ∈ {1, 2, ...}, 0 < p < 1      x ∈ {0, 1, ..., n}
Nonzero binomial      C(n, x) p^x (1 − p)^(n−x) / [1 − (1 − p)^n]         n ∈ {1, 2, ...}, 0 < p < 1      x ∈ {1, 2, ..., n}
Negative binomial     C(x − 1, r − 1) p^r (1 − p)^(x−r)                   r ∈ {1, 2, ...}, 0 < p < 1      x ∈ {r, r + 1, ...}
Hypergeometric        C(D, x) C(N − D, n − x) / C(N, n)                   (n, N) ∈ {1, 2, ...}, n < N     max(0, n − N + D) ≤ x ≤ min(n, D)
Poisson               e^(−λ) λ^x / x!                                     λ > 0                           x ∈ {0, 1, ...}
Nonzero Poisson       λ^x / [x! (e^λ − 1)]                                λ > 0                           x ∈ {1, 2, ...}
Logarithmic series    −p^x / [x ln(1 − p)]                                0 < p < 1                       x ∈ {1, 2, ...}
Discrete Weibull      (1 − p)^(x^β) − (1 − p)^((x+1)^β)                   0 < p < 1, β > 0                x ∈ {0, 1, ...}
Yule                  n Γ(x) Γ(n + 1) / Γ(n + x + 1)                      n ∈ {1, 2, ...}                 x ∈ {1, 2, ...}

1.4.3 The Binomial Distribution

Suppose now that we repeat a Bernoulli experiment n times under identical conditions (that is, the outcome of one trial does not affect the outcomes of the others). In this case the trials are said to be independent. Suppose also that the probability of success is p and that we are interested in the number of trials, X, in which the outcomes are successes. The random variable giving the number of successes after n realizations of independent Bernoulli experiments is called a binomial random variable and is denoted by B(n, p). Its pmf is given in Table 2. Figure 6 shows some examples of pmfs associated with binomial random variables.
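The binomial pmf of Table 2 can be evaluated directly from its definition; the following sketch (Python, with illustrative values chosen here, not taken from the text) tabulates B(n = 5, p = 0.3) and checks that the probabilities sum to one and that the mean equals np:

    from math import comb

    def binomial_pmf(x, n, p):
        """p(X = x) for a binomial B(n, p) random variable."""
        return comb(n, x) * p**x * (1 - p)**(n - x)

    n, p = 5, 0.3
    pmf = [binomial_pmf(x, n, p) for x in range(n + 1)]

    print([round(v, 4) for v in pmf])
    print(round(sum(pmf), 10))                                # 1.0
    print(round(sum(x * pmf[x] for x in range(n + 1)), 10))   # n * p = 1.5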

In certain situations the event X = 0 cannot occur. The pmf of the binomial distribution can be modified to accommodate this case. The resultant random variable is called the nonzero binomial. Its pmf is given in Table 2.

1.4.4 The Geometric or Pascal Distribution

Suppose again that we repeat a Bernoulli experiment n times, but now we are interested in the random variable X, defined to be the number of Bernoulli trials that are required until we get the first success. Note that if the first success occurs in trial number x, then the first (x − 1) trials must be failures (see Fig. 7). Since the probability of a success is p and the probability of the (x − 1) failures is (1 − p)^(x−1) (because the trials are independent), then p(X = x) = p(1 − p)^(x−1). This random variable is called the geometric or Pascal random variable and is denoted by G(p).

1.4.5 The Negative Binomial Distribution

The geometric distribution arises when we are interested in the number of Bernoulli trials that are required until we get the first success. Now suppose that we define the random variable X as the number of Bernoulli trials that are required until we get the rth success. For the rth success to occur at the xth trial, we must have (r − 1) successes in the (x − 1) previous trials and one success in the xth trial (see Fig. 8). This random variable is called the negative binomial random variable and is denoted by NB(r, p). Its pmf is given in Table 2. Note that the geometric distribution is a special case of the negative binomial distribution, obtained by setting r = 1; that is, G(p) = NB(1, p).

1.4.6 The Hypergeometric Distribution

Consider a set of N items (products, machines, etc.), D items of which are defective and the remaining (N − D) items are acceptable. Obtaining a random sample of size n from this finite population is equivalent to withdrawing the items one by one without replacement.

Figure 5. A graph of the pmf and cdf of a Bernoulli distribution.
Figure 6. Examples of the pmf of binomial random variables.
Figure 7. Illustration of the Pascal or geometric random variable, where s denotes success and f denotes failure.


This yields the hypergeometric random variable, which is defined to be the number of defective items in the sample and is denoted by HG(N, D, n).

Obviously, the number X of defective items in the sample cannot exceed the total number of defective items D nor the sample size n. Similarly, the number (n − X) of acceptable items in the sample cannot be less than zero or exceed the total number of acceptable items (N − D). Thus, we must have max(0, n − (N − D)) ≤ X ≤ min(n, D). This random variable has the hypergeometric distribution and its pmf is given in Table 2. Note that the numerator in the pmf is the number of possible samples with x defective and (n − x) acceptable items, and that the denominator is the total number of possible samples.

The mean and variance of the hypergeometric random variable are

nD/N   and   n (D/N)(1 − D/N)(N − n)/(N − 1)

respectively. When N tends to infinity this distribution tends to the binomial distribution.
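A direct translation of the counting argument gives the hypergeometric pmf; the sketch below (with made-up values of N, D, and n, not from the text) checks that the probabilities sum to one and that the mean equals nD/N:

    from math import comb

    def hypergeometric_pmf(x, N, D, n):
        """Probability of x defectives in a sample of size n drawn without
        replacement from N items of which D are defective."""
        return comb(D, x) * comb(N - D, n - x) / comb(N, n)

    N, D, n = 20, 6, 5
    support = range(max(0, n - (N - D)), min(n, D) + 1)
    pmf = {x: hypergeometric_pmf(x, N, D, n) for x in support}

    print(round(sum(pmf.values()), 10))                    # 1.0
    print(round(sum(x * p for x, p in pmf.items()), 10))   # n * D / N = 1.5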

1.4.7 The Poisson Distribution

There are events which are not the result of a series of experiments but occur at random time instants or locations. For example, we can be interested in the number of traffic accidents occurring in a time interval, or the number of vehicles arriving at a given intersection. For these types of random variables we can make the following (Poisson) assumptions:

The probability of occurrence of a single event in an interval of brief duration dt is ν dt, that is, p_dt(1) = ν dt + o(dt), where ν is a positive constant.
The probability of occurrence of more than one event in the same interval dt is negligible with respect to the previous one.
The numbers of events occurring in nonoverlapping intervals are independent.

Under these assumptions, the number of events occurring in a period of duration t has probability

p_t(x) = e^{−νt} (νt)^x / x!

Letting λ = νt, we obtain the pmf of the Poisson random variable as given in Table 2. Thus, the Poisson random variable gives the number of events occurring in a period of given duration and is denoted by P(λ), where λ = νt, that is, the intensity times the duration t.

As in the nonzero binomial case, in certain situations the event X = 0 cannot occur. The pmf of the Poisson distribution can be modified to accommodate this case. The resultant random variable is called the nonzero Poisson. Its pmf is given in Table 2.
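The Poisson pmf of Table 2, and its familiar role as a limit of the binomial when n is large and p = λ/n is small, can be verified numerically; a small sketch (illustrative parameters only, not from the text):

    from math import exp, factorial, comb

    def poisson_pmf(x, lam):
        """p(X = x) for a Poisson random variable with parameter lambda = lam."""
        return exp(-lam) * lam**x / factorial(x)

    lam = 2.0
    # Binomial B(n, p) with n large and p = lam / n approaches the Poisson pmf.
    n = 10_000
    p = lam / n
    for x in range(5):
        binom = comb(n, x) * p**x * (1 - p)**(n - x)
        print(x, round(poisson_pmf(x, lam), 6), round(binom, 6))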

1.5 UNIVARIATE CONTINUOUS MODELS

In this section we give several important continuous probability distributions that often arise in engineering applications. Table 3 shows the pdf and cdf of these distributions. For additional probability distributions, see Christensen [2] and Johnson et al. [4].

1.5.1 The Continuous Uniform Distribution

The uniform random variable U(a, b) has already been introduced in Sec. 1.3.5. Its pdf is given in Eq. (9), from which it follows that the cdf can be written as (see Fig. 9)

F(x) = 0 for x < a;   F(x) = (x − a)/(b − a) for a ≤ x ≤ b;   F(x) = 1 for x > b

1.5.2 The Exponential Distribution

Let X be the waiting time until the first event of a Poisson process with intensity λ. The probability that the first event occurs after time x is 1 − F(x), and the probability of zero events in a period of duration x is given by the Poisson probability distribution. Thus, we have

1 − F(x) = e^{−λx},   so that   F(x) = 1 − e^{−λx},   x ≥ 0

which is the cdf of the exponential random variable.

Figure 8. An illustration of the negative binomial random variable.

1.5.3 The Gamma Distribution

Let Y be a Poisson random variable with parameter λ. Let X be the time up to the kth Poisson event, that is, the time it takes for Y to be equal to k. Thus the probability that X is in the interval (x, x + dx) is f(x) dx. But this probability is equal to the probability of there having occurred (k − 1) Poisson events in a period of duration x, times the probability of occurrence of one event in a period of duration dx. Thus, we have

f(x) dx = [e^{−λx} (λx)^{k−1} / (k − 1)!] λ dx

from which we obtain

f(x) = λ (λx)^{k−1} e^{−λx} / (k − 1)!,   0 ≤ x < ∞   (12)

Expression (12), taking into account that the gamma function

Γ(k) = ∫_0^∞ u^{k−1} e^{−u} du   (13)

satisfies Γ(k) = (k − 1)! for an integer k, can be written as

f(x) = λ (λx)^{k−1} e^{−λx} / Γ(k),   x ≥ 0   (14)

which is valid for any real positive k, thus generalizing the exponential distribution. The pdf in Eq. (14) is known as the gamma distribution with parameters k and λ. The pdf of the gamma random variable is plotted in Fig. 11.

Table 3. Some Continuous Probability Density Functions that Arise in Engineering Applications

Distribution      pdf f(x)                                                                 Parameters                  Support
Gamma             λ (λx)^(k−1) e^(−λx) / Γ(k)                                              λ > 0, k ∈ {1, 2, ...}      x ≥ 0
Beta              [Γ(r + t) / (Γ(r) Γ(t))] x^(r−1) (1 − x)^(t−1)                           r, t > 0                    0 ≤ x ≤ 1
Chi-squared       x^(n/2 − 1) e^(−x/2) / [2^(n/2) Γ(n/2)]                                  n ∈ {1, 2, ...}             x ≥ 0
Rayleigh          x e^(−x²/2)                                                              —                           x ≥ 0
Central t         [Γ((n + 1)/2) / (√(nπ) Γ(n/2))] (1 + x²/n)^(−(n+1)/2)                    n ∈ {1, 2, ...}             −∞ < x < ∞
Central F         [Γ((n1 + n2)/2) n1^(n1/2) n2^(n2/2) / (Γ(n1/2) Γ(n2/2))] x^(n1/2 − 1) (n2 + n1 x)^(−(n1+n2)/2)   n1, n2 ∈ {1, 2, ...}   x > 0

1.5.4 The Beta Distribution

The beta random variable is denoted as Beta(r, s), where r > 0 and s > 0. Its name is due to the presence of the beta function

β(p, q) = ∫_0^1 x^{p−1} (1 − x)^{q−1} dx,   p > 0, q > 0

Its pdf is given by

f(x) = x^{r−1} (1 − x)^{s−1} / β(r, s),   0 ≤ x ≤ 1

a family flexible enough to describe a wide range of experimental data. Figure 12 shows different examples of the pdf of the beta random variable. Two particular cases of the beta distribution are interesting. Setting (r = 1, s = 1) gives the standard uniform U(0, 1) distribution, while setting (r = 1, s = 2, or r = 2, s = 1) gives the triangular random variable, whose cdf is F(x) = 2x − x² (for r = 1, s = 2) or F(x) = x² (for r = 2, s = 1).

Figure 9. An example of the pdf and cdf of the uniform random variable.

1.5.5 The Normal or Gaussian Distribution

One of the most important distributions in probability and statistics is the normal distribution (also known as the Gaussian distribution), which arises in various applications. For example, consider the random variable X which is the sum of n identically and independently distributed (iid) random variables Xi. Then, by the central limit theorem, X is asymptotically normal, regardless of the form of the distribution of the random variables Xi.

The normal random variable with parameters μ and σ² is denoted by N(μ, σ²) and its pdf is

f(x) = [1/(σ√(2π))] exp(−(x − μ)²/(2σ²)),   −∞ < x < ∞

The change of variable Z = (X − μ)/σ transforms a normal N(μ, σ²) random variable X into another random variable Z, which is N(0, 1). This variable is called the standard normal random variable. The main interest of this change of variable is that we can use tables for the standard normal distribution to calculate probabilities for any other normal distribution. For example, if X is N(μ, σ²), then

p(X < x) = p((X − μ)/σ < (x − μ)/σ) = p(Z < (x − μ)/σ) = Φ((x − μ)/σ)

where Φ(z) is the cdf of the standard normal distribution. The cdf Φ(z) cannot be given in closed form. However, it has been computed numerically, and tables for Φ(z) are found at the end of probability and statistics textbooks. Thus we can use the tables for the standard normal distribution to calculate probabilities for any other normal distribution.
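In place of printed tables, Φ(z) can also be evaluated numerically from the error function, since Φ(z) = (1/2)[1 + erf(z/√2)]; the sketch below (illustrative values only) computes p(X < x) for a normal N(μ, σ²) random variable this way:

    from math import erf, sqrt

    def std_normal_cdf(z):
        """Phi(z), the cdf of the standard normal N(0, 1) random variable."""
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    def normal_cdf(x, mu, sigma):
        """p(X < x) for X ~ N(mu, sigma^2), via the change of variable Z = (X - mu)/sigma."""
        return std_normal_cdf((x - mu) / sigma)

    # Example: X ~ N(10, 2^2); p(X < 13) = Phi(1.5) ~= 0.9332
    print(round(normal_cdf(13.0, 10.0, 2.0), 4))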

1.5.6 The Log-Normal Distribution

We have seen in the previous subsection that the sum of iid random variables gives rise to a normal distribution. In some cases, however, random variables are defined as the products instead of sums of iid random variables. In these cases, taking the logarithm of the product yields the log-normal distribution, because the logarithm of a product is the sum of the logarithms of its components. Thus, we say that a random variable X is log-normal when its logarithm ln X is normal.

Using Theorem 7, the pdf of the log-normal random variable can be expressed as

f(x) = [1/(xσ√(2π))] exp(−(ln x − μ)²/(2σ²)),   x ≥ 0

where the parameters μ and σ are the mean and the standard deviation of the initial normal random variable. The mean and variance of the log-normal random variable are e^{μ + σ²/2} and e^{2μ}(e^{2σ²} − e^{σ²}), respectively.
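The stated mean of the log-normal, e^{μ + σ²/2}, can be checked by simulating X = e^Y with Y normal; a brief sketch (parameters chosen here for illustration, simulation-based so the agreement is only approximate):

    import random
    from math import exp

    mu, sigma = 0.5, 0.4
    random.seed(1)

    # Simulate X = exp(Y) with Y ~ N(mu, sigma^2); X is then log-normal.
    n = 200_000
    sample_mean = sum(exp(random.gauss(mu, sigma)) for _ in range(n)) / n

    print(round(sample_mean, 3))              # close to the theoretical value
    print(round(exp(mu + sigma**2 / 2), 3))   # e^{mu + sigma^2/2} ~= 1.786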

1.5.7 The Chi-Squared and Related Distributions

Let Y1, ..., Yn be independent random variables, where Yi is distributed as N(μi, 1). Then the variable

X = Σ_{i=1}^{n} Yi²

is called a noncentral chi-squared random variable with n degrees of freedom and noncentrality parameter λ = Σ_{i=1}^{n} μi², and is denoted χ²_n(λ). When λ = 0 we obtain the central chi-squared random variable, which is denoted by χ²_n. The pdf of the central chi-squared random variable with n degrees of freedom is given in Table 3, where Γ(·) is the gamma function defined in Eq. (13).

The positive square root of a χ²_n(λ) random variable is called a chi random variable and is denoted by χ_n(λ). An interesting particular case of the χ_n(λ) is the Rayleigh random variable, which is obtained for (n = 2 and λ = 0). The pdf of the Rayleigh random variable is given in Table 3. The Rayleigh distribution is used, for example, to model wave heights [5].

1.5.8 The t Distribution

Let Y1 be a normal N(λ, 1) random variable and Y2 be a χ²_n random variable, independent of Y1. Then the random variable

T = Y1 / √(Y2/n)

is called the noncentral Student's t random variable with n degrees of freedom and noncentrality parameter λ, and is denoted by t_n(λ). When λ = 0 we obtain the central Student's t random variable, which is denoted by t_n; its pdf is given in Table 3. The mean and variance of the central t random variable are 0 and n/(n − 2) (for n > 2), respectively.

1.5.9 The F Distribution

Let X1 and X2 be independent noncentral chi-squared random variables with n1 and n2 degrees of freedom and noncentrality parameters λ1 and λ2. Then the random variable

X = (X1/n1) / (X2/n2)

is known as the noncentral Snedecor F random variable with n1 and n2 degrees of freedom and noncentrality parameters λ1 and λ2, and is denoted by F_{n1,n2}(λ1, λ2). An interesting particular case is obtained when λ1 = λ2 = 0, in which the random variable is called the central Snedecor F random variable with n1 and n2 degrees of freedom. In this case the pdf is given in Table 3. The mean and variance of the central F random variable are n2/(n2 − 2) (for n2 > 2) and 2n2²(n1 + n2 − 2)/[n1(n2 − 2)²(n2 − 4)] (for n2 > 4), respectively.

F random variable are

In this section we deal with multidimensional random

variables, that is, the case where n > 1 in De®nition 7

In random experiments that yield multidimensional

random variables, each outcome gives n real values

The corresponding components are called marginal

variables Let fX1; ; Xng be n-dimensional random

variables and X be the n  1 vector containing the

components fX1; ; Xng The support of the random

variable is also denoted by A, but here A is

multidi-mensional A realization of the random variable X is

denoted by x, an n  1 vector containing the

compo-nents fx1; ; xng Note that vectors and matrices aredenoted by boldface letters Sometimes it is also con-venient to use the notation X ˆ fX1; ; Xng, whichmeans that X refers to the set of marginals

fX1; ; Xng We present both discrete and continuousmultidimensional random variables and study theircharacteristics For some interesting engineering multi-dimensional models see Castillo et al [6,7]

1.6.1 Multidimensional Discrete Random Variables

A multidimensional random variable is said to be discrete if its marginals are discrete. The pmf of a multidimensional discrete random variable X is written as p(x) or p(x1, ..., xn), which means

p(x) = p(x1, ..., xn) = p(X1 = x1, ..., Xn = xn)

The pmf of multidimensional random variables can be tabulated in probability distribution tables, but the tables necessarily have to be multidimensional. Also, because of its multidimensional nature, graphs of the pmf are useful only for n = 2; the random variable in this case is said to be two-dimensional. A graphical representation can be obtained using bars or lines of heights proportional to p(x1, x2), as the following example illustrates.

Example 2. Consider the experiment consisting of rolling two fair dice. Let X = (X1, X2) be a two-dimensional random variable such that X1 is the outcome of the first die and X2 is the minimum of the two dice. The pmf of X is given in Fig. 13, which also shows the marginal probability of X2. For example, the probability associated with the pair (3, 3) is 4/36 because, according to Table 4, there are four elementary events where X1 = X2 = 3.

Table 4. Values of X2 = min(X, Y) for Different Outcomes of Two Dice X and Y

                 Die 1
Die 2      1    2    3    4    5    6
  1        1    1    1    1    1    1
  2        1    2    2    2    2    2
  3        1    2    3    3    3    3
  4        1    2    3    4    4    4
  5        1    2    3    4    5    5
  6        1    2    3    4    5    6
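The joint pmf of Example 2 can be reproduced by brute-force enumeration of the 36 outcomes; the following sketch (illustrative only, not from the text) recovers the value p(3, 3) = 4/36 and the marginal pmf of X2:

    from collections import Counter
    from fractions import Fraction

    # Enumerate the 36 equally likely outcomes of rolling two fair dice.
    joint = Counter()
    for die1 in range(1, 7):
        for die2 in range(1, 7):
            x1, x2 = die1, min(die1, die2)   # X1 = first die, X2 = minimum of both
            joint[(x1, x2)] += Fraction(1, 36)

    print(joint[(3, 3)])                      # 1/9, i.e. 4/36

    # Marginal pmf of X2, obtained by summing the joint pmf over x1.
    marginal_x2 = Counter()
    for (x1, x2), p in joint.items():
        marginal_x2[x2] += p
    print(dict(sorted(marginal_x2.items())))  # {1: 11/36, 2: 1/4, ..., 6: 1/36}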


The pmf must satisfy the properties p(x1, ..., xn) ≥ 0 and Σ p(x1, ..., xn) = 1, where the sum extends over the support A.

Example 3. The Multinomial Distribution: We have seen in Sec. 1.4.3 that the binomial random variable results from random experiments, each one having two possible outcomes. If each random experiment has more than two outcomes, the resultant random variable is called a multinomial random variable. Suppose that we perform an experiment with k possible outcomes r1, ..., rk with probabilities p1, ..., pk, respectively. Since the outcomes are mutually exclusive and collectively exhaustive, these probabilities must satisfy Σ_{i=1}^{k} pi = 1. If we repeat this experiment n times and let Xi be the number of times we obtain outcome ri, for i = 1, ..., k, then the random variable X = (X1, ..., Xk) is a multinomial random variable.

1.6.2 Multidimensional Continuous Random Variables

A multidimensional random variable is said to be continuous if its marginals are continuous. The pdf of an n-dimensional continuous random variable X is written as f(x) or f(x1, ..., xn). Thus f(x) gives the height of the density at the point x and F(x) gives the cdf, that is, F(x) = p(X1 ≤ x1, ..., Xn ≤ xn). In the two-dimensional case the pdf can be obtained from the cdf as

f(x1, x2) = ∂²F(x1, x2) / (∂x1 ∂x2)

Among other properties of two-dimensional cdfs we mention the following:


For example, Fig. 14 illustrates the fourth property, showing how the probability that (X1, X2) belongs to a given rectangle is obtained from the cdf.

1.6.3 Marginal and Conditional Probability Distributions

We obtain the marginal and conditional distributions for the continuous case. The results are still valid for the discrete case after replacing the pdf and the integral symbol by the pmf and the summation symbol, respectively. Let {X1, ..., Xn} be an n-dimensional continuous random variable with a joint pdf f(x1, ..., xn). The marginal pdf of the ith component, Xi, is obtained by integrating the joint pdf over all other variables. For example, the marginal pdf of X1 is

f(x1) = ∫ ··· ∫ f(x1, x2, ..., xn) dx2 ··· dxn

We define the conditional pdf for the case of two-dimensional random variables. The extension to the n-dimensional case is straightforward. For simplicity of notation we use (X, Y) instead of (X1, X2). Let then (Y, X) be a two-dimensional random variable. The random variable Y given X = x is denoted by (Y | X = x). The corresponding probability density and distribution functions are called the conditional pdf and cdf, respectively.

The following expressions give the conditional pdf for the random variables (Y | X = x) and (X | Y = y):

f_{Y|X=x}(y) = f_{(X,Y)}(x, y) / f_X(x)
f_{X|Y=y}(x) = f_{(X,Y)}(x, y) / f_Y(y)

A conditional cdf is defined similarly; for example,

F_{X|Y≤y}(x) = p(X ≤ x | Y ≤ y) = p(X ≤ x, Y ≤ y) / p(Y ≤ y)

Definition 12. Moments of a Multidimensional Random Variable: The moment μ_{k1,...,kn; a1,...,an} of order (k1, ..., kn), ki ∈ {0, 1, ...}, with respect to the point a = (a1, ..., an) of the n-dimensional continuous random variable X = (X1, ..., Xn), with pdf f(x1, ..., xn) and support A, is defined as the real number

∫_A (x1 − a1)^{k1} (x2 − a2)^{k2} ··· (xn − an)^{kn} f(x1, ..., xn) dx1 ··· dxn

where f(x1, ..., xn) is the pdf of X. For a discrete random variable the integral is replaced by the sum

Σ_{(x1,...,xn)∈A} (x1 − a1)^{k1} (x2 − a2)^{k2} ··· (xn − an)^{kn} p(x1, ..., xn)

The moment of first order with respect to the origin is called the mean vector, and the moments of second order with respect to the mean vector are called the variances and covariances. The variances and covariances can conveniently be arranged in a matrix called the variance–covariance matrix. For example, in the bivariate case, the variance–covariance matrix is

Σ = | σ_XX  σ_XY |
    | σ_YX  σ_YY |

where σ_XY = E[(X − μ_X)(Y − μ_Y)] is the covariance between X and Y, and μ_X is the mean of the variable X. Note that Σ is necessarily symmetrical.

Figure 14. An illustration of how the probability that (X1, X2) belongs to a given rectangle is obtained from the cdf.

Figure 15 gives a graphical interpretation of the contribution of each data point to the covariance and its corresponding sign. In fact the contribution term has absolute value equal to the area of the rectangle in Fig. 15(a). Note that such area takes the value zero when the corresponding points are on the vertical or the horizontal lines associated with the means, and takes larger values when the point is far from the means.

On the other hand, when the points are in the first and third quadrants (upper-right and lower-left) with respect to the mean, their contributions are positive, and if they are in the second and fourth quadrants (upper-left and lower-right) with respect to the mean, their contributions are negative [see Fig. 15(b)].

Another important property of the variance-covariance matrix is the Cauchy-Schwartz inequality:

$|\sigma_{XY}| \le \sqrt{\sigma_{XX}\,\sigma_{YY}}$   (17)

The equality holds only when all the possible pairs (points) are on a straight line.

The pairwise correlation coefficients can also be arranged in a matrix

$\rho = \begin{pmatrix} \rho_{XX} & \rho_{XY} \\ \rho_{YX} & \rho_{YY} \end{pmatrix}$

This matrix is called the correlation matrix. Its diagonal elements $\rho_{XX}$ and $\rho_{YY}$ are equal to 1, and the off-diagonal elements satisfy $-1 \le \rho_{XY} \le 1$.
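As a quick numerical illustration of these definitions, the sketch below computes the sample variance-covariance matrix D and the correlation matrix for an artificial bivariate data set and checks Eq. (17). It assumes Python with NumPy and invented data; it is not code from the handbook.

```python
import numpy as np

# Artificial bivariate sample of (X, Y); any data set would do.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.6 * x + 0.8 * rng.normal(size=500)

data = np.vstack([x, y])          # 2 x n array, one row per variable
D = np.cov(data)                  # variance-covariance matrix (2 x 2, symmetric)
rho = np.corrcoef(data)           # correlation matrix, diagonal elements equal to 1

# Cauchy-Schwartz inequality, Eq. (17): |sigma_XY| <= sqrt(sigma_XX * sigma_YY)
assert abs(D[0, 1]) <= np.sqrt(D[0, 0] * D[1, 1])
print(D, rho, sep="\n")
```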

1.6.5 Sums and Products of Random Variables

In this section we discuss linear combinations and products of random variables.

Theorem 4  Linear Transformations: Let (X_1, ..., X_n) be an n-dimensional random variable and let $\mu_X$ and $D_X$ be its mean vector and covariance matrix. Consider the linear transformation

$Y = CX$

where X is the column vector containing (X_1, ..., X_n) and C is a matrix of order m x n. Then, the mean vector and covariance matrix of the m-dimensional random variable Y are

$\mu_Y = C\mu_X \qquad \text{and} \qquad D_Y = C D_X C^T$

Theorem 5  Expectation of a Product of Independent Random Variables: If X_1, ..., X_n are independent random variables with means E[X_1], ..., E[X_n], respectively, then we have

$E[X_1 X_2 \cdots X_n] = E[X_1]\,E[X_2] \cdots E[X_n]$
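Theorem 4 is easy to check numerically. The sketch below (Python with NumPy, using arbitrary example values for the mean vector, covariance matrix, and C) compares the empirical mean and covariance of Y = CX with the theoretical values Cμ_X and C D_X Cᵀ.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_x = np.array([1.0, -2.0, 0.5])
D_x = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.2],
                [0.0, 0.2, 0.5]])
C = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])                      # m x n with m = 2, n = 3

X = rng.multivariate_normal(mu_x, D_x, size=100_000)  # samples as rows
Y = X @ C.T                                           # apply Y = C X to each sample

print(Y.mean(axis=0), C @ mu_x)          # empirical vs. theoretical mean vector
print(np.cov(Y.T), C @ D_x @ C.T)        # empirical vs. theoretical covariance matrix
```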

1.6.6 Multivariate Moment-Generating Function

Let X = (X_1, ..., X_n) be an n-dimensional random variable with cdf F(x_1, ..., x_n). The moment-generating function $M_X(t_1, \ldots, t_n)$ of X is

$M_X(t_1, \ldots, t_n) = \int_{R^n} e^{t_1 x_1 + \cdots + t_n x_n} \, dF(x_1, \ldots, x_n)$

As in the univariate case, the moment-generating function of a multidimensional random variable may not exist.

The moments with respect to the origin are

$E[X_1^{k_1} \cdots X_n^{k_n}] = \left. \dfrac{\partial^{\,k_1 + \cdots + k_n} M_X(t_1, \ldots, t_n)}{\partial t_1^{k_1} \cdots \partial t_n^{k_n}} \right|_{t_1 = \cdots = t_n = 0}$

Figure 15  Graphical illustration of the meaning of the covariance.


Example 5  Consider the random variable with pdf

$f(x_1, \ldots, x_n) = \prod_{i=1}^{n} \cdots$

Let X be an n-dimensional normal random variable, which is denoted by $N(\mu, D)$, where $\mu$ and D are the mean vector and covariance matrix, respectively. The following theorem gives the conditional mean and variance-covariance matrix of any conditional variable, which is also normal.

Theorem 6  Conditional Mean and Covariance Matrix: Let Y and Z be two sets of random variables having a multivariate Gaussian distribution with mean vector and covariance matrix given by

$\mu = \begin{pmatrix} \mu_Y \\ \mu_Z \end{pmatrix} \qquad D = \begin{pmatrix} D_{YY} & D_{YZ} \\ D_{ZY} & D_{ZZ} \end{pmatrix}$

where $\mu_Y$ and $D_{YY}$ are the mean vector and covariance matrix of Y, $\mu_Z$ and $D_{ZZ}$ are the mean vector and covariance matrix of Z, and $D_{YZ}$ is the covariance of Y and Z. Then the conditional probability distribution of Y given Z = z is multivariate Gaussian with mean vector $\mu_{Y|Z=z}$ and covariance matrix $D_{Y|Z=z}$, where

$\mu_{Y|Z=z} = \mu_Y + D_{YZ} D_{ZZ}^{-1} (z - \mu_Z)$
$D_{Y|Z=z} = D_{YY} - D_{YZ} D_{ZZ}^{-1} D_{ZY}$

For other properties of the multivariate normal distribution, see any multivariate analysis book, such as Rencher [8].
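The conditioning formulas of Theorem 6 translate directly into code. The sketch below (Python with NumPy; the partition of μ and D and all numerical values are invented for illustration) returns the conditional mean vector and covariance matrix of Y given Z = z.

```python
import numpy as np

def conditional_gaussian(mu_y, mu_z, D_yy, D_yz, D_zz, z):
    """Mean and covariance of Y | Z = z for a joint Gaussian (Theorem 6)."""
    K = D_yz @ np.linalg.inv(D_zz)          # D_YZ D_ZZ^{-1}
    mu_cond = mu_y + K @ (z - mu_z)         # conditional mean vector
    D_cond = D_yy - K @ D_yz.T              # conditional covariance matrix
    return mu_cond, D_cond

# Example: Y is 1-dimensional and Z is 2-dimensional (values are assumptions).
mu_y, mu_z = np.array([1.0]), np.array([0.0, 2.0])
D_yy = np.array([[2.0]])
D_yz = np.array([[0.5, 0.3]])
D_zz = np.array([[1.0, 0.2], [0.2, 1.5]])
print(conditional_gaussian(mu_y, mu_z, D_yy, D_yz, D_zz, z=np.array([0.5, 1.0])))
```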

1.6.8 The Marshall-Olkin Distribution

We give two versions of the Marshall-Olkin distribution with different interpretations. Consider first a system with two components. Both components are subject to Poissonian processes of fatal shocks, such that if one component is affected by one shock it fails. Component 1 is subject to a Poisson process with parameter $\lambda_1$, component 2 is subject to a Poisson process with parameter $\lambda_2$, and both are subject to a Poisson process with parameter $\lambda_{12}$. This implies that

$\bar F(s, t) = p[X > s, Y > t] = p\{Z_1(s; \lambda_1) = 0, \; Z_2(t; \lambda_2) = 0, \; Z_{12}(\max(s, t); \lambda_{12}) = 0\} = \exp[-\lambda_1 s - \lambda_2 t - \lambda_{12} \max(s, t)]$

where $Z(s; \lambda)$ represents the number of shocks produced by a Poisson process of intensity $\lambda$ in a period of duration s and $\bar F(s, t)$ is the survival function.

This model has another interpretation in terms of nonfatal shocks as follows. Consider the above model of shock occurrence, but now suppose that the shocks are not fatal. Once a shock of intensity $\lambda_1$ has occurred, there is a probability $p_1$ of failure of component 1. Once a shock of intensity $\lambda_2$ has occurred, there is a probability $p_2$ of failure of component 2 and, finally, once a shock of intensity $\lambda_{12}$ has occurred, there are probabilities $p_{00}$, $p_{01}$, $p_{10}$, and $p_{11}$ of failure of neither of the components, component 1, component 2, or both components, respectively. In this case the survival function can be obtained in a similar way. This two-dimensional model admits an obvious generalization to more than two components.
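The fatal-shock construction suggests a direct way to simulate from the bivariate Marshall-Olkin distribution: draw the three independent exponential shock times and take componentwise minima. The following sketch (Python with NumPy, with example values for λ1, λ2, λ12) uses that standard representation; it is an illustration consistent with the survival function above, not code from the handbook.

```python
import numpy as np

def marshall_olkin(lam1, lam2, lam12, size, rng=None):
    """Simulate (X, Y) with survival function exp(-lam1*s - lam2*t - lam12*max(s, t))."""
    rng = rng or np.random.default_rng()
    e1 = rng.exponential(1.0 / lam1, size)    # shocks killing component 1 only
    e2 = rng.exponential(1.0 / lam2, size)    # shocks killing component 2 only
    e12 = rng.exponential(1.0 / lam12, size)  # common shocks killing both components
    return np.minimum(e1, e12), np.minimum(e2, e12)

x, y = marshall_olkin(1.0, 0.5, 0.25, size=100_000)
# Empirical check of the survival function at (s, t) = (1, 2):
print(np.mean((x > 1.0) & (y > 2.0)), np.exp(-1.0 * 1.0 - 0.5 * 2.0 - 0.25 * 2.0))
```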

The pmf or pdf of a random variable contains all the information about the random variable. For example, given the pmf or the pdf of a given random variable, we can find the mean, the variance, and other moments of the random variable. We can also find functions related to the random variable such as the moment-generating function, the characteristic function, and the probability-generating function. These functions are useful in studying the properties of the corresponding probability distribution. In this section we study these characteristics of the random variables.

The results in this section are presented for continuous random variables using the pdf and cdf, f(x) and F(x), respectively. For discrete random variables, the results are obtained by replacing f(x), F(x), and the integration symbol by p(x), P(x), and the summation symbol, respectively.

1.7.1 Moment-Generating Function

Let X be a random variable with pdf f(x) and cdf F(x). The moment-generating function (mgf) of X is defined as

$M_X(t) = E[e^{tX}] = \int_{-\infty}^{\infty} e^{tx} \, dF_X(x)$

In some cases the moment-generating function does not exist, but when it exists it has several very important properties.

The mgf generates the moments of the random variable, hence its name. In fact, the kth moment of the random variable with respect to the origin is obtained by evaluating the kth derivative of $M_X(t)$ with respect to t at t = 0. That is, if $M^{(k)}(t)$ is the kth derivative of $M_X(t)$ with respect to t, then the kth moment of X with respect to the origin is $m_k = M^{(k)}(0)$.

Example 6  The moment-generating function of the Bernoulli random variable with pmf

$p(x) = \begin{cases} p & \text{if } x = 1 \\ 1 - p & \text{if } x = 0 \end{cases}$

is

$M(t) = E[e^{tX}] = e^{t \cdot 1} p + e^{t \cdot 0}(1 - p) = 1 - p + p e^t$

For example, to find the first two moments of X, we differentiate $M_X(t)$ with respect to t twice and obtain $M^{(1)}(t) = p e^t$ and $M^{(2)}(t) = p e^t$. In fact, $M^{(k)}(t) = p e^t$ for all k. Therefore, $M^{(k)}(0) = p$, which proves that all the moments of X with respect to the origin are equal to p.

Example 7  The moment-generating function of the Poisson random variable with pmf

$p(x) = \dfrac{e^{-\lambda} \lambda^x}{x!} \qquad x \in \{0, 1, \ldots\}$

is

$M(t) = E[e^{tX}] = \sum_{x=0}^{\infty} e^{tx} \dfrac{e^{-\lambda} \lambda^x}{x!} = e^{-\lambda} \sum_{x=0}^{\infty} \dfrac{(\lambda e^t)^x}{x!} = e^{-\lambda} e^{\lambda e^t} = e^{\lambda(e^t - 1)}$

For example, the first derivative of M(t) with respect to t is $M^{(1)}(t) = \lambda e^t e^{\lambda(e^t - 1)}$, from which it follows that the mean of the Poisson random variable is $M^{(1)}(0) = \lambda$. The reader can show that $E[X^2] = M^{(2)}(0) = \lambda + \lambda^2$, from which it follows that $\mathrm{Var}(X) = \lambda$, where we have used Eq. (11).
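These mgf manipulations are easy to verify symbolically. The short sketch below uses SymPy (an assumption of this text, not a tool referenced in the handbook) to differentiate the Poisson mgf and recover the mean and variance.

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
M = sp.exp(lam * (sp.exp(t) - 1))            # mgf of the Poisson random variable

m1 = sp.diff(M, t, 1).subs(t, 0)             # E[X] = lambda
m2 = sp.diff(M, t, 2).subs(t, 0)             # E[X^2] = lambda + lambda^2
print(sp.simplify(m1), sp.simplify(m2), sp.simplify(m2 - m1**2))   # variance = lambda
```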

Example 8  The moment-generating function of the exponential random variable with pdf

$f(x) = \lambda e^{-\lambda x} \qquad x \ge 0$

is

$M(t) = E[e^{tX}] = \dfrac{\lambda}{\lambda - t} \qquad t < \lambda$

Tables 5 and 6 give the mgf, mean, and variance of several discrete and continuous random variables. The characteristic function $\varphi_X(t)$ is discussed in the following subsection.


1.7.2 Characteristic Function

Let X be a univariate random variable with pdf f(x) and cdf F(x). Then, the characteristic function (cf) of X is defined by

$\varphi_X(t) = \int_{-\infty}^{\infty} e^{itx} \, dF(x)$   (19)

where i is the imaginary unit. Like the mgf, the cf is unique and completely characterizes the distribution of the random variable. But, unlike the mgf, the cf always exists.

Note that Eq. (19) shows that $\varphi_X(t)$ is the Fourier transform of f(x).

Example 9  The characteristic function of the discrete uniform random variable U(n) with pmf

$p(x) = \dfrac{1}{n} \qquad x = 1, \ldots, n$

is

$\varphi_X(t) = \dfrac{1}{n} \sum_{x=1}^{n} e^{itx}$

Table 6  Moment-Generating Functions, Characteristic Functions, Means, and Variances of Some Continuous Probability Distributions


The main properties of the characteristic function are:

$\varphi_X(0) = 1$.
$|\varphi_X(t)| \le 1$.
$\varphi_X(-t) = \overline{\varphi_X(t)}$, where $\bar z$ is the conjugate of z.
If Z = aX + b, where X is a random variable and a and b are real constants, we have $\varphi_Z(t) = e^{itb}\varphi_X(at)$, where $\varphi_X(t)$ and $\varphi_Z(t)$ are the characteristic functions of X and Z, respectively.
The characteristic function of the sum of two independent random variables is the product of their characteristic functions, that is, $\varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t)$.
The characteristic function of a linear convex combination of random variables is the linear convex combination of their characteristic functions with the same coefficients: $\varphi_{aF_X + bF_Y}(t) = a\varphi_X(t) + b\varphi_Y(t)$.
The characteristic function of the sum of a random number N of iid random variables $X_1, \ldots, X_N$ is given by

$\varphi_S(t) = \varphi_N\!\left(\dfrac{\log \varphi_X(t)}{i}\right)$

where $\varphi_X(t)$, $\varphi_N(t)$, and $\varphi_S(t)$ are the characteristic functions of $X_i$, N, and $S = \sum_{i=1}^{N} X_i$, respectively.

One of the main applications of the characteristic function is to obtain the moments of the corresponding random variable. In fact, if we differentiate the characteristic function k times with respect to t, we obtain

$\varphi_X^{(k)}(0) = i^k \int_{-\infty}^{\infty} x^k \, dF(x) = i^k m_k$   (20)

from which we have

$m_k = \dfrac{\varphi_X^{(k)}(0)}{i^k}$

where $m_k$ is the kth moment of X with respect to the origin.

Example 11  The moments of the Bernoulli random variable with respect to the origin are all equal to p. In effect, its characteristic function is

$\varphi_X(t) = p e^{it} + q$

so that $\varphi_X^{(k)}(t) = p\, i^k e^{it}$ and, according to Eq. (20), we get

$m_k = \dfrac{\varphi_X^{(k)}(0)}{i^k} = p \qquad \text{for all } k$

More generally, the moments with respect to the origin of any random variable can be obtained in this way from its characteristic function.



Example 14  The characteristic function of the multinormal random variable is

$\varphi(t_1, \ldots, t_n) = \exp\left[\, i \sum_{k=1}^{n} t_k \mu_k - \dfrac{1}{2} \sum_{k,j=1}^{n} \sigma_{kj}\, t_k t_j \right]$

Example 15  The characteristic function of the multinomial random variable, $M(n; p_1, \ldots, p_k)$, can be shown to be

$\varphi(t_1, \ldots, t_k) = \left( p_1 e^{it_1} + \cdots + p_k e^{it_k} \right)^n$

Theorem 7  Transformations of Continuous Random Variables: Let (X_1, ..., X_n) be an n-dimensional random variable with pdf f(x_1, ..., x_n) defined on the set A, and let

$Y_1 = g_1(X_1, \ldots, X_n), \quad \ldots, \quad Y_n = g_n(X_1, \ldots, X_n)$   (21)

be a one-to-one continuous transformation from the set A to the set B. Then, the pdf of the random variable (Y_1, ..., Y_n) on the set B is

$f(h_1(y_1, \ldots, y_n), \, h_2(y_1, \ldots, y_n), \, \ldots, \, h_n(y_1, y_2, \ldots, y_n)) \, |\det(J)|$

where

$X_1 = h_1(Y_1, \ldots, Y_n)$
$X_2 = h_2(Y_1, \ldots, Y_n)$
$\ldots$
$X_n = h_n(Y_1, \ldots, Y_n)$

is the inverse transformation of Eq. (21) and $|\det(J)|$ is the absolute value of the determinant of the Jacobian matrix J of the transformation. The ijth element of J

is given by $\partial X_i / \partial Y_j$.

Example 16  Let X and Y be two independent normal N(0, 1) random variables. Then the joint pdf is

$f(x, y) = \dfrac{1}{2\pi} \exp\left[-\dfrac{x^2 + y^2}{2}\right]$

Consider the transformation

$U = X + Y$
$V = X - Y$

which implies that

$X = (U + V)/2$
$Y = (U - V)/2$

Then the Jacobian matrix is

$J = \begin{pmatrix} \partial X/\partial U & \partial X/\partial V \\ \partial Y/\partial U & \partial Y/\partial V \end{pmatrix} = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix}$

so that $|\det(J)| = 1/2$ and the joint pdf of (U, V) becomes

$g(u, v) = \dfrac{1}{4\pi} \exp\left[-\dfrac{u^2 + v^2}{4}\right]$

which is the product of a function of u and a function of v defined in a rectangle. Thus, U and V are independent N(0, 2) random variables.
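A quick Monte Carlo check of Example 16 (Python with NumPy assumed): the variances of U = X + Y and V = X - Y should be close to 2 and their correlation close to 0, consistent with independent N(0, 2) variables.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=200_000)            # X ~ N(0, 1)
y = rng.normal(size=200_000)            # Y ~ N(0, 1), independent of X
u, v = x + y, x - y

print(u.var(), v.var())                 # both close to 2
print(np.corrcoef(u, v)[0, 1])          # close to 0 (independence for jointly normal variables)
```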

1.8.2 Other Transformations

If the transformation in Eq. (21) is not one-to-one, the above method is not applicable. Assume that for each point (x_1, ..., x_n) in A there is one point in B, but that each point in B has more than one point in A. Assume further that there exists a finite partition (A_1, ..., A_m) of A such that the restriction of the given transformation to each A_i is a one-to-one transformation. Then, there exist transformations of B in A_i defined by

$X_1 = h_{1i}(Y_1, \ldots, Y_n)$
$X_2 = h_{2i}(Y_1, \ldots, Y_n)$
$\ldots$
$X_n = h_{ni}(Y_1, \ldots, Y_n)$

with Jacobian matrices $J_i$, i = 1, ..., m. Then, taking into account that the probability of the union of disjoint sets is the sum of the probabilities of the individual sets, we obtain the pdf of the random variable (Y_1, ..., Y_n) as

$g(y_1, \ldots, y_n) = \sum_{i=1}^{m} f(h_{1i}(y_1, \ldots, y_n), \ldots, h_{ni}(y_1, \ldots, y_n)) \, |\det(J_i)|$

A very useful application of the change-of-variables technique discussed in the previous section is that it provides a justification of an important method for simulating any random variable using the standard uniform variable U(0, 1).

1.9.1 The Univariate Case

Theorem 8  Let X be a univariate random variable with cdf F(x). Then, the random variable U = F(X) is distributed as a standard uniform variable U(0, 1).

Example 17  Simulating from a Probability Distribution: To generate a sample from a probability distribution f(x), we first compute the cdf F(x). Then we generate a sample u_1, ..., u_n from the standard uniform U(0, 1) and obtain the corresponding values $x_i = F^{-1}(u_i)$, the inverse of the cdf evaluated at u_i. For example, Fig. 16 shows the cdf F(x) and two values x_1 and x_2 corresponding to the uniform U(0, 1) numbers u_1 and u_2.
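As a concrete sketch of Example 17 (Python with NumPy assumed), the exponential distribution has F(x) = 1 - e^{-λx}, so F^{-1}(u) = -log(1 - u)/λ and samples follow directly from uniform numbers:

```python
import numpy as np

def sample_exponential(lam, n, rng=None):
    """Inverse-transform sampling: x_i = F^{-1}(u_i) with u_i ~ U(0, 1)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)                 # u_1, ..., u_n from U(0, 1)
    return -np.log(1.0 - u) / lam           # F^{-1}(u) for F(x) = 1 - exp(-lam*x)

x = sample_exponential(lam=2.0, n=100_000)
print(x.mean(), 1 / 2.0)                    # sample mean should be close to 1/lambda
```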

Theorem 9  Simulating Normal Random Variables: Let X and Y be independent standard uniform random variables U(0, 1). Then, the random variables U and V defined by

$U = (-2 \log X)^{1/2} \sin(2\pi Y)$
$V = (-2 \log X)^{1/2} \cos(2\pi Y)$

are independent N(0, 1) random variables.
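Theorem 9 is the Box-Muller method; a direct transcription as a sketch (Python with NumPy assumed):

```python
import numpy as np

def box_muller(n, rng=None):
    """Generate n pairs of independent N(0, 1) variates from U(0, 1) variates."""
    rng = rng or np.random.default_rng()
    x = rng.uniform(size=n)                  # X ~ U(0, 1)
    y = rng.uniform(size=n)                  # Y ~ U(0, 1)
    r = np.sqrt(-2.0 * np.log(x))            # (-2 log X)^{1/2}
    return r * np.sin(2 * np.pi * y), r * np.cos(2 * np.pi * y)

u, v = box_muller(100_000)
print(u.mean(), u.std(), np.corrcoef(u, v)[0, 1])   # approximately 0, 1, 0
```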

1.9.2 The Multivariate Case

In the multivariate case (X_1, ..., X_n), we can simulate using the conditional cdfs

$F(x_1), \; F(x_2 \mid x_1), \; \ldots, \; F(x_n \mid x_1, \ldots, x_{n-1})$

as follows. First we simulate X_1 with F(x_1), obtaining x_1. Once we have simulated X_{k-1}, obtaining x_{k-1}, we simulate X_k using $F(x_k \mid x_1, \ldots, x_{k-1})$, and we continue the process until we have simulated all the X's. We repeat the whole process as many times as desired.

Figure 16  Sampling from a probability distribution f(x) using the corresponding cdf F(x).

1.10 ORDER STATISTICS AND EXTREMES

Let (X_1, ..., X_n) be a random sample coming from a pdf f(x) and cdf F(x). Arrange (X_1, ..., X_n) in increasing order of magnitude and let $X_{1:n} \le \cdots \le X_{n:n}$ be the ordered values. Then, the rth element of this new sequence, $X_{r:n}$, is called the rth order statistic of the sample.

Order statistics are very important in practice, especially the minimum, $X_{1:n}$, and the maximum, $X_{n:n}$, because they are the critical values used in engineering, physics, medicine, etc. (see, e.g., Castillo and Hadi [9-11]). In this section we study the distributions of order statistics.

1.10.1 Distributions of Order Statistics

The cdf of the rth order statistic $X_{r:n}$ is [12, 13]

$F_{r:n}(x) = P[X_{r:n} \le x] = 1 - F_{m(x)}(r - 1) = \sum_{k=r}^{n} \binom{n}{k} F^k(x)[1 - F(x)]^{n-k} = r\binom{n}{r} \int_0^{F(x)} u^{r-1}(1 - u)^{n-r} \, du = I_{F(x)}(r, \, n - r + 1)$   (22)

where m(x) is the number of elements in the sample with value $X_j \le x$ and $I_p(a, b)$ is the incomplete beta function, which is implicitly defined in Eq. (22).

If the population is absolutely continuous, then the pdf of $X_{r:n}$ is given by the derivative of Eq. (22) with respect to x:

$f_{r:n}(x) = \dfrac{1}{\beta(r, \, n - r + 1)} F^{r-1}(x)[1 - F(x)]^{n-r} f(x)$   (23)

where $\beta(a, b)$ is the beta function.
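Equation (22) is easy to check by simulation. The sketch below (Python with NumPy and SciPy assumed; the values of n, r, and the evaluation point are arbitrary) compares the empirical probability P[X_{r:n} ≤ x] for uniform samples with the incomplete beta function I_{F(x)}(r, n - r + 1).

```python
import numpy as np
from scipy.stats import beta

n, r, x0 = 10, 3, 0.4                 # sample size, order, evaluation point (assumed values)
rng = np.random.default_rng(2)

samples = np.sort(rng.uniform(size=(50_000, n)), axis=1)
empirical = np.mean(samples[:, r - 1] <= x0)       # P[X_{r:n} <= x0]; for U(0, 1), F(x0) = x0

theoretical = beta.cdf(x0, r, n - r + 1)           # I_{F(x0)}(r, n - r + 1), Eq. (22)
print(empirical, theoretical)
```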

Example 18  Distribution of the minimum order statistic. Letting r = 1 in Eqs. (22) and (23) we obtain the cdf and pdf of the minimum order statistic:

$F_{X_{1:n}}(x) = \sum_{k=1}^{n} \binom{n}{k} F^k(x)[1 - F(x)]^{n-k} = 1 - [1 - F(x)]^n \qquad f_{X_{1:n}}(x) = n[1 - F(x)]^{n-1} f(x)$

Similarly, letting r = n, we obtain the cdf and pdf of the maximum order statistic:

$F_{X_{n:n}}(x) = F^n(x) \qquad \text{and} \qquad f_{X_{n:n}}(x) = n F^{n-1}(x) f(x)$

1.10.2 Distributions of Subsets of Order Statistics

Let $X_{r_1:n}, \ldots, X_{r_k:n}$ be the subset of k order statistics of orders $r_1 < \cdots < r_k$ of a random sample of size n coming from a population with pdf f(x) and cdf F(x). With the aim of obtaining the joint distribution of this set, consider the event $x_j \le X_{r_j:n} < x_j + \Delta x_j$, $1 \le j \le k$, for small values of $\Delta x_j$, $1 \le j \le k$ (see Fig. 17). That is, k values in the sample belong to the intervals $(x_j, x_j + \Delta x_j)$ for $1 \le j \le k$ and the rest are distributed in such a way that exactly $(r_j - r_{j-1} - 1)$ belong to the interval $(x_{j-1} + \Delta x_{j-1}, x_j)$ for $1 \le j \le k + 1$, where $\Delta x_0 = 0$, $r_0 = 0$, $r_{k+1} = n + 1$, $x_0 = -\infty$ and $x_{k+1} = \infty$.

Consider the following multinomial experiment with the 2k + 1 possible outcomes associated with the 2k + 1 intervals illustrated in Fig. 17. We obtain a sample of size n from the population and determine to which of the intervals each observation belongs. Since we assume independence and replacement, the number of elements in each interval is a multinomial random variable with parameters

$\{n; \; f(x_1)\Delta x_1, \ldots, f(x_k)\Delta x_k, \; [F(x_1) - F(x_0)], [F(x_2) - F(x_1)], \ldots, [F(x_{k+1}) - F(x_k)]\}$

where the parameters are n (the sample size) and the probabilities associated with the 2k + 1 intervals. Consequently, we can use the results for multinomial random variables to obtain the joint pdf of the k order statistics:

$f_{r_1, \ldots, r_k:n}(x_1, \ldots, x_k) = n! \prod_{j=1}^{k} f(x_j) \prod_{j=1}^{k+1} \dfrac{[F(x_j) - F(x_{j-1})]^{r_j - r_{j-1} - 1}}{(r_j - r_{j-1} - 1)!} \qquad x_1 \le \cdots \le x_k$   (24)

Figure 17  An illustration of the multinomial experiment used to determine the joint pdf of a subset of k order statistics.


1.10.3.1 Joint Distribution of the Maximum and the Minimum

Setting k = 2, $r_1 = 1$, and $r_2 = n$ in Eq. (24), we obtain the joint distribution of the maximum and the minimum of a sample of size n, which becomes

$f_{1,n:n}(x_1, x_2) = n(n - 1) f(x_1) f(x_2) [F(x_2) - F(x_1)]^{n-2} \qquad x_1 \le x_2$

1.10.3.2 Joint Distribution of Two Consecutive Order Statistics

Setting k = 2, $r_1 = i$, and $r_2 = i + 1$ in Eq. (24), we get the joint density of the statistics of orders i and i + 1:

$f_{i,i+1:n}(x_1, x_2) = \dfrac{n!}{(i - 1)!(n - i - 1)!} f(x_1) f(x_2) F^{i-1}(x_1) [1 - F(x_2)]^{n-i-1} \qquad x_1 \le x_2$

1.10.3.3 Joint Distribution of Any Two Order Statistics

The joint distribution of the statistics of orders r and s, with r < s, is obtained from Eq. (24) by setting k = 2, $r_1 = r$, and $r_2 = s$:

$f_{r,s:n}(x_1, x_2) = \dfrac{n!}{(r - 1)!(s - r - 1)!(n - s)!} F^{r-1}(x_1) [F(x_2) - F(x_1)]^{s-r-1} [1 - F(x_2)]^{n-s} f(x_1) f(x_2) \qquad x_1 \le x_2$

Similarly, the joint pdf of all n order statistics follows from Eq. (24) setting k = n:

$f_{1, \ldots, n:n}(x_1, \ldots, x_n) = n! \prod_{i=1}^{n} f(x_i) \qquad x_1 \le \cdots \le x_n$

1.10.4 Limiting Distributions of Order Statistics

We have seen that the cdfs of the maximum $Z_n$ and minimum $W_n$ of a sample of size n coming from a population with cdf F(x) are $H_n(x) = P[Z_n \le x] = F^n(x)$ and $L_n(x) = P[W_n \le x] = 1 - [1 - F(x)]^n$. When n tends to infinity we have

$\lim_{n \to \infty} H_n(a_n + b_n x) = \lim_{n \to \infty} F^n(a_n + b_n x) = H(x) \qquad \forall x$   (25)

and

$\lim_{n \to \infty} L_n(c_n + d_n x) = \lim_{n \to \infty} \left\{ 1 - [1 - F(c_n + d_n x)]^n \right\} = L(x) \qquad \forall x$   (26)

Definition 13  Domain of Attraction of a Given Distribution: A given distribution F(x) is said to belong to the domain of attraction for maxima of H(x) if Eq. (25) holds for at least one pair of sequences $\{a_n\}$ and $\{b_n > 0\}$. Similarly, when F(x) satisfies Eq. (26) we say that it belongs to the domain of attraction for minima of L(x).

The problem of limit distributions can then be stated as:

1. Find conditions under which Eqs. (25) and (26) are satisfied.
2. Give rules for building the sequences $\{a_n\}$, $\{b_n\}$, $\{c_n\}$, and $\{d_n\}$.


Theorem 10  Feasible Limit Distribution for Maxima: The only nondegenerate distributions H(x) satisfying Eq. (25) are

Frechet: $H_{1,\gamma}(x) = \exp(-x^{-\gamma})$, $x \ge 0$
Weibull: $H_{2,\gamma}(x) = \exp[-(-x)^{\gamma}]$, $x \le 0$
Gumbel: $H_{3,0}(x) = \exp[-\exp(-x)]$, $-\infty < x < \infty$

Theorem 11  Feasible Limit Distribution for Minima: The only nondegenerate distributions L(x) satisfying Eq. (26) are

Frechet: $L_{1,\gamma}(x) = 1 - \exp[-(-x)^{-\gamma}]$, $x \le 0$
Weibull: $L_{2,\gamma}(x) = 1 - \exp(-x^{\gamma})$, $x \ge 0$
Gumbel: $L_{3,0}(x) = 1 - \exp(-\exp x)$, $-\infty < x < \infty$

To know the domains of attraction of a given distribution and the associated sequences, the reader is referred to Galambos [16].

Some important implications of his theorems are:

1. Only three distributions (Frechet, Weibull, and Gumbel) can occur as limit distributions for maxima and minima.
2. Rules for determining whether a given distribution F(x) belongs to the domain of attraction of these three distributions can be obtained.
3. Rules for obtaining the corresponding sequences $\{a_n\}$ and $\{b_n\}$ or $\{c_n\}$ and $\{d_n\}$ can be obtained.
4. A distribution with no finite end in the associated tail cannot belong to the Weibull domain of attraction.
5. A distribution with finite end in the associated tail cannot belong to the Frechet domain of attraction.

Next we give another, more efficient, alternative to solve the same problem. We give two theorems [13, 17] that allow this problem to be solved. The main advantage is that we use a single rule for the three cases.

Theorem 12  Domain of Attraction for Maxima of a Given Distribution: A necessary and sufficient condition for the continuous cdf F(x) to belong to the domain of attraction for maxima of $H_c(x)$ is that

$\lim_{\varepsilon \to 0} \dfrac{F^{-1}(1 - \varepsilon) - F^{-1}(1 - 2\varepsilon)}{F^{-1}(1 - 2\varepsilon) - F^{-1}(1 - 4\varepsilon)} = 2^c$

where c is a constant. This implies that:

If c < 0, F(x) belongs to the Weibull domain of attraction for maxima.
If c = 0, F(x) belongs to the Gumbel domain of attraction for maxima.
If c > 0, F(x) belongs to the Frechet domain of attraction for maxima.

Theorem 13  Domain of Attraction for Minima of a Given Distribution: A necessary and sufficient condition for the continuous cdf F(x) to belong to the domain of attraction for minima of $L_c(x)$ is that

$\lim_{\varepsilon \to 0} \dfrac{F^{-1}(\varepsilon) - F^{-1}(2\varepsilon)}{F^{-1}(2\varepsilon) - F^{-1}(4\varepsilon)} = 2^c$

This implies that:

If c < 0, F(x) belongs to the Weibull domain of attraction for minima.
If c = 0, F(x) belongs to the Gumbel domain of attraction for minima.
If c > 0, F(x) belongs to the Frechet domain of attraction for minima.

Table 7 shows the domains of attraction for maxima and minima of some common distributions.

1.11 PROBABILITY PLOTS

One of the graphical methods commonly used by engineers is the probability plot. The basic idea of probability plots, for a biparametric family of distributions, consists of modifying the random variable and the probability drawing scales in such a manner that the cdfs become a family of straight lines. In this way, when the cdf is drawn, a linear trend is an indication of the sample coming from the corresponding family. In addition, probability plots can be used to estimate the parameters of the family, once we have checked that the cdf belongs to the family.

However, in practice we do not usually know the exact cdf. We, therefore, use the empirical cdf as an approximation to the true cdf. Due to the random character of samples, even in the case of a sample coming from the given family, the corresponding graph will not be an exact straight line. This complicates things a little bit, but if the trend approximates linearity, we can say that the sample comes from the associated family.

In this section we start by discussing the empirical cdf and defining the probability graph, and then give examples of the probability graph for some distributions useful in engineering applications.

1.11.1 Empirical Cumulative Distribution Function

Let $x_{i:n}$ denote the ith observed order statistic in a random sample of size n. Then the empirical cumulative distribution function (ecdf) is defined as

$F_n(x) = \begin{cases} 0 & \text{if } x < x_{1:n} \\ i/n & \text{if } x_{i:n} \le x < x_{i+1:n}, \quad i = 1, \ldots, n - 1 \\ 1 & \text{if } x \ge x_{n:n} \end{cases}$

This is a jump (step) function. However, there exist several methods that can be used to smooth this function, such as linear interpolation methods [18].

1.11.2 Fundamentals of Probability Plots

A probability plot is simply a scatter plot with transformed scales chosen so that the two-dimensional family of cdfs becomes the set of straight lines with positive slope (see Castillo [19], pp. 131-173).

Let F(x; a, b) be a biparametric family of cdfs, where a and b are the parameters. We look for a transformation

$\xi = g(x) \qquad \eta = h(y)$   (27)

such that the family of curves y = F(x; a, b) after transformation (27) becomes a family of straight lines. Note that this implies

$h(y) = h[F(x; a, b)] = a g(x) + b \;\Leftrightarrow\; \eta = a\xi + b$   (28)

where the variable $\eta$ is called the reduced variable. Thus, for the existence of a probability plot associated with a given family of cdfs F(x; a, b) it is necessary to have $F(x; a, b) = h^{-1}[a g(x) + b]$.

As we mentioned above, in cases where the true cdf is unknown we estimate the cdf by the ecdf. But the ecdf has steps 0, 1/n, 2/n, ..., 1. However, the two extremes 0 and 1, when we apply the scale transformation, become $-\infty$ and $\infty$, respectively, in the case of many families. Thus, they cannot be drawn.

Due to the fact that at the order statistic $x_{i:n}$ the probability jumps from (i - 1)/n to i/n, one solution, proposed by Hazen [20], consists of using the value (i - 1/2)/n; thus, we draw on the probability plot the points

$(x_{i:n}, \; (i - 0.5)/n) \qquad i = 1, \ldots, n$

Other alternative plotting positions are given in Table 8. (For a justification of these formulas see Castillo [13], pp. 161-166.)

In the following subsections we give examples of probability plots for some commonly used random variables.

1.11.3 The Normal Probability Plot

The cdf $F(x; \mu, \sigma)$ of a normal random variable can be written as

Table 7  Domains of Attraction of the Most Common Distributions

Table 8 Plotting Positions Formulas


$F(x; \mu, \sigma) = \Phi\!\left(\dfrac{x - \mu}{\sigma}\right)$   (29)

where $\mu$ and $\sigma$ are the mean and the standard deviation, respectively, and $\Phi(x)$ is the cdf of the standard normal variable N(0, 1). Then, according to Eqs. (27) and (28), Eq. (29) gives

$\xi = g(x) = x \qquad \eta = h(y) = \Phi^{-1}(y) \qquad a = \dfrac{1}{\sigma} \qquad b = -\dfrac{\mu}{\sigma}$

and the family of straight lines becomes

$\eta = a\xi + b = \dfrac{x - \mu}{\sigma}$   (30)

Once the normality assumption has been checked, estimation of the parameters $\mu$ and $\sigma$ is straightforward. In fact, setting $\eta = 0$ and $\eta = 1$ in Eq. (30), we obtain

$\eta = 0 \;\Rightarrow\; 0 = (x - \mu)/\sigma \;\Rightarrow\; x = \mu$
$\eta = 1 \;\Rightarrow\; 1 = (x - \mu)/\sigma \;\Rightarrow\; x = \mu + \sigma$   (31)

Figure 18 shows a normal probability plot, where the ordinate axis has been transformed by $\eta = \Phi^{-1}(y)$, whereas the abscissa axis remains untransformed. Note that we show the probability scale Y and the reduced scale $\eta$.
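In practice the plot is built from the Hazen plotting positions and a straight line is fitted; its slope and intercept then give σ and μ through Eq. (31). A sketch of this procedure (Python with NumPy and SciPy assumed; Φ^{-1} is taken as scipy.stats.norm.ppf and the data are simulated for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
sample = rng.normal(loc=10.0, scale=2.0, size=200)   # data assumed to be normal

x = np.sort(sample)                                  # x_{1:n} <= ... <= x_{n:n}
n = len(x)
p = (np.arange(1, n + 1) - 0.5) / n                  # Hazen plotting positions (i - 0.5)/n
eta = norm.ppf(p)                                    # reduced variable eta = Phi^{-1}(p)

a, b = np.polyfit(x, eta, deg=1)                     # fit eta = a*x + b
sigma_hat, mu_hat = 1.0 / a, -b / a                  # from a = 1/sigma and b = -mu/sigma
print(mu_hat, sigma_hat)
```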

1.11.4 The Log-Normal Probability Plot

The case of the log-normal probability plot can be reduced to the case of the normal plot if we take into account that X is log-normal iff Y = log(X) is normal. Consequently, we transform X into log(x) and obtain a normal plot. Thus, the only change consists of transforming the X scale to a logarithmic scale (see Fig. 19). The mean and the standard deviation can then be estimated as in the normal case, working on the logarithmic scale.

Figure 18 An example of a normal probability plot
