Stochastic Control (edited), Part 15

Stochastic Control552 Goodness-of-fittesting. It is assumed that     ,x expx,baxf~ X 1 i      = )|( , ,15)1(1i x(  0), , , (93) where the parameters  and  are unknown; (=0.87). Thus, for this example, r = n = 15, k = 3, m = 5, 1 = 0.95, 1.6X 1   , and S = 170.8. It can be shown that the ,2n)1(1j , )XX)(1in( )XX)(1in( 1U j 2j 2i 1ii 1j 2i 1ii j                               (94) are i.i.d. U(0,1) rv’s (Nechval et al., 1998). We assess the statistical significance of departures from the left-truncated Weibull model by performing the Kolmogorov-Smirnov goodness- of-fit test. We use the  K statistic (Muller et al., 1979). The rejection region for the  level of significance is {  K >  K n;  }. The percentage points for  K n;  were given by Muller et al. (1979). For this example,  K = 0.220 <  K n=13;  =0.05 = 0.361. (95) Thus, there is not evidence to rule out the left-truncated Weibull model. It follows from (92), for ,5.0 kmn km 05.0    (96) that     .51 151505.0 15 15 8.170 1.61 kmn km n s xh 87.0 δ n δ                                                             1/ 14 1 1/ 1 1 1 (97) Thus, the manufacturer has 95% assurance that no failures will occur in each shipment before h = 5 month intervals. 5. Examples 5.1 Example 1 An electronic component is required to pass a performance test of 500 hours. The specification is that 20 randomly selected items shall be placed on test simultaneously, and 5 failures or less shall occur during 500 hours. The cost of performing the test is $105 per hour. The cost of redesign is $5000. Assume that the failure distribution follows the one-parameter exponential model (15). Three failures are observed at 80, 220, and 310 hours. Should the test be continued? We have from (19) and (20) ;hours 1960 3 3101731022080     (98) 17 2 6 500 pas 1960 310 exp 1960 x exp 1960 310 exp !14 !2 !17 p                                      ;79665.0dx 1960 x exp 1960 1 6 15 6               (99) Since hours 05.430dx),x|x(fx 0 r x s r ss     ,hours 94.347 105 5000 79665.0310 c c px 1 2 pas k   (100) abandon the present test and initiate a redesign. 5.2 Example 2 Consider the following problem. A specification for an automotive hood latch is that, of 30 items placed on test simultaneously, ten or fewer shall fall during 3000 cycles of operation. The cost of performing the test is $2.50 per cycle. The cost of redesign is $8500. Seven failures, which follow the Weibull distribution with the probability density function (25), are observed at 48, 300, 315, 492, 913, 1108, and 1480 cycles. Shall the test be continued beyond the 1480th cycle? It follows from (29) and (30) that 6.2766    and .9043.0  In turn, these estimates yield pas p  =0.25098. Since hours 6.1877dx),x|x(fx 0 r x s r ss     ,hours 33 .2333 5.2 8500 25098.01480 c c px 1 2 pas k   (101) continue the present test. 6. Stopping Rule in Sequential-Sample Testing At the planning stage of a statistical investigation the question of sample size (n) is critical. For such an important issue, there is a surprisingly small amount of published literature. Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest. 
5. Examples

5.1 Example 1

An electronic component is required to pass a performance test of 500 hours. The specification is that 20 randomly selected items shall be placed on test simultaneously, and 5 failures or fewer shall occur during the 500 hours. The cost of performing the test is $105 per hour. The cost of redesign is $5000. Assume that the failure distribution follows the one-parameter exponential model (15). Three failures are observed at 80, 220, and 310 hours. Should the test be continued? We have from (19) and (20)

\hat\theta=\frac{80+220+310+17\cdot 310}{3}=1960\ \text{hours};    (98)

\hat p_{\mathrm{pas}}=\frac{17!}{2!\,14!}\int_{500}^{\infty}\left[1-e^{-(x-310)/1960}\right]^{2}\left[e^{-(x-310)/1960}\right]^{14}\frac{e^{-(x-310)/1960}}{1960}\,dx=0.79665.    (99)

Since

\bar x_s=\int_{x_r}^{\infty}x_s\,f_s(x_s\mid x_r,\hat\theta)\,dx_s=430.05\ \text{hours}>x_k=x_r+\hat p_{\mathrm{pas}}\,\frac{c_2}{c_1}=310+0.79665\cdot\frac{5000}{105}=347.94\ \text{hours},    (100)

abandon the present test and initiate a redesign.
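The arithmetic of Example 1 can be checked with a short script. Here the passing probability is evaluated in the equivalent binomial form (at most two further failures among the 17 survivors in (310, 500] under the fitted exponential model), which reproduces the 0.79665 of Eq. (99) to within rounding; the 430.05-hour predictive mean of Eq. (100) is replaced by a simple plug-in approximation, so that figure is indicative only.

```python
import math

# Example 1, one-parameter exponential model (sketch).
n, allowed = 20, 5                 # items on test, failures allowed in 500 hours
t_spec = 500.0                     # required test duration, hours
c1, c2 = 105.0, 5000.0             # cost per test hour, cost of redesign
failures = [80.0, 220.0, 310.0]
r, x_r = len(failures), failures[-1]

theta_hat = (sum(failures) + (n - r) * x_r) / r          # Eq. (98): 1960 hours

q = 1.0 - math.exp(-(t_spec - x_r) / theta_hat)          # P(a survivor fails in (310, 500])
p_pas = sum(math.comb(n - r, j) * q ** j * (1.0 - q) ** (n - r - j)
            for j in range(allowed - r + 1))             # ~ 0.797 (0.79665 in the text)

x_k = x_r + p_pas * c2 / c1                              # Eq. (100): ~ 347.9 hours
x_next = x_r + theta_hat / (n - r)                       # plug-in ~ 425 h (text: 430.05 h)
print("abandon and redesign" if x_next > x_k else "continue the test")
```

The decision agrees with the text: the expected next failure lies beyond the break-even point x_k, so the test is abandoned.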
5.2 Example 2

Consider the following problem. A specification for an automotive hood latch is that, of 30 items placed on test simultaneously, ten or fewer shall fail during 3000 cycles of operation. The cost of performing the test is $2.50 per cycle. The cost of redesign is $8500. Seven failures, which follow the Weibull distribution with the probability density function (25), are observed at 48, 300, 315, 492, 913, 1108, and 1480 cycles. Should the test be continued beyond the 1480th cycle? It follows from (29) and (30) that \hat\beta = 2766.6 and \hat\delta = 0.9043. In turn, these estimates yield \hat p_{\mathrm{pas}} = 0.25098. Since

\bar x_s=\int_{x_r}^{\infty}x_s\,f_s(x_s\mid x_r,\hat\beta,\hat\delta)\,dx_s=1877.6\ \text{cycles}<x_k=x_r+\hat p_{\mathrm{pas}}\,\frac{c_2}{c_1}=1480+0.25098\cdot\frac{8500}{2.5}=2333.33\ \text{cycles},    (101)

continue the present test.
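The corresponding check for Example 2 is sketched below. It takes the reported estimates as given and assumes the pdf (25) has the form f(x) = (δ/β) x^{δ-1} exp(-x^δ/β), so that the passing probability is the conditional binomial probability of at most three further failures among the 23 unfailed items before 3000 cycles; this reproduces the reported 0.25098 to within rounding, while the predictive mean of 1877.6 cycles is simply quoted from the text.

```python
import math

# Example 2, Weibull model (sketch with assumed parameterization of pdf (25)).
n, allowed = 30, 10
T = 3000.0                                   # required number of cycles
c1, c2 = 2.50, 8500.0                        # cost per cycle, cost of redesign
failures = [48.0, 300.0, 315.0, 492.0, 913.0, 1108.0, 1480.0]
r, x_r = len(failures), failures[-1]
beta_hat, delta_hat = 2766.6, 0.9043         # estimates quoted from Eqs. (29)-(30)

# Conditional probability that a survivor of x_r fails before T,
# with survival function S(t) = exp(-t**delta / beta)
q = 1.0 - math.exp(-(T ** delta_hat - x_r ** delta_hat) / beta_hat)
p_pas = sum(math.comb(n - r, j) * q ** j * (1.0 - q) ** (n - r - j)
            for j in range(allowed - r + 1))             # ~ 0.251 (0.25098 in the text)

x_k = x_r + p_pas * c2 / c1                              # Eq. (101): ~ 2333 cycles
x_next = 1877.6                                          # predictive mean quoted from the text
print("continue the test" if x_next <= x_k else "abandon and redesign")
```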
6. Stopping Rule in Sequential-Sample Testing

At the planning stage of a statistical investigation the question of sample size (n) is critical. For such an important issue, there is a surprisingly small amount of published literature. Engineers who conduct reliability tests need to choose the sample size when designing a test plan. The model parameters and quantiles are the typical quantities of interest, and the maximum likelihood method is often used to estimate them. The large-sample procedure for choosing the sample size relies on the property that the distribution of the associated t-like quantities is close to the standard normal in large samples. The normal approximation is, however, only first-order accurate in general. When the sample size is not large enough, or when there is censoring, the normal approximation is not an accurate way to obtain confidence intervals, so a sample size determined by such a procedure is dubious. Sampling is both expensive and time consuming. Hence, there are situations where it is more efficient to take samples sequentially, as opposed to all at one time, and to define a stopping rule to terminate the sampling process. The case where the entire sample is drawn at one instance is known as "fixed sampling". The case where samples are taken in successive stages, according to the results obtained from the previous samplings, is known as "sequential sampling". Taking samples sequentially and assessing their results at each stage allows the possibility of stopping the process and reaching an early decision. If the situation is clearly favorable or unfavorable (for example, if the sample shows that a widget's quality is definitely good or poor), then terminating the process early saves time and resources. Only in the case where the data are ambiguous do we continue sampling; only then do we require additional information to make a better decision. In this section, an optimal stopping rule is proposed for determining an efficient sample size sequentially when a warranty period is to be assigned.

6.1 Stopping Rule on the Basis of the Expected Beneficial Effect

Suppose the random variables X_1, X_2, ..., all from the same population, are observed sequentially and follow the two-parameter Weibull fatigue-crack initiation lifetime distribution (64). After the nth observation (n ≥ n_0, where n_0 is the initial sample size needed to estimate the unknown parameters of the underlying probability model for the data) the experimenter can stop and receive the beneficial effect on performance

c_1\,h^{PL}_{(1:m);\alpha}-cn,    (102)

where c_1 is the unit value of the lower conditional (1−α) prediction limit (warranty period) h^{PL}_{(1:m);\alpha}\equiv h^{PL}_{(1:m);\alpha}(\mathbf{x}^n) (Nechval et al., 2007a, 2007b), \mathbf{x}^n = (x_1, ..., x_n), and c is the sampling cost. Below a rule is given to determine whether the experimenter should stop at the nth observation, \mathbf{x}^n, or continue to the (n+1)st observation, X_{n+1}, at which time he is faced with this decision all over again. Consider h^{PL}_{(1:m);\alpha}(X_{n+1}, \mathbf{x}^n) as a function of the random variable X_{n+1} when x_1, ..., x_n are known; its expected value can then be found as

E\left\{h^{PL}_{(1:m);\alpha}(X_{n+1},\mathbf{x}^n)\mid\mathbf{x}^n\right\}=\int_0^{\infty}\!\int_0^{\infty}h^{PL}_{(1:m);\alpha}(x_{n+1},\mathbf{x}^n)\,f(x_{n+1},v\mid\mathbf{x}^n)\,dx_{n+1}\,dv,    (103)

where

f(x_{n+1},v\mid\mathbf{x}^n)=\frac{n\,v^{\,n-1}\,x_{n+1}^{\,v-1}\prod_{i=1}^{n}x_i^{\,v-1}\left(\sum_{i=1}^{n}x_i^{v}+x_{n+1}^{v}\right)^{-(n+1)}}{\int_0^{\infty}v^{\,n-2}\prod_{i=1}^{n}x_i^{\,v-1}\left(\sum_{i=1}^{n}x_i^{v}\right)^{-n}dv},    (104)

the maximum likelihood estimates \hat\beta and \hat\delta of β and δ, respectively, are determined from equations (66) and (67), and f(x_{n+1}\mid\mathbf{x}^n)=\int_0^{\infty}f(x_{n+1},v\mid\mathbf{x}^n)\,dv is the predictive probability density function of X_{n+1}. Now the optimal stopping rule is to determine the expected beneficial effect on performance for continuing,

c_1\,E\left\{h^{PL}_{(1:m);\alpha}(X_{n+1},\mathbf{x}^n)\mid\mathbf{x}^n\right\}-c(n+1),    (105)

and compare this with (102). If

c_1\,E\left\{h^{PL}_{(1:m);\alpha}(X_{n+1},\mathbf{x}^n)\mid\mathbf{x}^n\right\}-c(n+1)>c_1\,h^{PL}_{(1:m);\alpha}(\mathbf{x}^n)-cn,    (106)

it is profitable to continue; if

c_1\,E\left\{h^{PL}_{(1:m);\alpha}(X_{n+1},\mathbf{x}^n)\mid\mathbf{x}^n\right\}-c(n+1)\le c_1\,h^{PL}_{(1:m);\alpha}(\mathbf{x}^n)-cn,    (107)

the experimenter should stop.
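A Monte Carlo sketch of the decision rule (102), (105)-(107) is given below. It is a simplification rather than the chapter's procedure: the lower (1−α) prediction limit h is approximated by a plug-in Weibull quantile that ignores parameter uncertainty, the predictive density (104) is replaced by the fitted Weibull itself, and the data and cost values are invented for illustration.

```python
# Simplified sketch of the sequential stopping rule (105)-(107); all numerical
# values below are assumptions, not taken from the chapter.
import numpy as np
from scipy import stats

def h_plugin(x, m, alpha):
    """Plug-in warranty period h: P(min of m future lifetimes > h) = 1 - alpha."""
    shape, _, scale = stats.weibull_min.fit(x, floc=0.0)
    return scale * (-np.log((1.0 - alpha) ** (1.0 / m))) ** (1.0 / shape)

def expected_gain_if_continue(x, m, alpha, c1, c, n_sim=500, seed=0):
    """Approximation of Eq. (105) by simulating X_{n+1} from the fitted model."""
    rng = np.random.default_rng(seed)
    shape, _, scale = stats.weibull_min.fit(x, floc=0.0)
    x_new = scale * rng.weibull(shape, size=n_sim)
    h_vals = [h_plugin(np.append(x, xi), m, alpha) for xi in x_new]
    return c1 * np.mean(h_vals) - c * (len(x) + 1)

x = np.array([1.2, 2.7, 3.1, 4.4, 5.0, 6.3, 7.8, 9.1])   # observed lifetimes (made up)
m, alpha, c1, c = 5, 0.05, 100.0, 20.0                   # illustrative constants
gain_stop = c1 * h_plugin(x, m, alpha) - c * len(x)      # Eq. (102)
gain_cont = expected_gain_if_continue(x, m, alpha, c1, c)
print("continue sampling" if gain_cont > gain_stop else "stop")   # Eqs. (106)-(107)
```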
7. Conclusions

Determining when to stop a statistical test is an important management decision. Several stopping criteria have been proposed, including criteria based on statistical similarity, the probability that the system has a desired reliability, and the expected cost of remaining faults. This paper presents a new stopping rule in fixed-sample testing, based on the statistical estimation of the total costs involved in the decision to continue beyond an early failure, as well as a stopping rule in sequential-sample testing to determine when testing should be stopped. The paper considers the problem that can be stated as follows. A new product is submitted for lifetime testing. The product will be accepted if a random sample of n items shows fewer than s failures in performance testing. We want to know whether to stop the test before it is completed if the results of the early observations are unfavorable. A suitable stopping decision saves the cost of the waiting time for completion. On the other hand, an incorrect stopping decision causes an unnecessary design change and a complete rerun of the test. It is assumed that the redesign would improve the product to such an extent that it would definitely be accepted in a new lifetime test. The paper presents a stopping rule based on the statistical estimation of the total costs involved in the decision to continue beyond an early failure. Sampling is both expensive and time consuming. The cost of sampling plays a fundamental role, and since there are many practical situations where there is a time cost and an event cost, a sampling cost per observed event and a cost per unit time are both included. Hence, there are situations where it is more efficient to take samples sequentially, as opposed to all at one time, and to define a stopping rule to terminate the sampling process. One of these situations is considered in the paper. The practical applications of the stopping rules are illustrated with examples.

8. Acknowledgments

This research was supported in part by Grant No. 06.1936, Grant No. 07.2036, Grant No. 09.1014, and Grant No. 09.1544 from the Latvian Council of Science.
9. References

Amster, S. J. (1963). A modified Bayes stopping rule. The Annals of Mathematical Statistics, Vol. 34, pp. 1404-1413
Arrow, K. J.; Blackwell, D. & Girshick, M. A. (1949). Bayes and minimax solutions of sequential decision problems. Econometrica, Vol. 17, pp. 213-244
El-Sayyad, G. M. & Freeman, P. R. (1973). Bayesian sequential estimation of a Poisson process rate. Biometrika, Vol. 60, pp. 289-296
Freeman, P. R. (1970). Optimal Bayesian sequential estimation of the median effective dose. Biometrika, Vol. 57, pp. 79-89
Freeman, P. R. (1972). Sequential estimation of the size of a population. Biometrika, Vol. 59, pp. 9-17
Freeman, P. R. (1973). Sequential recapture. Biometrika, Vol. 60, pp. 141-153
Freeman, P. R. (1983). The secretary problem and its extensions: a review. International Statistical Review, Vol. 51, pp. 189-206
Hewitt, J. E. (1968). A note on prediction intervals based on partial observations in certain life test experiments. Technometrics, Vol. 10, pp. 850-853
Kaminsky, K. S. (1977). Comparison of prediction intervals for failure times when life is exponential. Technometrics, Vol. 19, pp. 83-86
Kendall, M. G. & Stuart, A. S. (1969). The Advanced Theory of Statistics, Vol. 1 (3rd edition), Charles Griffin and Co. Ltd, London
Lawless, J. F. (1971). A prediction problem concerning samples from the exponential distribution with applications in life testing. Technometrics, Vol. 13, pp. 725-730
Likes, J. (1974). Prediction of sth ordered observation for the two-parameter exponential distribution. Technometrics, Vol. 16, pp. 241-244
Lindley, D. V. & Barnett, B. N. (1965). Sequential sampling: two decision problems with linear losses for binomial and normal random variables. Biometrika, Vol. 52, pp. 507-532
Lingappaiah, G. S. (1973). Prediction in exponential life testing. Canadian Journal of Statistics, Vol. 1, pp. 113-117
Muller, P. H.; Neumann, P. & Storm, R. (1979). Tables of Mathematical Statistics, VEB Fachbuchverlag, Leipzig
Nechval, N. A. (1982). Modern Statistical Methods of Operations Research, RCAEI, Riga
Nechval, N. A. (1984). Theory and Methods of Adaptive Control of Stochastic Processes, RCAEI, Riga
Nechval, N. A. (1986). Effective invariant embedding technique for designing the new or improved statistical procedures of detection and estimation in signal processing systems, In: Signal Processing III: Theories and Applications, Young, I. T. et al. (Eds.), pp. 1051-1054, Elsevier Science Publishers B.V., North-Holland
Nechval, N. A. (1988a). A general method for constructing automated procedures for testing quickest detection of a change in quality control. Computers in Industry, Vol. 10, pp. 177-183
Nechval, N. A. (1988b). A new efficient approach to constructing the minimum risk estimators of state of stochastic systems from the statistical data of small samples, In: Preprint of the 8th IFAC Symposium on Identification and System Parameter Estimation, pp. 71-76, Beijing, P.R. China
Nechval, N. A. & Nechval, K. N. (1998). Characterization theorems for selecting the type of underlying distribution, In: Proceedings of the 7th Vilnius Conference on Probability Theory and 22nd European Meeting of Statisticians, pp. 352-353, TEV, Vilnius
Nechval, N. A. & Nechval, K. N. (1999). Invariant embedding technique and its statistical applications, In: Conference Volume of Contributed Papers of the 52nd Session of the International Statistical Institute, pp. 1-2, ISI - International Statistical Institute, Helsinki, http://www.stat.fi/isi99/proceedings/arkisto/varasto/nech0902.pdf
Nechval, N. A. & Nechval, K. N. (2000). State estimation of stochastic systems via invariant embedding technique, In: Cybernetics and Systems'2000, Trappl, R. (Ed.), Vol. 1, pp. 96-101, Austrian Society for Cybernetic Studies, Vienna
Nechval, N. A.; Nechval, K. N. & Vasermanis, E. K. (2001). Optimization of interval estimators via invariant embedding technique. IJCAS (An International Journal of Computing Anticipatory Systems), Vol. 9, pp. 241-255
Nechval, K. N.; Nechval, N. A. & Vasermanis, E. K. (2003a). Adaptive dual control in one biomedical problem. Kybernetes (The International Journal of Systems & Cybernetics), Vol. 32, pp. 658-665
Nechval, N. A.; Nechval, K. N. & Vasermanis, E. K. (2003b). Effective state estimation of stochastic systems. Kybernetes (The International Journal of Systems & Cybernetics), Vol. 32, pp. 666-678
Nechval, N. A. & Vasermanis, E. K. (2004). Improved Decisions in Statistics, SIA "Izglitibas soli", Riga
Nechval, K. N.; Nechval, N. A.; Berzins, G. & Purgailis, M. (2007a). Planning inspections in service of fatigue-sensitive aircraft structure components for initial crack detection. Maintenance and Reliability, Vol. 35, pp. 76-80
Nechval, K. N.; Nechval, N. A.; Berzins, G. & Purgailis, M. (2007b). Planning inspections in service of fatigue-sensitive aircraft structure components under crack propagation. Maintenance and Reliability, Vol. 36, pp. 3-8
Nechval, N. A.; Berzins, G.; Purgailis, M. & Nechval, K. N. (2008). Improved estimation of state of stochastic systems via invariant embedding technique. WSEAS Transactions on Mathematics, Vol. 7, pp. 141-159
Nechval, N. A.; Berzins, G.; Purgailis, M.; Nechval, K. N. & Zolova, N. (2009). Improved adaptive control of stochastic systems. Advances in Systems Science and Applications, Vol. 9, pp. 11-20
Petrucelli, J. D. (1988). Secretary problem, In: Encyclopedia of Statistical Sciences, Kotz, S. & Johnson, N. (Eds.), Vol. 8, pp. 326-329, Wiley, New York
Raiffa, H. & Schlaifer, R. (1968). Applied Statistical Decision Theory, Institute of Technology Press, Massachusetts
Samuels, S. M. (1991). Secretary problems, In: Handbook of Sequential Analysis, Ghosh, B. K. & Sen, P. K. (Eds.), pp. 35-60, Dekker, New York
Wald, A. & Wolfowitz, J. (1948). Optimum character of the sequential probability ratio test. The Annals of Mathematical Statistics, Vol. 19, pp. 326-339
A non-linear double stochastic model of return in financial markets

Vygintas Gontis, Julius Ruseckas and Aleksejus Kononovičius
Institute of Theoretical Physics and Astronomy of Vilnius University, Lithuania

1. Introduction

Volatility clustering, evaluated through slowly decaying auto-correlations, the Hurst effect or 1/f noise for absolute returns, is a characteristic property of most financial asset return time series (Willinger et al., 1999). Statistical analysis alone is not able to provide a definite answer for the presence or absence of the long-range dependence phenomenon in stock returns or volatility, unless economic mechanisms are proposed to understand the origin of such a phenomenon (Cont, 2005; Willinger et al., 1999). Whether the results of statistical analysis correspond to long-range dependence is a difficult question and the subject of an ongoing statistical debate (Cont, 2005). Extensive empirical analysis of financial market data, supporting the idea that long-range volatility correlations arise from trading activity, provides a valuable background for further development of long-range memory stochastic models (Gabaix et al., 2003; Plerou et al., 2001). The power-law behavior of the auto-regressive conditional duration process (Sato, 2004), based on the random multiplicative process, and its special case, the self-modulation process (Takayasu, 2003), exhibiting 1/f fluctuations, supported the idea of stochastic modeling with a power-law probability density function (PDF) and long-range memory. Thus agent-based economic models (Kirman & Teyssiere, 2002; Lux & Marchesi, 2000) as well as stochastic models (Borland, 2004; Gontis et al., 2008; 2010; Queiros, 2007) exhibiting the long-range dependence phenomenon in volatility or trading volume are of great interest and remain an active topic of research. Properties of stochastic multiplicative point processes have been investigated analytically and numerically, and the formula for the power spectrum has been derived (Gontis & Kaulakys, 2004). In the more recent papers Kaulakys et al. (2006); Kaulakys & Alaburda (2009); Ruseckas & Kaulakys (2010) the general form of the multiplicative stochastic differential equation (SDE) was derived in agreement with the model earlier proposed in Gontis & Kaulakys (2004). Following Gontis & Kaulakys (2004), a model of trading activity based on an SDE-driven Poisson-like process was presented in Gontis et al. (2008), and in the most recent paper, Gontis et al. (2010), we proposed a double stochastic model whose return time series yield two power-law statistics, i.e., the PDF and the power spectral density (PSD) of absolute return, mimicking the empirical data for the one-minute trading return on the NYSE. In this chapter we present theoretical arguments and empirical evidence for the non-linear double stochastic model of return in financial markets, with empirical data from the NYSE and the Vilnius Stock Exchange (VSE) demonstrating a universal scaling of return statistical properties, which is also present in the double stochastic model of return (Gontis et al., 2010). The sections of this chapter follow the chronology of our research papers devoted to the stochastic modeling of financial markets.
In the second section we introduce a multiplicative stochastic point process reproducing 1/f^β noise and discuss its possible application as a stochastic model of the financial market. In Section 3 we derive a multiplicative SDE statistically equivalent to the introduced point process. Further, in Section 4 we propose a Poisson-like process driven by the multiplicative SDE. A more sophisticated version of the SDE, reproducing the statistics of trading activity in financial markets, is presented in Section 5, and an empirical analysis of high-frequency trading data from the NYSE in Section 6. Section 7 introduces the stochastic model with a q-Gaussian PDF and power spectrum S(f) ~ 1/f^β, and Section 8 the double stochastic model of return in the financial market. We present a scaled empirical analysis of return in the New York and Vilnius stock exchanges in comparison with the proposed model in Section 9. Short conclusions of the most recent research results are presented in Section 10.

2. 1/f noise: from physics to financial markets

The PSD of a large variety of different evolutionary systems at low frequencies has 1/f behavior. 1/f noise is observed in condensed matter, river discharge, DNA base sequence structure, cellular automatons, traffic flow, economics, financial markets and other complex systems with the evolutionary elements of self-organization (see, e.g., a bibliographic list of papers by Li (2009)). A considerable number of such systems have a fractal nature and thus their statistics exhibit scaling. It is possible to define a stochastic model system exhibiting fractal statistics and 1/f noise as well. Such a model system may represent the limiting behavior of dynamical or deterministic complex systems, explaining the evolution of the complexity into the chaotic regime. Let us introduce a multiplicative stochastic model for the time interval between events in a time series, defining in such a way a multiplicative point process. This model exhibits first-order and second-order power-law statistics and serves as a theoretical description of the empirical trading activity in financial markets (Gontis & Kaulakys, 2004). First of all we consider a signal I(t) as a sequence of random correlated pulses

I(t)=\sum_k a_k\,\delta(t-t_k),    (1)

where a_k is the contribution to the signal of one pulse at the time moment t_k, e.g., the contribution of one transaction to the financial data. Signal (1) represents a point process used in a large variety of systems with a flow of point objects or subsequent actions. When a_k = ā is constant, the point process is completely described by the set of event times {t_k} or, equivalently, by the set of inter-event intervals {τ_k = t_{k+1} − t_k}. Various stochastic models of τ_k can be introduced to define such a stochastic point process. In the papers Kaulakys & Meškauskas (1998); Kaulakys (1999; 2000) it was shown analytically that relatively slow Brownian fluctuations of the inter-event time τ_k yield 1/f fluctuations of the signal (1). In the generalized version of the model (Gontis & Kaulakys, 2004) we have introduced a stochastic multiplicative process for the inter-event time τ_k,

\tau_{k+1}=\tau_k+\gamma\,\tau_k^{2\mu-1}+\tau_k^{\mu}\,\sigma\,\varepsilon_k.    (2)

Here the inter-event time τ_k fluctuates due to an external random perturbation by a sequence of uncorrelated normally distributed random variables {ε_k} with zero expectation and unit variance, σ denotes the standard deviation of the white noise and γ ≪ 1 is a damping constant.
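As a numerical sketch of the point process (1)-(2), the fragment below iterates Eq. (2), restricts τ_k to an interval by reflection, and estimates the power spectral density of the resulting counting signal. All parameter values (μ, γ, σ, τ_min, τ_max, the bin width and the fitting range) are illustrative assumptions rather than values taken from the chapter, and the measured exponent is only expected to agree roughly with Eq. (6) derived below.

```python
# Sketch: simulate the multiplicative point process (2) and inspect its PSD.
import numpy as np

rng = np.random.default_rng(0)
mu, gamma, sigma = 1.0, 0.01, 0.1            # pure multiplicativity, gamma << 1
tau_min, tau_max = 1e-3, 1.0                 # restriction of the tau_k diffusion
n_events = 200_000

tau = np.empty(n_events)
tau[0] = 0.1
for k in range(n_events - 1):
    step = gamma * tau[k] ** (2 * mu - 1) + sigma * tau[k] ** mu * rng.standard_normal()
    t_new = tau[k] + step
    if t_new < tau_min:                      # reflect at the boundaries
        t_new = 2 * tau_min - t_new
    elif t_new > tau_max:
        t_new = 2 * tau_max - t_new
    tau[k + 1] = min(max(t_new, tau_min), tau_max)
t_events = np.cumsum(tau)                    # event times t_k of the signal (1)

# Counting signal (events per bin) and its periodogram; the low-frequency slope
# should roughly follow the 1/f**beta scaling of Eqs. (5)-(6) below.
dt = 0.05
counts, _ = np.histogram(t_events, bins=np.arange(0.0, t_events[-1], dt))
x = counts - counts.mean()
psd = np.abs(np.fft.rfft(x)) ** 2 * dt / len(x)
freqs = np.fft.rfftfreq(len(x), d=dt)
sel = (freqs > 1e-3) & (freqs < 1e-1)
slope = np.polyfit(np.log(freqs[sel]), np.log(psd[sel]), 1)[0]
beta_theory = 1 + (2 * gamma / sigma ** 2 - 2 * mu) / (3 - 2 * mu)
print(f"measured beta ~ {-slope:.2f}, Eq. (6) below predicts {beta_theory:.2f}")
```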
Note that from the large variety of possible stochastic processes we have chosen the multiplicative one, as it yields multifractal intermittency and a power-law PDF. Certainly, in Eq. (2) the τ_k diffusion has to be restricted to some interval 0 < τ_min < τ_k < τ_max. Multiplicativity is specified by μ (pure multiplicativity corresponds to μ = 1, while other values of μ might be considered as well). The iterative relation (2) can be rewritten as a Langevin SDE in k-space, the inter-event space,

d\tau_k=\gamma\,\tau_k^{2\mu-1}\,dk+\sigma\,\tau_k^{\mu}\,dW_k.    (3)

Here we interpret k as a continuous variable, while W_k defines the Wiener noise in the inter-event space. The steady-state solution of the stationary Fokker-Planck equation with zero flow corresponding to (3) gives the probability density function of τ_k in k-space (see, e.g., Gardiner (1986)),

P_k(\tau_k)=C\,\tau_k^{\alpha}=\frac{\alpha+1}{\tau_{\max}^{\alpha+1}-\tau_{\min}^{\alpha+1}}\,\tau_k^{\alpha},\qquad \alpha=2\gamma/\sigma^{2}-2\mu.    (4)

The steady-state solution (4) assumes the Ito convention involved in the relation between expressions (2), (3) and (4) and the restriction of the diffusion to 0 < τ_min < τ_k < τ_max. In the limit τ_min → 0 and τ_max → ∞ the explicit expression for the PSD S_μ(f) of the signal I(t) was derived in Gontis & Kaulakys (2004):

S_{\mu}(f)=\frac{C\,\bar a^{2}}{\sqrt{\pi}\,\bar\tau\,(3-2\mu)\,f}\left(\frac{\gamma}{\pi f}\right)^{\frac{\alpha}{3-2\mu}}\frac{\Gamma\!\left(\frac12+\frac{\alpha}{3-2\mu}\right)}{\cos\!\left(\frac{\pi\alpha}{2(3-2\mu)}\right)}.    (5)

Equation (5) reveals that the multiplicative point process (2) results in the PSD S(f) ~ 1/f^β with the scaling exponent

\beta=1+\frac{2\gamma/\sigma^{2}-2\mu}{3-2\mu}.    (6)

The analytical results (5) and (6) were confirmed by numerical calculations of the PSD according to equations (1) and (2). Let us assume that a ≡ 1 and that the signal I(t) counts the transactions in financial markets. In that case the number of transactions in a selected time window τ_d, defined as N(t)=\int_t^{t+\tau_d}I(t')\,dt', measures the trading activity. The PDF of N for the pure multiplicative model, with μ = 1, can be expressed as (for the derivation see Gontis & Kaulakys (2004))

P(N)=\frac{C\,\tau_d^{\,2+\alpha}\,(1+\gamma N)}{N^{3+\alpha}\left(1+\frac{\gamma}{2}N\right)^{3+\alpha}}\sim\begin{cases}N^{-(3+\alpha)}, & N\ll\gamma^{-1},\\ N^{-(5+2\alpha)}, & N\gg\gamma^{-1}.\end{cases}    (7)

Numerical calculations confirm the obtained analytical result (7). In the case of pure multiplicativity, μ = 1, the model has only one parameter, 2γ/σ², which defines the scaling of the PSD and the power-law distributions of the inter-event time and of the number of deals N per time window. The model with the adjusted parameter 2γ/σ² nicely describes the empirical PSD and the exponent of the power-law long-range distribution of the trading activity N in the financial markets; see Gontis & Kaulakys (2004) for details.
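To connect the process with the trading-activity interpretation, the following self-contained sketch counts simulated events in windows of length τ_d and tabulates the empirical PDF of N, whose tail can be compared with the two power-law regimes of Eq. (7). Again, all numerical parameter values are assumptions chosen only for illustration.

```python
# Sketch: trading activity N per window for the point process (2) with mu = 1.
import numpy as np

rng = np.random.default_rng(1)
gamma, sigma = 0.01, 0.1                     # alpha = 2*gamma/sigma**2 - 2 = 0
tau_min, tau_max = 1e-3, 1.0
n_events = 500_000

taus = np.empty(n_events)
tau = 0.1
for k in range(n_events):
    tau += gamma * tau + sigma * tau * rng.standard_normal()   # Eq. (2) with mu = 1
    tau = min(max(tau, tau_min), tau_max)                      # crude restriction
    taus[k] = tau
t_events = np.cumsum(taus)

tau_d = 10.0                                                   # observation window
N, _ = np.histogram(t_events, bins=np.arange(0.0, t_events[-1], tau_d))

alpha = 2 * gamma / sigma ** 2 - 2
print("Eq. (7) predicts exponents", 3 + alpha, "for N << 1/gamma and",
      5 + 2 * alpha, "for N >> 1/gamma")
edges = np.unique(np.logspace(0, np.log10(N.max() + 1), 30).astype(int))
hist, _ = np.histogram(N, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
for c, h in zip(centers, hist):
    if h > 0:
        print(f"N ~ {c:9.1f}   P(N) ~ {h:.3e}")
```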
