Each factor should be seen as a concept characterised by many measurements or parameters.

Service support performance is the ability of an organisation to provide a service and assist in its utilisation. An example of service support performance is the ability to provide assistance in commissioning a basic service or a supplementary service such as the call-waiting service or directory enquiries service. Typical measures include mean service provisioning time, billing error probability, incorrect charging or accounting probability, etc.

Service operability performance is the ability of a service to be successfully and easily operated by a user. Typical measures relate to service-user mistake probability, dialling mistake probability, call abandonment probability, etc.

Service accessibility performance is the ability of a service to be obtained, within given tolerances, when requested by a user. Measures include items such as access probability, mean service access delay, network accessibility, connection accessibility, mean access delay, etc.

Service retainability performance is the ability of a service, once obtained, to continue to be provided under given conditions for a requested duration. Typically, items such as service retainability, connection retainability, premature release probability, release failure probability, etc. are monitored.

Service integrity performance is the degree to which a service is provided without excessive impairments (once obtained). Items such as interruption of a service, time between interruptions, interruption duration, mean time between interruptions and mean interruption duration are followed.

Service security performance is the protection provided against unauthorised monitoring, misuse, fraudulent use, natural disaster, etc.

Network performance is composed of planning, provisioning and administrative performance; further, trafficability performance, transmission performance and network item dependability performance are part of network performance. Various combinations of these factors provide the needed service performance support. Planning, provisioning and administrative performance is the degree to which these activities enable the network to respond to current and emerging requirements; all actions related to RAN optimisation belong to this category. Trafficability performance is the degree to which the capacity of the network components meets the offered network traffic under specified conditions. Transmission performance is related to the reliability of reproduction of a signal offered to a telecommunication system, under given conditions, when this system is in an in-service state. Network item dependability performance is the collective term used to describe availability performance and its influencing factors: reliability performance, maintainability performance and maintenance support performance.

Network performance is a conceptual framework that enables network characteristics to be defined, measured and controlled so that network operators can achieve the targeted service performance. A service provider creates a network with network performance levels that are sufficient to enable the service provider to meet its business objectives while satisfying customer requirements. Usually this involves a compromise between cost, the capabilities of the network and the levels of performance that the network can support.
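To make such measures concrete, the following minimal Python sketch derives a few of them from raw event counts. The function names, counters and figures are illustrative assumptions, not standardised 3GPP measurement definitions.

```python
# Illustrative derivation of service performance measures from raw event
# counts; names and figures are assumptions, not standardised metrics.

def access_probability(attempts: int, successes: int) -> float:
    """Service accessibility: share of service requests that succeed."""
    return successes / attempts if attempts else 0.0

def premature_release_probability(established: int, premature: int) -> float:
    """Service retainability: share of established sessions released prematurely."""
    return premature / established if established else 0.0

def mean_interruption_duration(durations_s: list[float]) -> float:
    """Service integrity: mean duration of service interruptions, in seconds."""
    return sum(durations_s) / len(durations_s) if durations_s else 0.0

# Example with made-up busy-hour figures
print(access_probability(10_000, 9_870))             # 0.987
print(premature_release_probability(9_870, 54))      # ~0.0055
print(mean_interruption_duration([0.8, 1.2, 2.5]))   # 1.5
```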
An essential difference between service and network performance parameters is that service performance parameters are user-oriented, while network performance parameters are network provider- and technology-oriented. Thus, service parameters focus on user-perceivable effects, and network performance parameters focus on the efficiency of the network in providing the service to the customers.

Service Availability

Service availability as such is not present in the 'service performance' definition, but it has turned out to be one of the key parameters related to customer perception and customer satisfaction [16]. Although definitions for network and element availability exist, service availability as such does not have an agreed technical definition. This easily leads to misunderstandings, false expectations and customer dissatisfaction. In Figure 8.26 the combined items from accessibility, retainability and integrity performance are identified as components of service availability performance.

Figure 8.26 Relationship of service availability to service performance. [Figure: of the service performance factors (support, operability, accessibility, retainability, integrity and security performance), the accessibility, retainability and integrity items are marked as components of service availability.]

Examples of how to compute service availability measures are shown in [16], where the 3GPP contribution for the definitions of Key Quality Indicators (KQIs) can also be found. [16] does not, however, provide direct support for grouping measurements in order to conclude service availability; one possible grouping rule is sketched below. The practical realisation of service availability monitoring is discussed in the next section.
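Since the grouping of measurements is left open, the following sketch shows one possible, operator-specific way to combine accessibility, retainability and integrity indicators into a single service availability figure, following the decomposition of Figure 8.26. The multiplicative combination is an assumption made for illustration, not a TeleManagement Forum or 3GPP definition.

```python
def service_availability(accessibility: float,
                         retainability: float,
                         integrity: float) -> float:
    """Combine the three component indicators (each in 0..1) into one
    service availability figure: the service counts as available when it
    can be accessed, kept, and used without excessive impairment."""
    for name, value in (("accessibility", accessibility),
                        ("retainability", retainability),
                        ("integrity", integrity)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be within 0..1, got {value}")
    # Assumed rule: the three components degrade availability independently.
    return accessibility * retainability * integrity

# Example with made-up component values: 98.7% access, 99.4% retention,
# 99.9% integrity
print(f"{service_availability(0.987, 0.994, 0.999):.4f}")  # -> 0.9801
```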
Service Quality Monitoring

The variety of mobile services brings new challenges for operators in monitoring, optimising and managing their networks through services. Service quality management should support service-level processes by providing operators with up-to-date views of service quality based on QoS KPIs collected from the network. The performance information should be provided service by service and prioritised for each service package for effective and correctly targeted optimisation. Involvement of OSI Layer 1, 2 and 3 methods of controlling service performance is required, so that end-user QoS requirements can be translated into technology-specific delivered service performance/network performance measurements and parameters, including QoS distribution and transactions between carriers and systems forming part of any connection. Thus service monitoring as well as good network planning are needed, and the close coupling of traffic engineering and service and network performance cannot be overemphasised.

Performance-related information from the mobile network and services should be collected and classified for further utilisation in reporting and optimisation tools. Network-dependent factors for a mobile service may cover:

. radio access performance;
. core network performance;
. transmission system performance data;
. call detail records;
. network probes;
. services and service systems data.

Different views and reports about PM information should support an operator's network and service planning:

. 3G UMTS service classes (UMTS bearer);
. individual services (aggregate);
. customer's service class/profile;
. geographical location;
. time of day, day of week, etc.;
. IP QoS measures, L1, L2 measures;
. terminal equipment type.

It should also be possible to trace the calls and connections of individual users (see Chapter 7).

One source of QoS KPIs is service-specific agents that can be used for monitoring Performance Indicators (PIs) for different services. Active measurement for service quality verification implies testing of the actual communication service, in contrast to passive collection of data from network elements. Network measurements can be collected from different network elements to perform regular testing of the services. Special probes can be used to perform simulated transaction requests at scheduled intervals. By installing the probes at the edge of the IP network, the compound effects of network, server and application delays on the service can be measured, providing an end-user perception of the QoS.

In order to conclude service performance, an end-to-end view is important, so a network management system entity that is able to combine measurements from different data sources is required. The Service Quality Manager (SQM) concept effectively supports the service monitoring and service assurance process. All service-relevant information that is available in the operator environment can be collected, and the information forwarded to the SQM is used to determine the current status of defined services. The current service level is calculated by service-specific correlation rules; different correlation rules are provided for different types of services (e.g., MMS, WAP, streaming services). Figure 8.27 illustrates the general concept of the SQM and its interfaces for collecting relevant data from other measurement entities and products.

Passive data provide information about the alarm situation (fault management) and performance (performance management) within individual network elements. Performance management data in terms of network element measurements and KPIs are discussed in Chapter 7. Real-time traffic data from charging and billing records provide additional information, which can be utilised to obtain a very detailed view of specific services. Active measurements (probing) complement the previous data sources well, providing a snapshot of service usage from the customer perspective. All these different data sources can be integrated in the SQM, which correlates the data from different origins to provide a global view of the network from the customer perspective. The SQM's drill-down functionality to all underlying systems at the network level allows efficient troubleshooting and root cause analysis.

The SQM can be configured to provide information on service availability: measurements from different sources are collected and correlated with service availability-related rules. An example of SQM output related to service availability is given in Figure 8.28. The ability to calculate profiled values using the Service Quality Manager provides a powerful mechanism to discover abnormal service behaviour or malfunctions in the network. Further, a rule set to indicate the severity of a service-related fault or performance degradation can be defined, and the distribution of the different levels of faults can be monitored. This severity-based sorting helps the operator to set the right priority for corrective actions. An example of service degradation output is given in Figure 8.29.
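As a minimal sketch of such a rule set, the code below compares a KPI sample against a historical profile (mean and standard deviation for the same service and hour of day) and maps the deviation to the severity classes of Figure 8.29. The thresholds, the use of hour-of-day profiles and the class names are assumptions; in practice they are operator-specific.

```python
import statistics

def severity(sample: float, history: list[float]) -> str:
    """Classify a KPI sample against its profile (same service, same hour).
    Severity grows with the deviation from the historical profile;
    thresholds below are assumed, operator-specific values."""
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    deviation = abs(sample - mean) / std if std > 0 else 0.0
    if deviation >= 6:
        return "critical"
    if deviation >= 4:
        return "major"
    if deviation >= 3:
        return "minor"
    if deviation >= 2:
        return "warning"
    return "normal"

# MMS delivery success ratio: last 14 busy hours vs. today's sample
profile = [0.982, 0.979, 0.985, 0.981, 0.980, 0.983, 0.978,
           0.984, 0.981, 0.979, 0.982, 0.980, 0.983, 0.981]
print(severity(0.955, profile))  # far outside the profile -> "critical"
```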
The SQM concept bridges the gap between network performance and service performance. With operator-definable correlation rules and the capability to utilise measurements of a different nature, service performance can be monitored and concluded. Thus technical, network- and technology-facing measurements can be translated into measures that provide an indication of end-to-end performance and end-user satisfaction.

Figure 8.27 Service quality manager and data sources. [Figure: the SQM integrates fault management data (alarms and topology), performance management data (counters, KPIs), real-time traffic data, active measurements and third-party data from the access, core, IT, service platform and network and service infrastructure domains, with multi-vendor integration at the network level, drill-down for detailed troubleshooting, and service levels determined from the combined data.]

Figure 8.28 Service quality manager report: availability of a service over 180 days. The definition of service availability is operator-specific and contains items from the service performance framework (see Figure 8.26).

Figure 8.29 Service quality manager fault severity analysis. The vertical axis represents the number of problems, the horizontal axis is time, and colour coding indicates whether a service problem is critical, major, minor or a warning. The classification is operator-specific.

Quality of Service Feedback Loops

In Chapter 7 the optimisation feedback loop concept is introduced, and the interfaces, the availability of configuration management and performance management data, and the role of the management system are discussed. In this section the same concept is applied, but now from the QoS point of view. The most important requirement for QoS management is the ability to verify the quality provided in the network; the second requirement is then the ability to guarantee the provided quality. Therefore, monitoring and post-processing tools play a very important role in QoS management. A post-processing system needs to be able to present massive and complex network performance and quality data both in textual and in highly advanced graphical formats.

Interrelationships between the different viewpoints of QoS are presented in Figure 8.30. The picture comes from [26], and the version presented here is slightly modified. The figure captures the complexity of QoS management very well. From the optimisation point of view, there are at least three main loops, which constitute a challenge for management tools.

The network-level optimisation loop (on the right side of Figure 8.30) is mainly concerned with service assurance. Network performance objectives have been set based on QoS-related criteria, and the main challenge for the operator is to monitor these objectives by deriving the network status from network performance measurements; a minimal sketch of this step is given below.
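The following sketch illustrates that assurance step under stated assumptions: measured bearer-level KPIs are compared with operator-set performance objectives, and deviations are flagged for the optimisation loop. The KPI names and target values are hypothetical.

```python
# Hypothetical, operator-set network performance objectives; KPI names
# and targets are illustrative, not standardised values.
objectives = {
    "rab_setup_success_ratio": ("min", 0.99),   # accessibility objective
    "call_drop_ratio":         ("max", 0.01),   # retainability objective
    "packet_delay_ms":         ("max", 150.0),  # integrity objective
}

measurements = {
    "rab_setup_success_ratio": 0.993,
    "call_drop_ratio":         0.014,
    "packet_delay_ms":         121.0,
}

def assess(objectives: dict, measurements: dict) -> list[str]:
    """Return the KPIs that violate their objective; these deviations
    feed the network-level optimisation (service assurance) loop."""
    violations = []
    for kpi, (kind, target) in objectives.items():
        value = measurements[kpi]
        if (kind == "min" and value < target) or (kind == "max" and value > target):
            violations.append(f"{kpi}: measured {value}, objective {kind} {target}")
    return violations

for v in assess(objectives, measurements):
    print(v)  # -> call_drop_ratio: measured 0.014, objective max 0.01
```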
The optimisation loop from service level to network level covers the process from the determination of QoS/application performance-related criteria to the QoS/application performance offered to the subscriber. Once application performance-related criteria have been determined, the operator can derive the network performance objectives. The network is then monitored and measured based on these objectives, and the application performance achieved in the network can be interpreted from these measurements. At this point, there may be a gap between the offered and the achieved application performance. Depending on the type of difference, the application performance-related criteria or the network configuration might need fine-tuning. Further, there can be application-related performance and usability deficiencies that cannot be fixed by retuning the network.

Figure 8.30 Quality of service feedback loops. Dashed arrows indicate feedback; solid arrows indicate activity and flow [26]. [Figure: the loops connect the user/subscriber, the service application level and the bearer level; user requirements and application performance-related criteria lead to bearer performance objectives and bearer performance measurements, from which the achieved and offered application performance and, finally, the perceived performance (QoE) are derived.]

The third optimisation loop involves optimisation of the QoS perceived by the subscriber, who has certain requirements for the quality of the application used. The QoS offered to the subscriber depends on application needs and on actual network capacity and capability. The perceived quality depends on the quality available from the network and from the applications, including usability aspects. Subscriber satisfaction then depends on the difference between his/her expectations and the perceived quality. The ultimate optimisation goal is to optimise the QoS that the subscriber perceives, i.e., the QoE.

8.7 Concluding Remarks

The classification of end-user services was discussed within a framework allowing for detailed analysis, and the requirements and characteristics of services were discussed within that framework. The 3GPP QoS architecture is a versatile and comprehensive basis for providing future services as well. From the viewpoint of the service management process, however, there are certain issues that the standardised architecture does not solve. The support provided by the 3GPP standard architecture for service configuration was discussed above. The reverse direction requires that it be possible to map specific counters in network elements onto service-specific KQIs. The Wireless Service Measurement Team of the TeleManagement Forum has defined a set of KQIs and KPIs [6] and submitted it to the SA5 working group in 3GPP. Many of the counters have been standardised by 3GPP, but they are not (and indeed should not be) associated with particular services. Thus, conceptually, one needs a service assurance 'middleware' layer for mapping element-specific counters to end-to-end service performance levels, as depicted in Figure 8.31; a sketch of such a mapping is given below.
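As an illustration of that middleware role, the sketch below chains hypothetical element-specific success counters from Node B, RNC, SGSN and GGSN into a single end-to-end KQI. The counter names, the chosen service path and the multiplicative aggregation are assumptions made for the example; the actual KQI definitions are those of [6].

```python
# Hypothetical element-level counters along the path of one service
# (e.g., a PS data service); names and figures are illustrative only.
element_counters = {
    "node_b": {"attempts": 10_000, "successes": 9_950},
    "rnc":    {"attempts": 9_950,  "successes": 9_900},
    "sgsn":   {"attempts": 9_900,  "successes": 9_880},
    "ggsn":   {"attempts": 9_880,  "successes": 9_860},
}

def service_kqi(counters: dict, path: list[str]) -> float:
    """Map element-specific counters onto an end-to-end KQI by chaining
    the per-element success ratios along the service path."""
    kqi = 1.0
    for element in path:
        c = counters[element]
        kqi *= c["successes"] / c["attempts"]
    return kqi

# End-to-end accessibility KQI for a service traversing all four elements
print(f"{service_kqi(element_counters, ['node_b', 'rnc', 'sgsn', 'ggsn']):.4f}")
# -> 0.9860
```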
Figure 8.31 An illustration of service-oriented assurance. [Figure: element-specific counters from Node B, RNC, SGSN and GGSN feed a service assurance 'middleware' layer, which produces the service KQIs.]

References

[1] 3GPP, TS 23.107, v5.12.0 (2004-03), QoS Concept and Architecture, March 2004.
[2] 3GPP, TS 23.207, v5.9.0 (2004-03), End-to-end QoS Concept and Architecture, March 2004.
[3] 3GPP, TS 23.228, v5.13.0, IP Multimedia Subsystem (IMS), December 2004.
[4] Communications Quality of Service: A Framework and Definitions, ITU-T Recommendation G.1000, November 2001.
[5] End-user Multimedia QoS Categories, ITU-T Recommendation G.1010, November 2001.
[6] Wireless Service Measurement, Key Quality Indicators, GB 923A, v1.5, April 2004, TeleManagement Forum.
[7] Koivukoski, U. and Räisänen, V. (eds), Managing Mobile Services: Technologies and Business Practices, John Wiley & Sons, 2005.
[8] McDysan, D., QoS and Traffic Management in IP and ATM Networks, McGraw-Hill, 2000.
[9] Padhye, J., Firoiu, V., Towsley, D. and Kurose, J., Modelling TCP Reno performance. IEEE/ACM Transactions on Networking, 8, 2000.
[10] Poikselkä, M., Mayer, G., Khartabil, H. and Niemi, A., IMS: IP Multimedia Concepts and Services in the Mobile Domain, John Wiley & Sons, 2004.
[11] Räisänen, V., Implementing Service Quality in IP Networks, John Wiley & Sons, 2003.
[12] Räisänen, V., Service quality support: An overview. Computer Communications, 27, pp. 1539ff., 2004.
[13] Räisänen, V., A framework for service quality, submitted to IEEE.
[14] Schulzrinne, H., Casner, S., Frederick, R. and Jacobson, V., RTP: A Transport Protocol for Real-time Applications, RFC 1889, January 1996, Internet Engineering Task Force.
[15] Armitage, G., Quality of Service in IP Networks, MacMillan Technical Publishing, 2000.
[16] SLA Management Handbook, Volume 2, Concepts and Principles, GB 917-2, TeleManagement Forum, April 2004.
[17] Cuny, R., End-to-end performance analysis of push to talk over cellular (PoC) over WCDMA. Communication Systems and Networks, September 2004, Marbella, Spain. International Association of Science and Technology for Development.
[18] Antila, J. and Lakkakorpi, J., On the effect of reduced Quality of Service in multi-player online games. International Journal of Intelligent Games and Simulations, 2, pp. 89ff., 2003.
[19] Halonen, T., Romero, J. and Melero, J., GSM, GPRS, and EDGE Performance: Evolution towards 3G/UMTS, John Wiley & Sons, 2003.
[20] Bouch, A., Sasse, M.A. and DeMeer, H., Of packets and people: A user-centred approach to Quality of Service. Proc. IWQoS '00, Pittsburgh, June 2000, IEEE.
[21] Lakaniemi, A., Rosti, J. and Räisänen, V., Subjective VoIP speech quality evaluation based on network measurements. Proc. ICC '01, Helsinki, June 2001, IEEE.
[22] 3GPP, TS 32.403, v5.8.0 (2004-09), Telecommunication Management; Performance Management (PM); Performance Measurements – UMTS and Combined UMTS/GSM (Release 5).
[23] Laiho, J. and Soldani, D., A policy based Quality of Service management system for UMTS radio access networks. Proc. of Wireless Personal Multimedia Communications (WPMC) Conf., 2003.
[24] Soldani, D. and Laiho, J., User perceived performance of interactive and background data in WCDMA networks with QoS differentiation. Proc. of Wireless Personal Multimedia Communications (WPMC) Conf., 2003.
[25] Soldani, D., Wacker, A. and Sipilä, K., An enhanced virtual time simulator for studying QoS provisioning of multimedia services in UTRAN. Proc. of MMNS 2004 Conf., San Diego, California, October 2004, pp. 241–254.
[26] Wireless Service Measurement Handbook, GB 923, v3.0, March 2004, TeleManagement Forum.
