System Analysis, Design, and Development: Concepts, Principles, and Practices (Part 10)
When a test failure requires corrective action, WHAT tests impacted by the failure must be repeated before you can resume testing at the test sequence where the failure occurred? The answer resides in regression testing. During regression testing, only those aspects of the component design that have been impacted by a corrective action are re-verified. Form, fit, and function checks of the UUT that were successfully completed and not affected by any corrective actions are not re-verified.

Discrepancy Reports (DRs)

Inevitably, discrepancies will appear between actual test data and the bounded expected results specified in the SPS or item development specification. We refer to the occurrence of a discrepancy as a test event.

When test events such as failures occur, a Discrepancy Report (DR) is recorded. The Test Director should define in the SIVP WHAT constitutes a test event and the criteria for recording a DR. At a minimum, DRs document:

1. The test event, date, and time.
2. Conditions and the sequence of steps preceding the test event.
3. Test article identification.
4. Test configuration.
5. Reference documents and versions.
6. The specific document or item requirement and the expected results.
7. Results observed and recorded.
8. DR author and witnesses or observers.
9. Degree of significance, which drives the level of urgency for corrective action.

DRs have levels of significance that affect the test schedule. They range from safety issues, data integrity issues, and isolated test failures that may not affect other tests to cosmetic blemishes in the test article. As standard practice, establish a priority system to facilitate disposition of DRs. An example is provided in Table 55.1.

Table 55.1 Example test event DR classification system

Priority 1 - Emergency condition: All testing must be TERMINATED IMMEDIATELY due to imminent DANGER to the test operators, test articles, or test facility.

Priority 2 - Test component or configuration failure: Testing must be HALTED until a corrective action is performed. Corrective action may require redesign or replacement of a failed component.

Priority 3 - Test failure: Testing can continue if the failure does not diminish the integrity of remaining tests; however, the test article requires corrective action and reverification prior to integration at the next higher level.

Priority 4 - Cosmetic blemish: Testing is permitted to continue, but corrective action must be performed prior to system acceptance.
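The DR fields and priority scheme above map naturally onto a simple record structure. The following is a minimal illustrative sketch in Python, not part of the original text; the field names and halt logic are assumptions based on the list and Table 55.1:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import IntEnum

class DRPriority(IntEnum):
    """Priority levels from Table 55.1 (lower value = more severe)."""
    EMERGENCY_CONDITION = 1          # terminate all testing immediately
    COMPONENT_OR_CONFIG_FAILURE = 2  # halt until corrective action is done
    TEST_FAILURE = 3                 # continue if remaining tests unaffected
    COSMETIC_BLEMISH = 4             # continue; fix before system acceptance

@dataclass
class DiscrepancyReport:
    """Minimum DR content per items 1-9 above; names are illustrative."""
    event_description: str
    event_time: datetime
    preceding_steps: list[str]
    test_article_id: str
    test_configuration: str
    reference_documents: list[str]   # documents and their versions
    requirement_and_expected: str
    observed_results: str
    author: str
    witnesses: list[str]
    priority: DRPriority

def testing_may_continue(dr: DiscrepancyReport) -> bool:
    """Priorities 1 and 2 stop the test sequence; 3 and 4 may proceed."""
    return dr.priority >= DRPriority.TEST_FAILURE
```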
SITE Work Products

SITE work products that serve as objective evidence for a single item, regardless of level of abstraction, include:

1. A set of dated and signed entries in the Test Log that identify and describe:
   a. Test Team - the name of the responsible engineering team and lead.
   b. Test Article - what "parent" test article was integrated from what version of lower level test articles.
   c. Test(s) conducted.
   d. Test results - where recorded and stored.
   e. Problems, conditions, or anomalies encountered and identified.
2. Discrepancy reports (DRs) documented and logged.
3. Hardware trouble reports (HTRs) documented and logged.
4. Software change requests (CRs) documented and logged.
5. The state of readiness of the test article for scheduling formal verification.

Guidepost 55.1 This completes our overview of SITE fundamentals. We now shift our attention to planning for SITE.

55.3 PLANNING FOR SITE

SITE success begins with insightful planning to identify the test objectives; roles, responsibilities, and authorities; tasking; resources; facilities; and schedule. Testing, in general, involves two types of test plans:

1. The Test and Evaluation Master Plan (TEMP).
2. The System Integration and Verification Plan (SIVP).

The Test and Evaluation Master Plan (TEMP)

In general, the TEMP is a User's document that expresses HOW the User, or an Independent Test Agency (ITA) representing the User's interests, plans to validate the system, product, or service. From a User's perspective, development of a new system raises critical operational and technical issues (COIs/CTIs) that may become SHOWSTOPPERS to validating satisfaction of an organization's operational need. So the scope of the TEMP covers the Operational Test and Evaluation (OT&E) period and establishes objectives to verify resolution of COIs/CTIs.

The TEMP is structured to answer a basic question: Does the system, product, or service, as delivered, satisfy the User's validated operational needs, in terms of both problem space and solution space? Answering this question requires formulation of a set of scenario-driven test objectives, namely use cases and scenarios.

The System Integration and Verification Plan (SIVP)

The SIVP is written by the System Developer and expresses their approach for integrating and testing the SYSTEM or PRODUCT. The scope of the SIVP, which is contract dependent, covers the Developmental Test and Evaluation (DT&E) period from Contract Award through the formal System Verification Test (SVT), typically at the System Developer's facility.

The SIVP identifies objectives, organizational roles and responsibilities, tasks, resource requirements, the strategy for sequencing testing activities, and schedules. Depending on contract requirements, the SIVP may include delivery, installation, and checkout at a User's designated job site.

Developing the System Integration and Test Strategy

The strength of a system integration and test program requires "up front" THINKING to ensure that vertical integration occurs just in time (JIT) in the proper sequences. Therefore, the first step is to establish a strong system integration and test strategy. One method is to construct a System Integration and Test Concept graphically or by means of an integration decision tree. The test concept should reflect:

1. WHAT component integration dependencies are critical?
2. WHO is responsible and accountable for the integration?
3. WHEN and in WHAT sequence will the components be integrated?
4. WHERE is the integration to be performed?
5. HOW will the components be integrated?

The SITE process may require a single facility such as a laboratory, multiple facilities within the same geographic area, or integration across various geographical locations.

Destructive Test Sequence Planning

During the final stages of SITE Developmental Test and Evaluation (DT&E), several test articles may be required. The challenge for SEs is: HOW and in WHAT sequence do we conduct nondestructive tests to collect data to verify design compliance prior to conducting destructive tests that may destroy or damage the test article? THINK through these sequences carefully.
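One way to mechanize an integration decision tree is to order components by their integration dependencies and, within each component, run nondestructive tests before destructive ones. The sketch below is an illustration only, not from the original text; the component names, dependency map, and tests are all hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical integration dependencies: component -> set of lower-level
# components that must be integrated and verified first.
DEPENDENCIES = {
    "power_subsystem": set(),
    "sensor_subsystem": set(),
    "signal_processor": {"power_subsystem", "sensor_subsystem"},
    "system_level": {"signal_processor"},
}

# Tests per component; each is (test name, is_destructive).
TESTS = {
    "power_subsystem": [("load_regulation", False), ("overstress_to_failure", True)],
    "sensor_subsystem": [("calibration_check", False)],
    "signal_processor": [("throughput_check", False)],
    "system_level": [("end_to_end_scenario", False)],
}

def plan_sequence():
    """Yield tests in an order that respects integration dependencies;
    within a component, nondestructive tests precede destructive ones."""
    for component in TopologicalSorter(DEPENDENCIES).static_order():
        for name, destructive in sorted(TESTS[component], key=lambda t: t[1]):
            yield component, name, destructive

for comp, test, destructive in plan_sequence():
    tag = "DESTRUCTIVE" if destructive else "nondestructive"
    print(f"{comp}: {test} [{tag}]")
```

Note that a destructive test consumes a test article, which is precisely why several articles may be required; a sketch like this only orders tests, it does not allocate articles.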
Guidepost 55.2 Once the SITE plans are in place, the next step requires establishing the test organization.

55.4 ESTABLISHING THE TEST ORGANIZATION

One of the first steps following approval of the SIVP is establishing the test organization and assigning roles, responsibilities, and authorities. Key roles include the Test Director, Lab Manager, Tester, Test Safety Officer or Range Safety Officer (RSO), Quality Assurance (QA) Representative, Security Representative, and Acquirer/User Test Representative.

Test Director Role

The Test Director is a member of the System Developer's program and serves as the key decision authority for testing. Since SITE activities involve interpretation of specification statement language and the need to access test ports and test points to collect data for test article compliance verification, the Test Director role should be assigned EARLY and be a key participant in System Design Segment reviews. At a minimum, the primary Test Director responsibilities are to:

1. Develop and implement the SIVP.
2. Chair the Test and Evaluation Working Group (TEWG), if applicable.
3. Plan, coordinate, and synchronize test team task assignments, resources, and communications.
4. Exercise authoritative control of the test configuration and environment.
5. Identify, assess, and mitigate test risks.
6. Review and approve test conduct rules and test procedures.
7. Account for personnel environmental, safety, and health (ES&H).
8. Train test personnel.
9. Prioritize and disposition discrepancy reports (DRs).
10. Verify DR corrective actions.
11. Accomplish contract test requirements.
12. Preserve test data and results.
13. Conduct failure investigations.
14. Coordinate with Acquirer/User test personnel regarding disposition of DRs and test issues.

Lab Manager Role

The Lab Manager is a member of the System Developer's program and supports the Test Director. At a minimum, the primary Lab Manager responsibilities are to:

1. Implement the test configuration and environment.
2. Acquire test tools and equipment.
3. Create the laboratory notebook.
4. Support test operator training.

Test Safety Officer or Range Safety Officer Role

Since testing often involves unproven designs and test configurations, safety is a very critical issue, not only for test personnel but also for the test article and facilities. Therefore, every program should designate a safety officer. In general, there are two types of test safety officers: the Test Safety Officer and the Range Safety Officer (RSO).

• The Test Safety Officer is a member of the System Developer's organization and supports the Test Director and Lab Manager.
• The Range Safety Officer is a member of a test range.

In some cases, Range Safety Officers (RSOs) have the authority to destruct test articles should they become unstable and uncontrollable during a test or mission and pose a threat to personnel, facilities, and/or the public.

Tester Role

As a general rule, system, product, or service developers should not test their own designs; it is simply a conflict of interest. However, at lower levels of abstraction, programs often lack the resources to adequately train independent testers, so System Developers often perform their own informal testing. For some contracts, Independent Verification and Validation (IV&V) teams, internal or external to the program or organization, may perform the testing.

Regardless of WHO performs the tester role, test operators must be trained in HOW to safely perform the test, record and document results, and deal with anomalies. Some organizations formally train personnel and refer to them as certified test operators (CTOs).
Quality Assurance (QA) Representative

At a minimum, the System Developer's Quality Assurance Representative (QAR) is responsible for ensuring compliance with contract requirements, organizational and program command media, the SIVP, and ATPs. For software-intensive system development efforts, a Software Quality Assurance (SQA) representative is assigned to the program.

Security Representative

At a minimum, the System Developer's Security Representative, if applicable, is responsible for assuring compliance with contract security requirements, organizational and program command media, the program's security plan, and ATPs.

Acquirer Test Representative

Throughout SITE, test issues surface that require an Acquirer decision. Additionally, some Acquirers represent several organizations, many with conflicting opinions. This presents a challenge for System Developers. One solution is for the Acquirer Program Manager to designate an individual to serve as an on-site representative at the System Developer's facility and provide a single voice representing all Acquirer viewpoints. Primary responsibilities are to:

1. Serve as THE single point of contact for ALL Acquirer and User technical interests and communications.
2. Work with the Test Director to resolve any critical operational or technical test issues (COIs/CTIs) that affect Acquirer-User interests.
3. Where applicable by contract, collaborate with the Test Director to assign priorities to discrepancy reports (DRs).
4. Where appropriate, review and coordinate approval of acceptance test procedures (ATPs).
5. Where appropriate, provide a single set of ATP comments that represents a consensus of the Acquirer-User organizations.
6. Witness and approve ATP results.

55.5 DEVELOPING ATPs

In general, ATPs provide the scripts to verify compliance with SPS or item development specification requirements. In Chapters 13 through 17, System Mission Concepts, we discussed that an SPS or item development specification (IDS) is derived from use cases and scenarios based on HOW the User envisions using the system, product, or service. In general, the ATPs script HOW TO demonstrate that the SYSTEM or item provides a specified set of capabilities for a given phase and mode of operation. To implement a test strategy, we create test cases that verify these capabilities, as shown in Table 55.2.

Table 55.2 Derivation of test cases

Pre-mission phase
  Mode 1: UC 1.0 yields ROC 1.0, verified by TC 1.0.
    Scenario 1.1 yields ROC 1.1, verified by TC 1._ through TC 1._.
    Scenario 1.n yields ROC 1.n, verified by TC 1._ through TC 1._.
  Mode 2: UC 2.0 yields ROC 2.0, verified by TC 2.0.
    Scenario 2.1 yields ROC 2.1, verified by TC 2._ through TC 2._.
    Scenario 2.n yields ROC 2.n, verified by TC 2._ through TC 2._.
  UC 3.0 yields ROC 3.0, verified by TC 3.0.
    Scenario 3.1 yields ROC 3.1, verified by TC 3._ through TC 3._.
    Scenario 3.n yields ROC 3.n, verified by TC 3._ through TC 3._.
Mission phase: (modes, use cases, scenarios, capabilities, and test cases to be filled in)
Post-mission phase: (modes, use cases, scenarios, capabilities, and test cases to be filled in)

Note: UC = use case; ROC = required operational capability; TC = test case. A variety of TCs are used to test the SYSTEM/entity's inputs/outputs over acceptable and unacceptable ranges.

Types of ATPs

ATPs are generally of two types: procedure-based and scenario-based.

Procedure-Based ATPs. Procedure-based ATPs are highly detailed test procedures that describe test configurations, environmental controls, test input switchology, and expected results and behavior, among other details, with a prescribed script of sequences and approvals. Consider the example shown in Table 55.3 for a secure Web site log-on test by an authorized user.

Table 55.3 Example procedure-based acceptance test (AT) form

Step 1 - Test operator action: Using the left mouse button, click on the Web site link.
  Expected results: Web browser is launched to the selected Web site.
  Measured, displayed, or observed results: Web site appears. Pass. Operator: JD 4/18/XX. QA: 205 KW 4/18/XX.

Step 2 - Test operator action: Using the left mouse button, click on the "Logon" button.
  Expected results: Logon access dialogue box opens up.
  Measured, displayed, or observed results: As expected. Pass. Operator: JD 4/18/XX. QA: 205 KW 4/18/XX.

Step 3 - Test operator action: Position the cursor within the User Name field of the dialogue box.
  Expected results: Fixed cursor blinks in field.
  Measured, displayed, or observed results: As expected. Pass. Operator: JD 4/18/XX. QA: 205 KW 4/18/XX.

Step 4 - Test operator action: Enter user ID (max. of 10 characters) and click SUBMIT.
  Expected results: Field displayed.
  Measured, displayed, or observed results: User ID entered. Pass. Operator: JD 4/18/XX. QA: 205 KW 4/18/XX.
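A procedure-based ATP such as Table 55.3 is essentially a data table that a test session walks through step by step. The following sketch is illustrative only and not from the original text; the step contents are abbreviated from Table 55.3, and the recording logic is an assumption:

```python
from dataclasses import dataclass

@dataclass
class ATPStep:
    action: str      # test operator action to be performed
    expected: str    # expected displayed or observed results

# Steps abbreviated from the Table 55.3 log-on example.
LOGON_ATP = [
    ATPStep("Click on the Web site link.", "Browser opens the selected Web site."),
    ATPStep("Click on the 'Logon' button.", "Logon access dialogue box opens."),
    ATPStep("Position cursor in the User Name field.", "Fixed cursor blinks in field."),
    ATPStep("Enter user ID (max 10 characters) and click SUBMIT.", "Field displayed."),
]

def run_atp(steps: list[ATPStep]) -> list[dict]:
    """Prompt the operator at each step and record a pass/fail entry.
    A real AT form also captures dates, operator initials, and a QA witness."""
    log = []
    for i, step in enumerate(steps, start=1):
        print(f"Step {i}: {step.action}")
        print(f"  Expected: {step.expected}")
        observed = input("  Observed result: ")
        verdict = input("  Pass/Fail: ").strip().lower()
        log.append({"step": i, "observed": observed, "pass": verdict == "pass"})
        if verdict != "pass":
            break  # halt the scripted sequence on a failed step and raise a DR
    return log
```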
Scenario-Based ATPs. Scenario-based ATPs are generally performed during system validation, either in a controlled facility or in a prescribed field environment. Since system validation is intended to evaluate a system's capabilities to meet a User's validated operational needs, and those needs are often scenario based, scenario-based ATPs employ objectives or missions as key drivers for the test. Thus, the ATP tends to consist of very high level statements that describe the operational mission scenario to be accomplished, the objective(s), the expected outcome(s), and performance. In general, scenario-based ATPs defer to the Test Operator to determine which sequences of "switches and buttons" to use based on operational familiarity with the test article.

55.6 PERFORMING SITE TASKS

SITE, as with any system, consists of three phases: a pre-testing phase, a testing phase, and a post-testing phase. Each phase consists of a series of tasks for integrating, testing, evaluating, and verifying the design of an item or configuration item (CI). Remember, every system is unique. The discussions that follow represent generic test tasks that apply to every level of abstraction. These tasks are highly interactive and may cycle numerous times, especially in the testing phase.

Task 1.0: Perform Pre-test Activities

• Task 1.1 Configure the test environment.
• Task 1.2 Prepare and instrument the test article(s) for SITE.
• Task 1.3 Integrate the test article into the test environment.
• Task 1.4 Perform a test readiness inspection and assessment.

Task 2.0: Test and Evaluate Test Article Performance

• Task 2.1 Perform informal testing.
• Task 2.2 Evaluate informal test results.
• Task 2.3 Optimize design and test article performance.
• Task 2.4 Prepare the test article for formal verification testing.
• Task 2.5 Perform a "dry run" test to check out the ATP.

Task 3.0: Verify Test Article Performance Compliance

• Task 3.1 Conduct a test readiness review (TRR).
• Task 3.2 Formally verify the test article.

Task 4.0: Perform Post-test Follow-up Actions

• Task 4.1 Prepare item verification test reports (VTRs).
• Task 4.2 Archive test data.
• Task 4.3 Implement all DR corrective actions.
• Task 4.4 Refurbish/recondition test article(s) for delivery, if permissible.
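The four tasks above form a gated pipeline: the test readiness review in Task 3.1, for example, gates formal verification. A minimal illustrative sketch of such gating, with step names paraphrased from the lists above and the gating logic an assumption:

```python
# Illustrative SITE task pipeline; names mirror Tasks 1.0 through 4.0 above.
SITE_TASKS = {
    "1.0 pre_test": ["configure environment", "instrument test article",
                     "integrate article into environment", "readiness inspection"],
    "2.0 test_and_evaluate": ["informal testing", "evaluate results",
                              "optimize performance", "prepare for formal test",
                              "dry-run ATP"],
    "3.0 verify_compliance": ["conduct TRR", "formal verification"],
    "4.0 post_test": ["prepare VTRs", "archive data",
                      "implement DR corrective actions", "refurbish article"],
}

def next_phase(completed: set[str]) -> str:
    """Return the first phase whose steps are not all complete; later
    phases may not start before the earlier ones finish."""
    for phase, steps in SITE_TASKS.items():
        if not all(step in completed for step in steps):
            return phase
    return "SITE complete"
```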
Resolving Discrepancy Reports (DRs)

When a test failure occurs and a discrepancy report (DR) is documented, a determination has to be made as to the significance of the problem for the test article and the test plan, as well as isolation of the problem source. While the general tendency is to focus on the test article due to its unproven design, the source of the problem can originate from any one of the test environment elements shown in Figure 55.2, such as a test operator or test procedure error, a test configuration or test environment problem, or combinations of these. From these contributing elements we can construct a fault isolation tree such as the one shown in Figure 55.3, which branches a test discrepancy investigation into problem sources external to the test facility and sources internal to it: test operator, test procedure, test article (hardware or software), item specification, test configuration, test equipment, or test environment problems, leading to validated findings and corrective action recommendations. Our purpose during an investigation such as this is to assume everything is suspect and logically rule out elements by a process of elimination.

Once the DR and the conditions surrounding the failure are understood, our first decision is to determine whether the problem originated external or internal to the test facility, as applicable. For those problems originating within the facility, decide if this is a test operator, test article, test configuration, test environment, or test measurement problem.

Since the test procedure is the orchestration mechanism, start with it and its test configuration. Is the test configuration correct? Was the test environment controlled at all times without any discontinuities? Did the operator perform the steps correctly in the proper sequence without bypassing any? Is the test procedure flawed? Does it contain errors? Was a dry run conducted prior to the test to verify the procedure's logic and steps? If so, the test article may be suspect.

Whereas people tend to rush to judgment and take corrective action, VALIDATE the problem source by reconstructing the test configuration, operating conditions, sequence of events, test procedures, and observations as documented in the test log, the DR, and test personnel interviews.
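The elimination order just described, working from the test procedure and configuration outward to the test article and, ultimately, the design and specification, can be summarized as a decision cascade. A minimal sketch, not from the original text, with purely illustrative element names and check functions:

```python
from typing import Callable

def isolate_discrepancy(checks: dict[str, Callable[[], bool]]) -> str:
    """Walk the suspect elements in elimination order; each check returns
    True when that element is verified sound, ruling it out. The first
    element failing its check is the probable problem source."""
    elimination_order = [
        "external_to_facility",  # did the problem originate outside the facility?
        "test_configuration",
        "test_environment",
        "test_operator",
        "test_procedure",
        "test_equipment",
        "test_article",
        "item_specification",    # last resort: the design or spec is suspect
    ]
    for element in elimination_order:
        if not checks[element]():
            return element
    return "unresolved"          # validate findings and re-investigate
```

A real investigation, of course, interleaves these checks with the reconstruction and validation steps described above rather than running them mechanically.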
If the problem originated with the test article, retrace the development of the item in reverse order through its system development workflow. Was the item built correctly per its design requirements? Was it properly inspected and verified? If so, was there a problem due to a component, material, process, or workmanship defect, or in the verification of the item? If so, determine whether the test article will have to be reworked, scrapped, reprocured, or retested. If not, the design or specification may be suspect.

Audit the design. Is it fully compliant with its specification? Does the design have an inherent flaw? Were there errors in translating specification requirements into the design documentation? Was a specification requirement misinterpreted? If so, a redesign to correct the flaw or error will have to be performed. If not, since the specification establishes the compliance thresholds for verification testing, you may have to consider: 1) revising the specification and 2) reallocating performance budgets and margins.

Based on your findings, recommend, obtain approval for, and implement the corrective actions. Then perform regression testing, starting from where the last validated test unaffected by the failure was completed.

55.7 COMMON INTEGRATION AND TEST CHALLENGES AND ISSUES

SITE practices often involve a number of challenges and issues for SEs. Let's explore some of the more common ones.

Challenge 1: SITE Data Integrity

Deficiencies in establishing the test environment, poor test assumptions, improperly trained and skilled test operators, and an uncontrolled test environment compromise the integrity of engineering test results. Ensuring the integrity of test data and results is crucial for downstream decision making involving formal acceptance, certification, and accreditation of the system.

Warning! Purposeful actions to DISTORT or MISREPRESENT test data are a violation of professional and business ethics. Such acts are subject to SERIOUS criminal penalties that are punishable under federal or other statutes or regulations.

Challenge 2: Biased or Aliased SITE Data Measurements

When instrumentation such as measuring devices is connected or "piggybacked" onto test points, the resulting impact can bias or alias test data and/or degrade system performance. Test data capture should not degrade system performance. Thoroughly analyze the potential effects of test device bias or aliasing on system performance BEFORE instrumenting a test article. Investigate whether some data may be derived implicitly from other data. Decide:

1. How critically the data are needed.
2. Whether there are alternative data collection mechanisms or methods.
3. Whether the data "value" to be gained is worth the technical, cost, and schedule risk.

Challenge 3: Preserving and Archiving Test Data

The end technical goal of SITE and system verification is to establish that a system, product, or service fully complies with its System Performance Specification (SPS). The validity and integrity of the compliance decision reside in the formal acceptance test procedure (ATP) results used to record objective evidence. Therefore, ALL test data recorded during a formal ATP must be preserved by archiving in a permanent, safe, secure, limited-access facility. Witnessed or authenticated test data may be required to support:

1. A functional configuration audit (FCA) and a physical configuration audit (PCA) prior to system delivery and formal acceptance by the Acquirer for the User.
2. Analyses of system failures or problems in the field.
3. Legal claims.

Most contracts have requirements, and organizations have policies, that govern the storage and retention of contract data, typically for several years after the completion of a contract.
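Archiving is only half of preservation; later audits also need confidence that archived ATP results were not altered. One common safeguard, sketched below as an illustration rather than anything prescribed by the text, is to store a cryptographic digest alongside each archived record; the file naming and record fields are assumptions:

```python
import hashlib
import json
from pathlib import Path

def archive_atp_record(record: dict, archive_dir: Path) -> Path:
    """Write an ATP result record with a SHA-256 digest of its contents."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    digest = hashlib.sha256(payload).hexdigest()
    out = archive_dir / f"atp_{record['test_id']}.json"
    out.write_text(json.dumps({"record": record, "sha256": digest}, indent=2))
    return out

def verify_archive(path: Path) -> bool:
    """Recompute the digest; a mismatch flags possible alteration."""
    stored = json.loads(path.read_text())
    payload = json.dumps(stored["record"], sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest() == stored["sha256"]
```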
Challenge 4: Test Data Authentication

When formal test data are recorded, the validity of the data should be authenticated, depending on end usage. Authentication occurs in a number of ways. Generally, authentication is performed by an Independent Test Agency (ITA) or an individual within the Quality Assurance (QA) organization who is trained and authorized to authenticate test data in accordance with prescribed policies and procedures. Authentication may also be required by higher level bonded, external organizations. At a minimum, authentication criteria include a witnessed affirmation of the following:

1. Test article and test environment configuration.
2. Test operator qualifications and methods.
3. Test assumptions and operating conditions.
4. Test events and occurrences.
5. Accomplishment of expected results.
6. Pass/fail decision.
7. Test discrepancies.

Challenge 5: Dealing with One Test Article and Multiple Integrators and Testers

Because of the expense of developing large, complex systems, multiple integrators may be required to work sequentially in shifts to meet development schedules. This potentially presents problems when integrators on the next shift waste time uninstalling undocumented "patches" applied to a build during a previous shift. Therefore, each SITE work shift should begin with a joint coordination meeting of the persons going off shift and those coming on shift. The purpose of the meeting is to make sure everyone communicates and understands the changes to the current configuration "build" that transpired during the previous shift.

Challenge 6: Deviations and Waivers

When a system or item fails to meet its performance, development, and/or design requirements, the item is tagged as noncompliant. For hardware, a nonconformance report (NCR) documents the discrepancy and dispositions it for corrective action by a Material Review Board (MRB). For software, a software developer submits a Software Change Request (SCR) to a Software Configuration Control Board (SCCB) for approval. Noncompliances are sometimes resolved by issuing a deviation or waiver, or by rework or scrap, without requiring a CCB action.

[...]

Table 56.2 System deployment design and development rules

Rule 56.1 - System deployment requirements and constraints: Bound and specify the set of system deployment requirements and constraints for every system, product, or service in the System Performance Specification (SPS).

Rule 56.2 - Deployment conditions: When planning deployment of a system [...]

[...] the US National Environmental Policy Act (NEPA) and other legislation applies to your system's development and deployment.

Environmental Safety and Health (ES&H)

Environmental safety and health (ES&H) is a critical issue during system development and deployment. The objective is to safely and securely relocate a SYSTEM without impacting the system's capabilities and performance or endangering the health of [...]
[...] our System Design and Development Practices:

1. Does the system add value to the User and provide the RIGHT capabilities to accomplish the User's organizational missions and objectives (OPERATIONAL UTILITY)?
2. Does the system integrate and interoperate successfully within the User's system [...]

[...] situation, a stakeholder could become a SHOWSTOPPER and significantly impact system deployment schedules and costs. Do yourself and your organization a favor: understand the deployment, site selection and development, and system installation and integration decision-making chain. This is key to ensuring success when the time comes to deploy the system.

[...]

57 System Operations and Support

(System Analysis, Design, and Development, by Charles S. Wasson. Copyright © 2006 by John Wiley & Sons, Inc.)

What You Should Learn from This Chapter

1. What are the primary objectives for system operation and support (O&S)?
2. What are key areas for monitoring and analyzing SYSTEM [...]

[...] Specify and bound system requirements and incorporate them into the System Performance Specification (SPS) or system design.

Guidepost 56.1 Our discussion has focused on deploying a MISSION SYSTEM and designing it to be compatible with an existing system performing a SUPPORT SYSTEM role. Now let's switch the context and consider WHAT mission capabilities a SUPPORT SYSTEM requires.

[...] Therefore, the System Developer must factor in design features that facilitate production and logistical distribution of systems and products, such as tracking bar coding and packaging for environmental conditions.

56.2 SE ROLES AND RESPONSIBILITIES DURING DEPLOYMENT

The major SE activities related to system deployment occur during the System Procurement Phase and early SE Design Segment of the System Development [...]

[...] levels of system installation and checkout tests.

Installation and Checkout Plan Activities

Installation and checkout activities cover a sequence of activities, organizational roles and responsibilities, and tasks before the newly deployed system can be located at a specific job site. SYSTEM requirements that are unique to on-site system installation and integration must be identified by analysis and incorporated [...]

[...] to safely and properly operate and to support Operational Test and Evaluation (OT&E) during the final portions of the System Development Phase.

"Shadow" Operations

Installation and checkout of new systems may require integration into higher level systems. The integration may involve the new system as [...]

[...] Segment of the System Development Phase prior to the System Requirements Review (SRR). These activities include mission and system analysis, establishing site selection criteria, conducting site surveys, conducting trade-offs, deriving system requirements, and identifying system design and construction constraints.

56.3 Selection and Development of Operational Location
[...] historical, ethnic, and cultural systems that must be considered and preserved when deploying a system. The same is true for NATURAL ENVIRONMENT ecosystems such as wetlands, rivers, and habitat.

[...] precision adjustments in system/product functions and outputs.
3. Tools used to measure and record the system's environment, inputs, and outputs.
4. Tools used to analyze the system responses based [...]
