Ebook Relating system quality and software architecture Part 2


(BQ) Part 2 of the book "Relating System Quality and Software Architecture" covers lightweight evaluation of software architecture decisions; dashboards for continuous monitoring of quality for software products under development; achieving quality in customer-configurable products; and other topics.

CHAPTER 6
Lightweight Evaluation of Software Architecture Decisions

Veli-Pekka Eloranta (Tampere University of Technology, Tampere, Finland), Uwe van Heesch (Capgemini, Düsseldorf, Germany), Paris Avgeriou (University of Groningen, Groningen, The Netherlands), Neil Harrison (Utah Valley University, Orem, UT, USA), and Kai Koskimies (Tampere University of Technology, Tampere, Finland)

INTRODUCTION

Software architecture plays a vital role in the software engineering lifecycle. It provides a stable foundation upon which designers and developers can build a system that provides the desired functionality, while achieving the most important software qualities. If the architecture of a system is poorly designed, a software project is more likely to fail (Bass et al., 2003). Because software architecture is so important, it is advisable to evaluate it regularly, starting in the very early stages of software design. The cost of an architectural change in the design phase is negligible compared to the cost of an architectural change in a system that is already in the implementation phase (Jansen and Bosch, 2005). Thus, costs can be reduced by evaluating software architecture prior to its implementation, thereby recognizing risks and problems early.

Despite these benefits, many software companies do not regularly conduct architecture evaluations (Dobrica and Niemelä, 2002). This is partially due to the fact that architecture evaluation is often perceived as complicated and expensive (Woods, 2011). In particular, the presumed high cost of evaluations prevents agile software teams from considering architecture evaluations. Agile development methods such as Scrum (Cockburn, 2007; Schwaber, 1995; Schwaber and Beedle, 2001; Sutherland and Schwaber, 2011) do not promote the explicit design of software architecture. The Agile manifesto (Agile Alliance, 2001) states that the best architectures emerge from teams. Developers using Scrum tend to think that while using Scrum, there is no need for up-front architecture or architecture evaluation. However, this is not the case. If there is value for the customer in having an architecture evaluation, it should be carried out. This tension between the architecture world and the agile world has been recognized by many authors (see Abrahamsson et al., 2010; Kruchten, 2010; Nord and Tomayko, 2006). The fact that many popular architecture evaluation methods take several days when carried out in full scale (Clements et al., 2002; Maranzano et al., 2005) amplifies this problem. There have been some efforts (Leffingwell, 2007) to find best practices for combining architecture work and agile design, but they do not present solutions for architecture evaluation. In this chapter, we address this problem by proposing a decision-centric software architecture evaluation approach that can be integrated with Scrum.

In the last few years, the software architecture community has recognized the importance of documenting software architecture decisions as a complement to traditional design documentation (e.g., UML diagrams) (Jansen and Bosch, 2005; Jansen et al., 2009; Weyns and Michalik, 2011). In general, software architecture can be seen as the result of a set of high-level design decisions. When making decisions, architects consider previously made decisions as well as various other types of decision drivers, which we call decision forces (see van Heesch et al., 2012b). Many of the existing architecture evaluation methods focus on evaluating only the outcome of this decision-making process, namely, the software architecture. However, we believe that evaluation of the decisions behind the software architecture provides greater understanding of the architecture and its ability to satisfy the system's requirements. In addition, evaluation of decisions allows organizational and economic constraints to be taken into account more comprehensively than when evaluating the resulting software architecture. Cynefin (Snowden and Boone, 2007)
provides a general model of decision making in a complex context. According to this model, in complex environments one can only understand why things happened in retrospect. This applies well to software engineering and especially software architecture design. Indeed, agile software development is said to be at the edge of chaos (e.g., Cockburn, 2007; Coplien and Bjørnvig, 2010; Sutherland and Schwaber, 2011). With respect to software architecture, this means that one cannot precisely forecast which architecture decisions will work and which will not. Decisions can only be evaluated reliably in retrospect. Therefore, we believe that there is a need to analyze architecture decisions after the implications of a decision can be known to at least some extent, but still at the earliest possible moment. If a decision needs to be changed, the validity of the decision has to be ensured again later on in retrospect. Additionally, because decisions may be invalidated by other decisions, a decision may have to be re-evaluated. Because of this need to re-evaluate decisions, an iterative evaluation process is required.

This chapter is an extension of the previously published description of the decision-centric software architecture evaluation method, DCAR, by van Heesch et al. (2013b). In this chapter, we describe the method in more detail and describe how it can be aligned with the Scrum framework.

This chapter is organized as follows. Section 6.1 discusses the existing architecture evaluation methods and how those methods take architecture decisions into account. The suitability of these methods for agile development is also briefly discussed. Section 6.2 presents the concept of architecture decisions. The decision-centric architecture review (DCAR) method is presented in Section 6.3. Section 6.4 summarizes the experiences from DCAR evaluations in industry. Possibilities for integrating DCAR as a part of the Scrum framework are discussed in Section 6.5, based on observed architecture practices in Scrum. Finally, Section 6.6 presents concluding remarks and future work.

6.1 ARCHITECTURE EVALUATION METHODS

Software architecture evaluation is the analysis of a system's capability to satisfy the most important stakeholder concerns, based on its large-scale design, or architecture (Clements et al., 2002). On the one hand, the analysis discovers potential risks and areas for improvement; on the other hand, it can raise confidence in the chosen architectural approaches. As a side effect, architecture evaluation can also stimulate communication between the stakeholders and facilitate architectural knowledge sharing.

Software architecture evaluations should not be thought of as code reviews. In an architecture evaluation, the code is rarely viewed. The goal of architecture evaluation is to find out whether the architecture decisions made support the quality requirements set by the customer, and to find signs of technical debt. In addition, decisions and solutions preventing road-mapped features from being developed during the evolution of the system can be identified. In other words, areas of further development in the system are identified. In many evaluation methods, business drivers that affect the architectural design are explicitly mentioned, and important quality attributes are specified. Given that these artifacts are also documented during the evaluation, the evaluation may improve the architectural documentation (AD) as well. In addition, as evaluation needs AD, some additional documentation may be created for the evaluation, contributing to the overall documentation of the system.

The most well-known approaches to architecture evaluation are based on scenarios, for example, SAAM (Kazman et al., 1994), ATAM (Kazman et al., 2000), ALMA (Architecture-Level Modifiability Analysis) (Bengtsson, 2004), FAAM (Family-Architecture Assessment Method) (Dolan, 2002), and ARID (Active Review of Intermediate Designs) (Clements, 2000). These methods are considered mature: They have been validated in industry (Dobrica and Niemelä, 2002), and they have been in use for a long time. In general, scenario-based evaluation methods take one or more quality attributes and define a set of concrete scenarios concerning them, which are analyzed against the architectural approaches used in the system. Each architectural approach is either a risk or a nonrisk with respect to the analyzed scenario. Methods like ATAM (Kazman et al., 2000) also explicitly identify decisions that are a trade-off between multiple quality attributes and decisions that are critical to fulfilling specific quality attribute requirements (so-called sensitivity points).

Many of the existing architecture evaluation methods require considerable time and effort to carry out. For example, a SAAM evaluation is scheduled for one full day with a wide variety of stakeholders present. The SAAM report (Kazman et al., 1997) shows that in 10 evaluations performed by the SEI, where project sizes ranged up to 100 KLOC (thousand lines of code), the effort was estimated to be 14 days. Also, a medium-sized ATAM might take up to 70 person-days (Clements et al., 2002). On the other hand, there are some experience reports indicating that less work might bring results as well (Reijonen et al., 2010). In addition, there exist techniques that can be utilized to boost architecture evaluation (Eloranta and Koskimies, 2010). However, evaluation methods are often so time consuming that it is impractical to perform them repeatedly. Two- or three-day evaluation methods are typically one-shot evaluations. This might lead to a situation where the software architecture is not evaluated at all, because there is no suitable moment for the evaluation. The architecture typically changes constantly, and once the architecture is stable enough, it might be too late for the evaluation because much of the system is already implemented.

Many scenario-based methods consider scenarios as refinements of the
architecturally significant requirements, which concern quality attributes or the functionality the target system needs to provide. These scenarios are then evaluated against the decisions. These methods do not explicitly take other decision drivers into account, for example, expertise, organization structure, or business goals. CBAM (Cost Benefit Analysis Method) (Kazman et al., 2001) is an exception to this rule because it explicitly regards financial decision forces during the analysis. The method presented in this chapter holistically evaluates architecture decisions in the context of the architecturally significant requirements and other important forces like business drivers, company culture and politics, in-house experience, and the development context.

Almost all evaluation methods identify and utilize architecture decisions, but they do not validate the reasoning behind the decisions. Only CBAM also operates partially in the problem space. The other methods merely explore the solution space and try to find out which consequences of the decisions are not addressed. In DCAR, architecture decisions are a first-class entity, and the whole evaluation is carried out purely by considering the decision drivers of the decisions made.

Architectural software quality assurance (aSQA) (Christensen et al., 2010) is an example of a method that is iterative and incremental and has built-in support for agile software projects. The method is based on the utilization of metrics, but it can be carried out using scenarios or expert judgment, although the latter option has not been validated in industry. It is also considered more lightweight than many other evaluation methods, because it is reported to take only a few hours per evaluation. However, aSQA does not evaluate architecture decisions, but rather uses metrics to assess the satisfaction of the prioritized quality requirements.

Pattern-based architecture review (PBAR) (Harrison and Avgeriou, 2010) is another example of a lightweight method that does not require extensive preparation by the company. In addition, PBAR can be conducted in situations where no AD exists. During the review, the architecture is analyzed by identifying patterns and pattern relationships in the architecture. PBAR, however, also focuses on quality attribute requirements and does not regard the whole decision-making context. It also specializes in pattern-based architectures and cannot be used to validate technology or process related decisions, for instance.

Many of the existing evaluation methods focus on a certain quality attribute (such as maintainability in ALMA (Bengtsson, 2004), or interoperability and extensibility in FAAM (Dolan, 2002)), or on some other single aspect of the architecture, such as economics in CBAM (Kazman et al., 2001). However, architecture decisions are affected by a variety of drivers. The architect needs to consider not only the desired quality attributes and costs, but also the experience, expertise, organization structure, and resources, for example, when making a decision. These drivers may change during system development, and while a decision might still be valid, new, more beneficial options might have become available; these should be taken into consideration. We intend to support this kind of broad analysis of architecture decisions with DCAR. Further, we aim at a method that allows the evaluation of software architecture iteratively, decision by decision, so that it can be integrated with agile development methods and frameworks such as Scrum (Schwaber and Beedle, 2001).

6.2 ARCHITECTURE DECISIONS

When architects commence the design of a new system or design the next version of an existing system, they struggle on the one hand with a large number of constraining factors (e.g., requirements, constraints, risks, company culture, politics, quality attribute requirements, expectations) and on the other hand with possible design options (e.g., previously
applied solutions, software patterns, tactics, idioms, best practices, frameworks, libraries, and off-the-shelf products). Typically, there is not a well-defined, coherent, and self-contained set of problems that need to be solved, but a complex set of interrelated aspects of problems, which we call decision forces (van Heesch et al., 2012b) (decision forces are explained in detail in the next section). The same holds for solutions: There is not just one way of solving a problem, but a variety of potential solutions that have relationships to each other and consequences; those consequences include benefits and liabilities and may in turn cause additional problems once a solution is applied.

When architects make decisions, they choose solution options to address specific aspects of problems. In the literature, these decisions are often referred to as architecture decisions (van Heesch et al., 2012b; Ven et al., 2006). The architecture decisions together with the corresponding design constitute the software architecture of a system. They establish a framework for all other, more low-level and more specific design decisions. Architecture decisions concern the overall structure and externally visible properties of a software system (Kruchten, 2004). As such, they are particularly important for making sure that a system can satisfy the desired quality attribute requirements.

Decisions cannot be seen in isolation; rather, they form a web of interrelated decisions that depend on, support, or contradict each other. For example, consider how the selection of an operating system constrains other solutions that have to be supported by the operating system. Sometimes multiple decisions have to be applied to achieve a desired property; sometimes decisions are made to compensate for the negative impact of another decision.

In his ontology of architectural design decisions, Kruchten differentiates between existence decisions, property decisions, and executive decisions (Kruchten, 2004). Existence decisions concern the presence of architectural elements, their prominence in the architecture, and their relationships to other elements. Examples of existence decisions are the choice of a software framework, the decision to apply a software pattern, or an architectural tactic (see, e.g., Harrison and Avgeriou, 2010; Kruchten, 2004). Property decisions concern general guidelines, design rules, or constraints. The decision not to use third-party components that require additional license fees for redistribution is an example of a property decision. These decisions implicitly influence other decisions, and they are usually not visible in the architecture if they are not explicitly documented. Finally, executive decisions mainly affect the process of creating the system, instead of affecting the system as a product itself. They are driven mainly by financial, methodological, and organizational aspects. Example executive decisions are the number of developers assigned to a project, the software development process (e.g., RUP or Scrum), or the tool suite used for developing the software.

Existence decisions are of the highest importance concerning a system's ability to satisfy its objectives. However, during architecture evaluation, property decisions are also important because they complement the requirements and form a basis for understanding and evaluating the existence decisions. Executive decisions are of less importance in architecture evaluation, because they usually do not have an important influence on the system's ability to satisfy its goals, nor are they important for evaluating other decisions.

As one of the first steps in the DCAR method, presented in this chapter, the participants identify the architecture decisions made and clarify their interrelationships. Understanding the relationships helps to identify influential decisions that have wide-ranging consequences for large parts of the architecture. Additionally, when a specific decision is
evaluated, it is important to also consider its related decisions, because, as described above, decisions can seldom be regarded in isolation.

In their previous work, some of the authors of this chapter developed a documentation framework for architecture decisions, following the conventions of the international architecture description standard ISO/IEC/IEEE 42010 (Avgeriou et al., 2009; ISO/IEC, 2011). The framework comprises five architecture decision viewpoints that can be used to document different aspects of decisions for different stakeholders' concerns. These viewpoints can offer support for making rational architecture decisions (van Heesch et al., 2013a). One of the viewpoints, the so-called decision relationship viewpoint, is used in DCAR to describe the various relationships between the elicited decisions. Figure 6.1 shows an example of a relationship view. The ellipses represent decisions. Each decision has a name (e.g., "Connection Solver") and a state.

FIGURE 6.1 Excerpt from a decision relationship view model.

The state of a decision can be one of the following:

• Tentative: A considered solution that has not been decided so far.
• Discarded: A decision that was tentative, but not decided, for a specific reason.
• Decided: A chosen design option.
• Approved: A decision that was approved by a review board.
• Challenged: A formerly made decision that is currently put into question.
• Rejected: A formerly made decision that became invalid.

Theoretically, each of these decision states can be identified during a DCAR session; however, in most cases, a review is done either to approve decisions in the "decided" state (e.g., in a greenfield project) or to challenge decisions that are already in the "approved" state (during software evolution).

Apart from decisions, the decision relationship view shows relationships between decisions, depicted by directed arrows. Relationships can have one of the following types:

• Depends on: A decision is only valid as long as another decision is valid. As an example, the decision to use the Windows Presentation Foundation classes depends on the decision to use a .NET programming language.
• Is caused by: A decision was caused by another decision without being dependent on it. An example is the use of a third-party library (decision 1) because it is supported out of the box by a chosen framework (decision 2).
• Is excluded by: A decision is prevented by another decision.
• Replaces: A decision was made to replace another decision. In this case, the other decision must be rejected.
• Is alternative for: A tentative decision is considered as an alternative to another decision.

During a DCAR session, all types of relationships can be identified. As described above, it is important to identify decision relationships, because decisions cannot be evaluated in isolation. A decision relationship view makes explicit all other decisions that have to be considered as forces during the evaluation of a specific decision. The next subsection elaborates on the concept of forces in general. The use of the decision relationship view is described in its own section later.

6.2.1 Decision forces

When architects make decisions, many forces influence them. Here, a force is "any aspect of an architectural problem arising in the system or its environment (operational, development, business, organizational, political, economic, legal, regulatory, ecological, social, etc.), to be considered when choosing among the available decision alternatives" (van Heesch et al., 2012b). Forces are manifold, and they arise from many different sources. For instance, forces can
result from requirements, constraints, and technical and domain risks; software engineering principles like high cohesion or loose coupling; business-related aspects like a specific business model, company politics, quick time to market, low price, or high innovation; but also tactical considerations, such as using new technologies to avoid vendor lock-in although the old solution has proven to work reliably. Architects consciously or unconsciously balance many forces. It is quite common that forces contradict each other; therefore, in most situations a trade-off between multiple forces needs to be found.

Conceptually, decision forces have much in common with physical forces; they can be seen as vectors that have a direction (i.e., they point to one of the considered design options) and a magnitude (the degree to which the force favors the design option). The act of balancing multiple forces is similar to combining force vectors in physics, as shown in the following example. Figure 6.2 illustrates the impact of two forces on the available options for a specific design problem. The architect narrowed down a design problem to three design options for allowing the user to configure parts of an application. Option one is to develop a domain-specific language (DSL) from scratch; option two is to extend an existing scripting language like Ruby to be used as a DSL; the third option is to use an XML file for configuration purposes. Force one, providing configuration comfort for end users, attracts the architect more toward option one, because the configuration language could be streamlined for the application, leaving out all unnecessary keywords and control structures. Force two attracts the architect more toward option three, because this requires the least implementation effort. Neither of the forces favors option two, because extending an existing language requires some implementation effort, but on the other hand, it is not possible to streamline the language for the exact needs in
the application domain. Figure 6.3 shows the resulting combined force for the design problem. This means that, considering only configuration comfort and implementation effort as decision forces, the architect would decide to use XML as the configuration language. In more realistic scenarios, more forces would need to be considered. This is particularly true for architecture evaluations, in which the reviewers try to judge the soundness of the architecture.

FIGURE 6.2 Two forces for a design problem (F1: provide configuration ease for domain users; F2: keep implementation effort low).

FIGURE 6.3 Resulting force for the design problem.

In DCAR, in order to get to a solid judgment, the reviewers elicit all relevant forces that need to be considered in the context of a single decision. Then the balance of the forces is determined. To help the reviewers get a thorough overview of the forces that influence single decisions, but also to get an overview of the decisions impacted by single forces, a decision forces view (van Heesch et al., 2012b) is created. Table 6.1 shows a forces view for the three design options mentioned above, together with a few forces. The forces view lists decision topics (in this case, "How do we describe configuration data?") and decision options for each decision topic in the top row, while the first column lists forces that have an impact on the decision topic. For each decision option and force, the intersection cell shows the estimated impact of the force on the decision option. This impact can be very positive (++), positive (+), neutral (0), negative (-), or very negative (--).

Table 6.1 Excerpt from a Forces View: How to Describe Configuration Data?

Forces \ Decisions               Develop DSL from Scratch   Extend Scripting Language   Use XML
Usage comfort for domain users   +                          -
Implementation effort            -                          -                           ++
Error proneness                  -                          +                           ++
Required developer skills        --                                                     ++

When evaluating decisions in DCAR, the reviewers compare the weights of all forces impacting a decision option to judge its soundness. In the example shown in Figure 6.3, using XML as the configuration option is a good choice in the context of the considered forces, while creating a DSL from scratch seems to be a suboptimal choice; extending an existing scripting language is neutral. Note that it is not always realistic to elicit other design options originally considered as alternatives to the chosen solution. The more common case is that decisions are evaluated against new design options or without considering alternatives at all. In such a case, extending an existing scripting language could be approved because there are no strong forces against it. We would argue that in both up-front design and in architecture evaluations, it always makes sense to consider multiple decision alternatives to form a thorough judgment on the soundness of design options. Section 6.3 describes the DCAR method in detail.

6.3 DECISION-CENTRIC ARCHITECTURE REVIEW

DCAR comprises 10 steps, most of which take place during the evaluation session itself. The steps of the method with their corresponding outputs are presented in Table 6.2. In the following subsections, the participants of the evaluation session and the evaluation team roles are presented. Then each of the steps is explained in detail.

6.3.1 Participants

The participants of the DCAR evaluation session can be divided into two main groups: stakeholders and the evaluation team. The evaluation team is responsible for facilitating the session and carrying out the DCAR steps on schedule. Stakeholders are persons who have a vested interest in the system being evaluated. While it is beneficial to have people from different stakeholder groups present, DCAR requires only the lead
architect and some of the other designers to be present. Typically, other stakeholders join the session when their expertise is needed. For example, during the analysis phase of the project under evaluation, it is beneficial to have requirements engineers, representatives of subcontractors, and the testing team available as participants.

Table 6.2 DCAR Steps and Outputs of Each Step

Step                                           Outputs
1.  Preparation                                Material for evaluation
2.  DCAR presentation                          –
3.  Business drivers and domain presentation   Decision forces
4.  Architecture presentation                  Architecture decisions, decision forces
5.  Decision completion                        Revised list of decisions, decision relationship graph
6.  Decision prioritization                    Prioritized decisions
7.  Decision documentation                     Documentation of most important decisions
8.  Decision analysis                          Potential risks and issues, forces mapped to decisions, revised decision documentation, decision approval
9.  Retrospective                              Input for process improvement
10. Reporting                                  Evaluation report

The evaluation team has the following roles: evaluation leader, decision scribe, forces scribe, and questioner. The roles can be combined, but this may require more experience from the evaluators if they need to handle multiple roles. The roles are described in the following.

The evaluation leader facilitates the evaluation sessions and makes sure that the evaluation progresses and does not get stuck on any specific issue. The evaluation leader also opens the session and conducts the retrospective (step 9) at the end. The decision scribe is the main scribe for the evaluation session. The decision scribe's responsibility is to write down the initial list of architecture decisions, elicited by the evaluation team during the presentations (steps 3 and 4). In the decision completion phase (step 5), the decision scribe will prepare a decision relationship view showing the decisions and their relationships. If necessary, the names of decisions and their relationships are rephrased. During
the analysis (step 8), the decision scribe will write down stakeholders' arguments for or against decisions. In other words, the decision scribe continuously revises the decision documentation produced. The forces scribe captures the forces during the presentations (steps 3 and 4) and is responsible for producing a decision forces view during the review session. The forces scribe also challenges the decisions during the analysis phase (step 8), using the elicited forces list. The questioner asks questions during the analysis phase (step 8) and tries to find new arguments for and against the reviewed decisions. External domain experts can also act as questioners during the evaluation.

6.3.2 Preparation

In the preparation phase (step 1), the evaluation team is gathered, and each review team member is assigned one or more roles. The date for the DCAR evaluation session is settled, and stakeholders are invited to participate. A preparation meeting is a good way to communicate the preparations needed. The architect is asked to prepare a 45-min architecture presentation for the evaluation. In the preparation
Jacob, R.J.K., 290 Jaghoori, M.M., 257 Jamieson, S., 27–28, 30 Jamoussi, Y., 62 Jansen, A., 154, 157, 158, 161, 292, 298 Jansen-Vullers, M., 62 Janssen, M., 62 Jarke, M., 62 Jeffrey, R., 263–264, 265, 273–274, 283, 283t 365 Jeffries, R., 226 Jezequel, J.-M., 62 Johansson, L., 215–216 John, B., 304 Johnson, C.S., 297–298 Johnson, D.M., 342 Johnson, R.E., 197 Johnston, A., 297–298 Jones, C., 241 Jonsson, N., 218 Jordan, E., 184, 210 Jrgensen, M., 326 Joshi, K., 60 Juergens, E., 50, 124, 125, 184 Jureta, I.J., 51, 60, 116 Juristo, N., 304 K Kaczmarek, M., 60 Kahn, B.K., 216–217 Kaikkonen, A., 304 Kallio, T., 304 Kalnins, A., 327–328 Kalviainen, H., 349 Kang, K.C., 237–238, 290 Kang, S., 256 Kankainen, A., 304 Kannengiesser, U., 23, 26 Karlsson, G., 218, 227 Kaspar, H., 50, 124, 151 Kassab, M., 61, 352 Ka¨stner, C., 240, 255, 256 Kayed, A., 61, 151 Kazman, R., 9, 10, 11, 15, 18, 23, 26, 29, 61, 92, 123, 124, 125, 143, 153, 157, 158, 159, 160, 181, 182, 183, 185–186, 189, 263–264, 265, 273–274, 303 Keka¨la¨inen, A., 304 Kellner, M.I., 217 Kelly, D., 291 Kelly, L., 257 Kelly, T., 26 Kelton, W.D., 62 Kephart, J.O., 16–17 Kesdogan, D., 116 Key, A., 329 Khayyambashi, M.R., 154 Khosravi, R., 257 Kilpatrick, P., 62 Kim, D.-K., 61 Kim, E., 60 Kim, G.J., 290–291, 292 366 Author Index Kim, H., 34, 290, 304 Kim, J., 290, 304 Kim, S.H., 61 Kinder, S., 299 King, N., 330, 332–333 Kitchenham, B., 27, 50, 124, 217 Kitchin, D., 255 Kjeldskov, J., 304 Klas, M., 152 Klein, M., 11, 23, 34, 61, 125, 153, 157, 158, 159, 160, 263–264, 265, 273–274, 303 Kleinermann, F., 290 Kleis, W., 340 Klendauer, R., 349 Kloppenburg, S., 184 Klose, K., 184 Kneitinger, H., 77, 80–81 Knodel, J., 184 Koka, B., 327–328, 351 Kolovos, D.S., 183–184 Kontogiannis, K., 61 Kop, C., 41–74 Kopcsek, J., 255 Koskimies, K., 159, 161, 175–176 Koskinen, J., 159 Kostelijk, T., 125, 153 Ko¨stner, C., 269, 283 Kotonya, G., 294 Kowalczyk, M., 61 Kowalewski, S., 124 Krafzig, D., 195–196 Krcmar, H., 349 Kreutzmann, H., 77–78, 
78f, 79, 80, 89, 103, 116 Kriege, J., 62 Krogstie, J., 61 Kronen, M., 304 Kruchten, P.B., 10, 23, 26, 33, 35, 42, 61–62, 157, 161, 236, 237, 292, 297–298 Kuhlemann, M., 240, 256 Ku¨hnel, C., 256 Kumar Dubey, S., 303 Kumar, G.S., 181 Kuriakose, J., 201 Ku¨ster, J., 76, 76f, 83, 87 Kutar, M., 304–305 L Lackner, H., 256 Lago, P., 161, 292, 325–327, 329 Lahire, P., 263, 264 Lampasona, C., 152 Lamsweerde, A., 76f, 115 Lang, J., 76f, 116 Larumbe, I., 77, 80–81 Lassche, R., 77, 80–81 Lassing, N., 61 Lattanze, A.J., 9, 61 Lattanze, T., 263–264, 265, 273–274 Lauesen, S., 61, 326, 327–328, 332, 342, 350 Layland, J.W., 257 Lebbink, H.-J., 66 Lee, D., 256 Lee, J., 23, 26, 256, 290 Lee, K., 332–333 Lee, S., 61 Lee, Y.W., 60, 216–217, 304 Leffingwell, D., 61, 157 Legeard, B., 241, 256 Leimeister, J.M., 349 Letier, E., 76f, 115 Leung, J., 257 Leveson, N.G., 126, 152 Lewis, P.R., 17 Liang, P., 154 Liao, S., 76f, 87, 115–116 Lim, Y.-K., 62 Lin, H., 303–324 Lincoln, P., 205 Lindsay, R.M., 27 Lindstro¨m, B., 160 Lindvall, M., 184 Lipow, M., 50, 124, 151 Little, R., 182, 183, 267–268, 269 Liu, C.L., 257 Lo, R.-Y., 326, 327–328 Lochau, M., 244, 247, 256 Lochmann, K., 50, 124, 125, 152 Lohmann, S., 205 Lohr, C., 42 Looker, N., 62 Loucopoulos, P., 51 Lo¨we, W., 17, 181 Lucas, F.J., 181, 183–184 Luckey, M., 152 Lun, H.H., 342 Lund, M.S., 51 Lundhal, O., 304–305 Lundin, M., 218 Lyu, M.R., 62 M MacLeod, G.J., 50, 124, 151 MacLeod, M., 304 Madhavji, N.H., 26, 325–326, 327, 328 Magee, J., 16–17 Author Index Magoutas, B., 51 Mair, M., 201 Maldonado, J.C., 329 Malek, S., 17 Mallery, P., 30 Mann, S., 255 Ma¨nnisto¨, T., 256 Mant, G., 299 Maranzano, J., 157 Markert, F., 244, 247, 256 Marler, R., 76f, 86 Martens, N., 329, 348, 349–350 Martinez, F., 197 Martinez, M., 291 Martı´nez-Torres, M.R., 27 Martı´-Oliet, N., 205 Martnez, C., 77, 80–81 Marzolla, M., 62 Masmanoy, M., 257 Masolo, C., 43, 51 Massonet, P., 61 Matinlassi, M., 61 Matsinger, A., 263 Mauri, G., 77, 80–81 Mayer, K.J., 327–328 Mayr, 
H.C., 41–74 McCall, J.A., 50, 124, 151 McGee, R.A., 218 McGregor, J.D., 241, 256 McGuinness, D.L., 205 McHale, J., 36 McManus, J., 61 Mead, M., 330 Meding, W., 209, 211, 215–216, 218, 219, 220, 226, 227 Medvidovic, N., 29, 123, 257 Mei, H., 256 Meis, R., 76f, 87 Mendez, D., 152 Mendonc¸anda, N., 184 Mentzas, G., 51 Merks, E., 192 Merritt, M.J., 50, 124, 151 Merson, P., 23, 24, 29, 33, 267–268, 269 Meseguer, J., 205 Metge, J.-J., 257 Meunier, R., 197, 204 Meyer, J.-J.C., 66 Mezini, M., 184 Michalik, B., 158 Miller, M., 299 Miller, S.P., 241–242 Mirakhorli, M., 14, 326–327 Miranda, E., 77, 80–81 367 Mirandola, R., 194, 198–199 Mistrı´k, I., 325–327, 329 Mitchell, M.I., 27, 28, 30, 37 Mlynarski, M., 256 Mohamed, T., 304 Molina, F., 181, 183–184 Molina, J.C., 50 Momm, C., 61 Moore, S., 290 Moreno, A.M., 303, 304 Morgan, M.J., 235 Morgaz, M., 77, 80–81 Moriwaki, Y., 263, 264 Mortimer, K., 332–333 Mosterman, P.J., 241, 256 Mugnaini, A., 60 Mu¨nch, J., 61, 152 Munro, M., 62 Murphy, G.C., 184 Mustajoki, J., 84, 116 Muthig, D., 184 Myers, G.J., 241 Mylla¨rniemi, V., 256 Mylopoulos, J., 19, 51, 60, 61, 76f, 87, 115–116 Myrtveit, I., 217 N Naab, M., 184 Nakagawa, E.Y., 329 Nardi, D., 205 Nardi, J.C., 51 Navarro, E., 77, 80–81 Nekvi, M.R., 327, 328 Nentwich, C., 183–184 Nesi, P., 217 Netjes, M., 62 Neumann, P.G., 126, 152 Nicholson, B., 349 Niemela¨, E., 125, 153, 157, 159, 263–264, 265, 273–274 Nijhuis, J.A., 161 Nikula, U., 349 Nilsson, C., 209, 215, 218, 227 Nilsson, J., 220 Nishio, Y., 263, 264 Nixon, B.A., 19, 60, 61 Nolan, A.J., 241 Nord, R.L., 10, 23, 24, 26, 29, 33, 36, 123, 157, 182, 183, 267–268, 269, 297–298 Northrop, L., 159, 233, 263–264, 265, 273–274, 282 Notkin, D., 184 Noulard, E., 257 Novak, W.E., 237–238 Nuseibeh, B.A., 14, 76, 289, 295, 326–327 368 Author Index O Obbink, H., 26, 33, 297–298 Oberhauser, R., 61 O’Brien, D., 297–298 O’Connor, R., 42 Offutt, J., 241, 256 Olsen, D.H., 27 Oltramari, A., 43 Ongkingco, N., 184 Opdahl, A.L., 27, 61 Oquendo, F., 329 
Orcajada, A., 77, 80–81 Ormandjieva, O., 61, 352 Oster, S., 244, 247, 256 O’Sullivan, R., 297–298 Otto, P., 76f Ozdemir, M., 84–86, 105 Ozkose Erdogan, O., 282 Pinedo, M.L., 257 Pizka, M., 151–152 Plasil, F., 194, 198–199 Plo¨dereder, E., 184 Plosch, R., 152 Poels, G., 51 Pohl, K., 62, 263, 325, 350 Polack, F.A., 183–184 Pomerol, J.C., 76f, 84 Poort, E.R., 329, 348, 349–350 Poppendieck, M., 175, 211 Poppendieck, T., 175, 211 Prenninger, W., 256 Prester, F., 256 Pretschner, A., 241, 256 Probert, D.R., 211 Proper, E., 351 Pye, M., 66 P R Paci, F., 76f Paelke, V., 290 Pagetti, C., 257 Paige, R.F., 183–184 Pajot, A., 77, 80–81 Palm, K., 219 Pandazo, K., 215–216 Parent, C., 65 Park, J., 61 Park, S., 61 Parnas, D.L., 18 Passos, L., 184 Pastor, O., 50, 325–326, 329, 349–350 Patel, S., 201 Patel-Schneider, P.F., 205 Paternostro, M., 192 Patton, M.Q., 353–354 Pawlowski, S., 75, 76f Pedrinaci, C., 60 Pellens, B., 290 Pereira, R., 62 Perini, A., 51 Perry, D.E., 181 Perry, W.E., 124–125 Petersen, K., 226 Pfleeger, B.A., 27 Pfleeger, S.L., 50, 51, 124 Phaal, R., 211 Phillips, L., 197 Piattini, M., 51 Picht, M., 62 Pierini, P., 62 Raatikainen, M., 256 Radtke, G., 77, 80–81 Raffo, D.M., 217 Rajan, A., 241–242 Ramachandran, S., 181 Ramdane-Cherif, A., 26 Ramsdale, C., 299 Ran, A., 26, 33, 297–298 Rana, A., 303 Randall, D., 349 Rangarajan, K., 181 Rauch, T., 349 Rausch, A., 194, 198–199, 200, 201 Raymond, G., 61 Raza, A., 184 Redpath, S., 349 Regnell, B., 329, 350 Reichert, M., 61 Reijonen, V., 159 Reimann, C., 290 Remco, C., 34 Remero, G., 77, 80–81 Reussner, R., 194, 198–199 Reynolds, M., 205 Ribo´ , J.M., 51 Richards, P.K., 50, 124, 151 Richardson, I., 297–298 Ripolly, I., 257 Robinson, H., 210 Robinson, W., 75, 76f Rock, G., 255 Rockwell, C., 340 Author Index Rodden, T., 349 Rodrguez, C., 77, 80–81 Rodrı´guez, A., 51 Rogai, D., 217 Rohnert, H., 197, 204 Rombach, D.H., 65 Rombach, H.D., 185, 272–273 Rose, L.M., 183–184 Rosenbach, W., 290 Rosenmu¨ller, M., 264, 269, 283 
Roshandel, R., 326–327 Ro¨ßler, F., 65 Roy, B., 263–264, 272–273 Rozinat, A., 61 Rozsypal, S., 157 Ruettinger, S., 349 Ruhe, G., 211 Ruiz Corte´s, A., 256 Runeson, P., 353 Ryan, C., 304 Ryan, M., 62 S Saake, G., 240, 256 Saaty, T., 76f, 84–86, 84f, 104, 105–108 Sabouri, H., 257 Sadowski, D.A., 62 Sadowski, R.P., 62 Sahay, S., 349 Saif ur Rahman, S., 269, 283 Sajaniemi, J., 349 Saliu, M.O., 211 Samhan, A.A., 61, 151 Sampaio, P.R.F., 51 Sanchez-Segura, I., 303 Sa´nchez-Segura, M.I., 61, 304 Sanders, R., 291 Sangal, N., 184, 210 Sanin, C., 66 Santen, T., 76f Sarcia, S.A., 296 Sarkar, S., 181 Sawyer, P., 349 Scacchi, W., 214 Schaefer, I., 240, 255, 256 Scheepers, P., 353 Schieferdecker, I., 241, 256 Schindler, I., 201 Schlingloff, H., 256 Schmerl, B., 16–17 Schmid, K., 263, 283 Schmidt, H., 76, 76f, 83, 87 Schneider, K., 62, 65 Schwaber, K., 157, 158, 175–176 Seddon, P., 353 Seffah, A.M., 303, 304 Seffah, S., 303 Segura, S., 256 Seo, J., 290 Shahin, M., 154 Shapiro, D., 349 Sharp, H., 210 Shaw, M., 29, 123 Shekhovtsov, V.A., 41–74 Shelton, C., 34 Sherman, B., 326, 327–328 Shin, G.S., 23, 26 Shollo, A., 215–216 Siegmund, N., 264, 269, 283 Silva, J.P., 61 Sindre, G., 61 Sinha, V., 184, 210 Sivagnanam, S., 181 Sjberg, D.I.K., 326 Slama, D., 195–196 Smith, C.U., 61 Snowden, D., 158, 349 Snyder, C., 62 So¨derqvist, B., 211 Sokenou, D., 256 Solingen, R., 272–273 Sommerlad, P., 197, 204 Sommerville, I., 294, 325–327, 329, 349 Song, H., 61 Song, X., 327–328 Soni, D., 10, 26, 123 Sorid, D., 290 Sostawa, B., 256 Soto, M., 61 Spaccapietra, S., 65 Spalazzese, R., 62 Spence, I., 62 Srinivasan, A., 257 Stafford, J.A., 9, 61, 184, 267–268, 269 Stage, J., 304 Staron, M., 209, 211, 215–216, 218, 219, 220, 226, 227 Starzecka, M., 60 Steen, A., 77, 80–81 Steenkiste, P., 16–17 Stegelmann, M., 116 Steghuis, C., 351 Steinberg, D., 192 Stensrud, E., 217 Stephenson, P., 299 369 370 Author Index Stevens, S.S., 30 Steward, D.V., 184 Stol, K.-J., 297–298 Stolen, K., 51 Stoll, P., 304 Stolterman, 
E., 62 Storey, N., 126, 152 Strauss, A.L., 63, 64 Strong, D.M., 216–217 Stuckenschmidt, H., 65 Subiaco, P., 296 Suchman-Simone, S., 61 Sullivan, K.J., 17, 18, 184 Supakkul, S., 14 Susi, A., 51 Sutcliffe, A., 61, 62 Sutherland, J., 157, 158, 175 Sweetser, P., 342 Szczerbicki, E., 66 T Tadj, C.K., 26 Taentzer, G., 76f, 115 Tahvildari, L., 61 Talcott, C., 205 Tang, A., 158, 161, 292, 298 Tanriverdi, V., 290 Tarruell, F., 77, 80–81 Taylor, H., 197 Taylor, R.N., 29, 123, 257 Tekampe, N., 77–78, 78f, 79, 80, 89, 103, 116 Tekinerdogan, B., 263–264, 282 Tenenberg, J., 62 ter Hofstede, A., 61 Terra, R., 184 Terrenghi, L., 304 Teuchert, S., 151–152 Tewoldeberhan, T., 62 Thaker, S., 255 Thu¨m, T., 240, 256 Tian, J., 61 Tibble, J., 184 Tofan, D., 292 Tolstedt, J.L., 62 Tomaszewski, P., 209, 211, 218 Tomayko, J., 157 Toral, S.L., 27 Toro, C., 66 To¨rsel, A.-M., 256 Toval, A., 181, 183–184 Trendowicz, A., 152 Trew, T., 263 Trivedi, K.S., 62 Troyer, O.D., 290 Tyack, J., 62 Tyree, J., 169 U Utting, M., 241, 256 V Valente, M.T., 184 Valle, C., 304 Vallecillo, A., 51 van de Weerd, I., 329, 348, 349–350 van der Aalst, W., 61 van der Linden, F., 263, 264, 282 van der Torre, L., 60 van Eck, P., 351 Van Gurp, J., 181 van Heesch, U., 158, 160, 161, 163–164, 168, 175, 329 van Kampenhout, J.R., 253, 257 van Ommering, R., 264 van Solingen, R., 213–214 van Vliet, H., 34, 61, 292, 329, 348, 349–350 Vastag, S., 62 Ven, J.S., 158, 161 Verbaere, M., 184 Verlage, M., 263 Vianale, A., 62 Vlissides, J., 197 Voelter, M., 255 Vogel, G., 184 Volkov, V., 75, 76f Vollmer, S., 77–78, 78f, 79, 80, 89, 103, 116 von Rhein, A., 240, 256 W Wagner, G., 51 Wagner, S., 50, 124, 125, 151–152, 256 Walczak, A., 60 Walia, G.S., 41 Walter, R., 235 Walters, G.F., 50, 124, 151 Wang, H., 76f, 87, 115–116 Wang, R.Y., 216–217 Wang, X., 66, 297–298 Ward-Dutton, N., 210 Warnken, P., 157 Watanabe, K., 263, 264 Watkins, C.B., 235 Webb, M., 125, 159 Weinberg, D., 61 Weinreich, R., 66 Author Index Weinstock, C.B., 9, 61 Weise, 
C., 124 Weißleder, S., 241–242, 245, 256 Weiss, D., 157 Wende, C., 255 Westfechtel, B., 205 Westland, J.C., 41 Weyns, D., 158 Whalen, M.W., 241–242 White, K., 332–333 Wholin, C., 322 Widen, T., 283 Widjaja, T., 76f, 116 Widmer, A., 287, 291, 297, 298–299 Wiedemann, T., 77, 80–81 Wieringa, R., 351 Wijnstra, J.G., 263, 264, 282, 291 Willans, J., 290 Williams, L.G., 61 Winter, M., 256 Winter, S., 152 Witteman, C.L.M., 66 Wohlin, C., 226 Wojcik, R., 23, 24, 29, 33 Wolf, A.L., 181 Wolf, T., 292 Wood, B., 23, 24, 29, 33 Wood, W.G., 9, 23, 24, 33–34, 61 Woods, E., 125, 157 Wortham, S., 332–333 Wu, D.J., 327–328 Wyeth, P., 342 Wynn, M., 61 X Xu, J., 62 Y Yannakakis, G.N., 342 Yao, X., 17 Yin, R.K., 330, 353 Yochem, A., 197 Yu, E., 33, 60, 61, 76f, 82, 87, 115–116 Yuming, Z., 217 Z Zaha, J.M., 60 Zamborlini, V., 43f Zander, J., 241, 256 Zaremski, A., 159 Zdun, U., 198–199 Zhang, W., 256 Zhang, Y., 124–125 Zhao, H., 256 Zhu, L., 23, 26, 61, 263–264, 265, 273–274, 283, 283t Zimmerman, G., 157 Zimmermann, T., 209 Zo¨lch, R., 256 Zorcic, I., 244, 247, 256 Zschaler, S., 50–51 371 Subject Index Note: Page numbers followed by f indicate figures and t indicate tables A Active Review of Intermediate Designs (ARID), 159 ADD See Attribute driven design (ADD) method Agile approach, ALMA See Architecture-level modifiability analysis (ALMA) Analytical hierarchy process (AHP), 84–86, 84f Analytic network process (ANP), 84–86, 104–105, 110–111, 110f Archample method architecture stakeholders, 270, 271f characterization, 283, 283t MPL architect, 270, 271f architecture evaluator(s), 271f, 272 bottom-up evaluation, 273, 274f decomposition, 272 top-down evaluation, 273, 273f PL architect, 270, 271f PL architecture evaluator(s), 271f, 272 preparation phase, 272 project decision makers, 270, 271f REFoRM project, Aselsan REHI˙S (see REFoRM project, Aselsan REHI˙S) reporting and workshop, 274, 275t Architectural conformance checking architectural erosion/architectural drift, 181 checking layers, 
200–201 classification scheme, 189–190 CoCoME (see Common component modeling example (CoCoME)) complexity, 183 component-based systems, 186–189, 186f, 188f contribution and limitations, 202–204, 204t CQL, 184 domain-specific reference architectures, 201 DSM, 184 graphical approach, 185, 185f Lattix LDM, 184 logical formalism, 205 MDSD approach (see Model-driven software development (MDSD) approach) minimal satisfying structure, 190–191, 191f prototypical implementation, 191–194, 192f, 193f reflexion modeling, 184 restricting assumptions, 191 software development phases, 182f, 183 StoreGUI, 190–191 tool support, 182–183 transformation, 183–184, 190 Architectural erosion/architectural drift, 181 Architectural software quality assurance (aSQA), 160 Architecture-level modifiability analysis (ALMA), 159 Architecture trade-off analysis method (ATAM), 11, 159, 175–176 ARID See Active Review of Intermediate Designs (ARID) Artifact-Level Quality Assessment functions aggregated qualities, 48 definition, 47 derived qualities, 48 system-level qualities, 47 aSQA See Architectural software quality assurance (aSQA) ATAM See Architecture trade-off analysis method (ATAM) Attribute driven design (ADD) method, 10 architecting project, 29 artifacts and activities, 23 candidate architectural drivers, 24, 25f design constraints, 24, 25f embedded systems, 26 external validity, 37 fault-tolerant systems, 23 functional requirements, 24, 25f geographical information systems, 26 issues architecture tactics, 34 first iteration, 33–34 team workload division and assignment, 33 terminology, 33 iterative process, 24, 25f, 26 machine learning, 26 MVC, 26 participants, 28 quality attributes, 24, 25f recursive decomposition process, 24 research perspective, 36–37 RUP, 26 SEI, 24 Siemens Four Views, 26 TAM (see Technology acceptance model (TAM)) training, 29, 36 C CBAM See Cost benefit analysis method (CBAM) Code query languages (CQL), 184 373 374 Subject Index Common component modeling example 
(CoCoME) architectural rules, 198–200, 198f, 200f architecture and design models, 189, 195f Cash Desk system, 197 channels, 197–198, 197f component-based systems, 194–195 data transfer objects, 195–196 informal architectural rule, 196 inventory subsystem, 194 service-oriented layer, 196, 196f three-layer-architecture, 194 Composite product line (CPL), 264 Construct validity, 37 Contract-based software development documentation processing, 328 fixed-fee contract, 327–328 vs in-house development systems, 328 motivation, 326–327 multi-stage contract, 328 procedural part, 327 QR engineering, (see Quality requirements (QRQ/QRs)) research sources, 327 transactional part, 327 variable price contracts, 327–328 Cost benefit analysis method (CBAM), 18, 159, 160 CPL See Composite product line (CPL) CQL See Code query languages (CQL) Customer-configurable products See Quality assurance technique D Dashboards Agile and Lean principles, 211–212, 211f early warning systems, 214 indicators, 215 industrial dashboards (see Industrial dashboards) information quality, 216–217 overview, 210 stakeholders, 215 standardization, 212–214, 213f succinct visualization, 215–216 Decision-centric architecture review (DCAR) method agile development methods, 157 ALMA, 159 architecture presentation, 167–168 aSQA, 160 ATAM, 159, 175–176 business drivers and domain overview presentation, 167 CBAM, 159, 160 decision analysis, 170 decision completion, 168 decision documentation, 169–170 decision prioritization, 168–169, 169t evaluation report, 171 evaluation schedule, 171, 172t industrial experiences, 172–175 participants, 165–166 presentation, 167 retrospective, 170 SAAM, 159 scrum integration in sprints approach, 176–177 up-front architecture approach, 175–176 team preparation, 166–167 Dependency structure matrices (DSM), 184 Domain model, E E-commerce application activities, 146–147, 147f communication management, 146 content management, 146 design decision, 149 key quality issues, 149, 149f 
management component, 145 online trading, 146 order tracking, 146 public relationship, 146 quality attributes, 147, 148, 148f quality trade-off points, 148 report generation, 146 research questions, 145, 149–150 Embedded systems development, 15 Ethical validity, 37 F Failure modes and effects analysis (FMEA) technique, 126–127, 152, 153 Family-architecture Assessment Method (FAAM), 159 Financial report generator (FRG), 53 Flight management system (FMS) aviation electronic systems, 235 features, 238, 238t model-based deployment application components, 249, 250f, 251 product-centered software deployment, 251–253, 252t, 254t product-line-centered software deployment, 253–255 resource model, 249, 250f model-based testing configurable state machine, 242, 243f product-centered approach, 244–245, 244t, 246–247 product-line-centered, 245–247, 245f, 246f, 256 product line model level, 242–243, 243t, 244 Subject Index FMEA technique See Failure modes and effects analysis (FMEA) technique FMS See Flight management system (FMS) Force-Related Quality Intention, 48 FRG See Financial report generator (FRG) Functional Size Measurement (FSM) method, 342 G German Federal Office of Administration (BVA), 201 Goal model, 87–88, 88f, 111–112 Goal-Question-Metric (GQM) approach, 272–273 Grounded theory method, 333 H Hazard analysis of software architectural designs (HASARD) method automation techniques, 154 behavior view, 123–124 cause-consequence analysis, 130–132, 133f cause effect analysis, 126 code view, 123–124 conceptual view, 123–124 contribution factors, 136–138, 137f design decision, 138–139, 138f development view, 123–124 e-commerce application (see E-commerce application) FMEA technique, 126–127, 152, 153 graphical quality modeling notation, 127 hazard identification techniques, 126 HAZOP method, 129–130, 131t, 132t hierarchical model, 124, 125 key open problem, 153–154 model-based quality analysis method, 126, 153 module view, 123–124 quality analysis process, 127–128, 128f 
quality issues interrelationships, 140–142 quality model activity-based approach, 151–152, 154 construction technique, 127, 135–136, 136f graphic notation, 133–135, 134f, 135f hierarchical quality models, 151 qualitative models, 151 quality risks, 139–140 quality view, 124 relational quality models, 124–125 scenario-based quality analysis method, 125, 153 software hazard analysis method, 127 software tool, 127 SQUARE tool, 143–145, 143f, 144f trade-off points, 142 375 Hazard and operability studies (HAZOP) method, 129–130, 131t, 132t Human computer interaction (HCI) community, 303 I i* framework, 82, 88f Industrial dashboards Ericsson AB auxiliary measurement system monitoring dependencies, 220, 220f 0-defect criterion, 219 MS Vista Gadget, 219, 220, 220f overview, 217t, 218 product release readiness indicator, 219 recommendations, 225–226 Saab electronic defense systems build radiators, 223 code complexity, 224 external product quality, 224–225, 224f grayed-out field, 224 hotspots, 224 internal quality, 223, 223f overview, 217t, 218–219 tree map/heatmap, 223 Volvo Car corporation auxiliary trends, 222 check-in pace indicator, 222 check-in trend indicator, 222 components, 221 development progress monitoring, 221, 221f heatmap of revisions per model per week indicator, 222 number of check-ins during the current week indicator, 222 number of check-ins last week indicator, 222 overview, 217t, 218 stakeholder, 221 Interaction candidates detection different FRQ, different types of QRQ, 91t, 93t, 94t, 96t, 97 different FRQ, same type of QRQ, 91t, 93t, 94t, 96–97, 96t generating alternatives performance relaxation template, 101t, 102–104 quantities, 98 security relaxation template, 98–102, 100t steps, 98, 99f, 102–104 initialization phase setting up initial tables, 92–94, 93t, 94t setting up life cycle, 94 quality requirements, 89, 90f classification, 91, 91t pairwise comparisons, 89, 90f possible interactions, 89–91, 91t same FRQ, different types of QRQ, 91t, 93t, 95, 
96t same FRQ, same type of QRQ, 91t, 93t, 94–95, 95t, 96t 376 Subject Index Interest-Related Quality Intention, 48 Iterative development process, 6–8, 7f K Knowledge-Related Quality Intention, 48 L Lazy acquisition, 15 Lifecycle approaches QAs agile methods, incremental approach, iterative development process, 6–8, 7f waterfall, 5–6, 5f state of the art advantage, 14 agile approach, 13–14 software architecture, 13 Twin Peaks model, 14 Lightweight evaluation architecture decision forces configuration language, 163, 164f domain-specific language (DSL), 163 implementation effort, 163, 164f resulting force, 163, 164f architecture evaluation methods architectural software quality assurance, 160 goal of, 159 pattern-based architecture review, 160 SAAM evaluation, 159 scenario-based evaluation methods, 159 side effects, 158 DCAR method (see Decision-centric architecture review (DCAR) method) decision relationship view model, 161–163, 162f executive decision, 161 existence decision, 161 PBAR, 160 property decision, 161 M Managing traffic ticket (M-ticket) system Android application, 315–317 entities interaction, 320–321 modified software architecture, 317, 318f SSF mechanism, 317–320, 318f, 319t usability requirements, 317, 322 user preference mechanism, 318f, 319–320, 319t, 320t MARTE systems See Modeling and analysis of real-time and embedded (MARTE) systems MBT See Model-based testing (MBT) Medical planning and simulation (MPS) systems architecture stage active reviews, 298 analysis, 297 documentation, 296 evaluation, 297 implementation and maintenance, 298 incremental synthesis, 298 process, 297–299 simple design, 298–299 stakeholders, 295–296, 296t synthesis proposes, 297 user interaction, 296 challenges development and architecture process constraints, 292 model correctness, 292 model-view-controller architecture, 292 requirements engineering and architecture analysis, 292 medical imaging product lines, 291 model correctness, 294–295 performance, 293 simulation 
screenshot, 287, 288f software development CLEVR approach, 290 conceptual modeling, 290 higher-level software engineering concept, 290 requirements-related aspects, 291 VRID approach, 290 typical MPS system, 287, 288f usability, 294 virtual reality vs soft-ware systems, 291 Mobile software development architectural patterns, 304 ATAM/ARID, 303 HCI community, 303, 304–305, 323 M-ticket system Android application, 315–317 entities interaction, 320–321 modified software architecture, 317, 318f SSF mechanism, 317–320, 318f, 319t usability requirements, 317, 322 user preference, 318f, 319–320, 319t, 320t real mobile application, 322–323 SE community, 303 stringent software, 304 usability, definition, 303 usability mechanism classification, 305–306 MVC pattern, 305–306 research questions, 306–307 SSF (see System status feedback mechanism) user preferences (see User preference mechanism) Subject Index Model-based deployment, 258 automatic/machine-supported design, 247 definition, 247–248 flight management system application components, 249, 250f, 251 product-centered software deployment, 251–253, 252t, 254t product-line-centered software deployment, 253–255 resource model, 249, 250f spatial and temporal deployment, 248–249, 249f Model-based testing (MBT), 257–258 definition, 241–242 flight management system configurable state machine, 242, 243f product-centered approach, 244–245, 244t, 246–247 product-line-centered, 245–247, 245f, 246f, 256 product line model level, 242–243, 243t, 244 UML state machine, 242 Model-driven software development (MDSD) approach architectural rules, 182 consistency checking approaches, 181–182 Modeling and analysis of real-time and embedded (MARTE) systems, 83, 92–93, 102 Model View Controller (MVC), 26, 305–306 MPS systems See Medical planning and simulation (MPS) systems M-ticket system See Managing traffic ticket (M-ticket) system Multi-objective optimization (MOO), 86–87 Multiple product line (MPL) engineering application engineering, 
264–265 Archample method (see Archample method) architecture views configuration item (CI), 267 CPL, 267–269 definition, 267 existing viewpoints, 267 product line decomposition viewpoint, 267, 268–269, 268t, 269f product line dependency viewpoint, 267, 269–270, 270f composite pattern, 264–265, 265f CPL, 264 definition, 264 domain engineering, 264–265 REFoRM project at Aselsan REHI˙S, 266–267, 266f software architecture analysis methods, 265, 266f MVC See Model View Controller (MVC) O Optimization QuaRO method, 86–87, 86t requirements interaction management constraints, 112–114 377 decision variables, 112 inputs, 88f, 96t, 100t, 101t, 111–112 parameters, 111 solution, 114–115 target function, 112 P Pattern-based architecture review (PBAR), 160 Problem-oriented requirements engineering context diagram, 83 domains, 83 problem diagram, 83 problem frames, 82 UML class diagrams, 83 Product line architecture, 15 Product Line Potential Analysis (PLPA), 282–283 Product Line Technical Probe (PLTP), 282 Property-oriented harmonization, 57 Q QAs See Quality attributes (QAs) QRQ/QRs See Quality requirements (QRQ/QRs) Quality assessment, 45–46 execution-level concepts, 50–51 property-level assessments, 46f specification-level concepts artifact-level quality assessment functions, 47–48 quality-related intentions, 48 quality structure, 46–47 quality types, 46 software artifacts, 46 stakeholders, 48 view-defining sets (VDS), 48–49 tabular definition, 47f Quality assurance technique configurable models feature model, 238–239, 238t optional element, 237 parameterized element, 237 system views, 236–237 variability view, 237–238 XOR container, 237 definition, 233 FMS, 235, 235f MBT (see Model-based testing (MBT)) model-based deployment (see Model-based deployment) product-centered approaches, 240, 240f product-line-centered approaches, 240–241, 240f, 255–256 Quality attributes (QAs) architecture design ADD, S4V and RUP method, 10 378 Subject Index Quality attributes (QAs) (Continued) 
definition, documentation, 10–11 experience-based evaluation, 12 quantitative vs qualitative approaches, 11 scenario-based evaluation, 11 characteristics, 3–4 “FURPSþ” classification, lifecycle approach agile methods, incremental approach, iterative development process, 6–8, 7f waterfall, 5–6, 5f MPS systems (see Medical planning and simulation (MPS) systems) nonfunctional requirements, physical constraints, 4–5 Quality Aware Software Engineering (QuASE) process analysis stage, 65 dissemination stage, 65 elicitation stage, 64, 65–66, 66f integration stage, 64 Quality-driven process support approaches, 61 Quality requirements (QRQ/QRs), 89, 90f case study participants interview-based exploratory case studies, 332 organizations, 330–331, 331t pricing agreements, 331 software architects, 331–332 vendors, 330–331 classification, 91, 91t contract elements, 346–348 contract’s role, 351 data analysis strategy, 333 data collection, research instrument, 332–333 documentation, 340, 349 elicitation checklists, 338–339 computer-based environment, 339 exit point analysis, 339–340 experimental serious-game-based process, 339 functional requirements specification, 339 elicitation approaches, 338–340 limitations, 353–354 negotiation, 350 pairwise comparisons, 89, 90f possible interactions, 89–91, 91t prioritization criteria client’s willingness to pay and affordability, 340, 349 key decision makers, 341 maintainability and evolvability, 341–342 requirements, 341 technical and cost limitations, 341 quantification, 342–343, 349–350 requirements negotiations, 345–346 research objective and plan, 329–330 case study research, 330 structured open-end in-depth interviews, 330 SAs and RE staffs vocabulary, 349 software architects as a bridge, 335–336 business analysts, 334 distinctive roles, 335, 335f domain knowledge, 336 job description, 334–335 programmer, 334 QR as a service, 336 as a review gatekeepers, 336 role in, 348 software architecture perspective, 329 system quality 
properties, 325 terminology, 337–338 validation, 344–345, 350 Quality requirements optimization (QuaRO) method i* framework, 82, 88f Intertwining Requirements & Architecture phase, 76, 76f optimization, 86–87, 86t preparatory phases Understanding the Problem phase, 81f, 89 Understanding the Purpose phase, 87–88, 88f problem-oriented requirements engineering, 81f, 82–84 reconciliation phase (see Interaction candidates detection) software engineering process, 75–76, 76f valuation of requirements, 84–86, 84f Quality view harmonization characteristics, 67 definition and levels, 52 dimensions of, 42 empirical studies characteristics, 62–63 data processing, 63–64 postmortem analysis, 63 soundness factors, 64 structured and semi-structured interviews, 63 foundational ontology selection of, 45 Unified Foundational Ontology (see Unified Foundational Ontology (UFO)) future research, 67 implementation directions, 67 practical application QuASE process, 64–65 Subject Index QuIRepository, 65 QuOntology, 65 process activities generic process-based techniques, 61 organizational entities, 60 prototyping techniques, 62 quality subjects, 60–61 request-centered techniques, 61 scenario-centered techniques, 61 simulation-based techniques, 62 quality assessment (see Quality assessment) quality harmonization process initial assessment and negotiation decision, 58–59 negotiation process, 59–60 property-oriented harmonization, 57 property state, 57–58 rank-oriented harmonization, 57 substitution artifacts, 56–57 quality subjects’ positions in, 51–52 software artifact, 53 view harmonization harmonizing artifacts, 53–54 property types, 54 quality view alignment, 55–56 QuaRO-ANP-Network, 96t, 105–110, 107f, 109f QuaRO method See Quality requirements optimization (QuaRO) method QuASE process See Quality Aware Software Engineering (QuASE) process QuIRepository, 65 QuOntology, 65 R Rank-oriented harmonization, 57 Rational unified process (RUP), 10, 26 REFoRM project, Aselsan REHI˙S Archample 
method, MPL CPL, 269f, 275, 280 four project line, 275, 277f, 280 GQM evaluation approach, 275–279, 279t high-level business goals, 275–279 mission domain applications level, 278f, 280 MPL design alternative, 281–282, 281f one project line, 275, 276f, 279, 280t preparation phase, 275 reporting and workshop, 282 layered reference architecture, 266–267, 266f Requirements interaction management definition, 75 functional and quality requirements optimizing process, 75–76, 76f optimization 379 constraints, 112–114 decision variables, 112 inputs, 88f, 96t, 100t, 101t, 111–112 parameters, 111 solution, 114–115 target function, 112 smart grid system (see Smart grid system) valuation ANP advantages, 104–105 with ranking, 108–110, 110f setting up QuaRO-ANP-Network, 96t, 105–108, 107f, 109f RUP See Rational unified process (RUP) S SAAM See Software architecture analysis method (SAAM) Siemens Four Views, 26 Siemens’ Views (S4V) method, 10 Smart grid system functional requirements performance, 80–81, 81f security and privacy, 80 smart metering, 79, 79t protection profile, 77–78, 78f SNAP See Software nonfunctional assessment process (SNAP) Software architecture analysis method (SAAM), 159 Software engineering (SE) community, 303 Software Engineering Institute (SEI), 23, 282 Software nonfunctional assessment process (SNAP), 342 Software process conceptualization techniques, 61 Software product quality monitoring See Dashboards Spearman’s r correlation coefficient, 31–32, 32t SSF mechanism See System status feedback (SSF) mechanism Stakeholder consumer sub-network, 108, 109f Statement component (StoreGUI), 190–191 State of the art architecture representation, 15–16 lifecycle approaches, 13–14 loose coupling, 12–13 quality-centric design, 13 reuse design, 13 self-adaptation, 16–17 value-driven perspective CBAM, 18 design for change, 18 economics, 17–18, 19 structural and technical perfection, 17 StoreGUI See Statement component (StoreGUI) Superdecisions, 107f, 108, 109f, 110f 
System definition, quality, 2–3
System status feedback (SSF) mechanism
  architectural component, 310, 311f
  case model, 307, 309f
  generic component
    system awareness, 307
    system-initiated status, 309
    system status notification, 309
    user-initiated status, 309
  usability-enabling design guidelines, 307, 308t, 309f

T
TDD. See Test-driven development (TDD)
Technology acceptance model (TAM)
  criterion validity, 37
  face validity, 37
  perceived ease of use (PEU)
    attribute driven design first iteration, 36
    content validity, 37
    Cronbach’s α values, 30
    data collection, 29–30, 30t
    measure, 27, 28t
    medians, 31, 31t
    Spearman’s ρ correlation, 31–32, 32t
    terminology, 35
    Wilcoxon signed rank test, 31, 32t
  perceived usefulness (PU)
    analysis of, 35
    content validity, 37
    Cronbach’s α values, 30
    data collection, 29–30, 30t
    measure, 27, 28, 28t
    medians, 30, 31t
    Spearman’s ρ correlation, 31–32, 32t
    tactics, 36
    team workload division and assignment, 35
    Wilcoxon signed rank test, 31, 32t
  willingness to use (WU)
    Likert scale, 35
    measure, 27, 28, 28t
    median and mode, 31, 31t
    Spearman’s ρ correlation, 31–32, 32t
    Wilcoxon signed rank test, 31, 32t
Test-driven development (TDD),

U
UML tool, 158, 168, 174–175
Unified Foundational Ontology (UFO)
  endurants, 44
  entities, 44
  trope, 44
  events, 44
  fragments, 43f
  hierarchy, 43–44
  property type color and domain, 44
  property structures, 44
  relator, 44
  subset of, 45
Unified Modeling Language (UML), 241–242
Usability supporting architectural patterns, 304
User preference mechanism
  architectural component, 314–315, 316f
  case model, 310–312, 312f
  generic component, 312–314
  usability-enabling design guidelines, 312, 313t

V
Vector optimization, 86
Viewpoint-Related Quality Intention, 48

W
Wilcoxon signed rank test, 31, 32t



Table of contents

  • Copyright

  • Acknowledgments

  • About the Editors

  • Contributors

  • Foreword by Bill Curtis

    • About the Author

      • Foreword by Richard Mark Soley

      • Quality Testing in Software

      • Enter Automated Quality Testing

      • Whither Automatic Software Quality Evaluation?

      • Architecture Intertwined with Quality

      • About the Author

      • Preface

        • Part 1: Human-centric Evaluation for System Qualities and Software Architecture

        • Part 2: Analysis, Monitoring, and Control of Software Architecture for System Qualities

        • Part 3: Domain-specific Software Architecture and Software Qualities

        • Relating System Quality and Software Architecture

          • Introduction

            • Quality

            • Architecture

            • System

            • Architectural scope

            • System quality and software quality

            • Quality Attributes

            • State of the Practice

              • Lifecycle approaches

                • Waterfall
