There are approaches that can help in a situation like this. They use a process model that involves small steps. These tasks are typically as small as an hour or a day. If something goes wrong with a small task, it can easily be rectified, but if something goes wrong during a task that takes months, it is hard to fix.

Second scenario: a task takes longer than expected or will take longer than expected. This is similar to the above case, but here the developer is still around. If the activity is on the critical path, the deadline has already been missed. There is a huge temptation either to put additional people on the project, or to ask the current people to work extra hours, or to ask everyone to work faster. It is dangerous to give in to any of these tactics. The likelihood is that, later, another task will overrun, compounding the problem. Here, again, if the tasks are small, the damage is small.

Third scenario: the client asks for changes. The scale of changes must, of course, be assessed. However, it is unlikely that the effect is to reduce work. More likely, additional work is needed to provide additional functionality. Worse, significant changes are needed to existing design and code. Now it is natural to want to please the client, and it may be that the new work is full of interest and challenge, but the only answer here is to confront the client with the effects on cost and deadlines. The client can then decide whether to pursue the change and incur the penalties, or perhaps substitute the new request for an old one.

SELF-TEST QUESTION

30.5 A meal is in preparation. It looks as if it will be late. What do you do?

30.7 ● Managing people

Software is created by living, breathing people. However splendid the tools and techniques, software development relies on human creativity. There have been many attempts to analyze the problems of software projects and suggest informal ways of creating a successful project team. However well organized the team, there are always informal processes at work – both individual and group. A project manager needs an awareness of these processes and needs to know what can be done to avoid weakening a team and what can be done to improve a team.

One extreme school of management sees people as inherently lazy and needing to be controlled. They need to be told clearly what to do, given frequent deadlines and threatened with the consequences of poor performance. The opposite is the belief that people are motivated by rewards such as respect, praise and money. Any project faces the dilemma of control versus autonomy. Can the team members be trusted to do a good job with the minimum of supervision? Are mechanisms required to ensure that team members are performing? In a factory production plant, such as a car assembly line, the task that each team member performs is rigorously specified and timed to a fraction of a second. The degree of control is total and high levels of productivity and quality are virtually assured. By contrast, an artist who creates a painting has complete autonomy; deadlines and quality are by no means certain. Developing software probably fits somewhere between these extremes, with a need for some autonomy and some control.

So, on the one hand, a heavyweight technique can be used. Processes are well-defined and reporting is frequent and stringent. The waterfall model has these characteristics – it consists of well-defined steps, each of which leads to a well-defined product. Here there is minimal dependence on the individual's skill. On the other hand, a lightweight technique can be used. Processes are ill-defined and reporting is less frequent. An example is open source development. Here the skills of the individuals are vital.
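The idea of small tasks and the critical path discussed in the scheduling scenarios above can be made concrete with a short sketch. The following code (not from the book) computes the critical path of a tiny task network: the chain of dependent tasks whose total duration fixes the earliest possible delivery date, so that any slip on it slips the whole project. The task names and durations are invented for illustration.

```python
# Illustrative sketch: finding the critical path of a small task network.
# All task names and durations (in person months) are invented.

def critical_path(tasks):
    """tasks maps name -> (duration, list of prerequisite names).
    Returns (total project duration, task names on the critical path)."""
    finish = {}   # earliest finish time of each task
    via = {}      # predecessor on the longest path to each task

    def earliest_finish(name):
        if name not in finish:
            duration, prereqs = tasks[name]
            best, best_pred = 0, None
            for p in prereqs:
                f = earliest_finish(p)
                if f > best:
                    best, best_pred = f, p
            finish[name] = best + duration
            via[name] = best_pred
        return finish[name]

    end = max(tasks, key=earliest_finish)   # task that finishes last
    path, node = [], end
    while node is not None:                 # walk predecessors backwards
        path.append(node)
        node = via[node]
    return finish[end], list(reversed(path))

tasks = {
    "requirements": (4, []),
    "design":       (3, ["requirements"]),
    "coding":       (6, ["design"]),
    "unit test":    (4, ["coding"]),
    "user manual":  (5, ["requirements"]),
    "system test":  (2, ["unit test", "user manual"]),
}
length, path = critical_path(tasks)
print(length, path)
# 19 ['requirements', 'design', 'coding', 'unit test', 'system test']
```

In this invented example the writing of the user manual is off the critical path: it could slip by eight months without delaying delivery, whereas any slip in coding delays the whole project.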
Another factor is the individual variability between software developers – some are fast and effective, while some are slower and less effective. If we assume that this is crucial, then the logic is to hire only good developers and fire (or avoid hiring) bad ones. On the other hand, we could accept diversity and plan accordingly.

SELF-TEST QUESTION

30.6 You are part of a group, preparing a meal. You know that someone works slowly. What do you do?

There are no clear answers to these dilemmas. But, if we believe that developers must be respected, there are some things that should be avoided and some that are worth trying. First, some ways in which management can weaken a team are:

■ show distrust of the team
■ overemphasize the paperwork, rather than creative thinking
■ scatter the team members in different locations
■ ask team members to work on a number of different projects (functional team), rather than focusing on the particular project (project team)
■ press for earlier delivery at the expense of reducing the quality of the software
■ set unrealistic or phony deadlines
■ break up an effective team.

Some management approaches for enhancing team activity are:

■ emphasize the desirability of the quality of the software product
■ plan for a number of successfully completed stages (milestones) during the lifetime of the project
■ emphasize how good the team is
■ preserve a successful team for a subsequent project
■ reduce hierarchy in the team, promoting egalitarianism, placing the manager outside the team
■ celebrate diversity within the team members.
Summary

■ software project management is difficult
■ project management involves selecting a process model, a team organization, tools and methods
■ one approach to estimating the cost of a software system involves counting function points
■ planning involves deciding on milestones and scheduling tasks amongst people
■ the informal aspects of team working during software development can be as important as the technical aspects.

Exercises

30.1 Suggest facilities for a software tool that supports the planning and monitoring of software project activities.

30.2 Draw up a plan for the following software development project. Document the plan as a Pert chart, in which each activity is shown as an arc, with a bubble at its starting point (the event which triggers the activity) and a bubble at its completion (the event which concludes the activity). The plan is to adopt the waterfall model. The development must be completed in two years. The following activities are envisaged:

1. requirements analysis – 4 person months
2. architectural design – 3 person months
3. detailed design – 4 components, at 6 person months per component
4. coding – 2 person months for each component
5. unit testing – 6 person months for each component
6. system testing – 6 person months.

How many people will be required at each stage of the project to ensure that the deadline is met?

30.3 Suggest features for a software tool to support software cost estimation.

30.4 You are the manager of a software development project. One of the team members fails to meet the deadline for the coding and testing of a component. What do you do?

30.5 You are the project leader for a large and complex software development. Three months before the software is due to be delivered, the customer requests a change that will require massive effort. What do you do?
30.6 For each of the systems in Appendix A:

■ suggest a process model
■ predict the development cost
■ suggest a team organization
■ suggest a package of tools and methods.

Answers to self-test questions

30.1 There are various satisfactory answers, including reliability.
30.2 Yes.
30.3 10,000/100 = 100 person weeks. This is more than 2 person years, allowing for time spent on activities such as vacations and training.
30.4 The number of function points is 12, which gives 12 × 1 = 12 person months. Multiplied by the difficulty factor of 1.5, this gives 18 person months.
30.5 Ascertain whether a reduced meal of adequate quality can be produced in the available time. Otherwise, tell the diners that the meal will be late.
30.6 You could put them under pressure, hoping they will deliver on time. But, perhaps better, you could accommodate their work rate in the planning.

Further reading

A good collection of articles on project management is presented in: Richard H. Thayer (Editor), Winston W. Royce and Edward Yourdon, Software Engineering Project Management, IEEE Computer Society, 1997.

This book is a readable and practical discussion of dealing with software costs: T. Capers Jones, Estimating Software Costs, McGraw-Hill, 1998.

The seminal book on software cost estimation is still the classic: B.W. Boehm, Software Engineering Economics, Prentice Hall, 1981.

This view is updated in: B. Boehm, C. Clark, E. Horowitz, C. Westland, R. Madachy and R. Selby, Cost models for future life cycle processes: COCOMO 2.0, Annals of Software Engineering, 1 (1) (November 1995), pp. 57–94.

This account of the development of Windows NT reads like a thriller and has significant lessons for software developers. It charts the trials, tribulations and the joys of developing software: G. Pascal Zachary, Showstopper: The Breakneck Race to Create Windows NT and the Next Generation at Microsoft, Free Press, 1994.
Life within Microsoft and the lessons that can be learned are well presented in this readable book: Steve Maguire, Debugging the Development Process: Practical Strategies for Staying Focused, Hitting Ship Dates and Building Solid Teams, Microsoft Press, 1994.

Accounts of failed projects are given in: Stephen Flowers, Software Failure: Management Failure: Amazing Stories and Cautionary Tales, John Wiley, 1996, and in Robert Glass, Software Runaways, Prentice Hall, 1998.

The classic book that deals at length and in a most interesting way with the informal, social aspects of working in a team. It is a most enjoyable read: G. Weinberg, The Psychology of Computer Programming, Van Nostrand Reinhold, 1971.

This is the classic book on the problems of running a large-scale software project, worth reading by anyone who is contemplating leading a project. There is a section on chief programmer teams. It is revisited in a celebratory second edition with additional essays: Frederick P. Brooks, The Mythical Man-Month, Addison-Wesley, 2nd edn, 1995.

A most readable book about the informal processes that are at work in software development and how teams can best be organized: Tom DeMarco and Timothy Lister, Peopleware: Productive Projects and Teams, Dorset House, 1987.

There is a whole host of management books – both serious and popular – about how to run teams and projects. Many of the ideas are applicable to software projects. This is one example, actually about software development, with lessons learned at IBM. It covers recruitment, motivation, power struggles and much more: Watts S. Humphrey, Managing Technical People, Addison-Wesley, 1997.

PART G REVIEW

CHAPTER 31 Assessing methods

This chapter:
■ discusses the problem of assessing tools and methods
■ reviews current techniques
■ examines the evidence about verification techniques
■ suggests that there is no single best method
■ discusses the challenges of introducing new methods.

31.1 ● Introduction

We saw in Chapter 1 that there are usually a number of objectives to be met in the construction of a piece of software. A major goal used to be high performance (speed and size), but with improved hardware cost and performance, this issue has declined in importance. Nowadays factors like software costs, reliability and ease of maintenance are increasingly important. For any particular project, it is, of course, vital to assess carefully what the specific aims are. Having done this, we will usually find that some of them are mutually contradictory, so that we have to decide upon a blend or compromise of objectives.

This book has described a variety of techniques for software construction. All of the techniques attempt in some way to improve the process of software development and to meet the various goals of software development projects. The purpose of this chapter is to see how we can assess methods and choose a collection of methods that are appropriate for a particular project.

Is it possible to identify a collection of tools and methods that are ideal in all circumstances? The answer is no. Software engineering is at an exciting time. There are a dozen schools of thought competing to demonstrate their supremacy and no single package of tools and methods seems set to succeed. Some methods seem particularly successful in specific areas, for example, the data structure design method in information systems. Other methods, like structured walkthroughs, seem generally useful. In the field of programming languages, declarative languages have become important in expert systems, while highly modular imperative languages are widely used in real-time and command and control systems.
31.2 ● How to assess methods

Ideally, metrics (Chapter 29) would enable us to determine the best method or combination of software development methods. Regrettably, this is virtually impossible. The first problem is identifying the criteria for a best method. As we saw in Chapter 1 on problems and prospects, there are usually a number of goals for any software development project. In order to choose between methods it is necessary to establish what blend of criteria is appropriate for the particular project. For example, the set of goals for a particular project might be to optimize:

■ development effort,
■ reliability, and
■ maintainability

and these are in conflict with each other. In general, the most significant conflict is probably between development effort and reliability of the product. For example, a safety-critical system needs to be highly reliable. However, for a one-off program for a user to extract information from a database, the prime goal may be quick delivery. There can be no set of factors that allow universal comparison between methods. Equally, it is unlikely that there will ever be a single best method.

Suppose that we had narrowed down the choice to two applicable methods, called A and B. What we would like to have is hard evidence like this: "Method A gives 25% better productivity than method B." Regrettably, there is no such data available today, because of the enormous difficulty of creating it. Let us examine some of those difficulties. Because of cost, it is virtually impossible to conduct any realistic experiments in which two or more methods are compared. (The cost of developing the same piece of software twice is usually prohibitive.) Usually the only experimental evidence is based on scaled-down experiments. Suppose, for example, that we wanted to compare two design methods, A and B. We could give ten people the specification of a small system and ask them to use method A, and similarly we could ask a second group to use method B. We could measure the average time taken to complete the designs and hence hope to compare the productivities of the methods. We could go on to assign additional problems and employ more people to increase our confidence in the results. Ultimately, we might gain some confidence about the relative productivity of the two methods.

But many criticisms can be aimed at experiments like these. Are the backgrounds of the participants equal? Is the experience of the participants typical? (Often students are used in experiments, because they are cheap and plentifully available. But are students typical of professional software developers?) Have a sufficient number of people taken part so that the results are statistically significant? Is the problem that has been chosen typical, or is it a small "toy" problem from which it is unreasonable to extrapolate? Is there any difference between the motivation of the participants in the experiment and that of practitioners in a real situation? These questions are serious challenges to the validity of experiments and the significance of the results. The design of experiments must be examined carefully and the results used with caution.

While the problem of measuring and comparing productivity is fearsome, the story gets worse when we consider software quality. Again, what we desire is a statement like, "Method A gives rise to software that is 50% more reliable than method B." Whereas with productivity we have a ready-made measure – person months – how do we measure reliability? If we use the number of bugs as a measure, how can we actually count them? Again, do we count all the bugs equally, or are some worse than others? Such questions illustrate the difficulties. Similarly, if we want to quantify how well a method creates software that is easy to maintain, then ideally we need an objective measure or metric.
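The question of statistical significance raised above can be made concrete. The sketch below (not from the book) compares invented design times from two small experimental groups using Welch's two-sample t-test, which is just one of several analyses an experimenter might choose; all data values are made up for illustration.

```python
# Illustrative sketch: are the design times of two small groups, one per
# method, significantly different? All data values are invented.
# Welch's t-test is used because it does not assume equal variances.
import math
import statistics

def welch_t_test(a, b):
    """Return (t statistic, approximate degrees of freedom)."""
    se_a = statistics.variance(a) / len(a)   # squared standard errors
    se_b = statistics.variance(b) / len(b)
    t = (statistics.mean(b) - statistics.mean(a)) / math.sqrt(se_a + se_b)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se_a + se_b) ** 2 / (
        se_a ** 2 / (len(a) - 1) + se_b ** 2 / (len(b) - 1))
    return t, df

method_a = [12, 14, 11, 13, 15]   # hours per participant, method A
method_b = [16, 18, 17, 15, 19]   # hours per participant, method B

t, df = welch_t_test(method_a, method_b)
print(t, df)   # 4.0 8.0 for this invented data
```

For this invented data, t = 4.0 on 8 degrees of freedom would be significant at the 5% level (the critical value is roughly 2.31). But, as the criticisms above make clear, a significant result says nothing about whether the participants, the problem or the motivation were typical.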
There are, of course, additional criteria for assessing and comparing methods (see Chapter 30 on project management). We might choose from amongst the following checklist:

■ training time for the people who will use the method
■ level of skill required by the people using the method
■ whether the software produced is easy to maintain
■ whether the software will meet performance targets
■ whether documentation is automatically produced
■ whether the method is enjoyable to use
■ whether the method can be used in the required area of application.

The outcomes of experiments that assess methods are not encouraging. For example, it is widely accepted in the computer industry that structured programming is the best approach. But one review of the evidence (see the references below) concluded that it was inconclusive (because of problems with the design of experiments). Similarly, there seems to be very limited evidence that object-oriented methods are better than older methods.

Clearly there are many problems to be solved in assessing methods, but equally clearly developers need hard evidence to use in choosing between methods. We can expect that much attention will be given to the evaluation of tools and methods, and it is, in fact, an active area of current research. This research centers on the design of experiments and the invention of useful metrics.

31.3 ● Case study – assessing verification techniques

We now discuss the results of one of the few small-scale experiments that have been conducted to assess methods. This particular study assessed verification techniques, in particular black box testing, white box testing and walkthroughs. Black box and white box testing techniques are explained in Chapter 19. Structured walkthroughs are explained in Chapter 20 on groups.
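The checklist of assessment criteria in section 31.2 can, for a given project, be combined into a rough weighted score. The sketch below is illustrative only: the criteria weights and the scores for the two hypothetical methods are entirely invented, and a real selection would also weigh evidence that no single number can capture.

```python
# Illustrative sketch: a weighted-scoring matrix for comparing two
# candidate methods against a blend of criteria. All numbers are invented.

criteria = {              # criterion -> weight (importance for this project)
    "training time": 2,
    "skill required": 1,
    "maintainability": 3,
    "performance": 1,
    "documentation": 2,
}

scores = {                # method -> criterion -> score out of 5
    "method A": {"training time": 4, "skill required": 3,
                 "maintainability": 5, "performance": 2, "documentation": 4},
    "method B": {"training time": 2, "skill required": 4,
                 "maintainability": 3, "performance": 5, "documentation": 3},
}

def weighted_total(method):
    """Sum of (weight x score) over all criteria for one method."""
    return sum(criteria[c] * scores[method][c] for c in criteria)

for m in scores:
    print(m, weighted_total(m))   # method A scores 36, method B scores 28
```

With these invented weights, method A wins because maintainability dominates; a project that instead weighted performance heavily could reverse the outcome, which is exactly the "blend of criteria" point made earlier.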