
A Guide to Selecting Software Measures and Metrics


Document information: 373 pages, 2.22 MB

A Guide to Selecting Software Measures and Metrics

Capers Jones

CRC Press, Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2017 by Taylor & Francis Group, LLC. CRC Press is an imprint of Taylor & Francis Group, an Informa business. No claim to original U.S. Government works. Printed on acid-free paper. International Standard Book Number-13: 978-1-138-03307-8 (Hardback).

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify it in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

Contents

Preface
Acknowledgments
About the Author
1. Introduction
2. Variations in Software Activities by Type of Software
3. Variations in Software Development Activities by Type of Software
4. Variations in Occupation Groups, Staff Size, Team Experience
5. Variations due to Inaccurate Software Metrics That Distort Reality
6. Variations in Measuring Agile and CMMI Development
7. Variations among 60 Development Methodologies
8. Variations in Software Programming Languages
9. Variations in Software Reuse from 0% to 90%
10. Variations due to Project, Phase, and Activity Measurements
11. Variations in Burden Rates or Overhead Costs
12. Variations in Costs by Industry
13. Variations in Costs by Occupation Group
14. Variations in Work Habits and Unpaid Overtime
15. Variations in Functional and Nonfunctional Requirements
16. Variations in Software Quality Results
  - Missing Software Defect Data
  - Software Defect Removal Efficiency
  - Money Spent on Software Bug Removal
  - Wasted Time by Software Engineers due to Poor Quality
  - Bad Fixes or New Bugs in Bug Repairs
  - Bad-Test Cases (An Invisible Problem)
  - Error-Prone Modules with High Numbers of Bugs
  - Limited Scopes of Software Quality Companies
  - Lack of Empirical Data for ISO Quality Standards
  - Poor Test Case Design
  - Best Software Quality Metrics
  - Worst Software Quality Metrics
  - Why Cost per Defect Distorts Reality
  - Case A: Poor Quality
  - Case B: Good Quality
  - Case C: Zero Defects
  - Be Cautious of Technical Debt
  - The SEI CMMI Helps Defense Software Quality
  - Software Cost Drivers and Poor Quality
  - Software Quality by Application Size
17. Variations in Pattern-Based Early Sizing
18. Gaps and Errors in When Projects Start? When Do They End?
19. Gaps and Errors in Measuring Software Quality
  - Measuring the Cost of Quality
20. Gaps and Errors due to Multiple Metrics without Conversion Rules
21. Gaps and Errors in Tools, Methodologies, Languages
Appendix 1: Alphabetical Discussion of Metrics and Measures
Appendix 2: Twenty-Five Software Engineering Targets from 2016 through 2021
Suggested Readings on Software Measures and Metric Issues
Summary and Conclusions on Measures and Metrics
Index

Preface

This is my 16th book overall and my second book on software measurement. My first measurement book was Applied Software Measurement, published by McGraw-Hill in 1991, with a second edition in 1996 and a third edition in 2008. The reason I decided on a new book on measurement instead of a fourth edition of the older book is that this new book has a different vantage point. The first book was a kind of tutorial on software measurement, with practical advice on getting started and on producing useful reports for management and clients. This new book is not a tutorial on measurement, but rather a critique of a number of bad measurement practices, hazardous metrics, and huge gaps and omissions in the software literature that leave major topics uncovered and unexamined. In fact, the completeness of software historical data among more than 100 companies and 20 government groups is only about 37%.

In my regular professional work, I help clients collect benchmark data. In doing this, I have noticed major gaps and omissions that need to be corrected if the data are to be useful for comparisons or for estimating future projects. Among the more serious gaps are leaks from software effort data that, if not corrected, will distort reality and make the benchmarks almost useless and possibly even harmful.

One of the most common leaks is unpaid overtime. Software is a very labor-intensive occupation, and many of us work very long hours, but few companies actually record unpaid overtime. This means that software effort is underreported by around 15%, which is too large a value to ignore.
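A small sketch can make the size of this leak concrete. The code below is purely illustrative and is not taken from the book: the function names, the 132 work hours per month, the example project size, and the reading of "underreported by around 15%" as recorded hours covering only about 85% of true effort are all assumptions made for the example.

```python
def true_effort_hours(recorded_hours: float, unpaid_overtime_share: float = 0.15) -> float:
    """Estimate total effort when a share of real hours (unpaid overtime) is never recorded.

    Assumes "underreported by ~15%" means recorded hours cover only ~85% of true effort.
    """
    return recorded_hours / (1.0 - unpaid_overtime_share)


def productivity_fp_per_month(function_points: float, effort_hours: float,
                              work_hours_per_month: float = 132.0) -> float:
    """Function points delivered per staff month of effort (132 hours/month is an assumption)."""
    return function_points / (effort_hours / work_hours_per_month)


if __name__ == "__main__":
    recorded = 10_000.0                    # hours actually logged for a hypothetical 1,000 FP project
    actual = true_effort_hours(recorded)   # ~11,765 hours once unpaid overtime is added back
    apparent = productivity_fp_per_month(1_000, recorded)
    true_rate = productivity_fp_per_month(1_000, actual)
    print(f"apparent productivity: {apparent:.1f} FP per staff month")   # ~13.2
    print(f"true productivity:     {true_rate:.1f} FP per staff month")  # ~11.2
```

On these assumed numbers the apparent rate is about 13.2 function points per staff month against a true rate of about 11.2, an overstatement of nearly 18%, which is why a benchmark built from recorded hours alone cannot be trusted.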
Other leaks include the work of part-time specialists who come and go as needed. There are dozens of these specialists, and their combined effort can top 45% of total software effort on large projects. There are too many to show all of them, but some of the more common include the following:

1. Agile coaches
2. Architects (software)
3. Architects (systems)
4. Architects (enterprise)
5. Assessment specialists
6. Capability maturity model integrated (CMMI) specialists
7. Configuration control specialists
8. Cost estimating specialists
9. Customer support specialists
10. Database administration specialists
11. Education specialists
12. Enterprise resource planning (ERP) specialists
13. Expert-system specialists
14. Function point specialists (certified)
15. Graphics production specialists
16. Human factors specialists
17. Integration specialists
18. Library specialists (for project libraries)
19. Maintenance specialists
20. Marketing specialists
21. Member of the technical staff (multiple specialties)
22. Measurement specialists
23. Metric specialists
24. Project cost analysis specialists
25. Project managers
26. Project office specialists
27. Process improvement specialists
28. Quality assurance specialists
29. Scrum masters
30. Security specialists
31. Technical writing specialists
32. Testing specialists (automated)
33. Testing specialists (manual)
34. Web page design specialists
35. Web masters

Another major leak is the failure to record the rather high costs for users when they participate in software projects, such as embedded users on agile projects. Users also provide requirements, participate in design and phase reviews, perform acceptance testing, and carry out many other critical activities. User costs can collectively approach 85% of the effort of the actual software development teams.

Without multiplying examples, this new book is somewhat like a medical book that attempts to discuss treatments for common diseases. This book goes through a series of measurement and metric problems and explains the damage they can cause. There are also some suggestions on overcoming these problems, but the main focus of the book is to show readers all of the major gaps and problems that need to be corrected in order to accumulate accurate and useful benchmarks for software projects. I hope readers will find the information to be of use.

Quality data are even worse than productivity and resource data and are only about 25% complete. The new technical debt metric is only about 17% complete. Few companies even start quality measures until after unit test, so all early bugs found by reviews, desk checks, and static analysis are invisible. Technical debt does not include consequential damages to clients, nor does it include litigation costs when clients sue for poor quality. Hardly anyone measures bad fixes, or new bugs in the bug repairs themselves: about 7% of bug repairs contain new bugs, and this can rise above 35% for modules with high cyclomatic complexity. Even fewer companies measure bad-test cases, or bugs in test libraries, which average about 15%.

Yet another problem with software measurements has been the continuous use, for more than 50 years, of metrics that distort reality and violate standard economic principles. The two most flagrant metrics with proven errors are cost per defect and lines of code (LOC). The cost per defect metric penalizes quality and makes buggy applications look better than they are. The LOC metric makes requirements and design invisible and, even worse, penalizes modern high-level programming languages.
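To make the cost-per-defect distortion concrete, here is a deliberately simplified comparison in the spirit of the Case A (poor quality) and Case B (good quality) discussion listed in the table of contents. Every figure in it (the 1,000 function point size, the fixed test-preparation cost, the defect counts, and the per-defect repair cost) is a hypothetical assumption chosen for illustration, not data from the text.

```python
from dataclasses import dataclass


@dataclass
class ReleaseQualityCosts:
    """Hypothetical defect-removal costs for one release of a 1,000 function point application."""
    name: str
    defects_found: int
    fixed_costs: float            # writing and running test cases: roughly constant whatever the quality
    repair_cost_per_defect: float

    def total_cost(self) -> float:
        return self.fixed_costs + self.defects_found * self.repair_cost_per_defect

    def cost_per_defect(self) -> float:
        # Case C (zero defects) would make this metric undefined, another sign of trouble.
        return self.total_cost() / self.defects_found if self.defects_found else float("inf")

    def cost_per_function_point(self, function_points: float = 1_000.0) -> float:
        return self.total_cost() / function_points


if __name__ == "__main__":
    cases = [
        ReleaseQualityCosts("Case A: poor quality", defects_found=500,
                            fixed_costs=50_000.0, repair_cost_per_defect=200.0),
        ReleaseQualityCosts("Case B: good quality", defects_found=50,
                            fixed_costs=50_000.0, repair_cost_per_defect=200.0),
    ]
    for c in cases:
        print(f"{c.name}: total ${c.total_cost():,.0f}, "
              f"${c.cost_per_defect():,.0f} per defect, "
              f"${c.cost_per_function_point():.0f} per function point")
```

Because the fixed costs of writing and running test cases do not shrink as quality improves, cost per defect rises as defects decline (here from 300 to 1,200 per defect) even though total cost and cost per function point fall (here from 150 to 60 per function point); judged by cost per defect alone, the buggier release looks cheaper, which is exactly the distortion described above.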
Professional benchmark organizations such as Namcook Analytics, Q/P Management Group, David's Consulting, and TI Metricas in Brazil, which validate client historical data before logging it, can achieve measurement accuracy of perhaps 98%. Contract projects that need accurate billable hours in order to get paid are often accurate to within 90% for development effort (but many omit unpaid overtime, and they never record user costs). Function point metrics are the best choice for both economic and quality analyses of software projects. The new SNAP metric (software nonfunctional assessment process) measures nonfunctional requirements but is difficult to apply and also lacks empirical data. Ordinary internal information system projects and web applications developed under a cost-center model, where costs are absorbed instead of being charged out, are the least accurate and are the ones that average only 37% completeness. Agile projects are very weak in measurement accuracy, often below 50%. Self-reported benchmarks are also weak, often below 35% accuracy in accumulating actual costs.

A distant analogy to this book on measurement problems is Control of Communicable Diseases in Man, published by the U.S. Public Health Service. It has concise descriptions of the symptoms and causes of more than 50 common communicable diseases, together with discussions of proven effective therapies. Another medical book with useful guidance for those of us in software is Paul Starr's excellent book The Social Transformation of American Medicine.

Suggested Readings on Software Measures and Metric Issues

Gack, Gary. Managing the Black Hole: The Executive's Guide to Software Project Risk. Thomson, GA: Business Expert Publishing, 2010.
Gack, Gary. Applying Six Sigma to Software Implementation Projects. http://software.isixsigma.com/library/content/c040915b.asp (last accessed on October 13, 2016).
Galorath, Dan. Software Sizing, Estimating, and Risk Management: When Performance is Measured Performance Improves. Philadelphia, PA: Auerbach Publishing, 2006, 576 p.
Garmus, David and Herron, David. Measuring the Software Process: A Practical Guide to Functional Measurement. Englewood Cliffs, NJ: Prentice Hall, 1995.
Garmus, David and Herron, David. Function Point Analysis: Measurement Practices for Successful Software Projects. Boston, MA: Addison Wesley Longman, 2001, 363 p.
Garmus, David, Russac, Janet, and Edwards, Royce. Certified Function Point Counters Examination Guide. Boca Raton, FL: CRC Press, 2010.
Gilb, Tom and Graham, Dorothy. Software Inspections. Reading, MA: Addison Wesley, 1993.
Glass, Robert L. Software Runaways: Lessons Learned from Massive Software Project Failures. Englewood Cliffs, NJ: Prentice Hall, 1998.
Harris, Michael D.S., Herron, David, and Iwanicki, Stasia. The Business Value of IT. Boca Raton, FL: CRC Press, Auerbach, 2008.
IFPUG (52 authors). The IFPUG Guide to IT and Software Measurement. Boca Raton, FL: CRC Press, Auerbach Publishers, 2012.
International Function Point Users Group (IFPUG). IT Measurement: Practical Advice from the Experts. Boston, MA: Addison Wesley Longman, 2002, 759 p.
Jacobson, Ivar, Ng, Pan-Wei, McMahon, Paul, Spence, Ian, and Lidman, Svante. The Essence of Software Engineering: Applying the SEMAT Kernel. Boston, MA: Addison Wesley, 2013.
Johnson, James et al. The Chaos Report. West Yarmouth, MA: The Standish Group, 2000.
Jones, Capers. Patterns of Software System Failure and Success. Boston, MA: International Thomson Computer Press, 1995, 250 p.
Jones, Capers. Software Quality: Analysis and Guidelines for Success. Boston, MA: International Thomson Computer Press, 1997, 492 p.
Jones, Capers. Sizing up software. Scientific American Magazine, 1998, 279(6):104–111.
Jones, Capers. Software Assessments, Benchmarks, and Best Practices. Boston, MA: Addison Wesley Longman, 2000, 657 p.
Jones, Capers. Estimating Software Costs. New York: McGraw-Hill, 2007.
Jones, Capers. Conflict and Litigation Between Software Clients and Developers. Narragansett, RI: Software Productivity Research, Inc., 2008, 45 p.
Jones, Capers. Preventing Software Failure: Problems Noted in Breach of Contract Litigation. Narragansett, RI: Capers Jones & Associates, 2008, 25 p.
Jones, Capers. Applied Software Measurement. New York: McGraw-Hill, 3rd edition, 2008, 668 p.
Jones, Capers. Software Engineering Best Practices. New York: McGraw-Hill, 2010.
Jones, Capers and Bonsignour, Olivier. The Economics of Software Quality. Reading, MA: Addison Wesley, 2011.
Jones, Capers. A Short History of the Cost per Defect Metric. Narragansett, RI: Namcook Analytics LLC, 2014.
Jones, Capers. A Short History of Lines of Code Metrics. Narragansett, RI: Namcook Analytics LLC, 2014.
Jones, Capers. The Technical and Social History of Software Engineering. Boston, MA: Addison Wesley Longman, 2014.
Kan, Stephen H. Metrics and Models in Software Quality Engineering. Boston, MA: Addison Wesley Longman, 2nd edition, 2003, 528 p.
Pressman, Roger. Software Engineering: A Practitioner's Approach. New York: McGraw-Hill, 6th edition, 2005.
Putnam, Lawrence H. Measures for Excellence: Reliable Software On Time, Within Budget. Englewood Cliffs, NJ: Yourdon Press–Prentice Hall, 1992, 336 p.
Putnam, Lawrence H. and Myers, Ware. Industrial Strength Software: Effective Management Using Measurement. Los Alamitos, CA: IEEE Press, 1997, 320 p.
Radice, Ronald A. High Quality Low Cost Software Inspections. Andover, MA: Paradoxicon Publishing, 2002, 479 p.
Royce, Walker E. Software Project Management: A Unified Framework. Reading, MA: Addison Wesley Longman, 1998.
Starr, Paul. The Social Transformation of American Medicine (Pulitzer Prize, 1982). New York: Basic Books, 1982.
Strassmann, Paul. The Business Value of Computers: An Executive's Guide. Boston, MA: International Thomson Computer Press, 1994.
Strassmann, Paul. The Squandered Computer. New Canaan, CT: The Information Economics Press, 1997, 426 p.
Wiegers, Karl E. Peer Reviews in Software: A Practical Guide. Boston, MA: Addison Wesley Longman, 2002, 232 p.
Yourdon, Edward. Death March: The Complete Software Developer's Guide to Surviving "Mission Impossible" Projects. Upper Saddle River, NJ: Prentice Hall PTR, 1997, 218 p.
Yourdon, Edward. Outsource: Competing in the Global Productivity Race. Upper Saddle River, NJ: Prentice Hall PTR, 2005, 251 p.

Websites

Information Technology Metrics and Productivity Institute (ITMPI): www.ITMPI.org
International Software Benchmarking Standards Group (ISBSG): www.ISBSG.org
International Function Point Users Group (IFPUG): www.IFPUG.org
Project Management Institute: www.PMI.org
Capers Jones: www.Namcook.com

Software Benchmark Organizations circa 2015

Software benchmark providers, listed in alphabetical order:

1. 4SUM Partners: www.4sumpartners.com
2. Bureau of Labor Statistics, Dept. of Commerce: www.bls.gov
3. Capers Jones (Namcook Analytics LLC): www.namcook.com
4. CAST Software: www.castsoftware.com
5. Congressional Cyber-Security Caucus: cybercaucus.langevin.house.gov
6. Construx: www.construx.com
7. COSMIC function points: www.cosmicon.com
8. Cyber-Security and Information Systems: https://s2cpat.thecsiac.com/s2cpat/
9. David Consulting Group: www.davidconsultinggroup.com
10. Forrester Research: www.forrester.com
11. Galorath Incorporated: www.galorath.com
12. Gartner Group: www.gartner.com
13. German Computer Society: http://metrics.cs.uni-magdeburg.de/
14. Hoovers Guides to Business: www.hoovers.com
15. IDC: www.IDC.com
16. ISBSG Limited: www.isbsg.org
17. ITMPI: www.itmpi.org
18. Jerry Luftman (Stevens Institute): http://howe.stevens.edu/index.php?id=14
19. Level Ventures: www.level4ventures.com
20. Metri Group, Amsterdam: www.metrigroup.com
21. Namcook Analytics LLC: www.namcook.com
22. Price Systems: www.pricesystems.com
23. Process Fusion: www.process-fusion.net
24. QuantiMetrics: www.quantimetrics.net
25. Quantitative Software Management (QSM): www.qsm.com
26. Q/P Management Group: www.qpmg.com
27. RBCS, Inc.: www.rbcs-us.com
28. Reifer Consultants LLC: www.reifer.com
29. Howard Rubin: www.rubinworldwide.com
30. SANS Institute: www.sans.org
31. Software Benchmarking Organization (SBO): www.sw-benchmark.org
32. Software Engineering Institute (SEI): www.sei.cmu.edu
33. Software Improvement Group (SIG): www.sig.eu
34. Software Productivity Research: www.SPR.com
35. Standish Group: www.standishgroup.com
36. Strassmann, Paul: www.strassmann.com
37. System Verification Associates LLC: http://sysverif.com
38. Test Maturity Model Integrated: www.experimentus.com

Summary and Conclusions on Measures and Metrics

Software is one of the driving forces of modern industry and government operations. Software controls almost every complicated device now used by human beings. But software remains a difficult and intractable discipline that is hard to predict and hard to measure, and the quality of software is embarrassingly bad. Given the economic importance of software, it is urgent to make software development and maintenance true engineering disciplines, as opposed to art forms or skilled crafts. In order to make software engineering a true engineering discipline and a true profession, much better measurement practices are needed than those that have been used to date. Quantitative and qualitative data need to be collected in standard fashions that are amenable to statistical analysis.
