Metrics and Models in Software Quality Engineering, Second Edition (English Reprint)

Author: Stephen H. Kan
Publisher: Tsinghua University Press
Series: Renowned Foreign Textbooks for University Computer Education
Copyright note: This book is in the public domain or reproduced with the copyright holder's authorization; please support legitimate editions.
Tags: Software testing and maintenance

About the Author

No author biography is currently available for this title.

About the Book

This book presents metrics and models for the software quality process, covering quality planning, process improvement and quality control, in-process quality management, product engineering (design and code complexity), reliability assessment and prediction, and the analysis of customer satisfaction data. Beyond explaining the metrics and techniques of software quality, it provides many real-world case studies, is fully aligned with the Software Engineering Body of Knowledge (SWEBOK), and is both instructive and practical. It can serve as a textbook for senior undergraduates and graduate students in software engineering and related majors, and as an important reference for software engineering professionals.

As software grows ever larger, quality problems become ever more prominent. Poor quality not only raises the cost of using software after delivery and leads to premature retirement; it is also one of the main causes of late delivery, runaway costs, and outright development failure. Software science and software engineering have long sought a clearer understanding of the nature of software and more rational ways to organize and develop it, so that software can be produced quickly and in volume while high quality is assured. As a result, software books and journals touch on quality everywhere, from analysis, design, construction, testing, and maintenance to management, configuration, and delivery, and construction methods, solutions, and implementation standards appear in an endless stream that absorbs a great deal of practitioners' attention; yet books that present the latest research results and measurement techniques for software quality in an intuitive, systematic way remain rare.

This book is one of the better works of that kind. Its author, Stephen H. Kan, is a senior researcher at IBM. The first edition, published in 1995, drew wide attention in the industry; the second edition expands the original 13 chapters to 19, adding mature software measurement and quality assurance techniques such as in-process metrics for software testing, metrics for object-oriented development, availability metrics, in-process and project assessment methods, and software process improvement together with function point measurement.

The book has several notable features. First, it is practice-oriented, systematic, and complete: starting from the basic concepts of software quality, it covers fundamental measurement theory, the various metrics used during software development, quality management, the seven basic quality tools, and the newer measurement results mentioned above, so software engineering practitioners can draw direct benefit from it. Second, the material is fresh and has genuine academic depth: traditional software quality models rooted in procedural programming, such as the McCabe and Halstead models, are compressed into a fairly small space, while many newer quality management and measurement models, such as reliability growth models and defect removal models, are introduced; these models rest on a solid foundation of probability and statistics, and for object-oriented software in particular the book offers a set of new criteria and empirical formulas, making it an excellent reference for practitioners researching and developing quality measurement methods and standards for today's component-based and Web-service software. Third, the book conforms to the Guide to the Software Engineering Body of Knowledge (SWEBOK) published by the IEEE and ACM in 2001, and the content delimited in its software quality chapter is a valuable reference for those who design computer science curricula. In fact, the book is well suited...
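To give a concrete flavor of the kind of measurement the book deals with, here is a minimal Python sketch, not taken from the book, that computes a defect density figure and fits a Rayleigh-shaped defect removal profile to hypothetical per-phase data; the phase rates, product size, and parameter search ranges are invented for illustration, and a real analysis would use a proper nonlinear fitting procedure rather than a grid search.

```python
import math

# Hypothetical per-phase defect removal rates (defects per KLOC) for a
# 120 KLOC product: design review, code inspection, unit test,
# integration test, system test. These numbers are made up.
phase_defect_rates = [2.1, 4.3, 3.8, 2.2, 0.9]
kloc = 120.0

# Defect density: defects found so far, normalized by product size.
total_defects = sum(rate * kloc for rate in phase_defect_rates)
print(f"Defect density to date: {total_defects / kloc:.1f} defects/KLOC")

# Rayleigh-style model: cumulative defect arrivals are modeled as
#   F(t) = K * (1 - exp(-(t / c)**2))
# a common parameterization of the Rayleigh curve, where K is the lifetime
# defect total and c sets where the per-phase curve peaks (at c / sqrt(2)).
def rayleigh_phase(K: float, c: float, t: int) -> float:
    """Expected defects removed in phase t, i.e. F(t) - F(t - 1)."""
    F = lambda x: K * (1.0 - math.exp(-(x / c) ** 2))
    return F(t) - F(t - 1)

# Crude grid search for (K, c) minimizing squared error against the
# observed per-phase rates.
best = None
for K in (k * 0.5 for k in range(20, 81)):       # 10.0 .. 40.0 defects/KLOC
    for c in (i * 0.1 for i in range(10, 61)):   # scale parameter 1.0 .. 6.0
        err = sum(
            (rayleigh_phase(K, c, t + 1) - obs) ** 2
            for t, obs in enumerate(phase_defect_rates)
        )
        if best is None or err < best[0]:
            best = (err, K, c)

_, K_hat, c_hat = best
# Latent defects = fitted lifetime total minus what the model says has been
# removed by the end of the last observed phase.
removed = K_hat * (1.0 - math.exp(-(len(phase_defect_rates) / c_hat) ** 2))
print(f"Fitted lifetime defect rate K: {K_hat:.1f} defects/KLOC")
print(f"Projected latent (field) defect rate: {K_hat - removed:.2f} defects/KLOC")
```

The projected latent-defect figure at the end is the sort of quantity the book's quality planning and reliability chapters use when judging whether a product is good enough to ship.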

Table of Contents

Foreword to the Second Edition xiii

Foreword to the First Edition xv

Preface xvii

Chapter 1: What Is Software Quality? 1

1.1 Quality: Popular Views 1

1.2 Quality: Professional Views 2

1.2.1 The Role of the Customer

1.3 Software Quality 4

1.4 Total Quality Management 7

1.5 Summary 10

References 11

Chapter 2: Software Development Process Models 13

2.1 The Waterfall Development Model 14

2.2 The Prototyping Approach 19

2.3 The Spiral Model 21

2.4 The Iterative Development Process Model 24

2.5 The Object-Oriented Development Process 27

2.6 The Cleanroom Methodology 32

2.7 The Defect Prevention Process 35

2.8 Process Maturity Framework and Quality Standards 39

2.8.1 The SEI Process Capability Maturity Model 39

2.8.2 The SPR Assessment 44

2.8.3 The Malcolm Baldrige Assessment 45

2.8.4 ISO 9000 47

2.9 Summary 51

References 52

Chapter 3: Fundamentals of Measurement Theory 55

3.1 Definition, Operational Definition, and Measurement 55

3.2 Level of Measurement 59

3.3 Some Basic Measures 62

3.4 Reliability and Validity 70

3.5 Measurement Errors 73

3.5.1 Assessing Reliability 75

3.5.2 Correction for Attenuation 76

3.6 Be Careful with Correlation 77

3.7 Criteria for Causality 80

3.8 Summary 82

References 83

Chapter 4: Software Quality Metrics Overview 85

4.1 Product Quality Metrics 86

4.1.1 The Defect Density Metric 87

4.1.2 Customer Problems Metric 96

4.1.3 Customer Satisfaction Metrics 98

4.2 In-Process Quality Metrics 100

4.2.1 Defect Density During Machine Testing 100

4.2.2 Defect Arrival Pattern During Machine Testing 101

4.2.3 Phase-Based Defect Removal Pattern 103

4.2.4 Defect Removal Effectiveness 103

4.3 Metrics for Software Maintenance 105

4.3.1 Fix Backlog and Backlog Management Index 106

4.3.2 Fix Response Time and Fix Responsiveness 107

4.3.3 Percent Delinquent Fixes 108

4.3.4 Fix Quality 109

4.4 Examples of Metrics Programs 110

4.4.1 Motorola 110

4.4.2 Hewlett-Packard 115

4.4.3 IBM Rochester 116

4.5 Collecting Software Engineering Data 117

4.6 Summary 123

References 125

Chapter 5: Applying the Seven Basic Quality Tools in Software Development 127

5.1 Ishikawa's Seven Basic Tools 128

5.2 Checklist 130

5.3 Pareto Diagram 133

5.4 Histogram 136

5.5 Run Charts 138

5.6 Scatter Diagram 140

5.7 Control Chart 143

5.8 Cause-and-Effect Diagram 152

5.9 Relations Diagram 154

5.10 Summary 156

References 158

Chapter 6: Defect Removal Effectiveness 159

6.1 Literature Review 160

6.2 A Closer Look at Defect Removal Effectiveness 164

6.3 Defect Removal Effectiveness and Quality Planning 172

6.3.1 Phase-Based Defect Removal Model 172

6.3.2 Some Characteristics of a Special Case Two-Phase Model 174

6.4 Cost Effectiveness of Phase Defect Removal 177

6.5 Defect Removal Effectiveness and Process Maturity Level 181

6.6 Summary 183

References 184

Chapter 7: The Rayleigh Model 187

7.1 Reliability Models 187

7.2 The Rayleigh Model 189

7.3 Basic Assumptions 192

7.4 Implementation 195

7.5 Reliability and Predictive Validity 203

7.6 Summary 205

References 206

Chapter 8: Exponential Distribution and Reliability Growth Models 207

8.1 The Exponential Model 208

8.2 Reliability Growth Models 211

8.2.1 Jelinski-Moranda Model 212

8.2.2 Littlewood Models 213

8.2.3 Goel-Okumoto Imperfect Debugging Model 213

8.2.4 Goel-Okumoto Nonhomogeneous Poisson Process Model 213

8.2.5 Musa-Okumoto Logarithmic Poisson Execution Time Model 215

8.2.6 The Delayed S and Inflection S Models 215

8.3 Model Assumptions 216

8.4 Criteria for Model Evaluation 218

8.5 Modeling Process 220

8.6 Test Compression Factor 224

8.7 Estimating the Distribution of Total Defects over Time 226

8.8 Summary 229

References 231

Chapter 9: Quality Management Models 235

9.1 The Rayleigh Model Framework 236

9.2 The Code Integration Pattern 242

9.3 The PTR Submodel 245

9.4 The PTR Arrival/Backlog Projection Model 249

9.5 Reliability Growth Models 254

9.6 Criteria for Model Evaluation 257

9.7 In-Process Metrics and Reports 258

9.8 Orthogonal Defect Classification 266

9.9 Summary 270

References 270

Chapter 10: In-Process Metrics for Software Testing 271

10.1 In-Process Metrics for Software Testing 272

10.1.1 Test Progress S Curve 272

10.1.2 Testing Defect Arrivals over Time 279

10.1.3 Testing Defect Backlog over Time 283

10.1.4 Product Size over Time 285

10.1.5 CPU Utilization during Test 286

10.1.6 System Crashes and Hangs 289

10.1.7 Mean Time to Unplanned IPL 291

10.1.8 Critical Problems: Show Stoppers 293

10.2 In-Process Metrics and Quality Management 294

10.2.1 Effort/Outcome Model 298

10.3 Possible Metrics for Acceptance Testing to Evaluate Vendor-Developed Software 302

10.4 How Do You Know Your Product Is Good Enough to Ship? 304

10.5 Summary 308

References 309

Chapter 11: Complexity Metrics and Models 311

11.1 Lines of Code 312

11.2 Halstead's Software Science 314

11.3 Cyclomatic Complexity 315

11.4 Syntactic Constructs 318

11.5 Structure Metrics 319

11.6 An Example of Module Design Metrics in Practice 322

11.7 Summary 328

References 329

Chapter 12: Metrics and Lessons Learned for Object-Oriented Projects 331

12.1 Object-Oriented Concepts and Constructs 331

12.2 Design and Complexity Metrics 334

12.2.1 Lorenz Metrics and Rules of Thumb 334

12.2.2 Some Metrics Examples 336

12.2.3 The CK OO Metrics Suite 337

12.2.4 Validation Studies and Further Examples 339

12.3 Productivity Metrics 343

12.4 Quality and Quality Management Metrics 347

12.5 Lessons Learned for OO Projects 351

12.6 Summary 356

References 357

Chapter 13: Availability Metrics 359

13.1 Definition and Measurements of System Availability 360

13.2 Reliability, Availability, and Defect Rate 362

13.3 Collecting Customer Outage Data for Quality Improvement 366

13.4 In-Process Metrics for Outage and Availability 372

13.5 Summary 394

References 394

Chapter 14: Measuring and Analyzing Customer Satisfaction 375

14.1 Customer Satisfaction Surveys 376

14.1.1 Methods of Survey Data Collection 376

14.1.2 Sampling Methods 377

14.1.3 Sample Size 379

14.2 Analyzing Satisfaction Data 381

14.2.1 Specific Attributes and Overall Satisfaction 382

14.3 Satisfaction with Company 388

14.4 How Good Is Good Enough? 390

Chapter 15: Conducting In-Process Quality Assessments

15.5 Summary 410

References 411

Chapter 16: Conducting Software Project Assessments 413

16.1 Audit and Assessment 414

16.2 Software Process Maturity Assessment and Software Project Assessment 415

16.3 Software Process Assessment Cycle 417

16.4 A Proposed Software Project Assessment Method 420

16.4.1 Preparation Phase 421

16.4.2 Facts Gathering Phase 1 422

16.4.3 Questionnaire Customization and Finalization 423

16.4.4 Facts Gathering Phase 2 425

16.4.5 Possible Improvement Opportunities and Recommendations 426

16.4.6 Team Discussions of Assessment Results and Recommendations 428

16.4.7 Assessment Report 429

16.4.8 Summary 433

16.5 Summary 434

References 435

Chapter 17: Dos and Don'ts of Software Process Improvement 437

17.1 Measuring Process Maturity 438

17.2 Measuring Process Capability 440

17.3 Staged versus Continuous--Debating Religion 440

17.4 Measuring Levels Is Not Enough 441

17.5 Establishing the Alignment Principle 443

17.6 Take Time Getting Faster 444

17.7 Keep It Simple--or Face Decomplexification 446

17.8 Measuring the Value of Process Improvement 447

17.9 Measuring Process Adoption 448

17.10 Measuring Process Compliance 449

17.11 Celebrate The Journey, Not Just the Destination 450

17.12 Summary 451

References 452

Chapter 18: Using Function Point Metrics to Measure Software Process Improvement 453

18.1 Software Process Improvement Sequences 455

18.1.1 Stage 0: Software Process Assessment and Baseline 455

18.1.2 Stage 1: Focus on Management Technologies 456

18.1.3 Stage 2: Focus on Software Processes and Methodologies 457

18.1.4 Stage 3: Focus on New Tools and Approaches 457

18.1.5 Stage 4: Focus on Infrastructure and Specialization 457

18.1.6 Stage 5: Focus on Reusability 458

18.1.7 Stage 6: Focus on Industry Leadership 458

18.2 Process Improvement Economies 459

18.3 Measuring Process Improvements at Activity Levels 462

18.4 Summary 466

References 467

Chapter 19: Concluding Remarks 469

19.1 Data Quality Control 470

19.2 Getting Started with a Software Metrics Program 472

19.3 Software Quality Engineering Modeling 475

19.4 Statistical Process Control in Software Development 481

19.5 Measurement and the Future 484

References 485

Appendix: A Project Assessment Questionnaire 487

Index 509