Foundations of Multithreaded, Parallel, and Distributed Programming (English Reprint Edition)

Author: Gregory Andrews (USA)
Publisher: Higher Education Press
Series: Selected Foreign Textbooks in Information Science and Technology
Copyright note: This book is in the public domain or reprinted with the copyright holder's authorization; please support legitimate editions.
Tags: distributed system design

About the Author

No author biography is available for this edition.

Description

At the end of the 20th century, information science and technology, represented by computing and communications, profoundly influenced the world's economy, science, military affairs, education, and culture. The rapid spread and application of information technology drove a worldwide boom in the information industry and brought generous returns to many countries. Entering the 21st century, and especially with China's accession to the WTO, international competition in the information industry will grow even fiercer. Although China's information industry developed rapidly at the end of the 20th century, it still lags well behind the developed countries, and even behind countries such as India and Ireland. The pace of national informatization and the international competitiveness of the information industry will ultimately depend on the quality and quantity of talent in information science and technology.

Introducing outstanding foreign textbooks in information science and technology, and promoting English-language or bilingual instruction at schools where conditions permit, is an important measure taken by the Ministry of Education to accelerate the training of large numbers of high-quality information technology professionals. To this end, the Ministry asked Higher Education Press to lead a pilot program for importing such textbooks, with two requirements: high quality and low price. Through the efforts of Higher Education Press and the expert panel on imported textbooks, the first batch of more than twenty titles was published within a fairly short time. The series has been widely praised: many of the books are classics by world-renowned experts and professors, or outstanding works reflecting the latest advances in the field, representing the first-rate level of information science and technology education worldwide, at prices comparable to domestically written textbooks of the same kind.

This importation effort was organized jointly by the Department of Higher Education of the Ministry of Education and Higher Education Press, with broad participation by domestic experts and professors in information science and technology. After repeated screening of a large number of foreign textbooks, and with reference to the curricula of well-known universities at home and abroad, the titles were selected systematically. Among them, Operating System Concepts, the classic by Professor Silberschatz, vice president of the Information Sciences Research Center at Bell Laboratories and published by John Wiley, was secured only after repeated negotiations and considerable effort. William Stallings has written a series of information science and technology textbooks that are very popular in the United States, several of which have won the computer science and engineering textbook award of the Text and Academic Authors Association; two of his books are included in this batch. Data Mining, by Jiawei Han, a Chinese scholar working in the United States, is a landmark work in its field. Introduction to Algorithms, the classic jointly written by Thomas Cormen of Dartmouth College and several scholars at MIT and Columbia University, appeared in its second edition in 2001 after eleven years of refinement. Professor James Kurose, now teaching at the University of Massachusetts, has won outstanding-teacher or outstanding-teaching awards ten times at three American universities; his Computer Networking has been warmly received for its fresh organization and up-to-date content. In the continuing effort to reduce the price of these imported tex...

Table of Contents

Preface

Chapter 1: The Concurrent Computing Landscape

1.1 The Essence of Concurrent Programming

1.2 Hardware Architectures

1.2.1 Processors and Caches

1.2.2 Shared-Memory Multiprocessors

1.2.3 Distributed-Memory Multicomputers and Networks

1.3 Applications and Programming Styles

1.4 Iterative Parallelism: Matrix Multiplication

1.5 Recursive Parallelism: Adaptive Quadrature

1.6 Producers and Consumers: Unix Pipes

1.7 Clients and Servers: File Systems

1.8 Peers: Distributed Matrix Multiplication

1.9 Summary of Programming Notation

1.9.1 Declarations

1.9.2 Sequential Statements

1.9.3 Concurrent Statements, Processes, and Procedures

1.9.4 Comments

Historical Notes

References

Exercises

Part 1: Shared-Variable Programming

Chapter 2: Processes and Synchronization

2.1 States, Actions, Histories, and Properties

2.2 Parallelization: Finding Patterns in a File

2.3 Synchronization: The Maximum of an Array

2.4 Atomic Actions and Await Statements

2.4.1 Fine-Grained Atomicity

2.4.2 Specifying Synchronization: The Await Statement

2.5 Producer/Consumer Synchronization

2.6 A Synopsis of Axiomatic Semantics

2.6.1 Formal Logical Systems

2.6.2 A Programming Logic

2.6.3 Semantics of Concurrent Execution

2.7 Techniques for Avoiding Interference

2.7.1 Disjoint Variables

2.7.2 Weakened Assertions

2.7.3 Global Invariants

2.7.4 Synchronization

2.7.5 An Example: The Array Copy Problem Revisited

2.8 Safety and Liveness Properties

2.8.1 Proving Safety Properties

2.8.2 Scheduling Policies and Fairness

Historical Notes

References

Exercises

Chapter 3: Locks and Barriers

3.1 The Critical Section Problem

3.2 Critical Sections: Spin Locks

3.2.1 Test and Set

3.2.2 Test and Test and Set

3.2.3 Implementing Await Statements

3.3 Critical Sections: Fair Solutions

3.3.1 The Tie-Breaker Algorithm

3.3.2 The Ticket Algorithm

3.3.3 The Bakery Algorithm

3.4 Barrier Synchronization

3.4.1 Shared Counter

3.4.2 Flags and Coordinators

3.4.3 Symmetric Barriers

3.5 Data Parallel Algorithms

3.5.1 Parallel Prefix Computations

3.5.2 Operations on Linked Lists

3.5.3 Grid Computations: Jacobi Iteration

3.5.4 Synchronous Multiprocessors

3.6 Parallel Computing with a Bag of Tasks

3.6.1 Matrix Multiplication

3.6.2 Adaptive Quadrature

Historical Notes

References

Exercises

Chapter 4: Semaphores

4.1 Syntax and Semantics

4.2 Basic Problems and Techniques

4.2.1 Critical Sections: Mutual Exclusion

4.2.2 Barriers: Signaling Events

4.2.3 Producers and Consumers: Split Binary Semaphores

4.2.4 Bounded Buffers: Resource Counting

4.3 The Dining Philosophers

4.4 Readers and Writers

4.4.1 Readers/Writers as an Exclusion Problem

4.4.2 Readers/Writers Using Condition Synchronization

4.4.3 The Technique of Passing the Baton

4.4.4 Alternative Scheduling Policies

4.5 Resource Allocation and Scheduling

4.5.1 Problem Definition and General Solution Pattern

4.5.2 Shortest-Job-Next Allocation

4.6 Case Study: Pthreads

4.6.1 Thread Creation

4.6.2 Semaphores

4.6.3 Example: A Simple Producer and Consumer

Historical Notes

References

Exercises

Chapter 5: Monitors

5.1 Syntax and Semantics

5.1.1 Mutual Exclusion

5.1.2 Condition Variables

5.1.3 Signaling Disciplines

5.1.4 Additional Operations on Condition Variables

5.2 Synchronization Techniques

5.2.1 Bounded Buffers: Basic Condition Synchronization

5.2.2 Readers and Writers: Broadcast Signal

5.2.3 Shortest-Job-Next Allocation: Priority Wait

5.2.4 Interval Timer: Covering Conditions

5.2.5 The Sleeping Barber: Rendezvous

5.3 Disk Scheduling: Program Structures

5.3.1 Using a Separate Monitor

5.3.2 Using an Intermediary

5.3.3 Using a Nested Monitor

5.4 Case Study: Java

5.4.1 The Threads Class

5.4.2 Synchronized Methods

5.4.3 Parallel Readers/Writers

5.4.4 Exclusive Readers/Writers

5.4.5 True Readers/Writers

5.5 Case Study: Pthreads

5.5.1 Locks and Condition Variables

5.5.2 Example: Summing the Elements of a Matrix

Historical Notes

References

Exercises

Chapter 6: Implementations

6.1 A Single-Processor Kernel

6.2 A Multiprocessor Kernel

6.3 Implementing Semaphores in a Kernel

6.4 Implementing Monitors in a Kernel

6.5 Implementing Monitors Using Semaphores

Historical Notes

References

Exercises

Part 2: Distributed Programming

Chapter 7: Message Passing

7.1 Asynchronous Message Passing

7.2 Filters: A Sorting Network

7.3 Clients and Servers

7.3.1 Active Monitors

7.3.2 A Self-Scheduling Disk Server

7.3.3 File Servers: Conversational Continuity

7.4 Interacting Peers: Exchanging Values

7.5 Synchronous Message Passing

7.6 Case Study: CSP

7.6.1 Communication Statements

7.6.2 Guarded Communication

7.6.3 Example: The Sieve of Eratosthenes

7.6.4 Occam and Modern CSP

7.7 Case Study: Linda

7.7.1 Tuple Space and Process Interaction

7.7.2 Example: Prime Numbers with a Bag of Tasks

7.8 Case Study: MPI

7.8.1 Basic Functions

7.8.2 Global Communication and Synchronization

7.9 Case Study: Java

7.9.1 Networks and Sockets

7.9.2 Example: A Remote File Reader

Historical Notes

References

Exercises

Chapter 8: RPC and Rendezvous

8.1 Remote Procedure Call

8.1.1 Synchronization in Modules

8.1.2 A Time Server

8.1.3 Caches in a Distributed File System

8.1.4 A Sorting Network of Merge Filters

8.1.5 Interacting Peers: Exchanging Values

8.2 Rendezvous

8.2.1 Input Statements

8.2.2 Client/Server Examples

8.2.3 A Sorting Network of Merge Filters

8.2.4 Interacting Peers: Exchanging Values

8.3 A Multiple Primitives Notation

8.3.1 Invoking and Servicing Operations

8.3.2 Examples

8.4 Readers/Writers Revisited

8.4.1 Encapsulated Access

8.4.2 Replicated Files

8.5 Case Study: Java

8.5.1 Remote Method Invocation

8.5.2 Example: A Remote Database

8.6 Case Study: Ada

8.6.1 Tasks

8.6.2 Rendezvous

8.6.3 Protected Types

8.6.4 Example: The Dining Philosophers

8.7 Case Study: SR

8.7.1 Resources and Globals

8.7.2 Communication and Synchronization

8.7.3 Example: Critical Section Simulation

Historical Notes

References

Exercises

Chapter 9: Paradigms for Process Interaction

9.1 Manager/Workers (Distributed Bag of Tasks)

9.1.1 Sparse Matrix Multiplication

9.1.2 Adaptive Quadrature Revisited

9.2 Heartbeat Algorithms

9.2.1 Image Processing: Region Labeling

9.2.2 Cellular Automata: The Game of Life

9.3 Pipeline Algorithms

9.3.1 A Distributed Matrix Multiplication Pipeline

9.3.2 Matrix Multiplication by Blocks

9.4 Probe/Echo Algorithms

9.4.1 Broadcast in a Network

9.4.2 Computing the Topology of a Network

9.5 Broadcast Algorithms

9.5.1 Logical Clocks and Event Ordering

9.5.2 Distributed Semaphores

9.6 Token-Passing Algorithms

9.6.1 Distributed Mutual Exclusion

9.6.2 Termination Detection in a Ring

9.6.3 Termination Detection in a Graph

9.7 Replicated Servers

9.7.1 Distributed Dining Philosophers

9.7.2 Decentralized Dining Philosophers

Historical Notes

References

Exercises

Chapter 10: Implementations

10.1 Asynchronous Message Passing

10.1.1 Shared-Memory Kernel

10.1.2 Distributed Kernel

10.2 Synchronous Message Passing

10.2.1 Direct Communication Using Asynchronous Messages

10.2.2 Guarded Communication Using a Clearinghouse

10.3 RPC and Rendezvous

10.3.1 RPC in a Kernel

10.3.2 Rendezvous Using Asynchronous Message Passing

10.3.3 Multiple Primitives in a Kernel

10.4 Distributed Shared Memory

10.4.1 Implementation Overview

10.4.2 Page Consistency Protocols

Historical Notes

References

Exercises

Part 3: Parallel Programming

Chapter 11: Scientific Computing

11.1 Grid Computations

11.1.1 Laplace's Equation

11.1.2 Sequential Jacobi Iteration

11.1.3 Jacobi Iteration Using Shared Variables

11.1.4 Jacobi Iteration Using Message Passing

11.1.5 Red/Black Successive Over-Relaxation (SOR)

11.1.6 Multigrid Methods

11.2 Particle Computations

11.2.1 The Gravitational N-Body Problem

11.2.2 Shared-Variable Program

11.2.3 Message-Passing Programs

11.2.4 Approximate Methods

11.3 Matrix Computations

11.3.1 Gaussian Elimination

11.3.2 LU Decomposition

11.3.3 Shared-Variable Program

11.3.4 Message-Passing Program

Historical Notes

References

Exercises

Chapter 12: Languages, Compilers, Libraries, and Tools

12.1 Parallel Programming Libraries

12.1.1 Case Study: Pthreads

12.1.2 Case Study: MPI

12.1.3 Case Study: OpenMP

12.2 Parallelizing Compilers

12.2.1 Dependence Analysis

12.2.2 Program Transformations

12.3 Languages and Models

12.3.1 Imperative Languages

12.3.2 Coordination Languages

12.3.3 Data Parallel Languages

12.3.4 Functional Languages

12.3.5 Abstract Models

12.3.6 Case Study: High-Performance Fortran (HPF)

12.4 Parallel Programming Tools

12.4.1 Performance Measurement and Visualization

12.4.2 Metacomputers and Metacomputing

12.4.3 Case Study: The Globus Toolkit

Historical Notes

References

Exercises

Glossary

Index