Systems Programming: Designing and Developing Distributed Applications
Systems Programming: Designing and Developing Distributed Applications explains how the development of distributed applications depends on a foundational understanding of the relationship among operating systems, networking, distributed systems, and programming. It is uniquely organized around four viewpoints: process, communication, resource, and architecture.
Other Authors:
Format: eBook
Language: English
Published: Amsterdam : Morgan Kaufmann, an imprint of Elsevier, [2016]
Subjects:
View at Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009752735206719
Table of Contents:
- Front Cover
- Systems Programming: Designing and Developing Distributed Applications
- Copyright
- Dedication
- Contents
- Preface
- The origin and purpose of this book
- The intended audience
- The organization of the book
- In-text activities
- How to use the book
- The support materials
- Acknowledgments
- Chapter 1: Introduction
- 1.1. Rationale
- 1.1.1. The Traditional Approach to Teaching Computer Science
- 1.1.2. The Systems Approach Taken in This Book
- Chapter 2: Process View
- Chapter 3: Communication View
- Chapter 4: Resource View
- Chapter 5: Architecture View
- Chapter 6: Distributed Systems
- Chapter 7: Case Studies - Putting It All Together
- 1.2. The Significance of Networking and Distributed Systems in Modern Computing-A Brief Historical Perspective
- 1.3. Introduction to Distributed Systems
- 1.3.1. Benefits and Challenges of Distributed Systems
- 1.3.2. The Nature of Distribution
- 1.3.3. Software Architectures for Distributed Applications
- 1.3.4. Metrics for Measuring the Quality of Distributed Systems and Applications
- 1.3.5. Introduction to Transparency
- 1.3.5.1. Access Transparency
- 1.3.5.2. Location Transparency
- 1.3.5.3. Replication Transparency
- 1.3.5.4. Concurrency Transparency
- 1.3.5.5. Migration Transparency
- 1.3.5.6. Failure Transparency
- 1.3.5.7. Scaling Transparency
- 1.3.5.8. Performance Transparency
- 1.3.5.9. Distribution Transparency
- 1.3.5.10. Implementation Transparency
- 1.3.5.11. Achieving Transparency
- 1.4. Introduction to the Case Studies
- 1.4.1. The Main (Distributed Game) Case Study
- 1.4.2. The Additional Case Studies
- 1.5. Introduction to Supplementary Material and Exercises
- 1.5.1. In-Text Activities
- 1.6. The Workbenches Suite of Interactive Teaching and Learning Tools
- 1.6.1. The Operating Systems Workbench 3.1 "Systems Programming Edition"
- 1.6.2. The Networking Workbench 3.1 "Systems Programming Edition"
- 1.6.3. The Distributed Systems Workbench 3.1 "Systems Programming Edition"
- 1.7. Sample Code and Related Exercises
- 1.7.1. Source Code, in C++, C#, and Java
- 1.7.2. Application Development Exercises
- Chapter 2: The Process View
- 2.1. Rationale and Overview
- 2.2. Processes
- 2.2.1. Basic Concepts
- 2.2.2. Creating a Process
- 2.3. Process Scheduling
- 2.3.1. Scheduling Concepts
- 2.3.1.1. Time Slices and Quanta
- 2.3.1.2. Process States
- 2.3.1.3. Process Behavior
- IO-Intensive Processes
- Compute-Intensive (or CPU-Intensive) Processes
- Balanced Processes
- 2.3.1.4. Scheduler Behavior, Components, and Mechanisms
- 2.3.1.5. Additional Process States: Suspended-Blocked and Suspended-Ready
- 2.3.1.6. Goals of Scheduling Algorithms
- 2.3.1.7. Scheduling Algorithms
- First Come, First Served (FCFS)
- Shortest Job First (SJF)
- Round Robin (RR)
- Shortest Remaining Job Next (SRJN)
- Multilevel Queues
- Multilevel Feedback Queues (MLFQ)
- 2.4. Scheduling for Real-Time Systems
- 2.4.1. Limitations of General-Purpose Schedulers for Real-Time Systems
- 2.4.1.1. The Deadline Scheduling Algorithm
- 2.4.1.2. The Rate Monotonic Scheduling Algorithm
- Introducing Variable, Bursty Workloads
- 2.5. Specific Scheduling Algorithms and Variants, Used in Modern Operating Systems
- 2.6. Interprocess Communication
- 2.6.1. Introduction to Sockets
- 2.7. Threads: An Introduction
- 2.7.1. General Concepts
- 2.7.2. Thread Implementations
- 2.7.3. Thread Scheduling Approaches
- 2.7.4. Synchronous (Sequential) Versus Asynchronous (Concurrent) Thread Operation
- 2.7.5. Additional Complexity Accompanying Threading
- 2.7.5.1. Application Scenarios for Multithreading
- 2.7.6. A Multithreaded IPC Example
- 2.8. Other Roles of the Operating System
- 2.9. Use of Timers Within Programs
- 2.9.1. Use of Timers to Simulate Threadlike Behavior
- 2.10. Transparency from the Process Viewpoint
- 2.10.1. Concurrency Transparency
- 2.10.2. Migration Transparency
- 2.11. The Case Study from the Process Perspective
- 2.11.1. Scheduling Requirements
- 2.11.2. Use of Timers
- 2.11.3. Need for Threads
- 2.11.4. IPC, Ports, and Sockets
- 2.12. End-of-Chapter Exercises
- 2.12.1. Questions
- 2.12.2. Exercises with the Workbenches
- 2.12.3. Programming Exercises
- 2.12.4. Answers to End-of-Chapter Questions
- 2.12.5. List of In-Text Activities
- 2.12.6. List of Accompanying Resources
- Chapter 3: The Communication View
- 3.1. Rationale and Overview
- 3.2. The Communication View
- 3.2.1. Communication Basics
- 3.3. Communication Techniques
- 3.3.1. One-Way Communication
- 3.3.2. Request-Reply Communication
- 3.3.3. Two-Way Data Transfer
- 3.3.4. Addressing Methodologies
- 3.3.4.1. Unicast Communication
- 3.3.4.2. Broadcast Communication
- 3.3.4.3. Multicast Communication
- 3.3.4.4. Anycast Communication
- 3.3.5. Remote Procedure Call
- 3.3.6. Remote Method Invocation
- 3.3.6.1. Java Interfaces
- 3.4. Layered Models of Communication
- 3.4.1. The OSI Model
- 3.4.2. The TCP/IP Model
- 3.5. The TCP/IP Suite
- 3.5.1. The IP
- 3.5.2. The TCP
- 3.5.3. The TCP Connection
- 3.5.3.1. Higher-Layer Protocols That Use TCP as a Transport Protocol
- 3.5.4. The UDP
- 3.5.4.1. Higher-Layer Protocols That Use UDP as a Transport Protocol
- 3.5.5. TCP and UDP Compared
- 3.5.6. Choosing Between TCP and UDP
- 3.6. Addresses
- 3.6.1. Flat Versus Hierarchical Addressing
- 3.6.2. Addresses in the Link Layer
- 3.6.3. Addresses in the Network Layer
- 3.6.3.1. IP Addresses
- 3.6.3.2. IPv4 Addresses
- 3.6.3.3. IPv6 Addresses
- 3.6.3.4. Translation Between IP Addresses and MAC Addresses
- 3.6.4. Addresses in the Transport Layer (Ports)
- 3.6.5. Well-Known Ports
- 3.7. Sockets
- 3.7.1. The Socket API: An Overview
- 3.7.2. The Socket API: UDP Primitive Sequence
- 3.7.3. The Socket API: TCP Primitives Sequence
- 3.7.4. Binding (Process to Port)
- 3.8. Blocking and Nonblocking Socket Behaviors
- 3.8.1. Handling Nonblocking Socket Behavior
- 3.8.2. Communication Deadlock
- 3.9. Error Detection and Error Correction
- 3.9.1. A Brief Introduction to Error Detection and Error Correction Codes
- 3.10. Application-Specific Protocols
- 3.11. Integrating Communication with Business Logic
- 3.12. Techniques to Facilitate Components Locating Each Other
- 3.13. Transparency Requirements from the Communication Viewpoint
- 3.13.1. Logical and Physical Views of Systems
- 3.14. The Case Study from the Communication Perspective
- 3.15. End-of-Chapter Exercises
- 3.15.1. Questions
- 3.15.2. Exercises with the Workbenches
- 3.15.3. Programming Exercises
- 3.15.4. Answers to End-of-Chapter Questions
- 3.15.5. Answers/Outcomes of the Workbench Exercises
- 3.15.6. List of In-Text Activities
- 3.15.7. List of Accompanying Resources
- Appendix. Socket API Reference
- A1. Socket
- A2. Socket Options
- A3. Socket Address Formats
- A4. Setting a Socket to Operate in Blocking or Nonblocking IO Mode
- A5. Bind
- A6. Listen
- A7. Connect
- A8. Accept
- A9. Send (Over a TCP Connection)
- A10. Recv (Over a TCP Connection)
- A11. SendTo (Send a UDP Datagram)
- A12. Recvfrom (Receive a UDP Datagram)
- A13. Shutdown
- A14. CloseSocket
- Chapter 4: The Resource View
- 4.1. Rationale and Overview
- 4.2. The CPU as a Resource
- 4.3. Memory as a Resource for Communication
- 4.3.1. Memory Hierarchy
- 4.4. Memory Management
- 4.4.1. Virtual Memory
- 4.4.1.1. VM Operation
- 4.4.1.2. Page Replacement Algorithms
- 4.4.1.3. General Mechanism
- 4.4.1.4. Specific Algorithms
- 4.5. Resource Management
- 4.5.1. Static Versus Dynamic Allocation of Private Memory Resources
- 4.5.2. Shared Resources
- 4.5.3. Transactions
- 4.5.4. Locks
- 4.5.5. Deadlock
- 4.5.6. Replication of Resources
- 4.6. The Network as a Resource
- 4.6.1. Network Bandwidth
- 4.6.1.1. Minimal Transmissions
- 4.6.1.2. Frame Size (Layer 2 Transmission)
- 4.6.1.3. Packet Size (Layer 3 Transmission)
- 4.6.1.4. Message Size (Upper Layers Transmission)
- 4.6.2. Data Compression Techniques
- 4.6.2.1. Lossy Versus Lossless Compression
- 4.6.2.2. Lossless Data Compression
- 4.6.2.3. Lossy Data Compression
- 4.6.3. Message Format
- 4.6.3.1. Fixed Versus Variable-Length Fields
- 4.6.3.2. Application-Level PDUs
- 4.6.4. Serialization
- 4.6.5. The Network as a Series of Links
- 4.6.6. Routers and Routing
- 4.6.7. Overheads of Communication
- 4.6.8. Recovery Mechanisms and Their Interplay with Network Congestion
- 4.7. Virtual Resources
- 4.7.1. Sockets
- 4.7.2. Ports
- 4.7.3. Network Addresses
- 4.7.4. Resource Names
- 4.8. Distributed Application Design Influences on Network Efficiency
- 4.9. Transparency from the Resource Viewpoint
- 4.9.1. Access Transparency
- 4.9.2. Location Transparency
- 4.9.3. Replication Transparency
- 4.9.4. Concurrency Transparency
- 4.9.5. Scaling Transparency and Performance Transparency
- 4.10. The Case Study from the Resource Perspective
- 4.11. End-of-Chapter Exercises
- 4.11.1. Questions
- 4.11.2. Exercises with the Workbenches
- 4.11.3. Programming Exercises
- 4.11.4. Answers to End-of-Chapter Questions
- 4.11.5. Answers/Outcomes of the Workbenches Exercises
- 4.11.6. List of In-Text Activities
- 4.11.7. List of Accompanying Resources
- Chapter 5: The Architecture View
- 5.1. Rationale and Overview
- 5.2. The Architecture View