Understanding Distributed Systems: Concepts, Challenges, and Solutions

1. Defining Distributed Systems and Their Consequences

A Distributed System is a collection of independent computers that appear to users as a single coherent system. These computers communicate and coordinate their actions by passing messages to achieve a common goal.

Significant Consequences of Distributed Systems:

  • Concurrency: Distributed systems allow multiple components to run concurrently, enabling parallel execution of tasks.
    • Consequence: Increased performance and scalability but also challenges in synchronization and resource sharing.
  • Lack of a Global Clock: Distributed systems do not have a single, universal clock to synchronize all nodes.
    • Consequence: Events must be ordered logically using techniques like Lamport timestamps or vector clocks, making time coordination complex.
  • Independent Failures: Each component in a distributed system can fail independently.
    • Consequence: Fault tolerance mechanisms such as replication, recovery, and consensus algorithms are essential to ensure system reliability and availability.
  • Resource Sharing: Distributed systems allow sharing of hardware, software, and data resources across different locations.
    • Consequence: Efficient management of shared resources is critical to prevent conflicts and deadlocks.
  • Scalability: The system can be scaled by adding more nodes without degrading performance.
    • Consequence: Design decisions must account for performance bottlenecks and network latency.
  • Heterogeneity: Distributed systems consist of different hardware, operating systems, and networks.
    • Consequence: Middleware solutions are needed to handle interoperability and ensure a seamless user experience.
  • Transparency: The system should hide the complexities of distribution from the user (e.g., access, location, replication, failure transparency).
    • Consequence: Achieving complete transparency is challenging and often involves trade-offs in performance and reliability.

2. Key Challenges in Distributed Systems

  • Heterogeneity
    • Different hardware, software, and networks are used.
    • Challenge: Making them work together.
    • Solution: Use tools like middleware to hide differences.
  • Scalability
    • The system needs to handle more users or devices without slowing down.
    • Challenge: Avoid performance issues as it grows.
    • Solution: Use techniques like load balancing and caching.
  • Fault Tolerance
    • Components can fail at any time.
    • Challenge: Keeping the system running even when something breaks.
    • Solution: Use backups, replication, and recovery mechanisms.
  • Concurrency
    • Many tasks happen at the same time.
    • Challenge: Prevent conflicts and ensure proper coordination.
    • Solution: Use locks and synchronization methods.
  • Security
    • Data and resources are exposed over open, shared networks.
    • Challenge: Protecting data in transit and preventing unauthorized access.
    • Solution: Use encryption, authentication, and access control.

3. Request-Reply Protocol in Distributed Systems

In remote invocation, a request-reply protocol governs the communication between the client and the server. The client sends a request to the server, asking it to perform a task, and the server replies with the result once the task is complete.


Steps in the Request-Reply Protocol (Client Side):

  1. doOperation (Initiating the Request):
    • The client starts by performing an operation or task that involves communicating with the server.
    • To perform the operation, the client formulates a request message. This message includes all the necessary details for the server to understand what action to take. For example, it may include the operation type, parameters, or any other data needed to fulfill the request.
  2. Send Request Message:
    • The client sends the request message to the server over a communication network (e.g., TCP, HTTP, etc.).
    • This message is delivered to the server using some form of transport protocol.
  3. Wait (Blocking Until Response):
    • After the client sends the request, it enters a waiting or blocking state. The client essentially “pauses” its own operations until it receives a response from the server.
    • This is a synchronous form of communication, meaning the client cannot continue its work until the server has processed the request and replied.
  4. Receive Reply Message:
    • The client eventually receives a reply message from the server. This message contains the result of the operation or any relevant data.
    • Once the reply is received, the client exits the waiting state.
  5. Continuation:
    • After receiving the server’s response, the client processes the result and continues with its own workflow or logic.
    • This is where the operation concludes, and the client is free to make further requests or continue with other tasks.
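The five client-side steps above can be sketched as a blocking request-reply exchange over a socket pair. This is a minimal illustration, not a production protocol: the `LENGTH` operation, the message format, and the helper names are all hypothetical.

```python
import socket
import threading

def server(conn):
    # Server side: receive the request, perform the operation, send the reply.
    request = conn.recv(1024).decode()
    op, _, arg = request.partition(" ")
    result = str(len(arg)) if op == "LENGTH" else "ERROR"
    conn.sendall(result.encode())
    conn.close()

def do_operation(conn, op, arg):
    # Client side: formulate and send the request message (steps 1-2),
    # then block until the reply arrives (steps 3-4).
    conn.sendall(f"{op} {arg}".encode())
    reply = conn.recv(1024).decode()  # synchronous wait
    return reply

client_end, server_end = socket.socketpair()
worker = threading.Thread(target=server, args=(server_end,))
worker.start()
print(do_operation(client_end, "LENGTH", "hello"))  # 5
worker.join()
client_end.close()
```

After `do_operation` returns, the client continues with its own workflow (step 5).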

4. Design Issues for Remote Procedure Call (RPC)

Remote Procedure Call (RPC) allows a program to invoke a procedure on a remote system as if it were a local call. However, designing RPC involves addressing several challenges to ensure efficiency, transparency, and reliability.

Key Design Issues for RPC:

  1. Transparency
    • Remote calls should feel like local ones.
    • Solution: Use stubs to hide communication details.
  2. Data Conversion
    • Convert data for transmission (marshaling/unmarshaling).
    • Solution: Use standard formats like JSON or Protocol Buffers.
  3. Communication Failures
    • Handle network or remote server failures.
    • Solution: Use retries, timeouts, and error detection.
  4. Latency
    • Remote calls are slower than local ones.
    • Solution: Use caching or asynchronous calls.
  5. Binding
    • Connect client and server.
    • Solution: Use dynamic binding or service directories.
  6. Security
    • Protect data and prevent unauthorized access.
    • Solution: Use encryption and authentication.
  7. Concurrency
    • Handle multiple requests at once.
    • Solution: Use threads and synchronization.
  8. Scalability
    • Handle more users without slowing down.
    • Solution: Load balancing and efficient resource use.
  9. Error Handling
    • Manage errors during remote calls.
    • Solution: Use clear error codes or exceptions.
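The data-conversion issue (item 2) can be illustrated with a minimal JSON-based stub pair. The `marshal`/`unmarshal` names and the message layout are hypothetical; real RPC frameworks (e.g., gRPC with Protocol Buffers) generate these stubs automatically.

```python
import json

def marshal(op, params):
    # Client stub: flatten the call into bytes that can cross the network.
    return json.dumps({"op": op, "params": params}).encode()

def unmarshal(data):
    # Server stub: rebuild the call from the received bytes.
    msg = json.loads(data.decode())
    return msg["op"], msg["params"]

wire = marshal("add", [2, 3])
op, params = unmarshal(wire)
print(op, sum(params))  # add 5
```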

5. Characteristics of File Systems

  1. Hierarchical Structure
    • Files are organized in a tree-like directory structure.
    • Benefit: Makes file navigation and management easier.
  2. Data Storage
    • Files store data in blocks on storage devices.
    • Characteristic: File systems optimize storage space by managing block allocation.
  3. File Naming
    • Files have names to make them identifiable.
    • Characteristic: Supports extensions and naming rules based on the operating system.
  4. Access Methods
    • Sequential Access: Data is read/written in order.
    • Random Access: Data can be accessed directly.
  5. Security and Permissions
    • Controls access to files using user/group permissions (e.g., read, write, execute).
    • Benefit: Protects files from unauthorized access.
  6. Fault Tolerance
    • Handles errors and recovers data during failures.
    • Characteristic: May use journaling or backups for reliability.
  7. Scalability
    • Handles increasing storage capacity and number of files efficiently.
    • Characteristic: Supports large files and multiple users.
  8. Portability
    • Allows file systems to be used across different platforms.
    • Characteristic: Formats like FAT, NTFS, and ext4 vary in compatibility.
  9. Concurrency
    • Supports multiple users or processes accessing files simultaneously.
    • Benefit: Manages conflicts and ensures data integrity.
  10. Caching
    • Temporarily stores frequently accessed data for faster retrieval.
    • Benefit: Improves system performance.
  11. File Sharing
    • Allows multiple users to share files over a network.
    • Characteristic: Uses locking mechanisms to avoid conflicts.

6. Key Requirements for Distributed File System

  1. Transparency

    Transparency in DFS aims to provide a seamless experience to users as if they were dealing with a local file system. It involves several types:

    • Location Transparency: Users should not need to know the physical location of files. Regardless of where a file is stored (locally, on a remote server, or even replicated across servers), the file should appear to reside in a single, consistent namespace. The system automatically locates and accesses the file without user intervention.
    • Access Transparency: The method of accessing files (opening, reading, writing, etc.) should be identical for both local and remote files. DFS achieves this by making remote file operations look just like local ones through a uniform interface.
    • Failure Transparency: A robust DFS can mask the effects of failures, such as server or network outages. When part of the system goes down, the DFS can reroute access requests to replicated data on other servers or provide backup mechanisms to ensure continuity of service.
    • Replication Transparency: If a file is replicated on multiple servers, the user should not need to know this. Replicas ensure that even if one server fails, the file is still available from another server. Users and applications should interact with the file as though it were a single instance, not multiple copies.
    • Migration Transparency: Files can be moved between servers or storage systems without the user being aware of the move. The system automatically keeps track of the new location and provides access.
  2. Scalability

    Distributed systems often need to handle large numbers of clients and growing data volumes. A well-designed DFS is scalable in terms of:

    • Storage capacity: As more data needs to be stored, new servers can be added to the system, allowing more storage space to be made available without significant reconfiguration.
  3. Replication

    Replication ensures data is copied across multiple nodes or servers to improve both performance and fault tolerance. This feature includes several key aspects:

    • High Availability: In case one server or node goes down, other replicas are available to serve the data without interruption.
    • Improved Performance: Replication allows users to access data from the closest replica, reducing latency and improving speed for geographically dispersed users.
  4. Consistency

    Maintaining consistency across multiple replicas of the same file is a key challenge. All users see the same version of a file at all times. Any changes made to a file are immediately reflected across all replicas. While this ensures data integrity, it can lead to performance bottlenecks, especially in large systems with many users or geographically distant nodes.

  5. Concurrency Control

    A DFS must allow multiple users to access and modify files concurrently. The key challenge is to maintain data integrity and correctness, without conflicts, when multiple processes access the same file. Techniques such as file or record locking are typically used to coordinate concurrent updates.

  6. Fault Tolerance

    A DFS must be resilient to failures in the network, storage devices, or servers. This feature ensures continuous operation even when components fail:

    • Replication: As mentioned, file replication across multiple servers ensures that if one server fails, the file is still accessible from another.
    • Failover Mechanisms: DFS can automatically switch to backup servers or replicas in case of failure. Advanced systems may use techniques such as checkpointing, where the system periodically saves the state of ongoing operations to help recover in case of failure.
    • Data Recovery: In case of hardware failure or data corruption, the system must be able to recover lost files or roll back to previous versions. Backup and replication play a key role here.
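The failover idea can be sketched as trying replicas in order until one serves the request. Here a plain dict stands in for a replica, and a missing key simulates a failed or stale server; all names are illustrative only.

```python
def read_with_failover(replicas, key):
    # Try each replica in turn; an unreachable or missing replica is
    # masked by moving on to the next one (failure transparency).
    for replica in replicas:
        try:
            return replica[key]
        except KeyError:
            continue
    raise RuntimeError("all replicas failed")

primary = {}                        # simulated failed/empty replica
backup = {"/etc/motd": "hello"}     # healthy replica holding the file
print(read_with_failover([primary, backup], "/etc/motd"))  # hello
```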
  7. Security

    Security is a critical concern in DFS, particularly since data is transmitted over potentially insecure networks:

    • Access Control: DFS should implement robust authentication and authorization mechanisms, ensuring that only authorized users can access or modify files. This can involve role-based access controls, identity management, and user authentication protocols.
    • Encryption: Data transmitted across the network and stored on servers should be encrypted to prevent unauthorized access or tampering.
    • Auditing: Many DFS systems keep logs of who accessed or modified files, which supports accountability and makes it possible to trace security incidents.
  8. Load Balancing

    To ensure optimal performance, DFS systems distribute file requests across multiple servers in a process known as load balancing. Load balancing ensures no single server is overwhelmed with too many requests, and it improves the efficiency and responsiveness of the system.

  9. Heterogeneity

    A DFS often operates in environments where clients and servers run different hardware platforms and operating systems. Heterogeneity support ensures:

    • Platform Independence: The DFS should be able to handle different types of devices, operating systems, and network protocols through standard, platform-neutral interfaces.

7. Distributed File Service Architecture

In Distributed Systems (DS), the File Service Architecture refers to the design and implementation of services that provide users with access to files distributed across multiple systems. This architecture aims to ensure file sharing, consistency, transparency, and efficient access while abstracting the complexity of the underlying distribution of files across different machines or networks.


The image depicts the file service architecture, which shows the interaction between a client computer and a server computer in a distributed system.

  1. Client Computer:

    The client computer typically refers to the machine that requests services from a server. It has two main components:

    • Application Program(s): These are programs running on the client-side that require access to files or directories stored on the server. Examples of application programs could be text editors, word processors, or any software that deals with data stored remotely.
    • Client Module: This acts as the intermediary between the application programs and the server. It sends requests for file access to the server and processes responses. The client module handles operations like reading or writing files, retrieving directory listings, and ensuring the data is correctly requested and delivered between the client and server.
  2. Server Computer:

    The server provides file storage and management services to client computers. Its key components include:

    • Directory Service: The directory service maps human-readable file names to the unique file identifiers (UFIDs) used by the flat file service. It also manages file metadata, such as locations and access permissions, and handles directory operations like creating, renaming, and deleting entries.
    • Flat File Service: The flat file service is responsible for handling the actual storage and retrieval of files. It operates on the physical disks (represented by the disk stacks at the bottom of the figure). This service deals with reading, writing, deleting, and modifying the contents of files.

Communication between Client and Server:

Client-Server Interaction: The client computer interacts with the server via a network. The client module sends requests from the application programs to the server for accessing files or directories. The server’s directory service first processes the request by checking the metadata (such as file location and access permissions), and the flat file service then retrieves or updates the actual file data from the storage.

Example of a Typical Workflow:

  • The client module on the client computer sends a request to access a file stored on the server.
  • The directory service on the server checks where the file is located and if the client has permission to access it.
  • The flat file service retrieves the requested file from storage.
  • The file is sent back to the client module, which then makes it available to the application program that requested it.
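A toy version of this division of responsibilities, with hypothetical class and method names, might look as follows: the directory service maps names to file identifiers, and the flat file service stores the contents.

```python
class FlatFileService:
    # Stores and retrieves raw file contents by unique file id (UFID).
    def __init__(self):
        self._files = {}

    def write(self, file_id, data):
        self._files[file_id] = data

    def read(self, file_id):
        return self._files[file_id]

class DirectoryService:
    # Maps human-readable names to the UFIDs used by the flat file service.
    def __init__(self):
        self._names = {}

    def register(self, name, file_id):
        self._names[name] = file_id

    def lookup(self, name):
        return self._names[name]

# Client-module workflow: look up the name, then fetch the contents.
flat, directory = FlatFileService(), DirectoryService()
flat.write(42, b"report data")
directory.register("/reports/q1.txt", 42)
print(flat.read(directory.lookup("/reports/q1.txt")))  # b'report data'
```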

This architecture emphasizes the division of responsibilities between the client and server to ensure efficient management and retrieval of files in a distributed system. The server’s role is to store, organize, and manage file access while the client focuses on requesting and using those files.

8. Uniform Resource Identifiers (URIs) and Uniform Resource Locators (URLs)

  1. Uniform Resource Identifiers (URIs)
    1. Definition:

      A Uniform Resource Identifier (URI) is a string of characters that uniquely identifies a resource on the internet or within a system. It is a broader concept that encompasses both URLs and URNs.

    2. Structure:
      • A URI typically consists of: scheme:[//authority]path[?query][#fragment]
      • Example: https://www.example.com/docs/page?query=term#section1
    3. Characteristics:
      • Scheme: Identifies the protocol (e.g., HTTP, FTP, mailto).
      • Path: Specifies the location of the resource.
      • Query and Fragment: Provide additional information or access specific parts of the resource.
    4. Purpose:
      • URIs are used to identify resources universally, making them essential in distributed systems for locating files, services, or data.
  2. Uniform Resource Locators (URLs)
    1. Definition:

      A Uniform Resource Locator (URL) is a subset of URIs that not only identifies a resource but also provides the means to locate it (e.g., its address).

    2. Structure: scheme://host[:port]/path[?query][#fragment]
    3. Components:
      • Scheme: Protocol to access the resource (e.g., HTTP, HTTPS, FTP).
      • Host: Domain name or IP address of the server (e.g., www.example.com).
      • Port: Optional, specifies the port number (e.g., :80 for HTTP).
      • Path: Location of the resource on the server (e.g., /page).
      • Query and Fragment: Additional parameters or specific sections of the resource.
    4. Purpose:
      • URLs are specifically designed to locate resources on a network, such as web pages or files.
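Python's standard library can decompose a URL into exactly the components listed above; the example URL is illustrative.

```python
from urllib.parse import urlparse

parts = urlparse("https://www.example.com:8080/page?user=alice#top")
print(parts.scheme)    # https
print(parts.hostname)  # www.example.com
print(parts.port)      # 8080
print(parts.path)      # /page
print(parts.query)     # user=alice
print(parts.fragment)  # top
```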

9. Navigation in Name Servers

Navigation refers to the process of resolving a name into a corresponding address or identifier using a Name Service. Name servers assist in resolving these names by communicating with each other or directly with clients. There are different methods for navigating this resolution process, as explained below:

  1. Iterative Navigation
    1. Definition:

      The client interacts with multiple name servers step by step to resolve a name.

    2. How It Works:
      • The client sends a request to a name server.
      • If the server does not have the answer, it provides a referral to another name server.
      • The client then contacts the referred server, repeating the process until the name is resolved.
    3. Example:

      A client resolving www.example.com might contact the root server, then the .com server, and finally the example.com server.

    4. Advantages:
      • The client has control over the process.
      • Reduces the load on individual servers.
    5. Disadvantage:

      Requires more effort from the client.
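Iterative navigation can be sketched with an in-memory table standing in for real name servers. The zone data, server names, and record layout here are entirely hypothetical.

```python
# Hypothetical zone data: each "server" either knows the answer outright
# or refers the client to a more specific server.
SERVERS = {
    "root": {"referral": {"com": "com-server"}},
    "com-server": {"referral": {"example.com": "example-server"}},
    "example-server": {"answer": {"www.example.com": "192.0.2.1"}},
}

def resolve_iteratively(name, server="root"):
    # The client itself follows each referral until some server answers.
    while True:
        record = SERVERS[server]
        if name in record.get("answer", {}):
            return record["answer"][name]
        for suffix, next_server in record.get("referral", {}).items():
            if name.endswith(suffix):
                server = next_server  # follow the referral
                break
        else:
            raise LookupError(name)

print(resolve_iteratively("www.example.com"))  # 192.0.2.1
```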

  2. Multicast Navigation
    1. Definition:

      The client broadcasts a query to multiple name servers simultaneously.

    2. How It Works:
      • The client sends a multicast request to all name servers in a specific group.
      • Any server with the required information responds to the client.
    3. Example:

      Used in local networks where multiple name servers exist, such as multicast DNS (mDNS).

    4. Advantages:
      • Quick resolution if multiple servers are available.
      • Reduces dependency on a single server.
    5. Disadvantage:

      Inefficient in large-scale networks due to high communication overhead.

  3. Non-Recursive Server-Controlled Navigation
    1. Definition:

      The client sends its request to a single name server, which then coordinates the resolution itself: it contacts the other relevant servers one at a time (non-recursively) on the client’s behalf and returns the final result.

    2. How It Works:
      • The client issues one query to its chosen name server.
      • That server follows referrals itself, querying other name servers in turn, until the name is resolved.
      • The final answer is returned to the client in a single response.
    3. Example:

      A local DNS resolver that itself queries the root, .com, and example.com servers before returning the final IP address to the client.

    4. Advantages:
      • The client’s workload is minimal, since it issues a single query.
      • The coordinating server can cache intermediate results for other clients.
    5. Disadvantage:

      Places more load on the coordinating name server, which must perform the entire resolution.

10. Domain Name System (DNS)

  1. Definition:

    The Domain Name System (DNS) is a hierarchical and distributed system that translates human-readable domain names (e.g., www.example.com) into machine-readable IP addresses (e.g., 192.0.2.1) and vice versa.

  2. Purpose:

    To make it easier for users to access internet resources by using domain names instead of remembering complex numerical IP addresses.

  3. Key Features:
    • Distributed: DNS is spread across multiple servers worldwide.
    • Hierarchical: Follows a tree-like structure with root, top-level domains, and subdomains.
    • Scalable: Handles billions of queries daily.

Components of DNS

  1. Domain Names:
    • Hierarchical names that identify resources.
    • Example: www.example.com
      • com: Top-Level Domain (TLD).
      • example: Second-Level Domain.
      • www: Subdomain.
  2. Name Servers:
    • Specialized servers that store DNS records and handle name resolution.
    • Types:
      • Root Name Servers: Handle top-level domain queries.
      • TLD Servers: Handle specific top-level domains (e.g., .com, .org).
      • Authoritative Servers: Store records for specific domains.
      • Caching Resolvers: Cache query results to improve speed.
  3. DNS Records:
    • Store information about domain names. Common types:
      • A Record: Maps a domain name to an IPv4 address.
      • AAAA Record: Maps a domain name to an IPv6 address.
      • CNAME Record: Aliases one domain name to another.
      • MX Record: Specifies mail servers for email.

How DNS Works (Example)

Let’s resolve the domain www.example.com:

  1. Query to DNS Resolver:

    A user types www.example.com in a browser. The request is sent to a local DNS resolver.

  2. Root Server:

    If the resolver doesn’t know the answer, it queries a Root Name Server, which provides the address of the TLD Server for .com.

  3. TLD Server:

    The resolver queries the TLD Server for .com, which provides the address of the Authoritative Server for example.com.

  4. Authoritative Server:

    The resolver queries the Authoritative Server, which returns the IP address for www.example.com.

  5. Response to User:

    The resolver sends the IP address back to the user’s device, and the browser connects to the web server.

Example

  • User types: www.example.com
  • DNS resolves to: 192.0.2.1
  • Browser connects to 192.0.2.1 to fetch the website content.

11. Cristian’s Method for Synchronizing Clocks

In Cristian’s method, a client synchronizes its clock with a time server that has an accurate time source (e.g., a UTC receiver):

  1. The client records the local send time t0 and sends a request for the current time to the server.
  2. The server replies with its current time T_server.
  3. The client records the local receive time t1 and computes the round-trip time T_round = t1 − t0.
  4. Assuming symmetric network delay, the client sets its clock to T_server + T_round / 2.

The accuracy of the estimate is ±(T_round / 2 − min), where min is the minimum one-way transmission delay. The method works best when T_round is small, and it assumes the time server itself is reliable and accurate.
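Under the usual assumptions (the client timestamps the request and reply locally, and network delay is symmetric), Cristian's adjustment is a small pure function; the variable names below are illustrative.

```python
def cristian_adjust(t_send, t_receive, t_server):
    # The client's clock read t_send when the request left and t_receive
    # when the reply arrived; the server stamped the reply with t_server.
    rtt = t_receive - t_send
    return t_server + rtt / 2  # assumes symmetric one-way delays

# Request out at 100 s, reply back at 101 s, server stamped 104 s:
print(cristian_adjust(100.0, 101.0, 104.0))  # 104.5
```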

12. Logical Clock and Lamport’s Logical Clock

Logical Clock: A mechanism to order events in distributed systems when there is no global clock.

Lamport’s Logical Clock: each process Pi keeps an integer counter Li, updated by two rules:

  • LC1: Li is incremented before each event (internal, send, or receive) at Pi.
  • LC2: (a) When Pi sends a message m, it piggybacks on m the timestamp t = Li. (b) On receiving (m, t), process Pj sets Lj := max(Lj, t) and then applies LC1 before timestamping the receive event.

These rules guarantee that if event e happened-before event e′ (e → e′), then L(e) < L(e′). The converse does not hold: L(e) < L(e′) does not imply e → e′, which is the limitation that vector clocks address.
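Lamport's two rules translate into a small counter class; the method names here are illustrative, not from any particular library.

```python
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        # LC1: increment before each local event.
        self.time += 1
        return self.time

    def send(self):
        # LC2(a): timestamp the outgoing message with the current clock.
        return self.tick()

    def receive(self, msg_time):
        # LC2(b): jump past the sender's timestamp, then tick.
        self.time = max(self.time, msg_time)
        return self.tick()

p1, p2 = LamportClock(), LamportClock()
t = p1.send()          # p1's clock: 1
print(p2.receive(t))   # p2's clock: max(0, 1) + 1 = 2
```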

13. Network Time Protocol (NTP)

NTP Overview:

NTP is essential for synchronizing the clocks of computers over a network in a distributed system. Accurate time synchronization is crucial for coordinating activities across different systems, ensuring that events are logged in a consistent order, and managing time-dependent tasks such as database transactions and file synchronization.

NTP Hierarchical Structure:

NTP operates in a hierarchical, stratified manner, organized into different levels or “stratum.”

  1. Stratum 0 (Reference Clocks): These are high-precision clocks, such as atomic clocks or GPS clocks, directly connected to the NTP servers. These clocks provide the base time for synchronization.
  2. Stratum 1 (Primary Time Servers): These servers are directly connected to Stratum 0 reference clocks. They serve as the primary time servers that distribute time to other systems.
  3. Stratum 2 (Secondary Time Servers): These servers synchronize their clocks with Stratum 1 servers and pass this time along to lower stratum servers and clients.
  4. Stratum 3 and lower: These include servers and clients that synchronize time with higher stratum servers. The accuracy of time decreases as you move further down the strata, but the system remains effective for large-scale distributed environments.

NTP Message Exchange Process:

NTP peers exchange pairs of timestamped messages to estimate the clock offset and round-trip delay between them. For a request/reply exchange, four timestamps are recorded: T1 (client sends the request), T2 (server receives it), T3 (server sends the reply), and T4 (client receives it). From these, the client computes:

  • Offset: θ = ((T2 − T1) + (T3 − T4)) / 2
  • Round-trip delay: δ = (T4 − T1) − (T3 − T2)

The offset estimates how far the client’s clock is from the server’s, assuming symmetric network delay; the delay measures the quality of the estimate, with lower-delay samples preferred. NTP filters many such samples over time and selects the best synchronization sources among its peers.

14. Global States and Consistent Cuts

In distributed systems, global state and consistent cuts are key concepts used to reason about the state of the system across multiple processes, especially when working with events or checkpoints. These concepts are essential for understanding how different components of a distributed system behave and interact over time.

  1. Global State:

    The global state of a distributed system refers to the state of all processes and communication channels in the system at a particular point in time. Since there is no global clock in a distributed system, the global state cannot be directly observed or captured at a single moment. Instead, it must be inferred by examining the states of individual processes and their communication messages.

    Components of Global State:

    • Local states of processes: The local state of each process includes its variables and the state of its execution (e.g., instruction being executed, values of variables).
    • Messages in transit: Since messages are being passed between processes, the global state must also account for messages that are in transit between processes but have not yet been received.
  2. Consistent Cuts:

    A cut in a distributed system is a subset of events in the system that represents the state of the system at a particular point in time. It is essentially a snapshot of the system’s state across all processes and communication channels. A cut is consistent if it reflects a valid execution of the system, meaning the events in the cut obey the causality constraints of the system.

    Key properties of a consistent cut:

    • A consistent cut must respect the happens-before relationship, which is a causal ordering of events.
    • If an event in one process causally depends on an event in another process, the event from the first process must appear before the event from the second process in any consistent cut.
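The causality condition above reduces to a simple check: a cut may contain a send without its receive, but never a receive without its send. In this sketch an event is a hypothetical (process, index) pair and each message records its send and receive events.

```python
def is_consistent(cut, messages):
    # A cut is consistent iff every receive event it contains also has
    # its matching send event in the cut (no "orphan" receives).
    return all(send in cut for send, recv in messages if recv in cut)

# One message m: sent as P1's first event, received as P2's first event.
messages = [(("P1", 1), ("P2", 1))]
print(is_consistent({("P1", 1)}, messages))             # True: send without receive is fine
print(is_consistent({("P2", 1)}, messages))             # False: receive without its send
print(is_consistent({("P1", 1), ("P2", 1)}, messages))  # True
```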

Example of Global States and Consistent Cuts:

Consider a distributed system with two processes (P1 and P2) and a communication channel between them. Let’s say that:

  1. P1 performs an action (e.g., sends a message m).
  2. P2 receives the message m and performs an action (e.g., processes the message).

Global State at Different Points in Time:

  • At Time 1: P1 is at state s1, and P2 is at state s2. No message has been sent or received.

Important Points:

  • A consistent cut reflects a possible history of the distributed system where the events are causally consistent with one another.
  • An inconsistent cut would represent a situation that cannot possibly occur due to the inherent causality constraints of the system.

Real-world Example:

Consider a distributed file system where Process A is writing to a file and Process B is reading the file. Let’s say:

  • Process A writes the data at time t1.
  • Process B reads the data at time t2, after Process A has written it.

If you take a cut after Process A writes the data (time t1) but before Process B reads it (time t2), that would be a consistent cut, because Process A’s write event causally precedes Process B’s read event. However, if you took a cut where Process B reads the data before Process A writes it, that would be an inconsistent cut, as it would violate the causal ordering (B cannot read data before A writes it).

Importance of Consistent Cuts:

  • Checkpointing and recovery: Consistent cuts are used in distributed systems to create checkpoints that can be used for recovery in case of failure. A consistent cut ensures that the system can be restored to a valid state after recovery.
  • Deadlock detection: Consistent cuts help in detecting deadlocks and race conditions by providing a way to examine the state of the system across processes.
  • Logging: In systems that require distributed logging, consistent cuts are used to ensure that the logs reflect the correct ordering of events.

15. Properties of Reliable Multicast and the Reliable Multicast Algorithm

Reliable multicast is a communication protocol in which data is sent from one sender to multiple receivers in a multicast group, and the protocol ensures that the data is reliably delivered to all the receivers, even in the presence of network failures or other issues. The following are key properties of reliable multicast:

  1. Message Delivery: Reliable delivery ensures that all messages sent to a multicast group are successfully received by all members, regardless of network failures or congestion. If any receiver fails to receive a message, it must be retransmitted.
  2. Ordered Delivery: Message order preservation guarantees that messages are delivered in the same order they were sent, ensuring that the receiving processes can correctly interpret the data. This is especially important for applications like video streaming or collaborative systems.

3. Duplicate Prevention: o Duplicate suppression ensures that each message is delivered only once to each receiver, even in the case of retransmissions due to network failures or congestion. 4. Fault Tolerance: o Fault tolerance guarantees the system can handle receiver failures (e.g., receivers may join or leave the multicast group) and still maintain reliable delivery. This may involve mechanisms for handling the loss of messages or dynamically adapting to changes in the group membership. 5. Scalability: o Scalability ensures that the reliable multicast protocol works efficiently even when the number of receivers in the multicast group grows large, without requiring an excessively high amount of resources from the sender or the network. 6. Congestion Control: o Congestion control ensures that the protocol adjusts its transmission rate to avoid overwhelming the network, which can be important in systems with large groups of receivers. 7. Receiver Acknowledgment: o Acknowledgment mechanism involves receivers confirming that they have received messages. This can be done via either individual or group-based acknowledgments to allow the sender to retransmit lost packets.Key Features of the Reliable Multicast Algorithm: 1. Multicast Group Setup: o The sender broadcasts a message to a multicast group (set of receivers). The message includes an identifier and other essential information for the receivers to know which group they belong to. 2. Receiver Acknowledgment: o Each receiver acknowledges the receipt of a message. There are several methods to handle acknowledgment: ▪ Receiver-based acknowledgment: Receivers send individual acknowledgment messages for each message they receive, informing the sender. ▪ Receiver set-based acknowledgment: Receivers wait for a specific time before acknowledging, reducing the number of acknowledgment messages sent to the sender. 
▪ Feedback suppression: This mechanism is used to prevent overload on the sender by suppressing acknowledgment messages until necessary. 
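A highly simplified, acknowledgment-based sketch of the sender's retransmission loop is shown below. The `LossyReceiver` class simulates message loss deterministically, and all class and parameter names are hypothetical.

```python
def reliable_multicast(message, receivers, max_rounds=5):
    # Retransmit each round until every receiver has acknowledged.
    delivered = set()
    for _ in range(max_rounds):
        for r in receivers:
            if r.name not in delivered and r.deliver(message):
                delivered.add(r.name)   # acknowledgment received
        if len(delivered) == len(receivers):
            return True
    return False  # gave up: some receivers never acknowledged

class LossyReceiver:
    # Drops the first `drops` transmissions to simulate a lossy network.
    def __init__(self, name, drops=0):
        self.name, self.drops, self.log = name, drops, []

    def deliver(self, message):
        if self.drops > 0:
            self.drops -= 1
            return False                # message lost, no ack sent
        if message not in self.log:     # duplicate suppression
            self.log.append(message)
        return True                     # ack

group = [LossyReceiver("r1"), LossyReceiver("r2", drops=2)]
print(reliable_multicast("m1", group))  # True
print(group[1].log)                     # ['m1']
```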


16. Maekawa’s Voting Algorithm for Mutual Exclusion

Maekawa observed that a process does not need permission from every other process to enter a critical section; it is enough to obtain votes from a subset, provided that any two subsets overlap. Each process Pi is assigned a voting set Vi such that:

  • Pi ∈ Vi;
  • for any two processes Pi and Pj, Vi ∩ Vj ≠ ∅ (any two voting sets share at least one member);
  • for efficiency, all voting sets have the same size K and each process belongs to the same number of sets; the optimal size is approximately K ≈ √N for N processes.

The protocol works as follows:

  1. To enter the critical section, Pi sends a REQUEST message to every member of its voting set Vi and waits until it has collected a REPLY (vote) from each of them.
  2. Each process votes for at most one request at a time; if it has already voted and has not yet received a RELEASE, it queues incoming requests.
  3. On exiting the critical section, Pi sends a RELEASE message to every member of Vi, allowing each member to vote for the next queued request.

Because any two voting sets intersect, two processes could only be in the critical section simultaneously if some common member voted for both at once, which the rules forbid: mutual exclusion is guaranteed. The basic protocol is deadlock-prone when conflicting requests arrive in different orders at different voters; Maekawa’s full algorithm resolves this by prioritizing requests with timestamps and adding FAILED, INQUIRE, and YIELD messages so that a lower-priority vote can be reclaimed.
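One classic way to build pairwise-intersecting voting sets is to place the N processes in a √N × √N grid and take each process's row plus its column: any two such sets always share at least one process. The following is a sketch under the assumption that N is a perfect square (grid quorums have size 2√N − 1, close to Maekawa's √N optimum).

```python
import math

def grid_quorums(n):
    # Arrange the n processes in a square grid; process i's voting set is
    # its whole row plus its whole column, so any two voting sets intersect.
    side = math.isqrt(n)
    assert side * side == n, "sketch assumes n is a perfect square"
    quorums = []
    for i in range(n):
        row, col = divmod(i, side)
        members = {row * side + c for c in range(side)} | \
                  {r * side + col for r in range(side)}
        quorums.append(members)
    return quorums

qs = grid_quorums(9)
print(sorted(qs[0]))   # [0, 1, 2, 3, 6]
print(qs[0] & qs[8])   # non-empty intersection, as required
```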