A Raft library.
Raft Consensus Algorithm Implementation with Cluster Manager
This repository contains an implementation of the Raft consensus algorithm, designed for distributed systems. In addition to the core Raft features such as leader election and log replication, it includes a cluster manager for managing the membership of the cluster. However, it's important to note that the membership change rules in this implementation may not strictly adhere to the Raft paper. Instead, they are handled in a more centralized manner within the cluster manager.
Key Features
- Leader Election
- Log Replication
- Support for Customizable State Machines
- Example API for Recording Logs for a Distributed Key-Value Store
- Cluster Manager for Adding or Removing Members from the Cluster
This implementation primarily focuses on providing a basic Raft consensus algorithm with additional functionalities for managing the cluster's membership.
If you'd like to use this implementation for purposes other than a distributed key-value store, you can adapt it according to your specific requirements. See the Integration section for more details.
Background
Raft is a consensus algorithm developed in 2013 by Diego Ongaro and John Ousterhout as an alternative to Paxos, aiming for better understandability without sacrificing functionality. It addresses three key challenges:
- Leader Election: Raft ensures that each term has at most one leader, which is responsible for handling client requests such as appending log entries.
- Log Replication: The leader replicates log entries to all nodes, ensuring consistency across the cluster.
- Safety: Raft guarantees that committed log entries are replicated to all nodes, preventing inconsistencies.
Raft involves state transitions between followers, candidates, and leaders, depending on communication and election outcomes. Understanding these transitions is crucial for grasping how Raft achieves consensus in distributed systems.
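The two mechanisms above can be sketched in a few lines. This is an illustrative model of Raft's randomized election timeout and majority vote, not this library's API:

```python
import random

def first_to_timeout(timeouts):
    """Index of the node whose election timer fires first.

    Raft randomizes each follower's timeout (e.g. 150-300 ms) so that
    in most terms a single node becomes candidate, making split votes
    unlikely.
    """
    return min(range(len(timeouts)), key=lambda i: timeouts[i])

def has_quorum(votes, cluster_size):
    """A candidate becomes leader once a strict majority votes for it."""
    return votes > cluster_size // 2

timeouts = [random.uniform(150, 300) for _ in range(5)]
candidate = first_to_timeout(timeouts)
print(has_quorum(3, 5))  # a 3-of-5 vote elects a leader
```

The strict-majority rule is what makes leadership safe: two candidates can never both collect a quorum in the same term.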
Installation
To use this implementation of Raft, you have two options:

Option 1: Install from PyPI

- Install the `raft-cluster` package via pip: `pip install raft-cluster`
- Run the cluster manager: `run-cluster-manager`
Option 2: Install from Source

- Clone the repository: `git clone https://github.com/Muti-Kara/raft.git`
- Navigate to the cloned directory: `cd raft`
- Use docker-compose to run the peers: `docker-compose up`

The cluster manager runs on port 8000, the HTTP servers for the nodes run on ports 5001-5005, and the RPC endpoints for the nodes run on ports 15001-15005.
Integration
To integrate this Raft consensus algorithm implementation into your own distributed systems project, follow these steps:

- Customize the BaseMachine class: Tailor the implementation to your use case by subclassing `BaseMachine`, found in `raft.utils.machine`. This lets you adjust the behavior of the state machine according to your project's requirements.
  - `post_request(command)`: Applies a command to the state machine (for example, updating data or performing a transaction), ensuring consistency across all nodes in the cluster. Override this method in your `BaseMachine` subclass to define how commands are processed and applied.
  - `get_request(command)`: Handles requests for information or data retrieval from the state machine, whether they come from clients or from other nodes in the cluster. Override this method in your subclass to define how queries are processed and responses are generated.
- Implement a custom database: Develop your own storage backend by subclassing `BaseDatabase`. The bundled `FileDatabase` may not be suitable for real-time or performance-sensitive use cases.
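As a sketch, a key-value state machine might look like the following. The method names `post_request` and `get_request` come from the description above, but the exact `BaseMachine` signature and command format are assumptions here, so adapt them to the actual class in `raft.utils.machine`:

```python
# Illustrative subclass; the real BaseMachine lives in raft.utils.machine
# and its exact method signatures may differ.
class KeyValueMachine:
    def __init__(self):
        self._store = {}

    def post_request(self, command):
        # Assumed command format: {"op": "set", "key": ..., "value": ...}
        if command.get("op") == "set":
            self._store[command["key"]] = command["value"]
            return {"ok": True}
        return {"ok": False, "error": "unknown op"}

    def get_request(self, command):
        # Assumed query format: {"key": ...}
        return {"value": self._store.get(command["key"])}

machine = KeyValueMachine()
machine.post_request({"op": "set", "key": "x", "value": 1})
print(machine.get_request({"key": "x"}))  # {'value': 1}
```

Because `post_request` is applied only to committed log entries, every node that executes the same command sequence ends up with the same `_store` contents.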
Integrating this Raft consensus algorithm into your projects enables you to build robust and fault-tolerant distributed systems seamlessly.
Contributing
Contributions are welcome! If you'd like to contribute to this Raft implementation, please follow these steps:
- Fork the repository
- Create a new branch (`git checkout -b feature/your-feature`)
- Make your changes
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin feature/your-feature`)
- Create a new Pull Request
Package Overview
The Raft consensus algorithm implementation is organized into several packages, each serving a specific purpose. Here's an overview of the main packages and their contents:
- node.py: Example concrete implementation of a Raft node for a simple key-value store. Demonstrates how to use the Raft algorithm to build a distributed system with basic key-value store functionality.
- raft: Core packages for the Raft consensus algorithm; contains the essential classes and utilities for implementing it.
  - main.py: Main server for cluster management. Handles HTTP requests and manages the cluster's configuration and state transitions.
  - node.py: Contains the `RaftNode` class, which represents a single node in the Raft cluster and provides methods for handling RPC requests.
  - cluster.py: Contains the `Cluster` class, which manages the cluster configuration and maintains information about all nodes in the cluster.
  - utils: Utilities used throughout the Raft implementation.
    - timer.py: Timers for timeouts and periodic events.
    - models.py: Pydantic models for the data structures used in the Raft algorithm, such as logs, cluster configurations, and RPC message payloads.
    - machine.py: The `BaseMachine` class, which defines the interface for custom state machine implementations.
    - database.py: The `BaseDatabase` interface for creating custom database implementations that store cluster data.
  - states: Classes representing the different states of a Raft node.
    - state.py: The base `State` class, which serves as the parent class for all state implementations.
    - idle.py: The idle state, in which a node is not actively participating in leader election or log replication.
    - follower.py: The follower state, in which a node awaits instructions from the leader and responds to RPC requests.
    - candidate.py: The candidate state, in which a node initiates leader election and requests votes from other nodes.
    - leader.py: The leader state, in which a node coordinates log replication and manages the cluster's consistency and consensus.
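For instance, a faster in-memory alternative to `FileDatabase` could be sketched as below. The `store`/`load` method names are hypothetical, so match them to the actual `BaseDatabase` interface in `raft.utils.database`:

```python
# Hypothetical in-memory backend; the real BaseDatabase interface in
# raft.utils.database may expose different method names.
class InMemoryDatabase:
    def __init__(self):
        self._data = {}

    def store(self, key, value):
        # Persist a piece of cluster state (e.g. current term, voted_for).
        self._data[key] = value

    def load(self, key, default=None):
        return self._data.get(key, default)

db = InMemoryDatabase()
db.store("current_term", 3)
print(db.load("current_term"))  # 3
```

Note that a purely in-memory store trades Raft's durability guarantees for speed: state written before a crash is lost, so production backends should write through to stable storage.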
License
This project is licensed under the MIT License.
Hashes for raft_cluster-0.1.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 54548711d1226de3f62db95ceab87d3b403542dc93c46bea0140779be6e98544
MD5 | d8aa6297236b86317223f4d1d0f93134
BLAKE2b-256 | 154bb4fb2cddb0d30b87b0d157db81550a007ac137a0ab5d67bc0bd34ff07e66