ReinforceNow CLI - Reinforcement Learning platform command-line interface
Documentation
See the documentation for a technical overview of the platform and a guide to training your first agent.
Quick Start
1. Install uv (Python package manager)

```shell
# macOS/Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell):
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```
2. Install ReinforceNow

```shell
uv init && uv venv --python 3.11
source .venv/bin/activate  # Windows: .\.venv\Scripts\Activate.ps1
uv pip install rnow
```
3. Authenticate

```shell
rnow login
```
4. Create & Run Your First Project

```shell
rnow init --template sft
rnow run
```
That's it! Your training run will start on ReinforceNow's infrastructure. Monitor progress in the dashboard.
Core Concepts
Go from raw data to a reliable AI agent in production. ReinforceNow gives you the flexibility to define:
1. Reward Functions
Define how your model should be evaluated using the @reward decorator:
```python
from rnow.core import reward, RewardArgs

@reward
async def accuracy(args: RewardArgs, messages: list) -> float:
    """Check if the model's answer matches ground truth."""
    response = messages[-1]["content"]
    expected = args.metadata["answer"]
    return 1.0 if expected in response else 0.0
```
→ Write your first reward function
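Before submitting a run, it can help to exercise a reward function's scoring logic locally. The sketch below stands in a minimal `RewardArgs`-like stub (the real class is provided by `rnow.core` at training time) and calls the same containment check as the `accuracy` example above:

```python
import asyncio
from dataclasses import dataclass, field

# Minimal local stand-in for rnow.core.RewardArgs, for testing only;
# the real class is supplied by the platform during training.
@dataclass
class RewardArgs:
    metadata: dict = field(default_factory=dict)

async def accuracy(args: RewardArgs, messages: list) -> float:
    """Check if the model's answer matches ground truth."""
    response = messages[-1]["content"]
    expected = args.metadata["answer"]
    return 1.0 if expected in response else 0.0

args = RewardArgs(metadata={"answer": "4Fe + 3O2 → 2Fe2O3"})
good = [{"role": "assistant", "content": "Balanced: 4Fe + 3O2 → 2Fe2O3"}]
bad = [{"role": "assistant", "content": "Fe + O2 → FeO"}]
print(asyncio.run(accuracy(args, good)))  # 1.0
print(asyncio.run(accuracy(args, bad)))   # 0.0
```

Substring matching is deliberately simple; rewards can run arbitrary Python, so stricter parsing or an LLM judge fits the same signature.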
2. Tools (for Agents)
Give your model the ability to call functions during training:
```python
from rnow.core import tool

@tool
def search(query: str, max_results: int = 5) -> dict:
    """Search the web for information."""
    # Your implementation here
    return {"results": [...]}
```
→ Train an agent with custom tools
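A decorator like `@tool` typically exposes the function's signature and docstring to the model as a callable tool. As an illustration only (this is not the rnow implementation), a toy decorator in that spirit might derive a schema from the type hints:

```python
import inspect
from typing import get_type_hints

# Toy decorator, for illustration: derives a tool schema from a function's
# signature and docstring. Not the actual rnow.core.tool implementation.
def tool(fn):
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = inspect.signature(fn).parameters
    fn.schema = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            name: {
                "type": hints.get(name, str).__name__,
                "required": params[name].default is inspect.Parameter.empty,
            }
            for name in params
        },
    }
    return fn

@tool
def search(query: str, max_results: int = 5) -> dict:
    """Search the web for information."""
    return {"results": []}  # stubbed; a real tool would query a search API

print(search.schema["name"])         # search
print(list(search.schema["parameters"]))  # ['query', 'max_results']
```

Note how the default value on `max_results` marks it optional, so the model only has to supply `query`.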
3. Training Data
Create a train.jsonl file with your prompts and reward assignments:
```jsonl
{"messages": [{"role": "user", "content": "Balance the equation: Fe + O2 → Fe2O3"}], "rewards": ["accuracy"], "metadata": {"answer": "4Fe + 3O2 → 2Fe2O3"}}
{"messages": [{"role": "user", "content": "Balance the equation: H2 + O2 → H2O"}], "rewards": ["accuracy"], "metadata": {"answer": "2H2 + O2 → 2H2O"}}
{"messages": [{"role": "user", "content": "Balance the equation: N2 + H2 → NH3"}], "rewards": ["accuracy"], "metadata": {"answer": "N2 + 3H2 → 2NH3"}}
```
→ Learn about training data format
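Each line of `train.jsonl` is a standalone JSON object. One way to generate and round-trip a file in this shape (field names taken from the examples above) is with the standard `json` module:

```python
import json

# Rows mirroring the train.jsonl format above: a chat prompt, the names of
# the reward functions to apply, and metadata those rewards can read.
rows = [
    {
        "messages": [{"role": "user", "content": "Balance the equation: Fe + O2 → Fe2O3"}],
        "rewards": ["accuracy"],
        "metadata": {"answer": "4Fe + 3O2 → 2Fe2O3"},
    },
    {
        "messages": [{"role": "user", "content": "Balance the equation: H2 + O2 → H2O"}],
        "rewards": ["accuracy"],
        "metadata": {"answer": "2H2 + O2 → 2H2O"},
    },
]

# Write one JSON object per line; ensure_ascii=False keeps → readable.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")

# Read it back: each non-empty line must parse as its own object.
with open("train.jsonl", encoding="utf-8") as f:
    parsed = [json.loads(line) for line in f if line.strip()]
print(len(parsed))  # 2
```

Note that `"rewards"` names the decorated reward functions to apply per example, and `"metadata"` carries whatever those functions read via `args.metadata`.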
Contributing
We welcome contributions! ❤️ Please open an issue to discuss your ideas before submitting a PR.