# search-expert

Parse natural language search queries into structured fields using fine-tuned Qwen3.5-0.8B LoRA adapters.
```python
from search_expert import SearchExpert

expert = SearchExpert()
result = expert.parse("noise cancelling headphones under $200 with fast charging")
print(result.fields)
# {
#   "domain": "ecommerce",
#   "product": "headphones",
#   "feature": "noise cancelling",
#   "price": "lt:200"
# }

print(result.get_numeric_constraint("price"))
# {'operator': 'lt', 'value': 200.0, 'value_hi': None}
```
## Models

Two fine-tuned LoRA adapters trained on top of Qwen3.5-0.8B:

| Adapter | HuggingFace repo | Output format |
|---|---|---|
| JSON | sarthakrastogi/search-expert-json-0.8b | JSON |
| YAML | sarthakrastogi/search-expert-yaml-0.8b | YAML |
Both adapters return the same Python dict regardless of which one you use; the format only affects the intermediate text the model generates before it is parsed.
## Installation

```bash
pip install search-expert
```

With GPU (recommended), use unsloth for fast loading:

```bash
pip install "search-expert[unsloth]"
```

Without unsloth, use standard HF PEFT:

```bash
pip install "search-expert[peft]"
```
## Usage

### Basic

```python
from search_expert import SearchExpert, ModelFormat

# JSON adapter (default)
expert = SearchExpert()
result = expert.parse("3BR house in Austin under $600k with pool")

print(result.fields)
print(result.to_json(indent=2))
print(result.to_yaml())
```
### YAML adapter

```python
expert = SearchExpert(fmt=ModelFormat.YAML)
result = expert.parse("remote senior ML engineer job paying over $150k")
print(result.fields)
```
### Numeric constraints

Numeric fields are returned with operator prefixes so downstream search logic can apply filters directly:
| Operator | Example value | Meaning |
|---|---|---|
| `lt:N` | `lt:200` | < 200 |
| `lte:N` | `lte:200` | ≤ 200 |
| `gt:N` | `gt:150000` | > 150,000 |
| `gte:N` | `gte:150000` | ≥ 150,000 |
| `approx:N` | `approx:300` | ≈ 300 |
| `between:Lo:Hi` | `between:80000:120000` | 80,000 – 120,000 |
```python
result = expert.parse("jobs paying between $80k and $120k in NYC")
salary = result.get_numeric_constraint("salary")
# {'operator': 'between', 'value': 80000.0, 'value_hi': 120000.0}

# Decode all numeric fields at once
print(result.numeric_fields())
```
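The operator-prefixed strings above can also be consumed without the library. Below is a minimal sketch of how downstream code might decode a prefixed value and turn it into a filter predicate; `decode` and `to_predicate` are illustrative helpers, not part of the search-expert API, and the ±10% tolerance for `approx` is an assumption:

```python
# Sketch: decode an operator-prefixed value (e.g. "lt:200" or
# "between:80000:120000") into the {'operator', 'value', 'value_hi'} shape
# shown above, then build a filter predicate from it.
# These helpers are hypothetical, not part of the search-expert API.

def decode(raw: str) -> dict:
    parts = raw.split(":")
    op = parts[0]
    value = float(parts[1])
    value_hi = float(parts[2]) if op == "between" else None
    return {"operator": op, "value": value, "value_hi": value_hi}

def to_predicate(constraint: dict):
    op, lo, hi = constraint["operator"], constraint["value"], constraint["value_hi"]
    return {
        "lt": lambda x: x < lo,
        "lte": lambda x: x <= lo,
        "gt": lambda x: x > lo,
        "gte": lambda x: x >= lo,
        "approx": lambda x: abs(x - lo) <= 0.1 * lo,  # assumed ±10% tolerance
        "between": lambda x: lo <= x <= hi,
    }[op]

prices = [149.0, 199.0, 249.0]
keep = to_predicate(decode("lt:200"))
print([p for p in prices if keep(p)])  # [149.0, 199.0]
```

A predicate built this way can be passed straight to whatever filtering layer your search stack uses (SQL builders, vector-store metadata filters, or plain list comprehensions as above).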
### Batch parsing

```python
queries = [
    "Python ML course for beginners under $30",
    "5-star hotel in Paris with breakfast under $400/night",
    "Taylor Swift concert in London in July",
]
results = expert.parse_batch(queries)
for r in results:
    print(r.query, "→", r.fields)
```
### Custom adapter

```python
expert = SearchExpert(model_id="your-org/your-fine-tuned-adapter")
```
### Custom generation config

```python
expert = SearchExpert(
    generation_config={"temperature": 0.0, "max_new_tokens": 128}
)
```
### Eager loading

By default the model loads on the first `.parse()` call. Pass `eager=True` to load immediately:

```python
expert = SearchExpert(eager=True)  # loads model in __init__
```
## Supported domains

| Domain | Example query |
|---|---|
| `real_estate` | "2BR apartment in Austin under $1500/month" |
| `ecommerce` | "Sony noise cancelling headphones under $300" |
| `jobs` | "Remote senior ML engineer paying over $150k" |
| `flights` | "Non-stop business class JFK to Tokyo under $3000" |
| `hotels` | "5-star hotel in Paris with breakfast under $400/night" |
| `cars` | "Electric SUV with 300+ mile range under $50k" |
| `restaurants` | "Vegan Italian in NYC with outdoor seating under $40" |
| `movies` | "Thriller on Netflix with 8+ IMDB rating" |
| `healthcare` | "Female therapist in Chicago accepting Aetna" |
| `courses` | "Python ML course for beginners under $30" |
| `events` | "Taylor Swift concert in London in July" |
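Because every parse includes a `domain` field, downstream code can route results to per-domain backends. A minimal dispatch sketch, where the handler functions and their names are hypothetical placeholders rather than anything shipped with the library:

```python
# Sketch: route a parsed result to a per-domain handler keyed on the
# "domain" field. The handlers here are hypothetical placeholders.

def search_ecommerce(fields: dict) -> str:
    return f"ecommerce search for {fields.get('product', '?')}"

def search_jobs(fields: dict) -> str:
    return f"job search for {fields.get('title', '?')}"

HANDLERS = {
    "ecommerce": search_ecommerce,
    "jobs": search_jobs,
}

def route(fields: dict) -> str:
    handler = HANDLERS.get(fields["domain"])
    if handler is None:
        raise ValueError(f"unsupported domain: {fields['domain']}")
    return handler(fields)

fields = {"domain": "ecommerce", "product": "headphones", "price": "lt:200"}
print(route(fields))  # ecommerce search for headphones
```

Raising on an unknown domain (rather than silently falling through) makes it obvious when the model emits a domain your backend does not yet cover.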
## Repo structure

```
search-expert/
├── search_expert/            # Library source
│   ├── __init__.py
│   ├── expert.py             # SearchExpert class (main API)
│   ├── config.py             # Model IDs, prompts, format enum
│   ├── loader.py             # HF model loading (unsloth / peft / plain)
│   ├── parser.py             # Raw output → dict parsers
│   ├── result.py             # ParseResult dataclass
│   └── exceptions.py         # Custom exceptions
├── training/                 # Fine-tuning pipeline
│   ├── finetune.py           # Training script
│   └── evaluate.py           # Format comparison leaderboard
├── tests/
│   └── test_search_expert.py
├── examples/
│   └── basic_usage.py
├── pyproject.toml
└── README.md
```
## Development

```bash
git clone https://github.com/sarthakrastogi/search-expert
cd search-expert
pip install -e ".[dev]"

pytest tests/ -v                                  # unit tests (no GPU needed)
SEARCH_EXPERT_RUN_MODEL_TESTS=1 pytest tests/ -v  # includes model tests
```
## License

MIT