Complex Evaluate
A Python library for evaluating complex ontology alignments in EDOAL (Expressive and Declarative Ontology Alignment Language) format, adapting the precision, recall, and F-measure metrics to the complex matching case.
Requirements
- Python >= 3.9
- NumPy
- SciPy
📦 Installation
```
pip install complex_evaluate
```
📖 Usage
Basic Example
```python
from complex_evaluate.evaluate import evaluate_edoal

# Compare two alignment files
precision, recall, f_measure = evaluate_edoal(
    'predicted_alignment.edoal',
    'reference_alignment.edoal'
)

print(f"Precision: {precision:.3f}")
print(f"Recall: {recall:.3f}")
print(f"F-measure: {f_measure:.3f}")
```
Comparing from strings
```python
from complex_evaluate.evaluate import evaluate_edoal_string

predicted = '''<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://knowledgeweb.semanticweb.org/heterogeneity/alignment#">
  <Alignment>
    <map>
      <Cell>
        <entity1>
          <Class rdf:about="http://example.org#ClassA" />
        </entity1>
        <entity2>
          <Class rdf:about="http://example.org#ClassB" />
        </entity2>
      </Cell>
    </map>
  </Alignment>
</rdf:RDF>'''

reference = predicted  # Use the same alignment for an identity test

p, r, f = evaluate_edoal_string(predicted, reference)
print(f"F-measure: {f}")  # Should be 1.0 for identical alignments
```
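For intuition, the metrics the library adapts are the classic set-based precision, recall, and F-measure over correspondences. The sketch below illustrates only those standard definitions on simple pairs; it is not the library's implementation, which generalizes them to complex (EDOAL) correspondences.

```python
def prf(predicted, reference):
    """Classic set-based precision/recall/F-measure over correspondence pairs.

    Illustration only: complex matching requires comparing expressions,
    not just checking pairs for exact set membership.
    """
    tp = len(predicted & reference)  # correspondences found in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    denom = precision + recall
    f_measure = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f_measure

predicted = {("ClassA", "ClassB"), ("ClassC", "ClassD")}
reference = {("ClassA", "ClassB"), ("ClassE", "ClassF")}
print(prf(predicted, reference))  # (0.5, 0.5, 0.5)
```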
📊 Use Cases
This metric was used in the OAEI 2025 Complex Matching track evaluation: https://oaei.ontologymatching.org/2025/results/complex/index.html.
Beyond OAEI, the library is particularly useful for:
- Ontology Alignment Evaluation: Benchmarking alignment approaches on complex matching tasks.
- LLM reasoning training: Training LLMs to reason about complex alignments, using the score of a predicted alignment against a reference alignment as a verifiable reward signal.
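The reward-signal idea above can be sketched as follows. This is a hypothetical illustration: `score_alignment` stands in for a call to the library's evaluator (here stubbed with exact string comparison), and the `reward` wrapper is not part of the library's API.

```python
# Hypothetical sketch: using an alignment score as a verifiable reward
# signal when training an LLM to produce EDOAL alignments.

def score_alignment(predicted_edoal: str, reference_edoal: str) -> float:
    # Stub scorer: 1.0 for identical alignments, 0.0 otherwise.
    # In practice this would parse both alignments and return the F-measure.
    return 1.0 if predicted_edoal.strip() == reference_edoal.strip() else 0.0

def reward(model_output: str, reference_edoal: str) -> float:
    # Clip to [0, 1] so the reward stays bounded regardless of the scorer.
    return max(0.0, min(1.0, score_alignment(model_output, reference_edoal)))

reference = "<Alignment>...</Alignment>"
print(reward("<Alignment>...</Alignment>", reference))  # 1.0
print(reward("<Alignment>wrong</Alignment>", reference))  # 0.0
```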
🤝 Contributing
Contributions are welcome! Some areas for improvement:
- Additional similarity metrics.
- Performance optimizations.
- Support for other alignment formats.
- Extended documentation and examples.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
📚 Citation
If you use this library in your research, please cite it as follows:
```bibtex
@inproceedings{DBLP:conf/esws/SousaLS25,
  author    = {Guilherme Henrique Santos Sousa and
               Rinaldo Lima and
               C{\'{a}}ssia Trojahn dos Santos},
  title     = {On Evaluation Metrics for Complex Matching Based on Reference Alignments},
  booktitle = {{ESWC} {(1)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {15718},
  pages     = {77--93},
  publisher = {Springer},
  year      = {2025}
}
```
Built with ❤️ for the Semantic Web and Ontology Matching community.