Extraction-based Turkish news summarizer.
SadedeGel: A General Purpose NLP library for Turkish
SadedeGel was initially designed as a library for unsupervised extraction-based news summarization using several old and new NLP techniques.
Development of the library started as part of Açık Kaynak Hackathon Programı 2020, in which SadedeGel won 2nd place.
We keep adding features with the goal of becoming a general-purpose open source NLP library for the Turkish language.
💫 Version 0.21 out now! Check out the release notes here.
📖 Documentation
| Documentation | |
|---|---|
| Contribute | How to contribute to the sadedeGel project and code base. |
💬 Where to ask questions
The SadedeGel project was initiated by @globalmaksimum AI team members @dafajon, @askarbozcan, @mccakir, @husnusensoy and @ertugruldemir.
Other community maintainers
- @doruktiktiklar contributed the TF-IDF Summarizer
| Type | Platforms |
|---|---|
| 🚨 Bug Reports | GitHub Issue Tracker |
| 🎁 Feature Requests | GitHub Issue Tracker |
| Questions | Slack Workspace |
Features

- Several datasets
  - Basic corpus
    - Raw corpus (`sadedegel.dataset.load_raw_corpus`)
    - Sentence-tokenized corpus (`sadedegel.dataset.load_sentences_corpus`)
    - Human-annotated summary corpus (`sadedegel.dataset.load_annotated_corpus`)
  - Extended corpus
    - Raw corpus (`sadedegel.dataset.extended.load_extended_raw_corpus`)
    - Sentence-tokenized corpus (`sadedegel.dataset.extended.load_extended_sents_corpus`)
  - TsCorpus (`sadedegel.dataset.tscorpus`). Thanks to Taner Sezer, over 300K documents from tscorpus are also a part of sadedegel, allowing us to
    - Evaluate our word tokenizers
    - Build our prebuilt news category classifier
  - Various domain-specific datasets (e-commerce, social media, tourism, etc.)
- ML-based sentence boundary detector (SBD) trained for the Turkish language
- SadedeGel extractive summarizers
  - Various baseline summarizers
    - Position Summarizer
    - Length Summarizer
    - Band Summarizer
    - Random Summarizer
  - Various unsupervised/supervised summarizers
    - ROUGE1 Summarizer
    - TextRank Summarizer
    - Cluster Summarizer
    - LexRank Summarizer
    - BM25 Summarizer
    - TfIdf Summarizer
- Various word tokenizers
  - BERT Tokenizer - trained tokenizer (`pip install sadedegel[bert]`)
  - Simple Tokenizer - regex based
  - ICU Tokenizer (default as of `0.19`)
- Various sparse and dense embeddings implemented for `Sentences` and `Document` objects
  - BERT Embeddings (`pip install sadedegel[bert]`)
  - TfIdf Embeddings
- Word vectors for your tokens (`pip install sadedegel[w2v]`)
- A `sklearn`-compatible feature extraction API
- [Experimental] Prebuilt models for several common NLP tasks (`sadedegel.prebuilt`)

```python
from sadedegel.prebuilt import news_classification

model = news_classification.load()

doc_str = ("Bilişim sektörü, günlük devrimlerin yaşandığı ve hızına yetişilemeyen dev bir alan haline geleli uzun bir zaman olmadı. Günümüz bilgisayarlarının tarihi, yarım asırı yeni tamamlarken; yaşanan gelişmeler çok "
           "daha büyük ölçekte. Türkiye de bu gelişmelere 1960 yılında Karayolları Umum Müdürlüğü (şimdiki Karayolları Genel Müdürlüğü) için IBM’den satın aldığı ilk bilgisayarıyla dahil oldu. IBM 650 Model I adını taşıyan bilgisayarın "
           "satın alınma amacı ise yol yapımında gereken hesaplamaların daha hızlı yapılmasıydı. Türkiye’nin ilk bilgisayar destekli karayolu olan 63 km uzunluğundaki Polatlı - Sivrihisar yolu için yapılan hesaplamalar IBM 650 ile 1 saatte yapıldı. "
           "Daha öncesinde 3 - 4 ayı bulan hesaplamaların 1 saate inmesi; teknolojinin, ekonomik ve toplumsal dönüşüme büyük etkide bulunacağının habercisiydi.")

y_pred = model.predict([doc_str])
```
📖 For more details, refer to sadedegel.ai
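Several of the summarizers and embeddings above (TfIdf, BM25) rank sentences by sparse term weights. As a rough, library-independent sketch of the idea in plain Python (this is not sadedegel's actual scoring code; the function name and the smoothed-idf formula are illustrative assumptions), scoring sentences by summed TF-IDF weight can look like:

```python
import math
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score tokenized sentences by summed TF-IDF weight,
    treating each sentence as its own document (illustrative only)."""
    n = len(sentences)
    # Document frequency: in how many sentences does each token occur?
    df = Counter(tok for sent in sentences for tok in set(sent))
    scores = []
    for sent in sentences:
        tf = Counter(sent)
        # Smoothed idf, similar in spirit to common TF-IDF formulations
        score = sum(
            count * (math.log((1 + n) / (1 + df[tok])) + 1)
            for tok, count in tf.items()
        )
        scores.append(score)
    return scores

sentences = [["ekonomi", "ekonomi", "haber"], ["haber", "spor"], ["spor"]]
scores = tfidf_sentence_scores(sentences)
```

Sentences dominated by rare, repeated terms receive higher scores, which is the intuition behind ranking sentences for extraction.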
Install sadedeGel
- Operating system: macOS / OS X · Linux · Windows (Cygwin, MinGW, Visual Studio)
- Python version: 3.6+ (only 64 bit)
- Package managers: pip
pip

Using pip, sadedeGel releases are available as source packages and binary wheels.

```
pip install sadedegel
```

or update now

```
pip install sadedegel -U
```

When using pip it is generally recommended to install packages in a virtual environment to avoid modifying system state:

```
python -m venv .env
source .env/bin/activate
pip install sadedegel
```
Vocabulary Dump

Certain attributes of SadedeGel's NLP objects depend on shipped vocabulary dumps that are created over `sadedegel.dataset.extended_corpus` via each of the existing SadedeGel tokenizers (listed above). If you want to re-train a specific tokenizer's vocabulary with custom settings:

```
python -m sadedegel.bblock.cli build-vocabulary -t [bert|icu|simple]
```

This will create a vocabulary dump using `sadedegel.dataset.extended_corpus` based on custom user settings.

For all options to customize your vocab dump, refer to:

```
python -m sadedegel.bblock.cli build-vocabulary --help
```
Optional

To keep core sadedegel as light as possible, we decomposed our initial monolithic design.

To enable BERT embeddings and related capabilities, use

```
pip install sadedegel[bert]
```

We ship 100-dimensional word vectors with the library. If you need to re-train those word embeddings, you can use

```
python -m sadedegel.bblock.cli build-vocabulary -t [bert|icu|simple] --w2v
```

This will create a vocabulary dump with keyed vectors of arbitrary size using `sadedegel.dataset.extended_corpus` based on custom user settings. The `--w2v` option requires the `w2v` extra to be installed:

```
pip install sadedegel[w2v]
```
Quickstart with SadedeGel

To summarize a document, load one from the raw corpus, wrap it in a `Doc`, and pass it to a summarizer:

```python
from sadedegel import Doc
from sadedegel.dataset import load_raw_corpus
from sadedegel.summarize import Rouge1Summarizer

raw = load_raw_corpus()
d = Doc(next(raw))

summarizer = Rouge1Summarizer()
summarizer(d, k=5)
```
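Intuitively, a ROUGE-1 style summarizer favors sentences whose unigrams overlap most with the rest of the document. A minimal standalone sketch of that idea on pre-tokenized sentences (plain Python; this is not sadedegel's implementation, and the function names are illustrative):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between two token lists (ROUGE-1 style)."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def rank_sentences(sentences):
    """Rank tokenized sentences by overlap with the rest of the document."""
    scores = []
    for i, sent in enumerate(sentences):
        # Treat all other sentences as the "reference" for sentence i
        rest = [tok for j, s in enumerate(sentences) if j != i for tok in s]
        scores.append(rouge1_f1(sent, rest))
    return sorted(range(len(sentences)), key=scores.__getitem__, reverse=True)

sentences = [["ankara", "haber", "ajans"], ["ankara", "haber"], ["hava", "durumu"]]
ranking = rank_sentences(sentences)
```

Sentences sharing vocabulary with the rest of the document rank first; an off-topic sentence ranks last. A `k`-sentence summary would keep the top `k` indices in document order.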
To trigger the sadedeGel NLP pipeline, initialize a `Doc` instance with a document string.

Access all sentences using the Python built-in `list` function.

```python
from sadedegel import Doc

doc_str = ("Bilişim sektörü, günlük devrimlerin yaşandığı ve hızına yetişilemeyen dev bir alan haline geleli uzun bir zaman olmadı. Günümüz bilgisayarlarının tarihi, yarım asırı yeni tamamlarken; yaşanan gelişmeler çok "
           "daha büyük ölçekte. Türkiye de bu gelişmelere 1960 yılında Karayolları Umum Müdürlüğü (şimdiki Karayolları Genel Müdürlüğü) için IBM’den satın aldığı ilk bilgisayarıyla dahil oldu. IBM 650 Model I adını taşıyan bilgisayarın "
           "satın alınma amacı ise yol yapımında gereken hesaplamaların daha hızlı yapılmasıydı. Türkiye’nin ilk bilgisayar destekli karayolu olan 63 km uzunluğundaki Polatlı - Sivrihisar yolu için yapılan hesaplamalar IBM 650 ile 1 saatte yapıldı. "
           "Daha öncesinde 3 - 4 ayı bulan hesaplamaların 1 saate inmesi; teknolojinin, ekonomik ve toplumsal dönüşüme büyük etkide bulunacağının habercisiydi.")

doc = Doc(doc_str)
list(doc)
```

```
['Bilişim sektörü, günlük devrimlerin yaşandığı ve hızına yetişilemeyen dev bir alan haline geleli uzun bir zaman olmadı.',
 'Günümüz bilgisayarlarının tarihi, yarım asırı yeni tamamlarken; yaşanan gelişmeler çok daha büyük ölçekte.',
 'Türkiye de bu gelişmelere 1960 yılında Karayolları Umum Müdürlüğü (şimdiki Karayolları Genel Müdürlüğü) için IBM’den satın aldığı ilk bilgisayarıyla dahil oldu.',
 'IBM 650 Model I adını taşıyan bilgisayarın satın alınma amacı ise yol yapımında gereken hesaplamaların daha hızlı yapılmasıydı.',
 'Türkiye’nin ilk bilgisayar destekli karayolu olan 63 km uzunluğundaki Polatlı - Sivrihisar yolu için yapılan hesaplamalar IBM 650 ile 1 saatte yapıldı.',
 'Daha öncesinde 3 - 4 ayı bulan hesaplamaların 1 saate inmesi; teknolojinin, ekonomik ve toplumsal dönüşüme büyük etkide bulunacağının habercisiydi.']
```
Access sentences by index.

```python
doc[2]
```

```
Türkiye de bu gelişmelere 1960 yılında Karayolları Umum Müdürlüğü (şimdiki Karayolları Genel Müdürlüğü) için IBM’den satın aldığı ilk bilgisayarıyla dahil oldu.
```
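Correct boundaries like these are why sadedeGel ships an ML-based sentence boundary detector rather than a rule-based one. A purely rule-based splitter is easy to fool; the hypothetical regex sketch below (plain Python, not sadedegel code) shows a naive rule mistaking the abbreviation "Dr." for a sentence boundary:

```python
import re

def naive_split(text):
    # Naive rule: a sentence ends at a period followed by
    # whitespace and an uppercase letter (incl. Turkish uppercase).
    return re.split(r"(?<=\.)\s+(?=[A-ZÇĞİÖŞÜ])", text)

text = "Dr. Ahmet Yılmaz toplantıya katıldı. Sunum 15.30'da başladı."
parts = naive_split(text)
# The abbreviation "Dr." triggers a spurious boundary,
# so the rule yields 3 fragments instead of 2 sentences.
```

A learned SBD can weigh context (abbreviations, numbers like `15.30`, proper nouns) instead of applying a single brittle rule.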
SadedeGel Server
In order to integrate with your applications, we provide a quick summarizer server with sadedeGel.

```
python3 -m sadedegel.server
```
SadedeGel Server on Heroku
SadedeGel Server is hosted on the free tier of Heroku cloud services.
PyLint, Flake8 and Bandit
sadedeGel uses pylint for static code analysis, flake8 for code styling and bandit for code security checks.

To run all checks:

```
make lint
```
Run tests
sadedeGel comes with an extensive test suite. In order to run the tests, you'll usually want to clone the repository and build sadedeGel from source. This will also install the required development dependencies and test utilities defined in `requirements.txt`.

Alternatively, you can find out where sadedeGel is installed and run `pytest` on that directory. Don't forget to also install the test utilities via sadedeGel's `requirements.txt`:

```
make test
```
📓 Kaggle
- Check the comprehensive notebook by Kaggle Master Ertugrul Demir explaining the capabilities of sadedegel on a Turkish clickbait dataset
Youtube Channel
Some videos from sadedeGel YouTube Channel
SkyLab YTU Webinar Playlist
References
Special Thanks
- Starlang Software for their contribution to open source Turkish NLP development and corpus preparation.
- Olcay Taner Yıldız, Ph.D., one of our referees in Açık Kaynak Hackathon Programı 2020, for helping our development of sadedegel.
- Taner Sezer for his contribution to the tokenization corpus and labeled news corpus.
Our Community Contributors
We would like to thank our community contributors for their bug/enhancement requests and questions that make sadedeGel better every day.
Software Engineering
- Special thanks to the spaCy project for their work in showing us the way to implement a proper Python module rather than merely explaining it.
  - We have borrowed many documentation and style related ideas from their code base :smile:
- There are a few free-tier service providers we need to thank:
  - GitHub for
    - Hosting our projects.
    - Making it possible to collaborate easily.
    - Automating our SLM via GitHub Actions.
  - Google Cloud Storage for providing low-cost storage buckets, making it possible to store `sadedegel.dataset.extended` data.
  - Heroku for hosting sadedeGel Server on their free-tier dynos.
  - CodeCov for allowing us to transparently share our test coverage.
  - PyPI for allowing us to share sadedegel with you.
  - Binder for
    - Allowing us to share our example notebooks.
    - Hosting our learn-by-example boxes on sadedegel.ai.
Machine Learning (ML), Deep Learning (DL) and Natural Language Processing (NLP)
- Resources on Extractive Text Summarization:
  - Leveraging BERT for Extractive Text Summarization on Lectures by Derek Miller
  - Fine-tune BERT for Extractive Summarization by Yang Liu
- Other NLP related references