DaNLP: NLP in Danish

DaNLP is a repository of Natural Language Processing resources for the Danish language: a collection of available datasets and models for a variety of NLP tasks. The aim is to make Danish NLP easier and more accessible for practitioners in industry, which is why the project is licensed to allow commercial use. The project features code examples showing how to use the datasets and models in popular NLP frameworks such as spaCy, Transformers and Flair, as well as deep learning frameworks such as PyTorch. See our documentation pages for more details about our models and datasets, and for definitions of the modules provided through the DaNLP package.

If you are new to NLP or want to know more about the project in a broader perspective, you can start on our microsite.


Help us improve DaNLP

  • :raising_hand: Have you tried the DaNLP package? Then we would love to chat with you about your experiences from a company perspective. It takes approximately 20-30 minutes and requires no preparation. English or Danish, as you prefer. Please leave your details here and we will reach out to arrange a call.

News

  • :tada: Version 0.1.2 has been released with
    • 2 new models for hate speech detection (Hatespeech) based on BERT and ELECTRA
    • 1 new model for hate speech classification

Next up

  • new model and data for discourse coherence

Installation

To get started using DaNLP in your Python project, simply install the pip package. Note that the default pip package does not install all the underlying NLP libraries, so that you remain free to limit the dependencies to what you actually use. We also provide an installation option that installs all the required dependencies at once.

Install with pip

To get started using DaNLP simply install the project with pip:

pip install danlp 

Note that the default installation of DaNLP does not install other NLP libraries such as Gensim, spaCy, Flair or Transformers. This keeps the installation as minimal as possible and lets you choose, for example, whether to load word embeddings with spaCy, Flair or Gensim. Therefore, depending on the functions you need, you should install one or several of the following: pip install flair, pip install spacy and/or pip install gensim. A sketch of loading word embeddings with Gensim is shown below.
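
For instance, with Gensim installed, the pretrained Danish word embeddings can be loaded through the DaNLP embeddings module. This is a minimal sketch: the load_wv_with_gensim loader and the 'conll17.da.wv' embedding name follow the DaNLP documentation, and the embeddings are downloaded automatically on first use.

# Requires: pip install danlp gensim
from danlp.models.embeddings import load_wv_with_gensim

# Download (on first use) and load the CoNLL17 Danish word embeddings
# as a Gensim KeyedVectors object.
word_embeddings = load_wv_with_gensim('conll17.da.wv')

# Standard Gensim API: nearest neighbours of a word.
print(word_embeddings.most_similar('københavn', topn=5))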

Alternatively, if you want to install all the required dependencies, including the packages mentioned above, you can do:

pip install danlp[all]

You can check the requirements.txt file to see which versions of the packages DaNLP has been tested with.

Install from source

If you want to be able to use the latest developments before they are released in a new pip package, or you want to modify the code yourself, then clone this repo and install from source.

git clone https://github.com/alexandrainst/danlp.git
cd danlp
# minimum installation
pip install .
# or install all the packages
pip install .[all]

To install the dependencies used in the package with the tested versions:

pip install -r requirements.txt

Install from GitHub

Alternatively, you can install the latest version directly from GitHub using:

pip install git+https://github.com/alexandrainst/danlp.git

Install with Docker

To quickly get started with DaNLP and try out the models, you can use our Docker image. To start an IPython session, simply run:

docker run -it --rm alexandrainst/danlp ipython

If you want to run a <script.py> located in your current working directory, you can run:

docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app alexandrainst/danlp python <script.py>

Quick Start

Read more in our documentation pages.
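
As an illustrative sketch (see the documentation pages for the authoritative examples), loading and using the DaNLP spaCy model looks roughly like this. The load_spacy_model loader is the one described in the DaNLP documentation; it assumes spaCy is installed and downloads the model on first use.

# Requires: pip install danlp spacy
from danlp.models import load_spacy_model

# Download (on first use) and load DaNLP's Danish spaCy pipeline,
# which includes part-of-speech tagging, dependency parsing and NER.
nlp = load_spacy_model()

doc = nlp("Jeg hedder Peter og bor i København.")
for token in doc:
    print(token.text, token.pos_, token.dep_)
for ent in doc.ents:
    print(ent.text, ent.label_)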

NLP Models

Natural Language Processing is an active area of research and it consists of many different tasks. The DaNLP repository provides an overview of Danish models for some of the most common NLP tasks (and is continuously evolving).

Here is the list of NLP tasks we currently cover in the repository.

You can also find some of our transformers models on HuggingFace.

If you are interested in Danish support for any specific NLP task you are welcome to get in contact with us.

We also recommend checking out the list of Danish NLP corpora/tools/models maintained by Finn Årup Nielsen (warning: not all items are available for commercial use; check the licence).

Datasets

The number of datasets in the Danish language is limited. The DaNLP repository provides an overview of the available Danish datasets that can be used for commercial purposes.

The DaNLP package allows you to download and preprocess datasets.
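
For example, the Danish Dependency Treebank (DDT) can be downloaded and loaded directly through the package. This is a minimal sketch, assuming the DDT loader and its load_as_conllu method as described in the DaNLP documentation; the dataset is downloaded on first use.

# Requires: pip install danlp
from danlp.datasets import DDT

# Download (on first use) the Danish Dependency Treebank and load it
# in the CoNLL-U format (a pyconll object that iterates over sentences).
ddt = DDT()
conllu = ddt.load_as_conllu()

# Print the id and text of the first sentence.
for sentence in conllu:
    print(sentence.id, sentence.text)
    break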

Examples

You will find examples that show how to use NLP in Danish (using our models or others) in our benchmark scripts and Jupyter notebook tutorials.

This project also keeps a blog (written in Danish) on Medium where we write about Danish NLP, and in time we will also present real cases of how NLP is applied in Danish companies.

Structure of the repo

To help you navigate the repository, here is an overview of its structure on GitHub:

.
├── danlp                       # Source files
│   ├── datasets                # Code to load datasets with different frameworks
│   └── models                  # Code to load models with different frameworks
├── docker                      # Docker image
├── docs                        # Documentation and files for setting up Read the Docs
│   ├── docs                    # Documentation for tasks, datasets and frameworks
│   │   ├── tasks               # Documentation for NLP tasks with benchmark results
│   │   ├── frameworks          # Overview of the different frameworks used
│   │   ├── gettingstarted      # Guides for installation and getting started
│   │   └── imgs                # Images used in the documentation
│   └── library                 # Files used for Read the Docs
├── examples                    # Examples, tutorials and benchmark scripts
│   ├── benchmarks              # Scripts for reproducing benchmark results
│   └── tutorials               # Jupyter notebook tutorials
└── tests                       # Tests for continuous integration with Travis

How do I contribute?

If you want to contribute to the DaNLP repository and make it better, your help is very welcome. You can contribute to the project in many ways:

  • Help us write good tutorials on Danish NLP use-cases
  • Contribute with your own pretrained NLP models or datasets in Danish (see our contributing guidelines for more details on how to contribute to this repository)
  • Create GitHub issues with questions and bug reports
  • Notify us of other Danish NLP resources, or tell us about any good ideas you have for improving the project, through the Discussions section of this repository.

Who is behind?

The DaNLP repository is maintained by the Alexandra Institute which is a Danish non-profit company with a mission to create value, growth and welfare in society. The Alexandra Institute is a member of GTS, a network of independent Danish research and technology organisations.

Between 2019 and 2020, the work on this repository was part of the Dansk For Alle performance contract (RK) allocated to the Alexandra Institute by the Danish Ministry of Higher Education and Science. Since 2021, the project has been funded through the Dansk NLP activity plan, which is part of the Digital sikkerhed, tillid og dataetik performance contract.

An overview of the project can be found on our microsite.

Cite

If you want to cite this project, please use the following BibTeX entry:

@inproceedings{danlp2021,
    title = {{DaNLP}: An open-source toolkit for Danish Natural Language Processing},
    author = {Brogaard Pauli, Amalie  and
      Barrett, Maria  and
      Lacroix, Ophélie  and
      Hvingelby, Rasmus},
    booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021)},
    month = jun,
    year = "2021"
}

Read the paper here.

See our documentation pages for references to specific models or datasets.
