npc-engine
NPC-Engine is a deep learning and NLP toolkit for designing NPC AI with natural language.
Features
- Chat-bot dialogue system.
- State-of-the-art (SoTA) tools such as semantic text similarity and text-to-speech.
- Easy, open source deep learning model standard (ONNX with YAML configs).
- GPU accelerated inference with onnxruntime.
- Engine agnostic API through ZMQ server via JSONRPC 2.0.
Getting started
The easiest way to get started is to use NPC Engine through the Unity integration.
You can also use it directly over ZMQ or HTTP. See the documentation for more details.
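Because the server speaks JSON-RPC 2.0, any client that can build the request envelope can talk to it. A minimal Python sketch over HTTP follows; note that the method name `compare`, its parameters, and the server URL are illustrative assumptions, not the actual API — consult the documentation for the methods your model services expose:

```python
import json
from urllib import request as urlrequest


def jsonrpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request envelope."""
    return {
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    }


def call_npc_engine(method, params, url="http://localhost:5000"):
    """POST a JSON-RPC 2.0 request to a running NPC Engine server.

    The URL is an assumption for a locally hosted server.
    """
    body = json.dumps(jsonrpc_request(method, params)).encode("utf-8")
    req = urlrequest.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urlrequest.urlopen(req) as response:
        return json.loads(response.read())["result"]


# "compare" and its parameters are hypothetical placeholders.
envelope = jsonrpc_request("compare", ["Hello!", ["Hi there!", "Go away."]])
print(json.dumps(envelope))
```

The same envelope can be sent over the ZMQ transport instead of HTTP; only the socket layer changes, not the JSON-RPC payload.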
Roadmap
Done:
- Real-time end-to-end chatbot dialogue system
- Semantic similarity
- Real-time speech to text (experimental)
- Unity integration
- CLI tool for importing models from Huggingface
- Asynchronous API features
In progress:
- Actions and planning
- Unreal integration
- Importing models from popular TTS libraries
- Emotion features
- Multiple languages support
- Much more
Build on Windows
1. Create a virtualenv and activate it:

   ```
   python3 -m venv npc-engine-venv
   .\npc-engine-venv\Scripts\activate.bat
   ```

2. Install dependencies:

   ```
   pip install -e .[dev,dml]
   ```

3. (Optional) Compile, build and install your custom ONNX Runtime Python package. Build instructions are available at https://onnxruntime.ai/.

4. (Optional) Run the tests:

   ```
   tox
   ```

5. Compile to an executable with:

   ```
   pyinstaller --hidden-import="sklearn.utils._cython_blas" --hidden-import="sklearn.neighbors.typedefs" ^
       --hidden-import="sklearn.neighbors.quad_tree" --hidden-import="sklearn.tree._utils" ^
       --hidden-import="sklearn.neighbors._typedefs" --hidden-import="sklearn.utils._typedefs" ^
       --hidden-import="sklearn.neighbors._partition_nodes" --additional-hooks-dir hooks ^
       --exclude-module tkinter --exclude-module matplotlib .\npc_engine\cli.py --onedir
   ```
Docker
If you wish to host NPC Engine somewhere, you can use the Docker image. It is a Linux image with the TensorRT ONNX Runtime provider.

You can build it yourself with:

```
docker build -t npc-engine .
```
To run the image you must mount the models directory to /app/models, e.g.:

```
docker run --gpus all -it --mount type=bind,source=%cd%\tests\resources\models,target=/app/models -p 5000:5000 npc-engine/inference-engine:latest npc-engine run --port 5000
```
Where:
- `--gpus all` gives the container access to the GPU,
- `-it` outputs logs and lets you use the container interactively,
- `--mount` mounts the models directory into the container,
- `-p 5000:5000` exposes port 5000 on the host machine.
Community
We have a Discord server where you can get support, ask questions, and show off your creations.
If you would like to donate, check out our Patreon.
Our Patrons
- Marrech Games
Authors
See also the list of contributors who participated in this project.