Optimized version of llama3 / gemma.
Project description
llaminate
This project is a showcase for a neural tokenization technique. Since the inputs are compressed and have a smaller shape, the LLM is downsized accordingly.
For example, llama3-8b is brought down to 34 million parameters instead of 8 billion.
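The scale of that reduction can be illustrated with a back-of-the-envelope parameter count. The sketch below uses hypothetical hyperparameters chosen only to show how shrinking the vocabulary and embedding width collapses a transformer's size; it is not llaminate's actual architecture (which, per the real llama3, also uses gated MLPs and grouped-query attention).

```python
def transformer_params(n_layers: int, d_model: int, d_ff: int, vocab_size: int) -> int:
    """Rough decoder-only parameter count: embeddings plus per-layer blocks."""
    embed = vocab_size * d_model          # token embedding table
    attn = 4 * d_model * d_model          # Q, K, V, O projections
    mlp = 2 * d_model * d_ff              # up and down projections (no gate)
    return embed + n_layers * (attn + mlp)

# Hypothetical numbers, for illustration only.
full = transformer_params(n_layers=32, d_model=4096, d_ff=14336, vocab_size=128256)
tiny = transformer_params(n_layers=4,  d_model=256,  d_ff=1024,  vocab_size=256)
print(f"{full / 1e9:.1f}B vs {tiny / 1e6:.1f}M parameters")
```

Because the embedding table scales with `vocab_size * d_model`, a compressed (neural) tokenization that drastically shrinks the vocabulary removes one of the largest single blocks of parameters before the layers are even counted.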
Installation
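The package is published on PyPI, so a standard pip install should work:

```shell
pip install llaminate
```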
Usage
Resources
Models
Notebooks
Final model:
- pretraining: file / Google Colab
- fine-tuning: file / Google Colab
(The notebook links above do not resolve in this copy of the page.)
TODO
See TODO.
Credits
This project winks at llama3 from Meta, but doesn't actually use its weights or code.
License
Licensed under the AGPLv3.
Project details
Download files
Download the file for your platform.
Source Distribution

- llaminate-0.6.6.tar.gz (5.8 kB)

Built Distribution

- llaminate-0.6.6-py3-none-any.whl (6.9 kB)
File details
Details for the file llaminate-0.6.6.tar.gz.
File metadata
- Download URL: llaminate-0.6.6.tar.gz
- Upload date:
- Size: 5.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.2 CPython/3.12.5 Linux/6.10.9-arch1-2
File hashes
Algorithm | Hash digest
---|---
SHA256 | d3e222597b3709a0a3f3b97e54a0ccc5d07e6e55b6ff302db4054ff6a865b73d
MD5 | 07f47fa48fc5e77f4506805983ab3bee
BLAKE2b-256 | 6a128926df5d102269a9e1f4b3a30724fcb804edfbef1c349cf68d941445e8a0
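The published digests can be checked against a downloaded archive. A minimal sketch using the standard library, assuming the sdist has been downloaded to the current directory:

```python
import hashlib

def sha256_hex(path: str, chunk_size: int = 8192) -> str:
    """Stream a file from disk and return its SHA256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "d3e222597b3709a0a3f3b97e54a0ccc5d07e6e55b6ff302db4054ff6a865b73d"
# assert sha256_hex("llaminate-0.6.6.tar.gz") == EXPECTED
```

Streaming in chunks keeps memory use constant regardless of archive size; the same pattern works for the MD5 and BLAKE2b digests via `hashlib.md5` and `hashlib.blake2b`.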
File details
Details for the file llaminate-0.6.6-py3-none-any.whl.
File metadata
- Download URL: llaminate-0.6.6-py3-none-any.whl
- Upload date:
- Size: 6.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.2 CPython/3.12.5 Linux/6.10.9-arch1-2
File hashes
Algorithm | Hash digest
---|---
SHA256 | c1fd51a8e41e29f4b389260d06a1dac3de9f3b058279b0fe26bc05d862668ee7
MD5 | fe2d82750b8c441bcb1b056bccbe940b
BLAKE2b-256 | de4d6bea1782caf920769af382798cda62c70ff8c49c723f2c5dd915d41c22ab