# Dreamer 4

Implementation of Danijar's latest iteration of his Dreamer line of work
Discord channel for collaborating with other researchers interested in this work
## Appreciation

- @dirkmcpherson for typo fixes and unpassed arguments!

- @witherhoard99 and Vish for contributing improvements to video tokenizer convergence, proprioception handling, identifying a bug with no discrete actions, and tensorboard logging with video reconstruction!
## Install

```bash
$ pip install dreamer4
```
## Usage
```python
import torch
from dreamer4 import VideoTokenizer, DynamicsWorldModel

# video tokenizer, learned through MAE + lpips

tokenizer = VideoTokenizer(
    dim = 512,
    dim_latent = 32,
    patch_size = 32,
    image_height = 256,
    image_width = 256
)

video = torch.randn(2, 3, 10, 256, 256)

# learn the tokenizer

loss = tokenizer(video)
loss.backward()
```
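The tokenizer comment above mentions MAE-style training. As a rough illustration of the masked-autoencoder idea (not the library's internals — the shapes, mask ratio, and function name here are assumptions for the sketch), random patch masking in plain torch looks like this:

```python
import torch

def random_masking(patches, mask_ratio = 0.75):
    # patches: (batch, num_patches, dim)
    # MAE-style: keep a random subset of patches, the encoder only sees those
    b, n, d = patches.shape
    num_keep = int(n * (1 - mask_ratio))

    scores = torch.rand(b, n)                         # one random score per patch
    keep_idx = scores.topk(num_keep, dim = -1).indices  # indices of kept patches

    batch_idx = torch.arange(b).unsqueeze(-1)
    visible = patches[batch_idx, keep_idx]            # gather visible patches
    return visible, keep_idx

patches = torch.randn(2, 64, 512)
visible, keep_idx = random_masking(patches)
print(visible.shape)  # torch.Size([2, 16, 512])
```

The decoder is then asked to reconstruct the masked patches, with the lpips perceptual loss applied on the reconstructed frames.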
```python
# dynamics world model

world_model = DynamicsWorldModel(
    dim = 512,
    dim_latent = 32,
    video_tokenizer = tokenizer,
    num_discrete_actions = 4
)

# state, action, rewards

video = torch.randn(2, 3, 10, 256, 256)
discrete_actions = torch.randint(0, 4, (2, 10, 1))
rewards = torch.randn(2, 10)

# learn dynamics / behavior cloned model

loss = world_model(
    video = video,
    rewards = rewards,
    discrete_actions = discrete_actions
)
loss.backward()
```
```python
# do the above with much data

# then generate dreams

dreams = world_model.generate(
    10,
    batch_size = 2,
    return_decoded_video = True,
    return_for_policy_optimization = True
)

# learn from the dreams

actor_loss, critic_loss = world_model.learn_from_experience(dreams)
(actor_loss + critic_loss).backward()
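For intuition on what an actor and critic loss over dreamed experience look like, here is a minimal self-contained sketch of a generic actor-critic update in plain torch (toy shapes and a plain advantage baseline — an assumption for illustration, not what `learn_from_experience` does internally):

```python
import torch
import torch.nn.functional as F

# toy rollout: actor log-probs, critic value estimates, observed returns
log_probs = torch.randn(2, 10, requires_grad = True)  # log pi(a | s)
values    = torch.randn(2, 10, requires_grad = True)  # V(s)
returns   = torch.randn(2, 10)                        # discounted returns

advantage = (returns - values).detach()               # no critic gradient through the actor term

actor_loss  = -(log_probs * advantage).mean()         # policy gradient
critic_loss = F.mse_loss(values, returns)             # value regression

(actor_loss + critic_loss).backward()
```

The key detail is the `.detach()` on the advantage, so the critic is trained only by its regression loss while still steering the actor.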
```python
# learn from environment

from dreamer4.mocks import MockEnv

mock_env = MockEnv((256, 256), vectorized = True, num_envs = 4)

experience = world_model.interact_with_env(mock_env, max_timesteps = 8, env_is_vectorized = True)

actor_loss, critic_loss = world_model.learn_from_experience(experience)
(actor_loss + critic_loss).backward()
```
## Moving MNIST

To train a simple tokenizer on Moving MNIST for 20000 steps and then use it to train an action-conditioned dynamics model:

```bash
$ uv run train_moving_mnist_tokenizer.py --num_train_steps 20000
$ uv run train_moving_mnist_dynamics.py --num_train_steps 20000 --condition_on_actions True
```

The baseline will unconditionally synthesize digits floating in a random direction (with a 2-frame prompt, to see whether it has learned to continue the detected velocity).

Passing `--condition_on_actions True` lets you explicitly prompt with velocity actions to command the digit's trajectory. The conditioned samples display a grid of digits, where each digit's commanded velocity corresponds to its position in the grid, with the center being zeroed velocities (staying still).
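One way to picture the layout of that conditioned sample grid — a sketch only, the training scripts' actual action encoding is not reproduced here — is a 3x3 grid of (dy, dx) velocity actions with the center cell at zero:

```python
import torch

# hypothetical 3x3 grid of (dy, dx) velocity actions, center = stay still
offsets = torch.tensor([-1, 0, 1])
grid = torch.stack(torch.meshgrid(offsets, offsets, indexing = 'ij'), dim = -1)

print(grid.shape)   # torch.Size([3, 3, 2])
print(grid[1, 1])   # tensor([0, 0]) - the center cell commands zero velocity
```

Each grid position then prompts the dynamics model with its own velocity action, so the digit in that cell should drift in the corresponding direction.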
## Citations
```bibtex
@misc{hafner2025trainingagentsinsidescalable,
    title  = {Training Agents Inside of Scalable World Models},
    author = {Danijar Hafner and Wilson Yan and Timothy Lillicrap},
    year   = {2025},
    eprint = {2509.24527},
    archivePrefix = {arXiv},
    primaryClass  = {cs.AI},
    url    = {https://arxiv.org/abs/2509.24527},
}
```

```bibtex
@misc{fang2026racrectifiedflowauto,
    title  = {RAC: Rectified Flow Auto Coder},
    author = {Sen Fang and Yalin Feng and Yanxin Zhang and Dimitris N. Metaxas},
    year   = {2026},
    eprint = {2603.05925},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CV},
    url    = {https://arxiv.org/abs/2603.05925},
}
```

```bibtex
@misc{chefer2026self,
    title  = {Self-Supervised Flow Matching for Scalable Multi-Modal Synthesis},
    author = {Hila Chefer and Patrick Esser and Dominik Lorenz and Dustin Podell and Vikash Raja and Vinh Tong and Antonio Torralba and Robin Rombach},
    year   = {2026},
    url    = {https://bfl.ai/research/self-flow},
    note   = {Preprint}
}
```

```bibtex
@misc{li2025basicsletdenoisinggenerative,
    title  = {Back to Basics: Let Denoising Generative Models Denoise},
    author = {Tianhong Li and Kaiming He},
    year   = {2025},
    eprint = {2511.13720},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CV},
    url    = {https://arxiv.org/abs/2511.13720},
}
```

```bibtex
@misc{kimiteam2026attentionresiduals,
    title  = {Attention Residuals},
    author = {Kimi Team and Guangyu Chen and Yu Zhang and Jianlin Su and Weixin Xu and Siyuan Pan and Yaoyu Wang and Yucheng Wang and Guanduo Chen and Bohong Yin and Yutian Chen and Junjie Yan and Ming Wei and Y. Zhang and Fanqing Meng and Chao Hong and Xiaotong Xie and Shaowei Liu and Enzhe Lu and Yunpeng Tai and Yanru Chen and Xin Men and Haiqing Guo and Y. Charles and Haoyu Lu and Lin Sui and Jinguo Zhu and Zaida Zhou and Weiran He and Weixiao Huang and Xinran Xu and Yuzhi Wang and Guokun Lai and Yulun Du and Yuxin Wu and Zhilin Yang and Xinyu Zhou},
    year   = {2026},
    eprint = {2603.15031},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CL},
    url    = {https://arxiv.org/abs/2603.15031},
}
```

```bibtex
@misc{zhang2026beliefformer,
    title  = {BeliefFormer: Belief Attention in Transformer},
    author = {Guoqiang Zhang},
    year   = {2026},
    url    = {https://openreview.net/forum?id=Ard2QzPAUK}
}
```

```bibtex
@misc{osband2026delightfulpolicygradient,
    title  = {Delightful Policy Gradient},
    author = {Ian Osband},
    year   = {2026},
    eprint = {2603.14608},
    archivePrefix = {arXiv},
    primaryClass  = {cs.LG},
    url    = {https://arxiv.org/abs/2603.14608},
}
```

```bibtex
@misc{gopalakrishnan2025decouplingwhatwherepolar,
    title  = {Decoupling the "What" and "Where" With Polar Coordinate Positional Embeddings},
    author = {Anand Gopalakrishnan and Robert Csordás and Jürgen Schmidhuber and Michael C. Mozer},
    year   = {2025},
    eprint = {2509.10534},
    archivePrefix = {arXiv},
    primaryClass  = {cs.LG},
    url    = {https://arxiv.org/abs/2509.10534},
}
```

```bibtex
@misc{maes2026leworldmodelstableendtoendjointembedding,
    title  = {LeWorldModel: Stable End-to-End Joint-Embedding Predictive Architecture from Pixels},
    author = {Lucas Maes and Quentin Le Lidec and Damien Scieur and Yann LeCun and Randall Balestriero},
    year   = {2026},
    eprint = {2603.19312},
    archivePrefix = {arXiv},
    primaryClass  = {cs.LG},
    url    = {https://arxiv.org/abs/2603.19312},
}
```

```bibtex
@misc{balestriero2025lejepa,
    title  = {LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics},
    author = {Randall Balestriero and Yann LeCun},
    year   = {2025},
    eprint = {2511.08544},
    archivePrefix = {arXiv},
    primaryClass  = {cs.LG},
    url    = {https://arxiv.org/abs/2511.08544},
}
```
*the conquest of nature is to be achieved through number and measure* - angels to Descartes in a dream