
A unified PyTorch framework for vision tasks

Project description

UDL

UDL is a unified PyTorch framework for vision research:

  • UDL loads libraries faster and offers a more convenient reflection mechanism for calling different models and methods.
  • UDL is based on MMCV.
  • UDL uses NNI to perform automatic machine learning.

English | 简体中文

See the repo for more detailed descriptions.

Features

Requirements

  • Python 3.7+, PyTorch >= 1.6.0
  • NVIDIA GPU + CUDA
  • Run python setup.py develop

Note: Our project is based on MMCV, but you do not need to install MMCV separately at the moment.

Quick Start

Step 0. We use UDL in PanCollection; first, set up your Python environment:

git clone https://github.com/XiaoXiao-Woo/UDL

git clone https://github.com/XiaoXiao-Woo/PanCollection

Then,

python setup.py develop

or

pip install -i https://pypi.org/simple udl-vis

Step 1.

  • Download the datasets (WorldView-3, QuickBird, GaoFen2, WorldView2) from the homepage and arrange them in the layout below.

  • Verify the dataset path in PanCollection/UDL/Basis/option.py, or print the output of run_pansharpening.py and set cfg.data_dir to your dataset path (see the sketch after the layout below).

|-$ROOT/Datasets
├── pansharpening
│   ├── training_data
│   │   ├── train_wv3.h5
│   │   ├── ...
│   ├── validation_data
│   │   ├── valid_wv3.h5
│   │   ├── ...
│   ├── test_data
│   │   ├── WV3
│   │   │   ├── test_wv3_multiExm.h5
│   │   │   ├── ...
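
A hedged illustration of the cfg.data_dir override from Step 1. The cfg object comes from PanCollection/UDL/Basis/option.py, and only the attribute name cfg.data_dir is taken from this README; the path is a placeholder.

    # Hypothetical override; adjust the path to your own $ROOT/Datasets root.
    cfg.data_dir = "/path/to/Datasets"
    print(cfg.data_dir)  # confirm the path that run_pansharpening.py will use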

Step 2. Open PanCollection/UDL/pansharpening and run the following command:

python run_pansharpening.py

Step 3. How to train and validate the code.

  • A training example:

    run_pansharpening.py

    where arch='BDPN' and configs/option_bdpn.py contains:

    cfg.eval = False
    cfg.workflow = [('train', 50), ('val', 1)]
    cfg.dataset = {'train': 'wv3', 'val': 'wv3_multiExm.h5'}

    With this workflow, the runner alternates 50 training epochs with one validation epoch.

  • A test example:

    run_test_pansharpening.py

    cfg.eval = True or cfg.workflow = [('val', 1)]

Step 4. How to customize the code.

Each model is organized into three parts:

  1. Record hyperparameter configurations in PanCollection/UDL/pansharpening/configs/option_<modelName>.py. For example, you can load a pretrained model by setting model_path = "your_model_path" or cfg.resume_from = "your_model_path".

  2. Set the model, loss, optimizer, and scheduler in PanCollection/UDL/pansharpening/models/<modelName>_main.py (see the sketch after this list).

  3. Write the new model in PanCollection/UDL/pansharpening/models/<modelName>/model_<modelName>.py.
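
A minimal, hedged sketch of how parts 2 and 3 fit together. The class and function names (SimplePanNet, build_model) and all hyperparameters are illustrative assumptions, not the actual PanCollection API:

    # Hypothetical sketch: SimplePanNet and build_model are assumptions,
    # not the actual PanCollection API.
    import torch
    import torch.nn as nn

    class SimplePanNet(nn.Module):
        """Placeholder standing in for model_<modelName>.py (part 3)."""
        def __init__(self, in_channels=9, out_channels=8):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, out_channels, 3, padding=1),
            )

        def forward(self, x):
            return self.body(x)

    def build_model(lr=1e-3):
        """Stand-in for <modelName>_main.py (part 2): model, loss, optimizer, scheduler."""
        model = SimplePanNet()
        criterion = nn.L1Loss()                      # pixel-wise reconstruction loss
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)
        return model, criterion, optimizer, scheduler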

Note that when you add a new model to PanCollection, you need to update PanCollection/UDL/pansharpening/models/__init__.py and add the corresponding option_<modelName>.py.
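
For example, the __init__.py update might be a single import line; the module and class names below are hypothetical:

    # PanCollection/UDL/pansharpening/models/__init__.py (hypothetical addition)
    from .NewNet.model_newnet import NewNet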

Others

  • If you want to add customized datasets, you need to update:
    PanCollection/UDL/AutoDL/__init__.py
    PanCollection/UDL/pansharpening/common/psdata.py
  • If you want to add customized tasks, you need to:
    1. Put model_<newModelName> and <newModelName>_main in PanCollection/UDL/<taskName>/models.
    2. Create a new folder PanCollection/UDL/<taskName>/configs to hold option_<newModelName>.
    3. Update PanCollection/UDL/AutoDL/__init__.py.
    4. Add a class in PanCollection/UDL/Basis/python_sub_class.py, like this:
       class PanSharpeningModel(ModelDispatcher, name='pansharpening'):
  • If you want to add customized training settings, such as saving models or recording logs, you need to update:
    PanCollection/UDL/mmcv/mmcv/runner/hooks (see the hook sketch after this list)
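
A hedged sketch of a custom hook in the MMCV style used by the vendored runner. The hook name, metric key, and registration call are assumptions, and the import path may differ in UDL's copy of mmcv:

    # Hypothetical custom hook; names and registration details are assumptions.
    from mmcv.runner import HOOKS, Hook

    @HOOKS.register_module()
    class BestMetricHook(Hook):
        """Remember the best validation metric seen so far."""
        def __init__(self, metric_key='PSNR'):
            self.metric_key = metric_key
            self.best = None

        def after_val_epoch(self, runner):
            value = runner.log_buffer.output.get(self.metric_key)
            if value is not None and (self.best is None or value > self.best):
                self.best = value
                runner.logger.info(f'New best {self.metric_key}: {value:.4f}')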

Note: do not put model-, dataset-, or task-related files into the AutoDL folder.

  • If you want more details about how the runner in PanCollection/UDL/AutoDL/trainer.py trains and tests, see PanCollection/UDL/mmcv/mmcv/runner/epoch_based_runner.py (a simplified sketch follows).
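
A simplified, hedged sketch of how an MMCV-style epoch-based runner consumes cfg.workflow; hooks, logging, and iteration bookkeeping are omitted, so this is not the exact UDL code:

    # Assumption: mirrors the structure of mmcv's EpochBasedRunner.run.
    def run(runner, data_loaders, workflow, max_epochs):
        while runner.epoch < max_epochs:
            for i, (mode, epochs) in enumerate(workflow):
                epoch_runner = getattr(runner, mode)   # runner.train or runner.val
                for _ in range(epochs):
                    if mode == 'train' and runner.epoch >= max_epochs:
                        break
                    epoch_runner(data_loaders[i])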

Contribution

We appreciate all contributions to improving PanCollection and look forward to your contribution.

Citation

Please cite this project if you use the datasets or the toolbox in your research.

@misc{PanCollection,
    author = {Xiao Wu and Liang-Jian Deng and Ran Ran},
    title = {"PanCollection" for Remote Sensing Pansharpening},
    url = {https://github.com/XiaoXiao-Woo/PanCollection/},
    year = {2022},
}

@InProceedings{Wu_2021_ICCV,
    author    = {Wu, Xiao and Huang, Ting-Zhu and Deng, Liang-Jian and Zhang, Tian-Jing},
    title     = {Dynamic Cross Feature Fusion for Remote Sensing Pansharpening},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14687-14696}
}

Acknowledgement

  • MMCV: OpenMMLab foundational library for computer vision.

License & Copyright

This project is open-sourced under the GNU General Public License v3.0.
