A GW data manager package and more
gwdama
GW Data Manager
0. Clone this repo
In your working directory, clone this repository:
$ git clone https://gitlab.com/gwprojects/gwdama.git
This will create a gwdama directory containing the repository. Change into it and check the available branches:
$ cd gwdama
$ git branch -a
By default, only the master branch is checked out. If you want to try a different branch, like fradev:
$ git checkout fradev
Now you are good to go!
1. Getting started
Environment setup
The following installation procedure works correctly on Virgo farm machines. Remember to add the resulting env directory to a .gitignore file in order to avoid pushing it!
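For example, assuming you called the environment env and are in the repository root, one way to do this is:

```shell
# Append the env/ directory to .gitignore so it is never committed
# (creates the file if it does not exist yet)
echo "env/" >> .gitignore
cat .gitignore
```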
- Create a Python 3 environment called env, or any other name (in that case, remember to substitute it for env in the following commands). This should be done with the --without-pip option, since pip is installed in the next step:
$ python3 -m venv --without-pip env
- Activate the environment, which is going to be empty (really!):
$ source env/bin/activate
- Get pip, setuptools and wheel from the web:
$ curl https://bootstrap.pypa.io/get-pip.py | python
- Deactivate and reactivate the environment, and check that the previous packages are installed and up to date. Check also the versions, etc.:
$ deactivate
$ source env/bin/activate
$ python --version
$ pip list
- Install the required modules. The procedure varies depending on whether a requirements.txt file is available (provided somebody has created one with pip freeze > requirements.txt) or not:
- install the packages from the requirements:
$ pip install -r requirements.txt
- install everything manually:
$ pip install numpy scipy matplotlib pandas jupyter scikit-learn gwpy
Also, it will be necessary to install python-ldas-tools-framecpp to use the read method of GWpy TimeSeries:
$ pip install lalsuite
- Check that the previous steps have been completed successfully: entering the following command shouldn't return any error, warning, etc.:
$ python -c "import numpy, matplotlib, pandas, sklearn, scipy"
Notice: for code development and benchmark tests, it can also be useful to install the line_profiler and memory_profiler packages. These are not included in the requirements.txt file, but you can install them easily, within the environment, by typing:
$ pip install line_profiler memory_profiler
Then, you can exploit the IPython magic commands:
%prun : Run code with the profiler
%lprun : Run code with the line-by-line profiler
%memit : Measure the memory use of a single statement
%mprun : Run code with the line-by-line memory profiler
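As a point of reference, %prun wraps Python's built-in cProfile module, so the same measurement also works outside IPython. A minimal sketch (profile_demo.py is a made-up throwaway script, not part of the package):

```shell
# Write a tiny script to profile, then run it under cProfile,
# sorting the report by cumulative time (what %prun does by default)
printf 'total = sum(i * i for i in range(100000))\nprint(total)\n' > profile_demo.py
python3 -m cProfile -s cumtime profile_demo.py
```

The line_profiler and memory_profiler magics (%lprun, %mprun, %memit) have no such stdlib equivalent and are meant to be used from an IPython or Jupyter session after %load_ext line_profiler / %load_ext memory_profiler.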
Install the package (locally)
We can install the package locally (for use on our system), and import it anywhere else.
Passing the parameter -e, we can install the package with a symlink, so that changes to the source files will be immediately available to other users of the package on our system.
From the main directory containing the package:
$ pip install -e .
Done! You are all set up now, and can go test things with some Jupyter notebooks.
2. Play with data
There are some test and development notebooks: