

pv-system-profiler

Estimating PV array location and orientation from real-world power datasets.


Install & Setup

1) Recommended: Set up conda environment with provided .yml file

We recommend installing pv-system-profiler in a fresh Python virtual environment, created with the Conda package management system from the environment configuration file pvi-user.yml, provided in the top level of this repository. This will install the statistical-clear-sky and solar-data-tools packages as well.

Creating the env:

$ conda env create -f pvi-user.yml

Starting the env:

$ conda activate pvi_user

Stopping the env:

$ conda deactivate

Additional documentation on setting up the Conda environment is available here.

2) PIP Package

$ pip install pv-system-profiler

Alternative: Clone repo from GitHub

To mimic the pip package, clone the repository and install it locally in editable mode:

$ pip install -e path/to/root/folder
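
For example, assuming the repository is hosted under the slacgismo GitHub organization (the same organization that publishes the conda package), a local editable install might look like:

$ git clone https://github.com/slacgismo/pv-system-profiler.git
$ cd pv-system-profiler
$ pip install -e .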

3) Anaconda Package

$ conda install -c slacgismo pv-system-profiler

Solver Dependencies

Refer to the solar-data-tools documentation for more information about the solvers used.

Usage / Run Scripts

Serial run

The parameter_estimation_script.py script creates a report of all systems based on the csv files with the system signals located in a given folder. The script takes all input parameters as kwargs. The example below illustrates its use:

python 'repository location of run script'/parameter_estimation_script.py report None all \
    s3://s3_bucket_with_signals/ 'repeating_part_of_label' /home/results.csv True False \
    False False s3://'s3_path_to_file_containing_metadata/metadata.csv' None s3

In the example above, the full path to parameter_estimation_script.py is specified to run a report. The kwargs that follow are, in order:

  • Site list: a csv file with the list of sites to be analyzed. No list is provided in this example, so None is entered.
  • Number of files: the script can restrict the analysis to the first n_files input signals in the s3 repository; here, all specifies that all input signals are to be analyzed.
  • Signal location: the folder containing the csv files with the input signals, here the s3 bucket s3://s3_bucket_with_signals/. These csv files are usually of the form ID_repeating_part_of_label.csv, for example 1_composite_10.csv and 2_composite_10.csv, where _composite_10 is the repeating part of the label.
  • Repeating part of the label: either None or a string, as in the example above.
  • Results file: the absolute path to the desired location of the results file, here /home/results.csv.
  • correct_tz and fix_shifts: two Boolean kwargs that set the values of the corresponding pipeline kwargs.
  • check_json: a Boolean kwarg indicating whether a json file with additional site information to be analyzed is present in s3://s3_bucket_with_signals/.
  • convert_to_ts: a Boolean kwarg used to set convert_to_ts when instantiating the data handler.
  • Metadata file: the full path to the csv file containing site metadata, here metadata.csv. This file is optional and the kwarg can be set to None. The columns it should contain depend on the estimation to be performed: a report requires site, system and gmt_offset; the longitude estimation requires site, system and latitude; and the tilt_azimuth estimation requires site, system, gmt_offset, estimated_longitude, estimated_latitude, tilt and azimuth, with the respective values for each system. Additionally, if a manual inspection for time shifts was performed, a column labeled time_shift_manual may be included, with a zero for systems with no time shift and a one for systems with a time shift; when present, it determines whether the fix_dst() method is run after instantiating the data handler.
  • gmt_offset: set to None in this example. Alternatively, if this kwarg is given a numeric value, all sites will use that single value when running the report.
  • data_source: here s3, since the files with the input signals are located in an s3 bucket.
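
As a concrete illustration, a minimal metadata.csv for a report run might contain (the site and system IDs and the gmt_offset values below are hypothetical):

site,system,gmt_offset
34,1,-8
35,1,-5

A corresponding report invocation, assuming the script sits in the current directory and using a placeholder bucket name, could then be:

python ./parameter_estimation_script.py report None all \
    s3://my-signal-bucket/ _composite_10 /home/results.csv True False \
    False False s3://my-signal-bucket/metadata.csv None s3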

Partitioned run

A script is provided that runs the site report and the longitude, latitude, and tilt and azimuth estimations using a prescribed number of Amazon Web Services (AWS) instances. The script reads the folder containing the system signals and partitions these signals to run in parallel across a user-prescribed number of AWS instances. Here is an example shell command for a partitioned run:

python 'repository location of run script'/run_partition_script.py report None all \
    s3://s3_bucket_with_signals/ 'repeating_part_of_label' /home/results.csv True False \
    False False s3://'s3_path_to_file_containing_metadata/metadata.csv' None s3 \
    'repository location of run script'/parameter_estimation_script.py pvi-dev my_instance

where the individual values of each kwarg are defined in run_partition_script.py. This script takes the same inputs as parameter_estimation_script.py plus three additional parameters: the estimation run script, parameter_estimation_script.py, specified as the third-to-last kwarg; the conda environment to be used to run the estimation, in this case pvi-dev, as the second-to-last kwarg; and the name of the AWS instances to be used to run run_partition_script.py, in this case my_instance, as the last kwarg. Before running this command, it is necessary to create n identical AWS instances, where n is the number of desired partitions. These instances need to share the same Name='instance name' AWS tag. The simplest way to accomplish this is to start from an AWS image of a previously configured instance; this image needs to have all the repositories and conda environments that a serial run would require. Once each partitioned run is finished, results are automatically collected in the local folder where run_partition_script.py was run.
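
Before launching a partitioned run, the pool of identically tagged instances can be created with the AWS CLI, for example; the AMI ID, instance type, and instance count below are placeholders, and the AMI is assumed to be the preconfigured image described above:

$ aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --count 4 \
    --instance-type m5.xlarge \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my_instance}]'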

Unit tests

To run the unit tests:

python -m unittest -v

Test Coverage

To view the current test coverage metrics:

coverage run --source pvsystemprofiler -m unittest discover && coverage html
open htmlcov/index.html

Versioning

We use Semantic Versioning for versioning. For the versions available, see the tags on this repository.

License

This project is licensed under the BSD 2-Clause License; see the LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

pv-system-profiler-0.1.4.tar.gz (39.5 kB)


Built Distribution

pv_system_profiler-0.1.4-py3-none-any.whl (56.1 kB)


File details

Details for the file pv-system-profiler-0.1.4.tar.gz.

File metadata

  • Download URL: pv-system-profiler-0.1.4.tar.gz
  • Upload date:
  • Size: 39.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.7.12

File hashes

Hashes for pv-system-profiler-0.1.4.tar.gz:

  • SHA256: b16af8fabc38d9243693920441f077d0f78e1e49f53b0279218d291b0c755af0
  • MD5: db32be094f8def1a21fae1f1ab7a928e
  • BLAKE2b-256: 1ae57fb343c5a65d6670330cd912eb4956fa3011ef204d97d33f84fb3c6d9878


File details

Details for the file pv_system_profiler-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: pv_system_profiler-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 56.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.7.12

File hashes

Hashes for pv_system_profiler-0.1.4-py3-none-any.whl:

  • SHA256: 5d01d07baa3eabb11eaf8603113842848cd34320b0dbde72772e22028de30799
  • MD5: fee6ac06e83d006e61bac1482287be6d
  • BLAKE2b-256: 1c8ae28038dac418b0fe2b2f54ad1f7ce97da32d5b0b997ffbb5524fa230cefe

