bw_aggregation

Use aggregated processes for quicker calculations
Installation
You can install bw_aggregation via pip from PyPI:

```
$ pip install bw_aggregation
```
It is also available via conda or mamba on the cmutel channel.
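For example, assuming the package keeps the same name on that channel:

```
$ conda install -c cmutel bw_aggregation
```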
Theory
This library allows you to trade space for time by pre-computing some inventory results. Each Brightway Database can be aggregated, i.e. we can calculate the cumulative biosphere flows needed for each process in that database. We then store these values separately, to be used instead of the normal technosphere supply chain entries. This is faster because we don't need to solve the linear problem Ax = b for that database subgraph.
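As a rough illustration of the idea, here is a minimal numpy sketch with made-up numbers; this is not the library's internal code:

```python
import numpy as np

# Toy technosphere (A) and biosphere (B) matrices for a two-process database:
# the first process consumes 0.5 units of the second process's product.
A = np.array([[1.0, 0.0],
              [-0.5, 1.0]])
# One elementary flow; direct emissions per unit of each process.
B = np.array([[0.1, 0.2]])

# Column j of B @ inv(A) holds the cumulative biosphere flows needed to
# deliver one unit of process j, i.e. the aggregated inventory.
aggregated = B @ np.linalg.inv(A)

# Storing these columns as direct biosphere exchanges means later LCA
# calculations can skip solving Ax = b for this database subgraph.
print(aggregated)  # [[0.2 0.2]]
```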
As the supply chain data is removed, we can't do calculations which would use that supply chain data. That means we can't do:
- Uncertainty analysis (no values in the technosphere array to sample from)
- Graph traversal (the graph is cut off at each aggregated process)
- Regionalized LCIA (every biosphere flow would be matched to the location of the aggregated process)
- Temporal LCA (no temporal supply chain data available)
- Contribution analysis (no supply chain data to get contributions from)
As these downsides are significant, this library keeps both the unit process and aggregated data, and allows you to choose which to use during each calculation.
Usage
Start by getting an estimate of how much faster an aggregated calculation would be with:

```python
import bw_aggregation as bwa

bwa.AggregatedDatabase.estimate_speedup("<database label>")
```
That will return something like:
```
Speedup(
    database_name='USEEIO-2.0',
    time_with_aggregation=0.06253910064697266,
    time_without_aggregation=0.026948928833007812,
    time_difference_absolute=0.035590171813964844,
    time_difference_relative=2.3206525585674855
)
```
The times reported include LCA object creation, data loading, matrix construction, and the inventory calculation.

As you can see, creating aggregated activities to avoid solving linear systems will not always lead to faster calculations: the linear algebra libraries we use are quite fast, and loading lots of data into the biosphere matrix can itself take a lot of time. In the example above, the aggregated calculation is estimated to be more than twice as slow. Please check the potential speedup before deciding to aggregate background databases.
If you want to convert that database, you can do so with:

```python
bwa.AggregatedDatabase.convert_existing("<database label>")
```
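For example, a small sketch combining the two calls above so conversion only happens when the estimate actually predicts a speedup (assuming the Speedup result exposes the fields shown earlier as attributes):

```python
speedup = bwa.AggregatedDatabase.estimate_speedup("<database label>")

# Only aggregate when the aggregated calculation is estimated to be faster.
if speedup.time_with_aggregation < speedup.time_without_aggregation:
    bwa.AggregatedDatabase.convert_existing("<database label>")
```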
From now on, calling `bw2data.Database("<database label>")` will return an instance of `AggregatedDatabase`. You can do everything you normally would with this database, including making changes.

:warning: Any existing `Database("<database label>")` reference is out of date: you need to create new `Database` class instances.
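For example, a minimal illustration of fetching a fresh instance:

```python
import bw2data as bd

db = bd.Database("<database label>")    # a fresh instance after conversion
isinstance(db, bwa.AggregatedDatabase)  # True for a converted database
```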
The conversion command will also set the default to use the aggregated values during calculations. You can change the default back to using unit process data with:
```python
import bw2data as bd

bd.Database("<database label>").use_aggregated(False)
```
To create a new `Database` as aggregated from the beginning, use:

```python
bd.Database('<name>', backend='aggregated')
```

You can then write data with `.write(some_data)`, and the aggregated datapackage will be generated automatically. However, individual changes to nodes or edges won't trigger a recalculation of the aggregated results; that needs to be done manually, see below.
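For illustration, a hedged sketch of this workflow; the database name and flow key below are hypothetical placeholders, and the node structure follows the usual bw2data conventions:

```python
import bw2data as bd

db = bd.Database("toy-foreground", backend="aggregated")
db.write({
    ("toy-foreground", "widget"): {
        "name": "widget production",
        "unit": "kilogram",
        "exchanges": [
            # Production exchange for the node itself.
            {"input": ("toy-foreground", "widget"), "amount": 1.0, "type": "production"},
            # Hypothetical biosphere flow key; replace with a real flow from your biosphere database.
            {"input": ("biosphere3", "<flow code>"), "amount": 0.5, "type": "biosphere"},
        ],
    },
})
# The aggregated datapackage is generated as part of .write(); later edits to
# nodes or edges require a manual refresh (see below).
```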
You can also use a context manager to control which aggregated databases use their aggregated values during a calculation. The context manager allows you to set things globally - for example, to force the use of aggregated values for all aggregated databases:
```python
import bw2calc as bc

with bwa.AggregationContext(True):
    lca = bc.LCA(my_functional_unit)
    lca.lci()
```
Passing in `False` will disable all use of aggregated values during the calculation. You can also be more fine-grained by using a dictionary of database labels:
```python
with bwa.AggregationContext({"<database label>": True, "<another database label>": False}):
    lca = bc.LCA(my_functional_unit)
    lca.lci()
```
As above, `True` forces the use of aggregated values, and `False` prohibits their use.
Aggregated database results are checked at calculation time to make sure they are still valid. If the aggregated results are out of date, an `ObsoleteAggregatedDatapackage` error will be raised. You can then refresh the aggregation result cache with:

```python
bd.Database("<database label>").refresh()
```
We don't do that for you automatically as it is usually quite computationally expensive.
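A sketch of handling this at calculation time; it assumes the `ObsoleteAggregatedDatapackage` exception is exposed on the bw_aggregation package, so adjust the reference to wherever it actually lives:

```python
import bw2calc as bc
import bw2data as bd

try:
    lca = bc.LCA(my_functional_unit)
    lca.lci()
except bwa.ObsoleteAggregatedDatapackage:
    # Recompute the cached aggregated results, then retry the calculation.
    bd.Database("<database label>").refresh()
    lca = bc.LCA(my_functional_unit)
    lca.lci()
```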
You can build inventories such that two aggregated databases mutually reference each other. If both are obsolete, trying to refresh one will raise an error that the other is obsolete. In this case, you can refresh all obsolete aggregated databases with:
```python
bwa.AggregatedDatabase.refresh_all()
```
Implementation
This library makes it possible to use both aggregated and unit process data by overriding the `.datapackage` method, and loading one or two different datapackages depending on the current context. This approach is compatible with both manual loading of datapackages and with the `bw2data` function `prepare_lca_inputs`. The `.datapackage` method of an `AggregatedDatabase` is roughly:
```python
if global_context is True:
    load_aggregated()
elif local_context(this_database) is True:
    load_aggregated()
elif this_database.prefer_aggregated is True:
    load_aggregated()
else:
    load_unit_process()
```
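For context, a rough sketch of how this fits into the Brightway 2.5 calculation path; the method tuple and activity choice are placeholders, and wrapping `prepare_lca_inputs` in the context manager is an assumption about where the datapackages get loaded:

```python
import bw2calc as bc
import bw2data as bd

act = bd.Database("<database label>").random()  # any activity, as a placeholder demand

with bwa.AggregationContext(True):
    # Datapackage selection happens inside .datapackage, so aggregated data
    # is picked up here without other changes to the calculation code.
    fu, data_objs, _ = bd.prepare_lca_inputs(demand={act: 1}, method=("<impact>", "<category>"))
    lca = bc.LCA(demand=fu, data_objs=data_objs)
    lca.lci()
```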
Contributing
Contributions are very welcome. To learn more, see the Contributor Guide.
License
Distributed under the terms of the MIT license, bw_aggregation is free and open source software.
Issues
If you encounter any problems, please file an issue along with a detailed description.