# Soon
Worker decorator for background tasks re-using [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor).
Installation
------------
Soon is conveniently available via pip:
```
pip install soon
```
or install it from source via `git clone` and `setup.py`:
```
git clone git@github.com:dotpot/Soon.git
sudo python setup.py install
```
To ensure Soon is properly installed, you can run the test suite from the project root:
```
pipenv run pytest -v
```
Usage
-----
The Soon library lets you take advantage of multi-threading with minimal concern for the implementation details.
Website fetcher example
-----------------
You've collected a list of URLs and want to download the HTML for each of them. The following is a perfectly reasonable first stab at the task.
```python
urls = [
'https://cnn.com',
'https://news.ycombinator.com/',
'https://stackoverflow.com/',
]
```
---
```python
import time

import requests


def fetch(url):
    return requests.get(url)


if __name__ == "__main__":
    start = time.time()
    responses = [fetch(url) for url in urls]
    html = [response.text for response in responses]
    end = time.time()
    print("Time: %f seconds" % (end - start))
```
---
More efficient website fetcher example
--------------------------
Using Soon's decorator syntax, we can define a function that executes in multiple threads. Individual calls to `fetch` are non-blocking, but we can largely ignore this fact and write code identically to how we would in a synchronous paradigm.
```python
import time

import requests

from soon import workers


@workers(5)
def fetch(url):
    return requests.get(url)


if __name__ == "__main__":
    start = time.time()
    responses = [fetch(url) for url in urls]
    html = [response.text for response in responses]
    end = time.time()
    print("Time: %f seconds" % (end - start))
```
We can now download websites more efficiently.
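For intuition, the decorator's behaviour can be approximated with the standard library's `ThreadPoolExecutor` directly. This is a rough sketch of the pattern, not Soon's actual internals, and it substitutes a hypothetical `slow_task` for a real network request:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def slow_task(x):
    # Simulate an I/O-bound call (e.g. a network request).
    time.sleep(0.1)
    return x * 2


start = time.time()
# Five workers run the five tasks concurrently, much like @workers(5).
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(slow_task, range(5)))
elapsed = time.time() - start

print(results)  # [0, 2, 4, 6, 8]
```

With five workers, the five 0.1-second tasks finish in roughly 0.1 seconds of wall time rather than 0.5.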
---
You can also pass an optional `timeout` argument to prevent hanging on a task that is not guaranteed to return.
```python
import time

from soon import workers


@workers(1, timeout=0.1)
def timeout_error():
    time.sleep(1)


if __name__ == "__main__":
    timeout_error()
```
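Assuming the timeout is ultimately delegated to the underlying executor, the equivalent standard-library behaviour is `Future.result(timeout=...)`, which raises `concurrent.futures.TimeoutError` when the task overruns. A minimal sketch:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError


def slow():
    time.sleep(1)


pool = ThreadPoolExecutor(max_workers=1)
future = pool.submit(slow)
try:
    # Give up after 0.1 s even though the task needs a full second.
    future.result(timeout=0.1)
    outcome = "completed"
except TimeoutError:
    outcome = "timed out"
print(outcome)  # timed out
pool.shutdown(wait=False)
```

Note that the timed-out task keeps running in its worker thread; the timeout only stops the caller from waiting for it.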