to-file-like-obj
Python utility function to convert an iterable of bytes or str to a readable file-like object.
It can be seen as the inverse of the two-argument iter function. The iter function allows conversion of file-like objects to iterables, but the function here converts from iterables to file-like objects. This allows you to bridge the gap between incompatible streaming APIs - passing data from sources that offer data as iterables to destinations that only accept file-like objects.
Features
- Inherits from IOBase - some APIs require this
- The resulting file-like object is well-behaved - it does not return more data than requested
- It evaluates the iterable lazily - avoiding loading all of its data into memory
- Under the hood, copying is avoided as much as possible
- Converts iterables of bytes to bytes-based file-like objects, which can be passed to boto3's upload_fileobj, or to io.TextIOWrapper, which is useful in streaming CSV parsing
- Converts iterables of str to text-based file-like objects, which can be passed to psycopg2's copy_expert
Installation
pip install to-file-like-obj
Usage
If you have an iterable of bytes instances, you can pass them to the to_file_like_obj function, and it will return the corresponding file-like object.
from to_file_like_obj import to_file_like_obj
f = to_file_like_obj((b'one', b'two', b'three',))
print(f.read(5)) # b'onetw'
print(f.read(6)) # b'othree'
If you have an iterable of str instances, you can pass them to the to_file_like_obj function, along with base=str as a named argument, and it will return the corresponding file-like object.
from to_file_like_obj import to_file_like_obj
f = to_file_like_obj(('one', 'two', 'three',), base=str)
print(f.read(5)) # 'onetw'
print(f.read(6)) # 'othree'
These examples have the iterables hard-coded, and so they are all loaded into memory. However, to_file_like_obj works equally well with iterables that are generated dynamically, without loading them all into memory.
Recipe: parsing a CSV file while downloading it
Using httpx, it's possible to use the to_file_like_obj function to parse a CSV file while downloading it.
import csv
import io

import httpx

from to_file_like_obj import to_file_like_obj

with httpx.stream("GET", "https://www.example.com/my.csv") as r:
    bytes_iter = r.iter_bytes()
    f = to_file_like_obj(bytes_iter)
    lines_iter = io.TextIOWrapper(f, newline='', encoding='utf-8')
    rows_iter = csv.reader(lines_iter)
    for row in rows_iter:
        print(row)
Recipe: parsing a zipped CSV file while downloading it
Similarly, using httpx and stream-unzip, it's possible to use the to_file_like_obj function to robustly parse a zipped CSV file while downloading it.
import csv
import io

import httpx
from stream_unzip import stream_unzip

from to_file_like_obj import to_file_like_obj

with httpx.stream("GET", "https://www.example.com/my.zip") as r:
    zipped_bytes_iter = r.iter_bytes()

    # Assumes a single CSV file in the ZIP (in the case of more, this will concatenate them together)
    unzipped_bytes_iter = (
        chunk
        for _, _, chunks in stream_unzip(zipped_bytes_iter)
        for chunk in chunks
    )

    f = to_file_like_obj(unzipped_bytes_iter)
    lines_iter = io.TextIOWrapper(f, newline='', encoding='utf-8')
    rows_iter = csv.reader(lines_iter)
    for row in rows_iter:
        print(row)
Recipe: uploading large objects to S3 while receiving their contents
boto3's upload_fileobj is a powerful function, but it's not obvious that it can be used with iterables of bytes that are returned from various APIs, such as those in httpx.
import boto3
import httpx

from to_file_like_obj import to_file_like_obj

s3 = boto3.client('s3')

with httpx.stream("GET", "https://www.example.com/my.zip") as r:
    bytes_iter = r.iter_bytes()
    f = to_file_like_obj(bytes_iter)
    s3.upload_fileobj(f, 'my-bucket', 'my.zip')
Recipe: stream-zipping while uploading to S3
stream-zip can be used with boto3 and this package to upload objects to S3 while zipping them.
import datetime
from stat import S_IFREG

import boto3
import httpx
from stream_zip import ZIP_32, stream_zip

from to_file_like_obj import to_file_like_obj

s3 = boto3.client('s3')

with httpx.stream("GET", "https://www.example.com/my.txt") as r:
    unzipped_bytes_iter = r.iter_bytes()
    member_files = (
        (
            'my.txt',
            datetime.datetime.now(),
            S_IFREG | 0o600,
            ZIP_32,
            unzipped_bytes_iter,
        ),
    )
    zipped_bytes_iter = stream_zip(member_files)
    f = to_file_like_obj(zipped_bytes_iter)
    s3.upload_fileobj(f, 'my-bucket', 'my.zip')
File details
Details for the file to_file_like_obj-0.0.6.tar.gz.
File metadata
- Download URL: to_file_like_obj-0.0.6.tar.gz
- Upload date:
- Size: 5.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.0 CPython/3.12.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | af75a77502c7f90d123f416c17ebdc67e3bfecfe2f99149656b7b751c33b2709
MD5 | 3a2d73ba911c2253b03ca487dc3a95e1
BLAKE2b-256 | 7d8f74b91092f8e56236682ec41b580f7fbfe4dbdd89dadf3476df9e46211ee5
File details
Details for the file to_file_like_obj-0.0.6-py3-none-any.whl.
File metadata
- Download URL: to_file_like_obj-0.0.6-py3-none-any.whl
- Upload date:
- Size: 4.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.0 CPython/3.12.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | acfe6c4eb6c6934ae348cc80ea30c1d5820eba56cb78a615c2e807daeb529c30
MD5 | 2c7532cdeb6ae22f0104335d655d1b2c
BLAKE2b-256 | 4424f1d67bcd877dcaea4c3540940ff17cb2fa5f81e51387a9175b0759851f80