
Read and write CSV or TXT files in a simple manner

Project description

Quick CSV

Read and write small or large CSV/TXT files in a simple manner

Installation

pip install quick-csv

Examples for small files

Example 1: read and write CSV or TXT files

from quickcsv.file import *
# read a csv file
list_model=read_csv('data/test.csv')
for idx,model in enumerate(list_model):
    print(model)
    list_model[idx]['id']=idx
# save a csv file
write_csv('data/test1.csv',list_model)

# write a text file
write_text('data/text1.txt',"Hello World!")
# read a text file
print(read_text('data/text1.txt'))
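
Note that Example 1 assumes data/test.csv already exists. If it does not, a minimal sketch like the one below can create a small sample file first, reusing the same write_csv call shown above (the column names name and city are placeholder choices, not part of the library):

import os
from quickcsv.file import write_csv

# make sure the data/ directory exists before writing into it
os.makedirs('data', exist_ok=True)

# write_csv takes a list of dicts, exactly as in the example above;
# these rows are placeholder sample data
sample_rows = [
    {'name': 'Alice', 'city': 'Boston'},
    {'name': 'Bob', 'city': 'Beijing'}
]
write_csv('data/test.csv', sample_rows)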

Example 2: create a DataFrame from a list of models

from quickcsv.file import *
# read a csv file
list_model=read_csv('data/test.csv')
# create a dataframe from list_model
df=create_df(list_model)
# print
print(df)
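
create_df presumably wraps the list of dicts in a pandas DataFrame, so the usual pandas operations should work on the result. The snippet below is a sketch under that assumption (pandas must be installed):

from quickcsv.file import *

list_model = read_csv('data/test.csv')
df = create_df(list_model)
# assuming df is an ordinary pandas DataFrame, standard calls apply
print(df.head())       # first few rows
print(df.columns)      # column names taken from the dict keys
df.to_csv('data/test_copy.csv', index=False)   # write it back out with pandas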

Examples for large files

Example 1: read a large CSV file

from quickcsv.largefile import *
if __name__=="__main__":
    csv_path=r"umls_atui_rels.csv" # a large file (>500 MB)
    total_count=0

    def process_partition(part_df,i):
        print(f"Part {i}")

    def process_row(row,i):
        global total_count
        print(i)
        total_count+=1

    list_results=read_large_csv(csv_file=csv_path,row_func=process_row,partition_func=process_partition)

    print("Return: ")
    print(list_results)

    print("Total Record Num: ",total_count)

Example 2: query a large CSV file

from quickcsv.largefile import *

if __name__=="__main__":
    csv_path=r"umls_sui_nodes.csv" # a large file (>500 MB)
    total_count=0
    # process each partition in the large file
    def process_partition(part_df,i):
        print(f"Part {i}")
        print()
    # process each row in a partition while reading
    def process_row(row,i):
        global total_count
        print(row)
        total_count+=1
    # field is the name of a column in the csv file, and value is the value to match in that column
    list_results=read_large_csv(csv_file=csv_path, field="SUI",value="S0000004", append_row=True, row_func=process_row,partition_func=process_partition)

    print("Return: ")
    print(list_results)

    print("Total Record Num: ",total_count)

Example 3: read the top N records from a large CSV file

from quickcsv.largefile import *

if __name__=="__main__":
    csv_path=r"umls_atui_rels.csv"
    # return the top 10 rows in the CSV file
    list_results=read_large_csv(csv_file=csv_path,head_num=10)

    print("Return: ")
    print(list_results)

    print("Total Record Num: ",total_count)

License

The quick-csv project is provided by Donghua Chen.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

quick-csv-0.0.5.tar.gz (12.7 kB)


Built Distribution

quick_csv-0.0.5-py3-none-any.whl (11.8 kB)


File details

Details for the file quick-csv-0.0.5.tar.gz.

File metadata

  • Download URL: quick-csv-0.0.5.tar.gz
  • Upload date:
  • Size: 12.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/34.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.9 tqdm/4.63.0 importlib-metadata/4.11.3 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.11

File hashes

Hashes for quick-csv-0.0.5.tar.gz
  • SHA256: 1b8919350268c6e59cd905941afd1e05b2a4c3a3cf46109a42bee54e73074dcb
  • MD5: 1218f2fc6f4a4f113a75ed6a3cb3c636
  • BLAKE2b-256: ce8cb2c6ada29f6f37875cf558d3813e05b880af968f34ddc45b1b508c2d5ee6


File details

Details for the file quick_csv-0.0.5-py3-none-any.whl.

File metadata

  • Download URL: quick_csv-0.0.5-py3-none-any.whl
  • Upload date:
  • Size: 11.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/34.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.9 tqdm/4.63.0 importlib-metadata/4.11.3 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.11

File hashes

Hashes for quick_csv-0.0.5-py3-none-any.whl
  • SHA256: c1f80629677ef3416234716bc085eea79c5af9842b9c9b6783f657aed952a3fe
  • MD5: 981eaa46be93dbe5ac49af61ea3b27fe
  • BLAKE2b-256: 3eae2ae08479b33e6c05da9e9034d13d3166abf9bac88273f0387a1c0c63a896

