
Another library for iterating through the contents of a directory


There are many libraries for traversing directories, and you can also do it with the standard library alone. This particular library differs in a few ways:

  • ⚗️ Filtering by file extension, by text patterns in .gitignore format, or with custom callables.
  • 🐍 Works natively with both Path objects from the standard library and plain strings.
  • ❌ Support for cancellation tokens.
  • 👯‍♂️ Combining multiple crawling methods in one object.


Installation

You can install dirstree using pip:

pip install dirstree

You can also quickly try out this and other packages without installing them, using instld.

Basic usage

It's very easy to work with the library in your own code:

  • Create a crawler object, passing the path to the base directory and, if necessary, additional arguments.
  • Iterate through it.

The simplest code example would look like this:

from dirstree import Crawler

crawler = Crawler('.')

for file in crawler:
    print(file)

↑ Here we recursively print all files from the current directory (that is, including the contents of nested directories). Each iteration yields a new Path object.
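For comparison, the recursive traversal above can be approximated with the standard library alone. This is not dirstree's implementation, just the baseline it builds on:

```python
import tempfile
from pathlib import Path

# Build a small directory tree to walk.
base = Path(tempfile.mkdtemp())
(base / 'sub').mkdir()
(base / 'a.txt').write_text('hello')
(base / 'sub' / 'b.txt').write_text('world')

# Recursive traversal with pathlib: rglob('*') yields every entry,
# including those in nested directories; keep only regular files.
files = sorted(p for p in base.rglob('*') if p.is_file())
for file in files:
    print(file.relative_to(base))
```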

Filtering

When iterating through a directory, you often want to visit only files of a certain kind and ignore the rest. There are three ways to do this:

  • Traverse only files with the specified extensions, such as .txt, .doc, or .py.
  • Skip files whose paths match a given text pattern.
  • Use an arbitrary function to decide, for each path, whether it should be included.

Each method is enabled by passing the corresponding parameter when creating the crawler object, and all of them can be combined with each other.

To set the file extensions you are interested in, use the extensions parameter:

crawler = Crawler('.', extensions=['.txt'])  # Iterate only on .txt files.

Also, if you only need Python files, you can use a special class that traverses only them, without specifying extensions:

from dirstree import PythonCrawler

crawler = PythonCrawler('.')  # Iterate only on .py files.

To specify which files and directories you do NOT want to iterate over, use the exclude parameter:

crawler = Crawler('.', exclude=['.git', 'venv'])  # Exclude ".git" and "venv" directories.

↑ Please note that we use the .gitignore format here.

If you need a universal way to filter out unnecessary paths, pass your function as the filter parameter:

crawler = Crawler('.', filter=lambda path: len(str(path)) == 7)  # Iterate only over paths that are 7 characters long.
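When several filters are combined, the crawler conceptually keeps a path only if it passes every active check. A rough stdlib sketch of that combined logic (the helper name is hypothetical, and this is not dirstree's actual code):

```python
import fnmatch
from pathlib import Path

def passes_filters(path, extensions=None, exclude=None, custom=None):
    """Hypothetical illustration: a path survives only if every active check passes."""
    if extensions is not None and path.suffix not in extensions:
        return False
    if exclude is not None and any(fnmatch.fnmatch(part, pattern)
                                   for part in path.parts
                                   for pattern in exclude):
        return False
    if custom is not None and not custom(path):
        return False
    return True

print(passes_filters(Path('src/app.py'), extensions=['.py'], exclude=['.git', 'venv']))  # True
print(passes_filters(Path('.git/config'), exclude=['.git']))                             # False
print(passes_filters(Path('notes.txt'), extensions=['.py']))                             # False
```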

Working with Cancellation Tokens

You can set an arbitrary condition under which file traversal will stop using cancellation tokens from the cantok library.

There are 2 ways to do this ↓

  1. If you use the crawler as a one-time object for a single iteration, pass the token when creating it:

from cantok import TimeoutToken
from dirstree import Crawler

for path in Crawler('.', token=TimeoutToken(0.0001)):  # Limit the iteration time to 0.0001 seconds.
    print(path)

  2. If you plan to use the crawler object several times, use the go() method for iteration and pass a fresh token to it every time:

crawler = Crawler('.')

for path in crawler.go(token=TimeoutToken(0.0001)):  # Limit the iteration time to 0.0001 seconds.
    print(path)

↑ Follow these rules to avoid accidentally "baking" an expired token inside a crawler object.
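Conceptually, a timeout token is just a deadline checked on every step of the iteration. A minimal stdlib sketch of the same idea (not cantok's implementation; here an expired deadline simply stops the traversal):

```python
import time

def iterate_until_deadline(items, timeout):
    """Yield items until the deadline passes; the check runs once per step."""
    deadline = time.monotonic() + timeout
    for item in items:
        if time.monotonic() >= deadline:
            break  # The expired deadline stops the traversal without raising.
        yield item

# With a generous deadline, all items come through.
print(list(iterate_until_deadline(range(5), timeout=1.0)))  # [0, 1, 2, 3, 4]

# With an already-expired deadline, iteration stops immediately.
print(list(iterate_until_deadline(range(5), timeout=-1.0)))  # []
```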

Combination

You can combine multiple crawler objects into one using the usual addition operator, like this:

for path in Crawler('../dirstree') + Crawler('../cantok'):
    print(path)

↑ The paths that you will iterate on will be automatically deduplicated.

↑ You can also impose arbitrary restrictions on each of the summed objects; all of them will be taken into account.
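The deduplication when adding crawlers together behaves like chaining the iterators through a seen-set. A minimal sketch of that behaviour (not dirstree's actual code):

```python
from itertools import chain

def deduplicated(*iterables):
    """Chain several iterables, yielding each value only the first time it appears."""
    seen = set()
    for item in chain(*iterables):
        if item not in seen:
            seen.add(item)
            yield item

first = ['a.py', 'b.py', 'shared.py']
second = ['shared.py', 'c.py']
print(list(deduplicated(first, second)))  # ['a.py', 'b.py', 'shared.py', 'c.py']
```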

You can also pass multiple paths to a single crawler object:

for path in Crawler('../dirstree', '../cantok'):
    print(path)

↑ In this case, there is no deduplication of paths.
