
Library to find URLs and check their validity.

Project description

urlfinderlib

This is a Python (3.6+) library for finding URLs in documents and checking their validity.

Supported Documents

Extracts URLs from the following types of documents:

  • Binary files (finds URLs within strings)
  • CSV files
  • HTML files
  • iCalendar/vCalendar files
  • PDF files
  • Text files (ASCII or UTF-8)
  • XML files

Every extracted URL is validated such that it contains a domain with a valid TLD (or a valid IP address) and does not contain any invalid characters.
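
As a quick illustration (using the find_urls function shown under Basic usage below), feeding a small piece of text to the library should return only the URLs that pass these checks; treat this as a sketch, since the exact results can vary between versions:

from urlfinderlib import find_urls

text = b'Links: http://example.com/page, http://10.0.0.1/login, and http://phishing.notarealtld/x'

# The first two URLs have a valid TLD or IP address; the last one should be
# dropped because "notarealtld" is not a valid TLD.
print(find_urls(text))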

URL Permutations

This library was originally written to find both valid URLs and the obfuscated or slightly malformed URLs used by malicious actors, so that the results can be used as indicators of compromise (IOCs). As such, the extracted URLs also include the following permutations:

  • URL with any Unicode characters in its domain
  • URL with any Unicode characters in its domain converted to their IDNA equivalents

For both domain variations, the following path permutations are also returned (see the sketch after this list):

  • URL with its path %-encoded
  • URL with its path %-decoded
  • URL with encoded HTML entities in its path
  • URL with decoded HTML entities in its path
  • URL with its path %-decoded and HTML entities decoded
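
As a rough sketch of these permutations (assuming, as in the Basic usage example below, that find_urls accepts raw bytes and returns an iterable of URL strings), a URL with a Unicode domain and a %-encoded path could produce results along these lines; the exact set of variations depends on the library version:

from urlfinderlib import find_urls

# Hypothetical input: a Unicode domain plus a %-encoded path.
content = 'See http://exämple.com/some%20page for details'.encode('utf-8')

# Expect the Unicode domain form, its IDNA equivalent (xn--exmple-cua.com),
# and path variants such as the %-decoded "/some page", in addition to the
# original URL.
for url in sorted(find_urls(content)):
    print(url)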

Child URLs

This library also attempts to extract or decode "child" URLs embedded in the paths or query parameters of other URLs (see the example after this list). The following formats are supported:

  • Barracuda protected URLs
  • Base64-encoded URLs found within the URL's path
  • Google redirect URLs
  • Mandrill/Mailchimp redirect URLs
  • Outlook Safe Links URLs
  • Proofpoint protected URLs
  • URLs found in the URL's path query parameters
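
For example, a redirect-style URL that carries another URL in its query string should yield both the outer URL and the decoded child URL. A minimal sketch, with results that may differ between library versions:

from urlfinderlib import find_urls

# A Google redirect URL whose "q" parameter contains another URL.
content = b'https://www.google.com/url?q=http://example.com/landing-page'

# Expect both the redirect URL itself and the child URL
# http://example.com/landing-page to appear in the results.
print(find_urls(content))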

Basic usage

from urlfinderlib import find_urls

with open('/path/to/file', 'rb') as f:
    print(find_urls(f.read()))
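
Because find_urls operates on bytes (note the 'rb' mode above), you can also pass a bytes value directly. Assuming the return value is an iterable of URL strings, you can loop over it:

from urlfinderlib import find_urls

urls = find_urls(b'Read more at http://example.com/news/today.html')

for url in urls:
    print(url)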

base_url Parameter

If you are trying to find URLs inside an HTML file, the links are often relative paths that only resolve against the page's location on the server hosting the HTML. In that case, you can use the base_url parameter to extract these relative URLs.

from urlfinderlib import find_urls

with open('/path/to/file', 'rb') as f:
    print(find_urls(f.read(), base_url='http://example.com'))
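
As a sketch of the behavior, a relative href in an HTML snippet should be resolved against the supplied base_url (the exact output depends on the library version):

from urlfinderlib import find_urls

# A relative link that only makes sense relative to the hosting server.
html = b'<a href="/downloads/report.pdf">Report</a>'

# With base_url supplied, expect an absolute URL such as
# http://example.com/downloads/report.pdf in the results.
print(find_urls(html, base_url='http://example.com'))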


Download files


Source Distribution

urlfinderlib-0.18.6.tar.gz (20.1 kB, source)

Built Distribution

urlfinderlib-0.18.6-py3-none-any.whl (22.5 kB, Python 3 wheel)

File details

Details for the file urlfinderlib-0.18.6.tar.gz.

File metadata

  • Download URL: urlfinderlib-0.18.6.tar.gz
  • Upload date:
  • Size: 20.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.10

File hashes

Hashes for urlfinderlib-0.18.6.tar.gz:

  • SHA256: 5e100a04459da0834f08901a6c99eee48aa94da1ea740ae82d0cebd8425c58ce
  • MD5: 3c5a6a19c2becb6b69b1699875c8367f
  • BLAKE2b-256: f743bb555dc65a18849062bc69f494b90bb47da0d4553c41f747d70b693c08b9
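
To check a downloaded copy of the sdist against the SHA256 digest above before installing, a quick verification with Python's standard hashlib module (assuming the file is in your current directory) looks like this:

import hashlib

expected = '5e100a04459da0834f08901a6c99eee48aa94da1ea740ae82d0cebd8425c58ce'

with open('urlfinderlib-0.18.6.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print('OK' if digest == expected else 'Hash mismatch!')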


File details

Details for the file urlfinderlib-0.18.6-py3-none-any.whl.


File hashes

Hashes for urlfinderlib-0.18.6-py3-none-any.whl:

  • SHA256: 234fc41df1ecd1da0d2f1f2e55f20cecc33981b4a6cbea5ccf45d4224e40f13a
  • MD5: 9aff751ab6361523cc4d6b02cab0a611
  • BLAKE2b-256: dcf1c4b845e1f02a9382bd330f9ed0124b6bab1213e6c25cb5e94acab5e0d4bf

