SecretScraper is a web scraper tool that crawls target websites and extracts secret information from their responses via regular expressions.

Project description

SecretScraper

Overview

SecretScraper is a web scraper tool that can crawl target websites, scrape the responses, and extract secret information via regular expressions.

Features

  • Web crawler: crawl the target URLs, extract all linked URLs, and keep following new links until the depth or page limit is exceeded
  • Support domain white list and black list (see the example after this list)
  • Support multiple targets, input target URLs from a file
  • Scalable customization: header, proxy, timeout, cookie, scrape depth, follow redirect, etc.
  • Built-in regex to search for sensitive information
  • Flexible configuration in yaml format
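
For example, the domain white list and black list can keep the crawler on the target site while skipping a particular subdomain (admin.scrapeme.live below is a made-up subdomain used only for illustration):

secretscraper -u https://scrapeme.live/shop/ -d "*.scrapeme.live" -D "admin.scrapeme.live"   # admin.scrapeme.live is illustrative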

Prerequisite

  • Platform: macOS or Ubuntu; Windows is not supported for now (because this project depends on hyperscan).
  • Python Version: 3.11
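
A quick way to confirm the interpreter version before installing:

python3 --version   # should report Python 3.11.x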

Usage

Install

pip install secretscraper
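
Once installed, the documented version flag can be used to confirm the CLI is on your PATH:

secretscraper -V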

Basic Usage

Start with single target:

secretscraper -u https://scrapeme.live/shop/

Start with multiple targets:

secretscraper -f urls
# contents of the urls file (one URL per line)
http://scrapeme.live/1
http://scrapeme.live/2
http://scrapeme.live/3
http://scrapeme.live/4
http://scrapeme.live/5
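
Since the file is just a plain list of URLs, one per line, it can be created straight from the shell:

cat > urls <<'EOF'
http://scrapeme.live/1
http://scrapeme.live/2
EOF
secretscraper -f urls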

A sample output:

> secretscraper -u http://127.0.0.1:8888
Target urls num: 1
Max depth: 1, Max page num: 1000
Output file: /Users/padishah/Documents/Files/Python_WorkSpace/secretscraper/src/secretscraper/crawler.log
Target URLs: http://127.0.0.1:8888

1 URLs from http://127.0.0.1:8888 [200] (depth:0):
http://127.0.0.1:8888/index.html [200]

1 Domains:
127.0.0.1:8888


13 Secrets found in http://127.0.0.1:8888/1.js 200:
Email: 3333333qqqxxxx@qq.com
Shiro: =deleteme
JS Map: xx/static/asdfaf.js.map
Email: example@example.com
Swagger: static/swagger-ui.html
ID Card: 130528200011110000
URL as a Value: redirect=http://
Phone: 13273487666
Internal IP:  192.168.1.1
Cloud Key: Accesskeyid
Cloud Key: AccessKeySecret
Shiro: rememberme=
Internal IP:  10.0.0.1


1 JS from http://127.0.0.1:8888:
http://127.0.0.1:8888/1.js [200]

All supported options:

> secretscraper --help
Usage: secretscraper [OPTIONS]

  Main commands

Options:
  -V, --version                Show version and exit.
  --debug                      Enable debug.
  -a, --ua TEXT                Set User-Agent
  -c, --cookie TEXT            Set cookie
  -d, --allow-domains TEXT     Domain white list, wildcard(*) is supported,
                               separated by commas, e.g. *.example.com,
                               example*
  -D, --disallow-domains TEXT  Domain black list, wildcard(*) is supported,
                               separated by commas, e.g. *.example.com,
                               example*
  -f, --url-file FILE          Target urls file, separated by line break
  -i, --config FILE            Set config file, defaults to settings.yml
  -m, --mode [1|2]             Set crawl mode, 1(normal) for max_depth=1,
                               2(thorough) for max_depth=2, default 1
  --max-page INTEGER           Max page number to crawl, default 100000
  --max-depth INTEGER          Max depth to crawl, default 1
  -o, --outfile FILE           Output result to specified file
  -s, --status TEXT            Filter response status to display, separated by
                               commas, e.g. 200,300-400
  -x, --proxy TEXT             Set proxy, e.g. http://127.0.0.1:8080,
                               http://127.0.0.1:7890
  -H, --hide-regex             Hide regex search result
  -F, --follow-redirects       Follow redirects
  -u, --url TEXT               Target url
  -l, --local PATH             Local file or directory, scan local
                               file/directory recursively
  --help                       Show this message and exit.
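
Most options can be combined in a single run, for example crawling targets from a file through a proxy in thorough mode, following redirects, and saving the output:

secretscraper -f urls -x http://127.0.0.1:7890 -m 2 -F -o result.log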

Advanced Usage

Thorough Crawl

By default the max depth is set to 1, which means only the start URLs will be crawled. To change that, specify --max-depth <number>, or, more simply, use -m 2 to run the crawler in thorough mode, which is equivalent to --max-depth 2. The default normal mode (-m 1) keeps the max depth at 1.

secretscraper -u https://scrapeme.live/shop/ -m 2
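
To crawl deeper than thorough mode, pass the depth explicitly, e.g. a depth of 3:

secretscraper -u https://scrapeme.live/shop/ --max-depth 3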

Write Results to File

secretscraper -u https://scrapeme.live/shop/ -o result.log
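
The -s option pairs well with file output to keep only the interesting responses, e.g. successes and redirects:

secretscraper -u https://scrapeme.live/shop/ -o result.log -s 200,300-400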

Hide Regex Result

Use the -H option to hide regex-matching results; only the discovered links will be displayed.

secretscraper -u https://scrapeme.live/shop/ -H

Extract Secrets from Local Files

secretscraper -l <dir or file>
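
For example, to scan a previously downloaded site or an unpacked JavaScript bundle (the directory name below is only an illustration):

secretscraper -l ./downloaded_site   # ./downloaded_site is a placeholder path, use your own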

Customize Configuration

The built-in configuration is shown below. You can supply a custom configuration file via -i settings.yml.

verbose: false
debug: false
loglevel: warning
logpath: log

proxy: "" # http://127.0.0.1:7890
max_depth: 1 # 0 for no limit
max_page_num: 1000 # 0 for no limit
timeout: 5
follow_redirects: false
workers_num: 1000
headers:
  Accept: "*/*"
  Cookie: ""
  User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36 SE 2.X MetaSr 1.0

rules:
  - name: Swagger
    regex: \b[\w/]+?((swagger-ui.html)|(\"swagger\":)|(Swagger UI)|(swaggerUi)|(swaggerVersion))\b
    loaded: true
  - name: ID Card
    regex: \b((\d{8}(0\d|10|11|12)([0-2]\d|30|31)\d{3}\$)|(\d{6}(18|19|20)\d{2}(0[1-9]|10|11|12)([0-2]\d|30|31)\d{3}(\d|X|x)))\b
    loaded: true
  - name: Phone
    regex: \b((?:(?:\+|00)86)?1(?:(?:3[\d])|(?:4[5-79])|(?:5[0-35-9])|(?:6[5-7])|(?:7[0-8])|(?:8[\d])|(?:9[189]))\d{8})\b
    loaded: true
  - name: JS Map
    regex: \b([\w/]+?\.js\.map)
    loaded: true
  - name: URL as a Value
    regex: (\b\w+?=(https?)(://|%3a%2f%2f))
    loaded: true
  - name: Email
    regex: \b(([a-z0-9][_|\.])*[a-z0-9]+@([a-z0-9][-|_|\.])*[a-z0-9]+\.([a-z]{2,}))\b
    loaded: true
  - name: Internal IP
    regex: '[^0-9]((127\.0\.0\.1)|(10\.\d{1,3}\.\d{1,3}\.\d{1,3})|(172\.((1[6-9])|(2\d)|(3[01]))\.\d{1,3}\.\d{1,3})|(192\.168\.\d{1,3}\.\d{1,3}))'
    loaded: true
  - name: Cloud Key
    regex: \b((accesskeyid)|(accesskeysecret)|\b(LTAI[a-z0-9]{12,20}))\b
    loaded: true
  - name: Shiro
    regex: (=deleteMe|rememberMe=)
    loaded: true
  - name: Suspicious API Key
    regex: "[\"'][0-9a-zA-Z]{32}['\"]"
    loaded: true
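
As a sketch, a custom settings.yml could adjust the crawler settings and add an extra rule. The AWS Access Key rule below is illustrative and not part of the built-in set; depending on how the tool merges custom configuration with the defaults, you may prefer to copy the full built-in config above and edit it rather than supplying only the fields you change:

# settings.yml (illustrative sketch, not the shipped defaults)
proxy: "http://127.0.0.1:7890"
max_depth: 2
timeout: 10
rules:
  - name: AWS Access Key # custom rule, not built in
    regex: \b(AKIA[0-9A-Z]{16})\b
    loaded: true

Then point the crawler at it with the documented -i option:

secretscraper -u https://scrapeme.live/shop/ -i settings.yml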

TODO

  • Support headless browser
  • Support url-finder output format, add --tree option
  • Add regex doc reference
  • Generate configuration file
  • Support windows
  • Scan local file
  • Extract links via regex

Change Log

2024.4.28 Version 1.3.2

  • New Features
    • Extract links via regex

2024.4.26 Version 1.3.1

  • New Features
    • Support scanning local files

2024.4.15

  • Add status to url result
  • All crawler tests passed

Download files

Download the file for your platform.

Source Distribution

secretscraper-1.3.3.tar.gz (23.2 kB)

Uploaded Source

Built Distribution

secretscraper-1.3.3-py3-none-any.whl (26.1 kB)

Uploaded Python 3

File details

Details for the file secretscraper-1.3.3.tar.gz.

File metadata

  • Download URL: secretscraper-1.3.3.tar.gz
  • Upload date:
  • Size: 23.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.2 CPython/3.11.6 Darwin/21.6.0

File hashes

Hashes for secretscraper-1.3.3.tar.gz

  • SHA256: e83f5277262f1fa51e725d8e99386cacfeabe566f479397dd7d877018c7b1d38
  • MD5: ee6f920ed14725e289425b8b820ad717
  • BLAKE2b-256: 191ac8657d0d24a2ba3aa92004753a1bf911e5072d50ec4aba7bcf332e95754c


File details

Details for the file secretscraper-1.3.3-py3-none-any.whl.

File metadata

  • Download URL: secretscraper-1.3.3-py3-none-any.whl
  • Upload date:
  • Size: 26.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.2 CPython/3.11.6 Darwin/21.6.0

File hashes

Hashes for secretscraper-1.3.3-py3-none-any.whl

  • SHA256: c3fc32d8bc3e831255d72b62c1e9cd17bbd5f764753bbb7d64182774b0831aa2
  • MD5: 4d7e53e6e3920d1e2ffead7fb666a6e0
  • BLAKE2b-256: ab70f7fa0c920994b8e1da03fefc875e6aa7ec01669d43def573a78d1f57656f

