
Crawl telegra.ph for nude pictures and videos

Project description

nude crawler

Nude Crawler crawls all pages on telegra.ph for today and N past days for specific words, counts nude and non-nude images (videos are not analysed), and reports pages which look interesting (e.g. more than 10 nude images, or at least one video).
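To make the rest of this page easier to follow, here is a rough sketch (in Python, not nudecrawler's own code) of how candidate telegra.ph addresses are built from a word: a title slug plus a "-MM-DD" date, with a numeric suffix when several pages share the same title on the same day. The exact suffix handling is an assumption based on the examples further down this page.

import datetime

# Illustrative only: build candidate telegra.ph URLs for a word, for today
# and `days` past days. Pages sharing a title on the same day get an extra
# numeric suffix (see the Sasha-Grey-01-23-100 example later on this page).
def candidate_urls(word, days=2, extra_pages=3):
    today = datetime.date.today()
    for back in range(days + 1):
        day = today - datetime.timedelta(days=back)
        base = f"https://telegra.ph/{word}-{day.month:02d}-{day.day:02d}"
        yield base
        for n in range(2, 2 + extra_pages):
            yield f"{base}-{n}"

for url in candidate_urls("sasha-grey", days=1):
    print(url)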

Ineffective intriguing warning

No matter how old you are, no matter how tolerant you are, no matter what your sexual orientation is, no matter what your favorite perversion is, no matter how broad your sexual horizons are, with NudeCrawler you will find a lot of things that you will NOT like.

I wrote this warning because I have seen some shit. LITERALLY.

Please use it only for legal and ethical purposes. And it is, of course, strictly 18+.

Install

pip3 install nudecrawler

Alternatively, install directly from the git repo:

pip3 install git+https://github.com/yaroslaff/nudecrawler

start adult-image-detector

If you want nudity detection, start the optional adult-image-detector:

docker run -d -p 9191:9191 opendating/adult-image-detector

Or just add the -a option if you do not want to filter by the number of nude images.
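For reference, talking to the detector is a plain HTTP POST of an image file to the running container. Below is a minimal sketch using requests; the endpoint path, form field and response key are assumptions (check the adult-image-detector README for the exact API), only the host and port come from the Docker command above.

import requests

# Hedged sketch of querying a locally running adult-image-detector.
# Endpoint path, field name and response key are assumptions -- verify them
# against the adult-image-detector documentation.
def nudity_score(image_path, base_url="http://localhost:9191"):
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{base_url}/api/v1/detect",   # assumed endpoint
            files={"image": f},            # assumed form field
            timeout=30,
        )
    resp.raise_for_status()
    return float(resp.json().get("open_nsfw_score", 0.0))  # assumed key

print(nudity_score("sample.jpg"))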

Launch Nude Crawler!

(I intentionally altered the links; I do not want to violate GitHub policy.)

$ nudecrawler sasha-grey
INTERESTING https://telegra.ph/sasha-grey-XXXXXXXX
  Nude: 0 non-nude: 0
  Total video: 1

INTERESTING https://telegra.ph/sasha-grey-XXXXX
  Nude: 9 non-nude: 6

INTERESTING https://telegra.ph/sasha-grey-XXXXX
  Nude: 9 non-nude: 6

INTERESTING https://telegra.ph/sasha-grey-XXXXX
  Nude: 6 non-nude: 3

Working with wordlists

In the simplest case (a not-too-big wordlist), just use -w, like:

# verbose, no-filtering (report all pages), use wordlist
nudecrawler -v -a -w wordlist.txt

If you have a very large wordlist, it is better to pre-check it with a faster tool like bulk-http-check. It does a much simpler check (we only need to filter 200 vs 404 responses) and can process millions of pages per hour even on the smallest VPS.

Convert wordlist to urllist

# only generate URLs 
nudecrawler -v -w wordlist.txt --urls > urls.txt

Verify it with bulk-http-check to get an output file in this format:

https://telegra.ph/abazhurah-02-26 OK 404
https://telegra.ph/ab-03-01 OK 200
https://telegra.ph/aaronov-02-22 OK 404
https://telegra.ph/abazhurami-02-25 OK 404

Filter it to keep only existing pages and strip the date suffix:

grep "OK 200" .local/urls-status.log | cut -f 1 -d" "| sed 's/-[0-9]\+-[0-9]\+$//g' | sort | uniq > .local/urs.txt

The resulting list (urls.txt) will look like:

https://telegra.ph/
https://telegra.ph/a
https://telegra.ph/ab
https://telegra.ph/aba
https://telegra.ph/Abakan
....

This list (~300 KB, 11k URLs) was created from a 1.5M-word Russian wordlist. It contains only words that had at least one page with that title during the last 10 days. So it has words like 'Анжелика' or 'Анфиса' (beautiful female names), but no words like 'Абажурами' or 'Абажуродержателем' (inflected forms of 'lampshade' and 'lampshade holder'), because there are no pages with these titles on telegra.ph.

Now you can use this file as a wordlist (nudecrawler will detect that the entries are already base URLs and will only append the date).
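The sketch below illustrates that behaviour (it is not nudecrawler's actual implementation): entries that already look like URLs are kept as-is and only get the date appended, while plain words are expanded to full telegra.ph URLs first.

import datetime

# Illustration of the wordlist/urllist handling described above.
def page_url(entry, day):
    base = entry if entry.startswith("http") else f"https://telegra.ph/{entry}"
    return f"{base}-{day.month:02d}-{day.day:02d}"

day = datetime.date(2023, 2, 26)                   # example date
print(page_url("https://telegra.ph/Abakan", day))  # urllist entry
print(page_url("Abakan", day))                     # plain wordlist entry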

Example usage:

bin/nudecrawler -w urls.txt --nude 5 -t 0.5 -d 30 -f 5 --stats .local/mystats.json  --log .local/nudecrawler.log 

This processes URLs from urls.txt, reports a page if it has 5+ nude images (or any one video, the default), applies a nudity threshold of 0.5, checks from today's date back to 30 days ago, appends all found pages to .local/nudecrawler.log, and saves periodic statistics to .local/mystats.json.

If the crawler sees page Sasha-Grey-01-23-100 but Sasha-Grey-01-23-101 is 404 Not Found, it will try -102 and so on. It stops only after 5 (-f) pages in a row fail.
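In rough Python, the probing loop looks like this (a sketch of the behaviour described above, not the actual implementation; a real run would analyse each existing page instead of just printing it):

import requests

# Keep probing -101, -102, ... and give up only after `max_fails`
# consecutive missing pages, as described above.
def probe(base, start, max_fails=5):
    fails, n = 0, start
    while fails < max_fails:
        url = f"{base}-{n}"
        if requests.get(url, timeout=10).status_code == 200:
            fails = 0
            print("exists:", url)   # a real crawler would analyse the page here
        else:
            fails += 1
        n += 1

probe("https://telegra.ph/Sasha-Grey-01-23", start=100)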

If you stop Nude Crawler for some reason, you can resume it: repeat the full command (you can take it from the stats file) and append --resume.

Options

usage: nudecrawler [-h] [-d DAYS] [--nude NUDE] [--video VIDEO] [-u URL] [-a] [-f FAILS]
                   [-t THRESHOLD] [--day MONTH DAY] [-v] [--unbuffered] [--urls] [--log LOG]
                   [-w WORDLIST] [--stats STATS] [--resume]
                   [words ...]

Telegra.ph Spider

positional arguments:
  words

optional arguments:
  -h, --help            show this help message and exit
  -d DAYS, --days DAYS
  --nude NUDE           Interesting if N nude images
  --video VIDEO         Interesting if N video
  -u URL, --url URL     process one url
  -a, --all             do not detect, print all found pages
  -f FAILS, --fails FAILS
                        stop searching next pages with same words after N failures
  -t THRESHOLD, --threshold THRESHOLD
                        nudity threshold (0..1), 0 will match almost everything
  --day MONTH DAY       Current date (default is today) example: --day 12 31

Output options:
  -v, --verbose         verbose
  --unbuffered, -b      Use unbuffered stdout
  --urls                Do not check, just generate and print URLs
  --log LOG             print all precious treasures to this logfile

list-related options:
  -w WORDLIST, --wordlist WORDLIST
                        wordlist (urllist) file
  --stats STATS         periodical statistics file
  --resume              skip all words before WORD in list, resume starting from it

Download files

Download the file for your platform.

Source Distribution

nudecrawler-0.0.13.tar.gz (10.5 kB)

Uploaded Source

Built Distribution

nudecrawler-0.0.13-py3-none-any.whl (9.4 kB)

Uploaded Python 3

File details

Details for the file nudecrawler-0.0.13.tar.gz.

File metadata

  • Download URL: nudecrawler-0.0.13.tar.gz
  • Upload date:
  • Size: 10.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.2

File hashes

Hashes for nudecrawler-0.0.13.tar.gz
Algorithm Hash digest
SHA256 418c6cc45c68ebedc6233353c969c561a340e9df0866f09f98d0551c83745bc9
MD5 c6881354ba1fdb7c8d391927844bf794
BLAKE2b-256 f9635d9bba831deec4ad8081ae0d419d6cb4c22b1cc63df42f9fb54b0037583f


File details

Details for the file nudecrawler-0.0.13-py3-none-any.whl.

File metadata

File hashes

Hashes for nudecrawler-0.0.13-py3-none-any.whl
Algorithm Hash digest
SHA256 50462684618b2f69a30d932e7b4972828fc428f77e7ed419e3d35f4e2ce2e87d
MD5 5ae0782bc6b0805c84e8844e21599541
BLAKE2b-256 33b52c57b780b4c7c413c2b781f01c1f2a25d082dfab2d2ed3c597ed45cf47d3

