FreeProxy: Collecting free proxies from the internet.

Project description



📚 Documentation

https://freeproxy.readthedocs.io/

⚡ Live Proxies (updated within the last 24 hours)

https://charlespikachu.github.io/freeproxy/


✨ What's New

  • 2025-12-03: Released pyfreeproxy v0.3.1, adding support for more proxy sources to make a massive proxy pool a reality.
  • 2025-12-03: Released pyfreeproxy v0.3.0, with code refactoring, removal of two low-quality free proxy sources, addition of multiple high-quality free proxy sources, and new features such as proxy rule filtering and more stable proxy scraping.
  • 2025-11-19: Released pyfreeproxy v0.2.2, fixing potential in-place modification bugs.
  • 2025-11-16: Released pyfreeproxy v0.2.1, adding support for ZdayeProxiedSession and FineProxyProxiedSession.
  • 2025-11-16: Released pyfreeproxy v0.2.0, refactoring the code to improve the quality of the retrieved proxies and adding support for fetching proxies from seven additional free proxy sources.

📘 Introduction

FreeProxy continuously discovers and updates lists of free proxies. If you find value here, please star the project to keep it on your radar.

๐ŸŒ Supported Proxy Sources

Proxy Source (EN) Proxy Source (CN) HTTP HTTPS SOCKS4 SOCKS5 Code Snippet
KuaidailiProxiedSession 快代理 ✔ ✔ ❌ ❌ kuaidaili.py
IP3366ProxiedSession 云代理 ✔ ✔ ❌ ❌ ip3366.py
IP89ProxiedSession IP89 ✔ ❌ ❌ ❌ ip89.py
QiyunipProxiedSession 齐云代理 ✔ ✔ ❌ ❌ qiyunip.py
ProxyhubProxiedSession ProxyHub ✔ ✔ ✔ ✔ proxyhub.py
ProxydbProxiedSession ProxyDB ✔ ✔ ❌ ✔ proxydb.py
Tomcat1235ProxiedSession 北极光代理 ❌ ❌ ❌ ✔ tomcat1235.py
ProxydailyProxiedSession ProxyDaily ✔ ✔ ✔ ✔ proxydaily.py
SpysoneProxiedSession SPYS.ONE ✔ ❌ ❌ ✔ spysone.py
FreeproxylistProxiedSession FreeProxyList ✔ ✔ ❌ ❌ freeproxylist.py
KxdailiProxiedSession 开心代理 ✔ ✔ ❌ ❌ kxdaili.py
ProxylistProxiedSession ProxyList ✔ ✔ ✔ ✔ proxylist.py
IhuanProxiedSession 小幻代理 ✔ ✔ ❌ ❌ ihuan.py
ProxiflyProxiedSession Proxifly ✔ ✔ ✔ ✔ proxifly.py
FineProxyProxiedSession FineProxy ✔ ✔ ✔ ✔ fineproxy.py
DatabayProxiedSession Databay ✔ ✔ ❌ ✔ databay.py
IPLocateProxiedSession IPLocate ✔ ✔ ✔ ✔ iplocate.py
JiliuipProxiedSession 积流代理 ✔ ❌ ❌ ❌ jiliuip.py
TheSpeedXProxiedSession TheSpeedX ✔ ❌ ✔ ✔ thespeedx.py
GeonodeProxiedSession Geonode ✔ ✔ ✔ ✔ geonode.py
FreeProxyDBProxiedSession FreeProxyDB ✔ ❌ ✔ ✔ freeproxydb.py

🎮 Playground

Here are some projects built on top of pyfreeproxy:

Project WeChat Article Project Location
ICU996 Using tens of thousands of data records to show who is actually opposing 996~ click

📦 Install

You have three installation methods to choose from:

# from pip
pip install pyfreeproxy
# from github repo method-1
pip install git+https://github.com/CharlesPikachu/freeproxy.git@master
# from github repo method-2
git clone https://github.com/CharlesPikachu/freeproxy.git
cd freeproxy
python setup.py install

Please note that some proxy sources need to be crawled using Playwright. Playwright will automatically download and configure the browser drivers, so there is no need to worry; it is not malware. For more details, you can refer to the official Playwright documentation.

🚀 Quick Start

Common Usage Scenarios

After installing freeproxy, you can write a script like this to gather basic statistics about the proxy sources you want to use and save all the retrieved proxies to a specified JSON file:

import json
import random
from tqdm import tqdm
from freeproxy.modules import BaseProxiedSession, ProxyInfo, BuildProxiedSession, printtable, colorize

'''main'''
def main():
    # proxy_sources
    proxy_sources = ['ProxiflyProxiedSession', 'KuaidailiProxiedSession', 'QiyunipProxiedSession', 'ProxylistProxiedSession']
    # iter scraping
    free_proxies = {}
    print_titles, print_items = ['Source', 'Retrieved Examples', 'HTTP', 'HTTPS', 'SOCKS4', 'SOCKS5', 'Chinese IP', 'Elite', 'Total'], []
    for proxy_source in tqdm(proxy_sources):
        try:
            module_cfg = {'max_pages': 1, 'type': proxy_source, 'disable_print': False}
            proxied_session: BaseProxiedSession = BuildProxiedSession(module_cfg=module_cfg)
            candidate_proxies: list[ProxyInfo] = proxied_session.refreshproxies()
        except Exception:
            candidate_proxies = []
        if len(candidate_proxies) > 0:
            example_proxy = random.choice(candidate_proxies).proxy
            count_http = sum([(p.protocol.lower() in ['http']) for p in candidate_proxies])
            count_https = sum([(p.protocol.lower() in ['https']) for p in candidate_proxies])
            count_socks4 = sum([(p.protocol.lower() in ['socks4']) for p in candidate_proxies])
            count_socks5 = sum([(p.protocol.lower() in ['socks5']) for p in candidate_proxies])
            count_cn = sum([p.in_chinese_mainland for p in candidate_proxies])
            count_elite = sum([(p.anonymity.lower() in ['elite']) for p in candidate_proxies])
            print_items.append([
                proxy_source.removesuffix('ProxiedSession'), colorize(example_proxy, 'green'), colorize(count_http, 'number'), colorize(count_https, 'number'), 
                colorize(count_socks4, 'number'), colorize(count_socks5, 'number'), colorize(count_cn, 'number'), colorize(count_elite, 'number'),
                colorize(len(candidate_proxies), 'number'),
            ])
        else:
            print_items.append([
                proxy_source.removesuffix('ProxiedSession'), 'NULL', colorize('0', 'number'), colorize('0', 'number'), 
                colorize('0', 'number'), colorize('0', 'number'), colorize('0', 'number'), colorize('0', 'number'),
                colorize(len(candidate_proxies), 'number'),
            ])
        free_proxies[proxy_source] = [p.todict() for p in candidate_proxies]
    # visualize scraping results
    print("The proxy distribution for each source you specified is as follows:")
    printtable(titles=print_titles, items=print_items, terminal_right_space_len=1)
    # save scraping results
    with open('free_proxies.json', 'w') as fp:
        json.dump(free_proxies, fp, indent=2)

'''tests'''
if __name__ == '__main__':
    main()

The terminal output of the above code will look roughly like this:

C:\Users\xxxx\Desktop>python naive_scraping_proxies.py
100%|████████████████████████████████████████| 4/4 [00:37<00:00,  9.38s/it]
The proxy distribution for each source you specified is as follows:
+-----------+------------------------------+------+-------+--------+--------+------------+-------+-------+
|   Source  |      Retrieved Examples      | HTTP | HTTPS | SOCKS4 | SOCKS5 | Chinese IP | Elite | Total |
+-----------+------------------------------+------+-------+--------+--------+------------+-------+-------+
|  Proxifly | socks4://69.61.200.104:36181 | 5975 |   0   |  706   |  365   |     21     |  4052 |  7046 |
| Kuaidaili | https://118.175.131.176:3128 |  20  |   13  |   0    |   0    |     18     |   33  |   33  |
|  Qiyunip  | http://112.244.231.189:9000  |  6   |   9   |   0    |   0    |     15     |   12  |   15  |
| Proxylist |  http://185.88.177.197:8936  | 287  |   8   |  114   |   83   |     33     |  368  |  492  |
+-----------+------------------------------+------+-------+--------+--------+------------+-------+-------+

All proxies will be saved to free_proxies.json in the current directory, in a format similar to this:

{
  "KuaidailiProxiedSession": [
    {
      "source": "KuaidailiProxiedSession",
      "protocol": "http",
      "ip": "58.216.109.17",
      "port": "800",
      "country_code": "CN",
      "in_chinese_mainland": true,
      "anonymity": "elite",
      "delay": 124,
      "test_timeout": 5,
      "test_url": "http://www.baidu.com",
      "test_headers": {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"
      },
      "failed_connection_default_timeout": 3600000,
      "created_at": "2025-12-03T12:43:25.018208",
      "extra": {}
    },
    ...
  ],
  "ProxiflyProxiedSession": [...],
  "QiyunipProxiedSession": [...],
  "ProxylistProxiedSession": [...],

In the above code, you can also set the max_pages argument to a larger value to obtain a larger number of high-quality proxies.
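
Once saved, the JSON file can be reloaded later without touching the network. As a minimal sketch (the summarize helper below is hypothetical, not part of freeproxy), you could count the proxies per protocol for each source:

```python
import json
from collections import Counter

def summarize(free_proxies):
    """Count proxies per protocol for each source in a free_proxies.json-style dict."""
    return {source: dict(Counter(p['protocol'] for p in proxies))
            for source, proxies in free_proxies.items()}

# Inline sample mirroring the structure produced by the scraping script above
sample = {
    'KuaidailiProxiedSession': [
        {'protocol': 'http', 'ip': '58.216.109.17', 'port': '800'},
        {'protocol': 'https', 'ip': '118.175.131.176', 'port': '3128'},
    ],
}
print(json.dumps(summarize(sample)))
```

In practice you would pass `json.load(open('free_proxies.json'))` instead of the inline sample.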

You can also enter the following command in the terminal to list all proxy sources supported by your current version of freeproxy:

python -c "from freeproxy.modules import ProxiedSessionBuilder; print(ProxiedSessionBuilder.REGISTERED_MODULES.keys())"

Sample output is as follows:

dict_keys([
  'ProxiflyProxiedSession', 'FreeproxylistProxiedSession', 'IhuanProxiedSession', 'IP89ProxiedSession', 'IP3366ProxiedSession', 
  'KuaidailiProxiedSession', 'KxdailiProxiedSession', 'ProxydailyProxiedSession', 'ProxydbProxiedSession', 'ProxyhubProxiedSession', 
  'ProxylistProxiedSession', 'QiyunipProxiedSession', 'SpysoneProxiedSession', 'Tomcat1235ProxiedSession'
])

Stricter Proxy Filtering Strategy

By default, freeproxy fetches all proxies provided by the specified free proxy sources, only validating their format and performing simple de-duplication. If you want to further filter the crawled proxies to obtain a higher-quality proxy pool, you can do so by setting the filter_rule argument.

For example, if you want to ensure that the proxy IPs are located in mainland China, you can do the following:

from freeproxy.modules.proxies import IP3366ProxiedSession

ip3366_session = IP3366ProxiedSession(filter_rule={'country_code': ['CN']})
# all obtained proxies can be accessed by `ip3366_session.candidate_proxies`
print(ip3366_session.getrandomproxy(proxy_format='freeproxy'))

Sample output is as follows:

ProxyInfo(
  source='IP3366ProxiedSession', 
  protocol='https', 
  ip='114.231.82.145', 
  port='8888', 
  country_code='CN', 
  in_chinese_mainland=True, 
  anonymity='elite', 
  delay=3000, 
  test_timeout=5, 
  test_url='http://www.baidu.com', 
  test_headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36'}, 
  failed_connection_default_timeout=3600000, 
  created_at=datetime.datetime(2025, 12, 3, 14, 3, 42, 535847), 
  extra={}
)

If you want high-anonymity proxies whose addresses are in the United States, you can do the following:

from freeproxy.modules.proxies import SpysoneProxiedSession

spy_session = SpysoneProxiedSession(filter_rule={'anonymity': ['elite'], 'country_code': ['US']})
# all obtained proxies can be accessed by `spy_session.candidate_proxies`
print(spy_session.getrandomproxy(proxy_format='freeproxy'))

Sample output is as follows:

ProxyInfo(
  source='SpysoneProxiedSession', 
  protocol='http', 
  ip='88.216.98.209', 
  port=48852, 
  country_code='US',
  in_chinese_mainland=False, 
  anonymity='elite', 
  delay=21515, 
  test_timeout=5, 
  test_url='http://www.baidu.com', 
  test_headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36'}, 
  failed_connection_default_timeout=3600000, 
  created_at=datetime.datetime(2025, 12, 3, 14, 17, 41, 502702), 
  extra={}
)

The anonymity argument can be either a single string or a list. The available options include elite (high anonymity proxies), anonymous (standard anonymous proxies), and transparent (transparent proxies).
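
Because rule values such as anonymity may be a single string or a list, a rule matcher can normalize them first. The sketch below only illustrates the idea; it is not freeproxy's internal implementation:

```python
def normalize(value):
    """Wrap a single string into a list so every rule value is list-shaped."""
    return [value] if isinstance(value, str) else list(value)

def matches(proxy, filter_rule):
    """Keep a proxy only if, for every rule field, its value is among the allowed ones."""
    return all(proxy.get(field) in normalize(allowed)
               for field, allowed in filter_rule.items())

proxy = {'anonymity': 'elite', 'country_code': 'US', 'protocol': 'http'}
print(matches(proxy, {'anonymity': 'elite'}))                                      # single string
print(matches(proxy, {'anonymity': ['elite', 'anonymous'], 'country_code': ['US']}))  # lists
```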

If you have specific requirements for the proxy type, you can set the protocol argument. If you care about server response speed, you can set the max_tcp_ms or max_http_ms arguments. An example code snippet can be written as follows:

from freeproxy.modules.proxies import FreeproxylistProxiedSession

fpl_session = FreeproxylistProxiedSession(filter_rule={'protocol': ['http', 'https'], 'max_tcp_ms': 10000, 'max_http_ms': 10000})
# all obtained proxies can be accessed by `fpl_session.candidate_proxies`
print(fpl_session.getrandomproxy(proxy_format='freeproxy'))

The above code means that it only retrieves HTTP and HTTPS proxies, requires the TCP connection latency between your machine and the proxy server to be no more than 10,000 ms, and also requires the proxy server's response time when requesting the test_url (which defaults to http://www.baidu.com) to be no more than 10,000 ms.

The protocol argument can be either a single string or a list. The available options include http, https, socks4 and socks5.

Currently, the implementation of freeproxy does not use asynchronous operations or spawn a large number of threads for parallel testing. Therefore, when too many proxies are scraped, setting max_tcp_ms or max_http_ms can cause the program to freeze for a long time. In general, it is not recommended to use these two speed-test arguments during crawling; if needed, you can run a separate script to test the proxies after the crawl is finished.
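
Such a separate post-crawl check could, for example, measure TCP connect latency with the standard library and drop slow entries in parallel. This is only one possible approach, sketched here with stdlib tools, not freeproxy's own tester:

```python
import socket
import time
from concurrent.futures import ThreadPoolExecutor

def tcp_latency_ms(ip, port, timeout=5):
    """Return the TCP connect time in milliseconds, or None if the connection fails."""
    start = time.monotonic()
    try:
        with socket.create_connection((ip, int(port)), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None

def filter_fast(proxies, max_tcp_ms=10000, workers=20):
    """Keep only proxies whose TCP connect latency is within max_tcp_ms."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(lambda p: tcp_latency_ms(p['ip'], p['port']), proxies))
    return [p for p, ms in zip(proxies, latencies) if ms is not None and ms <= max_tcp_ms]
```

Feeding it the entries loaded from free_proxies.json keeps the crawl itself fast while still yielding a speed-tested pool.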

freeproxy.freeproxy.ProxiedSessionClient

ProxiedSessionClient provides a unified interface for all supported proxy sources. You can call it as shown in the following example:

from freeproxy.freeproxy import ProxiedSessionClient

proxy_sources = ['KuaidailiProxiedSession']
init_proxied_session_cfg = {'filter_rule': {'country_code': ['CN', 'US']}}
proxied_session_client = ProxiedSessionClient(proxy_sources=proxy_sources, init_proxied_session_cfg=init_proxied_session_cfg)
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36'
}
resp = proxied_session_client.get('https://space.bilibili.com/406756145', headers=headers)
print(resp.text)

When using freeproxy as a third-party package, if you don't want it to print too much extra information, you can set disable_print=True, for example:

from freeproxy import freeproxy

proxy_sources = ['ProxydbProxiedSession']
proxied_session_client = freeproxy.ProxiedSessionClient(proxy_sources=proxy_sources, disable_print=True)

ProxiedSessionClient automatically maintains a proxy pool in which all proxies satisfy the filter_rule criteria. Each time you call the .get or .post method, it will consume at least one proxy from the pool, and when the pool is running low, it will automatically fetch and replenish more proxies. The usage of .get and .post is exactly the same as requests.get and requests.post.
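
The pool-replenishment behaviour described above can be pictured with a small sketch. Everything here (the ProxyPool class and its stub fetch callback) is hypothetical and only illustrates the consume-and-refill pattern, not freeproxy's actual code:

```python
import random

class ProxyPool:
    """Toy pool: pops one proxy per request and refills when it runs low."""
    def __init__(self, fetch, low_water=2):
        self.fetch = fetch          # callable returning a fresh list of proxies
        self.low_water = low_water  # refill threshold
        self.pool = []

    def get_proxy(self):
        if len(self.pool) <= self.low_water:
            self.pool.extend(self.fetch())  # replenish from the sources
        return self.pool.pop(random.randrange(len(self.pool)))

# Stub fetcher standing in for scraping the configured proxy sources
fetch = lambda: [f'http://10.0.0.{i}:8080' for i in range(1, 6)]
pool = ProxyPool(fetch)
print(pool.get_proxy())  # the pool is empty at first, so it gets filled before popping
```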

The arguments supported when initializing the ProxiedSessionClient class are as follows:

  • proxy_sources (list): The proxy sources to use. See Supported Proxy Sources for the currently supported ones, or run from freeproxy.modules import ProxiedSessionBuilder; print(ProxiedSessionBuilder.REGISTERED_MODULES.keys()).
  • init_proxied_session_cfg (dict): Accepts the same options as requests.Session, plus two extra fields: max_pages, which sets how many pages of proxies to fetch from each free source, and filter_rule, which sets the rules for filtering the fetched proxies.
  • disable_print (bool): Whether to suppress proxy usage logs in the terminal.
  • max_tries (int): The maximum number of attempts for each .get and .post call.
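
The max_tries behaviour can be pictured as a retry loop that draws a fresh proxy on each failed attempt. The sketch below is a generic illustration of that pattern, not freeproxy's actual code:

```python
def request_with_retries(do_request, get_proxy, max_tries=3):
    """Try a request up to max_tries times, drawing a new proxy for each attempt."""
    last_error = None
    for _ in range(max_tries):
        proxy = get_proxy()
        try:
            return do_request(proxy)
        except Exception as err:  # a real client would catch narrower errors
            last_error = err
    raise last_error

# Stub demo: the first two "proxies" fail, the third succeeds
attempts = []
def fake_request(proxy):
    attempts.append(proxy)
    if len(attempts) < 3:
        raise ConnectionError(f'{proxy} unreachable')
    return 'ok'

print(request_with_retries(fake_request, lambda: f'proxy-{len(attempts)}'))
```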

You can refer to freeproxy's source code to unlock more features. The overall codebase is not very large.

โญ Recommended Projects

  • Games: Create interesting games in pure python.
  • DecryptLogin: APIs for logging in to some websites using requests.
  • Musicdl: A lightweight music downloader written in pure python.
  • Videodl: A lightweight video downloader written in pure python.
  • Pytools: Some useful tools written in pure python.
  • PikachuWeChat: Play WeChat with itchat-uos.
  • Pydrawing: Beautify your image or video.
  • ImageCompressor: Image compressors written in pure python.
  • FreeProxy: Collecting free proxies from the internet.
  • Paperdl: Search and download paper from specific websites.
  • Sciogovterminal: Browse "The State Council Information Office of the People's Republic of China" in the terminal.
  • CodeFree: Make no code a reality.
  • DeepLearningToys: Some deep learning toys implemented in pytorch.
  • DataAnalysis: Some data analysis projects in charles_pikachu.
  • Imagedl: Search and download images from specific websites.
  • Pytoydl: A toy deep learning framework built upon numpy.
  • NovelDL: Search and download novels from some specific websites.

📚 Citation

If you use this project in your research, please cite the repository.

@misc{freeproxy2022,
    author = {Zhenchao Jin},
    title = {FreeProxy: Collecting free proxies from internet},
    year = {2022},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/CharlesPikachu/freeproxy}},
}

🌟 Star History

Star History Chart

☕ Appreciation (Donations)

WeChat Appreciation QR Code | Alipay Appreciation QR Code

📱 WeChat Official Account:

Charles็š„็šฎๅกไธ˜ (Charles_pikachu)
img


Download files

Download the file for your platform.

Source Distribution

pyfreeproxy-0.3.1.tar.gz (40.1 kB)

Uploaded Source

Built Distribution


pyfreeproxy-0.3.1-py3-none-any.whl (57.4 kB)

Uploaded Python 3

File details

Details for the file pyfreeproxy-0.3.1.tar.gz.

File metadata

  • Download URL: pyfreeproxy-0.3.1.tar.gz
  • Upload date:
  • Size: 40.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.14

File hashes

Hashes for pyfreeproxy-0.3.1.tar.gz
Algorithm Hash digest
SHA256 33dd75efd470d49b251887bcde8a66b7477bec187c940564e8ef91efc2b8315b
MD5 f67ac5cd6815d1183f5cae97abae6cac
BLAKE2b-256 434e9737984ebf13632dc99823ceef4b35acc1d09130ae10e0392b26f87298e7


File details

Details for the file pyfreeproxy-0.3.1-py3-none-any.whl.

File metadata

  • Download URL: pyfreeproxy-0.3.1-py3-none-any.whl
  • Upload date:
  • Size: 57.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.14

File hashes

Hashes for pyfreeproxy-0.3.1-py3-none-any.whl
Algorithm Hash digest
SHA256 f00513c23c35c5a8ed806e948d3b669f2f172ed0a232ade08cbdf2f923132dd6
MD5 51b929bb7d31b66e9494b657846c184c
BLAKE2b-256 e478cfe5825f423351c1e6ae51aa7304909a269dfceb1e7322b2d2dfff4ca389

