
network utilities


net module 0.3.6 documentation

Author: JoStudio, Date: 2022/9/28

net Module

The net package provides tools for network ping, port scanning, sending e-mail, HTTP requests, web spidering, and accessing web APIs.

net.binary submodule

Read and write binary data in memory or in a file.
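The net.binary API itself is not shown on this page. As a rough illustration of what reading and writing binary data in memory involves, the standard library's struct and io modules do the same job:

```python
# Standard-library sketch of in-memory binary read/write.
# This illustrates the concept only; net.binary's own API may differ.
import io
import struct

buf = io.BytesIO()                      # in-memory binary buffer
buf.write(struct.pack("<IH", 1024, 7))  # a uint32 and a uint16, little-endian
buf.seek(0)
value32, value16 = struct.unpack("<IH", buf.read(6))
print(value32, value16)  # 1024 7
```

The same struct format strings work against a real file opened in binary mode.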

net.dns submodule

DNS protocol client / server
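net.dns's own API is not documented on this page. As background on what a DNS client speaks at the wire level, here is a sketch of the RFC 1035 query format built with only the standard library; `build_dns_query` and its parameters are illustrative names, not part of net.dns:

```python
# Build a raw DNS query packet (RFC 1035) for an A record.
# Illustration only; not net.dns's actual implementation.
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Return the bytes of a DNS A-record query for `hostname`."""
    # Header: ID, flags (RD=1), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    qtype_qclass = struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + qname + qtype_qclass

packet = build_dns_query("www.example.com")
# A client would send this packet over UDP to a resolver on port 53.
```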

net.mail submodule

e-mail sender

# Send e-mail example



from net import Mail



# Set parameters

username = 'xxxxxxx@host.com'   # username for logging in to the SMTP server

password = 'xxxxxxxxxx'         # password for logging in to the SMTP server

receiver = 'xxxxxxxx@host.com'  # receiver's e-mail address



mail = Mail(username, password)  # Create mail object



# Send mail with attachment file '1.jpg'

mail.send([receiver], 'My Subject', "This is body", ['1.jpg'])

net.scan submodule

Network ping and scan functions

# Net ping and scan examples



from net import Net



# ping a server (or an IP address)

t = Net.ping("www.bing.com")  # returns round-trip time in milliseconds; -1 means not reachable

print('milliseconds', t)



# get IP address of this computer

my_ip = Net.local_ip()



# create an IP range (a list of IP addresses)

ip_list = Net.ip_range(my_ip, 1, 100)

print(ip_list)



# scan the IPs, returning the list of pingable IPs

exists_ips = Net.ip_scan(ip_list)

print(exists_ips)



# check whether a port on the specified IP is open

if Net.is_port_open(my_ip, 80):

    print('port 80 of', my_ip, 'is open')

else:

    print('port 80 of', my_ip, 'is not open')





# scan a list of ports on the specified IP address; returns the list of open ports

port_list = Net.port_scan(my_ip, [80, 8080, 21, 22, 443, 445])

print('opened ports', port_list)
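Under the hood, a TCP port check like Net.is_port_open() is typically a timed connect attempt. A minimal standard-library sketch (the function name mirrors the API above, but this is an independent illustration, not net.scan's actual code):

```python
# Standard-library sketch of a TCP port check:
# connect_ex() returns 0 when the connection succeeds.
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0
```

A port scan is then just this check applied to each port in a list, usually with a short timeout and a thread pool to keep the scan fast.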

net.spider submodule

Web spider: fetch web pages, extract words from the content, and download files.

# Spider usage examples:



from net import Spider





# create a Spider object for specified url

url = "https://www.python.org/"

spider = Spider(url)





# You can use Spider.find_xxx() methods to find information in the web page



# find all links in the page

link_urls = spider.find_links()

print('links:', link_urls)



# find the links after 'Latest News'

link_urls = spider.find_links('Latest News')

print('links:', link_urls)





# find the links after 'Latest News' whose URLs contain "blog."

link_urls = spider.find_links('Latest News', contains="blog.")

print('links:', link_urls)





# find the urls of the images in the page

img_urls = spider.find_images()

print('images:', img_urls)



if len(img_urls) > 0:

    # create a new spider to download the first image URL, saving it as 'pic.xxx'

    # (the file extension is added automatically)

    filename = Spider(url, img_urls[0]).download("pic")

    print('file %s downloaded' % repr(filename))



print("-----------------------------------")



# find the code blocks in the page

codes = spider.find_codes()

if codes:

    print('code:\n', codes[0])



print("-----------------------------------")



# find the list items after 'Latest News'

words = spider.find_list_items('Latest News')

print('Latest News:', words)



# find the text of the paragraph after 'Download'

text = spider.find_paragraph('Download')

print('Download paragraph: ', text)



# find the tables in a page; text_only=True extracts the cell text
tables = Spider("https://www.w3school.com.cn/tags/tag_table.asp").find_tables(text_only=True)

if len(tables) > 0:

    print('table 0:', tables[0].to_list())





# Advanced find example

#

# by understanding the structure of the web page's HTML source, you can find words directly in the HTML



# example: find the text of menu items

begin = ['<ul', 'menu']      # find '<ul' tag and 'menu' class as the beginning

end = ['</ul>']              # find '</ul>' tag as the ending

# word is the menu item text

before = ['<li', '<a', '>']  # find '<li' and '<a' and '>' which is before the word

after = ['</a>']             # find '</a>' which is after the word

words = spider.find_list(before, after, begin, end)

print("menus:", words)   # ['Python', 'PSF', 'Docs', 'PyPI', 'Jobs', 'Community']



# find the text of menu items and its links

# word1 is the link

before1 = ['<li', '<a', 'href="']  # find '<li' and '<a' and 'href="' which is before the word1

after1 = ['"']     # find '"' which is after the word1



# word2 is the menu text

before2 = ['>']  # find '>' which is before the word2, after word1

after2 = ['</a>']  # find '</a>' which is after the word2



# compose a list definition; each item of the list is a word definition (before, after)

betweens = [(before1, after1), (before2, after2)]



# perform finding

words_list = spider.find_words_list(betweens, begin, end)

print("menus2:", words_list)  # words_list will be a list, each item is a list of two words

# the result is  [['/', 'Python'], ['/psf-landing/', 'PSF'], ...
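The before/after marker scheme above can be sketched in plain Python: advance past each 'before' marker in turn, then cut at the first 'after' marker. `find_word` below is a hypothetical helper for illustration, not Spider's actual implementation:

```python
# Marker-based extraction sketch: skip past each "before" marker in
# order, then return the text up to the first "after" marker.
def find_word(html, before, after):
    pos = 0
    for marker in before:
        pos = html.index(marker, pos) + len(marker)
    end = html.index(after[0], pos)
    return html[pos:end]

html = '<ul class="menu"><li><a href="/">Python</a></li></ul>'
print(find_word(html, ['<li', '<a', '>'], ['</a>']))      # Python
print(find_word(html, ['<li', '<a', 'href="'], ['"']))    # /
```

Applying such a helper repeatedly between the begin/end markers yields the word lists that find_list() and find_words_list() return.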



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

jojo-net-0.3.7.tar.gz (66.9 kB)

Uploaded Source

File details

Details for the file jojo-net-0.3.7.tar.gz.

File metadata

  • Download URL: jojo-net-0.3.7.tar.gz
  • Upload date:
  • Size: 66.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.7.0

File hashes

Hashes for jojo-net-0.3.7.tar.gz

  • SHA256: 5b9afa4ddf8c59988d63f7be590f74af0ff14691231d439ec73be6146f9e095d
  • MD5: 91b114b9859d7aa809457f698ba88104
  • BLAKE2b-256: 79b0f3352ac66952f8d5ce19f827eac006d28ebf7ce2eb247a13ff6ff5ca2b87
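To verify a downloaded archive against the published SHA256 digest, the standard library's hashlib suffices; `sha256_of` is an illustrative helper, not part of the net package:

```python
# Compute a file's SHA256 digest in chunks and compare it to the
# published value. Illustration using only the standard library.
import hashlib

def sha256_of(path, chunk_size=65536):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "5b9afa4ddf8c59988d63f7be590f74af0ff14691231d439ec73be6146f9e095d"
# assert sha256_of("jojo-net-0.3.7.tar.gz") == expected
```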

