# abstract_webtools

Abstract Web Tools is a Python package that provides utility functions for web-scraping tasks. It is built on top of popular libraries such as `requests`, `BeautifulSoup`, and `urllib3` to simplify the process of fetching and parsing web content.
## Installation
You can install the package via pip:
```bash
pip install abstract_webtools
```

## Usage
### Get Status Code

The `get_status` function fetches the status code of a URL.

```python
from abstract_webtools import get_status

status_code = get_status('https://example.com')
print(status_code)  # Output: HTTP status code of the URL (e.g. 200)
```

### Clean URL

The `clean_url` function returns a list of possible variations of a given URL.

```python
from abstract_webtools import clean_url

urls = clean_url('https://example.com')
print(urls)  # Output: ['https://example.com', 'http://example.com']
```
### Try Request

The `try_request` function makes HTTP requests to a URL and returns the response if successful.

```python
from abstract_webtools import try_request

response = try_request('https://www.example.com')
print(response)  # Output: <Response [200]>
```
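Because the example above shows `try_request` returning a `requests`-style response object, a typical follow-up is to guard on the result before reading the body. The sketch below assumes `try_request` returns a falsy value (e.g. `None`) when the request fails; that behaviour is an assumption, not something stated above.

```python
from abstract_webtools import try_request

# Assumption: try_request yields a requests.Response on success
# and a falsy value (e.g. None) on failure.
response = try_request('https://www.example.com')
if response and response.status_code == 200:
    print(response.text[:200])  # first 200 characters of the body
else:
    print("Request failed or returned a non-200 status")
```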
### Is Valid URL

The `is_valid` function checks whether a given URL is valid.

```python
from abstract_webtools import is_valid

valid = is_valid('https://www.example.com')
print(valid)  # Output: True
```
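`is_valid` pairs naturally with `try_request` as a cheap pre-flight check. A minimal sketch, assuming `is_valid` returns a boolean as in the example above; the list of candidate URLs is made up for illustration:

```python
from abstract_webtools import is_valid, try_request

candidate_urls = ['https://www.example.com', 'not a url']  # hypothetical inputs

for url in candidate_urls:
    if is_valid(url):
        print(url, '->', try_request(url))
    else:
        print(url, '-> skipped (invalid URL)')
```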
### Get Source Code

The `get_Source_code` function fetches the source code of a URL with a custom user-agent.

```python
from abstract_webtools import get_Source_code

source_code = get_Source_code('https://www.example.com')
print(source_code)  # Output: HTML source code of the URL
```
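Because the package is built on top of `BeautifulSoup`, the HTML returned by `get_Source_code` can be handed straight to `bs4` for further parsing. A minimal sketch, assuming the function returns the page source as a string:

```python
from bs4 import BeautifulSoup
from abstract_webtools import get_Source_code

source_code = get_Source_code('https://www.example.com')

# Parse the raw HTML and pull out the page title, if present.
soup = BeautifulSoup(source_code, 'html.parser')
print(soup.title.string if soup.title else 'No <title> found')
```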
### Parse React Source

The `parse_react_source` function fetches the source code of a URL and extracts JavaScript and JSX source code (React components).

```python
from abstract_webtools import parse_react_source

react_code = parse_react_source('https://www.example.com')
print(react_code)  # Output: List of JavaScript and JSX source code found in <script> tags
```
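Since the result is described above as a list of script sources, it can be inspected item by item. A small sketch, assuming each item is a plain string of JavaScript/JSX:

```python
from abstract_webtools import parse_react_source

react_code = parse_react_source('https://www.example.com')

# Assumption: react_code is a list of script/JSX source strings.
for index, script in enumerate(react_code):
    print(f"--- script {index}: {len(script)} characters ---")
    print(script[:200])  # preview only
```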
### Get All Website Links

The `get_all_website_links` function returns all URLs found on a specified URL that belong to the same website.

```python
from abstract_webtools import get_all_website_links

links = get_all_website_links('https://www.example.com')
print(links)  # Output: List of URLs belonging to the same website as the specified URL
```
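A common use of the link list is a shallow same-site crawl. The sketch below combines it with `try_request` from earlier; it assumes the function returns an iterable of URL strings:

```python
from abstract_webtools import get_all_website_links, try_request

links = get_all_website_links('https://www.example.com')

# Fetch each same-site link and report what came back.
for url in links:
    response = try_request(url)
    status = getattr(response, 'status_code', 'no response')
    print(url, '->', status)
```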
### Parse All

The `parse_all` function fetches the source code of a URL and extracts information about HTML elements, attribute values, attribute names, and class names.

```python
from abstract_webtools import parse_all

HTML_components = parse_all('https://www.example.com')
print(HTML_components["element_types"])     # Output: List of HTML element types
print(HTML_components["attribute_values"])  # Output: List of attribute values
print(HTML_components["attribute_names"])   # Output: List of attribute names
print(HTML_components["class_names"])       # Output: List of class names
```
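The dictionary returned by `parse_all` lends itself to quick summaries. A minimal sketch using only the standard library, assuming `element_types` contains one entry per element found on the page (if it is deduplicated, the counts will all be 1):

```python
from collections import Counter
from abstract_webtools import parse_all

HTML_components = parse_all('https://www.example.com')

# Tally how often each element type appears and show the five most common.
element_counts = Counter(HTML_components["element_types"])
for element, count in element_counts.most_common(5):
    print(f"{element}: {count}")
```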
### Extract Elements

The `extract_elements` function fetches the source code of a URL and extracts portions of the source code based on provided filters.

```python
from abstract_webtools import extract_elements

elements = extract_elements('https://www.example.com', element_type='div', attribute_name='class', class_name='container')
print(elements)  # Output: List of HTML elements that match the provided filters
```
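Since `extract_elements` returns a list of matching fragments, iterating over it is the usual next step. A short sketch, assuming each item is a printable snippet of HTML (the `div`/`container` filters are carried over from the example above):

```python
from abstract_webtools import extract_elements

elements = extract_elements(
    'https://www.example.com',
    element_type='div',
    attribute_name='class',
    class_name='container',
)

# Assumption: each item is a string (or printable object) of matching HTML.
for element in elements[:3]:
    print(element)
```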
## License
This project is licensed under the MIT License - see the LICENSE file for details.