Abstract Web Tools is a Python package that provides various utility functions for web scraping tasks. It is built on top of popular libraries such as `requests`, `BeautifulSoup`, and `urllib3` to simplify the process of fetching and parsing web content.
# abstract_webtools: tools for parsing web content
## Installation
You can install the package via pip:
```bash
pip install abstract_webtools
```

## Usage
### Get Status Code

The `get_status` function fetches the status code of the URL.
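A minimal usage sketch, assuming `get_status` returns the integer HTTP status code:

```python
from abstract_webtools import get_status

# Assumes get_status returns the integer HTTP status code of the URL.
status_code = get_status('https://example.com')
print(status_code)  # e.g. 200
```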
### Clean URL

The `clean_url` function returns a list of URL variations, including the `https` and `http` forms.

```python
from abstract_webtools import clean_url

urls = clean_url('https://example.com')
print(urls)  # Output: ['https://example.com', 'http://example.com']
```
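The two helpers can be combined, for example to probe each URL variant and keep the first one that answers; a rough sketch, assuming `get_status` returns an integer status code (or `None` on failure):

```python
from abstract_webtools import clean_url, get_status

# Try each variant produced by clean_url and keep the first that returns HTTP 200.
# Assumes get_status returns an integer status code (or None on failure).
working = None
for candidate in clean_url('https://example.com'):
    if get_status(candidate) == 200:
        working = candidate
        break

print(working)
```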
### Try Request

The `try_request` function makes HTTP requests to a URL and returns the response if successful.

```python
from abstract_webtools import try_request

response = try_request('https://www.example.com')
print(response)  # Output: <Response [200]>
```
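Because `try_request` only returns the response when the request succeeds, it is worth guarding against a missing response before using it; a minimal sketch, assuming a failed request yields `None`:

```python
from abstract_webtools import try_request

response = try_request('https://www.example.com')

# Guard against a failed request before touching the response.
# Assumes try_request returns None when the request does not succeed.
if response is not None:
    print(response.status_code)  # e.g. 200
    print(len(response.text))    # size of the fetched document
else:
    print('Request failed')
```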
### Is Valid URL

The `is_valid` function checks whether a given URL is valid.

```python
from abstract_webtools import is_valid

valid = is_valid('https://www.example.com')
print(valid)  # Output: True
```
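A small sketch of how `is_valid` can be used to filter a list of candidate URLs before fetching them (the candidate strings below are only placeholders):

```python
from abstract_webtools import is_valid

candidates = [
    'https://www.example.com',
    'not a url',
    'ftp://files.example.com',
]

# Keep only the entries that is_valid accepts.
valid_urls = [url for url in candidates if is_valid(url)]
print(valid_urls)
```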
### Get Source Code

The `get_Source_code` function fetches the source code of a URL with a custom user-agent.

```python
from abstract_webtools import get_Source_code

source_code = get_Source_code('https://www.example.com')
print(source_code)  # Output: HTML source code of the URL
```
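Since the package is built on top of BeautifulSoup, the fetched source can be fed straight into it for further parsing; a minimal sketch, assuming `get_Source_code` returns the HTML as a string:

```python
from bs4 import BeautifulSoup
from abstract_webtools import get_Source_code

# Fetch the raw HTML and parse it with BeautifulSoup.
# Assumes get_Source_code returns the page source as a string.
source_code = get_Source_code('https://www.example.com')
soup = BeautifulSoup(source_code, 'html.parser')

print(soup.title.string if soup.title else 'No <title> found')
```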
### Parse React Source

The `parse_react_source` function fetches the source code of a URL and extracts JavaScript and JSX source code (React components).

```python
from abstract_webtools import parse_react_source

react_code = parse_react_source('https://www.example.com')
print(react_code)  # Output: List of JavaScript and JSX source code found in <script> tags
```
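If you want to inspect the extracted scripts offline, one option is to write each entry to its own file; a sketch assuming `parse_react_source` returns a list of script bodies as strings:

```python
from pathlib import Path
from abstract_webtools import parse_react_source

# Assumes parse_react_source returns a list of script/JSX bodies as strings.
react_code = parse_react_source('https://www.example.com')

out_dir = Path('extracted_scripts')
out_dir.mkdir(exist_ok=True)

# Write each extracted script to its own numbered file for inspection.
for i, script in enumerate(react_code):
    (out_dir / f'script_{i}.js').write_text(script)

print(f'Wrote {len(react_code)} script(s) to {out_dir}/')
```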
### Get All Website Links

The `get_all_website_links` function returns all URLs found on a specified URL that belong to the same website.

```python
from abstract_webtools import get_all_website_links

links = get_all_website_links('https://www.example.com')
print(links)  # Output: List of URLs belonging to the same website as the specified URL
```
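A rough sketch of a small crawler built on `get_all_website_links` and `is_valid`, visiting each internal link at most once; the page limit and control flow are only illustrative, and a real crawler would also want rate limiting and error handling:

```python
from abstract_webtools import get_all_website_links, is_valid

start_url = 'https://www.example.com'
MAX_PAGES = 20  # stop after this many pages (illustrative limit)

seen = set()
to_visit = [start_url]

# Visit pages breadth-last, collecting internal links without revisiting URLs.
while to_visit and len(seen) < MAX_PAGES:
    url = to_visit.pop()
    if url in seen or not is_valid(url):
        continue
    seen.add(url)
    for link in get_all_website_links(url):
        if link not in seen:
            to_visit.append(link)

print(f'Discovered {len(seen)} page(s)')
```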
### Parse All

The `parse_all` function fetches the source code of a URL and extracts information about HTML elements, attribute values, attribute names, and class names.

```python
from abstract_webtools import parse_all

element_types, attribute_values, attribute_names, class_names = parse_all('https://www.example.com')
print(element_types)     # Output: List of HTML element types
print(attribute_values)  # Output: List of attribute values
print(attribute_names)   # Output: List of attribute names
print(class_names)       # Output: List of class names
```
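The four parallel lists can be summarised quickly with `collections.Counter`, for example to see which element types and class names dominate a page; a sketch assuming each returned value is a flat list of strings:

```python
from collections import Counter
from abstract_webtools import parse_all

element_types, attribute_values, attribute_names, class_names = parse_all('https://www.example.com')

# Assumes each returned value is a flat list of strings.
print(Counter(element_types).most_common(5))  # most frequent HTML tags
print(Counter(class_names).most_common(5))    # most frequent class names
```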
### Extract Elements

The `extract_elements` function fetches the source code of a URL and extracts portions of the source code based on provided filters.

```python
from abstract_webtools import extract_elements

elements = extract_elements('https://www.example.com', element_type='div', attribute_name='class', class_name='container')
print(elements)  # Output: List of HTML elements that match the provided filters
```
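Putting a few of the helpers together, a minimal end-to-end sketch might validate the URL, confirm it responds, and then pull out the matching elements; the target URL and class name are placeholders, and `get_status` is assumed to return an integer status code:

```python
from abstract_webtools import is_valid, get_status, extract_elements

url = 'https://www.example.com'

# Validate the URL and confirm the server answers before extracting anything.
# Assumes get_status returns an integer HTTP status code.
if is_valid(url) and get_status(url) == 200:
    elements = extract_elements(url, element_type='div',
                                attribute_name='class', class_name='container')
    print(f'Found {len(elements)} matching element(s)')
else:
    print('URL is invalid or unreachable')
```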
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Hashes for abstract_webtools-0.0.13-py3.11.egg

| Algorithm | Hash digest |
|---|---|
| SHA256 | b3e3bf0f40f386e672135a555c3d79b08a8a691ea78640b1cb21f1309f8bdb92 |
| MD5 | 7af78a2141adc00c5a3e4a356f932961 |
| BLAKE2b-256 | dcbdf22051f185e6f615937d0554ddc9dc81824c44ccf6a1931267a5fcbc3fa3 |
## Hashes for abstract_webtools-0.0.13-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 41600e86d18dfe7e658273fc1a81979c9c70356ded9e79e1a5a5f572d4ce5ec5 |
| MD5 | 9c8d2b22dcf1578609c86f0483f86dba |
| BLAKE2b-256 | b119c61d7c7eafe9a09e852f3dcf6373cd956ad1c3615cf4c63d3945310a640f |