Abstract Web Tools is a Python package that provides various utility functions for web scraping tasks. It is built on top of popular libraries such as `requests`, `BeautifulSoup`, and `urllib3` to simplify the process of fetching and parsing web content.
# abstract_webtools

Tools for fetching and parsing web content.
## Installation
You can install the package via pip:
```bash
pip install abstract_webtools
```

## Usage
### Get Status Code

The `get_status` function fetches the status code of a URL.

```python
from abstract_webtools import get_status

status = get_status('https://example.com')
print(status)  # Output: 200
```

### Clean URL

The `clean_url` function returns the common variations of a URL (for example, its `https://` and `http://` forms).

```python
from abstract_webtools import clean_url

urls = clean_url('https://example.com')
print(urls)  # Output: ['https://example.com', 'http://example.com']
```
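Putting the two together: the sketch below (purely illustrative, using an arbitrary example URL) checks the status code of every variant that `clean_url` reports.

```python
from abstract_webtools import clean_url, get_status

# Check which variants of the URL actually respond.
for url in clean_url('https://example.com'):
    print(url, '->', get_status(url))
```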
### Try Request

The `try_request` function makes HTTP requests to a URL and returns the response if successful.

```python
from abstract_webtools import try_request

response = try_request('https://www.example.com')
print(response)  # Output: <Response [200]>
```
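The value shown above is a standard `requests` response object. As a minimal sketch, assuming `try_request` returns a falsy value such as `None` when the request fails, the call can be guarded like this:

```python
from abstract_webtools import try_request

response = try_request('https://www.example.com')
if response:  # assumed to be None (falsy) when the request fails
    print(response.status_code)                 # e.g. 200
    print(len(response.text), 'characters of HTML')
else:
    print('Request failed')
```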
### Is Valid URL

The `is_valid` function checks whether a given URL is valid.

```python
from abstract_webtools import is_valid

valid = is_valid('https://www.example.com')
print(valid)  # Output: True
```
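In practice `is_valid` works well as a cheap guard before fetching anything. A minimal sketch combining it with `try_request` (the URL list is illustrative):

```python
from abstract_webtools import is_valid, try_request

candidates = ['https://www.example.com', 'not a url']
for url in candidates:
    if is_valid(url):
        # Only fetch URLs that pass validation.
        print(url, try_request(url))
    else:
        print(url, 'skipped: not a valid URL')
```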
### Get Source Code

The `get_Source_code` function fetches the source code of a URL with a custom user-agent.

```python
from abstract_webtools import get_Source_code

source_code = get_Source_code('https://www.example.com')
print(source_code)  # Output: HTML source code of the URL
```
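The return value is plain HTML, so it can be fed straight into a parser. A minimal sketch using BeautifulSoup (one of the libraries the package is built on) to pull out the page title:

```python
from bs4 import BeautifulSoup
from abstract_webtools import get_Source_code

source_code = get_Source_code('https://www.example.com')
soup = BeautifulSoup(source_code, 'html.parser')
# Print the page title, if the document has one.
print(soup.title.string if soup.title else 'No <title> found')
```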
### Parse React Source

The `parse_react_source` function fetches the source code of a URL and extracts JavaScript and JSX source code (React components).

```python
from abstract_webtools import parse_react_source

react_code = parse_react_source('https://www.example.com')
print(react_code)  # Output: List of JavaScript and JSX source code found in <script> tags
```
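Since the result is a list of script bodies, it is easy to save them for inspection. A minimal sketch (the output directory and file names are arbitrary) that writes each extracted script to its own file:

```python
from pathlib import Path
from abstract_webtools import parse_react_source

scripts = parse_react_source('https://www.example.com')
out_dir = Path('extracted_scripts')
out_dir.mkdir(exist_ok=True)
for i, script in enumerate(scripts):
    # One file per <script> body found on the page.
    (out_dir / f'script_{i}.js').write_text(script)
```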
### Get All Website Links

The `get_all_website_links` function returns all URLs found on a specified URL that belong to the same website.

```python
from abstract_webtools import get_all_website_links

links = get_all_website_links('https://www.example.com')
print(links)  # Output: List of URLs belonging to the same website as the specified URL
```
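This pairs naturally with `get_status` for a simple internal link check. A minimal sketch (purely illustrative) that reports the status code of every discovered link:

```python
from abstract_webtools import get_all_website_links, get_status

for link in get_all_website_links('https://www.example.com'):
    # Report the HTTP status of each internal link that was found.
    print(get_status(link), link)
```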
### Parse All

The `parse_all` function fetches the source code of a URL and extracts information about HTML elements, attribute values, attribute names, and class names.

```python
from abstract_webtools import parse_all

HTML_components = parse_all('https://www.example.com')
print(HTML_components["element_types"])     # Output: List of HTML element types
print(HTML_components["attribute_values"])  # Output: List of attribute values
print(HTML_components["attribute_names"])   # Output: List of attribute names
print(HTML_components["class_names"])       # Output: List of class names
```
### Extract Elements

The `extract_elements` function fetches the source code of a URL and extracts portions of the source code based on provided filters.

```python
from abstract_webtools import extract_elements

elements = extract_elements('https://www.example.com', element_type='div', attribute_name='class', class_name='container')
print(elements)  # Output: List of HTML elements that match the provided filters
```
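A minimal sketch (the filter values are illustrative) that reports how many elements matched and shows the first one:

```python
from abstract_webtools import extract_elements

# Match every <div> on the page whose class is 'container'.
elements = extract_elements('https://www.example.com',
                            element_type='div',
                            attribute_name='class',
                            class_name='container')
print(f'{len(elements)} matching element(s) found')
if elements:
    print(elements[0])
```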
## License

This project is licensed under the MIT License - see the LICENSE file for details.