Articulo

Tiny library for extracting articles from HTML.
It can extract the content of an article, both as plain text and as HTML markup, as well as its title.

Usage

Basic usage

This library is designed to be as simple as possible.
To start using it, just import it and instantiate it with the link you want to parse as a parameter.

The library is also designed to work lazily: it does not send any HTTP requests until you access one of the article's properties.

from articulo import Articulo

# Step 1: initializing Articulo instance
article = Articulo('https://info.cern.ch/')

# Step 2: requesting article properties. All properties resolve lazily.
print(article.title) # article title as a string
print(article.text) # article content as plain text
print(article.markup) # article content as an HTML markup string
print(article.icon) # link to the article icon
print(article.description) # article meta description
print(article.preview) # link to the article meta preview image
print(article.keywords) # list of article meta keywords
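Lazy resolution like this is commonly implemented by caching a property on first access. The sketch below uses a hypothetical `LazyArticle` class to illustrate the general pattern with `functools.cached_property`; it is not articulo's actual implementation.

```python
from functools import cached_property

class LazyArticle:
    """Illustrative stand-in for a lazily resolving article object."""

    def __init__(self, url: str):
        self.url = url
        self.requests_made = 0  # counts simulated network fetches

    @cached_property
    def title(self) -> str:
        # A real implementation would fetch and parse the page here;
        # cached_property ensures this body runs at most once.
        self.requests_made += 1
        return f"Title of {self.url}"

article = LazyArticle('https://info.cern.ch/')
assert article.requests_made == 0  # constructing sends no request
article.title
article.title
assert article.requests_made == 1  # fetched once, cached thereafter
```

The payoff of this design is that constructing an instance is free: you only pay for the network round trip when (and if) you actually read a property.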

Verbose mode

If you want to see the whole process, just pass the parameter verbose=True when creating the instance. This can be helpful for debugging.

from articulo import Articulo

# Step 1: initializing Articulo instance
article = Articulo('https://info.cern.ch/', verbose=True)

Controlling information loss coefficient

The whole idea of parsing article content is to define the part of the document that has the highest information density. To find that part there is the so-called information loss coefficient. This coefficient determines the decrease in the text density of the document during parsing.

The default value is 0.7, which stands for a 70% information density decrease. In most cases this works fine.
Nevertheless, you can change it if you get insufficient parsing results: just provide the threshold parameter to the Articulo instance.

from articulo import Articulo

# Step 1: initializing Articulo instance
article = Articulo('https://info.cern.ch/', threshold=0.3)
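To build intuition for what the coefficient does, here is a minimal, illustrative sketch (not articulo's actual algorithm): pick the child fragment that keeps the most of the parent's text, and stop descending once even that best child loses a larger share of the text than the threshold allows.

```python
def densest_node(child_texts, parent_text_len, threshold=0.7):
    """Return the child text that preserves the most of the parent's
    text, or None if even the best child loses more than `threshold`
    (e.g. 0.7 = a 70% drop) of it. Illustrative only."""
    best = max(child_texts, key=len)
    loss = 1 - len(best) / parent_text_len
    return best if loss <= threshold else None

children = ['nav menu', 'a' * 60]  # a short nav vs. the article body
# A 40% text loss is within the default 70% budget: keep descending.
assert densest_node(children, 100, threshold=0.7) == 'a' * 60
# With a stricter 0.3 budget the same drop is too large: stop here.
assert densest_node(children, 100, threshold=0.3) is None
```

In this toy model, lowering the threshold makes the search stop earlier at a larger, less dense region, which is why tweaking it can rescue pages where the densest block alone misses part of the article.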

Providing headers

In some cases, you need to provide additional headers to get the article HTML from a URL.
For such cases, you can pass headers via the http_headers parameter when creating a new Articulo instance.

from articulo import Articulo

# Initializing Articulo instance with custom user agent
headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36' }
article = Articulo('https://info.cern.ch/', http_headers=headers)
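Independently of articulo, the snippet below shows how a custom User-Agent header attaches to an outgoing HTTP request, using only the standard library's urllib; that articulo forwards http_headers to its underlying request in a similar way is an assumption, not a documented detail.

```python
from urllib.request import Request

# Build (but do not send) a request carrying a custom User-Agent.
headers = {'User-Agent': 'Mozilla/5.0 (compatible; ExampleBot/1.0)'}
req = Request('https://info.cern.ch/', headers=headers)

# urllib normalizes header keys to capitalized form internally.
print(req.get_header('User-agent'))
```

Servers that block default library user agents will typically respond normally once a browser-like User-Agent such as the one in the README is sent.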
