
mrkdwn_analysis

mrkdwn_analysis is a powerful Python library designed to analyze Markdown files. It provides extensive parsing capabilities to extract and categorize various elements within a Markdown document, including headers, sections, links, images, blockquotes, code blocks, lists, tables, tasks (todos), footnotes, and even embedded HTML. This makes it a versatile tool for data analysis, content generation, or building other tools that work with Markdown.

Features

  • File Loading: Load any given Markdown file by providing its file path.

  • Header Detection: Identify all headers (ATX # to ######, and Setext === and ---) in the document, giving you a quick overview of its structure.

  • Section Identification (Setext): Recognize sections defined by a block of text followed by = or - lines, helping you understand the document’s conceptual divisions.

  • Paragraph Extraction: Distinguish regular text (paragraphs) from structured elements like headers, lists, or code blocks, making it easy to isolate the body content.

  • Blockquote Identification: Extract all blockquotes defined by lines starting with >.

  • Code Block Extraction: Detect fenced code blocks delimited by triple backticks (```), optionally retrieve their language, and separate programming code from regular text.

  • List Recognition: Identify both ordered and unordered lists, including task lists (- [ ], - [x]), and understand their structure and hierarchy.

  • Tables (GFM): Detect GitHub-Flavored Markdown tables, parse their headers and rows, and separate structured tabular data for further analysis.

  • Links and Images: Identify text links ([text](url)) and images (![alt](url)), as well as reference-style links. This is useful for link validation or content analysis.

  • Footnotes: Extract and handle Markdown footnotes ([^note1]), providing a way to process reference notes in the document.

  • HTML Blocks and Inline HTML: Handle HTML blocks (<div>...</div>) as a single element, and detect inline HTML elements (<span style="...">... </span>) as a unified component.

  • Front Matter: If present, extract YAML front matter at the start of the file.

  • Counting Elements: Count the occurrences of a given element type (e.g., headers or code blocks).

  • Textual Statistics: Count the number of words and characters (excluding whitespace). Get a global summary (analyse()) of the document’s composition.
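To make the textual statistics concrete, here is a minimal sketch of word and non-whitespace character counting in plain Python. This illustrates the idea only; it is not the library's implementation, and mrkdwn_analysis may tokenize or filter markup differently:

```python
import re

def count_words(text: str) -> int:
    # Split on runs of whitespace; empty strings are discarded.
    return len(text.split())

def count_characters(text: str) -> int:
    # Count every character that is not whitespace.
    return len(re.sub(r"\s", "", text))

doc = "Python 3.11\n===\n\nA major release."
print(count_words(doc))       # 6
print(count_characters(doc))  # 27
```

Note that this naive version counts markup characters (e.g., the Setext underline) as words and characters; a real analyzer can choose to strip markup first.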

Installation

Install mrkdwn_analysis from PyPI:

```bash
pip install markdown-analysis
```

Usage

Using mrkdwn_analysis is straightforward. Import MarkdownAnalyzer, create an instance with your Markdown file path, and then call the various methods to extract the elements you need.

```python
from mrkdwn_analysis import MarkdownAnalyzer

analyzer = MarkdownAnalyzer("path/to/document.md")

headers = analyzer.identify_headers()
paragraphs = analyzer.identify_paragraphs()
links = analyzer.identify_links()
...
```

Example

Consider example.md:

````markdown
---
title: "Python 3.11 Report"
author: "John Doe"
date: "2024-01-15"
---

Python 3.11
===========

A major **Python** release with significant improvements...

### Performance Details

```python
import math
print(math.factorial(10))
```

> Quote: "Python 3.11 brings the speed we needed"

<div class="note">
  <p>HTML block example</p>
</div>

This paragraph contains inline HTML: <span style="color:red;">Red text</span>.

- A basic point
- [ ] A task to do
- [x] A completed task

1. Ordered list item 1
2. Ordered list item 2
````

After analysis:

```python
analyzer = MarkdownAnalyzer("example.md")

print(analyzer.identify_headers())
# {"Header": [{"line": X, "level": 1, "text": "Python 3.11"}, {"line": Y, "level": 3, "text": "Performance Details"}]}

print(analyzer.identify_paragraphs())
# {"Paragraph": ["A major **Python** release ...", "This paragraph contains inline HTML: ..."]}

print(analyzer.identify_html_blocks())
# [{"line": Z, "content": "<div class=\"note\">\n  <p>HTML block example</p>\n</div>"}]

print(analyzer.identify_html_inline())
# [{"line": W, "html": "<span style=\"color:red;\">Red text</span>"}]

print(analyzer.identify_lists())
# {
#   "Ordered list": [["Ordered list item 1", "Ordered list item 2"]],
#   "Unordered list": [["A basic point", "A task to do [Task]", "A completed task [Task done]"]]
# }

print(analyzer.identify_code_blocks())
# {"Code block": [{"start_line": X, "content": "import math\nprint(math.factorial(10))", "language": "python"}]}

print(analyzer.analyse())
# {
#   'headers': 2,
#   'paragraphs': 2,
#   'blockquotes': 1,
#   'code_blocks': 1,
#   'ordered_lists': 2,
#   'unordered_lists': 3,
#   'tables': 0,
#   'html_blocks': 1,
#   'html_inline_count': 1,
#   'words': 42,
#   'characters': 250
# }
```

Key Methods

  • __init__(self, input_file): Load the Markdown from path or file object.
  • identify_headers(): Returns all headers.
  • identify_sections(): Returns setext sections.
  • identify_paragraphs(): Returns paragraphs.
  • identify_blockquotes(): Returns blockquotes.
  • identify_code_blocks(): Returns code blocks with content and language.
  • identify_lists(): Returns both ordered and unordered lists (including tasks).
  • identify_tables(): Returns any GFM tables.
  • identify_links(): Returns text and image links.
  • identify_footnotes(): Returns footnotes used in the document.
  • identify_html_blocks(): Returns HTML blocks as single tokens.
  • identify_html_inline(): Returns inline HTML elements.
  • identify_todos(): Returns task items.
  • count_elements(element_type): Counts occurrences of a specific element type.
  • count_words(): Counts words in the entire document.
  • count_characters(): Counts non-whitespace characters.
  • analyse(): Provides a global summary (headers count, paragraphs count, etc.).
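To illustrate the shape of data identify_headers returns, here is a minimal, regex-based sketch of ATX and Setext header detection. This is illustrative only, not the library's actual parser, and it ignores edge cases such as headers inside code fences, front matter delimiters, and thematic breaks:

```python
import re

def find_headers(markdown: str):
    """Return header dicts with line (1-based), level, and text. Illustrative sketch."""
    headers = []
    lines = markdown.splitlines()
    for i, line in enumerate(lines):
        # ATX headers: 1-6 leading '#' followed by a space and the title.
        m = re.match(r"^(#{1,6})\s+(.*?)\s*#*\s*$", line)
        if m:
            headers.append({"line": i + 1, "level": len(m.group(1)), "text": m.group(2)})
            continue
        # Setext headers: a non-empty text line underlined by '=' (level 1) or '-' (level 2).
        if i + 1 < len(lines) and line.strip():
            underline = lines[i + 1].strip()
            if underline and set(underline) <= {"="}:
                headers.append({"line": i + 1, "level": 1, "text": line.strip()})
            elif len(underline) >= 2 and set(underline) <= {"-"}:
                headers.append({"line": i + 1, "level": 2, "text": line.strip()})
    return headers

doc = "Python 3.11\n===========\n\n### Performance Details\n"
print(find_headers(doc))
# [{'line': 1, 'level': 1, 'text': 'Python 3.11'},
#  {'line': 4, 'level': 3, 'text': 'Performance Details'}]
```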

Checking and Validating Links

  • check_links(): Validates text links to see if they are broken (e.g., non-200 status) and returns a list of broken links.
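The idea behind link checking can be sketched as follows. This is not the library's implementation; the function name, the injected get_status callable, and the returned shape are assumptions for illustration, with a fake status lookup standing in for real HTTP requests:

```python
import re

# Inline links of the form [text](http...); reference-style links are omitted here.
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def find_broken_links(markdown: str, get_status) -> list:
    """Return dicts for links whose HTTP status is not 200.

    get_status is injected so the check can be exercised without network access.
    """
    broken = []
    for text, url in LINK_RE.findall(markdown):
        status = get_status(url)
        if status != 200:
            broken.append({"text": text, "url": url, "status": status})
    return broken

doc = "[home](https://example.com/) and [gone](https://example.com/missing)"
fake_status = {"https://example.com/": 200, "https://example.com/missing": 404}.get
print(find_broken_links(doc, fake_status))
# [{'text': 'gone', 'url': 'https://example.com/missing', 'status': 404}]
```

Injecting the status function keeps the traversal logic testable; a real checker would issue HEAD/GET requests with timeouts and retries.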

Global Analysis Example

```python
analysis = analyzer.analyse()
print(analysis)
# {
#   'headers': X,
#   'paragraphs': Y,
#   'blockquotes': Z,
#   'code_blocks': A,
#   'ordered_lists': B,
#   'unordered_lists': C,
#   'tables': D,
#   'html_blocks': E,
#   'html_inline_count': F,
#   'words': G,
#   'characters': H
# }
```

Contributing

Contributions are welcome! Feel free to open an issue or submit a pull request for bug reports, feature requests, or code improvements. Your input helps make mrkdwn_analysis more robust and versatile.
