KNIGHT: Knowledge Graph-Driven Multiple-Choice Question Generation with Adaptive Hardness Calibration
Install: pip install knight-mcq · Package: knight-mcq on PyPI
This repository is the reference implementation for the paper KNIGHT (CPAL 2026). It builds a topic-specific, reusable knowledge graph from external sources and generates difficulty-controlled multiple-choice question (MCQ) datasets from graph paths, with optional LLM validation.
Paper: KNIGHT on OpenReview (CPAL 2026)
Citation: If you use this code or the paper, please cite using the BibTeX in CITATION.bib. Example:
@inproceedings{knight2026cpal,
title = {{KNIGHT}: Knowledge Graph-Driven Multiple-Choice Question Generation with Adaptive Hardness Calibration},
author = {Amanlou, Mohammad and {Shafiee Moghaddam}, Erfan and Nouri, Mahdi and {Amou Jafary}, Yasaman and Farsi, Farhan and Bahrak, Behnam},
booktitle = {Proceedings of the Conference on Parsing and Linguistic Theories (CPAL)},
year = {2026},
url = {https://openreview.net/forum?id=8kA9oO5gEc},
}
The system constructs a dynamic KG from conversational interactions and LLM outputs, then synthesizes QA pairs of varying complexity (including multi-hop) from the graph. The default instantiation uses Wikipedia/Wikidata for term descriptions. KNIGHT is model-agnostic (default setup uses OpenAI) and external knowledge is replaceable (default source is Wikipedia; PDF files and URLs are also supported; custom sources can be plugged in via the ExternalKnowledgeLookup interface).
Overview
KNIGHT constructs and maintains a topic-specific knowledge graph by processing natural language queries. It uses an LLM (configurable; any LangChain-compatible chat model can be used) to generate responses and extract structured knowledge (triplets) into a Neo4j graph. The KG is then reused to synthesize multiple-choice question/answer pairs of varying complexity (including multi-hop) from graph paths, using node descriptions augmented by external knowledge (default: Wikipedia). To use a custom external source, implement the ExternalKnowledgeLookup interface and pass it into term description generation.
External Knowledge Sources: KNIGHT supports multiple external knowledge sources:
- Wikipedia (default): Automatic lookup via the `WikipediaLookup` class.
- PDF files: Use `PDFLookup(pdf_path_or_url)` to load a PDF (local file path or URL). The PDF text is extracted, chunked, and relevant passages are found using LLM-based relevance checks.
- Custom sources: Implement the `ExternalKnowledgeLookup` protocol for other sources (e.g. databases, APIs, text files).
Example: Using a PDF instead of Wikipedia:
```python
from app.core.utils.external_knowledge import PDFLookup
from app.core.agents.gpt.term_description import generate_term_description

# Create a PDF lookup (supports local path or URL)
pdf_lookup = PDFLookup("path/to/document.pdf")  # or "https://example.com/doc.pdf"

# Use it when generating term descriptions
description, used_external = generate_term_description(
    llm=your_llm,
    term="some term",
    external_lookup=pdf_lookup,
)
```
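For a custom source, any object matching the lookup protocol can be plugged in. The sketch below is illustrative only: the protocol's actual method names live in `app/core/utils/external_knowledge.py` and may differ, and `GlossaryLookup` and its `lookup` method are hypothetical names invented for this example.

```python
from typing import Optional, Protocol

class LookupProtocol(Protocol):
    """Assumed shape of the external-knowledge protocol; the real
    interface in app/core/utils/external_knowledge.py may differ."""
    def lookup(self, term: str) -> Optional[str]: ...

class GlossaryLookup:
    """Hypothetical custom source backed by an in-memory glossary."""
    def __init__(self, entries: dict[str, str]):
        # Normalize keys so lookups are case-insensitive
        self.entries = {k.lower(): v for k, v in entries.items()}

    def lookup(self, term: str) -> Optional[str]:
        # Return a passage for the term, or None if unknown
        return self.entries.get(term.lower())

lookup = GlossaryLookup({"Hafiz": "14th-century Persian lyric poet."})
print(lookup.lookup("HAFIZ"))  # case-insensitive hit
```

An instance of such a class would be passed as `external_lookup` in place of `WikipediaLookup` or `PDFLookup`.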
Key features include:
- Dual Agent Implementations:
  - GPT Agent: Uses carefully crafted prompts to instruct the LLM to directly extract knowledge triplets (`subject-predicate-object`) from text.
  - REBEL Agent: Uses a dedicated transformer model (originally designed for relation extraction, adapted here via `text_processing.py`) to identify triplets, followed by an LLM-based validation step to verify triplet accuracy against the source text.
- External Knowledge Integration & Node Descriptions: Supports Wikipedia (default), PDF files (local or URL), and custom sources via `ExternalKnowledgeLookup`. The LLM generates node descriptions using external knowledge context when available (`wiki_fact_checked='Yes'`), or falls back to structured prompts with source/relationship context (`wiki_fact_checked='No'`).
- Ambiguity Resolution: If Wikipedia lookup yields ambiguous results, the LLM uses the original conversation context (when generating the description without Wikipedia) to improve disambiguation.
- Fact-Checking Status: Tracks whether the LLM description generation was primarily informed by Wikipedia (`'Yes'`) or by its internal knowledge guided by the structured prompt and source context (`'No'`).
- Robust Neo4j Storage: Optimized for storing and managing nodes (`Term` label) and relationships, handling normalization and preventing duplicate relationship creation.
- Configurable Recursive Exploration: Allows graph traversal based on extracted triplets, with controls for maximum depth and branching factor to manage exploration scope.
- Detailed Logging: Provides separate logs for each agent (`gpt_agent`, `rebel_agent`) within the `logs/chatbot.log` file (with rotation) for easier debugging and tracing.
- Error Handling & Retries: Uses `tenacity` for robust handling of transient errors during API calls (LLM, Wikipedia) and database operations.
- Knowledge Graph QA Generation: Automatically creates Question/Answer pairs directly from the relationships stored in the knowledge graph. This allows testing understanding and generating training data based on the verified connections within the graph.
A knowledge graph is a structured representation where entities (terms, concepts) become nodes and their relationships become edges. This chatbot dynamically builds and expands this graph based on user interactions and LLM-generated insights.
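The node/edge idea can be illustrated without a database. This toy sketch (plain Python, no Neo4j) shows how extracted triplets form an adjacency structure that multi-hop paths can be read from; the entity names are invented for illustration.

```python
# Toy illustration: triplets become edges in an adjacency structure.
from collections import defaultdict

triplets = [
    ("Hafiz", "WROTE_IN", "Persian"),
    ("Persian", "SPOKEN_IN", "Iran"),
]

graph = defaultdict(list)  # node -> [(relation, neighbor), ...]
for subject, predicate, obj in triplets:
    graph[subject].append((predicate, obj))

# Follow a 2-hop path from "Hafiz"
rel1, mid = graph["Hafiz"][0]
rel2, end = graph[mid][0]
print(f"(Hafiz)-[:{rel1}]->({mid})-[:{rel2}]->({end})")
```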
File structure
.
├── .env.example # Template for environment variables (copy to .env)
├── .gitignore
├── CITATION.bib # BibTeX for citing the paper
├── LICENSE # MIT License
├── pyproject.toml # Project configuration and dependencies (uv)
├── README.md # This file
├── uv.lock # Lock file for reproducible installs
├── app/
│ ├── core/
│ │ ├── agents/
│ │ │ ├── gpt/ # GPT Agent
│ │ │ │ ├── chatbot.py # LLM interaction, triplet processing, Neo4j ops
│ │ │ │ ├── term_description.py # Term description generation (external-knowledge lookup, default Wikipedia)
│ │ │ │ └── text_processing.py # Triplet extraction via LLM prompting
│ │ │ └── rebel/ # REBEL Agent
│ │ │ ├── chatbot.py # LLM interaction, triplet processing, LLM validation, Neo4j ops
│ │ │ ├── term_description.py # Term description generation (external-knowledge lookup, default Wikipedia)
│ │ │ └── text_processing.py # Triplet extraction (e.g. REBEL model logic)
│ │ ├── common/
│ │ │ ├── check_connection.py # Connection checks
│ │ │ ├── config.py # Environment variables and constants
│ │ │ └── neo4j_connection.py # Neo4j connection handler
│ │ └── utils/
│ │ ├── external_knowledge.py # External-knowledge lookup interface (Wikipedia, PDF, custom)
│ │ ├── graph_utils.py # KG utilities (e.g. prune descriptions)
│ │ └── wikipedia_lookup.py # Wikipedia search, LLM relevance check, content fetching
│ └── generation/
│ ├── qa_generation.py # QA pair generation from graph paths
│ └── __init__.py
├── logs/ # Log files (e.g. chatbot.log)
└── app/tests/ # Test suite (sanity + integration)
🚀 Features
- Dual Agent Approaches:
  - GPT Agent: LLM-prompt-based triplet extraction, primarily targeting structured JSON output with a regex fallback mechanism.
  - REBEL Agent: Model-based extraction + LLM-based validation.
- Knowledge Enrichment & Description Synthesis:
  - Looks up terms on Wikipedia and uses an LLM to check relevance.
  - The LLM synthesizes the final node description, prioritizing relevant Wikipedia summary context (`wiki_fact_checked='Yes'`).
  - If Wikipedia context is unavailable/ambiguous, both agents use a detailed, 8-point scientific prompt structure (Definition/Scope, Domains, Subfields, Concepts, Applications, Examples, Related Terms, Research Trends) to generate the description. This prompt incorporates original source text or parent term context to guide the generation. The node gets `wiki_fact_checked='No'`.
- Robust Ambiguity Handling:
  - Detects ambiguous Wikipedia results.
  - Provides source text/relationship context to the LLM during description synthesis (especially when Wikipedia context isn't used) to aid disambiguation.
  - Tracks `wiki_fact_checked` status (`'Yes'`/`'No'`) based on the primary context used by the LLM for generation.
- Neo4j Knowledge Graph:
  - Stores terms as nodes with descriptions.
  - Creates typed relationships based on extracted/validated triplets.
  - Dynamic querying and depth-controlled exploration.
- Performance & Reliability:
  - Global tracking of processed descriptions per session to reduce redundancy.
  - Per-query tracking of processed terms to avoid duplicate node saving/logging within concurrent operations.
  - Parallel processing (`ThreadPoolExecutor`) for triplet processing, validation, and sub-triplet exploration.
  - Automatic retries for API calls and DB operations.
- Enhanced Logging:
  - Named loggers (`gpt_agent`, `rebel_agent`) distinguish agent output.
  - Logs saved to `logs/chatbot.log` with file rotation.
- Automated QA Generation from Knowledge Graph:
  - Generates relevant Question/Answer pairs by analyzing multi-step paths within the Neo4j graph (e.g., `(Term A)-[:REL_1]->(Term B)-[:REL_2]->(Term C)`).
  - Allows configuring the complexity (number of steps/relationships) for generated questions.
  - Includes an optional LLM validation step to check generated Q&A for clarity, correctness based on the path, and relevance to a specific topic.
  - Can generate reverse questions where the answer is the starting point of the path, providing different perspectives.
  - Outputs validated Q&A pairs to a file (e.g., `generated_qa_pairs.json`) for review or use.
- Knowledge Graph Curation:
  - Provides a utility to prune descriptions from nodes that were not fact-checked against Wikipedia (i.e., `wiki_fact_checked='No'`), allowing for manual quality control of the graph's descriptive content.
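The parallel-processing-with-retries pattern listed above can be sketched as follows. This is not the repository's actual code (which uses `tenacity` decorators); the `with_retries` wrapper and `process_triplet` function here are simplified stand-ins for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def with_retries(fn, attempts=3):
    """Simplified retry wrapper in the spirit of tenacity's retry decorator."""
    def wrapped(*args):
        last_error = None
        for _ in range(attempts):
            try:
                return fn(*args)
            except Exception as exc:  # transient API/DB errors
                last_error = exc
        raise last_error
    return wrapped

def process_triplet(triplet):
    # Placeholder for per-triplet work (description lookup, Neo4j save, ...)
    subject, predicate, obj = triplet
    return f"({subject})-[:{predicate}]->({obj})"

triplets = [("A", "REL", "B"), ("B", "REL", "C")]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(with_retries(process_triplet), triplets))
print(results)
```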
Knowledge Graph QA Generation Workflow
This feature allows the chatbot to automatically generate Question/Answer pairs directly from the structure of the knowledge graph it has built. This is useful for creating evaluation datasets, flashcards, or simply exploring the graph's content in a new way. The process, primarily handled by app/generation/qa_generation.py, follows these steps:
1. Initiation: The user triggers the process via the `/generate_qa` command in the chat interface.
2. Configuration: The user interactively provides settings:
   - Complexity: Specifies whether to use paths of an exact length (number of relationships) or paths up to a maximum length.
   - Limit (Optional): Sets a maximum number of paths to fetch from the graph, useful for managing processing time and cost on large graphs.
   - Validation: Determines if the generated Q&A pairs should be checked for quality. If enabled, the user also sets a sample rate (0.0 to 1.0) to validate only a portion or all generated pairs.
   - Topic Focus (Optional): If a session topic was set, it's used during generation and validation to keep Q&A relevant.
   - Reverse QA: Option to generate additional questions where the start node of the path is the answer, alongside the standard questions where the end node is often the answer.
3. Path Finding: The system queries the Neo4j database (`neo4j_connection.find_paths`) to find paths matching the specified complexity criteria (e.g., `MATCH p=(:Term)-[*2]->(:Term)` for exact complexity 2). It retrieves the nodes (including names and descriptions) and relationship types for each path.
4. Concurrent Path Processing: To speed up generation, the system processes the fetched paths in parallel using multiple threads (`ThreadPoolExecutor`). Each path is handled independently.
5. QA Pair Generation per Path (`_process_single_path`):
   - For each path, specialized prompts (`_format_multihop_qa_prompt`) are constructed. These prompts provide the LLM with the path structure (e.g., `(Start)-[:REL]->(Middle)-[:REL]->(End)`), the descriptions of the start and end nodes, and instructions to formulate a question based on the multi-step relationship, aiming for the end node as the answer.
   - If Reverse QA is enabled, a separate prompt (`_format_multihop_qa_prompt_reverse`) is used to generate a question where the start node is the intended answer.
   - The LLM is called using a safe wrapper (`safe_generate`) that includes timeouts and retries to handle potential API issues.
   - The LLM's response (expected in `Question: ... Answer: ...` format) is parsed.
   - Each successfully generated Q&A pair is assigned a unique ID (`uuid`) and stored with metadata about its source path, complexity, etc.
6. Validation (`validate_qa_pairs`):
   - If validation was enabled, the generated pairs (or a sample based on the rate) are evaluated.
   - Basic checks: Question/Answer length and structure are verified.
   - LLM validation: A separate LLM call uses another specialized prompt (`_format_combined_validation_prompt`) to assess:
     - Grammar/Fluency: Is the question well-formed?
     - Answerability: Can the answer be reasonably inferred only from the provided path structure/details?
     - Topic Relevance: (If a topic was provided) Is the Q&A relevant to the topic?
   - Pairs failing validation are logged and discarded.
7. Saving Results: The final set of validated Q&A pairs is saved to a JSON file (`generated_qa_pairs.json` by default) in the project's root directory.
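The response-parsing step can be sketched as below. This is an illustrative parser for the `Question: ... Answer: ...` format described above, not the repository's actual implementation; the function name `parse_qa_response` and the example reply are invented for this sketch.

```python
import re

def parse_qa_response(text: str):
    """Parse an LLM reply expected in 'Question: ... Answer: ...' form.
    Returns (question, answer), or None if the format is not matched."""
    match = re.search(
        r"Question:\s*(?P<q>.+?)\s*Answer:\s*(?P<a>.+)",
        text,
        flags=re.DOTALL | re.IGNORECASE,
    )
    if not match:
        return None
    return match.group("q").strip(), match.group("a").strip()

reply = "Question: Which language did Hafiz write in?\nAnswer: Persian"
print(parse_qa_response(reply))
# → ('Which language did Hafiz write in?', 'Persian')
```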
Requirements and Configuration
- Software: Python 3.11+, Neo4j (4.x or 5.x), access to an OpenAI-compatible API (or another LLM provider). KNIGHT is model-agnostic; the default setup uses OpenAI. To use another LLM, instantiate your LangChain chat model and pass it where the code expects an LLM.
- Dependencies: Managed by `pyproject.toml` / `uv.lock` (e.g. `langchain-openai`, `neo4j`, `tenacity`, `wikipedia`, `wikipedia-api`; see `pyproject.toml` for the full list).
- Environment: Copy `.env.example` to `.env` in the project root and fill in your values. Required/optional variables:

| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | Required for the default LLM. |
| `OPENAI_MODEL` | Model name (e.g. `gpt-4o`, `gpt-4`). |
| `OPENAI_API_BASE` | Optional; base URL for an OpenAI-compatible API (e.g. a custom endpoint). |
| `GPT_NEO4J_URI`, `GPT_NEO4J_USER`, `GPT_NEO4J_PASSWORD` | Neo4j for the GPT agent. |
| `REBEL_NEO4J_URI`, `REBEL_NEO4J_USER`, `REBEL_NEO4J_PASSWORD` | Neo4j for the REBEL agent (can match GPT). |
| `MAX_DEPTH` | Optional; default 2. |

Example `.env`:

```
GPT_NEO4J_URI=bolt://localhost:7687
GPT_NEO4J_USER=neo4j
GPT_NEO4J_PASSWORD=your-password
REBEL_NEO4J_URI=bolt://localhost:7687
REBEL_NEO4J_USER=neo4j
REBEL_NEO4J_PASSWORD=your-password
OPENAI_API_KEY=your-openai-api-key
OPENAI_MODEL=gpt-4o
```
Ensure your Neo4j instance is running.
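The variables above are read at startup. A minimal sketch of how such settings could be loaded with the standard library is shown below; the real handling lives in `app/core/common/config.py` and may differ in names and defaults (only the `MAX_DEPTH` default of 2 is taken from the table above).

```python
import os

# Illustrative only: actual variable handling is in app/core/common/config.py.
def load_settings():
    return {
        "neo4j_uri": os.environ.get("GPT_NEO4J_URI", "bolt://localhost:7687"),
        "neo4j_user": os.environ.get("GPT_NEO4J_USER", "neo4j"),
        "openai_model": os.environ.get("OPENAI_MODEL", "gpt-4o"),
        # MAX_DEPTH is optional with a documented default of 2
        "max_depth": int(os.environ.get("MAX_DEPTH", "2")),
    }

os.environ["MAX_DEPTH"] = "3"
print(load_settings()["max_depth"])  # → 3
```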
Installation
Recommended (PyPI):

```
pip install knight-mcq
```

For the REBEL agent (requires ML dependencies):

```
pip install knight-mcq[ml]
```

Then copy the env template and configure (see Requirements and Configuration):

```
# From the folder where you run the app, or clone the repo just to get .env.example
curl -O https://raw.githubusercontent.com/ErfanShm/knight-mcq/main/.env.example
# Rename to .env and fill in your values
```

Development / from source: Clone the repo, then `uv sync` (or `pip install -e .`). Use this if you need to modify the code.
Launch
After installing and configuring `.env`:

- GPT Agent: `python -m app.core.agents.gpt.chatbot` (works with the base installation)
- REBEL Agent: `python -m app.core.agents.rebel.chatbot` (requires `pip install knight-mcq[ml]`)
Reproducing results / Quick start
The paper uses Wikipedia/Wikidata and the pipeline: build a topic-specific KG from queries, then generate difficulty-controlled MCQs from graph paths.
1. Install: `pip install knight-mcq`
2. Configure: Copy `.env.example` to `.env`; set Neo4j credentials and `OPENAI_API_KEY` (and `OPENAI_API_BASE` if using a custom endpoint).
3. Start Neo4j, then run: `python -m app.core.agents.gpt.chatbot`
4. Set an optional session topic (e.g. History, Biology, or Mathematics, as in the paper).
5. Ask a question to grow the KG; when the graph has enough structure, run `/generate_qa` and choose complexity and options to produce MCQs.
Full reproduction uses the same Neo4j/API setup and topic flow as in the paper.
📝 Usage Guide
Setting the Session Topic: When you first launch the chatbot (e.g., `python -m app.core.agents.gpt.chatbot`), it will prompt you to enter an optional main topic for the session. This topic is primarily used by the `/generate_qa` feature to ensure the automatically created Question/Answer pairs stay relevant to your area of interest. If you don't provide one, QA generation will not be filtered by topic.
Interact with the running chatbot via the command line:
- Set Exploration Depth: Before each question or command, the chatbot asks you to set the maximum exploration depth (default is 1; see the `.env` section). This controls how many relationship steps (e.g., `TermA -> TermB -> TermC` is depth 2) the chatbot explores when processing the information within an LLM's response to extract sub-topics (triplets) and build the knowledge graph. A higher depth explores more connections but takes longer and uses more resources. Press Enter to use the default/previous depth, or enter a number (e.g., `2`).
- Interact: After setting the depth, enter your command or question:
  - Ask a question: Simply type your query (e.g., `What is Persian literature?`). The chatbot uses the depth set in step 1 for building the graph from the answer.
  - Generate Q&A from the graph: Type `/generate_qa`. This starts an interactive process where you'll be prompted to configure settings (like path complexity, limits, validation) to automatically create Question/Answer pairs based on the existing graph structure.
  - Prune descriptions: Type `/prune_descriptions`. This command asks for confirmation and then sets the `description` property to null for all nodes where `wiki_fact_checked` is `'No'`. Use this carefully for cleanup.
  - Show related terms: `show related to [term]` (e.g., `show related to hafiz`).
  - Help: `help`
  - Exit: `bye` or `exit`
⚙️ Technical Implementation Details
Knowledge Extraction & Validation
- GPT Agent: Relies on a detailed system prompt (`app/core/agents/gpt/text_processing.py`) that instructs the LLM to act as an "information-extraction specialist" and return `subject-predicate-object` triplets in a specific JSON schema. The agent attempts to parse this JSON directly. If parsing fails (e.g., due to minor LLM deviations from the schema), a regex-based fallback mechanism extracts triplets from the raw text response. Quality depends heavily on the LLM's ability to follow the JSON format instructions.
- REBEL Agent: Uses a model-based approach (`text_processing.py`) for initial triplet extraction. Crucially, it then employs an LLM validation step (`validate_triplet_with_llm` in `chatbot.py`) in which a separate LLM call verifies that each extracted triplet is directly and accurately stated in the source text. This adds an accuracy layer at the cost of performance.
Term Description Generation
- Both agents use `term_description.py`, which orchestrates the process.
- `wikipedia_lookup.py` searches Wikipedia and uses an LLM relevance check.
- The core function (`generate_term_description`) determines the context for the final LLM call:
  - If a relevant, unambiguous Wikipedia page is found, its summary is the primary context. The node gets `wiki_fact_checked='Yes'`.
  - If the Wikipedia lookup fails or is ambiguous, both agents use a detailed, 8-point scientific prompt structure (Definition/Scope, Domains, Subfields, Concepts, Applications, Examples, Related Terms, Research Trends) to generate the description. This prompt incorporates original source text or parent term context to guide the generation. The node gets `wiki_fact_checked='No'`.
- The LLM call synthesizes the actual description text based on the provided prompt and context.
Knowledge Graph Updates
- Nodes (`Term` label) are created/merged using `MERGE` in Neo4j (`save_term_as_node`).
- Descriptions and `wiki_fact_checked` status are added/updated using `SET`.
- Relationships (type derived from the triplet relation) are created using `MERGE` between existing nodes (`create_relationship`).
- Concurrency control (a `current_query_processed_terms` set) prevents duplicate node-saving logs during parallel triplet processing.
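The `MERGE` pattern described above can be sketched as a query builder. This is illustrative, not the repository's actual Cypher: the function name `relationship_merge_query` is invented here. One real constraint it reflects is that Cypher does not allow parameters in relationship-type position, so the relation string must be sanitized before being interpolated.

```python
import re

def relationship_merge_query(relation: str) -> str:
    """Build a parameterized MERGE statement for a triplet relation.
    The relation string is normalized into a valid relationship type."""
    rel_type = re.sub(r"\W+", "_", relation.strip()).upper()
    return (
        "MATCH (a:Term {name: $subject}), (b:Term {name: $object}) "
        f"MERGE (a)-[:{rel_type}]->(b)"
    )

query = relationship_merge_query("written in")
print(query)
# → MATCH (a:Term {name: $subject}), (b:Term {name: $object}) MERGE (a)-[:WRITTEN_IN]->(b)
```

Using `MERGE` (rather than `CREATE`) is what prevents duplicate relationships between the same pair of nodes.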
Logging
- Uses Python's standard `logging` module.
- Handlers are configured in each agent's `chatbot.py`.
- Named loggers (`gpt_agent`, `rebel_agent`) differentiate output.
- Logs are directed to both the console (INFO level) and a rotating file (`logs/chatbot.log`, DEBUG level).
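A minimal sketch of this scheme with the standard library is shown below: a named logger, console handler at INFO, rotating file handler at DEBUG. The log path, `maxBytes`, and `backupCount` values are placeholders, not the repository's actual configuration.

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Placeholder path; the project writes to logs/chatbot.log
log_path = os.path.join(tempfile.mkdtemp(), "chatbot.log")

logger = logging.getLogger("gpt_agent")
logger.setLevel(logging.DEBUG)

console = logging.StreamHandler()
console.setLevel(logging.INFO)  # console shows INFO and above

file_handler = RotatingFileHandler(log_path, maxBytes=1_000_000, backupCount=3)
file_handler.setLevel(logging.DEBUG)  # file captures everything

logger.addHandler(console)
logger.addHandler(file_handler)

logger.debug("goes to the file only")
logger.info("goes to console and file")
```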
Knowledge Graph Curation
- The system includes a utility (`app/core/utils/graph_utils.py`) to prune descriptions from nodes.
- Specifically, the `prune_non_wiki_descriptions` function can be invoked (e.g., via the `/prune_descriptions` command in the GPT agent) to set the `description` property to null for all `Term` nodes where the `wiki_fact_checked` property is `'No'`.
- This lets users selectively remove descriptions that were generated by the LLM without the direct backing of a verified Wikipedia summary, offering a way to manage the overall factuality or source preference of the descriptions within the KG.
LLM Prompt Usage
The Large Language Model (LLM) is utilized in several distinct ways throughout the application:
1. Relation Extraction: Analyzes user input or text to extract knowledge graph triples (Subject, Predicate, Object), forming the basis of graph construction. (Primarily in `app/core/agents/gpt/chatbot.py`.)
2. Wikipedia Relevance Check: Determines whether a candidate Wikipedia page title is semantically relevant for defining a specific term, aiding the node description process. (Implemented in `app/core/utils/wikipedia_lookup.py`.)
3. Node Description Generation:
   - The LLM generates the definitive description for every node added to the graph.
   - Input context: The prompt provided to the LLM varies based on the success of the Wikipedia lookup:
     - If a relevant, unambiguous Wikipedia summary is found (`wiki_fact_checked='Yes'`), the LLM uses a simpler prompt incorporating that summary as primary context.
     - If the Wikipedia lookup fails or returns ambiguous results (`wiki_fact_checked='No'`), both agents use a detailed, 8-point scientific prompt structure (Definition/Scope, Domains, Subfields, Concepts, Applications, Examples, Related Terms, Research Trends) to generate the description. This prompt can incorporate context (like a parent term or source text) to guide the LLM.
   - (Logic resides in `app/core/agents/gpt/term_description.py` and `app/core/agents/rebel/term_description.py`.)
4. QA Generation (Multi-hop Forward): Creates a question-answer pair based on the information implied by a multi-step path retrieved from the knowledge graph. (Uses `_format_multihop_qa_prompt` in `app/generation/qa_generation.py`.)
5. QA Generation (Multi-hop Reverse): Creates a question based on a multi-step path where the start node of the path is the predefined answer. (Uses `_format_multihop_qa_prompt_reverse` in `app/generation/qa_generation.py`.)
6. QA Validation: Evaluates generated Question-Answer pairs for grammatical correctness, logical consistency with the source data (e.g., the graph path), and optional topic relevance. (Uses `_format_combined_validation_prompt` in `app/generation/qa_generation.py`.)
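A hypothetical prompt builder in the spirit of the multi-hop forward case is sketched below. The wording of the real prompt in `app/generation/qa_generation.py` will differ; the function name `format_multihop_prompt` and the example entities are invented for illustration.

```python
# Hypothetical sketch of a multi-hop QA prompt builder; the actual
# prompt text in _format_multihop_qa_prompt will differ.
def format_multihop_prompt(nodes, relations, end_description):
    # Render the path, e.g. "A -[:REL_1]-> B -[:REL_2]-> C"
    path = nodes[0]
    for rel, node in zip(relations, nodes[1:]):
        path += f" -[:{rel}]-> {node}"
    return (
        f"Given the knowledge-graph path: {path}\n"
        f"Description of '{nodes[-1]}': {end_description}\n"
        "Write one question whose answer is the final node, then the answer.\n"
        "Format: Question: ... Answer: ..."
    )

prompt = format_multihop_prompt(
    ["Hafiz", "Persian", "Iran"],
    ["WROTE_IN", "SPOKEN_IN"],
    "A country in Western Asia.",
)
print(prompt)
```

The reverse variant would instead instruct the model to make the start node the answer, which is why a separate prompt template is needed.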
Testing
Run tests with:
```
uv run pytest app/tests/
```

The sanity tests in `app/tests/test_sanity.py` run without external services (no Neo4j or API keys). Full tests (e.g. `test_chatbot.py`) require a running Neo4j instance and `OPENAI_API_KEY`; they may be skipped or fail if those are not available.
Current status
- Functional GPT and REBEL agents; Neo4j-backed KG; Wikipedia-based (replaceable) external knowledge; difficulty-controlled MCQ generation with optional validation; configurable depth and logging.
Publishing to PyPI (maintainers)
Option A – GitHub Action (recommended)
1. In your PyPI account, create an API token (scope: the entire account or just this project).
2. In your repo: Settings → Secrets and variables → Actions → New repository secret; name it `PYPI_API_TOKEN`, value = the token.
3. Bump `version` in `pyproject.toml`, commit, and push.
4. Create a Release (e.g. tag `v0.1.0`): Releases → Create a new release → choose tag `v0.1.0`, publish. The workflow will build and upload to PyPI.

Option B – Manual

1. `pip install build twine` (or `uv sync --extra dev`).
2. Bump `version` in `pyproject.toml`.
3. `python -m build`, then `twine upload dist/*` (use your PyPI token when prompted). Test first with `twine upload --repository testpypi dist/*` if you prefer.
🙌 Acknowledgments
- The creators of the REBEL model and other foundational NLP models.
- The OpenAI team and developers of similar large language models.
- The Neo4j team for their graph database technology.
- The developers of the `wikipedia`, `wikipedia-api`, `langchain`, and `tenacity` libraries.
📜 License
This project is licensed under the MIT License.