A node parser which can create a hierarchy of all code scopes in a directory.
CodeHierarchyAgentPack
```bash
# install
pip install llama-index-packs-code-hierarchy

# download source code
llamaindex-cli download-llamapack CodeHierarchyAgentPack -d ./code_hierarchy_pack
```
The CodeHierarchyAgentPack is useful for splitting long code files into more reasonable chunks while creating an agent on top to navigate the code. It builds a "hierarchy" of sorts, in which sections of the code are kept to a reasonable size by replacing each scope body with a short comment telling the LLM to search for the referenced node if it wants to read that body.
Nodes in this hierarchy are split based on scope, such as function, class, or method scope, and have links to their children and parents so the LLM can traverse the tree.
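For illustration, a skeletonized chunk might look roughly like the following; the placeholder node id and the exact comment wording are illustrative, since the real text is generated by the parser:

```python
# Illustrative sketch only: a child scope's body is swapped for a short
# comment pointing the LLM at the node that holds the full implementation.
def get_code_hierarchy_from_nodes(nodes, max_depth=-1):
    # Code replaced for brevity. See node_id <child node id>
```

To build the hierarchy and run the agent pack over it: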
```python
from pathlib import Path

from llama_index.core import SimpleDirectoryReader
from llama_index.core.text_splitter import CodeSplitter
from llama_index.llms.openai import OpenAI
from llama_index.packs.code_hierarchy import (
    CodeHierarchyAgentPack,
    CodeHierarchyNodeParser,
)

llm = OpenAI(model="gpt-4", temperature=0.2)

documents = SimpleDirectoryReader(
    input_files=[
        Path("../llama_index/packs/code_hierarchy/code_hierarchy.py")
    ],
    file_metadata=lambda x: {"filepath": x},
).load_data()

split_nodes = CodeHierarchyNodeParser(
    language="python",
    # You can further parameterize the CodeSplitter to split the code
    # into "chunks" that match your context window size using the
    # chunk_lines and max_chars parameters; here we just use the defaults
    code_splitter=CodeSplitter(
        language="python", max_chars=1000, chunk_lines=10
    ),
).get_nodes_from_documents(documents)

pack = CodeHierarchyAgentPack(split_nodes=split_nodes, llm=llm)

pack.run(
    "How does the get_code_hierarchy_from_nodes function from the code hierarchy node parser work? Provide specific implementation details."
)
```
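To see what the parser produced, you can inspect the split nodes directly. This is a hedged sketch that relies only on the standard llama-index node schema (text, node_id, and parent/child relationships):

```python
from llama_index.core.schema import NodeRelationship

# Print a few nodes: their text shows the skeletonized scope bodies, and the
# relationships show the parent/child links the agent uses to traverse the tree.
for node in split_nodes[:3]:
    print(node.node_id)
    print(node.text[:300])
    children = node.relationships.get(NodeRelationship.CHILD, [])
    print("children:", [child.node_id for child in children])
```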
A full example can be found in the pack's example notebook.
Repo Maps
The pack contains a CodeHierarchyKeywordQueryEngine that uses a CodeHierarchyNodeParser to generate a map of a repository's structure and contents. This is useful for the LLM to understand the structure of a codebase and to reference specific files or directories.
For example:
- code_hierarchy
  - _SignatureCaptureType
  - _SignatureCaptureOptions
  - _ScopeMethod
  - _CommentOptions
  - _ScopeItem
  - _ChunkNodeOutput
  - CodeHierarchyNodeParser
    - class_name
    - init
    - _get_node_name
      - recur
    - _get_node_signature
      - find_start
      - find_end
    - _chunk_node
    - get_code_hierarchy_from_nodes
      - get_subdict
      - recur_inclusive_scope
      - dict_to_markdown
    - _parse_nodes
    - _get_indentation
    - _get_comment_text
    - _create_comment_line
    - _get_replacement_text
    - _skeletonize
    - _skeletonize_list
      - recur
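A map like the one above can be generated straight from the split nodes. The sketch below assumes that get_code_hierarchy_from_nodes returns both a dictionary and a markdown rendering of the hierarchy; check the docstring in your installed version before relying on it:

```python
from llama_index.packs.code_hierarchy import CodeHierarchyNodeParser

# Assumed return value: (hierarchy_dict, markdown_string); verify against the
# docstring of get_code_hierarchy_from_nodes in your version.
hierarchy_dict, markdown_map = CodeHierarchyNodeParser.get_code_hierarchy_from_nodes(
    split_nodes
)
print(markdown_map)
```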
Usage as a Tool with an Agent
You can create a tool for any agent using the nodes from the node parser:
```python
from llama_index.agent.openai import OpenAIAgent
from llama_index.core.tools import QueryEngineTool
from llama_index.packs.code_hierarchy import CodeHierarchyKeywordQueryEngine

query_engine = CodeHierarchyKeywordQueryEngine(
    nodes=split_nodes,
)

tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="code_lookup",
    description="Useful for looking up information about the code hierarchy codebase.",
)

agent = OpenAIAgent.from_tools(
    [tool], system_prompt=query_engine.get_tool_instructions(), verbose=True
)
```
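Once the tool is registered, the agent can be queried like any other OpenAIAgent. The question below is just an example and names a method from the map above:

```python
# The agent will call the code_lookup tool to pull in the relevant
# skeletonized nodes as it answers.
response = agent.chat(
    "How does the _skeletonize_list method work? Provide implementation details."
)
print(str(response))
```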
Adding new languages
To add a new language you need to edit `_DEFAULT_SIGNATURE_IDENTIFIERS` in `code_hierarchy.py`.
The docstrings are informative about how you ought to do this and its nuances; the approach should work for most languages.
Please test your new language by adding a new file to `tests/file/code/` and testing all your edge cases.
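As a rough sketch of what such a test could look like, the file name, language string, and assertion below are illustrative, and assume you have already registered the new language (TypeScript in this example) in `_DEFAULT_SIGNATURE_IDENTIFIERS`:

```python
from pathlib import Path

from llama_index.core import SimpleDirectoryReader
from llama_index.core.text_splitter import CodeSplitter
from llama_index.packs.code_hierarchy import CodeHierarchyNodeParser

# Hypothetical test file for the newly added language.
documents = SimpleDirectoryReader(
    input_files=[Path("tests/file/code/example.ts")],
    file_metadata=lambda x: {"filepath": x},
).load_data()

nodes = CodeHierarchyNodeParser(
    language="typescript",
    code_splitter=CodeSplitter(
        language="typescript", max_chars=1000, chunk_lines=10
    ),
).get_nodes_from_documents(documents)

# Expect the file to be split into multiple scope nodes (classes, functions, ...).
assert len(nodes) > 1
```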
People often ask, "How do I find the node types I need for a new language?" The best way is to use breakpoints. I have added a comment, `TIP: This is a wonderful place to put a debug breakpoint`, in the `code_hierarchy.py` file; put a breakpoint there, input some code in the desired language, and step through it to find the name of the node you want to capture.
The code as it is should handle any language which:
- expects you to indent deeper scopes
- has a way to comment, either full line or between delimiters
Future
I'm considering adding all the languages from aider by incorporating `.scm` files instead of `_SignatureCaptureType`, `_SignatureCaptureOptions`, and `_DEFAULT_SIGNATURE_IDENTIFIERS`.
Contributing
You will need to set your `OPENAI_API_KEY` in your environment to run the notebook or test the pack.
You can run tests with `pytest tests` in the root directory of this pack.