Project description
sast-secureai
sast-secureai is a Python package that automates static application security testing (SAST) by leveraging AI-powered language models. It allows users to input code snippets or textual descriptions of software functionality and returns a structured report highlighting potential security vulnerabilities, threats, or risks.
Installation
You can install the package using pip:
pip install sast_secureai
Usage
Here's an example of how to use the package in Python:
from sast_secureai import sast_secureai
user_input = "Your code snippet or description here..."
response = sast_secureai(user_input)
print(response)
Function Parameters
- user_input (str): The code snippet or description of functionality to analyze.
- llm (Optional[BaseChatModel]): An optional LangChain language model instance. If not provided, the default ChatLLM7 will be used.
- api_key (Optional[str]): An optional API key for LLM7. If not provided, the package attempts to retrieve it from the environment variable LLM7_API_KEY.
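The documented fallback between the api_key argument and the LLM7_API_KEY environment variable can be sketched as follows. This is an illustrative helper, not the package's actual implementation; the function name resolve_api_key is hypothetical.

```python
import os

def resolve_api_key(api_key=None):
    # Illustrative sketch of the documented behaviour: prefer an
    # explicitly passed key, otherwise fall back to the environment
    # variable LLM7_API_KEY (returns None if neither is set).
    if api_key is not None:
        return api_key
    return os.environ.get("LLM7_API_KEY")
```

With this precedence, an explicit argument always wins over the environment variable.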
Custom LLM Support
You can pass your own language model instance to suit your preferred LLM provider by importing and initializing it accordingly. Supported examples include:
from langchain_openai import ChatOpenAI
from sast_secureai import sast_secureai
llm = ChatOpenAI()
response = sast_secureai(user_input, llm=llm)
or
from langchain_anthropic import ChatAnthropic
from sast_secureai import sast_secureai
llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # model name is illustrative; ChatAnthropic requires one
response = sast_secureai(user_input, llm=llm)
or
from langchain_google_genai import ChatGoogleGenerativeAI
from sast_secureai import sast_secureai
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # model name is illustrative; ChatGoogleGenerativeAI requires one
response = sast_secureai(user_input, llm=llm)
The default LLM used is ChatLLM7 from langchain_llm7, which can be configured with an API key. Obtain your free API key at https://token.llm7.io/.
Rate Limits
The default rate limits of LLM7's free tier are sufficient for most use cases. For higher rate limits, set your API key via the environment variable LLM7_API_KEY or pass it directly:
response = sast_secureai(user_input, api_key="your_api_key")
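Alternatively, the environment variable can be set in the shell before running your script, so no key appears in the code. The key value below is a placeholder:

```shell
# Export the key so sast_secureai can read it from the environment
export LLM7_API_KEY="your_api_key"
```

Note that export only affects the current shell session; add the line to your shell profile to make it persistent.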
Support and Issues
For issues or feature requests, please use the GitHub issues page:
https://github.com/chigwell/sast-secureai/issues
Author
Eugene Evstafev
Email: hi@eugene.plus
GitHub: chigwell
Download files
Source Distribution
Built Distribution
File details
Details for the file sast_secureai-2025.12.21084550.tar.gz.
File metadata
- Download URL: sast_secureai-2025.12.21084550.tar.gz
- Upload date:
- Size: 5.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ec6ba0e000955e026b2772277803068190b0e8b0aa324d3597f41a8b2a60ddba |
| MD5 | 943939eb86ae0140d55f377ea06bd12e |
| BLAKE2b-256 | 83edd6bf2c7c0a1ae1ecc5023195f438ac6318664902132c35687b64331d2ef6 |
File details
Details for the file sast_secureai-2025.12.21084550-py3-none-any.whl.
File metadata
- Download URL: sast_secureai-2025.12.21084550-py3-none-any.whl
- Upload date:
- Size: 6.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | dc3f1df1dc0741a24a43a807fe9e8b5302452eadee3cc5b4f9c838975fe5d71e |
| MD5 | e267ac7b0098841cfe83e85e83841787 |
| BLAKE2b-256 | 968cc26560c40a596f0ee36d8399585ec263c353a656c30992f56d898801db1f |
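The SHA256 digests listed for each distribution file can be checked locally after downloading. A minimal sketch using only the standard library (the function name sha256_of is illustrative):

```python
import hashlib

def sha256_of(path, chunk_size=8192):
    # Stream the file in chunks so large archives need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare the returned hex digest against the value in the table above; a mismatch indicates a corrupted or tampered download.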