# ML-Inz-example

A project with a Flask backend and an ML service that answers questions about documents.
This project contains two main components:
- A Flask backend serving HTML, CSS, and JavaScript.
- A machine learning service using the `nvidia/Llama3-ChatQA-1.5-8B` model to answer questions based on documents.
## Project Overview

### Flask Backend

The Flask backend serves the frontend application, which includes HTML, CSS, and JavaScript files. It provides the user interface where users can enter their question and document, and view the generated answer.
### Machine Learning Service

The machine learning service uses the `nvidia/Llama3-ChatQA-1.5-8B` model. The service accepts a question and a document as input, processes them with the model, and returns an answer based on the content of the document.
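ChatQA-style models expect a plain-text prompt that stacks a system message, the source document as context, and the user turn. As an illustration only — the exact template should be taken from the model card, so treat the wording below as an assumption — a helper that assembles such a prompt might look like:

```python
def build_chatqa_prompt(question: str, document: str) -> str:
    """Assemble a ChatQA-style prompt: system message, the source document
    as context, then the user's question. The exact template here is an
    assumption; consult the model card for the official format."""
    system = (
        "System: This is a chat between a user and an artificial intelligence "
        "assistant. The assistant gives helpful answers to the user's "
        "questions based on the context."
    )
    return f"{system}\n\n{document}\n\nUser: {question}\n\nAssistant:"


prompt = build_chatqa_prompt(
    "What colour is the sky?",
    "The sky appears blue because of Rayleigh scattering.",
)
```

The model generates its answer as the continuation after the trailing `Assistant:` marker.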
## Features
- Flask Backend: Serves the frontend and handles user interactions.
- ML Service: Uses a state-of-the-art language model to provide answers based on the provided documents.
- Docker Support: Both components can be easily deployed using Docker and Docker Compose.
- CORS Handling: Ensures smooth interaction between the backend and the ML service.
- Loading Indicator: Shows a loading spinner while the ML model processes the input.
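Because the frontend (served from port 8000) calls the ML service (port 8001) from the browser, the service must emit CORS headers. Whether this project uses an extension such as `flask-cors` or sets the headers by hand is not shown here; a minimal hand-rolled sketch (the `/ping` route is purely illustrative) could be:

```python
from flask import Flask

app = Flask(__name__)


@app.after_request
def add_cors_headers(response):
    # Allow browser requests from other origins to reach this service.
    # In production, restrict the origin instead of using "*".
    response.headers["Access-Control-Allow-Origin"] = "*"
    response.headers["Access-Control-Allow-Headers"] = "Content-Type"
    response.headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
    return response


@app.route("/ping")
def ping():
    # Illustrative endpoint so the CORS behaviour can be observed.
    return "pong"
```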
## Project Structure

```
my_project/
│
├── backend/
│   ├── app/
│   │   ├── __init__.py
│   │   ├── main.py
│   │   ├── templates/
│   │   │   └── index.html
│   │   └── static/
│   │       ├── css/
│   │       │   └── style.css
│   │       └── js/
│   │           └── script.js
│   ├── __init__.py
│   ├── Dockerfile
│   └── requirements.txt
│
├── ml_service/
│   ├── app/
│   │   ├── __init__.py
│   │   ├── main.py
│   │   └── model.py
│   ├── __init__.py
│   ├── Dockerfile
│   └── requirements.txt
│
├── tests/
│   ├── __init__.py
│   ├── test_backend.py
│   └── test_ml_service.py
├── __init__.py
├── docker-compose.yml
├── setup.py
├── README.md
├── MANIFEST.in
└── pytest.ini
```
## Running the Project

### Using Docker Compose

To build and run the project with Docker Compose, use the following command:

```
docker-compose up --build
```

This command builds the Docker images for both the Flask backend and the ML service, then starts the containers.
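A `docker-compose.yml` wiring the two services together might look roughly like the sketch below; the service names, ports, and build contexts are assumptions based on the project structure, not the file's actual contents:

```yaml
services:
  backend:
    build: ./backend
    ports:
      - "8000:8000"
    depends_on:
      - ml_service
  ml_service:
    build: ./ml_service
    ports:
      - "8001:8001"
```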
### Backend

The Flask backend will be accessible at http://localhost:8000. It serves the frontend application where users can input their question and document.
### ML Service

The ML service will be accessible at http://localhost:8001. This service processes the question and document using the `nvidia/Llama3-ChatQA-1.5-8B` model and returns the answer.
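The backend (or any other client) reaches the service over HTTP. The sketch below uses only the standard library; the endpoint path `/answer` and the JSON field names are illustrative assumptions, not the service's documented API:

```python
import json
import urllib.request


def build_request(question: str, document: str,
                  url: str = "http://localhost:8001/answer") -> urllib.request.Request:
    """Build a JSON POST request for the ML service.
    The URL path and payload keys are illustrative assumptions."""
    payload = json.dumps({"question": question, "document": document}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )


req = build_request("Who wrote the report?", "The report was written by Alice.")
# Sending it requires the service to be running:
# answer = json.load(urllib.request.urlopen(req))["answer"]
```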
## Usage

1. Open your browser and navigate to http://localhost:8000.
2. Enter your question in the provided input field.
3. Enter the document text in the provided textarea.
4. Click the "Generate" button.
5. Wait for the response to be generated and displayed below the form.
## Development

### Prerequisites

- Docker and Docker Compose
- Python 3.11 or higher
### Setting Up the Environment

1. Clone the repository:

   ```
   git clone https://github.com/Appjey/ML-Inz-example.git
   cd my_project
   ```

2. Create and activate a virtual environment:

   ```
   python -m venv venv
   source venv/bin/activate  # On Windows use `venv\Scripts\activate`
   ```

3. Install the dependencies:

   ```
   pip install -r backend/requirements.txt
   pip install -r ml_service/requirements.txt
   ```
### Running Locally

To run the backend locally:

```
cd backend
python app/main.py
```

To run the ML service locally:

```
cd ml_service
python app/main.py
```
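The backend's entry point ties the pieces above together. As a sketch only — the real `backend/app/main.py` may differ, and the `/health` route is an illustrative addition — it might look like:

```python
from flask import Flask, render_template

app = Flask(__name__)


@app.route("/")
def index():
    # Serves templates/index.html, which loads the CSS and JS assets.
    return render_template("index.html")


@app.route("/health")
def health():
    # Illustrative liveness check, handy behind Docker Compose.
    return {"status": "ok"}


if __name__ == "__main__":
    # Bind to all interfaces so the app is reachable from outside a container.
    app.run(host="0.0.0.0", port=8000)
```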
### Running Tests

To run the tests, use the following command:

```
pytest
```
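The files under `tests/` follow the usual pytest conventions: plain functions named `test_*` containing bare `assert` statements. As a self-contained illustration (the `answer_question` stand-in below is invented for the example and is not the project's real model call):

```python
def answer_question(question: str, document: str) -> str:
    """Stand-in for the real model call so this example is self-contained.
    A trivial extractive 'model': return the sentence sharing the most
    words with the question."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    q_words = set(question.lower().split())
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))


def test_answer_mentions_relevant_sentence():
    doc = "Alice wrote the report. Bob reviewed it."
    assert "Alice" in answer_question("Who wrote the report?", doc)
```

Running `pytest` discovers and executes every such function automatically.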
## Contributing

If you would like to contribute, please open a pull request with your changes. Make sure to update the tests as appropriate.
## License

This project is licensed under the MIT License. See the LICENSE file for details.