
Custom Intents

v0.8.0 (it's still a buggy alpha)

A simple way to create chatbot AIs, image classification AIs, and more!

Installation

You can install the package from PyPI with pip:

pip install CustomIntents

Examples

Demo1

Demo2

A package built on top of Keras for creating and training deep learning chatbots (text classification), binary image classification, and linear regression models in just three lines of code.

The package is inspired by NeuralNine's neuralintents package, a package that lets you build chatbot models in 3 lines of code.

Setting Up A Basic Chatbot

from CustomIntents import ChatBot
assistant = ChatBot(model_name="test_model", intents="intents.json")
assistant.train_model()
assistant.save_model()
done = False
while not done:
    message = input("Enter a message: ")
    if message == "STOP":
        done = True
    else:
        assistant.request(message)

Binding Functions To Requests

This is inspired by neuralintents.

from CustomIntents import ChatBot
def function_for_greetings():
    print("You triggered the greetings intent!")
    # Some action you want to take
def function_for_stocks():
    print("You triggered the stocks intent!")
    # Some action you want to take
mappings = {'greeting': function_for_greetings, 'stocks': function_for_stocks}
assistant = ChatBot('intents.json', intent_methods=mappings, model_name="test_model")
assistant.train_model()
assistant.save_model()
done = False
while not done:
    message = input("Enter a message: ")
    if message == "STOP":
        done = True
    else:
        assistant.request(message)

Sample intents.json File

{"intents": [
  {"tag": "greeting",
    "patterns": ["Hi", "Salam", "Nice to meet you", "Hello", "Good day", "Hey", "greetings"],
    "responses": ["Hello!", "Good to see you again!", "Hi there, how can I help?"]
  },
  {"tag": "goodbye",
    "patterns": ["bye", "good bye", "see you later"],
    "responses": ["bye", "good bye"],
    "context_set": ""
  },
  {"tag": "something",
    "patterns": ["something", "something else", "etc"],
    "responses": ["the response to something"],
  }
]
}

ChatBot Class

The first class in the CustomIntents package is ChatBot. It's exactly what you think it is: a chatbot.

Init arguments

def __init__(self, intents, intent_methods, model_name="assistant_model", threshold=0.25, w_and_b=False,
             tensorboard=False):

intents : the path to your intents file

intent_methods : a dictionary mapping intent tags to functions

model_name : the name of your model

threshold : the confidence threshold of your model; set to 0.25 by default

w_and_b : connects to Weights & Biases (wandb) if set to True (you will need to log in first)

tensorboard : not available at this time
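
For example, a minimal sketch of initializing a bot with these arguments (the file name, the mapped function, and the threshold value here are just placeholders):

from CustomIntents import ChatBot

def greet():
    print("greeting intent triggered")

# "my_intents.json" and the greet function are illustrative only
assistant = ChatBot(intents="my_intents.json",
                    intent_methods={"greeting": greet},
                    model_name="my_assistant",
                    threshold=0.5,
                    w_and_b=False)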

Training

You can start training your model with a single call to train_model.

train_model arguments:

def train_model(self, epoch=None, batch_size=5, learning_rate=None,
                ignore_letters=None, timeIt=True, model_type='s1',
                validation_split=0, optimizer=None, accuracy_and_loss_plot=True):

epoch : an epoch means training the neural network on all of the training data for one cycle; this argument says how many of these cycles to run

batch_size : integer or None. Number of samples per gradient update (you can usually just ignore this)

learning_rate : the learning rate is a hyper-parameter that controls how much the weights of the neural network are adjusted with respect to the loss gradient. It defines how quickly the network updates what it has learned (in simple terms, a bigger value makes the model learn faster, but it can also go off track faster)

ignore_letters : a list of characters you want to ignore (by default it ignores ? . , ! ; pass an empty list if you don't want to ignore them)

timeIt : times the training

model_type : you can select one of the predefined models (we will look at the available models later on)

validation_split : splits off a portion of your data for validation only (the model will not be trained on it). It should be a float between 0 and 1 (I recommend not creating a validation split unless you have a really huge dataset with lots of similar patterns)

optimizer : you can choose between SGD, Adam, Adamax, and Adagrad
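
As a rough sketch, a training call that sets a few of these arguments might look like this (all values are illustrative, and the exact string accepted for optimizer is an assumption):

assistant.train_model(epoch=200,
                      batch_size=5,
                      learning_rate=0.01,
                      model_type="s1",
                      validation_split=0,
                      optimizer="Adam",  # assumed spelling of the optimizer name
                      timeIt=True)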

save_model

It saves your model as two .pkl files and a .h5 file (don't include .h5 or .pkl in the name).

def save_model(self, model_name=None):

model_name : if it's None (default), it saves the files as model_name.h5, model_name_words.pkl and model_name_classes.pkl, where model_name is the name you specified in the first place

load_model

It loads a model from those three files.

def load_model(self, model_name=None):

model_name : if it's None (default), it looks for files named model_name.h5, model_name_words.pkl and model_name_classes.pkl, where model_name is the name you specified in the first place
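
A small sketch of saving a trained bot and loading it back in a later session instead of retraining (the file names follow the pattern described above):

# after training
assistant.save_model()  # writes test_model.h5, test_model_words.pkl and test_model_classes.pkl

# in a later session
assistant = ChatBot(intents="intents.json", intent_methods={}, model_name="test_model")
assistant.load_model()  # reads the three files saved above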

request_tag

You pass it a message and it returns the predicted tag.

def request_tag(self, message, debug_mode=False, threshold=None):

message : the actual message

debug_mode : prints every step of the process for debugging purposes

threshold : you can set a confidence threshold; if not specified, it uses the threshold you set when initializing the bot, and if you didn't specify one there either, it defaults to 0.25
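
For example, with the sample intents.json file above (the message and threshold are just illustrative):

tag = assistant.request_tag("Good day", threshold=0.5)
print(tag)  # expected to print something like "greeting"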

request_response

The same as request_tag, but it returns a random response from the matching intent.

def request_response(self, message, debug_mode=False, threshold=None):

gradio_preview

It opens up a nice GUI for testing your model in your browser.

def gradio_preview(self, ask_for_threshold=False, share=False, inbrowser=True):

ask_for_threshold : if set to True, it adds a slider you can use to set the model's threshold

share : if set to True, it makes the demo public

inbrowser : automatically opens a new browser tab if set to True
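
A one-line sketch of launching the preview locally:

assistant.gradio_preview(ask_for_threshold=True, share=False, inbrowser=True)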

Demo

cli_preview

def cli_preview(self):

A simple CLI interface for testing your model.

gui_preview

A custom GUI for trying out or even deploying your model.

def gui_preview(self, user_name=""):

user_name : for now, it only greets you by name if you pass it

Demo

Model types

You can choose one of the predefined models according to how many different patterns and tags you have (you can just try them and see which one is right for your use case).

xs1 : a very fast and small model

xs2 : still small, but better for more tags

s1 : the default model (hidden layers: 128-64)

s2 : better than s1 when you have a small number of similar tags that s1 can't tell apart

s3 : most of the time you don't need this (hidden layers: 128-64-64)

s4 : like s2 on steroids; suited to cases where you have a lot of patterns for tags whose patterns are similar

s5 : most of the time you don't need this either (hidden layers: 128-64-64-32)

m1 : a great balance of performance and accuracy for medium-sized intent files

m2 : great accuracy for medium-sized intent files

m3 : m3 is to m1 as s2 is to s1; it is more suited to a smaller number of tags that are hard to differentiate

l1 - l2 - l3 - l4 - l5 - xl1 - xl2 - xl3 - xl5 : bigger models; for more information read MODELS.md

JsonIntents Class

This class is used to add to and edit JSON files containing intents.

def __init__(self, json_file_adrees):

You just need to pass it the path of the JSON file you want to work with.

add_pattern_app

A function that asks you to input new patterns for tags (you can pass a specific tag to be asked only about that one, or it will cycle through all of them; input D or d to go to the next tag).

def add_pattern_app(self, tag=None):

add_tag_app

it will add new tags to your json file

def add_tag_app(self, tag=None, responses=None):

tag : the name of the new tag you want to add

responses : a list of responses (you can also add them later)

delete_duplicate_app

It checks for duplicate patterns and deletes them for you.

An example of using this class:

file = JsonIntents("internet intents.json")
file.delete_duplicate_app()
file.add_tag_app(tag="about")
file.add_pattern_app("about")

ImageClassificator class

The second class in the CustomIntents package is ImageClassificator. It lets you create and train deep learning image classification models with just three lines of code!

Init arguments

def __init__(self, data_folder="data",
                 model_name="imageclassification_model",
                 number_of_classes=2,
                 classes_name=None,
                 gpu=None, 
                 checkpoint_filepath='/tmp/checkpoint_epoch:{epoch}'):

data_folder : the path to where your data is located

model_name : the name of your model

number_of_classes : the number of different classes in your data (put the pictures of each class in its own subfolder of your data folder)

classes_name : a list of names corresponding to your classes (they should match the names of the corresponding data subfolders; for example, if your data folder has 3 subfolders named banana, apple and pineapple, you should pass ["banana", "apple", "pineapple"])

gpu : you can pass True or False; if you don't pass anything, it tries to use your GPU if you have a CUDA-enabled graphics card with CUDA Toolkit and cuDNN installed, and falls back to your CPU otherwise

checkpoint_filepath : the path where you want your checkpoints saved
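
As a sketch, using the banana/apple/pineapple folders mentioned above (the folder layout and names are only an illustration):

# expected layout:
# data/
#   banana/     <- pictures of bananas
#   apple/      <- pictures of apples
#   pineapple/  <- pictures of pineapples

from CustomIntents import ImageClassificator

model = ImageClassificator(data_folder="data",
                           model_name="fruit_model",
                           number_of_classes=3,
                           classes_name=["banana", "apple", "pineapple"])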

Training

You can start training your model with a single call to train_model.

train_model arguments:

def train_model(self, epochs=20,
                    model_type="s1",
                    logdir=None,
                    optimizer_type="adam",
                    learning_rate=0.00001,
                    class_weight=None,
                    prefetching=False,
                    plot_model=True,
                    validation_split=0.2,
                    test_split=0,
                    augment_data=False,
                    tensorboard_usage=False,
                    stop_early=False,
                    checkpoint=False):

epochs : an epoch means training the neural network on all of the training data for one cycle; this argument says how many of these cycles to run

model_type : you can select one of the predefined models (we will look at the available models later on)

logdir : a directory to hold your TensorBoard log files; you can leave it empty if you don't care

optimizer_type : you can only choose adam right now

learning_rate : the learning rate is a hyper-parameter that controls how much the weights of the neural network are adjusted with respect to the loss gradient. It defines how quickly the network updates what it has learned (in simple terms, a bigger value makes the model learn faster, but it can also go off track faster)

class_weight : if you have an unbalanced dataset, you can pass a dictionary with the weight you want to associate with every class

prefetching : prefetches data

plot_model : plots the model architecture for you

validation_split : splits off a portion of your data for validation only (the model will not be trained on it). It should be a float between 0 and 1 (I recommend not creating a validation split unless you have a really huge dataset with lots of similar patterns)

augment_data : if set to True, the model will also be trained on augmented data

tensorboard_usage : uses TensorBoard

stop_early : if set to True, training stops if the validation loss stays the same or increases for more than 5 epochs

checkpoint : if set to True, it saves a checkpoint whenever the validation loss is the lowest seen so far
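
A hedged example of a training call that uses some of the optional flags (all values are illustrative; treating the class_weight keys as class indices is an assumption based on the commented-out class_weight in the BinaryImageClassificator example further down):

model.train_model(epochs=20,
                  model_type="s1",
                  learning_rate=0.0001,
                  validation_split=0.2,
                  class_weight={0: 1.0, 1: 1.0, 2: 2.0},  # assumed: keys are class indices
                  stop_early=True,
                  checkpoint=True)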

save_model

It saves your model as a .h5 file.

def save_model(self, model_file_name=None):

model_file_name : if it's None (default), it saves the file as model_name.h5, where model_name is the name you specified in the first place

load_model

It loads a model from a .h5 file.

def load_model(self, name="imageclassification_model"):

name : it looks for a file named after the model (imageclassification_model.h5 by default)

predict

Now you can make predictions.

def predict(self, image, image_type=None, full_mode=False, accuracy=False):

image : a path to an image file, a NumPy array of the image, or a cv2 image

image_type : if it's None (default), it tries to detect whether the image is a cv2 image, a NumPy array, or a path to an image

full_mode : if set to True, it returns every class and its probability

accuracy : if set to True, it returns a tuple of the class name and the probability

(if both full_mode and accuracy are set to False (the default behaviour), it just returns the most likely class name)
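
For example (the image path is a placeholder):

# default behaviour: just the most likely class name
label = model.predict("some_image.jpg")

# class name plus its probability
label, probability = model.predict("some_image.jpg", accuracy=True)

# every class with its probability
all_classes = model.predict("some_image.jpg", full_mode=True)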

predict_face

def predict_face(self, img, image_type=None, full_mode=False,
                 accuracy=False, return_picture=False,
                 save_returned_picture=False, saved_returned_picture_name="test.jpg",
                 show_returned_picture=False):

img : a path to an image file, a NumPy array of the image, or a cv2 image

image_type : if it's None (default), it tries to detect whether the image is a cv2 image, a NumPy array, or a path to an image

full_mode : if set to True, it returns every class and its probability

accuracy : if set to True, it returns a tuple of the class name and the probability

return_picture : if set to True, it returns a picture with the detected faces in rectangles and their predicted class written on top of them

save_returned_picture : if set to True, it saves the returned picture

saved_returned_picture_name : if you set save_returned_picture to True, you can use this to specify the name of the saved picture

show_returned_picture : if set to True, it opens the returned picture in a cv2 preview
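
A sketch of running face prediction on a photo and saving the annotated result (the file names are placeholders):

result = model.predict_face("group_photo.jpg",
                            return_picture=True,
                            save_returned_picture=True,
                            saved_returned_picture_name="group_photo_annotated.jpg",
                            show_returned_picture=False)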

realtime_prediction

def realtime_prediction(self, src=0):

src : if you have multiple webcams or virtual webcams, this lets you choose between them; if you only have one, leave it empty

realtime face prediction

It's exactly like the realtime_prediction() method, but it detects faces with a Haar cascade and feeds the faces to the model for prediction.

def realtime_face_prediction(self, src=0, frame_rate=5):

src : if you have multiple webcams or virtual webcams, this lets you choose between them; if you only have one, leave it empty

frame_rate : the number of frames to skip before predicting again

gradio_preview

It opens up a nice GUI for testing your model in your browser.

def gradio_preview(self, share=False, inbrowser=True, predict_face=False):

share : if set to True it will make the demo public

inbrowser : automatically opens a new browser tab if set to True

predict_face : if set to True, it looks for faces and feeds them to the model

Example of using ImageClassificator

In this example I have a data/full folder that contains 4 subfolders (beni, khodm, matin, parsa), and each of them holds a lot of pictures of my friends (the folder name corresponds to their name; for example, the beni folder contains beni's pictures; by the way, khodm means "myself" in my language). I want to train a model to detect which one of us is in a picture.

from CustomIntents import ImageClassificator

# train and save the model
model = ImageClassificator(model_name="test_m1", data_folder="data/full", number_of_classes=4, classes_name=["beni", "khodm", "matin", "parsa"])
model.train_model(epochs=10, model_type="m1", logdir="logs", learning_rate=0.0001, prefetching=False)
model.save_model(model_file_name="test_m1")

# later: load the saved model and run realtime face prediction
from CustomIntents import ImageClassificator

model = ImageClassificator(model_name="test_m1", data_folder="data/full", number_of_classes=4, classes_name=["beni", "khodm", "matin", "parsa"])
model.load_model(name="test_m1")
result = model.realtime_face_prediction()

Demo3

As you can see in the picture above, it detects that it is me in the picture with really high confidence.

StyleTransformer class

The next class in the CustomIntents package is StyleTransformer. It lets you transform an image into the style of another image.

Demo3

Init arguments

def __init__(self, image_path=None,
             style_reference_image_path=None,
             result_prefix="test_generated"):

image_path : the path to the original image

style_reference_image_path : the path to the style reference image

result_prefix : the prefix for the result file

transfer method

The main method of this class.

def transfer(self, iterations=4000, iteration_per_save=100):

iterations : the number of iterations

iteration_per_save : save the result every N iterations (where N is the number you pass)

gradio_preview method

A browser-based app for using this class.

def gradio_preview(self, share=False, inbrowser=True):

share : if set to True it will make the demo public

inbrowser : automatically opens a new browser tab if set to True

Example of using StyleTransformer

Passing the paths of the base image and the style reference image:

from CustomIntents import StyleTransformer

model = StyleTransformer(image_path="base_image.jpg", style_reference_image_path="style_reference_image.jpg")
model.transfer(iterations=500, iteration_per_save=50)

This code runs the transformation for 500 iterations and saves the result every 50 iterations.

Using gradio preview

from CustomIntents import StyleTransformer

model = StyleTransformer()
model.gradio_preview()

Demo3

*This model is slow, so use reasonable iteration counts.

Don't use ridiculous numbers like 4000 like I did; it took about 15 minutes on a 1660 Ti.

BinaryImageClassificator class

The next class in the CustomIntents package is BinaryImageClassificator. It lets you create and train deep learning binary image classification models with just three lines of code!

Init arguments

def __init__(self, data_folder="data", model_name="imageclassification_model",
             first_class="1", second_class="2"):

data_folder : the path to your data folder (put your training images in two subfolders representing their label (class))

model_name : your model's name

first_class : you can name your classes so that when you predict, it returns their names instead of 1s and 2s

second_class : the name of the second class

Training

You can start training your model with a single call to train_model.

train_model arguments:

def train_model(self, epochs=20, model_type="s1", logdir=None,
                optimizer_type="adam", learning_rate=0.00001,
                class_weight=None, prefetching=False, plot_model=True,
                validation_split=0.2):

epochs : an epoch means training the neural network on all of the training data for one cycle; this argument says how many of these cycles to run

model_type : you can select one of the predefined models (read MODELS.md for more information)

logdir : a directory to hold your TensorBoard log files; you can leave it empty if you don't care

optimizer_type : you can only choose adam right now

learning_rate : the learning rate is a hyper-parameter that controls how much the weights of the neural network are adjusted with respect to the loss gradient. It defines how quickly the network updates what it has learned (in simple terms, a bigger value makes the model learn faster, but it can also go off track faster)

class_weight : if you have an unbalanced dataset, you can pass a dictionary with the weight you want to associate with every class

prefetching : prefetches data

plot_model : plots the model architecture for you

validation_split : splits off a portion of your data for validation only (the model will not be trained on it). It should be a float between 0 and 1 (I recommend not creating a validation split unless you have a really huge dataset with lots of similar patterns)

save_model

It saves your model as a .h5 file (don't include .h5 in the name).

def save_model(self, model_file_name=None):

model_file_name : if it's None (default), it saves the file as model_name.h5, where model_name is the name you specified in the first place

load_model

It loads a model from a .h5 file.

def load_model(self, name="imageclassification_model"):

name : it looks for a file named after the model (imageclassification_model.h5 by default)

predict

Now you can make predictions.

def predict(self, image, image_type=None, accuracy=False):

image : a path to an image file, a NumPy array of the image, or a cv2 image

image_type : if it's None (default), it tries to detect whether the image is a cv2 image, a NumPy array, or a path to an image

accuracy : if set to True, it returns a tuple of the class name and the probability

predict from file path (legacy)

It predicts which class an image belongs to from a file path.

def predict_from_files_path(self, image_file_path):

image_file_path : the path of the image you want to predict

It returns the name of the class and the probability that it is correct.

predict from imshow (legacy)

It predicts which class an image belongs to from a cv2 image object.

def predict_from_imshow(self, img):

img : a cv2 image object

It returns the name of the class and the probability that it is correct.

realtime prediction

It predicts from a live video feed (it opens a live cv2 video window).

def realtime_prediction(self, src=0):

src : if you have multiple webcams or virtual webcams, this lets you choose between them; if you only have one, leave it empty

realtime face prediction

It's exactly like the realtime_prediction() method, but it detects faces with a Haar cascade and feeds the faces to the model for prediction.

def realtime_face_prediction(self, src=0):

src : if you have multiple webcams or virtual webcams, this lets you choose between them; if you only have one, leave it empty

gradio_preview

It opens up a nice GUI for testing your model in your browser.

def gradio_preview(self, share=False, inbrowser=True):

share : if set to True it will make the demo public

inbrowser : automatically opens a new browser tab if set to True

Example of using BinaryImageClassificator

from CustomIntents import BinaryImageClassificator

# train and save the model
model = BinaryImageClassificator(model_name="test1", data_folder="data/parsa", first_class="sad", second_class="happy")
model.train_model(epochs=5, model_type="s1", logdir="logs", learning_rate=0.0001, prefetching=True)  # , class_weight={0: 1.0, 1: 2.567750677506775})
model.save_model(model_file_name="test1")

# later: load the saved model and run realtime face prediction
from CustomIntents import BinaryImageClassificator

model = BinaryImageClassificator(model_name="test1", data_folder="data/parsa", first_class="sad", second_class="happy")
model.load_model("models/test1")
model.realtime_face_prediction()

PLinearRegression class

It's a simple linear regression class with one input and one output.

Init arguments

def __init__(self, data=None, x_axes1=None, y_axes1=None, model_name="test model"):

data : if you have your data as an array like [[x_values], [y_values]], you can pass it here

x_axes1 : if you have your x values (inputs) and y values (outputs) separately, you can pass the array that contains the x values here

y_axes1 : if you have your x values (inputs) and y values (outputs) separately, you can pass the array that contains the y values here

model_name : the name of your model

train_model

You train your model on your data with this method.

def train_model(self, algorythm="1", training_steps=10000,
                start_step=None, verbose=1, plot_input_data=True,
                learning_rate=0.01, plot_result=True):

algorythm : you can choose between "1", "1.1" and "2" ("1" is really simple and fast, but "2" is the proper linear regression one)

training_steps : how many steps to train your model for

start_step : the starting step for algorithms 1 and 1.1

verbose : if set to 1, it shows the details of training at every step

plot_input_data : plots the training data

learning_rate : the learning rate used by algorithm 2

plot_result : plots the line of best fit it found alongside the training data

save_model_to_csv

It saves your model as a CSV file containing the parameters of the line of best fit.

def save_model_to_csv(self, file_dir="test_model.csv"):

file_dir : the name and directory you want to save your model to (include .csv)

load_model_from_csv

It loads your model from a CSV file containing the parameters of the line of best fit.

def load_model_from_csv(self, file_dir="test_model.csv"):

file_dir : the name and directory you want to load your model from (include .csv)

make_prediction

It makes predictions for you.

def make_prediction(self, x):

x : your input, either a single numerical value or a NumPy array containing multiple numerical values

It returns either a float (if your input is just a single numerical value) or a NumPy array containing multiple floats.
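
A minimal end-to-end sketch of this class, using the arguments described above (the data values are made up):

import numpy as np
from CustomIntents import PLinearRegression

# made-up data where y is roughly 2x + 1
x_values = np.array([0, 1, 2, 3, 4, 5])
y_values = np.array([1.1, 2.9, 5.2, 7.1, 8.8, 11.2])

model = PLinearRegression(x_axes1=x_values, y_axes1=y_values, model_name="demo_line")
model.train_model(algorythm="2", training_steps=10000, learning_rate=0.01)
model.save_model_to_csv(file_dir="demo_line.csv")

print(model.make_prediction(6))                  # a single float
print(model.make_prediction(np.array([7, 8])))   # a NumPy array of floats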
