
Quicksticks-oss/LFAI-LLM


Discord · Online Demo · Open in Colab

About

This GitHub repository hosts an innovative project featuring an LSTM-based embedding GPT-like neural network. This network is designed to fuse diverse data modalities such as images, audio, sensor inputs, and text, creating a holistic and human-like sentient AI system with the ability to comprehend and respond across multiple data formats.

Models V1

Models V2

Screenshots

Training Inference

Documentation

Documentation V1

Usage/Examples

Inference V2

Open the script inference.py, change the MODEL and DEVICE variables if needed, and then run the script.
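For reference, a hypothetical sketch of those two variables inside inference.py (the variable names come from this README; the values are placeholders, not the script's actual defaults):

```python
# Hypothetical values -- edit these near the top of inference.py.
MODEL = "path/to/checkpoint.pt"  # placeholder path to a trained LFAI V2 model
DEVICE = "cuda"                  # switch to "cpu" if no GPU is available
```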

Inference V1

from inference import Inference

if __name__ == '__main__':
    # Load a trained LFAI V1 checkpoint (replace the placeholder with a real path).
    inference = Inference('Model Path Here')
    # Generate text from a prompt; `hidden` is the LSTM's hidden state.
    output, hidden = inference.run('MENENIUS:')
    print(output)

Training Simple LFAI V1

clear && python3 train.py --name="Model Name Here" --dataset="Dataset File or Path here" --contextsize=128

Training Simple LFAI V2

All training settings can be set in the TRAIN_SETTINGS.py script.

The main settings you want to pay attention to are TEXT_DATASET and max_iters.

TEXT_DATASET is the path to a file containing all of your UTF-8 or ASCII text. This could be, for example, Tiny Shakespeare.

max_iters is how many iterations through the dataset you would like to run. I would recommend setting this to a value like 5000 if you have a lower-tier GPU or CPU, but if you have a high-tier GPU I would set it to 25000 or 50000.
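Putting the two settings together, a TRAIN_SETTINGS.py fragment might look like this (the variable names come from this README; the values and the dataset path are illustrative):

```python
# Illustrative TRAIN_SETTINGS.py values -- adjust to your hardware and data.
TEXT_DATASET = "datasets/tiny_shakespeare.txt"  # example path to a UTF-8/ASCII text file
max_iters = 5000  # lower-tier GPU/CPU; try 25000-50000 on a high-tier GPU
```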

Finetuning V2

If you want to finetune an existing model, all you need to do is set FINETUNE to True and set LOAD_FILE to the model you want to finetune.
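A hypothetical fragment of those finetuning settings (FINETUNE and LOAD_FILE are named in this README; the checkpoint path is a placeholder):

```python
# Enable finetuning from an existing checkpoint in TRAIN_SETTINGS.py.
FINETUNE = True
LOAD_FILE = "models/pretrained_lfai.pt"  # placeholder: the model to finetune
```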

Roadmap V2

  • Train More Public Models
  • Additional networks like GRU

Roadmap V1

  • Train More Public Models

  • Additional networks like GRU

  • Use last token as new token in inference

  • Add more integrations

License

Non-Share and Non-Modify License

Authors
