
SLaM Tool: Small Language Model Evaluation Tool


SLaM Tool is an application for assessing the performance of Large and Small Language Models (LLMs/SLMs) on your specific use cases, using both Human Evaluation and Automatic Evaluation. You can deploy it locally or via Docker, generate responses to a given prompt with a variety of models (proprietary or open-source), and then evaluate those responses with human evaluators or automated methods.

Key Features

  • Admin Panel: Set up and manage the Human Evaluation UI, as well as oversee human evaluators.
  • Real-time Insights and Analytics: Gain valuable insights and analytics on the performance of the LLMs under evaluation.
  • Human Evaluation: Leverage human evaluators to assess the responses generated by the LLMs.
  • Automatic Evaluation: Employ LLMs and embedding similarity techniques for automated response evaluation.
  • Multiple Model Support: Generate responses for a given prompt using a diverse range of LLMs and SLMs, whether proprietary or open-source models served through Ollama.
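The automatic evaluation mentioned above relies in part on embedding similarity. As a minimal sketch of the idea (the four-dimensional vectors below are toy stand-ins, not real model embeddings), responses can be scored by the cosine similarity between the embedding of a reference answer and the embedding of each generated response:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 4-dimensional "embeddings"; a real setup would embed the reference
# answer and each model response with an embedding model.
reference = [0.1, 0.3, 0.5, 0.7]
response = [0.1, 0.3, 0.5, 0.7]
print(cosine_similarity(reference, response))  # close to 1.0 for identical vectors
```

Identical vectors score near 1.0 and orthogonal ones score 0.0, so the similarity gives a rough automatic ranking of responses against a reference.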

Installation

First, clone the repository:

git clone https://github.com/Jaseci-Labs/slam.git && cd slam

Prerequisites

  • Python 3.12 or higher
  • Docker (Optional)

Only the Human Evaluation Tool

Using Docker

  1. Build the Docker Image:

    cd app
    docker build -t slam/slam-app:latest .
  2. Run the container with environment variables:

    docker run -p 8501:8501 -e SLAM_ADMIN_USERNAME=<user_name> -e SLAM_ADMIN_PASSWORD=<password> slam/slam-app:latest
  3. Open your browser and navigate to http://localhost:8501
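Passing credentials with individual `-e` flags leaves them in your shell history. As an alternative sketch (the file name `slam.env` and the credentials are placeholders), Docker's `--env-file` option can read both variables from a file:

```shell
# Hypothetical slam.env holding the admin credentials -- replace the values
cat > slam.env <<'EOF'
SLAM_ADMIN_USERNAME=admin
SLAM_ADMIN_PASSWORD=change-me
EOF

# Same container as above, reading both variables from the file
# (uncomment once the slam/slam-app image has been built):
# docker run -p 8501:8501 --env-file slam.env slam/slam-app:latest
```

Keep `slam.env` out of version control, since it contains the admin password.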

Using Local Installation

  1. Create a virtual environment (optional):

    cd app
    conda create -n slam-app python=3.12 -y
    conda activate slam-app
  2. Install the requirements:

    pip install -r requirements.txt
  3. Set environment variables:

    export SLAM_ADMIN_USERNAME=<username>
    export SLAM_ADMIN_PASSWORD=<password>
  4. Run the application:

    streamlit run app.py
  5. Open your browser and navigate to http://localhost:8501
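Steps 3 and 4 above can be collected into one snippet. The credentials below are placeholders, and the `:?` parameter expansion makes the script abort with a readable message if either variable is unset, rather than starting the app without admin access:

```shell
# Placeholder credentials -- substitute your own
export SLAM_ADMIN_USERNAME=admin
export SLAM_ADMIN_PASSWORD=change-me

# Abort early with a clear error if either variable is unset or empty
: "${SLAM_ADMIN_USERNAME:?SLAM_ADMIN_USERNAME must be set}"
: "${SLAM_ADMIN_PASSWORD:?SLAM_ADMIN_PASSWORD must be set}"

# streamlit run app.py   # launch once the checks pass
```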

With the Query Engine and Ollama

Notice: Ensure you are running in an environment with GPU support.

Using Docker Compose (Recommended)

  1. Build the Docker Images:

    docker compose up -d --build
  2. Open your browser and navigate to http://localhost:8501

Using Local Installation

Follow the steps above to install the app, and then proceed with the steps below to install the Query Engine and Ollama.

For Response Generation & Automatic Evaluation (Optional)

For a streamlined experience, SLAM offers the option to leverage LLMs and SLMs for response generation and automated evaluation.

Open a new terminal window and navigate to the root directory of the SLAM repository.

  1. Create a separate virtual environment (Recommended):

    cd engine
    conda create -n slam-engine python=3.12 -y
    conda activate slam-engine
  2. Install the dependencies (you are already inside engine/ after step 1):

    pip install -r requirements.txt
  3. Run the Query Engine:

    jac run src/query_engine.jac
  4. Run the Ollama Server:

    curl https://ollama.ai/install.sh | sh
    ollama serve
  5. If you plan to use OpenAI's GPT-4, set the API key:

    export OPENAI_API_KEY=<your_api_key>

    If you have a remote Ollama server, set the server URL:

    export OLLAMA_SERVER_URL=http://<remote_server_ip>:11434/
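Ollama serves an HTTP API on port 11434, which is what the `OLLAMA_SERVER_URL` variable above points at. As a sketch of talking to it directly (the model name `llama2` is only an example; use whatever model you have pulled), this builds a request for Ollama's `/api/generate` endpoint:

```python
import json
import os
from urllib import request

# Respect OLLAMA_SERVER_URL if set, otherwise fall back to the local default
server = os.environ.get("OLLAMA_SERVER_URL", "http://localhost:11434/")
url = server.rstrip("/") + "/api/generate"

# "llama2" is just an example model name
payload = {
    "model": "llama2",
    "prompt": "Summarize the SLAM tool in one sentence.",
    "stream": False,
}
req = request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment with a running Ollama server:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

With `stream` set to `False`, the server returns the full completion in a single JSON object instead of a stream of chunks.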

Tutorials

Tips and Tricks

Continuous Backup of Results

SLAM offers a convenient option to maintain a continuous backup of your results to a Google Drive folder, ensuring your data is securely stored and easily accessible.

  1. Set the Google Drive Folder ID

    First, set the Google Drive folder ID as an environment variable:

    export GDRIVE_FOLDER_ID=<your_folder_id>
  2. Initiate a CRON Job

    Next, initiate a CRON job to run the scripts/backup.jac script every 5 minutes. Ensure that the PyDrive OAuth files (settings.yaml and credentials.json) are present in the directory from which the CRON job runs.

    # Run from an environment where JacLang is installed; note that cron
    # does not inherit your interactive shell's PATH or conda environment
    */5 * * * * jac run scripts/backup.jac

Follow the PyDrive OAuth instructions to set up the OAuth files.
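Because cron jobs do not inherit your shell's conda environment, one common pattern is a small wrapper script that the crontab entry calls instead of `jac` directly. This is a sketch; the conda path, the `slam-engine` environment name, and the checkout location are assumptions you should adjust to your machine:

```shell
# Hypothetical wrapper so cron runs jac inside the right environment
cat > backup_wrapper.sh <<'EOF'
#!/usr/bin/env bash
# Adjust the conda path and env name to wherever JacLang is installed
source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda activate slam-engine
cd "$HOME/slam"   # directory containing the OAuth files and scripts/
jac run scripts/backup.jac
EOF
chmod +x backup_wrapper.sh

# Corresponding crontab entry (every 5 minutes):
# */5 * * * * $HOME/backup_wrapper.sh
```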

Loading Backups

To load your backups, follow these simple steps:

  1. Open the App and Log In

    • Launch the SLAM application and log in with your admin credentials.
  2. Navigate to the Dashboard

    • Once logged in, navigate to the Dashboard page.
  3. Upload and Unzip

    • Drag and drop your ZIP file onto the designated area.
    • Click the "Upload and Unzip" button.
  4. Refresh and View

    • After the upload process is complete, click the "Refresh" button to see the updated diagrams and visualizations.

Frequently Asked Questions

  1. Error: "listen tcp :11434: bind: address already in use" when trying to run ollama serve

    • This error occurs when port 11434 is already in use, usually because the Ollama service is already running in the background. Stop it with sudo systemctl stop ollama, then run ollama serve again.
  2. Error: "No module named 'jac'" when trying to run the Query Engine

    • This error occurs when the jaclang package is not installed. Make sure the slam-engine environment is activated, then retry installing the requirements with pip install -r engine/requirements.txt.
  3. If you have any other questions, please feel free to reach out to us through the Issues section.

Contributing

We welcome contributions to enhance SLAM's capabilities. Please review the CONTRIBUTING.md file for our code of conduct and the process for submitting pull requests. We appreciate your interest in contributing to SLAM and look forward to your valuable contributions.
