Awesomely Refactored the Code with Docker Compose Support (#29)
* major refactor

* engine works perfectly

* updated query engine test action

* added comments

* Add LLM as an Evaluator component and Human Evaluation Implementations

* Refactor component imports and update file paths

* Update compose.yml with new service configurations

* Refactor component imports and update file paths

* Refactor component imports and update file paths in dashboard and generator components

* Refactor component imports and update file paths in dashboard and generator components

* Update query engine test action to use the correct file path

* Refactor component imports and update file paths in login and utils components. Updated the login and setup tests

* Update copyright year in LICENSE file
chandralegend committed May 2, 2024
1 parent 9841a97 commit 4987cf6
Showing 72 changed files with 600 additions and 479 deletions.
3 changes: 0 additions & 3 deletions .env.sample

This file was deleted.

5 changes: 3 additions & 2 deletions .github/CODEOWNERS
@@ -1,5 +1,6 @@
These owners will be the default owners for everything in the repo. Unless a later match takes precedence, [@chandralegend](https://) will be requested for review when someone opens a pull request.

### Code Owners
- [@chandralegend](https://)
- [@ashish.mahendra](https://)
- [@chandralegend](https://github.com/chandralegend)
- [@ashish.mahendra](https://github.com/ashish.mahendra)
- [@ypkang](https://github.com/ypkang)
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/1-bug-report.md
@@ -3,7 +3,7 @@ name: "🐞 Bug Report"
about: "Report an issue to help the project improve."
title: "[Bug] "
labels: "Type: Bug"
assignees:
assignees:

---

2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/2-feature-request.md
@@ -3,7 +3,7 @@ name: "🚀🆕 Feature Request"
about: "Suggest an idea or possible new feature for this project."
title: ""
labels: "Type: Feature"
assignees:
assignees:

---

2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/3-enhancement-request.md
@@ -3,7 +3,7 @@ name: "🚀➕ Enhancement Request"
about: "Suggest an enhancement for this project. Improve an existing feature"
title: ""
labels: "Type: Enhancement"
assignees:
assignees:

---

2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/4-question-support.md
@@ -3,7 +3,7 @@ name: "❓ Question or Support Request"
about: "Questions and requests for support."
title: ""
labels: "Type: Question"
assignees:
assignees:

---

2 changes: 1 addition & 1 deletion .github/settings.yml
@@ -11,7 +11,7 @@ repository:
#homepage: https://example.github.io/

# A comma-separated list of topics to set on the repository
#topics: project, template,
#topics: project, template,

# Either `true` to make the repository private, or `false` to make it public.
#private: false
19 changes: 10 additions & 9 deletions .github/workflows/query_engine_test.yml
@@ -2,16 +2,16 @@ name: Query Engine Tests
on:
pull_request:
paths:
- 'src/query_engine.impl.jac'
- 'src/query_engine.jac'
- 'requirements.dev.txt'
- 'engine/src/query_engine.impl.jac'
- 'engine/src/query_engine.jac'
- 'engine/requirements.txt'
push:
branches:
- main
paths:
- 'src/query_engine.impl.jac'
- 'src/query_engine.jac'
- 'requirements.dev.txt'
- 'engine/src/query_engine.impl.jac'
- 'engine/src/query_engine.jac'
- 'engine/requirements.txt'

jobs:
query_engine_test:
@@ -24,14 +24,15 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: 3.12

- name: Installing Ollama
run: curl -fsSL https://ollama.com/install.sh | sh

- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.dev.txt
pip install -r engine/requirements.txt
- name: Run tests
run: jac test src/query_engine.jac
run:
jac test engine/src/query_engine.jac
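
For reference, the updated CI job above can be reproduced locally with roughly the following commands — a sketch that assumes Python 3.12 and a working Ollama install are already present on the machine:

```bash
# Mirror the workflow steps from the repository root
python -m pip install --upgrade pip
pip install -r engine/requirements.txt
jac test engine/src/query_engine.jac
```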
12 changes: 12 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,12 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0
    hooks:
      - id: check-yaml
        args: [--allow-multiple-documents]
      - id: check-json
      - id: trailing-whitespace
  - repo: https://github.com/psf/black
    rev: 24.1.1
    hooks:
      - id: black
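
The hooks added here can be enabled locally with the standard pre-commit workflow; the commands below are conventional usage rather than part of this change:

```bash
# Install pre-commit and register the hooks defined in .pre-commit-config.yaml
pip install pre-commit
pre-commit install
# Optionally, run every hook against the whole repository once
pre-commit run --all-files
```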
11 changes: 0 additions & 11 deletions .vscode/settings.json

This file was deleted.

File renamed without changes.
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2023 Chandra Irugalbandara
Copyright (c) 2024 Jaseci Labs

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
150 changes: 88 additions & 62 deletions README.md
@@ -1,118 +1,142 @@
# SLaM Tool : *S*mall *La*nguage *M*odel Evaluation Tool
# SLaM Tool: *S*mall *La*nguage *M*odel Evaluation Tool

[![Query Engine Tests](https://github.com/Jaseci-Labs/slam/actions/workflows/query_engine_test.yml/badge.svg)](https://github.com/Jaseci-Labs/slam/actions/workflows/query_engine_test.yml)
[![SLaM App Tests](https://github.com/Jaseci-Labs/slam/actions/workflows/app_test.yml/badge.svg)](https://github.com/Jaseci-Labs/slam/actions/workflows/app_test.yml)

SLaM Tool is a helper tool to evaluate the performance of Large Language Models (LLMs) for your personal use cases with the help of Human Evaluation and Automatic Evaluation. You can deploy the application on your local machine or use Docker to generate responses for a given prompt with different LLMs (Proprietary or OpenSource), and then evaluate the responses with the help of human evaluators or automated methods.

## Features
SLaM Tool is an advanced application designed to assess the performance of Large Language Models (LLMs) for your specific use cases. It employs both Human Evaluation and Automatic Evaluation techniques, enabling you to deploy the application locally or via Docker. With SLaM Tool, you can generate responses for a given prompt using various LLMs (proprietary or open-source), and subsequently evaluate those responses through human evaluators or automated methods.

- **Admin Panel**: Set up the Human Evaluation UI and manage the human evaluators.
- **Realtime Insights and Analytics**: Get insights and analytics on the performance of the LLMs.
- **Human Evaluation**: Evaluate the responses of the LLMs with the help of human evaluators.
- **Automatic Evaluation**: Evaluate the responses of the LLMs with the help of LLMs and using embedding similarity.
- **Multiple Model Support**: Generate responses for a given prompt with different LLMs (Proprietary or OpenSource(Ollama)).
## Key Features

- **Admin Panel**: Set up and manage the Human Evaluation UI, as well as oversee human evaluators.
- **Real-time Insights and Analytics**: Gain valuable insights and analytics on the performance of the LLMs under evaluation.
- **Human Evaluation**: Leverage human evaluators to assess the responses generated by the LLMs.
- **Automatic Evaluation**: Employ LLMs and embedding similarity techniques for automated response evaluation.
- **Multiple Model Support**: Generate responses for a given prompt using a diverse range of LLMs (proprietary or open-source, such as Ollama).

## Installation

First, clone the repository:

```bash
git clone https://github.com/Jaseci-Labs/slam.git && cd slam
```

### Prerequisites

- Python 3.12 or higher
- Docker (Optional)

### Docker Installation
### Only the Human Evaluation Tool

#### Using Docker

1. Build the Docker Image:
```bash
docker build -t jaseci/slam-tool:latest .
cd app
docker build -t slam/slam-app:latest .
```

2. Run the container with environment variables:
```bash
docker run -p 8501:8501 -e SLAM_ADMIN_USERNAME=<user_name> -e SLAM_ADMIN_PASSWORD=<password> jaseci/slam-tool:latest
docker run -p 8501:8501 -e SLAM_ADMIN_USERNAME=<user_name> -e SLAM_ADMIN_PASSWORD=<password> slam/slam-app:latest
```

3. Open your browser and go to `http://localhost:8501`
3. Open your browser and navigate to `http://localhost:8501`

### Local Installation
#### Using Local Installation

1. Clone the repository:
1. Create a virtual environment (optional):
```bash
git clone https://github.com/Jaseci-Labs/slam.git && cd slam
cd app
conda create -n slam-app python=3.12 -y
conda activate slam-app
```

2. Create a virtual environment (optional):
```bash
conda create -n slam-tool python=3.12 -y
conda activate slam-tool
```

3. Install the requirements:
2. Install the requirements:
```bash
pip install -r requirements.txt
```

4. Set environment variables:
3. Set environment variables:
```bash
export SLAM_ADMIN_USERNAME=<username>
export SLAM_ADMIN_PASSWORD=<password>
```

5. Run the application:
4. Run the application:
```bash
streamlit run app.py
```

5. Open your browser and navigate to `http://localhost:8501`

### With the Query Engine and Ollama
Note: Ensure you are running in an environment with GPU support.

#### Using Docker Compose (Recommended)

1. Build the Docker Images:
```bash
docker compose up -d --build
```

2. Open your browser and navigate to `http://localhost:8501`
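
For orientation, a minimal sketch of what the compose.yml driving these services might look like follows. Service names, build contexts, and the `ollama/ollama` image are assumptions rather than excerpts from this commit, and GPU passthrough is omitted; consult the actual compose.yml in the repository for the authoritative configuration.

```yaml
# Hypothetical sketch only — the real compose.yml in the repo may differ.
services:
  app:
    build: ./app
    ports:
      - "8501:8501"
    environment:
      - SLAM_ADMIN_USERNAME=${SLAM_ADMIN_USERNAME:-admin}
      - SLAM_ADMIN_PASSWORD=${SLAM_ADMIN_PASSWORD:-password}
      - ACTION_SERVER_URL=http://engine:8000
      - OLLAMA_SERVER_URL=http://ollama:11434
    depends_on:
      - engine
      - ollama
  engine:
    build: ./engine
    ports:
      - "8000:8000"
    environment:
      - OLLAMA_SERVER_URL=http://ollama:11434
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
```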

#### Using Local Installation

Follow the steps above to install the app, and then proceed with the steps below to install the Query Engine and Ollama.

### For Response Generation & Automatic Evaluation (Optional)

For a streamlined experience, SLAM offers the option to leverage LLMs and SLMs for response generation and automated evaluation.

1. **Configure Language Models**
Open a new terminal window and navigate to the root directory of the SLAM repository.

1. Create a separate virtual environment (Recommended):

If you prefer utilizing OpenAI's GPT-4, you'll need to set up an API key:

```bash
export OPENAI_API_KEY=<your_api_key>
cd engine
conda create -n slam-engine python=3.12 -y
conda activate slam-engine
```

Alternatively, if you choose to employ Ollama's cutting-edge language models, ensure that you have Ollama installed and the server running:
2. Install the dependencies:

```bash
curl https://ollama.ai/install.sh | sh
ollama serve
pip install -r engine/requirements.txt
```

2. **Installing Dependencies & Launch the Query Engine**
3. Run the Query Engine:

The Query Engine requires more complex dependencies than the main app (using a separate Python environment is recommended).

```bash
pip install -r requirements.dev.txt
```
```bash
jac run src/query_engine.jac
```

Once the language models are configured, initiate the Query Engine:

```bash
jac run src/query_engine.jac
```
4. Run the Ollama Server:

3. **Optional Environment Variables**
```bash
curl https://ollama.ai/install.sh | sh
ollama serve
```

5. If you plan to use OpenAI's GPT-4, set the API key:

For added flexibility, you can set the following environment variables:

```bash
export ACTION_SERVER_URL=http://localhost:8000/
export OLLAMA_SERVER_URL=http://localhost:11434/
```
```bash
export OPENAI_API_KEY=<your_api_key>
```
If you have a remote Ollama server, set the server URL:
```bash
export OLLAMA_SERVER_URL=http://<remote_server_ip>:11434/
```

## Tutorials

- [How to use SLaM for Human Evaluation](docs/tutorials/human_eval.md)
- [How to Generate Responses using SLaM](docs/tutorials/response_generator.md)
- [How to use SLaM for Human Evaluation](docs/tutorials/human_eval.md)
- [How to use SLaM for Automatic Evaluation](docs/tutorials/automatic_eval.md)
- [LLM as an Evaluator](docs/tutorials/automatic_eval.md#llm-as-an-evaluator)
- [Using Semantic Similarity to Evaluate Responses](docs/tutorials/automatic_eval.md#using-semantic-similarity-to-evaluate-responses)
- [How to Get Realtime Insights and Analytics from your Evaluations](docs/tutorials/insights_analytics.md)
- [How to Get Real-time Insights and Analytics from your Evaluations](docs/tutorials/insights_analytics.md)

## Tips and Tricks

@@ -123,15 +147,15 @@ SLAM offers a convenient option to maintain a continuous backup of your results
1. **Set the Google Drive Folder ID**

First, set the Google Drive folder ID as an environment variable:

```bash
export GDRIVE_FOLDER_ID=<your_folder_id>
```

2. **Initiate a CRON Job**

Next, initiate a CRON job to run the `scripts/backup.jac` script every 5 minutes. Ensure that you have an OAuth file (`settings.yaml` and `credentials.json`) in the folder from which you initiate the CRON job.

```bash
# Activate the virtual environment where JacLang is installed
*/5 * * * * jac run scripts/backup.jac
@@ -156,14 +180,16 @@ To load your backups, follow these simple steps:
4. **Refresh and View**
- After the upload process is complete, click the "Refresh" button to see the updated diagrams and visualizations.

## Contributing
## Frequently Asked Questions

We welcome contributions to enhance SLAM's capabilities. Please review the [CONTRIBUTING.md](CONTRIBUTING.md) file for our code of conduct and the process for submitting pull requests.
1. **Error: "listen tcp :11434: bind: address already in use" when trying to run `ollama serve`**
- This error occurs when port 11434 is already in use. To resolve it, stop the process that is holding the port (for example with `sudo systemctl stop ollama`) and then run `ollama serve` again; a short sketch is included after this list.

To run the test suite, execute the following command:
2. **Error: "No module named 'jac'" when trying to run the Query Engine**
- This error occurs when the `jaclang` package is not installed. To resolve it, first make sure you are in the `slam-engine` environment, then reinstall the requirements using `pip install -r engine/requirements.txt`.

```bash
sh scripts/run_tests.sh
```
3. If you have any other questions, please feel free to reach out to us through the [Issues](https://github.com/Jaseci-Labs/slam/issues) section.
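
Regarding question 1 above, a short sketch for freeing port 11434 — the `lsof` fallback is an assumption that may not apply to every installation:

```bash
# Stop the Ollama service that is already bound to port 11434 (systemd-based installs)
sudo systemctl stop ollama
# Or identify whatever process holds the port and stop it manually
sudo lsof -i :11434
# kill <PID>   # replace <PID> with the process id reported by lsof
ollama serve
```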

## Contributing

We appreciate your interest in contributing to SLAM and look forward to your valuable contributions.
We welcome contributions to enhance SLAM's capabilities. Please review the [CONTRIBUTING.md](CONTRIBUTING.md) file for our code of conduct and the process for submitting pull requests. We appreciate your interest in contributing to SLAM and look forward to your valuable contributions.
File renamed without changes.
4 changes: 4 additions & 0 deletions app/.env.sample
@@ -0,0 +1,4 @@
SLAM_ADMIN_USERNAME=username
SLAM_ADMIN_PASSWORD=password
ACTION_SERVER_URL=http://localhost:8000
OLLAMA_SERVER_URL=http://localhost:11434
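
One illustrative way to consume this sample locally — copying it to `app/.env` and the `--env-file` flag are assumptions for the example, not instructions from this commit:

```bash
# Copy the sample, fill in real values, then load it into the current shell
cp app/.env.sample app/.env
set -a; source app/.env; set +a
# Or hand the file straight to Docker
docker run -p 8501:8501 --env-file app/.env slam/slam-app:latest
```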
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
