diff --git a/src/SUMMARY.md b/src/SUMMARY.md
index 6b3408ff..d3f87682 100644
--- a/src/SUMMARY.md
+++ b/src/SUMMARY.md
@@ -268,7 +268,9 @@
- [IPFS on a Micro VM](documentation/system_administrators/advanced/ipfs/ipfs_microvm.md)
- [MinIO Operator with Helm3](documentation/system_administrators/advanced/minio_helm3.md)
- [Hummingbot](documentation/system_administrators/advanced/hummingbot.md)
- - [AI & ML Workloads](documentation/system_administrators/advanced/ai_ml_workloads.md)
+ - [AI & ML Workloads](documentation/system_administrators/advanced/ai_ml_workloads/ai_ml_workloads_toc.md)
+ - [CPU and Llama](documentation/system_administrators/advanced/ai_ml_workloads/cpu_and_llama.md)
+ - [GPU and Pytorch](documentation/system_administrators/advanced/ai_ml_workloads/gpu_and_pytorch.md)
- [Ecommerce](documentation/system_administrators/advanced/ecommerce/ecommerce.md)
- [WooCommerce](documentation/system_administrators/advanced/ecommerce/woocommerce.md)
- [nopCommerce](documentation/system_administrators/advanced/ecommerce/nopcommerce.md)
diff --git a/src/documentation/system_administrators/advanced/advanced.md b/src/documentation/system_administrators/advanced/advanced.md
index c37fd741..489a6f26 100644
--- a/src/documentation/system_administrators/advanced/advanced.md
+++ b/src/documentation/system_administrators/advanced/advanced.md
@@ -14,7 +14,9 @@ In this section, we delve into sophisticated topics and powerful functionalities
- [IPFS on a Full VM](./ipfs/ipfs_fullvm.md)
- [IPFS on a Micro VM](./ipfs/ipfs_microvm.md)
- [Hummingbot](./hummingbot.md)
-- [AI & ML Workloads](./ai_ml_workloads.md)
+- [AI & ML Workloads](./ai_ml_workloads/ai_ml_workloads_toc.md)
+ - [CPU and Llama](./ai_ml_workloads/cpu_and_llama.md)
+ - [GPU and Pytorch](./ai_ml_workloads/gpu_and_pytorch.md)
- [Ecommerce](./ecommerce/ecommerce.md)
- [WooCommerce](./ecommerce/woocommerce.md)
- [nopCommerce](./ecommerce/nopcommerce.md)
diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/ai_ml_workloads_toc.md b/src/documentation/system_administrators/advanced/ai_ml_workloads/ai_ml_workloads_toc.md
new file mode 100644
index 00000000..bb353b0e
--- /dev/null
+++ b/src/documentation/system_administrators/advanced/ai_ml_workloads/ai_ml_workloads_toc.md
@@ -0,0 +1,6 @@
+# AI & ML Workloads
+
+<h2>Table of Contents</h2>
+
+- [CPU and Llama](./cpu_and_llama.md)
+- [GPU and Pytorch](./gpu_and_pytorch.md)
\ No newline at end of file
diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/cpu_and_llama.md b/src/documentation/system_administrators/advanced/ai_ml_workloads/cpu_and_llama.md
new file mode 100644
index 00000000..938fb1c2
--- /dev/null
+++ b/src/documentation/system_administrators/advanced/ai_ml_workloads/cpu_and_llama.md
@@ -0,0 +1,105 @@
+<h1> AI & ML Workloads: CPU and Llama </h1>
+
+<h2>Table of Contents</h2>
+
+- [Introduction](#introduction)
+- [Prerequisites](#prerequisites)
+- [Deploy a Full VM](#deploy-a-full-vm)
+- [Preparing the VM](#preparing-the-vm)
+- [Setting OpenWebUI](#setting-openwebui)
+- [Pull a Model](#pull-a-model)
+- [Using Llama](#using-llama)
+- [References](#references)
+
+---
+
+## Introduction
+
+We present a simple guide on how to deploy large language models on the grid using only CPU resources. For this guide, we deploy Llama on a full VM using OpenWebUI bundled with Ollama support.
+
+Llama is a large language model trained by Meta AI. It is an open-source model, meaning that it is free to use and customize for various applications. This LLM is designed as a conversational AI, allowing users to engage in natural-sounding conversations. Llama is trained on a massive dataset of text from the internet and can generate responses to a wide range of topics and questions.
+
+Ollama is an open-source project that allows users to run large language models (LLMs) on their local machine.
+
+OpenWebUI is one of many front ends for Ollama, providing a convenient and user-friendly way to download models and chat with them.
+
+## Prerequisites
+
+- [A TFChain account](../../../dashboard/wallet_connector.md)
+- TFT in your TFChain account
+ - [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md)
+ - [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md)
+
+## Deploy a Full VM
+
+We start by deploying a full VM on the ThreeFold Dashboard. The more vcores we assign to the machine, the faster the model will run.
+
+* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [full virtual machine deployment page](https://dashboard.grid.tf/#/deploy/virtual-machines/full-virtual-machine/)
+* Deploy a full VM (Ubuntu 22.04) with only `Wireguard` as the network
+  * Vcores: 8
+  * RAM: 4096 MB
+  * Storage: 100 GB
+* After deployment, [set the Wireguard configurations](../../getstarted/ssh_guide/ssh_wireguard.md)
+* Connect to the VM via SSH
+ * ```
+ ssh root@VM_Wireguard_Address
+ ```
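+
+Once connected, you can optionally confirm that the VM received the requested resources before installing anything. This is a quick sanity check using standard Ubuntu tools:
+
+```bash
+# Check the number of vcores, the available memory and the root disk size
+nproc
+free -h
+df -h /
+```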
+
+## Preparing the VM
+
+We prepare the full VM to run Llama.
+
+* Install Docker
+ * ```
+ wget -O docker.sh get.docker.com
+ bash docker.sh
+ ```
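+
+Before moving on, you can verify that the Docker daemon is installed and running. A minimal check, assuming the convenience script completed without errors:
+
+```bash
+# Confirm the Docker version and make sure the service is active
+docker --version
+systemctl is-active docker
+```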
+
+## Setting OpenWebUI
+
+We now install OpenWebUI with bundled Ollama support. Note that you might need to use another port than `3000` if this port is already in use on the VM.
+
+* For CPU only
+ ```
+ docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
+ ```
+* Once the container is fully loaded and running, go to your browser to access OpenWebUI using the Wireguard address:
+ * ```
+ 10.20.4.2:3000
+ ```
+
+You should now see the OpenWebUI page. You can register by entering your email and setting a password. This information will stay on the machine running OpenWebUI.
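+
+If the page does not load right away, the container may still be initializing, or the port may be taken. The commands below are a generic Docker troubleshooting sketch run from the VM, assuming the container name `open-webui` used above; if port `3000` is busy, you can map another host port (e.g. `-p 8080:8080`) and use that port in the browser instead.
+
+```bash
+# Check that the container is up and follow its startup logs
+docker ps --filter name=open-webui
+docker logs -f open-webui
+```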
+
+![](./img/openwebui_page.png)
+
+## Pull a Model
+
+Once you've accessed OpenWebUI, you need to download an LLM model before using it.
+
+- Click on the bottom left button displaying your username
+- Click on `Settings`, then `Admin Settings` and `Models`
+- Under `Pull a model from Ollama.com`, enter the LLM model you want to use
+  - In our case, we will use `llama3`
+- Click on the button on the right to pull the model
+
+![](./img/openwebui_model.png)
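+
+As an alternative to the web interface, you can pull models from the VM's terminal through the Ollama CLI bundled in the container. A short sketch, assuming the container is named `open-webui` as in the `docker run` command above:
+
+```bash
+# Pull the llama3 weights with the bundled Ollama CLI
+docker exec -it open-webui ollama pull llama3
+# List the models now available locally
+docker exec -it open-webui ollama list
+```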
+
+## Using Llama
+
+Let's now use Llama!
+
+- Click on `New Chat` on the top left corner
+- Click on `Select a model` and select the model you downloaded
+ - You can click on `Set as default` for convenience
+
+![](./img/openwebui_set_model.png)
+
+- You can now `Send a Message` to Llama and interact with it!
+
+That's it. You now have a running LLM instance on the grid.
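+
+If you prefer the terminal, you can also query the model directly through the bundled Ollama CLI instead of the web interface. A quick example, assuming the same container name and model as above:
+
+```bash
+# Send a one-off prompt to llama3 from the VM's shell
+docker exec -it open-webui ollama run llama3 "Summarize what a full VM on the ThreeFold Grid is."
+```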
+
+## References
+
+For advanced configuration, you may refer to the [OpenWebUI documentation](https://github.com/open-webui/open-webui).
\ No newline at end of file
diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads.md b/src/documentation/system_administrators/advanced/ai_ml_workloads/gpu_and_pytorch.md
similarity index 98%
rename from src/documentation/system_administrators/advanced/ai_ml_workloads.md
rename to src/documentation/system_administrators/advanced/ai_ml_workloads/gpu_and_pytorch.md
index bc5760ec..a15c2170 100644
--- a/src/documentation/system_administrators/advanced/ai_ml_workloads.md
+++ b/src/documentation/system_administrators/advanced/ai_ml_workloads/gpu_and_pytorch.md
@@ -1,4 +1,4 @@
-<h1> AI & ML Workloads </h1>
+<h1> AI & ML Workloads: GPU and Pytorch </h1>
 <h2>Table of Contents</h2>
diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_model.png b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_model.png
new file mode 100644
index 00000000..9c35e516
Binary files /dev/null and b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_model.png differ
diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_page.png b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_page.png
new file mode 100644
index 00000000..76d3542e
Binary files /dev/null and b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_page.png differ
diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_set_model.png b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_set_model.png
new file mode 100644
index 00000000..1c018d30
Binary files /dev/null and b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_set_model.png differ
diff --git a/src/documentation/system_administrators/system_administrators.md b/src/documentation/system_administrators/system_administrators.md
index a0cb65b4..d4e2e355 100644
--- a/src/documentation/system_administrators/system_administrators.md
+++ b/src/documentation/system_administrators/system_administrators.md
@@ -85,7 +85,9 @@ For complementary information on ThreeFold grid and its cloud component, refer t
- [IPFS on a Full VM](./advanced/ipfs/ipfs_fullvm.md)
- [IPFS on a Micro VM](./advanced/ipfs/ipfs_microvm.md)
- [Hummingbot](./advanced/hummingbot.md)
- - [AI & ML Workloads](./advanced/ai_ml_workloads.md)
+ - [AI & ML Workloads](./advanced/ai_ml_workloads/ai_ml_workloads_toc.md)
+ - [CPU and Llama](./advanced/ai_ml_workloads/cpu_and_llama.md)
+ - [GPU and Pytorch](./advanced/ai_ml_workloads/gpu_and_pytorch.md)
- [Ecommerce](./advanced/ecommerce/ecommerce.md)
- [WooCommerce](./advanced/ecommerce/woocommerce.md)
- [nopCommerce](./advanced/ecommerce/nopcommerce.md)