diff --git a/src/SUMMARY.md b/src/SUMMARY.md index 823d713a..d3f87682 100644 --- a/src/SUMMARY.md +++ b/src/SUMMARY.md @@ -35,6 +35,7 @@ - [Static Website](documentation/dashboard/solutions/static_website.md) - [Subsquid](documentation/dashboard/solutions/subsquid.md) - [Taiga](documentation/dashboard/solutions/taiga.md) + - [TFRobot](documentation/dashboard/solutions/tfrobot.md) - [Umbrel](documentation/dashboard/solutions/umbrel.md) - [WordPress](documentation/dashboard/solutions/wordpress.md) - [Your Contracts](documentation/dashboard/deploy/your_contracts.md) @@ -267,7 +268,9 @@ - [IPFS on a Micro VM](documentation/system_administrators/advanced/ipfs/ipfs_microvm.md) - [MinIO Operator with Helm3](documentation/system_administrators/advanced/minio_helm3.md) - [Hummingbot](documentation/system_administrators/advanced/hummingbot.md) - - [AI & ML Workloads](documentation/system_administrators/advanced/ai_ml_workloads.md) + - [AI & ML Workloads](documentation/system_administrators/advanced/ai_ml_workloads/ai_ml_workloads_toc.md) + - [CPU and Llama](documentation/system_administrators/advanced/ai_ml_workloads/cpu_and_llama.md) + - [GPU and Pytorch](documentation/system_administrators/advanced/ai_ml_workloads/gpu_and_pytorch.md) - [Ecommerce](documentation/system_administrators/advanced/ecommerce/ecommerce.md) - [WooCommerce](documentation/system_administrators/advanced/ecommerce/woocommerce.md) - [nopCommerce](documentation/system_administrators/advanced/ecommerce/nopcommerce.md) diff --git a/src/documentation/dashboard/deploy/applications.md b/src/documentation/dashboard/deploy/applications.md index ed7f2191..3ed57451 100644 --- a/src/documentation/dashboard/deploy/applications.md +++ b/src/documentation/dashboard/deploy/applications.md @@ -20,5 +20,6 @@ Easily deploy your favourite applications on the ThreeFold grid with a click of - [Static Website](../solutions/static_website.md) - [Subsquid](../solutions/subsquid.md) - [Taiga](../solutions/taiga.md) +- 
[TFRobot](../solutions/tfrobot.md) - [Umbrel](../solutions/umbrel.md) - [WordPress](../solutions/wordpress.md) \ No newline at end of file diff --git a/src/documentation/dashboard/img/applications_landing.png b/src/documentation/dashboard/img/applications_landing.png index e4117a57..17d3c040 100644 Binary files a/src/documentation/dashboard/img/applications_landing.png and b/src/documentation/dashboard/img/applications_landing.png differ diff --git a/src/documentation/dashboard/solutions/algorand.md b/src/documentation/dashboard/solutions/algorand.md index a6295437..96197413 100644 --- a/src/documentation/dashboard/solutions/algorand.md +++ b/src/documentation/dashboard/solutions/algorand.md @@ -4,10 +4,9 @@ - [Introduction](#introduction) - [Prerequisites](#prerequisites) - - [Algorand Structure](#algorand-structure) +- [Algorand Structure](#algorand-structure) - [Run Default Node](#run-default-node) - [Run Relay Node](#run-relay-node) -- [Run Participant Node](#run-participant-node) - [Run Indexer Node](#run-indexer-node) - [Select Capacity](#select-capacity) @@ -23,19 +22,17 @@ - From the sidebar click on **Applications** - Click on **Algorand** -### Algorand Structure +## Algorand Structure -- Algorand has two main [types](https://developer.algorand.org/docs/run-a-node/setup/types/#:~:text=The%20Algorand%20network%20is%20comprised,%2C%20and%20non%2Drelay%20nodes.) of nodes (Relay or Participant). You can also run those nodes on 4 different networks. Combining the types you can get: - - Default: - - This is a Non-relay and Non-participant - - It can run on Devnet, Testnet, Betanet and Mainnet. - - Relay: - - A relay node can't be participant. - - It can run only on Testnet and Mainnet - - Participant: - - Can run on any of the four networks. - - Indexer: - - It is a default node but with Archival Mode enabled which will make you able to query the data of the blockchain. +An Algorand node can be either a `Default`, `Relay` or `Indexer` node. 
+ +- Default: + - This is a non-relay node. + - It can run on Devnet, Testnet, Betanet and Mainnet. +- Relay: + - It can run only on Testnet and Mainnet. +- Indexer: + - It is a default node with Archival Mode enabled, which enables you to query the blockchain's data. ## Run Default Node @@ -56,6 +53,7 @@ Here you see your node runs on mainnet. Relay nodes are where other nodes connect. Therefore, a relay node must be able to support a large number of connections and handle the processing load associated with all the data flowing to and from these connections. Thus, relay nodes require significantly more power than non-relay nodes. Relay nodes are always configured in archival mode. The relay node must be publicly accessible, so it must have a public IP. + ![relaydep](./img/algorand_relaydep.png) Once the deployment is done, SSH into the node and run `goal node status` to see the status of the node. You can also check if the right port is listening (:4161 for testnet, and :4160 for mainnet). @@ -64,32 +62,6 @@ Once the deployment is done, SSH into the node and run `goal node status` to see The next step according to the [docs](https://developer.algorand.org/docs/run-a-node/setup/types/#relay-node) is to register your `ip:port` on Algorand Public SRV. -## Run Participant Node - -Participation means participation in the Algorand consensus protocol. An account that participates in the Algorand consensus protocol is eligible and available to be selected to propose and vote on new blocks in the Algorand blockchain. - -Participation node is responsible for hosting participation keys for one or more online accounts. - -- What do you need? - - Account mnemonics on the network you deploy on (offline) you can check the status for you account on the AlgoExplorer. Search using your account id. - - The account needs to have some microAlgo to sign the participation transaction. 
- [Main net explorer](https://algoexplorer.io/) - - [Test net explorer](https://testnet.algoexplorer.io/) - -- First Round: is the first block you need your participaiton node to validate from. You can choose the last block form the explorer. - ![partexp](./img/algorand_partexp.png) -- Last Round: is the final block your node can validate. Let's make it 30M - -![partdep](./img/algorand_partdep.png) - -Once the deployment is done, SSH into the node and run `goal node status` to see the status of the node. You can see it doing catchup and the fast catchup is to make the node synchronize with the latest block faster by only fetching the last 1k blocks. After this is done, it will start to create the participation keys. -![partstatus](./img/algorand_partstatus.png) - -Now if you check the explorer, you can see the status of the account turned to `Online`: - -![partonl](./img/algorand_partonl.png) - ## Run Indexer Node The primary purpose of this Indexer is to provide a REST API that supports searching the Algorand Blockchain. The Indexer REST APIs retrieve the blockchain data from a PostgreSQL compatible database that must be populated. This database is populated using the same indexer instance or a separate instance of the indexer which must connect to the algod process of a running Algorand node to read block data. This node must also be an Archival node to make searching the entire blockchain possible. 
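With the indexer running, its REST API can be queried over HTTP. Below is a small, hypothetical sketch of what such queries look like: the port `8980` is the indexer's default REST port, while the address, round number and microAlgo threshold are placeholder values to adapt to your own deployment.

```shell
# Placeholder address for your indexer node; substitute your own IP.
# Port 8980 is the indexer's default REST port.
INDEXER="http://10.20.4.2:8980"

# Search for accounts holding more than 1 Algo
# (amounts are expressed in microAlgos: 1 Algo = 1,000,000 microAlgos).
ACCOUNTS_QUERY="${INDEXER}/v2/accounts?currency-greater-than=1000000"

# Look up a single block by round number.
BLOCK_QUERY="${INDEXER}/v2/blocks/25000000"

# Against a live indexer you would fetch these with curl, for example:
#   curl -s "$ACCOUNTS_QUERY"
echo "$ACCOUNTS_QUERY"
echo "$BLOCK_QUERY"
```

Since the indexer is backed by the archival node's data, these queries can search the entire chain, not just recent blocks.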
diff --git a/src/documentation/dashboard/solutions/img/solutions_tfrobot.png b/src/documentation/dashboard/solutions/img/solutions_tfrobot.png new file mode 100644 index 00000000..ab0110be Binary files /dev/null and b/src/documentation/dashboard/solutions/img/solutions_tfrobot.png differ diff --git a/src/documentation/dashboard/solutions/img/tfrobot1.png b/src/documentation/dashboard/solutions/img/tfrobot1.png new file mode 100644 index 00000000..f3c800f4 Binary files /dev/null and b/src/documentation/dashboard/solutions/img/tfrobot1.png differ diff --git a/src/documentation/dashboard/solutions/tfrobot.md b/src/documentation/dashboard/solutions/tfrobot.md new file mode 100644 index 00000000..9505e656 --- /dev/null +++ b/src/documentation/dashboard/solutions/tfrobot.md @@ -0,0 +1,56 @@ +

<h1>TFRobot</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Features](#features) +- [Prerequisites](#prerequisites) +- [Deployment](#deployment) +- [Deployed Instances Table](#deployed-instances-table) + +*** + +## Introduction + +[TFRobot](https://github.com/threefoldtech/tfgrid-sdk-go/blob/development/tfrobot/README.md) is a tool designed to automate the mass deployment of groups of VMs on the ThreeFold Grid, with support for multiple retries of failed deployments. + +## Features + +- **Mass Deployment:** Deploy groups of VMs on the grid simultaneously. +- **Mass Cancellation:** Simultaneously cancel all the VMs on the grid that are defined in the configuration file. +- **Load Deployments:** Simultaneously load groups of VMs deployed with TFRobot. +- **Customizable Configurations:** Define node groups, VM groups and other configurations through YAML or JSON files. + +## Prerequisites + +- Make sure you have a [wallet](../wallet_connector.md) +- From the sidebar click on **Applications** +- Click on **TFRobot** + +## Deployment + +![ ](./img/solutions_tfrobot.png) + +- Enter an Application Name. 
+ +- Select a capacity package: + - **Small**: {cpu: 1, memory: 2, diskSize: 25 } + - **Medium**: {cpu: 2, memory: 4, diskSize: 50 } + - **Large**: {cpu: 4, memory: 16, diskSize: 100 } + - Or choose a **Custom** plan + +- `Dedicated` flag to retrieve only dedicated nodes +- `Certified` flag to retrieve only certified nodes +- Choose the location of the node + - `Region` + - `Country` + - `Farm Name` +- Click on `Load Nodes` +- Select the node you want to deploy on +- Click `Deploy` + +## Deployed Instances Table + +At all times, you can see a list of all of your deployed instances: + +![ ](./img/tfrobot1.png) \ No newline at end of file diff --git a/src/documentation/farmers/farmerbot/farmerbot_information.md b/src/documentation/farmers/farmerbot/farmerbot_information.md index 37cc8132..ad08326f 100644 --- a/src/documentation/farmers/farmerbot/farmerbot_information.md +++ b/src/documentation/farmers/farmerbot/farmerbot_information.md @@ -74,13 +74,19 @@ included_nodes: [optional, if no nodes are added then the farmerbot will include - "" excluded_nodes: - "" +priority_nodes: + - "" never_shutdown_nodes: - "" power: periodic_wake_up_start: "" - wake_up_threshold: "" periodic_wake_up_limit: "" overprovision_cpu: "" + wake_up_threshold: + cru: "" + mru: "" + sru: "" + hru: "" ``` ## Supported Commands and Flags diff --git a/src/documentation/system_administrators/advanced/advanced.md b/src/documentation/system_administrators/advanced/advanced.md index c37fd741..489a6f26 100644 --- a/src/documentation/system_administrators/advanced/advanced.md +++ b/src/documentation/system_administrators/advanced/advanced.md @@ -14,7 +14,9 @@ In this section, we delve into sophisticated topics and powerful functionalities - [IPFS on a Full VM](./ipfs/ipfs_fullvm.md) - [IPFS on a Micro VM](./ipfs/ipfs_microvm.md) - [Hummingbot](./hummingbot.md) -- [AI & ML Workloads](./ai_ml_workloads.md) +- [AI & ML Workloads](./ai_ml_workloads/ai_ml_workloads_toc.md) + - [CPU and 
Llama](./ai_ml_workloads/cpu_and_llama.md) + - [GPU and Pytorch](./ai_ml_workloads/gpu_and_pytorch.md) - [Ecommerce](./ecommerce/ecommerce.md) - [WooCommerce](./ecommerce/woocommerce.md) - [nopCommerce](./ecommerce/nopcommerce.md) diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/ai_ml_workloads_toc.md b/src/documentation/system_administrators/advanced/ai_ml_workloads/ai_ml_workloads_toc.md new file mode 100644 index 00000000..bb353b0e --- /dev/null +++ b/src/documentation/system_administrators/advanced/ai_ml_workloads/ai_ml_workloads_toc.md @@ -0,0 +1,6 @@ +# AI & ML Workloads + +

<h2>Table of Contents</h2>

+ +- [CPU and Llama](./cpu_and_llama.md) +- [GPU and Pytorch](./gpu_and_pytorch.md) \ No newline at end of file diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/cpu_and_llama.md b/src/documentation/system_administrators/advanced/ai_ml_workloads/cpu_and_llama.md new file mode 100644 index 00000000..938fb1c2 --- /dev/null +++ b/src/documentation/system_administrators/advanced/ai_ml_workloads/cpu_and_llama.md @@ -0,0 +1,105 @@ +

<h1>AI & ML Workloads: CPU and Llama</h1>

+ +

<h2>Table of Contents</h2>

+ +- [Introduction](#introduction) +- [Prerequisites](#prerequisites) +- [Deploy a Full VM](#deploy-a-full-vm) +- [Preparing the VM](#preparing-the-vm) +- [Setting OpenWebUI](#setting-openwebui) +- [Pull a Model](#pull-a-model) +- [Using Llama](#using-llama) +- [References](#references) + +--- + +## Introduction + +We present a simple guide on how to deploy large language models on the grid using only a CPU. For this guide, we will be deploying Llama on a full VM using OpenWebUI bundled with Ollama support. + +Llama is a large language model trained by Meta AI. It is an open-source model, meaning that it is free to use and customize for various applications. This LLM is designed to be a more conversational AI, allowing users to engage in natural-sounding conversations. Llama is trained on a massive dataset of text from the internet and can generate responses to a wide range of topics and questions. + +Ollama is an open-source project that allows users to run large language models (LLMs) on their local machine. + +OpenWebUI is one of many front ends for Ollama, providing a convenient and user-friendly way to load weights and chat with the bot. + +## Prerequisites + +- [A TFChain account](../../../dashboard/wallet_connector.md) +- TFT in your TFChain account + - [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md) + - [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md) + +## Deploy a Full VM + +We start by deploying a full VM on the ThreeFold Dashboard. The more cores we assign to the machine, the faster the model will be. 
+ +* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [full virtual machine deployment page](https://dashboard.grid.tf/#/deploy/virtual-machines/full-virtual-machine/) +* Deploy a full VM (Ubuntu 22.04) with only `Wireguard` as the network + * Vcores: 8 vcores + * MB of RAM: 4096 MB + * GB of storage: 100 GB +* After deployment, [set the Wireguard configurations](../../getstarted/ssh_guide/ssh_wireguard.md) +* Connect to the VM via SSH + * ``` + ssh root@VM_Wireguard_Address + ``` + +## Preparing the VM + +We prepare the full VM to run Llama. + +* Install Docker + * ``` + wget -O docker.sh get.docker.com + bash docker.sh + ``` + +## Setting OpenWebUI + +We now install OpenWebUI with bundled Ollama support. Note that you might need to use a port other than `3000` if this port is already in use on your local machine. + +* For CPU only + ``` + docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama + ``` +* Once the container is fully loaded and running, go to your browser to access OpenWebUI using the Wireguard address: + * ``` + 10.20.4.2:3000 + ``` + +You should now see the OpenWebUI page. You can register by entering your email and setting a password. This information will stay on the machine running OpenWebUI. + +
![](./img/openwebui_page.png)
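Before registering, it can be worth confirming from inside the VM that the container is actually serving. The sketch below is a helper written for this guide rather than part of OpenWebUI or Docker: the container name `open-webui` and the port `3000` come from the `docker run` command above, and the Wireguard address is the example one.

```shell
# Poll a URL until it answers or we give up (helper defined for this guide).
wait_for_url() {
  local url="$1" tries="${2:-30}"
  local i=0
  while [ "$i" -lt "$tries" ]; do
    # curl -s: silent, -f: fail on HTTP errors, -o: discard the body
    if curl -sf -o /dev/null "$url"; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Inside the VM, after the docker run command above:
#   docker ps --filter name=open-webui    # the container should show as "Up"
#   wait_for_url "http://10.20.4.2:3000"  # the UI should answer on port 3000
```

If the check times out, inspect the container logs with `docker logs open-webui` before going further.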
+ +## Pull a Model + +Once you've accessed OpenWebUI, you need to download an LLM model before using it. + +- Click on the bottom left button displaying your username +- Click on `Settings`, then `Admin Settings` and `Models` +- Under `Pull a model from Ollama.com`, enter the LLM model you want to use + - In our case we will use `llama3` +- Click on the button on the right to pull the model + +![](./img/openwebui_model.png) + +## Using Llama + +Let's now use Llama! + +- Click on `New Chat` in the top left corner +- Click on `Select a model` and select the model you downloaded + - You can click on `Set as default` for convenience + +![](./img/openwebui_set_model.png) + +- You can now `Send a Message` to Llama and interact with it! + +That's it. You now have a running LLM instance on the grid. + +## References + +For any advanced configurations, you may refer to the [OpenWebUI documentation](https://github.com/open-webui/open-webui). \ No newline at end of file diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads.md b/src/documentation/system_administrators/advanced/ai_ml_workloads/gpu_and_pytorch.md similarity index 95% rename from src/documentation/system_administrators/advanced/ai_ml_workloads.md rename to src/documentation/system_administrators/advanced/ai_ml_workloads/gpu_and_pytorch.md index bc5760ec..66854113 100644 --- a/src/documentation/system_administrators/advanced/ai_ml_workloads.md +++ b/src/documentation/system_administrators/advanced/ai_ml_workloads/gpu_and_pytorch.md @@ -1,4 +1,4 @@ -

<h1>AI & ML Workloads</h1>

+

<h1>AI & ML Workloads: GPU and Pytorch</h1>

Table of Contents

@@ -23,7 +23,7 @@ In the second part, we show how to use PyTorch to run AI/ML tasks. ## Prerequisites -You need to reserve a [dedicated GPU node](../../dashboard/deploy/node_finder.md#dedicated-nodes) on the ThreeFold Grid. +You need to reserve a [dedicated GPU node](../../../dashboard/deploy/node_finder.md#dedicated-nodes) on the ThreeFold Grid. ## Prepare the System diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_model.png b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_model.png new file mode 100644 index 00000000..9c35e516 Binary files /dev/null and b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_model.png differ diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_page.png b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_page.png new file mode 100644 index 00000000..76d3542e Binary files /dev/null and b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_page.png differ diff --git a/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_set_model.png b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_set_model.png new file mode 100644 index 00000000..1c018d30 Binary files /dev/null and b/src/documentation/system_administrators/advanced/ai_ml_workloads/img/openwebui_set_model.png differ diff --git a/src/documentation/system_administrators/system_administrators.md b/src/documentation/system_administrators/system_administrators.md index a0cb65b4..d4e2e355 100644 --- a/src/documentation/system_administrators/system_administrators.md +++ b/src/documentation/system_administrators/system_administrators.md @@ -85,7 +85,9 @@ For complementary information on ThreeFold grid and its cloud component, refer t - [IPFS on a Full VM](./advanced/ipfs/ipfs_fullvm.md) - [IPFS on a Micro VM](./advanced/ipfs/ipfs_microvm.md) 
- [Hummingbot](./advanced/hummingbot.md) - - [AI & ML Workloads](./advanced/ai_ml_workloads.md) + - [AI & ML Workloads](./advanced/ai_ml_workloads/ai_ml_workloads_toc.md) + - [CPU and Llama](./advanced/ai_ml_workloads/cpu_and_llama.md) + - [GPU and Pytorch](./advanced/ai_ml_workloads/gpu_and_pytorch.md) - [Ecommerce](./advanced/ecommerce/ecommerce.md) - [WooCommerce](./advanced/ecommerce/woocommerce.md) - [nopCommerce](./advanced/ecommerce/nopcommerce.md)