Merge pull request #582 from threefoldtech/development
dev to master periodic update
Mik-TF committed Jun 27, 2024
2 parents d3f2442 + 60a0a48 commit d5c617a
Showing 16 changed files with 199 additions and 46 deletions.
5 changes: 4 additions & 1 deletion src/SUMMARY.md
@@ -35,6 +35,7 @@
- [Static Website](documentation/dashboard/solutions/static_website.md)
- [Subsquid](documentation/dashboard/solutions/subsquid.md)
- [Taiga](documentation/dashboard/solutions/taiga.md)
- [TFRobot](documentation/dashboard/solutions/tfrobot.md)
- [Umbrel](documentation/dashboard/solutions/umbrel.md)
- [WordPress](documentation/dashboard/solutions/wordpress.md)
- [Your Contracts](documentation/dashboard/deploy/your_contracts.md)
@@ -267,7 +268,9 @@
- [IPFS on a Micro VM](documentation/system_administrators/advanced/ipfs/ipfs_microvm.md)
- [MinIO Operator with Helm3](documentation/system_administrators/advanced/minio_helm3.md)
- [Hummingbot](documentation/system_administrators/advanced/hummingbot.md)
- [AI & ML Workloads](documentation/system_administrators/advanced/ai_ml_workloads.md)
- [AI & ML Workloads](documentation/system_administrators/advanced/ai_ml_workloads/ai_ml_workloads_toc.md)
- [CPU and Llama](documentation/system_administrators/advanced/ai_ml_workloads/cpu_and_llama.md)
- [GPU and PyTorch](documentation/system_administrators/advanced/ai_ml_workloads/gpu_and_pytorch.md)
- [Ecommerce](documentation/system_administrators/advanced/ecommerce/ecommerce.md)
- [WooCommerce](documentation/system_administrators/advanced/ecommerce/woocommerce.md)
- [nopCommerce](documentation/system_administrators/advanced/ecommerce/nopcommerce.md)
1 change: 1 addition & 0 deletions src/documentation/dashboard/deploy/applications.md
@@ -20,5 +20,6 @@ Easily deploy your favourite applications on the ThreeFold grid with a click of
- [Static Website](../solutions/static_website.md)
- [Subsquid](../solutions/subsquid.md)
- [Taiga](../solutions/taiga.md)
- [TFRobot](../solutions/tfrobot.md)
- [Umbrel](../solutions/umbrel.md)
- [WordPress](../solutions/wordpress.md)
Binary file modified src/documentation/dashboard/img/applications_landing.png
52 changes: 12 additions & 40 deletions src/documentation/dashboard/solutions/algorand.md
@@ -4,10 +4,9 @@

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Algorand Structure](#algorand-structure)
- [Run Default Node](#run-default-node)
- [Run Relay Node](#run-relay-node)
- [Run Participant Node](#run-participant-node)
- [Run Indexer Node](#run-indexer-node)
- [Select Capacity](#select-capacity)

@@ -23,19 +22,17 @@
- From the sidebar click on **Applications**
- Click on **Algorand**

### Algorand Structure
## Algorand Structure

- Algorand has two main [types](https://developer.algorand.org/docs/run-a-node/setup/types/#:~:text=The%20Algorand%20network%20is%20comprised,%2C%20and%20non%2Drelay%20nodes.) of nodes (relay or participant). You can also run those nodes on four different networks. Combining the types, you can get:
  - Default:
    - This is a non-relay, non-participant node.
    - It can run on Devnet, Testnet, Betanet and Mainnet.
  - Relay:
    - A relay node can't be a participant.
    - It can run only on Testnet and Mainnet.
  - Participant:
    - Can run on any of the four networks.
  - Indexer:
    - It is a default node with Archival Mode enabled, which lets you query the blockchain's data.
An Algorand node can be either a `Default`, `Relay` or `Indexer` node.

- Default:
  - This is a non-relay node.
  - It can run on Devnet, Testnet, Betanet and Mainnet.
- Relay:
  - It can run only on Testnet and Mainnet.
- Indexer:
  - It is a default node with Archival Mode enabled, which lets you query the blockchain's data.

## Run Default Node

Expand All @@ -56,6 +53,7 @@ Here you see your node runs on mainnet.
Relay nodes are where other nodes connect. Therefore, a relay node must be able to support a large number of connections and handle the processing load associated with all the data flowing to and from these connections. Thus, relay nodes require significantly more power than non-relay nodes. Relay nodes are always configured in archival mode.

The relay node must be publicly accessible, so it must have a public IP.

![relaydep](./img/algorand_relaydep.png)

Once the deployment is done, SSH into the node and run `goal node status` to see the status of the node. You can also check if the right port is listening (:4161 for testnet, and :4160 for mainnet).
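
For example, a quick way to confirm that the relay port is open (a sketch using `ss`, which ships with most Linux distributions):

```
# 4161 for testnet, 4160 for mainnet
ss -tlnp | grep -E '4160|4161'
```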
@@ -64,32 +62,6 @@ Once the deployment is done, SSH into the node and run `goal node status` to see

The next step, according to the [docs](https://developer.algorand.org/docs/run-a-node/setup/types/#relay-node), is to register your `ip:port` in Algorand's public SRV records.
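
Once registered, you can verify that your relay shows up in the public SRV records. A sketch assuming the standard mainnet bootstrap domain; replace `<your-ip>` with your relay's public IP:

```
dig +short srv _algobootstrap._tcp.mainnet.algorand.network | grep <your-ip>
```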

## Run Participant Node

Participation means participation in the Algorand consensus protocol. An account that participates in the Algorand consensus protocol is eligible and available to be selected to propose and vote on new blocks in the Algorand blockchain.

A participation node is responsible for hosting participation keys for one or more online accounts.

- What do you need?
  - The mnemonics of an account on the network you deploy on (status `Offline`). You can check the status of your account on AlgoExplorer by searching for your account ID.

    The account needs to have some microAlgos to sign the participation transaction.
  - [Mainnet explorer](https://algoexplorer.io/)
  - [Testnet explorer](https://testnet.algoexplorer.io/)

- First Round: the first block your participation node needs to validate from. You can choose the latest block from the explorer.
![partexp](./img/algorand_partexp.png)
- Last Round: the final block your node can validate. Let's make it 30M.

![partdep](./img/algorand_partdep.png)

Once the deployment is done, SSH into the node and run `goal node status` to see the status of the node. You will see it catching up; fast catchup synchronizes the node with the latest block faster by fetching only the last 1,000 blocks. After this is done, it will start to create the participation keys.
![partstatus](./img/algorand_partstatus.png)

Now, if you check the explorer, you can see that the status of the account has turned to `Online`:

![partonl](./img/algorand_partonl.png)

## Run Indexer Node

The primary purpose of the Indexer is to provide a REST API for searching the Algorand blockchain. The Indexer REST APIs retrieve blockchain data from a PostgreSQL-compatible database that must be populated. This database is populated by the same indexer instance, or by a separate indexer instance that connects to the algod process of a running Algorand node to read block data. That node must also be an archival node so that searching the entire blockchain is possible.
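
Once the indexer is running, you can query its REST API as a quick sanity check. A sketch assuming the indexer's default port (8980) and no API token configured:

```
# Health check
curl "http://localhost:8980/health"

# Search the blockchain, e.g. list one account
curl "http://localhost:8980/v2/accounts?limit=1"
```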
56 changes: 56 additions & 0 deletions src/documentation/dashboard/solutions/tfrobot.md
@@ -0,0 +1,56 @@
<h1> TFRobot </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Features](#features)
- [Prerequisites](#prerequisites)
- [Deployment](#deployment)
- [Deployed Instances Table](#deployed-instances-table)

***

## Introduction

[TFRobot](https://github.com/threefoldtech/tfgrid-sdk-go/blob/development/tfrobot/README.md) is a tool designed to automate the mass deployment of groups of VMs on the ThreeFold Grid, with support for multiple retries on failed deployments.

## Features

- **Mass Deployment:** Deploy groups of VMs on the grid simultaneously.
- **Mass Cancellation:** Simultaneously cancel all the VMs on the grid defined in the configuration file.
- **Load Deployments:** Simultaneously load groups of VMs deployed with TFRobot.
- **Customizable Configurations:** Define node groups, VM groups and other settings through YAML or JSON files, as sketched below.
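
A minimal configuration sketch is shown below. The field names follow the upstream TFRobot README at the time of writing and may change, so treat the values as illustrative, not authoritative:

```
node_groups:
  - name: group_a        # hypothetical group name
    nodes_count: 2       # nodes to reserve for this group
    free_cpu: 2          # vCPUs required on each node
    free_mru: 4          # free memory (GB) required on each node
    free_ssd: 50         # free SSD storage (GB) required on each node
vms:
  - name: examplevm      # hypothetical VM name prefix
    vms_count: 4         # VMs to spread across the node group
    node_group: group_a
    cpu: 1
    mem: 2
    flist: <flist URL>
    entry_point: /sbin/zinit init
    ssh_key: key1
ssh_keys:
  key1: <your public SSH key>
mnemonic: <your mnemonic>
network: main
```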

## Prerequisites

- Make sure you have a [wallet](../wallet_connector.md)
- From the sidebar click on **Applications**
- Click on **TFRobot**

## Deployment

![ ](./img/solutions_tfrobot.png)

- Enter an Application Name.

- Select a capacity package:
- **Small**: {cpu: 1, memory: 2, diskSize: 25 }
- **Medium**: {cpu: 2, memory: 4, diskSize: 50 }
- **Large**: {cpu: 4, memory: 16, diskSize: 100 }
- Or choose a **Custom** plan

- `Dedicated` flag to retrieve only dedicated nodes
- `Certified` flag to retrieve only certified nodes
- Choose the location of the node
- `Region`
- `Country`
- `Farm Name`
- Click on `Load Nodes`
- Select the node you want to deploy on
- Click `Deploy`
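
Once the deployment is done, you can SSH into the VM and drive TFRobot from the command line. This is a sketch based on the subcommands documented in the upstream README; confirm them with `tfrobot --help` on your instance:

```
# Mass-deploy the VM groups defined in the configuration file
tfrobot deploy -c config.yaml

# Load information about VMs previously deployed with TFRobot
tfrobot load -c config.yaml

# Cancel all the deployments defined in the configuration file
tfrobot cancel -c config.yaml
```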

## Deployed Instances Table

At all times, you can see a list of all your deployed instances:

![ ](./img/tfrobot1.png)
8 changes: 7 additions & 1 deletion src/documentation/farmers/farmerbot/farmerbot_information.md
@@ -74,13 +74,19 @@ included_nodes: [optional, if no nodes are added then the farmerbot will include
- "<your node ID to be included, required at least 2>"
excluded_nodes:
- "<your node ID to be excluded, optional>"
priority_nodes:
- "<your node ID to have a priority in nodes management, optional>"
never_shutdown_nodes:
- "<your node ID to be never shutdown, optional>"
power:
  periodic_wake_up_start: "<daily time to wake up nodes for your farm, default is the time you run the command, format is 00:00AM or 00:00PM, optional>"
  wake_up_threshold: "<the threshold number for resources usage that will need another node to be on, default is 80, optional>"
  periodic_wake_up_limit: "<the number (limit) of nodes to be woken up every day, default is 1, optional>"
  overprovision_cpu: "<how much the node allows overprovisioning the CPU, default is 1, range: [1;4], optional>"
  wake_up_threshold:
    cru: "<the threshold of CRU usage above which another node needs to be on, default is 80, optional>"
    mru: "<the threshold of MRU usage above which another node needs to be on, default is 80, optional>"
    sru: "<the threshold of SRU usage above which another node needs to be on, default is 80, optional>"
    hru: "<the threshold of HRU usage above which another node needs to be on, default is 80, optional>"
```
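
With the configuration file in place, starting the bot typically looks like the sketch below. The flag names are assumptions based on the Go farmerbot documentation at the time of writing; confirm them with `farmerbot run --help`:

```
# -c: path to the YAML configuration file shown above
# -e: path to an env file holding your network and mnemonic/seed
# -d: enable debug logging
farmerbot run -c config.yml -e .env -d
```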

## Supported Commands and Flags
Expand Down
4 changes: 3 additions & 1 deletion src/documentation/system_administrators/advanced/advanced.md
@@ -14,7 +14,9 @@ In this section, we delve into sophisticated topics and powerful functionalities
- [IPFS on a Full VM](./ipfs/ipfs_fullvm.md)
- [IPFS on a Micro VM](./ipfs/ipfs_microvm.md)
- [Hummingbot](./hummingbot.md)
- [AI & ML Workloads](./ai_ml_workloads.md)
- [AI & ML Workloads](./ai_ml_workloads/ai_ml_workloads_toc.md)
- [CPU and Llama](./ai_ml_workloads/cpu_and_llama.md)
- [GPU and PyTorch](./ai_ml_workloads/gpu_and_pytorch.md)
- [Ecommerce](./ecommerce/ecommerce.md)
- [WooCommerce](./ecommerce/woocommerce.md)
- [nopCommerce](./ecommerce/nopcommerce.md)
@@ -0,0 +1,6 @@
# AI & ML Workloads

<h2>Table of Contents</h2>

- [CPU and Llama](./cpu_and_llama.md)
- [GPU and PyTorch](./gpu_and_pytorch.md)
@@ -0,0 +1,105 @@
<h1> AI & ML Workloads: CPU and Llama </h1>

<h2>Table of Contents</h2>

- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Deploy a Full VM](#deploy-a-full-vm)
- [Preparing the VM](#preparing-the-vm)
- [Setting Up OpenWebUI](#setting-up-openwebui)
- [Pull a Model](#pull-a-model)
- [Using Llama](#using-llama)
- [References](#references)

---

## Introduction

We present a simple guide on how to deploy large language models on the grid using only a CPU. For this guide, we will be deploying Llama on a full VM using OpenWebUI bundled with Ollama support.

Llama is a large language model trained by Meta AI. It is an open-source model, meaning that it is free to use and customize for various applications. This LLM is designed to be a more conversational AI, allowing users to engage in natural-sounding conversations. Llama is trained on a massive dataset of text from the internet and can generate responses to a wide range of topics and questions.

Ollama is an open-source project that allows users to run large language models (LLMs) on their local machine.

OpenWebUI is one of many front ends for Ollama, providing a convenient and user-friendly way to load weights and chat with the bot.

## Prerequisites

- [A TFChain account](../../../dashboard/wallet_connector.md)
- TFT in your TFChain account
- [Buy TFT](../../../threefold_token/buy_sell_tft/buy_sell_tft.md)
- [Send TFT to TFChain](../../../threefold_token/tft_bridges/tfchain_stellar_bridge.md)

## Deploy a Full VM

We start by deploying a full VM on the ThreeFold Dashboard. The more cores we assign to the machine, the faster the model will run.

* On the [Threefold Dashboard](https://dashboard.grid.tf/#/), go to the [full virtual machine deployment page](https://dashboard.grid.tf/#/deploy/virtual-machines/full-virtual-machine/)
* Deploy a full VM (Ubuntu 22.04) with only `Wireguard` as the network
* Vcores: 8 vcores
* MB of RAM: 4096 MB
* GB of storage: 100 GB
* After deployment, [set the Wireguard configurations](../../getstarted/ssh_guide/ssh_wireguard.md)
* Connect to the VM via SSH
* ```
ssh root@VM_Wireguard_Address
```

## Preparing the VM

We prepare the full VM to run Llama.

* Install Docker
* ```
wget -O docker.sh get.docker.com
bash docker.sh
```
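* Optionally, confirm that the installation succeeded before moving on
* ```
docker --version
docker run --rm hello-world
```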

## Setting Up OpenWebUI

We now install OpenWebUI with bundled Ollama support. Note that you might need to use a port other than `3000` if that port is already in use on your local machine.

* For CPU only
```
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```
* Once the container is fully loaded and running, go to your browser to access OpenWebUI using the Wireguard address:
* ```
10.20.4.2:3000
```
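* If the page does not load, you can check the container and inspect its logs with standard Docker commands:
* ```
docker ps --filter name=open-webui
docker logs -f open-webui
```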

You should now see the OpenWebUI page. You can register by entering your email and setting a password. This information will stay on the machine running OpenWebUI.

<p align="center">
<img src="./img/openwebui_page.png" />
</p>

## Pull a Model

Once you've accessed OpenWebUI, you need to download an LLM model before using it.

- Click on the bottom left button displaying your username
- Click on `Settings`, then `Admin Settings` and `Models`
- Under `Pull a model from Ollama.com`, enter the LLM model you want to use
- In our case we will use `llama3`
- Click on the button on the right to pull the image

![](./img/openwebui_model.png)
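
If you prefer the command line, the `:ollama` image bundles the Ollama binary, so you should be able to pull a model directly inside the container. A sketch; verify that the binary is available in your image version:

```
docker exec -it open-webui ollama pull llama3
docker exec -it open-webui ollama list
```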

## Using Llama

Let's now use Llama!

- Click on `New Chat` on the top left corner
- Click on `Select a model` and select the model you downloaded
- You can click on `Set as default` for convenience

![](./img/openwebui_set_model.png)

- You can now `Send a Message` to Llama and interact with it!

That's it. You now have a running LLM instance on the grid.
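
Under the same assumption about the bundled Ollama binary, you can also chat with the model straight from the VM's shell, bypassing the web UI:

```
docker exec -it open-webui ollama run llama3
```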

## References

For any advanced configurations, you may refer to the [OpenWebUI documentation](https://github.com/open-webui/open-webui).
@@ -1,4 +1,4 @@
<h1> AI & ML Workloads </h1>
<h1> AI & ML Workloads: GPU and PyTorch </h1>

<h2> Table of Contents </h2>

@@ -23,7 +23,7 @@ In the second part, we show how to use PyTorch to run AI/ML tasks.

## Prerequisites

You need to reserve a [dedicated GPU node](../../dashboard/deploy/node_finder.md#dedicated-nodes) on the ThreeFold Grid.
You need to reserve a [dedicated GPU node](../../../dashboard/deploy/node_finder.md#dedicated-nodes) on the ThreeFold Grid.
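
Before running any workloads, it is worth confirming that the GPU is visible to both the driver and PyTorch. This sketch assumes the NVIDIA driver and PyTorch are already installed, as covered in the sections below:

```
nvidia-smi
python3 -c "import torch; print(torch.cuda.is_available())"
```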

## Prepare the System

@@ -85,7 +85,9 @@ For complementary information on ThreeFold grid and its cloud component, refer t
- [IPFS on a Full VM](./advanced/ipfs/ipfs_fullvm.md)
- [IPFS on a Micro VM](./advanced/ipfs/ipfs_microvm.md)
- [Hummingbot](./advanced/hummingbot.md)
- [AI & ML Workloads](./advanced/ai_ml_workloads.md)
- [AI & ML Workloads](./advanced/ai_ml_workloads/ai_ml_workloads_toc.md)
- [CPU and Llama](./advanced/ai_ml_workloads/cpu_and_llama.md)
- [GPU and PyTorch](./advanced/ai_ml_workloads/gpu_and_pytorch.md)
- [Ecommerce](./advanced/ecommerce/ecommerce.md)
- [WooCommerce](./advanced/ecommerce/woocommerce.md)
- [nopCommerce](./advanced/ecommerce/nopcommerce.md)
