Fix formatting and typos #316

Merged
merged 2 commits into from
Aug 20, 2024
22 changes: 11 additions & 11 deletions docs/tutorials/clay-v1-wall-to-wall.ipynb
@@ -8,15 +8,15 @@
"## Run Clay v1\n",
"\n",
"This notebook shows how to run Clay v1 wall-to-wall, from downloading imagery\n",
"to training a tiny fine tuning head. This will include the following steps:\n",
"to training a tiny, fine-tuned head. This will include the following steps:\n",
"\n",
"1. Set a location and date range of interest\n",
"2. Download Sentinel-2 imagery for this specification\n",
"3. Load the model checkpoint\n",
"4. Prepare data into a format for the model\n",
"5. Run the model on the imagery\n",
"6. Analyise the model embeddings output using PCA\n",
"7. Train a Support Vector Machines fine tuning head"
"6. Analyse the model embeddings output using PCA\n",
"7. Train a Support Vector Machine fine-tuning head"
]
},
{
@@ -333,11 +333,11 @@
"source": [
"### Prepare band metadata for passing it to the model\n",
"\n",
"This is the most technical part so far. We will take the information in the stack of imagery and convert it into the formate that the model requires. This includes converting the lat/lon and the date of the imagery into normalized values.\n",
"This is the most technical part so far. We will take the information in the stack of imagery and convert it into the format that the model requires. This includes converting the lat/lon and the date of the imagery into normalized values.\n",
"\n",
"The Clay model will accept any band combination in any order, from different platforms. But for this the model needs to know the wavelength of each band that is passed to it, and normalization parameters for each band as well. It will use that to normalize the data and to interpret each band based on its central wavelength.\n",
"\n",
"For Sentinel-2 we can use medata file of the model to extract those values. But this cloud also be something custom for a different platform."
"For Sentinel-2 we can use a metadata file of the model to extract those values. But this could also be something custom for a different platform."
]
},
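The lat/lon and date normalization mentioned above can be sketched as follows. A common choice is to encode position and acquisition week as sine/cosine pairs so that nearby locations and dates map to nearby values; this encoding is an illustrative assumption, not necessarily the one the model uses.

```python
import datetime
import math

def normalize_latlon(lat, lon):
    # Encode latitude/longitude in degrees as sine/cosine pairs
    # (illustrative encoding, not the model's actual API).
    lat_rad, lon_rad = math.radians(lat), math.radians(lon)
    return (
        (math.sin(lat_rad), math.cos(lat_rad)),
        (math.sin(lon_rad), math.cos(lon_rad)),
    )

def normalize_date(date):
    # Encode the ISO week of the year cyclically (illustrative).
    week = date.isocalendar()[1]
    angle = 2 * math.pi * week / 52
    return (math.sin(angle), math.cos(angle))

# Example coordinates and date (placeholder values)
latlon = normalize_latlon(37.30, -8.57)
timevec = normalize_date(datetime.date(2020, 7, 1))
```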
{
@@ -372,9 +372,9 @@
"id": "c2d8e1f3-011d-4be5-8071-547f0ad91ad6",
"metadata": {},
"source": [
"### Convert the band pixel data in to the format for the model\n",
"### Convert the band pixel data into the format for the model\n",
"\n",
"We will take the information in the stack of imagery and convert it into the formate that the model requires. This includes converting the lat/lon and the date of the imagery into normalized values."
"We will take the information in the stack of imagery and convert it into the format that the model requires. This includes converting the lat/lon and the date of the imagery into normalized values."
]
},
{
@@ -422,7 +422,7 @@
"source": [
"### Combine the metadata and the transformed pixels\n",
"\n",
"Now we can combine all of these inputs into a dictionary that combines everything."
"Now we can combine all these inputs into a dictionary that combines everything."
]
},
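A minimal sketch of the kind of dictionary this step produces, with NumPy arrays standing in for the transformed pixels and metadata. The key names and shapes here are hypothetical, chosen only to illustrate the idea, not the model's actual input schema.

```python
import numpy as np

# Illustrative shapes: 12 images, 4 bands, 256x256 pixel chips.
pixels = np.zeros((12, 4, 256, 256), dtype="float32")
times = np.zeros((12, 4), dtype="float32")    # encoded date values
latlons = np.zeros((12, 4), dtype="float32")  # encoded lat/lon values
# Central wavelengths (micrometers) of Sentinel-2 B02, B03, B04, B08.
waves = np.array([0.493, 0.560, 0.665, 0.842])

# Hypothetical input dictionary; the real model defines its own keys.
datacube = {
    "pixels": pixels,
    "time": times,
    "latlon": latlons,
    "waves": waves,
}
```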
{
@@ -481,7 +481,7 @@
"source": [
"### Analyse the embeddings\n",
"\n",
"A simple analysis of the embeddings is to reduce each one of them into a single number using Principal Component Analysis. For this we will fit a PCA on the 12 embeddings we have, and do the dimensionality reduction for them. We will se a separation into three groups, the previous images, the cloudy images, and the images after the fire, they all fall into a different range of the PCA space."
"A simple analysis of the embeddings is to reduce each one of them into a single number using Principal Component Analysis. For this we will fit a PCA on the 12 embeddings we have and do the dimensionality reduction for them. We will see a separation into three groups: the previous images, the cloudy images, and the images after the fire each fall into a different range of the PCA space."
]
},
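The dimensionality reduction described here can be sketched with scikit-learn's PCA. The embeddings below are random stand-ins for the encoder output, and the embedding width of 768 is an assumption for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the 12 embedding vectors produced by the model
# (random here; real embeddings come from the encoder output).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(12, 768))

# Reduce each embedding to a single number for plotting.
pca = PCA(n_components=1)
reduced = pca.fit_transform(embeddings)
print(reduced.shape)  # (12, 1)
```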
{
@@ -534,7 +534,7 @@
"id": "b38b70a6-2156-41f8-967e-a490cc8e2778",
"metadata": {},
"source": [
"### And finally, some finetuning\n",
"### And finally, some fine-tuning\n",
"\n",
"We are going to train a classifier head on the embeddings and use it to detect fires."
]
@@ -564,7 +564,7 @@
"fit = [0, 1, 3, 4, 7, 8, 9]\n",
"test = [2, 5, 6, 10, 11]\n",
"\n",
"# Train a support vector machine model\n",
"# Train a Support Vector Machine model\n",
"clf = svm.SVC()\n",
"clf.fit(embeddings[fit] + 100, labels[fit])\n",
"\n",
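A self-contained version of the snippet above, using random stand-ins for the embeddings and fire labels (the notebook derives both from the imagery; the label values and embedding width here are placeholders).

```python
import numpy as np
from sklearn import svm

# Stand-in embeddings and fire/no-fire labels for the 12 images.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(12, 768))
labels = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1])

# Same train/test split as the notebook's snippet.
fit = [0, 1, 3, 4, 7, 8, 9]
test = [2, 5, 6, 10, 11]

# The "+ 100" shift mirrors the snippet; a constant offset does
# not change the separability of the data.
clf = svm.SVC()
clf.fit(embeddings[fit] + 100, labels[fit])
pred = clf.predict(embeddings[test] + 100)
print(pred.shape)  # (5,)
```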