[docs] label studio + tab cleanup
themattinthehatt committed Dec 2, 2023
1 parent d576bab commit 627fcfd
Showing 13 changed files with 156 additions and 44 deletions.
12 changes: 10 additions & 2 deletions docs/source/accessing_your_data.rst
@@ -4,14 +4,22 @@
Accessing your data
###################

+First, make sure you return to the terminal/VS Code view of the Studio by clicking on
+the appropriate icon in the right-hand tool bar.
+
+.. image:: https://imgur.com/lINajyE.png
+   :width: 200
+
+You will see a file explorer on the left-hand side.

Lightning Pose project structure
================================

Data for the project named ``<PROJ_NAME>`` will be stored in a directory with the following structure:

.. code-block::

-   ~/Pose-app/.shared/data/<PROJ_NAME>
+   ~/Pose-app/data/<PROJ_NAME>
    ├── labeled-data/
    ├── models/
    ├── videos/
@@ -50,7 +58,7 @@ Each model and its associated outputs will be stored in a directory with the following structure:

.. code-block::

-   ~/Pose-app/.shared/data/<PROJ_NAME>/models/YYYY-MM-DD/HH-MM-SS
+   ~/Pose-app/data/<PROJ_NAME>/models/YYYY-MM-DD/HH-MM-SS
    ├── tb_logs/
    ├── video_preds/
    ├── video_preds_infer/
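
As a quick illustration of how you might load these outputs, here is a minimal sketch that
assumes the model directory contains a ``predictions.csv`` file with a DLC-style three-row
header (the exact file and column names here are assumptions, not guaranteed by the app):

.. code-block:: python

   from pathlib import Path

   import pandas as pd

   # Hypothetical model directory; substitute your project name and timestamp.
   model_dir = Path.home() / "Pose-app/data/<PROJ_NAME>/models/2023-12-02/12-00-00"

   # Assumes a DLC-style CSV: three header rows (scorer / bodyparts / coords)
   # and frame paths in the first column.
   preds = pd.read_csv(model_dir / "predictions.csv", header=[0, 1, 2], index_col=0)

   # Mean confidence per keypoint across all frames.
   mean_conf = preds.xs("likelihood", axis=1, level="coords").mean()
   print(mean_conf)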
2 changes: 1 addition & 1 deletion docs/source/faqs.rst
@@ -54,5 +54,5 @@ You can find the relevant parameters to adjust

The app uses generic login info:

-* username: user@localhost
+* email: user@localhost
* password: pw
9 changes: 7 additions & 2 deletions docs/source/getting_started.rst
@@ -4,7 +4,7 @@ Getting started

There are several options for getting started with the Lightning Pose app:

-* :ref:`Duplicate Lightning Studio <lightning_studio>` is a no-install option - simply clone a cloud-based environment that comes with the app already installed. Requires creating a Lightning.ai account.
+* :ref:`Duplicate Lightning Studio <lightning_studio>` is a no-install option - simply clone a cloud-based environment that comes with the app already installed. Requires creating a Lightning account.

* :ref:`Install app from github <conda_from_source>` is for installation on a local machine or a fresh Lightning Studio. This option is mostly used for development purposes.

@@ -16,11 +16,16 @@ Duplicate Lightning Studio with pre-installed app
Follow
`this link <todo>`_
to the Lightning Pose App Studio.
-When you click the **Use** button you will be taken to a Lightning Studio environment with access to a command line interface, VSCode IDE, Jupyter IDE, and more.
+When you click the **Use** button you will be taken to a Lightning Studio environment with access
+to a terminal, VSCode IDE, Jupyter IDE, and more.
The app and all dependencies are already installed.

+You will be required to create a Lightning account if you have not already signed up.

+Once you have opened the Studio environment you can proceed to
+:ref:`Using the app <using_the_app>`
+to learn how to launch the app and navigate the various tabs.

.. _conda_from_source:

Install app from github
2 changes: 1 addition & 1 deletion docs/source/tabs/extract_frames.rst
@@ -23,7 +23,7 @@ You can also select the portion of the video to extract frames from.
If the beginning and/or end of your videos do not contain the animals or contain extra objects
(e.g. experimenter hands) we recommend excluding these portions.

Click "Extract frames", and another progress bar will appear.
Click "Extract frames" once the video upload is complete, and another progress bar will appear.

.. image:: https://imgur.com/U258Vah.png

27 changes: 25 additions & 2 deletions docs/source/tabs/label_frames.rst
@@ -18,6 +18,29 @@ At the login screen, provide the following generic credentials:
.. image:: https://imgur.com/TGIqQ6I.png

Once you have logged in, you will see a list of your labeling projects.
-Even though you can see all projects, only select the project which is currently loaded in the app!
+Even though you can see all projects, only select the project you loaded in the app from the
+project manager!

-TODO
+.. image:: https://imgur.com/FkeRAHW.png

+The next screen displays all of your images selected for labeling.
+
+.. image:: https://imgur.com/WNIq5oJ.png
+
+Click on one of the images and it will enlarge.
+Click on a keypoint name (or hit the corresponding number on your keyboard) and the keypoint name
+will become highlighted.
+You can now place the keypoint on the image.
+
+.. image:: https://imgur.com/becL4T6.png
+
+Use the provided tools in the right-hand tool bar to pan and zoom.
+
+.. image:: https://imgur.com/buWE79h.png
+   :width: 50
+
+When you have labeled all desired keypoints, click "Submit".
+You can return to any image and move keypoints around; be sure to click "Update" to save the
+changes.
+
+You can then click on another image on the left-hand side to continue labeling.
14 changes: 7 additions & 7 deletions docs/source/tabs/labeled_diagnostics.rst
@@ -18,22 +18,22 @@ Models can be renamed using the text fields in the left panel.
Select data to plot
-------------------

-.. image:: https://imgur.com/8C7JShk.png
+.. .. image:: https://imgur.com/8C7JShk.png
Filter results using various criteria:

-* keypoint: mean computes metric average across all keypoints on each frame
-* metric: choose from available metrics like pixel error and confidence
-* train/val/test: data split to plot
+* **Keypoint**: mean computes metric average across all keypoints on each frame
+* **Metric**: choose from available metrics like pixel error and confidence
+* **Train/Val/Test**: data split to plot

Compare multiple models
-----------------------

Plot selected metric/keypoint across all models:

-* plot style: choose from box, violin, or strip
-* metric threshold: ignore any values below this threshold
-* y-axis scale: choose from log or linear
+* **Plot style**: choose from box, violin, or strip
+* **Metric threshold**: ignore any values below this threshold
+* **Y-axis scale**: choose from log or linear

Compare two models
------------------
68 changes: 58 additions & 10 deletions docs/source/tabs/manage_project.rst
@@ -34,37 +34,84 @@ In this example we have a side and bottom view of a mouse, so we enter "2".
Next, you will define the set of keypoints that you would like to label.
In the example below we enter four keypoint names: two bodyparts ("nose" and "tailbase")
seen from two views ("top" and "bottom").
-If you are using more than two views, we recommend listing all keypoints from one view first,
+If you are using more than one view, we recommend listing all keypoints from one view first,
then all keypoints from the next view, etc.
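
For instance, with the two bodyparts and two views from the example above, one ordering that
follows this recommendation is (the exact naming convention is up to you):

.. code-block::

   nose_top
   tailbase_top
   nose_bottom
   tailbase_bottom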

.. image:: https://imgur.com/m0a6TRy.png

You will then be prompted to select a subset of keypoints for the Pose PCA loss.
Including static keypoints (e.g. those marking a corner of an arena) is generally not helpful.
Also be careful not to include keypoints that are often occluded, like the tongue.
-If these keypoints are included the loss will try to localize them even when they are occluded,
-which might be unhelpful if you want to use the confidence of the outputs as a lick detector.
+If these keypoints are included the Pose PCA loss will try to localize them even when they are
+occluded, which might be unhelpful if you want to use the confidence of the outputs as a lick
+detector.

.. image:: https://imgur.com/1BtsrWG.png

-Finally, if you chose more than 1 camera view, you will select which keypoints correspond to the
+Finally, if you chose more than one camera view, you will select which keypoints correspond to the
same body part from different views.
This table will be filled out automatically, but check to make sure it is correct!

.. image:: https://imgur.com/0Nb7hCp.png

Click "Create project".
You will see "Request submitted", and once the project is created the text will update to
"Proceed to the next tab to extract frames for labeling" in green.
Click "Create project"; you will see "Request submitted".
Once the project is created the text will update to
"Proceed to the next tab to extract frames for labeling" in green,
and a new set of tabs will appear at the top of the app.

-.. image:: https://imgur.com/J2IEZrm.png
+.. .. image:: https://imgur.com/J2IEZrm.png
.. _create_new_project_from_source:

Create new project from source
==============================

-COMING SOON
+.. image:: https://imgur.com/499rk2a.png

+.. warning::
+
+   The app currently only supports conversion of DLC projects.
+   If you have another type of project that needs conversion support (SLEAP, DPK, etc.) please
+   `raise an issue <https://github.com/Lightning-Universe/Pose-app/issues>`_.

+The standard DLC project directory looks like the following:
+
+.. code-block::
+
+   <dlc-project>
+   ├── dlc-models/
+   ├── labeled-data/
+   ├── training-datasets/
+   ├── videos/
+   └── config.yaml
+
+You will need to create a zip file of this project directory to upload to the app.
+The upload process can take some time, so we recommend first creating a version of the DLC project
+that **only** contains the directories ``labeled-data`` and ``videos``.
+Make sure the videos are not symlinks!
+Once you have created this project copy, compress it into a zip file.
+
+.. code-block::
+
+   <dlc-project-copy>
+   ├── labeled-data/
+   └── videos/
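
As a rough sketch, the copy-and-compress step might look like this in Python (all paths here are
hypothetical; adjust them to your own project):

.. code-block:: python

   import shutil
   from pathlib import Path

   src = Path("/data/dlc-project")        # original DLC project (hypothetical path)
   dst = Path("/data/dlc-project-copy")   # stripped-down copy to upload

   # Copy only the two required directories; with the default symlinks=False,
   # copytree follows symlinked videos and copies the real files.
   for subdir in ("labeled-data", "videos"):
       shutil.copytree(src / subdir, dst / subdir)

   # Create /data/dlc-project-copy.zip containing the top-level project folder.
   shutil.make_archive(str(dst), "zip", root_dir=dst.parent, base_dir=dst.name)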
+In the Lightning Pose App project manager, select "Create new project from source" and give your
+project a name (can be the same as the DLC name or different).
+You will then select the uploaded project format, and upload your zip file.

+.. note::
+
+   If your zip file is larger than the 200MB limit, :ref:`see the FAQ <faq_upload_limit>`.
+   You may also replace many large video files with smaller video snippets for faster uploading.
+   Whatever video files are in the ``videos`` directory will be used for unsupervised losses.
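
If you do want to swap in shorter snippets, one possibility (assuming ``ffmpeg`` is available on
your machine) is to stream-copy a short segment of each video before zipping:

.. code-block:: python

   import subprocess
   from pathlib import Path

   videos_dir = Path("/data/dlc-project-copy/videos")  # hypothetical path

   for video in sorted(videos_dir.glob("*.mp4")):
       snippet = video.with_name(video.stem + "_snippet.mp4")
       # Keep the first 60 seconds without re-encoding (-c copy).
       subprocess.run(
           ["ffmpeg", "-ss", "0", "-i", str(video), "-t", "60", "-c", "copy", str(snippet)],
           check=True,
       )
       video.unlink()  # remove the full-length video from the copy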

+Once the zip file upload is complete you will need to walk through the steps covered in
+:ref:`Create new project <create_new_project>` (though note the keypoint names are now provided).
+Once you click "Create project" your DLC project will be successfully converted!
+If you have many hundreds or thousands of labeled images in your project it may take
+several minutes to upload all of the data into LabelStudio.

.. _load_existing_project:

@@ -73,7 +120,8 @@ Load existing project

.. image:: https://imgur.com/O8Jdd54.png

-Enter project name; you will see a list of available projects (like 'mirror-mouse' above).
+Enter project name; you will see a list of available projects (like 'mirror-mouse' above) -
+you **must** select one of the available projects, or you will see an error message.
Once you enter the project name click "Load project".

You will see the previously entered project data appear (camera views, keypoint names, etc.).
2 changes: 1 addition & 1 deletion docs/source/tabs/prepare_fiftyone.rst
@@ -19,7 +19,7 @@ existing names are shown above the text field (in this example, "data-1" and "data-2").
.. image:: https://imgur.com/sYZ0UCb.png
:width: 600

Click the "Prepare Fiftyone dataset" button.
Click the "Prepare Fiftyone dataset" button or hit "Enter".
The green success message will indicate when it is time to proceed.

.. image:: https://imgur.com/tFuAVt6.png
34 changes: 20 additions & 14 deletions docs/source/tabs/train_infer.rst
@@ -9,7 +9,7 @@ This tab is the interface for training models and running inference on new videos.
.. image:: https://imgur.com/GXhvqXI.png

The left side-bar displays your current labeling progress, and contains a drop-down menu showing
-all existing models.
+all previously trained models.
The "Train Networks" and "Predict on New Videos" columns are for training and inference,
and are detailed below.

@@ -19,23 +19,28 @@ Train Networks
Training options
----------------

-Optionally change the max training epochs and the types of unsupervised losses used for the
+From the drop-down "Change Defaults" menu,
+optionally change the max training epochs and the types of unsupervised losses used for the
semi-supervised models.

-.. image:: https://imgur.com/LiylXxc.png
+.. .. image:: https://imgur.com/LiylXxc.png
   :width: 400
+The PCA Multiview option will only appear if your data have more than one view;
+the Pose PCA option will only appear if you selected keypoints for the Pose PCA loss during
+project creation.

Video handling options
----------------------

After each model has completed training, you can choose to automatically run inference on the set
of videos uploaded for labeling:

-* Do not run inference: self-explanatory
-* Run inference on videos: runs on all videos previously uploaded in the "Extract Frames" tab
-* Run inference on videos and make labeled movie: runs inference and then creates a labeled video with model predictions overlaid on the frames.
+* **Do not run inference**: self-explanatory
+* **Run inference on videos**: runs on all videos previously uploaded in the "Extract Frames" tab
+* **Run inference on videos and make labeled movie**: runs inference and then creates a labeled video with model predictions overlaid on the frames.

-.. image:: https://imgur.com/8UBY5y9.png
+.. .. image:: https://imgur.com/8UBY5y9.png
   :width: 400
.. warning::
@@ -48,17 +53,17 @@ Select models to train

There are currently 4 options to choose from:

-* Supervised: fully supervised baseline
-* Semi-supervised: uses labeled frames plus unlabeled video data for training
-* Supervised context: supervised model with temporal context frames
-* Semi-supervised context: semi-supervised model plus temporal context frames
+* **Supervised**: fully supervised baseline
+* **Semi-supervised**: uses labeled frames plus unlabeled video data for training
+* **Supervised context**: supervised model with temporal context frames
+* **Semi-supervised context**: semi-supervised model plus temporal context frames

-.. image:: https://imgur.com/x1MdTSk.png
+.. .. image:: https://imgur.com/x1MdTSk.png
   :width: 400
.. note::

-   If you uploaded a DLC project you will not see the context options.
+   If you uploaded a DLC project or are using ``demo_app.py`` you will not see the context options.

Train models
------------
@@ -86,7 +91,8 @@ You will see an upload progress bar.
.. image:: https://imgur.com/MXHq8hx.png
:width: 400

Click "Run inference", and another set of progress bars will appear.
Click "Run inference" once the video uploads are complete,
and another set of progress bars will appear.
After inference is complete for each video a small snippet is extracted
(during the period of highest motion energy)
and a video of raw frames overlaid with model predictions is created for diagnostic purposes.
2 changes: 1 addition & 1 deletion docs/source/tabs/video_diagnostics.rst
@@ -35,4 +35,4 @@ Select a subset of models and a keypoint.
The trace plot shows various metrics up top (temporal norm, pca errors),
followed by the (x, y) predictions and their confidences.

-.. image:: https://imgur.com/LGFnBVD.png
+.. image:: https://imgur.com/RBRsUg0.png
24 changes: 23 additions & 1 deletion docs/source/using_the_app.rst
@@ -1,3 +1,5 @@
+.. _using_the_app:

#############
Using the app
#############
@@ -10,13 +12,24 @@ We provide three different apps:

**Launch the app**

+Once you have opened the Lightning Pose App Studio, you will see the following:
+
+.. image:: https://imgur.com/N351izy.png
+   :width: 400
+
+First, open a terminal by using the drop-down menu at the top left and select
+``Terminal > New Terminal``.
+
+.. image:: https://imgur.com/ZqhpAhE.png
+   :width: 400
+
-To launch an app from the terminal, make sure you are in the ``Pose-app`` directory and run
+To launch an app from the terminal, move to the ``Pose-app`` directory:
+
+.. code-block:: console
+
+   cd Pose-app
+
+and run

.. code-block:: console
@@ -36,6 +49,15 @@ Navigate to the app output by clicking on the "port" plugin on the right-hand tool bar.
.. image:: https://imgur.com/0XxDcpZ.png
:width: 400

+IMPORTANT! You will need to click on the "Share" button, which will open the app in a separate
+browser window.
+This is crucial to getting all of the components visualized properly.
+In the original browser page you can return to the terminal to see printouts by clicking on the
+VS Code icon in the right-hand tool bar.
+
+.. image:: https://imgur.com/lINajyE.png
+   :width: 200

Click on the links below to find more information about specific tabs;
remember that ``demo_app.py`` and ``labeling_app.py`` only utilize a subset of the tabs.

2 changes: 1 addition & 1 deletion lightning_pose_app/ui/train_infer.py
@@ -851,7 +851,7 @@ def _render_streamlit_fn(state: AppState):
        st_loss_pcamv = False
    pcasv = state.config_dict["data"].get("columns_for_singleview_pca", [])
    if len(pcasv) > 0:
-        st_loss_pcasv = expander.checkbox("PCA Singleview", value=True)
+        st_loss_pcasv = expander.checkbox("Pose PCA", value=True)
    else:
        st_loss_pcasv = False
    st_loss_temp = expander.checkbox("Temporal", value=True)