
Quick Start Guide

Brandon Forys edited this page Aug 19, 2021 · 51 revisions

Ready to start using MesoNet? This is the place to begin!

Note: Before starting this tutorial, make sure you've installed MesoNet and its dependencies using the instructions found here.

MesoNet can be used in one of two ways:

  • through a graphical user interface (GUI), which doesn't require you to type any code. This method is useful if you have a set of brain images and you want to predict and export the brain regions from each image, and aren't interested in customizing various parameters such as the machine learning model parameters or the brain atlases used to make predictions.
  • through a command line interface (CLI), which requires you to type a minimal amount of code using IPython. This method is useful if you're using a Jupyter notebook, Colab environment, or another environment that doesn't easily support GUIs, or if you want to customize various parameters in the code.

Additionally, MesoNet can be used through five approaches:

  1. Atlas to brain: Given a pre-trained DeepLabCut model that was trained to associate anatomical landmarks with corresponding points on atlases of brain regions, this approach registers an atlas of brain regions to the fixed brain imaging data using affine transformations. This approach is useful if your data has common anatomical landmarks and is the most robust to variations in image quality and orientation within your data.
  2. Brain to atlas: Given a pre-trained DeepLabCut model that was trained to associate anatomical landmarks with corresponding points on atlases of brain regions, the brain imaging data is fixed onto an atlas of brain regions using affine transformations. This approach is useful if you would like to normalize your brain images to a common template based on anatomical landmarks.
  3. Atlas to brain + sensory maps: Given a pre-trained DeepLabCut model that was trained to associate anatomical landmarks with corresponding points on atlases of brain regions as well as a set of folders containing sensory maps with peaks of sensory activation for each animal, register an atlas of brain regions to the fixed brain imaging data using affine transformations. This approach is useful if you have consistent peaks of functional activity across animals that you would like to use in the alignment processes.
  4. Motif-based functional maps (MBFMs) + U-Net: Given a pre-trained U-Net model that was trained to associate brain imaging data with atlases of brain regions, predict the locations of brain regions in the data without the use of landmarks. The brain imaging data should be motif-based functional maps (MBFMs) calculated from a series of frames (unlike other approaches, which use only one frame at a time) using the associated MATLAB code (10.24433/CO.4985659.v1). This approach is useful if one wishes to mark functional regions based on more complex features of the data (e.g. a motif-based functional map) than landmarks.
  5. Motif-based functional maps (MBFMs) + Brain-to-atlas + VoxelMorph: Given a pre-trained VoxelMorph model that was trained to compute a non-linear transformation between a template functional brain atlas and brain image data, predict the locations of brain regions in the data. In particular, this approach can register each input brain image to a user-defined template functional atlas. The brain imaging data should be motif-based functional maps (MBFMs) calculated using the associated MATLAB code (using seqNMF). This approach is useful if your images are consistently oriented and you want to compare the predicted locations of brain regions across different images.

Each of these approaches can be combined with the others in a variety of ways, and each approach can be included or excluded through the GUI or CLI.

Let's go through how you can use each method to predict and export brain regions:

Preparation

If you are using U-Net (.hdf5) or VoxelMorph (.h5) models, make sure that these models are placed in a new folder called models within the mesonet subdirectory of the MesoNet git repository.

In most cases, MesoNet requires a DeepLabCut model to identify anatomical landmarks in your brain images. For ease of use, go to the mesonet subfolder of the MesoNet git repository that you downloaded to your computer, create a models folder there, and place a DeepLabCut project folder inside it (the default is the one provided on our OSF repository: atlas-DongshengXiao-2020-08-03).

If you wish to use a U-Net model to segment the borders of the cortex, create a models folder in the same mesonet subfolder and place an .hdf5-format MesoNet model there. To use a VoxelMorph model to additionally align a motif-based functional map (MBFM) using local deformation, create a subfolder within models called voxelmorph and place an .h5-format VoxelMorph model inside it.
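The folder layout described above can be created with a few lines of Python. The repository path below is illustrative; point it at your own clone:

```python
from pathlib import Path

# Illustrative path to your local MesoNet clone -- adjust as needed
repo = Path("MesoNet")

# U-Net (.hdf5) models and the DeepLabCut project folder live here
models = repo / "mesonet" / "models"
models.mkdir(parents=True, exist_ok=True)

# VoxelMorph (.h5) models go in a dedicated subfolder
(models / "voxelmorph").mkdir(exist_ok=True)
```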

Place all of your brain images in a single directory. For best results, please make sure that all of your brain images are 8-bit, and in .png format.

Alternatively, you can analyze an image stack in .tif format. Simply place the .tif image stack in an otherwise empty folder, and MesoNet will analyze all images in the stack.
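The two accepted input layouts (a folder of .png images, or a single .tif stack in an otherwise empty folder) can be sketched as a small check. `describe_input` is a hypothetical helper for illustration; it is not part of the MesoNet API:

```python
from pathlib import Path

def describe_input(folder):
    """Classify an input folder as a set of .png images or a single
    .tif stack. Hypothetical helper illustrating the two layouts
    MesoNet accepts; not part of the MesoNet API."""
    folder = Path(folder)
    pngs = sorted(folder.glob("*.png"))
    tifs = sorted(folder.glob("*.tif"))
    if tifs and not pngs:
        if len(tifs) == 1:
            return f"single .tif stack: {tifs[0].name}"
        raise ValueError("Place only one .tif stack in an otherwise empty folder")
    if pngs:
        return f"{len(pngs)} .png images"
    raise ValueError("No .png images or .tif stack found")
```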

The GUI method is outlined right below; you can also skip to the command line method if you prefer to work in a coding environment.


Graphical User Interface (GUI) method

Quick usage reference:

activate DLC-GPU
ipython
import mesonet
mesonet.gui_start()
  1. Input folder: select input images
  2. OPTIONAL: Inspect images using arrow keys/buttons
  3. Output folder: select folder to which you'll output images
  4. CHOICE: Select a default pipeline on the rightmost column and skip to step 7, OR continue following the instructions below.
  5. Select U-Net model to use in list on right
  6. OPTIONAL: Select options and landmarks to use
  7. Click "Predict brain regions using landmarks"
  8. Check outputs (segmented brain image in GUI, other outputs in selected output folder)

Complete guide:

  1. If you haven't already done so, open your favourite command line application (e.g. Terminal on Mac or Linux, or Command Prompt on Windows) and activate the DeepLabCut environment for your computer by typing activate followed by the name of the DeepLabCut environment that you installed (based on the instructions given here). For example, you might type:
activate DLC-GPU
  2. After going to any directory outside of the MesoNet git repository folder, start IPython by typing ipython at the command line.
  3. To start the GUI for applying an existing model to your dataset, type:
import mesonet
mesonet.gui_start()

NOTE: Running the GUI using the above command will automatically search for your MesoNet git repository so that it can access necessary non-Python files (e.g. masks, U-Net models, etc.). On some computers (especially Linux) this search can take a long time. If you are experiencing very long (>1 min) load times when starting the GUI, you can define the git repository manually with the command mesonet.gui_start(git_repo='path/to/repo'), where 'path/to/repo' is a string containing the full path to the top level of your MesoNet git repository (i.e. the folder containing mesonet, setup.py, etc.). You can also change the git repository at any time from within the GUI using the git repository browse button.
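To illustrate what the automatic search is looking for, here is a rough, hypothetical heuristic for recognizing the repository's top level (this is not MesoNet's actual search code):

```python
from pathlib import Path

def looks_like_mesonet_repo(folder):
    """Heuristic: the repo top level contains the 'mesonet' package
    and 'setup.py'. Illustrative sketch only -- not MesoNet's code."""
    folder = Path(folder)
    return (folder / "mesonet").is_dir() and (folder / "setup.py").is_file()
```

If the search is slow on your machine, passing git_repo='path/to/repo' to mesonet.gui_start() skips it entirely.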

You should see a screen like this:

The screen that appears when you start MesoNet

  4. Next to "Input folder" at the top of the screen, click "Browse..." and select a folder containing your brain images (.png, .npy, and .mat will work) or a single .tif, .npy, or .mat image stack, formatted as outlined above in the Preparation section. Don't be alarmed if nothing appears right away - if the folder contains images matching these criteria, they will appear in the GUI!
  5. Next to "Save folder" at the top of the screen, click "Browse..." and select (or create) an empty folder to which you want to save your analyses. After you've selected the folder, you should see a screen that looks like this:

The screen that appears after selecting a save folder

You now have two options:

  • If you would like to use one of the five pipelines outlined at the top of this page with their default options, click the corresponding button on the rightmost column of the screen under the heading Quick Start: automated pipelines. If you choose one of these pipelines, you do not need to select any other options unless you are using the Atlas to brain + sensory maps pipeline - you can skip to step 8 on this page. If you are using the sensory maps pipeline, go to "Sensory map folder" and select a folder containing a subfolder for each input brain image, named after that brain image without the file extension (e.g. 10.png -> folder named 10). Within each subfolder, place three images of functional activity for that specific brain. The peaks of functional activity in each image should ideally represent regions that are consistently activated by a specific stimulus (e.g. whisker stimulation).

  • Alternatively, if you would like to define your own pipeline using custom options, continue to follow the steps below.
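The sensory-map folder naming rule above (one subfolder per brain image, named after the image without its extension) can be sketched as follows. `sensory_map_subfolder` is a hypothetical helper, not part of MesoNet:

```python
import os

def sensory_map_subfolder(image_filename):
    """Expected sensory-map subfolder name for a brain image:
    the filename without its extension (e.g. '10.png' -> '10').
    Hypothetical helper, for illustration only."""
    return os.path.splitext(os.path.basename(image_filename))[0]
```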

  6. For "DLC config folder": if you are using a DeepLabCut model to identify landmarks on the mouse cortex, locate that model's config.yaml file here. Otherwise, the system will use the model at the path shown in the "DLC config folder" box when you start up the MesoNet GUI. NOTE: you will likely have to change the path here to find the config.yaml file for the model you wish to use!

You can now use the arrow buttons at the bottom of the screen, or the left and right arrow keys on your keyboard, to browse through all of the images that MesoNet will analyze.

  7. Now you can configure the settings on the right side of the screen:
  • Select the U-Net model that you want to use to find the edges of the cortex (important if you've trained more than one model!) If you're looking for a model and can't find it here, check the models folder in the MesoNet git repository and ensure that your desired .hdf5 model is there.

  • Select "Save predicted regions as .mat files" if you want to export each brain region as a region of interest (ROI) as a MATLAB (.mat) file. Select this if you have a workflow that involves detecting activity in specific brain regions through MATLAB or Octave (e.g. overlaying an ROI on functional brain imaging to identify activity in a specific brain region).

  • Select "Use U-Net for alignment" if you have a U-Net model selected from the list above and you wish to constrain the aligned atlas to within the borders of the visible cortex. This enables the U-Net approach.

  • Select "Use DeepLabCut for alignment" if you have a DeepLabCut model and you wish to align the atlas and brain based on landmarks defined in this model. This enables the Landmark Estimation (DeepLabCut) approach.

  • Select "Use VoxelMorph for alignment" if you have a VoxelMorph model and wish to align brain regions within the borders of the cortex based on a provided template. This enables the VoxelMorph approach. NOTE: If you wish to use this approach, uncheck "Align atlas to brain" below as VoxelMorph works best when first aligning an input brain image to an atlas, followed by that atlas being deformed to the boundaries of the brain image.

  • Select "Draw olfactory bulbs" if the olfactory bulbs are visible in all images.

  • Select "Align atlas to brain" if you wish to register a standardized atlas to the brain images. Uncheck this option if you are using VoxelMorph, or if you wish to normalize and align the brain images to a stationary standardized atlas. The latter approach may allow the brain regions to be identified in a more consistent manner, but requires transformation back to the native space of the brain for follow-up analyses if you do not want to work in normalized space.

  • Select "Align using sensory map" only if you have images of functional activity in the brain, and a set of coordinates for peaks of functional activity overlaid on a brain atlas that corresponds to these images. This will allow you to use your functional activity data to potentially improve the quality of the brain region predictions. If you have such images, create a folder with one subfolder for each brain image you plan to analyze. For example, if you're analyzing images 0, 1, 2, 3, ... create subfolders 0, 1, 2, 3, ... Within each subfolder, place three images of functional activity for that specific brain. The peaks of functional activity in each image should ideally represent regions that are consistently activated given a specific activity (e.g. whisker stimulation). Make sure to locate the folder containing your sensory map subfolders under "Sensory map folder".

  • Select "Plot DLC landmarks on final image" to plot the landmarks as predicted by DeepLabCut as large circles, and the landmarks on the aligned atlas as small circles. For best results, points of corresponding colour should be as close to each other as possible. Note: if you deselect "Use DeepLabCut for alignment", this option will be overridden and landmarks will not be plotted (because there would be no landmarks to plot).

  • Select "Align based on first brain image only" to only calculate a transformation (brain-to-atlas or atlas-to-brain) based on the first brain image only (as opposed to individually for each brain image). This can save time if all of your brain images are from the same animal and are perfectly aligned. Additionally, if you want to initially align each brain image based on a template (i.e. the first image in your set), but then conduct all other follow-up alignments using each individual image, choose this option.

  • Select "Use old label consistency method (less consistent)" to assign labels to brain regions by brain hemisphere. This method may not assign labels consistently between brain images because it is partly dependent on the positioning of the contours. The method used when this box is not checked requires a brain atlas (in .csv format) in which each brain region is filled with a unique numeric label; it provides very high consistency in the labels of the brain regions between brain images. Note: the old method is used in the VoxelMorph and sensory map alignment approaches because the non-linear transformations involved in those approaches are not compatible with the data format that stores the labelled brain atlas.

  • The remaining nine check-boxes allow you to select the landmarks to be used in the alignment. You can align with as few as two landmarks or as many as nine landmarks; the default (from the included, default DeepLabCut model) is nine landmarks, which are as follows (with stereotaxic coordinates relative to bregma, in mm):

| Position | Definition | Coordinates (mm) |
| --- | --- | --- |
| 1. Left | Anterolateral tip of the left parietal bone | (-3.13, 2.19) |
| 2. Top left | Left frontal pole | (-1.83, 3.41) |
| 3. Bottom left | Posterior tip of the left retrosplenial region | (-0.85, -4.02) |
| 4. Top centre | Cross point between the median line and the line connecting the left and right frontal poles | (0, 3.41) |
| 5. Bregma | Bregma | (0, 0) |
| 6. Lambda | Anterior tip of the interparietal bone | (0, -3.49) |
| 7. Right | Anterolateral tip of the right parietal bone | (3.13, 2.19) |
| 8. Top right | Right frontal pole | (1.83, 3.41) |
| 9. Bottom right | Posterior tip of the right retrosplenial region | (0.85, -4.02) |

For best alignment results, try to select as many landmarks as are present in your data - at minimum, two points in each hemisphere (left and right) and at least two of Top centre, Bregma, and Lambda. If you leave all of the landmarks selected, the ones used for alignment by default will be Left, Top centre, Lambda, and Top right. If you deselect any of these four landmarks, the first three landmarks you selected in the left hemisphere and the last three landmarks you selected in the right hemisphere will be used instead. For example, if you deselect landmarks 1 (Left) and 9 (Bottom right), the landmarks used will be 2, 3, and 4 in the left hemisphere and 8, 7, and 6 in the right hemisphere.
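The fallback rule described above can be expressed as a short sketch (illustrative only; this is not MesoNet's actual implementation). Landmarks are numbered 1-9 as in the table:

```python
def landmarks_for_alignment(selected):
    """Sketch of the landmark-selection fallback. If the four defaults
    (1 Left, 4 Top centre, 6 Lambda, 8 Top right) are all selected, use
    them; otherwise use the first three and last three selected
    landmarks. Illustrative only -- not MesoNet's actual code."""
    selected = sorted(selected)
    defaults = {1, 4, 6, 8}
    if defaults <= set(selected):
        return sorted(defaults)
    # First three selected (left hemisphere side) plus
    # last three selected (right hemisphere side)
    return selected[:3] + selected[-3:]
```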

  • Click on "Open VoxelMorph settings" to define options for the VoxelMorph stage of the workflow, which carries out a second, model-based transformation of local functional regions based on a pre-trained model. The following window will appear (it might be behind the current window):

The screen that appears when you click "Open VoxelMorph settings"

  • Next to "Template file location", select the folder containing a template file to which the brain image will be registered. This will usually be a functional brain image containing functional motifs to which you want to align an input functional image. This image can be in .png, .mat, or .npy format.

  • Only change "Flow file location" if you want to apply transformations based on an existing VoxelMorph deformation field (such as the .npy file that is output alongside each atlas in output_mask). If you define an existing flow file, make sure to check "Use existing transformation" so that the flow file is used.

  • In the box to the right, select the VoxelMorph model that will be used to compute the transformation. Any models you add should be placed in the models/voxelmorph subfolder of the MesoNet git repository; they will then appear in this list.

  • When you've defined your input images, a save folder, and a U-Net model (and optionally a set of sensory maps), return to the main window and click "Predict brain regions using landmarks" to automatically predict brain regions in all of your brain images. After running this step (which may take several minutes depending on your processor speed), the GUI will look something like this:

The screen that appears after running "Predict brain regions using landmarks"

  • The numerical labels on this brain image correspond to the order in which MesoNet identified the brain regions in your image. In general, brain regions in each image which have the same number are considered by MesoNet to be the same brain region. If you selected the "Save predicted regions as .mat files" checkbox, the numbers on those files also correspond to the brain regions you see here.

  • Leave "Predict brain regions directly using pretrained U-Net model" alone unless you have a machine learning model (selected in the white box above) that you've trained to segment your specific brain images. If you do have such a model, click this button to predict the shape and location of each brain region in your brain image based on your model.

  • You can browse through the segmented brain images using the left and right arrow buttons at the bottom of the screen, or the left and right arrow keys on your keyboard.

  • You can also use MesoNet as a quick interface for evaluating animal behaviour using DeepLabCut. First, place the .yaml config file for your DeepLabCut pose estimation model in dlc -> behavior in the mesonet subfolder. Next, select a set of images that you'd like to analyze for a behavior in the Behavior Input folder at the top of the screen; select a folder to which you'd like to save these images using the Behavior Save folder; then click the "Predict animal movements" button on the right. Please note that this feature is experimental.

  8. Congrats, you're done! You can now go to the save folder that you selected and find all of the data files and products of this analysis, including the images output by each step, the landmark predictions, and your .mat files (if you chose to generate them). In the save folder, you'll also find a file called mesonet_test_config.yaml, which records all of the settings you defined for this analysis in the GUI; you can use this config file in the command line method described below to extend your analysis.

Command Line Interface method

If you want to have more control over the parameters of MesoNet's analysis, or simply feel more comfortable working with a bit of IPython, then we also offer a straightforward command line interface.

Quick usage reference:

activate [your DeepLabCut environment, e.g. DLC-GPU]
ipython
import mesonet
input_file = 'path/to/input/folder'
output_file = 'path/to/output/folder'
config_file = mesonet.config_project(input_file, output_file, 'test')
mesonet.predict_regions(config_file)
mesonet.predict_dlc(config_file)
  • If you want to rerun or continue an existing analysis (e.g. continue an analysis you started in the GUI):
import mesonet
config_file = 'path/to/config/file'
mesonet.predict_regions(config_file)
mesonet.predict_dlc(config_file)

Complete guide:

  1. If you haven't already done so, open your favourite command line application (e.g. Terminal on Mac or Linux, or Command Prompt on Windows) and activate the DeepLabCut environment for your computer by typing activate followed by the name of the DeepLabCut environment that you installed (based on the instructions given here). For example, you might type:
activate DLC-GPU
  2. Type ipython to enter the IPython interpreter, then import the MesoNet package by typing import mesonet.
  3. First, define the folder containing your input brain images:
input_file = 'path/to/input/folder'

where path/to/input/folder is the folder containing your input images (make sure they're in the format discussed above in Preparation). If you're on Windows, make sure to add an r before the first single quote (e.g. r'C:\...').
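To see why the r prefix matters on Windows, compare a plain string with a raw string (the paths are illustrative):

```python
# Backslash sequences in a plain Python string are escape codes:
plain = 'C:\temp\new_data'   # '\t' becomes a tab, '\n' becomes a newline
raw = r'C:\temp\new_data'    # the r prefix keeps every backslash literal

# The plain version no longer contains any backslashes at all
assert '\\' not in plain
assert raw == 'C:\\temp\\new_data'
```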

  4. Now, define the folder to which you'll save the output brain images:
output_file = 'path/to/output/folder'
  5. MesoNet's command line interface works using configuration files that define - and allow you to customize - settings for each analysis (see the Config File Guide for all customizable parameters). To generate a config file for your analysis, run:
config_file = mesonet.config_project(input_file, output_file, 'test')

The 'test' argument indicates that this config file will be used to apply an existing model to predict brain regions. (We'll be adding the ability to train a new brain region segmentation model through this method soon!) This step will generate a config file in the output_file directory (i.e. your save directory). You can open this file (mesonet_test_config.yaml) with any text editor; the details of its parameters can be found here.

NOTE: the parameters use_dlc, use_unet, and use_voxelmorph will activate the Landmark Estimation (DeepLabCut) approach, the U-Net approach, and the VoxelMorph approach, respectively, if set to True.
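Since mesonet_test_config.yaml is a flat list of key: value settings, you can inspect flags like use_dlc with a minimal stdlib sketch. In practice, use a real YAML library such as PyYAML; this parser is illustrative only:

```python
def read_flat_yaml(text):
    """Minimal parser for flat 'key: value' YAML lines, enough to
    inspect boolean flags such as use_dlc. Illustrative sketch only --
    use a real YAML library (e.g. PyYAML) in practice."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        if value in ("True", "true"):
            value = True
        elif value in ("False", "false"):
            value = False
        config[key.strip()] = value
    return config
```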

If you didn't type 'config_file =' before the command above, define the path to this config file by running:

config_file = 'path/to/config/file'

where 'path/to/config/file' is the full path to the config file in your save directory.

  6. The analysis itself runs in two steps. First, to generate the masks of the brain's outline to be used in the second stage of the analysis, run:
mesonet.predict_regions(config_file)

After running this step, you may wish to look at your save folder. It will now have a couple of new folders, one of which is called output_mask. This folder contains the masks of the brain's outline.

  7. Lastly, run:
mesonet.predict_dlc(config_file)

to generate the segmented brain regions! These will be saved in the output_overlay folder in your save folder.