Refining masks with micro-sam. #383
Great !!! thanks a lot :) I will try to make it "napari compatible" using viewer.layers as well :) Thanks !!!
my attempt on a 2d image:

```python
import numpy as np
from skimage.measure import regionprops
from micro_sam.util import get_sam_model
from micro_sam.inference import batched_inference

# Get the labels from the napari Labels layer and convert them to a numpy array.
initial_segmentation = viewer.layers[0]
labeled_image = np.array(initial_segmentation.data)

# Get the image data as a numpy array.
image = np.array(viewer.layers[1].data)

# Derive one point prompt per object from the mask centroids.
props = regionprops(labeled_image)
points = np.array([prop.centroid for prop in props])
point_labels = np.ones(len(points), dtype="int")

predictor = get_sam_model(model_type="vit_b_lm")  # <- you can control which model is used with the model_type argument. See the function signature of get_sam_model for details.
refined_segmentation = batched_inference(
    predictor, image, batch_size=32,
    points=points, point_labels=point_labels,
)
```

BUT: not sure how to fix the dimension problem ...? thanks
It turns out an extra dimension needs to be added to the point inputs of `batched_inference`:

```python
# We need to add an extra dimension to provide the correct input for batched_inference.
points = np.expand_dims(points, 1)
point_labels = np.expand_dims(point_labels, 1)
```

(I have updated the pseudo-code on top too, so you can see the full example there.)
Thanks a lot! There is a `shape` argument missing in the function (`mask_data_to_segmentation`); I will work on fixing it. Thanks a lot!
Ok, that seems to work (tested roughly!). I also fixed the missing `shape` attribute in the `inference.py` file: added `shape=image_shape`.

```python
import numpy as np
from skimage.measure import regionprops
from micro_sam.util import get_sam_model
from micro_sam.inference import batched_inference

# Get the labels from the napari Labels layer and convert them to a numpy array.
initial_segmentation = viewer.layers[0]
labeled_image = np.array(initial_segmentation.data)

# Get the image data as a numpy array.
image = np.array(viewer.layers[1].data)

# Derive one point prompt per object from the mask centroids.
props = regionprops(labeled_image)
points = np.array([prop.centroid for prop in props])
point_labels = np.ones(len(points), dtype="int")

# We need to add an extra dimension to provide the correct input for batched_inference.
points = np.expand_dims(points, 1)
point_labels = np.expand_dims(point_labels, 1)

predictor = get_sam_model(model_type="vit_b_lm")  # <- you can control which model is used with the model_type argument. See the function signature of get_sam_model for details.
refined_segmentation = batched_inference(
    predictor, image, batch_size=32,
    points=points, point_labels=point_labels,
)

# Add the refined segmentation labels as a new layer to the viewer.
viewer.add_labels(refined_segmentation, name='Refined Segmentation Labels')

# Optionally, you can also set the colormap and opacity for the new layer.
viewer.layers[-1].colormap = 'viridis'
```

will test more later on, Thanks a lot !! :)
Ok, great! Let me know how the quality looks. If there are any issues this can probably be improved by adjusting some parameters.
This should not be necessary if you're working off the `dev` branch. But that is only a minor thing, just be aware that this might change soon on `dev`.
The results are slightly different, hence I think I have to adjust the parameters as you suggested, but it works in principle :D. I think the points per side (default is 32) gives better results using 100 (more granular), but any customizable parameters will be useful :). I can share the image / pre-sam output if that helps? The main idea is to refine the segmentation for the elongated cells (often the sides are not well segmented), but also to refine doublets and the general fine segmentation. In addition, it would be really cool to be able to add points (automatically) for any cells missing from the pre-segmentation. That way it would do 2 things: refine the existing segmentation and add the missing cells (1 stone, 2 birds..!). I am still on the master branch, but will switch to the dev one. What about 3d (my main interest)?
That's great!
Yes, that would be quite helpful!
Do you have a good heuristic for how to add points automatically?
I will follow up on that next week. (I am on a retreat this week, so my answers are a bit slower, but I will be working on this next week anyways and share some code.)
Hi, I sent you an invite to share the files to your email. I included the original image, my custom 2d model masks, and the refinements from micro-sam. As mentioned, the main improvement could be with elongated nuclei; it would be great to refine these :). For adding points, I was part of the last HTAN jamboree (https://github.com/NCI-HTAN-Jamborees/Improving-cell-segmentation-for-spatial-omics/tree/main), where we worked on similar approaches. I know there are a few papers working on this idea, using specialized models as prompts (as we are doing now), but adding automatic grid points on top in case the specialized model missed some nuclei (the grid worked better with 100 points, if I remember correctly). I will dig into finding these papers later on. No probs for the delay; the 3d is the most time consuming, hence any help will be appreciated :) thanks.
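The grid-on-top idea described above can be sketched in plain numpy: generate a regular grid of candidate points and keep only those that do not fall on an existing mask, so they can be added as extra prompts for missed cells. The helper name and sampling scheme below are my own illustration, not micro_sam API:

```python
import numpy as np

def grid_points_outside_masks(segmentation, points_per_side=32):
    """Hypothetical helper: regular grid of candidate (y, x) points, keeping
    only those that fall on the background of the initial segmentation."""
    h, w = segmentation.shape
    ys = np.linspace(0, h - 1, points_per_side).astype(int)
    xs = np.linspace(0, w - 1, points_per_side).astype(int)
    grid = np.array([(y, x) for y in ys for x in xs])
    on_background = segmentation[grid[:, 0], grid[:, 1]] == 0
    return grid[on_background]

# One existing cell; the surviving grid points could be appended to the
# centroid-derived prompts to catch cells the pre-segmentation missed.
seg = np.zeros((100, 100), dtype="int32")
seg[40:60, 40:60] = 1
extra_points = grid_points_outside_masks(seg, points_per_side=10)
```

The surviving points would still need the same `np.expand_dims(..., 1)` treatment as the centroid prompts before being passed to `batched_inference`.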
Thanks for sending the data. Unfortunately the service you used for sharing seems to require a client for downloading that is not available for linux (and I use a linux machine). Could you share it with a different service that enables direct download via the browser?
I sent a google drive link, does this work?
Yes that worked! I have downloaded the data and will take a closer look next week. |
Hi @Nal44 , |
Hi,
There are different ways for refining existing masks with `micro_sam`. The easiest option would be to derive point prompts from the centers of the masks and then prompt the model with these points. The function `batched_inference` can be used for this.
Here is some (non-tested!) code for this, using skimage to derive the point prompts.
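The referenced code did not survive in this thread; a minimal, untested sketch of the centroid-prompt idea might look like the following. It uses toy data in place of a real segmentation, and the micro_sam calls are only shown as comments since they require a model download:

```python
import numpy as np
from skimage.measure import regionprops

# Toy initial segmentation: two labeled squares, standing in for real mask data.
initial_segmentation = np.zeros((64, 64), dtype="int32")
initial_segmentation[5:15, 5:15] = 1
initial_segmentation[30:45, 30:45] = 2

# Derive one positive point prompt per object from the mask centroids.
props = regionprops(initial_segmentation)
points = np.array([prop.centroid for prop in props])
point_labels = np.ones(len(points), dtype="int")

# batched_inference expects an extra per-object dimension on the prompts.
points = np.expand_dims(points, 1)
point_labels = np.expand_dims(point_labels, 1)

# The refinement itself requires micro_sam and a model download, so it is
# only sketched here (argument names as used elsewhere in this thread):
# from micro_sam.util import get_sam_model
# from micro_sam.inference import batched_inference
# predictor = get_sam_model(model_type="vit_b_lm")
# refined = batched_inference(predictor, image, batch_size=32,
#                             points=points, point_labels=point_labels)
```

After this, `points` has shape `(n_objects, 1, 2)` and `point_labels` has shape `(n_objects, 1)`, i.e. one point per object.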
Another possible strategy is to derive bounding boxes from the segmented objects and use these as prompts instead.
This could be done by passing the boxes argument.
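Deriving the boxes could look like the sketch below, again on toy data. Note that skimage returns boxes as `(min_row, min_col, max_row, max_col)`; which coordinate convention `batched_inference` expects for its `boxes` argument should be checked against the micro_sam documentation before use:

```python
import numpy as np
from skimage.measure import regionprops

# Toy segmentation with two labeled objects.
seg = np.zeros((64, 64), dtype="int32")
seg[5:15, 5:15] = 1
seg[30:45, 20:50] = 2

# One bounding box per object, in skimage's
# (min_row, min_col, max_row, max_col) convention.
boxes = np.array([prop.bbox for prop in regionprops(seg)])

# Sketch of the call (requires micro_sam, so only shown as a comment):
# refined = batched_inference(predictor, image, batch_size=32, boxes=boxes)
```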
Note that this code will only work for 2D. It is possible to extend it to 3D, but I would suggest starting in 2D first; once this is working well I can give hints for how to extend it to 3D.
cc @Nal44