This repository has been archived by the owner on Feb 11, 2023. It is now read-only.

rename package (#4)
 * rename package
 * update README
 * update setup
 * update asserts
 * fix dividing warning
 * update CI
Borda committed Mar 14, 2018
1 parent dc030e5 commit f30e8e4
Showing 61 changed files with 501 additions and 331 deletions.
6 changes: 4 additions & 2 deletions .shippable.yml
Original file line number Diff line number Diff line change
Expand Up @@ -45,8 +45,8 @@ script:
- mkdir data && mkdir output && mkdir results
- python setup.py build_ext --inplace

- nosetests -v --with-xunit --xunit-file=$CI_REPORTS/nosetests.xml
- nosetests segmentation -v --exe --with-doctest --with-xunit --with-coverage --cover-package segmentation
# - nosetests -v --with-xunit --xunit-file=$CI_REPORTS/nosetests.xml
- nosetests -v --exe --with-doctest --with-xunit --with-coverage --cover-package imsegm --xunit-file=$CI_REPORTS/nosetests.xml

# ANNOTATION section
- python handling_annotations/run_image_color_quantization.py -imgs "images/drosophila_ovary_slice/segm_rgb/*.png"
Expand Down Expand Up @@ -86,6 +86,8 @@ script:
- python experiments_ovary_detect/run_egg_swap_orientation.py

after_success:
- python setup.py install

- coverage xml -o $COVERAGE_REPORTS/coverage.xml
- codecov -t 80efed4e-ac2b-4fea-a642-0a8b1c82e1c8
- coverage report
5 changes: 3 additions & 2 deletions .travis.yml
Original file line number Diff line number Diff line change
Expand Up @@ -42,8 +42,9 @@ before_script:
- python setup.py build_ext --inplace

script:
# - pytest segmentation -v --doctest-modules
- nosetests segmentation -v --exe --with-doctest --with-xunit --with-coverage --cover-package=segmentation
# - pytest imsegm -v --doctest-modules
- nosetests imsegm -v --exe --with-doctest --with-xunit --with-coverage --cover-package=imsegm
- python setup.py install

after_success:
# - codecov # public repository on Travis CI
Expand Down
2 changes: 1 addition & 1 deletion LICENSE
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
Copyright (c) 2015-2017, Jiri Borovec
Copyright (c) 2015-2018, Jiri Borovec
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
Expand Down
69 changes: 39 additions & 30 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -65,34 +65,43 @@ Borovec J., Kybic J., Sugimoto, A. (2017). **Region growing using superpixels wi
**Configure local environment**

Create your own local environment; for more see the [User Guide](https://pip.pypa.io/en/latest/user_guide.html). The dependencies are listed in requirements.txt and can be installed as
```
```bash
@duda:~$ cd pyImSegm
@duda:~/pyImSegm$ virtualenv env
@duda:~/pyImSegm$ source env/bin/activate
(env)@duda:~/pyImSegm$ pip install -r requirements.txt
(env)@duda:~/pyImSegm$ python ...
```
and terminate it in the end...
```
```bash
(env)@duda:~/pyImSegm$ deactivate
```

<!--
Moreover, we are using the python [GraphCut wrapper](https://github.com/Borda/pyGCO) which requires a separate installation (not yet integrated in PIP)
```
```bash
(env)@duda:~/pyImSegm$ mkdir libs && cd libs
(env)@duda:~/pyImSegm$ git clone https://github.com/Borda/pyGCO.git
(env)@duda:~/pyImSegm$ pip install -r requirements.txt
(env)@duda:~/pyImSegm$ python setup.py install
```
-->

**Compilation**

We have implemented a Cython version of some functions, especially for computing descriptors, which needs to be compiled before use
```
```bash
python setup.py build_ext --inplace
```
If loading the compiled Cython descriptors fails, it automatically falls back to NumPy, which gives the same results but is significantly slower.
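The fallback described above can be sketched roughly like this; note that the compiled module name `features_cython` and the helper below are illustrative assumptions, not the package's actual layout:

```python
import numpy as np

try:
    # hypothetical name of the compiled Cython module
    from imsegm.features_cython import compute_descriptors  # noqa: F401
    USE_CYTHON = True
except ImportError:
    # compiled extension missing -> fall back to pure NumPy
    USE_CYTHON = False


def mean_per_superpixel(img, slic):
    """NumPy fallback (illustrative): mean intensity per superpixel."""
    labels = np.unique(slic)
    return np.array([img[slic == lb].mean() for lb in labels])
```

The NumPy path is slower because it loops over superpixel labels in Python, but it produces the same values as a compiled implementation would.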

**Installation**

The package can be installed from this folder via
```bash
python setup.py install
```

---


Expand All @@ -110,32 +119,32 @@ Short description of our three sets of experiments that together compose single
We introduce some useful tools for working with image annotation and segmentation.

* In case you have some smooth color labeling in your images, you can remove it with the following quantization script.
```
```bash
python handling_annotations/run_image_color_quantization.py \
-imgs "images/drosophila_ovary_slice/segm_rgb/*.png" \
-m position -thr 0.01 --nb_jobs 2
```
* Converting image labels into colour space and the other way around.
```
```bash
python handling_annotations/run_image_convert_label_color.py \
-imgs "images/drosophila_ovary_slice/segm/*.png" \
-out images/drosophila_ovary_slice/segm_rgb
```
* Having an input image and its segmentation, we can use a simple visualisation which overlays the segmentation over the input image.
```
```bash
python handling_annotations/run_overlap_images_segms.py \
-imgs "images/drosophila_ovary_slice/image/*.jpg" \
-segs images/drosophila_ovary_slice/segm \
-out results/overlap_ovary_segment
```
* Inpainting selected labels in segmentation.
```
```bash
python handling_annotations/run_segm_annot_inpaint.py \
-imgs "images/drosophila_ovary_slice/segm/*.png" \
--label 4
```
* Change labels in the input segmentation into another set of labels in a 1:1 scheme.
```
```bash
python handling_annotations/run_segm_annot_relabel.py \
-imgs "images/drosophila_ovary_slice/center_levels/*.png" \
-out results/relabel_center_levels \
Expand All @@ -148,29 +157,29 @@ We introduce some useful tools for work with image annotation and segmentation.
We utilize (un)supervised segmentation according to given training examples or some expectations.

* Evaluate superpixel quality (with given SLIC parameters) against a given segmentation. It helps find the best SLIC configuration.
```
```bash
python experiments_segmentation/run_eval_superpixels.py \
-imgs "images/drosophila_ovary_slice/image/*.jpg" \
-segm "images/drosophila_ovary_slice/annot_eggs/*.png" \
--img_type 2d_gray \
--slic_size 20 --slic_regul 0.25 --slico 0
```
* Perform **Unsupervised** segmentation.
```
```bash
python experiments_segmentation/run_segm_slic_model_graphcut.py \
-list images/langerhans_islets/list_lang-isl_imgs-annot.csv \
-imgs "images/langerhans_islets/image/*.jpg" \
-out results -n langIsl --nb_classes 3 --visual --nb_jobs 2
```
* Perform **Supervised** segmentation with subsequent evaluation.
```
```bash
python experiments_segmentation/run_segm_slic_classif_graphcut.py \
-list images/drosophila_ovary_slice/list_imgs-annot-struct.csv \
-imgs "images/drosophila_ovary_slice/image/*.jpg" \
-out results -n Ovary --img_type 2d_gray --visual --nb_jobs 2
```
* For both experiments you can evaluate the segmentation results.
```
```bash
python experiments_segmentation/run_compute-stat_annot-segm.py \
-annot "images/drosophila_ovary_slice/annot_struct/*.png" \
-segm "results/experiment_segm-supervise_ovary/*.png" \
Expand All @@ -188,49 +197,49 @@ In general, the input is a formatted list (CSV file) of input images and annotat
**Experiment sequence is following:**

1. We can create the annotation completely manually or use the following script, which uses the annotation of individual objects and creates the zones automatically.
```
```bash
python experiments_ovary_centres/run_create_annotation.py
```
1. With zone annotation, we train a classifier for center candidate prediction. The annotation can be a CSV file with annotated centers as points, and the zone of positive examples is set uniformly as the circular neighborhood around these points. Another (preferable) way is to use an annotated image with marked zones for positive, negative and neutral examples.
```
```bash
python experiments_ovary_centres/run_center_candidate_training.py -list none \
-segs "images/drosophila_ovary_slice/segm/*.png" \
-imgs "images/drosophila_ovary_slice/image/*.jpg" \
-centers "images/drosophila_ovary_slice/center_levels/*.png" \
-out results -n ovary
```
1. Having a trained classifier, we perform center prediction composed of two steps: (i) center candidate prediction and (ii) candidate clustering.
```
```bash
python experiments_ovary_centres/run_center_prediction.py -list none \
-segs "images/drosophila_ovary_slice/segm/*.png" \
-imgs "images/drosophila_ovary_slice/image/*.jpg" \
-centers results/detect-centers-train_ovary/classifier_RandForest.pkl \
-out results -n ovary
```
1. Assuming you have an expert annotation, you can compute statistics such as missed eggs.
```
```bash
python experiments_ovary_centres/run_center_evaluation.py
```
1. This is just the clustering step cut out on its own, in case you want to use different parameters.
```
```bash
python experiments_ovary_centres/run_center_clustering.py
```
1. Matching the ellipses to the user annotation.
```
```bash
python experiments_ovary_detect/run_ellipse_annot_match.py \
-info "~/Medical-drosophila/all_ovary_image_info_for_prague.txt" \
-ells "~/Medical-drosophila/RESULTS/3_ellipse_ransac_crit_params/*.csv" \
-out ~/Medical-drosophila/RESULTS
```
1. Cut out eggs by stage and normalise them to the mean size.
```
```bash
python experiments_ovary_detect/run_ellipse_cut_scale.py \
-info ~/Medical-drosophila/RESULTS/info_ovary_images_ellipses.csv \
-imgs "~/Medical-drosophila/RESULTS/0_input_images_png/*.png" \
-out ~/Medical-drosophila/RESULTS/images_cut_ellipse_stages
```
1. Rotate (swap) extracted eggs according to the larger amount of mass.
```
```bash
python experiments_ovary_detect/run_egg_swap_orientation.py
```
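The circular positive zone around annotated center points, used when training the center classifier above, can be sketched as follows; this is an illustrative NumPy snippet under assumed semantics, not the project's actual code:

```python
import numpy as np


def circular_zone_mask(shape, centers, radius):
    """Mark pixels within `radius` of any annotated center as positive.

    shape   -- (height, width) of the output mask
    centers -- iterable of (row, col) center points
    radius  -- radius of the circular positive neighborhood
    """
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for cy, cx in centers:
        mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return mask
```

Pixels inside the mask would serve as positive training examples, those far from any center as negatives, with an optional neutral band in between.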

Expand All @@ -241,7 +250,7 @@ In general, the input is a formatted list (CSV file) of input images and annotat
In case you do not have estimated object centers, you can use [plugins](ij_macros) for landmarks import/export for [Fiji](http://fiji.sc/).

**Note:** install the multi-snake package, which is used in the multi-method segmentation experiment.
```
```bash
cd libs
git clone https://github.com/Borda/morph-snakes.git
cd morph-snakes
Expand All @@ -252,13 +261,13 @@ python setup.py install
**Experiment sequence is following:**

1. Estimating the shape model from a set of training images containing single egg annotation.
```
```bash
python experiments_ovary_detect/run_RG2Sp_estim_shape-models.py \
-annot "~/Medical-drosophila/egg_segmentation/mask_2d_slice_complete_ind_egg/*.png" \
-out data -nb 15
```
1. Run several segmentation techniques on each image.
```
```bash
python experiments_ovary_detect/run_ovary_egg-segmentation.py \
-list images/drosophila_ovary_slice/list_imgs-segm-center-points.csv \
-out output -n ovary_image --nb_jobs 1 \
Expand All @@ -274,18 +283,18 @@ python setup.py install
watershed_morph
```
1. Evaluate your segmentation results against the expert annotation.
```
```bash
python experiments_ovary_detect/run_ovary_segm_evaluation.py --visual
```
1. In the end, cut out individual segmented objects as minimal bounding boxes.
```
```bash
python experiments_ovary_detect/run_cut_segmented_objects.py \
-annot "images/drosophila_ovary_slice/annot_eggs/*.png" \
-img "images/drosophila_ovary_slice/segm/*.png" \
-out results/cut_images --padding 50
```
1. Finally, visualise the segmentation results together with the expert annotation.
```
```bash
python experiments_ovary_detect/run_export_user-annot-segm.py
```
![user-annnot](figures/insitu7545_user-annot-segm.jpg)
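The minimal-bounding-box cut with padding, as performed by `run_cut_segmented_objects.py` above, can be sketched as follows; the function name and exact clipping behaviour are illustrative assumptions:

```python
import numpy as np


def cut_object_bbox(img, mask, padding=0):
    """Crop `img` to the bounding box of `mask`, expanded by `padding`.

    The padded box is clipped to the image borders.
    """
    ys, xs = np.nonzero(mask)
    top = max(ys.min() - padding, 0)
    bottom = min(ys.max() + 1 + padding, img.shape[0])
    left = max(xs.min() - padding, 0)
    right = min(xs.max() + 1 + padding, img.shape[1])
    return img[top:bottom, left:right]
```

Applying this per object label in an annotation image yields one cut-out per segmented object, analogous to the `--padding 50` option used above.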
Expand All @@ -296,6 +305,6 @@ python setup.py install
## References

For complete references see [BibTex](docs/references.bib).
1. Borovec J., Svihlik J., Kybic J., Habart D. (2017). **Supervised and unsupervised segmentation using superpixels, model estimation, and Graph Cut.** SPIE Journal of Electronic Imaging 26(6), 061610, http://doi.org/10.1117/1.JEI.26.6.061610
1. Borovec J., Kybic J., Nava R. (2017) **Detection and Localization of Drosophila Egg Chambers in Microscopy Images.** In: Wang Q., Shi Y., Suk HI., Suzuki K. (eds) Machine Learning in Medical Imaging. MLMI 2017. LNCS, vol 10541. Springer, Cham. http://doi.org/10.1007/978-3-319-67389-9_3
1. Borovec J., Kybic J., Sugimoto, A. (2017). **Region growing using superpixels with learned shape prior.** SPIE Journal of Electronic Imaging 26(6), 061611, http://doi.org/10.1117/1.JEI.26.6.061611
1. Borovec J., Svihlik J., Kybic J., Habart D. (2017). **Supervised and unsupervised segmentation using superpixels, model estimation, and Graph Cut.** SPIE Journal of Electronic Imaging 26(6), 061610, [DOI: 10.1117/1.JEI.26.6.061610](http://doi.org/10.1117/1.JEI.26.6.061610).
1. Borovec J., Kybic J., Nava R. (2017) **Detection and Localization of Drosophila Egg Chambers in Microscopy Images.** In: Wang Q., Shi Y., Suk HI., Suzuki K. (eds) Machine Learning in Medical Imaging. MLMI 2017. LNCS, vol 10541. Springer, Cham. [DOI: 10.1007/978-3-319-67389-9_3](http://doi.org/10.1007/978-3-319-67389-9_3).
1. Borovec J., Kybic J., Sugimoto, A. (2017). **Region growing using superpixels with learned shape prior.** SPIE Journal of Electronic Imaging 26(6), 061611, [DOI: 10.1117/1.JEI.26.6.061611](http://doi.org/10.1117/1.JEI.26.6.061611).
2 changes: 1 addition & 1 deletion circle.yml
Original file line number Diff line number Diff line change
Expand Up @@ -29,7 +29,7 @@ test:
- mkdir -p $CIRCLE_TEST_REPORTS

override:
- unset DISPLAY && coverage run --source segmentation -m py.test segmentation -v --doctest-modules --junitxml=$CIRCLE_TEST_REPORTS/pytest_junit.xml
- unset DISPLAY && coverage run --source imsegm -m py.test imsegm -v --doctest-modules --junitxml=$CIRCLE_TEST_REPORTS/pytest_junit.xml

post:
- coverage report && coverage xml -o $CIRCLE_TEST_REPORTS/coverage.xml
Expand Down
6 changes: 3 additions & 3 deletions experiments_ovary_centres/gui_annot_center_correction.py
Original file line number Diff line number Diff line change
Expand Up @@ -45,8 +45,8 @@
from matplotlib.backends.backend_gtkagg import FigureCanvasGTKAgg

sys.path += [os.path.abspath('.'), os.path.abspath('..')] # Add path to root
import segmentation.utils.data_io as tl_io
import segmentation.utils.drawing as tl_visu
import imsegm.utils.data_io as tl_io
import imsegm.utils.drawing as tl_visu

PATH_BASE = tl_io.update_path(os.path.join('images', 'drosophila_ovary_slice'))
PATH_IMAGES = os.path.join(PATH_BASE, 'image', '*.jpg')
Expand Down Expand Up @@ -90,7 +90,7 @@ def arg_parse_params():
if params[k] is None: continue
params[k] = os.path.abspath(os.path.expanduser(params[k]))
p = os.path.dirname(params[k]) if '*' in params[k] else params[k]
assert os.path.exists(p), '%s' % p
assert os.path.exists(p), 'missing: %s' % p
logging.info('ARG PARAMETERS: \n %s', repr(params))
return params

Expand Down
18 changes: 9 additions & 9 deletions experiments_ovary_centres/run_center_candidate_training.py
Original file line number Diff line number Diff line change
Expand Up @@ -44,13 +44,13 @@
import matplotlib.pyplot as plt

sys.path += [os.path.abspath('.'), os.path.abspath('..')] # Add path to root
import segmentation.utils.data_io as tl_io
import segmentation.utils.experiments as tl_expt
import segmentation.utils.drawing as tl_visu
import segmentation.superpixels as seg_spx
import segmentation.descriptors as seg_fts
import segmentation.classification as seg_clf
import segmentation.labeling as seg_lbs
import imsegm.utils.data_io as tl_io
import imsegm.utils.experiments as tl_expt
import imsegm.utils.drawing as tl_visu
import imsegm.superpixels as seg_spx
import imsegm.descriptors as seg_fts
import imsegm.classification as seg_clf
import imsegm.labeling as seg_lbs

# whether to skip loading the triplets CSV from a previous run
FORCE_RELOAD = False
Expand Down Expand Up @@ -353,7 +353,7 @@ def estim_points_compute_features(name, img, segm, params):
"""
# superpixels on image
assert img.shape[:2] == segm.shape[:2], \
'shapes: %s : %s' % (repr(img.shape), repr(segm.shape))
'not matching shapes: %s : %s' % (repr(img.shape), repr(segm.shape))
slic = seg_spx.segment_slic_img2d(img, params['slic_size'],
params['slic_regul'])
slic_centers = seg_spx.superpixel_centers(slic)
Expand Down Expand Up @@ -437,7 +437,7 @@ def label_close_points(centers, points, params):
labels = [-1] * len(points)
assert len(points) == len(labels), \
'not equal lenghts of points (%i) and labels (%i)' \
% (len(points),len(labels))
% (len(points), len(labels))
return labels


Expand Down
6 changes: 3 additions & 3 deletions experiments_ovary_centres/run_center_clustering.py
Original file line number Diff line number Diff line change
Expand Up @@ -27,9 +27,9 @@
import matplotlib.pylab as plt

sys.path += [os.path.abspath('.'), os.path.abspath('..')] # Add path to root
import segmentation.utils.data_io as tl_io
import segmentation.utils.experiments as tl_expt
import segmentation.utils.drawing as tl_visu
import imsegm.utils.data_io as tl_io
import imsegm.utils.experiments as tl_expt
import imsegm.utils.drawing as tl_visu
import run_center_candidate_training as run_train
# import run_center_prediction as run_pred

Expand Down
6 changes: 3 additions & 3 deletions experiments_ovary_centres/run_center_evaluation.py
Original file line number Diff line number Diff line change
Expand Up @@ -35,9 +35,9 @@
import matplotlib.pyplot as plt

sys.path += [os.path.abspath('.'), os.path.abspath('..')] # Add path to root
import segmentation.utils.experiments as tl_expt
import segmentation.utils.drawing as tl_visu
import segmentation.annotation as seg_annot
import imsegm.utils.experiments as tl_expt
import imsegm.utils.drawing as tl_visu
import imsegm.annotation as seg_annot
import run_center_candidate_training as run_train
import run_center_prediction as run_detect
import run_center_clustering as run_clust
Expand Down
6 changes: 3 additions & 3 deletions experiments_ovary_centres/run_center_prediction.py
Original file line number Diff line number Diff line change
Expand Up @@ -24,9 +24,9 @@
import pandas as pd

sys.path += [os.path.abspath('.'), os.path.abspath('..')] # Add path to root
import segmentation.utils.experiments as tl_expt
import segmentation.utils.data_io as tl_io
import segmentation.classification as seg_clf
import imsegm.utils.experiments as tl_expt
import imsegm.utils.data_io as tl_io
import imsegm.classification as seg_clf
import run_center_candidate_training as run_train
import run_center_clustering as run_clust

Expand Down
2 changes: 1 addition & 1 deletion experiments_ovary_centres/run_create_annotation.py
Original file line number Diff line number Diff line change
Expand Up @@ -95,7 +95,7 @@ def draw_circle(pos_center, radius, img_shape):


def segm_set_center_levels(name, seg_labels, path_out, levels=DISTANCE_LEVELS):
""" set segmentation levels according distance inside object segmentation
""" set segmentation levels according distance inside object imsegm
:param str name: image name
:param ndarray seg_labels:
Expand Down
