
issues about the code #1

Open · wants to merge 251 commits into base: master

Conversation

bowenroom

Hi! Thanks for releasing the code. I re-implemented it and found that the results are not as good as those reported in the paper; there is a gap between the two, especially for the car class. I followed the steps the README points out. Can you release the data used in the experiment? (The data on Baidu Yun are the un-sliced ones; I don't think they are the data used in the experiment.)

RockeyCoss and others added 30 commits January 11, 2022 12:27
* [Feature] add auto resume

* Update mmseg/utils/find_latest_checkpoint.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* Update mmseg/utils/find_latest_checkpoint.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* modify docstring

* Update mmseg/utils/find_latest_checkpoint.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* add copyright

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
* Fix typo in usage example

* original mosaic code in mmdet

* Adjust mosaic to the semantic segmentation

* Remove bbox test in test_mosaic

* Add unittests

* Fix resize mode for seg_fields

* Fix repr error

* modify Mosaic docs

* modify from Mosaic to RandomMosaic

* Add docstring

* modify Mosaic docstring

* [Docs] Add a blank line before Returns:

* add blank lines

Co-authored-by: MeowZheng <meowzheng@outlook.com>
* Fix typo in usage example

* original MultiImageMixDataset code in mmdet

* Add MultiImageMixDataset unittests in test_dataset_wrapper

* fix lint error

* fix value name ann_file to ann_dir

* modify retrieve_data_cfg (#1)

* remove dynamic_scale & add palette

* modify retrieve_data_cfg method

* modify retrieve_data_cfg func

* fix error

* improve the unittests coverage

* fix unittests error

* Dataset (#2)

* add cfg-options

* Add unittest in test_build_dataset

* add blank line

* add blank line

* add a blank line

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

Co-authored-by: Younghoon-Lee <72462227+Younghoon-Lee@users.noreply.github.com>
Co-authored-by: MeowZheng <meowzheng@outlook.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
* [Feature] add log collector

* Update .dev/log_collector/readme.md

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* Update .dev/log_collector/example_config.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* fix typo and so on

* modify readme

* fix some bugs and revise the readme.md

* more elegant

* Update .dev/log_collector/readme.md

Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
* fix stdc1 download link

* fix stdc1 download link
* Update README.md

Update README to add OpenMMLab website and platform link

* Update README_zh-CN.md

Update README_zh-CN to add website and platform link in chinese
* add isprs potsdam dataset

* add isprs dataset configs

* fix lint error

* fix potsdam conversion bug

* fix error in potsdam class

* fix error in potsdam class

* add vaihingen dataset

* add vaihingen dataset

* add vaihingen dataset

* fix some description errors.

* fix some description errors.

* fix some description errors.

* upload models & logs of Potsdam

* remove vaihingen and add unit test

* add chinese readme

* add pseudodataset

* use mmcv and add class_names

* use f-string

* add new dataset unittest

* add docstring and remove global variables args

* fix metafile error in PSPNet

* fix pretrained value

* Add dataset info

* fix typo

Co-authored-by: MengzhangLI <mcmong@pku.edu.cn>
…s default work-dir (#1126)

* [Feature] benchmark can add work_dir and repeat times

* change the parameter's name

* change the name of the log file

* add skp road

* add default work dir

* make it optional

* Update tools/benchmark.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* Update tools/benchmark.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* fix typo

* modify json name

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
* add cocostuff in class_names

* add more class names
* Fix typo in usage example

* original MultiImageMixDataset code in mmdet

* Add MultiImageMixDataset unittests in test_dataset_wrapper

* fix lint error

* fix value name ann_file to ann_dir

* modify retrieve_data_cfg (#1)

* remove dynamic_scale & add palette

* modify retrieve_data_cfg method

* modify retrieve_data_cfg func

* fix error

* improve the unittests coverage

* fix unittests error

* Dataset (#2)

* add cfg-options

* Add unittest in test_build_dataset

* add blank line

* add blank line

* add a blank line

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* [Fix] Add MultiImageMixDataset unittests

Co-authored-by: Younghoon-Lee <72462227+Younghoon-Lee@users.noreply.github.com>
Co-authored-by: MeowZheng <meowzheng@outlook.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
* Add Vaihingen

* upload models&logs of vaihingen

* fix unit test

* fix dataset pipeline

* fix unit test coverage

* fix vaihingen docstring
* add vaihingen in readme

* add vaihingen in readme

* add vaihingen in readme
* [Docs] Add MultiImageMixDataset tutorial

* modify to randommosaic

* fix markdown
)

* fix README.md in configs

* fix README.md in configs

* modify [ALGORITHM] to [BACKBONE] in backbone config README.md
* segmenter: add model

* update

* readme: update

* config: update

* segmenter: update readme

* segmenter: update

* segmenter: update

* segmenter: update

* configs: set checkpoint path to pretrain folder

* segmenter: modify vit-s/lin, remove data config

* readme: update

* configs: transfer from _base_ to segmenter

* configs: add 8x1 suffix

* configs: remove redundant lines

* configs: cleanup

* first attempt

* swipe CI error

* Update mmseg/models/decode_heads/__init__.py

Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>

* segmenter_linear: use fcn backbone

* segmenter_mask: update

* models: add segmenter vit

* decoders: yapf+remove unused imports

* apply precommit

* segmenter/linear_head: fix

* segmenter/linear_header: fix

* segmenter: fix mask transformer

* fix error

* segmenter/mask_head: use trunc_normal init

* refactor segmenter head

* Fetch upstream (#1)

* [Feature] Change options to cfg-option (#1129)

* [Feature] Change option to cfg-option

* add expire date and fix the docs

* modify docstring

* [Fix] Add <!-- [ABSTRACT] --> in metafile #1127

* [Fix] Fix correct num_classes of HRNet in LoveDA dataset #1136

* Bump to v0.20.1 (#1138)

* bump version 0.20.1

* bump version 0.20.1

* [Fix] revise --option to --options #1140

Co-authored-by: Rockey <41846794+RockeyCoss@users.noreply.github.com>
Co-authored-by: MengzhangLI <mcmong@pku.edu.cn>

* decode_head: switch from linear to fcn

* fix init list formatting

* configs: remove variants, keep only vit-s on ade

* align inference metric of vit-s-mask

* configs: add vit t/b/l

* Update mmseg/models/decode_heads/segmenter_mask_head.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* Update mmseg/models/decode_heads/segmenter_mask_head.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* Update mmseg/models/decode_heads/segmenter_mask_head.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* Update mmseg/models/decode_heads/segmenter_mask_head.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* Update mmseg/models/decode_heads/segmenter_mask_head.py

Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>

* model_converters: use torch instead of einops

* setup: remove einops

* segmenter_mask: fix missing imports

* add necessary imported init function

* segmenter/seg-l: set resolution to 640

* segmenter/seg-l: fix test size

* fix vitjax2mmseg

* add README and unittest

* fix unittest

* add docstring

* refactor config and add pretrained link

* fix typo

* add paper name in readme

* change segmenter config names

* fix typo in readme

* fix typos in readme

* fix segmenter typo

* fix segmenter typo

* delete redundant comma in config files

* delete redundant comma in config files

* fix convert script

* update latest master version

Co-authored-by: MengzhangLI <mcmong@pku.edu.cn>
Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
Co-authored-by: Rockey <41846794+RockeyCoss@users.noreply.github.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
* Fix bug in non-distributed training

* Fix bug in non-distributed testing

* delete uncomment lines

* add args.gpus
* [Enhance] New-style CPU training and inference.

* assert mmcv version

* SyncBN to BN in training and testing

* SyncBN to BN in training and testing

* upload untracked files to this branch

* delete gpu_ids

* fix bugs

* assert args.gpu_id in train.py

* use cfg.gpu_ids = [args.gpu_id]

* use cfg.gpu_ids = [args.gpu_id]

* fix typo

* fix typo

* fix typos
* change version to v0.21.0

* change version to v0.21.0

* change version to v0.21.0

* change version to v0.21.0
1. Fix img path typos in `useful_tools.md`, `zh_cn/model_zoo.md`, and `zh_cn/train.md`
2. Add missing content in `zh_cn/useful_tools.md` to match `en/useful_tools.md`
* [Improve] Use MMCV load_state_dict func in ViT/Swin

* use CheckpointLoader instead
* [Improve] Add exception for PointRend for support CPU-only usage

* fixed linting
* Bump v0.21.1

* add improvements in changelog

* add improvements in changelog

* fix cn readme

* change changelog
lzyhha and others added 15 commits February 15, 2023 21:12
## Motivation

We are from NVIDIA and we have developed a simplified and
inference-efficient transformer for dense prediction tasks. The method
is based on SegFormer with hardware-friendly design choices, resulting
in better accuracy and over a 2x reduction in inference latency compared
to the baseline. We believe this model would be of particular interest
to those who want to deploy an efficient vision transformer in
production, and it is easily adaptable to other tasks. Therefore, we
would like to contribute our method to mmsegmentation in order to
benefit a larger audience.

The paper was accepted to the [Transformer for Vision
workshop](https://sites.google.com/view/t4v-cvpr22/papers?authuser=0)
at CVPR 2022. Some resource links:
Paper:
[https://arxiv.org/pdf/2204.13791.pdf](https://arxiv.org/pdf/2204.13791.pdf)
(Table 3 shows the semantic segmentation results)
Code:
[https://github.com/NVIDIA/DL4AGX/tree/master/DEST](https://github.com/NVIDIA/DL4AGX/tree/master/DEST)
A webinar on its application:
[https://www.nvidia.com/en-us/on-demand/session/other2022-drivetraining/](https://www.nvidia.com/en-us/on-demand/session/other2022-drivetraining/)

## Modification

Add the backbone (smit.py) and head (dest_head.py) of DEST

## BC-breaking (Optional)

N/A

## Use cases (Optional)

N/A

---------

Co-authored-by: MeowZheng <meowzheng@outlook.com>
Just fixes a small typo in the example.
## Motivation

Support SegNeXt.

Due to too many commits and changed files accumulated while the work was
in progress for a long time (perhaps this could be resolved by `git
merge` or `git rebase`), this PR is created only as a backup of the old PR
#2247

Co-authored-by: MeowZheng <meowzheng@outlook.com>
Co-authored-by: Miao Zheng <76149310+MeowZheng@users.noreply.github.com>
## Motivation

Transfer the keys of each `mscan_x.pth` pretrained model of SegNeXt, and
upload them to the website.

The reason for transferring the keys is that we modified the original repo's
[`.dwconv.dwconv.xxx`](https://github.com/Visual-Attention-Network/SegNeXt/blob/main/mmseg/models/backbones/mscan.py#L21)
to
[`.dwconv.xxx`](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/backbones/mscan.py#L43).
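The key transfer described above amounts to a string replacement over the checkpoint's state dict. A minimal sketch, assuming the keys only differ in the duplicated `.dwconv.` segment (this is not the actual conversion script, and the paths in the usage comment are illustrative):

```python
def convert_mscan_keys(state_dict):
    """Rename SegNeXt checkpoint keys from the original repo's
    '.dwconv.dwconv.' naming to mmseg's '.dwconv.' naming."""
    return {
        key.replace('.dwconv.dwconv.', '.dwconv.'): value
        for key, value in state_dict.items()
    }

# Typical usage (illustrative file names; requires torch):
#   ckpt = torch.load('mscan_t.pth', map_location='cpu')
#   ckpt['state_dict'] = convert_mscan_keys(ckpt['state_dict'])
#   torch.save(ckpt, 'mscan_t_converted.pth')
```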
Use the word "library" instead of the word "toolbox".
## Motivation

Added Ascend (NPU) device support in mmseg.

## Modification

The main modifications are as follows: we added NPU device support in
both the DDP scenario and the DP scenario when using an NPU.

## BC-breaking (Optional)

None

## Use cases (Optional)

We tested
[fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes.py](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/unet/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes.py).
…#2730)

Note that this PR is a modified version of the withdrawn PR
#1748

## Motivation

In recent years, panoptic segmentation has become more of a focus in
research. Weber et al.
[[Link]](http://www.cvlibs.net/publications/Weber2021NEURIPSDATA.pdf)
have published a quite nice dataset, which is in the same style as
Cityscapes, but for KITTI sequences. Since Cityscapes and KITTI-STEP
share the same classes and also a comparable domain (dashcam view),
interesting investigations, e.g. about relations between the domains,
can be done.

Note that KITTI-STEP provides panoptic segmentation annotations, which
are out of scope for mmsegmentation.

## Modification

Mostly, I added the new dataset and the dataset preparation file. To
simplify first use of the new dataset, I also added configs for the
dataset, SegFormer, and DeepLabV3+.

## BC-breaking (Optional)

No BC-breaking

## Use cases (Optional)

Researchers want to test their new methods, e.g. for interpretable AI in
the context of semantic segmentation. They want to show that their
method is reproducible on comparable datasets; thus, they can compare
Cityscapes and KITTI-STEP.

---------

Co-authored-by: CSH <40987381+csatsurnh@users.noreply.github.com>
Co-authored-by: csatsurnh <cshan1995@126.com>
Co-authored-by: 谢昕辰 <xiexinch@outlook.com>

## Motivation

The focal Tversky loss was proposed in https://arxiv.org/abs/1810.07842.
It has nearly 600 citations and has been shown to be extremely useful
for highly imbalanced (medical) datasets. To add support for the focal
Tversky loss, only a few lines of changes are needed in the Tversky loss.

## Modification

Add `gamma` as (optional) argument in the constructor of `TverskyLoss`.
This parameter is then passed to `tversky_loss` to compute the focal
Tversky loss.
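As a rough, self-contained illustration of the change described above (not the actual mmseg code, which operates on prediction/target tensors), the focal Tversky loss can be sketched from TP/FP/FN counts; the defaults `alpha=0.7`, `beta=0.3`, and `gamma=4/3` follow the paper's recommendations:

```python
def tversky_index(tp, fp, fn, alpha=0.7, beta=0.3, smooth=1.0):
    # Tversky index: TI = TP / (TP + alpha*FN + beta*FP).
    # alpha > beta penalizes false negatives more (the paper's setting);
    # smooth avoids division by zero.
    return (tp + smooth) / (tp + alpha * fn + beta * fp + smooth)

def focal_tversky_loss(tp, fp, fn, alpha=0.7, beta=0.3, gamma=4.0 / 3.0,
                       smooth=1.0):
    # Plain Tversky loss is (1 - TI); the paper's focal variant raises it
    # to the power 1/gamma, which emphasizes hard examples when gamma > 1.
    ti = tversky_index(tp, fp, fn, alpha, beta, smooth)
    return (1.0 - ti) ** (1.0 / gamma)
```

Since `gamma = 1` recovers the plain Tversky loss, exposing `gamma` as a single optional constructor argument keeps existing configs backward compatible.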


Reopening of previous
[PR](#2783).
@twsha

twsha commented Nov 13, 2023

How can I visualize and save results on the test set? When I run the visualization command, I get the following error:
[screenshot]

h1063135843 pushed a commit that referenced this pull request Nov 17, 2023
h1063135843 pushed a commit that referenced this pull request Nov 17, 2023
h1063135843 pushed a commit that referenced this pull request Nov 17, 2023
@h1063135843
Owner

You can try running `pip install -e .`. In your screenshot, you are using the original mmseg, which doesn't support four-channel images.

@twsha

twsha commented Nov 20, 2023

Thanks, I'll try that. I used the command `python tools\test.py configs\edft\segformer_mit_fuse-b0_256x256_80k_vai.py mit_fuse_b0.pth --eval mIoU --show`, and then an error about the number of channels was reported.

@twsha

twsha commented Nov 20, 2023

I seem to have already run this command at first, but it still didn't work.

@twsha

twsha commented Nov 20, 2023

[screenshot]
And now it raises this error.

@twsha

twsha commented Nov 20, 2023

How do I enter the mmsegmentation package in the conda environment?
[screenshots]
Or should I run the `pip install -e .` command directly in the created virtual environment? Do you know how to solve this? Thanks.

@dabendanjjbang

Hello, I directly downloaded your code and dataset and ran them. I found that the error did not point to pretrain/mit_b0.pth, so I went to SegFormer and downloaded mit_b0.pth. Without changing any parameters, I ran `python tools\train.py configs\edft\segformer_mit_fuse_b0_256x256_80k_vai.py`; on both a 1050 Ti and a 3090 the results are just over 71. Is it a problem with the pre-trained model or something else? Please advise.
[screenshot]

@twsha

twsha commented Nov 22, 2023

Hi, can you give me your contact information? I want to communicate with you.

@dabendanjjbang

+v cx3245299704

@twsha

twsha commented Nov 27, 2023

Hi, I have cloned EDFT again, but when I run the command to show results, it still reports this error. Can you tell me how to solve it? Thanks!
[screenshot]

@h1063135843
Owner

Can you try the "merge to master" branch? You should make sure the official mmseg repo runs well.

@twsha

twsha commented Dec 7, 2023

Sorry, I would like to ask you again. Using the latest branch, I successfully produced the segmentation result image, but the saved image is a mask image. Do you know where the configuration needs to be changed? Thank you.
[screenshot]
