forked from kubeedge/ianvs
Merge pull request kubeedge#85 from qxygxt/main
OSPP: Implementation of a Class Incremental Learning Algorithm Evaluation System based on Ianvs
Showing 55 changed files with 5,386 additions and 17 deletions.
270 changes: 270 additions & 0 deletions
...n of a Class Incremental Learning Algorithm Evaluation System based on Ianvs.md
Large diffs are not rendered by default.
Binary file added: BIN +2.08 MB docs/proposals/algorithms/lifelong-learning/images/OSPP_MDIL-SS_1.png
Binary file added: BIN +533 KB docs/proposals/algorithms/lifelong-learning/images/OSPP_MDIL-SS_11.png
Binary file added: BIN +2.24 MB docs/proposals/algorithms/lifelong-learning/images/OSPP_MDIL-SS_2.png
Binary file added: BIN +1010 KB docs/proposals/algorithms/lifelong-learning/images/OSPP_MDIL-SS_3.png
Binary file added: BIN +9.71 KB docs/proposals/algorithms/lifelong-learning/images/OSPP_MDIL-SS_4.png
Binary file added: BIN +11.8 KB docs/proposals/algorithms/lifelong-learning/images/OSPP_MDIL-SS_5.png
112 changes: 112 additions & 0 deletions
examples/class_increment_semantic_segmentation/lifelong_learning_bench/README.md
@@ -0,0 +1,112 @@
# Quick Start about Class Incremental Semantic Segmentation

Welcome to Ianvs! Ianvs aims to test the performance of distributed synergy AI solutions following recognized standards, in order to facilitate more efficient and effective development. This semantic segmentation quick start guides you through testing your class incremental algorithm on Ianvs. Manual procedures are reduced to a few steps so that you can build and start your distributed synergy AI solution development within minutes.

Before using Ianvs, you might want to have the device ready:
- One machine is all you need, i.e., a laptop or a virtual machine is sufficient and a cluster is not necessary
- 2 CPUs or more
- 4GB+ free memory, depending on the algorithm and simulation settings
- 10GB+ free disk space
- Internet connection for GitHub, pip, etc.
- Python 3.6+ installed

In this example, we are using the Linux platform with Python 3.8. If you are using Windows, most steps should still apply, though a few commands and package requirements might differ.
## Step 1. Ianvs Preparation

First, we download the code of Ianvs. Assuming that we use `/ianvs` as the workspace, Ianvs can be cloned with `Git` as:

``` shell
mkdir /ianvs
cd /ianvs # One might use another preferred path

mkdir project
cd project
git clone https://github.com/kubeedge/ianvs.git
```

Then, we install third-party dependencies for Ianvs:
``` shell
sudo apt-get update
sudo apt-get install libgl1-mesa-glx -y
python -m pip install --upgrade pip

cd ianvs
python -m pip install ./examples/resources/third_party/*
python -m pip install -r requirements.txt
```

We are now ready to install Ianvs:
``` shell
python setup.py install
```

## Step 2. Dataset Preparation

Datasets and models can be large. To avoid an over-sized project in the GitHub repository of Ianvs, the Ianvs code base does not include the original datasets, so developers do not need to download unnecessary datasets for a quick start.

``` shell
mkdir dataset
cd dataset
unzip mdil-ss.zip
```

The URL address of this dataset should then be filled in the configuration file ``testenv.yaml``. In this quick start, we have done that for you; interested readers can refer to [testenv.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.

The related algorithm is also ready in this quick start:

``` shell
export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/class_increment_semantic_segmentation/lifelong_learning_bench/testalgorithms/erfnet/ERFNet
```

The URL address of this algorithm should then be filled in the configuration file ``algorithm.yaml``. In this quick start, we have done that for you; interested readers can refer to [algorithm.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.

## Step 3. Ianvs Execution and Presentation

We are now ready to run Ianvs for benchmarking:

``` shell
cd /ianvs/project
ianvs -f examples/class_increment_semantic_segmentation/lifelong_learning_bench/benchmarkingjob.yaml
```

Finally, the user can check the result of benchmarking on the console and in the output path (e.g. `/ianvs/project/ianvs-workspace/mdil-ss/lifelong_learning_bench`) defined in the benchmarking config file (e.g. `benchmarkingjob.yaml`). In this quick start, we have done all configurations for you; interested readers can refer to [benchmarkingJob.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.

The final output might look like this:

| rank | algorithm | Task_Avg_Acc | BWT | FWT | paradigm | basemodel | task_definition | task_allocation | basemodel-learning_rate | basemodel-epochs | task_definition-origins | task_allocation-origins | time | url |
|:----:|:---------:|:------------:|:---:|:---:|:--------:|:---------:|:---------------:|:---------------:|:-----------------------:|:----------------:|:-----------------------:|:-----------------------:|:----:|:---:|
| 1 | erfnet_lifelong_learning | 0.027414088670437726 | 0.010395591126145793 | 0.002835451693721201 | lifelonglearning | BaseModel | TaskDefinitionByDomain | TaskAllocationByDomain | 0.0001 | 1 | ['Cityscapes', 'Synthia', 'Cloud-Robotics'] | ['Cityscapes', 'Synthia', 'Cloud-Robotics'] | 2023-09-26 20:13:21 | ./ianvs-workspace/mdil-ss/lifelong_learning_bench/benchmarkingjob/erfnet_lifelong_learning/3a8c73ba-5c64-11ee-8ebd-b07b25dd6922 |
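The BWT and FWT columns above summarize backward and forward transfer across training rounds. The sketch below follows the common continual-learning definitions of these metrics (Ianvs' exact implementation may differ), and the accuracy matrix is made up for illustration:

```python
# Sketch: computing BWT and FWT from a task-accuracy matrix R,
# where R[i][j] is the accuracy on task j after training round i.
# Definitions follow the common continual-learning formulation;
# the matrix below is hypothetical.

def bwt(R):
    """Backward transfer: how much later rounds changed earlier-task
    accuracy (negative values indicate forgetting)."""
    T = len(R)
    return sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)

def fwt(R, baseline=None):
    """Forward transfer: accuracy on a task before training on it,
    relative to an untrained baseline (0.0 per task if not given)."""
    T = len(R)
    base = baseline or [0.0] * T
    return sum(R[i - 1][i] - base[i] for i in range(1, T)) / (T - 1)

# Hypothetical 3-round accuracy matrix
R = [
    [0.20, 0.05, 0.02],
    [0.18, 0.25, 0.06],
    [0.17, 0.24, 0.30],
]
print(bwt(R))  # (0.17-0.20 + 0.24-0.25) / 2 = -0.02
print(fwt(R))  # (0.05 + 0.06) / 2 = 0.055
```

A negative BWT, as in this made-up example, is the usual signature of catastrophic forgetting; a class incremental algorithm aims to keep it close to zero or positive.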

In addition, in the log displayed at the end of the test, you can see the accuracy of known and unknown tasks in each round, as shown in the table below (in the testing phase of round 3, all classes have been seen).

| Round | Seen Class Accuracy | Unseen Class Accuracy |
|:-----:|:-------------------:|:---------------------:|
| 1 | 0.176 | 0.0293 |
| 2 | 0.203 | 0.0265 |
| 3 | 0.311 | 0.0000 |

This ends the quick start experiment.

# What is next

If any problems happen, the user can refer to [the issue page on GitHub](https://github.com/kubeedge/ianvs/issues) for help and is also welcome to raise any new issue.

Enjoy your journey on Ianvs!
72 changes: 72 additions & 0 deletions
examples/class_increment_semantic_segmentation/lifelong_learning_bench/benchmarkingjob.yaml
@@ -0,0 +1,72 @@
benchmarkingjob:
  # job name of benchmarking; string type;
  name: "benchmarkingjob"
  # the url address of the job workspace that will reserve the output of tests; string type;
  workspace: "./ianvs-workspace/mdil-ss/lifelong_learning_bench"

  # the url address of the test environment configuration file; string type;
  # the file format supports yaml/yml;
  testenv: "./examples/class_increment_semantic_segmentation/lifelong_learning_bench/testenv/testenv.yaml"

  # the configuration of the test object
  test_object:
    # test type; string type;
    # currently the only option is "algorithms"; others will be added in succession.
    type: "algorithms"
    # test algorithm configuration files; list type;
    algorithms:
      # algorithm name; string type;
      - name: "erfnet_lifelong_learning"
        # the url address of the test algorithm configuration file; string type;
        # the file format supports yaml/yml
        url: "./examples/class_increment_semantic_segmentation/lifelong_learning_bench/testalgorithms/erfnet/test_algorithm.yaml"

  # the configuration of the ranking leaderboard
  rank:
    # rank the leaderboard by the metrics of each test case's evaluation, in order; list type;
    # the sorting priority is based on the sequence of metrics in the list from front to back;
    sort_by: [ { "accuracy": "descend" }, { "BWT": "descend" } ]

    # visualization configuration
    visualization:
      # mode of visualization in the leaderboard; string type;
      # There are quite a few possible dataitems in the leaderboard. Not all of them can be shown simultaneously on the screen.
      # In the leaderboard, we provide the "selected_only" mode for the user to configure what is shown or is not shown.
      mode: "selected_only"
      # method of visualization for selected dataitems; string type;
      # currently the options of value are as follows:
      #  1> "print_table": print selected dataitems;
      method: "print_table"

    # selected dataitem configuration
    # The user can add his/her interested dataitems in terms of "paradigms", "modules", "hyperparameters" and "metrics",
    # so that the selected columns will be shown.
    selected_dataitem:
      # currently the options of value are as follows:
      #  1> "all": select all paradigms in the leaderboard;
      #  2> paradigms in the leaderboard, e.g., "singletasklearning"
      paradigms: [ "all" ]
      # currently the options of value are as follows:
      #  1> "all": select all modules in the leaderboard;
      #  2> modules in the leaderboard, e.g., "basemodel"
      modules: [ "all" ]
      # currently the options of value are as follows:
      #  1> "all": select all hyperparameters in the leaderboard;
      #  2> hyperparameters in the leaderboard, e.g., "momentum"
      hyperparameters: [ "all" ]
      # currently the options of value are as follows:
      #  1> "all": select all metrics in the leaderboard;
      #  2> metrics in the leaderboard, e.g., "F1_SCORE"
      metrics: [ "accuracy", "BWT", "FWT" ]

    # mode of saving selected and all dataitems in workspace `./rank`; string type;
    # currently the options of value are as follows:
    #  1> "selected_and_all": save selected and all dataitems;
    #  2> "selected_only": save selected dataitems;
    save_mode: "selected_and_all"
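The `sort_by` field in the configuration above ranks the leaderboard by multiple metrics in priority order. A minimal sketch of that multi-key ranking, using hypothetical leaderboard rows (the function and record names are illustrative, not Ianvs' internals):

```python
# Sketch of multi-metric leaderboard ranking as described by `sort_by`:
# earlier entries in the list take priority, each with its own order.

def rank(records, sort_by):
    # Apply keys from lowest to highest priority; Python's sort is
    # stable, so the first metric in `sort_by` dominates the result.
    for spec in reversed(sort_by):
        (metric, order), = spec.items()
        records.sort(key=lambda r: r[metric], reverse=(order == "descend"))
    return records

# Hypothetical leaderboard rows
rows = [
    {"algorithm": "a", "accuracy": 0.30, "BWT": 0.01},
    {"algorithm": "b", "accuracy": 0.30, "BWT": 0.02},
    {"algorithm": "c", "accuracy": 0.25, "BWT": 0.05},
]
ranked = rank(rows, [{"accuracy": "descend"}, {"BWT": "descend"}])
print([r["algorithm"] for r in ranked])  # ['b', 'a', 'c']
```

Note how "b" outranks "a": both tie on `accuracy`, so the second metric in the list, `BWT`, breaks the tie.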
38 changes: 38 additions & 0 deletions
...nt_semantic_segmentation/lifelong_learning_bench/testalgorithms/erfnet/ERFNet/accuracy.py
@@ -0,0 +1,38 @@
from basemodel import val_args
from utils.metrics import Evaluator
from tqdm import tqdm
from dataloaders import make_data_loader
from sedna.common.class_factory import ClassType, ClassFactory

__all__ = ('accuracy',)

@ClassFactory.register(ClassType.GENERAL)
def accuracy(y_true, y_pred, **kwargs):
    args = val_args()
    _, _, test_loader, num_class = make_data_loader(args, test_data=y_true)
    evaluator = Evaluator(num_class)

    tbar = tqdm(test_loader, desc='\r')
    for i, (sample, img_path) in enumerate(tbar):
        if args.depth:
            image, depth, target = sample['image'], sample['depth'], sample['label']
        else:
            image, target = sample['image'], sample['label']
        if args.cuda:
            image, target = image.cuda(args.gpu_ids), target.cuda(args.gpu_ids)
            if args.depth:
                depth = depth.cuda(args.gpu_ids)

        # Map labels beyond the known classes to the ignore index (255)
        target[target > evaluator.num_class - 1] = 255
        target = target.cpu().numpy()
        # Add batch sample into evaluator
        evaluator.add_batch(target, y_pred[i])

    # Test during the training
    # Acc = evaluator.Pixel_Accuracy()
    CPA = evaluator.Pixel_Accuracy_Class()
    mIoU = evaluator.Mean_Intersection_over_Union()
    FWIoU = evaluator.Frequency_Weighted_Intersection_over_Union()

    print("CPA:{}, mIoU:{}, fwIoU: {}".format(CPA, mIoU, FWIoU))
    return CPA
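Confusion-matrix evaluators like the `Evaluator` used above typically accumulate a `num_class x num_class` matrix over batches and derive per-class and mean IoU from it. A self-contained sketch of that pattern (the class and method names here are illustrative, not sedna's actual API):

```python
# Minimal confusion-matrix evaluator sketch for semantic segmentation.
# Illustrative only; the real Evaluator in utils.metrics may differ.
import numpy as np

class MiniEvaluator:
    def __init__(self, num_class):
        self.num_class = num_class
        self.matrix = np.zeros((num_class, num_class), dtype=np.int64)

    def add_batch(self, target, pred):
        # Ignore labels outside [0, num_class), e.g. the 255 fill value
        mask = (target >= 0) & (target < self.num_class)
        idx = self.num_class * target[mask] + pred[mask]
        self.matrix += np.bincount(idx, minlength=self.num_class ** 2) \
                         .reshape(self.num_class, self.num_class)

    def mean_iou(self):
        # IoU per class = diagonal / (row sum + column sum - diagonal)
        inter = np.diag(self.matrix)
        union = self.matrix.sum(1) + self.matrix.sum(0) - inter
        return np.nanmean(inter / np.maximum(union, 1))

# Tiny 2-class example: one pixel of class 0 mispredicted as class 1
ev = MiniEvaluator(2)
ev.add_batch(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]))
print(ev.mean_iou())  # mean of IoU(0)=1/2 and IoU(1)=2/3
```

Masking the ignore index before accumulation is what makes the `target[target > evaluator.num_class - 1] = 255` line in `accuracy` above safe: those pixels simply never enter the matrix.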