upload an example about the scene based unknown task recognition algorithm

update algorithms
delete annotated codes
refer the origin reproduced p
add class documentation
upload Readme
Modify folder name
add an example
modify readme

Signed-off-by: Frank-lilinjie <lilinjie@bupt.edu.cn>
Frank-lilinjie committed Oct 30, 2022
1 parent c908199 commit 3834b8e
Showing 53 changed files with 423 additions and 111 deletions.
103 changes: 0 additions & 103 deletions examples/curb-detection/lifelong_learning_bench/README.md

This file was deleted.

@@ -0,0 +1,100 @@
# Quick Start for Unknown Task Recognition

Welcome to Ianvs! Ianvs aims to test the performance of distributed synergy AI solutions following recognized standards, in order to facilitate more efficient and effective development. This quick start helps you test your algorithm on Ianvs with a simple example of scene-based unknown task recognition. It reduces manual procedures to just a few steps so that you can build and start your distributed synergy AI solution development within minutes.

Before using Ianvs, you might want to have a device ready:

- One machine is all you need, i.e., a laptop or a virtual machine is sufficient and a cluster is not necessary
- 2 CPUs or more
- 4GB+ free memory, depending on the algorithm and simulation setting
- 10GB+ free disk space
- Internet connection for GitHub and pip, etc.
- Python 3.6+ installed

In this example, we use the Linux platform with Python 3.7.1. If you are using Windows, most steps should still apply, though a few details, such as commands and package requirements, might differ.

## Step 1. Ianvs Preparation

First, we download the Ianvs code. Assuming that `/ianvs` is used as the workspace, Ianvs can be cloned with `Git` as follows:

```shell
mkdir /ianvs
cd /ianvs  # One might use another path if preferred

mkdir project
cd project
git clone https://github.com/kubeedge/ianvs.git
```

Then, we install the third-party dependencies for Ianvs.

```shell
sudo apt-get update
sudo apt-get install libgl1-mesa-glx -y
python -m pip install --upgrade pip

cd ianvs
python -m pip install ./examples/resources/third_party/*
python -m pip install -r requirements.txt
```

We are now ready to install Ianvs.

```shell
python setup.py install
```

## Step 2. Dataset and Model Preparation

Datasets and models can be large. To avoid an oversized project in the Ianvs GitHub repository, the Ianvs code base does not include the original datasets. Developers therefore do not need to download unnecessary datasets for a quick start.

```shell
cd /ianvs  # One might use another path if preferred
mkdir dataset
cd dataset
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/ianvs/curb-detection/curb-detection.zip
unzip curb-detection.zip
```

The URL of this dataset should then be filled into the configuration file `testenv.yaml`. In this quick start, we have already done that for you; interested readers can refer to [testenv.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.
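
For orientation, the dataset entries in `testenv.yaml` typically look like the sketch below. This is only an illustrative sketch: the key names follow the linked documentation, while the index-file paths are assumptions and should be replaced with wherever you actually unpacked the dataset.

```yaml
# Illustrative sketch of the dataset section in testenv.yaml; paths are assumptions.
testenv:
  dataset:
    # index file of the training data (assumed location after unzipping)
    train_url: "/ianvs/dataset/curb-detection/train_data/index.txt"
    # index file of the test data (assumed location after unzipping)
    test_url: "/ianvs/dataset/curb-detection/test_data/index.txt"
```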

```shell
cd /ianvs/examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet
mkdir results
```

Download the [model](https://pan.baidu.com/s/18MA8Gaw7ptpipfLD6Hz6SA) (*access code*: 37ff) and put it into the `results` directory created above.

The related algorithm is also ready for this quick start.

```shell
export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet/RFNet
export PYTHONPATH=$PYTHONPATH:/ianvs/project/examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet/
```

The URL of this algorithm should then be filled into the configuration file `algorithm.yaml`. In this quick start, we have already done that for you; interested readers can refer to [algorithm.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.
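
As a rough sketch, the algorithm configuration wires the RFNet base model together with the unseen-sample recognition module that appears in the leaderboard below. The layout, module file names, and URLs here are illustrative assumptions; the authoritative version is the `rfnet_algorithm.yaml` shipped with the example.

```yaml
# Illustrative sketch of rfnet_algorithm.yaml; module file names and URLs are assumptions.
algorithm:
  paradigm_type: "lifelonglearning"
  modules:
    - type: "basemodel"
      name: "BaseModel"
      url: "./examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet/basemodel.py"
      hyperparameters:
        - learning_rate:
            values:
              - 0.0001
    - type: "unseen_sample_recognition"
      name: "UnseenSampleRecognitionByScene"
      url: "./examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet/unseen_sample_recognition.py"
      hyperparameters:
        # path of the downloaded checkpoint placed in the results directory
        - model_path:
            values:
              - "./examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet/results/Epochofprose17.pth"
```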

## Step 3. Ianvs Execution and Presentation

We are now ready to run Ianvs for benchmarking.

```shell
cd /ianvs/project
ianvs -f examples/scene-based-unknown-task-recognition/lifelong_learning_bench/benchmarkingjob.yaml
```

Finally, the user can check the benchmarking results on the console and in the output path (e.g. `/ianvs/lifelong_learning_bench/workspace`) defined in the benchmarking config file (e.g. `benchmarkingjob.yaml`). In this quick start, we have done all the configuration for you; interested readers can refer to [benchmarkingJob.yaml](https://ianvs.readthedocs.io/en/latest/guides/how-to-test-algorithms.html#step-1-test-environment-preparation) for more details.
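
For reference, the relevant fields of `benchmarkingjob.yaml` look roughly like the sketch below. The `workspace`, `testenv`, and algorithm `url` values follow the prose above and the diff of this commit; the overall layout is illustrative and may differ slightly from the committed file.

```yaml
# Illustrative sketch of benchmarkingjob.yaml; treat the layout as an assumption.
benchmarkingjob:
  name: "benchmarkingjob"
  # output path where the benchmarking results are written
  workspace: "/ianvs/lifelong_learning_bench/workspace"
  # test environment configuration
  testenv: "./examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testenv/testenv.yaml"
  test_object:
    type: "algorithms"
    algorithms:
      - name: "rfnet_lifelong_learning"
        url: "./examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet/rfnet_algorithm.yaml"
```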

The final output might look like this:

| rank | algorithm | accuracy | samples_transfer_ratio | paradigm | basemodel | task_definition | task_allocation | unseen_sample_recognition | basemodel-learning_rate | task_definition-origins | task_allocation-origins | unseen_sample_recognition-model_path | time | url |
| ---- | ----------------------- | ------------------- | ---------------------- | ---------------- | --------- | ---------------------- | ---------------------- | ------------------------------ | ----------------------- | ----------------------- | ----------------------- | :----------------------------------------------------------- | ------------------- | ------------------------------------------------------------ |
| 1 | rfnet_lifelong_learning | 0.30090234155994056 | 0.4535 | lifelonglearning | BaseModel | TaskDefinitionByOrigin | TaskAllocationByOrigin | UnseenSampleRecognitionByScene | 0.0001 | ['real', 'sim'] | ['real', 'sim'] | /examples/scene-based-unknown-task-recognition/lifelong_learning_bench/testalgorithms/rfnet/results/Epochofprose17.pth | 2022-10-25 14:50:01 | /ianvs/lifelong_learning_bench/workspace/benchmarkingjob/rfnet_lifelong_learning/1dfff552-542f-11ed-b875-b07b25dd6922 |

This ends the quick start experiment.

# What is next

If any problems occur, users can refer to [the issue page on GitHub](https://github.com/kubeedge/ianvs/issues) for help and are also welcome to raise new issues.

Enjoy your journey on Ianvs!
@@ -6,7 +6,7 @@ benchmarkingjob:

# the url address of test environment configuration file; string type;
# the file format supports yaml/yml;
testenv: "./examples/curb-detection/lifelong_learning_bench/testenv/testenv.yaml"
testenv: "./examples/scene-based-unknown-task-recognition//lifelong_learning_bench/testenv/testenv.yaml"

# the configuration of test object
test_object:
@@ -19,7 +19,7 @@ benchmarkingjob:
- name: "rfnet_lifelong_learning"
# the url address of test algorithm configuration file; string type;
# the file format supports yaml/yml
url: "./examples/curb-detection/lifelong_learning_bench/testalgorithms/rfnet/rfnet_algorithm.yaml"
url: "./examples/scene-based-unknown-task-recognition//lifelong_learning_bench/testalgorithms/rfnet/rfnet_algorithm.yaml"

# the configuration of ranking leaderboard
rank:
@@ -0,0 +1 @@
from models.wide_resnet_embedding import *
@@ -0,0 +1,149 @@
import os
import shutil
import time
import pprint
import torch
import numpy as np
import torch.nn as nn
# --- functional helper ---
def category_mean(data, label, label_max):
    '''compute mean for each category'''
    one_hot_label = one_hot(label, label_max)
    class_num = torch.sum(one_hot_label, 0, keepdim=True) + 1e-15
    one_hot_label = one_hot_label / class_num
    return torch.mm(data.view(1, -1), one_hot_label).squeeze(0)

def category_mean1(data, label, label_max):
    '''compute mean for each category,
    only return centers for the given categories'''
    labelset = torch.unique(label, sorted=True)
    one_hot_label = one_hot(label, label_max)
    class_num = torch.sum(one_hot_label, 0, keepdim=True) + 1e-15
    one_hot_label = one_hot_label / class_num
    output = torch.mm(data.view(1, -1), one_hot_label).squeeze(0)
    return output[labelset]

def category_mean2(data, label, label_max):
    '''compute mean for each category, based on a matrix'''
    one_hot_label = one_hot(label, label_max)
    data = torch.gather(data, 1, label.unsqueeze(1))
    class_num = torch.sum(one_hot_label, 0, keepdim=True) + 1e-15
    one_hot_label = one_hot_label / class_num
    return torch.mm(data.view(1, -1), one_hot_label).squeeze(0)

def category_mean3(data, label, label_max):
    '''compute mean for each category, where each row corresponds to one element'''
    one_hot_label = one_hot(label, label_max)
    class_num = torch.sum(one_hot_label, 0, keepdim=True) + 1e-15
    one_hot_label = one_hot_label / class_num
    return torch.mm(one_hot_label.t(), data)

def category_mean4(data, label, label_max):
    '''compute mean for each category, where each row corresponds to one element;
    only return centers for the given categories'''
    labelset = torch.unique(label, sorted=True)
    one_hot_label = one_hot(label, label_max)
    class_num = torch.sum(one_hot_label, 0, keepdim=True) + 1e-15
    one_hot_label = one_hot_label / class_num
    pre_center = torch.mm(one_hot_label.t(), data)
    return pre_center[labelset, :]

def one_hot(indices, depth):
    """
    Returns a one-hot tensor.
    This is a PyTorch equivalent of Tensorflow's tf.one_hot.
    Parameters:
        indices: a (n_batch, m) Tensor or (m) Tensor.
        depth: a scalar. Represents the depth of the one-hot dimension.
    Returns: a (n_batch, m, depth) Tensor or (m, depth) Tensor.
    """

    encoded_indices = torch.zeros(indices.size() + torch.Size([depth]))
    if indices.is_cuda:
        encoded_indices = encoded_indices.cuda()
    index = indices.view(indices.size() + torch.Size([1]))
    encoded_indices = encoded_indices.scatter_(1, index, 1)

    return encoded_indices

def set_gpu(x):
    os.environ['CUDA_VISIBLE_DEVICES'] = x
    print('using gpu:', x)


def ensure_path(path, remove=True):
    if os.path.exists(path):
        if remove:
            if input('{} exists, remove? ([y]/n)'.format(path)) != 'n':
                shutil.rmtree(path)
                os.mkdir(path)
    else:
        os.mkdir(path)

class Averager():

    def __init__(self):
        self.n = 0
        self.v = 0

    def add(self, x):
        self.v = (self.v * self.n + x) / (self.n + 1)
        self.n += 1

    def item(self):
        return self.v


def count_acc(logits, label):
    pred = torch.argmax(logits, dim=1)
    if torch.cuda.is_available():
        return (pred == label).type(torch.cuda.FloatTensor).mean().item()
    else:
        return (pred == label).type(torch.FloatTensor).mean().item()

def euclidean_metric(a, b):
    n = a.shape[0]
    m = b.shape[0]
    a = a.unsqueeze(1).expand(n, m, -1)
    b = b.unsqueeze(0).expand(n, m, -1)
    logits = -((a - b) ** 2).sum(dim=2)
    return logits

class Timer():

    def __init__(self):
        self.o = time.time()

    def measure(self, p=1):
        x = (time.time() - self.o) / p
        x = int(x)
        if x >= 3600:
            return '{:.1f}h'.format(x / 3600)
        if x >= 60:
            return '{}m'.format(round(x / 60))
        return '{}s'.format(x)

_utils_pp = pprint.PrettyPrinter()
def pprint(x):
    _utils_pp.pprint(x)

def compute_confidence_interval(data):
    """
    Compute 95% confidence interval
    :param data: An array of mean accuracy (or mAP) across a number of sampled episodes.
    :return: the 95% confidence interval for this data.
    """
    a = 1.0 * np.array(data)
    m = np.mean(a)
    std = np.std(a)
    pm = 1.96 * (std / np.sqrt(len(a)))
    return m, pm


class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        return x