
Commit

Code Refactor ruff check --fix --extend-select I (#56)
glenn-jocher committed Jun 16, 2024
1 parent c6011a0 commit 641da1c
Showing 9 changed files with 29 additions and 32 deletions.
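For context, `ruff check --fix` runs Ruff's linter and applies its automatic fixes, and `--extend-select I` enables the isort-style import-sorting rules (the `I` category) on top of the default rule set. The Python edits in this commit are unused-assignment cleanups (rule F841); the sketch below only illustrates what the `I` rules enforce, using hypothetical imports that do not appear in this diff and assuming `models` is detected as a first-party module:

```python
# Illustrative only -- this commit's diff shows no import changes.
# Before the fix, Ruff's I001 rule flags un-sorted imports such as:
#
#     import torch
#     import numpy as np
#     from models import Darknet
#     import os
#
# After "ruff check --fix --extend-select I" they are grouped
# (standard library, third-party, first-party) and alphabetized:
import os

import numpy as np
import torch

from models import Darknet  # assumed first-party module in this repo
```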
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/--bug-report.md
@@ -1,8 +1,8 @@
-______________________________________________________________________
+---

name: "\\U0001F41BBug report" about: Create a report to help us improve title: '' labels: bug assignees: ''

-______________________________________________________________________
+---

Before submitting a bug report, please ensure that you are using the latest versions of:

4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/--feature-request.md
@@ -1,8 +1,8 @@
-______________________________________________________________________
+---

name: "\\U0001F680Feature request" about: Suggest an idea for this project title: '' labels: enhancement assignees: ''

-______________________________________________________________________
+---

## 🚀 Feature

4 changes: 2 additions & 2 deletions .github/workflows/format.yml
@@ -6,9 +6,9 @@ name: Ultralytics Actions

on:
  push:
-    branches: [main,master]
+    branches: [main]
  pull_request:
-    branches: [main,master]
+    branches: [main]

jobs:
  format:
27 changes: 13 additions & 14 deletions .github/workflows/greetings.yml
@@ -6,21 +6,20 @@ jobs:
  greeting:
    runs-on: ubuntu-latest
    steps:
-      - uses: actions/first-interaction@v1
-        with:
-          repo-token: ${{ secrets.GITHUB_TOKEN }}
-          pr-message: 'Hello @${{ github.actor }}, thank you for submitting a PR! We will respond as soon as possible.'
-          issue-message: |
-            Hello @${{ github.actor }}, thank you for your interest in our work! **Ultralytics has publicly released YOLOv5** at https://github.com/ultralytics/yolov5, featuring faster, lighter and more accurate object detection. YOLOv5 is recommended for all new projects.
-            <a href="https://apps.apple.com/app/id1452689527" target="_blank">
-            <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/splash.jpg" width="800"></a>
-            <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/model_plot.png" width="800">
-            If this is a 🐛 Bug Report, please provide screenshots and **minimum viable code to reproduce your issue**, otherwise we can not help you.
-            If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online [W&B logging](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data#visualize) if available.
-            For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
+      - uses: actions/first-interaction@v1
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+          pr-message: "Hello @${{ github.actor }}, thank you for submitting a PR! We will respond as soon as possible."
+          issue-message: |
+            Hello @${{ github.actor }}, thank you for your interest in our work! **Ultralytics has publicly released YOLOv5** at https://github.com/ultralytics/yolov5, featuring faster, lighter and more accurate object detection. YOLOv5 is recommended for all new projects.
+            <a href="https://apps.apple.com/app/id1452689527" target="_blank">
+            <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/splash.jpg" width="800"></a>
+            <img src="https://github.com/ultralytics/yolov5/releases/download/v1.0/model_plot.png" width="800">
+            If this is a 🐛 Bug Report, please provide screenshots and **minimum viable code to reproduce your issue**, otherwise we can not help you.
+            If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online [W&B logging](https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data#visualize) if available.
+            For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
8 changes: 4 additions & 4 deletions .github/workflows/stale.yml
@@ -10,9 +10,9 @@ jobs:
      - uses: actions/stale@v9
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
-          stale-issue-message: 'This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.'
-          stale-pr-message: 'This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.'
+          stale-issue-message: "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions."
+          stale-pr-message: "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions."
          days-before-stale: 30
          days-before-close: 5
-          exempt-issue-labels: 'documentation,tutorial'
-          operations-per-run: 100 # The maximum number of operations per run, used to control rate limiting.
+          exempt-issue-labels: "documentation,tutorial"
+          operations-per-run: 100 # The maximum number of operations per run, used to control rate limiting.
4 changes: 2 additions & 2 deletions detect.py
@@ -92,8 +92,8 @@ def detect(opt):
    for batch_i, (img_paths, img) in enumerate(dataloader):
        print("\n", batch_i, img.shape, end=" ")

-        img_ud = np.ascontiguousarray(np.flip(img, axis=1))
-        img_lr = np.ascontiguousarray(np.flip(img, axis=2))
+        np.ascontiguousarray(np.flip(img, axis=1))
+        np.ascontiguousarray(np.flip(img, axis=2))

        preds = []
        length = opt.img_size
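The detect.py change above (and the similar ones in models.py and score.py below) is Ruff's fix for rule F841, a local variable that is assigned but never used: the binding is removed while the right-hand expression is kept, presumably so any side effects are preserved. A minimal sketch of that behavior, using a stand-in array rather than the real dataloader batch:

```python
import numpy as np

img = np.zeros((3, 416, 416), dtype=np.uint8)  # stand-in for one batch image

# Before: F841 "local variable 'img_ud' is assigned to but never used"
# img_ud = np.ascontiguousarray(np.flip(img, axis=1))

# After the autofix only the unused binding is gone; the call itself remains:
np.ascontiguousarray(np.flip(img, axis=1))

# np.flip and np.ascontiguousarray are pure here, so the leftover expression
# does nothing observable and could likely be deleted outright in a follow-up.
```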
2 changes: 1 addition & 1 deletion models.py
@@ -110,7 +110,7 @@ def __init__(self, anchors, nC, img_dim, anchor_idxs):
    def forward(self, p, targets=None, requestPrecision=False, weight=None, epoch=None):
        """Processes input tensor `p`, optional targets for precision calculation; returns loss, precision, or both."""
        FT = torch.cuda.FloatTensor if p.is_cuda else torch.FloatTensor
-        device = torch.device("cuda:0" if p.is_cuda else "cpu")
+        torch.device("cuda:0" if p.is_cuda else "cpu")
        # weight = xview_class_weights(range(60)).to(device)

        bs = p.shape[0]
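A side note on the models.py hunk above: the removed `device` binding is still referenced by the commented-out `xview_class_weights` line directly below it, so the fix leaves a bare `torch.device(...)` call whose result is discarded. A sketch of the situation, assuming only what the hunk shows:

```python
import torch

p = torch.zeros(1)  # stand-in for the layer input tensor

# As left by the autofix: the device is computed but never bound to a name.
torch.device("cuda:0" if p.is_cuda else "cpu")

# If the commented-out class-weights line were ever re-enabled, the binding
# would have to come back with it, e.g.:
# device = torch.device("cuda:0" if p.is_cuda else "cpu")
# weight = xview_class_weights(range(60)).to(device)
```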
1 change: 0 additions & 1 deletion scoring/evaluation.py
@@ -142,7 +142,6 @@ def compute_average_precision_recall(groundtruth_coordinates, coordinates, iou_t
    # Start to build up the Matching instances for each of the image_id_*, which
    # is to hold the IOU computation between the rectangle pairs for the same
    # image_id_*.
-    matchings = {}
    if (len(groundtruth_coordinates) % 4 != 0) or (len(coordinates) % 4 != 0):
        raise ValueError("groundtruth_info_dict and test_info_dict should hold " "only 4 * N numbers.")

7 changes: 3 additions & 4 deletions scoring/score.py
@@ -265,7 +265,6 @@ def score(path_predictions, path_groundtruth, path_output, iou_threshold=0.5):
        average_precision_per_class[i] = ap

    # metric splits
-    metric_keys = ["map", "map/small", "map/medium", "map/large", "map/common", "map/rare"]

    splits = {
        "map/small": [17, 18, 19, 20, 21, 23, 24, 26, 27, 28, 32, 41, 60, 62, 63, 64, 65, 66, 91],

@@ -436,9 +435,9 @@ def score(path_predictions, path_groundtruth, path_output, iou_threshold=0.5):
    vals["map_score"] = np.nanmean(per_class_p)
    vals["mar_score"] = np.nanmean(per_class_r)

-    a = np.concatenate(
-        (average_precision_per_class, per_class_p, per_class_r, per_class_rcount, num_gt_per_cls)
-    ).reshape(5, 100)
+    np.concatenate((average_precision_per_class, per_class_p, per_class_r, per_class_rcount, num_gt_per_cls)).reshape(
+        5, 100
+    )

    for i in splits:
        vals[i] = np.nanmean(average_precision_per_class[splits[i]])
