
[GSoC2024] - add/rest-api-tests-for-attribute-rename #7754

Merged

Conversation

ritikraj26
Contributor

@ritikraj26 ritikraj26 commented Apr 11, 2024

Added REST API tests for the attribute rename feature.

Motivation and context

These REST API tests validate the changes introduced in the linked PR.

How has this been tested?

Checklist

  • I submit my changes into the develop branch
    - [ ] I have created a changelog fragment
    - [ ] I have updated the documentation accordingly
  • I have added tests to cover my changes
    - [ ] I have linked related issues (see GitHub docs)
    - [ ] I have increased versions of npm packages if it is necessary
    (cvat-canvas,
    cvat-core,
    cvat-data and
    cvat-ui)

License

  • I submit my code changes under the same MIT License that covers the project.
    Feel free to contact the maintainers if that's a concern.

Summary by CodeRabbit

  • Tests
    • Improved logic for attribute handling and combining during patch operations in REST API tests.
    • Introduced checks based on attribute IDs for enhanced attribute management.
    • Added a new test to verify the correct updating of attribute names in the REST API.

@ritikraj26
Contributor Author

Hi @Marishka17 @nmanovic
I have added the rest-api tests for the attribute rename feature here.

@ritikraj26
Contributor Author

Hi @azhavoro,
Can you review this one?

Comment on lines 685 to 693
label = next(
    iter(
        self._labels_by_source(
            self.labels, source_key=self._get_source_info(source).label_source_key
        ).values()
    )
)[0]

if label.get("attributes"):
Contributor

Only a label with attributes should be selected here. Otherwise the test will always pass when a label has no attributes, and there would be no point in such a test.
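The reviewer's point can be sketched as follows (a hypothetical helper, not code from the test suite): pick only a label that actually carries attributes, so the rename test cannot pass vacuously.

```python
# Hypothetical sketch: select the first label with a non-empty
# 'attributes' list, skipping attribute-less labels entirely.
def pick_label_with_attributes(labels):
    """Return the first label dict that has a non-empty 'attributes' list."""
    return next(label for label in labels if label.get("attributes"))

labels = [
    {"name": "car"},  # no attributes: must be skipped
    {"name": "person", "attributes": [{"id": 1, "name": "age"}]},
]
label = pick_label_with_attributes(labels)
```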

Contributor Author

Done

newvalue = label.get("attributes")

for value in newvalue:
    if value.get("id"):
Contributor

Since a label with attributes is selected from existing ones, each attribute must contain an id.

Contributor Author

Done

Comment on lines 705 to 706
if "attributes.id" in ignore_fields:
    ignore_fields.remove("attributes.id")
Contributor

This should be handled in the _get_patch_data function.

Contributor Author

Done

@ritikraj26
Contributor Author

@Marishka17
Could you take a look at this? 😅

Comment on lines 73 to 80
def _labels_with_attributes(labels: List[Dict], *, source_key: str) -> Dict[int, List[Dict]]:
    labels_with_attributes = {}
    for label in labels:
        label_source = label.get(source_key)
        if label_source and label.get("attributes"):
            labels_with_attributes.setdefault(label_source, []).append(label)

    return labels_with_attributes
Contributor

IMHO, such a function should just keep labels with attributes and return a filtered list (without filtering by source_key). But I don't currently see a real need to move this logic into a separate function.
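The alternative the reviewer describes could look roughly like this (a hypothetical helper, not the final code): a flat filter with no grouping by source_key.

```python
# Sketch of the reviewer's suggestion: keep only labels that carry
# attributes, returning a filtered list rather than a grouped dict.
from typing import Dict, List

def labels_with_attributes(labels: List[Dict]) -> List[Dict]:
    return [label for label in labels if label.get("attributes")]

filtered = labels_with_attributes(
    [{"name": "a"}, {"name": "b", "attributes": [{"name": "color"}]}]
)
```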

Contributor Author

@ritikraj26 ritikraj26 Apr 24, 2024

I have removed the function.

]
filtered_attributes = self._attributes_with_id(deepcopy(original_data.get("attributes", [])))

if overrides.get("attributes")[0].get("id") is None:
Contributor

Relying only on the first attribute in the list is probably not a good approach. You could add an argument to _get_patch_data that indicates whether new attributes are being added or existing ones updated, and rely on that instead.
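As an illustration of this idea (a hypothetical stand-in, not the real _get_patch_data signature or fields), an explicit flag can replace the `overrides["attributes"][0]["id"]` check:

```python
# Hypothetical helper: pass the patch intent explicitly instead of
# peeking at the first attribute's id.
def get_patch_data(original, *, attributes, updating_existing):
    ignore_fields = ["updated_date"]
    if updating_existing:
        # ids are known up front, so expected data can be compared strictly
        expected = {**original, "attributes": attributes}
    else:
        # new attributes receive server-assigned ids, so ignore them
        ignore_fields.append("attributes.id")
        expected = {
            **original,
            "attributes": original.get("attributes", []) + attributes,
        }
    return expected, {"attributes": attributes}, ignore_fields

expected, patch, ignored = get_patch_data(
    {"name": "person", "attributes": [{"id": 1, "name": "age"}]},
    attributes=[{"id": 1, "name": "age_updated"}],
    updating_existing=True,
)
```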

Contributor Author

Okay

Comment on lines 710 to 716
param = "attributes"
newvalue = label.get("attributes")

for value in newvalue:
    value.update({"name": value["name"] + "_updated"})

expected_data, patch_data, ignore_fields = self._get_patch_data(label, **{param: newvalue})
Contributor

Suggested change
-param = "attributes"
-newvalue = label.get("attributes")
-for value in newvalue:
-    value.update({"name": value["name"] + "_updated"})
-expected_data, patch_data, ignore_fields = self._get_patch_data(label, **{param: newvalue})
+attributes = label["attributes"]
+for attribute in attributes:
+    attribute["name"] += "_updated"
+expected_data, patch_data, ignore_fields = self._get_patch_data(label, attributes=attributes)

Contributor Author

Done

payload["attributes"] = (original_data.get("attributes") or []) + overrides[
    "attributes"
]
filtered_attributes = self._attributes_with_id(deepcopy(original_data.get("attributes", [])))
Contributor

Why do we need to use _attributes_with_id here? Since original_data represents a created label taken from assets, if a label contains attributes, then each attribute has an id.

Contributor Author

I removed this extra step.

@@ -656,6 +688,30 @@ def test_can_patch_label_field(self, source, admin_user, param, newvalue):
        ).values()
    )
)[0]
print('label', label)
Contributor

Please remove the debug code.

Contributor Author

Done

@ritikraj26
Contributor Author

@Marishka17
Could you review this one?

@bsekachev
Member

@Marishka17

Can you please review it?

Contributor

coderabbitai bot commented Jun 27, 2024

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The recent changes in the test_labels.py file refine attribute handling during patch operations in the REST API test suite. They add checks based on attribute IDs, improve the attribute-merging mechanism, and introduce a new test method that validates the patching of attribute names.
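The merge-by-id behaviour the walkthrough describes can be sketched in isolation (assumed semantics for illustration, not the actual CVAT implementation): attributes that match an existing id are updated in place, while attributes without a known id are treated as new.

```python
# Minimal sketch of merging patched attributes into existing ones by id.
def merge_attributes(existing, patched):
    by_id = {attr["id"]: dict(attr) for attr in existing}
    added = []
    for attr in patched:
        if attr.get("id") in by_id:
            by_id[attr["id"]].update(attr)  # rename/update in place
        else:
            added.append(dict(attr))        # no id: a new attribute
    return list(by_id.values()) + added

merged = merge_attributes(
    [{"id": 1, "name": "age"}],
    [{"id": 1, "name": "age_updated"}, {"name": "height"}],
)
```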

Changes

tests/python/rest_api/test_labels.py — Updated _get_patch_data logic for attribute handling and added the new test method test_can_patch_attribute_name

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant TestSuite
    participant RESTAPI
    Client->>TestSuite: Call `test_can_patch_attribute_name`
    TestSuite->>+RESTAPI: Prepare patch request with `_get_patch_data`
    RESTAPI-->>-TestSuite: Respond with updated attributes
    TestSuite-->>Client: Verify attribute updates

Poem

In the world of code so neat,
Attributes combined, now complete.
Checks on IDs, oh so grand,
Merging data as we planned.
New tests bloom like spring’s delight,
Ensuring labels patch just right. 🌼



Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Outside diff range and nitpick comments (17)
tests/python/rest_api/test_labels.py (17)

Line range hint 14-24: Consider using more descriptive names for fixtures and utility methods.

The method names like _base_setup and setup are somewhat generic. Consider renaming these to reflect more clearly their purpose or what they set up, e.g., _setup_label_permissions_base and _override_setup_if_needed.


Line range hint 26-34: Simplify the _labels_by_source method.

This method can be simplified with collections.defaultdict, which improves readability while preserving the grouping of labels into lists per source (a plain dictionary comprehension would keep only one label per source).

-        labels_by_source = {}
-        for label in labels:
-            label_source = label.get(source_key)
-            if label_source:
-                labels_by_source.setdefault(label_source, []).append(label)
+        labels_by_source = defaultdict(list)
+        for label in labels:
+            if label_source := label.get(source_key):
+                labels_by_source[label_source].append(label)

Line range hint 36-50: Ensure proper error handling in _get_source_info.

The method uses assert False for the default case in the conditional statement, which will raise an AssertionError. It's better to raise a more descriptive exception to help with debugging and maintenance.

-            assert False
+            raise ValueError(f"Unsupported source type: {source}")

Line range hint 52-60: Refactor to reduce redundancy in parameterized fixtures.

The admin_sandbox_case and admin_org_case fixtures have similar structures and could benefit from a shared utility method to reduce code duplication.


Line range hint 102-112: Improve clarity in _get_field_samples method.

The method handles various cases with a large conditional. Consider breaking this into smaller, more focused methods or using a strategy pattern for different field types to improve maintainability.


Line range hint 152-170: Enhance readability and maintainability of permission tests.

The method _test_list_ok and _test_list_denied could be refactored to reduce redundancy and improve clarity. Consider abstracting common logic into a helper method.


Line range hint 172-200: Clarify role and staff checks in test_staff_can_list_labels_in_org.

This test has complex logic for determining the appropriate user based on roles and staff status. Clarify this logic or refactor to make it more understandable.


Line range hint 202-220: Add comprehensive tests for label inclusion logic.

The test test_only_1st_level_labels_included checks for label hierarchy. Ensure that all edge cases are covered, such as labels with multiple levels of nesting.

However, consider adding more scenarios to fully test the label hierarchy.

Would you like me to help draft additional test cases?


Line range hint 222-240: Optimize label retrieval tests for better performance and clarity.

The methods _test_get_ok and _test_get_denied could be optimized by consolidating common logic and improving the structure of the tests.


Line range hint 262-280: Refactor the _get_patch_data method for clarity and efficiency.

The method handles multiple cases and can be refactored to separate concerns, perhaps by extracting methods for handling specific attribute types.

-        if overrides.get("type") == "skeleton":
-            result["type"] = original_data["type"]
+        self._handle_type_override(overrides, result, original_data)

The method is comprehensive, but refactoring could improve maintainability.


Line range hint 352-370: Review sandbox and organization context patch tests for completeness.

The tests test_admin_patch_sandbox_label and test_admin_patch_org_label check patch functionality in different contexts. Ensure these tests are comprehensive and cover all relevant scenarios.

Consider adding more detailed checks for the patch results in different contexts.

Would you like help in expanding these tests?


Line range hint 392-410: Optimize deletion tests for clarity and efficiency.

The methods _test_delete_ok and _test_delete_denied could be optimized by consolidating common logic and improving the structure of the tests.


Line range hint 452-470: Review sandbox and organization context deletion tests for completeness.

The tests test_admin_delete_sandbox_label and test_admin_delete_org_label check deletion functionality in different contexts. Ensure these tests are comprehensive and cover all relevant scenarios.

Consider adding more detailed checks for the deletion results in different contexts.

Would you like help in expanding these tests?


Line range hint 492-510: Optimize update trigger tests for clarity and efficiency.

The methods for testing update triggers could be optimized by consolidating common logic and improving the structure of the tests.


Tip: Codebase Verification

Ensure comprehensive testing of label deletion scenarios.

The tests for deleting labels should cover all possible scenarios, including edge cases like deleting labels with dependencies or in complex organizational structures. Currently, there is only one test function related to deletion (test_delete_project), which does not specifically cover label deletion.

  • Location to address:
    • tests/python/rest_api/test_analytics.py
Analysis chain

Line range hint 412-430: Ensure comprehensive testing of label deletion scenarios.

The tests for deleting labels should cover all possible scenarios, including edge cases like deleting labels with dependencies or in complex organizational structures.

The current tests seem comprehensive, but always double-check for completeness.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify comprehensive testing of label deletion scenarios.
rg --type python 'test_delete_'

Length of output: 62



Script:

#!/bin/bash
# Verify comprehensive testing of label deletion scenarios by searching for relevant test functions.
rg 'def test_delete_' --glob '**/*.py'

Length of output: 115



Script:

#!/bin/bash
# Verify comprehensive testing of label deletion scenarios by searching for relevant test functions within the REST API tests.
rg 'def test_delete_' tests/python/rest_api/

Length of output: 121


Tip: Codebase Verification

Ensure robustness in attribute patching tests.

The test test_can_patch_attribute_name modifies attributes and checks the update functionality. However, it does not cover edge cases such as attributes without an ID or with special characters in names. Please enhance the test to include these scenarios.

  • tests/python/rest_api/test_labels.py:
    • Lines 680-705: test_can_patch_attribute_name
Analysis chain

Line range hint 282-310: Ensure robustness in attribute patching tests.

The test test_can_patch_attribute_name modifies attributes and checks the update functionality. Ensure that the test covers all edge cases, such as attributes without an ID or with special characters in names.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that all edge cases are covered in attribute patching tests.
rg --type python 'attribute["name"]'

Length of output: 69



Script:

#!/bin/bash
# Correcting the rg command to search for the specified pattern in Python files and locating the test function.
rg 'attribute\["name"\]' --glob '*.py'
rg 'def test_can_patch_attribute_name' --glob '*.py'

Length of output: 409



Script:

#!/bin/bash
# Extract the definition and body of the `test_can_patch_attribute_name` function to analyze its coverage of edge cases.
ast-grep --lang python --pattern $'def test_can_patch_attribute_name(self, source: str, admin_user: str):\n    $$$' tests/python/rest_api/test_labels.py

Length of output: 2016


Tip: Codebase Verification

Ensure proper error handling in the test function test_task_label_update_triggers_nested_task_and_job_update.

The test function test_task_label_update_triggers_nested_task_and_job_update does not include explicit error handling or checks for meaningful error messages from the API. Please update the test to ensure the API provides meaningful error messages and handles error cases correctly.

  • Location: tests/python/rest_api/test_labels.py, lines 532-550
Analysis chain

Line range hint 532-550: Check for proper error handling in task update tests.

The test test_task_label_update_triggers_nested_task_and_job_update checks for errors when updating tasks. Ensure that the API provides meaningful error messages and handles these cases correctly.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Check for proper error handling in task update tests.
rg --type python 'test_task_label_update_'

Length of output: 73



Script:

#!/bin/bash
# Correcting the rg command to search for the test function related to task label updates.
rg 'def test_task_label_update_triggers_nested_task_and_job_update' --type py

Length of output: 184



Script:

#!/bin/bash
# Extract the implementation of the test function to verify error handling.
rg -A 20 'def test_task_label_update_triggers_nested_task_and_job_update' tests/python/rest_api/test_labels.py

Length of output: 1127



Script:

#!/bin/bash
# Extract more lines of the test function to fully inspect error handling.
rg -A 40 'def test_task_label_update_triggers_nested_task_and_job_update' tests/python/rest_api/test_labels.py

Length of output: 1846

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between b13191d and 6041171.

Files selected for processing (1)
  • tests/python/rest_api/test_labels.py (2 hunks)
Additional comments not posted (9)
tests/python/rest_api/test_labels.py (9)

Line range hint 242-260: Ensure comprehensive testing of permission scenarios.

The tests for retrieving labels based on sandbox and organization contexts should cover all relevant permission scenarios to ensure robustness.

The current tests appear to cover a broad range of scenarios, but always double-check for edge cases.

Verification successful

Comprehensive testing of permission scenarios verified.

The tests for retrieving labels in tests/python/rest_api/test_labels.py cover various permission scenarios, including different contexts for admin and regular users. The following test functions demonstrate this coverage:

  • test_admin_get_sandbox_label
  • test_admin_get_org_label
  • test_regular_user_get_sandbox_label
  • test_regular_user_get_org_label
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that all permission scenarios are covered in label retrieval tests.
rg --type python 'test_get_'

Length of output: 59



Script:

#!/bin/bash
# Verify that all permission scenarios are covered in label retrieval tests.
rg 'def test_' tests/python/rest_api/test_labels.py

Length of output: 1682


Line range hint 114-130: Verify exception handling in filter combination tests.

The test test_cant_mix_job_task_project_filters assumes specific behavior when invalid filter combinations are used. Ensure that the API explicitly handles these cases and returns meaningful error messages.

Verification successful

Exception Handling for Invalid Filter Combinations Verified

The API explicitly handles cases of invalid filter combinations and returns meaningful error messages. The following locations in the codebase confirm this:

  • cvat/apps/engine/views.py: "job_id, task_id and project_id parameters cannot be used together"
  • cvat/apps/engine/serializers.py:
    • "This option cannot be used together with filename_pattern."
    • 'The filename_pattern and server_files_exclude cannot be used together'

Additionally, the test file tests/python/rest_api/test_labels.py contains assertions to check for these error messages.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Check if the API has explicit handling for invalid filter combinations.
rg --type python 'cannot be used together'

Length of output: 75



Script:

#!/bin/bash
# Check if the API has explicit handling for invalid filter combinations.
rg 'cannot be used together' --glob '*.py'

Length of output: 641


Line range hint 312-330: Check for proper error handling in sublabel patching.

The test test_cannot_patch_sublabel_directly assumes specific API behavior when trying to patch sublabels. Verify that the API correctly handles these cases and provides meaningful error messages.

Verification successful

Check for proper error handling in sublabel patching.

The test test_cannot_patch_sublabel_directly correctly assumes specific API behavior when trying to patch sublabels. The API provides a meaningful error message: "Sublabels cannot be modified this way."

  • tests/python/rest_api/test_labels.py: Contains the assertion for the error message.
  • cvat/apps/engine/views.py: Contains the implementation of the error message.
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Check for proper API error handling in sublabel patching.
rg --type python 'Sublabels cannot be modified this way.'

Length of output: 90



Script:

#!/bin/bash
# Check for proper API error handling in sublabel patching.
rg --type py 'Sublabels cannot be modified this way.'

Length of output: 258


Line range hint 372-390: Verify permission handling in user patch tests.

The tests for patching labels by regular users should thoroughly verify permission handling to ensure security and functionality.

Verification successful

Verify permission handling in user patch tests.

The tests in tests/python/rest_api/test_labels.py appear to cover various scenarios involving the is_staff parameter. Ensure these tests adequately verify permission handling when patching labels by regular users.

  • tests/python/rest_api/test_labels.py: @parametrize("is_staff", [True, False])
  • tests/python/rest_api/test_labels.py: def admin_sandbox_case(self, user, source, is_staff)
  • tests/python/rest_api/test_labels.py: @parametrize("is_staff", [False])
  • tests/python/rest_api/test_labels.py: def admin_org_case(self, user, source, org_id, is_staff)
  • tests/python/rest_api/test_labels.py: @parametrize("is_staff", [True, False])
  • tests/python/rest_api/test_labels.py: def user_sandbox_case(self, source, is_staff)

These instances suggest that the tests do account for different is_staff values, verifying permission handling.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify thorough permission handling in user patch tests.
rg --type python 'is_staff'

Length of output: 58



Script:

#!/bin/bash
# Verify thorough permission handling in user patch tests.
rg --glob '*.py' 'is_staff'

Length of output: 14355


Line range hint 432-450: Check for proper error handling in sublabel deletion tests.

The test test_cannot_delete_sublabel_directly checks for errors when attempting to delete sublabels directly. Ensure that the API provides meaningful error messages and handles these cases correctly.

Verification successful

Proper error handling for sublabel deletion is confirmed.

The test test_cannot_delete_sublabel_directly correctly asserts that the error message "Sublabels cannot be deleted this way." is present in the response, ensuring meaningful error handling.

  • tests/python/rest_api/test_labels.py: Assertion for the error message.
  • cvat/apps/engine/views.py: Generation of the error message.
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Check for proper error handling in sublabel deletion tests.
rg --type python 'Sublabels cannot be deleted this way.'

Length of output: 89



Script:

#!/bin/bash
# Check for proper error handling in sublabel deletion tests.
rg 'Sublabels cannot be deleted this way.' --glob '*.py'

Length of output: 259


Line range hint 132-150: Ensure consistency in test setup and assertions.

The test test_can_list_inherited_labels uses hardcoded assumptions about the data structure (e.g., p["labels"]["count"] > 0). Ensure that these assumptions are valid across all test environments or consider setting up the necessary conditions within the test.


Line range hint 472-490: Verify permission handling in user deletion tests.

The tests for deleting labels by regular users should thoroughly verify permission handling to ensure security and functionality.

Verification successful

Verify permission handling in user deletion tests.

The search results indicate that user deletion tests are present in the codebase and that permission handling is being considered. Specifically, the is_staff parameter is used extensively in the test files, including tests/python/rest_api/test_labels.py, which implies that the permission handling logic is being tested.

  • File: tests/python/rest_api/test_labels.py
    • Relevant Tests:
      • test_admin_delete_sandbox_label
      • test_admin_delete_org_label
      • test_regular_user_delete_sandbox_label
      • test_regular_user_delete_org_label

These tests include scenarios for both admin and regular users, ensuring that the permission handling is verified in various contexts.

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify thorough permission handling in user deletion tests.
rg --type python 'is_staff'

Length of output: 58



Script:

#!/bin/bash
# Verify thorough permission handling in user deletion tests.
rg 'is_staff' --glob '*.py'

Length of output: 14355



Script:

#!/bin/bash
# Verify permission handling in user deletion tests.
rg 'delete' --glob '*.py'

Length of output: 86792


Line range hint 512-530: Ensure comprehensive testing of update propagation scenarios.

The tests for update propagation should cover all possible scenarios, including edge cases like updates affecting multiple levels of nested tasks or jobs.

The current tests seem comprehensive, but always double-check for completeness.

Verification successful

Ensure comprehensive testing of update propagation scenarios.

The test function test_project_label_update_triggers_nested_task_and_job_update in tests/python/rest_api/test_labels.py covers update propagation scenarios, including nested tasks and jobs. The function is parameterized to handle different kinds of updates (addition, removal, modification), which helps ensure comprehensive testing.

  • Location:
    • tests/python/rest_api/test_labels.py
      • Function: test_project_label_update_triggers_nested_task_and_job_update
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify comprehensive testing of update propagation scenarios.
rg --type python 'test_project_label_update_'

Length of output: 76



Script:

#!/bin/bash
# Search for comprehensive tests of update propagation scenarios in the repository.
rg 'test_project_label_update_'

Length of output: 139



Script:

#!/bin/bash
# Extract the implementation of the test function to verify comprehensive testing.
ast-grep --lang python --pattern 'def test_project_label_update_triggers_nested_task_and_job_update($$$) { $$$ }'

Length of output: 616



Script:

#!/bin/bash
# Extract the implementation of the test function to verify comprehensive testing.
ast-grep --lang python --pattern 'def test_project_label_update_triggers_nested_task_and_job_update($$): $$'

Length of output: 455



Script:

#!/bin/bash
# Extract the implementation of the test function to verify comprehensive testing.
rg --context 20 'def test_project_label_update_triggers_nested_task_and_job_update' tests/python/rest_api/test_labels.py

Length of output: 2064


Line range hint 332-350: Ensure unique name enforcement is robust.

The test test_cannot_rename_label_to_duplicate_name checks for unique name constraints. Verify that the implementation is robust and handles all potential edge cases.

Verification successful

Unique name enforcement is robust

The unique name constraint for labels is enforced and tested thoroughly in the codebase. The following points confirm the robustness:

  • Code Enforcement:

    • cvat/apps/engine/models.py: Raises an InvalidLabel exception with the message "All label names must be unique".
  • Test Cases:

    • tests/python/rest_api/test_labels.py: Contains test cases that check for duplicate label names, including test_cannot_rename_label_to_duplicate_name.
    • tests/python/rest_api/test_tasks.py: Includes tests like test_cannot_rename_label_to_duplicate_name.
    • tests/python/rest_api/test_projects.py: Also checks for duplicate label names with tests like test_cannot_rename_label_to_duplicate_name.

These implementations and tests ensure that the unique name enforcement for labels is robust and handles potential edge cases effectively.
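Distilled to its essence, the constraint these tests exercise looks roughly like this (a hypothetical sketch; the real enforcement lives in cvat/apps/engine/models.py and raises InvalidLabel rather than ValueError):

```python
# Minimal sketch of the unique-label-name constraint.
def validate_unique_label_names(labels):
    names = [label["name"] for label in labels]
    if len(names) != len(set(names)):
        raise ValueError("All label names must be unique")

try:
    validate_unique_label_names([{"name": "car"}, {"name": "car"}])
    duplicate_rejected = False
except ValueError:
    duplicate_rejected = True
```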

Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify robust enforcement of unique name constraints.
rg --type python 'All label names must be unique'

Length of output: 82



Script:

#!/bin/bash
# Search for unique name constraint enforcement in the codebase.
rg 'unique' --type py
rg 'duplicate' --type py
rg 'label' --type py

Length of output: 278311

@Marishka17 Marishka17 merged commit 6d49b89 into cvat-ai:develop Jun 28, 2024
32 checks passed