Performance test is not working in Workload benchmark test #4298
Reopening: Failed again in
Closing: Was able to execute manually after parsing the cluster logs.
Reopened due to the failure encountered in wazuh/wazuh#24894 (workload-4.9.0-alpha3-artifacts.zip).

I tried to execute it manually without success:

python3 -m pytest test_cluster_performance.py --artifacts_path='/home/nstefani/Downloads/workload-4.9.0-alpha3-artifacts' --n_workers=25 --n_agents=50000 --html=report.html --self-contained-html
============================= test session starts ==============================
platform linux -- Python 3.9.16, pytest-7.1.2, pluggy-1.5.0
rootdir: /home/nstefani/git/wazuh-qa/tests, configfile: pytest.ini
plugins: html-3.1.1, metadata-3.1.1, testinfra-5.0.0
collected 1 item

test_cluster_performance.py F                                            [100%]

=================================== FAILURES ===================================
___________________________ test_cluster_performance ___________________________

artifacts_path = '/home/nstefani/Downloads/workload-4.9.0-alpha3-artifacts'
n_workers = 25, n_agents = 50000

    def test_cluster_performance(artifacts_path, n_workers, n_agents):
        """Check that a cluster environment did not exceed certain thresholds.

        This test obtains various statistics (mean, max, regression coefficient) from CSVs with
        data generated in a cluster environment (resources used and duration of tasks). These
        statistics are compared with thresholds established in the data folder.

        Args:
            artifacts_path (str): Path where CSVs with cluster information can be found.
            n_workers (int): Number of workers folders that are expected inside the artifacts path.
            n_agents (int): Number of agents in the cluster environment.
        """
        if None in (artifacts_path, n_workers, n_agents):
            pytest.fail("Parameters '--artifacts_path=<path> --n_workers=<n_workers> --n_agents=<n_agents>' are required.")

        # Check if there are threshold data for the specified number of workers and agents.
        selected_conf = f"{n_workers}w_{n_agents}a"
        if selected_conf not in configurations:
            pytest.fail(f"This is not a supported configuration: {selected_conf}. "
                        f"Supported configurations are: {', '.join(configurations.keys())}.")

        # Check if path exists and if expected number of workers matches what is found inside artifacts.
        try:
            cluster_info = ClusterEnvInfo(artifacts_path).get_all_info()
        except FileNotFoundError:
            pytest.fail(f"Path '{artifacts_path}' could not be found or it may not follow the proper structure.")
        if cluster_info.get('worker_nodes', 0) != int(n_workers):
            pytest.fail(f"Information of {n_workers} workers was expected inside the artifacts folder, but "
                        f"{cluster_info.get('worker_nodes', 0)} were found.")

        # Calculate stats from data inside artifacts path.
        data = {'tasks': ClusterCSVTasksParser(artifacts_path).get_stats(),
                'resources': ClusterCSVResourcesParser(artifacts_path).get_stats()}
        if not data['tasks'] or not data['resources']:
>           pytest.fail(f"Stats could not be retrieved, '{artifacts_path}' path may not exist, it is empty or it may not"
                        f" follow the proper structure.")
E           Failed: Stats could not be retrieved, '/home/nstefani/Downloads/workload-4.9.0-alpha3-artifacts' path may not exist, it is empty or it may not follow the proper structure.

test_cluster_performance.py:68: Failed
---------- generated html file: file:///home/nstefani/git/wazuh-qa/tests/performance/test_cluster/test_cluster_performance/report.html ----------
=========================== short test summary info ============================
FAILED test_cluster_performance.py::test_cluster_performance - Failed: Stats ...
============================== 1 failed in 0.57s ===============================
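Since the failure comes down to the CSV parsers returning no stats for the given artifacts path, a quick pre-flight check of the folder layout can save a full pipeline run. The helper below is only a sketch under assumptions: the exact directory names and CSV layout that ClusterCSVTasksParser and ClusterCSVResourcesParser expect are not shown in this thread, so it checks a generic pattern (one folder per worker node, each containing at least one CSV file), and the function name check_artifacts_layout is hypothetical.

```python
from pathlib import Path


def check_artifacts_layout(artifacts_path, n_workers):
    """Return a list of problems found in an artifacts folder (empty list = looks OK).

    Assumed layout (not confirmed in this thread): one directory per worker
    node, with 'worker' in its name, each containing at least one CSV file.
    """
    root = Path(artifacts_path)
    if not root.is_dir():
        return [f"Path does not exist or is not a directory: {root}"]

    problems = []
    # Collect candidate worker folders directly under the artifacts root.
    worker_dirs = [d for d in root.iterdir()
                   if d.is_dir() and 'worker' in d.name.lower()]
    if len(worker_dirs) != int(n_workers):
        problems.append(f"Expected {n_workers} worker folders, found {len(worker_dirs)}")

    # Each worker folder should contain at least one CSV anywhere below it.
    for d in worker_dirs:
        if not any(d.rglob('*.csv')):
            problems.append(f"No CSV files found under {d}")
    return problems
```

Running this against the downloaded workload-4.9.0-alpha3-artifacts folder before invoking pytest would distinguish a missing/empty download from a layout the parsers do not recognize.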
@rafabailon it's necessary to review why no binary data was collected in build https://ci.wazuh.info/job/CLUSTER-Workload_benchmarks_metrics/590/console
Update: I've looked through the code and it seems that some files are missing. The error occurs when the user
The missing files are not referenced in the pipeline logs, and there is no error in the artifacts indicating that something went wrong. The changes in https://github.com/wazuh/wazuh-jenkins/pull/6608 should not affect this, as the option is not checked in the pipeline execution. I have launched the pipeline to continue the research: CLUSTER-Workload_benchmarks_metrics/604/
Update: The error is that before 4.9.0, the
There are two possibilities to fix this error:
I have tested running the monitoring script locally to make sure this is the error. I have also run the pipeline with the changes in the code to verify that the necessary
Update: Before
Build: https://ci.wazuh.info/job/CLUSTER-Workload_benchmarks_metrics/615/
Update: I've made the suggested changes and created a new PR with the correct branch nomenclature.
LGTM!
Description
Performing: Release 4.6.0 - Pre-Alpha1 - Workload benchmarks metrics
The performance test is not functioning correctly in the Workload benchmark test.
Current behavior
When the test is triggered by the pipeline, the following issue occurs:
Expected behavior
The performance test should run smoothly without encountering any path-related problems.