# smote-oversampler

Here are 64 public repositories matching this topic...

Solution for the Kaggle credit card fraud dataset (https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud). XGBoost is an efficient gradient boosting method: it makes an initial prediction, then computes similarity scores and gain to build trees that shrink the gap between the actual and predicted values. Grid search was used to find the best hyperparameters.

  • Updated Aug 14, 2023
  • Jupyter Notebook
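The grid-searched boosting workflow described above can be sketched as follows. This is a hypothetical, self-contained example on synthetic imbalanced data: scikit-learn's `GradientBoostingClassifier` stands in for XGBoost, and the parameter grid and dataset sizes are illustrative assumptions, not taken from the repository.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the fraud data: roughly 1% positive class.
X, y = make_classification(n_samples=2000, weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Grid search over a small illustrative hyperparameter grid, scored on ROC AUC
# (a more informative metric than accuracy on imbalanced data).
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [2, 3]},
    scoring="roc_auc",
    cv=3,
)
grid.fit(X_train, y_train)
print(grid.best_params_)
print(grid.score(X_test, y_test))  # held-out ROC AUC
```

`grid.best_estimator_` then holds the refitted model with the winning parameters.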

This project utilizes advanced data analysis and machine learning techniques to predict equipment failures before they occur. The goal is to detect anomalies and possible defects in equipment and processes to enable preemptive maintenance, thereby reducing downtime and costs.

  • Updated Jul 20, 2024
  • Jupyter Notebook
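One common way to implement the anomaly-detection step described above is an Isolation Forest, which isolates points that deviate from normal operating behavior. This is a hypothetical sketch on simulated sensor readings; the feature layout and `contamination` value are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 3))   # simulated healthy sensor readings
faulty = rng.normal(6, 1, size=(10, 3))    # simulated pre-failure readings
X = np.vstack([normal, faulty])

# contamination = expected fraction of anomalies in the data
model = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = model.predict(X)                  # -1 = anomaly, 1 = normal
print((labels == -1).sum())                # number of flagged readings
```

Flagged readings would then be routed to a maintenance queue before the equipment actually fails.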

Machine learning for credit card default. Precision-recall metrics are calculated because the data are imbalanced. Confusion matrices and test statistics are compared across logit with over- and under-sampling, decision tree, SVM, and ensemble methods (Random Forest, AdaBoost, and Gradient Boosting). The Easy Ensemble AdaBoost classifier appear…

  • Updated Jul 24, 2020
  • Jupyter Notebook
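The oversampling this topic is named for, SMOTE, synthesizes new minority-class samples by interpolating between a minority point and one of its k nearest minority-class neighbors. A minimal NumPy sketch of the idea (not the `imbalanced-learn` API, which is the usual production choice):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, seed=None):
    """Minimal SMOTE sketch: create n_new synthetic minority samples by
    interpolating between a minority point and a random one of its k
    nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    # k nearest neighbors of each point (column 0 is the point itself)
    neigh = nn.kneighbors(X_min, return_distance=False)[:, 1:]
    base = rng.integers(len(X_min), size=n_new)       # anchor points
    mate = neigh[base, rng.integers(k, size=n_new)]   # one random neighbor each
    gap = rng.random((n_new, 1))                      # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[mate] - X_min[base])

# Tiny demo: grow a 10-point minority cluster by 20 synthetic points.
X_min = np.random.default_rng(0).normal(size=(10, 2))
X_new = smote(X_min, n_new=20, k=3, seed=0)
print(X_new.shape)  # (20, 2)
```

Because each synthetic point lies on a segment between two real minority points, SMOTE densifies the minority region instead of merely duplicating rows, which is why it tends to improve precision-recall over plain random oversampling.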
