I have a question about the preprocessing section in the ipynb.
It seems that the preprocessing of min-max scaling is applied to the entire dataset, but it is not distinguished between training data and validation data, which may cause data leakage. How are you addressing this issue?
Thanks for pointing out the data leakage issue in the preprocessing section. I've now split the data into training and testing sets before applying Min-Max scaling, so the scaling parameters are learned only from the training data and then applied consistently to the test data. This prevents any information from the test set leaking into the training process.
Here's a brief description of the changes I made:
I converted my DataFrame (data_df) to a NumPy array (data_numpy).
I split the data into training (x_train, y_train) and testing (x_test, y_test) sets using train_test_split.
I applied Min-Max scaling with a feature range of (-1, 1), fitting the scaler on the training data only (x_train_scaled).
I transformed the test data (x_test_scaled) using the same fitted scaler.
Finally, I organized the scaled data into sequences of time steps for both training and testing.
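For reference, the steps above can be sketched roughly like this. The column count, split ratio, and sequence length are placeholder assumptions, not values from the original notebook:

```python
# Sketch of the leakage-free preprocessing described above.
# Data shape, test_size, and n_steps are assumptions for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
data_numpy = rng.normal(size=(200, 4))  # stand-in for data_df.to_numpy()
features, target = data_numpy[:, :-1], data_numpy[:, -1]

# 1. Split BEFORE scaling so the test set never influences the scaler
#    (shuffle=False keeps time order for a time-series split).
x_train, x_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, shuffle=False
)

# 2. Fit the scaler on the training data only.
scaler = MinMaxScaler(feature_range=(-1, 1))
x_train_scaled = scaler.fit_transform(x_train)

# 3. Apply the same fitted scaler to the test data.
x_test_scaled = scaler.transform(x_test)

# 4. Organize into sequences of time steps (window length is an assumption).
def make_sequences(x, y, n_steps=10):
    xs = np.stack([x[i:i + n_steps] for i in range(len(x) - n_steps)])
    ys = y[n_steps:]
    return xs, ys

x_train_seq, y_train_seq = make_sequences(x_train_scaled, y_train)
x_test_seq, y_test_seq = make_sequences(x_test_scaled, y_test)
```

Note that the test data may fall slightly outside (-1, 1) after transforming, since its min and max were not seen by the scaler; that is expected and is exactly what avoids the leakage.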