
Releases: EpistasisLab/tpot

TPOT v0.11.4 minor release

29 May 16:12
  • Add a new built-in configuration, "TPOT NN", which includes all operators in "Default TPOT" plus additional neural network estimators written in PyTorch (currently tpot.builtins.PytorchLRClassifier and tpot.builtins.PytorchMLPClassifier, for classification tasks only); see the sketch after this list
  • Refine the log_file parameter's behavior
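
A minimal sketch of selecting the new configuration, assuming PyTorch is installed and using scikit-learn's digits dataset purely for illustration:

```python
# Sketch: enable the "TPOT NN" configuration (Default TPOT operators plus the
# PyTorch-based neural network estimators).
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(config_dict='TPOT NN', generations=5, population_size=20,
                      verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
```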

TPOT v0.11.3 minor release

14 May 13:16
  • Fix a bug in TPOTRegressor in v0.11.2
  • Add a -log option to the command-line interface to save the process log to a file.

TPOT v0.11.2 Minor Release

13 May 14:49
Pre-release
  • Fix a bug where the early_stop parameter did not work properly
  • TPOT's built-in OneHotEncoder can now refit to different datasets
  • Fix an issue where the evaluated_individuals_ attribute did not record correct generation information
  • Add a new log_file parameter to output logs to a file instead of sys.stdout (see the sketch after this list)
  • Fix some code quality issues and mistakes in the documentation
  • Fix minor bugs
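
A minimal sketch of the new parameter, assuming log_file accepts an open, writable file handle (the -log option added in v0.11.3 is the command-line counterpart):

```python
# Sketch: send TPOT's progress output to a file instead of sys.stdout.
from tpot import TPOTClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with open('tpot_progress.log', 'w') as log:
    tpot = TPOTClassifier(generations=2, population_size=10, verbosity=2,
                          log_file=log, random_state=42)
    tpot.fit(X_train, y_train)
```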

TPOT v0.11.1 Minor Release

03 Jan 18:04
  • Fix a compatibility issue with scikit-learn v0.22
  • warm_start now saves both Primitive Sets and evaluated_pipelines_ from previous runs
  • Fix an error where TPOT assigned wrong fitness scores to non-evaluated pipelines (interrupted by max_time_mins or KeyboardInterrupt)
  • Fix a bug where the mutation operator could not generate new pipelines when template was not its default value and warm_start was True
  • Fix a bug where max_time_mins could not stop the optimization process when the search space was limited
  • Fix a bug in exported code when the exported pipeline is only a single estimator
  • Fix spelling mistakes in the documentation
  • Fix some code quality issues

Version 0.11.0

05 Nov 21:04
  • Support for Python 3.4 and below has been officially dropped, as has support for scikit-learn 0.20 and below.
  • Support for metric functions with the signature score_func(y_true, y_pred) in the scoring parameter has been dropped.
  • Refine StackingEstimator so it does not stack NaN/Infinity prediction probabilities.
  • Fix a bug where the population did not persist even with warm_start=True when max_time_mins was not its default value.
  • The random_state parameter in TPOT is now used for pipeline evaluation instead of the fixed random seed of 42 used before. The set_param_recursive function has been moved to export_utils.py and can be used in exported code to set random_state recursively in a scikit-learn Pipeline; it is used to set random_state in the fitted_pipeline_ attribute and in exported pipelines (see the sketch after this list).
  • TPOT can now use generations and max_time_mins independently to limit the optimization process, via either one of the parameters or both.
  • The .export() function now returns the exported pipeline as a string if no output filename is specified.
  • Add SGDClassifier and SGDRegressor to the TPOT default configurations.
  • Documentation has been updated.
  • Fix minor bugs.
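
A sketch of the pattern now used in exported code, assuming a simple two-step pipeline purely for illustration:

```python
# Sketch: set random_state recursively on every step of a scikit-learn Pipeline,
# as TPOT's exported scripts do via tpot.export_utils.set_param_recursive.
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import VarianceThreshold
from sklearn.ensemble import RandomForestClassifier
from tpot.export_utils import set_param_recursive

exported_pipeline = make_pipeline(
    VarianceThreshold(threshold=0.05),
    RandomForestClassifier(n_estimators=100),
)
# Fix the seed of every step that exposes a random_state parameter.
set_param_recursive(exported_pipeline.steps, 'random_state', 42)
```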

TPOT v0.10.2 minor release

16 Jul 17:29
  • TPOT v0.10.2 is the last version to support Python 2.7 and Python 3.4.
  • Minor updates to fix compatibility issues with the latest versions of scikit-learn (> 0.21) and xgboost (v0.90)
  • The default value of the template parameter has been changed to None
  • Fix errors in documentation

TPOT v0.10.1 minor release

19 Apr 15:19
  • Add a data_file_path option to the export function for replacing 'PATH/TO/DATA/FILE' with a customized dataset path in exported scripts (related to issue #838); see the sketch after this list
  • Change the Python version in CI tests to 3.7
  • Add CI tests for macOS.
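
A minimal sketch of the new option, assuming data_file_path is passed as a keyword argument to .export() and using the iris dataset purely for illustration:

```python
# Sketch: fit TPOT briefly, then export the best pipeline so that the generated
# script reads data from a customized path instead of 'PATH/TO/DATA/FILE'.
from tpot import TPOTClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=2, population_size=10, random_state=42)
tpot.fit(X_train, y_train)
tpot.export('tpot_exported_pipeline.py', data_file_path='/home/user/my_dataset.csv')
```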

TPOT 0.10.0 Release

12 Apr 14:48
  • Add a new template option to specify a desired structure for the machine learning pipeline in TPOT (see the sketch after this list). Check the TPOT API documentation (it will be updated once it is merged into the master branch).
  • Add a FeatureSetSelector operator to TPOT for feature selection based on a priori expert knowledge. Please check our preprint paper for more details (Note: it was named DatasetSelector in the first version of the paper, but we will rename it to FeatureSetSelector in the next version)
  • Refine the n_jobs parameter to accept values below -1. For n_jobs below -1, (n_cpus + 1 + n_jobs) CPUs are used; thus for n_jobs = -2, all CPUs but one are used. This is related to issue #846.
  • The memory parameter can now create the memory cache directory if it does not exist. This is related to issue #837.
  • Fix minor bugs.
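
A minimal sketch of the new option, assuming the template string names operator types separated by hyphens (here a feature selector, then a transformer, then a classifier):

```python
# Sketch: constrain TPOT's pipeline structure with the template option so every
# candidate pipeline is Selector -> Transformer -> Classifier.
from tpot import TPOTClassifier

tpot = TPOTClassifier(
    template='Selector-Transformer-Classifier',
    generations=5,
    population_size=20,
    verbosity=2,
)
```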

TPOT 0.9.6 Minor Release

01 Mar 18:30
  • Fix a bug causing the max_time_mins parameter not to work when use_dask=True in TPOT 0.9.5
  • TPOT now saves the best Pareto values and best Pareto pipelines in the checkpoint folder (see the sketch after this list)
  • TPOT raises an ImportError if operators in the TPOT configuration are not available when verbosity > 2
  • Thanks to @PGijsbers for the suggestions. TPOT can now save the scores of individuals already evaluated in any generation, even if the evaluation process of that generation is interrupted/stopped. Note that in this case TPOT will raise the warning message WARNING: TPOT may not provide a good pipeline if TPOT is stopped/interrupted in a early generation., because pipelines in an early generation, e.g. the 1st generation, have been evolved/modified only a limited number of times by the evolutionary algorithm.
  • Fix bugs in the configuration of TPOTRegressor
  • Fix errors in the documentation
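
A minimal sketch of the checkpointing that the Pareto-pipeline note above refers to, assuming periodic_checkpoint_folder is the relevant TPOT parameter:

```python
# Sketch: periodically save the best Pareto-front pipelines found so far
# to a folder while the optimization is running.
from tpot import TPOTClassifier

tpot = TPOTClassifier(
    generations=10,
    population_size=20,
    periodic_checkpoint_folder='tpot_checkpoints',
    verbosity=2,
)
```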

TPOT now supports integration with Dask for parallelization

04 Sep 16:41
  • TPOT now supports integration with Dask for parallelization + smart caching (see the sketch after this list). Big thanks to the Dask dev team for making this happen!

  • TPOT now supports imputation/sparse matrices in the predict and predict_proba functions.

  • TPOTClassifier and TPOTRegressor now follow the scikit-learn estimator API.

  • We refined the scoring parameter in the TPOT API to accept a Scorer object.

  • We refined parameters in VarianceThreshold and FeatureAgglomeration.

  • TPOT now supports memory caching within a Pipeline via an optional memory parameter.

  • We improved the documentation of TPOT.
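
A minimal sketch of the Dask integration, assuming dask, distributed, and dask-ml are installed and using scikit-learn's digits dataset purely for illustration:

```python
# Sketch: evaluate TPOT pipelines in parallel on a local Dask cluster.
from dask.distributed import Client
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

client = Client()  # start a local Dask cluster (prints a dashboard link)

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=20, n_jobs=-1,
                      use_dask=True, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
```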