Releases: sylvaticus/BetaML.jl

v0.12.1

16 Aug 18:59

BetaML v0.12.1

Diff since v0.12.0

  • minor bugfixes

v0.12.0

15 May 20:35

BetaML v0.12.0

Diff since v0.11.4

  • Added FeatureRanker, a flexible feature-ranking estimator that supports multiple feature importance metrics (see the usage sketch after this list)
  • New functions kl_divergence and sobol_index
  • Added the ignore_dims keyword to the predict function of tree-based models, allowing specific variables to be ignored in prediction by following both branches of any split occurring on those dimensions
  • Added the sampling_share option to the RandomForestEstimator model
  • DOC: added Benchmarks (then temporarily removed, as SystemBenchmark is currently not installable; see the linked issue)
  • DOC: added a FeatureRanker tutorial
  • Bugfix on l2loss_by_cv for unsupervised models
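A minimal usage sketch of the main additions; the FeatureRanker hyperparameter names (model, nsplits) and the kl_divergence argument convention are assumptions to be checked against the respective docstrings:

```julia
using BetaML

x = rand(100, 4)
y = 2 .* x[:,1] .- x[:,3] .+ 0.1 .* rand(100)   # columns 1 and 3 matter; 2 and 4 are noise

# Rank features by their importance for a wrapped estimator
fr   = FeatureRanker(model=RandomForestEstimator(), nsplits=3)   # hyperparameter names assumed
rank = fit!(fr, x, y)                                            # feature indices, from least to most important

# New ignore_dims keyword: predict while skipping splits on the given dimensions
m = RandomForestEstimator()
fit!(m, x, y)
ŷ = predict(m, x, ignore_dims=[4])   # follow both branches of any split on dimension 4

# kl_divergence between two discrete distributions (argument convention assumed)
kl_divergence([0.5, 0.5], [0.8, 0.2])
```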

v0.11.4

18 Mar 10:09

BetaML v0.11.4

Diff since v0.11.3

bugfix: solved an issue in cosine_distance, which was actually computing the cosine similarity rather than the distance
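For context, the cosine distance is one minus the cosine similarity. A quick sanity check after the fix (assuming cosine_distance is exported at the top level; otherwise qualify it with its module):

```julia
using BetaML

a = [1.0, 0.0]
b = [0.0, 1.0]

# cosine_distance(x, y) = 1 - x⋅y / (‖x‖ ‖y‖)
cosine_distance(a, a)   # ≈ 0.0 after the fix (the buggy version returned the similarity, 1.0)
cosine_distance(a, b)   # ≈ 1.0 for orthogonal vectors
```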

v0.11.3

09 Feb 16:31

BetaML v0.11.3

Diff since v0.11.2

  • bugfixes (removed the old, undocumented, unused, type-pirating findfirst and findall functions)

v0.11.2

29 Jan 12:59

BetaML v0.11.2

Diff since v0.11.1

  • bugfixes

v0.11.1

26 Jan 11:34

BetaML v0.11.1

Diff since v0.11.0

  • Changed some keyword arguments of AutoEncoder and PCAEncoder: outdims => encoded_size and innerdims => layers_size

This shouldn't be breaking, as I tweaked the constructors to accept the older names (until the next breaking version, 0.12); a quick sketch follows.
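A minimal sketch of the rename (the values are arbitrary, and whether scalars are accepted for layers_size is an assumption to verify in the docstring):

```julia
using BetaML

# New keyword names...
m_new = AutoEncoder(encoded_size=2, layers_size=10)

# ...while the old names keep working until the next breaking release (0.12)
m_old = AutoEncoder(outdims=2, innerdims=10)
```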

v0.11.0

25 Jan 14:26

BetaML v0.11.0

Diff since v0.10.4

Attention: many breaking changes in this version!!

  • Experimental new ConvLayer and PoolLayer for convolutional networks. BetaML neural networks run only on the CPU, and even there the convolutional layers (though not the dense ones) are 2-3 times slower than Flux's. Still, they have some fairly unique characteristics, such as working with any number of dimensions and not requiring AD in most cases, so they may be useful in some corner situations. And if you want to help port them to the GPU... ;-) (a construction sketch follows this list)
  • Isolated the MLJ interface models into their own Bmlj submodule
  • Renamed many models in a consistent way
  • Shortened the hyper-parameter and learnable-parameter struct names
  • Corrected many documentation bugs
  • Several bugfixes
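A minimal construction sketch for the new layers. The constructor signatures below are assumptions (check the BetaML.Nn docstrings for the exact API and exported names); the layers can then be chained into a network as usual:

```julia
using BetaML

# Assumed signature: ConvLayer(input_size, kernel_size, n_channels_in, n_channels_out; f=...)
l1 = ConvLayer((8,8), (3,3), 1, 4, f=relu)   # 8x8 single-channel input, four 3x3 kernels
# Assumed signature: PoolLayer(input_size_with_channels, pool_size);
# the size(l)[2] chaining pattern (previous layer's output size) follows the package tutorials
l2 = PoolLayer(size(l1)[2], (2,2))
```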

v0.10.4

29 Dec 15:37

BetaML v0.10.4

Diff since v0.10.3

  • Added the AutoEncoder model and its MLJ wrapper AutoEncoderMLJ, with a m = AutoEncoder(hp); fit!(m,x); x_latent = predict(m,x); x̂ = inverse_predict(m,x_latent) interface. Users can optionally specify the number of dimensions to which to shrink the data (outdims), the number of neurons of the inner layers (innerdims), or the full details of the encoding and decoding layers along with all the underlying NN options, but this remains optional (see the sketch after this list)
  • Adapted the l2loss_by_cv function to unsupervised models with inverse_predict
  • Several bugfixes
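A minimal end-to-end sketch of that interface (the toy data and the outdims value are arbitrary):

```julia
using BetaML

x = vcat(rand(50,3), rand(50,3) .+ 5)    # toy data: two groups in 3 dimensions

m = AutoEncoder(outdims=2)               # learn a 2-dimensional latent representation
fit!(m, x)                               # train the underlying neural network
x_latent = predict(m, x)                 # 100×2 encoded data
x̂ = inverse_predict(m, x_latent)         # decode back to the original 3 dimensions
```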

Merged pull requests:

  • CompatHelper: add new compat entry for Statistics at version 1, (keep existing compat) (#61) (@github-actions[bot])
  • correct typo in AbstractTrees.printnode (#62) (@roland-KA)

Closed issues:

  • Deprecation warning from ProgressMeter.jl (#58)

v0.10.3

15 Aug 15:48

BetaML v0.10.3

Diff since v0.10.2

v0.10.2

07 Jul 20:35

BetaML v0.10.2

Diff since v0.10.1

Merged pull requests:

  • CompatHelper: add new compat entry for DelimitedFiles at version 1, (keep existing compat) (#55) (@github-actions[bot])