
Releases: tensorflow/addons

TensorFlow Addons v0.15.0

10 Nov 21:09
8cec33f

Release Notes

  • Built against TensorFlow 2.7
  • CUDA kernels are compiled with CUDA 11.2 and cuDNN 8.1.0
  • API docs found on the website

Changelog

  • Use multipython image for dev container (#2598)
  • Add support for publishing macOS M1 ARM64 wheels for tfa-nightly (#2559)

Tutorials

  • Update optimizers_cyclicallearningrate.ipynb (#2538)

tfa.activations

  • Correct documentation for Snake activation to match literature and return statement (#2572) @fliptrail
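
For reference, the corrected docstring follows the Snake paper: snake(x) = x + (1 / frequency) * sin^2(frequency * x). The call below is a minimal sketch that assumes the frequency keyword of tfa.activations.snake; values are illustrative.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # snake(x) = x + (1 / frequency) * sin^2(frequency * x)
    x = tf.constant([-1.0, 0.0, 1.0])
    y = tfa.activations.snake(x, frequency=1.0)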

tfa.image

  • Fix euclidean distance transform float16 kernel (#2568)
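
A brief sketch of the op touched by this fix. The tiny binary image is illustrative, and the dtype argument used to request the float16 kernel is an assumption for illustration.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # Assumed usage: a binary uint8 image of shape [H, W, 1]; the dtype
    # keyword selecting float16 output is an assumption, not documented here.
    binary = tf.constant([[0, 1], [1, 1]], dtype=tf.uint8)[..., tf.newaxis]
    distances = tfa.image.euclidean_dist_transform(binary, dtype=tf.float16)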

tfa.layers

  • Fix using NoisyNet with .fit() or .train_on_batch() (#2486)
  • Fix spectral norm mixed precision (#2576)
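
A sketch of the combination exercised by the spectral norm fix: a SpectralNormalization-wrapped Dense layer running under the Keras mixed_float16 policy. Layer sizes and input shapes are illustrative.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # Enable mixed precision globally, then run a spectrally normalized layer.
    tf.keras.mixed_precision.set_global_policy("mixed_float16")
    layer = tfa.layers.SpectralNormalization(tf.keras.layers.Dense(8))
    outputs = layer(tf.random.normal([4, 16]))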

tfa.optimizers

  • Add AdaBelief optimizer (#2548); see the sketch after this list
  • Make Rectified Adam faster (#2570)
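
A minimal usage sketch for the new AdaBelief optimizer, assuming only the standard Keras compile flow; the learning rate and model are illustrative, not recommendations from the release notes.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # Hedged sketch: compile a small Keras model with tfa.optimizers.AdaBelief.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
    model.compile(
        optimizer=tfa.optimizers.AdaBelief(learning_rate=1e-3),
        loss="sparse_categorical_crossentropy",
    )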

tfa.text

  • Add the CRF model wrapper (#2555)
  • Add a codeowner for CRF (#2556)

Thanks to our Contributors

@MarkDaoust, @bhack, @eli-osherovich, @fliptrail, @fsx950223, @howl-anderson, @juntang-zhuang, @jvishnuvardhan, @lgeiger, @markub3327, @seanpmorgan, @szutenberg and @vtjeng

TensorFlow Addons v0.14.0

19 Aug 12:57
45928da

Release Notes

  • Built against TensorFlow 2.6
  • CUDA kernels are compiled with CUDA 11.2 and cuDNN 8.1.0
  • API docs found on the website

Changelog

  • Remove compatibility code for TensorFlow < 2.4 (#2545)
  • Modify configure.py to recognize 'aarch64' for 64-Bit Raspberry Pi OS (#2540)
  • Add Apple Silicon support (#2504)
  • Fix build for Raspberry Pi 4 Linux ARM64 (#2487)

tfa.layers

  • Add EmbeddingBag GPU op and layer (#2352) (#2517) (#2505)
  • Fix StochasticDepth layer error when training with mixed_float16 (#2450)

tfa.optimizers

  • Adding a tutorial on CyclicalLearningRate (#2463)
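
A condensed sketch of the schedule the tutorial covers, using the CyclicalLearningRate keyword arguments; the boundary learning rates, scale function, and step size are illustrative values.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # Triangular cyclical schedule between two illustrative learning rates.
    clr = tfa.optimizers.CyclicalLearningRate(
        initial_learning_rate=1e-4,
        maximal_learning_rate=1e-2,
        scale_fn=lambda x: 1.0,
        step_size=2000,
    )
    optimizer = tf.keras.optimizers.SGD(learning_rate=clr)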

Thanks to our Contributors

@HeatfanJohn, @Rocketknight1, @RyanGoslingsBugle, @fsx950223, @kaoh, @leondgarse, @lgeiger, @maxhgerlach, @sayakpaul, @seanpmorgan, @singhsidhukuldeep and @tetsuyasu

TensorFlow Addons v0.13.0

15 May 16:45
9613618

Release Notes

  • Built against TensorFlow 2.5
  • CUDA kernels are compiled with CUDA 11.2 and cuDNN 8.1.0
  • API docs found on the website

Changelog

  • Add python3.9 support (#2204)
  • Fixed build on ppc (#2438)

tfa.activations

  • Clean up legacy code for activations (#2394)

tfa.image

  • Add python fallback for adjust_hsv_in_yiq (#2392)
  • Remove ImageProjectiveTransform kernel (#2395)
  • Fix EDT float16 and float64 kernels (#2412)
  • Optimize EDT (#2402)
  • Update cutout_ops.py (#2416)

tfa.metrics

  • Add a streaming Kendall's Tau metric (#2423)
  • Fix F1Score docs (#2462)
  • Fix MatthewsCorrelationCoefficient metric (#2406)
  • Fix RSquare serialization (#2390)
  • Make RSquare.reset_states able to run inside tf.function (#2445)
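
A sketch of the pattern enabled by the reset_states change: resetting and updating the metric inside a tf.function-compiled step. The metric configuration and argument shapes are left illustrative.

    import tensorflow as tf
    import tensorflow_addons as tfa

    metric = tfa.metrics.RSquare()

    @tf.function
    def evaluate_batch(y_true, y_pred):
        # Resetting and updating now both work under tf.function.
        metric.reset_states()
        metric.update_state(y_true, y_pred)
        return metric.result()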

tfa.optimizers

  • Add the COntinuous COin Betting (COCOB) Backprop optimizer (#2063)
  • Fix NovoGrad optimizer to work with float64 layers (#2467)
  • Update cyclical_learning_rate.py (#2286)
  • RectifiedAdam: Store 'total_steps' hyperparameter as float (#2369)
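
A sketch of the RectifiedAdam warmup configuration that uses total_steps (now stored as a float); all values are placeholders.

    import tensorflow_addons as tfa

    # RectifiedAdam with linear warmup over a fraction of total_steps.
    optimizer = tfa.optimizers.RectifiedAdam(
        learning_rate=1e-3,
        total_steps=10000,
        warmup_proportion=0.1,
        min_lr=1e-5,
    )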

tfa.text

  • Fix wrong type hinting of crf_log_likelihood (#2471)
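
A sketch of the function whose type hints were fixed; the shapes are illustrative (a batch of 2 sequences, 5 timesteps, 3 tags).

    import tensorflow as tf
    import tensorflow_addons as tfa

    # Returns the per-sequence log-likelihood and the transition matrix.
    inputs = tf.random.normal([2, 5, 3])           # unary potentials
    tags = tf.zeros([2, 5], dtype=tf.int32)        # gold tag indices
    lengths = tf.constant([5, 4], dtype=tf.int32)  # true sequence lengths
    log_likelihood, transition_params = tfa.text.crf_log_likelihood(
        inputs, tags, lengths
    )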

Thanks to our Contributors

@0x0badc0de, @bhack, @DragonPG2000, @Harsh188, @WindQAQ, @ashutosh1919, @fsx950223, @jeongukjae, @jonpsy, @juliangilbey, @lucasdavid, @lum4chi, @m-a-r-o-u, @nickswalker, @nleastaugh, @npanpaliya, @olesalscheider, @rehanguha, @seanpmorgan, @shubhanshu02, @sorensenjs, @whatwilliam and @xiedeping

TensorFlow Addons v0.12.1

30 Jan 13:40
1e05eb9

Release Notes

  • Built against TensorFlow 2.4.1
  • CUDA kernels are compiled with CUDA 11
  • API docs found on the website

Changelog

tfa.image

  • Fix sparse_image_warp with unknown batch size (#2311)

TensorFlow Addons v0.12.0

23 Dec 19:21
d26e2ed

Release Notes

Changelog

  • Add AVX2 support (#2299)
  • Drop TF2.2 compatibility (#2224)
  • Drop python3.5 support (#2204)
  • Expose tfa.types doc (#2162)
  • Rename "Arguments:" to "Args:" (#2267)
  • Add support for ARM architecture build from source (#2182)

tfa.activations

  • Add tf.nn.gelu alias for TF >= 2.4 (#2265)
  • Remove custom op activations (#2247)

tfa.image

  • Speedup gaussian kernel generation (#2149)
  • Support fill_mode for transform (#2153); see the sketch after this list
  • Use ImageProjectiveTransformV3 for TF >= 2.4.0 (#2293)
  • Support unknown rank image (#2300)
  • Fix sparse_image_warp partially unknown shape (#2308)
  • Make cutout compatible with keras layer (#2302)
  • Remove unsupported data_format (#2296)
  • Refactor sharpness (#2287)
  • Fix image random cutout (#2276) (#2285)
  • Remove tf.function decorator in tfa.image.equalize (#2264)
  • Support empty batches in ResamplerOp (#2219)
  • Make cutout op compatible with non eager mode (#2190)
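
A sketch of tfa.image.transform with the new fill_mode argument; the 8-element projective transform (a simple shear) and the chosen mode are illustrative.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # Apply a shear and fill the exposed border by reflection.
    images = tf.random.uniform([1, 64, 64, 3])
    sheared = tfa.image.transform(
        images,
        transforms=[1.0, 0.2, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
        fill_mode="reflect",
    )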

tfa.layers

  • Add stochastic depth layer (#2154)
  • Add MaxUnpooling2D layer (#2272)
  • Add noisy dense layers (#2099)
  • Add discriminative layer training (#969)
  • Make MultiHeadAttention agnostic to dtype (float32 vs. float16) (#2253)
  • Change CRF layer dtype (#2270)
  • Change GroupNormalization default groups to 32 (#2241)
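
A sketch of the new GroupNormalization default: 32 groups, which requires the channel count to be divisible by 32. The input shape is illustrative.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # groups defaults to 32 as of this release; 64 channels divide evenly.
    gn = tfa.layers.GroupNormalization()
    outputs = gn(tf.random.normal([2, 8, 8, 64]))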

tfa.optimizers

  • Standardized Testing Module (#2233)
  • Fix LazyAdam resource variable ops performance issue (#2274)
  • Add experimental_aggregate_gradients support (#2137)

tfa.rnn

  • Fix conflicting variable names in layernorm cells (#2284)

tfa.seq2seq

  • Graduate _BaseAttentionMechanism to a public base class (#2209)
  • Add a doctest example for BasicDecoder (#2214)
  • Add a doctest example for AttentionWrapper (#2215)
  • Improve sampler documentation, use doctest (#2213)
  • Beam search decoding procedure added to seq2seq_nmt tutorial (#2140)

Thanks to our Contributors

@DanBmh, @DavidWAbrahams, @Harsh188, @JulianRodert, @LeonShams, @MHStadler, @MarkDaoust, @SamuelMarks, @WindQAQ, @aaronmondal, @abhishek-niranjan, @albertz, @bhack, @crccw, @edend10, @fsx950223, @gabrieldemarmiesse, @guillaumekln, @HMPH, @hp77-creator, @hwaxxer, @hyang0129, @kaixih, @lamberta, @marksandler2, @matwilso, @napsternxg, @nataliyah123, @perfinion, @qlzh727, @rmlarsen, @rushabh-v, @rybakov, @seanpmorgan, @stephengmatthews, @tgaddair and @thaink

TensorFlow Addons v0.11.2

27 Aug 02:52
81529ff

Changelog

Improve API documentation

  • Beautify image docs (#2101)
  • Beautify callbacks docs (#2105)
  • Beautify losses docs (#2062)
  • Fix broken link in NovoGrad docstring (#2096)
  • Beautify layers docs (#2072)

TensorFlow Addons v0.11.1

07 Aug 02:28
f62f05c

Release Notes

  • Update TF compatibility warning to include all of 2.3.x as acceptable.

TensorFlow Addons v0.11.0

06 Aug 01:20
3078485

Release Notes

  • Built against TensorFlow 2.3
  • CUDA kernels are compiled with CUDA 10.1
  • API docs found on the website

Changelog

  • Support building against CUDA 11 and CUDNN 8 (#1950)

tfa.activations

  • Add Snake layer and activation (#1967)
  • Deprecate gelu (#2048)

tfa.image

  • Set shape for dense image warp (#1993)
  • Drop data_format argument (#1980)
  • Enable half and double for resampler GPU ops (#1852)

tfa.layers

  • Add Spectral Normalization layer (#1244)
  • Add CRF layer (#1999)
  • Add Snake layer and activation (#1967)
  • Add Spatial Pyramid Pooling layer (#1745)
  • Add Echo State Network (ESN) layer (#1862)
  • Incorporate low-rank techniques into DCN. (#1795)

tfa.metrics

  • Add geometric mean (#2031)
  • Fix RSquare shape issue in model.evaluate (#2034)

tfa.losses

  • Change the default distance metric for tfa.losses.triplet_semihard_loss and tfa.losses.triplet_hard_loss from the squared Euclidean norm to the Euclidean norm. To keep the old behavior, set distance_metric to "squared-L2".
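
A one-line sketch of restoring the previous behavior with the Keras-style loss class that mirrors the functional form:

    import tensorflow_addons as tfa

    # Restore the pre-0.11 squared Euclidean behavior explicitly.
    loss = tfa.losses.TripletSemiHardLoss(distance_metric="squared-L2")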

tfa.optimizers

  • Add ProximalAdagrad optimizer (#1976)
  • Add support for scheduled weight decays in RectifiedAdam. (#1974)
  • Fixed lr/wd schedules for DecoupledWeightDecayExtension running on GPU (#2053) (#2029)
  • Fixed sparse novograd (#1970)
  • MovingAverage: add dynamic decay and swap weights (#1726)
  • Remove RAdam optional float total steps (#1871)

tfa.rnn

  • Move tf.keras.layers.PeepholeLSTMCell to TFA (#1944); see the sketch after this list
  • Added echo state network (ESN) recurrent cell (#1811)
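
A sketch of the relocated peephole cell wrapped in a standard Keras RNN layer; the unit count and input shape are illustrative.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # PeepholeLSTMCell now lives in tfa.rnn; wrap it like any Keras RNN cell.
    cell = tfa.rnn.PeepholeLSTMCell(units=16)
    rnn = tf.keras.layers.RNN(cell)
    outputs = rnn(tf.random.normal([4, 10, 8]))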

tfa.seq2seq

  • Improve support of global dtype policy in seq2seq layers (#1981)
  • Add a Python alternative to seq2seq.gather_tree (#1925)
  • Allow resetting embedding_fn when calling BeamSearchDecoder (#1917)
  • Fixup returned cell state structure in BasicDecoder (#1905)
  • Fixup returned cell state structure in BeamSearchDecoder (#1904)
  • Fix AttentionWrapper type annotation for multiple attention mechanisms (#1872)
  • Ensure cell state structure is unchanged on first AttentionWrapper call (#1861)
  • Remove sequential_update from AverageWrapper (#1807)

Thanks to our Contributors

@AakashKumarNain, @AntPeixe, @JakeTheWise, @MHStadler, @PRUBHTEJ, @Smankusors, @Squadrick, @Susmit-A, @WindQAQ, @autoih, @bhack, @brunodoamaral, @cgarciae, @charlielito, @csachs, @failure-to-thrive, @feyn-aman, @fsx950223, @gabrieldemarmiesse, @gugarosa, @guillaumekln, @jaeyoo, @jaspersjsun, @jlsneto, @ksachdeva, @lc0, @leandro-gracia-gil, @marload, @nluehr, @pedrolarben, @qlzh727, @seanpmorgan, @tanzhenyu, @tf-marissaw and @xvr-hlt

TensorFlow Addons v0.10.0

15 May 00:34
5f618fd

Release Notes

  • Built against TensorFlow 2.2
  • CUDA kernels are compiled with CUDA 10.1
  • API docs found on the website

Changelog

  • Enable ppc64le build (#1672)

tfa.activations

  • Add a DeprecationWarning to the custom-op version of activation functions (#1791)

tfa.image

  • Fix condition tracing in scale_channel (#1830)
  • Expose sharpness and equalize image op (#1827)
  • Clarify flow definition for dense_image_warp (#1817)
  • Added gaussian_blur_op (#1450)

tfa.layers

  • Added Adaptive MaxPooling layers (#1727)
  • Added AdaptiveAveragePooling2D layer (#1383)
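
A sketch of the new adaptive pooling layer, which reduces any spatial size to a fixed output grid; output_size and the input shape are illustrative.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # Pools 32x32 feature maps down to a fixed 4x4 grid regardless of input size.
    pool = tfa.layers.AdaptiveAveragePooling2D(output_size=(4, 4))
    outputs = pool(tf.random.normal([2, 32, 32, 3]))  # -> shape [2, 4, 4, 3]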

tfa.metrics

  • Add sample_weight support to FScore metrics (#1816)
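
A sketch of an FScore metric updated with per-example sample weights, the case enabled here; num_classes, labels, predictions, and weights are illustrative.

    import tensorflow as tf
    import tensorflow_addons as tfa

    # F1Score is the beta=1 FScore; update_state now honors sample_weight.
    metric = tfa.metrics.F1Score(num_classes=3, average="micro")
    y_true = tf.constant([[0, 1, 0], [1, 0, 0]], dtype=tf.float32)
    y_pred = tf.constant([[0.1, 0.8, 0.1], [0.6, 0.3, 0.1]], dtype=tf.float32)
    metric.update_state(y_true, y_pred, sample_weight=tf.constant([1.0, 0.5]))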

tfa.losses

  • Added angular distance option to triplet loss (#1730)
  • Enable npairs loss on windows (#1742)
  • Added float16 and bfloat16 support for TripletSemiHardLoss, TripletHardLoss and LiftedStructLoss (#1683)
  • Add Soft Weighted Kappa Loss (#762)

tfa.optimizers

  • Fixed serializability bug in yogi (#1728)

Thanks to our Contributors

@Dagamies, @HauserA, @MarkDaoust, @Squadrick, @Susmit-A, @WindQAQ, @ageron, @amascia, @ashutosh1919, @autoih, @ben-arnao, @bhack, @fsx950223, @gabrieldemarmiesse, @ghosalsattam, @guillaumekln, @henry-eigen, @jharmsen, @olesalscheider, @seanpmorgan, @shun-lin, @terrytangyuan and @wenmin-wu

TensorFlow Addons v0.9.1

10 Apr 18:23
ad132da

Release Notes

  • Include CUDA kernels missing from 0.9.0
  • Fix serialization for cyclical learning rate (#1623)