Releases: jgrss/cultionet

v1.7.3

16 Sep 05:38
e18e52e

What changed?

  • Addressed Issue #63 (#63)
  • Improved dependency version pinning
  • Improved handling of image resolution and height/width dimensions

v1.7.2

04 May 04:40
a0e1519

What changed?

  • Upgraded geowombat (#72)

v1.7.1

03 May 21:18
9debff1

What changed?

  • Pinned the upper version of pytorch-lightning and upgraded geowombat (#71)

v1.7.0

09 Mar 18:15
8ab9e68

What changed?

  • Fixed issues in and improved the ResUNet 3+ Psi architecture, which was introduced in v1.6.5
  • More flexible user arguments. The user can now specify:
    • the model architecture
    • convolution blocks
    • dilations
    • attention weights
  • Improved training optimizer stability
  • Deep supervision (see the sketch after this list)
    • Cultionet uses the UNet 3+ style of deep supervision across three decoders
      • The deep supervision outputs are optional during training
  • Improved training efficiency using PyTorch’s parallel data loader
  • Improved inference efficiency using PyTorch’s batch loader
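
As a rough illustration of the deep supervision item above, here is a minimal sketch of how losses from several decoder heads can be summed into one training loss. The function name, equal weighting, and cross-entropy criterion are illustrative assumptions, not cultionet's actual implementation.

```python
import torch


def deep_supervision_loss(decoder_outputs, targets, criterion, weights=None):
    """Sum (optionally weighted) losses over multiple decoder heads.

    decoder_outputs: list of logits, one tensor per decoder branch.
    targets: ground-truth labels shared by all heads.
    criterion: any per-head loss, e.g. torch.nn.CrossEntropyLoss().
    weights: optional per-head weights; defaults to equal weighting.
    """
    if weights is None:
        weights = [1.0] * len(decoder_outputs)
    total = torch.zeros((), device=targets.device)
    for weight, logits in zip(weights, decoder_outputs):
        total = total + weight * criterion(logits, targets)
    return total


# Illustrative usage with three decoder heads (hypothetical shapes)
heads = [torch.randn(2, 3, 64, 64, requires_grad=True) for _ in range(3)]
labels = torch.randint(0, 3, (2, 64, 64))
loss = deep_supervision_loss(heads, labels, torch.nn.CrossEntropyLoss())
loss.backward()
```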

v1.6.5

28 Feb 18:51
f8f7f71

What changed?

We upgraded geowombat to v2.1.4 following a bug fix.

v1.6.4

01 Feb 16:07
d3616f2

v1.6.3

23 Jan 17:24
334db7c

v1.6.2

11 Jan 20:51
da1ec14

What changed?

setuptools was bumped from >=59.5.0 to >=65.5.1

v1.6.1

04 Jan 17:08
3a6ba38

v1.6.0

03 Jan 16:15
f689244

What's new?

  • New architecture design based on UNet 3+ and residual convolutions
    • The new design is a multi-head connection of the UNet 3+ architecture
    • Added an optional crop-type model for finer-grained crop-type learning
  • Modified the total loss to include deep supervision of crop type in the RNN layer
    • The Tanimoto loss is used on all layers (a minimal loss sketch follows this list)
  • Added a num_workers option to the DataLoader for faster training/prediction
  • Added .pt data compression by switching from torch.save/torch.load to joblib.dump/joblib.load (a sketch of both I/O changes follows this list)
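
For reference, a minimal sketch of a Tanimoto loss of the kind named above, computed on class probabilities; the exact formulation, smoothing, and class weighting in cultionet may differ.

```python
import torch


def tanimoto_loss(probs, targets, smooth=1e-6):
    """1 minus the Tanimoto coefficient between probabilities and one-hot targets.

    probs:   predicted probabilities in [0, 1], shape (N, C, H, W)
    targets: one-hot encoded targets with the same shape
    """
    dims = (0, 2, 3)  # reduce over batch and spatial dimensions
    intersection = (probs * targets).sum(dims)
    denominator = (probs.pow(2) + targets.pow(2)).sum(dims) - intersection
    coefficient = (intersection + smooth) / (denominator + smooth)
    return 1.0 - coefficient.mean()  # average over classes


# Illustrative usage (hypothetical shapes)
logits = torch.randn(2, 3, 64, 64)
one_hot = torch.nn.functional.one_hot(
    torch.randint(0, 3, (2, 64, 64)), num_classes=3
).permute(0, 3, 1, 2).float()
loss = tanimoto_loss(torch.softmax(logits, dim=1), one_hot)
```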
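
And a minimal sketch of the two I/O changes above: compressed sample serialization with joblib in place of torch.save/torch.load, and parallel batch loading via the DataLoader's num_workers option. The file name, compression level, and toy dataset are assumptions for illustration.

```python
import joblib
import torch
from torch.utils.data import DataLoader, TensorDataset

# Compressed serialization: joblib.dump with a compression level replaces
# torch.save, and joblib.load replaces torch.load when reading samples back.
sample = {"x": torch.randn(4, 64, 64), "y": torch.randint(0, 2, (64, 64))}
joblib.dump(sample, "sample.pt", compress=5)  # hypothetical file name
restored = joblib.load("sample.pt")

# Parallel loading: num_workers > 0 spawns worker processes that prefetch
# batches while the model trains or predicts on the current one.
dataset = TensorDataset(torch.randn(32, 4), torch.randint(0, 2, (32,)))
loader = DataLoader(dataset, batch_size=8, shuffle=True, num_workers=4)
```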