
Releases: tensorflow/model-analysis

TensorFlow Model Analysis 0.30.0

21 Apr 20:35
c823349

Major Features and Improvements

  • N/A

Bug fixes and other changes

  • Fixed a bug where FeaturesExtractor incorrectly handled RecordBatches that
    contain only the raw input column and no other feature columns.

Breaking changes

  • N/A

Deprecations

  • N/A

TensorFlow Model Analysis 0.29.0

24 Mar 22:14
3aba4b9

Major Features and Improvements

  • Added support for output aggregation.

Bug fixes and other changes

  • For lift metrics, support negative values in the Fairness Indicators UI bar
    chart.
  • Made the legacy predict extractor also input and output batched extracts.
  • Updated to use the new compiled_metrics and compiled_loss APIs for Keras
    in-graph metric computations.
  • Added support for calling model.evaluate on Keras models containing custom
    metrics.
  • Added the CrossSliceMetricComputation metric type.
  • Added Lift metrics under addons/fairness.
  • No longer add metric configs from config.MetricsSpec to the baseline model
    spec by default.
  • Fixed invalid calculations for metrics derived from tf.keras.losses.
  • Fixed the following bugs related to CrossSlicingSpec-based evaluation
    results:
    • metrics_plots_and_validations_writer failed while writing cross-slice
      comparison results to the metrics file.
    • The Fairness widget view was not compatible with the cross-slicing key
      type.
  • Fixed support for loading the UI outside of a notebook.
  • Depends on absl-py>=0.9,<0.13.
  • Depends on tensorflow-metadata>=0.29.0,<0.30.0.
  • Depends on tfx-bsl>=0.29.0,<0.30.0.

Breaking changes

  • N/A

Deprecations

  • N/A

TensorFlow Model Analysis 0.28.0

23 Feb 18:55
e37b0dd

Major Features and Improvements

  • Added a new base computation for the binary confusion matrix (in addition
    to the one based on the calibration histogram). It also provides a sample
    of examples for the confusion matrix.
  • Added two new metrics, Flip Count and Flip Rate, for evaluating
    counterfactual fairness.
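
    The core idea behind Flip Count and Flip Rate can be sketched as follows.
    This is an illustrative standalone helper, not the TFMA implementation; the
    function name, the 0.5 threshold, and the example values are assumptions.

    ```python
    def flip_stats(original, counterfactual, threshold=0.5):
        """Returns (flip count, flip rate): how often the thresholded
        decision changes between an example and its counterfactual
        (e.g. the same text with a sensitive term removed)."""
        flips = sum(
            (o > threshold) != (c > threshold)
            for o, c in zip(original, counterfactual))
        return flips, flips / len(original)

    # One of three predictions crosses the threshold after the
    # counterfactual edit, so the flip rate is 1/3.
    count, rate = flip_stats([0.9, 0.4, 0.7], [0.2, 0.3, 0.8])
    ```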

Bug fixes and other Changes

  • Fixed division by zero error for diff metrics.
  • Depends on apache-beam[gcp]>=2.28,<3.
  • Depends on numpy>=1.16,<1.20.
  • Depends on tensorflow-metadata>=0.28.0,<0.29.0.
  • Depends on tfx-bsl>=0.28.0,<0.29.0.

Breaking changes

  • N/A

Deprecations

  • N/A

TensorFlow Model Analysis 0.27.0

28 Jan 16:09
56cc2ca

Major Features and Improvements

  • Created tfma.StandardExtracts with helper methods for common keys.
  • Updated StandardMetricInputs to extend from tfma.StandardExtracts.
  • Created a set of StandardMetricInputsPreprocessors for filtering extracts.
  • Introduced a padding_options config to ModelSpec to configure whether
    and how to pad the prediction and label tensors expected by the model's
    metrics.

Bug fixes and other changes

  • Fixed issue with metric computation deduplication logic.
  • Depends on apache-beam[gcp]>=2.27,<3.
  • Depends on pyarrow>=1,<3.
  • Depends on tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,<3.
  • Depends on tensorflow-metadata>=0.27.0,<0.28.0.
  • Depends on tfx-bsl>=0.27.0,<0.28.0.

Breaking changes

  • N/A

Deprecations

  • N/A

TensorFlow Model Analysis 0.26.0

16 Dec 20:41
44f5dae

Major Features and Improvements

  • Added support for aggregating feature attributions using special metrics
    that extend from tfma.metrics.AttributionMetric (e.g.
    tfma.metrics.TotalAttributions, tfma.metrics.TotalAbsoluteAttributions).
    To make use of these metrics, a custom extractor that adds attributions to
    the tfma.Extracts under the key name tfma.ATTRIBUTIONS_KEY must be
    created manually.
  • Added support for feature transformations using TFT and other preprocessing
    functions.
  • Added support for rubber stamping (a first run without a valid baseline
    model) when validating a model. The change threshold is ignored only when
    the model is rubber stamped; otherwise, an error is thrown.
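
    The extractor requirement for attribution metrics amounts to placing an
    attributions mapping into the extracts dict. A hypothetical sketch, where
    the helper name and the literal 'attributions' key stand in for a real
    extractor and tfma.ATTRIBUTIONS_KEY:

    ```python
    ATTRIBUTIONS_KEY = 'attributions'  # stands in for tfma.ATTRIBUTIONS_KEY

    def add_attributions(extracts, attributions):
        """Returns a copy of the extracts dict with per-feature
        attributions attached, so AttributionMetric subclasses
        can consume them."""
        out = dict(extracts)
        out[ATTRIBUTIONS_KEY] = attributions
        return out

    # Attach a precomputed attribution score for the 'age' feature.
    extracts = add_attributions(
        {'features': {'age': [42.0]}},
        {'age': [0.7]})
    ```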

Bug fixes and other changes

  • Fixed a bug where the Fairness Indicators UI metric list would not refresh
    if the input eval result changed.
  • Added support for reporting a missing_thresholds failure in validation
    results.
  • Updated to set the min/max values for the precision/recall plot to 0 and 1.
  • Fixed an issue with MinLabelPosition not being sorted by predictions.
  • Updated NDCG to ignore non-positive gains.
  • Fixed a bug where an example could be aggregated more than once in a single
    slice if the same slice key was generated from more than one SlicingSpec.
  • Added threshold support for confidence-interval-type metrics based on their
    unsampled_value.
  • Depends on apache-beam[gcp]>=2.25,!=2.26.*,<3.
  • Depends on tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.4.*,<3.
  • Depends on tensorflow-metadata>=0.26.0,<0.27.0.
  • Depends on tfx-bsl>=0.26.0,<0.27.0.

Breaking changes

  • Changed MultiClassConfusionMatrix threshold check to prediction > threshold
    instead of prediction >= threshold.
  • Changed default handling of materialize in default_extractors to False.
  • Separated tfma.extractors.BatchedInputExtractor into
    tfma.extractors.FeaturesExtractor, tfma.extractors.LabelsExtractor, and
    tfma.extractors.ExampleWeightsExtractor.

Deprecations

  • N/A

TensorFlow Model Analysis 0.25.0

04 Nov 18:38
5a650a1

Major Features and Improvements

  • Added support for reading and writing metrics, plots and validation results
    using Apache Parquet.

  • Updated the Fairness Indicators slicing selection UI.

  • Fixed the problem that slices were refreshed when the user selected a new
    baseline.

  • Add support for slicing on ragged and multidimensional data.

  • Load TFMA correctly in JupyterLab even if Facets has loaded first.

  • Added support for aggregating metrics using top k values.

  • Added support for padding labels and predictions with -1 to align a batch of
    inputs for use in tf-ranking metrics computations.

  • Added support for fractional labels.

  • Added metric definitions as tooltips in the Fairness Indicators metric
    selector UI.

  • Added support for specifying label_key to use with MinLabelPosition metric.

  • Starting with this release, TFMA will also host nightly packages on
    https://pypi-nightly.tensorflow.org. To install the nightly package, use
    the following command:

    pip install -i https://pypi-nightly.tensorflow.org/simple tensorflow-model-analysis
    

    Note: These nightly packages are unstable and breakages are likely to
    happen. Depending on the complexity involved, it can often take a week or
    more for a fix to become available in the wheels on the PyPI cloud
    service. You can always use the stable version of TFMA available on PyPI
    by running the command pip install tensorflow-model-analysis.
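
    The padding support described above boils down to aligning ragged
    per-query label/prediction lists into one rectangular batch. A minimal
    sketch of the idea, assuming a hypothetical helper and illustrative
    values (not the TFMA implementation):

    ```python
    import numpy as np

    def pad_to_batch(rows, pad_value=-1.0):
        """Pads ragged per-query rows with pad_value so they stack
        into a single 2-D array, as needed by tf-ranking metric
        computations."""
        width = max(len(r) for r in rows)
        return np.array(
            [list(r) + [pad_value] * (width - len(r)) for r in rows])

    # Two queries with 3 and 1 labels; the short row is padded with -1.
    labels = pad_to_batch([[1.0, 0.0, 1.0], [0.0]])
    ```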

Bug fixes and other changes

  • Fix incorrect calculation with MinLabelPosition when used with weighted
    examples.
  • Fix issue with using NDCG metric without binarization settings.
  • Fix incorrect computation when example weight is set to zero.
  • Depends on apache-beam[gcp]>=2.25,<3.
  • Depends on tensorflow-metadata>=0.25.0,<0.26.0.
  • Depends on tfx-bsl>=0.25.0,<0.26.0.

Breaking changes

  • AggregationOptions are now independent of BinarizeOptions. In order to
    compute AggregationOptions.macro_average or
    AggregationOptions.weighted_macro_average,
    AggregationOptions.class_weights must now be configured. If
    AggregationOptions.class_weights are provided, any missing keys now
    default to 0.0 instead of 1.0.
  • In the UI, aggregation based metrics will now be prefixed with 'micro_',
    'macro_', or 'weighted_macro_' depending on the aggregation type.
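
    Under the new behavior, a macro-averaged metric might be configured along
    these lines. This is a sketch in proto text format; the metric choice,
    class keys, and weights are illustrative, not taken from the release.

    ```
    metrics_specs {
      metrics { class_name: "AUC" }
      aggregate {
        macro_average: true
        # Missing class keys now default to 0.0 instead of 1.0, so list
        # every class that should contribute to the average.
        class_weights { key: 0 value: 1.0 }
        class_weights { key: 1 value: 1.0 }
      }
    }
    ```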

Deprecations

  • tfma.extractors.FeatureExtractor, tfma.extractors.PredictExtractor,
    tfma.extractors.InputExtractor, and
    tfma.evaluators.MetricsAndPlotsEvaluator are deprecated and may be
    replaced with newer versions in upcoming releases.

TensorFlow Model Analysis 0.24.3

24 Sep 21:58
f9ba53a

Major Features and Improvements

  • N/A

Bug fixes and other changes

  • Depends on apache-beam[gcp]>=2.24,<3.
  • Depends on tfx-bsl>=0.24.1,<0.25.

Breaking changes

  • N/A

Deprecations

  • N/A

TensorFlow Model Analysis 0.24.2

19 Sep 01:21
2c7a271

Major Features and Improvements

  • N/A

Bug fixes and other changes

  • Added an extra requirement group all that specifies all the extra
    dependencies TFMA needs. As a result, barebone TFMA no longer requires
    tensorflowjs, prompt-toolkit, and ipython. Use
    pip install tensorflow-model-analysis[all] to pull in those dependencies.

Breaking changes

  • N/A

Deprecations

  • N/A

TensorFlow Model Analysis 0.24.1

11 Sep 23:01
6a3450e

Major Features and Improvements

  • N/A

Bug fixes and other changes

  • Fix Jupyter lab issue with missing data-base-url.

Breaking changes

  • N/A

Deprecations

  • N/A

TensorFlow Model Analysis 0.24.0

10 Sep 01:55
f0b99a9

Major Features and Improvements

  • Use TFXIO and batched extractors by default in TFMA.

Bug fixes and other changes

  • Updated the type hint of FilterOutSlices.
  • Fixed an issue with precision@k and recall@k giving incorrect values when
    negative thresholds are used (i.e. the Keras defaults).
  • Fixed an issue with MultiClassConfusionMatrixPlot being overridden by
    MultiClassConfusionMatrix metrics.
  • Made the Fairness Indicators UI thresholds drop-down list sorted.
  • Fixed a bug where the Sort menu was not hidden when there is no model
    comparison.
  • Depends on absl-py>=0.9,<0.11.
  • Depends on ipython>=7,<8.
  • Depends on pandas>=1.0,<2.
  • Depends on protobuf>=3.9.2,<4.
  • Depends on tensorflow-metadata>=0.24.0,<0.25.0.
  • Depends on tfx-bsl>=0.24.0,<0.25.0.

Breaking changes

  • Query-based metrics evaluations that make use of MetricsSpecs.query_key
    are now passed tfma.Extracts with leaf values of type np.ndarray
    containing an additional dimension representing the values matched by the
    query (e.g. if the labels and predictions were previously 1D arrays, they
    will now be 2D arrays where the first dimension's size equals the number
    of examples matching the query key). Previously a list of tfma.Extracts
    was passed instead. This allows users to add custom metrics based on
    tf.keras.metrics.Metric as well as tf.metrics.Metric (any previous
    customizations based on tf.metrics.Metric will need to be updated). As
    part of this change, tfma.metrics.NDCG, tfma.metrics.MinValuePosition,
    and tfma.metrics.QueryStatistics have been updated.
  • Renamed ConfusionMatrixMetric.compute to ConfusionMatrixMetric.result
    for consistency with other APIs.
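
    The shape change for query-based evaluations can be pictured with plain
    NumPy. The dict keys and values below are illustrative, not the actual
    extract contents:

    ```python
    import numpy as np

    # Before 0.24.0: a list of per-example extracts with 1-D leaves.
    legacy_extracts = [
        {'labels': np.array([1.0]), 'predictions': np.array([0.8])},
        {'labels': np.array([0.0]), 'predictions': np.array([0.3])},
    ]

    # From 0.24.0: a single extracts dict whose leaves gain a leading
    # dimension sized by the number of examples matching the query key
    # (here, 2 examples matched).
    batched_extracts = {
        'labels': np.array([[1.0], [0.0]]),
        'predictions': np.array([[0.8], [0.3]]),
    }
    ```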

Deprecations

  • Deprecated Python 3.5 support.