If the fields (`actualField`, `predictedField`) are misconfigured and their mapping types differ (e.g. `long` vs `boolean`), the Evaluate API fails with a non-intuitive error message mentioning a script failure (the script we use internally to calculate the metrics).
Closing this issue, as the Evaluate API no longer throws strange script-related errors when the field types mismatch.
This approach may be revisited in the future. I can imagine adding a validation step that checks upfront what the actual and predicted field types are and decides whether the evaluation request makes sense.
We could also consider matching types in a smarter manner, e.g. matching a boolean field (that only has `true`/`false` values) with a numeric field (that only has `1` or `0` values).
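The upfront validation and smart matching could look roughly like the sketch below. This is purely illustrative, not the actual Elasticsearch implementation; the function name `fields_compatible` and its signature are hypothetical.

```python
# Hypothetical sketch of the proposed upfront validation: decide whether
# the actual and predicted field mapping types can be evaluated together
# before running the (script-based) metric calculation.

# Numeric mapping types that could plausibly encode a boolean as 0/1.
NUMERIC_TYPES = {"long", "integer", "short", "byte", "double", "float"}


def fields_compatible(actual_type, predicted_type, predicted_values=None):
    """Return True if the two mapping types make sense to evaluate together."""
    # Identical mapping types are always fine.
    if actual_type == predicted_type:
        return True
    # Smart matching: a boolean field can be compared with a numeric field
    # whose observed values are only 0 or 1.
    if actual_type == "boolean" and predicted_type in NUMERIC_TYPES:
        return predicted_values is not None and set(predicted_values) <= {0, 1}
    # Anything else (e.g. boolean vs keyword) should be rejected upfront
    # with a clear validation error instead of a script failure.
    return False
```

For example, `fields_compatible("boolean", "long", [0, 1, 1])` would be accepted, while `fields_compatible("boolean", "long", [0, 2])` or `fields_compatible("boolean", "keyword")` would be rejected with a clear validation message.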