
Overflow behaviour of Double64 does not match Float64 #151

Open
KlausC opened this issue May 22, 2022 · 84 comments
@KlausC

KlausC commented May 22, 2022

Please re-open this issue:
The root cause seems to be in DoubleFloats:

julia> Double64(1e300)^2
NaN

julia> Float64(1e300)^2
Inf

Originally posted by @KlausC in #149 (comment)

@KlausC
Author

KlausC commented May 22, 2022

A fix could look like so: in DoubleFloats/src/math/errorfree.jl:120

"""
    two_hilo_sum(a, b)

*unchecked* requirement `|a| ≥ |b|`
Computes `s = fl(a+b)` and `e = err(a+b)`.
"""
@inline function two_hilo_sum(a::T, b::T) where {T<:FloatWithFMA}
    isfinite(b) || return a, a
    s = a + b
    e = b - (s - a)
    return s, e
end
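
For illustration: the failure can be reproduced with plain Float64s. Once the high-part sum overflows, the error term of the unchecked algorithm becomes -Inf, and the next error-free step turns the (Inf, -Inf) pair into NaN. A minimal sketch (the helper `hilo_sum` below only mirrors the unchecked algebra; it is not package API):

```julia
# unchecked two_hilo_sum algebra (sketch, not the package function)
hilo_sum(a, b) = (s = a + b; (s, b - (s - a)))

a = floatmax(Float64)
s, e = hilo_sum(a, a)     # s == Inf, e == a - Inf == -Inf
s2, e2 = hilo_sum(s, e)   # Inf + (-Inf) == NaN: the NaN seen in Double64 results
```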

@JeffreySarnoff
Member

This is a deep, heavily called function. I need to see if there is a way to resolve the issue that is less impactful.

@JeffreySarnoff
Member

Fortunately, this anomaly is not pervasive.

julia> exp(Double64(1e300))
Inf
julia> tanpi(Double64(1/2))
Inf

If the behavior is limited to one or just a few functions, it makes more sense to trap this case within ^ etc. I am checking other functions for the same behavior.

@JeffreySarnoff
Member

This issue is based in a corner case for multiplication or squaring, when the magnitudes get huge:

julia> a=Double64(sqrt(floatmax(Float64)))
1.3407807929942596e154

julia> a*a
1.7976931348623155e308

julia> a=Double64(1.000000000000001*sqrt(floatmax(Float64)))
1.340780792994261e154

julia> a*a
NaN

The next step is looking into this more deeply.

@JeffreySarnoff
Member

JeffreySarnoff commented May 23, 2022

omitted

@KlausC
Author

KlausC commented May 23, 2022

I think the issue is with all base operations (+, -, *, /): they return NaN in cases where they should return Inf or -Inf.
julia> a = floatmax(Double64)
1.7976931348623157e308

julia> a + a
NaN

julia> -a - a
NaN

julia> a * a
NaN

julia> a / Double64(0.0)
Inf

julia> a / (1/a)
NaN

julia> 1/a
5.562684646268003e-309

@JeffreySarnoff
Member

That is clear, and helpful.

@JeffreySarnoff
Member

I found the problem -- when the result is Inf, the Double64 computation can yield HILO(result) == (Inf, -Inf) [or (-Inf, Inf)], which prints as NaN when it should print as HI(result). I need to see whether this is enough, or whether the result must be remade into the general form used in the package, (Inf, NaN) or (-Inf, NaN), to carry through subsequent calculations.

@JeffreySarnoff
Member

JeffreySarnoff commented May 23, 2022

THIS IS INCORRECT, see next

The results need to be remade -- or something else modified

julia> exp(Double64(Inf))
Inf

julia> exp(Double64((Inf,-Inf))) # !! incorrect initialization
NaN

julia> dump(ans)
Double64
  hi: Float64 NaN
  lo: Float64 NaN

@JeffreySarnoff
Member

Ignore last comment --

julia> b=Double64(Inf,-Inf)
Inf

julia> HILO(b)
(Inf, -Inf)

julia> exp(b)
Inf

@JeffreySarnoff
Member

Still, there is the problem you highlighted -- and unfortunately it is not a string display issue:

julia> a = floatmax(Float64)
1.7976931348623157e308

julia> b = floatmax(Double64)
1.7976931348623157e308

julia> a+a
Inf

julia> b+b
NaN

julia> HILO(ans)
(NaN, NaN)

@KlausC
Author

KlausC commented May 23, 2022

I think we must first make the base operations correct. The exp example should be considered later.


julia> d = Double64(Inf, -Inf)
Inf

julia> d + d
NaN

julia> d = Double64(Inf, Inf)
Inf

julia> d + d
NaN

@JeffreySarnoff
Member

JeffreySarnoff commented May 23, 2022

It appears that I have to check for Inf after the initial arithmetic operation (which establishes the HI part) and return immediately when it is +/- Inf (or NaN, although that will propagate anyway). This changes the routines in op_dddd_dd.jl and probably those where one of the args is dd and the other Float64 (although I have not checked yet).

@KlausC
Author

KlausC commented May 23, 2022

I agree. The question is where to insert the checks to avoid runtime regressions as far as possible.
In my experience the intermediate terms are "contaminated" with NaN when operations like Inf - Inf are induced.

@JeffreySarnoff
Member

JeffreySarnoff commented May 23, 2022

the obvious place is to change (and similarly for sub_, mul_, dvi_)

@inline function add_dddd_dd(x::Tuple{T,T}, y::Tuple{T,T}) where T<:IEEEFloat
    xhi, xlo = x
    yhi, ylo = y
    hi, lo = two_sum(xhi, yhi)
    thi, tlo = two_sum(xlo, ylo)
    c = lo + thi
    hi, lo = two_hilo_sum(hi, c)
    c = tlo + lo
    hi, lo = two_hilo_sum(hi, c)
    return hi, lo
end

into

@inline function add_dddd_dd(x::Tuple{T,T}, y::Tuple{T,T}) where T<:IEEEFloat
    xhi, xlo = x
    yhi, ylo = y
    hi, lo = two_sum(xhi, yhi)
    isinf(hi) && return (hi, lo)                # inserting this
    thi, tlo = two_sum(xlo, ylo)
    c = lo + thi
    hi, lo = two_hilo_sum(hi, c)
    c = tlo + lo
    hi, lo = two_hilo_sum(hi, c)
    return hi, lo
end

It would be better if there were a less pervasive way, though.

@KlausC
Author

KlausC commented May 23, 2022

Consequently, all other *_dddd_dd should be modified accordingly.

But since add_dddd_dd is called many times internally, it may be possible to change only the user-facing situations.
For example, add an add_dddd_dd_checked modified like above, then call that only in add_dbdb_db.

All other calls of *_dddd_dd must be checked as well, to find out whether they have to be replaced by the _checked versions.
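
A minimal sketch of that split, assuming `two_sum` / `two_hilo_sum` below mirror the package's error-free primitives (the name `add_dddd_dd_checked` is the proposal above, not existing API, and the `(hi, hi)` nonfinite encoding is only illustrative):

```julia
# error-free primitives (standard Knuth / Dekker forms, for illustration)
function two_sum(a, b)
    s = a + b
    v = s - a
    return s, (a - (s - v)) + (b - v)
end
two_hilo_sum(a, b) = (s = a + b; (s, b - (s - a)))

# unchecked fast path, condensed from op_dddd_dd.jl
function add_dddd_dd(x, y)
    hi, lo = two_sum(x[1], y[1])
    thi, tlo = two_sum(x[2], y[2])
    hi, lo = two_hilo_sum(hi, lo + thi)
    hi, lo = two_hilo_sum(hi, tlo + lo)
    return hi, lo
end

# hypothetical checked entry point, called only from user-facing add_dbdb_db
function add_dddd_dd_checked(x, y)
    hi = x[1] + y[1]
    isfinite(hi) || return (hi, hi)   # propagate ±Inf (and NaN) immediately
    return add_dddd_dd(x, y)
end

a = floatmax(Float64)
add_dddd_dd((a, 0.0), (a, 0.0))          # -> (NaN, NaN)
add_dddd_dd_checked((a, 0.0), (a, 0.0))  # -> (Inf, Inf)
```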

@JeffreySarnoff
Member

Hmm, good thought.

@JeffreySarnoff
Member

JeffreySarnoff commented May 23, 2022

e.g. replacing (the | should be || in any event)

@inline function add_dbdb_db(x::DoubleFloat{T}, y::DoubleFloat{T}) where {T<:IEEEFloat}
    (isnan(LO(x)) | isnan(LO(y))) && return add_dbdb_db_nonfinite(x,y)
    return DoubleFloat{T}(add_dddd_dd(HILO(x), HILO(y)))
end

with

@inline function add_dbdb_db(x::DoubleFloat{T}, y::DoubleFloat{T}) where {T<:IEEEFloat}
    !isfinite(HI(x) + HI(y)) && return add_dbdb_db_nonfinite(x,y)
    return DoubleFloat{T}(add_dddd_dd(HILO(x), HILO(y)))
end

or

@inline function add_dbdb_db(x::DoubleFloat{T}, y::DoubleFloat{T}) where {T<:IEEEFloat}
    isfinite(HI(x) + HI(y)) && return DoubleFloat{T}(add_dddd_dd(HILO(x), HILO(y)))
    add_dbdb_db_nonfinite(x,y)
end
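
The `HI(x) + HI(y)` pretest is attractive because a single Float64 addition flags overflow, Inf inputs, and NaN inputs alike. A sketch over bare hi parts (`guard_needed` is a name invented here, not package API):

```julia
guard_needed(xhi, yhi) = !isfinite(xhi + yhi)

guard_needed(floatmax(Float64), floatmax(Float64))  # true: the sum overflows
guard_needed(Inf, 1.0)                              # true: Inf operand
guard_needed(NaN, 1.0)                              # true: NaN operand
guard_needed(1.0, 2.0)                              # false: fast path
```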

@KlausC
Author

KlausC commented May 23, 2022

That would work at the cost of one additional Float64 op.

(Actually, I did not understand why the original test for infinity was based on the NaNs in the LO parts of the arguments.)

@JeffreySarnoff
Member

The encoding of (+/-Inf, NaN) for +/-Inf and (NaN, NaN) for NaN was chosen so that isnan(LO(x)) would select nonfinite values (it was the fastest way that occurred to me at the time). However, the following could be used on LO(x) (or HI(x::Double64)) too:

isnonfinite(x::Float64) = reinterpret(UInt64,x) & 0x7FFFFFFFFFFFFFFF >= 0x7FF0000000000000
isfinite(x::Float64) = reinterpret(UInt64,x) & 0x7FFFFFFFFFFFFFFF < 0x7FF0000000000000
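
For what it's worth, the bit test agrees with Base.isfinite across the interesting values (quick check; `isnonfinite_bits` is renamed here to avoid shadowing Base):

```julia
# same mask-and-compare as above: exponent bits all set <=> Inf or NaN
isnonfinite_bits(x::Float64) = reinterpret(UInt64, x) & 0x7fffffffffffffff >= 0x7ff0000000000000

for x in (0.0, -0.0, 1.5, floatmax(Float64), floatmin(Float64)/8, Inf, -Inf, NaN)
    @assert isnonfinite_bits(x) == !isfinite(x)
end
```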

@KlausC
Author

KlausC commented May 23, 2022

Your last proposal looks more obvious to me (I mean isfinite(x::Double64) = isfinite(HI(x))).

@JeffreySarnoff
Member

agreed

@JeffreySarnoff
Member

The same corrections need to be done to the DoubleT op FloatT (and FloatT op DoubleT) user-facing routines.

@JeffreySarnoff
Member

I will implement these changes on a new branch (with tests) and post here when done.

@KlausC
Author

KlausC commented May 23, 2022

I saw that square, cube, and ^ are also involved.

@JeffreySarnoff
Member

ok .. let me know if you notice others

@JeffreySarnoff
Member

JeffreySarnoff commented May 23, 2022

I understand that the ops in op_dddd_dd.jl and op_ddfp_dd.jl and op_fpdd_dd.jl and op_dd_dd.jl and many or all in op_fp_dd.jl, op_ddsi_dd.jl need this adjustment.

@KlausC
Author

KlausC commented May 23, 2022

Also returning NaN: cbrt (sqrt works), add2, sub2, mul2, div2 from arith.jl

@KlausC
Author

KlausC commented May 23, 2022

I understand that the ops in op_dddd_dd.jl and op_ddfp_dd.jl and op_fpdd_dd.jl and op_dd_dd.jl and many or all in op_fp_dd.jl, op_ddsi_dd.jl need this adjustment.

I am not sure about that -- those functions are not exported (afaik). Modifying only the exported functions (with sufficient test cases for those) would not have as much impact on performance, I guess.

@KlausC
Author

KlausC commented May 24, 2022

Sorry, I wasn't aware of the normality concept. How is it defined? Do the user functions always return normalized DoubleFloats?

@JeffreySarnoff
Member

Yes. Great care is taken to assure the user facing functions consistently provide normalized results.

A normalized Double64 is a pair (HI::Float64, LO::Float64) where
criterion 1: abs(HI) >= abs(LO)  # equality when both are 0.0
criterion 2: HI + LO == HI  (the idea is that their bits do not overlap; a half-bit overlap may be ok (HI - LO != HI + LO))
criterion 3: the value-separating function always does the same thing (hi, lo = two_sum(a, b) always yields the same hi, lo)
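
The criteria can be checked mechanically; `two_sum` below is the standard Knuth error-free sum (a sketch, assumed to match the package's primitive), which renormalizes an arbitrary (a, b) pair:

```julia
# error-free sum: s is the rounded sum, e the exact rounding error
function two_sum(a, b)
    s = a + b
    v = s - a
    return s, (a - (s - v)) + (b - v)
end

hi, lo = two_sum(1.0, 2.0^-60)
# criterion 1: abs(hi) >= abs(lo)
# criterion 2: hi + lo == hi  (lo lies entirely below hi's last bit)
```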

more to follow

@JeffreySarnoff
Member

Any error-free transformation takes a mathematical (usually arithmetic or related) function of a typed variable and generates both the commonly resolved result and a very good approximation to the error in that result. Another way to look at this is with two functions, both operationally the same, one working at twice the precision of the other:

y64  = fn(x::Float64)
y128 = fn(x::Float128)   # a hypothetical doubled-precision type
# now take the most significant "half" of y128 and the remaining half
yhi = Float64(y128)
ylo = Float64(y128 - yhi)
# this is just about what two_sum, two_prod do, without the doubled-precision type being explicit
hi, lo = two_prod(x, y)
hi + lo == hi        # in Float64 rounding
x * y == hi          # the rounded product
x * y <--> hi + lo   # the exact product equals hi + lo exactly

A Double64 is the same (hi, lo) pair. Arithmetic and elementary functions of Double64s maintain and develop this hi-part, lo-part relationship.

@JeffreySarnoff
Member

JeffreySarnoff commented May 24, 2022

here is the error-free product of two Float64 values

function two_prod(a::T, b::T) where {T<:Float64}
    hi = a * b              # the product, computed as accurately as the precision allows
    # we want to know what is "left over" from the multiplication, hidden from `hi`
    lo = fma(a, b, -hi)     # recompute a*b - hi using fma to avoid an extra rounding step
    return hi, lo
end

Here hi = roundnearest(a * b) and lo = roundnearest(a * b - roundnearest(a * b)): lo is the rounded difference between the exact product and the rounded product, so it captures the "left over" floating-point value of the product with respect to the arbitrarily precise value of that product. The accuracy of lo is limited to not more than the precision of lo (53 sigbits). In fact, the generally accepted number of significant bits in a Double64 is 106 (53 sig bits from the hi part and 53 sig bits from the lo part). I presuppose that 4-5 of the lowest-order bits are fuzzed in cascaded function evaluation, and that 106 - 8 = 98 bits should be good (these functions try, and generally succeed, at keeping 98-100 good bits).

@KlausC
Author

KlausC commented May 24, 2022

Another failing test case:

julia> d = floatmax(Double64) / Double64(100.0)
1.7976931348623156e306

julia> d * 2
NaN

julia> d * 1
NaN

julia> dump(d)
Double64
  hi: Float64 1.7976931348623156e306
  lo: Float64 NaN


@KlausC
Author

KlausC commented May 24, 2022

As a consequence of

@inline function dvi_dbdb_db(x::DoubleFloat{T}, y::DoubleFloat{T}) where {T<:IEEEFloat}
    (isfinite(HI(x) * HI(y)) && isfinite(HI(x) * inv(HI(y))))

The first part of the condition is simply wrong! What was the purpose of introducing it?

@JeffreySarnoff
Member

JeffreySarnoff commented May 24, 2022

That was an error from late last night. I have the same setup in all the dvi_.._db functions; I will remove them.

@KlausC
Author

KlausC commented May 24, 2022

Then we also have to special-case y == Inf and return zero in that case, maybe with the correct sign.

@KlausC
Author

KlausC commented May 24, 2022

I am still working to get log correct for huge and tiny args.

@JeffreySarnoff
Member

JeffreySarnoff commented May 24, 2022

this may help with log, exp
(it is an open source MIT license implementation by Masahide Kashiwagi 2021-08-29)
here

@JeffreySarnoff
Member

Here are the first 30 inverse factorials as Double64s.

invfactorials = [
Double64(0x1p+0, 0x0p+0),
Double64(0x1p-1, 0x0p+0),
Double64(0x1.5555555555555p-3, 0x1.5555555555555p-57),
Double64(0x1.5555555555555p-5, 0x1.5555555555555p-59),
Double64(0x1.1111111111111p-7, 0x1.1111111111111p-63),
Double64(0x1.6c16c16c16c17p-10, -0x1.f49f49f49f49fp-65),
Double64(0x1.a01a01a01a01ap-13, 0x1.a01a01a01a01ap-73),
Double64(0x1.a01a01a01a01ap-16, 0x1.a01a01a01a01ap-76),
Double64(0x1.71de3a556c734p-19, -0x1.c154f8ddc6cp-73),
Double64(0x1.27e4fb7789f5cp-22, 0x1.cbbc05b4fa99ap-76),
Double64(0x1.ae64567f544e4p-26, -0x1.c062e06d1f209p-80),
Double64(0x1.1eed8eff8d898p-29, -0x1.2aec959e14c06p-83),
Double64(0x1.6124613a86d09p-33, 0x1.f28e0cc748ebep-87),
Double64(0x1.93974a8c07c9dp-37, 0x1.05d6f8a2efd1fp-92),
Double64(0x1.ae7f3e733b81fp-41, 0x1.1d8656b0ee8cbp-97),
Double64(0x1.ae7f3e733b81fp-45, 0x1.1d8656b0ee8cbp-101),
Double64(0x1.952c77030ad4ap-49, 0x1.ac981465ddc6cp-103),
Double64(0x1.6827863b97d97p-53, 0x1.eec01221a8b0bp-107),
Double64(0x1.2f49b46814157p-57, 0x1.2650f61dbdcb4p-112),
Double64(0x1.e542ba4020225p-62, 0x1.ea72b4afe3c2fp-120),
Double64(0x1.71b8ef6dcf572p-66, -0x1.d043ae40c4647p-120),
Double64(0x1.0ce396db7f853p-70, -0x1.aebcdbd20331cp-124),
Double64(0x1.761b41316381ap-75, -0x1.3423c7d91404fp-130),
Double64(0x1.f2cf01972f578p-80, -0x1.9ada5fcc1ab14p-135),
Double64(0x1.3f3ccdd165fa9p-84, -0x1.58ddadf344487p-139),
Double64(0x1.88e85fc6a4e5ap-89, -0x1.71c37ebd1654p-143),
Double64(0x1.d1ab1c2dccea3p-94, 0x1.054d0c78aea14p-149),
Double64(0x1.0a18a2635085dp-98, 0x1.b9e2e28e1aa54p-153),
Double64(0x1.259f98b4358adp-103, 0x1.eaf8c39dd9bc5p-157),
Double64(0x1.3932c5047d60ep-108, 0x1.832b7b530a627p-162)
];
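
As a quick sanity check (pure Float64, no DoubleFloats needed): the hi parts above are the correctly rounded Float64 values of 1/n!, and the lo parts carry the residual below the hi part's last bit. For the first few entries:

```julia
# hi parts of the first five inverse factorials, copied from the table above
his = [0x1p+0, 0x1p-1, 0x1.5555555555555p-3, 0x1.5555555555555p-5, 0x1.1111111111111p-7]
for (n, hi) in enumerate(his)
    @assert hi == 1 / factorial(n)   # 1/1!, 1/2!, ..., 1/5!
end
```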

@JeffreySarnoff
Member

JeffreySarnoff commented May 24, 2022

Corrected the divide tests and covered one more subcase: x / y where isinf(y) && isinf(x) should be NaN rather than 0.
There are now tests in test/corners.jl for those cases.

Bring me up to speed on log and exp. What subrange[s] of their domains are computing with inaccuracy?
(or maybe wait on that, if you are in the midst of it).

@KlausC
Author

KlausC commented May 24, 2022

Precision failures for corner argument values.

log is not exact (1e-32) for arguments < 1e-290.
Reason: exp is not exact for arguments < -660.
Reason: inv is not exact for arguments > 1e290.

So it makes sense to proceed in the order inv => exp => log.
The accuracies are only 1e-17; expected is 1e-32.

Calculated relative errors for example:

fi(x) = (inv(x) - inv(big(x))) / inv(x)

@JeffreySarnoff
Member

Sounds reasonable. Maybe we need to check for magnitudes >= ldexp(1.0, 996) (textbook) or ldexp(1.0, 995) (safer), and when found, use magnitude-specific handling (an alternative implementation of two_sum). Those checks are quite costly -- a more general approach would be to let the user choose between absolute error management (keeping any uncertain bits in the least significant third or quarter of the LO bits) and throughput, when working within more usual numeric spreads.

@KlausC
Author

KlausC commented May 25, 2022

Forget what I said about inv, exp, log. These are not to blame!

With the given implementation it is simply not possible to represent their tiny return values with higher precision than we observe.
The reason is that the LO part of the representation becomes a subnormal Float64, which gradually restricts the relative precision from eps(Double64) down to eps(Float64).
As the documentation of floatmin says:

help?> floatmin
  floatmin(T = Float64)
  Return the smallest positive __normal__ number representable by the floating-point type T.

it would be fair to have floatmin(Double64) = Double64(floatmin(Float64)) / eps(Float64)
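
The gradual precision loss is visible already in plain Float64: once a value is subnormal, its absolute spacing is pinned at 2^-1074 while the value keeps shrinking, so its relative spacing grows -- and the LO part of a tiny Double64 lives exactly in this regime. A quick demonstration:

```julia
x = floatmin(Float64)        # smallest positive normal Float64
y = x / 1024                 # a subnormal Float64

eps(x) / x == eps(Float64)           # normal: full 53-bit relative precision
eps(y) / y == 1024 * eps(Float64)    # subnormal: relative precision 1024x worse
```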

@KlausC
Author

KlausC commented May 25, 2022

I finished now a version of log which is precise for all finite inputs.


function log(x::D) where {T<:IEEEFloat, D<:DoubleFloat{T}}
    isnan(x) && return x
    isinf(x) && !signbit(x) && return x
    x === zero(D) && return neginf(D)
    y = DoubleFloat(log(HI(x)), zero(T))
    if HI(x) > floatmax(T) / 3
        z = exp(y - 1) # avoid spurious Inf results
        x = x * exp(-one(D))
        zx = x + ldexp(z - x, -1) # avoid overflow (x + z) / 2
    elseif HI(x) < floatmin(T) / eps(T)
        z = exp(y + 64) # avoid imprecise exp results
        x = x * exp(D(64.0))
        zx = ldexp(x + z, -1)
    else
        z = exp(y)
        zx = ldexp(x + z, -1)
    end
    adj = (z - x) / zx
    y = y - adj
    return y
end

@KlausC
Author

KlausC commented May 25, 2022

Some tests to be added to notnan.jl:


@testset "not NaN and NaN" begin

@testset "unary operators $op($T)" for op in (log, exp, sqrt, cbrt, square, cube, log2, log10, exp2, exp10),
    T in (Double64, Double32, Double16)
    
    d = floatmax(T)
    if op ∈ (exp, square, cube, exp2, exp10)
        @test isinf(op(d))
    else
        @test isfinite(op(d))
    end
    @test isinf(T(2.1) ^ d)
    @test isinf(3 ^ d)
    @test iszero(0.999 ^ d)
    @test isinf(10 ^ T(500.5))

    d = -floatmax(T)
    op ∉ (sqrt, log, log2, log10) && @test !isnan(op(d))
    d = T(NaN)
    @test isnan(op(d))
end

@testset "binary operators $T $op $T" for op in (+, -, *, /), T in (Double64, Double32, Double16)

    d1, d2 = floatmax(T), floatmax(T)
    @test !isnan(op(d1, d2))
    d1, d2 = floatmax(T), -floatmax(T)
    @test !isnan(op(d1, d2))
    d1, d2 = T(Inf), floatmax(T)
    @test !isnan(op(d1, d2))
    d1, d2 = T(Inf), T(Inf)
    @test isnan(op(d1, d2)) == (op ∈ (-, /))
    d1, d2 = zero(T), T(NaN)
    @test isnan(op(d1, d2))
end

end

@JeffreySarnoff
Member

excellent

@KlausC
Author

KlausC commented May 29, 2022

Unfortunately there are still NaNs in such harmless multiplications as:

julia> x = Double64((6.61334345850887e307, 4.073749943731599e291));

julia> y = Double64((2.718281828459045, 1.4456468917292502e-16))
2.718281828459045

julia> HILO(x)
(6.61334345850887e307, 4.073749943731599e291)

julia> HILO(y)
(2.718281828459045, 1.4456468917292502e-16)

julia> y * x
NaN

julia> x * y
NaN

@JeffreySarnoff
Member

At first glance, this appears to happen when the product should be >= floatmax(Double64) (e.g. prevfloat(x)*y and x*prevfloat(y) work as expected).
I will follow that clue.

@JeffreySarnoff
Member

The current value of floatmax(Double64) is stressing some basic arithmetic.
floatmax(Double64) == Double64((1.7976931348623157e308, 9.979201547673598e291))

julia> floatmax(Double64) / 3
NaN


julia> fmax = floatmax(Double64) * (6/7)
1.540879829881985e308

julia> fmax / 3
5.136266099606617e307

@KlausC
Author

KlausC commented May 29, 2022

I don't think it is a good idea to shrink the floatmax.

@JeffreySarnoff
Member

I am going through the arithmetic.

@JeffreySarnoff
Member

(note) at least part of the problem is in the way that fma handles Inf products

hiproduct = xhi * yhi
loproduct = fma(xhi, yhi, -hiproduct)

if hiproduct == Inf then loproduct == -Inf, and using (Inf, -Inf) in * generates NaN

I will post here after determining (a) where best to pretest for nonfinite product and (b) checking if that fixes known problems and (c) adding tests to cover.
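
The effect is easy to reproduce: when the rounded product overflows but the exact product is a finite real number, the fma residual is -Inf, and any later Inf + (-Inf) step yields NaN (plain Float64 sketch):

```julia
xhi, yhi = floatmax(Float64), 2.0
hiproduct = xhi * yhi                    # overflows to Inf
loproduct = fma(xhi, yhi, -hiproduct)    # exact product is finite, minus Inf -> -Inf
hiproduct + loproduct                    # Inf + (-Inf) == NaN downstream
```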

@KlausC
Author

KlausC commented May 30, 2022

My experience: it is good to start with (c) in order to keep the once broken corner cases as regression tests.
Otherwise it is easy to forget some of them, once fixed.

@JeffreySarnoff
Member

That is good advice.
I have merged your log, notnan tests, and redefinition of floatmin on #notnan.

I implemented an almost-correct adjustment to * to handle +/-Inf and not give NaN incorrectly
(the mul_ functions in op_ddfp_dd.jl, op_fpdd_dd.jl, and op_dddd_dd.jl).
Almost correct because, after doing so, the current tests pass with two exceptions.

These two should be zero (and were zero before the mul_ changes);
instead of wrapping to 0, they both evaluate slightly less than the wrap value:

julia> DoubleFloats.modhalfpi(0.5 * Double64(pi))
1.5707963267948966

julia> DoubleFloats.modqrtrpi(0.25 * Double64(pi))
0.7853981633974482

So, either it is a subtle aspect of the reworked mul_ (I swapped in a one-flop-faster implementation to help cover the conditional), or those routines (or functions they call) are slightly inexact .. to be determined.

For the additional tests, they need to cover more * and / cases with values near floatmax and floatmin and
1.0, ±(cbrt(floatmax), sqrt(floatmax), cbrt(floatmax)^2) in combinations

Do you have other value pairs, or predicates on results to test already in mind?

@KlausC
Author

KlausC commented May 30, 2022

At the moment I have nothing new; the following cases have already been reported:

@testset "corner cases NaN" begin
    x = Double64((6.61334345850887e307, 4.073749943731599e291))
    y = Double64((2.718281828459045, 1.4456468917292502e-16))
    @test !isnan(x * y)

    x = floatmax(Double64)
    y = Double64(3.0)
    @test !isnan(x / y)
end

@JeffreySarnoff
Member

I have had to spend time elsewhere, and need to do that for a short while longer.
