MaxMsg=1 not respected. second message ALWAYS gets redelivered as if its ackwait started before yielded #608

Closed
mattbdc opened this issue Aug 22, 2024 · 18 comments · Fixed by #626

Comments


mattbdc commented Aug 22, 2024

MaxMsgs=1 is not respected: the second message is ALWAYS redelivered even when it is acked within the ack wait, as if its ack wait timer started before the message was yielded (i.e. as if MaxMsgs=1 wasn't respected and the message was already sitting in the buffer before the loop came back around to the foreach). That's only a theory, but it is consistently reproducible. I get the point that one should handle redeliveries via idempotency or some KV lookup, since anything from network partitions, server restarts, client crashes, or strained internal buffers can cause them, but this case is consistently reproducible locally.

Observed behavior (local setup, VM and client app next to each other, no network chatter):

using NATS.Client.Core;
using NATS.Client.JetStream;

Console.WriteLine("Start");

var c = new NatsConnection(new NatsOpts
{
    Url = "nats://127.0.0.1:4222"
});

var s = new NatsJSContext(c);
var consumer = await s.GetConsumerAsync("email-queue", "email-queue-consumer");

await foreach (var i in consumer.ConsumeAsync<string>(opts: new NatsJSConsumeOpts
{
    MaxMsgs = 1
}))
{
    Console.WriteLine("TS" + DateTime.UtcNow + "   -> " + i.Data);
    await Task.Delay(32500); // ack wait is configured to 36 seconds on the consumer
    await i.AckAsync();
}

Console.WriteLine("Done");

Consumer info:

Configuration:

                    Name: email-queue-consumer
               Pull Mode: true
          Deliver Policy: All
              Ack Policy: Explicit
                Ack Wait: 36.00s
           Replay Policy: Instant
      Maximum Deliveries: 2
         Max Ack Pending: 1,000
       Max Waiting Pulls: 512

State:

  Last Delivered Message: Consumer sequence: 20 Stream sequence: 14 Last delivery: 1m49s ago
    Acknowledgment Floor: Consumer sequence: 20 Stream sequence: 14 Last Ack: 40.49s ago
        Outstanding Acks: 0 out of maximum 1,000
    Redelivered Messages: 0
    Unprocessed Messages: 0
           Waiting Pulls: 1 of maximum 512
➜  ~ nats stream info email-queue
Information for Stream email-queue created 2024-07-18 12:13:40

              Subjects: email.queue
              Replicas: 1
               Storage: File

Options:

             Retention: WorkQueue
       Acknowledgments: true
        Discard Policy: Old
      Duplicate Window: 2m0s
     Allows Msg Delete: true
          Allows Purge: true
        Allows Rollups: false

Limits:

      Maximum Messages: unlimited
   Maximum Per Subject: unlimited
         Maximum Bytes: unlimited
           Maximum Age: 1d0h0m0s
  Maximum Message Size: unlimited
     Maximum Consumers: unlimited

State:

              Messages: 0
                 Bytes: 0 B
        First Sequence: 17
         Last Sequence: 16 @ 2024-08-22 12:09:39 UTC
      Active Consumers: 1

Once set up, run the following:

nats publish email.queue 'ahhh1'
nats publish email.queue 'ahhh2'

Expected behavior

The second message shouldn't be seen twice, since it is acked within 32.5 seconds and the ack wait is 36s.

Server and client version

Server v2.10.17 and NATS.Net version 2.3.3. All local on Fedora 40, .NET 8, plenty of resources, no Docker.

Host environment

Linux, localhost, everything on the same machine.

Steps to reproduce

1. Set up the stream and consumer via the CLI.
2. Build and start the simple .NET app.
3. nats publish email.queue 'ahhh1'
4. nats publish email.queue 'ahhh2'

mattbdc changed the title from "MaxMsg=1 not respect second message ALWAYS gets redelivered" to "MaxMsg=1 not respected. second message ALWAYS gets redelivered as if its ackwait started before yielded" on Aug 22, 2024

mattbdc commented Aug 22, 2024

This is reproducible even if I bump the ack wait to 45 seconds with nats consumer edit --wait=45s.

/home/matt/ws/Nats.Demo/Nats.Demo/bin/Debug/net8.0/Nats.Demo
Start
TS8/22/2024 5:18:19 AM   -> ahh1
TS8/22/2024 5:18:52 AM   -> ahh2
TS8/22/2024 5:19:24 AM   -> ahh2
^C
Process finished with exit code 130.



mattbdc commented Aug 22, 2024

Here's a trace

< 2024/08/22 12:33:16.000524169  length=344 from=0 to=343
INFO {"server_id":"NBFOCDLIIQFCSS4UIVJTZKWPTPF3T6MQDYLTGUVKK5GGFM7HOSCS6HKM","server_name":"fedora-laptop","version":"2.10.9","proto":1,"go":"go1.22.0","host":"0.0.0.0","port":4222,"headers":true,"max_payload":1048576,"jetstream":true,"client_id":111,"client_ip":"127.0.0.1","xkey":"XCZI4W2QHARYNYZYXPBYGD7ABVMCR7VQ4RTNL2ZNPLFULIX5NXKXLKRR"} \r
> 2024/08/22 12:33:16.000583162  length=190 from=0 to=189
CONNECT {"echo":true,"verbose":false,"pedantic":false,"tls_required":false,"name":"NATS .NET Client","lang":".NET","version":"2.3.3","protocol":1,"headers":true,"no_responders":true}\r
PING\r
< 2024/08/22 12:33:16.000583664  length=6 from=344 to=349
PONG\r
> 2024/08/22 12:33:16.000591051  length=6 from=190 to=195
PING\r
< 2024/08/22 12:33:16.000591165  length=6 from=350 to=355
PONG\r
> 2024/08/22 12:33:16.000597470  length=39 from=196 to=234
SUB _INBOX.B1a13BRHp1WrMfjxLTNlKr.* 1\r
> 2024/08/22 12:33:16.000600449  length=117 from=235 to=351
PUB $JS.API.CONSUMER.INFO.email-queue.email-queue-consumer _INBOX.B1a13BRHp1WrMfjxLTNlKr.B1a13BRHp1WrMfjxLTNlM5 0\r
\r
< 2024/08/22 12:33:16.000638619  length=773 from=356 to=1128
MSG _INBOX.B1a13BRHp1WrMfjxLTNlKr.B1a13BRHp1WrMfjxLTNlM5 1 707\r
{"type":"io.nats.jetstream.api.v1.consumer_info_response","stream_name":"email-queue","name":"email-queue-consumer","created":"2024-08-08T11:49:03.10318283Z","config":{"durable_name":"email-queue-consumer","name":"email-queue-consumer","deliver_policy":"all","ack_policy":"explicit","ack_wait":45000000000,"max_deliver":2,"replay_policy":"instant","max_waiting":512,"max_ack_pending":1000,"num_replicas":0},"delivered":{"consumer_seq":29,"stream_seq":20,"last_active":"2024-08-22T05:30:47.837524321Z"},"ack_floor":{"consumer_seq":29,"stream_seq":20,"last_active":"2024-08-22T05:31:06.608245898Z"},"num_ack_pending":0,"num_redelivered":0,"num_waiting":0,"num_pending":0,"ts":"2024-08-22T05:33:16.638528915Z"}\r
> 2024/08/22 12:33:16.000713114  length=37 from=352 to=388
SUB _INBOX.stezM8rxis6jsu2L7YDf8O 2\r
> 2024/08/22 12:33:16.000718874  length=161 from=389 to=549
PUB $JS.API.CONSUMER.MSG.NEXT.email-queue.email-queue-consumer _INBOX.stezM8rxis6jsu2L7YDf8O 62\r
{"expires":30000000000,"batch":1,"idle_heartbeat":15000000000}\r
< 2024/08/22 12:33:18.000848469  length=6 from=1129 to=1134
PING\r
> 2024/08/22 12:33:18.000849315  length=6 from=550 to=555
PONG\r
< 2024/08/22 12:33:19.000540976  length=99 from=1135 to=1233
MSG email.queue 2 $JS.ACK.email-queue.email-queue-consumer.1.21.30.1724304799540839424.0 5\r
shit1\r
> 2024/08/22 12:33:19.000547903  length=161 from=556 to=716
PUB $JS.API.CONSUMER.MSG.NEXT.email-queue.email-queue-consumer _INBOX.stezM8rxis6jsu2L7YDf8O 62\r
{"expires":30000000000,"batch":1,"idle_heartbeat":15000000000}\r
< 2024/08/22 12:33:20.000901952  length=99 from=1234 to=1332
MSG email.queue 2 $JS.ACK.email-queue.email-queue-consumer.1.22.31.1724304800901829677.0 5\r
shit2\r
> 2024/08/22 12:33:20.000902484  length=161 from=717 to=877
PUB $JS.API.CONSUMER.MSG.NEXT.email-queue.email-queue-consumer _INBOX.stezM8rxis6jsu2L7YDf8O 62\r
{"expires":30000000000,"batch":1,"idle_heartbeat":15000000000}\r
< 2024/08/22 12:33:35.000903169  length=123 from=1333 to=1455
HMSG _INBOX.stezM8rxis6jsu2L7YDf8O 2 77 77\r
NATS/1.0 100 Idle Heartbeat\r
Nats-Last-Consumer: 31\r
Nats-Last-Stream: 22\r
\r
\r
< 2024/08/22 12:33:50.000903547  length=127 from=1456 to=1582
HMSG _INBOX.stezM8rxis6jsu2L7YDf8O 2 81 81\r
NATS/1.0 408 Request Timeout\r
Nats-Pending-Messages: 1\r
Nats-Pending-Bytes: 0\r
\r
\r
> 2024/08/22 12:33:50.000905690  length=161 from=878 to=1038
PUB $JS.API.CONSUMER.MSG.NEXT.email-queue.email-queue-consumer _INBOX.stezM8rxis6jsu2L7YDf8O 62\r
{"expires":30000000000,"batch":1,"idle_heartbeat":15000000000}\r
> 2024/08/22 12:33:52.000061475  length=84 from=1039 to=1122
PUB $JS.ACK.email-queue.email-queue-consumer.1.21.30.1724304799540839424.0 4\r
+ACK\r
< 2024/08/22 12:34:05.000903716  length=99 from=1583 to=1681
MSG email.queue 2 $JS.ACK.email-queue.email-queue-consumer.2.22.32.1724304800901829677.0 5\r
shit2\r
> 2024/08/22 12:34:05.000904260  length=161 from=1123 to=1283
PUB $JS.API.CONSUMER.MSG.NEXT.email-queue.email-queue-consumer _INBOX.stezM8rxis6jsu2L7YDf8O 62\r
{"expires":30000000000,"batch":1,"idle_heartbeat":15000000000}\r
< 2024/08/22 12:34:20.000905538  length=123 from=1682 to=1804
HMSG _INBOX.stezM8rxis6jsu2L7YDf8O 2 77 77\r
NATS/1.0 100 Idle Heartbeat\r
Nats-Last-Consumer: 32\r
Nats-Last-Stream: 22\r
\r
\r
> 2024/08/22 12:34:24.000562659  length=84 from=1284 to=1367
PUB $JS.ACK.email-queue.email-queue-consumer.1.22.31.1724304800901829677.0 4\r
+ACK\r
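
Reading the timestamps above (ack wait now 45s): shit2 is delivered at 12:33:20, immediately after shit1, because the client issues the next JS.API.CONSUMER.MSG.NEXT as soon as a message arrives. The application only acks shit1 at 12:33:52 (after its 32.5s delay), so shit2's ack wait, which started at delivery, expires and the server redelivers it at 12:34:05 (45s after 12:33:20); the ack for the original delivery of shit2 only goes out at 12:34:24, roughly 64s after delivery. In other words, the ack wait clock for the second message starts long before the application loop ever sees it.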


mattbdc commented Aug 22, 2024

[screenshot attachment]

Where's the ack gone?


mattbdc commented Aug 22, 2024

There's no +ACK between the delivery of ahh1 and ahh2 (okay, I used a different word than ahh in the trace above).


mattbdc commented Aug 22, 2024

For the avoidance of doubt, this also happens with await i.AckAsync(new AckOpts { DoubleAck = true }) too.


mattbdc commented Aug 22, 2024

I've tried it in Bun (it probably works in Node too, if you want to lose hair configuring ESM modules and so on). Exact same setup, I believe. The second message is not seen twice; i.e. Bun/Node works as expected against the same server/stream/consumer.

➜  natsjs more index.js
import { connect } from "nats";

const nc = await connect({ servers: ["nats://localhost:4222"] });
const js = await nc.jetstream()
const consumer = await js.consumers.get("email-queue", "email-queue-consumer")
const sleepMe = (ms) => new Promise(r => setTimeout(() => r(), ms))

const messages = await consumer.consume({ max_messages: 1});
for await (const m of messages) {
  console.log("TS" + (new Date()) + "   -> " + m.data);
  await sleepMe(32500)
  await m.ack();
}


mattbdc commented Aug 22, 2024

The Bun consumer correctly flushes the ack and waits 32.5 seconds before calling JS.API.CONSUMER.MSG.NEXT. The .NET version seems to have a separate loop calling JS.API.CONSUMER.MSG.NEXT that is ignorant of the consuming loop:

SUB _INBOX.XHQHFKJLZHFM6VVQ26B29H.* 1\r
PUB $JS.API.CONSUMER.INFO.email-queue.email-queue-consumer _INBOX.XHQHFKJLZHFM6VVQ26B29H.XHQHFKJLZHFM6VVQ26B21H 0\r
\r
< 2024/08/22 13:05:11.000671979  length=772 from=350 to=1121
MSG _INBOX.XHQHFKJLZHFM6VVQ26B29H.XHQHFKJLZHFM6VVQ26B21H 1 706\r
{"type":"io.nats.jetstream.api.v1.consumer_info_response","stream_name":"email-queue","name":"email-queue-consumer","created":"2024-08-08T11:49:03.10318283Z","config":{"durable_name":"email-queue-consumer","name":"email-queue-consumer","deliver_policy":"all","ack_policy":"explicit","ack_wait":45000000000,"max_deliver":2,"replay_policy":"instant","max_waiting":512,"max_ack_pending":1000,"num_replicas":0},"delivered":{"consumer_seq":41,"stream_seq":32,"last_active":"2024-08-22T05:53:17.41315325Z"},"ack_floor":{"consumer_seq":41,"stream_seq":32,"last_active":"2024-08-22T05:53:49.915881975Z"},"num_ack_pending":0,"num_redelivered":0,"num_waiting":0,"num_pending":0,"ts":"2024-08-22T06:05:11.671919217Z"}\r
> 2024/08/22 13:05:11.000674117  length=37 from=291 to=327
SUB _INBOX.XHQHFKJLZHFM6VVQ26B2HH 2\r
> 2024/08/22 13:05:11.000674336  length=175 from=328 to=502
PUB $JS.API.CONSUMER.MSG.NEXT.email-queue.email-queue-consumer _INBOX.XHQHFKJLZHFM6VVQ26B2HH 76\r
{"batch":1,"max_bytes":0,"idle_heartbeat":15000000000,"expires":30000000000}\r
< 2024/08/22 13:05:14.000039495  length=6 from=1122 to=1127
PING\r
> 2024/08/22 13:05:14.000040902  length=6 from=503 to=508
PONG\r
< 2024/08/22 13:05:15.000282940  length=99 from=1128 to=1226
MSG email.queue 2 $JS.ACK.email-queue.email-queue-consumer.1.33.42.1724306715282743443.0 5\r
shit1\r
> 2024/08/22 13:05:47.000789281  length=84 from=509 to=592
PUB $JS.ACK.email-queue.email-queue-consumer.1.33.42.1724306715282743443.0 4\r
+ACK\r
> 2024/08/22 13:05:47.000789440  length=175 from=593 to=767
PUB $JS.API.CONSUMER.MSG.NEXT.email-queue.email-queue-consumer _INBOX.XHQHFKJLZHFM6VVQ26B2HH 76\r
{"batch":1,"max_bytes":0,"idle_heartbeat":15000000000,"expires":30000000000}\r
< 2024/08/22 13:05:47.000789715  length=99 from=1227 to=1325
MSG email.queue 2 $JS.ACK.email-queue.email-queue-consumer.1.34.43.1724306716705750724.0 5\r
shit2\r


mattbdc commented Aug 22, 2024

Temporarily using a workaround now so I don't have to change all my handlers. I get that it's quite damaging performance-wise, but hopefully we get back to near zero unexpected redeliveries.

    // Workaround: fetch in batches of MaxMsgs and only request more once the
    // previous batch has been fully consumed, so pulls never overlap processing.
    public static async IAsyncEnumerable<NatsJSMsg<T>> ConsumeAsync2<T>(
        this INatsJSConsumer consumer, NatsJSConsumeOpts opts,
        [EnumeratorCancellation] CancellationToken cancellationToken = default) // requires using System.Runtime.CompilerServices
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            var fetcher = consumer.FetchAsync<T>(new NatsJSFetchOpts
            {
                MaxMsgs = opts.MaxMsgs,
                Expires = opts.Expires,
                IdleHeartbeat = opts.IdleHeartbeat,
                MaxBytes = opts.MaxBytes,
            }, cancellationToken: cancellationToken);

            await foreach (var msg in fetcher)
            {
                yield return msg;
            }
        }
    }
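
For completeness, a minimal usage sketch of the workaround (assuming the extension method above is placed in a static class that is in scope); it simply swaps ConsumeAsync for ConsumeAsync2 in the original loop:

// Hypothetical usage of the ConsumeAsync2 workaround; connection and consumer
// are set up exactly as in the first example.
await foreach (var msg in consumer.ConsumeAsync2<string>(new NatsJSConsumeOpts
{
    MaxMsgs = 1
}))
{
    Console.WriteLine("TS" + DateTime.UtcNow + "   -> " + msg.Data);
    await Task.Delay(32500); // simulated slow handler, still within the 36s ack wait
    await msg.AckAsync();
}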


mattbdc commented Aug 22, 2024

Eek I think this bug is pretty serious.

using NATS.Client.Core;
using NATS.Client.JetStream;

Console.WriteLine("Start");

var c = new NatsConnection(new NatsOpts
{
    Url = "nats://127.0.0.1:4222"
});

var s = new NatsJSContext(c);
var consumer = await s.GetConsumerAsync("email-queue", "email-queue-consumer");

// Publish 100 messages in the background while we consume.
_ = Task.Run(async () =>
{
    for (var i = 0; i < 100; i++)
    {
        await s.PublishAsync("email.queue", "ahhh" + i);
    }
});

await foreach (var i in consumer.ConsumeAsync<string>(opts: new NatsJSConsumeOpts
{
    MaxMsgs = 1
}))
{
    Console.WriteLine("TS" + DateTime.UtcNow + "   -> " + i.Data);
    await Task.Delay(32500);
    break; // WE DIE HERE. JUST ONE LOOP
}

await c.DisposeAsync();
Console.WriteLine("End");

This should leave 1 outstanding ack on a new/fresh stream.

But instead we get: Outstanding Acks: 100 out of maximum 1,000


mtmk commented Aug 22, 2024

maybe related to #508


mtmk commented Aug 22, 2024

Thanks for the detailed report @mattbdc, I think you're right, it's a bug. I was working on a fix in #508; are you able to check whether that branch fixes this issue?


mattbdc commented Aug 22, 2024

First test passes: no ahh2 twice.

Second test passes: 99 messages left unprocessed.

Last Delivered Message: Consumer sequence: 1,458 Stream sequence: 1,046 Last delivery: 6.26s ago
Acknowledgment Floor: Consumer sequence: 1,457 Stream sequence: 1,045 Last Ack: 2m12s ago
Outstanding Acks: 1 out of maximum 1,000
Redelivered Messages: 0
Unprocessed Messages: 99
Waiting Pulls: 0 of maximum 512

Looking good!! Thank you. But that's two use-cases so far.

It probably needs more real-world testing, but then again it's strange that no one else seems to have noticed this: the old behaviour would immediately drain the whole queue, looping JS.API.CONSUMER.MSG.NEXT as fast as it could up to max ack pending, irrespective of whether you had even finished processing the first message with MaxMsgs=1. Maybe people aren't using ConsumeAsync yet? I'm comfortable enough to try it in our prod environment (our current use case isn't business critical). I'll wire it up tomorrow and let it run over the weekend.


mtmk commented Aug 22, 2024

it's strange no one else seems to have noticed this

There was a discussion on slack: https://natsio.slack.com/archives/C1V81FKU6/p1717497381540349

Unfortunately Slack clears messages after a certain amount of time, so here is a copy of the initial message:

What role is MaxMsgs (defined in NatsJSConsumeOpts) supposed to play in ConsumeAsync? We were expecting it to set a ceiling on how many messages the client may fetch from the consumer in a pull request.
That is, the client would keep an internal buffer of at most MaxMsgs pre-fetched messages, from which it would yield the messages to the caller code. Once the MsgThreshold is reached, the client would refill the internal buffer by fetching more messages from the consumer (again, up to MaxMsgs). All in all, the messages in the internal buffer would amount to at most MaxMsgs ack pendings.
But what we are observing defies our expectations (or maybe our mental model of ConsumeAsync is just not correct 😅): we are always fetching up to 1000 messages (the default) no matter what value MaxMsgs is set to.
(How to reproduce is in the thread. Tested on client 2.2.1.) (edited)


mattbdc commented Aug 22, 2024

thanks. so, it looks like we are pulling until the internal channel is full which is set to a hard coded value of 1000.
@Miguel Pires
we have a fix to limit the channel capacity which in turn throttles pull requests. we still get a few more because of pre-fetching.

Thanks. This is concerning, though: can you confirm the latest branch is free of this additional pre-fetching (other than up to MaxMsgs), or that it can be disabled entirely? Pre-fetching beyond MaxMsgs would be problematic for me and would make everything non-deterministic. I want to establish a sensible ack wait while accepting the performance penalty of MaxMsgs=1, but I wouldn't know how if I have no idea when the ack wait timer starts for each message; it would need to be the desired ack wait times the maximum pre-fetch depth, which would be far too loose. I guess these problems become apparent when processing a message is rather slow (like talking to a flaky SaaS such as Mandrill, which can spike to 12 seconds per response) coupled with a backlog of unprocessed messages (i.e. a storm of emails to be sent); I'm going to see a lot of redelivery if there's some unknown pre-fetching too (with, say, an ack wait of 30s).

Update: if I'm reading https://github.com/nats-io/nats.net/pull/508/files correctly, nothing in there does any sneaky pre-fetching beyond MaxMsgs? That would correlate with the 99 unprocessed messages in the second test I did, so maybe it's all good :)


mtmk commented Aug 22, 2024

thanks. so, it looks like we are pulling until the internal channel is full which is set to a hard coded value of 1000.
@Miguel Pires
we have a fix to limit the channel capacity which in turn throttles pull requests. we still get a few more because of pre-fetching.

Thanks. This is concerning, though: can you confirm the latest branch is free of this additional pre-fetching (other than up to MaxMsgs), or that it can be disabled entirely? Pre-fetching beyond MaxMsgs would be problematic for me and would make everything non-deterministic. I want to establish a sensible ack wait while accepting the performance penalty of MaxMsgs=1, but I wouldn't know how if I have no idea when the ack wait timer starts for each message; it would need to be the desired ack wait times the maximum pre-fetch depth, which would be far too loose. I guess these problems become apparent when processing a message is rather slow (like talking to a flaky SaaS such as Mandrill, which can spike to 12 seconds per response) coupled with a backlog of unprocessed messages (i.e. a storm of emails to be sent); I'm going to see a lot of redelivery if there's some unknown pre-fetching too (with, say, an ack wait of 30s).

Update: if I'm reading https://github.com/nats-io/nats.net/pull/508/files correctly, nothing in there does any sneaky pre-fetching beyond MaxMsgs? That would correlate with the 99 unprocessed messages in the second test I did, so maybe it's all good :)

Yes, pre-fetching should not exceed the max-msgs limit. The idea is to fetch additional messages (up to the max-msgs limit) when the number of consumed messages reaches a threshold, which is set by default to half of max-msgs. This approach ensures a smooth flow of messages without overwhelming the application with too many at once. Our issue was that we were deciding whether to issue a new pull request based on when messages were written to the channel that the application code yielded from. PR #508 addresses this issue by adjusting the calculation to occur after each message is yielded, thereby avoiding erroneous internal buffering.
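
To make the mechanism concrete, here is a simplified sketch of the threshold logic described above. This is not the library's actual implementation; the sendPullRequestAsync/receiveMessageAsync delegates are hypothetical stand-ins for the client's internal request and channel plumbing:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Simplified model: count what has been requested but not yet handed to the app,
// and only top up after a message has actually been yielded to the caller.
sealed class ThresholdPuller<T>
{
    private readonly int _maxMsgs;
    private readonly int _thresholdMsgs;

    public ThresholdPuller(int maxMsgs, int? thresholdMsgs = null)
    {
        _maxMsgs = maxMsgs;
        _thresholdMsgs = thresholdMsgs ?? Math.Max(1, maxMsgs / 2); // default threshold: half of MaxMsgs
    }

    public async IAsyncEnumerable<T> ConsumeAsync(
        Func<int, ValueTask> sendPullRequestAsync, // hypothetical: ask the server for N more messages
        Func<ValueTask<T>> receiveMessageAsync)    // hypothetical: next message from the internal channel
    {
        await sendPullRequestAsync(_maxMsgs);      // initial pull: up to MaxMsgs
        var pending = _maxMsgs;                    // requested from the server, not yet yielded

        while (true)
        {
            var msg = await receiveMessageAsync();
            yield return msg;                      // hand the message to application code first
            pending--;

            // Decide whether to top up only AFTER the message has been yielded,
            // which is the correction PR #508 makes to the buffering calculation.
            if (pending <= _thresholdMsgs)
            {
                var batch = _maxMsgs - pending;
                await sendPullRequestAsync(batch);
                pending += batch;
            }
        }
    }
}

Because code after a yield return only resumes when the caller asks for the next message, with MaxMsgs=1 the refill check runs once the application comes back to the iterator, so no new pull is sent while a message is still being processed, matching the ordering seen in the Bun trace above.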


mattbdc commented Aug 22, 2024

So let's say you have MaxMsgs=30: it pulls 30 first, then after consuming 15 it would fetch up to another 15 (15x JS.API.CONSUMER.MSG.NEXT, presuming messages are available), and the internal buffer would never exceed 30. I would probably still appreciate a way to disable this, though, if that's possible. I have a particular use case in one worker where the API partner lets us batch requests to save money, so I have an abstraction over ConsumeAsync that basically does Task.WhenAny on MoveNextAsync and a Task.Delay (as we don't want to wait too long before wrapping up the batch); see the sketch below. With this I have to manage ack wait timing very carefully, and I worry the pre-fetching would throw it off, although I haven't drawn it all out yet.

Maybe a NoOverlapFetching : boolean option?
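
Roughly, that batching abstraction looks like the following. This is a minimal sketch, not the actual worker code; the batch size, the wait time, and the care taken not to call MoveNextAsync again while a previous call is still in flight are the important parts:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using NATS.Client.JetStream;

static class BatchingExtensions
{
    // Collect up to batchSize messages, but flush early if no new message
    // arrives within maxWait, so a partial batch is not held indefinitely.
    public static async IAsyncEnumerable<List<NatsJSMsg<T>>> BatchAsync<T>(
        this IAsyncEnumerable<NatsJSMsg<T>> source, int batchSize, TimeSpan maxWait)
    {
        await using var e = source.GetAsyncEnumerator();
        var batch = new List<NatsJSMsg<T>>();
        Task<bool>? pendingMoveNext = null;

        while (true)
        {
            // Reuse an in-flight MoveNextAsync from a previous timed-out wait;
            // MoveNextAsync must not be called again before the last call completes.
            var moveNext = pendingMoveNext ?? e.MoveNextAsync().AsTask();
            pendingMoveNext = null;

            var completed = await Task.WhenAny(moveNext, Task.Delay(maxWait));
            if (completed == moveNext)
            {
                if (!await moveNext)
                    break;                   // enumeration finished
                batch.Add(e.Current);
                if (batch.Count < batchSize)
                    continue;                // keep filling the batch
            }
            else
            {
                pendingMoveNext = moveNext;  // timed out; remember the pending call
            }

            if (batch.Count > 0)             // batch full or wait elapsed: flush it
            {
                yield return batch;
                batch = new List<NatsJSMsg<T>>();
            }
        }

        if (batch.Count > 0)
            yield return batch;              // flush whatever is left
    }
}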


mtmk commented Aug 22, 2024

So let's say you have MaxMsgs=30: it pulls 30 first, then after consuming 15 it would fetch up to another 15 (15x JS.API.CONSUMER.MSG.NEXT, presuming messages are available), and the internal buffer would never exceed 30. I would probably still appreciate a way to disable this, though, if that's possible. I have a particular use case in one worker where the API partner lets us batch requests to save money, so I have an abstraction over ConsumeAsync that basically does Task.WhenAny on MoveNextAsync and a Task.Delay (as we don't want to wait too long before wrapping up the batch). With this I have to manage ack wait timing very carefully, and I worry the pre-fetching would throw it off, although I haven't drawn it all out yet.

Maybe a NoOverlapFetching : boolean option?

We could potentially do that, but I wonder if you can get what you need by adjusting the threshold, or by using FetchAsync() instead?
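
For illustration, both suggestions might look roughly like this, assuming the threshold mentioned here maps to the ThresholdMsgs option on NatsJSConsumeOpts and reusing the consumer from the earlier examples; treat this as a sketch rather than a confirmed recommendation:

// (a) Raise the refill threshold so a new pull is only issued once the buffer is
//     nearly drained, then pass the options to ConsumeAsync as usual
//     (assumes ThresholdMsgs is the knob meant here).
var opts = new NatsJSConsumeOpts
{
    MaxMsgs = 30,
    ThresholdMsgs = 1, // refill only when almost empty, instead of the default MaxMsgs / 2
};

// (b) Or use FetchAsync in a loop for strictly non-overlapping batches, much like
//     the ConsumeAsync2 workaround earlier in this thread.
while (true)
{
    await foreach (var msg in consumer.FetchAsync<string>(new NatsJSFetchOpts { MaxMsgs = 30 }))
    {
        // process and ack; no further pull is sent until this batch is exhausted
        await msg.AckAsync();
    }
}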


mattbdc commented Aug 23, 2024

Yeah that's probably a good idea. Thanks
