
Allocate write buffers only when needed #9

Closed
garyburd opened this issue Dec 20, 2013 · 17 comments

@garyburd
Contributor

A write buffer is allocated to the connection for the lifetime of the connection, but the buffer is only needed when the application is writing a message. Use the Go 1.3 sync.Pool type (https://code.google.com/p/go/source/detail?r=2d9fe00d6ce1) to get and put write buffers as needed.
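For reference, a minimal sketch of the idea (the writeBufPool variable and withWriteBuf helper here are hypothetical, not part of the package):

import "sync"

var writeBufPool = sync.Pool{
    New: func() interface{} {
        // Allocated only when the pool is empty; 4096 is illustrative.
        return make([]byte, 4096)
    },
}

// withWriteBuf borrows a buffer for the duration of one write and
// returns it to the pool afterwards.
func withWriteBuf(f func(buf []byte)) {
    buf := writeBufPool.Get().([]byte)
    defer writeBufPool.Put(buf)
    f(buf)
}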

@aktau

aktau commented Mar 15, 2014

Sounds like a good idea. Do you advocate copying the (as mentioned, non-optimal and non-final) implementation, or just waiting for Go 1.3?

@garyburd
Contributor Author

I recommend waiting for the 1.3 release.

@GeertJohan

1.3 was released a while ago now; I think this would be a great feature!

@ChrisPowellIinc

He sent that in March...

@GeertJohan

Yes, I know. I just mean to say that I would be very interested in this feature; the stdlib now provides sync.Pool, so it could be used in gorilla/websocket. 😄

@garyburd
Contributor Author

garyburd commented Nov 5, 2014

I'd like to wait until Go 1.4 is released so that I can claim that the package works with the two most recent versions of Go.

@larsth

larsth commented Apr 28, 2015

@garyburd

Go is now at version 1.4.2. Do you prefer to wait for the release of Go 1.5?

Build tags could make sure that this lib keeps working everywhere: use the now-existing implementation for Go < 1.3, and sync.Pool for Go >= 1.3.
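Roughly, that split might look like this (the file names and the getWriteBuf helper are hypothetical):

// pool_go13.go
// +build go1.3

package websocket

import "sync"

var writeBufPool sync.Pool

// getWriteBuf reuses a pooled buffer when one is available.
func getWriteBuf(n int) []byte {
    if v := writeBufPool.Get(); v != nil {
        return v.([]byte)
    }
    return make([]byte, n)
}

// pool_pre13.go
// +build !go1.3

package websocket

// Without sync.Pool, fall back to a fresh allocation per write.
func getWriteBuf(n int) []byte {
    return make([]byte, n)
}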

@garyburd
Contributor Author

I think it's OK to add the feature now that 1.4 is out.

I am not in favor of using build tags because this approach will spread connection code across three files (common code, < 1.3 code, >= 1.3 code). I prefer to keep things simple.

@nemosupremo

I thought about contributing this, but after some thought I'm not sure what is gained by pooling here vs. c.writeBuf. The default write buffer is 4096 bytes and kicks around for the lifetime of the connection, but so does the read buffer (bufio.Reader), and given the size, the buffer never grows (the only place it can grow is in the append statements at the start, and if that entire statement fits within the buffer, the buffer never grows again). In short, you would save at most 4 KB per connection. Even if you did have a high number of idle connections, you are still paying 16 KB for each goroutine you run per connection anyway.

With sync.Pool you would trade that 4 KB saving per connection for increased GC activity. Any byte buffer that is Put into the Pool and isn't Get before the next GC is collected. If you aren't doing very many writes, it's no different from allocating a buffer any time you want to write a message. Under high load with very many active connections, it would be no different from what is done now.

My guess is that if you have a high number of idle connections and a few of them doing high traffic, you'd probably benefit; that's a case for it, but I'm not sure it's the use case to program against.

It would probably be best to benchmark, but I'm not convinced it would be worth the effort.

@garyburd
Contributor Author

If a goroutine can execute the pool get, the write to the connection, and the pool put without being rescheduled, then many active connections under load will share a small number of buffers.

It is worth coding for the scenario where there is a large number of inactive connections. Chat and notification services are examples of services with large numbers of inactive connections.

A benchmark should be written to determine if the change is beneficial. The benchmark will require more effort than the actual feature.
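A rough shape for such a benchmark (useBuf and the 4096 size are stand-ins, not package code):

import (
    "sync"
    "testing"
)

// useBuf stands in for framing and writing a message.
func useBuf(p []byte) { p[0] = 1 }

func BenchmarkExclusiveBuf(b *testing.B) {
    b.RunParallel(func(pb *testing.PB) {
        buf := make([]byte, 4096) // one buffer held per "connection"
        for pb.Next() {
            useBuf(buf)
        }
    })
}

func BenchmarkPooledBuf(b *testing.B) {
    var pool sync.Pool
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            buf, _ := pool.Get().([]byte) // nil on a pool miss
            if buf == nil {
                buf = make([]byte, 4096)
            }
            useBuf(buf)
            pool.Put(buf)
        }
    })
}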

@fungl164

For larger initial buffer sizes this makes a lot of sense, especially as you load up with idle connections. This sounds very desirable to me. Any progress/ETA on this? Thanks...

@y3llowcake
Contributor

I am now depending on this library for a service that handles millions of websocket connections that are mostly inactive. I have begun implementing this feature by necessity.

I propose to upstream a version of this that maintains the default functionality: creating a new connection allocates exclusive read and write buffers. This maintains zero-allocation behavior for applications with a small number of websockets and a large number of writes.

The change would be to add a new field to Dialer and Upgrader that causes buffers to be recycled via a pool instead. This would be highly recommended for applications with large numbers of mostly inactive connections.

Thoughts?

@garyburd
Contributor Author

garyburd commented Dec 12, 2016

Here's my proposal on how to implement the feature:

type BufferPool interface {
    Get() interface{}
    Put(interface{})
}

type Conn struct {
    ...
    writePool       BufferPool
    writeBufferSize int
    ...
}

type Upgrader struct {
    ...
    WriteBufferPool BufferPool
    ...
}

type Dialer struct {
    ...
    WriteBufferPool BufferPool
    ...
}

// acquireWriteBuf ensures c.writeBuf is set, taking a buffer from the
// pool when one is configured and available.
func (c *Conn) acquireWriteBuf() error {
    if c.writeBuf != nil {
        return nil
    }
    n := c.writeBufferSize + maxFrameHeaderSize
    if c.writePool != nil {
        if v := c.writePool.Get(); v != nil {
            p, ok := v.([]byte)
            if !ok || len(p) != n {
                return errors.New("bad value from write buffer pool")
            }
            c.writeBuf = p
            return nil
        }
    }
    c.writeBuf = make([]byte, n)
    return nil
}

// releaseWriteBuf returns the buffer to the pool (if any) so that idle
// connections do not hold a write buffer.
func (c *Conn) releaseWriteBuf() {
    if c.writePool != nil && c.writeBuf != nil {
        c.writePool.Put(c.writeBuf)
        c.writeBuf = nil
    }
}

Set c.writeBufferSize and c.writePool in Upgrader.Upgrade and Dialer.Dial, and remove the line of code that allocates c.writeBuf in newConn.

A sync.Pool can be used as a websocket.BufferPool.
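Since *sync.Pool has exactly the Get and Put methods declared above, wiring it up could be as simple as this sketch against the proposed API:

var _ BufferPool = (*sync.Pool)(nil) // compile-time check: *sync.Pool satisfies BufferPool

var upgrader = Upgrader{
    WriteBufferPool: &sync.Pool{},
}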

@y3llowcake
Contributor

In your proposal, what is the behavior if you don't set Upgrader/Dialer.WriteBufferPool? Does it fall back to connection-exclusive buffers, or opt you into a default pool?

@garyburd
Contributor Author

It falls back to exclusive buffers as shown in my previous comment.

@y3llowcake
Contributor

Hey Gary, I opened a PR: #192

It's woefully lacking in the area of tests, but I'd be happy to reconcile that if the overall approach seems fine to you.

@garyburd
Contributor Author

Closed by b378cae

@gorilla gorilla locked and limited conversation to collaborators Apr 19, 2019