Comments, documentation and questions #105
Conversation
readme.go (Outdated)
@@ -1,5 +1,8 @@
 package flatfs

 // TODO: now that datastore don't store CIDs but multihashes instead, this is really
Found out later that it's addressed in #103
flatfs.go (Outdated)
@@ -606,6 +624,12 @@ func (fs *Datastore) putMany(data map[datastore.Key][]byte) error {
 	if _, err := tmp.Write(value); err != nil {
 		return err
 	}
partially addressed in #36
flatfs.go (Outdated)
// TODO: honestly those rules are weird to me and might lead to data loss? Why a later concurrent op
// should not execute?
I suspect this has to do with flatfs not really being used as a generic datastore, but as a backing for a blockstore. This means that (key, value) pairs are immutable, so what is your other operation going to do other than a) write the data that's already present, or b) delete the data you just wrote, which is likely due to an application-layer bug?
I suspect this isn't the only place where we have these assumptions; another is related to the domain of valid key names.
Something non-obvious when reading this PR: this sync/dedup structure is shared by all write functions of the datastore, so a Delete could be cancelled by a recent Put, for example. That would be a perfectly legal scenario, even when attached to a blockstore. My point is that this code brings a lot of complexity for a rare case (deduplicating writes on the same key) for little benefit.
@@ -363,15 +379,15 @@ func (fs *Datastore) renameAndUpdateDiskUsage(tmpPath, path string) error {
 	fi, err := os.Stat(path)

 	// Destination exists, we need to discount it from diskUsage
-	if fs != nil && err == nil {
+	if fi != nil && err == nil {
Looks like a bug to me 😄
flatfs.go (Outdated)
// TODO: I don't understand why the temp files are not closed immediately
// here. After this point we either rename or delete them so why wait?
// It induces the complexity of the closer to avoid having too much open
// files.
// TODO: concurrent writes?
@Stebalien might be able to give more context here, but this may be about delayed/grouped disk syncing and other proposals like #77.
Closing the tmp files will result in the data being synced to disk, which could be expensive, so instead the changes are grouped and synced all together, which hopefully reduces disk thrashing.
This is just my guess though; someone more familiar with this code might have a more informed opinion.
> Closing the tmp files will result in the data being synced to disk

Ha, right, that's a very good point. So that way it's up to the kernel to stream that data to disk however makes the most sense.
I think these questions have been discussed enough; we can remove the last commit and merge the rest.
2022-06-03: @aschmahmann will handle this today as part of PR review day.
Force-pushed from fd93fc7 to c866877.
Rebased to fix conflicts and ready to go. Thanks @MichaelMure 🙏
While I was trying to understand this code, I added some comments and documentation to make that process easier for the next person.
I also found a possible bug (26abf67). It looks like a mistake, as `fs` here can't be nil. Additionally, I'm confused about certain things or noticed some possible improvements (846fdf9). I'm using this PR as a way to discuss those points; this last commit can be removed after that.