Comments, documentation and questions #105

Merged: 2 commits merged into ipfs:master from comment-and-question on Jun 10, 2022

Conversation

MichaelMure (Contributor)

While I was trying to understand this code, I added some comments and documentation to make that process easier for the next person.

I also found a possible bug (26abf67). It looks like a mistake, as fs here can't be nil.

Additionally, I'm confused about certain things and noticed some possible improvements (846fdf9). I'm using this PR as a way to discuss those points; that last commit can be removed afterwards.

readme.go Outdated
@@ -1,5 +1,8 @@
package flatfs

// TODO: now that datastore don't store CIDs but multihashes instead, this is really
MichaelMure (Contributor, Author)

Found out later that it's addressed in #103

flatfs.go Outdated
@@ -606,6 +624,12 @@ func (fs *Datastore) putMany(data map[datastore.Key][]byte) error {
if _, err := tmp.Write(value); err != nil {
return err
}

MichaelMure (Contributor, Author)

partially addressed in #36

flatfs.go Outdated
Comment on lines 190 to 191
// TODO: honestly those rules are weird to me and might lead to data loss? Why a later concurrent op
// should not execute?
Contributor

I suspect this has to do with flatfs not really being used as a generic datastore but as a backing for a blockstore. This means that (key, value) pairs are immutable, so what's your other operation going to do other than a) write the data that's already present, or b) delete the data you just wrote, which is likely due to an application-layer bug?

I suspect this isn't the only place where we have these assumptions; another is related to the domain of valid key names.
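
As a minimal illustration of that immutability argument (not the actual blockstore code; raw sha256 stands in for a real multihash/CID): when the key is derived from the hash of the value, a concurrent write to the same key can only carry the same bytes.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// blockKey derives a key from the block's contents, the way a
// content-addressed blockstore does (simplified: raw sha256 instead of a
// real multihash/CID).
func blockKey(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

func main() {
	// Two independent writers computing the key for the same bytes always
	// agree, so a "concurrent write to the same key" can only re-write the
	// identical value.
	fmt.Println(blockKey([]byte("hello block")) == blockKey([]byte("hello block"))) // true
}
```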

MichaelMure (Contributor, Author)

Something non-obvious when reading this PR: this sync/dedup structure is shared by all write functions of the datastore, so a Delete could be cancelled by a recent Put, for example. That would be a perfectly legal scenario, even when attached to a blockstore. My point is that this code brings a lot of complexity for a rare case (deduplicating writes on the same key) with little benefit.
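
To make that scenario concrete, here is a minimal sketch against go-datastore's in-memory datastore (assuming a go-datastore version where operations take a context). The only point is that a Put and a Delete racing on the same key are both legal, and which one lands last is observable, so collapsing or cancelling one of them changes the outcome.

```go
package main

import (
	"context"
	"fmt"
	"sync"

	datastore "github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
)

func main() {
	ctx := context.Background()
	// Thread-safe in-memory datastore standing in for flatfs.
	ds := dssync.MutexWrap(datastore.NewMapDatastore())

	key := datastore.NewKey("/block/abc")

	var wg sync.WaitGroup
	wg.Add(2)

	// A Put and a Delete racing on the same key: both are legal operations,
	// and whichever lands last should win.
	go func() {
		defer wg.Done()
		_ = ds.Put(ctx, key, []byte("value"))
	}()
	go func() {
		defer wg.Done()
		_ = ds.Delete(ctx, key)
	}()
	wg.Wait()

	has, _ := ds.Has(ctx, key)
	fmt.Println("key present after race:", has) // depends on the interleaving
}
```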

@@ -363,15 +379,15 @@ func (fs *Datastore) renameAndUpdateDiskUsage(tmpPath, path string) error {
fi, err := os.Stat(path)

// Destination exists, we need to discount it from diskUsage
- if fs != nil && err == nil {
+ if fi != nil && err == nil {
Contributor

Looks like a bug to me 😄
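
For context, a simplified sketch of the pattern the corrected line implements (not the actual flatfs code; diskUsage is an illustrative stand-in): the receiver fs is never nil at this point, so the useful nil check is on the os.FileInfo returned by os.Stat.

```go
package main

import (
	"fmt"
	"os"
)

// discountExisting mirrors the corrected check: os.Stat's FileInfo result is
// what may be nil, not the datastore receiver, so that's what we test before
// subtracting the old file's size from the usage counter.
func discountExisting(path string, diskUsage int64) int64 {
	fi, err := os.Stat(path)
	// Destination exists: discount its current size before it gets replaced.
	if fi != nil && err == nil {
		diskUsage -= fi.Size()
	}
	return diskUsage
}

func main() {
	fmt.Println(discountExisting("/tmp/does-not-exist", 1000)) // unchanged if the file is missing
}
```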

flatfs.go Outdated
Comment on lines 628 to 632
// TODO: I don't understand why the temp files are not closed immediately
// here. After this point we either rename or delete them so why wait?
// It induces the complexity of the closer to avoid having too much open
// files.
// TODO: concurrent writes?
Contributor

@Stebalien might be able to give more context here, but this may be about delayed/grouped disk syncing and other proposals like #77.

Closing the tmp files will result in the data being synced to disk, which could be expensive, so instead the changes are grouped and we sync them all together, which hopefully reduces disk thrashing.

This is just my guess, though. Someone more familiar with this code might have a more informed opinion.
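
A simplified sketch of that grouping idea (not the real flatfs putMany; names are illustrative): write all the temp files first, keep the handles open, then pay the sync/rename/close cost in one pass at the end.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeBatch writes every value to its own temp file first, keeps the handles
// open, and only once the whole batch has been written does it sync and rename
// the files, closing everything together. Illustrative only.
func writeBatch(dir string, values map[string][]byte) error {
	type pending struct {
		f    *os.File
		dest string
	}
	var files []pending

	// All handles are closed together at the end (success or error),
	// mirroring the "group the work, pay the cost once" idea.
	defer func() {
		for _, p := range files {
			p.f.Close()
		}
	}()

	// Phase 1: write every temp file, deferring the expensive sync/close.
	for name, value := range values {
		f, err := os.CreateTemp(dir, "put-")
		if err != nil {
			return err
		}
		files = append(files, pending{f: f, dest: filepath.Join(dir, name)})
		if _, err := f.Write(value); err != nil {
			return err
		}
	}

	// Phase 2: sync and rename everything in one pass (renaming a file that is
	// still open is fine on POSIX systems; the deferred Close runs afterwards).
	for _, p := range files {
		if err := p.f.Sync(); err != nil {
			return err
		}
		if err := os.Rename(p.f.Name(), p.dest); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	dir, _ := os.MkdirTemp("", "batch-")
	defer os.RemoveAll(dir)
	err := writeBatch(dir, map[string][]byte{"a.data": []byte("A"), "b.data": []byte("B")})
	fmt.Println("batch written:", err == nil)
}
```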

MichaelMure (Contributor, Author)

> Closing the tmp files will result in the data being synced to disk

Ha right, that's a very good point. So that way it's up to the kernel to stream that data to disk in whatever way makes the most sense.

@BigLep requested a review from aschmahmann May 6, 2022 15:33
@BigLep added this to the Best Effort Track milestone May 6, 2022
@MichaelMure (Contributor, Author)

I think the questions have been discussed enough; we can remove the last commit and merge the rest.

@BigLep commented Jun 3, 2022

2022-06-03: @aschmahmann will handle this today as part of PR review day.

@aschmahmann force-pushed the comment-and-question branch 2 times, most recently from fd93fc7 to c866877 on June 10, 2022 13:18
@aschmahmann (Contributor) left a comment

Rebased to fix conflicts and ready to go. Thanks @MichaelMure 🙏

@aschmahmann merged commit 8d63ceb into ipfs:master Jun 10, 2022