fix: ipfs file ls: list node size for directories #4861

Closed · wants to merge 1 commit

Conversation

@schomatis (Contributor)

List the size of the underlying node for directories to honor the Unix API.

If a test is needed: only the JSON encoding outputs the file size, and I'm not sure how to process that with the sharness framework.

Fixes #4580.
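
For reference, a minimal sketch of the behavior this PR aims for (not the actual go-ipfs code; `rawNode` and its fields are hypothetical stand-ins): for directories, list the size of the serialized node itself, mirroring how Unix `stat` reports a directory's own size rather than the size of its contents.

```go
// Minimal sketch of the intended `ipfs file ls` sizing behavior.
// rawNode and its fields are hypothetical stand-ins, not go-ipfs types.
package main

import "fmt"

type rawNode struct {
	data     []byte // serialized bytes of the DAG node
	isDir    bool
	filesize uint64 // unixfs filesize metadata (meaningful for files)
}

// entrySize returns the size to list for a node: the unixfs filesize
// for files, and the size of the directory object itself for directories.
func entrySize(n rawNode) uint64 {
	if n.isDir {
		return uint64(len(n.data))
	}
	return n.filesize
}

func main() {
	dir := rawNode{data: make([]byte, 312), isDir: true}
	file := rawNode{data: make([]byte, 64), filesize: 2048}
	fmt.Println(entrySize(dir), entrySize(file)) // 312 2048
}
```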

@schomatis schomatis requested a review from Kubuxu as a code owner March 22, 2018 18:40
@schomatis (Contributor, Author)

I was looking at the test file t0250-files-api.sh when I should have checked t0200-unixfs-ls.sh; I'll fix the second test.

License: MIT
Signed-off-by: Lucas Molas <schomatis@gmail.com>
@ghost commented Mar 24, 2018

It would be good to take directory sharding into account here too -- a directory is a unixfspb.Data_HAMTShard in these cases. We'd probably want the combined size of all shards.

@Stebalien (Member)

> It would be good to take directory sharding into account here too -- a directory is a unixfspb.Data_HAMTShard in these cases. We'd probably want the combined size of all shards.

Really, directories should list their sizes. Recursively reading the size of the hamt is going to be really annoying.
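
To make that cost concrete, here is a sketch of the traversal over a toy in-memory DAG; `shardNode`, its fields, and the `getNode` fetcher are hypothetical stand-ins, not go-ipfs APIs. The point is that every shard has to be fetched just to answer a size query.

```go
// Sketch of recursively summing the raw size of every shard in a
// HAMT-sharded directory. All types here are hypothetical stand-ins.
package main

import "fmt"

type shardNode struct {
	raw      []byte   // serialized bytes of this shard
	children []string // CIDs of child shards (empty for leaf shards)
}

// hamtSize fetches every shard reachable from cid and sums their
// serialized sizes -- the traversal that makes this approach annoying.
func hamtSize(cid string, getNode func(string) shardNode) uint64 {
	n := getNode(cid)
	total := uint64(len(n.raw))
	for _, child := range n.children {
		total += hamtSize(child, getNode)
	}
	return total
}

func main() {
	// Toy in-memory DAG: a root shard with two child shards.
	dag := map[string]shardNode{
		"root": {raw: make([]byte, 100), children: []string{"a", "b"}},
		"a":    {raw: make([]byte, 40)},
		"b":    {raw: make([]byte, 60)},
	}
	fmt.Println(hamtSize("root", func(c string) shardNode { return dag[c] })) // 200
}
```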

@ghost commented Mar 24, 2018

Not the size of the contents, just the raw size of the (sharded) directory object. Or do you mean that's something to precompute and stick into the object?

@Stebalien (Member)

> Not the size of the contents, just the raw size of the (sharded) directory object.

That still involves traversing the sharded directory. Not that bad, just annoying.

> Or do you mean that's something to precompute and stick into the object?

Yeah. Ideally, everything would have a filesize.
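
A sketch of what "precompute and stick into the object" could look like (purely illustrative; `dirNode` and its fields are hypothetical, not unixfs types): the cumulative size is written once when the directory node is built, so readers never have to traverse shards.

```go
// Illustrative sketch of precomputing a directory's size at build time.
package main

import "fmt"

type dirNode struct {
	childSizes []uint64 // sizes of the children, themselves precomputed
	filesize   uint64   // cumulative size, written once at build time
}

// buildDir computes the cumulative size when the node is created,
// making later size lookups O(1) instead of a full shard traversal.
func buildDir(childSizes []uint64) dirNode {
	var total uint64
	for _, s := range childSizes {
		total += s
	}
	return dirNode{childSizes: childSizes, filesize: total}
}

func main() {
	d := buildDir([]uint64{2048, 512, 128})
	fmt.Println(d.filesize) // 2688
}
```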

@whyrusleeping (Member)

> Yeah. Ideally, everything would have a filesize.

Add that to the ipld unixfs requirements listing

@Kubuxu (Member) left a review comment

SGTM; we can't simply resolve the issue of sharded directories without defeating most of the benefit of sharding.

@Stebalien (Member)

> Add that to the ipld unixfs requirements listing

ipld/legacy-unixfs-v2#7

@schomatis (Contributor, Author)

I got lost after the sharded-directory comments; should I modify something in this PR?

@Stebalien (Member)

@schomatis so, we've been discussing this:

And I'm now less convinced we should actually include the size. Normally, filesize tells you how big a file would be when downloaded. However, the directory size (in bytes) won't really tell you anything useful. Thoughts? I don't really have any strong opinions. My initial opinion was "follow unix" but I'm now less convinced.

@kevina (Contributor) commented Jun 19, 2018

> follow unix

is not a good guide here. Unix uses the physical directory size (that is, the size of the inode list), not the size of the contents or even the number of entries. In some ways this makes sense, and it is easy to implement, but it isn't very useful and it doesn't really make sense for us.
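
kevina's point is easy to observe with Go's standard library: `os.Stat` on a directory returns the size of the directory entry itself (e.g. 4096 bytes for a small directory on ext4), not the total size of its contents.

```go
// Demonstrates that Unix reports a directory's physical size, not the
// size of its contents. Uses only the Go standard library.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	info, err := os.Stat("/tmp") // any directory will do
	if err != nil {
		log.Fatal(err)
	}
	// On ext4 this typically prints 4096 regardless of what /tmp contains.
	fmt.Printf("%s: %d bytes (dir: %v)\n", info.Name(), info.Size(), info.IsDir())
}
```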

@schomatis (Contributor, Author)

As suggested in the previous comments, this PR doesn't seem to add much value, so I'm closing it.

@schomatis schomatis closed this Dec 13, 2018
@schomatis schomatis deleted the fix/ls/dir-size branch December 13, 2018 23:38