ipfs pin add is very slow / hanging #3505

Closed · Voker57 opened this issue Dec 14, 2016 · 27 comments

@Voker57 (Contributor) commented Dec 14, 2016

Version information:

go-ipfs version: 0.4.4-
Repo version: 4
System version: amd64/linux
Golang version: go1.7

Type:

Bug

Priority:

P3

Description:

I have a directory with 25 GB of files in IPFS, to which I'm adding another 4 GB file. After adding it, I'm trying to pin the new directory version.

ipfs pin add QmPfmRdxrBL3nV9gChCMfv9uAbQSGmh1mcZ6Y1BfwdTBFd
# Actively churns the disk for some time, with lots of ipfs daemon threads competing for IO, then the process consumes ~15% CPU indefinitely. I left it running for 10 hours, then killed it.
# However...
ipfs refs -r QmPfmRdxrBL3nV9gChCMfv9uAbQSGmh1mcZ6Y1BfwdTBFd > /dev/null
# Done in ~15 minutes.
@Kubuxu (Member) commented Dec 14, 2016

What happens if you try pinning after the refs call?

@Voker57 (Contributor, Author) commented Dec 14, 2016

Results are inconclusive. One time I got a fast pin add (~15 minutes); now it hangs anyway.

@Kubuxu (Member) commented Dec 14, 2016

If you can, providing the following debug info would be quite useful: https://github.com/ipfs/go-ipfs/blob/master/docs/debug-guide.md#beginning
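For context, the first step in that guide is to grab a goroutine stack dump from the daemon's pprof endpoint on the API port. A minimal sketch in Go, assuming the daemon is running locally with the default API address 127.0.0.1:5001 (the guide itself simply uses curl against the same URL):

package main

import (
	"io"
	"net/http"
	"os"
)

// Fetch the goroutine dump from the local go-ipfs daemon and save it to
// ipfs.stacks, the file name the debug guide asks for.
func main() {
	resp, err := http.Get("http://127.0.0.1:5001/debug/pprof/goroutine?debug=2")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("ipfs.stacks")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	// Stream the dump straight to disk.
	if _, err := io.Copy(out, resp.Body); err != nil {
		panic(err)
	}
}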

@Kubuxu (Member) commented Dec 14, 2016

It's best to collect it while you observe that it is stuck.

@Kubuxu (Member) commented Dec 14, 2016

Also make sure that the garbage collector is disabled; it might interfere if you are adding and not pinning.

@Voker57 (Contributor, Author) commented Dec 14, 2016

Debug info from the "stuck" phase: http://dump.bitcheese.net/files/lysomul/ipfs.debug.tar.gz
GC is disabled.

@Voker57 (Contributor, Author) commented Dec 14, 2016

In "stuck" phase ipfs daemon is sending 10MB/s somewhere. If command is interrupted, network activity disappears.

@wigy-opensource-developer

This looks really familiar from my gx publish hassles: #3052 (comment)

@Voker57 (Contributor, Author) commented Dec 28, 2016

I've run the same pin add with the daemon running in offline mode and got a clearer goroutine dump: http://dump.bitcheese.net/files/zapumel/ipfs.stacks

I'm new to Go so I'm kind of at a loss here, but it appears that fetchNodes is getting stuck selecting nodes from the channel returned by GetMany()? I ran the pin op several times and it gets stuck on different hashes each time. I'm not sure why no goroutines from github.com/ipfs/go-ipfs/merkledag.fetchNodes.func2 are in the dump.
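To make the suspected failure mode concrete, here is a toy Go program (a hypothetical illustration, not the actual merkledag code) showing how a bounded fetch pipeline deadlocks when the caller keeps feeding input while nothing is draining the output yet:

package main

import "fmt"

// fetchAll mimics the shape of the suspected bug: a small worker pool
// reads keys from a bounded 'in' channel and writes results to a bounded
// 'out' channel, but the caller only starts reading 'out' after it has
// finished sending every key. Once both buffers are full, the sender and
// the workers all block on each other and the program deadlocks.
func fetchAll(keys []string) []string {
	in := make(chan string, 4)
	out := make(chan string, 4)

	for i := 0; i < 4; i++ {
		go func() {
			for k := range in {
				out <- "node:" + k // blocks once 'out' is full
			}
		}()
	}

	for _, k := range keys {
		in <- k // blocks once 'in' is full and all workers are stuck
	}
	close(in)

	res := make([]string, 0, len(keys))
	for range keys { // with enough keys this point is never reached
		res = append(res, <-out)
	}
	return res
}

func main() {
	keys := make([]string, 100)
	for i := range keys {
		keys[i] = fmt.Sprintf("key-%d", i)
	}
	fmt.Println(len(fetchAll(keys))) // fatal error: all goroutines are asleep - deadlock!
}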

@Voker57 (Contributor, Author) commented Dec 29, 2016

Not sure why, but reverting 0c14b41 fixes the problem.

@Kubuxu (Member) commented Dec 29, 2016

Looks like the concurrency limit set over there was too low.

We should probably uncap it, check what level is reached during normal operation, and go from there.

@ghost commented Dec 29, 2016

Consistently seeing similar behaviour when trying to pin the 33c3 schedule (e.g. /ipfs/QmUFz3URXrbRSAUot3j5hPo3Nw3qcGywZXc4wMMKdAd8Js) on the gateways.

@Voker57 (Contributor, Author) commented Dec 29, 2016

The problem is not a low concurrency limit but a possible deadlock when both the output and input of FetchNodes() are full. I think I solved it by deferring the input into goroutines: #3550
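Applied to the toy program in the earlier sketch, the workaround described here amounts to performing the input sends from their own goroutine so the caller can start draining the output immediately. Again, this is a sketch of the idea, not the actual change in #3550; it drops into the same program as the sketch above:

// fetchAllFixed is the same pipeline as fetchAll in the earlier sketch,
// except the sends into 'in' are deferred to a separate goroutine. The
// caller drains 'out' right away, so the workers never end up blocked
// forever behind a full output buffer.
func fetchAllFixed(keys []string) []string {
	in := make(chan string, 4)
	out := make(chan string, 4)

	for i := 0; i < 4; i++ {
		go func() {
			for k := range in {
				out <- "node:" + k
			}
		}()
	}

	go func() { // feed the input from its own goroutine
		for _, k := range keys {
			in <- k
		}
		close(in)
	}()

	res := make([]string, 0, len(keys))
	for range keys {
		res = append(res, <-out)
	}
	return res
}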

@kevina (Contributor) commented Jan 4, 2017

@Kubuxu do you want to take this? I might be able to, but I would need a good test case to do what you suggested when you said:

Looks like the concurrency limit set over there was too low.

We should probably uncap it, check what level is reached during normal operation, and go from there.

@Kubuxu (Member) commented Jan 4, 2017

I would wait; I have to evaluate @Voker57's solution.

@whyrusleeping (Member) commented:

@Voker57 @Kubuxu could one of you check that the code in #3571 addresses the issue? I've had a hard time reproducing the failure in unit tests (though we should make a note to add 'pin a really big thing' as one of our larger scale integration tests)

@Kubuxu added the status/ready (Ready to be worked) label on Jan 10, 2017
@Voker57 (Contributor, Author) commented Jan 15, 2017

The issue is not only that pin add hangs, but also that it is slow. Is it really necessary for pin add to verify the integrity of the whole stored tree?

One way to solve this would be to keep an index of DAG links and traverse it when needed, only checking whether nodes exist on disk instead of parsing all of them each time. A proper implementation of the index would, imo, require adding a relational DB to ipfs (sqlite? a pluggable backend for postgres/mysql?), though it can be done quick & dirty with forward and reverse indexes in leveldb.
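A hypothetical sketch of this idea (none of these types or names exist in go-ipfs): with a link index, verifying that a pinned DAG is fully present only needs cheap existence checks against the datastore rather than reading and decoding every block:

package pinsketch

// blockIndex is a hypothetical index of the DAG: Has is a cheap
// existence check against the datastore, and Links serves a node's
// children from the index instead of by parsing the block itself.
type blockIndex interface {
	Has(key string) (bool, error)
	Links(key string) ([]string, error)
}

// allPresent walks the DAG rooted at 'root' using only the index,
// returning false as soon as a block is missing locally.
func allPresent(idx blockIndex, root string) (bool, error) {
	queue := []string{root}
	seen := make(map[string]bool)
	for len(queue) > 0 {
		k := queue[0]
		queue = queue[1:]
		if seen[k] {
			continue
		}
		seen[k] = true

		ok, err := idx.Has(k)
		if err != nil || !ok {
			return false, err
		}

		children, err := idx.Links(k)
		if err != nil {
			return false, err
		}
		queue = append(queue, children...)
	}
	return true, nil
}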

@whyrusleeping (Member) commented:

@Voker57 Yeah, we have a 'linkservice' abstraction that is supposed to be for this type of optimization. I'm going to bump this issue from 0.4.5 so we can focus on the speed of things before closing it.

@whyrusleeping modified the milestones: ipfs 0.4.6, ipfs 0.4.5 on Jan 17, 2017
@kevina (Contributor) commented Jan 17, 2017

Note: The 'linkservice' abstraction will only have an effect when raw leaves are used.

@Kubuxu (Member) commented Jan 18, 2017

@kevina the linkservice currently only provides an optimization when raw leaves are used; there is nothing stopping us from adding a cache/index there for other node types.
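For background on why raw leaves matter here: a raw leaf's CID carries the "raw" codec, so a link service can answer "no children" from the CID alone, while other node types still have to be loaded and decoded. A rough sketch of that check, using the go-cid package but with a hypothetical decodeLinks callback standing in for a real link service:

package linksketch

import cid "github.com/ipfs/go-cid"

// getLinks returns the children of a node. Raw leaves are known to be
// childless from the CID's codec alone, so no block read is needed;
// anything else falls back to fetching and decoding the block via the
// supplied callback.
func getLinks(c cid.Cid, decodeLinks func(cid.Cid) ([]cid.Cid, error)) ([]cid.Cid, error) {
	if c.Type() == cid.Raw {
		return nil, nil // raw leaf: guaranteed to have no links
	}
	return decodeLinks(c) // e.g. dag-pb: must load and parse the block
}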

@ghost commented Jan 28, 2017

I have the same problem.

I'm on IPFS 0.4.4.

I'm trying to duplicate files from one computer to another. The .ipfs directory is 42 GB. I got the list of pins from one computer and run pin add on the other. At first it went quickly, but now that the repo on the other computer has reached 28 GB, the process takes 10-15 seconds per hash, and there are 30,000 of them. Worst of all, after interrupting the process and restarting, everything is slow from the very beginning: even adding hashes that are already present in the repo is slow.

Is there any way to speed this process up?

@Voker57 mentioned this issue on Jan 29, 2017
@Voker57 (Contributor, Author) commented Jan 29, 2017

@balancer in 0.4.4 pin add is broken and can randomly deadlock with a lot of data to pin.
In the latest git it works properly, and you're welcome to test my patch #3642, which makes pinning of already-present data much faster.
Also, unrelated, but why do you have 30,000 pins? Are these all different directories that have to go to different parts of the filesystem?

@whyrusleeping (Member) commented:

@balancer This issue has been resolved in 0.4.5 (ipfs pin add will no longer hang and crash), and will be further improved in 0.4.6.

Adding 30,000 pins however will be very inefficient in 0.4.5 due to the issue that was resolved here: #3640

@whyrusleeping (Member) commented:

Closing this now, as the optimized code in #3598 was merged.

@whyrusleeping removed the status/ready (Ready to be worked) label on Feb 17, 2017
@JazzTp commented Oct 19, 2018

go-ipfs_v0.4.17_linux-amd64

ipfs pin add hangs; so far it has only happened with three files, the first time today.

ipfs add <hashcode> -t seems to be working normally; ipfs refs local, however, doesn't show those hashcodes, and of course they are not in the output of ipfs pin ls -t recursive either.

(Off topic: I haven't been using this node in a while, but I had done quite a bit of experimenting and some script writing. It's a 4.1 GB node; Max is set at 500 GB for now, though I wouldn't mind enlarging it. The real problem was something else: my internet service provider doesn't allow opening ports to receive incoming connections, and I couldn't figure out how to proxify the ipfs daemon. I searched for and tried various proxifiers with no success, so in the end I didn't bother buying a VPN/proxy service that allows port forwarding.)

@Stebalien (Member) commented:

@JazzTp please open a new issue. This one has been closed for quite a while.

@JazzTp commented Oct 22, 2018 via email
