
feat: use jsipfs as fallback instead of public gateways #44

Merged: 5 commits from feat/use-ipfs-redux-bundler into master on Nov 22, 2018

Conversation

@fsdiogo (Contributor) commented Nov 15, 2018

Closes #43.

@fsdiogo fsdiogo self-assigned this Nov 15, 2018
@ghost ghost added the status/in-progress In progress label Nov 15, 2018
@fsdiogo (Contributor, Author) commented Nov 19, 2018

When a file larger than 1GB is found, a warning is shown (in the screenshot below the threshold was mocked to 2MB for testing):

[screenshot: large file warning]
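The check behind that warning is essentially a size threshold on the selected files. A minimal sketch, assuming DOM File objects from the picker; the constant name is illustrative, only the 1GB/2MB values come from the comment above:

```js
// Illustrative sketch only – not the PR's actual code.
const SIZE_WARNING_THRESHOLD = 1024 * 1024 * 1024 // 1GB (mocked to 2MB for the screenshot)

// `files` are DOM File objects coming from the picker / drag-and-drop
const shouldShowLargeFileWarning = (files) =>
  files.some(file => file.size > SIZE_WARNING_THRESHOLD)
```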

@fsdiogo fsdiogo changed the title wip: use jsipfs as fallback instead of public gateways feat: use jsipfs as fallback instead of public gateways Nov 19, 2018
@lidel (Member) left a review comment

I really like the icon color matching the message's left border -- makes it easy to see which files may cause trouble 👍

Some asks below:

  1. BUG: Link to IPFS Companion is broken (both links point at Desktop)

    • This is probably because the same <1> tag is used for both. Check the working version of tag interpolation in i18n in ipfs-companion (Welcome/Landing Page); see the sketch after this list.
  2. STYLE: What if we change the message to be more actionable and emphasize value added by Desktop/Companion?

    • Perhaps something like:

      If you experience issues when downloading large files, try IPFS Desktop and IPFS Companion for improved performance.

  3. FUN: I noticed ipfs-redux-bundle by default also tries to connect to local API port directly – not sure if it is a bug... it could be a feature ;)
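Regarding point 1, a hedged sketch of how the two links could get distinct tags with react-i18next's Trans component (assuming the app uses react-i18next as ipfs-companion does; the key name, hrefs and copy are illustrative):

```jsx
// Sketch only: each child of <Trans> gets its own index, so the translation
// string can reference <1>…</1> and <3>…</3> separately instead of reusing <1>.
// en.json: "largeFileWarning": "If you experience issues when downloading large
//   files, try <1>IPFS Desktop</1> and <3>IPFS Companion</3> for improved performance."
import React from 'react'
import { Trans } from 'react-i18next'

const LargeFileWarning = () => (
  <Trans i18nKey='largeFileWarning'>
    If you experience issues when downloading large files, try
    <a href='https://github.com/ipfs-shipyard/ipfs-desktop'>IPFS Desktop</a> and
    <a href='https://github.com/ipfs-shipyard/ipfs-companion'>IPFS Companion</a> for improved performance.
  </Trans>
)

export default LargeFileWarning
```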

Open problem: Upload

I think this PR should also address the UX of the upload part: when you add a big file it is added to the local js-ipfs repo, but it takes time for the actual data to reach the preload servers.

Imagine a scenario (with js-ipfs running on the page):

  • You add a 3GB file, get a working download link, close the tab and share the link with friends
  • A friend starts to download the data
    • ... but you closed the tab before the entire thing propagated to the preload server, so there is no other source and they are stuck at 15% forever :(

Until we have some sort of background worker, we should probably display the same yellow icon and a message warning the user that the tab should stay open until their friend downloads the entire thing (when the embedded js-ipfs is used).

@fsdiogo (Contributor, Author) commented Nov 20, 2018

Fixed the broken link and updated the copy, thanks @lidel!

About the open problem: I agree that it needs to be addressed ASAP, but always, not only when using an embedded js-ipfs node. Even if the user is running a local node, if they add a file, get the share link and then quit the daemon, the file is "lost". That must be explicit somewhere, and I think it belongs in the copy explaining how the share app works.

@lidel (Member) left a review comment

I think we can merge this PR (pinging @olizilla to proofread the English copy).

I created #46 to discuss the UX for sharing files that take a long time to reach preload servers (a generic problem we should solve in a separate PR).

@fsdiogo fsdiogo mentioned this pull request Nov 20, 2018
@olizilla (Member) commented:
👀

@olizilla (Member) commented:

> FUN: I noticed ipfs-redux-bundle by default also tries to connect to local API port directly – not sure if it is a bug... it could be a feature ;)

@lidel that was intentional, is it a problem?

@lidel (Member) commented Nov 21, 2018

@olizilla not a problem, I think it makes sense for the Share app to do such an opportunistic lookup, at least until we have the "API lookup heuristic" spec and UX worked out.
However, the error indicated by the first arrow here suggests we are using a multiaddr somewhere without converting it to a URL first (it then retries with the real URL).
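For reference, a hedged sketch of the conversion step that appears to be missing, using the multiaddr-to-uri package; the variable names and surrounding flow are illustrative, not ipfs-redux-bundle's actual source:

```js
// Sketch: API addresses are configured as multiaddrs, but window.fetch and
// the HTTP client need a URL, so the multiaddr has to be converted first.
const toUri = require('multiaddr-to-uri')

const apiMultiaddr = '/ip4/127.0.0.1/tcp/5001'
const apiUrl = toUri(apiMultiaddr) // => 'http://127.0.0.1:5001'

// Passing `apiMultiaddr` straight to fetch() would produce an error like the
// one behind the first arrow; the retry succeeds because it uses `apiUrl`.
```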

@olizilla (Member) left a review comment

This PR is neat! ✨ I think using a random shortId as a stable reference for files as they go from DOM-style file objects, to streams, to IPFS references is a really good idea.

One roadblock to this is that the "download all" trick doesn't work for me in Firefox (v63 on macOS 10.14).
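For illustration, a hedged sketch of the shortId pattern being praised here; the package choice and field names are assumptions, not the PR's exact code:

```js
// Sketch: generate a short random id per file up front and key every later
// representation (DOM File, stream, IPFS add result) by that same id.
const shortid = require('shortid')

function registerFiles (domFiles) {
  const filesById = {}
  for (const file of domFiles) {
    const id = shortid.generate()
    filesById[id] = { id, name: file.name, size: file.size, file, hash: null }
  }
  return filesById
}

// Later, when ipfs.add() resolves for a given file, the result is attached
// under the same id, e.g. filesById[id].hash = result.hash
```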

@olizilla (Member) commented Nov 22, 2018

@lidel thanks! Now I see it. That is gross. In part it's a bug in https://github.com/tableflip/uri-to-multiaddr – I assumed that dnsaddr was a generic protocol for "could be dns4 or dns6", but it actually means "check this domain for a DNSLink"! The only reference I can find to back that up is multiformats/go-multiaddr-dns#7 (comment), which needs better documentation.

The next problem is that there seems to be no way to convert a URI to a multiaddr correctly without being able to do DNS lookups, or to know in advance whether a domain will resolve to an IPv4 or IPv6 address. multiformats/multiaddr#22
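A hedged illustration of the ambiguity described above (output shown as uri-to-multiaddr behaved at the time; exact strings may differ):

```js
const toMultiaddr = require('uri-to-multiaddr')

console.log(toMultiaddr('https://ipfs.io').toString())
// => '/dnsaddr/ipfs.io/tcp/443/https'
//    ...which actually means "check this domain for a DNSLink",
//    not "a domain that resolves to dns4 or dns6".
//
// What the caller usually wants is one of:
//   '/dns4/ipfs.io/tcp/443/https'   (domain resolves to an IPv4 address)
//   '/dns6/ipfs.io/tcp/443/https'   (domain resolves to an IPv6 address)
// ...but picking between dns4 and dns6 requires a real DNS lookup,
// which a pure string conversion cannot do (multiformats/multiaddr#22).
```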

@fsdiogo fsdiogo merged commit af8a06a into master Nov 22, 2018
@ghost ghost removed the status/in-progress In progress label Nov 22, 2018
@fsdiogo fsdiogo deleted the feat/use-ipfs-redux-bundler branch November 22, 2018 14:05
@lidel (Member) commented Nov 23, 2018

@olizilla

I was going to suggest that an ip4/ip6 hint (based on the presence of a specific DNS record type: A vs AAAA) could be added to the output of the DNSLink lookup endpoint at /api/v0/dns/ (the other addition being TTL), but we experience the error while resolving the API address 🤦‍♂️ and asking https://ipfs.io/api/v0/dns/ is a bit ugly.

So... is the fix to just switch the default API multiaddr to /ip4/?
