
Build a web scraper #4

Open
wanjauk opened this issue Sep 1, 2020 · 1 comment
Labels: eLife-Sprint-2020, help wanted

Comments

wanjauk (Contributor) commented Sep 1, 2020

The web scraper will be used to fetch data from publisher pages and existing APIs automatically. Any resources listing where researchers can publish with a waived or subsidised APC will be useful in providing content for the website. We are looking to automate this process with GitHub Actions or CI to extract the data and add it to the GitHub Pages site.
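A scraper along these lines could pull waiver entries out of an HTML table on a publisher page. The following is a minimal sketch using only the standard library's `html.parser`; the sample markup, the `apc-waivers` table id, and the column names are hypothetical, since real publisher pages will vary in structure (which is the difficulty noted in the comment below).

```python
from html.parser import HTMLParser

# Hypothetical snippet of a publisher page listing APC waiver policies.
# A real scraper would fetch this with urllib/requests inside a CI job.
SAMPLE_HTML = """
<table id="apc-waivers">
  <tr><th>Journal</th><th>Waiver policy</th></tr>
  <tr><td>Journal A</td><td>Full APC waiver</td></tr>
  <tr><td>Journal B</td><td>50% APC subsidy</td></tr>
</table>
"""

class WaiverTableParser(HTMLParser):
    """Collect the text of each <td>/<th> cell, grouped by table row."""
    def __init__(self):
        super().__init__()
        self.rows = []        # completed rows, each a list of cell strings
        self._row = None      # cells of the row currently being parsed
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row:
            self._row[-1] += data.strip()

parser = WaiverTableParser()
parser.feed(SAMPLE_HTML)
header, *records = parser.rows
# Turn each data row into a dict keyed by the header row.
waivers = [dict(zip(header, rec)) for rec in records]
```

From here, `waivers` could be serialised to JSON or YAML and committed to the repository by the CI job, so the GitHub Pages site picks it up on the next build.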

@kipkurui kipkurui changed the title build a web scraper Build a web scraper Sep 2, 2020
@kipkurui kipkurui added eLife-Sprint-2020 help wanted Extra attention is needed labels Sep 2, 2020
kipkurui (Contributor) commented Sep 3, 2020

Owing to the unstructured nature of the information on various publishers' websites, I am not sure this is feasible, but we'll keep exploring. The current solution is to crowd-source entries through a G-Form on the website, which feeds the website directly after review and approval.
