This is a web scraper. We will populate our database with data from DataSF on all of the parklets in SF.
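At a high level, the crawl step presumably fetches parklet records from a DataSF (Socrata-style) JSON endpoint and writes them to the database. The sketch below is only an illustration of that flow; the dataset URL and field names are placeholders, not this project's real configuration.

```ts
// Minimal sketch of the crawl step, assuming a Socrata-style JSON endpoint.
// The dataset URL and field names are placeholders, not this project's real values.
const DATASET_URL = 'https://data.sfgov.org/resource/<dataset-id>.json'; // hypothetical

interface ParkletRecord {
  permit_number?: string; // assumed field names, for illustration only
  location?: string;
}

async function crawlParklets(): Promise<ParkletRecord[]> {
  const res = await fetch(DATASET_URL);
  if (!res.ok) {
    throw new Error(`DataSF request failed with status ${res.status}`);
  }
  const records = (await res.json()) as ParkletRecord[];
  // The real crawler would insert these records into the database here.
  console.log(`Fetched ${records.length} parklet records`);
  return records;
}

crawlParklets().catch((err) => {
  console.error(err);
  process.exit(1);
});
```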
- Install dependencies (`npm install`)
- Start the server in development mode (`npm run start:dev`)
- `npm run deploy:develop` will build and run the latest local changes in dev mode
- `npm run deploy:staging` will build and run the latest local changes in staging mode
- `docker-compose stop` will bring down the crawler container
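For context, these deploy scripts presumably wrap docker-compose; a hypothetical `scripts` section along these lines would wire them up (the actual commands in this repo's package.json may differ):

```json
{
  "scripts": {
    "deploy:develop": "docker-compose up --build -d",
    "deploy:staging": "NODE_ENV=staging docker-compose up --build -d",
    "deploy:prod": "NODE_ENV=production docker-compose up --build -d"
  }
}
```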
- Clone the repo and enter it (`git clone https://github.com/hangoutsidesf/parklet-crawl.git crawler`, then `cd crawler`)
- Start the production container (`npm run deploy:prod`)
- Edit the `start` script to auto-exit when data is collected (a rough sketch of one approach follows below)
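One possible shape for that change, assuming the crawl logic can be awaited: have the entry point used by the `start` script exit the process once the data has been written. The `runCrawl` import below is a hypothetical stand-in for the crawler's real start logic.

```ts
// Sketch only: `runCrawl` is a hypothetical entry point standing in for the
// crawler's real start logic; the idea is simply to exit once data is collected.
import { runCrawl } from './crawler'; // hypothetical module

async function main(): Promise<void> {
  await runCrawl(); // scrape DataSF and write results to the database
  console.log('Data collected, shutting down.');
  process.exit(0); // auto-exit instead of keeping the process alive
}

main().catch((err) => {
  console.error('Crawl failed:', err);
  process.exit(1);
});
```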