TWFlame

This is an application for scraping tweets from Twitter.

This application is made possible by vladkens's twscrape. Please also check it out: https://github.com/vladkens/twscrape

Setup

Python 3.11 has to be installed. The Python dependencies are:

  • twscrape
  • loguru
  • reportlab
  • asyncio (part of the Python standard library; no separate install needed)

To install the dependencies, use: python3 -m pip install -r requirements.txt

or install them one by one manually:

  • python3 -m pip install twscrape
  • python3 -m pip install loguru
  • python3 -m pip install reportlab

Then you can run the application with python3 main.py

Usage

The application acts as a normal user on Twitter, so you have to provide accounts for it to use.

Accounts Page

[Screenshot: atesdijital.com Accounts page]
  • username: username of the Twitter account
  • password: password of the Twitter account
  • email: email of the Twitter account
  • email_password: password of that email account

Warning

Not all email providers are supported (e.g. @yandex.com). Only Yahoo, iCloud, Hotmail, and Outlook are supported for now. For more information, see vladkens/twscrape#67.

After adding accounts, use the "login all accounts" button to log in. If accounts are not active, head to the Tweet Page and check the application output to see what is wrong.
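
Under the hood, the Accounts Page fields map onto twscrape's account pool. A minimal sketch of the equivalent twscrape calls, assuming the default accounts.db storage (this is illustrative, not TWFlame's exact code):

```python
import asyncio
from twscrape import API

async def add_and_login():
    api = API()  # twscrape keeps its account pool in accounts.db by default
    # The same four fields the Accounts Page asks for
    await api.pool.add_account("username", "password", "email", "email_password")
    # Roughly what the "login all accounts" button triggers
    await api.pool.login_all()

asyncio.run(add_and_login())
```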

Tweet Page

[Screenshot: atesdijital.com Tweet Scraper's Tweet Page]

The Tweet Page contains the output field of the program. Any errors or informational messages can be read here.

Enter a username, then provide dates to scrape tweets. Note that dates must be in "Year-month-day" format, e.g. "2024-04-03".

Note

Scraped tweets are stored in a database named after the specified username. E.g. if you scrape tweets of "elonmusk", they will be stored in a SQLite3 database called "elonmusk.db".
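
For reference, scraping a user's tweets within a date range boils down to a twscrape search query using Twitter's since/until operators. A minimal sketch, where the query string and the limit of 500 are illustrative assumptions and TWFlame's own storage step is omitted:

```python
import asyncio
from twscrape import API

async def scrape(username: str, since: str, until: str):
    api = API()
    # Dates use the same YYYY-MM-DD format as the Tweet Page inputs
    query = f"from:{username} since:{since} until:{until}"
    tweets = []
    async for tweet in api.search(query, limit=500):  # limit is illustrative
        tweets.append(tweet)
    return tweets

tweets = asyncio.run(scrape("elonmusk", "2024-04-01", "2024-04-03"))
print(f"Scraped {len(tweets)} tweets")
```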

PDF Page

[Screenshot: atesdijital.com Tweet Scraper's PDF Page]

There is an example database with ~500 tweets; use "TansuYegen" as the username to test the PDF Page. Enter a username, then click "import" to fetch all of that user's tweets stored in the database. Select the tweets you want, then click the "turn to pdf" button. The PDF will be stored in the same directory as the "main.py" file.

Note

Note that you need neither an active account nor an internet connection to do this. The PDF Page works only with the tweets already stored in the database.
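
As a rough illustration of the PDF step, reading tweets from a SQLite database and rendering them with reportlab could look like the sketch below. The table and column names ("tweets", "date", "content") are assumptions for illustration, not TWFlame's actual schema:

```python
import sqlite3
from xml.sax.saxutils import escape

from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer

def tweets_to_pdf(db_path: str, pdf_path: str) -> None:
    # "tweets(date, content)" is an assumed schema for illustration only
    rows = sqlite3.connect(db_path).execute(
        "SELECT date, content FROM tweets ORDER BY date"
    ).fetchall()

    doc = SimpleDocTemplate(pdf_path, pagesize=A4)
    style = getSampleStyleSheet()["Normal"]
    story = []
    for date, content in rows:
        # Escape the text because Paragraph interprets XML-style markup
        story.append(Paragraph(escape(f"{date}: {content}"), style))
        story.append(Spacer(1, 8))
    doc.build(story)

tweets_to_pdf("TansuYegen.db", "TansuYegen.pdf")
```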
