DistilKaggle: A Distilled Dataset of Kaggle Jupyter Notebooks

Access the dataset: DOI

Overview

DistilKaggle is a curated dataset of Kaggle Jupyter notebooks spanning September 2015 to October 2023. It was distilled from a download of over 300 GB of Kaggle kernels, retaining only the data essential for research. The dataset comprises exclusively publicly available Python Jupyter notebooks from Kaggle. The information needed to locate and download these notebooks was obtained from the Meta Kaggle dataset provided by Kaggle.

Contents

The DistilKaggle dataset consists of three main CSV files:

  1. code.csv: Contains over 12 million rows of code cells extracted from the Kaggle kernels. Each row is identified by the kernel's ID and cell index for reproducibility.

  2. markdown.csv: Includes over 5 million rows of markdown cells extracted from Kaggle kernels. Similar to code.csv, each row is identified by the kernel's ID and cell index.

  3. notebook_metrics.csv: Provides the notebook metrics described in the accompanying paper, covering over 517,000 Python notebooks. A minimal loading sketch follows this list.
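
For orientation, here is a minimal sketch that loads the three CSVs with pandas and reassembles the cells of one notebook in order. The column names used (kernel_id, cell_index, source) are assumptions for illustration; check them against the headers of your copy of the files.

```python
import pandas as pd

# NOTE: the column names below ("kernel_id", "cell_index", "source") are
# assumed for illustration; verify them against the actual CSV headers.
code = pd.read_csv("code.csv")                 # ~12M rows of code cells
markdown = pd.read_csv("markdown.csv")         # ~5M rows of markdown cells
metrics = pd.read_csv("notebook_metrics.csv")  # metrics for ~517k notebooks

# Reassemble the code cells of a single notebook in their original order.
some_id = code["kernel_id"].iloc[0]
cells = code[code["kernel_id"] == some_id].sort_values("cell_index")
print(cells["source"].head())
```

Given the size of code.csv, passing chunksize to pd.read_csv and processing the file in a streaming fashion may be preferable to loading it whole.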

Directory Structure

The kernels directory is organized by Kaggle Performance Tiers (PTs), Kaggle's ranking system for classifying users. Each PT-specific directory contains the user IDs belonging to that tier, download logs, and the data needed to download those users' notebooks. A sketch for walking this layout appears below.
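
A minimal sketch for walking that layout, assuming the PT directories sit directly under a kernels/ folder (the directory names here are hypothetical):

```python
from pathlib import Path

# Assumed layout for illustration: kernels/<PT-directory>/<per-user files>.
# The actual directory names may differ; adjust the path accordingly.
kernels = Path("kernels")
for pt_dir in sorted(p for p in kernels.iterdir() if p.is_dir()):
    n_entries = sum(1 for _ in pt_dir.iterdir())
    print(f"{pt_dir.name}: {n_entries} entries")
```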

The utility directory contains two important files:

  1. aggregate_data.py: A Python script for aggregating data from different PTs into the mentioned CSV files.

  2. application.ipynb: A Jupyter notebook with a simple example application of the metrics dataframe: predicting the author's PT from notebook metrics. A simplified sketch of that task follows this list.
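
The notebook's task can be approximated in a few lines of scikit-learn. This sketch assumes notebook_metrics.csv contains a PT label column (called pt here) plus numeric metric columns; the real column names and workflow are in application.ipynb.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Column names "kernel_id" and "pt" are assumptions for illustration;
# see application.ipynb for the actual schema and workflow.
metrics = pd.read_csv("notebook_metrics.csv")
X = metrics.drop(columns=["kernel_id", "pt"]).fillna(0)
y = metrics["pt"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```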

Usage

Researchers can leverage this distilled dataset for a wide range of analyses without handling the bulk of the original 300 GB download. Those who need the raw, unprocessed Kaggle kernels can request them from the dataset maintainers directly.

Dataset

You can access the dataset from the following link: DOI

Please note that the original dataset of Kaggle kernels is substantial, exceeding 300 GB, which makes it impractical to upload to Zenodo in full. Researchers interested in the complete dataset can contact the dataset maintainers for access.

Citation

If you use this dataset in your research, please cite the accompanying paper:

M. Mostafavi Ghahfarokhi, A. Asgari, M. Abolnejadian, and A. Heydarnoori, "DistilKaggle: A Distilled Dataset of Kaggle Jupyter Notebooks," in Proceedings of the 21st IEEE/ACM International Conference on Mining Software Repositories (MSR), Lisbon, Portugal, Apr. 2024.

```bibtex
@inproceedings{mostafavi-msr2024-DistilKaggle,
  title={DistilKaggle: A Distilled Dataset of Kaggle Jupyter Notebooks},
  booktitle={Proceedings of the 21st IEEE/ACM International Conference on Mining Software Repositories (MSR)},
  author={Mojtaba Mostafavi Ghahfarokhi and Arash Asgari and Mohammad Abolnejadian and Abbas Heydarnoori},
  month={April},
  year={2024},
  publisher={IEEE/ACM},
  address={Lisbon, Portugal},
}
```

Thank you for using DistilKaggle!
