bigdata-ustc/EduCAT
Computerized Adaptive Testing: A Python Library

This Python library offers a streamlined solution for rapidly developing a Computerized Adaptive Testing (CAT) system. It encompasses a comprehensive suite of tools that integrate both traditional statistical methods and recent machine learning and deep learning techniques.


❗ What is CAT About?

Computerized Adaptive Testing (CAT) stands as one of the earliest and most successful integrations of educational practices and computing technology.
CAT is a dynamic and interactive process between a student and a testing system. If traditional paper-and-pencil tests are "one-for-all," then CAT is "one-for-each". Each student gets a personalized test that adapts to their proficiency level and knowledge, ensuring each question accurately assesses and challenges them. CAT tailors the selection of questions to each student’s level of proficiency, thereby maximizing the accuracy of the assessment while minimizing the test length.

The CAT system is split into two main components that take turns. At each test step, the Cognitive Diagnosis Model (CDM), acting as the user model, first uses the student's previous responses to estimate their current proficiency, drawing on cognitive science or psychometrics. The Selection Algorithm then picks the next question from the question bank according to certain criteria. This two-step process repeats until a predefined stopping rule is met, and each student's final estimated proficiency (i.e., the diagnostic report) is returned to them as the outcome of the assessment or to guide future learning.
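The estimate/select loop can be sketched as a short, self-contained simulation. This is an illustrative sketch only, not the EduCAT implementation: it assumes a one-parameter (Rasch) user model, a grid-search maximum-likelihood estimator, a greedy difficulty-matching selector, and a deterministic simulated student who answers correctly whenever their ability is at least the item difficulty.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that ability theta answers difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate(responses):
    """Grid-search MLE of ability from (difficulty, correct) pairs (the CDM step)."""
    if not responses:
        return 0.0
    grid = [x / 50.0 for x in range(-200, 201)]  # theta in [-4, 4]
    def log_likelihood(t):
        return sum(math.log(p_correct(t, b)) if c else math.log(1.0 - p_correct(t, b))
                   for b, c in responses)
    return max(grid, key=log_likelihood)

def run_cat(true_theta, bank, steps=10):
    """Alternate estimation and selection until the fixed-length stopping rule."""
    responses = []
    for _ in range(steps):
        theta = estimate(responses)                  # CDM: update proficiency
        b = min(bank, key=lambda d: abs(d - theta))  # selector: match difficulty
        bank.remove(b)
        correct = true_theta >= b                    # simulated student answers
        responses.append((b, correct))
    return estimate(responses)                       # final diagnostic report

final_theta = run_cat(true_theta=0.7, bank=[x / 2 for x in range(-6, 7)])
```

With a difficulty bank spanning -3 to 3, the estimate homes in on the simulated student's true ability within ten items, even though the first estimate starts at 0.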

Figure 2

⚡ Contribution

This repository implements the basic functionalities of CAT. It includes three types of CDMs: Item Response Theory (IRT), Multidimensional Item Response Theory (MIRT), and Neural Cognitive Diagnosis (NCD). Each CDM has its corresponding selection algorithms:

  • IRT: Item Response Theory
    • MFI: Maximum Fisher Information strategy
    • KLI: Kullback-Leibler Information strategy
    • MAAT: Model-Agnostic Adaptive Testing strategy
    • BECAT: Bounded Ability Estimation Adaptive Testing strategy
    • BOBCAT: Bilevel Optimization-Based Computerized Adaptive Testing strategy
    • NCAT: Neural Computerized Adaptive Testing strategy
  • MIRT: Multidimensional Item Response Theory
    • D-opt: D-Optimality strategy
    • MKLI: Multivariate Kullback-Leibler Information strategy
    • MAAT: Model-Agnostic Adaptive Testing strategy
    • BOBCAT: Bilevel Optimization-Based Computerized Adaptive Testing strategy
    • NCAT: Neural Computerized Adaptive Testing strategy
  • NCD: Neural Cognitive Diagnosis
    • MAAT: Model-Agnostic Adaptive Testing strategy
    • BECAT: Bounded Ability Estimation Adaptive Testing strategy

Note that the data must be preprocessed before it can be used. In the script/dataset directory, we provide preprocessing files for the ASSISTment dataset for reference.

⚡ Installation

To use this library, clone the repository and install it with pip:

pip install -e .

or install it from PyPI:

pip install EduCAT

Quick Start

See the examples in the scripts directory.

utils

Visualization

By default, we use TensorBoard to visualize the reward of each iteration; see the demos in scripts, and run

tensorboard --logdir /path/to/logs

to see the visualization result.

📕 Machine Learning-Based Methods

🔍Cognitive Diagnosis Models (CDM)

Cognitive Diagnosis Model (CDM), as the user model, first uses the student's previous responses to estimate their current proficiency, based on cognitive science or psychometrics.

✏️Selection Algorithm

Then, the Selection Algorithm picks the next question from the Question Bank according to certain criteria (Lord, 2012; Chang & Ying, 1996; Bi et al., 2020). Most traditional statistical criteria are informativeness metrics, e.g., selecting the question whose difficulty matches the student's current proficiency estimate, meaning the student has roughly a 50% chance of getting it right (Lord, 2012). This process repeats until a predefined stopping rule is met, and each student's final estimated proficiency (i.e., the diagnostic report) is returned to them as the outcome of the assessment or to guide future learning.
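The classical informativeness criterion can be made concrete with a small sketch (not part of EduCAT; the item parameters below are made up): Maximum Fisher Information selection under the two-parameter IRT model, where an item's information a²·P(θ)(1−P(θ)) peaks exactly when the student's chance of a correct answer is 50%.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response (a: discrimination, b: difficulty)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_mfi(theta, bank):
    """MFI selection: return the (item_id, a, b) triple with maximal information."""
    return max(bank, key=lambda item: fisher_info(theta, item[1], item[2]))

# Hypothetical item bank: ids with discrimination and difficulty parameters.
bank = [("q1", 1.0, -1.0), ("q2", 1.0, 0.1), ("q3", 1.0, 2.0)]
chosen = pick_mfi(0.0, bank)  # the item whose difficulty best matches theta = 0
```

With equal discriminations, information reduces to P(1−P), so MFI picks the item whose difficulty is closest to the current estimate, i.e., the one the student passes with probability closest to 50%.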

2023 - 2024

  • BETA-CD: a Bayesian meta-learned cognitive diagnosis framework for personalized learning Paper
  • Self Supervised Graph Learning for Long-Tailed Cognitive Diagnosis Paper
  • Deep reinforcement learning for adaptive learning systems Paper
  • A novel computerized adaptive testing framework with decoupled learning selector Paper
  • Gmocat: A graph enhanced multi-objective method for computerized adaptive testing Paper
  • Towards scalable adaptive learning with graph neural networks and reinforcement learning Paper
  • Towards a holistic understanding of mathematical questions with contrastive pre-training Paper
  • Adaptive e-learning system based on learner portraits and knowledge graph Paper
  • A bounded ability estimation for computerized adaptive testing Paper
  • Balancing test accuracy and security in computerized adaptive testing Paper
  • Search-efficient computerized adaptive testing Paper

2022-2023

  • HierCDF: A Bayesian network-based hierarchical cognitive diagnosis framework Paper
  • Deep cognitive diagnosis model for predicting students’ performance Paper
  • Computerized adaptive testing: A unified approach under markov decision process Paper
  • Fully adaptive framework: Neural computerized adaptive testing for online education Paper
  • Is the NAPLAN results delay about politics or precision? Paper
  • Algorithmic fairness in education Paper
  • A robust computerized adaptive testing approach in educational question retrieval Paper
  • Self-Attention Gated Cognitive Diagnosis For Faster Adaptive Educational Assessments Paper

2021-2022

  • Item response ranking for cognitive diagnosis, in Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21 Paper
  • RCD: Relation map driven cognitive diagnosis for intelligent education systems Paper
  • BOBCAT: Bilevel optimization-based computerized adaptive testing Paper
  • Multi-objective optimization of item selection in computerized adaptive testing Paper
  • Consistency-aware multi-modal network for hierarchical multi-label classification in online education system Paper

2020-2021

  • Neural cognitive diagnosis for intelligent education systems Paper
  • Quality meets diversity: A model agnostic framework for computerized adaptive testing Paper

2019-2020

  • DIRT: Deep learning enhanced item response theory for cognitive diagnosis, in Proceedings of the 28th ACM International Conference on Information and Knowledge Management Paper
  • Robust computerized adaptive testing Paper
  • Reinforcement learning applied to adaptive classification testing Paper
  • Exploiting cognitive structure for adaptive learning Paper
  • Question difficulty prediction for multiple choice problems in medical exams Paper
  • Hierarchical multi label text classification: An attention-based recurrent network approach Paper
  • QuesNet: A unified representation for heterogeneous test questions Paper

Before 2019

  • Recommendation system for adaptive learning Paper
  • Question difficulty prediction for reading problems in standard tests Paper
  • Detecting biased items using catsib to increase fairness in computer adaptive tests Paper
  • Evaluating knowledge structure-based adaptive testing algorithms and system development Paper
  • Applications of item response theory to practical testing problems Paper
  • An adaptive testing system for supporting versatile educational assessment Paper

Citation

If this repository is helpful to you, please cite our work:

@misc{liu2024survey,
      title={Survey of Computerized Adaptive Testing: A Machine Learning Perspective}, 
      author={Qi Liu and Yan Zhuang and Haoyang Bi and Zhenya Huang and Weizhe Huang and Jiatong Li and Junhao Yu and Zirui Liu and Zirui Hu and Yuting Hong and Zachary A. Pardos and Haiping Ma and Mengxiao Zhu and Shijin Wang and Enhong Chen},
      year={2024},
      eprint={2404.00712},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}