lycyhrc/csKT
csKT: Addressing Cold-start Problem in Knowledge Tracing with Kernel Bias and Cone Attention

This is the official implementation of csKT. Our implementation is based on the knowledge tracing benchmark pyKT.

Quick Start

Conda setup

Since the training length differs from the test length in the cold-start knowledge tracing scenario, two separate conda environments are required.

conda create --name=cskt_train python=3.8.11
source activate cskt_train
pip install -e .
conda create --name=cskt_test python=3.8.11
source activate cskt_test
pip install -e .

Dataset split

csKT trains at length 20 and tests at lengths 50, 100, and 500 (the example below uses test length 50 and the statics2011 dataset).

source activate cskt_train  # Train
cd examples
python data_preprocess.py --dataset_name=statics2011_20 --l=20

Training and Evaluation

After data preprocessing, you can run wandb_cskt_train.py [parameters] in the examples directory to train the model:

CUDA_VISIBLE_DEVICES=0 nohup python wandb_cskt_train.py --dataset_name=statics2011_20 --use_wandb=0

The command reports valid_auc and valid_acc on the validation set for the default hyperparameters.
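Because the training command above runs under nohup, its stdout goes to nohup.out by default. A small sketch for monitoring progress and pulling the validation metrics out of that log (the metric names are taken from this README and are otherwise assumptions):

```shell
# nohup writes stdout to nohup.out by default. The log file name and
# metric names (valid_auc, valid_acc) are assumptions based on this README.
LOG=nohup.out
# Follow training progress live (Ctrl-C to stop):
#   tail -f "$LOG"
# After training finishes, extract the last reported validation metrics:
[ -f "$LOG" ] && grep -E "valid_(auc|acc)" "$LOG" | tail -n 2
```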

Test

Since testing is performed at different lengths, we recommend copying the entire directory (train length = 20) to a new directory (e.g., test length = 50) and re-running the dataset preprocessing steps.
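The copy step can be sketched as follows. The directory names here are hypothetical (the README does not fix a layout), and the sketch uses a temporary sandbox so it runs anywhere; substitute your actual checkout paths.

```shell
# Sketch of the copy step: duplicate the trained directory (length 20)
# into a fresh directory for length-50 testing. All paths here are
# assumptions -- adjust them to your checkout.
set -e
work=$(mktemp -d)                        # sandbox so the sketch is runnable
mkdir -p "$work/cskt_len20/examples"     # stand-in for the trained checkout
touch "$work/cskt_len20/examples/data_preprocess.py"

cp -r "$work/cskt_len20" "$work/cskt_len50_test"   # copy everything, checkpoints included
cd "$work/cskt_len50_test/examples"
ls data_preprocess.py                    # the scripts travel with the copy
# Next, re-run preprocessing at the new length, as shown below:
#   python data_preprocess.py --dataset_name=statics2011_20 --l=50
```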

source activate cskt_test # Test
cd examples
python data_preprocess.py --dataset_name=statics2011_20 --l=50
python wandb_predict.py # Test
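To cover all three test lengths, the two test commands can be repeated per length. A minimal sketch, assuming one copied directory per length (created as described above; the directory names are hypothetical):

```shell
# Sketch: repeat preprocessing and prediction at each test length.
# The per-length directory names are assumptions; create each one by
# copying the trained directory as described in the Test section.
for len in 50 100 500; do
  dir="cskt_len${len}_test"              # hypothetical per-length copy
  if [ -d "$dir/examples" ]; then
    ( cd "$dir/examples" &&
      python data_preprocess.py --dataset_name=statics2011_20 --l="$len" &&
      python wandb_predict.py )
  else
    echo "skip length $len: $dir not found"
  fi
done
```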

Hyperparameter search

  • Following the knowledge tracing benchmark pyKT (NeurIPS 2022), we use Bayesian search to find the best hyperparameters for csKT.

  • All baseline experimental code and results use pyKT's best hyperparameters.

  • All results are reported as the mean and standard deviation over 5 folds. All models are implemented in PyTorch and trained on a cluster of Linux servers equipped with NVIDIA RTX 3090 GPUs.
