This is the official implementation code of csKT. Our implementation is built on the knowledge tracing benchmark pyKT.
Since the training length differs from the test length in the cold-start knowledge tracing scenario, two separate conda environments are required first.
conda create --name=cskt_train python=3.8.11
source activate cskt_train
pip install -e .
conda create --name=cskt_test python=3.8.11
source activate cskt_test
pip install -e .
csKT is trained at length 20 and tested at lengths 50, 100, and 500 (the following takes length 50 and the statics2011 dataset as an example).
source activate cskt_train # Train
cd examples
python data_preprocess.py --dataset_name=statics2011_20 --l=20
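For context, preprocessing at `--l=20` cuts each learner's interaction sequence into fixed-length samples of 20 interactions. The sketch below only illustrates that idea on a toy sequence; it is not the actual pyKT preprocessing code, and the helper name and padding value are made up for illustration.

```python
# Illustrative sketch only (not the actual pyKT preprocessing code):
# split one learner's interaction sequence into fixed-length chunks
# and pad the last chunk so every sample has the same length.
def split_sequence(responses, max_len=20, pad_val=-1):
    chunks = []
    for start in range(0, len(responses), max_len):
        chunk = responses[start:start + max_len]
        chunk = chunk + [pad_val] * (max_len - len(chunk))  # right-pad
        chunks.append(chunk)
    return chunks

# toy example: 45 interactions -> two full chunks and one padded chunk
print(split_sequence(list(range(45)), max_len=20))
```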
After data preprocessing, you can run wandb_cskt_train.py [parameters] in the examples directory to train the model:
CUDA_VISIBLE_DEVICES=0 nohup python wandb_cskt_train.py --dataset_name=statics2011_20 --use_wandb=0
The command will report valid_auc and valid_acc for the default hyperparameters on the validation set.
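valid_auc and valid_acc are the usual knowledge tracing metrics (AUC and accuracy over the predicted response probabilities). Below is a minimal, self-contained illustration of how such numbers are computed with scikit-learn, using toy values; it is not the training script itself.

```python
# Minimal illustration of the two reported metrics (not the training script itself).
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 1])               # observed response correctness
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.8])   # model-predicted probabilities

valid_auc = roc_auc_score(y_true, y_prob)
valid_acc = accuracy_score(y_true, (y_prob >= 0.5).astype(int))
print(f"valid_auc={valid_auc:.4f}, valid_acc={valid_acc:.4f}")
```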
Since testing is performed at different lengths, we recommend copying the entire directory (train length = 20) to a new directory (e.g., test length = 50) and re-running the data preprocessing steps.
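A minimal sketch of that copy step, with hypothetical directory names that depend on your local layout (`cp -r` from the shell works equally well):

```python
# Illustrative only: duplicate the length-20 data directory before re-running
# preprocessing at the new test length. The paths below are hypothetical.
import shutil

shutil.copytree("data/statics2011_20", "data_testlen50/statics2011_20")
```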
source activate cskt_test # Test
cd examples
python data_preprocess.py --dataset_name=statics2011_20 --l=50
python wandb_predict.py # Test
- Following the knowledge tracing benchmark pyKT (NeurIPS 2022), we use Bayesian search to find the best hyperparameters for csKT (a sketch of such a sweep is shown after this list).
- All baseline experimental code and results are derived from pyKT's best hyperparameters.
- All results are reported as the mean and standard deviation over 5 folds; all models are implemented in PyTorch and trained on a cluster of Linux servers equipped with NVIDIA RTX 3090 GPUs.
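For the Bayesian search mentioned above, one way to drive it is through a wandb sweep with the `bayes` method. The sketch below is only an illustration: the parameter names, value ranges, and project name are hypothetical, not the actual search space used for csKT.

```python
# Sketch of a Bayesian hyperparameter search driven by a wandb sweep.
# Parameter names and value ranges are hypothetical, not the actual search space.
import wandb

sweep_config = {
    "method": "bayes",
    "metric": {"name": "validauc", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"values": [1e-3, 1e-4, 1e-5]},
        "dropout": {"values": [0.1, 0.3, 0.5]},
        "d_model": {"values": [64, 128, 256]},
    },
}

sweep_id = wandb.sweep(sweep_config, project="cskt")
# each sweep run would then invoke the training entry point, e.g.:
# wandb.agent(sweep_id, function=train_fn, count=50)
```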