PyTorch Implementation of DTF-AT: Decoupled Time-Frequency Audio Transformer for Event Classification (AAAI 2024)
Clone or download this repository, set it as the working directory, then create the conda environment and install the dependencies:
cd DTFAT/
conda env create -f dtfat.yml
conda activate dtfat
Since AudioSet is downloaded directly from YouTube, videos get deleted over time and the available dataset shrinks. You therefore need to prepare the following files for the AudioSet copy available to you.
Prepare the data files as described in the AST repository; a sketch of the expected layout follows.
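For reference, here is a minimal sketch of building an AST-style data file for a local AudioSet copy. The `{"data": [{"wav": ..., "labels": ...}]}` JSON layout follows the AST repo; the directory path and label map below are placeholders for your own setup.

```python
# Sketch of an AST-style data file; paths and the label lookup are assumptions.
import json
import os

AUDIO_DIR = "/data/audioset/eval_segments"          # assumption: your local wav directory
LABELS = {"Y-0Gj8-vB1q4.wav": "/m/04rlf,/m/09x0r"}  # assumption: your segment-to-MID map

entries = []
for fname in sorted(os.listdir(AUDIO_DIR)):
    if fname.endswith(".wav") and fname in LABELS:
        entries.append({
            "wav": os.path.join(AUDIO_DIR, fname),  # absolute path to the clip
            "labels": LABELS[fname],                # comma-separated AudioSet MIDs
        })

with open("audioset_eval_data.json", "w") as f:
    json.dump({"data": entries}, f, indent=1)
```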
We provide our best model. Please download the model weights and place them at DTFAT/pretrained_models/best_model/model.
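As a quick sanity check that the checkpoint is in place, a hedged loading sketch is below; the key handling assumes the weights were saved from an nn.DataParallel wrapper, which is common for AST-style training code, so see eval_run.sh and the repo's model code for the actual loading logic.

```python
# Hedged sketch: verify the downloaded checkpoint loads on CPU.
import torch

ckpt_path = "DTFAT/pretrained_models/best_model/model"
state = torch.load(ckpt_path, map_location="cpu")
if isinstance(state, dict) and "state_dict" in state:  # unwrap trainer checkpoints
    state = state["state_dict"]
# Strip the "module." prefix left by nn.DataParallel, if present.
state = {k.replace("module.", "", 1): v for k, v in state.items()}
print(f"loaded {len(state)} parameter tensors")
# model.load_state_dict(state)  # assumption: model constructed as in this repo
```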
You can validate the model performance on your AudioSet evaluation data as follows:
cd DTFAT/egs/audioset
bash eval_run.sh
This script creates a log file named with a Unix timestamp in the same directory (e.g., 1692289183.log). The mAP is reported at the end of the log file.
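If you want to grab that number programmatically, a small helper is sketched below; the exact wording of the mAP line in the log is an assumption, so adjust the pattern if your log differs.

```python
# Pull the final mAP value from the timestamped eval log (pattern is assumed).
import re

with open("1692289183.log") as f:      # substitute your own <timestamp>.log
    text = f.read()
matches = re.findall(r"mAP[:\s=]+([0-9.]+)", text)
if matches:
    print("final mAP:", matches[-1])   # the last occurrence is the eval result
```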
We build on the AST repo for model training, and on timm for the model implementation and ImageNet-1K pretrained weights (do not install timm separately).
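For context, a common way ImageNet-1K ViT weights are adapted to single-channel spectrogram input is to average the 3-channel patch-embedding kernel down to one channel. The sketch below is illustrative only, not this repo's actual code, and the tensor shapes are assumptions.

```python
# Illustrative sketch: collapse an ImageNet (out, 3, kh, kw) patch-embed kernel
# to (out, 1, kh, kw) for single-channel spectrogram input. Not the repo's code.
import torch

def adapt_patch_embed(weight_3ch: torch.Tensor) -> torch.Tensor:
    """Average the RGB input channels of a conv kernel into one channel."""
    return weight_3ch.mean(dim=1, keepdim=True)

w = torch.randn(768, 3, 16, 16)    # stand-in for a ViT-Base patch-embed kernel
print(adapt_patch_embed(w).shape)  # torch.Size([768, 1, 16, 16])
```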
If you find our work useful, please cite it as:
@inproceedings{alex2024dtf,
  title={DTF-AT: decoupled time-frequency audio transformer for event classification},
  author={Alex, Tony and Ahmed, Sara and Mustafa, Armin and Awais, Muhammad and Jackson, Philip JB},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={16},
  pages={17647--17655},
  year={2024}
}