Code for "Efficient and Robust Temporal Processing with Neural Oscillations Modulated Spiking Neural Networks"
Yinsong Yan†, Qu Yang†, Yujie Wu, Haizhou Li, Kay Chen Tan and Jibin Wu*
# create virtual environment
conda create -n RhythmSNN python=3.8.18
conda activate RhythmSNN
# install PyTorch (this work uses version 2.0.1)
# then install the remaining RhythmSNN requirements
pip install -r requirements.txt
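Once the environment is set up, a quick sanity check can confirm that the expected interpreter and packages are active. This is an optional convenience sketch, not part of the repository; the version numbers referenced in the comments come from the setup steps above.

```python
import importlib.util
import sys

# the environment above was created with Python 3.8.18 and PyTorch 2.0.1;
# print what is actually active so mismatches are easy to spot
print("python", sys.version.split()[0])
for pkg in ("torch", "numpy"):
    found = importlib.util.find_spec(pkg) is not None
    print(pkg, "installed" if found else "MISSING")
```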
All data used in this paper are publicly available. After downloading, please place each dataset in the folder bearing the corresponding dataset name.
- S-MNIST and PS-MNIST datasets can be downloaded from http://yann.lecun.com/exdb/mnist/
- SHD dataset can be downloaded from https://zenkelab.org/resources/spiking-heidelberg-datasets-shd/
- ECG dataset can be downloaded from https://physionet.org/content/qtdb/1.0.0/
- GSC dataset is available at https://tensorflow.google.cn/datasets/catalog/speech_commands/
- DVS-Gesture dataset can be downloaded from https://research.ibm.com/interactive/dvsgesture/
- VoxCeleb1 dataset can be accessed at https://www.tensorflow.org/datasets/catalog/voxceleb
- PTB dataset can be obtained from https://www.kaggle.com/datasets/aliakay8/penn-treebank-dataset
- Intel N-DNS Challenge dataset can be downloaded from https://github.com/IntelLabs/IntelNeuromorphicDNSChallenge
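The expected per-dataset folders can be created up front before downloading. A minimal sketch follows; the folder names below are assumptions based on the dataset names listed above, so adjust them to match the names the training scripts actually expect.

```python
import os

# one folder per dataset, named after the datasets listed above (assumed names)
DATASETS = ["MNIST", "SHD", "ECG", "GSC", "DVS-Gesture", "VoxCeleb1", "PTB"]
for name in DATASETS:
    os.makedirs(name, exist_ok=True)  # idempotent: safe to re-run
print(sorted(DATASETS))
```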
The SHD and GSC datasets must be pre-processed before training. The pre-processing code and instructions can be found in the folder corresponding to each task.
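To illustrate the kind of pre-processing event-based data such as SHD needs, here is a generic sketch that bins spike events into a dense time-by-unit array. This is only an illustration under assumed parameters (700 input channels, 1 s recordings); the actual binning used by the repository's pre-processing scripts may differ.

```python
import numpy as np

def bin_spikes(times, units, n_units=700, n_bins=250, t_max=1.0):
    """Bin (time, unit) spike events into a [n_bins, n_units] array.

    times: spike times in seconds; units: channel index of each spike.
    Parameters are illustrative, not the repo's actual settings.
    """
    frame = np.zeros((n_bins, n_units), dtype=np.float32)
    idx = np.clip((np.asarray(times) / t_max * n_bins).astype(int), 0, n_bins - 1)
    frame[idx, np.asarray(units)] = 1.0
    return frame

# three example spikes on channels 3, 10, and 699
x = bin_spikes([0.1, 0.5, 0.9], [3, 10, 699])
print(x.shape)  # (250, 700)
```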
This section provides instructions on how to train models on various datasets using the provided scripts. Follow the steps below for each dataset:
Run the following scripts for training on the PS-MNIST dataset, located in the Spiking_pmnist folder:
python main_psmnist-skipFFSNN.py # for Rhythm-FFSNN
python main_psmnist-skipASRNN.py # for Rhythm-ASRNN
Run the following scripts for training on the S-MNIST dataset, located in the spiking_smnist folder:
python main_seqmnist-skipFFSNN.py # for Rhythm-FFSNN
python main_seqmnist-skipASRNN.py # for Rhythm-ASRNN
Run the following script for training on the SHD dataset, located in the SHD folder:
python main_dense_general_rhy.py # for Rhythm-DH-SFNN
Run the following scripts for training on the ECG dataset, located in the ECG folder:
python main_ecg-skipFFSNN.py # for Rhythm-FFSNN
python main_ecg-skipASRNN.py # for Rhythm-ASRNN
Run the following scripts for training on the DVS-Gesture dataset, located in the DVS-Gesture folder:
python main_DVS-skipSRNN_general_cosA.py # for Rhythm-SRNN
python main_DVS-skipASRNN_general_cosA.py # for Rhythm-ASRNN
Run the following script for training on the VoxCeleb1 dataset, located in the VoxCeleb1 folder:
python run_exp_spk.py
Alternatively, you can use the shell script:
bash run_rhy_exp_spk.sh
Run the following scripts for training on the PTB dataset:
python RhythmSRNN-ptb.py # for Rhythm-SRNN
python RhythmALIF-ptb.py # for Rhythm-ASRNN
The Intel_N-DNS_Challenge folder contains the code that implements the Rhythm-GSNN model described in our paper, incorporating the proposed rhythm mechanisms into the GSNN. It is built on Spiking-FullSubNet, the winner of Intel N-DNS Challenge Track 1.
See the Documentation for installation and usage.
More listening samples from the Intel N-DNS Challenge are provided in this Google Drive folder (est.wav is our model's denoised output, while raw.wav and ref.wav are the raw and clean audio, respectively).