Hello, thanks for your hard work :)
I read the ICML 2019 version of your paper and have some questions about the implementation of the pre-trained ResNet-18 model.
First of all, is the pre-training of ResNet-18 on ImageNet omitted from your code?
The paper says pre-training was done on an ImageNet subset containing only classes that do not overlap with CIFAR-100. I looked for the pre-training details (augmentation, epochs, learning rate, etc.) in the paper and the code (maybe I missed them), but I'm having a hard time finding them.
Also, the line below in your code, which loads a pretrained model from torchvision, confuses me further.
lifelong-learning-pretraining-and-sam/img_exps/vision_utils.py
Lines 54 to 62 in 2fee18a
So, could you specify the pre-training setup you used for your ResNet-18 model on the reduced-class ImageNet?
Thanks again :)