Error with 4bit-128g model loading #439
Comments
Did you select occam gptq as the GPTQ backend? Because for autogptq you need an extra file.
Yes, I have occam gptq selected.
Traced this back to the newer transformers version we are using compared to occam's old branch.
Okay, so far that's worked.
Hi to all!
When I load this model (from a local directory): https://huggingface.co/OccamRazor/pygmalion-6b-gptq-4bit,
I get error:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\KoboldAI\\models\\pygmalion-6b-gptq-4bit\\quantize_config.json'
Here is the full bug report: Bug_Report.txt
I previously used KoboldAI's fork by 0cc4m and there was no such error.
Here is the debug dump: kobold_debug.zip
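For reference, the "extra file" mentioned in the comments is the quantize_config.json that the AutoGPTQ backend looks for, which matches the FileNotFoundError above. Below is a minimal sketch, in Python, of writing such a file for a 4-bit, group-size-128 quantization. The field values are assumptions inferred from the "4bit-128g" naming rather than taken from the linked repository, so verify them against the model card before relying on them.

```python
import json
from pathlib import Path

# Directory the model was downloaded to (path taken from the error message).
model_dir = Path(r"C:\KoboldAI\models\pygmalion-6b-gptq-4bit")

# Minimal AutoGPTQ-style quantize_config.json.
# bits/group_size follow the "4bit-128g" naming; the remaining values are
# common defaults and are assumptions, not values from this model's repo.
quantize_config = {
    "bits": 4,
    "group_size": 128,
    "damp_percent": 0.01,
    "desc_act": False,
    "sym": True,
    "true_sequential": True,
}

with open(model_dir / "quantize_config.json", "w", encoding="utf-8") as f:
    json.dump(quantize_config, f, indent=2)
```

Note that the thread's actual resolution was to select the occam gptq backend, which does not require this file; the sketch is only meant to show what the AutoGPTQ code path expects to find.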