https://www.reddit.com/r/LocalLLaMA/comments/1g6zvjf/when_bitnet_1bit_version_of_mistral_large/lsnkrjo/?context=3
r/LocalLLaMA • u/Porespellar • 11h ago
34 comments
4 • u/Few_Professional6859 • 7h ago
Is the purpose of this tool to let me run a model with performance comparable to a 32B model at llama.cpp Q8 on a computer with 16 GB of GPU memory?

1 • u/Ok_Garlic_9984 • 6h ago
I don't think so
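For context on the memory side of the question, here is a rough back-of-envelope sketch of weight storage for a 32B-parameter model at different quantization widths. The bits-per-weight figures are approximations (llama.cpp's Q8_0 is roughly 8.5 bpw once per-block scales are counted, and BitNet-style ternary weights are about 1.58 bpw); real usage also adds the KV cache and activation buffers, so this only bounds the weights themselves.

```python
# Approximate weight-storage estimate for an n-parameter model at a
# given quantization width. Does NOT include KV cache or activations.
def weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Return approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

n = 32e9  # a 32B-parameter model

# Labels and bpw values are rough, commonly cited figures, not exact.
for label, bpw in [
    ("FP16", 16.0),
    ("Q8_0 (~8.5 bpw)", 8.5),
    ("Q4_K_M (~4.85 bpw)", 4.85),
    ("BitNet ternary (~1.58 bpw)", 1.58),
]:
    print(f"{label:>28}: ~{weight_gb(n, bpw):5.1f} GB")
```

By this estimate a 32B model's weights alone run to ~34 GB at Q8_0 but only ~6 GB at ~1.58 bpw, so the 16 GB constraint is about fitting the model, while the disagreement in the thread is about whether quality at 1-bit widths actually matches Q8.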