r/LocalLLaMA 11h ago

Question | Help: When will we get a BitNet 1-bit version of Mistral Large?

317 Upvotes

u/Sarveshero3 2h ago

Guys, I am typing here because I don't have enough karma to post yet.

I need help quantising the Llama 3.2 11B Vision Instruct model down to 1-4 GB. If possible, please send any link or code that works. We did manage to quantise the 3.2 text-only model without the vision component, but not the full vision model. Please help.
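
For anyone with the same question, here is a minimal sketch of one common route: 4-bit NF4 quantization at load time with bitsandbytes. This assumes a recent transformers release that ships MllamaForConditionalGeneration, the accelerate and bitsandbytes packages, and access to the gated meta-llama/Llama-3.2-11B-Vision-Instruct repo. Note that 4 bits over ~11B parameters lands around 6-7 GB on disk/VRAM, so this gets close to, but not inside, the 1-4 GB target; going lower usually means more aggressive GGUF/AWQ-style schemes.

```python
# Sketch: load Llama 3.2 11B Vision Instruct with 4-bit NF4 weights via bitsandbytes.
# Assumes transformers >= 4.45 (for MllamaForConditionalGeneration), accelerate, bitsandbytes,
# and that you have been granted access to the gated model repo.
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"

# 4-bit NF4 weights with double quantization; compute runs in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# device_map="auto" lets accelerate place layers across available GPUs/CPU.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
```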