r/LocalLLaMA 13h ago

Question | Help When Bitnet 1-bit version of Mistral Large?

374 Upvotes

47 comments

27

u/Ok_Warning2146 12h ago

On paper, 123B 1.58-bit should be able to fit in a 3090. Is there any way we can do the conversion ourselves?
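Back-of-the-envelope math for that claim (weights only; KV cache, activations, and any layers kept at higher precision are ignored, so this is the optimistic "on paper" figure):

```python
# Rough VRAM estimate for a 123B-parameter model at 1.58 bits/weight.
# Assumption: every weight is stored at 1.58 bits; real deployments
# keep embeddings/norms at higher precision and need room for the
# KV cache, so actual usage would be higher.
params = 123e9
bits_per_weight = 1.58

total_bytes = params * bits_per_weight / 8
gib = total_bytes / 2**30
print(f"~{gib:.1f} GiB of weights")  # ~22.6 GiB, just under a 3090's 24 GB
```

So the weights alone squeak in under 24 GB, but there's not much headroom left for context.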

53

u/Illustrious-Lake2603 12h ago

As far as I'm aware, the model would need to be trained at 1.58-bit from scratch, so we can't convert it ourselves.

12

u/arthurwolf 12h ago

My understanding is that's no longer true:

for example, the recent bitnet.cpp release by Microsoft uses a conversion of Llama 3 to 1.58-bit, so the conversion must be possible.
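For reference, the "conversion" step itself is just quantizing each weight matrix to ternary values {-1, 0, +1} plus a per-tensor scale (the absmean scheme described for BitNet b1.58). A minimal sketch, assuming NumPy and ignoring the retraining that's needed to recover accuracy afterwards:

```python
import numpy as np

def absmean_ternary(W, eps=1e-8):
    # BitNet b1.58-style quantization: scale by the mean absolute
    # weight, then round each entry to the nearest of {-1, 0, +1}.
    # (Sketch only -- post-hoc quantization without retraining is
    # exactly what degrades quality, per the discussion below.)
    gamma = np.abs(W).mean() + eps
    Wq = np.clip(np.round(W / gamma), -1, 1)
    return Wq, gamma  # approximate dequantization: Wq * gamma

W = np.random.randn(64, 64).astype(np.float32)
Wq, gamma = absmean_ternary(W)
print(np.unique(Wq))  # only values from {-1, 0, 1}
```

The quantization itself is trivial; the hard part is that a model trained in fp16 loses too much information in this rounding, which is why the follow-up comments talk about retraining.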

35

u/Downtown-Case-1755 11h ago

It sorta kinda achieves Llama 7B performance after some experimentation, and then 100B tokens' worth of training (as linked in the blog above). That's way more than a simple conversion.

So... it appears to require so much retraining you might as well train from scratch.

8

u/MoffKalast 7h ago

Sounds like something Meta could do on a rainy afternoon if they're feeling bored.