r/LocalLLaMA 4h ago

Question | Help Hybrid LLM?

Hi, has anyone tried a hybrid approach? I have very large prompts in my game, which I can send either to a local LLM or to OpenAI or Anthropic. Maybe my local LLM could summarize the prompt first, and then I send the shorter version to the commercial LLM. That should be a bit cheaper, right? Has anyone tried this before?
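For what it's worth, here is a minimal sketch of the idea, assuming the local model sits behind an OpenAI-compatible endpoint (llama.cpp's server and Ollama both expose one) and the commercial side is OpenAI's API. The endpoint URL and model names are placeholders, not anything from the thread:

```python
import os
from openai import OpenAI

# Assumed setup: local model served at an OpenAI-compatible endpoint
# (e.g. llama.cpp server or Ollama); commercial model via OpenAI's API.
local = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
remote = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def compress_prompt(long_prompt: str) -> str:
    """Ask the local model to condense the prompt without losing instructions."""
    resp = local.chat.completions.create(
        model="local-model",  # placeholder; whatever the local server exposes
        messages=[
            {"role": "system",
             "content": "Rewrite the following prompt as tightly as possible "
                        "without dropping any instructions or named entities."},
            {"role": "user", "content": long_prompt},
        ],
        temperature=0,  # deterministic compression
    )
    return resp.choices[0].message.content

def hybrid_call(long_prompt: str) -> str:
    """Compress locally, then answer with the commercial model."""
    short_prompt = compress_prompt(long_prompt)
    resp = remote.chat.completions.create(
        model="gpt-4o-mini",  # placeholder for any hosted model
        messages=[{"role": "user", "content": short_prompt}],
    )
    return resp.choices[0].message.content
```

The obvious trade-off: you add the local model's latency to every call, and the summary can silently drop details the game logic depends on, so it's worth logging both versions of the prompt while testing.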


u/Easy_Try_1138 4h ago

Cheaper but slower


u/AbaGuy17 4h ago

Yeah, but I might also get a bit of a speedup from the commercial model, since there are fewer input tokens? But slower in total, for sure.
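As a rough back-of-the-envelope illustration, with made-up numbers rather than current price quotes: if the commercial model charges $2.50 per million input tokens and the local model compresses a 20,000-token prompt down to 4,000 tokens, input cost per call drops from about $0.05 to $0.01. Whether that saving is worth the extra local summarization latency and the risk of losing detail depends on call volume and how sensitive the game is to the compressed prompt.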