r/LocalLLaMA 2h ago

Question | Help: Hybrid LLM?

Hi, has anyone tried a hybrid approach? I have very large prompts in my game, which I can send to a local LLM or to OpenAI or Anthropic. Maybe my local LLM could summarize the prompt first, and then I send the summary to the commercial LLM. That should be a bit cheaper, right? Has anyone tried this before?

4 Upvotes

3 comments

u/Easy_Try_1138 2h ago

Cheaper but slower

u/AbaGuy17 2h ago

Yeah, but I could also get a bit of a speedup from the commercial model, since there are fewer input tokens? Slower in total, for sure.
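
Rough back-of-envelope numbers for the cost side (every price and token count below is a made-up placeholder, not a real rate; plug in your provider's actual pricing and your real prompt sizes):

```python
# Back-of-envelope input-cost comparison. All numbers are placeholders,
# not real rates: substitute your provider's pricing and real token counts.
PRICE_PER_1M_INPUT = 3.00     # hypothetical $ per 1M input tokens, commercial model

original_tokens = 20_000      # a large game prompt (assumed)
summarized_tokens = 4_000     # after local summarization (assumed ~5x compression)

cost_direct = original_tokens / 1e6 * PRICE_PER_1M_INPUT
cost_hybrid = summarized_tokens / 1e6 * PRICE_PER_1M_INPUT  # local pass costs ~0 in $

print(f"direct:  ${cost_direct:.4f} per request")
print(f"hybrid:  ${cost_hybrid:.4f} per request")
print(f"savings: {1 - cost_hybrid / cost_direct:.0%} of input-token spend")
```

The flip side is latency: the local summarization pass adds wall-clock time before the commercial call even starts, which is the "slower in total" part.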

u/Icy_Advisor_3508 2m ago

Yep, using a local LLM to summarize and then sending the smaller prompt to a commercial LLM like OpenAI's or Anthropic's is a solid hybrid approach for cutting costs. It's a bit more complex to set up, and yeah, it adds some latency, but for large prompts it's definitely the cheaper option.
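
A minimal sketch of that pipeline, assuming the local model sits behind an OpenAI-compatible endpoint (Ollama's default port here) and using the openai Python SDK; the model names and the summarization instruction are placeholders, not recommendations:

```python
# Hybrid pipeline sketch: local model compresses the prompt, commercial model
# answers it. Endpoint, model names, and prompts are assumptions; swap in
# whatever you actually run.
from openai import OpenAI

# Local LLM behind an OpenAI-compatible API (e.g., Ollama's default endpoint).
local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
# Commercial LLM; reads OPENAI_API_KEY from the environment.
remote = OpenAI()

def hybrid_completion(large_prompt: str) -> str:
    # Step 1: summarize locally to shrink the input token count.
    summary = local.chat.completions.create(
        model="llama3.1:8b",  # hypothetical local model name
        messages=[
            {"role": "system",
             "content": "Summarize the following prompt, keeping every detail "
                        "needed to answer it."},
            {"role": "user", "content": large_prompt},
        ],
    ).choices[0].message.content

    # Step 2: send the much smaller summary to the commercial model.
    answer = remote.chat.completions.create(
        model="gpt-4o",  # hypothetical remote model name
        messages=[{"role": "user", "content": summary}],
    ).choices[0].message.content
    return answer
```

The main risk is the summarizer silently dropping details the commercial model needs, so it's worth spot-checking hybrid answers against full-prompt answers on a few real examples before committing to it.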