tlg 7a0ff55eb5 fix: remove unsupported KV cache quantization in llama-cpp backend
GGML_TYPE_Q8_0 for type_k/type_v is not supported by this llama-cpp-python
version. Keep the reduced n_ctx=4096 for VRAM savings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 23:35:05 +02:00
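
For context, a minimal sketch of the backend configuration after this commit. The model path and n_gpu_layers value are hypothetical placeholders; only n_ctx=4096 and the removed type_k/type_v keyword arguments come from the commit itself:

```python
from llama_cpp import Llama

# Sketch of the post-commit model load. model_path and n_gpu_layers are
# hypothetical; only n_ctx and the removed kwargs reflect this commit.
llm = Llama(
    model_path="models/model.gguf",  # hypothetical path
    n_ctx=4096,                      # reduced context window to cut KV cache VRAM
    n_gpu_layers=-1,                 # hypothetical: offload all layers to GPU
    # type_k=GGML_TYPE_Q8_0,  # removed: Q8_0 KV cache quantization is not
    # type_v=GGML_TYPE_Q8_0,  # supported by this llama-cpp-python version
)
```

The trade-off: without quantized K/V tensors, each context token costs full-precision KV cache memory, so the smaller n_ctx is kept to stay within the VRAM budget.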
Languages
Python 91.2%
Shell 6.0%
Dockerfile 2.8%