tlg aa7a160118 fix: proper VRAM cleanup on model unload + CUDA alloc config
- Force gc.collect() before torch.cuda.empty_cache() to ensure all
  model references are released
- Set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True in container
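A minimal sketch of the unload path the commit message describes, assuming a simple holder object; `ModelHolder`, `load`, and `unload` are hypothetical names, not this repo's API, and the allocator config is set here via `os.environ` only for illustration (the commit sets it in the container environment):

```python
import gc
import os

import torch

# Assumption: the allocator config must be in place before the first CUDA
# allocation; the commit sets this in the container environment instead.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")


class ModelHolder:
    """Hypothetical wrapper; names are illustrative, not the repo's actual API."""

    def __init__(self):
        self.model = None

    def load(self, path: str):
        # Load weights directly onto the GPU.
        self.model = torch.load(path, map_location="cuda")
        return self.model

    def unload(self):
        # Drop our reference first and force a collection so cyclic
        # references to parameters/buffers are actually released...
        self.model = None
        gc.collect()
        # ...and only then ask the caching allocator to return cached
        # blocks to the driver, so the freed VRAM is visible outside
        # the process (e.g. in nvidia-smi).
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
```

The ordering is the point of the fix: `empty_cache()` can only release blocks whose tensors have no remaining Python references, which is why `gc.collect()` runs before it.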

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05 17:59:23 +02:00
2026-03-31 17:58:54 +02:00
Languages
Python 91.2%
Shell 6.0%
Dockerfile 2.8%