Commit 9208591

fix: use fp16 dtype for sm75 (#1136)

1 parent 5d0d40d

1 file changed: 5 additions, 0 deletions

File: python/sglang/srt/model_executor/model_runner.py (5 additions, 0 deletions)

@@ -148,6 +148,11 @@ def load_model(self):
             f"[gpu={self.gpu_id}] Load weight begin. "
             f"avail mem={get_available_gpu_memory(self.gpu_id):.2f} GB"
         )
+        if torch.cuda.get_device_capability()[0] < 8:
+            logger.info(
+                "Compute capability below sm80 use float16 due to lack of bfloat16 support."
+            )
+            self.server_args.dtype = "float16"
 
         monkey_patch_vllm_dummy_weight_loader()
         device_config = DeviceConfig()
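The dtype fallback this commit adds can be sketched as a standalone helper. Note that `pick_dtype` is a hypothetical name, not part of sglang; in the real code the capability tuple comes from `torch.cuda.get_device_capability()`, but here it is passed in as a parameter so the logic can be exercised without a GPU:

```python
def pick_dtype(requested: str, capability: tuple) -> str:
    """Downgrade bfloat16 to float16 on GPUs below sm80 (major capability < 8).

    `capability` is a (major, minor) tuple; in real code it would come from
    torch.cuda.get_device_capability().
    """
    major, _minor = capability
    if requested == "bfloat16" and major < 8:
        # Pre-Ampere GPUs (e.g. sm75 Turing cards such as the T4) lack
        # native bfloat16 support, so fall back to float16.
        return "float16"
    return requested


# sm75 (Turing): bfloat16 is downgraded to float16
print(pick_dtype("bfloat16", (7, 5)))  # float16
# sm80 (Ampere): bfloat16 is kept as requested
print(pick_dtype("bfloat16", (8, 0)))  # bfloat16
```

The commit itself only checks the major capability (`[0] < 8`), which is sufficient here because bfloat16 support begins with the sm80 (Ampere) generation.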

0 commit comments