Checklist
Describe the bug
The top log probabilities returned by the engine can be as low as -Infinity, as shown in this example:
```json
{
  "text": "Yes",
  "meta_info": {
    "id": "3125813d8a5b4751870955c586437e29",
    "finish_reason": { "type": "stop", "matched": 128009 },
    "prompt_tokens": 142,
    "input_token_logprobs": [],
    "output_token_logprobs": [
      [-0.3755025565624237, 87575, null],
      [0.0, 82, null],
      [0.0, 128009, null]
    ],
    "input_top_logprobs": [],
    "output_top_logprobs": [
      [
        [-0.3755025565624237, 87575, null],
        [-1.410658836364746, 9642, null],
        [-3.209486961364746, 56, null],
        [-3.568861961364746, 2822, null],
        [-7.584486961364746, 45, null]
      ],
      [
        [0.0, 82, null],
        [-Infinity, 2, null],
        [-Infinity, 0, null],
        [-Infinity, 3, null],
        [-Infinity, 1, null]
      ],
      [
        [0.0, 128009, null],
        [-Infinity, 2, null],
        [-Infinity, 0, null],
        [-Infinity, 3, null],
        [-Infinity, 1, null]
      ]
    ],
    "completion_tokens": 3,
    "cached_tokens": 0
  }
}
```
Because starlette by default doesn't allow NaN or Inf values in JSON responses, the `-Infinity` values trigger a `ValueError: Out of range float values are not JSON compliant`.
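Starlette's default `JSONResponse` serializes with `json.dumps(..., allow_nan=False)`, so the error can be reproduced with the standard library alone (the payload below is a trimmed, hypothetical version of the response above):

```python
import json

# Trimmed example payload containing a -Infinity top logprob
payload = {"output_top_logprobs": [[0.0, 82, None], [float("-inf"), 2, None]]}

# Default json.dumps permits non-finite floats, but emits the
# non-standard token "-Infinity", which is not valid JSON
print(json.dumps(payload))

# With allow_nan=False (what starlette's JSONResponse uses),
# serialization fails instead
try:
    json.dumps(payload, allow_nan=False)
except ValueError as e:
    print(e)  # Out of range float values are not JSON compliant
```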
A potential fix is to use `ORJSONResponse` as the `response_class` of `/generate`: orjson serializes `-inf` (and `nan`) as `null`.
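A minimal sketch of the proposed fix, assuming the endpoint is an ordinary FastAPI route (the handler name and signature here are placeholders; the real `/generate` handler lives in sglang's HTTP server module):

```python
from fastapi import FastAPI
from fastapi.responses import ORJSONResponse

app = FastAPI()

# The key change is response_class=ORJSONResponse: orjson serializes
# non-finite floats (nan, inf, -inf) as null instead of raising
# "ValueError: Out of range float values are not JSON compliant".
@app.post("/generate", response_class=ORJSONResponse)
async def generate(request: dict):  # placeholder handler
    ...
```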
Reproduction
This only happens with our fine-tuned classifier model, when we send the following request (the `regex` constraint presumably masks the logits of all disallowed tokens to `-inf`, which would explain the `-Infinity` top logprobs):
```shell
## SgLang Generate
curl -X "POST" "http://localhost:30000/generate" \
     -d $'{
  "sampling_params": {
    "regex": "(0|1)",
    "max_new_tokens": 5
  },
  "top_logprobs_num": 5,
  "return_logprob": true,
  "text": "Say 0 or 1"
}'
```
Environment
Python: 3.12.8 (main, Dec 6 2024, 19:59:28) [Clang 18.1.8 ]
CUDA available: True
GPU 0,1,2,3: NVIDIA L40S
GPU 0,1,2,3 Compute Capability: 8.9
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.127.05
PyTorch: 2.5.1+cu124
sglang: 0.4.3.post2
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.2.post1+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.9.0
numpy: 2.2.3
aiohttp: 3.11.13
fastapi: 0.115.11
hf_transfer: 0.1.9
huggingface_hub: 0.29.1
interegular: 0.3.3
modelscope: 1.23.1
orjson: 3.10.15
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.2.1
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.65.2
tiktoken: 0.9.0
anthropic: 0.49.0
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X SYS SYS SYS 0-47 0 N/A
GPU1 SYS X SYS SYS 0-47 0 N/A
GPU2 SYS SYS X SYS 0-47 0 N/A
GPU3 SYS SYS SYS X 0-47 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Hypervisor vendor: KVM
ulimit soft: 1048576