Checklist
Describe the bug
By default, Eagle speculative decoding uses a small “draft” model to generate candidate tokens, then validates them with the full target model. In FR-Spec, we further truncate the draft model’s vocabulary to just the most frequent tokens, shrinking the LM head and (in theory) speeding up each decode step without harming quality.
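For intuition, here is a minimal sketch of the vocabulary-truncation step (names are hypothetical and do not mirror sglang's internals; we assume the .pt file stores a 1-D tensor of the highest-frequency token ids):

import torch

def truncate_lm_head(lm_head_weight, token_map_path):
    # Assumption: the token map is a 1-D LongTensor of token ids ranked
    # by corpus frequency (here, the 32768 most frequent ids).
    token_ids = torch.load(token_map_path)
    # Keep only those rows of the output projection: the draft LM head
    # shrinks from [vocab_size, hidden] to [32768, hidden].
    truncated_weight = lm_head_weight[token_ids]
    return truncated_weight, token_ids

# A draft token sampled at local index i maps back to the full
# vocabulary as token_ids[i] before target-model verification.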
Caveat: In our tests using the provided default map (thunlp/LLaMA3-Instruct-8B-FR-Spec/freq_32768.pt), we actually saw a drop in performance relative to vanilla EAGLE decoding.
Furthermore, neither the original PR (#3822) nor the documentation (https://docs.sglang.ai/backend/speculative_decoding.html#EAGLE-2-Decoding-via-Frequency-Ranked-Speculative-Sampling) includes a concrete example demonstrating that FR-Spec provides a speedup over vanilla speculative decoding.
Reproduction
# Launch an EAGLE server with FR-Spec enabled.
python3 -m sglang.launch_server \
--model meta-llama/Meta-Llama-3-8B-Instruct \
--speculative-algorithm EAGLE \
--speculative-draft-model-path lmsys/sglang-EAGLE-LLaMA3-Instruct-8B \
--speculative-eagle-topk 8 \
--speculative-num-draft-tokens 64 \
--speculative-token-map thunlp/LLaMA3-Instruct-8B-FR-Spec/freq_32768.pt \
--dtype float16 \
--cuda-graph-max-bs 2
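For a vanilla-EAGLE baseline, relaunch the same command without --speculative-token-map, then time both servers with a simple probe (a minimal sketch; it assumes the default port 30000 and sglang's native /generate endpoint):

import time

import requests

# Send one greedy request and report wall-clock latency.
payload = {
    "text": "Explain speculative decoding in one paragraph.",
    "sampling_params": {"temperature": 0, "max_new_tokens": 256},
}
start = time.time()
resp = requests.post("http://127.0.0.1:30000/generate", json=payload)
resp.raise_for_status()
print(f"latency: {time.time() - start:.2f}s")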
Environment
python3 -m sglang.check_env:
Python: 3.10.18 | packaged by conda-forge | (main, Jun 4 2025, 14:45:41) [GCC 13.3.0]
CUDA available: True
GPU 0: NVIDIA A100-SXM4-80GB
GPU 0 Compute Capability: 8.0
CUDA_HOME: /apps/easybd/easybuild/amd/software/CUDA/12.4.0
NVCC: Cuda compilation tools, release 12.4, V12.4.99
CUDA Driver Version: 560.35.05
PyTorch: 2.7.1+cu126
sglang: 0.4.9.post4
sgl_kernel: 0.2.7
flashinfer_python: 0.2.9rc1
triton: 3.3.1
transformers: 4.53.2
torchao: 0.9.0
numpy: 2.2.6
aiohttp: 3.12.14
fastapi: 0.116.1
hf_transfer: 0.1.9
huggingface_hub: 0.34.1
interegular: 0.3.3
modelscope: 1.28.1
orjson: 3.11.1
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.7
python-multipart: 0.0.20
pyzmq: 27.0.0
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.21
openai: 1.97.1
tiktoken: 0.9.0
anthropic: 0.59.0
litellm: 1.74.8
decord: 0.6.0
NVIDIA Topology:
      GPU0  NIC0  NIC1  NIC2  NIC3  NIC4  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0   X    PXB   NODE  NODE  NODE  NODE                1              N/A
NIC0  PXB    X    NODE  NODE  NODE  NODE
NIC1  NODE  NODE   X    SYS   SYS   SYS
NIC2  NODE  NODE  SYS    X    PIX   SYS
NIC3  NODE  NODE  SYS   PIX    X    SYS
NIC4  NODE  NODE  SYS   SYS   SYS    X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
ulimit soft: 66000