Describe the bug
In python/sglang/srt/managers/data_parallel_controller.py line 222, the calculation of base_gpu_id is incorrect. The per-DP (data parallel) rank offset is multiplied only by server_args.tp_size, but each DP replica occupies tp_size * pp_size GPUs, so it must be multiplied by server_args.pp_size as well.
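A minimal sketch of the suggested change (the variable name dp_rank and the exact shape of the expression are assumptions inferred from the debug output below, not code copied from the repository):

```python
# Hypothetical sketch of the fix in data_parallel_controller.py; dp_rank and
# the surrounding expression are assumed, not copied from the repository.

# Before: DP replicas are spaced only tp_size GPUs apart, so their GPU
# ranges overlap whenever pp_size > 1.
base_gpu_id = (
    server_args.base_gpu_id
    + dp_rank * server_args.tp_size * server_args.gpu_id_step
)

# After: each DP replica owns tp_size * pp_size GPUs, so the DP offset
# must include pp_size as well.
base_gpu_id = (
    server_args.base_gpu_id
    + dp_rank * server_args.tp_size * server_args.pp_size * server_args.gpu_id_step
)
```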
Fix PR
#10741
Details
- Result before the fix
(base) ubuntu@Hydra-Store-01:~/miniconda3/lib/python3.12/site-packages$ python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-R1-Distill-Qwen-7B --pp-size 2 --dp-size 2
WARNING:sglang.srt.server_args:Pipeline parallelism is incompatible with overlap schedule.
[2025-09-22 10:25:08] server_args=ServerArgs(model_path='deepseek-ai/DeepSeek-R1-Distill-Qwen-7B', tokenizer_path='deepseek-ai/DeepSeek-R1-Distill-Qwen-7B', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', model_loader_extra_config='{}', trust_remote_code=False, context_length=None, is_embedding=False, enable_multimodal=None, revision=None, model_impl='auto', host='127.0.0.1', port=30000, skip_server_warmup=False, warmups=None, nccl_port=None, dtype='auto', quantization=None, quantization_param_path=None, kv_cache_dtype='auto', mem_fraction_static=0.867, max_running_requests=None, max_total_tokens=None, chunked_prefill_size=2048, max_prefill_tokens=16384, schedule_policy='fcfs', schedule_conservativeness=1.0, cpu_offload_gb=0, page_size=1, hybrid_kvcache_ratio=None, swa_full_tokens_ratio=0.8, disable_hybrid_swa_memory=False, device='cuda', tp_size=1, pp_size=2, max_micro_batch_size=None, stream_interval=1, stream_output=False, random_seed=821174131, constrained_json_whitespace_pattern=None, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, sleep_on_idle=False, log_level='info', log_level_http=None, log_requests=False, log_requests_level=0, crash_dump_folder=None, show_time_cost=False, enable_metrics=False, enable_metrics_for_all_schedulers=False, bucket_time_to_first_token=None, bucket_inter_token_latency=None, bucket_e2e_request_latency=None, collect_tokens_histogram=False, decode_log_interval=40, enable_request_time_stats_logging=False, kv_events_config=None, api_key=None, served_model_name='deepseek-ai/DeepSeek-R1-Distill-Qwen-7B', chat_template=None, completion_template=None, file_storage_path='sglang_storage', enable_cache_report=False, reasoning_parser=None, tool_call_parser=None, dp_size=2, load_balance_method='round_robin', dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', preferred_sampling_params=None, enable_lora=None, max_lora_rank=None, lora_target_modules=None, lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend=None, sampling_backend='flashinfer', grammar_backend='xgrammar', mm_attention_backend=None, speculative_algorithm=None, speculative_draft_model_path=None, speculative_num_steps=None, speculative_eagle_topk=None, speculative_num_draft_tokens=None, speculative_accept_threshold_single=1.0, speculative_accept_threshold_acc=1.0, speculative_token_map=None, ep_size=1, enable_ep_moe=False, enable_deepep_moe=False, enable_flashinfer_moe=False, enable_flashinfer_allreduce_fusion=False, deepep_mode='auto', ep_num_redundant_experts=0, ep_dispatch_algorithm='static', init_expert_location='trivial', enable_eplb=False, eplb_algorithm='auto', eplb_rebalance_num_iterations=1000, eplb_rebalance_layers_per_chunk=None, expert_distribution_recorder_mode=None, expert_distribution_recorder_buffer_size=1000, enable_expert_distribution_metrics=False, deepep_config=None, moe_dense_tp_size=None, enable_hierarchical_cache=False, hicache_ratio=2.0, hicache_size=0, hicache_write_policy='write_through_selective', hicache_io_backend='', hicache_storage_backend=None, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, cuda_graph_max_bs=8, cuda_graph_bs=None, disable_cuda_graph=False, disable_cuda_graph_padding=False, enable_profile_cuda_graph=False, enable_nccl_nvls=False, enable_tokenizer_batch_encode=False, disable_outlines_disk_cache=False, 
disable_custom_all_reduce=False, enable_mscclpp=False, disable_overlap_schedule=True, enable_mixed_chunk=False, enable_dp_attention=False, enable_dp_lm_head=False, enable_two_batch_overlap=False, enable_torch_compile=False, torch_compile_max_bs=32, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, flashinfer_mla_disable_ragged=False, disable_shared_experts_fusion=False, disable_chunked_prefix_cache=False, disable_fast_image_processor=False, enable_return_hidden_states=False, enable_triton_kernel_moe=False, debug_tensor_dump_output_folder=None, debug_tensor_dump_input_file=None, debug_tensor_dump_inject=False, debug_tensor_dump_prefill_only=False, disaggregation_mode='null', disaggregation_transfer_backend='mooncake', disaggregation_bootstrap_port=8998, disaggregation_decode_tp=None, disaggregation_decode_dp=None, disaggregation_prefill_pp=1, disaggregation_ib_device=None, num_reserved_decode_tokens=512, pdlb_url=None, custom_weight_loader=[], weight_loader_disable_mmap=False, enable_pdmux=False, sm_group_num=3)
[2025-09-22 10:25:13] Launch DP0 starting at GPU #0.
[2025-09-22 10:25:13] Launch DP1 starting at GPU #1.
gpu_id: 1, server_args.base_gpu_id: 0, base_gpu_id: 1, pp_rank: 0, pp_size_per_node: 2, tp_rank: 0, tp_size_per_node: 1, server_args.gpu_id_step: 1, ((pp_rank % pp_size_per_node) * tp_size_per_node): 0, (tp_rank % tp_size_per_node) * server_args.gpu_id_step: 0
gpu_id: 0, server_args.base_gpu_id: 0, base_gpu_id: 0, pp_rank: 0, pp_size_per_node: 2, tp_rank: 0, tp_size_per_node: 1, server_args.gpu_id_step: 1, ((pp_rank % pp_size_per_node) * tp_size_per_node): 0, (tp_rank % tp_size_per_node) * server_args.gpu_id_step: 0
gpu_id: 1, server_args.base_gpu_id: 0, base_gpu_id: 0, pp_rank: 1, pp_size_per_node: 2, tp_rank: 0, tp_size_per_node: 1, server_args.gpu_id_step: 1, ((pp_rank % pp_size_per_node) * tp_size_per_node): 1, (tp_rank % tp_size_per_node) * server_args.gpu_id_step: 0
gpu_id: 2, server_args.base_gpu_id: 0, base_gpu_id: 1, pp_rank: 1, pp_size_per_node: 2, tp_rank: 0, tp_size_per_node: 1, server_args.gpu_id_step: 1, ((pp_rank % pp_size_per_node) * tp_size_per_node): 1, (tp_rank % tp_size_per_node) * server_args.gpu_id_step: 0
......
[2025-09-22 10:25:22 DP0 PP0] Load weight end. type=Qwen2ForCausalLM, dtype=torch.bfloat16, avail mem=14.53 GB, mem usage=7.21 GB.
[2025-09-22 10:25:22 DP1 PP1] Capture cuda graph begin. This can take up to several minutes. avail mem=8.73 GB
[2025-09-22 10:25:22 DP1 PP0] Capture cuda graph begin. This can take up to several minutes. avail mem=1.22 GB
[2025-09-22 10:25:22 DP1 PP1] Capture cuda graph bs [1, 2, 4, 8]
[2025-09-22 10:25:22 DP0 PP1] Load weight end. type=Qwen2ForCausalLM, dtype=torch.bfloat16, avail mem=1.22 GB, mem usage=13.00 GB.
Capturing batches (bs=8 avail_mem=8.73 GB): 0%| | 0/4 [00:00<?, ?it/s][2025-09-22 10:25:22 DP0 PP1] Scheduler hit an exception: Traceback (most recent call last):
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/managers/scheduler.py", line 2936, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, pp_rank, dp_rank)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/managers/scheduler.py", line 344, in __init__
self.tp_worker = TpWorkerClass(
^^^^^^^^^^^^^^
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/managers/tp_worker.py", line 81, in __init__
self.model_runner = ModelRunner(
^^^^^^^^^^^^
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/model_executor/model_runner.py", line 233, in __init__
self.initialize(min_per_gpu_memory)
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/model_executor/model_runner.py", line 311, in initialize
self.init_memory_pool(
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/model_executor/model_runner.py", line 1133, in init_memory_pool
raise RuntimeError(
RuntimeError: Not enough memory. Please try to increase --mem-fraction-static.
[2025-09-22 10:25:22 DP1 PP0] Capture cuda graph bs [1, 2, 4, 8]
[2025-09-22 10:25:22 DP0 PP0] Scheduler hit an exception: Traceback (most recent call last):
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/managers/scheduler.py", line 2936, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, pp_rank, dp_rank)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/managers/scheduler.py", line 344, in __init__
self.tp_worker = TpWorkerClass(
^^^^^^^^^^^^^^
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/managers/tp_worker.py", line 81, in __init__
self.model_runner = ModelRunner(
^^^^^^^^^^^^
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/model_executor/model_runner.py", line 233, in __init__
self.initialize(min_per_gpu_memory)
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/model_executor/model_runner.py", line 311, in initialize
self.init_memory_pool(
File "/home/ubuntu/miniconda3/lib/python3.12/site-packages/sglang/srt/model_executor/model_runner.py", line 1133, in init_memory_pool
raise RuntimeError(
RuntimeError: Not enough memory. Please try to increase --mem-fraction-static.
Capturing batches (bs=8 avail_mem=1.22 GB): 0%| | 0/4 [00:00<?, ?it/s][2025-09-22 10:25:23] Child process unexpectedly failed with exitcode=131. pid=15064
- Result after the fix
(base) ubuntu@Hydra-Store-01:~/miniconda3/lib/python3.12/site-packages$ python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-R1-Distill-Qwen-7B --pp-size 2 --dp-size 2
WARNING:sglang.srt.server_args:Pipeline parallelism is incompatible with overlap schedule.
[2025-09-22 10:31:21] server_args=ServerArgs(model_path='deepseek-ai/DeepSeek-R1-Distill-Qwen-7B', tokenizer_path='deepseek-ai/DeepSeek-R1-Distill-Qwen-7B', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', model_loader_extra_config='{}', trust_remote_code=False, context_length=None, is_embedding=False, enable_multimodal=None, revision=None, model_impl='auto', host='127.0.0.1', port=30000, skip_server_warmup=False, warmups=None, nccl_port=None, dtype='auto', quantization=None, quantization_param_path=None, kv_cache_dtype='auto', mem_fraction_static=0.867, max_running_requests=None, max_total_tokens=None, chunked_prefill_size=2048, max_prefill_tokens=16384, schedule_policy='fcfs', schedule_conservativeness=1.0, cpu_offload_gb=0, page_size=1, hybrid_kvcache_ratio=None, swa_full_tokens_ratio=0.8, disable_hybrid_swa_memory=False, device='cuda', tp_size=1, pp_size=2, max_micro_batch_size=None, stream_interval=1, stream_output=False, random_seed=663390546, constrained_json_whitespace_pattern=None, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, sleep_on_idle=False, log_level='info', log_level_http=None, log_requests=False, log_requests_level=0, crash_dump_folder=None, show_time_cost=False, enable_metrics=False, enable_metrics_for_all_schedulers=False, bucket_time_to_first_token=None, bucket_inter_token_latency=None, bucket_e2e_request_latency=None, collect_tokens_histogram=False, decode_log_interval=40, enable_request_time_stats_logging=False, kv_events_config=None, api_key=None, served_model_name='deepseek-ai/DeepSeek-R1-Distill-Qwen-7B', chat_template=None, completion_template=None, file_storage_path='sglang_storage', enable_cache_report=False, reasoning_parser=None, tool_call_parser=None, dp_size=2, load_balance_method='round_robin', dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', preferred_sampling_params=None, enable_lora=None, max_lora_rank=None, lora_target_modules=None, lora_paths=None, max_loras_per_batch=8, lora_backend='triton', attention_backend=None, sampling_backend='flashinfer', grammar_backend='xgrammar', mm_attention_backend=None, speculative_algorithm=None, speculative_draft_model_path=None, speculative_num_steps=None, speculative_eagle_topk=None, speculative_num_draft_tokens=None, speculative_accept_threshold_single=1.0, speculative_accept_threshold_acc=1.0, speculative_token_map=None, ep_size=1, enable_ep_moe=False, enable_deepep_moe=False, enable_flashinfer_moe=False, enable_flashinfer_allreduce_fusion=False, deepep_mode='auto', ep_num_redundant_experts=0, ep_dispatch_algorithm='static', init_expert_location='trivial', enable_eplb=False, eplb_algorithm='auto', eplb_rebalance_num_iterations=1000, eplb_rebalance_layers_per_chunk=None, expert_distribution_recorder_mode=None, expert_distribution_recorder_buffer_size=1000, enable_expert_distribution_metrics=False, deepep_config=None, moe_dense_tp_size=None, enable_hierarchical_cache=False, hicache_ratio=2.0, hicache_size=0, hicache_write_policy='write_through_selective', hicache_io_backend='', hicache_storage_backend=None, enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, disable_radix_cache=False, cuda_graph_max_bs=8, cuda_graph_bs=None, disable_cuda_graph=False, disable_cuda_graph_padding=False, enable_profile_cuda_graph=False, enable_nccl_nvls=False, enable_tokenizer_batch_encode=False, disable_outlines_disk_cache=False, 
disable_custom_all_reduce=False, enable_mscclpp=False, disable_overlap_schedule=True, enable_mixed_chunk=False, enable_dp_attention=False, enable_dp_lm_head=False, enable_two_batch_overlap=False, enable_torch_compile=False, torch_compile_max_bs=32, torchao_config='', enable_nan_detection=False, enable_p2p_check=False, triton_attention_reduce_in_fp32=False, triton_attention_num_kv_splits=8, num_continuous_decode_steps=1, delete_ckpt_after_loading=False, enable_memory_saver=False, allow_auto_truncate=False, enable_custom_logit_processor=False, flashinfer_mla_disable_ragged=False, disable_shared_experts_fusion=False, disable_chunked_prefix_cache=False, disable_fast_image_processor=False, enable_return_hidden_states=False, enable_triton_kernel_moe=False, debug_tensor_dump_output_folder=None, debug_tensor_dump_input_file=None, debug_tensor_dump_inject=False, debug_tensor_dump_prefill_only=False, disaggregation_mode='null', disaggregation_transfer_backend='mooncake', disaggregation_bootstrap_port=8998, disaggregation_decode_tp=None, disaggregation_decode_dp=None, disaggregation_prefill_pp=1, disaggregation_ib_device=None, num_reserved_decode_tokens=512, pdlb_url=None, custom_weight_loader=[], weight_loader_disable_mmap=False, enable_pdmux=False, sm_group_num=3)
[2025-09-22 10:31:27] Launch DP0 starting at GPU #0.
[2025-09-22 10:31:27] Launch DP1 starting at GPU #2.
gpu_id: 0, server_args.base_gpu_id: 0, base_gpu_id: 0, pp_rank: 0, pp_size_per_node: 2, tp_rank: 0, tp_size_per_node: 1, server_args.gpu_id_step: 1, ((pp_rank % pp_size_per_node) * tp_size_per_node): 0, (tp_rank % tp_size_per_node) * server_args.gpu_id_step: 0
gpu_id: 2, server_args.base_gpu_id: 0, base_gpu_id: 2, pp_rank: 0, pp_size_per_node: 2, tp_rank: 0, tp_size_per_node: 1, server_args.gpu_id_step: 1, ((pp_rank % pp_size_per_node) * tp_size_per_node): 0, (tp_rank % tp_size_per_node) * server_args.gpu_id_step: 0
gpu_id: 1, server_args.base_gpu_id: 0, base_gpu_id: 0, pp_rank: 1, pp_size_per_node: 2, tp_rank: 0, tp_size_per_node: 1, server_args.gpu_id_step: 1, ((pp_rank % pp_size_per_node) * tp_size_per_node): 1, (tp_rank % tp_size_per_node) * server_args.gpu_id_step: 0
gpu_id: 3, server_args.base_gpu_id: 0, base_gpu_id: 2, pp_rank: 1, pp_size_per_node: 2, tp_rank: 0, tp_size_per_node: 1, server_args.gpu_id_step: 1, ((pp_rank % pp_size_per_node) * tp_size_per_node): 1, (tp_rank % tp_size_per_node) * server_args.gpu_id_step: 0
......
[2025-09-22 10:31:43] INFO: 127.0.0.1:44242 - "POST /generate HTTP/1.1" 200 OK
[2025-09-22 10:31:43] The server is fired up and ready to roll!
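For reference, the before/after GPU assignments shown in the logs above can be reproduced with a short standalone script (an illustration following the formula printed in the debug lines, not the controller's actual code):

```python
# Reproduce the GPU-id arithmetic from the debug lines for this run's
# configuration: tp_size=1, pp_size=2, dp_size=2, gpu_id_step=1, base_gpu_id=0.
tp_size, pp_size, dp_size = 1, 2, 2
gpu_id_step, base = 1, 0

for label, dp_stride in [
    ("before the fix (DP offset = tp_size)", tp_size),
    ("after the fix (DP offset = tp_size * pp_size)", tp_size * pp_size),
]:
    print(label)
    for dp_rank in range(dp_size):
        # Per-DP-rank starting GPU; the fix only changes dp_stride.
        base_gpu_id = base + dp_rank * dp_stride * gpu_id_step
        for pp_rank in range(pp_size):
            for tp_rank in range(tp_size):
                gpu_id = base_gpu_id + (pp_rank * tp_size + tp_rank) * gpu_id_step
                print(f"  DP{dp_rank} PP{pp_rank} -> GPU {gpu_id}")
```

Before the fix this prints GPUs 0/1 for DP0 and GPUs 1/2 for DP1, i.e. DP0 PP1 and DP1 PP0 both land on GPU 1, which explains the "Not enough memory" crash above; after the fix, DP1 starts at GPU 2 and each of the four ranks gets its own GPU.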
Reproduction
$ python -m sglang.launch_server --model-path deepseek-ai/DeepSeek-R1-Distill-Qwen-7B --pp-size 2 --dp-size 2
Environment
(base) ubuntu@Hydra-Store-01:~/miniconda3/lib/python3.12/site-packages$ python3 -m sglang.check_env
Python: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3: NVIDIA L4
GPU 0,1,2,3 Compute Capability: 8.9
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.20
CUDA Driver Version: 570.158.01
PyTorch: 2.7.1+cu126
sglang: 0.4.9.post3
sgl_kernel: 0.2.6.post1
flashinfer_python: 0.2.7.post1
triton: 3.3.1
transformers: 4.53.2
torchao: 0.9.0
numpy: 2.3.1
aiohttp: 3.12.14
fastapi: 0.116.1
hf_transfer: 0.1.9
huggingface_hub: 0.33.4
interegular: 0.3.3
modelscope: 1.28.0
orjson: 3.11.0
outlines: 0.1.11
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.3
python-multipart: 0.0.20
pyzmq: 27.0.0
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.21
openai: 1.97.1
tiktoken: 0.9.0
anthropic: 0.58.2
litellm: 1.74.7
decord: 0.6.0
NVIDIA Topology:
        GPU0   GPU1   GPU2   GPU3   CPU Affinity   NUMA Affinity   GPU NUMA ID
GPU0    X      NODE   NODE   NODE   0-47           0               N/A
GPU1    NODE   X      NODE   NODE   0-47           0               N/A
GPU2    NODE   NODE   X      NODE   0-47           0               N/A
GPU3    NODE   NODE   NODE   X      0-47           0               N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Hypervisor vendor: KVM
ulimit soft: 1024