
[BUG] OpenMPI backend doesn't support custom mpi launch args #1924

@flyhighzy

Description


Describe the bug
In our deployment we need MPI to communicate over a custom SSH port rather than the default 22. With plain OpenMPI we can pass `-mca plm_rsh_args -p 5000` to specify the port, but this does not work through DeepSpeed.

I looked into the source code and found that the OpenMPI multi-node runner ignores the arguments passed on the command line via `--launcher_args`.
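To illustrate the missing behavior, here is a minimal sketch (not DeepSpeed's actual code; `build_mpirun_cmd` is a hypothetical helper) of how the runner could splice a flat `--launcher_args` string into the `mpirun` command it builds:

```python
import shlex

def build_mpirun_cmd(num_procs, hostfile, launcher_args=""):
    # Hypothetical sketch of the fix: forward the user-supplied
    # --launcher_args string into the mpirun invocation.
    cmd = ["mpirun", "-n", str(num_procs), "-hostfile", hostfile]
    # shlex.split tokenizes like a shell, so multi-token options such as
    # "-mca plm_rsh_args -p 5000" keep their order and pairing.
    cmd += shlex.split(launcher_args)
    return cmd

print(build_mpirun_cmd(2, "myhostfile",
                       "--allow-run-as-root -mca plm_rsh_args -p 5000"))
```

With this, the extra flags would reach OpenMPI instead of being silently dropped.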

Expected behavior
The `deepspeed` binary should accept `--launcher_args` and forward them to the OpenMPI launcher.

ds_report output

--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
sparse_attn ............ [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/usr/local/lib/python3.6/dist-packages/torch']
torch version .................... 1.10.0+cu102
torch cuda version ............... 10.2
nvcc version ..................... 10.2
deepspeed install path ........... ['/usr/local/lib/python3.6/dist-packages/deepspeed']
deepspeed info ................... 0.5.6, unknown, unknown
deepspeed wheel compiled w. ...... torch 0.0, cuda 0.0

System info (please complete the following information):

  • OS: Ubuntu 18.04.6 LTS
  • GPU count and types : dynamic number of nodes, V100 or A100
  • Interconnects (if applicable): not specified
  • Python version: 3.6.9

Launcher context
Are you launching your experiment with the deepspeed launcher, MPI, or something else?
The deepspeed launcher, with a command like:

deepspeed --master_addr ${master_addr} --master_port 1234 --hostfile ${HOST_FILE} \
       --launcher OpenMPI \
       --launcher_args "--allow-run-as-root -mca plm_rsh_args -p 5000" \
       {user_script} {user_script_args}
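For reference, a quick check (plain Python, nothing DeepSpeed-specific) of how that `--launcher_args` value tokenizes, i.e. the argv entries `mpirun` should receive once forwarding works:

```python
import shlex

# The exact --launcher_args value from the command above.
launcher_args = "--allow-run-as-root -mca plm_rsh_args -p 5000"

# Shell-style tokenization: these are the entries that should be
# appended to the mpirun invocation.
tokens = shlex.split(launcher_args)
print(tokens)
# → ['--allow-run-as-root', '-mca', 'plm_rsh_args', '-p', '5000']
```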
