Chapter IV. iWARP (RDMA)
4.2.3. Configuration of various MPIs (Installation and Setup)
Intel-MPI
i. Download the latest Intel MPI from the Intel website
ii. Copy the license file (.lic file) into the l_mpi_p_x.y.z directory
iii. Create machines.LINUX (a list of node names) in the l_mpi_p_x.y.z directory
iv. Select advanced options during installation and register the MPI.
v. Install software on every node.
[root@host]# ./install.py
vi. Register Intel MPI with mpi-selector and set it as the default (do this on all nodes).
[root@host]# mpi-selector --register intelmpi --source-dir
/opt/intel/impi/3.1/bin/
[root@host]# mpi-selector --set intelmpi
vii. Edit .bashrc and add these lines:
export RSH=ssh                     # use ssh to reach remote nodes
export DAPL_MAX_INLINE=64          # maximum DAPL inline data size (bytes)
export I_MPI_DEVICE=rdssm:chelsio  # DAPL RDMA + shared memory; "chelsio" names the dat.conf entry
export MPIEXEC_TIMEOUT=180         # mpiexec timeout (seconds)
export MPI_BIT_MODE=64             # run in 64-bit mode
viii. Log out and log back in.
ix. Populate mpd.hosts with node names.
x. Contact Intel to obtain their MPI with DAPL support.
xi. To run Intel MPI applications:
mpdboot -n <no_of_nodes_in_cluster> -r ssh
mpdtrace
mpiexec -ppn -n 2 /opt/intel/impi/3.1/tests/IMB-3.1/IMB-MPI1
Note:
The hosts in the mpd.hosts file should be Chelsio interface IP addresses (see the example below).
I_MPI_DEVICE=rdssm:chelsio assumes you have an entry named chelsio in /etc/dat.conf (an illustrative entry is also sketched below).
The MPIEXEC_TIMEOUT value may need to be increased if heavy traffic is going across the systems.
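For illustration, a minimal mpd.hosts simply lists the Chelsio interface IP address of each node, one per line. The addresses below are placeholders; substitute the addresses configured on your Chelsio interfaces:
192.168.1.111
192.168.1.112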
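As a rough sketch only, a chelsio entry in /etc/dat.conf might look like the following line. The DAPL version (u1.2), provider library (libdaplcma.so.1), and interface name (eth2) are assumptions and must match the DAPL provider installed on your system and the Chelsio port in use:
chelsio u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "eth2 0" ""
The first field (chelsio) is the interface adapter name that I_MPI_DEVICE=rdssm:chelsio refers to.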