[mvapich-discuss] Installing mvapich2-2.1 on Dell cluster

Jonathan Perkins perkinjo at cse.ohio-state.edu
Tue Jun 14 12:56:52 EDT 2016


Hello, Dr. Diersing.  I've taken a quick look to see if I could find a good
site, but I haven't found one to my liking at this point.  However, I will
point you to http://fscked.org/writings/clusters/cluster-1.html as an
example.

The main guidelines that I think you should follow are:

- Set up one head node that users initially log into.
- Use NFS to keep files synchronized and available to each compute node
(a rough sketch follows this list).
- Use something like NIS or Kerberos/LDAP to synchronize user accounts
between the head node and the compute nodes.
- Set up the firewall to allow open communication between the compute nodes
and the head node, but not to the outside world.
- If you manage a larger set of users, consider using a scheduler such as
SLURM or TORQUE (a sample job script also follows this list).
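
As a rough illustration of the NFS point above, here is a minimal sketch.
The hostname (head), install path (/opt/mvapich2-2.1), and subnet
(192.168.1.0/24) are made-up placeholders, so substitute whatever matches
your cluster:

    # /etc/exports on the head node: share the MPI install and the home
    # directories with the compute subnet
    /opt/mvapich2-2.1   192.168.1.0/24(ro,sync,no_subtree_check)
    /home               192.168.1.0/24(rw,sync,no_subtree_check)

    # /etc/fstab on each compute node: mount the same paths from the head node
    head:/opt/mvapich2-2.1  /opt/mvapich2-2.1  nfs  defaults  0 0
    head:/home              /home              nfs  defaults  0 0

    # a shared login script, e.g. /etc/profile.d/mvapich2.sh, so every user
    # picks up the shared install without per-user setup
    export PATH=/opt/mvapich2-2.1/bin:$PATH
    export LD_LIBRARY_PATH=/opt/mvapich2-2.1/lib:$LD_LIBRARY_PATH

If you do adopt a scheduler, a SLURM job is just a shell script with #SBATCH
directives; the node counts and program name below are only an example:

    #!/bin/bash
    #SBATCH --job-name=mpihello
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=8
    # launch under the allocation; use mpiexec here instead if your MVAPICH2
    # build was not configured with SLURM support
    srun ./mpihello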

If you have any specific follow-up questions I'll be happy to try and help
answer them.  Feel free to ping me off-list about anything that is more
specific to your situation.


On Mon, Jun 13, 2016 at 2:46 PM Robert Diersing <Robert.Diersing at tamuk.edu>
wrote:

> Ladies and Gentlemen:
>
> Some time ago I posted a request for information about installing/using
> mvapich2-2.1 on a cluster.  Someone kindly pointed me to the manual, but I
> don’t find it too helpful for cluster installations.
>
> Here is what I have done so far:
>
> I have installed mvapich2-2.1 in a directory that is shared across all
> physical nodes via NFS.
>
> I have a single user ID and group that is identical on all nodes.
>
> I have run the mpihello program successfully on a single node, and have
> also submitted and run it successfully across all nodes (152 cores total)
> using mpiexec.
>
> My questions are:
>
> What is the standard way to make MPI available to all users?  They will
> all be logging in to one of two machines, and it can be one machine if
> necessary.
>
> I am not able to run the benchmarks with mpirun_rsh but I suspect this is
> because rsh is not running on the nodes.
>
> Perhaps someone can recommend a web site with more details on cluster
> installations.
>
> Regards,
>
> Robert J. Diersing, Ph.D.
>
> Professor Emeritus of Computer Science
>
> HPCC System Administrator
>
> Executive Director of Special Projects
>
> Dean’s Office
>
> Frank H. Dotterweich College of Engineering
>
> EC 271
>
> 361.593.3964
>
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mailman.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>