[mvapich-discuss] SDR or DDR

Mike Hanby mhanby at uab.edu
Wed Feb 7 17:01:56 EST 2007


Thanks Scott, I wasn't aware of that. It looks like there's also an
mpicc.p (PGI, perhaps?).

 

I recompiled Amber9 using /usr/local/topspin/mpi/mpich/mpicc.i (and
mpiCC.i and mpif77.i), and now it appears to be running under mpirun_ssh
(at least I'm not getting a bunch of segfaults).
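
One way to steer a build like Amber's at the Intel wrappers without
editing its makefiles is to put a small directory of symlinks ahead of
the stock wrappers on PATH (a sketch only; the ~/intel-mpi path is made
up):

    mkdir -p ~/intel-mpi/bin
    for t in cc CC f77 f90; do
        ln -sf /usr/local/topspin/mpi/mpich/bin/mpi$t.i ~/intel-mpi/bin/mpi$t
    done
    export PATH=~/intel-mpi/bin:$PATH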

 

Thanks again,

 

Mike

 

________________________________

From: Scott Weitzenkamp (sweitzen) [mailto:sweitzen at cisco.com] 
Sent: Wednesday, February 07, 2007 12:48
To: Mike Hanby
Subject: RE: [mvapich-discuss] SDR or DDR

 

The Topspin roll should include Intel compiler support; I'll admit this
is not well documented, and we are working to correct that. Look for
mpicc.i, mpiCC.i, mpif77.i, and mpif90.i in
/usr/local/topspin/mpi/mpich/bin/.
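
For example (assuming these behave like stock MPICH wrappers, where
-show prints the underlying compiler command line):

    ls /usr/local/topspin/mpi/mpich/bin/mpi*.i
    /usr/local/topspin/mpi/mpich/bin/mpicc.i -show
    /usr/local/topspin/mpi/mpich/bin/mpicc.i -o mpi_hello mpi_hello.c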

 

Scott Weitzenkamp

SQA and Release Manager

Server Virtualization Business Unit

Cisco Systems

 

________________________________


	From: mvapich-discuss-bounces at cse.ohio-state.edu
[mailto:mvapich-discuss-bounces at cse.ohio-state.edu] On Behalf Of Mike
Hanby
	Sent: Wednesday, February 07, 2007 10:30 AM
	To: mvapich-discuss at cse.ohio-state.edu
	Subject: RE: [mvapich-discuss] SDR or DDR

	Thanks, I feel like a buffoon :-)

	 

	I compiled MVAPICH 0.9.8 on an x86_64 Rocks 4.2.1 cluster
using the Intel 9.1 compilers. I have the Topspin roll installed on the
cluster, where /usr/local/topspin contains the libraries and binaries
for InfiniBand. I could use the MVAPICH included with the Topspin roll;
however, my users want their applications compiled with the Intel
compilers, and the MVAPICH on the roll is compiled with GNU.

	 

	If I compile a simple hello-world MPI C program using mpicc and
then run it with the following command, I get a segmentation fault:

	$ mpirun_rsh -np 1 node1 ~/mpi_hello

	bash: line 1: 12801 Segmentation fault      /usr/bin/env
MPIRUN_MPD=0 MPIRUN_HOST=headnode MPIRUN_PORT=41013
MPIRUN_PROCESSES='node1:' MPIRUN_RANK=0 MPIRUN_NPROCS=1 MPIRUN_ID=12669
/home/makeuser/mpi_hello
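
	For completeness, the compile step is just the wrapper with no
special flags (the exact command lines here are assumed):

	    mpicc -o ~/mpi_hello mpi_hello.c
	    mpirun_rsh -np 1 node1 ~/mpi_hello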

	 

	I looked through the make log and didn't see any errors, just a
bunch of warnings like:

	graph_nbr.c(83): warning #187: use of "=" where "==" may have been intended
	         ( (topo->type != MPI_GRAPH)    && (mpi_errno = MPI_ERR_TOPOLOGY))

	 

	I've also compiled Amber9 using mpicc, mpiCC, and mpif77, and I
get a segmentation fault there too when I attempt to run sander.MPI (an
Amber9 binary).

	 

	Something tells me I'm doing something wrong.

	 

	Here are the steps I followed to compile:

	I edit the file make.mvapich.vapi as follows:

	MTHOME=/usr/local/topspin
	PREFIX=/share/apps/mvapich/intel/mvapich-0.9.8-64
	export CC=icc
	export CXX=icpc
	export F77=ifort
	export F90=ifort
	IO_BUS=_PCI_EX_  # For PCI Express
	LINKS=_SDR_
	export CFLAGS="-D${ARCH} -DUSE_INLINE -DEARLY_SEND_COMPLETION -DRDMA_FAST_PATH \
	               -DVIADEV_RPUT_SUPPORT -DLAZY_MEM_UNREGISTER -D_SMP_ -D_SMP_RNDV_ \
	               $SUPPRESS -D${IO_BUS} -D${LINKS} \
	               ${HAVE_MPD_RING} -I${MTHOME}/include -I${MTHOME}/include/vapi $OPT_FLAG"
	 

	I also have to edit mpid/vapi/viainit.c based on an error I
received:

	    case VAPI_PORT_ACTIVE:
	#ifdef VAPI_VERSION_CODE
	#if 0
	#if VAPI_VERSION_CODE >= VAPI_VERSION(4,1,0)
	    case VAPI_CLIENT_REREGISTER:
	    case VAPI_RECEIVE_QUEUE_DRAINED:
	    case VAPI_ECC_DETECT:
	    case VAPI_PATH_MIG_ARMED:
	#endif
	#endif
	#endif

	 

	...and...

	 

	    case VAPI_PORT_ERROR:
	#ifdef VAPI_VERSION_CODE
	#if 0
	#if VAPI_VERSION_CODE >= VAPI_VERSION(4,1,0)
	    case VAPI_SRQ_CATASTROPHIC_ERROR:
	#endif
	#endif
	#endif

	 

	I then just run:

	./make.mvapich.vapi
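
	To confirm the installed wrappers actually invoke icc, one can
check afterwards (assuming they support MPICH's -show option, which
prints the underlying compiler command):

	    /share/apps/mvapich/intel/mvapich-0.9.8-64/bin/mpicc -show
	    # expect an icc command line; gcc here would mean the CC
	    # export was not picked up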

	 

	It appears to succeed, and the directories and files get created
in the PREFIX location. Does anyone see anything glaringly wrong here?

	 

	Thanks, Mike

	 

________________________________


	From: Gilad Shainer [mailto:Shainer at mellanox.com] 
	Sent: Wednesday, February 07, 2007 11:36
	To: Scott Weitzenkamp (sweitzen); Mike Hanby;
mvapich-discuss at cse.ohio-state.edu
	Subject: RE: [mvapich-discuss] SDR or DDR

	 

	DDR and SDR refer to the link speed, not to RAM: on a 4X link,
SDR runs at 10 Gb/s (4 x 2.5 Gb/s per lane) and DDR at 20 Gb/s (4 x
5 Gb/s per lane).
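
	A quick way to confirm the active link speed, assuming a stack
that ships the libibverbs ibv_devinfo utility (the Topspin/VAPI roll
may not include it):

	    ibv_devinfo | grep -E 'active_width|active_speed'
	    # 4X at 2.5 Gbps -> SDR (10 Gb/s); 4X at 5.0 Gbps -> DDR (20 Gb/s)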

	 

	Gilad. 

	 

________________________________


	From: mvapich-discuss-bounces at cse.ohio-state.edu
[mailto:mvapich-discuss-bounces at cse.ohio-state.edu] On Behalf Of Scott
Weitzenkamp (sweitzen)
	Sent: Wednesday, February 07, 2007 8:57 AM
	To: Mike Hanby; mvapich-discuss at cse.ohio-state.edu
	Subject: RE: [mvapich-discuss] SDR or DDR

	Your HCA is SDR; with DDR you would see "DDR" in the tvflash -i
output.
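
	For example (grep pattern assumed from the note above):

	    tvflash -i | grep -i ddr    # empty output implies an SDR card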

	 

	Scott

________________________________


		From: mvapich-discuss-bounces at cse.ohio-state.edu
[mailto:mvapich-discuss-bounces at cse.ohio-state.edu] On Behalf Of Mike
Hanby
		Sent: Wednesday, February 07, 2007 8:36 AM
		To: mvapich-discuss at cse.ohio-state.edu
		Subject: [mvapich-discuss] SDR or DDR

		I need to fill in the value for LINKS=_DDR_ or _SDR_

		Does anyone know how I can tell whether my InfiniBand
cards have DDR or SDR RAM?

		 

		Also, my cards are PCI Express. For IO_BUS, I would
choose _PCI_EX_, correct?

		 

		The tvflash -i command reports the following:

		 

		HCA #0: MT25208 Tavor Compat, Lion Cub, revision A0
		  Primary image is v4.7.600 build 3.2.0.110, with label 'HCA.LionCub.A0'
		  Secondary image is v4.6.000 build 3.1.0.113, with label 'HCA.LionCub.A0'

		 

		  Vital Product Data

		    Product Name: Lion cub

		    P/N: 99-00026-01

		    E/C: Rev: B04

		    S/N: TS0548X03797

		    Freq/Power: PW=10W;PCIe 8X

		    Date Code: 0548

		    Checksum: Ok

		 

		Thanks, Mike
