[mvapich-discuss] OSU benchmarks

Dhabaleswar Panda panda at cse.ohio-state.edu
Fri Jul 27 23:53:45 EDT 2007


Pavan, 

> We are planning to include the OSU benchmarks in our release testing given 
> their growing popularity. But it looks like, although some of them (e.g., 
> osu_acc_latency) are intended to run on exactly two processes, the tests 
> themselves do not check that they are actually launched with two 
> processes. It's probably not that big a deal, but it would be convenient 
> if such a check were added. Would you be willing to patch these benchmarks 
> with a small check that aborts if they are not run with the appropriate 
> number of processes? I've attached a sample patch with this email.

Thanks for your note indicating that you plan to include the OSU
benchmarks in your release testing, and thanks for sending the patch.
We will incorporate it and update these benchmarks.

> Also, are these tests only available online individually, or are they 
> packaged as a test suite (with a Makefile and README, possibly)? If they 
> aren't, would you consider packaging them and releasing them separately as 
> well?

For some time we have been thinking of doing this (releasing a single
package with a Makefile and a README file), but we have not been able
to get to it due to lack of time. This might be a good time for it. We
will work on it over the next 1-2 weeks and have it ready.
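A minimal sketch of what such a packaged Makefile might look like; the MPICC variable and the benchmark list here are assumptions drawn from the file names that appear in the attached patch, not from an actual release:

```makefile
# Hypothetical Makefile for a standalone OSU benchmarks package.
MPICC  ?= mpicc
CFLAGS ?= -O2

BENCHMARKS = osu_latency osu_bw osu_bibw osu_bcast osu_mbw_mr \
             osu_acc_latency osu_get_bw osu_get_latency \
             osu_latency_mt osu_put_bibw osu_put_bw osu_put_latency

all: $(BENCHMARKS)

# Each benchmark is a single C file compiled with the MPI wrapper compiler.
%: %.c
	$(MPICC) $(CFLAGS) -o $@ $<

clean:
	rm -f $(BENCHMARKS)

.PHONY: all clean
```

The multi-threaded latency test (osu_latency_mt) would likely also need a threads flag such as -lpthread, depending on the platform.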

DK

> Thanks.
> 
>   -- Pavan
> 
> -- 
> Pavan Balaji,
> Mathematics and Computer Science,
> Argonne National Laboratory
> Ph: 630.252.3017
> http://www.mcs.anl.gov/~balaji
> 
> [attachment: osu_2_proc_check.patch]
> clean_osu_benchmarks/osu_acc_latency.c
> --- clean_osu_benchmarks/osu_acc_latency.c	2007-07-26 10:19:08.000000000 -0500
> +++ osu_benchmarks/osu_acc_latency.c	2007-07-27 14:30:02.000000000 -0500
> @@ -74,6 +74,11 @@
>      MPI_Comm_rank (MPI_COMM_WORLD, &rank);
>      MPI_Comm_group (MPI_COMM_WORLD, &comm_group);
>  
> +    if (nprocs != 2) {
> +        printf ("Run this program with 2 processes\n");
> +        MPI_Abort (MPI_COMM_WORLD, 1);
> +    }
> +
>      loop = LOOP;
>      align_size = MESSAGE_ALIGNMENT;
>  
> clean_osu_benchmarks/osu_bcast.c
> clean_osu_benchmarks/osu_bibw.c
> clean_osu_benchmarks/osu_bw.c
> clean_osu_benchmarks/osu_get_bw.c
> --- clean_osu_benchmarks/osu_get_bw.c	2007-07-26 10:19:08.000000000 -0500
> +++ osu_benchmarks/osu_get_bw.c	2007-07-27 14:31:07.000000000 -0500
> @@ -79,6 +79,11 @@
>      MPI_Comm_rank (MPI_COMM_WORLD, &myid);
>      MPI_Comm_group (MPI_COMM_WORLD, &comm_group);
>  
> +    if (numprocs != 2) {
> +        printf ("Run this program with 2 processes\n");
> +        MPI_Abort (MPI_COMM_WORLD, 1);
> +    }
> +
>      loop = LOOP;
>      page_size = getpagesize ();
>  
> clean_osu_benchmarks/osu_get_latency.c
> clean_osu_benchmarks/osu_latency.c
> clean_osu_benchmarks/osu_latency_mt.c
> --- clean_osu_benchmarks/osu_latency_mt.c	2007-07-26 10:19:08.000000000 -0500
> +++ osu_benchmarks/osu_latency_mt.c	2007-07-27 14:31:29.000000000 -0500
> @@ -99,6 +99,11 @@
>      MPI_Comm_size (MPI_COMM_WORLD, &numprocs);
>      MPI_Comm_rank (MPI_COMM_WORLD, &myid);
>  
> +    if (numprocs != 2) {
> +        printf ("Run this program with 2 processes\n");
> +        MPI_Abort (MPI_COMM_WORLD, 1);
> +    }
> +
>      /* Check to make sure we actually have a thread-safe
>       * implementation 
>       */
> clean_osu_benchmarks/osu_mbw_mr.c
> clean_osu_benchmarks/osu_put_bibw.c
> --- clean_osu_benchmarks/osu_put_bibw.c	2007-07-26 10:19:08.000000000 -0500
> +++ osu_benchmarks/osu_put_bibw.c	2007-07-27 14:31:48.000000000 -0500
> @@ -79,6 +79,11 @@
>      MPI_Comm_rank (MPI_COMM_WORLD, &myid);
>      MPI_Comm_group (MPI_COMM_WORLD, &comm_group);
>  
> +    if (numprocs != 2) {
> +        printf ("Run this program with 2 processes\n");
> +        MPI_Abort (MPI_COMM_WORLD, 1);
> +    }
> +
>      loop = LOOP;
>      page_size = getpagesize ();
>      s_buf =
> clean_osu_benchmarks/osu_put_bw.c
> --- clean_osu_benchmarks/osu_put_bw.c	2007-07-26 10:19:08.000000000 -0500
> +++ osu_benchmarks/osu_put_bw.c	2007-07-27 14:32:02.000000000 -0500
> @@ -80,6 +80,11 @@
>      MPI_Comm_rank (MPI_COMM_WORLD, &myid);
>      MPI_Comm_group (MPI_COMM_WORLD, &comm_group);
>  
> +    if (numprocs != 2) {
> +        printf ("Run this program with 2 processes\n");
> +        MPI_Abort (MPI_COMM_WORLD, 1);
> +    }
> +
>      loop = LOOP;
>      page_size = getpagesize ();
>  
> clean_osu_benchmarks/osu_put_latency.c
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
> 
> 


