/* StarPU --- Runtime system for heterogeneous multicore architectures.
*
* Copyright (C) 2009, 2010 Université de Bordeaux 1
* Copyright (C) 2010, 2011, 2012 Centre National de la Recherche Scientifique
*
* StarPU is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation; either version 2.1 of the License, or (at
* your option) any later version.
*
* StarPU is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
*
* See the GNU Lesser General Public License in COPYING.LGPL for more details.
*/
#include <starpu_mpi.h>
#include "helper.h"
#define NITER 2048
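/* The token that travels around the ring, and the StarPU data handle wrapping it. */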
unsigned token = 42;
starpu_data_handle_t token_handle;
#ifdef STARPU_USE_CUDA
extern void increment_cuda(void *descr[], __attribute__ ((unused)) void *_args);
#endif
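/* CPU implementation of the increment kernel: add one to the token stored
 * in the vector managed by StarPU. */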
void increment_cpu(void *descr[], __attribute__ ((unused)) void *_args)
{
	unsigned *tokenptr = (unsigned *)STARPU_VECTOR_GET_PTR(descr[0]);
	(*tokenptr)++;
}
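/* Codelet wrapping the increment kernel; it can run on a CPU core or, when
 * StarPU is built with CUDA support, on a CUDA device. */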
static struct starpu_codelet increment_cl =
{
	.where = STARPU_CPU|STARPU_CUDA,
#ifdef STARPU_USE_CUDA
	.cuda_funcs = {increment_cuda, NULL},
#endif
	.cpu_funcs = {increment_cpu, NULL},
	.nbuffers = 1,
	.modes = {STARPU_RW}
};
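/* Submit a synchronous task that increments the token through the codelet. */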
void increment_token(void)
{
	struct starpu_task *task = starpu_task_create();

	task->cl = &increment_cl;
	task->handles[0] = token_handle;
	task->synchronous = 1;

	int ret = starpu_task_submit(task);
	STARPU_CHECK_RETURN_VALUE(ret, "starpu_task_submit");
}
int main(int argc, char **argv)
{
	int ret, rank, size;

	MPI_Init(NULL, NULL);
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	MPI_Comm_size(MPI_COMM_WORLD, &size);

	if (size < 2)
	{
		if (rank == 0)
			FPRINTF(stderr, "We need at least 2 processes.\n");

		MPI_Finalize();
		return STARPU_TEST_SKIPPED;
	}

	ret = starpu_init(NULL);
	STARPU_CHECK_RETURN_VALUE(ret, "starpu_init");
	ret = starpu_mpi_initialize();
	STARPU_CHECK_RETURN_VALUE(ret, "starpu_mpi_initialize");
	starpu_vector_data_register(&token_handle, 0, (uintptr_t)&token, 1, sizeof(unsigned));

	unsigned nloops = NITER;
	unsigned loop;
	unsigned last_loop = nloops - 1;
	unsigned last_rank = size - 1;
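	/* Pass the token around the ring: on each trip every rank receives it,
	 * increments it, and forwards it to the next rank. */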
	for (loop = 0; loop < nloops; loop++)
	{
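		/* Each hop of the ring gets its own MPI tag, so messages from
		 * different trips cannot be confused. */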
		int tag = loop*size + rank;

		if (loop == 0 && rank == 0)
		{
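			/* Rank 0 starts the ring by initializing the token. */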
			token = 0;
			FPRINTF(stdout, "Start with token value %u\n", token);
		}
		else
		{
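			/* Wait for the token coming from the previous rank in the ring. */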
			MPI_Status status;
			starpu_mpi_recv(token_handle, (rank+size-1)%size, tag, MPI_COMM_WORLD, &status);
		}

		increment_token();
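		/* On the last trip, the last rank keeps the token and reads its final
		 * value; every other hop forwards it to the next rank. */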
		if (loop == last_loop && rank == last_rank)
		{
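			/* Acquire the handle in read mode so the token value is up to
			 * date in main memory before printing it. */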
			starpu_data_acquire(token_handle, STARPU_R);
			FPRINTF(stdout, "Finished: token value %u\n", token);
			starpu_data_release(token_handle);
		}
		else
		{
			starpu_mpi_send(token_handle, (rank+1)%size, tag+1, MPI_COMM_WORLD);
		}
	}
	/* Release the handle before shutting StarPU down. */
	starpu_data_unregister(token_handle);

	starpu_mpi_shutdown();
	starpu_shutdown();

	MPI_Finalize();

	if (rank == last_rank)
	{
		STARPU_ASSERT(token == nloops*size);
	}

	return 0;
}