# -*- text -*-
#
# Copyright (c) 2004-2005 The Trustees of Indiana University and Indiana
#                         University Research and Technology
#                         Corporation.  All rights reserved.
# Copyright (c) 2004-2005 The University of Tennessee and The University
#                         of Tennessee Research Foundation.  All rights
#                         reserved.
# Copyright (c) 2004-2005 High Performance Computing Center Stuttgart,
#                         University of Stuttgart.  All rights reserved.
# Copyright (c) 2004-2005 The Regents of the University of California.
#                         All rights reserved.
# Copyright (c) 2007-2022 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2013 NVIDIA Corporation. All rights reserved.
# Copyright (c) 2017-2020 Intel, Inc. All rights reserved.
# $COPYRIGHT$
#
# Additional copyrights may follow
#
# $HEADER$
#
# This is the US/English general help file for Open MPI.
#
[mpi_init:startup:internal-failure]
It looks like %s failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during %s, some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  %s
  --> Returned "%s" (%d) instead of "Success" (0)
#
[mpi_init:startup:pml-add-procs-fail]
MPI_INIT has failed because at least one MPI process is unreachable
from another. This *usually* means that an underlying communication
plugin -- such as a BTL or an MTL -- has either not loaded or not
allowed itself to be used. Your MPI job will now abort.

You may wish to try to narrow down the problem; example commands for
these suggestions are shown after the list:

* Check the output of ompi_info to see which BTL/MTL plugins are
  available.
* Run your application with MPI_THREAD_SINGLE.
* Set the MCA parameter btl_base_verbose to 100 (or mtl_base_verbose,
  if using MTL-based communications) to see exactly which
  communication plugins were considered and/or discarded.
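For example (the executable name and process count are illustrative):

  shell$ ompi_info | grep -i btl
  shell$ mpirun --mca btl_base_verbose 100 -n 2 my_app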
#
[mpi-param-check-enabled-but-compiled-out]
WARNING: The MCA parameter mpi_param_check has been set to true, but
parameter checking has been compiled out of Open MPI. The
mpi_param_check value has therefore been ignored.
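
For reference, when parameter checking is compiled in, the setting
takes effect at run time; for example (the executable name and
process count are illustrative):

  shell$ mpirun --mca mpi_param_check 1 -n 2 my_app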
#
[mpi_init: invoked multiple times]
Open MPI has detected that this process has attempted to initialize
MPI (via MPI_INIT or MPI_INIT_THREAD) more than once. This is
erroneous.
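
If your code may attempt initialization more than once (e.g., from a
library), a minimal guard using the standard MPI_Initialized() query
avoids this error ("argc" and "argv" are the parameters of main()):

  int flag;
  MPI_Initialized(&flag);
  if (!flag) {
      MPI_Init(&argc, &argv);   /* initialize only on first entry */
  }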
#
[mpi_init: already finalized]
Open MPI has detected that this process has attempted to initialize
MPI (via MPI_INIT or MPI_INIT_THREAD) after MPI_FINALIZE has been
called. This is erroneous.
#
[mpi_finalize: not initialized]
The function MPI_FINALIZE was invoked before MPI was initialized in a
process on host %s, PID %d.
This indicates an erroneous MPI program; MPI must be initialized
before it can be finalized.
#
[mpi_finalize:invoked_multiple_times]
The function MPI_FINALIZE was invoked multiple times in a single
process on host %s, PID %d.
This indicates an erroneous MPI program; MPI_FINALIZE may be invoked
only once in a process.
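
A minimal guard using the standard MPI_Initialized() and
MPI_Finalized() queries ensures that MPI_FINALIZE runs exactly once,
and only after initialization:

  int initialized, finalized;
  MPI_Initialized(&initialized);
  MPI_Finalized(&finalized);
  if (initialized && !finalized) {
      MPI_Finalize();   /* finalize exactly once */
  }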
#
[sparse groups enabled but compiled out]
WARNING: The MCA parameter mpi_use_sparse_group_storage has been set
to true, but sparse group support was not compiled into Open MPI. The
mpi_use_sparse_group_storage value has therefore been ignored.
#
[heterogeneous-support-unavailable]
This installation of Open MPI was configured without support for
heterogeneous architectures, but at least one node in the allocation
was detected to have a different architecture. The detected node was:

  Node: %s

To operate in a heterogeneous environment, please reconfigure Open
MPI with --enable-heterogeneous.
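
For example (the installation prefix is illustrative):

  shell$ ./configure --enable-heterogeneous --prefix=/opt/openmpi
  shell$ make all install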
#
[no cuda support]
The user requested CUDA support with the --mca mpi_cuda_support 1
flag, but the library was not compiled with CUDA support.
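
CUDA support must be enabled when Open MPI is configured; for example
(the CUDA installation path is illustrative):

  shell$ ./configure --with-cuda=/usr/local/cuda
  shell$ make all install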
#
[lib-call-fail]
A library call unexpectedly failed. This is a terminal error; please
show this message to an Open MPI wizard:

  Library call:       %s
  Source file:        %s
  Source line number: %d

Aborting...
#
[spc: MPI_T disabled]
There was an error registering software performance counters (SPCs) as
MPI_T performance variables. Your job will continue, but SPCs will be
disabled for MPI_T.
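
For reference, a minimal sketch (error checking omitted) that
enumerates the MPI_T performance variables exposed by the library:

  #include <stdio.h>
  #include <mpi.h>

  int main(void) {
      char name[256], desc[256];
      int i, num, provided, name_len, desc_len;
      int verbosity, var_class, bind, readonly, continuous, atomic;
      MPI_Datatype dtype;
      MPI_T_enum enumtype;

      MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
      MPI_T_pvar_get_num(&num);
      for (i = 0; i < num; ++i) {
          name_len = sizeof(name);
          desc_len = sizeof(desc);
          MPI_T_pvar_get_info(i, name, &name_len, &verbosity,
                              &var_class, &dtype, &enumtype,
                              desc, &desc_len, &bind, &readonly,
                              &continuous, &atomic);
          printf("pvar %d: %s\n", i, name);   /* SPCs appear here */
      }
      MPI_T_finalize();
      return 0;
  }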
#
[no-pmi]
PMIx_Init failed for the following reason:

  %s

Open MPI requires access to a local PMIx server to execute. Please
ensure that you are operating in a PMIx-enabled environment, or use
"mpirun" to launch the job.
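
For example, either launch the job with mpirun (the executable name
and process count are illustrative):

  shell$ mpirun -n 4 ./my_app

or, under Slurm installations built with PMIx support:

  shell$ srun --mpi=pmix -n 4 ./my_app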
#
[no-pmix-but]
No PMIx server was reachable, but a PMI1/2 environment was detected.
If srun is being used to launch the application, %d singletons will be
started.