Description: fix spelling mistakes
Origin: vendor
Bug: https://rt.cpan.org/Ticket/Display.html?id=82392
Forwarded: https://rt.cpan.org/Ticket/Display.html?id=82392 (not the latest version)
Author: gregor herrmann <gregoa@debian.org>
Reviewed-by: Xavier Guimard <x.guimard@free.fr>
Last-Update: 2016-07-20

--- a/Coro.pm
+++ b/Coro.pm
@@ -640,7 +640,7 @@
    };
 
 This can be used to localise about any resource (locale, uid, current
-working directory etc.) to a block, despite the existance of other
+working directory etc.) to a block, despite the existence of other
 coros.
 
 Another interesting example implements time-sliced multitasking using
@@ -756,7 +756,7 @@
 Returns true iff this Coro object is "new", i.e. has never been run
 yet. Those states basically consist of only the code reference to call and
 the arguments, but consumes very little other resources. New states will
-automatically get assigned a perl interpreter when they are transfered to.
+automatically get assigned a perl interpreter when they are transferred to.
 
 =item $state->is_zombie
 
@@ -1125,7 +1125,7 @@
 event-based program, or when you use event-based libraries.
 
 These typically register a callback for some event, and call that callback
-when the event occured. In a coro, however, you typically want to
+when the event occurred. In a coro, however, you typically want to
-just wait for the event, simplyifying things.
+just wait for the event, simplifying things.
 
 For example C<< AnyEvent->child >> registers a callback to be called when
@@ -1264,7 +1264,7 @@
 by the forks module, which gives you the (i-) threads API, just much
 faster).
 
-Sharing data is in the i-threads model is done by transfering data
+Sharing data in the i-threads model is done by transferring data
 structures between threads using copying semantics, which is very slow -
 shared data simply does not exist. Benchmarks using i-threads which are
 communication-intensive show extremely bad behaviour with i-threads (in
--- a/Coro/State.pm
+++ b/Coro/State.pm
@@ -178,7 +178,7 @@
 everywhere.
 
 If the coderef is omitted this function will create a new "empty"
-thread, i.e. a thread that cannot be transfered to but can be used
+thread, i.e. a thread that cannot be transferred to but can be used
 to save the current thread state in (note that this is dangerous, as no
 reference is taken to ensure that the "current thread state" survives,
 the caller is responsible to ensure that the cloned state does not go
@@ -247,7 +247,7 @@
 
 Forcefully destructs the given Coro::State. While you can keep the
 reference, and some memory is still allocated, the Coro::State object is
-effectively dead, destructors have been freed, it cannot be transfered to
+effectively dead, destructors have been freed, it cannot be transferred to
 anymore, it's pushing up the daisies.
 
 =item $state->call ($coderef)
@@ -346,7 +346,7 @@
 
 =head3 METHODS FOR C CONTEXTS
 
-Most coros only consist of some Perl data structures - transfering to a
+Most coros only consist of some Perl data structures - transferring to a
 coro just reconfigures the interpreter to continue somewhere else.
 
-However. this is not always possible: For example, when Perl calls a C/XS function
+However, this is not always possible: For example, when Perl calls a C/XS function
--- a/README
+++ b/README
@@ -1038,7 +1038,7 @@
     evidenced by the forks module, which gives you the (i-) threads API,
     just much faster).
 
-    Sharing data is in the i-threads model is done by transfering data
+    Sharing data in the i-threads model is done by transferring data
     structures between threads using copying semantics, which is very slow -
     shared data simply does not exist. Benchmarks using i-threads which are
     communication-intensive show extremely bad behaviour with i-threads (in
--- a/Coro/AnyEvent.pm
+++ b/Coro/AnyEvent.pm
@@ -292,7 +292,7 @@
 whichever happens first. No timeout counts as infinite timeout.
 
 Returns true when the file handle became ready, false when a timeout
-occured.
+occurred.
 
 Note that these functions are quite inefficient as compared to using a
 single watcher (they recreate watchers on every invocation) or compared to
--- a/Coro/Intro.pod
+++ b/Coro/Intro.pod
@@ -34,7 +34,7 @@
 Cooperative means that these threads must cooperate with each other, when
 it comes to CPU usage - only one thread ever has the CPU, and if another
 thread wants the CPU, the running thread has to give it up. The latter
-is either explicitly, by calling a function to do so, or implicity, when
+is either explicitly, by calling a function to do so, or implicitly, when
 waiting on a resource (such as a Semaphore, or the completion of some I/O
 request). This threading model is popular in scripting languages (such as
 python or ruby), and this implementation is typically far more efficient
--- a/Coro/LWP.pm
+++ b/Coro/LWP.pm
@@ -25,7 +25,7 @@
 Makes LWP use L<AnyEvent::HTTP>. Does not make LWP event-based, but allows
 Coro threads to schedule unimpeded through its AnyEvent integration.
 
-Let's you use the LWP API normally.
+Lets you use the LWP API normally.
 
 =item L<LWP::Protocol::Coro::http>
 
@@ -89,7 +89,7 @@
 
 =back
 
-All this likely makes other libraries than just LWP not block, but thats
+All this likely makes other libraries than just LWP not block, but that's
 just a side effect you cannot rely on.
 
-Increases parallelism is not supported by all libraries, some might cache
+Increased parallelism is not supported by all libraries, some might cache
--- a/Coro/Semaphore.pm
+++ b/Coro/Semaphore.pm
@@ -17,7 +17,7 @@
 =head1 DESCRIPTION
 
 This module implements counting semaphores. You can initialize a mutex
-with any level of parallel users, that is, you can intialize a sempahore
+with any level of parallel users, that is, you can initialize a semaphore
 that can be C<down>ed more than once until it blocks. There is no owner
 associated with semaphores, so one thread can C<down> it while another can
 C<up> it (or vice versa), C<up> can be called before C<down> and so on:
@@ -44,9 +44,9 @@
 
 our $VERSION = 6.511;
 
-=item new [inital count]
+=item new [initial count]
 
-Creates a new sempahore object with the given initial lock count. The
+Creates a new semaphore object with the given initial lock count. The
 default lock count is 1, which means it is unlocked by default. Zero (or
 negative values) are also allowed, in which case the semaphore is locked
 by default.
--- a/Coro/SemaphoreSet.pm
+++ b/Coro/SemaphoreSet.pm
@@ -39,7 +39,7 @@
 
 use Coro::Semaphore ();
 
-=item new [inital count]
+=item new [initial count]
 
 Creates a new semaphore set with the given initial lock count for each
 individual semaphore. See L<Coro::Semaphore>.
--- a/Coro/Specific.pm
+++ b/Coro/Specific.pm
@@ -41,7 +41,7 @@
 =item new
 
 Create a new coroutine-specific scalar and return a reference to it. The
-scalar is guarenteed to be "undef". Once such a scalar has been allocated
+scalar is guaranteed to be "undef". Once such a scalar has been allocated
 you cannot deallocate it (yet), so allocate only when you must.
 
 =cut
--- a/EV/EV.pm
+++ b/EV/EV.pm
@@ -78,7 +78,7 @@
 
 =item $revents = Coro::EV::timed_io_once $fileno_or_fh, $events[, $timeout]
 
-Blocks the coroutine until either the given event set has occured on the
+Blocks the coroutine until either the given event set has occurred on the
 fd, or the timeout has been reached (if timeout is missing or C<undef>
 then there will be no timeout). Returns the received flags.
 
