File: entropy.Rd

Package: r-cran-tcr 2.3.2+ds-1
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/measures.R
\name{entropy}
\alias{entropy}
\alias{js.div}
\alias{kl.div}
\title{Information measures.}
\usage{
entropy(.data, .norm = F, .do.norm = NA, .laplace = 1e-12)

kl.div(.alpha, .beta, .do.norm = NA, .laplace = 1e-12)

js.div(.alpha, .beta, .do.norm = NA, .laplace = 1e-12, .norm.entropy = F)
}
\arguments{
\item{.data, .alpha, .beta}{Vector of values.}

\item{.norm}{If TRUE, compute the normalised entropy (H / Hmax).}

\item{.do.norm}{One of three values: NA, TRUE or FALSE. If NA, check whether \code{.data} is
a distribution \code{(sum(.data) == 1)} and normalise it if needed, applying the given Laplace
correction value. If TRUE, always normalise and apply the Laplace correction. If FALSE, do not
normalise and do not apply the Laplace correction.}

\item{.laplace}{Value of the Laplace correction, which will be added to every element of \code{.data}.}

\item{.norm.entropy}{If TRUE, normalise the Jensen-Shannon divergence by entropy.}
}
\value{
Shannon entropy, Jensen-Shannon divergence or Kullback-Leibler divergence values.
}
\description{
Functions for information measures of and between distributions of values.

Warning!
Functions will check whether \code{.data} is a distribution of a random variable (sum == 1) or not.
To force normalisation, or to prevent it, set \code{.do.norm} to TRUE (do normalisation)
or FALSE (don't do normalisation). For \code{js.div} and \code{kl.div} the vectors of values
must have equal length.

Functions (standard definitions are given in the Details section below):

- The Shannon entropy quantifies the uncertainty (degree of surprise) associated
with predicting the value of a random variable drawn from the distribution.

- Kullback-Leibler divergence (information gain, information divergence,
relative entropy, KLIC) is a non-symmetric measure of the difference between
two probability distributions P and Q (the information lost when Q is used to
approximate P).

- Jensen-Shannon divergence is a symmetric version of KLIC. Its square root
is a metric often referred to as the Jensen-Shannon distance.
}
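\details{
For reference, these are the standard definitions of the three measures; the logarithm base
used by the implementation is not specified here. For distributions \eqn{P = (p_1, \dots, p_n)}
and \eqn{Q = (q_1, \dots, q_n)}:

\deqn{H(P) = -\sum_{i} p_i \log p_i}

\deqn{D_{KL}(P \| Q) = \sum_{i} p_i \log \frac{p_i}{q_i}}

\deqn{JS(P, Q) = \frac{1}{2} D_{KL}(P \| M) + \frac{1}{2} D_{KL}(Q \| M), \quad M = \frac{1}{2}(P + Q)}
}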
\seealso{
\link{similarity}, \link{diversity}
}
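\examples{
\dontrun{
# Minimal usage sketch with made-up probability vectors, based only on the
# signatures documented above.
P <- c(0.5, 0.25, 0.25)     # already sums to 1, so no normalisation is needed
Q <- c(0.4, 0.4, 0.2)

entropy(P)                  # Shannon entropy of P
entropy(P, .norm = TRUE)    # normalised entropy H / Hmax

counts <- c(10, 5, 5)       # raw counts; force normalisation and Laplace correction
entropy(counts, .do.norm = TRUE)

kl.div(P, Q)                # non-symmetric: in general kl.div(P, Q) != kl.div(Q, P)
js.div(P, Q)                # symmetric divergence between P and Q
}
}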