The Conditional Bayes Principle

A Bayesian decision theory problem is defined by a probability space $(\Omega, \mathcal{F}, P)$, an action set $A$, and a loss function $L : A \times \Omega \to \mathbb{R}^+$. An element $\omega \in \Omega$, sometimes called the ``state of nature'', represents complete knowledge relevant to the problem. Thus the loss function encodes how bad a given action would be if all the relevant problem information were available.

The Bayesian expected loss of an action $a \in A$ is simply the expected value of the loss function for fixed $a$:

$\displaystyle \rho(a) = E^P[L(a, \omega)] \qquad (2.1)$

If the loss function has been constructed in accordance with utility theory, then this expectation is the relevant quantity for scoring the distribution of results associated with action $ a$. This leads directly to the Conditional Bayes Principle.

Definition 1 (Conditional Bayes Principle)   Choose any action $a \in A$ that minimizes $\rho(a)$.
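For a finite state space and action set, the principle reduces to computing each action's expected loss under $P$ and taking an argmin. The following is a minimal sketch with a hypothetical three-state, three-action problem; the particular probabilities and loss matrix are illustrative assumptions, not from the text.

```python
import numpy as np

# Hypothetical discrete problem: 3 states of nature with distribution P,
# 3 actions, and a loss matrix L where L[a, w] = L(a, omega_w).
P = np.array([0.2, 0.5, 0.3])          # probability of each state of nature
L = np.array([[0.0, 4.0, 10.0],        # losses for action 0 in each state
              [1.0, 2.0,  6.0],        # losses for action 1
              [3.0, 3.0,  4.0]])       # losses for action 2

# Bayesian expected loss rho(a) = E^P[L(a, omega)], one entry per action.
rho = L @ P                            # -> [5.0, 3.0, 3.3]

# Conditional Bayes Principle: choose an action minimizing rho.
best_action = int(np.argmin(rho))      # -> 1
```

Note that the choice depends on $P$ only through the expectation; two distributions inducing the same expected losses yield the same Bayes action.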

In situations where no $a \in A$ attains the minimum of $\rho$, there are straightforward modifications of the Conditional Bayes Principle available, such as the $\epsilon$-Conditional Bayes Principle, which deems acceptable any action whose expected loss is within $\epsilon$ of the infimum of $\rho$.
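The $\epsilon$-relaxation can be sketched as a simple threshold test against the smallest expected loss. The expected-loss values and $\epsilon$ below are hypothetical, chosen only to show that more than one action can be acceptable.

```python
import numpy as np

# Hypothetical expected losses rho(a) for four actions (illustrative values).
rho = np.array([3.0, 3.2, 3.05, 5.0])
eps = 0.1

# epsilon-Conditional Bayes Principle: any action within eps of the
# infimum of rho is acceptable.
acceptable = np.flatnonzero(rho <= rho.min() + eps)   # -> [0, 2]
```

With $\epsilon = 0$ and a finite action set this recovers the exact minimizers; for infinite action sets where the infimum is not attained, $\epsilon > 0$ guarantees the acceptable set is nonempty.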

Paul Mineiro 2001-04-18